Canonical Correlation Analysis (CCA)

I am not completely satisfied with the expositions of CCA that I’ve come across, so I decided to write one that reflects my own intuition.

CCA is useful in the case where you observe two random variables that are both noisy linear functions of some underlying latent random variable, and you want to use this fact to help you guess the latent variable. Formally, assume
\[
x = T_X z + n_x, \quad \text{ and } \quad y = T_Y z + n_y,
\]
where, without loss of generality, we assume that the entries of \(z\) are uncorrelated and unit variance. Here \(T_X\) and \(T_Y\) are matrices whose image spaces may be of different dimension, and \(n_x\) and \(n_y\) are white noise. For convenience, we assume that \(z\) is mean-zero, so that \(x\) and \(y\) are also.

In order to recover \(z\) from \(x\) and \(y\), we phrase the following question: find two transformations \(\Phi_X\) and \(\Phi_Y\) so that the entries of \(\Phi_X x\) are uncorrelated and unit variance, as are those of \(\Phi_Y y\), and the correlation of \(\Phi_X x\) with \(\Phi_Y y\) is maximized.

We could use PCA to whiten \(x\) and \(y\) individually to get two different noisy estimates of \(z\), but this would ignore the fact that knowing both \(x\) and \(y\) gives us more information on \(z\). Indeed, the requirement that the correlation of \(\Phi_X x\) and \(\Phi_Y y\) be maximized should tend to choose \(\Phi_X\) and \(\Phi_Y\) which remove the directions containing the noise \(n_x\) and \(n_y\).

CCA can then be formulated as the following:
\[
\max_{\Phi_X, \Phi_Y} \mathbb{E} \langle \Phi_X x, \Phi_Y y \rangle \text{ subject to }
\mathbb{E} \Phi_X x (\Phi_X x)^T = \mathbb{E} \Phi_Y y (\Phi_Y y)^T = I,
\]
or equivalently,
\[
\max_{\Phi_X, \Phi_Y} \text{Trace}(\Phi_Y C_{y x} \Phi_X^T) \text{ subject to } \Phi_X C_{xx} \Phi_X^T = \Phi_Y C_{yy} \Phi_Y^T = I,
\]
where \(C_{xx}, C_{yy}\) are the covariance matrices of \(x\) and \(y\) and \(C_{yx}\) is the cross-covariance matrix of \(y\) and \(x\) given by \(C_{yx} = \mathbb{E} (yx^T).\)

Since \(C_{xx}\) and \(C_{yy}\) are full-rank, by taking \(\Phi_X^\prime = \Phi_X C_{xx}^{1/2}\) and \(\Phi_Y^\prime = \Phi_Y C_{yy}^{1/2}\), this program can be transformed to
\[
\max_{\Phi_X^\prime, \Phi_Y^\prime} \text{Trace}(\Phi_Y^\prime C_{yy}^{-1/2} C_{yx} C_{xx}^{-1/2} \Phi_X^{\prime T}) \text{ subject to } \Phi_X^\prime \Phi_X^{\prime T} = \Phi_Y^\prime \Phi_Y^{\prime T} = I.
\]

From the standard characterization of the SVD, it is clear that the solutions \(\Phi_X^\prime\) and \(\Phi_Y^\prime\) are given by the top right and left singular vectors of the matrix \(C_{yy}^{-1/2} C_{yx} C_{xx}^{-1/2}.\) Equivalently, they are the top eigenvectors of the two matrices \(C_{xx}^{-1/2} C_{xy} C_{yy}^{-1} C_{yx} C_{xx}^{-1/2}\) and \(C_{yy}^{-1/2} C_{yx} C_{xx}^{-1} C_{xy} C_{yy}^{-1/2}.\) Undoing the change of variables, the rows of \(\Phi_X\) and \(\Phi_Y\) themselves are the top eigenvectors of the generalized eigenvalue problems given by
\begin{align*}
C_{xy} C_{yy}^{-1} C_{yx} \phi_i^x & = \lambda_i C_{xx} \phi_i^x \\
C_{yx} C_{xx}^{-1} C_{xy} \phi_i^y & = \lambda_i C_{yy} \phi_i^y.
\end{align*}
Here the \(\lambda_i\) are the squared canonical correlations, and the \(\phi^x_i, \phi^y_i\) are the canonical correlation basis vectors.

None of the non-variational characterizations given so far seems terribly efficient, since they all involve two matrix inversions. It turns out that CCA can be done using a QR decomposition instead. See Björck and Golub, 1973 for the details of the algorithm, and for the connection between CCA and the principal angles between subspaces.
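
As a concrete illustration, here is a small NumPy sketch (my own, not taken from any particular reference) that computes the sample canonical correlations two ways on synthetic data from the model above: once via the SVD of the whitened cross-covariance, and once via the QR route, where the singular values of \(Q_X^T Q_Y\) are the cosines of the principal angles. The sample size, dimensions, and noise level are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data from the model x = T_X z + n_x, y = T_Y z + n_y.
    n, d_z, d_x, d_y = 5000, 3, 6, 8
    Z = rng.standard_normal((n, d_z))
    T_X = rng.standard_normal((d_x, d_z))
    T_Y = rng.standard_normal((d_y, d_z))
    X = Z @ T_X.T + 0.3 * rng.standard_normal((n, d_x))
    Y = Z @ T_Y.T + 0.3 * rng.standard_normal((n, d_y))
    X -= X.mean(axis=0)   # center, since the derivation assumes mean-zero variables
    Y -= Y.mean(axis=0)

    # Route 1: singular values of C_yy^{-1/2} C_yx C_xx^{-1/2}, from sample covariances.
    Cxx, Cyy, Cyx = X.T @ X / n, Y.T @ Y / n, Y.T @ X / n

    def inv_sqrt(C):
        w, U = np.linalg.eigh(C)
        return U @ np.diag(w ** -0.5) @ U.T

    corrs_cov = np.linalg.svd(inv_sqrt(Cyy) @ Cyx @ inv_sqrt(Cxx), compute_uv=False)

    # Route 2 (Bjorck-Golub): thin QR of the data matrices; the singular values of
    # Qx^T Qy are the cosines of the principal angles, i.e. the canonical correlations.
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    corrs_qr = np.linalg.svd(Qx.T @ Qy, compute_uv=False)

    # The two routes agree; the top d_z correlations are large (the shared signal),
    # and the remaining ones are near zero.
    print(corrs_cov)
    print(corrs_qr)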

Decision time: MacPorts vs Homebrew vs Fink

My work MacBook Pro recently crapped out on me during an OS update (apparently something has a tendency to go wrong with the video card or its driver during this particular update for this particular model … sigh), so I’ve had the joy of reinstalling my personal ecosystem of software again.

One of the crucial decisions for me is whether to use MacPorts, Homebrew, or Fink to manage the installation of non-trivial Unix packages. This post is really just to remind myself that, this go-round, I chose MacPorts because of these posts. Essentially, Fink is older and MacPorts is its successor (so therefore better, in my simplistic mind), and apparently it’s a lot easier to set up a SciPy install with MacPorts.

Adagrad and projections onto ellipsoids

((Caveat! I am not sure the manipulations done in this post are correct, but the gist is certainly there.))

One of my favorite optimization techniques is Adagrad, a first-order technique that builds a preconditioner out of all the gradients seen up to the current iteration. It calls for updates of the form:
\[
x_{t+1} = \Pi_{\mathcal{X}}^{G_t^{1/2}} (x_t - \eta G_{t}^{-1/2} g_t),
\]
or more practically for high-dimensional problems,
\[
x_{t+1} = \Pi_{\mathcal{X}}^{\text{diag}(G_t)^{1/2}} (x_t - \eta \text{diag}(G_{t})^{-1/2} g_t).
\]
Here, \(g_t\) denotes the gradient at step \(t\), \(\eta\) is a fixed stepsize, \(\mathcal{X}\) is the constraint set for your optimization problem,
\[
G_t = \sum_{i=1}^t g_i g_i^T,
\]
and \(\Pi_{\mathcal{X}}^A\) is the projection onto the constraint set with respect to a distance function defined by a positive-definite matrix \(A\):
\[
\Pi_{\mathcal{X}}^A (y) := \text{argmin}_{x \in \mathcal{X}} \langle x - y, A (x-y) \rangle = \text{argmin}_{x \in \mathcal{X}} \|A^{1/2} (x-y)\|_2^2.
\]
By convention \(\Pi_{\mathcal{X}} = \Pi_{\mathcal{X}}^I\) denotes the Euclidean projection operator.

The neat thing about Adagrad is that it chooses a different effective step size for each variable, rather than using one fixed step size for all of them. I’ve found that Adagrad outperforms SGD on my unconstrained optimization problems.
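
To make the update concrete, here is a minimal sketch of the diagonal variant on an unconstrained toy problem, so the projection is just the identity. The quadratic objective, the step size, and the small eps added for numerical stability are illustrative choices of mine rather than anything from the paper.

    import numpy as np

    def adagrad_step(x, g, G_diag, eta=0.1, eps=1e-8):
        # One diagonal-Adagrad step for an unconstrained problem (projection = identity).
        # eps is a small constant added for numerical stability -- a common practical
        # tweak, not part of the update as written above.
        G_diag = G_diag + g ** 2                      # running diag(G_t)
        x = x - eta * g / (np.sqrt(G_diag) + eps)
        return x, G_diag

    # Toy demo: minimize 0.5 * x^T diag(d) x, whose gradient at x is d * x.
    d = np.array([100.0, 1.0])                        # badly scaled coordinates
    x, G = np.array([1.0, 1.0]), np.zeros(2)
    for _ in range(500):
        x, G = adagrad_step(x, d * x, G)
    print(x)  # both coordinates are driven toward 0; the effective step sizes
              # eta / sqrt(G_ii) differ across the two coordinates by a factor of ~100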

Now I need to use Adagrad on a ball-constrained problem: \(\mathcal{X} = \{ x : \|x\|_2 \leq B \}\). Computing the associated projection operator \(\Pi_{\mathcal{X}}^A\) was a nice enough exercise in basic convex optimization that it seems worth recording here. Actually, given the ubiquity of this constraint set, I’m surprised the authors didn’t provide this as an example in their paper (I may have missed it, since I only read the absolute minimum to get the gist of the idea and then skimmed the rest).

Adagrad’s connection to Euclidean projection onto ellipsoids

Our first step is to observe that
\[
\min_{\|x\|_2 \leq B} \|A^{1/2} (x- y)\|_2^2 = \min_{\|A^{-1/2} x\|_2 \leq B} \|x - A^{1/2}y\|_2^2
\]
since \(A^{1/2}\) is invertible, and that the unique minimizer of the first program is a linear transformation of the unique minimizer of the second. Concretely,
\begin{align*}
\Pi_{\|x\|_2 \leq B}^{A}(y) & = A^{-1/2} \text{argmin}_{\|A^{-1/2} x\|_2 \leq B} \|x - A^{1/2}y\|_2^2 \\
& = A^{-1/2} \Pi_{\|A^{-1/2} x\|_2 \leq B} (A^{1/2} y).
\end{align*}

The Lagrangian dual to Euclidean projection onto an ellipsoid

Thus, it suffices to determine the Euclidean projection operator onto an ellipsoid, \(\Pi_{\|Q x\|_2 \leq B}(z)\), where \(Q\) is positive-semidefinite. To do so, let’s write out the optimization problem that defines this projection operator:
\[
\min_{x^T Q^T Q x \leq B^2} (x – z)^T(x-z).
\]
The associated Lagrangian is:
\begin{align*}
L(x,\lambda) &= (x-z)^T(x-z) - \lambda ( B^2 - x^TQ^TQx) \\
& = x^T( I + \lambda Q^TQ) x – 2z^T x + z^Tz – \lambda B^2.
\end{align*}
Recall that \(L(x,\lambda)\) is a lower bound on the objective whenever \(x\) is feasible and \(\lambda \geq 0.\) To find a uniform lower bound, we minimize \(L\) with respect to \(x\) by setting \(\nabla_x L = 0\). This occurs when
\[
x = (I + \lambda Q^TQ)^{-1} z,
\]
and gives the uniform lower bound
\[
g(\lambda) = \min_x L(x, \lambda) = z^T[ I - (I + \lambda Q^TQ)^{-1} ]z - \lambda B^2.
\]
The optimization problem we’re looking at satisfies strong duality, so we know that maximizing \(g\) over the set of nonnegative \(\lambda\) gives the same optimal value of the original problem, and corresponding to the optimal \(\lambda\) there is an optimal \(x\). We now find the optimal \(\lambda\) and show how to recover the optimal \(x\).

From the dual optimal point to the primal optimal point

First write the SVD of \(Q^TQ\) as \(Q^TQ = U \Sigma U^T.\) It follows that
\[
I - (I + \lambda Q^TQ)^{-1} = U \left[ \frac{\lambda \sigma_{i}}{1 + \lambda \sigma_{i}} \right]_{ii} U^T,
\]
where \(\sigma_i = \Sigma_{ii}.\) The optimal \(\lambda\) then is the nonnegative \(\lambda\) which maximizes
\begin{align*}
g(\lambda) & = z^T[ I - (I + \lambda Q^TQ)^{-1} ]z - \lambda B^2 \\
& = \sum\nolimits_i (U^Tz)_i^2 \frac{\lambda \sigma_{i}}{1 + \lambda \sigma_{i}} – \lambda B^2.
\end{align*}

Observe that \(g(\lambda)\) is concave on \(\mathbb{R}^+,\) so its maximum occurs at \(0\) or the point where \(g^\prime(\lambda) = 0\). Thus, if the equation
\[
\sum\nolimits_i (U^Tz)_i^2 \frac{\sigma_i}{(1 + \lambda \sigma_i)^2} = \|Q(I + \lambda Q^TQ)^{-1} z\|_2^2 = B^2
\]
has a nonnegative solution, that solution is the optimal \(\lambda\). If not, then the optimal \(\lambda\) is \(0.\)

Given this optimal \(\lambda,\) the corresponding optimal \(x\) is given by
\[
x = \Pi_{\|Qx\|_2 \leq B}(z) = (I + \lambda Q^TQ)^{-1} z.
\]

From the Euclidean projection on an ellipsoid back to the Adagrad projector

Putting all the pieces together, we get the following expression for the projection operator needed in Adagrad:
\begin{align*}
\Pi_{\|x\|_2 \leq B}^A(y) & = A^{-1/2} \Pi_{\|A^{-1/2}x\|_2 \leq B}(A^{1/2}y) \\
& = A^{-1/2} ( I + \lambda A^{-1})^{-1} A^{1/2} y \\
& = (I + \lambda A^{-1})^{-1}y = A(A + \lambda I)^{-1}y,
\end{align*}
where \(\lambda\) is the nonnegative solution to the nonlinear equation
\begin{align*}
\sum\nolimits_i (\Sigma^{1/2} U^T y)_{i}^2 \frac{\sigma_i(A^{-1})}{(1 + \lambda \sigma_i(A^{-1}))^2} & = \|(I + \lambda A^{-1})^{-1} y \|_2^2 \\
& = \|A(A + \lambda I)^{-1} y\|_2^2 = B^2,
\end{align*}
if such a solution exists, and \(\lambda = 0\) otherwise. Here \(A = U \Sigma U^T\) is the eigendecomposition of \(A\), so that \(\sigma_i(A^{-1}) = 1/\Sigma_{ii}.\)
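
Here is a sketch of the resulting projection operator in Python; the optimal \(\lambda\) is found by bracketing and root-finding, and the brute-force sanity check at the end is my own addition rather than anything from the Adagrad paper.

    import numpy as np
    from scipy.optimize import brentq

    def adagrad_ball_projection(y, A, B):
        # Projection of y onto {x : ||x||_2 <= B} in the norm induced by the
        # positive-definite matrix A: the answer is A (A + lam I)^{-1} y, with
        # lam = 0 if ||y||_2 <= B and otherwise lam solving
        # ||A (A + lam I)^{-1} y||_2 = B.
        if np.linalg.norm(y) <= B:
            return y                              # constraint inactive, lam = 0
        w, U = np.linalg.eigh(A)                  # A = U diag(w) U^T
        c = U.T @ y

        def constraint_norm(lam):
            return np.linalg.norm(w / (w + lam) * c)   # ||A (A + lam I)^{-1} y||_2

        hi = 1.0                                  # the norm decreases in lam, so double
        while constraint_norm(hi) > B:            # an upper bound until the root is bracketed
            hi *= 2.0
        lam = brentq(lambda t: constraint_norm(t) - B, 0.0, hi)
        return U @ (w / (w + lam) * c)

    # Sanity check against projected gradient descent on 0.5 (x-y)^T A (x-y)
    # over the Euclidean ball.
    rng = np.random.default_rng(1)
    M = rng.standard_normal((4, 4))
    A = M @ M.T + np.eye(4)
    y = 3.0 * rng.standard_normal(4)
    B = 1.0
    x_star = adagrad_ball_projection(y, A, B)

    x = np.zeros(4)
    for _ in range(20000):
        x -= 1e-3 * (A @ (x - y))                 # gradient step
        if np.linalg.norm(x) > B:
            x *= B / np.linalg.norm(x)            # Euclidean projection onto the ball
    print(np.linalg.norm(x_star), np.linalg.norm(x - x_star))

For the diagonal variant used in practice, \(A\) is diagonal, so the eigendecomposition is free and each evaluation of the constraint norm costs only \(O(d)\).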

Back of the envelope calculations of how fast your computer can do linear algebra operations

Let’s talk about CPU speed, practically. By practically, I mean, how fast can your CPU do linear algebra operations. And by linear algebra operations, I mean matrix-matrix multiplies.

First, you need to calculate how many FLOPS your computer can do. The following formula comes in handy:
\[
\text{nFLOPS} = \text{cores} \cdot \frac{\text{clock cycles}}{\text{second}} \cdot \frac{\text{FLOPS}}{\text{cycle}}.
\]
You probably already know the number of cores in your computer and its clock speed (cycles per second). The interesting quantity here is the number of FLOPS per cycle: this depends on the architecture of your CPU and on what exactly you take to be the size of a float.

It’s standard to take a float to consist of 32 bits, so the number of FLOPS per cycle depends on how many 32-bit floats fit into your registers. SSE-capable CPUs have 128-bit registers, so they can do 4 FLOPS per cycle (this is the most common class of CPUs). AVX-capable CPUs have 256-bit registers, so they can do 8 FLOPS per cycle (e.g., the latest MacBook Pros are AVX capable).

Putting these bits together, I get that my workstation, which has 2 hexa-core SSE-capable CPUs each running at 2 GHz, achieves
\[
\text{nFLOPS} = (2 \cdot 6) \cdot (2 \times 10^9) \cdot 4 = 96 \text{ GFLOPS}.
\]

The cost of a matrix-matrix multiply of two \(n\)-by-\(n\) matrices is essentially \(n^3\) floating point operations. Thus it should take this workstation about \(\frac{n^3}{96} \times 10^{-9}\) seconds to do this multiply.

E.g., in my case, the predicted time for multiplying two \(16\text{K} \times 16\text{K}\) matrices is about 42.6 seconds. A quick Matlab check shows it does indeed take about 43 seconds.
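
If you want to repeat the check on your own machine, here is a minimal timing sketch in Python/NumPy; the peak value and the matrix size are placeholders to adjust, and the flop count follows the \(n^3\) convention used above.

    import time
    import numpy as np

    peak_gflops = 96        # plug in your machine's estimate from the formula above
    n = 4096                # matrix size; adjust to taste

    A = np.random.rand(n, n)
    B = np.random.rand(n, n)
    A @ B                   # warm-up, so BLAS initialization isn't timed

    t0 = time.perf_counter()
    A @ B
    elapsed = time.perf_counter() - t0

    predicted = n**3 / (peak_gflops * 1e9)    # the n^3-operation estimate from above
    print(f"measured {elapsed:.2f} s, predicted {predicted:.2f} s")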

A useful trick for computing gradients w.r.t. matrix arguments, with some examples

I’ve spent hours this week and last week computing, recomputing, and checking expressions for matrix gradients of functions. It turns out that, except in the simplest of cases, the most pain-free method for finding such gradients is to use the Frechet derivative (this is one of the few concrete benefits I derived from the differential geometry course I took back in grad school).

Remember that the Frechet derivative of a function \(f : X \rightarrow \mathbb{R}\) at a point \(x\) is defined as the unique linear operator \(d\) that is tangent to \(f\) at \(x\), i.e. that satisfies
\[
f(x+h) = f(x) + d(h) + o(\|h\|).
\]
This definition of differentiability makes sense whenever \(X\) is a normed linear space. When \(X\) is an inner product space and \(f\) is differentiable in this sense, the gradient of \(f\) at \(x\) is the element satisfying \(d(h) = \langle \nabla f(x), h \rangle.\)

Simple application

As an example application, let’s compute the gradient of the function
\[
f(X) = \langle A, XX^T \rangle := \mathrm{trace}(A^T XX^T) = \sum_{ij} A_{ij} (XX^T)_{ij}
\]
over the linear space of \(m\) by \(n\) real-valued matrices equipped with the Frobenius norm. First we can expand out \(f(X+H)\) as
\[
f(X + H) = \langle A, (X+H)(X+H)^T \rangle = \langle A, XX^T + XH^T + HX^T + HH^T \rangle
\]
Now we observe that the terms which involve more than one power of \(H\) are \(O(\|H\|^2) = o(\|H\|)\) as \(H \rightarrow 0\), so
\[
f(X + H) = f(X) + \langle A, XH^T + HX^T \rangle + o(\|H\|).
\]
It follows that
\[
d(H) = \langle A, XH^T + HX^T \rangle = \mathrm{trace}(A^TXH^T) + \mathrm{trace}(A^THX^T),
\]
which is clearly a linear function of \(H\) as desired. To write this in a way that exposes the gradient, we use the
cyclicity properties of the trace, and exploit its invariance under transposes to see that
\begin{align}
d(H) & = \mathrm{trace}(HX^TA) + \mathrm{trace}(X^TA^T H) \\
& = \mathrm{trace}(X^TAH) + \mathrm{trace}(X^TA^T H) \\
& = \langle AX, H \rangle + \langle A^TX, H \rangle \\
& = \langle (A + A^T)X, H \rangle.
\end{align}
The gradient of \(f\) at \(X\) is evidently \((A + A^T)X\).
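
A cheap way to gain confidence in formulas like this is a finite-difference check along a random direction; here is a sketch (the sizes and the step \(t\) are arbitrary choices).

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 5, 3
    A = rng.standard_normal((m, m))
    X = rng.standard_normal((m, n))

    f = lambda X: np.trace(A.T @ X @ X.T)     # f(X) = <A, X X^T>
    grad = (A + A.T) @ X                      # the claimed gradient

    H = rng.standard_normal((m, n))           # random direction
    t = 1e-6
    fd = (f(X + t * H) - f(X)) / t            # approximate directional derivative
    print(fd, np.sum(grad * H))               # the two numbers should agree to several digits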

More complicated application

If you have the patience to work through a lot of algebra, you could probably calculate the above gradient component by component using the standard rules of differential calculus, then back out the simple matrix expression \((A + A^T)X\). But what if we partitioned \(X\) into \(X = [\begin{matrix}X_1^T & X_2^T \end{matrix}]^T\) and desired the derivative of
\[
f(X_1, X_2) = \mathrm{trace}\left(A \left[\begin{matrix} X_1 \\ X_2 \end{matrix}\right] \left[\begin{matrix}X_1 \\ X_2 \end{matrix} \right]^T\right)
\]
with respect to \(X_2\)? Then the necessary bookkeeping becomes even more tedious if you want to compute component-by-component derivatives (I imagine, not having attempted it). On the other hand, the Frechet derivative route is not significantly more complicated.

Some basic manipulations allow us to claim
\begin{align}
f(X_1, X_2 + H) & = \mathrm{trace}\left(A \left[\begin{matrix} X_1 \\ X_2 + H \end{matrix}\right] \left[\begin{matrix}X_1 \\ X_2 + H \end{matrix} \right]^T\right) \\
& = f(X_1, X_2) + \mathrm{trace}\left(A \left[\begin{matrix} 0 & X_1 H^T \\
H X_1^T & H X_2^T + X_2 H^T + H H^T \end{matrix} \right]\right).
\end{align}
Once again we drop the \(o(\|H\|)\) terms to see that
\[
d(H) = \mathrm{trace}\left(A \left[\begin{matrix} 0 & X_1 H^T \\
H X_1^T & H X_2^T + X_2 H^T \end{matrix} \right]\right).
\]
To find a simple expression for the gradient, we partition \(A\) (conformally with our partitioning of \(X\) into \(X_1\) and \(X_2\)) as
\[
A = \left[\begin{matrix} A_1 & A_2 \\ A_3 & A_4 \end{matrix} \right].
\]
Given this partitioning,
\begin{align}
d(H) & = \mathrm{trace}\left(\left[\begin{matrix}
A_2 H X_1^T & \\
& A_3 X_1 H^T + A_4 H X_2^T + A_4 X_2 H^T
\end{matrix}\right] \right) \\
& = \langle A_2^TX_1, H \rangle + \langle A_3X_1, H \rangle + \langle A_4^T X_2, H \rangle + \langle A_4X_2, H \rangle \\
& = \langle (A_2^T + A_3)X_1 + (A_4^T + A_4)X_2, H \rangle.
\end{align}
The first equality comes from noting that the trace of a block matrix is simply the sum of the traces of its diagonal blocks, and the second comes from manipulating the traces using their cyclicity and invariance to transposes.

Thus \(\nabla_{X_2} f(X_1, X_2) = (A_2^T + A_3)X_1 + (A_4^T + A_4)X_2.\)

A masterclass application

Maybe you didn’t find the last example convincing. Here’s a function whose matrix gradient I needed to compute (a task I defy you to accomplish using standard calculus operations):
\[
f(V) = \langle 1^T K, \log(1^T \mathrm{e}^{VV^T}) \rangle = \log(1^T \mathrm{e}^{VV^T})K^T 1.
\]
Here, \(K\) is an \(n \times n\) matrix (nonsymmetric in general), \(V\) is an \(n \times d\) matrix, and \(1\) is a column vector of ones of length \(n\). The exponential \(\mathrm{e}^{VV^T}\) is computed entrywise, as is the \(\log\).

To motivate why you might want to take the gradient of this function, consider the situation that \(K_{ij}\) measures how similar items \(i\) and \(j\) are in a nonsymmetric manner, and the rows of \(V\) are coordinates for representations of the items in Euclidean space. Then \((1^T K)_j\) measures how similar item \(j\) is to all the items, and
\[
(1^T \mathrm{e}^{VV^T})_j = \sum_{\ell=1}^n \mathrm{e}^{v_\ell^T v_j}
\]
is a measure of how similar the embedding \(v_j\) is to the embeddings of all the items. Thus, if we constrain all the embeddings to have norm 1, maximizing \(f(V)\) with respect to \(V\) ensures that the embeddings capture the item similarities in some sense. (Why do you care about this particular sense? That’s another story altogether.)

Ignoring the constraints (you could use a projected gradient method for the optimization problem), we’re now interested in finding the gradient of \(f\). In the following, I use the notation \(A \odot B\) to indicate the pointwise product of two matrices.
\begin{align}
f(V + H) & = \langle 1^T K, \log(1^T \mathrm{e}^{(V+H)(V+H)^T}) \rangle \\
& = \langle 1^T K, \log(1^T [\mathrm{e}^{VV^T} \odot \mathrm{e}^{VH^T} \odot \mathrm{e}^{HV^T} \odot \mathrm{e}^{HH^T} ]) \rangle.
\end{align}
One can use the series expansion of the exponential to see that
\begin{align}
\mathrm{e}^{VH^T} & = 11^T + VH^T + o(\|H\|), \\
\mathrm{e}^{HV^T} & = 11^T + HV^T + o(\|H\|), \text{ and}\\
\mathrm{e}^{HH^T} & = 11^T + o(\|H\|).
\end{align}
It follows that
\begin{multline}
f(V + H) = \langle 1^T K, \log(1^T [\mathrm{e}^{VV^T} \odot (11^T + VH^T + o(\|H\|)) \\
\odot (11^T + HV^T + o(\|H\|)) \odot (11^T + o(\|H\|)) ]) \rangle.
\end{multline}
This readily simplifies to
\begin{align}
f(V + H) & = \langle 1^T K, \log(1^T [\mathrm{e}^{VV^T} \odot(11^T + VH^T + HV^T + o(\|H\|))]) \rangle \\
& = \langle 1^T K, \log(1^T [\mathrm{e}^{VV^T} + \mathrm{e}^{VV^T} \odot (VH^T + HV^T) + o(\|H\|)]) \rangle.
\end{align}
Now recall the linear approximation of \(\log\):
\[
\log(x) = \log(x_0) + \frac{1}{x_0} (x-x_0) + O(|x- x_0|^2).
\]
Apply this approximation pointwise to conclude that
\begin{multline}
f(V + H) = \langle 1^T K, \log(1^T \mathrm{e}^{VV^T}) + \\
\{1^T \mathrm{e}^{VV^T}\}^{-1}\odot (1^T [\mathrm{e}^{VV^T} \odot (VH^T + HV^T) + o(\|H\|)]) \rangle,
\end{multline}
where \(\{x\}^{-1}\) denotes the pointwise inverse of a vector.
Take \(D\) to be the diagonal matrix with diagonal entries given by \(1^T \mathrm{e}^{VV^T}\). We have shown that
\[
f(V + H) = f(V) + \langle K^T1, D^{-1} [\mathrm{e}^{VV^T} \odot (VH^T + HV^T)]1 \rangle + o(\|H\|),
\]
so
\begin{align}
d(H) & = \langle K^T1, D^{-1} [\mathrm{e}^{VV^T} \odot (VH^T + HV^T)]1 \rangle \\
& = \langle D^{-1}K^T 11^T, \mathrm{e}^{VV^T} \odot (VH^T + HV^T) \rangle \\
& = \langle \mathrm{e}^{VV^T} \odot D^{-1}K^T 11^T, (VH^T + HV^T) \rangle.
\end{align}
The second equality follows from the standard properties of inner products, and the third from the observation that
\[
\langle A, B\odot C \rangle = \sum_{ij} A_{ij} B_{ij} C_{ij} = \langle B \odot A, C \rangle.
\]
Finally, manipulations in the vein of the two preceding examples allow us to claim that
\[
\nabla_V f(V) = [\mathrm{e}^{VV^T} \odot (11^T K D^{-1} + D^{-1} K^T 11^T)] V.
\]

As a caveat, note that if instead \(f(V) = \log(1^T \mathrm{e}^{VV^T} ) K 1\), then one should swap \(K\) and \(K^T\) in the last expression.
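
Since it is so easy to drop a transpose in these manipulations, a finite-difference check is again worthwhile; here is a sketch of one for the formula above (the sizes and step size are arbitrary choices).

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 6, 3
    K = rng.standard_normal((n, n))
    V = rng.standard_normal((n, d))
    one = np.ones(n)

    def f(V):
        # f(V) = <1^T K, log(1^T e^{V V^T})> = log(1^T e^{V V^T}) K^T 1
        return np.log(one @ np.exp(V @ V.T)) @ (K.T @ one)

    def grad_f(V):
        E = np.exp(V @ V.T)
        Dinv = 1.0 / (one @ E)                    # entries of D^{-1}
        r = Dinv * (K.T @ one)                    # D^{-1} K^T 1
        S = np.outer(one, r) + np.outer(r, one)   # 1 1^T K D^{-1} + D^{-1} K^T 1 1^T
        return (E * S) @ V

    H = rng.standard_normal((n, d))               # random direction
    t = 1e-6
    fd = (f(V + t * H) - f(V)) / t
    print(fd, np.sum(grad_f(V) * H))              # should agree to several digits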

A workaround for installing SQBLib

I spent about an hour today getting Carlos Becker’s SQBLib package for gradient boosted tree regression in MATLAB working. The issue is that it depends on liblbfgs, and following the instructions Carlos gives for compiling liblbfgs resulted in errors when MATLAB tried to link liblbfgs into his code.

Specifically, I got a mysterious error like
/usr/bin/ld: /home/agittens/Downloads/liblbfgs-1.10/lib/lbfgs.o: relocation R_X86_64_32S against `.text' can not be used when making a shared object; recompile with -fPIC.

See the original install instructions. Here’s the fix that worked for me on a 64-bit Linux box:

  1. Run ./configure --disable-shared --enable-static --enable-sse2 from the liblbfgs root directory
  2. Run make --dry-run to find the gcc command that generates lbfgs.o and modify it to add the -fPIC compiler option. In my case, that resulted in the line
    gcc -DHAVE_CONFIG_H -I.. -I. -I../include -msse2 -DUSE_SSE -O3 -ffast-math -fPIC -Wall -c lbfgs.c -o lbfgs.o
  3. Move into the lib/ directory and execute the modified command from step 2
  4. Edit the GLN... case of the switch statement in the compileMex.m file to replace the current
    %s/lib/.libs/lbfgs.o portion of the mex call with %s/lib/lbfgs.o
  5. Run compileMex(<liblbfgs root directory>)

Quick note on the Chen, Chi, Goldsmith covariance sketching paper

NB: I will update this post as I read the paper, in case it turns out that the first issue I raised is not legitimately a concern.

Covariance estimation (and the natural extension, precision estimation) has always been an interesting topic for me because it (can) represent a concise, concrete, and very broadly applicable instance of applied nonasymptotic random matrix theory. Likewise, I’m also quite interested in matrix sketching algorithms. Thus I was very excited to see the latest preprint by Chen, Chi, and Goldsmith on arXiv, which presents a convex optimization algorithm for recovering covariance matrices from randomized sketches obtained in a streaming manner. Their convex optimization problem is essentially the PhaseLift formulation used for recovering phase from magnitude, but their proof shows that it works for covariance matrix recovery.

I have only just started reading this paper, and I’m still excited, but I have two concerns already: first, it is not clear to me that the algorithm they propose is actually the algorithm they provide a guarantee for! At the least, the text must be corrected to make it clear that this is indeed the case. To be more precise, their algorithm is to compute a few random sketching vectors ahead of time, then as the rows of the matrix come in, compute the magnitude of the projection of each row onto *one* randomly chosen sketching vector. The measurement model described mathematically seems to compute the magnitude of the projection of each row onto *each* of the sketching vectors. Big difference there.

Second, their algorithm provides a Frobenius norm guarantee, which is par for the course, but they make claims about things like subspace detection, for which, as far as I know, Frobenius norm guarantees are too weak to really be of interest. But here this may be a matter of preference, and practitioners probably don’t care about sharp guarantees as long as the method works in practice and has at least a semblance of theoretical support.

Sampling uniformly from the set of partitions into a fixed number of nonempty sets

It’s easy to sample uniformly from the set of partitions of a set: you pick a number of bins according to an appropriate distribution, then independently toss each element of the set into one of those bins uniformly at random. At the end of this procedure, the nonempty bins constitute your uniformly sampled random partition. [literature ref: “Generation of a random partition of a finite set by an urn model” by Stam, 1983; pretty pictures ref: see this blog post].

Is there a similarly efficient and simple algorithm for sampling uniformly from the set of partitions into a fixed number of nonempty sets? The only algorithm I’m aware of takes advantage of the fact that the number of such partitions is given by \(\left\{n \atop p \right\}\), a Stirling number of the second kind, if the set has \(n\) elements and we desire \(p\) nonempty subsets in the partition. In particular, we have the identity
\[
\left\{ {n \atop p} \right\} = \left\{ {n-1 \atop p-1} \right\} + p \left\{ {n-1 \atop p} \right\}
\]
that comes from observing that there are only two ways to construct a partition of \(n\) elements into \(p\) nonempty sets: either partition the first \(n-1\) elements into \(p-1\) nonempty sets and take the remaining element as a singleton final set, or partition the first \(n-1\) elements into \(p\) nonempty sets and place the remaining element into any of these \(p\) sets.

This observation leads to a straightforward recursive sampling procedure: with probability \(\left\{ {n-1 \atop p-1} \right\}/\left\{ {n \atop p} \right\}\), use the first procedure with a randomly sampled partition of the first \(n-1\) elements into \(p-1\) nonempty sets, otherwise use the second procedure with a randomly sampled partition of the first \(n-1\) elements into \(p\) nonempty sets.

Unfortunately, this is not an efficient procedure, for several reasons. In particular, it requires computing Stirling numbers and taking their ratio: when \(n\) and \(p\) are large, this requires both computational time and, more of a practical impediment, arbitrary-precision arithmetic. A straightforward implementation also relies on recursion, which is infeasible when \(n\) is large. Clearly one can implement this algorithm without using recursion; one can also use an asymptotic expansion of the Stirling numbers to approximate the ratio \(\left\{ {n-1 \atop p-1} \right\}/\left\{ {n \atop p} \right\}\) when \(n\) and \(p\) are large … but this comes at the cost of some unspecified inaccuracy and just doesn’t feel right.
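
For what it's worth, here is a sketch of a non-recursive, exact implementation in Python, where arbitrary-precision integers come for free. It works, but it still builds a table of huge Stirling numbers, so it doesn't answer the efficiency complaint.

    import random

    def sample_partition(n, p, seed=None):
        # Uniformly sample a partition of {0, ..., n-1} into p nonempty blocks, by
        # walking the recurrence S(n, p) = S(n-1, p-1) + p S(n-1, p) backwards.
        # Python integers are arbitrary precision, so the ratios are exact.
        rng = random.Random(seed)

        # S[m][q] = number of partitions of an m-set into q nonempty blocks.
        S = [[0] * (p + 1) for _ in range(n + 1)]
        S[0][0] = 1
        for m in range(1, n + 1):
            for q in range(1, min(m, p) + 1):
                S[m][q] = S[m - 1][q - 1] + q * S[m - 1][q]

        # Decide, for each element from last to first, whether it sits alone among
        # the elements seen so far or joins one of the existing blocks.
        choices = []
        m, q = n, p
        while m > 0:
            if rng.randrange(S[m][q]) < S[m - 1][q - 1]:
                choices.append("new")
                q -= 1
            else:
                choices.append("old")
            m -= 1

        # Replay the decisions forward to build the blocks.
        blocks = []
        for element, choice in zip(range(n), reversed(choices)):
            if choice == "new":
                blocks.append({element})
            else:
                rng.choice(blocks).add(element)   # uniform over the existing blocks
        return blocks

    print(sample_partition(10, 3, seed=0))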

So the question remains: is there an efficient and simple way to sample uniformly from the set of partitions into a fixed number of nonempty sets?

I miss Mathematica

Why? Because Mathics is not up to helping me determine if indeed
\[
f(\{A_1, \ldots, A_p\}) = \frac{(n-p)!^2}{n! p!} \left(\frac{1}{p} \right)^{n-p} \frac{|A_1|^2 \cdots |A_p|^2}{|A_1|!\cdots |A_p|!}
\]
is a pmf over the set of partitions of the set \(\{1, \ldots, n\}\) into \(p \leq n\) nonempty sets. In particular, the sets \(A_1, \ldots, A_p\) in the formula above satisfy \(|A_1| + \ldots +|A_p| = n\) and \(|A_i| \geq 1\) for all \(i.\)

I feel like I could empirically test this easily in Mathematica, but OMG trying to do it in Matlab is a real pain, so I gave up. Combinatorics or set manipulation in Matlab in general is an exercise in trying to make a smoothie with a grater: you can do it, eventually, but it’s going to take forever and make a mess.
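
For the record, here is the sort of brute-force check I have in mind, as a Python sketch with exact rational arithmetic: it sums \(f\) over all partitions for one small choice of \((n, p)\), and the total equals 1 exactly when \(f\) is a pmf for that choice. The enumeration is crude and only scales to small \(n\).

    from fractions import Fraction
    from itertools import product
    from math import factorial

    def partitions_into(n, p):
        # All partitions of {0,...,n-1} into exactly p nonempty blocks,
        # by brute force over label assignments (fine only for small n).
        seen = set()
        for labels in product(range(p), repeat=n):
            if len(set(labels)) != p:
                continue
            seen.add(frozenset(frozenset(i for i in range(n) if labels[i] == b)
                               for b in range(p)))
        return seen

    def f(blocks, n, p):
        val = (Fraction(factorial(n - p)) ** 2 / (factorial(n) * factorial(p))
               * Fraction(1, p) ** (n - p))
        for A in blocks:
            val *= Fraction(len(A) ** 2, factorial(len(A)))
        return val

    n, p = 7, 3
    print(sum(f(B, n, p) for B in partitions_into(n, p)))   # equals 1 iff f is a pmf here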

The convergence rate of the OLS estimator for linear regression

A lot of machine learning and modern statistics concerns the rate at which certain statistical estimators converge to the quantities they are estimating. For instance, in classification, you use a collection of observations of covariates \(\mathbf{x}_i \in \mathbb{R}^p \) and classifications \(y_i \in \{\pm 1\}\) to fit an estimated classification function \(h : \mathbb{R}^p \rightarrow \{\pm 1\},\) and the relevant question is: how close is this estimated function to the actual classification function as the number of observations increases?

In this post, I’ll consider the consistency rate of the simple ordinary least-squares estimator for linear regression coefficients. The estimators of more practical interest in a modern setting (high-dimensional with fewer observations than variables) are more complicated, but OLS estimators are a natural starting point, and comparatively much easier to work with and understand.

To establish the setting, assume that our observations take the form \(y_i = \mathbf{x}_i^T \mathbf{\beta} + \varepsilon_i,\) where the \(\mathbf{x}_i\) are the observed covariates and the \(\varepsilon_i\) are i.i.d. \(\mathcal{N}(0,1/n)\) noise terms independent of the covariates. We can write this as the matrix–vector equation
\[
\mathbf{y} = \mathbf{X} \mathbf{\beta} + \mathbf{\varepsilon}.
\]
The rows of the \(n \times p\) matrix \(\mathbf{X}\) constitute the observed covariates.

Given data \(\mathbf{X}\) and \(\mathbf{y}\) that we expect conform to this model, the OLS estimator for the regression coefficients is \(\hat{\mathbf{\beta}} = (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{y},\) if we assume that \(\mathbf{X}\) is full-rank. We are interested in how fast \(\hat{\mathbf{\beta}}\) converges to \(\mathbf{\beta}\) as the number of observations \(n\) is increased. It seems most natural to consider the \(\ell_2\) error since we are dealing with least-squares estimation.

If \(\mathbf{X}\) is full-rank with its smallest singular value \(\sigma_p(\mathbf{X})\) bounded away from \(0\), then \(\mathbf{X}^\dagger \mathbf{X} = \mathbf{I}\) and we can estimate
\[
\|\hat{\mathbf{\beta}} - \mathbf{\beta} \|_2 = \|(\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{\varepsilon}\|_2 \leq \|\mathbf{X}^\dagger\|_2 \|\mathbf{X} (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{\varepsilon}\|_2.
\]
Since \(\mathbf{X} (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T\) is a projection onto a \(p\)-dimensional subspace of \(\mathbb{R}^n\) and \(\mathbf{\varepsilon} \sim \mathcal{N}(0, \mathbf{I}_n/n),\) the quantity \(\|\mathbf{X} (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{\varepsilon}\|_2\) concentrates around \(\sqrt{p/n},\) while \(\|\mathbf{X}^\dagger\|_2 = \sigma_p(\mathbf{X})^{-1}.\) We see that in this case
\[
\|\hat{\mathbf{\beta}} - \mathbf{\beta} \|_2 \lesssim \sigma_p(\mathbf{X})^{-1} \sqrt{p/n}.
\]

It’s natural to ask whether this bound is optimal. To see that it is, consider the case of an orthogonal design, i.e. \(\mathbf{X}^T \mathbf{X} = \mathbf{I}_p.\) Then
\[
\|\hat{\mathbf{\beta}} - \mathbf{\beta} \|_2 = \|\mathbf{X}^T \mathbf{\varepsilon}\|_2.
\]
Since \(\mathbf{X}^T \mathbf{\varepsilon}\) is a \(\mathcal{N}(0, \mathbf{I}_p/n)\) random vector, the right-hand side quantity concentrates around \(\sqrt{p/n}\).
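
A small simulation makes the rate visible; here is a sketch for the orthogonal-design case (the dimensions and the number of trials are arbitrary choices).

    import numpy as np

    rng = np.random.default_rng(0)
    n, p, trials = 2000, 50, 200

    errors = []
    for _ in range(trials):
        X, _ = np.linalg.qr(rng.standard_normal((n, p)))     # orthogonal design: X^T X = I_p
        beta = rng.standard_normal(p)
        eps = rng.standard_normal(n) / np.sqrt(n)            # noise with variance 1/n
        y = X @ beta + eps
        beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]      # OLS estimate
        errors.append(np.linalg.norm(beta_hat - beta))

    print(np.mean(errors), np.sqrt(p / n))   # the two should be comparable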