Truncated SVD … how?

This question has been bothering me off and on for several months now: how *exactly* is the truncated SVD computed, and what does it cost? I’ve gathered that most methods are based on Krylov subspaces, and that implicitly restarted Arnoldi processes seem to be the most popular proposal … at least in the minuscule amount of literature I’ve been able to find on the subject (only two or three papers, tops, that actually focus on *computing* truncated SVDs, as opposed to the multitudes that propose using them). However, I’ve no idea whether that’s actually the case or just the impression I’ve formed.
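For concreteness, this is the kind of routine I mean: SciPy’s `scipy.sparse.linalg.svds` computes a rank-\(k\) truncated SVD via ARPACK, i.e. an implicitly restarted Lanczos/Arnoldi process applied to the underlying symmetric eigenproblem. A minimal sketch (the matrix and rank here are arbitrary placeholders):

```python
import numpy as np
from scipy.sparse.linalg import svds

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 100))  # stand-in for a large (sparse) matrix

# Rank-k truncated SVD via ARPACK's implicitly restarted Lanczos process.
k = 5
U, s, Vt = svds(A, k=k)

# svds does not promise descending order; sort the triplets ourselves.
order = np.argsort(s)[::-1]
U, s, Vt = U[:, order], s[order], Vt[order, :]

# Sanity check against the dense SVD's top-k singular values.
s_full = np.linalg.svd(A, compute_uv=False)
assert np.allclose(s, s_full[:k])
```

Note that `svds` only ever touches `A` through matrix–vector products, which is exactly why its run time depends on how many Lanczos iterations the spectrum forces, not just on \(m\), \(n\), and \(k\).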

But mostly, I want to know the compute time of the most efficient truncated SVD algorithm. Why? A naive estimate for computing the rank-\(k\) truncated SVD of an \(m\times n\) matrix using Krylov subspace techniques is \(\mathrm{O}(mnk)\), but in practice the compute time depends on spectral properties of the matrix — chiefly how quickly the singular values decay and how well separated they are. On the other hand, basically all randomized low-rank techniques take around that many operations *without* depending on the properties of the matrix. Soooo, it’d be nice to know when randomized techniques are competitive. Since there’s so much literature on them, I assume they are (and the Halko–Martinsson–Tropp survey paper has some plots that show speedups).
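To make the comparison concrete, here is a minimal sketch of the randomized approach in the Halko–Martinsson–Tropp style (the function name and default parameters are my own choices, not from any particular library). The \(\mathrm{O}(mn(k+p))\) cost sits in the two dense matrix products, and — unlike the Krylov case — the operation count is fixed up front:

```python
import numpy as np

def randomized_svd(A, k, p=10, q=1):
    """Rank-k truncated SVD via a randomized range finder (HMT-style).

    p is the oversampling amount and q the number of power iterations;
    both trade extra work for accuracy when singular values decay slowly.
    """
    m, n = A.shape
    rng = np.random.default_rng(0)
    # Sketch the range of A with a Gaussian test matrix: O(mn(k+p)) work.
    Omega = rng.standard_normal((n, k + p))
    Y = A @ Omega
    for _ in range(q):                 # optional power iterations
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)             # orthonormal basis for range(Y)
    # Project down to a small (k+p) x n problem and take its exact SVD.
    B = Q.T @ A
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]
```

The catch, of course, is accuracy rather than run time: if the singular values past index \(k\) don’t decay, the sketch captures the range poorly, which is where the power iterations (and the error bounds in the HMT survey) come in.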

  • The right singular vectors of $A$ are the eigenvectors of the symmetric matrix $A^*A$, with the same ordering (and the singular values are the square roots of the eigenvalues). One approach I’ve been personally working on is iteratively finding a $k$-dimensional subspace that best approximates the span of the first $k$ orthonormal eigenvectors of $A^*A$, using a posteriori error estimates for symmetric eigenvalue problems to drive refinement and coarsening. I’m working on this in the infinite-dimensional case, where “$A^*A$” is the Hessian of a (possibly nonlinear) functional on a function space and the finite subspaces are discretizations, but it should be applicable in the discrete linear case. I can send you a reference when I publish it, if you want.

    • swiftset

      Thanks, I’d appreciate a reference when you have one! I understand the connection between the SVD and the symmetric eigenvalue problem, but that doesn’t help with determining the run time of truncated SVD algorithms, since the iterative algorithms for eigenvalue problems also have run times that (afaik) depend on the properties of the matrix.

      • Yeah, the convergence time will depend on the regularity of the eigenfunctions, so it may not be directly applicable without further knowledge about the relevant operator/matrix.
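The SVD/eigenproblem equivalence discussed in the thread above is easy to verify numerically in the discrete case; a small NumPy sketch (random test matrix, so the singular values are distinct almost surely and the eigenvectors are unique up to sign):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 30))

# Right singular vectors of A (rows of Vt, in descending order) ...
_, s, Vt = np.linalg.svd(A, full_matrices=False)

# ... are the eigenvectors of A^T A, with eigenvalues the squared
# singular values. eigh returns ascending eigenvalues, so flip them.
evals, evecs = np.linalg.eigh(A.T @ A)
evals, evecs = evals[::-1], evecs[:, ::-1]

assert np.allclose(evals, s**2)
# Eigenvectors are defined only up to sign, so compare |inner products|.
assert np.allclose(np.abs(np.sum(Vt * evecs.T, axis=1)), 1.0)
```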