Tensor recovery and completion — a novel approach to recovering low-rank tensors that does not involve matrix unfolding, and has reasonable sample complexity (better than Square Deal)
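Not the paper's method (haven't read it yet), but for my own reference, a minimal sketch of what "completion without unfolding" can look like: fit a CP factorization to the observed entries directly by gradient descent on the factors, never forming a matricization. Sizes and step size are ad hoc.

```python
import numpy as np

# Minimal sketch, NOT the paper's algorithm: fit a rank-r CP model to the
# observed entries of a 3-way tensor by plain gradient descent on the factors,
# so no matricization/unfolding is ever formed. Step size/iterations untuned.
rng = np.random.default_rng(0)
I, J, K, r = 20, 20, 20, 3
A0 = rng.standard_normal((I, r))
B0 = rng.standard_normal((J, r))
C0 = rng.standard_normal((K, r))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)      # ground-truth low-rank tensor
mask = rng.random((I, J, K)) < 0.3              # ~30% of entries observed

A = 0.3 * rng.standard_normal((I, r))
B = 0.3 * rng.standard_normal((J, r))
C = 0.3 * rng.standard_normal((K, r))
lr = 0.002
for _ in range(10000):
    R = mask * (np.einsum('ir,jr,kr->ijk', A, B, C) - T)   # residual on observed entries only
    A, B, C = (A - lr * np.einsum('ijk,jr,kr->ir', R, B, C),
               B - lr * np.einsum('ijk,ir,kr->jr', R, A, C),
               C - lr * np.einsum('ijk,ir,jr->kr', R, A, B))

unseen = ~mask
rel = np.linalg.norm(unseen * (np.einsum('ir,jr,kr->ijk', A, B, C) - T)) / np.linalg.norm(unseen * T)
print('relative error on unobserved entries:', rel)
```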
Partial sharing of latent factors in multiview data — explicitly takes advantage of the fact that information should not only be shared across modes, but that each mode should also have its own unique information
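For my own intuition, the data-generating picture I have in mind (my illustration, not the paper's actual model):

```python
import numpy as np

# Toy generative picture of partial factor sharing (my illustration, not the
# paper's estimator): each view mixes latent factors shared across views with
# factors that are private to that view.
rng = np.random.default_rng(1)
n, k_shared, k1, k2 = 200, 2, 3, 4
Z  = rng.standard_normal((n, k_shared))   # factors common to both views
Z1 = rng.standard_normal((n, k1))         # factors unique to view 1
Z2 = rng.standard_normal((n, k2))         # factors unique to view 2
X1 = Z @ rng.standard_normal((k_shared, 50)) + Z1 @ rng.standard_normal((k1, 50))
X2 = Z @ rng.standard_normal((k_shared, 80)) + Z2 @ rng.standard_normal((k2, 80))
# a partial-sharing method should recover Z jointly from (X1, X2) while
# attributing Z1 and Z2 to the individual views.
```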
Analysis of a nonconvex low-rank matrix estimator — not sure what the regularizer is, but it looks readable; the analysis perhaps uses a restricted isometry assumption
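Whatever the actual estimator is, the setting I expect is something like matrix sensing; a simple nonconvex baseline (not necessarily the paper's method) is gradient descent directly on a low-rank factor:

```python
import numpy as np

# Generic nonconvex baseline (not necessarily the paper's estimator): recover a
# rank-r PSD matrix M = U U^T from Gaussian linear measurements y_i = <A_i, M>
# by gradient descent on the factor U. Toy sizes; no claim about the paper's conditions.
rng = np.random.default_rng(2)
n, r, m = 30, 2, 800
U_true = rng.standard_normal((n, r)) / np.sqrt(n)
M = U_true @ U_true.T
As = rng.standard_normal((m, n, n))
y = np.einsum('mij,ij->m', As, M)

U = 0.1 * rng.standard_normal((n, r))
lr = 0.05 / m
for _ in range(1000):
    resid = np.einsum('mij,ij->m', As, U @ U.T) - y   # <A_i, UU^T> - y_i
    G = np.einsum('m,mij->ij', resid, As)
    U -= lr * 2 * (G + G.T) @ U                       # gradient of sum_i resid_i^2 w.r.t. U

print('relative error:', np.linalg.norm(U @ U.T - M) / np.linalg.norm(M))
```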
Exponential families on Manifolds — fun light reading. natural adaptation of exponential families to (Lie?) manifolds; not many experimental results
Exponential families in high dimensions — convergence rates for learning under sparsity assumptions. useful for anything I do? looks like fun/informative reading, though
Graph partitioning for parallel machine learning — uses submodularity. how does it compare to the sampling scheme from Cyclades?
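No idea yet how their submodular formulation works; as a strawman to compare against in my head, a dumb greedy streaming partitioner (this is not the paper's method):

```python
import random
from collections import defaultdict

# Strawman for comparison only (NOT the paper's submodular method): a greedy
# streaming partitioner that places each vertex in the block holding most of
# its already-placed neighbours, subject to a capacity cap.
def greedy_partition(edges, num_vertices, k, slack=1.1):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    cap = slack * num_vertices / k
    assign, sizes = {}, [0] * k
    for v in range(num_vertices):
        scores = [sum(1 for u in adj[v] if assign.get(u) == b) for b in range(k)]
        # prefer the block with the most neighbours that still has room;
        # break ties toward the least-loaded block
        best = max((b for b in range(k) if sizes[b] < cap),
                   key=lambda b: (scores[b], -sizes[b]))
        assign[v] = best
        sizes[best] += 1
    cut = sum(1 for u, v in edges if assign[u] != assign[v])
    return assign, cut

random.seed(0)
edges = [(random.randrange(200), random.randrange(200)) for _ in range(800)]
_, cut = greedy_partition(edges, 200, k=4)
print('edges cut:', cut, 'of', len(edges))
```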
Column selection from matrices with missing data — looks like a long read, but worthwhile, and well written. READ
Minkowski symmetrization of star-shaped sets — looks readable; possibly applicable to range-space estimation via Householder rotations?
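The connection I have in mind (independent of the paper) is just the standard randomized range finder, where the orthonormalization step is Householder QR:

```python
import numpy as np

# Standard randomized range estimation (my context, nothing from the paper):
# sketch the range of A with a random test matrix, then orthonormalize the
# sample with QR, which LAPACK computes via Householder reflections.
rng = np.random.default_rng(3)
m, n, true_rank, k = 500, 200, 10, 15
A = rng.standard_normal((m, true_rank)) @ rng.standard_normal((true_rank, n))

Omega = rng.standard_normal((n, k))          # random test matrix
Y = A @ Omega                                 # sample of the range of A
Q, _ = np.linalg.qr(Y)                        # Householder-based QR under the hood
err = np.linalg.norm(A - Q @ (Q.T @ A)) / np.linalg.norm(A)
print('relative range-approximation error:', err)   # ~0 since k > true_rank
```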
Distributed preconditioning for sparse linear systems — A Saad paper. Is this useful for us anywhere? Nice to read to soak in some knowledge on preconditioning
Another preconditioner for sparse systems — Another Saad paper. Nice to read to get some preconditioner knowledge
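To go with the two Saad notes above, the basic mechanics I want to internalize, in scipy form (generic ILU + GMRES, nothing specific to either paper):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Generic illustration (nothing specific to either Saad paper): solve a sparse
# system with GMRES, with and without an incomplete-LU preconditioner.
n = 2000
A = (sp.diags([-1.0, 2.2, -1.0], [-1, 0, 1], shape=(n, n))
     + sp.random(n, n, density=1e-3, random_state=0)).tocsc()
b = np.ones(n)

def gmres_iterations(A, b, M=None):
    count = [0]
    def cb(_):
        count[0] += 1
    _, info = spla.gmres(A, b, M=M, callback=cb, callback_type='pr_norm')
    return count[0], info

ilu = spla.spilu(A, drop_tol=1e-4)                       # incomplete LU factorization of A
M = spla.LinearOperator((n, n), matvec=ilu.solve)        # approximate A^{-1} via ILU triangular solves

print('GMRES iterations, no preconditioner: ', gmres_iterations(A, b))
print('GMRES iterations, ILU preconditioner:', gmres_iterations(A, b, M=M))
```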
Using bootstrap to test equality of covariance matrices — relevant to the science problems we’re working on (e.g. diagnostics)? might be worth reading just for general statistical sophistication; they apply the method to a biology problem
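As a reminder of the basic recipe (surely cruder than whatever the paper actually does), a pooled-bootstrap test of a covariance-difference statistic:

```python
import numpy as np

# Crude sketch (not the paper's procedure): test H0 "equal covariance matrices"
# by bootstrapping a Frobenius-norm difference statistic from the pooled sample.
rng = np.random.default_rng(4)

def cov_diff_stat(X, Y):
    return np.linalg.norm(np.cov(X, rowvar=False) - np.cov(Y, rowvar=False))

def bootstrap_cov_test(X, Y, n_boot=2000, rng=rng):
    t_obs = cov_diff_stat(X, Y)
    pooled = np.vstack([X, Y])
    n, m = len(X), len(Y)
    t_null = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n + m, size=n + m)        # resample pooled data (valid under H0)
        Z = pooled[idx]
        t_null[b] = cov_diff_stat(Z[:n], Z[n:])
    return t_obs, np.mean(t_null >= t_obs)              # bootstrap p-value

X = rng.standard_normal((100, 5))
Y = rng.standard_normal((100, 5)) @ np.diag([1, 1, 1, 1, 2.0])   # inflate one variance
t, p = bootstrap_cov_test(X, Y)
print(f'statistic = {t:.3f}, approximate p-value = {p:.3f}')
```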
A second-order active set algorithm for l1-regularized optimization — with a short, sweet convergence analysis. looks like a fun, informative optimization read
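For reference, the first-order baseline on the same objective (this is plain ISTA, not the paper's second-order active-set method):

```python
import numpy as np

# First-order baseline for the same objective (plain ISTA, NOT the paper's
# second-order active-set method): minimize 0.5*||Ax - b||^2 + lam*||x||_1.
def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((200, 500))
x_true = np.zeros(500); x_true[:10] = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(200)
x = ista(A, b, lam=0.1)
print('nonzero coefficients in the estimate:', np.count_nonzero(np.abs(x) > 1e-6))
```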
Boosting in linear regression is related to subgradient optimization — how is this related to "AdaBoost and Forward Stagewise Regression are First-Order Convex Optimization Methods" by the same authors?
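My recollection of the object being analyzed (incremental forward stagewise / LS-Boost flavour; the exact variant in the paper may differ): repeatedly take a tiny step on the single feature most correlated with the current residual.

```python
import numpy as np

# The kind of procedure I think is being analyzed (forward stagewise / LS-Boost
# flavour; the paper's exact variant may differ): repeatedly take a small step
# on the single feature most correlated with the current residual.
def forward_stagewise(A, b, step=0.01, n_iter=5000):
    beta = np.zeros(A.shape[1])
    r = b.copy()
    for _ in range(n_iter):
        corr = A.T @ r                        # correlation of each feature with the residual
        j = np.argmax(np.abs(corr))
        beta[j] += step * np.sign(corr[j])    # small step on the best feature
        r -= step * np.sign(corr[j]) * A[:, j]
    return beta

rng = np.random.default_rng(6)
A = rng.standard_normal((100, 20))
beta_true = np.zeros(20); beta_true[:3] = [2.0, -1.5, 1.0]
b = A @ beta_true + 0.1 * rng.standard_normal(100)
beta = forward_stagewise(A, b)
print('estimate on the true support:', np.round(beta[:3], 2))
```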
And an oldie:
Duality between subgradient and conditional gradient methods — exactly as stated. Looks like a nice piece of work in terms of convergence analyses and ideas to repurpose, as well as general pedagogical value. A Bach paper.
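Before reading, my mental model of the conditional-gradient side of the duality (a generic Frank-Wolfe iteration over the l1 ball, not taken from the paper):

```python
import numpy as np

# Generic Frank-Wolfe / conditional gradient over the l1 ball (my own reference
# point, not from the paper): minimize 0.5*||Ax - b||^2 subject to ||x||_1 <= tau.
def frank_wolfe_l1(A, b, tau, n_iter=500):
    x = np.zeros(A.shape[1])
    for t in range(n_iter):
        grad = A.T @ (A @ x - b)
        j = np.argmax(np.abs(grad))
        s = np.zeros_like(x)
        s[j] = -tau * np.sign(grad[j])        # vertex of the l1 ball minimizing <grad, s>
        gamma = 2.0 / (t + 2)                 # standard step-size schedule
        x = (1 - gamma) * x + gamma * s
    return x

rng = np.random.default_rng(7)
A = rng.standard_normal((100, 50))
x_true = np.zeros(50); x_true[:4] = [1.0, -1.0, 0.5, 0.5]
b = A @ x_true
x = frank_wolfe_l1(A, b, tau=np.abs(x_true).sum())
print('objective value:', 0.5 * np.linalg.norm(A @ x - b) ** 2)
```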