
Robust low-rank matrix estimation. (English) Zbl 1412.62068

The authors consider so-called matrix completion problems: a high-dimensional matrix with \(p\) rows and \(q\) columns is observed only through \(n<pq\) noisy entries, and the challenge is to predict the missing entries. This uniform-sampling matrix completion problem is widely studied in the statistical literature, but the commonly used estimators, which are optimal with respect to quadratic loss, are not robust. In this paper the authors consider robust nuclear-norm (trace-norm) penalized estimators that are optimal with respect to the absolute value loss function or with respect to Huber’s loss function (with given tuning parameter). Under assumptions on the sparsity of the problem and on the regularity of the risk functions, they establish so-called oracle inequalities for these estimators. “An oracle inequality relates the performance of a real estimator with that of an ideal estimator which relies on perfect information supplied by an oracle, and which is not available in practice.” (from [E. J. Candès, Acta Numerica 15, 257–325 (2006; Zbl 1141.62001)]). Moreover, the asymptotic behaviour of the estimators is investigated and simulation studies are provided.
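The estimators under study minimize a robust empirical loss plus a nuclear-norm penalty, roughly \(\hat B \in \arg\min_B \frac{1}{n}\sum_{i=1}^n \rho\bigl(Y_i - \operatorname{tr}(X_i^\top B)\bigr) + \lambda\|B\|_{\mathrm{nuclear}}\) with \(\rho\) the absolute value or Huber loss. As an illustration only (not the authors' code; their simulations use CVX), a minimal proximal-gradient sketch for the Huber-loss variant, where each step is a gradient move on the observed entries followed by singular value soft-thresholding:

```python
import numpy as np

def huber_grad(r, delta=1.0):
    """Elementwise gradient of the Huber loss at residuals r."""
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def svt(B, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt  # shrink singular values by tau

def robust_complete(Y, mask, lam=0.1, delta=1.0, step=1.0, iters=500):
    """Proximal gradient descent for Huber loss + nuclear-norm penalty.

    Y    : p x q matrix holding the observed (noisy) entries, zeros elsewhere
    mask : p x q 0/1 matrix marking which entries were observed
    """
    B = np.zeros_like(Y, dtype=float)
    for _ in range(iters):
        r = (B - Y) * mask                          # residuals on observed entries only
        B = svt(B - step * huber_grad(r, delta), step * lam)
    return B
```

The Huber gradient is 1-Lipschitz, so a unit step size is admissible; smaller \(\lambda\) fits the observed entries more closely, while larger \(\lambda\) enforces lower rank.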

MSC:

62H12 Estimation in multivariate analysis
62J05 Linear regression; mixed models
62F30 Parametric inference under constraints
62F35 Robustness and adaptive procedures (parametric inference)

Citations:

Zbl 1141.62001

Software:

CVX

References:

[1] Bühlmann, P. and van de Geer, S. (2011). Statistics for High-Dimensional Data: Methods, Theory and Applications. Springer, Heidelberg. · Zbl 1273.62015
[2] Cambier, L. and Absil, P.-A. (2016). Robust low-rank matrix completion by Riemannian optimization. SIAM J. Sci. Comput. 38 S440–S460. · Zbl 1352.65149 · doi:10.1137/15M1025153
[3] Candès, E. J. and Plan, Y. (2010). Matrix completion with noise. Proc. IEEE 98 925–936.
[4] Candès, E. J., Li, X., Ma, Y. and Wright, J. (2011). Robust principal component analysis? J. ACM 58 11. · Zbl 1327.62369
[5] Chandrasekaran, V., Sanghavi, S., Parrilo, P. A. and Willsky, A. S. (2011). Rank-sparsity incoherence for matrix decomposition. SIAM J. Optim. 21 572–596. · Zbl 1226.90067 · doi:10.1137/090761793
[6] Chen, Y., Jalali, A., Sanghavi, S. and Caramanis, C. (2013). Low-rank matrix recovery from errors and erasures. IEEE Trans. Inform. Theory 59 4324–4337.
[7] Cherapanamjeri, Y., Gupta, K. and Jain, P. (2016). Nearly-optimal robust matrix completion. Preprint. Available at arXiv:1606.07315.
[8] CVX Research Inc. (2012). CVX: Matlab Software for Disciplined Convex Programming, version 2.0. Available at http://cvxr.com/cvx.
[9] Elsener, A. and van de Geer, S. (2018). Supplement to “Robust low-rank matrix estimation.” DOI:10.1214/17-AOS1666SUPP. · Zbl 1412.62068
[10] Foygel, R., Shamir, O., Srebro, N. and Salakhutdinov, R. R. (2011). Learning with the weighted trace-norm under arbitrary sampling distributions. Adv. Neural Inf. Process. Syst. 2133–2141.
[11] Klopp, O. (2014). Noisy low-rank matrix completion with general sampling distribution. Bernoulli 282–303. · Zbl 1400.62115 · doi:10.3150/12-BEJ486
[12] Klopp, O., Lounici, K. and Tsybakov, A. B. (2016). Robust matrix completion. Probab. Theory Related Fields 1–42. · Zbl 1383.62167 · doi:10.1007/s00440-016-0736-y
[13] Koltchinskii, V., Lounici, K. and Tsybakov, A. B. (2011). Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion. Ann. Statist. 39 2302–2329. · Zbl 1231.62097 · doi:10.1214/11-AOS894
[14] Lafond, J. (2015). Low rank matrix completion with exponential family noise. J. Mach. Learn. Res.: Workshop and Conference Proceedings. COLT 2015 Proceedings 40 1–20.
[15] Li, X. (2013). Compressed sensing and matrix completion with constant proportion of corruptions. Constr. Approx. 37 73–99. · Zbl 1258.93076 · doi:10.1007/s00365-012-9176-9
[16] Negahban, S. and Wainwright, M. J. (2011). Estimation of (near) low-rank matrices with noise and high-dimensional scaling. Ann. Statist. 39 1069–1097. · Zbl 1216.62090 · doi:10.1214/10-AOS850
[17] Negahban, S. and Wainwright, M. J. (2012). Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. J. Mach. Learn. Res. 13 1665–1697. · Zbl 1436.62204
[18] Rohde, A. and Tsybakov, A. B. (2011). Estimation of high-dimensional low-rank matrices. Ann. Statist. 39 887–930. · Zbl 1215.62056 · doi:10.1214/10-AOS860
[19] Srebro, N., Rennie, J. and Jaakkola, T. S. (2004). Maximum-margin matrix factorization. In Proceedings of the NIPS Conference 1329–1336. Vancouver.
[20] Srebro, N. and Shraibman, A. (2005). Rank, trace-norm and max-norm. In Learning Theory 545–560. Springer, Berlin. · Zbl 1137.68563
[21] Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B 267–288. · Zbl 0850.62538
[22] van de Geer, S. (2001). Least squares estimation with complexity penalties. Math. Methods Statist. 10 355–374. · Zbl 1005.62043
[23] van de Geer, S.
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. In some cases that data have been complemented/enhanced by data from zbMATH Open. This attempts to reflect the references listed in the original paper as accurately as possible without claiming completeness or a perfect matching.