
Smooth singular value thresholding algorithm for low-rank matrix completion problem. (English) Zbl 07854606

Summary: The matrix completion problem is to predict the missing entries of a data matrix from a low-rank approximation of the observed entries. Typical approaches to the matrix completion problem rely on thresholding the singular values of the data matrix. However, these approaches have some limitations: in particular, the thresholding function is discontinuous near the threshold, and the threshold must be selected manually. To overcome these difficulties, we propose a shrinkage-and-thresholding function that thresholds the singular values smoothly, yielding a more accurate and robust estimate of the data matrix. Furthermore, the proposed function is differentiable, so the thresholds can be computed adaptively during the iterations using Stein's unbiased risk estimate. Experimental results demonstrate that, on image inpainting problems, the proposed algorithm yields a more accurate estimate with faster execution than other matrix completion algorithms.
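The approach summarized above can be sketched in a few lines of NumPy. The `smooth_svt` gate below is a hypothetical stand-in for the paper's shrinkage function (the summary does not give its exact form or the SURE-based threshold update); it only illustrates the idea of a shrinkage map that is differentiable in both the singular values and the threshold, in contrast to the classic discontinuous soft-thresholding of Cai, Candès, and Shen. The completion loop is a standard soft-impute-style fixed-point iteration, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def svt(X, tau):
    """Classic singular value thresholding (soft shrinkage):
    sigma -> max(sigma - tau, 0); non-differentiable at sigma = tau."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt  # U diag(s') Vt

def smooth_svt(X, tau, gamma=0.3):
    """Hypothetical smooth shrinkage (illustrative only, NOT the
    paper's function): a logistic gate sigma -> sigma *
    logistic((sigma - tau)/gamma) is smooth in sigma and tau, so tau
    could in principle be tuned by a SURE-type criterion."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * (s / (1.0 + np.exp(-(s - tau) / gamma)))) @ Vt

# Denoising: the smooth gate suppresses small (noise) singular values.
L = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))  # rank 2
Y = L + 0.1 * rng.standard_normal((20, 20))
Z = smooth_svt(Y, tau=1.5)

# Completion: keep observed entries, refill the rest with the
# thresholded low-rank estimate (soft-impute-style iteration).
mask = rng.random((20, 20)) < 0.7            # ~70% of entries observed
X = np.where(mask, L, 0.0)
for _ in range(300):
    X = np.where(mask, L, svt(X, 0.2))
err = np.linalg.norm((X - L)[~mask]) / np.linalg.norm(L[~mask])
```

On this synthetic rank-2 example, `Z` is closer to `L` than the noisy `Y` is, and the completion iteration recovers the unobserved entries with small relative error `err`.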

MSC:

65F55 Numerical methods for low-rank matrix approximation; matrix compression
15A83 Matrix completion problems

Software:

ADMiRA

Author's address: Geunseop Lee, Division of Global Business and Technology, Hankuk University of Foreign Studies, Yongin 17035, Korea. Email: geunseop.lee@hufs.ac.kr