
An efficient semi-proximal ADMM algorithm for low-rank and sparse regularized matrix minimization problems with real-world applications. (English) Zbl 1515.65161

Summary: With the development of technology and the arrival of the big-data era, large volumes of data with complex structure have been generated, which makes matrix minimization increasingly important. When analyzing such data, latent structures such as low-rank and sparse components often arise. To address this issue, in this paper we propose and study a novel low-rank and sparse regularized matrix minimization problem. An efficient semi-proximal alternating direction method of multipliers (sPADMM) is then developed from the dual perspective. In the theoretical analysis, it is proved that the sequence generated by the sPADMM converges to a global minimizer. Furthermore, non-asymptotic statistical properties are derived, which show that the gap between the corresponding estimators and the true values is bounded with high probability and that their consistency is guaranteed under rather weak regularity conditions. We also prove that the sequence generated by the proposed algorithm converges to the true value with high probability. Finally, extensive numerical experiments are conducted to verify the superiority of the proposed method in a wide range of applications such as signal processing, image reconstruction, and video denoising.
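
The summary does not spell out the model or the updates. As a point of reference only, the following minimal sketch implements the classical two-block ADMM for the closely related robust-PCA-type model min ||L||_* + lam*||S||_1 subject to Y = L + S, using singular value thresholding and entrywise soft thresholding as the proximal maps. This illustrates the generic low-rank-plus-sparse setting; it is not the paper's dual-based sPADMM, and the function names and parameter values (lam, rho, tol) are illustrative assumptions.

import numpy as np

def svt(M, tau):
    # Singular value thresholding: proximal operator of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    # Entrywise soft thresholding: proximal operator of tau * l1 norm.
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def lowrank_sparse_admm(Y, lam=0.1, rho=1.0, iters=300, tol=1e-7):
    # Classical two-block ADMM for  min ||L||_* + lam*||S||_1  s.t.  Y = L + S,
    # with multiplier Z and penalty rho (not the paper's semi-proximal dual variant).
    L = np.zeros_like(Y)
    S = np.zeros_like(Y)
    Z = np.zeros_like(Y)
    for _ in range(iters):
        L = svt(Y - S + Z / rho, 1.0 / rho)   # exact proximal L-update
        S = soft(Y - L + Z / rho, lam / rho)  # exact proximal S-update
        R = Y - L - S                         # primal residual
        Z = Z + rho * R                       # multiplier (dual ascent) step
        if np.linalg.norm(R) <= tol * max(1.0, np.linalg.norm(Y)):
            break
    return L, S

# Illustrative use: separate a synthetic rank-5 component from sparse corruptions.
rng = np.random.default_rng(0)
Y = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 60))
Y[rng.random(Y.shape) < 0.05] += 10.0
L_hat, S_hat = lowrank_sparse_admm(Y, lam=1.0 / np.sqrt(60))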

MSC:

65K05 Numerical mathematical programming methods
90C25 Convex programming
94A12 Signal theory (characterization, reconstruction, filtering, etc.)
Full Text: DOI
