
A two-metric variable scaled forward-backward algorithm for \(\ell_0\) optimization problem and its applications. (English) Zbl 07899573

Summary: In this paper, a two-metric variable scaled forward-backward algorithm is proposed to solve nonconvex, discontinuous composite optimization problems with \(\ell_0\) regularization. We first explore a class of locally Lipschitz, scaled folded concave functions to approximate the \(\ell_0\) norm. A convex half-absolute method is then proposed to accurately approximate these nonconvex nonsmooth functions. A double-loop iterative algorithm is developed to solve the resulting convex-relaxed composite optimization problems, in which the inner subproblem is solved by a carefully designed two-metric variable scaled forward-backward algorithm. Specifically, the variable metric in the forward step is derived from quasi-Newton methods, whereas the scaling matrix in the backward (implicit) step arises naturally from the inherent nonconvex properties of the constructed folded concave regularization function. The convergence of the proposed algorithm is analyzed, and the effectiveness of the approach is validated by extensive numerical experiments, including sparse signal recovery, breast cancer data classification, and image restoration.
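The forward-backward iteration the summary describes alternates an explicit gradient (forward) step with a proximal (backward) step on the regularizer. The sketch below is a minimal stand-in, not the paper's method: it runs a plain forward-backward iteration for the \(\ell_1\)-regularized least-squares surrogate \(\min_x \frac{1}{2}\|Ax-b\|^2 + \lambda\|x\|_1\), with a constant step \(1/L\) in place of the quasi-Newton variable metric and soft-thresholding in place of the folded-concave backward step. All function names and parameter choices are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def forward_backward(A, b, lam, n_iter=1000):
    """Plain forward-backward iteration for 0.5*||Ax - b||^2 + lam*||x||_1.

    The constant step 1/L (L = ||A||_2^2, the Lipschitz constant of the
    gradient) is a simple stand-in for the paper's quasi-Newton variable
    metric, and the l1 prox stands in for the folded-concave backward step.
    """
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                         # forward (gradient) step
        x = soft_threshold(x - step * grad, step * lam)  # backward (prox) step
    return x
```

On a small, well-conditioned sparse-recovery instance this iteration recovers the support of the ground-truth signal; the paper's two-metric scheme differs by scaling both steps with problem-adapted matrices.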

MSC:

65-XX Numerical analysis

References:

[1] Hastie, T., Tibshirani, R., Wainwright, M.: Statistical learning with sparsity (2015) · Zbl 1319.68003
[2] Candès, E.J., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52, 489-509 (2006). doi:10.1109/TIT.2005.862083 · Zbl 1231.94017
[3] Aharon, M., Elad, M., Bruckstein, A.: K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 54, 4311-4322 (2006). doi:10.1109/TSP.2006.881199 · Zbl 1375.94040
[4] Chouzenoux, E., Jezierska, A., Pesquet, J.C., Talbot, H.: A majorize-minimize subspace approach for \(\ell_2-\ell_0\) image regularization. SIAM J. Imag. Sci. 6, 563-591 (2013). doi:10.1137/11085997X · Zbl 1281.65030
[5] Louizos, C., Welling, M., Kingma, D.P.: Learning sparse neural networks through \(\ell_0\) regularization. 6th International Conference on Learning Representations, ICLR 2018 - Conference Track Proceedings (2018)
[6] Attouch, H., Bolte, J., Svaiter, B.F.: Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods. Math. Program. 137, 91-129 (2013). doi:10.1007/s10107-011-0484-9 · Zbl 1260.49048
[7] Candès, E.J.: Compressive sampling. In: International Congress of Mathematicians, ICM 2006, vol. 3, pp. 1433-1452 (2006) · Zbl 1130.94013
[8] Donoho, D.L.: Compressed sensing. IEEE Trans. Inf. Theory 52, 1289-1306 (2006). doi:10.1109/TIT.2006.871582 · Zbl 1288.94016
[9] Candès, E.J., Recht, B.: Exact matrix completion via convex optimization. Found. Comput. Math. 9, 111-119 (2009). doi:10.1007/s10208-009-9045-5
[10] Recht, B., Fazel, M., Parrilo, P.A.: Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev. 52, 471-501 (2010). doi:10.1137/070697835 · Zbl 1198.90321
[11] Elad, M.: Sparse and redundant representations: from theory to applications in signal and image processing (2010) · Zbl 1211.94001
[12] Bonettini, S., Benfenati, A., Ruggiero, V.: Scaling techniques for \(\epsilon \)-subgradient methods. SIAM J. Optim. 26 (2016). doi:10.1137/14097642X · Zbl 1347.65106
[13] Davis, D., Drusvyatskiy, D., MacPhee, K.J., Paquette, C.: Subgradient methods for sharp weakly convex functions. J. Optim. Theory Appl. 179 (2018). doi:10.1007/s10957-018-1372-8 · Zbl 1461.65140
[14] Renegar, J.: Efficient subgradient methods for general convex optimization. SIAM J. Optim. 26 (2016). doi:10.1137/15M1027371 · Zbl 1351.90129
[15] Tong, T., Ma, C., Chi, Y.: Low-rank matrix recovery with scaled subgradient methods: fast and robust convergence without the condition number. IEEE Trans. Signal Process. 69 (2021). doi:10.1109/TSP.2021.3071560 · Zbl 1543.65063
[16] Bonnans, J.F., Gilbert, J.C., Lemaréchal, C., Sagastizábal, C.A.: A family of variable metric proximal methods. Math. Program. 68 (1995). doi:10.1007/BF01585756 · Zbl 0832.90102
[17] Burke, J.V., Qian, M.: Variable metric proximal point algorithm for monotone operators. SIAM J. Control Optim. 37 (1998). doi:10.1137/s0363012992235547
[18] Chouzenoux, E., Pesquet, J.C., Repetti, A.: Variable metric forward-backward algorithm for minimizing the sum of a differentiable function and a convex function. J. Optim. Theory Appl. 162, 107-132 (2014). doi:10.1007/s10957-013-0465-7 · Zbl 1318.90058
[19] Chen, X., Fukushima, M.: Proximal quasi-Newton methods for nondifferentiable convex optimization. Math. Program., Ser. B 85, 313-334 (1999). doi:10.1007/s101070050059 · Zbl 0946.90111
[20] Fuentes, M., Malick, J., Lemaréchal, C.: Descentwise inexact proximal algorithms for smooth optimization. Comput. Optim. Appl. 53, 755-769 (2012). doi:10.1007/s10589-012-9461-3 · Zbl 1264.90160
[21] Fukushima, M., Qi, L.: A globally and superlinearly convergent algorithm for nonsmooth convex minimization. SIAM J. Optim. 6, 298-321 (1996). doi:10.1137/S1052623494278839
[22] Burke, J.V., Qian, M.: On the superlinear convergence of the variable metric proximal point algorithm using Broyden and BFGS matrix secant updating. Math. Program., Ser. B 88, 157-181 (2000). doi:10.1007/PL00011373 · Zbl 1028.90086
[23] Bonettini, S., Porta, F., Ruggiero, V., Zanni, L.: Variable metric techniques for forward-backward methods in imaging. J. Comput. Appl. Math. 385, 113192 (2021). doi:10.1016/j.cam.2020.113192 · Zbl 1471.65048
[24] Pock, T., Chambolle, A.: Diagonal preconditioning for first order primal-dual algorithms in convex optimization. Proceedings of the IEEE International Conference on Computer Vision, 1762-1769 (2011). doi:10.1109/ICCV.2011.6126441
[25] Becker, S., Fadili, M.J.: A quasi-Newton proximal splitting method. Adv. Neural Inf. Process. Syst. 4, 2618-2626 (2012)
[26] Jiang, F., Cai, X., Han, D.: The indefinite proximal point algorithms for maximal monotone operators. Optimization 70 (2021). doi:10.1080/02331934.2020.1751158 · Zbl 1476.90310
[27] He, H., Cai, X., Han, D.: A class of nonlinear proximal point algorithms for variational inequality problems. Int. J. Comput. Math. 92 (2015). doi:10.1080/00207160.2014.940333 · Zbl 1317.65145
[28] Beck, A.: First-order methods in optimization. Soc. Ind. Appl. Math. (2017) · Zbl 1384.65033
[29] Anh, P.N., Thang, T.V., Thach, H.T.C.: A subgradient proximal method for solving a class of monotone multivalued variational inequality problems. Numer. Algorithms, 1-22 (2021). doi:10.1007/s11075-021-01119-4
[30] Lee, J.D., Sun, Y., Saunders, M.A.: Proximal Newton-type methods for minimizing composite functions. SIAM J. Optim. 24(3), 1420-1443 (2014) · Zbl 1306.65213
[31] Clarke, F.H.: Optimization and nonsmooth analysis. Society for Industrial and Applied Mathematics (1990). doi:10.1137/1.9781611971309 · Zbl 0696.49002
[32] Bagirov, A., Karmitsa, N., Mäkelä, M.M.: Introduction to nonsmooth optimization: theory, practice and software. Springer (2014) · Zbl 1312.90053
[33] Rockafellar, R.T.: Conjugate duality and optimization (1974) · Zbl 0296.90036
[34] Ochs, P., Dosovitskiy, A., Brox, T., Pock, T.: On iteratively reweighted algorithms for nonsmooth nonconvex optimization in computer vision. SIAM J. Imag. Sci. 8(1), 331-372 (2015). doi:10.1137/140971518 · Zbl 1326.65078
[35] Ochs, P., Fadili, J., Brox, T.: Non-smooth non-convex Bregman minimization: unification and new algorithms. J. Optim. Theory Appl. 181, 244-278 (2019). doi:10.1007/s10957-018-01452-0 · Zbl 1416.49015
[36] Repetti, A., Wiaux, Y.: Variable metric forward-backward algorithm for composite minimization problems. SIAM J. Optim. 31(2), 1215-1241 (2021). doi:10.1137/19M1277552 · Zbl 1468.90098
[37] Geiping, J., Moeller, M.: Composite optimization by nonconvex majorization-minimization. SIAM J. Imag. Sci. 11(4), 2494-2528 (2018). doi:10.1137/18M1171989 · Zbl 1419.90089
[38] Malek-Mohammadi, M., Koochakzadeh, A., Babaie-Zadeh, M., Jansson, M., Rojas, C.R.: Successive concave sparsity approximation for compressed sensing. IEEE Trans. Signal Process. 64, 5657-5671 (2016). doi:10.1109/TSP.2016.2585096 · Zbl 1414.94397
[39] Davidon, W.C.: Variable metric method for minimization. SIAM J. Optim. 1 (1991). doi:10.1137/0801001 · Zbl 0752.90062
[40] Fletcher, R., Powell, M.J.D.: A rapidly convergent descent method for minimization. Comput. J. 6 (1963). doi:10.1093/comjnl/6.2.163 · Zbl 0132.11603
[41] Chong, E.K.P., Żak, S.H.: An introduction to optimization, 4th edn. (2013) · Zbl 1266.90001
[42] Bertsekas, D.P.: Nonlinear programming, 3rd edn. (2016) · Zbl 1360.90236
[43] Armijo, L.: Minimization of functions having Lipschitz continuous first partial derivatives. Pac. J. Math. 16(1), 1-3 (1966). doi:10.2140/pjm.1966.16.1 · Zbl 0202.46105
[44] Rockafellar, R.T., Wets, R.J.-B.: Variational analysis (2017)
[45] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imag. Sci. 2, 183-202 (2009). doi:10.1137/080716542 · Zbl 1175.94009
[46] Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 96, 1348-1360 (2001). doi:10.1198/016214501753382273 · Zbl 1073.62547
[47] Blumensath, T., Davies, M.E.: Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal. 27, 265-274 (2009). doi:10.1016/j.acha.2009.04.002 · Zbl 1174.94008
[48] Blumensath, T., Davies, M.E.: Iterative thresholding for sparse approximations. J. Fourier Anal. Appl. 14, 629-654 (2008). doi:10.1007/s00041-008-9035-z · Zbl 1175.94060
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data-conversion errors. In some cases, these data have been complemented/enhanced by data from zbMATH Open. This attempts to reflect the references listed in the original paper as accurately as possible, without claiming completeness or perfect matching.