
Low-CP-rank tensor completion via practical regularization. (English) Zbl 1495.65062

Summary: Dimension reduction comprises analytical methods for reconstructing high-order tensors whose intrinsic rank is much smaller than the dimension of the ambient measurement space. This is typically the case for most real-world datasets in signals, images and machine learning. CANDECOMP/PARAFAC (CP, also known as Canonical Polyadic) tensor completion is a widely used approach for finding a low-rank approximation of a given tensor. In the tensor model of [the last two authors, “Tensor completion via the CP decomposition”, in: Proceedings of the 2018 52nd Asilomar conference on signals, systems, and computers. Los Alamitos, CA: IEEE Computer Society. 845–849 (2018; doi:10.1109/ACSSC.2018.8645405)], a sparse regularized minimization problem with an \(\ell_1\)-norm penalty was formulated, which requires an appropriate choice of the regularization parameter; this choice is critical for the approximation accuracy. With the emergence of massive data, computing the regularization parameter via classical approaches [S. Gazzola and M. S. Landman, “Krylov methods for inverse problems: surveying classical, and introducing new, algorithmic approaches”, GAMM-Mitt. 43, No. 4, e202000017, 31 p. (2020; doi:10.1002/gamm.202000017)] such as weighted generalized cross validation (WGCV) [J. Chung et al., ETNA, Electron. Trans. Numer. Anal. 28, 149–167 (2008; Zbl 1171.65029)], the unbiased predictive risk estimator [C. M. Stein, Ann. Stat. 9, 1135–1151 (1981; Zbl 0476.62035); C. R. Vogel, Computational methods for inverse problems. Philadelphia, PA: Society for Industrial and Applied Mathematics (SIAM) (2002; Zbl 1008.65103)], and the discrepancy principle [V. A. Morozov, Sov. Math., Dokl. 7, 414–417 (1966; Zbl 0187.12203); translation from Dokl. Akad. Nauk SSSR 167, 510–512 (1966)] imposes an onerous computational burden. To improve the efficiency of choosing the regularization parameter and to leverage the accuracy of the CP tensor, we propose a new tensor completion algorithm that embeds the flexible hybrid method [S. Gazzola, “Flexible Krylov methods for \(\ell^p\) regularization”, talk given at the conference on modern challenges in imaging in the footsteps of Allan MacLeod Cormack. 05–09 Aug 2019, Tufts University, Medford, USA (2019)] into the framework of the CP tensor. The main benefits of this method are that the regularization is incorporated automatically and efficiently, and that the reconstruction accuracy and the algorithmic robustness are improved. Numerical examples from image reconstruction and model order reduction demonstrate the efficacy of the proposed algorithm.
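For illustration only, the following is a minimal Python/NumPy sketch of CP tensor completion by alternating regularized least squares on the observed entries. It is not the authors' algorithm: for simplicity it uses a fixed Tikhonov (\(\ell_2\)) parameter lam instead of the \(\ell_1\) penalty and the automatic parameter selection via the flexible hybrid Krylov method proposed in the paper, and all function and variable names are hypothetical.

import numpy as np

def cp_completion_als(T, mask, R=5, lam=1e-2, n_iter=50, seed=0):
    """Complete a partially observed 3-way tensor with a rank-R CP model."""
    rng = np.random.default_rng(seed)
    dims = T.shape
    A = [rng.standard_normal((d, R)) for d in dims]   # random initial factors
    for _ in range(n_iter):
        for mode in range(3):
            # Mode-n unfoldings of the data tensor and of the observation mask.
            Tm = np.moveaxis(T, mode, 0).reshape(dims[mode], -1)
            Mm = np.moveaxis(mask, mode, 0).reshape(dims[mode], -1)
            # Khatri-Rao product of the two remaining factor matrices.
            others = [A[m] for m in range(3) if m != mode]
            KR = np.einsum('ir,jr->ijr', others[0], others[1]).reshape(-1, R)
            # Ridge-regularized least squares, row by row, on observed entries
            # only (a fixed lam here; the paper selects it automatically).
            for i in range(dims[mode]):
                obs = Mm[i].astype(bool)
                Z, rhs = KR[obs], Tm[i, obs]
                G = Z.T @ Z + lam * np.eye(R)          # normal eqs. + Tikhonov
                A[mode][i] = np.linalg.solve(G, Z.T @ rhs)
    # Reconstruct the completed tensor from the CP factors.
    return np.einsum('ir,jr,kr->ijk', A[0], A[1], A[2])

# Toy usage: recover a synthetic rank-3 tensor from 40% observed entries.
rng = np.random.default_rng(1)
U, V, W = (rng.standard_normal((n, 3)) for n in (20, 25, 30))
T_true = np.einsum('ir,jr,kr->ijk', U, V, W)
mask = rng.random(T_true.shape) < 0.4
T_hat = cp_completion_als(T_true * mask, mask, R=3)
print("relative error:", np.linalg.norm(T_hat - T_true) / np.linalg.norm(T_true))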

MSC:

65F55 Numerical methods for low-rank matrix approximation; matrix compression
15A69 Multilinear algebra, tensor calculus
15A83 Matrix completion problems

Software:

Matlab

References:

[1] Acar, E., Çamtepe, S.A., Krishnamoorthy, M.S., Yener, B.: Modeling and multiway analysis of chatroom tensors, ISI’05. Springer, Berlin, Heidelberg (2005). doi:10.1007/11427995_21
[2] Acar, E.; Dunlavy, DM; Kolda, TG, A scalable optimization approach for fitting canonical tensor decompositions, J. Chemom., 25, 67-86 (2011) · doi:10.1002/cem.1335
[3] Acar, E.; Dunlavy, DM; Kolda, TG; Mørup, M., Scalable tensor factorizations for incomplete data, Chemom. Intell. Lab. Syst., 106, 41-56 (2011) · doi:10.1016/j.chemolab.2010.08.004
[4] Andersson, CA; Bro, R., Improving the speed of multi-way algorithms: Part I. Tucker3, Chemom. Intell. Lab. Syst., 42, 93-103 (1998) · doi:10.1016/S0169-7439(98)00010-0
[5] Beck, A.; Teboulle, M., A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci., 2, 183-202 (2009) · Zbl 1175.94009 · doi:10.1137/080716542
[6] Beckmann, C.; Smith, S., Tensorial extensions of independent component analysis for multisubject fMRI analysis, NeuroImage, 25, 294-311 (2005) · doi:10.1016/j.neuroimage.2004.10.043
[7] Beylkin, G.; Mohlenkamp, MJ, Algorithms for numerical analysis in high dimensions, SIAM J. Sci. Comput., 26, 2133-2159 (2005) · Zbl 1085.65045 · doi:10.1137/040604959
[8] Binev, P.; Cohen, A.; Dahmen, W.; DeVore, R.; Petrova, G.; Wojtaszczyk, P., Convergence rates for greedy algorithms in reduced basis methods, SIAM J. Math. Anal., 43, 1457-1472 (2011) · Zbl 1229.65193 · doi:10.1137/100795772
[9] Biswas, SK; Milanfar, P., Linear support tensor machine with LSK channels: pedestrian detection in thermal infrared images, IEEE Trans. Image Process., 26, 4229-4242 (2017) · Zbl 1409.94039 · doi:10.1109/TIP.2017.2705426
[10] Björck, Å., A bidiagonalization algorithm for solving large and sparse ill-posed systems of linear equations, BIT Numer. Math., 28, 659-670 (1988) · Zbl 0658.65041 · doi:10.1007/BF01941141
[11] Boelens, AM; Venturi, D.; Tartakovsky, DM, Tensor methods for the Boltzmann-BGK equation, J. Comput. Phys., 421, 109744 (2020) · Zbl 07508367 · doi:10.1016/j.jcp.2020.109744
[12] Bro, R., PARAFAC tutorial and applications, Chemom. Intell. Lab. Syst., 38, 149-171 (1997) · doi:10.1016/S0169-7439(97)00032-4
[13] Candes, E.J., Tao, T.: The power of convex relaxation: near-optimal matrix completion. arXiv:0903.1476 (2009) · Zbl 1366.15021
[14] Chadwick, P.: Principles of Continuum Mechanics by M. N. L. Narasimhan. John Wiley & Sons, 1993, 567 pp. ISBN 0-471-54000-5. J. Fluid Mech., 293, 404-404 (1995). doi:10.1017/S0022112095211765
[15] Chung, J., Nagy, J.G., O'Leary, D.P.: A weighted GCV method for Lanczos hybrid regularization. Electron. Trans. Numer. Anal. 28, 149-167 (2008) · Zbl 1171.65029
[16] de Silva, V., Lim, L.-H.: Tensor rank and the ill-posedness of the best low-rank approximation problem. arXiv Mathematics e-prints. arXiv:math/0607647 (2006)
[17] Erichson, NB; Manohar, K.; Brunton, SL; Kutz, JN, Randomized CP tensor decomposition, Mach. Learn. Sci. Technol., 1, 025012 (2020) · doi:10.1088/2632-2153/ab8240
[18] Figueiredo, MAT; Bioucas-Dias, JM; Nowak, RD, Majorization-minimization algorithms for wavelet-based image restoration, IEEE Trans. Image Process., 16, 2980-2991 (2007) · doi:10.1109/TIP.2007.909318
[19] Gazzola, S.: Flexible Krylov methods for \(\ell^p\) regularization · Zbl 1436.65043
[20] Gazzola, S.; Sabaté Landman, M., Krylov methods for inverse problems: surveying classical, and introducing new, algorithmic approaches, GAMM-Mitteilungen, 43, e202000017 (2020) · Zbl 1541.65023 · doi:10.1002/gamm.202000017
[21] Geng, L., Nie, X., Niu, S., Yin, Y., Lin, J.: Structural compact core tensor dictionary learning for multispectral remote sensing image deblurring. In: 2018 25th IEEE International Conference on Image Processing (ICIP), pp. 2865-2869 (2018). doi:10.1109/ICIP.2018.8451531
[22] Ghassemi, M., Shakeri, Z., Sarwate, A.D., Bajwa, W.U.: STARK: Structured Dictionary Learning Through Rank-one Tensor Recovery. arXiv:1711.04887 (2017)
[23] Giryes, R.; Elad, M.; Eldar, YC, The projected gsure for automatic parameter tuning in iterative shrinkage methods, Appl. Comput. Harmon. Anal., 30, 407-422 (2011) · Zbl 1210.94015 · doi:10.1016/j.acha.2010.11.005
[24] Gorodnitsky, I., Rao, B.: A new iterative weighted norm minimization algorithm and its applications. In: [1992] IEEE Sixth SP Workshop on Statistical Signal and Array Processing, pp. 412-415. IEEE (1992)
[25] Hansen, P.C.: Discrete Inverse Problems: Insight and Algorithms. SIAM, Philadelphia (2010) · Zbl 1197.65054
[26] Hesthaven, JS; Gottlieb, S.; Gottlieb, D., Spectral Methods for Time-Dependent Problems (2007), Cambridge: Cambridge University Press, Cambridge · Zbl 1111.65093 · doi:10.1017/CBO9780511618352
[27] Hou, M.: Tensor-based regression models and applications (2017)
[28] Knuth, DE, The Art of Computer Programming, Seminumerical Algorithms (1997), New York: Addison-Wesley Longman Publishing Co., Inc, New York · Zbl 0191.18001
[29] Kolda, T.G., Bader, B.W., Kenny, J.P.: Higher-order web link analysis using multilinear algebra. In: ICDM 2005: Proceedings of the 5th IEEE International Conference on Data Mining, pp. 242-249 (2005). doi:10.1109/ICDM.2005.77
[30] Kruskal, J., Three-way arrays: rank and uniqueness of trilinear decompositions, with applications to arithmetic complexity and statistics, Linear Algebra Appl., 18, 95-138 (1977) · Zbl 0364.15021 · doi:10.1016/0024-3795(77)90069-6
[31] Li, N., Kindermann, S., Navasca, C.: Some convergence results on the regularized alternating least-squares method for tensor decomposition. arXiv:1109.3831 (2011) · Zbl 1261.65041
[32] Li, N.; Kindermann, S.; Navasca, C., Some convergence results on the regularized alternating least-squares method for tensor decomposition, Linear Algebra Appl., 438, 796-812 (2013) · Zbl 1261.65041 · doi:10.1016/j.laa.2011.12.002
[33] Liu, J.; Musialski, P.; Wonka, P.; Ye, J., Tensor completion for estimating missing values in visual data, IEEE Trans. Pattern Anal. Mach. Intell., 35, 208-220 (2013) · doi:10.1109/TPAMI.2012.39
[34] Lorentz, GG; Golitschek, MV; Makovoz, Y., Constructive Approximation: Advanced Problems (1996), Berlin: Springer, Berlin · Zbl 0910.41001 · doi:10.1007/978-3-642-60932-9
[35] Makantasis, K.; Doulamis, AD; Doulamis, ND; Nikitakis, A., Tensor-based classification models for hyperspectral data analysis, IEEE Trans. Geosci. Remote Sens., 56, 6884-6898 (2018) · doi:10.1109/TGRS.2018.2845450
[36] Morozov, V.A.: On the solution of functional equations by the method of regularization. In: Doklady Akademii Nauk, vol. 167, pp. 510-512. Russian Academy of Sciences (1966) · Zbl 0187.12203
[37] Navasca, C., De Lathauwer, L., Kindermann, S.: Swamp reducing technique for tensor decomposition. In: 2008 16th European Signal Processing Conference, pp. 1-5. IEEE (2008)
[38] Nouy, A.: Low-rank tensor methods for model order reduction. arXiv:1511.01555 (2015)
[39] Ohlberger, M.; Smetana, K., Approximation of skewed interfaces with tensor-based model reduction procedures: application to the reduced basis hierarchical model reduction approach, J. Comput. Phys., 321, 1185-1205 (2016) · Zbl 1349.74007 · doi:10.1016/j.jcp.2016.06.021
[40] O’Leary, D.P., Simmons, J.A.: A bidiagonalization-regularization procedure for large scale discretizations of ill-posed problems. SIAM J. Sci. Stat. Comput. 2, 474-489 (1981) · Zbl 0469.65089
[41] Parikh, N.; Boyd, S., Proximal algorithms, Found. Trends Optim., 1, 123-231 (2014)
[42] Pinkus, A., N-widths in Approximation Theory (2012), Berlin: Springer, Berlin · Zbl 0551.41001
[43] Rodríguez, P., Wohlberg, B.: An efficient algorithm for sparse representations with \(\ell^p\) data fidelity term. In: Proceedings of 4th IEEE Andean Technical Conference (ANDESCON) (2008)
[44] Rozza, G.; Huynh, DBP; Patera, AT, Reduced basis approximation and a posteriori error estimation for affinely parametrized elliptic coercive partial differential equations, Archiv. Comput. Methods Eng., 15, 1 (2007) · doi:10.1007/BF03024948
[45] Sanogo, F., Navasca, C.: Tensor completion via the CP decomposition. In: 2018 52nd Asilomar Conference on Signals, Systems, and Computers, pp. 845-849 (2018). doi:10.1109/ACSSC.2018.8645405
[46] Selesnick, I., Sparse regularization via convex analysis, IEEE Trans. Signal Process., 65, 4481-4494 (2017) · Zbl 1414.94545 · doi:10.1109/TSP.2017.2711501
[47] Smilde, A., Bro, R., Geladi, P.: Multi-way Analysis with Applications in the Chemical Sciences. Wiley, Chichester (2004)
[48] Stein, CM, Estimation of the mean of a multivariate normal distribution, Ann. Stat., 9, 1135-1151 (1981) · Zbl 0476.62035 · doi:10.1214/aos/1176345632
[49] Trefethen, L.N.: Spectral Methods in MATLAB. SIAM, Philadelphia (2000) · Zbl 0953.68643
[50] Uschmajew, A., Local convergence of the alternating least squares algorithm for canonical tensor approximation, SIAM J. Matrix Anal. Appl., 33, 639-652 (2012) · Zbl 1252.65085 · doi:10.1137/110843587
[51] Vogel, C.R.: Computational Methods for Inverse Problems. SIAM, Philadelphia (2002) · Zbl 1008.65103
[52] Walczak, B.; Massart, D., Dealing with missing data: Part I, Chemom. Intell. Lab. Syst., 58, 15-27 (2001) · doi:10.1016/S0169-7439(01)00131-9
[53] Wang, X., Navasca, C.: Adaptive low rank approximation of tensors. In: Proceedings of the IEEE International Conference on Computer Vision Workshop (ICCVW, Santiago, Chile, 2015) · Zbl 1499.65247
[54] Wang, X.; Navasca, C., Low-rank approximation of tensors via sparse optimization, Numer. Linear Algebra Appl., 25, e2183 (2018) · Zbl 1499.65247 · doi:10.1002/nla.2183
[55] Xu, X.; Wu, Q.; Wang, S.; Liu, J.; Sun, J.; Cichocki, A., Whole brain fMRI pattern analysis based on tensor neural network, IEEE Access, 6, 29297-29305 (2018) · doi:10.1109/ACCESS.2018.2815770
[56] Zhang, J.: Design and Application of Tensor Decompositions to Problems in Model and Image Compression and Analysis, PhD thesis, Tufts University (2017)
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. In some cases that data have been complemented/enhanced by data from zbMATH Open. This attempts to reflect the references listed in the original paper as accurately as possible without claiming completeness or a perfect matching.