
An optimal statistical and computational framework for generalized tensor estimation. (English) Zbl 1486.62161

Summary: This paper describes a flexible framework for generalized low-rank tensor estimation problems that includes many important instances arising from applications in computational imaging, genomics, and network analysis. The proposed estimator is obtained by finding a low-rank tensor fit to the data under generalized parametric models. To overcome the difficulty of nonconvexity in these problems, we introduce a unified approach of projected gradient descent that adapts to the underlying low-rank structure. Under mild conditions on the loss function, we establish both an upper bound on the statistical error and a linear rate of computational convergence through a general deterministic analysis. We then consider a suite of generalized tensor estimation problems, including sub-Gaussian tensor PCA, tensor regression, and Poisson and binomial tensor PCA, and prove that the proposed algorithm achieves the minimax optimal rate of convergence in estimation error. Finally, we demonstrate the superiority of the proposed framework via extensive experiments on both simulated and real data.
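To illustrate the kind of scheme summarized above, here is a minimal sketch of projected gradient descent for the simplest instance (tensor PCA under a quadratic loss), using a truncated higher-order SVD as the low-rank projection. This is an illustrative reconstruction, not the paper's exact algorithm: the function names, step size, and iteration count are assumptions, and the paper's method additionally handles general loss functions and provides spectral initialization guarantees.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization of a 3-way tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_project(T, ranks):
    """Approximate projection onto tensors of multilinear rank <= ranks
    via a truncated higher-order SVD (De Lathauwer et al., 2000)."""
    # Leading singular subspaces of each matricization.
    Us = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
          for m, r in enumerate(ranks)]
    # Compress to the core, then expand back.
    core = T
    for m, U in enumerate(Us):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, m, 0), axes=1), 0, m)
    out = core
    for m, U in enumerate(Us):
        out = np.moveaxis(np.tensordot(U, np.moveaxis(out, m, 0), axes=1), 0, m)
    return out

def pgd_tensor_pca(Y, ranks, eta=0.5, iters=30):
    """Projected gradient descent for min 0.5*||Y - X||_F^2 over low-rank X."""
    X = hosvd_project(Y, ranks)          # spectral-style initialization
    for _ in range(iters):
        grad = X - Y                     # gradient of the quadratic loss
        X = hosvd_project(X - eta * grad, ranks)
    return X
```

For a general loss (e.g., Poisson or binomial log-likelihood, as in the paper), only the `grad` line changes; the projection step stays the same, which is the sense in which the framework is unified.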

MSC:

62H12 Estimation in multivariate analysis
15A69 Multilinear algebra, tensor calculus
62H25 Factor analysis and principal components; correspondence analysis
62C20 Minimax procedures in statistical decision theory
90C26 Nonconvex programming, global optimization
