
On the identifiability of overcomplete dictionaries via the minimisation principle underlying K-SVD. (English) Zbl 1297.94018

Summary: This article gives theoretical insights into the performance of K-SVD, a dictionary learning algorithm that has gained significant popularity in practical applications. The particular question studied here is when a dictionary \(\varPhi\in\mathbb R^{d\times K}\) can be recovered as a local minimum of the minimisation criterion underlying K-SVD from a set of \(N\) training signals \(y_n=\varPhi x_n\). A theoretical analysis of the problem leads to two types of identifiability results, assuming the training signals are generated from a tight frame with coefficients drawn from a random symmetric distribution. First, asymptotic results show that, in expectation, the generating dictionary can be recovered exactly as a local minimum of the K-SVD criterion if the coefficient distribution exhibits sufficient decay. Second, building on the asymptotic results, it is demonstrated that, given a finite number of training samples \(N\) such that \(N/\log N=O(K^3 d)\), there is, except with probability \(O(N^{-Kd})\), a local minimum of the K-SVD criterion within distance \(O(K N^{-1/4})\) of the generating dictionary.
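The setup described above can be illustrated with a small numerical sketch. The snippet below generates training signals \(y_n=\varPhi x_n\) from a dictionary with sparse coefficients of random signs and decaying magnitudes, and compares the residual of an S-sparse approximation step (here thresholding followed by least-squares projection, a simple proxy for the sparse-coding stage inside K-SVD) at the generating dictionary versus at a perturbed one. All parameter choices (dimensions, decay profile, a random dictionary instead of a tight frame) are illustrative assumptions, not the paper's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, N, S = 16, 32, 500, 4  # toy dimensions and sparsity level (assumed)

# Generating dictionary: random unit-norm atoms. The paper assumes a tight
# frame; a normalized Gaussian dictionary is used here purely for illustration.
Phi = rng.standard_normal((d, K))
Phi /= np.linalg.norm(Phi, axis=0)

# Sparse coefficients with random signs and geometrically decaying magnitudes,
# mimicking a symmetric coefficient distribution with sufficient decay.
X = np.zeros((K, N))
decay = 0.5 ** np.arange(S)
for n in range(N):
    support = rng.choice(K, size=S, replace=False)
    X[support, n] = rng.choice([-1.0, 1.0], size=S) * decay

Y = Phi @ X  # training signals y_n = Phi x_n

def sparse_residual(D, Y, S):
    """Total squared residual of an S-term approximation in dictionary D:
    pick the S atoms most correlated with each signal, then project."""
    C = np.abs(D.T @ Y)
    res = 0.0
    for n in range(Y.shape[1]):
        sel = np.argpartition(C[:, n], -S)[-S:]
        coef, *_ = np.linalg.lstsq(D[:, sel], Y[:, n], rcond=None)
        res += np.linalg.norm(Y[:, n] - D[:, sel] @ coef) ** 2
    return res

obj_true = sparse_residual(Phi, Y, S)

# A perturbed, renormalized dictionary should fit the signals worse,
# consistent with the generating dictionary being a local minimum.
Pert = Phi + 0.3 * rng.standard_normal((d, K))
Pert /= np.linalg.norm(Pert, axis=0)
obj_pert = sparse_residual(Pert, Y, S)

print(obj_true < obj_pert)
```

Since the signals are exactly \(S\)-sparse in \(\varPhi\), the residual at the generating dictionary is (near) zero whenever thresholding finds the correct support, while any sizeable perturbation incurs a strictly positive fitting error.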

MSC:

94A20 Sampling theory in information and communication theory
15A23 Factorization of matrices
68T05 Learning and adaptive systems in artificial intelligence
94A29 Source coding

Software:

CoSaMP; PDCO

References:

[1] Agarwal, A.; Anandkumar, A.; Jain, P.; Netrapalli, P.; Tandon, R., Learning sparsely used overcomplete dictionaries via alternating minimization (2013)
[2] Agarwal, A.; Anandkumar, A.; Netrapalli, P., Exact recovery of sparsely used overcomplete dictionaries (2013)
[3] Aharon, M.; Elad, M.; Bruckstein, A. M., K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation, IEEE Trans. Signal Process., 54, 11, 4311-4322 (November 2006) · Zbl 1375.94040
[4] Aharon, M.; Elad, M.; Bruckstein, A. M., On the uniqueness of overcomplete dictionaries, and a practical way to retrieve them, Linear Algebra Appl., 416, 48-67 (July 2006) · Zbl 1096.68042
[5] Arora, S.; Ge, R.; Moitra, A., New algorithms for learning incoherent and overcomplete dictionaries (2013)
[6] Blumensath, T.; Davies, M. E., Iterative thresholding for sparse approximations, J. Fourier Anal. Appl., 14, 5-6, 629-654 (2008) · Zbl 1175.94060
[7] Candès, E.; Demanet, L.; Donoho, D. L.; Ying, L., Fast discrete curvelet transforms, Multiscale Model. Simul., 5, 3, 861-899 (2006) · Zbl 1122.65134
[8] Candès, E.; Romberg, J.; Tao, T., Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inform. Theory, 52, 2, 489-509 (2006) · Zbl 1231.94017
[9] Chen, S. S.; Donoho, D. L.; Saunders, M. A., Atomic decomposition by basis pursuit, SIAM J. Sci. Comput. (1998)
[10] Christensen, O., An Introduction to Frames and Riesz Bases (2003), Birkhäuser · Zbl 1017.42022
[11] Daubechies, I., Ten Lectures on Wavelets, CBMS-NSF Lecture Notes (1992), SIAM · Zbl 0776.42018
[12] Daubechies, I.; DeVore, R. A.; Fornasier, M.; Güntürk, S., Iteratively reweighted least squares minimization for sparse recovery, Comm. Pure Appl. Math., 63, 1, 1-38 (January 2010) · Zbl 1202.65046
[13] Davis, G.; Mallat, S.; Avellaneda, M., Adaptive greedy approximations, (Constr. Approx., vol. 13 (1997), Springer-Verlag New York Inc.), 57-98 · Zbl 0885.41006
[14] Donoho, D. L., Compressed sensing, IEEE Trans. Inform. Theory, 52, 4, 1289-1306 (2006) · Zbl 1288.94016
[15] Donoho, D. L.; Elad, M.; Temlyakov, V. N., Stable recovery of sparse overcomplete representations in the presence of noise, IEEE Trans. Inform. Theory, 52, 1, 6-18 (January 2006) · Zbl 1288.94017
[16] Field, D. J.; Olshausen, B. A., Emergence of simple-cell receptive field properties by learning a sparse code for natural images, Nature, 381, 607-609 (1996)
[17] Geng, Q.; Wang, H.; Wright, J., On the local correctness of \(\ell^1\)-minimization for dictionary learning (2011)
[18] Georgiev, P.; Theis, F. J.; Cichocki, A., Sparse component analysis and blind source separation of underdetermined mixtures, IEEE Trans. Neural Netw., 16, 4, 992-996 (2005)
[19] Gribonval, R.; Jenatton, R.; Bach, F.; Kleinsteuber, M.; Seibert, M., Sample complexity of dictionary learning and other matrix factorizations (2013)
[20] Gribonval, R.; Rauhut, H.; Schnass, K.; Vandergheynst, P., Atoms of all channels, unite! Average case analysis of multi-channel sparse recovery using greedy algorithms, J. Fourier Anal. Appl., 14, 5, 655-687 (2008) · Zbl 1181.94045
[21] Gribonval, R.; Schnass, K., Dictionary identifiability - sparse matrix-factorisation via \(l_1\)-minimisation, IEEE Trans. Inform. Theory, 56, 7, 3523-3539 (July 2010) · Zbl 1366.62104
[23] Kreutz-Delgado, K.; Murray, J. F.; Rao, B. D.; Engan, K.; Lee, T.; Sejnowski, T. J., Dictionary learning algorithms for sparse representation, Neural Comput., 15, 2, 349-396 (2003) · Zbl 1047.68101
[24] Kreutz-Delgado, K.; Rao, B. D., FOCUSS-based dictionary learning algorithms, (Proc. SPIE, vol. 4119 (2000))
[25] Ledoux, M.; Talagrand, M., Probability in Banach Spaces. Isoperimetry and Processes (1991), Springer-Verlag: Springer-Verlag Berlin, Heidelberg, New York · Zbl 0748.60004
[26] Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G., Online learning for matrix factorization and sparse coding, J. Mach. Learn. Res., 11, 19-60 (2010) · Zbl 1242.62087
[27] Maurer, A.; Pontil, M., K-dimensional coding schemes in Hilbert spaces, IEEE Trans. Inform. Theory, 56, 11, 5839-5846 (2010) · Zbl 1366.94305
[28] Mehta, N. A.; Gray, A. G., On the sample complexity of predictive sparse coding (2012)
[29] Needell, D.; Tropp, J. A., CoSaMP: Iterative signal recovery from incomplete and inaccurate samples, Appl. Comput. Harmon. Anal., 26, 3, 301-321 (2009) · Zbl 1163.94003
[30] Plumbley, M. D., Dictionary learning for \(\ell_1\)-exact sparse coding, (Davies, M. E.; James, C. J.; Abdallah, S. A., International Conference on Independent Component Analysis and Signal Separation, vol. 4666 (2007), Springer), 406-413 · Zbl 1173.94382
[31] DSP Rice University, Compressive sensing resources
[32] Rubinstein, R.; Bruckstein, A.; Elad, M., Dictionaries for sparse representation modeling, Proc. IEEE, 98, 6, 1045-1057 (2010)
[33] Schnass, K., Dictionary identification results for K-SVD with sparsity parameter 1, (SampTA13 (2013))
[34] Schnass, K., On the identifiability of overcomplete dictionaries via the minimisation principle underlying K-SVD (2013) · Zbl 1297.94018
[35] Schnass, K.; Vandergheynst, P., Average performance analysis for thresholding, IEEE Signal Process. Lett., 14, 11, 828-831 (2007)
[36] Skretting, K.; Engan, K., Recursive least squares dictionary learning algorithm, IEEE Trans. Signal Process., 58, 4, 2121-2130 (April 2010) · Zbl 1392.68365
[37] Spielman, D.; Wang, H.; Wright, J., Exact recovery of sparsely-used dictionaries, (Conference on Learning Theory (2012))
[38] Tropp, J. A., Greed is good: Algorithmic results for sparse approximation, IEEE Trans. Inform. Theory, 50, 10, 2231-2242 (October 2004) · Zbl 1288.94019
[39] Tropp, J. A., On the conditioning of random subdictionaries, Appl. Comput. Harmon. Anal., 25, 1-24 (2008) · Zbl 1143.15026
[40] Vainsencher, D.; Mannor, S.; Bruckstein, A. M., The sample complexity of dictionary learning, J. Mach. Learn. Res., 12, 3259-3281 (2011) · Zbl 1280.68210
[41] Vershynin, R., Introduction to the non-asymptotic analysis of random matrices, (Eldar, Y.; Kutyniok, G., Compressed Sensing, Theory and Applications (2012), Cambridge University Press), Chapter 5
[42] Yaghoobi, M.; Blumensath, T.; Davies, M. E., Dictionary learning for sparse approximations with the majorization method, IEEE Trans. Signal Process., 57, 6, 2178-2191 (June 2009) · Zbl 1391.94456
[43] Zibulevsky, M.; Pearlmutter, B. A., Blind source separation by sparse decomposition in a signal dictionary, Neural Comput., 13, 4, 863-882 (2001) · Zbl 0985.94004