
Alpha-beta log-determinant divergences between positive definite trace class operators. (English) Zbl 1519.47040

Summary: This work presents a parametrized family of divergences, the Alpha-Beta Log-Determinant (Log-Det) divergences, between positive definite unitized trace class operators on a Hilbert space. This generalizes the Alpha-Beta Log-Det divergences between symmetric positive definite matrices to the infinite-dimensional setting. The family of Alpha-Beta Log-Det divergences is highly general and contains many divergences as special cases, including the recently formulated infinite-dimensional affine-invariant Riemannian distance and the infinite-dimensional Alpha Log-Det divergences between positive definite unitized trace class operators. In particular, it includes a parametrized family of metrics between positive definite trace class operators, with the affine-invariant Riemannian distance and the square root of the symmetric Stein divergence as special cases. For the Alpha-Beta Log-Det divergences between covariance operators on a reproducing kernel Hilbert space (RKHS), we obtain closed-form formulas via the corresponding Gram matrices.
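
For orientation, the following is a minimal finite-dimensional sketch of the special case the paper generalizes, assuming the matrix form of the Alpha-Beta Log-Det divergence as formulated by Cichocki, Cruces and Amari: for symmetric positive definite matrices \(P, Q\) and \(\alpha, \beta > 0\), \(D^{(\alpha,\beta)}(P,Q) = \frac{1}{\alpha\beta}\,\log\det\!\big[\frac{\alpha (PQ^{-1})^{\beta} + \beta (PQ^{-1})^{-\alpha}}{\alpha+\beta}\big]\). The function name, the restriction to \(\alpha, \beta > 0\), and the eigenvalue-based evaluation are illustrative choices, not taken from the paper.

```python
# Sketch: finite-dimensional Alpha-Beta Log-Det divergence between SPD matrices,
# assuming the matrix formula of Cichocki, Cruces and Amari (2015).
# Names and the restriction alpha, beta > 0 are illustrative, not from the paper.

import numpy as np
from scipy.linalg import eigh


def alpha_beta_logdet(P, Q, alpha, beta):
    """D^{(alpha,beta)}(P, Q) for SPD matrices P, Q with alpha, beta > 0.

    Uses the generalized eigenvalues lambda_i of the pencil (P, Q),
    i.e. the eigenvalues of Q^{-1} P, so that
        D = 1/(alpha*beta) * sum_i log( (alpha*lam_i**beta + beta*lam_i**(-alpha))
                                        / (alpha + beta) ).
    """
    if alpha <= 0 or beta <= 0:
        raise ValueError("this sketch assumes alpha > 0 and beta > 0")
    lam = eigh(P, Q, eigvals_only=True)        # eigenvalues of Q^{-1} P, all > 0
    terms = (alpha * lam**beta + beta * lam**(-alpha)) / (alpha + beta)
    return np.sum(np.log(terms)) / (alpha * beta)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    B = rng.standard_normal((5, 5))
    P = A @ A.T + np.eye(5)                    # random SPD matrices
    Q = B @ B.T + np.eye(5)

    # Basic sanity checks: D(P, P) = 0, and for alpha = beta the divergence
    # is symmetric in its two arguments (the symmetric Stein-type case).
    print(alpha_beta_logdet(P, P, 0.5, 0.5))   # ~ 0
    print(alpha_beta_logdet(P, Q, 0.5, 0.5),
          alpha_beta_logdet(Q, P, 0.5, 0.5))   # equal
```

The eigenvalue-based evaluation is only meant to mirror, in finite dimensions, how the paper's closed-form Gram-matrix formulas reduce the computation between covariance operators on an RKHS to finite matrices.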

MSC:

47B65 Positive linear operators and order-bounded operators
47L07 Convex sets and cones of operators
46E22 Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces)
