
Feature extraction using constrained maximum variance mapping. (English) Zbl 1167.68437

Summary: An efficient feature extraction method, Constrained Maximum Variance Mapping (CMVM), is developed. The algorithm can be viewed as a linear approximation of a multi-manifold learning approach that takes both local geometry and manifold labels into account. Like the original manifold learning approaches, CMVM preserves locality; in addition, it globally maximizes the distances between different manifolds. After the local scatter has been characterized, the method seeks a linear transformation that maximizes the dissimilarity between all manifolds under a locality-preserving constraint. Compared with most up-to-date manifold learning methods, this contributes to pattern classification in two ways: the local structure within each manifold is kept, and the discriminant information between manifolds is exploited. Finally, the FERET face database, the CMU PIE face database, and the USPS handwriting data are used to examine the effectiveness and efficiency of the proposed method. Experimental results show that the proposed approach is superior to other feature extraction methods such as linear discriminant analysis, locality preserving projection, unsupervised discriminant projection, and maximum variance projection.
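The optimization the summary describes can be sketched as a generalized eigenvalue problem: maximize the scatter between points of different manifolds while constraining a locality-preserving (graph Laplacian) term. The following is a minimal sketch under those assumptions, not the paper's implementation; the function name `cmvm_sketch`, the kNN construction, and the regularization constant are all illustrative choices.

```python
import numpy as np

def cmvm_sketch(X, y, n_neighbors=5, n_components=2):
    """Sketch of a CMVM-style linear embedding.

    X: (n_samples, n_features) data matrix; y: manifold (class) labels.
    Maximizes scatter between points with different labels subject to a
    locality-preserving constraint built from a kNN graph Laplacian.
    """
    n = X.shape[0]
    # Pairwise squared distances and symmetric kNN adjacency (locality term)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d2[i])[1:n_neighbors + 1]  # skip self at position 0
        W[i, idx] = W[idx, i] = 1.0
    L = np.diag(W.sum(1)) - W                # graph Laplacian: preserves locality
    # Between-manifold weights: 1 whenever the labels differ
    B = (y[:, None] != y[None, :]).astype(float)
    Lb = np.diag(B.sum(1)) - B               # "dissimilarity" Laplacian
    # Maximize p' X' Lb X p subject to p' X' L X p = const
    A = X.T @ Lb @ X
    C = X.T @ L @ X + 1e-6 * np.eye(X.shape[1])  # regularize for invertibility
    vals, vecs = np.linalg.eig(np.linalg.solve(C, A))
    order = np.argsort(-vals.real)
    P = vecs[:, order[:n_components]].real   # top projection directions
    return X @ P                             # low-dimensional embedding
```

In this reading, the locality constraint plays the role of the within-manifold scatter in LPP, while the label-based Laplacian supplies the discriminant information between manifolds; the paper's exact graph weights and constraint normalization may differ.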

MSC:

68T10 Pattern recognition, speech recognition
68W05 Nonnumerical algorithms

Software:

CMU PIE; FERET
Full Text: DOI

References:

[1] Tenenbaum, J. B.; de Silva, V.; Langford, J. C., A global geometric framework for nonlinear dimensionality reduction, Science, 290, 2319-2323 (2000)
[2] Roweis, S. T.; Saul, L. K., Nonlinear dimensionality reduction by locally linear embedding, Science, 290, 2323-2326 (2000)
[3] Saul, L. K.; Roweis, S. T., Think globally, fit locally: unsupervised learning of low dimensional manifolds, J. Mach. Learn. Res., 4, 119-155 (2003) · Zbl 1093.68089
[4] Belkin, M.; Niyogi, P., Laplacian eigenmaps and spectral techniques for embedding and clustering, (Dietterich, T. G.; Becker, S.; Ghahramani, Z., Advances in Neural Information Processing Systems, vol. 14 (2002), MIT Press: MIT Press Cambridge, MA, USA), 585-591
[5] Belkin, M.; Niyogi, P., Laplacian eigenmaps for dimensionality reduction and data representation, Neural Comput., 15, 6, 1373-1396 (2003) · Zbl 1085.68119
[6] Donoho, D. L.; Grimes, C. E., Hessian eigenmaps: locally linear embedding techniques for high-dimensional data, Proc. Natl. Acad. Sci. USA, 100, 5591-5596 (2003) · Zbl 1130.62337
[7] Weinberger, K. Q.; Saul, L. K., Unsupervised learning of image manifolds by semidefinite programming, Int. J. Comput. Vision, 70, 1, 77-90 (2006)
[8] Zhang, Z.; Zha, H., Principal manifolds and nonlinear dimensionality reduction by local tangent space alignment, SIAM J. Sci. Comput., 26, 1, 313-338 (2004) · Zbl 1077.65042
[9] N.D. Lawrence, J. Quiñonero-Candela, Local distance preservation in the GP-LVM through back constraints, in: Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, 2006.
[10] Jolliffe, I. T., Principal Component Analysis (1986), Springer: Springer New York · Zbl 1011.62064
[11] Belhumeur, P. N.; Hespanha, J. P.; Kriegman, D. J., Eigenfaces vs. fisherfaces: recognition using class specific linear projection, IEEE Trans. Pattern Anal. Mach. Intell., 19, 7, 711-720 (1997)
[12] Martínez, A. M.; Kak, A. C., PCA versus LDA, IEEE Trans. Pattern Anal. Mach. Intell., 23, 2, 228-233 (2001)
[13] Schölkopf, B.; Smola, A.; Müller, K.-R., Nonlinear component analysis as a kernel eigenvalue problem, Neural Comput., 10, 5, 1299-1319 (1998)
[14] Mika, S.; Rätsch, G.; Weston, J.; Schölkopf, B.; Smola, A.; Müller, K.-R., Constructing descriptive and discriminative nonlinear features: Rayleigh coefficients in kernel feature spaces, IEEE Trans. Pattern Anal. Mach. Intell., 25, 5, 623-628 (2003)
[15] Zheng, W.; Zhao, L.; Zou, C., Foley-Sammon optimal discriminant vectors using kernel approach, IEEE Trans. Neural Networks, 16, 1, 1-9 (2005)
[16] Y. Bengio, J.-F. Paiement, P. Vincent, Out-of-sample extensions for LLE, Isomap, MDS, eigenmaps, and spectral clustering, Technical Report 1238, Université de Montréal, 2003.
[17] M. Brand, Charting a manifold, in: Proceedings of the 15th Conference on Neural Information Processing Systems, 2002.
[18] Yan, S.; Xu, D.; Zhang, B.; Zhang, H.-J., Graph embedding: a general framework for dimensionality reduction, IEEE Trans. Pattern Anal. Mach. Intell., 29, 1, 40-51 (2007)
[19] Yang, J.; Zhang, D.; Yang, J. Y.; Niu, B., Globally maximizing, locally minimizing: unsupervised discriminant projection with application to face and palm biometrics, IEEE Trans. Pattern Anal. Mach. Intell., 29, 4, 650-664 (2007)
[20] O. Kouropteva, O. Okun, M. Pietikäinen, Supervised locally linear embedding algorithm for pattern recognition, Lecture Notes in Computer Science, vol. 2652, 2003, pp. 386-394.
[21] D. de Ridder, M. Loog, M. Reinders, Local Fisher embedding, in: Proceedings of the 17th International Conference on Pattern Recognition, 2004.
[22] Vlassis, N.; Motomura, Y.; Krose, B., Supervised dimension reduction of intrinsically low dimensional data, Neural Comput., 14, 1, 191-215 (2002) · Zbl 1009.62001
[23] Geng, X.; Zhang, D. C.; Zhou, Z. H., Supervised nonlinear dimensionality reduction for visualization and classification, IEEE Trans. Syst. Man Cybern. B, 35, 6, 1098-1107 (2005)
[24] Zhao, H. T.; Sun, S. Y.; Jing, Z. L.; Yang, J. Y., Local structure based supervised feature extraction, Pattern Recognition, 39, 1546-1550 (2006) · Zbl 1095.68679
[25] Howland, P.; Wang, J.; Park, H., Solving the small sample size problem in face recognition using generalized discriminant analysis, Pattern Recognition, 39, 277-287 (2006)
[26] Zheng, W.; Zhao, L.; Zou, C., An efficient algorithm to solve the small sample size problem for LDA, Pattern Recognition, 37, 1077-1079 (2004) · Zbl 1072.68557
[27] Ye, J.; Janardan, R.; Park, C. H.; Park, H., An optimization criterion for generalized discriminant analysis on undersampled problems, IEEE Trans. Pattern Anal. Mach. Intell., 26, 8, 982-994 (2004)
[28] Ye, J.; Li, Q., A two-stage linear discriminant analysis via QR-decomposition, IEEE Trans. Pattern Anal. Mach. Intell., 27, 6, 929-941 (2005)
[29] Chen, L. F.; Xu, H. Y.; Liao, M.; Ko, M. T.; Lin, J. C.; Yu, G. J., A new LDA-based face recognition system which can solve the small sample size problem, Pattern Recognition, 33, 1713-1726 (2000)
[30] Zhang, T.; Yang, J.; Wang, H.; Du, C., Maximum variance projection for face recognition, Opt. Eng., 46, 6, 1-8 (2007)
[31] Chang, H.; Yeung, D. Y., Robust locally linear embedding, Pattern Recognition, 39, 1053-1065 (2006) · Zbl 1096.68713
[32] X. He, P. Niyogi, Locality preserving projections, in: Advances in Neural Information Processing Systems 16 (NIPS 2003), Vancouver, Canada, 2003.
[33] He, X.; Yang, S.; Hu, Y.; Niyogi, P.; Zhang, H. J., Face recognition using Laplacianfaces, IEEE Trans. Pattern Anal. Mach. Intell., 27, 3, 328-340 (2005)
[34] K.Q. Weinberger, L.K. Saul, An introduction to nonlinear dimensionality reduction by maximum variance unfolding, American Association for Artificial Intelligence ⟨www.aaai.org⟩.
[35] Vandenberghe, L.; Boyd, S., Semidefinite programming, SIAM Rev., 38, 1, 49-95 (1996) · Zbl 0845.65023
[36] Phillips, P. J.; Moon, H.; Rizvi, S. A.; Rauss, P. J., The FERET evaluation methodology for face-recognition algorithms, IEEE Trans. Pattern Anal. Mach. Intell., 22, 10, 1090-1104 (2000)
[37] P.J. Phillips, The Facial Recognition Technology (FERET) Database ⟨http://www.itl.nist.gov/iad/humanid/feret/feret_master.html⟩.
[38] Phillips, P. J.; Wechsler, H.; Huang, J.; Rauss, P., The FERET database and evaluation procedure for face recognition algorithms, Image and Vision Computing, 16, 5, 295-306 (1998)
[39] Sim, T.; Baker, S.; Bsat, M., The CMU pose, illumination, and expression database, IEEE Trans. Pattern Anal. Mach. Intell., 25, 12, 1615-1618 (2003)
[40] T. Sim, S. Baker, M. Bsat, The CMU pose, illumination, and expression (PIE) database of human faces, Technical Report CMU-RI-TR-01-02, Robotics Institute, Carnegie Mellon University, January 2001.
[41] See ⟨http://www.cs.toronto.edu/~roweis/data.html⟩.