
Hyperspectral image classification using kernel Fukunaga-Koontz transform. (English) Zbl 1299.94009

Summary: This paper presents a novel approach to the hyperspectral imagery (HSI) classification problem using the Kernel Fukunaga-Koontz Transform (K-FKT). The kernel-based Fukunaga-Koontz Transform offers higher classification performance because it can handle nonlinearly distributed data. K-FKT is realized in two stages, training and testing. In the training stage, unlike in the classical FKT, the samples are mapped into a higher-dimensional kernel space, where the nonlinearly distributed data become linearly separable; this yields a more effective solution for hyperspectral data classification. In the second stage, testing, the Fukunaga-Koontz transformation operator is applied to determine the classes of real-world hyperspectral images. In the experimental section, the performance of the proposed K-FKT classification technique is evaluated against the classical FKT and three types of support vector machines (SVMs).
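The summary describes the two stages only at a high level. The following is a minimal, hypothetical Python sketch of a two-class Fukunaga-Koontz classifier operating in an explicit approximate kernel feature space, with scikit-learn's RBFSampler standing in for the implicit kernel mapping used by the paper's K-FKT. The function names, parameters, and synthetic data are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch: classical FKT (Fukunaga & Koontz [14]) applied in an
# approximate kernel feature space. NOT the paper's exact K-FKT derivation.
import numpy as np
from sklearn.kernel_approximation import RBFSampler  # explicit approximate RBF feature map

def fkt_train(X1, X2, eps=1e-10):
    """Train a two-class FKT model from (n_samples, n_features) arrays X1, X2."""
    # Class correlation matrices.
    R1 = X1.T @ X1 / X1.shape[0]
    R2 = X2.T @ X2 / X2.shape[0]
    # Whitening operator P such that P.T @ (R1 + R2) @ P = I.
    d, Phi = np.linalg.eigh(R1 + R2)
    keep = d > eps
    P = Phi[:, keep] / np.sqrt(d[keep])
    # In the whitened space the two class matrices share eigenvectors with
    # eigenvalues lam and 1 - lam, so one eigendecomposition serves both classes.
    lam, V = np.linalg.eigh(P.T @ R1 @ P)
    return P, V, lam

def fkt_classify(x, P, V, lam, n_vec=5):
    """Assign x to the class whose dominant FKT subspace captures more of its energy."""
    z = P.T @ x
    # Largest-lam eigenvectors represent class 1 best; smallest-lam represent class 2.
    e1 = np.sum((V[:, np.argsort(lam)[::-1][:n_vec]].T @ z) ** 2)
    e2 = np.sum((V[:, np.argsort(lam)[:n_vec]].T @ z) ** 2)
    return 1 if e1 >= e2 else 2

# Toy usage with synthetic "spectra": the kernel map lifts pixels before FKT training.
rng = np.random.default_rng(0)
feature_map = RBFSampler(gamma=0.5, n_components=200, random_state=0)
X1 = rng.normal(0.0, 1.0, size=(100, 30))   # class-1 pixels (e.g., one land-cover type)
X2 = rng.normal(0.5, 1.2, size=(100, 30))   # class-2 pixels
feature_map.fit(np.vstack([X1, X2]))
P, V, lam = fkt_train(feature_map.transform(X1), feature_map.transform(X2))
label = fkt_classify(feature_map.transform(X2[:1])[0], P, V, lam)
print("predicted class:", label)
```

The key property exploited here is that the whitened class correlation matrices sum to the identity, so the subspace that best represents one class is the one that represents the other class worst.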

MSC:

94A08 Image processing (compression, reconstruction, etc.) in information and communication theory
94A11 Application of orthogonal and other special functions
68T05 Learning and adaptive systems in artificial intelligence
Full Text: DOI

References:

[1] P. S. Thenkabail, J. G. Lyon, and A. Huete, Hyperspectral Remote Sensing of Vegetation, CRC Press, New York, NY, USA, 2012.
[2] L. Zhi, D. Zhang, J.-Q. Yan, Q.-L. Li, and Q.-L. Tang, “Classification of hyperspectral medical tongue images for tongue diagnosis,” Computerized Medical Imaging and Graphics, vol. 31, no. 8, pp. 672-678, 2007. · doi:10.1016/j.compmedimag.2007.07.008
[3] M. Kalacska and M. Bouchard, “Using police seizure data and hyperspectral imagery to estimate the size of an outdoor cannabis industry,” Police Practice and Research, vol. 12, no. 5, pp. 424-434, 2011. · doi:10.1080/15614263.2010.536722
[4] Z. Pan, G. Healey, M. Prasad, and B. Tromberg, “Face recognition in hyperspectral images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 12, pp. 1552-1560, 2003. · doi:10.1109/TPAMI.2003.1251148
[5] D. Manolakis, D. Marden, and G. A. Shaw, “Hyperspectral image processing for automatic target detection applications,” Lincoln Laboratory Journal, vol. 14, no. 1, pp. 79-116, 2003.
[6] I. Faulconbridge, M. Pickering, and M. Ryan, “Unsupervised band removal leading to improved classification accuracy of hyperspectral images,” in Proceedings of the 29th Australasian Computer Science Conference (ACSC ’06), V. Estivill-Castro and G. Dobbie, Eds., vol. 48 of CRPIT, pp. 43-48, Hobart, Australia, 2006.
[7] O. Rajadell, P. García-Sevilla, and F. Pla, “Textural features for hyperspectral pixel classification,” in Pattern Recognition and Image Analysis, H. Araujo, A. Mendonça, A. Pinho, and M. Torres, Eds., vol. 5524 of Lecture Notes in Computer Science, pp. 208-216, Springer, Berlin, Germany, 2009.
[8] J. A. Benediktsson, J. A. Palmason, and J. R. Sveinsson, “Classification of hyperspectral data from urban areas based on extended morphological profiles,” IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 3, pp. 480-491, 2005. · doi:10.1109/TGRS.2004.842478
[9] J. Borges, J. Bioucas-Dias, and A. Marçal, “Bayesian hyperspectral image segmentation with discriminative class learning,” in Pattern Recognition and Image Analysis, J. Martí, J. Benedí, A. Mendonça, and J. Serrat, Eds., vol. 4477 of Lecture Notes in Computer Science, pp. 22-29, Springer, Berlin, Germany, 2007. · doi:10.1007/978-3-540-72847-4_5
[10] M. S. Alam, M. N. Islam, A. Bal, and M. A. Karim, “Hyperspectral target detection using Gaussian filter and post-processing,” Optics and Lasers in Engineering, vol. 46, no. 11, pp. 817-822, 2008. · doi:10.1016/j.optlaseng.2008.05.019
[11] S. Samiappan, S. Prasad, and L. M. Bruce, “Automated hyperspectral imagery analysis via support vector machines based multi-classifier system with non-uniform random feature selection,” in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS ’11), pp. 3915-3918, July 2011. · doi:10.1109/IGARSS.2011.6050087
[12] S. Dinc, Hyperspectral target recognition using multiclass kernel Fukunaga-Koontz transform [M.S. thesis], Graduate School of Natural and Applied Sciences, Yildiz Technical University, İstanbul, Turkey, 2011.
[13] AVIRIS hyperspectral data, 2013, http://aviris.jpl.nasa.gov/html/data.html.
[14] K. Fukunaga and W. L. G. Koontz, “Application of the Karhunen-Loève expansion to feature selection and ordering,” IEEE Transactions on Computers, vol. 19, no. 4, pp. 311-318, 1970. · Zbl 0197.14604 · doi:10.1109/T-C.1970.222918
[15] S. Ochilov, M. S. Alam, and A. Bal, “Fukunaga-Koontz transform based dimensionality reduction for hyperspectral imagery,” in Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, vol. 6233 of Proceedings of SPIE, April 2006. · doi:10.1117/12.666290
[16] Y.-H. Li and M. Savvides, “Kernel Fukunaga-Koontz transform subspaces for enhanced face recognition,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR ’07), pp. 1-8, June 2007. · doi:10.1109/CVPR.2007.383398
[17] W. Zheng and Z. Lin, “A new discriminant subspace analysis approach for multi-class problems,” Pattern Recognition, vol. 45, no. 4, pp. 1426-1435, 2012. · Zbl 1231.68240 · doi:10.1016/j.patcog.2011.10.021
[18] R. Liu and H. Zhi, “Infrared point target detection with fisher linear discriminant and kernel fisher linear discriminant,” Journal of Infrared, Millimeter, and Terahertz Waves, vol. 31, no. 12, pp. 1491-1502, 2010. · doi:10.1007/s10762-010-9729-6
[19] D. Tuia and G. Camps-Valls, “Urban image classification with semisupervised multiscale cluster kernels,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 4, no. 1, pp. 65-74, 2011. · doi:10.1109/JSTARS.2010.2069085