Abstract
Electroencephalography (EEG) may detect early changes in Alzheimer's disease (AD), a debilitating progressive neurodegenerative disease. We have developed an automated AD detection model using a novel directed graph for local texture feature extraction from EEG signals. The proposed graph was created from a topological map of the macroscopic connectome, i.e., the neuronal pathways linking anatomo-functional brain segments involved in visual object recognition and motor response in the primate brain. This primate brain pattern (PBP)-based model was tested on a public AD EEG signal dataset. The dataset comprised 16-channel EEG signal recordings of 12 AD patients and 11 healthy controls. While PBP could generate 448 low-level features per one-dimensional EEG signal, combining it with the tunable q-factor wavelet transform created a multilevel feature extractor (which mimicked deep models) generating 8,512 (= 448 × 19) features per signal input. Iterative neighborhood component analysis was used to choose the most discriminative features (the number of optimal features varied among the individual EEG channels) to feed to a weighted k-nearest neighbor (kNN) classifier for binary classification into AD vs. healthy using both leave-one-subject-out (LOSO) and tenfold cross-validations. Iterative majority voting was used to compute subject-level general performance results from the individual channel classification outputs. Channel-wise as well as subject-level results demonstrated exemplary performance. In addition, the model attained 100% and 92.01% accuracy for AD vs. healthy classification using the kNN classifier with tenfold and LOSO cross-validations, respectively. Our developed multilevel PBP-based model extracted discriminative features from EEG signals and paved the way for further development of models inspired by the brain connectome.
Data availability
Data and relevant material are available from the Datashare website: http://datashare.is.ed.ac.uk/handle/10283/2783 (Smith et al. 2017).
References
Abásolo D, Hornero R, Espino P, Alvarez D, Poza J (2006) Entropy analysis of the EEG background activity in Alzheimer’s disease patients. Physiol Meas 27:241
Bargshady G, Zhou X, Barua PD, Gururajan R, Li Y, Acharya UR (2022) Application of CycleGAN and transfer learning techniques for automated detection of COVID-19 using X-ray images. Pattern Recognit Lett 153:67–74
Cassani R, Estarellas M, San-Martin R, Fraga FJ, Falk TH (2018) Systematic review on resting-state EEG for Alzheimer's disease diagnosis and progression assessment. Dis Markers 2018
Cinbis RG, Verbeek J, Schmid C (2011) Unsupervised metric learning for face identification in TV video. In: 2011 international conference on computer vision. IEEE, pp 1559–1566
Dogan A et al (2021) PrimePatNet87: Prime pattern and tunable q-factor wavelet transform techniques for automated accurate EEG emotion recognition. Comput Biol Med 138:104867
Escudero J, Abásolo D, Hornero R, Espino P, López M (2006) Analysis of electroencephalograms in Alzheimer’s disease patients with multiscale entropy. Physiol Meas 27:1091
Falk TH, Fraga FJ, Trambaiolli L, Anghinah R (2012) EEG amplitude modulation analysis for semi-automated diagnosis of Alzheimer’s disease. EURASIP J Adv Signal Process 2012:1–9. https://doi.org/10.1186/1687-6180-2012-192
Goldberger J, Hinton GE, Roweis S, Salakhutdinov RR (2004a) Neighbourhood components analysis. Adv Neural Inf Process Syst 17:513–520
Guo G, Wang H, Bell D, Bi Y, Greer K (2003) KNN model-based approach in classification. In: OTM confederated international conferences "On the move to meaningful internet systems". Springer, pp 986–996
Houmani N, Vialatte F, Gallego-Jutglà E, Dreyfus G, Nguyen-Michel V-H, Mariani J, Kinugawa K (2018) Diagnosis of Alzheimer’s disease with electroencephalography in a differential framework. PLoS ONE 13:e0193607
Huggins CJ et al (2021) Deep learning of resting-state electroencephalogram signals for three-class classification of Alzheimer’s disease, mild cognitive impairment and healthy ageing. J Neural Eng 18:046087
Isik AT (2010) Late onset Alzheimer’s disease in older people. Clin Interv Aging 5:307
Jain U, Nathani K, Ruban N, Raj ANJ, Zhuang Z, Mahesh VG (2018) Cubic SVM classifier based feature extraction and emotion detection from speech signals. In: 2018 international conference on sensor networks and signal processing (SNSP). IEEE, pp 386–391
Kabir HD, Khanam S, Khozeimeh F, Khosravi A, Mondal SK, Nahavandi S, Acharya UR (2022) Aleatory-aware deep uncertainty quantification for transfer learning. Comput Biol Med 143:105246
Kashefpoor M, Rabbani H, Barekatain M (2016) Automatic diagnosis of mild cognitive impairment using electroencephalogram spectral features. J Med Signals Sens 6:25
Khan MU et al (2022) Artificial neural network-based cardiovascular disease prediction using spectral features. Comput Electr Eng 101:108094
Khatun S, Morshed BI, Bidelman GM (2019) A single-channel EEG-based approach to detect mild cognitive impairment via speech-evoked brain responses. IEEE Trans Neural Syst Rehabil Eng 27:1063–1070
Kong Y, Wang T, Chu F (2018) Adaptive TQWT filter based feature extraction method and its application to detection of repetitive transients. Sci China Technol Sci 61:1556–1574
Kruger N et al (2012) Deep hierarchies in the primate visual cortex: What can we learn for computer vision? IEEE Trans Pattern Anal Mach Intell 35:1847–1871
Kunjan S et al (2021) The necessity of leave one subject out (LOSO) cross validation for EEG disease diagnosis. International conference on brain informatics. Springer, pp 558–567
Lamba PS, Virmani D, Castillo O (2020) Multimodal human eye blink recognition method using feature level fusion for exigency detection. Soft Comput 24:16829–16845
Li N, Jimenez R (2018) A logistic regression classifier for long-term probabilistic prediction of rock burst hazard. Nat Hazards 90:197–215
Mattson MP (2004) Pathways towards and away from Alzheimer’s disease. Nature 430:631–639
McBride J, Zhao X, Munro N, Smith C, Jicha G, Jiang Y (2013) Resting EEG discrimination of early stage Alzheimer’s disease from normal aging using inter-channel coherence network graphs. Ann Biomed Eng 41:1233–1242
McBride JC et al (2014) Spectral and complexity analysis of scalp EEG characteristics for mild cognitive impairment and early Alzheimer’s disease. Comput Methods Programs Biomed 114:153–163
Peterson LE (2009) K-nearest neighbor. Scholarpedia 4:1883
Poil S-S, De Haan W, van der Flier WM, Mansvelder HD, Scheltens P, Linkenkaer-Hansen K (2013) Integrative EEG biomarkers predict progression to Alzheimer’s disease at the MCI stage. Front Aging Neurosci 5:58
Puri D, Nalbalwar S, Nandgaonkar A, Kachare P, Rajput J, Wagh A (2022a) Alzheimer's disease detection using empirical mode decomposition and Hjorth parameters of EEG signal. IEEE, pp 23–28
Puri D, Nalbalwar S, Nandgaonkar A, Wagh A (2022b) Alzheimer's disease detection from optimal electroencephalogram channels and tunable q-wavelet transform. Indones J Electr Eng Comput Sci 25:1420–1428
Puri D, Nalbalwar S, Nandgaonkar A, Wagh A (2022c) EEG-based diagnosis of Alzheimer's disease using Kolmogorov complexity. In: Applied information processing systems. Springer, pp 157–165
Ruiz-Gómez SJ, Gómez C, Poza J, Gutiérrez-Tobal GC, Tola-Arribas MA, Cano M, Hornero R (2018) Automated multiclass classification of spontaneous EEG activity in Alzheimer’s disease and mild cognitive impairment. Entropy 20:35
Selesnick IW (2011) Wavelet transform with tunable Q-factor. IEEE Trans Signal Process 59:3560–3575
Sharma K, Mukhopadhyay A (2021) Kernel naïve Bayes classifier-based cyber-risk assessment and mitigation framework for online gaming platforms. J Organ Comput Electron Commer 31:343–363
Sharma N, Kolekar MH, Jha K (2020) Iterative filtering decomposition based early dementia diagnosis using EEG with cognitive tests. IEEE Trans Neural Syst Rehabil Eng 28:1890–1898
Simons S, Espino P, Abásolo D (2018) Fuzzy entropy analysis of the electroencephalogram in patients with Alzheimer’s disease: is the method superior to sample entropy? Entropy 20:21
Siuly S, Alçin ÖF, Kabir E, Şengür A, Wang H, Zhang Y, Whittaker F (2020) A new framework for automatic detection of patients with mild cognitive impairment using resting-state EEG signals. IEEE Trans Neural Syst Rehabil Eng 28:1966–1976
Smith K, Abásolo D, Escudero J (2017) Accounting for the complex hierarchical topology of EEG phase-based functional connectivity in network binarisation. PLoS ONE 12:e0186164
Squires M, Tao X, Elangovan S, Gururajan R, Zhou X, Acharya UR (2022) A novel genetic algorithm based system for the scheduling of medical treatments. Expert Syst Appl 195:116464
Sridhar S, Manian V (2020) EEG and deep learning based brain cognitive function classification. Computers 9:104
Suen CY, Lam L (2000) Multiple classifier combination methodologies for different output levels. International workshop on multiple classifier systems. Springer, pp 52–66
Tuncer T, Dogan S, Özyurt F, Belhaouari SB, Bensmail H (2020) Novel multi center and threshold ternary pattern based method for disease detection method using voice. IEEE Access 8:84532–84540
Tuncer T, Dogan S, Akbal E, Cicekli A, Rajendra Acharya U (2022) Development of accurate automated language identification model using polymer pattern and tent maximum absolute pooling techniques. Neural Comput Appl 34:1–14
WHO (2022) Dementia. https://www.who.int/news-room/fact-sheets/detail/dementia. Accessed 7 Feb 2022
Xing X, Jia X, Meng MQ-H (2018) Bleeding detection in wireless capsule endoscopy image video using superpixel-color histogram and a subspace KNN classifier. In: 2018 40th annual international conference of the IEEE engineering in medicine and biology society (EMBC). IEEE, pp 1–4
Yin J, Cao J, Siuly S, Wang H (2019) An integrated MCI detection framework based on spectral-temporal analysis. Int J Autom Comput 16:786–799
Zhang Z (2016) Introduction to machine learning: k-nearest neighbors. Ann Transl Med 4:218
Zhang Y, Wang Y, Wang W, Liu B (2001) Doppler ultrasound signal denoising based on wavelet frames. IEEE Trans Ultrason Ferroelectr Freq Control 48:709–716
Zuo W, Lu W, Wang K, Zhang H (2008) Diagnosis of cardiac arrhythmia using kernel difference weighted KNN classifier. In: 2008 computers in cardiology. IEEE, pp 253–256
Funding
The authors state that this work has not received any funding.
Author information
Contributions
All authors contributed equally to the study.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix
Tunable Q-factor wavelet transform (TQWT)
TQWT is a wavelet transform defined in the frequency domain; discrete-time signals of finite length are analyzed using radix-2 FFTs (Selesnick 2011). Compared with traditional fixed-Q wavelet transforms, TQWT has the advantage that the Q factor can be set easily and adjusted continuously to match the different oscillatory behaviors of different signals. Moreover, successful feature generation using TQWT-based decomposition relies on appropriate TQWT parameters, namely the decomposition level, Q factor, and redundancy (Kong et al. 2018). Thus, TQWT takes three parameters: Q, the quality (oscillatory) factor; r, the redundancy; and J, the number of decomposition levels. Q sets the number of oscillations of the wavelet; when Q is 1, the wavelet has no oscillations.
TQWT contains two-channel filter banks, each consisting of a low-pass and a high-pass channel. The low-pass channel has a low-pass filter (LPF) followed by a low-pass scaling factor (LCF), and the high-pass channel has a high-pass filter (HPF) followed by a high-pass scaling factor (HCF). The ratio of the center frequency (CF) of each subband to its bandwidth is equal to the Q factor used to implement the TQWT. The following expressions can be used to calculate the CF and bandwidth of a subband (Selesnick 2011):
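The equations themselves did not survive extraction. Assuming Selesnick's (2011) standard formulation, with low-pass scaling factor α and high-pass scaling factor β, the center frequency and bandwidth of subband j (in Hz, for sampling frequency f_s) take the form:

```latex
\begin{align}
f_c(j) &= \alpha^{j}\,\frac{2-\beta}{4\alpha}\,f_s, \tag{1}\\
BW(j)  &= \frac{1}{4}\,\beta\,\alpha^{\,j-1}\,f_s. \tag{2}
\end{align}
```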
Equations 1 and 2 show the effect of the scaling factors on the center frequency and bandwidth. Herein, j represents the subband number, where 1 ≤ j ≤ J + 1, and fs is the sampling frequency. The Q factor is controlled by the redundancy r, and this connection is given in Eqs. 3 and 4 (Zhang et al. 2001).
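Equations 3 and 4 are likewise missing from the extracted text. In Selesnick's (2011) notation, they relate Q and the redundancy r to the scaling factors α and β:

```latex
\begin{align}
Q &= \frac{f_c}{BW} = \frac{2-\beta}{\beta}, \tag{3}\\
r &= \frac{\beta}{1-\alpha}. \tag{4}
\end{align}
```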
Iterative neighborhood component analysis (INCA)
To explain INCA better, we first give background on NCA in this section. Neighborhood component analysis (NCA) is a technique for dimensionality reduction and feature selection. In machine learning applications, it is crucial to weight features appropriately, and NCA is widely used as one of the most successful learning algorithms in classification studies. NCA carries out classification by learning a projection of the input vectors that optimizes the accuracy of a nearest neighbor classifier. Equivalently, NCA can select a linear projection that optimizes the performance of the nearest neighbor classifier in the projection space (Goldberger et al. 2004a).
NCA aims to learn a Mahalanobis distance for use in classification and then performs classification using this distance. While computing the Mahalanobis distance, NCA learns a projection matrix directly and thereby avoids matrix inversion. The Mahalanobis distance is parameterized by a projection matrix P that transforms the data, so NCA converts the problem of learning a Mahalanobis distance into the problem of learning the matrix P.
Each point in the training set stochastically chooses another point as its neighbor and inherits that neighbor's class label. The neighbor-selection probabilities are defined by a softmax over Euclidean distances in the transformed space. Based on this stochastic selection rule, the probability of classifying each point correctly is computed by summing the selection probabilities over the points of the same class. NCA then estimates the matrix P by maximizing the expected number of correctly classified points. Differentiating this objective f with respect to P yields a gradient rule, so a gradient-based optimizer is used.
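The NCA equations described above are missing from the extracted text. Following Goldberger et al. (2004a), the standard formulation can be reconstructed as follows, where C_i denotes the set of points in the same class as point i and x_{ij} = x_i − x_j:

```latex
\begin{align}
p_{ij} &= \frac{\exp\!\left(-\lVert P x_i - P x_j \rVert^2\right)}
              {\sum_{k \neq i} \exp\!\left(-\lVert P x_i - P x_k \rVert^2\right)},
\qquad p_{ii} = 0,\\
p_i &= \sum_{j \in C_i} p_{ij}, \qquad f(P) = \sum_i p_i,\\
\frac{\partial f}{\partial P} &= 2P \sum_i \Bigl( p_i \sum_k p_{ik}\, x_{ik} x_{ik}^{\top}
   \;-\; \sum_{j \in C_i} p_{ij}\, x_{ij} x_{ij}^{\top} \Bigr).
\end{align}
```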
INCA is an improved, iterative version of the NCA feature selector. Its main objective is to choose an optimized number of features. To choose the optimal feature vector, a loop and a loss function are used. The steps of INCA are given below.
Step 1 Calculate the qualified indexes by deploying the NCA algorithm.
Step 2 Define a loop range to decrease time complexity.
Step 3 Choose feature vectors iteratively by using the created loop.
Step 4 Apply the used loss function to the selected feature vectors and create a loss array.
Step 5 Select the feature vector with minimum loss value.
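The steps above can be sketched in code. This is a minimal illustration, not the authors' implementation: the paper presumably uses NCA feature weights (e.g., MATLAB's fscnca), which scikit-learn does not expose directly, so the ANOVA F-score (`f_classif`) stands in for the Step 1 ranking; the loop range and the weighted kNN loss are illustrative choices.

```python
# Sketch of the INCA selection loop (assumption: f_classif ranking
# stands in for NCA-based feature weights; range bounds are arbitrary).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def inca_select(X, y, lo=5, hi=50):
    # Step 1: compute qualified (ranked) feature indexes
    scores, _ = f_classif(X, y)
    order = np.argsort(scores)[::-1]
    best_k, best_loss = lo, np.inf
    # Steps 2-4: loop over candidate feature counts, recording the loss
    for k in range(lo, min(hi, X.shape[1]) + 1):
        acc = cross_val_score(KNeighborsClassifier(weights="distance"),
                              X[:, order[:k]], y, cv=5).mean()
        loss = 1.0 - acc
        if loss < best_loss:  # Step 5: keep the minimum-loss feature vector
            best_loss, best_k = loss, k
    return order[:best_k], best_loss

X, y = make_classification(n_samples=200, n_features=60, n_informative=8,
                           random_state=0)
idx, loss = inca_select(X, y)
print(len(idx), round(loss, 3))
```

The loss array from Step 4 is implicit here: only the running minimum is kept, which is equivalent to building the array and taking its argmin.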
kNN
The kNN algorithm is one of the most widely known and most commonly used classification algorithms. Given an unlabeled test sample, kNN finds the k closest samples in the training set and assigns the most appropriate label (Guo et al. 2003). kNN thus classifies an object according to the closest training examples: a majority vote of its k neighbors determines the class. Two issues should be considered in the kNN algorithm. The first is the correct choice of k, which affects performance: when k is large, small but important patterns may be ignored. The second is how to calculate the distance between the test sample and its neighbors (Zhang 2016). The most popular distance functions are the Euclidean, Manhattan, and Minkowski distances.
Let X1, X2, …, Xn and Y1, Y2, …, Yn denote two feature vectors, where n is the dimension of the feature space. The mathematical representation of the Euclidean distance is given in Eq. 11. The Manhattan distance, which sums the absolute differences between the two data points, is given in Eq. 12. The Minkowski distance, for a constant p ∈ (0, ∞), is given in Eq. 13.
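Since Eqs. 11–13 were lost in extraction, the three distances can be written out directly; this small sketch computes each in NumPy and checks the values against `scipy.spatial.distance` (the example vectors are made up).

```python
# Euclidean, Manhattan, and Minkowski distances (Eqs. 11-13),
# verified against scipy.spatial.distance.
import numpy as np
from scipy.spatial import distance

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 0.0, 3.0])

euclidean = np.sqrt(np.sum((x - y) ** 2))           # Eq. 11
manhattan = np.sum(np.abs(x - y))                   # Eq. 12
p = 3
minkowski = np.sum(np.abs(x - y) ** p) ** (1 / p)   # Eq. 13

assert np.isclose(euclidean, distance.euclidean(x, y))
assert np.isclose(manhattan, distance.cityblock(x, y))
assert np.isclose(minkowski, distance.minkowski(x, y, p=p))
print(round(euclidean, 3), manhattan, round(minkowski, 3))
```

Note that Minkowski with p = 2 reduces to Euclidean and with p = 1 to Manhattan, which is why kNN libraries typically expose a single Minkowski metric with a tunable p.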
Mode-based majority voting
Majority voting relies on normalizing the sum of the class probabilities produced by the individual classifiers; the class with the highest combined probability in the normalized result is selected (Suen and Lam 2000). In mode-based majority voting, the final label is the mode, i.e., the most frequent value, of the individual predictions.
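As a minimal sketch of mode-based voting at the subject level, the final label is the most frequent value among the per-channel decisions (the 16 channel predictions below are hypothetical, not results from the paper):

```python
# Mode-based majority voting over channel-wise predictions: the final
# subject-level label is the mode of the individual channel outputs.
from collections import Counter

def majority_vote(channel_predictions):
    """Return the most frequent label among per-channel predictions."""
    return Counter(channel_predictions).most_common(1)[0][0]

# 16 hypothetical channel-level decisions for one subject (1 = AD, 0 = healthy)
preds = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0]
print(majority_vote(preds))  # -> 1
```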
Rights and permissions
Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Dogan, S., Baygin, M., Tasci, B. et al. Primate brain pattern-based automated Alzheimer's disease detection model using EEG signals. Cogn Neurodyn 17, 647–659 (2023). https://doi.org/10.1007/s11571-022-09859-2