PCA and Gaussian noise in MLP neural network training improve generalization in problems with small and unbalanced data sets

IBV Da Silva, PJL Adeodato - The 2011 International Joint Conference on Neural Networks, 2011 - ieeexplore.ieee.org
Machine learning approaches have been successfully applied for automatic decision support in several domains. The quality of these systems, however, degrades severely in classification problems with small and unbalanced data sets for knowledge acquisition. Inherent to several real-world problems, data sets with these characteristics are the reality to be tackled by learning algorithms, but the small amount of data affects the classifiers' generalization power, while the imbalance in class distribution makes the classifiers biased towards the larger classes. Previous work had addressed these data constraints by adding Gaussian noise to the input patterns' variables during the iterative training process of a Multilayer Perceptron (MLP) neural network (NN). This paper improves the quality of such a classifier by decorrelating the input variables via a Principal Component Analysis (PCA) transformation of the original input space before applying additive Gaussian noise to each transformed variable of each input pattern. The PCA transformation prevents the conflicting effect of adding decorrelated noise to correlated variables, an effect which increases with the noise level. Three public data sets from a well-known benchmark (Proben1) were used to validate the proposed approach. Experimental results indicate that the proposed methodology improves on the previous approach, being statistically better than the traditional training method (95% confidence) in further experimental set-ups.
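The core idea — decorrelate the inputs with PCA, then inject independent Gaussian noise into each transformed variable — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names (`pca_transform`, `noisy_batch`), the noise level, and the toy data are all hypothetical, and the MLP training loop itself is omitted.

```python
import numpy as np

def pca_transform(X):
    # Center the data, then decorrelate it by projecting onto the
    # eigenvectors of the sample covariance matrix (classic PCA).
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]       # sort by decreasing variance
    components = eigvecs[:, order]
    return Xc @ components, mean, components

def noisy_batch(Z, noise_level, rng):
    # Add independent Gaussian noise to each decorrelated variable of
    # each input pattern; this would be redrawn every training epoch.
    return Z + rng.normal(scale=noise_level, size=Z.shape)

# Toy example: two strongly correlated input variables (hypothetical data).
rng = np.random.default_rng(42)
x = rng.normal(size=(200, 1))
X = np.hstack([x, 0.9 * x + 0.1 * rng.normal(size=(200, 1))])

Z, mean, components = pca_transform(X)
# After the PCA transformation the variables are uncorrelated, so the
# off-diagonal covariance is (numerically) zero.
print(abs(np.cov(Z, rowvar=False)[0, 1]) < 1e-9)

Z_noisy = noisy_batch(Z, noise_level=0.1, rng=rng)
```

In this sketch the noisy patterns `Z_noisy` would be fed to the MLP in place of the raw inputs; because the noise is added in the decorrelated space, it does not conflict with correlations between the original variables, which is the effect the paper sets out to prevent.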