Abstract
In recent years, many approaches have been proposed that achieve high performance by combining multiple classifiers. Bagging exploits many random replicates of the training samples, while the random subspace method exploits randomly chosen feature subsets. In this paper, we introduce a method that selects both samples and features at the same time and demonstrate its effectiveness. This method includes a parametric bagging and a parametric random subspace method as special cases. In our experiments, this method and the parametric random subspace method showed the best performance.
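The abstract only describes the combined scheme at a high level. As a rough illustration of the idea, here is a minimal sketch of an ensemble in which each component classifier is trained on a bootstrap sample of the training set and a random feature subset, with predictions combined by majority vote. The decision trees, the sampling fractions (sample_frac, feature_frac), and the iris data are illustrative assumptions, not the authors' setup. Note that taking feature_frac = 1 reduces the sketch to bagging, while training on all samples with only the feature subsets drawn reduces it to the random subspace method.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def fit_ensemble(X, y, n_estimators=25, sample_frac=0.8, feature_frac=0.5):
    """Train one tree per round on a bootstrap sample AND a random feature subset."""
    n, d = X.shape
    n_samples = max(1, int(sample_frac * n))
    n_features = max(1, int(feature_frac * d))
    ensemble = []
    for _ in range(n_estimators):
        rows = rng.choice(n, size=n_samples, replace=True)    # bagging part
        cols = rng.choice(d, size=n_features, replace=False)  # random-subspace part
        clf = DecisionTreeClassifier().fit(X[np.ix_(rows, cols)], y[rows])
        ensemble.append((clf, cols))
    return ensemble

def predict(ensemble, X):
    """Majority vote over the component classifiers."""
    votes = np.stack([clf.predict(X[:, cols]) for clf, cols in ensemble])
    # most frequent label per test point (labels assumed to be non-negative ints)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ens = fit_ensemble(X_tr, y_tr)
print("test accuracy:", (predict(ens, X_te) == y_te).mean())
```

For comparison, scikit-learn's BaggingClassifier exposes the same sample-and-feature resampling through its max_samples and max_features arguments.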
Copyright information
© 2008 Springer-Verlag Berlin Heidelberg
Cite this paper
Shirai, S., Kudo, M., Nakamura, A. (2008). Bagging, Random Subspace Method and Biding. In: da Vitoria Lobo, N., et al. Structural, Syntactic, and Statistical Pattern Recognition. SSPR/SPR 2008. Lecture Notes in Computer Science, vol 5342. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-89689-0_84
DOI: https://doi.org/10.1007/978-3-540-89689-0_84
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-89688-3
Online ISBN: 978-3-540-89689-0