Abstract
This paper describes how, by simple means, a genetic search for optimal neural network architectures can be improved, both in convergence speed and in the quality of the final result. This result can be explained theoretically by the Baldwin effect, which is implemented here not only through the learning process of the network itself, but also by changing the network architecture as part of the learning procedure. This can be seen as a combination of two different techniques, each helping and improving on simple genetic search.
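The Baldwinian mechanism the abstract appeals to — evaluating fitness *after* lifetime learning while reproduction copies the unmodified genotype — can be illustrated with a toy genetic search. This is only a minimal sketch under simplifying assumptions: it evolves a scalar rather than a network architecture, and all names and parameters (`learn`, `fitness`, `evolve`, the step counts) are illustrative, not the authors' actual method.

```python
import random

random.seed(0)

TARGET = 0.0  # toy optimum: an individual is fitter the closer it gets to TARGET


def learn(x, steps=10, lr=0.3):
    """Lifetime 'learning': local hill-climbing refinement of the phenotype.

    The refined value influences fitness only; the genotype itself is
    never modified (Baldwinian, not Lamarckian, inheritance).
    """
    for _ in range(steps):
        candidate = x + random.uniform(-lr, lr)
        if abs(candidate - TARGET) < abs(x - TARGET):
            x = candidate
    return x


def fitness(genotype):
    # Baldwinian evaluation: score the *learned* phenotype, not the genotype.
    return -abs(learn(genotype) - TARGET)


def evolve(pop_size=20, generations=30):
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Offspring are mutated copies of the parents' *genotypes*;
        # improvements acquired by learning are not written back.
        pop = parents + [p + random.gauss(0, 0.5) for p in parents]
    return max(pop, key=fitness)


best = evolve()
```

Because individuals that happen to lie near the optimum gain more from learning, selection is guided toward good genotypes even though learned improvements are never inherited — the smoothing of the fitness landscape that characterises the Baldwin effect.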
Copyright information
© 1995 Springer-Verlag/Wien
Cite this paper
Boers, E.J.W., Borst, M.V., Sprinkhuizen-Kuyper, I.G. (1995). Evolving Neural Networks Using the “Baldwin Effect”. In: Artificial Neural Nets and Genetic Algorithms. Springer, Vienna. https://doi.org/10.1007/978-3-7091-7535-4_87
Publisher Name: Springer, Vienna
Print ISBN: 978-3-211-82692-8
Online ISBN: 978-3-7091-7535-4