
Experimental study on population-based incremental learning algorithms for dynamic optimization problems. (English) Zbl 1086.68596

Summary: Evolutionary algorithms have been widely used for stationary optimization problems. However, the environments of real-world problems are often dynamic, which seriously challenges traditional evolutionary algorithms. In this paper, the application of Population-Based Incremental Learning (PBIL) algorithms, a class of evolutionary algorithms, to dynamic problems is investigated. Inspired by the complementarity mechanism in nature, a Dual PBIL is proposed, which operates on two probability vectors that are dual to each other with respect to the central point of the genotype space. A diversity-maintaining technique that combines the central probability vector into PBIL is also proposed to improve PBIL’s adaptability in dynamic environments. The paper also formalizes a new dynamic problem generator that can create required dynamics from any binary-encoded stationary problem. Using this generator, a series of dynamic problems was systematically constructed from several benchmark stationary problems, and an experimental study was carried out to compare the performance of several PBIL algorithms and two variants of the standard genetic algorithm. Based on the experimental results, the weaknesses and strengths of the studied PBIL algorithms are analysed and several potential improvements to PBIL for dynamic optimization problems are identified.
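The two ideas named in the summary, a dual probability vector and a generator that turns a binary-encoded stationary problem into a dynamic one, can be sketched in a few lines of Python. The code below is an illustrative reconstruction, not the authors' implementation: it assumes the dual vector is simply 1 − p (the reflection of p through the central point 0.5), that each vector samples half of the population, and that the generator applies an XOR mask to candidate solutions before evaluation so the optimum shifts in genotype space. The function names, the learning rate, and the rule for choosing which sample drives the update are assumptions made for illustration only.

```python
import numpy as np

def dual(p):
    # Dual probability vector: reflection of p through the central point 0.5.
    return 1.0 - p

def sample(p, n, rng):
    # Sample n binary individuals; bit i is 1 with probability p[i].
    return (rng.random((n, p.size)) < p).astype(int)

def make_dynamic(base_fitness, mask):
    # Assumed XOR-mask generator: evaluating x on the shifted problem amounts
    # to evaluating (x XOR mask) on the stationary base problem.
    return lambda x: base_fitness(np.bitwise_xor(x, mask))

def dual_pbil_step(p, fitness, pop_size=100, lr=0.05, rng=None):
    # One generation: sample half the population from p and half from its dual,
    # then learn p towards the best individual found (illustrative update rule).
    rng = np.random.default_rng() if rng is None else rng
    pop = np.vstack([sample(p, pop_size // 2, rng),
                     sample(dual(p), pop_size - pop_size // 2, rng)])
    fits = np.array([fitness(x) for x in pop])
    best = pop[np.argmax(fits)]
    return (1.0 - lr) * p + lr * best

# Usage sketch: a OneMax base problem made dynamic by an XOR mask.
rng = np.random.default_rng(0)
n_bits = 20
onemax = lambda x: int(x.sum())
mask = rng.integers(0, 2, n_bits)        # current environment state
fitness = make_dynamic(onemax, mask)     # dynamic problem instance
p = np.full(n_bits, 0.5)                 # central initial probability vector
for _ in range(50):
    p = dual_pbil_step(p, fitness, rng=rng)
```

Under these assumptions, replacing the mask between generations shifts the optimum while leaving the base landscape unchanged, which is the role the summary attributes to the dynamic problem generator.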

MSC:

68T20 Problem solving in the context of artificial intelligence (heuristics, search strategies, etc.)
90C39 Dynamic programming
90C59 Approximation methods and heuristics in mathematical programming

References:

[1] Bäck T (1998) On the behavior of evolutionary algorithms in dynamic fitness landscape. In: Proceedings of the 1998 IEEE international conference on evolutionary computation, pp 446–451
[2] Baker JE (1987) Reducing bias and inefficiency in the selection algorithm. In: Grefenstette JJ (ed) Proceedings of the 2nd international conference on genetic algorithms. Lawrence Erlbaum Associates, pp 14–21
[3] Baluja S (1994) Population-based incremental learning: a method for integrating genetic search based function optimization and competitive learning. Technical report CMU-CS-94-163, Carnegie Mellon University, USA
[4] Baluja S, Caruana R (1995) Removing the genetics from the standard genetic algorithm. In: Proceedings of the 12th international conference on machine learning, pp 38–46
[5] Branke J (1999) Memory enhanced evolutionary algorithms for changing optimization problems. In: Proceedings of the 1999 congress on evolutionary computation 3:1875–1882
[6] Branke J, Kaußler T, Schmidt C, Schmeck H (2000) A multi-population approach to dynamic optimization problems. In: Adaptive computing in design and manufacturing
[7] Branke J (2001) Evolutionary approaches to dynamic optimization problems–updated survey. In: GECCO Workshop on evolutionary algorithms for dynamic optimization problems, pp 134–137
[8] Branke J (2002) Evolutionary optimization in dynamic environments. Kluwer, Dordrecht · Zbl 1047.68160
[9] Cobb HG (1990) An investigation into the use of hypermutation as an adaptive operator in genetic algorithms having continuous, time-dependent nonstationary environments. Technical report AIC-90-001, Naval Research Laboratory, Washington
[10] Cobb HG, Grefenstette J (1993) Genetic algorithms for tracking changing environments. In: Proceedings of the 5th international conference on genetic algorithms, pp 523–530
[11] Dasgupta D, McGregor D (1992) Nonstationary function optimization using the structured genetic algorithm. In: Proceedings of the 2nd international conference on parallel problem solving from nature, pp 145–154
[12] Goldberg DE, Smith RE (1987) Nonstationary function optimization using genetic algorithms with dominance and diploidy. In: Grefenstette JJ (ed) Proceedings of the 2nd international conference on genetic algorithms. Lawrence Erlbaum Associates, pp 59–68
[13] Goldberg DE (1989) Genetic algorithms in search, optimization, and machine learning. Addison-Wesley, Reading · Zbl 0721.68056
[14] González C, Lozano JA, Larrañaga P (2000) Analyzing the population based incremental learning algorithm by means of discrete dynamical systems. Complex Syst 12(4):465–479 · Zbl 1167.90688
[15] Grefenstette JJ (1992) Genetic algorithms for changing environments. In: Männer R, Manderick B (eds) Proceedings of the 2nd international conference on parallel problem solving from nature, pp 137–144
[16] Grefenstette JJ (1999) Evolvability in dynamic fitness landscapes: a genetic algorithm approach. In: Proceedings of the 1999 congress on evolutionary computation, vol 3, pp 2031–2038
[17] Höhfeld M, Rudolph G (1997) Towards a theory of population-based incremental learning. In: Proceedings of the 4th IEEE conference on evolutionary computation, pp 1–5
[18] Holland JH (1975) Adaptation in natural and artificial systems. University of Michigan Press, Ann Arbor
[19] Larrañaga P, Lozano JA (2002) Estimation of distribution algorithms: a new tool for evolutionary computation. Kluwer, Dordrecht · Zbl 0979.00024
[20] Lewis J, Hart E, Ritchie G (1998) A comparison of dominance mechanisms and simple mutation on non-stationary problems. In: Proceedings of the 5th int conf on parallel problem solving from nature, pp 139–148
[21] Mitchell M, Forrest S, Holland JH (1992) The royal road for genetic algorithms: fitness landscapes and GA performance. In: Proceedings of the 1st European conference on artificial life, pp 245–254
[22] Mori N, Kita H, Nishikawa Y (1997) Adaptation to changing environments by means of the memory based thermodynamical genetic algorithm. In: Bäck T (ed) Proceedings of the 7th international conference on genetic algorithms. Morgan Kaufmann Publishers, pp 299–306
[23] Morrison RW, De Jong KA (1999) A test problem generator for non-stationary environments. In: Proceedings of the 1999 congress on evolutionary computation, vol 3, pp 2047–2053
[24] Morrison RW, De Jong KA (2000) Triggered hypermutation revisited. In: Proceedings of the 2000 congress on evolutionary computation, pp 1025–1032
[25] Mühlenbein H, Paaß G (1996) From recombination of genes to the estimation of distributions I. Binary parameters. In: Voigt H-M, Ebeling W, Rechenberg I, Schwefel H-P (eds) Proceedings of the 4th international conference on parallel problem solving from nature, pp 178–187
[26] Ng KP, Wong KC (1995) A new diploid scheme and dominance change mechanism for non-stationary function optimisation. In: Eshelman LJ (ed) Proceedings of the 6th international conference on genetic algorithms
[27] Servais MP, de Jager G, Greene JR (1997) Function optimization using multiple-base population based incremental learning. In: Proceedings of the 8th South African workshop on pattern recognition
[28] Trojanowski K, Michalewicz Z (2000) Evolutionary optimization in non-stationary environments. J Comp Sci Technol 1(2):93–124
[29] Whitley LD (1991) Fundamental principles of deception in genetic search. In: Rawlins GJE (ed) Foundations of genetic algorithms, vol 1, pp 221–241
[30] Yang S (2003) Non-stationary problem optimization using the primal-dual genetic algorithm. In: Proceedings of the 2003 congress on evolutionary computation, vol 3, pp 2246–2253