
Using automated algorithm configuration for parameter control. (English) Zbl 07809163

Chicano, Francisco (ed.) et al., Proceedings of the 17th ACM/SIGEVO workshop on foundations of genetic algorithms, FOGA 2023, Potsdam, Germany, August 30 – September 1, 2023. New York, NY: Association for Computing Machinery (ACM). 38-49 (2023).

MSC:

68T20 Problem solving in the context of artificial intelligence (heuristics, search strategies, etc.)
68W50 Evolutionary algorithms, genetic algorithms (computational aspects)
90C59 Approximation methods and heuristics in mathematical programming

References:

[1] Steven Adriaensen, André Biedenkapp, Gresa Shala, Noor Awad, Theresa Eimer, Marius Lindauer, and Frank Hutter. 2022. Automated Dynamic Algorithm Configuration. Journal of Artificial Intelligence Research 75 (2022), 1633-1699. https://doi.org/10.1613/jair.1.13922 · Zbl 07639828
[2] Marcin Andrychowicz, Anton Raichuk, Piotr Stańczyk, Manu Orsini, Sertan Girgin, Raphaël Marinier, Léonard Hussenot, Matthieu Geist, Olivier Pietquin, Marcin Michalski, et al. 2021. What Matters for On-Policy Deep Actor-Critic Methods? A Large-Scale Study. In International Conference on Learning Representations (ICLR'21).
[3] Denis Antipov, Maxim Buzdalov, and Benjamin Doerr. 2022. Fast Mutation in Crossover-Based Algorithms. Algorithmica 84, 6 (2022), 1724-1761. https://doi.org/10.1007/s00453-022-00957-5 · Zbl 1537.68227
[4] Denis Antipov, Benjamin Doerr, and Vitalii Karavaev. 2019. A tight runtime analysis for the (1 + (λ, λ)) GA on LeadingOnes. In Proc. of Foundations of Genetic Algorithms (FOGA’19). ACM, 169-182. https://doi.org/10.1145/3299904.3340317 · Zbl 1433.68641
[5] Denis Antipov, Benjamin Doerr, and Vitalii Karavaev. 2022. A Rigorous Runtime Analysis of the (1 + (λ, λ)) GA on Jump Functions. Algorithmica 84, 6 (2022), 1573-1602. https://doi.org/10.1007/s00453-021-00907-7 · Zbl 1537.68228
[6] Kirill Antonov, Maxim Buzdalov, Arina Buzdalova, and Carola Doerr. 2021. Blending Dynamic Programming with Monte Carlo Simulation for Bounding the Running Time of Evolutionary Algorithms. In Proc. of IEEE Congress on Evolutionary Computation (CEC’21). IEEE, 878-885. https://doi.org/10.1109/CEC45853.2021.9504775 Free version available at https://arxiv.org/abs/2102.11461.
[7] Thomas Bartz-Beielstein, Carola Doerr, Jakob Bossek, Sowmya Chandrasekaran, Tome Eftimov, Andreas Fischbach, Pascal Kerschke, Manuel López-Ibáñez, Katherine M. Malan, Jason H. Moore, Boris Naujoks, Patryk Orzechowski, Vanessa Volz, Markus Wagner, and Thomas Weise. 2020. Benchmarking in Optimization: Best Practice and Open Issues. CoRR abs/2007.03488 (2020). https://arxiv.org/abs/2007.03488
[8] Anton Bassin and Maxim Buzdalov. 2020. An Experimental Study of Operator Choices in the (1 + (λ, λ)) Genetic Algorithm. In Proceedings of the International Conference on Mathematical Optimization Theory and Operations Research. Number 1275 in Communications in Computer and Information Science. 320-335. https://doi.org/10.1007/978-3-030-58657-7_26 · Zbl 1460.90209
[9] André Biedenkapp. 2022. Dynamic algorithm configuration by reinforcement learning. Ph.D. Dissertation. University of Freiburg, Germany. https://freidok.uni-freiburg.de/data/230869
[10] André Biedenkapp, H. Furkan Bozkurt, Theresa Eimer, Frank Hutter, and Marius Lindauer. 2020. Dynamic Algorithm Configuration: Foundation of a New Meta-Algorithmic Framework. In Proc. of European Conference on Artificial Intelligence (ECAI’20) (Frontiers in Artificial Intelligence and Applications, Vol. 325). IOS Press, 427-434. https://doi.org/10.3233/FAIA200122
[11] André Biedenkapp, Nguyen Dang, Martin S. Krejca, Frank Hutter, and Carola Doerr. 2022. Theory-inspired parameter control benchmarks for dynamic algorithm configuration. In Proc. of Genetic and Evolutionary Computation Conference (GECCO). ACM, 766-775. https://doi.org/10.1145/3512290.3528846
[12] Edmund K. Burke, Michel Gendreau, Matthew R. Hyde, Graham Kendall, Gabriela Ochoa, Ender Özcan, and Rong Qu. 2013. Hyper-heuristics: a survey of the state of the art. J. Oper. Res. Soc. 64, 12 (2013), 1695-1724. https://doi.org/10.1057/jors.2013.71
[13] Nathan Buskulic and Carola Doerr. 2021. Maximizing Drift Is Not Optimal for Solving OneMax. Evol. Comput. 29, 4 (2021), 521-541. https://doi.org/10.1162/evco_a_00290
[14] Maxim Buzdalov and Benjamin Doerr. 2017. Runtime Analysis of the (1 + (λ, λ)) Genetic Algorithm on Random Satisfiable 3-CNF Formulas. In Proc. of Genetic and Evolutionary Computation Conference (GECCO’17). ACM, 1343-1350.
[15] Maxim Buzdalov and Carola Doerr. 2020. Optimal Mutation Rates for the (1 + λ) EA on OneMax. In Proc. of Parallel Problem Solving from Nature (PPSN’20) (LNCS, Vol. 12270). Springer, 574-587. https://doi.org/10.1007/978-3-030-58115-2_40
[16] Maxim Buzdalov and Carola Doerr. 2021. Optimal static mutation strength distributions for the (1 + λ) evolutionary algorithm on OneMax. In Proc. of Genetic and Evolutionary Computation Conference (GECCO’21). ACM, 660-668. https://doi.org/10.1145/3449639.3459389
[17] Leslie Pérez Cáceres, Manuel López-Ibáñez, Holger Hoos, and Thomas Stützle. 2017. An experimental study of adaptive capping in irace. In Learning and Intelligent Optimization: 11th International Conference, LION 11, Nizhny Novgorod, Russia, June 19-21, 2017, Revised Selected Papers. Springer, 235-250.
[18] Deyao Chen, Maxim Buzdalov, Carola Doerr, and Nguyen Dang. 2023. Code and data repository of this paper. https://github.com/de0ch/OLL.
[19] Nguyen Dang and Carola Doerr. 2019. Hyper-parameter tuning for the (1 + (λ, λ)) GA. In Proc. of Genetic and Evolutionary Computation Conference (GECCO'19). ACM, 889-897. https://doi.org/10.1145/3321707.3321725
[20] Marcelo de Souza, Marcus Ritt, and Manuel López-Ibáñez. 2022. Capping methods for the automatic configuration of optimization algorithms. Comput. Oper. Res. 139 (2022), 105615. https://doi.org/10.1016/j.cor.2021.105615 · Zbl 1511.90343
[21] Benjamin Doerr. 2019. Analyzing randomized search heuristics via stochastic domination. Theoretical Computer Science 773 (2019), 115-137. https://doi.org/10.1016/j.tcs.2018.09.024 · Zbl 1451.68363
[22] Benjamin Doerr and Carola Doerr. 2018. Optimal Static and Self-Adjusting Parameter Choices for the (1 + (λ, λ)) Genetic Algorithm. Algorithmica 80 (2018), 1658-1709. https://doi.org/10.1007/s00453-017-0354-9 · Zbl 1391.68100
[23] Benjamin Doerr and Carola Doerr. 2020. Theory of Parameter Control Mechanisms for Discrete Black-Box Optimization: Provable Performance Gains Through Dynamic Parameter Choices. In Theory of Evolutionary Computation: Recent Developments in Discrete Optimization. Springer, 271-321.
[24] Benjamin Doerr, Carola Doerr, and Franziska Ebel. 2015. From black-box complexity to designing new genetic algorithms. Theoretical Computer Science 567 (2015), 87-104. · Zbl 1314.68290
[25] Benjamin Doerr, Huu Phuoc Le, Régis Makhmara, and Ta Duy Nguyen. 2017. Fast genetic algorithms. In Proc. of Genetic and Evolutionary Computation Conference (GECCO’17). ACM, 777-784. https://doi.org/10.1145/3071178.3071301
[26] Benjamin Doerr and Carola Winzen. 2014. Playing Mastermind with Constant-Size Memory. Theory of Computing Systems 55 (2014), 658-684. · Zbl 1319.68101
[27] Carola Doerr. 2020. Complexity Theory for Black-Box Optimization Heuristics. In Theory of Evolutionary Computation: Recent Developments in Discrete Optimization. Springer, 133-212.
[28] Carola Doerr and Markus Wagner. 2018. Simple on-the-fly parameter selection mechanisms for two classical discrete black-box optimization benchmark problems. In Proc. of Genetic and Evolutionary Computation Conference (GECCO’18). ACM, 943-950. https://doi.org/10.1145/3205455.3205560
[29] Agoston Endre Eiben, Robert Hinterding, and Zbigniew Michalewicz. 1999. Parameter control in evolutionary algorithms. IEEE Transactions on Evolutionary Computation 3 (1999), 124-141.
[30] Theresa Eimer, André Biedenkapp, Maximilian Reimer, Steven Adriaensen, Frank Hutter, and Marius Lindauer. 2021. DACBench: A Benchmark Library for Dynamic Algorithm Configuration. In Proc. of International Joint Conference on Artificial Intelligence (IJCAI’21). ijcai.org, 1668-1674. https://doi.org/10.24963/ijcai.2021/230
[31] Álvaro Fialho, Luís Da Costa, Marc Schoenauer, and Michèle Sebag. 2010. Analyzing bandit-based adaptive operator selection mechanisms. Annals of Mathematics and Artificial Intelligence 60 (2010), 25-64. https://doi.org/10.1007/s10472-010-9213-y · Zbl 1226.68081
[32] Brian W. Goldman and William F. Punch. 2015. Fast and Efficient Black Box Optimization Using the Parameter-less Population Pyramid. Evolutionary Computation 23 (2015), 451-479.
[33] Nikolaus Hansen, Anne Auger, Raymond Ros, Olaf Mersmann, Tea Tušar, and Dimo Brockhoff. 2020. COCO: a platform for comparing continuous optimizers in a black-box setting. Optimization Methods and Software (2020), 1-31. https://doi.org/10.1080/10556788.2020.1808977
[34] Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. 2018. Deep reinforcement learning that matters. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 32.
[35] Frank Hutter, Manuel López-Ibáñez, Chris Fawcett, Marius Lindauer, Holger H. Hoos, Kevin Leyton-Brown, and Thomas Stützle. 2014. AClib: A benchmark library for algorithm configuration. In Learning and Intelligent Optimization: 8th International Conference, LION 8, Gainesville, FL, USA, February 16-21, 2014. Revised Selected Papers 8. Springer, 36-40.
[36] Giorgos Karafotias, Mark Hoogendoorn, and A.E. Eiben. 2015. Parameter Control in Evolutionary Algorithms: Trends and Challenges. IEEE Transactions on Evolutionary Computation 19 (2015), 167-187.
[37] Stefan Kern, Sibylle D. Müller, Nikolaus Hansen, Dirk Büche, Jiri Ocenasek, and Petros Koumoutsakos. 2004. Learning probability distributions in continuous evolutionary algorithms - a comparative review. Natural Computing 3 (2004), 77-112. · Zbl 1074.68063
[38] Ana Kostovska, Anja Jankovic, Diederick Vermetten, Jacob de Nobel, Hao Wang, Tome Eftimov, and Carola Doerr. 2022. Per-run Algorithm Selection with Warmstarting using Trajectory-based Features. In Parallel Problem Solving from Nature (PPSN) (LNCS, Vol. 13398). Springer, 46-60. https://doi.org/10.1007/978-3-031-14714-2_4 Free version available at https://arxiv.org/abs/2204.09483.
[39] Per Kristian Lehre and Carsten Witt. 2012. Black-Box Search by Unbiased Variation. Algorithmica 64 (2012), 623-642. · Zbl 1264.68221
[40] Manuel López-Ibáñez, Jürgen Branke, and Luís Paquete. 2021. Reproducibility in Evolutionary Computation. ACM Trans. Evol. Learn. Optim. 1, 4 (2021), 14:1-14:21. https://doi.org/10.1145/3466624
[41] Manuel López-Ibáñez, Jérémie Dubois-Lacoste, Leslie Pérez Cáceres, Mauro Birattari, and Thomas Stützle. 2016. The irace package: Iterated racing for automatic algorithm configuration. Operations Research Perspectives 3 (2016), 43-58.
[42] Yasha Pushak and Holger H. Hoos. 2018. Algorithm Configuration Landscapes: More Benign Than Expected? In Proc. of Parallel Problem Solving from Nature (LNCS, Vol. 11102). Springer, 271-283. https://doi.org/10.1007/978-3-319-99259-4_22
[43] Yasha Pushak and Holger H. Hoos. 2022. AutoML Loss Landscapes. ACM Trans. Evol. Learn. Optim. 2, 3 (2022), 10:1-10:30. https://doi.org/10.1145/3558774
[44] Raymond Ros and Nikolaus Hansen. 2008. A simple modification in CMA-ES achieving linear time and space complexity. In Parallel Problem Solving from Nature – PPSN X. Number 5199 in Lecture Notes in Computer Science. 296-305.
[45] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal Policy Optimization Algorithms. CoRR abs/1707.06347 (2017). http://arxiv.org/abs/1707.06347
[46] Gresa Shala, André Biedenkapp, Noor Awad, Steven Adriaensen, Marius Lindauer, and Frank Hutter. 2020. Learning Step-Size Adaptation in CMA-ES. In Proc. of Parallel Problem Solving from Nature (PPSN’20) (LNCS, Vol. 12270). Springer, 691-706.
[47] Mudita Sharma, Alexandros Komninos, Manuel López-Ibáñez, and Dimitar Kazakov. 2019. Deep Reinforcement Learning-Based Parameter Control in Differential Evolution. In Proc. of Genetic and Evolutionary Computation Conference (GECCO’19). ACM, 709-717. https://doi.org/10.1145/3321707.3321813
[48] David Speck, André Biedenkapp, Frank Hutter, Robert Mattmüller, and Marius Lindauer. 2021. Learning Heuristic Selection with Dynamic Algorithm Configuration. In Proc. of International Conference on Automated Planning and Scheduling (ICAPS). AAAI Press, 597-605. https://ojs.aaai.org/index.php/ICAPS/article/view/16008
[49] Michele Tessari and Giovanni Iacca. 2022. Reinforcement learning based adaptive metaheuristics. In Proceedings of the Genetic and Evolutionary Computation Conference Companion. 1854-1861.
[50] Hado Van Hasselt, Arthur Guez, and David Silver. 2016. Deep reinforcement learning with double Q-learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 30.
[51] Ke Xue, Jiacheng Xu, Lei Yuan, Miqing Li, Chao Qian, Zongzhang Zhang, and Yang Yu. 2022. Multi-agent Dynamic Algorithm Configuration. In Advances in Neural Information Processing Systems 35 (NeurIPS’22). New Orleans, LA.
[52] Wenjie Yi, Rong Qu, and Licheng Jiao. 2023. Automated algorithm design using proximal policy optimisation with identified features. Expert Systems with Applications 216 (2023), 119461.
[53] Wenjie Yi, Rong Qu, Licheng Jiao, and Ben Niu. 2022. Automated Design of Metaheuristics Using Reinforcement Learning within a Novel General Search Framework. IEEE Transactions on Evolutionary Computation (2022).
[54] Yuchang Zhang, Ruibin Bai, Rong Qu, Chaofan Tu, and Jiahuan Jin. 2022. A deep reinforcement learning based hyper-heuristic for combinatorial optimisation with uncertainties. European Journal of Operational Research 300, 2 (2022), 418-427. · Zbl 1495.90168
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. In some cases, these data have been complemented or enhanced with data from zbMATH Open. The list attempts to reflect the references in the original paper as accurately as possible, without claiming completeness or perfect matching.