A review of research on reinforcement learning algorithms for multi-agents

K Hu, M Li, Z Song, K Xu, Q Xia, N Sun, P Zhou, M Xia - Neurocomputing, 2024 - Elsevier
Abstract
In recent years, multi-agent reinforcement learning (MARL) techniques have been widely applied and have evolved rapidly in the field of artificial intelligence. However, traditional reinforcement learning methods suffer from limitations such as long training times, large sample requirements, and highly delayed rewards. This paper therefore presents a systematic study of MARL algorithms. Firstly, it uses the CiteSpace software to visually analyze the existing literature on multi-agent reinforcement learning and briefly identifies the research hotspots and key research directions in this field. Secondly, it describes in detail the applications of traditional reinforcement learning algorithms in two task settings, namely single-agent and multi-agent systems. The paper then highlights the diverse applications, challenges, and corresponding solutions of MARL algorithmic techniques in the field of multi-agent systems (MAS). Finally, it points out future research directions based on the existing limitations of these algorithms. Through this paper, readers will gain a systematic and in-depth understanding of MARL algorithms and how they can be used to better address the various challenges posed by MAS.