CuMARL: Curiosity-based learning in multiagent reinforcement learning

DD Ningombam, B Yoo, HW Kim, HJ Song, S Yi - IEEE Access, 2022 - ieeexplore.ieee.org
In this paper, we propose a novel curiosity-based learning algorithm for Multi-Agent Reinforcement Learning (MARL) to attain efficient and effective decision-making. We employ the Centralized Training with Decentralized Execution (CTDE) framework and assume that each agent has knowledge of the prior action distributions of the others. To quantify the difference in agents' knowledge, which we treat as curiosity, we introduce a conditional mutual information (CMI) regularization and use this quantity of information to update the decision-making policy. Then, to deploy this learning framework in a large-scale MARL setting while achieving high sample efficiency, we employ a Kullback-Leibler (KL) divergence-based prioritization of experiences. We evaluate the effectiveness of the proposed algorithm on three difficulty levels of StarCraft Multi-Agent Challenge (SMAC) scenarios using the PyMARL framework. The simulation-based performance analysis shows that the proposed technique significantly improves the test win rate compared to various state-of-the-art MARL benchmarks, such as Optimistically Weighted Monotonic Value Function Factorization (OW-QMIX) and Learning Individual Intrinsic Reward (LIIR).
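The abstract's KL-divergence-based prioritization of experiences can be illustrated with a minimal sketch: priorities are derived from the KL divergence between an agent's prior belief over others' actions and the action distribution actually observed, and replay sampling is skewed toward high-divergence (more "surprising") experiences. All class and parameter names here (`KLPrioritizedBuffer`, `alpha`) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete action distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

class KLPrioritizedBuffer:
    """Illustrative replay buffer: each stored experience gets a priority
    proportional to the KL divergence between the prior belief about
    others' actions and the observed action distribution."""

    def __init__(self, alpha=0.6):
        self.alpha = alpha       # assumed exponent controlling how strongly priorities skew sampling
        self.episodes = []       # stored experiences
        self.priorities = []     # one KL-based priority per experience

    def add(self, episode, prior, observed):
        self.episodes.append(episode)
        self.priorities.append(kl_divergence(observed, prior) ** self.alpha)

    def sample(self, batch_size, rng=None):
        rng = rng or np.random.default_rng(0)
        probs = np.asarray(self.priorities)
        probs = probs / probs.sum()
        idx = rng.choice(len(self.episodes), size=batch_size, p=probs)
        return [self.episodes[i] for i in idx]
```

Under this sketch, an experience whose observed action distribution matches the prior exactly has near-zero priority and is almost never replayed, while experiences that contradict the prior dominate the sampled batches.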