DOP: OFF-POLICY MULTI-AGENT DECOMPOSED POLICY GRADIENTS

Yihan Wang, Beining Han, Tonghan Wang, Heng Dong, Chongjie Zhang

Research output: Contribution to conference › Paper › peer-review

Abstract

Multi-agent policy gradient (MAPG) methods have recently witnessed vigorous progress. However, there is still a significant performance gap between MAPG methods and state-of-the-art multi-agent value-based approaches. In this paper, we investigate the causes that hinder the performance of MAPG algorithms and present a multi-agent decomposed policy gradient method (DOP). This method introduces the idea of value function decomposition into the multi-agent actor-critic framework. Based on this idea, DOP supports efficient off-policy learning and addresses the issues of centralized-decentralized mismatch and credit assignment in both discrete and continuous action spaces. We formally show that DOP critics have sufficient representational capability to guarantee convergence. In addition, empirical evaluations on the StarCraft II micromanagement benchmark and multi-agent particle environments demonstrate that DOP outperforms both state-of-the-art value-based and policy-based multi-agent reinforcement learning algorithms. Demonstrative videos are available at https://sites.google.com/view/dop-mapg/.
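The abstract names the central idea, a decomposed centralized critic that yields per-agent policy gradients, without giving its concrete form. The sketch below shows one way such a linearly decomposed critic can be realized for discrete actions, Q_tot(s, a) = Σ_i k_i(s) Q_i(o_i, a_i) + b(s) with k_i(s) ≥ 0; the class, network names, and sizes (DecomposedCritic, k_net, b_net, hidden=64) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DecomposedCritic(nn.Module):
    """Linearly decomposed centralized critic (illustrative sketch):
    Q_tot(s, a) = sum_i k_i(s) * Q_i(o_i, a_i) + b(s),  with k_i(s) >= 0.
    Because each Q_i depends only on agent i's own action, the gradient of
    Q_tot factorizes into per-agent terms, which is what makes decomposed
    (per-agent) policy gradients and local credit assignment possible."""

    def __init__(self, n_agents, state_dim, obs_dim, n_actions, hidden=64):
        super().__init__()
        # One local critic per agent: obs_i -> Q_i(o_i, .) over discrete actions.
        self.local_critics = nn.ModuleList([
            nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, n_actions))
            for _ in range(n_agents)])
        # State-conditioned mixing: non-negative weights k_i(s) and a bias b(s).
        self.k_net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, n_agents))
        self.b_net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 1))

    def forward(self, state, obs, actions):
        # state: (B, state_dim); obs: (B, n_agents, obs_dim); actions: (B, n_agents)
        q_all = torch.stack(
            [critic(obs[:, i]) for i, critic in enumerate(self.local_critics)],
            dim=1)                                                    # (B, n, A)
        q_taken = q_all.gather(2, actions.unsqueeze(-1)).squeeze(-1)  # Q_i(o_i, a_i)
        k = torch.abs(self.k_net(state))                              # k_i(s) >= 0
        b = self.b_net(state).squeeze(-1)                             # b(s)
        q_tot = (k * q_taken).sum(dim=1) + b                          # linear mixing
        return q_tot, q_taken

# Shape check on random inputs (batch of 4, 3 agents, 5 actions each).
critic = DecomposedCritic(n_agents=3, state_dim=10, obs_dim=8, n_actions=5)
q_tot, q_i = critic(torch.randn(4, 10), torch.randn(4, 3, 8),
                    torch.randint(0, 5, (4, 3)))
print(q_tot.shape, q_i.shape)  # torch.Size([4]) torch.Size([4, 3])
```

Keeping k_i(s) non-negative means each agent's local gradient direction stays consistent with the centralized objective, and factoring the joint action this way avoids operating over the exponentially large joint action space, one ingredient behind the efficient off-policy learning the abstract mentions.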

Original language: English
State: Published - 2021
Event: 9th International Conference on Learning Representations, ICLR 2021 - Virtual, Online, Austria
Duration: May 3, 2021 – May 7, 2021

Conference

Conference: 9th International Conference on Learning Representations, ICLR 2021
Country/Territory: Austria
City: Virtual, Online
Period: 05/03/21 – 05/07/21
