EPISODIC REINFORCEMENT LEARNING WITH ASSOCIATIVE MEMORY

  • Guangxiang Zhu
  • Zichuan Lin
  • Guangwen Yang
  • Chongjie Zhang

Research output: Contribution to conference › Paper › peer-review


Abstract

Sample efficiency has been one of the major challenges for deep reinforcement learning. Non-parametric episodic control has been proposed to speed up parametric reinforcement learning by rapidly latching onto previously successful policies. However, previous work on episodic reinforcement learning neglects the relationships between states and stores experiences only as unrelated items. To improve the sample efficiency of reinforcement learning, we propose a novel framework, called Episodic Reinforcement Learning with Associative Memory (ERLAM), which associates related experience trajectories to enable reasoning about effective strategies. We build a graph on top of the states in memory based on state transitions and develop a reverse-trajectory propagation strategy that allows values to propagate rapidly through the graph. We use the non-parametric associative memory as early guidance for a parametric reinforcement learning model. Results on a navigation domain and Atari games show that our framework achieves significantly higher sample efficiency than state-of-the-art episodic reinforcement learning models.
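The two ideas in the abstract — a transition graph over stored states and backward value propagation along trajectories — can be illustrated with a small tabular sketch. This is not the paper's implementation; the function names, the tabular state representation, and the use of repeated backward sweeps (rather than the paper's exact propagation schedule) are illustrative assumptions. The update is the standard non-parametric episodic rule Q(s, a) ← max(Q(s, a), r + γ·max_a′ Q(s′, a′)), applied in reverse trajectory order so that return information flows quickly toward early states, and across trajectories wherever they share a state in the graph.

```python
# Toy sketch of graph-based episodic memory with reverse value propagation.
# All names are illustrative; states are hashable keys in a tabular setting.
from collections import defaultdict

def build_memory_graph(trajectories):
    """Associate experiences: map each state to every stored
    (action, reward, next_state) transition leaving it. Trajectories
    that pass through the same state become linked in this graph."""
    graph = defaultdict(list)
    for traj in trajectories:
        for state, action, reward, next_state in traj:
            graph[state].append((action, reward, next_state))
    return graph

def reverse_propagate(graph, gamma=0.99, sweeps=5):
    """Repeated backward sweeps over the memory graph. Each sweep applies
    Q(s,a) <- max(Q(s,a), r + gamma * V(s')); because the graph joins
    trajectories at shared states, values discovered in one episode
    propagate into states visited only in other episodes."""
    q = defaultdict(float)  # episodic action values, initialized to 0
    v = defaultdict(float)  # V(s) = max_a Q(s, a); unseen states stay 0
    for _ in range(sweeps):
        for state, transitions in graph.items():
            for action, reward, next_state in transitions:
                target = reward + gamma * v[next_state]
                q[(state, action)] = max(q[(state, action)], target)
                v[state] = max(v[state], q[(state, action)])
    return q
```

In ERLAM this non-parametric table would then serve only as early guidance (e.g. as a target bound) for a parametric Q-network, rather than as the final policy; the sketch above covers only the memory side.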

Original language: English
State: Published - 2020
Event: 8th International Conference on Learning Representations, ICLR 2020 - Addis Ababa, Ethiopia
Duration: Apr 30 2020 → …


