Keep Doing What Worked: Behavior Modelling Priors for Offline Reinforcement Learning

Published: 20 Dec 2019, Last Modified: 05 May 2023
ICLR 2020 Conference Blind Submission
TL;DR: We develop a method for stable offline reinforcement learning from logged data. The key is to regularize the RL policy towards a learned "advantage-weighted" model of the data.
Abstract: Off-policy reinforcement learning algorithms promise to be applicable in settings where only a fixed dataset (batch) of environment interactions is available and no new experience can be acquired. This property makes these algorithms appealing for real-world problems such as robot control. In practice, however, standard off-policy algorithms fail in the batch setting for continuous control. In this paper, we propose a simple solution to this problem. It admits the use of data generated by arbitrary behavior policies and uses a learned prior -- the advantage-weighted behavior model (ABM) -- to bias the RL policy towards actions that have previously been executed and are likely to be successful on the new task. Our method can be seen as an extension of recent work on batch RL that enables stable learning from conflicting data sources. We find improvements over competitive baselines in a variety of RL tasks -- including standard continuous control benchmarks and multi-task learning for simulated and real-world robots.
Keywords: Reinforcement Learning, Off-policy, Multitask, Continuous Control
Data: [DeepMind Control Suite](https://paperswithcode.com/dataset/deepmind-control-suite), [MuJoCo](https://paperswithcode.com/dataset/mujoco)
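
The abstract describes two ingredients: fitting an advantage-weighted behavior model (ABM) on the logged data, and regularizing the RL policy towards that prior. Below is a minimal, illustrative sketch of those two losses, not the paper's exact algorithm: the network sizes, the hard advantage filter f(A) = 1[A >= 0], the fixed KL penalty `kl_coef` (in place of a constrained update), and the random stand-in batch are all assumptions made for the example.

```python
# Sketch of an advantage-weighted behavior-model (ABM) prior and a
# KL-regularized policy update, loosely following the abstract.
import torch
import torch.nn as nn
from torch.distributions import Normal, kl_divergence


class GaussianPolicy(nn.Module):
    """Diagonal-Gaussian head, used both as the policy and as the ABM prior."""

    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * act_dim),
        )

    def forward(self, obs):
        mean, log_std = self.body(obs).chunk(2, dim=-1)
        return Normal(mean, log_std.clamp(-5, 2).exp())


def abm_prior_loss(prior, obs, act, advantage):
    """Advantage-weighted behavior modelling: only imitate logged actions
    whose estimated advantage is non-negative (assumed filter f(A) = 1[A >= 0])."""
    weights = (advantage >= 0).float()
    log_prob = prior(obs).log_prob(act).sum(-1)
    return -(weights * log_prob).mean()


def policy_loss(policy, prior, critic, obs, kl_coef=0.1):
    """Improve the policy on the critic while staying close to the ABM prior
    (simple KL penalty here, rather than a constrained optimization step)."""
    pi = policy(obs)
    action = pi.rsample()                                  # reparameterized sample
    q_value = critic(torch.cat([obs, action], dim=-1)).squeeze(-1)
    kl = kl_divergence(pi, prior(obs)).sum(-1)
    return (-q_value + kl_coef * kl).mean()


if __name__ == "__main__":
    obs_dim, act_dim, batch = 17, 6, 256
    prior = GaussianPolicy(obs_dim, act_dim)
    policy = GaussianPolicy(obs_dim, act_dim)
    critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 256), nn.ReLU(),
                           nn.Linear(256, 1))

    # Random tensors standing in for a batch from the fixed offline dataset.
    obs = torch.randn(batch, obs_dim)
    act = torch.randn(batch, act_dim)
    adv = torch.randn(batch)                               # advantage estimates

    print(abm_prior_loss(prior, obs, act, adv).item())
    print(policy_loss(policy, prior, critic, obs).item())
```

In practice the two losses would be minimized alternately over minibatches of the logged data, with the advantage estimates coming from a learned critic; the point of the sketch is only that the prior imitates the filtered data while the policy trades off Q-value against divergence from that prior.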