Disentangled Predictive Representation for Meta-Reinforcement Learning

Published: 22 Jul 2021, Last Modified: 05 May 2023
Keywords: reinforcement learning, representation learning, unsupervised learning, meta-learning
TL;DR: We introduce a simple unsupervised framework for learning general representations for multi-task RL.
Abstract: A major challenge in reinforcement learning is the design of agents that are able to generalize across tasks that share common dynamics. A viable solution is meta-reinforcement learning, which identifies common structures among past tasks so that they can be generalized to new tasks at meta-test time. Prior works learn the meta-representation jointly while solving tasks, resulting in representations that do not generalize well across policies and lead to sample inefficiency during the meta-test phase. In this work, we introduce state2vec, an efficient and low-complexity unsupervised framework for learning disentangled representations that are more general. The state embedding vectors learned with state2vec capture the geometry of the underlying state space, resulting in high-quality basis functions for linear value function approximation.
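
To make the last point concrete, the sketch below shows how fixed, pre-trained state embeddings can serve as basis functions for linear value function approximation, here via least-squares TD (LSTD). This is a minimal illustration, not the paper's implementation: the `embed` function is a hypothetical stand-in for a state2vec encoder (using random vectors as placeholders), and the chain environment and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): fixed state embeddings used as
# basis functions for linear value function approximation with LSTD.
# `embed` is a hypothetical stand-in for a state2vec encoder; the random-walk chain
# and all hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_states, d, gamma = 20, 6, 0.95

# Placeholder frozen embedding table playing the role of state2vec outputs.
embedding_table = rng.normal(size=(n_states, d))

def embed(s: int) -> np.ndarray:
    """Return the fixed d-dimensional feature vector for state s."""
    return embedding_table[s]

def rollout(n_steps: int = 5000):
    """Collect (s, r, s') transitions from a random walk on a chain;
    reaching the rightmost state yields reward 1."""
    s = n_states // 2
    for _ in range(n_steps):
        s_next = int(np.clip(s + rng.choice([-1, 1]), 0, n_states - 1))
        r = 1.0 if s_next == n_states - 1 else 0.0
        yield s, r, s_next
        s = s_next

# LSTD: V(s) ~= w . embed(s), where w solves A w = b with
# A = sum phi(s) (phi(s) - gamma * phi(s'))^T  and  b = sum r * phi(s).
A = np.zeros((d, d))
b = np.zeros(d)
for s, r, s_next in rollout():
    phi, phi_next = embed(s), embed(s_next)
    A += np.outer(phi, phi - gamma * phi_next)
    b += r * phi

w = np.linalg.solve(A + 1e-6 * np.eye(d), b)  # small ridge term for stability
values = embedding_table @ w
print("Approximate state values:", np.round(values, 3))
```

Because the features are frozen, only the d-dimensional weight vector is fit at meta-test time, which is the source of the sample-efficiency argument; with embeddings that reflect the state-space geometry (rather than the random placeholders above), nearby states receive similar value estimates.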