Relational Concepts in Deep Reinforcement Learning: Emergence and Representation

30 Apr 2020 (modified: 05 May 2023) · CARLA 2020
Keywords: Deep Reinforcement Learning, Concept Learning, Representations
Abstract: An intelligent agent in a complex environment will face a great number of diverse tasks. Instead of learning task-specific representations, we aim to re-use learned aspects in order to jump-start the acquisition of new tasks by building on available, abstract knowledge. This knowledge is meant to be grounded in the agent's experience and acquired in a bottom-up process guided by intrinsic motivation. In short, our goal is not to make a robot understand human knowledge, but rather to enable it to accumulate and store its own generic concepts. For this purpose, we investigate the capabilities and limitations of current architectures to learn and represent bottom-up relational concepts as one subtype of generic concepts. Using a powerful computation cluster and state-of-the-art methods for distributed deep reinforcement learning, we are running large-scale experiments to investigate the representations that are formed as well as the conditions that lead to them.
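
The abstract does not specify a concrete architecture; as one illustration, the sketch below shows a minimal self-attention block over entity features of the kind commonly used to give deep RL agents relational structure. All names, shapes, and parameters here are hypothetical and not taken from the paper.

# Minimal, hypothetical sketch (not the authors' method): one round of
# scaled dot-product self-attention over a set of entity feature vectors,
# a common way to let an RL agent represent pairwise relations.
import numpy as np

def relational_block(entities, w_q, w_k, w_v):
    """entities: (n, d) entity features, e.g. taken from a perception module.
    w_q, w_k, w_v: (d, d_k) projection matrices (illustrative parameters).
    Returns relation-aware entity features of shape (n, d_k)."""
    q, k, v = entities @ w_q, entities @ w_k, entities @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])          # pairwise relation scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over entities
    return weights @ v                               # aggregate related entities

# Toy usage: 5 entities with 8-dimensional features.
rng = np.random.default_rng(0)
d = 8
ents = rng.normal(size=(5, d))
out = relational_block(ents, rng.normal(size=(d, d)),
                       rng.normal(size=(d, d)), rng.normal(size=(d, d)))
print(out.shape)  # (5, 8)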