Learning to Grasp the Ungraspable with Emergent Extrinsic Dexterity

Published: 12 May 2022, Last Modified: 22 Oct 2023
ICRA 2022 Workshop: RL for Manipulation (Poster)
Keywords: Manipulation, Reinforcement Learning, Extrinsic Dexterity
TL;DR: We explore the use of reinforcement learning (RL) for extrinsic dexterity through the task of "Occluded Grasping" and demonstrate policy generalization with real-robot experiments.
Abstract: A robot can solve manipulation tasks beyond the limitations of its own body if it can utilize the external environment, for example by pushing the object against the table or a vertical wall. Such behaviors are known as "Extrinsic Dexterity." Previous work on extrinsic dexterity usually relies on hand-crafted primitives or careful assumptions about contacts. In this work, we explore the use of reinforcement learning (RL) for extrinsic dexterity through the task of "Occluded Grasping". The goal of the task is to grasp the object in configurations that are initially occluded; the robot must interact with the object and the external environment to move the object into a configuration from which these grasps can be achieved. To accomplish this task, we train a policy to co-optimize pre-grasp and grasping motions; this results in the emergent behavior of pushing the object against the wall to rotate it and then grasp it. We demonstrate the generality of the learned policy across environment variations in simulation and evaluate it on a real robot with zero-shot sim2real transfer.
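As a rough illustration of the kind of training setup the abstract describes (a single RL policy trained end-to-end on the occluded-grasping task, with no hand-crafted pre-grasp primitive), here is a minimal sketch using SAC from stable-baselines3 on a placeholder environment. The environment class `OccludedGraspingStub`, its observation/action spaces, its reward, and the choice of SAC are all assumptions made for illustration; they are not details taken from the paper.

```python
# Hypothetical sketch: training an occluded-grasping policy with off-the-shelf RL.
# The stub environment below stands in for a physics simulator containing an
# object next to a wall; all of its specifics are assumptions for illustration.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import SAC


class OccludedGraspingStub(gym.Env):
    """Placeholder environment: object pose + gripper pose as observations,
    end-effector motion commands as actions. Dynamics here are random; a real
    setup would wrap a simulator where the target grasp is initially occluded."""

    def __init__(self):
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(13,), dtype=np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(7,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        return self.observation_space.sample(), {}

    def step(self, action):
        obs = self.observation_space.sample()
        # A real reward would measure progress toward the initially occluded
        # grasp configuration (e.g., distance of the gripper to the grasp pose).
        reward = 0.0
        terminated, truncated = False, False
        return obs, reward, terminated, truncated, {}


if __name__ == "__main__":
    env = OccludedGraspingStub()
    model = SAC("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=1_000)  # toy budget; real training runs far longer
```

In a setup like this, pre-grasp behaviors such as pushing the object against the wall are not scripted; they would have to emerge from optimizing the grasping reward alone, which is the point the abstract makes.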
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2211.01500/code)