Integrating Force-based Manipulation Primitives with Deep Learning-based Visual Servoing for Robotic Assembly

Published: 12 May 2022, Last Modified: 17 May 2023
ICRA 2022 Workshop: RL for Manipulation (Poster)
Keywords: Deep Learning, Visual Servoing, Manipulation Primitives, Robotic Assembly, Reinforcement Learning
TL;DR: Combining Deep Learning-based Visual Servoing and Force-based Manipulation Primitives to achieve high-accuracy peg-in-hole insertion tasks.
Abstract: This paper explores the idea of combining Deep Learning-based Visual Servoing with dynamic sequences of force-based Manipulation Primitives for robotic assembly tasks. Most current peg-in-hole algorithms assume the initial peg pose is already aligned to within a minute deviation range before a tight-clearance insertion is attempted. By integrating tactile and visual information, highly accurate peg alignment before insertion can be achieved autonomously. In the alignment phase, the peg mounted on the end-effector is aligned automatically from an initial pose with large displacement errors to an estimated insertion pose with errors below 1.5 mm in translation and 1.5 deg in rotation, using a single one-shot Deep Learning-based Visual Servoing estimate. A dynamic sequence of Manipulation Primitives is then generated automatically via Reinforcement Learning to complete the final stage of insertion.
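The two-phase pipeline described in the abstract — a one-shot visual-servoing alignment followed by an RL-selected sequence of force-based primitives — can be sketched as follows. This is a minimal illustrative mock-up, not the authors' implementation: the function names, the primitive names (`compliant_slide`, `guarded_insert`), the force threshold, and the mocked network/policy outputs are all assumptions for illustration.

```python
import math
import random

# Error bounds reported in the abstract for the alignment phase.
TRANS_TOL = 1.5e-3            # 1.5 mm residual translation error
ROT_TOL = math.radians(1.5)   # 1.5 deg residual rotation error


def visual_servoing_estimate(initial_pose):
    """One-shot deep-learning pose estimate (mocked): maps a pose with
    large displacement errors to one within the reported tolerances
    around the true hole pose, taken here as (0, 0, 0)."""
    # A real system would run a trained network on camera images;
    # here we just sample inside the tolerance band.
    return (random.uniform(-TRANS_TOL, TRANS_TOL),
            random.uniform(-TRANS_TOL, TRANS_TOL),
            random.uniform(-ROT_TOL, ROT_TOL))


def select_primitive(force_reading):
    """Stand-in for the learned RL policy that maps force/torque
    feedback to the next force-based manipulation primitive."""
    if force_reading > 5.0:        # hypothetical contact threshold (N)
        return "compliant_slide"   # relieve contact, search laterally
    return "guarded_insert"        # push along the hole axis


def insert(aligned_pose, max_steps=10):
    """Run a dynamic primitive sequence until (mock) insertion succeeds."""
    sequence = []
    for _ in range(max_steps):
        force = random.uniform(0.0, 10.0)  # mock force/torque sensor
        primitive = select_primitive(force)
        sequence.append(primitive)
        if primitive == "guarded_insert":  # mock success condition
            return True, sequence
    return False, sequence


if __name__ == "__main__":
    random.seed(0)
    # Phase 1: align from a pose with large displacement errors.
    pose = visual_servoing_estimate((0.05, -0.03, math.radians(20.0)))
    # Phase 2: insert via a dynamic sequence of primitives.
    success, sequence = insert(pose)
    print(pose, success, sequence)
```

The split mirrors the paper's structure: vision handles the coarse-to-fine alignment in a single estimate, while force feedback drives the contact-rich final insertion, where visual accuracy alone is insufficient for tight clearances.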