News/Research

Learning to Smooth and Fold Real Fabric Using Dense Object Descriptors Trained on Synthetic Color Images

20 Apr, 2020

Are you interested in robotic manipulation?

Make sure to check out the latest article by Aditya Ganapathi et al., including our own Ken Goldberg, on using dense object descriptors trained on synthetic color images to smooth and fold real fabric.

From the article:

Robotic fabric manipulation is challenging due to the infinite dimensional configuration space and complex dynamics. In this paper, we learn visual representations of deformable fabric by training dense object descriptors that capture correspondences across images of fabric in various configurations. The learned descriptors capture higher-level geometric structure, facilitating the design of explainable policies. We demonstrate that the learned representation facilitates multistep fabric smoothing and folding tasks on two real physical systems, the da Vinci surgical robot and the ABB YuMi, given high-level demonstrations from a supervisor. The system achieves a 78.8% success rate across six fabric manipulation tasks. See https://tinyurl.com/fabric-descriptors for supplementary material and videos.

This paper makes three contributions: (1) applying dense object descriptors from [6, 39] to fabric using synthetically generated data, (2) simulation experiments demonstrating that the learned descriptors can be used to design fabric manipulation policies for smoothing and folding which are robust to unseen fabric configurations and colors, and (3) physical experiments on both the da Vinci Research Kit (dVRK) and the ABB YuMi suggesting that the learned descriptors transfer effectively to two different robotic systems.
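The core idea of dense object descriptors is that each pixel of an image is mapped to a descriptor vector, and corresponding points on the fabric in two different images end up with nearby descriptors. A minimal sketch of how such correspondences are queried, assuming descriptor maps have already been produced by a trained network (the function name, array shapes, and nearest-neighbor lookup here are illustrative, not taken from the paper's code):

```python
import numpy as np

def best_match(descriptor_map_a, descriptor_map_b, pixel_a):
    """Find the pixel in image B whose descriptor is closest (L2) to the
    descriptor at pixel_a in image A. Descriptor maps are assumed to have
    shape (H, W, D), i.e. one D-dimensional descriptor per pixel."""
    d_a = descriptor_map_a[pixel_a]                # (D,) descriptor at query pixel
    dist = np.linalg.norm(descriptor_map_b - d_a, axis=-1)  # (H, W) distances
    idx = np.unravel_index(np.argmin(dist), dist.shape)
    return tuple(int(i) for i in idx)

# Toy example: two identical 8x8 maps of 3-D descriptors, so the query
# pixel should match itself exactly.
rng = np.random.default_rng(0)
A = rng.random((8, 8, 3))
B = A.copy()
print(best_match(A, B, (2, 5)))  # -> (2, 5)
```

In the paper's setting the two descriptor maps would come from images of the fabric in different configurations, so this lookup lets a policy track a chosen fabric point (e.g. a corner to grasp) across deformations.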

To read the full article, click here.