
Ken Goldberg at ISRR 2022

19 Oct, 2022


Check out the amazing work that Ken Goldberg has been up to with his various co-authors at the 2022 International Symposium on Robotics Research (ISRR)! Read about the papers below!

Efficiently Learning Single-Arm Fling Motions to Smooth Garments

Lawrence Yunliang Chen, Huang Huang, Ellen Novoseller, Daniel Seita, Jeffrey Ichnowski, Michael Laskey, Richard Cheng, Thomas Kollar, Ken Goldberg

Recent work has shown that 2-arm "fling" motions can be effective for garment smoothing. We consider single-arm fling motions. Unlike 2-arm fling motions, which require little robot trajectory parameter tuning, single-arm fling motions are very sensitive to trajectory parameters. We consider a single 6-DOF robot arm that learns fling trajectories to achieve high garment coverage. Given a garment grasp point, the robot explores different parameterized fling trajectories in physical experiments. To improve learning efficiency, we propose a coarse-to-fine learning method that first uses a multi-armed bandit (MAB) framework to efficiently find a candidate fling action, which it then refines via a continuous optimization method. Further, we propose novel training- and execution-time stopping criteria based on fling outcome uncertainty; the training-time stopping criterion increases data efficiency, while the execution-time stopping criterion leverages repeated fling actions to increase performance. Compared to baselines, the proposed method significantly accelerates learning. Moreover, with prior experience on similar garments collected through self-supervision, the MAB learning time for a new garment is reduced by up to 87%. We evaluate on 36 real garments: towels, T-shirts, long-sleeve shirts, dresses, sweat pants, and jeans. Results suggest that using prior experience, a robot requires under 30 minutes to learn a fling action for a novel garment that achieves 60-94% coverage. Supplementary material can be found at https://sites.google.com/view/single-arm-fling.
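The coarse-to-fine idea can be illustrated with a toy sketch. This is not the paper's implementation: it assumes a single fling parameter and a synthetic, noisy coverage function standing in for physical trials, and all function names and constants are hypothetical. The coarse stage is a UCB1 bandit over a discretized parameter grid; the fine stage is a simple local refinement around the bandit's candidate.

```python
import math
import random

random.seed(0)

# Hypothetical stand-in for a physical fling trial: coverage as a noisy
# function of one trajectory parameter (e.g., a normalized release velocity).
def fling_coverage(v):
    return max(0.0, 1.0 - 4 * (v - 0.62) ** 2) + random.gauss(0, 0.02)

# Coarse stage: UCB1 multi-armed bandit over a discretized parameter grid.
def ucb_search(arms, rounds=60):
    counts = [0] * len(arms)
    sums = [0.0] * len(arms)
    for t in range(1, rounds + 1):
        # Pull each arm once first, then balance exploration vs. exploitation.
        ucb = [
            float("inf") if counts[i] == 0
            else sums[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i])
            for i in range(len(arms))
        ]
        i = ucb.index(max(ucb))
        counts[i] += 1
        sums[i] += fling_coverage(arms[i])
    best = max(range(len(arms)), key=lambda i: sums[i] / max(counts[i], 1))
    return arms[best]

# Fine stage: random local search around the candidate, shrinking the step
# size on each improvement (a crude stand-in for continuous optimization).
def refine(v, step=0.05, iters=20):
    best_v, best_c = v, fling_coverage(v)
    for _ in range(iters):
        cand = best_v + random.uniform(-step, step)
        c = fling_coverage(cand)
        if c > best_c:
            best_v, best_c = cand, c
            step *= 0.9
    return best_v

coarse = ucb_search([i / 10 for i in range(11)])
fine = refine(coarse)
```

The bandit quickly narrows a grid of 11 candidate parameters to the most promising one; the refinement stage then polishes it without re-exploring the whole space, which mirrors why the coarse-to-fine split saves physical trials.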

Mechanical Search on Shelves with Efficient Stacking and Destacking of Objects

Huang Huang, Letian Fu, Michael Danielczuk, Chung Min Kim, Zachary Tam, Jeffrey Ichnowski, Anelia Angelova, Brian Ichter, Ken Goldberg

Although stacking objects increases shelf storage efficiency, the lack of visibility and accessibility makes the mechanical search problem of revealing and extracting a target object difficult for robots. In this paper, we extend the lateral-access mechanical search problem to shelves with stacked items and introduce two novel policies, Distribution Area Reduction for Stacked Scenes (DARSS) and Monte Carlo Tree Search for Stacked Scenes (MCTSSS), that use destacking and restacking actions. MCTSSS improves on prior lookahead policies by considering three future states after each potential action. Experiments with 3600 simulated trials and 18 physical trials with a Fetch robot equipped with a blade and suction cup suggest that these policies can reveal the target object with 82-100% success in simulation, outperforming the baseline by up to 66%, and can achieve 67-100% success in physical experiments. DARSS outperforms MCTSSS on the median number of steps to reveal the target, but MCTSSS has a higher success rate in physical experiments, suggesting robustness to perception noise. See https://sites.google.com/berkeley.edu/stax-ray for supplementary material.
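The three-step lookahead can be caricatured in a toy model. Everything here is hypothetical: states are a tuple of stack heights on a shelf, the target counts as "revealed" when slot 0 is empty, and MCTSSS's Monte Carlo sampling is replaced by exhaustive depth-3 enumeration, which is feasible only in a model this small.

```python
# Hypothetical abstract model: state = tuple of stack heights on a shelf;
# the target is revealed (reward 1.0) when slot 0 reaches height 0.
ACTIONS = ["destack", "restack", "push"]

def step(state, action):
    s = list(state)
    if action == "destack" and s[0] > 0:
        s[0] -= 1
        s[1] += 1
    elif action == "restack" and s[1] > 0:
        s[1] -= 1
        s[2] += 1
    # "push" is a no-op in this toy model
    return tuple(s)

def reward(state):
    return 1.0 if state[0] == 0 else 0.0

# Three-step lookahead: score every action by the best outcome reachable
# within the remaining depth, and return the first action of the best plan.
def lookahead_policy(state, depth=3):
    def best_value(s, d):
        if d == 0 or reward(s) == 1.0:
            return reward(s)
        return max(best_value(step(s, a), d - 1) for a in ACTIONS)
    return max(ACTIONS, key=lambda a: best_value(step(state, a), depth - 1))
```

With three objects stacked on the target slot, only a sequence of destacks reveals the target within three steps, so the policy prefers "destack" even though no single action is immediately rewarded. That deferred credit is what one-step (greedy) policies miss.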

Multi-Object Grasping in the Plane

Wisdom C. Agboh, Jeffrey Ichnowski, Ken Goldberg, Mehmet R. Dogar

We consider a novel problem where multiple rigid convex polygonal objects rest in randomly placed positions and orientations on a planar surface visible from an overhead camera. The objective is to efficiently grasp and transport all objects into a bin using multi-object push-grasps, in which multiple objects are pushed together to facilitate multi-object grasping. We provide necessary conditions for frictionless multi-object push-grasps and apply these to filter inadmissible grasps in a novel multi-object grasp planner. We find that our planner is 19 times faster than a MuJoCo simulator baseline. We also propose a picking algorithm that uses both single- and multi-object grasps to pick objects. In physical grasping experiments comparing performance with a single-object picking baseline, we find that the frictionless multi-object grasping system achieves 13.6% higher grasp success and is 59.9% faster (212 vs. 340 picks per hour). See https://sites.google.com/view/multi-object-grasping for videos and code.
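The grasp-filtering step can be sketched with one simple necessary condition, assuming a parallel-jaw gripper: the pushed-together objects must fit within the maximum jaw opening. This is a hypothetical illustration only, not the paper's frictionless contact analysis, and the 0.085 m opening is an assumed value.

```python
# Hypothetical admissibility filter for a parallel-jaw multi-object grasp:
# keep a candidate only if the summed widths of the pushed-together objects
# fit within the gripper's maximum opening. This is just one simple
# necessary condition; it does not capture the full frictionless analysis.
MAX_OPENING = 0.085  # meters, assumed gripper opening

def admissible(object_widths, max_opening=MAX_OPENING):
    return sum(object_widths) <= max_opening

def filter_grasps(candidates, max_opening=MAX_OPENING):
    # candidates: list of candidate grasps, each a list of object widths
    return [c for c in candidates if admissible(c, max_opening)]
```

Because a necessary condition can only rule grasps out, a cheap geometric check like this prunes the candidate set before any expensive simulation, which is the kind of filtering that lets a planner beat a full physics-simulator baseline on speed.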