Ken Goldberg at CoRL 2022

02 Nov, 2022

The Conference on Robot Learning (CoRL) is an annual international conference focusing on the intersection of robotics and machine learning. CoRL 2022 will be held in Auckland, New Zealand, from December 14 to 18, 2022. Our faculty member Ken Goldberg will present two papers.

The first is Fleet-DAgger: Interactive Robot Fleet Learning with Scalable Human Supervision, co-authored with Ryan Hoque, Lawrence Yunliang Chen, Satvik Sharma, Karthik Dharmarajan, Brijen Thananjeyan, and Pieter Abbeel. It presents a formalism, algorithms, and a benchmark for interactive fleet learning: interactive learning with multiple robots and multiple humans. From the abstract:

Commercial and industrial deployments of robot fleets often fall back on remote human teleoperators during execution when robots are at risk or unable to make task progress. With continual learning, interventions from the remote pool of humans can also be used to improve the robot fleet control policy over time. A central question is how to effectively allocate limited human attention to individual robots. Prior work addresses this in the single-robot, single-human setting. We formalize the Interactive Fleet Learning (IFL) setting, in which multiple robots interactively query and learn from multiple human supervisors. We present a fully implemented open-source IFL benchmark suite of GPU-accelerated Isaac Gym environments for the evaluation of IFL algorithms. We propose Fleet-DAgger, a family of IFL algorithms, and compare a novel Fleet-DAgger algorithm to 4 baselines in simulation. We also perform 1000 trials of a physical block-pushing experiment with 4 ABB YuMi robot arms. Experiments suggest that the allocation of humans to robots significantly affects the performance of the fleet, and that our algorithm achieves up to 8.8x higher return on human effort than baselines. See https://sites.google.com/view/fleet-dagger for supplemental material.
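To make the allocation question concrete, here is a minimal sketch of the kind of prioritized human-to-robot assignment the abstract describes. The function name, the NumPy setup, and the random risk score standing in for the priority signal are all illustrative assumptions, not the paper's actual Fleet-DAgger prioritization:

```python
import numpy as np

def allocate_humans(priorities: np.ndarray, num_humans: int) -> np.ndarray:
    """Return indices of the robots that receive human teleoperation.

    priorities: one score per robot, where higher means more in need of help
    (e.g., estimated risk of failure or inability to make task progress).
    The exact priority function is the hypothetical part of this sketch.
    """
    k = min(num_humans, len(priorities))
    # Give the k available humans to the k highest-priority robots.
    return np.argsort(priorities)[-k:]

# Example: a fleet of 6 robots supervised by 2 humans.
rng = np.random.default_rng(0)
risk_scores = rng.random(6)  # stand-in for a learned priority signal
helped = allocate_humans(risk_scores, num_humans=2)
print(f"Robots receiving teleoperation this step: {sorted(helped.tolist())}")
```

In the setting the abstract describes, the priority signal would reflect each robot's risk or lack of task progress, and the assignment would be recomputed as the fleet runs, so limited human attention keeps shifting to the robots that need it most.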

The second is DayDreamer: World Models for Physical Robot Learning, co-authored with Philipp Wu, Alejandro Escontrela, Danijar Hafner, and Pieter Abbeel. From the abstract:

To solve tasks in complex environments, robots need to learn from experience. Deep reinforcement learning is a common approach to robot learning but requires a large amount of trial and error to learn, limiting its deployment in the physical world. As a consequence, many advances in robot learning rely on simulators. However, learning in simulation fails to capture the complexity of the real world, is prone to simulator inaccuracies, and the resulting behaviors do not adapt to changes in the real world. The Dreamer algorithm has recently shown great promise for learning from small amounts of interaction by planning within a learned world model, outperforming pure reinforcement learning in video games and simulated control domains. The world model learns to predict the outcomes of potential actions, reducing the amount of trial and error needed in the real environment. However, it is unknown whether Dreamer can facilitate faster learning on physical robots. In this paper, we apply Dreamer to four robots and tasks to learn online and directly in the real world, without any simulators. On a quadruped robot, Dreamer learns to roll off its back, stand up, and walk from scratch and without resets in only one hour. Afterwards, we manually push the robot, showing that the continuously learning robot adapts its behavior within 10 minutes to withstand pushes or quickly roll over and get back on its feet. On two different robotic arms, Dreamer learns to pick and place multiple objects directly from camera images and sparse rewards, approaching human performance. On a wheeled robot, Dreamer learns to navigate to goal positions purely from camera inputs, automatically resolving ambiguity about the robot orientation by integrating sensory inputs over time. All experiments in this paper use the same algorithm and hyperparameters, demonstrating the generality and robustness of the approach. Results suggest that Dreamer is capable of online learning in the real world, establishing a strong baseline. We release our infrastructure as a platform for future applications of world models to robot learning.
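The key mechanism, learning a model that predicts the outcomes of potential actions and then rolling the policy forward inside that model instead of the real environment, can be sketched in a few lines. The toy linear dynamics, class and function names, and training loop below are illustrative assumptions standing in for Dreamer's neural world model, not the paper's implementation:

```python
import numpy as np

class WorldModel:
    """Toy stand-in for a learned dynamics model (names hypothetical).

    A real world model encodes camera images into a compact latent state and
    predicts the next state and reward for any action; here a linear
    next-state predictor keeps the sketch self-contained.
    """
    def __init__(self, state_dim: int, action_dim: int, lr: float = 0.01):
        self.W = np.zeros((state_dim, state_dim + action_dim))
        self.lr = lr

    def predict(self, state, action):
        return self.W @ np.concatenate([state, action])

    def train_step(self, state, action, next_state):
        # One gradient step on the squared next-state prediction error.
        x = np.concatenate([state, action])
        error = self.W @ x - next_state
        self.W -= self.lr * np.outer(error, x)
        return float(np.sum(error ** 2))

def imagine_rollout(model, state, policy, horizon=15):
    """Roll the policy forward inside the model: no real-world steps needed."""
    trajectory = []
    for _ in range(horizon):
        action = policy(state)
        state = model.predict(state, action)
        trajectory.append((state, action))
    return trajectory

# Example: fit the model on random transitions from a known linear system,
# then "dream" a trajectory without touching the environment again.
rng = np.random.default_rng(0)
A, B = np.array([[0.9, 0.1], [0.0, 0.9]]), np.array([[0.0], [0.5]])
model = WorldModel(state_dim=2, action_dim=1)
for _ in range(2000):
    s, a = rng.normal(size=2), rng.normal(size=1)
    model.train_step(s, a, A @ s + B @ a)
dream = imagine_rollout(model, np.array([1.0, 0.0]), policy=lambda s: -0.1 * s[:1])
print("imagined final state:", dream[-1][0])
```

This is the sense in which a world model reduces real-world trial and error: once the predictor is accurate, behavior can be improved on imagined rollouts, so each minute of physical interaction goes further.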

To learn more about the conference, please visit the CoRL 2022 website.