Ken Goldberg at NeurIPS
BCNM faculty member Ken Goldberg will present "Video Prediction Models as Rewards for Reinforcement Learning," a paper accepted to the Main Conference Track of Advances in Neural Information Processing Systems 36 (NeurIPS 2023). The authors include:
Alejandro Escontrela, Ademi Adeniji, Wilson Yan, Ajay Jain, Xue Bin Peng, Ken Goldberg, Youngwoon Lee, Danijar Hafner, Pieter Abbeel
Ken is an American artist, writer, inventor, and researcher in the field of robotics and automation. He is professor and chair of the Industrial Engineering and Operations Research department at the University of California, Berkeley, and holds the William S. Floyd Jr. Distinguished Chair in Engineering at Berkeley, with joint appointments in Electrical Engineering and Computer Sciences (EECS), Art Practice, and the School of Information. Goldberg also holds an appointment in the Department of Radiation Oncology at the University of California, San Francisco.
Abstract
Specifying reward signals that allow agents to learn complex behaviors is a long-standing challenge in reinforcement learning. A promising approach is to extract preferences for behaviors from unlabeled videos, which are widely available on the internet. We present Video Prediction Rewards (VIPER), an algorithm that leverages pretrained video prediction models as action-free reward signals for reinforcement learning. Specifically, we first train an autoregressive transformer on expert videos and then use the video prediction likelihoods as reward signals for a reinforcement learning agent. VIPER enables expert-level control without programmatic task rewards across a wide range of DMC, Atari, and RLBench tasks. Moreover, generalization of the video prediction model allows us to derive rewards for an out-of-distribution environment where no expert data is available, enabling cross-embodiment generalization for tabletop manipulation. We see our work as a starting point for scalable reward specification from unlabeled videos that will benefit from the rapid advances in generative modeling. Source code and datasets are available on the project website: https://ViperRL.com
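The core idea in the abstract, using the log-likelihood of the agent's observations under a video model trained on expert videos as the reward, can be sketched with a toy example. The paper uses an autoregressive transformer over video frames; the `ToyVideoModel` below is a hypothetical stand-in (a simple Markov model over discrete frame tokens, not the authors' implementation) that makes the reward computation concrete:

```python
import numpy as np

class ToyVideoModel:
    """Hypothetical stand-in for a pretrained autoregressive video model.
    VIPER uses transformer likelihoods over expert videos; here a Markov
    model over discrete frame tokens illustrates the same interface."""

    def __init__(self, transition: np.ndarray):
        # transition[i, j] = P(next frame token = j | current token = i)
        self.transition = transition

    def log_likelihood(self, prev_token: int, token: int) -> float:
        # log p(x_t | x_{<t}) under the (toy) video prior
        return float(np.log(self.transition[prev_token, token]))


def viper_style_rewards(model, frame_tokens):
    """Reward r_t = log p(x_t | x_{<t}): high when the agent's trajectory
    resembles the expert videos the model was trained on."""
    return [model.log_likelihood(prev, cur)
            for prev, cur in zip(frame_tokens[:-1], frame_tokens[1:])]


# Toy "expert" dynamics: token i is usually followed by (i + 1) % 3.
T = np.array([[0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8],
              [0.8, 0.1, 0.1]])
model = ToyVideoModel(T)

expert_like = [0, 1, 2, 0]   # follows the expert pattern
off_policy = [0, 0, 2, 1]    # deviates from it

r_expert = sum(viper_style_rewards(model, expert_like))
r_off = sum(viper_style_rewards(model, off_policy))
```

An RL agent maximizing these rewards is pushed toward expert-like behavior without any hand-specified task reward, since `r_expert` exceeds `r_off` whenever the trajectory matches the patterns the video model assigns high probability to.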