
Ken Goldberg at RSS 2023

17 Jul, 2023


At this year's Robotics: Science and Systems (RSS) conference, Ken Goldberg gave a keynote presentation, Expanding Access to Robots for Education and Research, in the workshop on lowering barriers for robotics research.

From the description:

Today, most robots are like personal computers were in the 1980s: expensive and isolated systems with limited software, computation, and memory. This is changing with “Cloud Robotics”, where robots are connected and available via the Internet and 5G networks. I’ll describe our work on FogROS2, a networked system that facilitates access, rapid deployment, automated software updates, distributed data collection, and deep learning to continuously improve performance. I will also tell the surprising story about the African Robotics Network and the Ultra-Affordable Robot Design Challenge.

Check it out here!

He also co-authored a paper titled Self-Supervised Visuo-Tactile Pretraining to Locate and Follow Garment Features with Justin Kerr, Huang Huang, Albert Wilcox, Ryan I Hoque, Jeffrey Ichnowski, and Roberto Calandra.

From the abstract:

Humans make extensive use of vision and touch as complementary senses, with vision providing global information about the scene and touch measuring local information during manipulation without suffering from occlusions. While prior work demonstrates the efficacy of tactile sensing for precise manipulation of deformables, it typically relies on supervised, human-labeled datasets. We propose Self-Supervised Visuo-Tactile Pretraining (SSVTP), a framework for learning multi-task visuo-tactile representations in a self-supervised manner through cross-modal supervision. We design a mechanism that enables a robot to autonomously collect precisely spatially-aligned visual and tactile image pairs, then train visual and tactile encoders to embed these pairs into a shared latent space using cross-modal contrastive loss. We apply this latent space to downstream perception and control of deformable garments on flat surfaces, and evaluate the flexibility of the learned representations without fine-tuning on 5 tasks: feature classification, contact localization, anomaly detection, feature search from a visual query (e.g., garment feature localization under occlusion), and edge following along cloth edges. The pretrained representations achieve a 73-100% success rate on these 5 tasks.
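To make the "cross-modal contrastive loss" idea concrete, here is a minimal sketch in NumPy of a CLIP-style symmetric InfoNCE objective over paired embeddings. This is an illustration of the general technique, not the paper's exact implementation; the function names, the temperature value, and the choice of a symmetric two-direction loss are all assumptions for the example.

```python
import numpy as np

def log_softmax(x, axis=-1):
    # Numerically stable log-softmax: subtract the max before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def cross_modal_contrastive_loss(z_vis, z_tac, temperature=0.1):
    """Symmetric InfoNCE loss over a batch of aligned (visual, tactile) pairs.

    z_vis, z_tac: (batch, dim) embeddings, where row i of each array comes
    from the same spatially-aligned visual/tactile image pair (hypothetical
    setup mirroring the autonomous data collection described in the abstract).
    """
    # Project both modalities onto the unit sphere so dot products are cosines.
    z_vis = z_vis / np.linalg.norm(z_vis, axis=1, keepdims=True)
    z_tac = z_tac / np.linalg.norm(z_tac, axis=1, keepdims=True)

    # logits[i, j] compares visual embedding i with tactile embedding j;
    # aligned pairs sit on the diagonal.
    logits = (z_vis @ z_tac.T) / temperature

    # Vision -> touch: each row is a softmax classification whose correct
    # "class" is the matching tactile embedding; and symmetrically for columns.
    loss_v2t = -np.diag(log_softmax(logits, axis=1)).mean()
    loss_t2v = -np.diag(log_softmax(logits, axis=0)).mean()
    return (loss_v2t + loss_t2v) / 2.0
```

Minimizing this loss pulls each visual/tactile pair together in the shared latent space while pushing apart mismatched pairs from the rest of the batch, which is what allows the frozen encoders to be reused across the downstream tasks without fine-tuning.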

Check out more here!