
Ken Goldberg's DexNet in the News

01 Jun, 2017


BCNM professor Ken Goldberg's work on "DexNet" has recently been featured in the media. DexNet first appeared on the UC Berkeley homepage in an article by Brett Israel in Research, Technology and Engineering, entitled "Meet the most nimble-fingered robot ever built."

From the article:

Grabbing the awkwardly shaped items that people pick up in their day-to-day lives is a slippery task for robots. Irregularly shaped items such as shoes, spray bottles, open boxes, even rubber duckies are easy for people to grab and pick up, but robots struggle with knowing where to apply a grip. In a significant step toward overcoming this problem, roboticists at UC Berkeley have built a robot that can pick up and move unfamiliar, real-world objects with a 99 percent success rate.

Berkeley professor Ken Goldberg, postdoctoral researcher Jeff Mahler and the Laboratory for Automation Science and Engineering (AUTOLAB) created the robot, called DexNet 2.0. DexNet 2.0’s high grasping success rate means that this technology could soon be applied in industry, with the potential to revolutionize manufacturing and the supply chain.

Read the rest of the Berkeley News article here.

DexNet next appeared in an article by Will Knight in the MIT Technology Review, entitled "Meet the Most Nimble-Fingered Robot Yet."

From the article:

A dexterous multi-fingered robot practiced using virtual objects in a simulated world, showing how machine learning and the cloud could revolutionize manual work.

[DexNet] learned what kind of grip should work for different items by studying a vast data set of 3-D shapes and suitable grasps. The UC Berkeley researchers fed images to a large deep-learning neural network connected to an off-the-shelf 3-D sensor and a standard robot arm. When a new object is placed in front of it, the robot’s deep-learning system quickly figures out what grasp the arm should use.

Many researchers are working on ways for robots to learn to grasp and manipulate things by practicing over and over, but the process is very time-consuming. The new robot learns without needing to practice, and it is significantly better than any previous system. “We’re producing better results but without that kind of experimentation,” says Ken Goldberg, a professor at UC Berkeley who led the work. “We’re very excited about this.”

Goldberg and colleagues plan to release the data set they created. Public data sets have been important for advancing the state of the art in computer vision, and now new 3-D data sets promise to help robots advance.

Read the rest of the MIT Technology Review article here.
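The excerpt above outlines the basic pipeline: a deep network trained on a large dataset of 3-D shapes and suitable grasps scores candidate grasps computed from a depth image, and the arm executes the highest-scoring one. The short Python sketch below illustrates that select-and-score loop only; the names (GraspQualityNet, sample_candidate_grasps, crop_around) and the placeholder random scoring are hypothetical illustrations, not the actual Dex-Net 2.0 code or API.

import numpy as np

class GraspQualityNet:
    """Stand-in for a learned grasp-quality model: maps a depth-image crop
    plus a candidate grasp (row, col, angle) to a score in [0, 1]."""
    def predict(self, depth_crop: np.ndarray, grasp: np.ndarray) -> float:
        # A real system would run a trained convolutional network here;
        # this placeholder just returns a random score.
        return float(np.random.rand())

def sample_candidate_grasps(depth_image: np.ndarray, num_samples: int = 100) -> np.ndarray:
    """Sample candidate planar grasps as (row, col, angle) triples."""
    rows = np.random.randint(0, depth_image.shape[0], size=num_samples)
    cols = np.random.randint(0, depth_image.shape[1], size=num_samples)
    angles = np.random.uniform(0, np.pi, size=num_samples)
    return np.stack([rows, cols, angles], axis=1)

def crop_around(depth_image: np.ndarray, grasp: np.ndarray, size: int = 32) -> np.ndarray:
    """Extract a fixed-size crop centered on the grasp point (zero-padded at edges)."""
    r, c = int(grasp[0]), int(grasp[1])
    padded = np.pad(depth_image, size, mode="constant")
    return padded[r: r + 2 * size, c: c + 2 * size]

def select_best_grasp(depth_image: np.ndarray, model: GraspQualityNet) -> np.ndarray:
    """Score every candidate grasp with the learned model and return the best one."""
    candidates = sample_candidate_grasps(depth_image)
    scores = [model.predict(crop_around(depth_image, g), g) for g in candidates]
    return candidates[int(np.argmax(scores))]

if __name__ == "__main__":
    depth_image = np.random.rand(480, 640)  # stand-in for a frame from a 3-D sensor
    best = select_best_grasp(depth_image, GraspQualityNet())
    print("best grasp (row, col, angle):", best)

Because the scoring model is trained offline on simulated shape-and-grasp data, the robot can choose a grip for a new object in a single forward pass rather than learning by trial and error, which is the speedup the article highlights.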

Finally, DexNet was featured in an article by Devin Coldewey in TechCrunch, entitled "This robot arm's AI thinks like we do about how to grab something."

From the article:

Robots are great at doing things they’ve been shown how to do, but when presented with a novel problem, such as an unfamiliar shape that needs to be gripped, they tend to choke. AI is helping there in the form of systems like Dex-Net, which uses deep learning to let a robotic arm improvise an effective grip for objects it’s never seen before.

The basic idea behind the system is rather like how we figure out how to pick things up. You see an object, understand its shape and compare it to other objects you’ve picked up in the past, then use that information to choose the best way to grab it.

Read the rest of the TechCrunch article here.