researchers at MIT's computer science and artificial intelligence laboratory have developed a robotic arm that can handle virtually any object by learning how best to pick it up. the system, called dense object nets (DON), builds on technology that lets robots make basic distinctions between items. DON takes this a step further by letting robots inspect unfamiliar objects and visually understand them well enough to accomplish specific tasks without ever having seen them before.

MIT's robotic arm masters dexterity by teaching itself how to see

images courtesy of MIT

the system looks at objects as collections of points that serve as a sort of visual roadmap. this approach lets robots better understand and manipulate items, allowing them to pick out a specific object from a clutter of similar ones. that capability could prove valuable for the kinds of machines that companies like amazon and walmart use in their warehouses. with DON, a robot can be shown a specific spot on an object, such as the tongue of a shoe, then look at a shoe it has never seen before and successfully grab its tongue.
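to make that idea concrete, here is a minimal python sketch of that kind of point matching. it is not MIT's released code: the `descriptor_net` function stands in for a learned dense-descriptor network and is stubbed out with placeholder features, but the nearest-neighbour lookup shows how a point chosen on one object (say, a shoe's tongue) could be re-found on an image of an object the robot has never seen.

```python
# a minimal sketch of descriptor-based point matching -- not MIT's released
# DON code. `descriptor_net` stands in for a learned dense-descriptor
# network and is stubbed out with placeholder features here.
import numpy as np

def descriptor_net(image: np.ndarray, dim: int = 16) -> np.ndarray:
    """placeholder for a learned network that maps each pixel of an
    (H, W, 3) image to a D-dimensional descriptor vector."""
    h, w, _ = image.shape
    rng = np.random.default_rng(0)  # deterministic stub features
    return rng.standard_normal((h, w, dim)).astype(np.float32)

def find_corresponding_pixel(ref_image, ref_pixel, new_image):
    """look up the descriptor at `ref_pixel` (row, col) in the reference
    image, then return the pixel in `new_image` whose descriptor is
    closest in euclidean distance -- e.g. re-finding a shoe's tongue."""
    ref_desc = descriptor_net(ref_image)
    new_desc = descriptor_net(new_image)
    target = ref_desc[ref_pixel]                        # (D,)
    dists = np.linalg.norm(new_desc - target, axis=2)   # (H, W)
    return np.unravel_index(np.argmin(dists), dists.shape)

# usage: mark the 'tongue' pixel once on a reference shoe image, then
# locate the matching point on an image of a shoe never seen before
ref_img = np.zeros((120, 160, 3), dtype=np.float32)
new_img = np.zeros((120, 160, 3), dtype=np.float32)
row, col = find_corresponding_pixel(ref_img, (40, 80), new_img)
print(f"grasp target in new image: ({row}, {col})")
```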

'many approaches to manipulation can't identify specific parts of an object across the many orientations that object may encounter,' research author lucas manuelli says in an MIT press release. 'for example, existing algorithms would be unable to grasp a mug by its handle, especially if the mug could be in multiple orientations, like upright, or on its side.'

MIT's robotic arm masters dexterity by teaching itself how to see

the team trained the system to look at objects as a series of points that together make up a larger coordinate system. it can then map different points together to visualize an object's 3-D shape, much as panoramic photos are stitched together from multiple images. beyond industrial settings, the MIT researchers think DON could prove useful in the home, tidying up general clutter or performing specific tasks like putting away the dishes.
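a common way to train dense descriptors of this kind is a pixelwise contrastive objective: pixels that the multi-view 3-D reconstruction identifies as the same physical point should end up with descriptors that sit close together, while unrelated pixels are pushed apart. the sketch below is a rough illustration of that idea in pytorch, not MIT's implementation; the tensor shapes, the `margin` value and the toy data are assumptions made for the example.

```python
# a rough sketch of a pixelwise contrastive loss for dense descriptors --
# an illustration of the training idea, not MIT's implementation.
import torch
import torch.nn.functional as F

def pixelwise_contrastive_loss(desc_a, desc_b, matches_a, matches_b,
                               non_matches_a, non_matches_b, margin=0.5):
    """desc_a, desc_b: (N, D) flattened descriptor maps for two views of
    the same scene. matches_* index pixels that depict the same physical
    point; non_matches_* index pixels that do not correspond."""
    # pull descriptors of corresponding pixels together
    match_loss = (desc_a[matches_a] - desc_b[matches_b]).pow(2).sum(dim=1).mean()
    # push descriptors of non-corresponding pixels at least `margin` apart
    non_match_dist = (desc_a[non_matches_a] - desc_b[non_matches_b]).norm(dim=1)
    non_match_loss = F.relu(margin - non_match_dist).pow(2).mean()
    return match_loss + non_match_loss

# toy example: two 'views' with 100 pixels and 16-D descriptors
desc_a = torch.randn(100, 16, requires_grad=True)
desc_b = torch.randn(100, 16, requires_grad=True)
matches = torch.arange(10)                    # pretend pixels 0..9 correspond
non_matches_a = torch.randint(0, 100, (30,))
non_matches_b = torch.randint(0, 100, (30,))
loss = pixelwise_contrastive_loss(desc_a, desc_b, matches, matches,
                                  non_matches_a, non_matches_b)
loss.backward()   # gradients would flow back into the descriptor network
print(loss.item())
```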