Researchers at MIT's Computer Science and Artificial Intelligence Laboratory have developed a robotic arm that has taught itself how to see certain objects.
The robotic arm, driven by a system called Dense Object Nets (DON), builds on technology that lets robots make basic distinctions between items. DON takes this a step further: the robot can inspect random objects and understand them visually well enough to accomplish assigned tasks, without ever having seen those objects before.
The system represents objects as collections of points that serve as a sort of visual roadmap. This approach lets the robot better understand and manipulate items, allowing it to pick out a specific object from a clutter of similar ones. That capability could prove valuable for the kinds of machines that companies like Amazon and Walmart use in their warehouses. With DON, a robot can grab onto a specific spot on an object, such as the tongue of a shoe, as seen in the video below. From that, it can look at a shoe it has never seen before and successfully grab its tongue.
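The grab-the-tongue trick boils down to a correspondence search: a point annotated on a reference object is matched to the pixel on a new object whose learned descriptor is most similar. The sketch below is a minimal illustration of that lookup, not MIT's actual code; the array shapes, the `match_point` name, and the use of plain Euclidean distance are assumptions for the example.

```python
import numpy as np

def match_point(ref_desc, ref_pixel, target_desc):
    """Find the pixel in target_desc whose descriptor is closest
    (by Euclidean distance) to the descriptor at ref_pixel in ref_desc.

    ref_desc, target_desc: (H, W, D) arrays of per-pixel descriptors,
        as a network like DON might output (shapes assumed here).
    ref_pixel: (row, col) of the annotated point, e.g. a shoe tongue.
    Returns the (row, col) of the best match in the target image.
    """
    d = ref_desc[ref_pixel]               # (D,) descriptor at the chosen point
    dist = np.linalg.norm(target_desc - d, axis=-1)  # (H, W) distance map
    return np.unravel_index(np.argmin(dist), dist.shape)
```

Because the descriptors are trained to be consistent across object instances and orientations, the same lookup works even when the target shoe was never seen during training.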
"Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter," study co-author Lucas Manuelli says in an MIT press release. "For example, existing algorithms would be unable to grasp a mug by its handle, especially if the mug could be in multiple orientations, like upright, or on its side."
The MIT team trained the system to look at objects as a series of points that make up a larger coordinate system. It can then map different points together to visualise an object's 3D shape, much as a panoramic photo is stitched together from multiple shots. Beyond industrial settings, the researchers think DON could prove useful in the home, tidying up general clutter or performing specific tasks like putting away the dishes.
You can see the robotic arm that taught itself to pick things up in action below.