THIS NEW METHOD IMPROVES ROBOTS' GRASPING ABILITIES
Robots that can pick up and move objects may prove useful in the near future, but they are typically programmed to grasp only specific types of objects placed in a set position and orientation.
Now, however, scientists have devised a way to make these robots more versatile. "One of the key shortcomings of current robotic grasping systems is the inability to quickly adapt to change, such as when an object gets moved," says Dr Jurgen Leitner, from Australia's Queensland University of Technology (QUT). "The world is not predictable – things change and move and get mixed up and, often, that happens without warning – so robots need to be able to adapt and work in very unstructured environments if we want them to be effective."
To that end, a team led by Leitner started by developing an artificial neural network, an artificial intelligence-based system that lets computers learn tasks by analysing examples. Using that network and a depth-mapping camera, a two-fingered picking robot was able to build a pixel-by-pixel depth map of a moving, cluttered collection of objects placed in front of it, and then determine the best grasp for picking up any one of those objects.
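To make the "single pass" idea concrete, the sketch below shows a small fully convolutional network that takes a depth image in and produces one grasp prediction per pixel in one forward computation. This is a minimal illustration written in PyTorch, not the QUT team's actual code: the layer sizes, the assumed 300 x 300 input, and the three output quantities (grasp quality, gripper angle encoded as sin/cos, gripper width) are all illustrative assumptions.

```python
# Minimal sketch of a single-pass, pixel-wise grasp network.
# All layer sizes and names are illustrative assumptions, not the
# QUT team's implementation. Assumes a 300 x 300 depth image.
import torch
import torch.nn as nn

class GraspNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress the depth image into a coarse feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=9, stride=3, padding=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        )
        # Decoder: expand the features back to the input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=5, stride=2,
                               padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 8, kernel_size=9, stride=3,
                               padding=3), nn.ReLU(),
        )
        # One output map per pixel-wise quantity: grasp quality, the
        # gripper angle (encoded as sin/cos of twice the angle, so a
        # grasp and its 180-degree flip look identical), and width.
        self.quality = nn.Conv2d(8, 1, kernel_size=1)
        self.sin2 = nn.Conv2d(8, 1, kernel_size=1)
        self.cos2 = nn.Conv2d(8, 1, kernel_size=1)
        self.width = nn.Conv2d(8, 1, kernel_size=1)

    def forward(self, depth):
        # One forward pass yields every pixel's grasp prediction at once.
        x = self.decoder(self.encoder(depth))
        return self.quality(x), self.sin2(x), self.cos2(x), self.width(x)

net = GraspNetSketch()
q, s, c, w = net(torch.randn(1, 1, 300, 300))  # four 300 x 300 maps
```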
"By mapping what is in front of it using a depth image in a single pass, the robot doesn't need to sample many different possible grasps before making a decision, avoiding long computing times," says PhD researcher Douglas Morrison, one of the team members. "In our real-world tests, we achieved an 83% grasp success rate on a set of previously unseen objects with adversarial geometry and 88% on a set of household objects that were moved during the grasp attempt. We also achieve 81% accuracy when grasping in dynamic clutter."
This technology builds on the system that the team used to win the Amazon Robotics Challenge in 2017. Scientists from MIT and Princeton have also developed a system that allows robots to grab random objects from a bin, then identify what the objects are and where they should go.
You can see the QUT robot in object-grasping action in the video below.