MIT News reports that teaching a robot to play Jenga with traditional machine learning schemes requires tens of thousands of practice attempts. That starts to sound impractical once you consider how long it would take a human to restack all 54 blocks into neat, perpendicular layers of three tens of thousands of times.
However, the engineers at MIT found a shortcut around all of that by approaching the Jenga robot project from a human perspective. The robot can't hear, but it combines a sense of vision with a sense of touch, built around an "industry-standard ABB IRB 120 robotic arm." Using sight and touch, the Jenga robot quickly learned to cluster its measurements according to the different behaviours the blocks exhibited when they were pushed, pulled, tapped and toppled.
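The article doesn't describe the robot's actual learning model, but the general idea of grouping probe measurements by how blocks respond can be sketched with a plain clustering pass. Everything below is illustrative: the (push force, displacement) features, the sample data and the use of simple k-means are assumptions for the sketch, not the MIT team's method.

```python
import random

# Hypothetical probe measurements: (push force applied, how far the block moved).
# Loose blocks slide far under little force; load-bearing blocks barely budge.
measurements = [(0.2, 9.1), (0.3, 8.7), (0.25, 9.5),   # moves freely
                (2.8, 0.4), (3.1, 0.2), (2.9, 0.5)]    # resists the push

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: assign each point to its nearest centroid,
    recompute centroids, repeat for a fixed number of iterations."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        centroids = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c
                     else centroids[i] for i, c in enumerate(clusters)]
    return centroids, clusters

centroids, clusters = kmeans(measurements, k=2)
for centre, members in zip(centroids, clusters):
    print(f"behaviour at force={centre[0]:.2f}, "
          f"displacement={centre[1]:.2f}: {len(members)} probes")
```

Because the two behaviours are so well separated in this toy data, the pass reliably recovers a "moves freely" group and a "stuck" group, which is the flavour of distinction the robot learns from its own pokes.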
With a soft-pronged gripper, an external camera and a force-sensing wrist cuff, the robot can both see and feel individual blocks in a Jenga tower, according to an MIT statement. It "learns" in real time whether to remove any specific block, using visual and tactile feedback to switch to a different block if the tower starts to wobble, much as a human player would.
Robots like this can be taught how to function thanks to artificial intelligence, although the engineers had to give this one some basic information first. The team told the robot that the goal of Jenga is to remove blocks and then place them on top of the tower again. The rest is autonomous. "It decides on its own which block to push, [and] which blocks to probe; it decides on its own how to extract them, and it decides on its own when it's a good idea to keep extracting them or to move to another one," says Alberto Rodriguez, the Walter Henry Gale Career Development Assistant Professor in the Department of Mechanical Engineering at MIT. It can give up on a specific block if it needs to, which is an important skill in Jenga. (We all know that feeling when we give up on a block.)
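The probe-then-commit behaviour Rodriguez describes can be sketched as a simple decision loop. The force and wobble readings, the thresholds, and the three outcomes below are all invented for illustration; the real robot learns these judgments from its camera and wrist sensor rather than from hand-set limits.

```python
# Illustrative sketch of a probe/extract/give-up loop. All numbers here are
# hypothetical placeholders, not values from the MIT system.

STUCK_FORCE = 2.0   # assumed: above this resistance, treat block as load-bearing
WOBBLE_LIMIT = 0.5  # assumed: tower sway beyond this means back off

def try_extract(block):
    """Gently probe a block, then commit to extraction only if it moves
    freely and the tower stays stable; otherwise give up on it."""
    if block["probe_force"] > STUCK_FORCE:
        return "skip"      # load-bearing: don't fight it
    if block["tower_wobble"] > WOBBLE_LIMIT:
        return "abort"     # tower reacting badly: back off mid-attempt
    return "extract"       # loose and safe: pull it and restack on top

tower = [
    {"id": 3, "probe_force": 2.6, "tower_wobble": 0.1},  # stuck
    {"id": 7, "probe_force": 0.4, "tower_wobble": 0.7},  # loose but risky
    {"id": 9, "probe_force": 0.3, "tower_wobble": 0.1},  # good candidate
]
for block in tower:
    print(f"block {block['id']}: {try_extract(block)}")
```

The point of the sketch is the ordering: the robot spends a cheap probe before committing, and "give up and move on" is a first-class outcome rather than a failure.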
You can see the robot in action below.