Despite all the technological advances we have today, we are still far from producing fully autonomous robots: machines that can act and think on their own, without any human intervention.
Researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a system that gets us one step closer to that eventuality. The team created a program that lets users correct a robot's mistakes using only hand gestures and brainwaves.
Building on their previous work with binary-choice activities, CSAIL scientists expanded the robot's scope to multiple-choice tasks, like pointing a drill at one of three possible holes.
As seen in the video below, the user sports an EEG cap to monitor brain signals and an EMG armband to detect muscle signals (i.e., whether the person is pointing left or right).
When the human supervisor notices a robot making a mistake, the brain naturally produces signals called "error-related potentials" (ErrPs). If the system detects an ErrP, it stops so the user can correct the robot with a hand gesture. If not, the robot carries on with what it was doing.
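To make the decision flow concrete, here is a minimal sketch of that supervision loop. All function names, thresholds, and gesture labels are hypothetical stand-ins; the actual CSAIL system classifies live EEG and EMG streams, which this sketch replaces with precomputed scores.

```python
def detect_errp(eeg_score, threshold=0.5):
    """Flag an error-related potential when a (hypothetical) EEG
    classifier's confidence score crosses a threshold."""
    return eeg_score > threshold

def gesture_to_target(emg_direction):
    """Map a detected muscle gesture to a corrected target index
    (three holes, as in the drill-pointing task)."""
    return {"left": 0, "center": 1, "right": 2}[emg_direction]

def supervise(robot_choice, eeg_score, emg_direction):
    """One supervision step: keep the robot's choice unless an ErrP
    is observed, in which case the user's gesture selects a new target."""
    if detect_errp(eeg_score):
        return gesture_to_target(emg_direction)  # halt and correct
    return robot_choice  # no ErrP detected: carry on

# Example: the robot picks hole 2, the user's brain flags a mistake,
# and a leftward gesture redirects it to hole 0.
print(supervise(2, eeg_score=0.8, emg_direction="left"))  # → 0
```

The key design point the article describes is that the two signal types play different roles: the involuntary EEG response acts as a binary stop signal, while the deliberate EMG gesture supplies the spatial correction.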
"This work, combining EEG and EMG feedback, enables natural human-robot interactions for a broader set of applications than we’ve been able to do before using only EEG feedback," according to CSAIL director Daniela Rus. "By including muscle feedback, we can use gestures to command the robot spatially, with much more nuance and specificity."
The plug-and-play system works on users it has never encountered before, so organisations could deploy it immediately, without needing to train it on a specific person. Previous techniques required users to think in a specific but arbitrary way – for example, looking at different light displays that correspond to different robot tasks during a training session.
CSAIL's new approach is much more reliable and user-friendly.
"What’s great about this approach is that there’s no need to train users to think in a prescribed way," PhD candidate and lead study author Joseph DelPreto said in a statement. "The machine adapts to you, and not the other way around."
Under human supervision, the humanoid robot “Baxter” went from choosing the correct target 70% of the time to an impressive 97%.
The team – DelPreto, former CSAIL postdoc Andres Salazar-Gomez, former CSAIL research scientist Stephanie Gil, research scholar Ramin Hasani, and Boston University professor Frank Guenther – will present their findings at next week’s Robotics: Science and Systems (RSS) conference in Pittsburgh.
There is still plenty of work to be done, but the group is hopeful that the system could one day be useful for the elderly, or for workers with language disorders or limited mobility.
"We’d like to move away from a world where people have to adapt to the constraints of machines," Rus said. "Approaches like this show that it’s very much possible to develop robotic systems that are a more natural and intuitive extension of us."