VIDEO: HELPING COMPUTERS READ BODY LANGUAGE
Researchers at Carnegie Mellon University's Robotics Institute have been working on a computer system that can understand the body poses and movements of multiple people from video footage in real time. This means every movement: the pose of each individual, their hand gestures, even their individual finger movements.
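The article does not name the researchers' software, but their approach detects 2-D body keypoints directly from images. As a rough illustration of what such a detector's output looks like, here is a minimal Python sketch that runs a publicly released body-pose Caffe model through OpenCV's DNN module. The model file names are assumptions based on how such models are commonly distributed, and for simplicity the sketch takes only the single strongest peak per body part, so it effectively handles one person; the real-time multi-person grouping step is omitted.

```python
import cv2

# Assumed file names for the publicly released COCO body-pose Caffe model;
# adjust the paths to wherever the model files are stored locally.
PROTO = "pose_deploy_linevec.prototxt"
WEIGHTS = "pose_iter_440000.caffemodel"
N_PARTS = 18  # the COCO body model predicts 18 keypoints

net = cv2.dnn.readNetFromCaffe(PROTO, WEIGHTS)

frame = cv2.imread("people.jpg")
h, w = frame.shape[:2]

# The network expects a 368x368 BGR image scaled to [0, 1].
blob = cv2.dnn.blobFromImage(frame, 1.0 / 255, (368, 368),
                             (0, 0, 0), swapRB=False, crop=False)
net.setInput(blob)
out = net.forward()  # (1, channels, H, W); the first 18 channels are part heatmaps

points = []
for i in range(N_PARTS):
    heatmap = out[0, i, :, :]
    _, confidence, _, peak = cv2.minMaxLoc(heatmap)
    # Map the heatmap peak back to image coordinates.
    x = int(w * peak[0] / out.shape[3])
    y = int(h * peak[1] / out.shape[2])
    points.append((x, y) if confidence > 0.1 else None)

for p in points:
    if p is not None:
        cv2.circle(frame, p, 4, (0, 255, 0), thickness=-1)
cv2.imwrite("pose_annotated.jpg", frame)
```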
This method was developed with the help of the Panoptic Studio, a two-story dome embedded with 500 video cameras. The insights gathered from experiments in that facility now make it possible to detect the pose of a group of people using a single camera and a computer.
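The article does not spell out how a 500-camera dome helps, but the usual benefit of a calibrated multi-camera rig is geometric: a keypoint occluded in one view is visible in dozens of others, so its 3-D position can be recovered by triangulation and projected back into every camera to generate training labels automatically. The sketch below shows standard linear (DLT) triangulation in Python with NumPy; it is a generic textbook method, not the researchers' specific pipeline.

```python
import numpy as np

def triangulate(projections, points2d):
    """Linear (DLT) triangulation of one 3-D point from N calibrated views.

    projections: list of 3x4 camera projection matrices
    points2d:    list of (x, y) observations of the same keypoint, one per view
    """
    rows = []
    for P, (x, y) in zip(projections, points2d):
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.asarray(rows)
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenise

# Synthetic check: two unit-focal cameras observing the 3-D point (1, 2, 10).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # shifted along x
X_true = np.array([1.0, 2.0, 10.0, 1.0])
obs = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
print(triangulate([P1, P2], obs))  # approximately [1. 2. 10.]
```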
Detecting the nuances of nonverbal communication between individuals will allow robots to serve in social spaces, perceiving what the people around them are doing, what moods they are in and whether they can be interrupted. A self-driving car could get an early warning that a pedestrian is about to step into the street by monitoring body language. Enabling machines to understand human behaviour could also open new approaches to behavioural diagnosis and rehabilitation for conditions such as autism, dyslexia and depression.
The challenges for hand detection are greater. Because people use their hands to hold objects and make gestures, a camera is unlikely to see all parts of a hand at the same time. And unlike the face and body, there are no large datasets of hand images annotated with labels of parts and positions.
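For a sense of what per-hand keypoint output looks like today, the sketch below uses the MediaPipe Hands library, which returns 21 normalised landmarks per detected hand. MediaPipe is an unrelated off-the-shelf library, shown purely for illustration; it is not the CMU researchers' system.

```python
import cv2
import mediapipe as mp

# MediaPipe Hands returns 21 landmarks per detected hand.
hands = mp.solutions.hands.Hands(static_image_mode=True,
                                 max_num_hands=2,
                                 min_detection_confidence=0.5)

image = cv2.imread("hands.jpg")
# MediaPipe expects RGB input, while OpenCV loads images as BGR.
results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.multi_hand_landmarks:
    h, w = image.shape[:2]
    for hand in results.multi_hand_landmarks:
        # Landmark coordinates are normalised to [0, 1] in each axis.
        for lm in hand.landmark:
            cv2.circle(image, (int(lm.x * w), int(lm.y * h)), 3, (0, 0, 255), -1)
cv2.imwrite("hands_annotated.jpg", image)
hands.close()
```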
You can see how the Panoptic Studio dome works by watching the video below: