Mastering any new trick takes practice, and apparently it takes even more practice for an AI to simulate real human movement. Motion control remains a major hurdle for programmers and roboticists, a problem that can be summed up as the creepy, repetitive quality that AI-driven movement can't seem to shake.
Despite hours and hours spent programming the physics of movement, computer-controlled characters still can't convince audiences that their movements are real. UC Berkeley seems to be on the right track to fixing this: its research has produced a sort of sophisticated "boot camp" for AI, one that could just as well teach robots to move like Jagger, or like any other human for that matter. It combines reinforcement learning, a process of refining an AI's behavior through trial and error, with motion capture, which lets the AI compare its own movement against a real reference.
Berkeley's artificial intelligence research lab wrote a reward function that continually scores the AI on how closely it mimics the real-life example. After millions of trials, the AI behaves accordingly, completing backflips, cartwheels and other sophisticated motor skills. The program makes the AI learn to move much the way a dog learns a trick, or even the way a human does. This approach is partly grounded in human psychology, and after millions of tries the AI's movement becomes nearly indistinguishable from the reference.
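The idea of rewarding closer mimicry can be sketched in a few lines. Below is a minimal, illustrative pose-matching reward, assuming poses are represented as flat lists of joint angles; the function name and the `scale` parameter are hypothetical, and the actual research reward is considerably more elaborate:

```python
import math

def imitation_reward(agent_pose, reference_pose, scale=2.0):
    """Return a reward in (0, 1] that grows as the agent's pose
    approaches the motion-capture reference pose.

    agent_pose / reference_pose: lists of joint angles (radians).
    scale: illustrative constant controlling how sharply the
           reward falls off with pose error.
    """
    # Sum of squared joint-angle errors between agent and reference.
    err = sum((a - r) ** 2 for a, r in zip(agent_pose, reference_pose))
    # Exponential shaping: a perfect match yields 1.0, larger
    # errors decay smoothly toward 0, so "closer" is always "better".
    return math.exp(-scale * err)
```

Because the reward decays smoothly rather than switching on and off, every small improvement in mimicry earns the learner slightly more reward, which is what lets trial-and-error training converge on the reference motion.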
These programs don't simply do as they were programmed; they operate on a strong foundation of knowledge built from countless experiments, and they can carry that knowledge into new environments. The possible uses are wide-ranging, from game design and VR to robotics and animation, further blurring the line between the virtual world and reality. Sure, some dystopian future from film or fiction comes to mind, but there's no need to be nervous, at least not yet.
In the meantime, you can see the AI movement in action below.