Stephen Hawking and Tesla's Elon Musk have long warned about the revolution in artificial intelligence (AI) and what they see as the inevitable end of humankind. The AI explosion is, no doubt, remarkable. In just a few years, AI machines have learnt to beat the best humans at the world's hardest board games, drive cars, fly drones and play the stock market. They can compose music, fake voices, even write screenplays. There are AI lawyers out there, AI radiographers and AI scientists. This article might have been written by a machine – but it's not, don't panic!
So should we be worried about Black Mirror-style scenarios, or about AI making us all jobless? We asked what the experts have to say about the most common theories.
AI will take over your job
Well, don't panic: it depends on what you do, and it definitely won't happen today. "The layperson is right to worry about AI taking jobs," says Mr Toby Walsh, professor of artificial intelligence at the University of New South Wales and author of Android Dreams. "If you earn your living driving a taxi or a truck, you have to ask yourself what other skill you have that people will pay for besides driving. In 20 years' time, very few people will earn their living driving. It will be far, far safer and cheaper to have a robot do this."
Similar jobs that rely on pattern recognition, such as radiology, or on large-scale data, such as accountancy, will get the boot or change beyond recognition.
"Equally, some of the claims are overblown," says Mr Walsh. "I am a scientist, so I looked at the data in the well-known Frey and Osborne report predicting 47% of jobs are at risk of automation in the next two decades. Some of the predictions are clearly wrong. For example, they predict with 98% certainty that models will be automated. We don't care what robots look like in clothes. This is a job that will remain with humans."
Strawberry picking for days
"Let's say you create a self-improving AI to pick strawberries," Elon Musk told Vanity Fair in April 2017, "and it gets better and better at picking strawberries… so all it really wants to do is pick strawberries." A similar thought experiment, by the philosopher Mr Nick Bostrom, involves paperclips.
"There is evidence things can go wrong with technology and get out of hand," says Mr Joshua Gans, a professor at the University of Toronto's Rotman School of Management and co-author of Prediction Machines (out 17 April 2018).
"Bitcoin was one person's idea and was implemented with very limited resources. It now consumes the energy of several small countries. The paperclip experiment is an example of this. [But] there are reasons to suppose that the risk of that is lower than some think. This is because it takes a whole lot of unlikely things to occur in a row for it to happen." For a start, it would require an AI that is superintelligent yet too stupid to interpret the nuance in its instructions. "What is more likely is that AI causes problems when the people controlling it have less innocuous motives."
AI might not be anything like us
Think of Terminator and the like: we imagine AI in our image. But a strong body of work suggests that human intelligence evolved the way it did because of our survival instincts. Humans need to eat, so we developed the ability to co-operate. Silicon-based AI won't feel hungry or get cold, nor will it be subject to the fickle influences of hormones and genes.
"We have no idea what an AGI [artificial general intelligence] would be like," says Mr Walsh. "Would it be conscious? Would it have, or need to have, emotions like us? This is why AI is such an interesting scientific challenge. Will it resemble biological intelligence? Or is intelligence created in silicon something different – less emotional, more rational?"
We don't have the answers today, but the coming decades will start to provide them.