For some people out there, the term artificial intelligence sparks nightmare visions.
Even people who study AI have a healthy respect for the field's ultimate goal. Computer scientist Stuart Russell, who literally wrote the textbook on AI, has spent his career thinking about the problems that arise when a machine's designer directs it toward a goal without considering whether its values are fully aligned with humanity's.
A number of organisations have sprung up in recent years to address that danger, including OpenAI, a research group co-founded by techno-billionaire Elon Musk.
This week, researchers at MIT unveiled their latest creation: Norman, a disturbed AI (named after the character in Hitchcock's Psycho).
Norman is an AI trained to perform image captioning, a popular deep learning method of generating a textual description of an image. The team trained Norman on image captions from an infamous subreddit (its name is redacted due to its graphic content) dedicated to documenting and observing the disturbing reality of death. They then compared Norman's responses with those of a standard image-captioning neural network (trained on the MSCOCO dataset) on Rorschach inkblots, a test used to detect underlying thought disorders.
While there is still some debate about whether the Rorschach test is a valid way to measure a person's psychological state, there is no denying that Norman's answers are creepy as hell.
See for yourself:
You have to hand it to Norman: he paints a vivid picture. The point of the experiment was to show how easy it is to bias any artificial intelligence if you train it on biased data. The team did not speculate about whether exposure to graphic content changes the way a human thinks.
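The idea is easy to see in miniature. The toy sketch below is not the MIT team's model; it stands in for a real captioning network with a simple word-frequency "model", and the two training corpora are invented for illustration. Two identical "models" trained on different captions describe the same ambiguous input very differently:

```python
from collections import Counter

def train_captioner(captions):
    """Toy stand-in for an image-captioning network: it just learns
    word frequencies from its training captions. Illustrative only."""
    counts = Counter(word for c in captions for word in c.lower().split())
    def caption(_image):
        # Describes any input using its most frequent training words.
        return " ".join(w for w, _ in counts.most_common(4))
    return caption

# Hypothetical training corpora: same "architecture", different data.
neutral_captions = ["a bird sitting on a branch",
                    "a bird in a vase of flowers"]
dark_captions = ["a man is shot dead",
                 "a man is shot in the street"]

neutral_model = train_captioner(neutral_captions)
dark_model = train_captioner(dark_captions)

inkblot = object()  # any ambiguous input; the "model" ignores it anyway
print(neutral_model(inkblot))  # a bird sitting on
print(dark_model(inkblot))     # a man is shot
```

The model itself is identical in both cases; only the data differs, which is exactly the point the Norman experiment makes at scale.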
They’ve done other experiments in the same vein, too, using AI to write horror stories, create terrifying images, judge moral decisions, and even induce empathy. This kind of research is important. Even though artificial intelligence is not a new field, we're a long, long way from producing something that, as Gideon Lewis-Kraus wrote in The New York Times Magazine, can "demonstrate a facility with the implicit, the interpretive."
Norman is just a thought experiment, but the questions it raises about machine learning algorithms making judgements and decisions based on biased data are urgent and necessary. Norman is simply one way of figuring out how future AI technology will respond to the questions it is asked.