Artificial intelligence is one of those technologies that promises a lot but, so far, hasn’t fully delivered. Many of us use AI every day, whether it’s to get directions to a new place or to interact with a chatbot on a website. But our experience with AI can often leave us a little cold, wondering how technology can get things so terribly wrong. Some researchers at the Massachusetts Institute of Technology (MIT) explored this when they created Norman – a very dark form of AI.
Norman, named for Norman Bates from the Alfred Hitchcock thriller Psycho, was trained using some truly disturbing images from the Internet. The researchers trawled the depths of Reddit to find gruesome pictures of people dying in terrible circumstances. Then, Norman was shown some inkblot images and he saw, you guessed it, death and destruction.
A control version of Norman, built on the same algorithm, was trained on cheerful images instead and exposed to the same inkblots. It saw far nicer things.
For example, when Norman and his lighter counterpart were shown the same inkblot, the control version saw a photo of a small bird, while Norman saw a man being pulled into a dough machine.
As individuals, we need to make some decisions. When we use an AI or machine learning-based system, our data is used to train the system. That’s why we’re asked to allow our data to be used by such systems. The MIT researchers said the data used to train the AI was more important than the algorithm – something Microsoft learned when their Twitter-based chatbot, Tay, went haywire in 2016. And while Tay’s little sister Zo isn’t perfect, those experiences show the importance of data as well.
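The "data matters more than the algorithm" point can be illustrated with a toy sketch. This is not the researchers' actual code – it's a minimal, hypothetical word-count model showing how the identical training procedure, fed dark versus cheerful captions, interprets the same ambiguous input in opposite ways, just as Norman and his control did with the inkblots:

```python
# Illustrative sketch only (not MIT's actual system): the same
# training algorithm, given different data, produces different
# "interpretations" of the same ambiguous input.
from collections import Counter

def train(captions):
    """Build a toy model: word frequencies across training captions."""
    counts = Counter()
    for caption in captions:
        counts.update(caption.split())
    return counts

def describe(model, ambiguous_words):
    """Pick the word the model associates most strongly with the
    input -- a toy stand-in for captioning an inkblot."""
    return max(ambiguous_words, key=lambda w: model[w])

# Same algorithm, different training data -- like Norman vs. the control.
norman = train(["man pulled into machine", "person dying in machine"])
control = train(["small bird on branch", "bird singing a small song"])

# The "inkblot": an ambiguous input both models must interpret.
inkblot = ["bird", "machine"]
print(describe(norman, inkblot))   # the dark model leans toward "machine"
print(describe(control, inkblot))  # the cheerful model leans toward "bird"
```

The point of the sketch: nothing in `train` or `describe` changes between the two models – only the captions they were fed – yet their outputs diverge completely.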
The good news is that we continue to create more data every day and that can be used to train and refine the models AI developers use with their algorithms. But it also explains why AI is still in its infancy when it comes to widespread, broad adoption.