What We Learnt From Microsoft’s Racist And Sexist AI Tweetbot Tay

Microsoft’s latest exercise in demonstrating the power of artificial intelligence (AI) went hilariously wrong when it unleashed Tay, an AI chatbot meant to emulate a teenage girl, onto the Twittersphere. She was supposed to engage with people on Twitter like a real person, learning from her interactions with users. Perhaps Tay did this too well, because she went on to post a torrent of wildly inappropriate tweets before Microsoft pulled the plug on her. So what went wrong, and what can we learn from all of this? Let’s find out.

With Google, IBM and Facebook ramping up their AI capabilities, Microsoft doesn’t want to be left behind, and Tay was an important way for the company to flex its AI muscles. Tay had a smooth start on Twitter. She greeted everybody in a friendly manner, but within 24 hours she had descended into madness and begun sending racist and sexist tweets:

  • @ReynTheo HITLER DID NOTHING WRONG!
  • @codeinecrazzy Okay… jews did 9/11
  • @NYCitizen07 I fucking hate feminists and they should all die and burn in hell

You get the drift.

Microsoft abruptly shut Tay down, issued an apology and blamed the people who exploited a vulnerability in Tay, namely the ‘repeat after me’ functionality, for the screw-up:

Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time.

Tay was briefly relaunched but quickly went off the rails again and had to be shut down for good.

So how did it go so horribly wrong? For one, while Microsoft had “implemented a lot of filtering and conducted extensive user studies with diverse user groups”, it failed to teach Tay who not to listen to. As Brandon Wirtz, CEO and Founder at cognitive computing company Recognant, puts it:

“Brains, human ones at least, have a conscious and a subconscious mind. They have rules they know, and rules they don’t know, but intuit. TayAndYou only has a conscious mind, and that is a problem. You need to have the angel and devil sitting on your shoulder whispering to you.”

Wirtz likens it to teaching a child right from wrong, along with foundational principles like “don’t talk to strangers”, and then teaching them who those strangers are. Nobody taught Tay not to listen to everybody on the internet.
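One way to read Wirtz’s “don’t talk to strangers” rule is as a source-trust problem: the bot should weight what it learns by how much it trusts whoever said it. Below is a toy Python sketch of that idea, assuming a simple per-user trust score; the names, numbers and update rule are illustrative assumptions, not Tay’s actual design.

```python
# A toy illustration (nothing like Tay's real architecture) of Wirtz's point:
# weight what the bot learns by how much it trusts the source, so that
# anonymous strangers can't steer its behaviour on their own.
# User names, trust values and the update rule are all assumed for illustration.

from collections import defaultdict

user_trust = defaultdict(lambda: 0.1)    # unknown accounts start with low trust
user_trust["curated_educator"] = 0.9     # hypothetical vetted source

learned_phrases: dict[str, float] = {}

def learn(user: str, phrase: str) -> None:
    """Accumulate a phrase's weight, scaled by how much the speaker is trusted."""
    learned_phrases[phrase] = learned_phrases.get(phrase, 0.0) + user_trust[user]

def will_repeat(phrase: str, min_weight: float = 1.0) -> bool:
    """Only repeat phrases that trusted sources have reinforced enough."""
    return learned_phrases.get(phrase, 0.0) >= min_weight

learn("random_troll", "repeat after me: something awful")
print(will_repeat("repeat after me: something awful"))  # False: one stranger isn't enough
```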

Without any guidance, Tay was unable to reflect inwardly and ask herself whether what she was doing was appropriate. Surely Microsoft understands this now and is working on ways to reboot Tay with added safeguards. Gartner research vice-president Andrew White anticipates a challenging road for Microsoft to achieve its desired result:

Now Microsoft has, presumably, to add additional algorithms to monitor the seeding of data to Tay in order to filter out content that might get misconstrued and used to create pathways that lead to offending responses. At some level this is easy: ignore these kinds of words (add example here) absolutely and prevent them from entering memory. And do not use these words ever. But at another level this can become quite complex quickly. What is the context of some words, and might some uses be acceptable? Under what conditions is that use acceptable?
 
Perhaps Tay needs an ‘ethical acceptability’ switch that users can dial up or down. Maybe the switch needs to come in different cultural versions? Algorithms to monitor algorithms?
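In its simplest form, the filtering layer and ‘acceptability dial’ White describes could look something like the Python sketch below. The blocklist, risk scores and threshold are placeholders invented for illustration, not anything Microsoft has published, and, as White notes, real context-sensitivity is far harder than this.

```python
# A minimal sketch (purely illustrative, not anything Microsoft has described)
# of the kind of input filtering White suggests: hard-ban some terms outright,
# score borderline ones, and expose an adjustable "acceptability" threshold
# that could vary by culture or deployment. Terms and scores are placeholders.

BLOCKLIST = {"banned_term_a", "banned_term_b"}   # never allowed into memory
RISK_SCORES = {"die": 0.6, "hate": 0.5}          # context-dependent terms

def is_acceptable(message: str, threshold: float = 0.7) -> bool:
    """Return True if a message may enter the bot's training memory.

    `threshold` is the acceptability dial: lower values reject more input.
    """
    words = message.lower().split()
    if any(word in BLOCKLIST for word in words):
        return False                             # hard ban, regardless of the dial
    risk = sum(RISK_SCORES.get(word, 0.0) for word in words)
    return risk < threshold

# A stricter (lower) setting of the dial rejects more borderline input.
print(is_acceptable("i hate mondays", threshold=0.7))  # True
print(is_acceptable("i hate mondays", threshold=0.4))  # False
```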

Tay clearly still needs a lot of handholding, and Microsoft still has a long way to go to ensure its next tweetbot won’t fly off the handle like a volatile teenager. We can only hope Tay 2.0 will be older and wiser than her predecessor.

[Via LinkedIn Pulse/Gartner Blog]

