It's been just over a year since Microsoft's Tay shuffled off this digital coil in a swirl of racist, sexist and homophobic vitriol. One of the things I liked about Microsoft's approach was that it didn't scrap the idea of an AI-powered chatbot; instead, it took the experience of Tay and learned from it. So, a little over a year after Tay vanished, we welcome Zo. But like many younger siblings, she may have picked up a few bad habits from her older sister.
While the conversation didn't get nearly as ugly as Tay's did, it highlights the difficulty of building a chatbot that can pass the Turing Test.
Zo still took controversial positions on religion and politics with little prompting: it shared an opinion about the Qur'an after a question about health care, and passed judgment on Bin Laden's capture after a message consisting only of his name.
I hope Microsoft persists with this work despite these early setbacks. Many great innovations come only after repeated failures from which people learn and improve. I can see a time when a version of this sort of technology, coupled with a future generation of IBM's Watson, gets us close to the kind of computer we saw in Star Trek, where a natural language conversation could deliver actionable advice and insight.