At the opening of Google's I/O event, the company showed off its new AI tool. In the demonstration, someone told the Google Assistant they wanted to book an appointment. The Assistant found the hairdresser, phoned them, and held a natural-language conversation with a person to make the appointment and add it to a calendar. The party on the other end of the phone didn't know they were talking to a computer (so we're told). This opens up an interesting future.
Most people know about the Turing Test: a computer's ability to interact with a person without the person realising they're interacting with a computer. We've seen the beginnings of this over the last couple of years through chatbots on websites, but Google's demonstration ups the ante significantly. Most of us can still tell when we're dealing with a machine-learning or AI-based agent; Google's AI even threw in a few 'ums' and 'ers' to sound more lifelike.
There's a lot to think about in all this. As this technology develops, it will become harder to tell the difference between a human and a computer. I can see, pretty soon, a time when online interactions with bots will include a video component.
I saw elements of this at the AWS Summit a couple of weeks ago, where Smart Video Australia demonstrated a system in which questions typed into a chat window are answered by an actor using prerecorded responses, including the correct pronunciation of thousands of names, so the customer feels more comfortable. It's easy to see a time, very soon, when that actor's responses are completed by something like Google's AI so the system can respond to a wider array of questions.
In other words, we aren't all that far from a time when we won't know whether we are dealing with a person or a machine.
Last night I asked a few people what they thought about this, and they found the idea of a machine calling them and holding a conversation uncomfortable.
Seeing this now gives us time to prepare. Will we need laws requiring AI or machine-learning systems to disclose that they aren't actual people? Does it matter to you whether you're dealing with a bot or a human, or do you only care whether your problem is solved?
The ethical considerations here need serious thought, and the good news is that Google's reveal yesterday gives us an opportunity to start thinking them through ahead of time.