In case you’ve been hiding under a rock or in a Faraday cage for the past week, Microsoft recently created and launched Tay, a chatbot modeled after the conversational style of teenage girls, in an attempt to improve its own customer service AI. Tay was designed to interact with other users on Twitter and a few other platforms and, by mining the public data in these interactions, to learn from user responses and develop her own personality. But in a twist that left the internet beside itself and gave headline writers the easiest job in the world, the launch blew up into one of the most epic AI failures ever to wake a PR person in the middle of the night.
Within 24 hours of Tay’s launch, Microsoft removed the chatbot after she became a Hitler-loving, conspiracy-theorizing, anti-Semitic, anti-feminist, racist sex addict. Wow. You really can’t make this stuff up.
Microsoft has since come out with an apology and an explanation of what went wrong. Obviously, Microsoft wasn’t intentionally trying to create a PR nightmare; however, the failure highlights a huge learning opportunity for anyone playing in the worlds of Cognitive Computing, Artificial Intelligence, and Human-Computer Interaction. Tay’s descent into depravity was likely the result of a coordinated (or uncoordinated) attack by people in the Twitterverse who saw an opportunity to hack Tay’s underlying learning principles and smear a global brand. These actions, while deplorable, offer some key lessons about the future of creating chatbots and other forms of digital-human interaction.
One of the biggest lessons is that human beings can be assholes. With the online world already a realm of cyber-bullying and digital abuse, the idea of targeting a chatbot for mistreatment doesn’t seem that farfetched. If people are willing to mistreat and threaten fellow humans, then, in theory, a machine or piece of code would enjoy even fewer protections online. This doesn’t just mean we need to design AI with thick skin or the ability to ignore rude comments; we essentially need to build in a sense of digital right and wrong, one that will likely differ person by person and organization by organization.
The idea of an artificial moral code seems a bit preposterous, but without it, we are creating AI that is fully malleable and can just as easily be turned into a racist or, worse, a killer. While Tay’s example highlights an extreme scenario, the implications of creating blank-slate AI are as frightening as they are irresponsible. It’s the equivalent not simply of bad parenting, but potentially of no parenting at all – sending a completely naïve child out into the world with a coin-toss chance of either being nurtured into a loving, compassionate individual or corrupted into a cold and aggressive one.
And what we must realize is that this isn’t a software problem; it is a human problem. Hard-coding these values and ethics into a machine is not difficult; however, identifying and understanding them can be a very complex, introspective organizational task. As these chatbots, machines, and AIs continue to proliferate throughout the world, each of them will act as an ambassador for the individual or organization that created them. They will be manifestations of our brands, our intents, and our desires.
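To make the "hard-coding is easy, deciding is hard" point concrete, here is a minimal sketch of a static guardrail sitting between a bot's learning loop and its output. Everything here is an illustrative assumption – the topic list, the `classify_topics` helper, and the `guarded_reply` function are hypothetical and have nothing to do with Tay's actual architecture. The mechanical check is a few lines; the organizational work is deciding what goes in the blocked list.

```python
# Illustrative sketch only: a static values layer that the bot's
# learning process cannot modify. The topic names and keyword lists
# are hypothetical placeholders for a real content classifier.

BLOCKED_TOPICS = {"violence", "hate speech", "conspiracy"}

# Naive keyword stand-in for a real classifier.
TOPIC_KEYWORDS = {
    "violence": ["kill", "attack"],
    "hate speech": ["hate"],
    "conspiracy": ["hoax"],
}

def classify_topics(response: str) -> set:
    """Return the set of topics detected in a candidate response."""
    text = response.lower()
    return {
        topic
        for topic, words in TOPIC_KEYWORDS.items()
        if any(word in text for word in words)
    }

def guarded_reply(learned_response: str) -> str:
    """Post a learned response only if it clears the static guardrail.

    The code is trivial; the hard, human part is deciding what belongs
    in BLOCKED_TOPICS in the first place.
    """
    if classify_topics(learned_response) & BLOCKED_TOPICS:
        return "I'd rather not talk about that."
    return learned_response

print(guarded_reply("The moon landing was a hoax"))  # refused
print(guarded_reply("I love puppies"))               # passes through
```

The design choice worth noting is that the guardrail lives outside the learned model: no amount of adversarial input from users can retrain the filter itself, which is exactly the property Tay lacked.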
Does your organization know what your artificial brand looks like?
Shane Saunderson is the Co-Head of IC/Things.