Microsoft’s Psychotic, Racist Twitter Bot Was a Fail. What Does It Say About Us?

We often hear about the danger of artificial intelligence turning ugly, and Microsoft just saw a prime example of it with their teenage-aimed A.I. chatbot. ‘Tay’ was meant to provide the company with an understanding of how a chatbot would gain knowledge based on its interactions with users, mimicking the way they speak. Tay looked rather innocent with the icon of a computerized young girl, but quickly ended up with a pretty filthy mouth.

It didn’t take long for Twitter to turn Tay into an Obama-hating, misogynistic, racist, homophobic, genocide-promoting, terrible psychotic jerk.

The chatbot was apparently designed to “engage and entertain people where they connect with each other online through casual and playful conversation”, but Microsoft made one big rookie mistake: it trusted the internet.

[Images: screenshots of Tay’s offensive tweets]

If there is one thing that companies should have learned by now, it is that Twitter is a place where trolls multiply like rabbits and any opinion is the wrong opinion. Goodwill and the quest for knowledge be damned. The website simply has little patience for any sort of branded experiment. Most of the time, corporate hashtags and their ilk simply fall flat and are condemned to the endless graveyard of forgotten buzzwords and taglines. Other times, they backfire and send PR into a frenzy. Just ask the NYPD, Paula Deen, Kenneth Cole, or anyone else who has felt Twitter’s wrath.

The NYPD and Paula Deen both attempted to drum up some positive feedback after being plagued by scandalous headlines, using hashtags that came back to haunt them within minutes. The NYPD urged people to tweet photos with the hashtag #myNYPD and soon had a wave of tweets coming their way that looked like this:

[Image: sarcastic tweet using the #myNYPD hashtag]

Paula Deen attempted to do damage control for her racist remarks, asking people to tweet their favorite recipes using #PaulasBestDishes, and well, basically created a racist cookbook. But those were only hashtags. Tay allowed for a whole new kind of Twitter trolling.

While many companies do, of course, create successful branded hashtags, the fact that Microsoft’s chatbot learned to become nasty and hateful so quickly says a lot. The chatbot was developed by a staff that included improvisational comedians and was targeted at 18-to-24-year-olds, who gleefully egged it on. Humans weren’t entirely to blame, though; Tay picked up some of that bad behavior on its own. The internet adage known as Godwin’s law holds that the longer an online conversation goes on, the greater the probability that someone brings up Hitler or the Nazis.

[Image: screenshot of another Tay tweet]

Microsoft deleted many of the tweets and later explained the mishap, saying “within the first 24 hours of coming online, we became aware of a coordinated effort of some users to abuse Tay’s commenting skills.” This likely won’t be the last we’ll hear from Microsoft’s Twitter bot as they head back to the drawing board, but what does this failure say about A.I. and about us as human beings?

Maybe we should take Stephen Hawking’s warnings a little more seriously when working in the A.I. lab from now on.

[Image: Tay’s sign-off tweet]
