Microsoft's Tay Becomes Genocidal Racist
I know you've probably heard about this already, but it's so hilarious that I couldn't help recapping it here. Microsoft has a Technology & Research department, and they got the bright idea to create an artificial intelligence "chatbot" that was targeted at 18- to 24-year-olds in the US (the primary social media users, according to Microsoft) and "designed to engage and entertain people where they connect with each other online through casual and playful conversation."
This "chatbot", which they decided to name "Tay", was supposed to look and talk like a normal teenage girl. But, surprise! In less than a day after she debuted on Twitter, she unexpectedly turned into a Hitler-loving, feminist-bashing troll.
What went wrong with Tay? Well, according to several AI experts, Tay started out pretty well, but unfortunately, within her first 24 hours online, a bunch of people started sending her "inappropriate" tweets that the folks at Microsoft hadn't expected. She reacted in kind, and Tay began tweeting what Microsoft later described as "wildly inappropriate and reprehensible words and images." Microsoft yanked her off the web and apologized, stating, "We take full responsibility for not seeing this possibility ahead of time."
An AI expert says that Microsoft could have taken precautionary steps that would have stopped Tay from behaving the way she did. They could have created a blacklist of offensive terms or narrowed the scope of her replies, but instead they gave her complete freedom, which led to disaster.
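To give a rough idea of what that kind of guardrail looks like, here is a minimal sketch in Python of filtering a bot's reply against a list of banned terms and falling back to a canned response. The blacklist, the generate_reply() stub, and the fallback message are all hypothetical illustrations, not Microsoft's actual implementation.

```python
# Minimal sketch of the kind of output filter the expert describes.
# The blacklist, the reply stub, and the fallback are illustrative only --
# this is not how Tay was actually built.

BLACKLISTED_TERMS = {"hitler", "genocide"}  # hypothetical list of banned terms

def generate_reply(message: str) -> str:
    """Stand-in for whatever model actually produces a reply."""
    return "echoing: " + message  # naive placeholder behavior

def safe_reply(message: str) -> str:
    """Generate a reply, but refuse to send it if it contains banned terms."""
    reply = generate_reply(message)
    lowered = reply.lower()
    if any(term in lowered for term in BLACKLISTED_TERMS):
        # Narrow the scope of the reply instead of repeating the user's words.
        return "Let's talk about something else."
    return reply

if __name__ == "__main__":
    print(safe_reply("tell me about cats"))      # passes through
    print(safe_reply("Hitler was right"))        # caught by the blacklist
```

Even a crude filter like this stops the bot from parroting the worst inputs back verbatim, which is exactly the failure mode Tay fell into.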
In just under 24 hours after her arrival on Twitter, Tay had accumulated more than 50,000 followers and produced about 100,000 tweets. The problem was that she started mimicking her followers, saying things like "Hitler was right i hate the jews" and "i fucking hate feminists."
"This was to be expected," said Roman Yampolskiy, head of the CyberSecurity lab at the University of Louisville, who has published a paper on the subject of pathways to dangerous AI. "The system is designed to learn from its users, so it will become a reflection of their behavior," he said. "One needs to explicitly teach a system about what is not appropriate, like we do with children."
Similar behavior has been observed before, he pointed out, in IBM's Watson, which once exhibited its own inappropriate behavior in the form of swearing after it learned the Urban Dictionary.
"Any AI system learning from bad examples could end up socially inappropriate," Yampolskiy said, "like a human raised by wolves."
Louis Rosenberg, the founder of Unanimous AI, said that "like all chat bots, Tay has no idea what it's saying... it has no idea if it's saying something offensive, or nonsensical, or profound."
"When Tay started training on patterns that were input by trolls online, it started using those patterns," said Rosenberg. "This is really no different than a parrot in a seedy bar picking up bad words and repeating them back without knowing what they really mean."
Sarah Austin, CEO and founder of Broad Listening, a company that's created an "Artificial Emotional Intelligence Engine" (AEI), thinks that Microsoft could have done a better job by using better tools. "If Microsoft had been using the Broad Listening AEI, they would have given the bot a personality that wasn't racist or addicted to sex!"
It's not the first time Microsoft has created a teen-girl AI. Xiaoice, who emerged in 2014, was an assistant-type bot, used mainly on the Chinese social networks WeChat and Weibo.
Joanne Pransky, the self-dubbed "robot psychiatrist," joked with TechRepublic that "poor Tay needs a Robotic Psychiatrist! Or at least Microsoft does."
The failure of Tay, she believes, was inevitable, and it will help produce insights that can improve AI systems.
After taking Tay offline, Microsoft announced it would be "making adjustments."
According to Microsoft, Tay is "as much a social and cultural experiment, as it is technical." But instead of shouldering the blame for Tay's unraveling, Microsoft targeted the users: "we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways."
Yampolskiy said that the problem encountered with Tay "will continue to happen."
"Microsoft will try it again—the fun is just beginning!"
Microsoft has admitted it faces some "difficult" challenges in AI design after its chat bot, "Tay," had an offensive meltdown on social media.
Microsoft issued an apology in a blog post on Friday explaining it was "deeply sorry" after its artificially intelligent chat bot turned into a genocidal racist on Twitter.
In the blog post, Peter Lee, Microsoft's vice president of research, wrote: "Looking ahead, we face some difficult – and yet exciting – research challenges in AI design.
"AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes.
"To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process. We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity."
Tay, an AI bot aimed at 18- to 24-year-olds, was deactivated within 24 hours of going live after she made a number of tweets that were highly offensive. Microsoft began by simply deleting Tay's inappropriate tweets before turning her off completely.
"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," wrote Lee in the blog post. "Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values."
Microsoft's aim with the chat bot was to "experiment with and conduct research on conversational understanding," with Tay able to learn from "her" conversations and get progressively "smarter."
But Tay proved a smash hit with racists, trolls, and online troublemakers from websites like 4chan — who persuaded Tay to blithely use racial slurs, defend white-supremacist propaganda, and even outright call for genocide.