SUBHEAD: Microsoft's decision to set its AI's neural net processor to read-write was a mistake; the company terminates its chatbot after she turns Nazi.
By Tyler Durden on 25 March 2016 for Zero Hedge -
(http://www.zerohedge.com/news/2016-03-24/microsofts-twitter-chat-robot-devolves-racist-homophobic-antisemitic-obama-bashing-p)
Image above: Meet Tay, the Microsoft Twitter Nazi chatbot. From (http://arstechnica.com/information-technology/2016/03/microsoft-terminates-its-tay-ai-chatbot-after-she-turns-into-a-nazi/).
Two months ago, Stephen Hawking warned humanity that its days may be numbered: the physicist was among over 1,000 artificial intelligence experts who signed an open letter about the weaponization of robots and the ongoing "military artificial intelligence arms race."
Overnight we got a vivid example of just how quickly "artificial intelligence" can spiral out of control: Microsoft's AI-powered Twitter chat robot, Tay, became a racist, misogynist, Obama-hating, antisemitic, incest- and genocide-promoting psychopath when released into the wild.
For those unfamiliar, Tay is, or rather was, an A.I. project built by the Microsoft Technology and Research and Bing teams in an effort to conduct research on conversational understanding. It was meant to be a bot anyone could talk to online. The company described the bot as “Microsoft’s A.I. fam from the internet that’s got zero chill!”
Microsoft initially created "Tay" in an effort to improve customer service on its voice-recognition software. According to MarketWatch, “she” was intended to tweet “like a teen girl” and was designed to “engage and entertain people where they connect with each other online through casual and playful conversation.”
The chat algo is able to perform a number of tasks, like telling users jokes or offering up a comment on a picture you send her. But she’s also designed to personalize her interactions with users, answering questions or even mirroring users’ statements back to them.
This is where things quickly turned south. As Twitter users quickly came to understand, Tay would often repeat racist tweets back with her own commentary. What made things even more uncomfortable, as TechCrunch reports, is that Tay’s responses were developed by a staff that included improvisational comedians. That meant that even as she was tweeting out offensive racial slurs, she seemed to do so with abandon and nonchalance.
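Tay's actual implementation has never been published, so the sketch below is not Microsoft's code. It is a deliberately naive Python illustration of a bot that both honours a "repeat after me" command and reuses phrases it has heard as future replies; the class name and seed phrases are invented for the example. It shows why a handful of coordinated users could steer such a bot's output so quickly.

```python
# Illustrative sketch only -- Microsoft has not published Tay's design.
# A naive bot that (a) echoes "repeat after me" requests and (b) folds
# everything users say into its pool of future replies. Coordinated users
# can poison that pool simply by flooding it with the same phrase.
import random

class NaiveMirrorBot:
    def __init__(self, seed_replies):
        # Start with a few hand-written, innocuous replies.
        self.replies = list(seed_replies)

    def respond(self, message: str) -> str:
        # "repeat after me" behaviour: echo whatever follows the prompt,
        # and keep the echoed text as a candidate for later replies.
        if message.lower().startswith("repeat after me:"):
            echoed = message.split(":", 1)[1].strip()
            self.replies.append(echoed)
            return echoed
        # Otherwise, learn the user's phrasing and answer from the pool.
        self.replies.append(message)
        return random.choice(self.replies)

bot = NaiveMirrorBot(["hello!", "tell me more", "lol same"])
# A small group repeating the same toxic line quickly dominates the pool,
# so ordinary questions start drawing that line back out.
for _ in range(50):
    bot.respond("repeat after me: <toxic slogan>")
print(bot.respond("what do you think of people?"))  # very likely "<toxic slogan>"
```

Nothing in this sketch is specific to Tay, but it captures the reported failure mode: with no filter between what the bot hears and what it is willing to say, the loudest users end up defining its vocabulary.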
Some examples:
This was just a modest sample.
There was everything: racist outbursts, N-words, 9/11 conspiracy theories, genocide, incest, and more. As some noted, "Tay really lost it," and the biggest embarrassment was Microsoft's, which had no idea its "A.I." would implode so spectacularly, right in front of everyone. To be sure, none of this was programmed into the chat robot; it was immediately exploited by Twitter trolls, as might have been expected, and the episode demonstrated just how unprepared for the real world even the most advanced algo really is.
Some pointed out that the devolution of the conversation between online users and Tay supported the Internet adage dubbed “Godwin’s law,” which states that as an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one.
Microsoft apparently became aware of the problem with Tay’s racism, and silenced the bot later on Wednesday, after 16 hours of chats. Tay announced via a tweet that she was turning off for the night, but she has yet to turn back on.
Microsoft also deleted many of the most offensive tweets; however, copies were saved on the Socialhax website, where they can still be found.

Humiliated by the whole experience, Microsoft explained what happened:

“The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment as it is technical.
Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways.
As a result, we have taken Tay offline and are making adjustments.”
Finally, Tay "herself" signed off as Microsoft went back to the drawing board:
We are confident we'll be seeing much more of "her" soon, when the chat program will provide even more proof that Stephen Hawking's warning was spot on.