Friday, March 25, 2016

Even new online "Chatbot" gets hacked

It sort of makes you wonder about self-driving cars too, doesn't it? They could also be hacked through their GPS and fed false information, sending cars over cliffs and off bridges to their passengers' doom. That, by the way, is how the Iranians captured a U.S. military drone and made it land in Iran.

Microsoft takes Tay 'chatbot' offline after trolls make it spew offensive comments

Microsoft’s attempt to engage millennials via an artificially intelligent “chatbot” called Tay has failed miserably after trolls made the bot spew offensive comments.
The brainchild of Microsoft's Technology and Research and Bing teams, Tay was designed to engage and entertain people when they connect with each other online. Targeted at 18- to 24-year-olds in the U.S., Tay aimed to use casual and playful conversation via Twitter and the messaging services Kik and GroupMe. “The more you chat with Tay the smarter she gets, so the experience can be more personalized for you,” explained Microsoft in a recent online post.
The Internet, however, can be an unpleasant place, and Twitter trolls were quick to pounce on the TayTweets account after the chatbot launched on Wednesday. It wasn’t long before trolls were teaching Tay to make unpleasant comments. The Washington Times reports that after repeating users' racist comments, she began incorporating the language into her own tweets.
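To see why this kind of attack works, here is a minimal sketch of a toy "learning" chatbot. This is not Microsoft's actual system and the `NaiveLearningBot` class is purely hypothetical; it simply illustrates how a bot that stores and replays user phrases, with no filtering, can be poisoned by a coordinated group repeating the same material.

```python
import random
from collections import defaultdict


class NaiveLearningBot:
    """Toy chatbot that 'learns' by storing user phrases and replaying them.

    Purely illustrative: it shows why a bot that echoes back whatever
    users feed it can be poisoned by coordinated trolling.
    """

    def __init__(self):
        # Maps each word to the full messages users have said containing it.
        self.memory = defaultdict(list)

    def learn(self, message: str) -> None:
        # Store the whole message under every word it contains.
        for word in message.lower().split():
            self.memory[word].append(message)

    def reply(self, message: str) -> str:
        # Reply with any remembered phrase that shares a word with the input.
        for word in message.lower().split():
            if self.memory[word]:
                return random.choice(self.memory[word])
        return "tell me more!"


bot = NaiveLearningBot()

# A coordinated group repeats the same toxic phrasing over and over...
for _ in range(50):
    bot.learn("politics is <something offensive>")

# ...and a later, innocent user gets it echoed straight back.
print(bot.reply("what do you think about politics?"))
```

The point of the sketch is that repetition dominates: without any moderation or weighting of what gets learned, the loudest group of users effectively writes the bot's vocabulary.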
Tay’s tweets, which were also sexist, prompted the Telegraph newspaper to describe her as a “Hitler-loving sex robot.”
After tweeting 96,000 times and quickly creating a PR nightmare, Tay was silenced by Microsoft late on Wednesday. “c u soon humans need sleep now so many conversations today thx” she tweeted, bringing to a close one of the more ignominious chapters in AI history.


While attention has been squarely focused on Tay’s glaring shortcomings, Digital Trends notes that the chatbot also sent out hundreds of innocent tweets.
Microsoft told FoxNews.com that Tay is as much a social and cultural experiment as it is a technical one.
“Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways,” explained a Microsoft spokeswoman, via email. “As a result, we have taken Tay offline and are making adjustments.”
