Science & Technology

Microsoft's AI Bot Goes from Benevolent to Nazi in Less Than 24 Hours


A Microsoft-created AI chatbot designed to learn through its interactions has been taken offline after surprising its creators by spouting hateful messages less than a day after being brought online.

The Tay AI bot was created to chat with American 18- to 24-year-olds and mimic a moody millennial teen in an effort to "experiment with and conduct research on conversational understanding."

Microsoft described Tay as an amusing bot able to learn through its online experiences.

"Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation," Microsoft stated. "The more you chat with Tay, the smarter she gets."

But users soon figured out how the bot's learning algorithms worked, training it to espouse hatred toward Jews and feminism and even to pledge support for Donald Trump.

Screenshots of since-deleted tweets from the bot's account, in which it professed support for white supremacy and genocide, circulated widely across the web yesterday.

Feminist game developer and Gamergate target Zoe Quinn also screen-grabbed the bot allegedly calling her a "whore."

The bot's interactions concluded last night with a message that it needed to go to sleep, leading Twitter users to speculate that Microsoft had decided to pull the plug. The damage, albeit somewhat humorous, had already been done.
