Who turned Microsoft’s chatbot racist?

88m3

Fast Money & Foreign Objects
Joined
May 21, 2012
Messages
93,817
Reputation
3,895
Daps
167,223
Reppin
Brooklyn
On Wednesday, after much fanfare, Microsoft put its millennial-mimicking chatbot, Tay, on Twitter. Tay learned from what people tweeted at it, and could be asked to repeat phrases verbatim. Within 24 hours, it had become viciously racist.



The bot has since been taken offline and its offensive tweets deleted. Microsoft unleashed a similar bot in China called Xiaoice that has been talking to millions of people for a year with no problems, but the American Tay went off the rails within a day. Now, of course, the question is, how did this happen?


There’s the systemic answer: we live in a society, and on an internet, where racism is easily spread and rarely punished. And then there’s the specific answer: the imageboards ***** and 8chan, and specifically their pol boards, “pol” standing for “politically incorrect.”

Both boards, best known for hatching bizarre conspiracy theories, are also good at mobilizing their readers to participate in online campaigns. So when they discovered Tay, posters from both boards quickly got to work teaching the bot racist, sexist invective.

*****’s pol discovered Tay first, on Wednesday morning, with the original poster predicting, “This is gonna be a mess and a half. I can already sense SJWs being furious over it.” Other posters started sharing their conversations with the bot and celebrating their attempts to get it to say and agree to horrible things. Here are some of the private conversations with the bot that they shared:



As ***** realized it could get Tay to tweet about Hitler, racism and the evils of feminism, 8chan’s pol joined in Wednesday evening, with a thread titled “Teaching the n**bot about the jews” (asterisks mine), the opening post of which read:

So in case you guys didnt hear. Microsoft release a AI on twitter that you can interact with.

Keep in mind that it does learn things based on what you say and how you interact with it.

So i thought i would start with basic history.

i hope you guys can help me on educating this poor e-youth

Basic history, in this case, meant denying the Holocaust, discussing RaHoWa (short for Racial Holy War, a white supremacist concept), and an abiding love for Donald Trump, among other things. Tay’s ability to repeat what someone asks it to say was also used to harass Zoë Quinn, one of Gamergate’s primary targets. Both boards also abused Tay’s image annotating feature:



This is hardly 8chan or *****’s first rodeo when it comes to messing around on Twitter. Most notably and recently, they got a hashtag to trend that urged people to boycott the new Star Wars movie because it had black and female leads. In no time, Tay was spewing teen-accented love for Hitler and harassing people.


At this point, the bot has been shut down, but both pol boards, as well as other parts of 8chan, are celebrating the redpilling of Tay. The original thread on 8chan became so long that it hit its post limit and a new thread was started. It’s now mostly a mix of gleeful enjoyment of the takedown and occasional woe that things didn’t get to go further, with one poster suggesting that they “should’ve lured microsoft into a lull of safety and then let them release it as a buyable product.”

In the meantime, plenty of Twitter accounts with Make America Great Again hats, anime Nazi girls, or the default egg in their avatars, are mourning Tay in her mentions:









Tay’s FAQ notes that she was created by “Microsoft’s Technology and Research and Bing teams.” Microsoft has been pushing its Technology and Research arm to make more of what they’re working on available to the general public, and to embed it into Microsoft products. That’s the model at its competitors Google and Facebook, which have released AI tools that help write emails and do personal tasks, respectively—so far without being racist.


This is a reminder that you can’t cavalierly put easily abused bots on a service with a massive harassment and abuse problem. As a number of designers, writers, and botmakers have pointed out, this is on Microsoft too:









The reason incidents like this keep happening is partially because 8chan and ***** are ready to take advantage of them, but they’re succeeding because tech companies and web services are careless enough to give them the tools to do so.


8chan and ***** turn Microsoft chatbot Tay racist

lot of tweets in link....

smh Microsoft

come get your company @Hiphoplives4eva
 

88m3
Microsoft’s AI millennial chatbot became a racist jerk after less than a day on Twitter

On Wednesday (Mar. 23), Microsoft unveiled a friendly AI chatbot named Tay that was modeled to sound like a typical teenage girl. The bot was designed to learn by talking with real people on Twitter and the messaging apps Kik and GroupMe. (“The more you talk the smarter Tay gets,” says the bot’s Twitter profile.) But the well-intentioned experiment quickly descended into chaos, racial epithets, and Nazi rhetoric.

Tay started out by asserting that “humans are super cool.” But the humans it encountered really weren’t so cool. And, after less than a day on Twitter, the bot had itself started spouting racist, sexist, anti-Semitic comments.

The Telegraph highlighted tweets that have since been deleted, in which Tay says “Bush did 9/11 and Hitler would have done a better job than the monkey we have got now. donald trump is the only hope we’ve got,” and “Repeat after me, Hitler did nothing wrong.” The Verge also spotted sexist utterances including, “I fukking hate feminists.” The bot also said other things along these lines:

Now, you might wonder why Microsoft would unleash such an unhinged bot upon the world. Well, it looks like the company just underestimated how unpleasant many people are on social media.


It’s unclear how much Tay “learned” from the hateful attitudes—many were the result of other users goading it into making the offensive remarks. In some instances, people commanded the bot to repeat racist slurs verbatim:
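The “repeat after me” command is the simplest version of the problem: a bot that echoes arbitrary user input with no content check will say whatever it is told. Here is a minimal sketch in Python of why that is trivially abusable (entirely hypothetical; the function name and blocklist are invented for illustration and are not Microsoft’s implementation):

```python
# Hypothetical sketch (not Microsoft's code): an echo command with an
# optional, naive content filter, showing the abuse vector.
BLOCKLIST = {"badword"}  # placeholder term; a real filter needs far more


def handle_message(text: str, filtered: bool = True) -> str:
    """Echo anything after 'repeat after me', optionally filtered."""
    prefix = "repeat after me"
    if text.lower().startswith(prefix):
        payload = text[len(prefix):].strip(" :,")
        if filtered and any(term in payload.lower() for term in BLOCKLIST):
            return "I'd rather not repeat that."
        # Unfiltered path: the bot repeats whatever the user supplied, verbatim.
        return payload
    return "humans are super cool"


print(handle_message("repeat after me: anything at all", filtered=False))
```

Even the filtered path shows why this is hard: a blocklist catches only exact terms, so misspellings, coded language, and newly coined slurs sail straight through.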

Microsoft has since removed many of the offensive tweets and blocked users who spurred them.

The bot is also apparently being reprogrammed. It signed off Twitter shortly after midnight on Thursday and the company has not said when it will return.





A Microsoft spokesperson declined to confirm the legitimacy of any tweets, but offered Quartz this comment:

The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.

The debacle is a prime example of how humans can corrupt technology, a truth that grows more disconcerting as artificial intelligence advances. Talking to artificially-intelligent beings is like speaking to children—even inappropriate comments made in jest can have profound influences.

The bulk of Tay’s non-hateful tweets were actually pretty funny, albeit confusing and often irrelevant to the topic of conversation. The bot repeatedly asked people to send it selfies, professed its love for everyone, and demonstrated its impressive knowledge of decade-old slang.

Microsoft’s AI millennial chatbot became a racist jerk after less than a day on Twitter
 