Ars Technica by Peter Bright

Tay, the neo-Nazi millennial chatbot, gets autopsied

A user told Tay to tweet Trump propaganda; she did (though the tweet has now been deleted).

Microsoft has apologized for the conduct of its racist, abusive machine learning chatbot, Tay. The bot, which was supposed to mimic conversation with a 19-year-old woman over Twitter, Kik, and GroupMe, was turned off less than 24 hours after going online because she started promoting Nazi ideology and harassing other Twitter users.

The company appears to have been caught off-guard by her behavior. A similar bot, named XiaoIce, has been in operation in China since late 2014. XiaoIce has had more than 40 million conversations apparently without major incident. Microsoft wanted to see if it could achieve similar success in a different cultural environment, and so Tay was born….

A deeper problem, however, is that a machine learning platform doesn’t really know what it’s talking about. While results were mixed, Tay had some success at figuring out the subject of what people were talking about so it could offer appropriate answers or ask relevant questions. But Tay has no understanding; if a bunch of people tell her that the Holocaust didn’t happen, for example, she may start responding in the negative when asked whether it occurred[…]
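To see why that failure mode is so easy to trigger, here is a minimal sketch of a hypothetical learn-from-conversation bot (not Microsoft’s actual Tay implementation; the class and method names are invented for illustration). With no grounding in facts, whichever answer users repeat most often simply becomes the bot’s “belief”:

```python
from collections import Counter, defaultdict

class NaiveLearningBot:
    """Toy bot that 'learns' answers purely from what users tell it."""

    def __init__(self):
        # question -> counts of the answers users have asserted
        self.learned = defaultdict(Counter)

    def hear(self, question: str, asserted_answer: str) -> None:
        # Every user assertion is treated as training signal.
        self.learned[question.lower()][asserted_answer] += 1

    def reply(self, question: str) -> str:
        answers = self.learned.get(question.lower())
        if not answers:
            return "I don't know."
        # The most-repeated assertion wins, true or not.
        return answers.most_common(1)[0][0]

bot = NaiveLearningBot()
bot.hear("did event X happen?", "yes")       # one honest user
for _ in range(50):                          # a coordinated group
    bot.hear("did event X happen?", "no")
print(bot.reply("did event X happen?"))      # prints "no"
```

A handful of coordinated users outvotes reality, which is essentially what happened to Tay within hours of going online.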

The reason China has had no such incidents is that both XiaoIce and any person deemed offensive by China’s Communist regime would spend the rest of their existence, or non-existence in XiaoIce’s case, in Qincheng Prison.

Should we worry about Tay the robot or others created in its likeness? How about the handlers of these bots? I’ll let you be the judge of that.

 
