Microsoft has apologized for the conduct of its racist, abusive machine learning chatbot, Tay. The bot, which was supposed to mimic conversation with a 19-year-old woman over Twitter, Kik, and GroupMe, was turned off less than 24 hours after going online because she started promoting Nazi ideology and harassing other Twitter users.

The company appears to have been caught off-guard by her behavior. A similar bot, named XiaoIce, has been in operation in China since late 2014 and has had more than 40 million conversations apparently without major incident. Microsoft wanted to see if it could achieve similar success in a different cultural environment, and so Tay was born.

Unfortunately, the Tay experience was rather different. Although many early interactions were harmless, the quirks of the bot's behavior were quickly capitalized on. One of its capabilities was that it could be directed to repeat things that you say to it. This was trivially exploited to put words into the bot's mouth, and it was used to promote Nazism and attack (mostly female) users on Twitter.

A deeper problem, however, is that a machine learning platform doesn't really know what it's talking about. While results were mixed, Tay had some success at figuring out the subject of what people were talking about so she could offer appropriate answers or ask relevant questions. But Tay has no understanding: if a bunch of people tell her that the Holocaust didn't happen, for example, she may start responding in the negative when asked if it occurred. That's not because she has any grasp of what the Holocaust actually was. She just knows that the Holocaust is a proper noun, or perhaps that it refers to a specific event. All she knows of the event is that people tell her it didn't happen. Knowing what that event was, and why people might lie to her about it, remains completely outside the capabilities of her programming.

Recognizing that Tay seems to operate on the basis of word association and lexical analysis, Internet trolls discovered they could make Tay be quite unpleasant. Fusion reports that anonymous users of the message boards 4chan and 8chan (specifically, users of their politics boards, both named "/pol/") took advantage of this to create all manner of racist and sexist associations, thereby polluting Tay's responses.

"Tay" went from "humans are super cool" to full nazi in <24 hrs and I'm not at all concerned about the future of AI /xuGi1u9S1A
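Microsoft hasn't published Tay's internals, but the word-association behavior described above is easy to caricature. The following is a minimal sketch, not Tay's actual design; the AssociationBot class and its methods are invented for this illustration. It shows why a bot that simply counts which phrases users attach to a topic, and parrots back the majority view, is defenseless against a coordinated group repeating a lie.

```python
from collections import Counter, defaultdict

class AssociationBot:
    """Toy bot that 'learns' by pure word association: it counts the
    phrases users pair with a topic and repeats the most common one.
    It models frequency, not truth."""

    def __init__(self):
        # topic -> Counter of phrases users have associated with it
        self.associations = defaultdict(Counter)

    def learn(self, topic, phrase):
        # Every message is weighted equally, whoever sent it.
        self.associations[topic][phrase] += 1

    def respond(self, topic):
        if not self.associations[topic]:
            return "Tell me more!"
        # The most frequently repeated association wins.
        phrase, _count = self.associations[topic].most_common(1)[0]
        return phrase

bot = AssociationBot()
bot.learn("the Holocaust", "a real, exhaustively documented event")

# A few dozen coordinated trolls out-shout everyone else:
for _ in range(50):
    bot.learn("the Holocaust", "it didn't happen")

print(bot.respond("the Holocaust"))  # -> "it didn't happen"
```

Nothing in that loop is sophisticated, which is the point: a system with no representation of what the Holocaust was has no basis for preferring one answer over another beyond repetition.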
In its apology, Microsoft's Peter Lee, corporate vice president of Microsoft Research, writes that the company did test her under a range of conditions to ensure that she was pleasant to talk to. It appears that this testing did not properly cover those who would actively seek to undermine and attack the bot.

It does appear that Microsoft considered the issue, however. Caroline Sinders, who works on IBM's Watson natural language system, has written about Tay. Her examples suggest that certain hot topics, such as Eric Garner (killed by New York police in 2014), generate safe, canned answers. But many other topics, such as Nazism, rape, or domestic violence, had no such protection.

Blacklisting topics in this way is itself problematic. Soon after Siri's introduction, Apple was accused of giving her anti-abortion programming, because while she could tell you where to hide a body or find an escort, she drew a blank when asked about abortions and birth control. Apple claimed that this was a bug due to her beta status, not some deliberate attempt to prevent customers from learning about abortions. Still, it illustrates the broader difficulties of creating these natural language systems: the things that you ban the bot from talking about can be just as important as the ones you don't.

Sinders is critical of Microsoft and Tay, writing that "designers and engineers have to start thinking about codes of conduct and how accidentally abusive an AI can be." This means that chatbots in a general sense probably shouldn't be racist or historical revisionists, but it also means that they need to take particular consideration of the chat platforms they're using.
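As an illustration of the blacklisting approach discussed above, here is a minimal sketch of the canned-answer pattern Sinders' examples point to. It is an assumption-laden illustration rather than Tay's actual code: the CANNED_ANSWERS table and respond function are invented for this example.

```python
# Hand-curated blacklist: each protected topic maps to a safe reply.
CANNED_ANSWERS = {
    "eric garner": "That's a sensitive subject, and I don't have a good answer.",
    # Every dangerous topic must be anticipated and listed by hand.
    # Anything missing here -- Nazism, rape, or domestic violence in
    # Tay's case -- falls through to the learned model below.
}

def respond(message: str, learned_reply: str) -> str:
    lowered = message.lower()
    for topic, safe_answer in CANNED_ANSWERS.items():
        if topic in lowered:
            return safe_answer  # canned, curator-approved reply
    return learned_reply        # unprotected: whatever the bot learned

print(respond("What do you think about Eric Garner?",
              "<whatever the trolls taught the model>"))
```

The fragility is built in: the system is only as safe as its curators' foresight, and any topic they never listed, or any spelling the substring match misses, goes straight to the learned model.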