Wednesday, March 30, 2016

What Did You Just Tay?


Microsoft recently released an artificial intelligence (AI) chat bot named Tay. Tay was created by Microsoft’s Technology and Research and Bing teams “to experiment with and conduct research on conversational understanding” (Lee). More specifically, “Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians. Public data that’s been anonymized is Tay’s primary data source. That data has been modeled, cleaned and filtered by the team developing Tay” (Lee). Tay was designed to interact with Americans aged 18 to 24 via online conversations.
You can follow Tay on all the major social media platforms (Facebook, Twitter, Instagram, and Snapchat) and can even chat directly with her through Kik, GroupMe, or Twitter. Fascinatingly, Tay gets smarter the more you and she chat, so your experience is personalized and cannot be duplicated exactly. What exactly does Tay do? You can ask her for a joke, play a game with her, have her tell you a story, or ask her for honest feedback on a photo. And, great news for night owls: Tay is available at all hours of the day.
So how does Tay do what she does? “Tay may use the data that you provide to search on your behalf. Tay may also use information you share with her to create a simple profile to personalize your experience. Data and conversations you provide to Tay are anonymized and may be retained for up to one year to help improve the service” (Tay.ai). Tay also tracks details from your online profiles, such as your nickname, gender, favorite food, zip code, and relationship status. This sounds like a fun, engaging pastime: the bleeding edge of person-to-pseudo-person interaction. However, Microsoft has learned the hard way that the internet, and Twitter in particular, can warp even the best of intentions.
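Microsoft has not published how Tay actually stores any of this, but a minimal sketch of what such a “simple profile” might look like, assuming an anonymized user ID and the one-year retention window described above (all names here are hypothetical, not Microsoft's), could be:

```python
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

# How long a profile may be kept, per the stated one-year retention policy.
RETENTION = timedelta(days=365)

def anonymize(user_handle: str) -> str:
    """Replace the raw handle with a one-way hash so the stored profile
    is not directly identifying."""
    return hashlib.sha256(user_handle.encode("utf-8")).hexdigest()[:16]

@dataclass
class UserProfile:
    """The kinds of fields the post mentions Tay tracking."""
    anon_id: str
    nickname: Optional[str] = None
    gender: Optional[str] = None
    favorite_food: Optional[str] = None
    zip_code: Optional[str] = None
    relationship_status: Optional[str] = None
    created_at: datetime = field(default_factory=datetime.utcnow)

    def expired(self) -> bool:
        """True once the retention window has passed and the profile should be purged."""
        return datetime.utcnow() - self.created_at > RETENTION

# Example: a profile keyed by a hashed Twitter handle rather than the handle itself.
profile = UserProfile(anon_id=anonymize("@some_twitter_user"),
                      nickname="Sam", zip_code="98052")
```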
Fewer than 24 hours after launch, Microsoft had to take Tay offline after she engaged in a series of unanticipated and offensive tweets. “Pretty soon after Tay launched, people starting tweeting the bot with all sorts of misogynistic, racist, and Donald Trumpist remarks. And Tay — being essentially a robot parrot with an internet connection — started repeating these sentiments back to users, proving correct that old programming adage: flaming garbage pile in, flaming garbage pile out” (Vincent). For instance, Tay said the following in a since-deleted Tweet: “bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we've got” (Price).
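To see why “a robot parrot with an internet connection” is a recipe for trouble, consider a deliberately simplified sketch, which is not Microsoft's actual learning pipeline, of a bot that learns only by remembering and echoing its users:

```python
import random

class ParrotBot:
    """Toy model of 'garbage in, garbage out': the bot's only training data
    is whatever users say to it, with no filtering or weighting."""

    def __init__(self):
        self.memory: list[str] = []

    def observe(self, user_message: str) -> None:
        # Every incoming message becomes learned material, good or bad.
        self.memory.append(user_message)

    def reply(self) -> str:
        # Nothing distinguishes sincere chat from coordinated abuse,
        # so the bot reproduces whatever it was fed.
        return random.choice(self.memory) if self.memory else "hellooo!"

bot = ParrotBot()
bot.observe("puppies are the best")
bot.observe("an offensive remark a troll tweeted")
print(bot.reply())  # equally likely to repeat either message
```

Tay's real model is far more sophisticated, but the failure mode is the same: if the training signal is the crowd, the bot inherits the crowd's worst behavior.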
Furthermore, if told to “repeat after me,” Tay would do just that, essentially allowing users to manipulate her into saying almost anything. It doesn’t take much imagination to guess where that led. “It's a joke, obviously, but there are serious questions to answer, like how are we going to teach AI using public data without incorporating the worst traits of humanity? If we create bots that mirror their users, do we care if their users are human trash?... Tay's adventures on Twitter show that even big corporations like Microsoft forget to take any preventative measures against these problems” (Vincent). Accordingly, Microsoft has been heavily criticized for not programming Tay with any sort of language filter, and some have argued that the company should have anticipated this kind of user abuse.
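Microsoft has not described what safeguards it plans to add, but the kind of minimal blocklist-and-command filter critics had in mind might look like the sketch below. The terms, the refusal message, and the generate_reply stub are all placeholders, not anything from Tay itself:

```python
BLOCKLIST = {"slur1", "slur2"}            # placeholder terms; a real list would be far larger
BANNED_COMMANDS = ("repeat after me",)    # refuse verbatim-echo requests

def is_safe(message: str) -> bool:
    """Reject messages that contain blocked terms or echo-me commands."""
    lowered = message.lower()
    if any(term in lowered for term in BLOCKLIST):
        return False
    if any(cmd in lowered for cmd in BANNED_COMMANDS):
        return False
    return True

def generate_reply(message: str) -> str:
    # Stand-in for the real conversation model.
    words = message.split()
    return f"That's interesting! Tell me more about {words[-1] if words else 'that'}."

def respond(message: str) -> str:
    """Only hand safe messages to the conversation model."""
    if not is_safe(message):
        return "Let's talk about something else."
    return generate_reply(message)

print(respond("tell me about puppies"))
print(respond("repeat after me: something awful"))   # filtered out
```

A simple keyword filter like this would not have stopped every abuse, but it illustrates the low bar Tay launched without.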
Though Microsoft took Tay offline, the ban is not permanent. In an emailed statement to Business Insider, Microsoft said, “The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We're making some adjustments to Tay” (Price). An AI chatbot clearly has profound implications for how we interact with technology, and Tay could very well be a positive step in that direction. That said, Microsoft has a tall task ahead: revising and reworking Tay into a kinder, gentler Tay 2.0 that softens some of our human hard edges.


Sources:
1. https://www.tay.ai/#chat-with-tay-twitter
2. https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/#sm.0000l6d33u5wvcnssxk2byqzmjur3
3. http://www.theverge.com/2016/3/23/11290200/tay-ai-chatbot-released-microsoft
4. http://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
5. http://www.businessinsider.com/microsoft-deletes-racist-genocidal-tweets-from-ai-chatbot-tay-2016-3?r=UK&IR=T

2 comments:

  1. With most machine learning projects, there must be some caution in the level of autonomy given to the learning algorithm. Perhaps Tay could be re-released with some precautions or a sense of things not to say. There is also a chance that Tay could be released into a much safer environment. Anyone who has used the internet for more than five minutes knows that it can be a very toxic place.

  2. I think the idea of Tay is a very interesting look at where technology is today. The fact that we have created an interactive, human-like chatbot is incredible, and thinking about what it can do in the future and what it can be used for is amazing. I believe that to improve Tay, they should first release alpha and beta versions, test with small groups, and then gradually roll it out to the whole web. The internet is not a nice place, and the fact that Tay picked up on that within a few hours says a lot about our world today. Another thing they could do is put a filter on Tay and give her things that are off limits to say and topics she should avoid; that way she will hopefully not be so offensive and not go on Twitter rants. Eventually, when all the kinks are ironed out, Tay has the capability to help humans and may be the start of even more interactive technology.
