Tay is a chatbot designed by Microsoft to interact with users around the world via Twitter. The program is an ambitious attempt to break down the barrier between machine and human. We have been using search engines for a long time, and when Siri first came out and started talking back to us, everybody was excited.

So far so good, but after a day of hanging out with people, the results were… expected. By the end of its first day, the chatbot was posting offensive, racist and otherwise appalling comments. Apparently, a group of users figured out how to exploit a couple of “bugs” in the program and set about ruining little Tay’s reputation.

Microsoft went a step further and allowed the AI to interact with and learn from real human beings. Credit: ABC7Chicago.

As a temporary measure, the company decided to take the program offline and erase most of the offending tweets from public view. They plan to adjust the software to avoid this kind of problem, which means, in effect, teaching the chatbot how to deal with the average internet user.
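For the curious, here is a minimal sketch of what one small part of that kind of adjustment could look like, assuming a hypothetical bot that "learns" simply by storing user messages for later reuse. The blocklist, the `learn_from` function and everything else below are illustrative assumptions, not Microsoft's actual code; real moderation is far more sophisticated than a keyword filter.

```python
import re

# Purely hypothetical sketch: a bot that "learns" by storing user messages
# for later reuse, with the simplest possible safeguard bolted on -- a
# blocklist that refuses to learn from flagged input. This does NOT
# reflect Microsoft's actual implementation.

BLOCKLIST = {"exampleslur", "examplepropaganda"}  # placeholder terms only

learned_phrases: list[str] = []

def tokenize(message: str) -> list[str]:
    """Lowercase a message and split it into word-like tokens."""
    return re.findall(r"[a-z']+", message.lower())

def learn_from(message: str) -> bool:
    """Store a user message for reuse unless it trips the blocklist."""
    if any(token in BLOCKLIST for token in tokenize(message)):
        return False  # rejected: flagged input never enters the vocabulary
    learned_phrases.append(message)
    return True

learn_from("hello there")      # accepted and stored
learn_from("an exampleslur")   # rejected by the filter
print(learned_phrases)         # ['hello there']
```

The obvious weakness of a scheme like this, and presumably part of what tripped Tay up, is that a static keyword list cannot anticipate every way users will phrase something offensive, which is why filtering what a bot learns is much harder than filtering what it says.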

There are no details on exactly what will change. Some say the problem does not lie in the software at all, but in internet culture itself. A Microsoft spokeswoman said in a statement that the chatbot “is as much a social and cultural experiment, as it is technical”.

The most infamous comments touch on controversial and complicated topics such as racism, politics and religion. In other words, a multimillion-dollar company invested time and effort to give people the rare opportunity to interact with an artificial intelligence that actually learns, and what did they do? They trolled it. Some users say the mistake was Microsoft’s, and a lot of people think they are right: the company was too naive to trust average people with such a unique opportunity.

Microsoft did not specify when it will relaunch the chatbot, but the company says it has already found ways to reprogram it to deal with such problems in the future.

Source: PC World