Tuesday, April 3, 2018

Stopping racist AI is as difficult as stopping racist people


In early 2016, Microsoft launched Tay, an AI chatbot that was supposed to mimic the behavior of a curious teenage girl and engage in smart discussions with Twitter users. The project was intended to showcase the promise and potential of AI-powered conversational interfaces. However, in less than 24 hours, the innocent Tay turned into a racist, misogynistic, Holocaust-denying AI, debunking, once again, the myth of algorithmic neutrality. For years, we’ve assumed that artificial intelligence doesn’t suffer from the prejudices and biases of its human creators because it’s driven by pure, hard, mathematical logic. Yet as Tay and several other stories have shown, AI…

This story continues at The Next Web
