If you’ve been on social media, or the internet in general, over the last few days, you have probably come across the foreboding news that Facebook had to shut down an AI experiment after a pair of bots seemingly developed their own language. If you stuck around long enough to verify that claim, you will also have come across reports that the actual events were quite different.
The bots were built by Facebook AI Research (FAIR) as part of an experiment to develop AI that can negotiate with humans. During the experiment, the two bots were made to talk to each other. This is how some of the conversation went:
Bob: I can i i everything else
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else
Alice: balls have a ball to me to me to me to me to me to me to me to me
To understand what actually happened, you need to know what Facebook was trying to do here. As explained in a blog post, FAIR was using these bots to determine whether it was “possible for dialog agents with differing goals (implemented as end-to-end-trained neural networks) to engage in start-to-finish negotiations with other bots or people while arriving at common decisions or outcomes.” And FAIR does believe its efforts were a success. So what did we miss here? Why does the above conversation look more like something out of a dystopian AI novel predicting the end of humanity than a harmless exchange about splitting up books, balls and hats?
What went wrong?
While setting up the experiment, the developers had left out instructions that would have kept the bots conversing in human-like English. The result demonstrated something we already knew – that AI can develop and modify language to communicate more efficiently. Facebook did shut the experiment down, but only because the bots were doing something of no use to the researchers.
Yet, over the last few days, the story has been reported by almost every major news outlet. Beyond social media, it has also spread incessantly across instant-messaging groups and message boards, to the point where almost everyone believes it. And this is exactly why we could probably use the help of AI.
AI to combat fake news
Let us get this straight: we are still a long way from true AI. Right now, all AI can do is process huge amounts of data really fast, and that one ability accounts for everything it has achieved so far. One of those achievements is an effort to root out fake news online – news like the AI bot hoax, ironically.
This June, the first Fake News Challenge (FNC-1) was conducted by Fake News Challenge, a volunteer-run organisation aiming to tackle the problem of fake news with technology. The task was to look at a news headline and a body text and establish the stance the body takes towards the headline: ‘agrees’, ‘disagrees’, ‘discusses’ or ‘unrelated’. The top three teams achieved more than 80% accuracy in doing so, and that is good (and real) news.
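To get a feel for the stance-detection task, here is a deliberately naive sketch – not any FNC-1 entrant’s method. It uses bag-of-words cosine similarity between headline and body to separate ‘unrelated’ from ‘discusses’; the function names and the 0.1 threshold are our own illustrative choices, and telling ‘agrees’ from ‘disagrees’ would need an actual trained model.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def stance(headline, body, threshold=0.1):
    """Toy stance check: 'unrelated' if the body shares almost no
    vocabulary with the headline, otherwise 'discusses'.
    Distinguishing 'agrees' from 'disagrees' needs a trained
    classifier, which word overlap alone cannot provide."""
    sim = cosine_similarity(Counter(tokenize(headline)),
                            Counter(tokenize(body)))
    return "unrelated" if sim < threshold else "discusses"

headline = "Facebook shuts down AI experiment"
related_body = ("Facebook researchers ended an AI negotiation "
                "experiment after the bots drifted from plain English.")
unrelated_body = ("The local football team won its third match "
                  "of the season on Saturday.")

print(stance(headline, related_body))    # discusses
print(stance(headline, unrelated_body))  # unrelated
```

Real FNC-1 systems went far beyond this, combining engineered text features with machine-learned models, but the input/output shape – headline and body in, one of four stance labels out – is the same.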
As we said, AI is not going to do your job for you just yet – especially when it comes to detecting fake news. But one of the biggest problems human fact-checkers face is the sheer volume of articles and posts out there. What this AI could do is make it far easier to decide which articles are worth checking for accuracy, and so speed up the detection process. Additionally, the AI can learn from the decisions humans make on its output, taking it one step further towards being able to call out fake news on its own.
But that day is not coming anytime soon.