
Fear Shaming AI

By Yina Moe-Lange  

 

These days there are a lot of conversations about the future of Artificial Intelligence (AI) and its role in society. Many of these are productive conversations about the benefits of implementing AI in areas such as the health care industry. But at the same time, many articles have been published about the dangers of AI and how it will lead to the end of the human race. A few minutes on Google looking into AI will lead you to articles like the following:

 

“Elon Musk’s Billion-Dollar Crusade to Stop the AI Apocalypse”

“If Silicon Valley Types Are Scared of AI, Should We Be?”

“Robots will destroy our jobs – and we’re not ready for it”

“Stephen Hawking warns artificial intelligence could end mankind”

 

These types of fear-inducing and generally misleading titles prevent us from having productive conversations about the use of AI and how it can positively and constructively improve humankind.

 

One of the latest AI fear-mongering episodes began when several mainstream news outlets reported that Facebook had to shut down its AI because it had developed its own method of communication. It turns out the situation was not quite as extreme as it was first reported to be, with headlines like the following:

 

“Facebook engineers panic, pull plug on AI after bots develop their own language”

or

“Researchers shut down AI that invented its own language”

 

In reality, the situation was not the first step in a robotic takeover of Earth. Facebook Artificial Intelligence Research (FAIR) is the research group within Facebook that has created AI bots to chat within Facebook Messenger. One of its projects has focused on building dialog agents with the ability to negotiate. The FAIR team crowdsourced a dataset of negotiations in which people were asked to divide a set of objects between themselves. They then trained a Recurrent Neural Network (RNN) on this data, having the bot learn to negotiate by imitating human decisions. Because it is trained purely on human language and decisions, this end-to-end supervised approach allows the neural network to be adapted to other tasks.
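
For readers curious what that supervised step looks like in practice, here is a minimal sketch of next-token imitation learning with a recurrent model, written in PyTorch. The vocabulary size, dimensions, and toy batch below are illustrative placeholders, not FAIR's actual architecture or dataset.

```python
# Minimal sketch: train a GRU to imitate human negotiation dialogues by
# predicting the next token. All sizes and the random "data" are placeholders.
import torch
import torch.nn as nn

VOCAB_SIZE = 2000   # hypothetical vocabulary of negotiation tokens
EMBED_DIM = 64
HIDDEN_DIM = 128

class NegotiationRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.gru = nn.GRU(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.out = nn.Linear(HIDDEN_DIM, VOCAB_SIZE)

    def forward(self, tokens):
        hidden_states, _ = self.gru(self.embed(tokens))
        return self.out(hidden_states)  # logits over the next token at each step

model = NegotiationRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A toy batch of token ids standing in for the crowdsourced human dialogues.
batch = torch.randint(0, VOCAB_SIZE, (8, 20))
inputs, targets = batch[:, :-1], batch[:, 1:]

optimizer.zero_grad()
logits = model(inputs)
loss = loss_fn(logits.reshape(-1, VOCAB_SIZE), targets.reshape(-1))
loss.backward()
optimizer.step()
```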

 

The FAIR team experimented with their bots by having them negotiate with humans. They report that the humans in these experiments did not realize they were communicating with bots, which meant the bots had “learned to hold fluent conversations in English in this domain.” Impressively, bots trained with Reinforcement Learning (RL) and using dialog rollouts matched the performance of the human negotiators.
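
As a rough illustration of the dialog-rollout idea, the sketch below scores each candidate utterance by simulating possible continuations of the negotiation and picking the one with the highest average outcome. The candidate-sampling and scoring functions here are placeholders standing in for the learned model, not FAIR's implementation.

```python
# Minimal sketch of "dialog rollouts": score each candidate utterance by
# simulating how the rest of the negotiation might play out, then pick the
# candidate with the highest average final reward.
import random

def sample_candidates(state, n=5):
    # Placeholder: in practice, candidates are sampled from the dialog model.
    return [f"utterance_{i}" for i in range(n)]

def simulate_to_end(state, utterance):
    # Placeholder: roll the conversation forward (the model playing both sides)
    # until an agreement or walk-away is reached, then score the final deal.
    return random.uniform(0.0, 10.0)

def choose_utterance(state, n_candidates=5, n_rollouts=10):
    best, best_score = None, float("-inf")
    for cand in sample_candidates(state, n_candidates):
        # Average the reward over several simulated completions of the dialog.
        score = sum(simulate_to_end(state, cand) for _ in range(n_rollouts)) / n_rollouts
        if score > best_score:
            best, best_score = cand, score
    return best

print(choose_utterance(state={"items": {"books": 3, "hats": 1}}))
```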

 

The reason the FAIR team shut down the AI bots was that they began speaking in their own shorthand in order to negotiate with each other more efficiently. For the bots, communicating in English was not the most efficient way to negotiate, and there was no clearly defined reward for sticking to English.
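
To see why that matters, consider an illustrative reward function: if it scores only the final deal, nothing in the objective favors intelligible English, so shorthand that closes better deals looks just as good to the learner. The sketch below contrasts that with a hypothetical “anchored” reward that also credits utterances the human-imitation model finds probable; the anchoring term is a generic remedy assumed for illustration, not necessarily FAIR's exact fix.

```python
# Illustrative only: a task-only reward gives no incentive to stay in English.
def deal_value(agreement, my_item_values):
    # Points the agent scores for the items it ends up with.
    return sum(my_item_values[item] * count for item, count in agreement.items())

def task_only_reward(agreement, my_item_values):
    # No term here cares whether the dialog was intelligible English.
    return deal_value(agreement, my_item_values)

def anchored_reward(agreement, my_item_values, english_log_likelihood, weight=0.1):
    # Hypothetical shaping: also reward utterances the supervised (human-like)
    # model finds probable, discouraging drift into private shorthand.
    return deal_value(agreement, my_item_values) + weight * english_log_likelihood

print(task_only_reward({"books": 2, "hats": 0}, {"books": 3, "hats": 1}))       # 6
print(anchored_reward({"books": 2, "hats": 0}, {"books": 3, "hats": 1}, -4.0))  # 5.6
```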

 

Though it is easy to fall for extreme news headlines, we are not quite at the point of a robotic takeover. Machine Learning and AI are still far from exceeding human performance. In the meantime, there are many unique and efficient applications of AI that can enhance different aspects of human life.

 

P.S. On the heels of this comes the announcement that Facebook has bought Ozlo to increase its conversational AI efforts. Here is the open-source code that they use for bot negotiation. Read more about Facebook’s AI negotiation training here.