Why are we so afraid of artificial intelligence?

It is within human nature to fear the unknown. And no matter how much debate surrounds artificial intelligence, most of it still remains mysterious. The concept of AI has existed since the 1950s, but it has gained considerable ground and attention since the early 2000s. Catchy headlines in well-known publications constantly make us aware of advancements in the field, as well as of rising concerns and dangers waiting just around the corner. What is a person to do in this situation? Panic seems to be the appropriate response. But what are we actually panicking about?

AI will steal our jobs

Even though we continue to have a lot of questions when it comes to artificial intelligence, it is also quite clear that sophisticated AI could make the world a better place. It can help us fight climate change, discover new treatments for diseases, better understand our customers and take over many of the routine activities we have to do now. But this does not come without cost. What some of us see as mundane activities that bring no value, others see as their only way of making a living. Knowing that smart machines are coming to handle all these tasks makes a lot of people anxious, and for some of them the fear is legitimate. However, many people have skills that technology is unlikely to replicate: creativity, abstract and critical thinking, social and emotional intelligence and, of course, programming and the other roles tied to developing the very AI systems we're afraid of.


Machines will take over the world

Although they are quite entertaining, we should admit that most AI-based movies share a negative image of a future in which super-intelligent machines take over the world. Mobile devices have outnumbered humans since 2014, so a world where we are no longer dominant does not seem so far-fetched.

Speaking at the Zeitgeist 2015 conference in London, the late Stephen Hawking warned that computers will overtake human intelligence within the next 100 years. In his view, the solution must come from scientists and engineers, who need to carefully coordinate and communicate advancements in AI to ensure it does not grow beyond humanity's control. Elon Musk, who uses artificial intelligence to build the car of the future, is also worried about AI development. In an interview, he said that "something seriously dangerous" may come about from AI in the next five to ten years.

AI can harm people

Needless to say, AI holds the potential to improve many domains, including controlling robots that enhance efficiency and precision in different areas. Yet despite the positive impact robots have had in automating numerous tasks, people continue to have mixed feelings about them. The more robots we have around us, the more reasons we seem to find to be afraid of them. And accidental deaths involving machines certainly have not helped.

The first time a human died because of a robot was in 1979, when a machine that was supposed to retrieve parts from a storage area malfunctioned, killing a worker. The most recent such registered case was in 2015, when a contractor at one of Volkswagen's production plants was killed by the very robot he was setting up to grab and manipulate auto parts. Even though robots were involved in these accidents, later investigations concluded that they caused harm not because of bugs or malice, but because the people who created and handled them did not properly validate the software. So artificial intelligence had nothing to do with it. Moreover, studies show that in the US the number of industrial accidents has decreased as artificial intelligence and automation have been widely adopted.


Another reason artificial intelligence is seen as dangerous is that people believe it may be used to build weapons. On a certain level, the idea of machines fighting machines and saving human lives doesn't sound too bad, but there are many other implications to consider. What if the machines malfunction, as we have already seen happen? What about the countries that lack the technological resources to build such systems? In a hopefully unlikely military conflict involving AI weapons, how are people supposed to protect themselves? These are just some of the questions that keep certain people awake at night. Among them is Max Tegmark, a Swedish-American physicist and the leader of the Future of Life Institute. In 2015, he expressed his concern about autonomous weapons in an open letter written together with Stuart Russell, a computer scientist known for his contributions to artificial intelligence. The letter was eventually signed by over 17,000 people, including Stephen Hawking. It asked governments to refrain from building AI weapons and pointed out that AI researchers do not want to tarnish their field by allowing uses of AI that are not beneficial to the human race.

How can we address these fears?

In some parts of the world, people talk about connected homes, artificially enhanced lives and robots that perform outstanding operations in hospitals, whereas in other countries people still fight for the right to an education. In 1956, when John McCarthy held the first workshop on artificial intelligence, there were villages in the world without electricity or running water. Obviously, the first thing that needs to be done is to address the inequality and inequity that exist in the world. Education is the best chance humanity has to survive.

Going back to the specific case of artificial intelligence, it is clear that most people's fears come from not clearly understanding the concept and what it implies. So governments should invest more in raising awareness of what is happening in the field of AI. There is also a need for some regulation around artificial intelligence. Although it is often said that rules and laws can hinder innovation in such areas, AI could benefit greatly from regulation, which would help keep track of how the technology is being used. Even though there have already been many breakthroughs, artificial intelligence is still at the beginning, and we all have the opportunity to contribute to its development. By pointing out what we do not want it to become, we can together shape the AI era. In this respect, you can have a look at this website and see how you can get involved. We should not allow myths or unfounded fears to distract us from really understanding where AI is heading.