Terminator is a well-known American science-fiction franchise created by James Cameron and Gale Anne Hurd. The series comprises five movies released to date: The Terminator (1984), Terminator 2: Judgment Day (1991), Terminator 3: Rise of the Machines (2003), Terminator Salvation (2009) and Terminator Genisys (2015), with another one - Terminator 6 - already announced for 2019.
The central theme of the franchise is the battle for survival between the nearly-extinct human race and the world-spanning synthetic intelligence that is Skynet. At the risk of being slightly superficial and giving away spoilers, let's just say that humans are, generally speaking, not doing very well throughout the movies.
So this raises the question: is such a scenario possible? Have the advancements of AI brought us closer to our doomsday? Are we going to be exterminated like meaningless ants by our own creations?
We wanted to lay down some of our thoughts around this idea, so we ended up putting this article together. In it, we want to explore the likelihood of certain human race ending scenarios as well as talk about the real problems that we think AI brings to the table.
But, before we do that ...
Time traveling the AI field
In order to better understand what the societal implications of AI are and to make (decently) accurate predictions about the future, we believe it helps to have a bird's-eye view of the evolution of AI.
AI was founded as an academic discipline in 1956, and in the years since, it has experienced several waves of optimism followed by disappointment and the loss of funding. Yes, you read that right - AI had a lot of hiccups along the way. These happened due to a combination of unrealistic expectations and insufficient computing power. Such times are now known as AI winters: hype cycles followed by disappointment and criticism, then funding cuts, and renewed interest years or decades later.
The key here is the early expectations of AI. Around that time, the goal was to construct something that was very much like human intelligence. Hence the vision of robots capable of mimicking or even surpassing humans in every field and at every task. This, in turn, spawned a plethora of science fiction books, which only reinforced this grim image of the future.
Constructing a human-like intelligence is no easy task (especially without real computing power), so, in hindsight, it is no surprise that the early efforts were unsuccessful (hence the AI winters that followed). But the idea stuck with us through the years, and the literature helped by emphasising the doomsday scenario. So, unfortunately, a lot of these ideas are still being promoted today and taken as true. It has been claimed that AI is a path to world domination. Others make even more extraordinary statements: that AI marks the end of humanity (in about 20-30 years from now), that life itself will be transformed in the “Age of AI“, and that AI is a threat to our existence.
The truth is that nowadays, AI methods tend to focus on breaking a problem into a number of smaller, isolated and well-defined problems and solving them one at a time. Modern AI is bypassing grand questions about the meaning of intelligence, the mind, and consciousness, and focuses on building practically useful solutions to real-world problems.
The AI that surrounds us is by no means looking to replace us as human beings, but merely to help us by automating various processes in our lives. Its role is complementary.
So let's try to debunk three common myths about destructive AIs.
The Terminator

One of the most pervasive and persistent ideas related to the future of AI is the Terminator. Think of the image of a brutal humanoid robot with a metal skeleton and glaring eyes that wants you dead at all costs. That's basically the Terminator in a nutshell.
So why is this scenario unrealistic?
The idea that a superintelligent, conscious AI that can outsmart humans emerges as an unintended result of developing AI methods is naive. As we've seen, modern AI focuses on automated reasoning, based on the combination of perfectly understandable principles and plenty of input data, both of which are provided by humans or by systems deployed by humans. To think that common algorithms such as the nearest neighbour classifier or linear regression could somehow spawn consciousness and start evolving into superintelligent AI minds is far-fetched in our opinion.
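To see just how mechanical such an algorithm is, here is a minimal nearest neighbour classifier written out in full (a purely illustrative sketch with made-up toy data, not code from any real system). The entire "intelligence" is a distance computation and a lookup:

```python
# A minimal 1-nearest-neighbour classifier. The whole algorithm fits in
# a few lines: measure distances, pick the closest labelled example.
import math

def nearest_neighbour(train, query):
    """Return the label of the training point closest to `query`.

    train: list of (features, label) pairs, features being tuples of numbers.
    """
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    features, label = min(train, key=lambda pair: distance(pair[0], query))
    return label

# Toy data: points on a plane, labelled by which side of the plane they sit on.
train = [((1, 1), "left"), ((2, 3), "left"), ((8, 2), "right"), ((9, 9), "right")]
print(nearest_neighbour(train, (1.5, 2)))  # -> left
print(nearest_neighbour(train, (8.5, 5)))  # -> right
```

There is no hidden reasoning here that could "wake up" - just arithmetic over data that humans supplied.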
Maybe building human-level intelligence is not categorically impossible. But superintelligence will certainly not emerge from developing narrow AI methods and applying them to solve real-world problems.
The singularity

The singularity is one of the favourite ideas of those who believe in superintelligent AIs. In short, the technological singularity refers to a system that optimises and “rewires“ itself so that it can improve its own intelligence at an ever-accelerating, exponential rate.
So why is this scenario unrealistic?
The idea of exponential intelligence increase is unrealistic for the simple reason that even if a system could optimise its own workings, it would keep facing more and more difficult problems that would slow down its progress. This would be similar to the progress of human scientists requiring ever greater efforts and resources from the whole research community and indeed the whole society, which the superintelligent entity wouldn’t have access to.
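The argument can be made concrete with a toy model (purely illustrative - the growth rule and rates are our own arbitrary assumptions, not data): if each improvement gets harder as the system gets smarter, growth is strongly sublinear rather than exponential.

```python
# Toy model of self-improvement with rising difficulty. Each step the
# system converts a fixed effort into extra "intelligence", but the cost
# of the next improvement grows with the level already reached.
def improvement_trajectory(steps, effort=1.0):
    intelligence = 1.0
    history = []
    for _ in range(steps):
        difficulty = intelligence ** 2       # harder problems at higher levels
        intelligence += effort / difficulty  # so each gain is smaller than the last
        history.append(intelligence)
    return history

traj = improvement_trajectory(1000)
# Intelligence keeps rising, but most of the gain happens early on -
# the opposite of a runaway exponential.
print(traj[9], traj[99], traj[999])
```

Under these (admittedly cartoonish) assumptions the curve flattens out; an exponential take-off would require each improvement to get easier, not harder.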
Also, every time we make progress in AI technology, we become more powerful and better at controlling any potential risks it poses.
The value alignment problem
Imagine an intelligent, but not conscious AI system that is controlled by humans. The system can be programmed, for example, to optimise the production of paperclips. Sounds innocent enough, doesn’t it?
However, if the system possesses superior intelligence, it will soon reach the maximum level of paperclip production that the available resources, such as energy and raw materials, allow. After this, it may come to the conclusion that it needs to redirect more resources to paperclip production. In order to do so, it may need to prevent the use of the resources for other purposes even if they are essential for human civilisation. The simplest way to achieve this is to kill all humans, after which a great deal more resources become available for the system’s main task, paperclip production.
So why is this scenario unrealistic?
Truth be told, this scenario raises ethical problems, and confirming or debunking it would require some degree of moral discussion as well. That being said, suppose we somehow managed to create a superintelligent system that could defeat any humans trying to interfere with its work. It’s reasonable to assume that such a system would also be intelligent enough to realise that when we say “make me paperclips”, we don’t really mean to turn the Earth into a paperclip factory on a planetary scale.
Ending the list of unlikely scenarios
The above ideas, although great for science fiction and for selling books, are quite unrealistic and will most likely never happen to their full extent. The Terminator is a great story to make movies about, but hardly a real problem worth panicking over. It is a gimmick, an easy way to get a lot of attention, a poster boy for journalists to increase click rates, a red herring that diverts attention away from the real social and political issues the human race has.
Does that mean AI is harmless? Well, we believe that there are some things we must immediately address and temper within the AI field. And that these represent the real issues with AI, not the doomsday scenarios presented so far.
So let's explore them together...
The real problems with AI
AI does have social implications, and not being aware of them or their effects leads to a downward spiral that accelerates their negative impact.
Reinforcing biases through algorithms
Humans are biased. In a lot of ways. And some biases are extreme and downright harmful (racism, for example). Machine learning, as a sub-discipline of AI, is being used to make important decisions in many sectors through the algorithms employed. This brings up the concept of algorithmic bias: the embedding of a tendency to discriminate according to ethnicity, gender, or other factors when making decisions about job applications, bank loans, and so on.
However, the main reason for algorithmic bias is human bias in the data. Humans are the ones who, through their past decisions, constructed the training and test data sets, and the machine simply relies on that. For example, when a job application filtering tool is trained on decisions made by humans, the machine learning algorithm may learn to discriminate against women or individuals with a certain ethnic background. Notice that this may happen even if ethnicity or gender are excluded from the data, since the algorithm will be able to exploit the information in the applicant’s name or address.
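The proxy effect is easy to demonstrate with a deliberately tiny sketch (the names, decisions, and the trivial "model" below are all made up for illustration; no real dataset or library is involved). Gender never appears in the features, yet a model trained on biased historic decisions reproduces the bias through the name alone:

```python
# Sketch of proxy-variable bias. The historic hiring decisions below are
# (hypothetically) biased against one group; gender is NOT a feature,
# but the first name correlates with it.
from collections import defaultdict

history = [
    ("James", "hired"), ("Robert", "hired"), ("Michael", "hired"),
    ("Mary", "rejected"), ("Linda", "rejected"), ("Susan", "rejected"),
    ("James", "hired"), ("Mary", "rejected"),
]

# A trivially simple "model": the majority historic decision per name.
votes = defaultdict(lambda: defaultdict(int))
for name, decision in history:
    votes[name][decision] += 1

def predict(name):
    # For names seen in training, the model simply replays the historic
    # pattern - bias included - even though gender was never an input.
    return max(votes[name], key=votes[name].get)

print(predict("James"))  # -> hired
print(predict("Mary"))   # -> rejected: the bias survives, via the proxy
```

A real system would generalise with a far more sophisticated model, but the failure mode is the same: if the labels encode discrimination, any correlated feature lets the algorithm reconstruct it.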
Seeing is no longer believing

We believe what we see. We interpret the things we perceive with our eyes as reality, and we fully trust this extremely important sense of ours. We are visual creatures.
For example, when we see photo evidence from a crime scene or from a demonstration of a new tech gadget, we put more weight on the evidence than on a written report explaining how things look.
But with the advent of AI techniques regarding image recognition and generative images, the things we see may no longer represent reality. AI is taking the possibilities of fabricating evidence to a whole new level.
For example, Face2Face is a system capable of identifying the facial expressions of a person and putting them on another person’s face in a YouTube video. Or consider Lyrebird, a tool for automatically imitating a person’s voice from a few minutes of sample recording. It makes a pretty good impression, despite a still noticeably robotic quality to the voice.
Changing the dynamics of how we work
One thing has been constant throughout human evolution: progress through automation.
First, we evolved by constructing better tools: to hunt, to gather, to protect ourselves. Over time, our inventions have gotten better and better. The 1700s gave us the steam engine - an easily portable form of machine power that greatly improved the efficiency of factories as well as ships and trains. Automation has always been a path to efficiency: getting more with less. AI is a natural continuation of this progress.
However, with every step towards better automation, we changed the working life. With the steam engine, there was less need for horses and horsemen; with the computer, there is less need for typists, manual accounting, and many others. With AI and robotics, there is even less need for many kinds of dull, repetitive work.
Now, it's hard to predict how AI is going to evolve, but one thing is certain: advancements in this field will replace various types of jobs. There have been some estimates of the extent of job automation, ranging up to 47% of US jobs being at risk, as reported by researchers at the University of Oxford. We believe numbers like these should be taken with a grain of salt, but we cannot deny that job dynamics are changing.
There are a lot of tasks that are likely to be automated, and we can already observe some clear signs: self-driving vehicles, and customer-service applications such as helpdesks, are slowly replacing the human component in a very cost-effective fashion.
It is true that, in the past, every time one kind of work has been automated, people have found new kinds to replace it. The new kinds of work are less repetitive and routine, and more variable and creative. The issue with the current rate of advance of AI and other technologies is that, during the career of a single individual, the change in working life might be greater than ever before. Such an abrupt change could lead to mass unemployment, as people wouldn’t have time to train themselves for other kinds of work.
So yes, there will also be new work created because of AI. But what exactly it will look like, and how easy it will be for the masses to acquire the skills needed for those jobs, is still debatable. If you'd like to read more on this topic, see for example Abhinav Suri's nice essay on Artificial Intelligence and the Rise of Economic Inequality.
With a history of over 60 years, the field of AI has had one of the most interesting evolutions. Everything we have learned and applied with AI so far suggests that the future is bright: we will get new and better services, and increased productivity will lead to positive overall outcomes. Scenarios such as the extermination of the human race and the singularity belong more to the realm of science fiction, tingling our imagination but most likely stopping at that.
However, AI is not without perils. We have to carefully consider the social implications and ensure that the power of AI is used for the common good. Some areas we still have to work on: avoiding algorithmic bias so that we reduce discrimination instead of increasing it, and learning to be critical about what we see, as seeing is no longer the same as believing.
We also need to find new ways to share the benefits with everyone, instead of creating an AI elite - those who can afford the latest AI technology and use it to entrench unprecedented economic inequality.
These are the real challenges for the human race. And the answers to them involve many verticals: not only technological, but political, social, ethical and psychological concerns as well.