The trolley dilemma in AI

Written with love by Ionela Bărbuță

Posted in #Artificial Intelligence

There is a principle in ethics that forces us to think carefully about the consequences of an action and to consider whether its moral value is determined solely by its outcome. The trolley dilemma is a classic thought experiment devised by philosopher Philippa Foot in 1967 and adapted by Judith Jarvis Thomson in 1985.

The situation goes like this: you see a runaway trolley moving toward five tied-up (or otherwise unaware) workers on the tracks. You are standing next to a lever that controls a switch. If you pull the lever, the trolley will be redirected onto a side track and the five people on the main track will be saved. However, on the side track there is a single person, just as oblivious as the other workers. The ethical dilemma is whether you should pull the lever, causing one death but saving five, or do nothing and allow the trolley to kill the five people on the main track.

Which is the more ethical option? (Courtesy of Points in Case)

Since 2001, the trolley problem and its variants have been used extensively in empirical research on moral psychology. The dilemma has also been the subject of popular books, and in recent years it has come up regularly in discussions of AI development.

Ethics and autonomous cars

The fourth industrial revolution, powered by artificial intelligence, is adding cognitive capabilities to everything, and it is definitely a game changer. We are using AI to build autonomous vehicles, to automate processes and jobs, and, in some cases, to reshape lives. Considering the impact it will have on individuals and on the very future of humanity, addressing the topic of ethics is a must.

The first ethical dilemma that appears in AI is related to driverless cars. The advent of companies trying to build truly autonomous cars has brought the trolley problem back to public attention.

Courtesy of Inc.

Imagine you’re driving to work and suddenly a child crosses the street illegally. The second you see him, you hit the brakes to avoid hitting him, right? In that moment you are making a moral call, one that might shift risk from the pedestrian to you and to any other people in the car.

In the case of AI systems, things are not that straightforward. Is a self-driving car able to make such moral judgements? And even if we could teach autonomous cars how to make an ethical decision, does that mean we should? Not many people feel comfortable knowing that a machine might soon be able to decide who gets to die and who gets to be saved.
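To see why this makes people uneasy, consider a deliberately naive sketch of what "teaching a car an ethical decision" could look like: a hand-written rule that ranks possible crash outcomes. Everything below is hypothetical and invented for illustration; no real self-driving stack decides this way.

```python
# Purely illustrative, hypothetical sketch: a hand-coded "crash policy" that
# ranks outcomes by how many people are put at risk. All names and rules are
# invented for this example; no real autonomous-driving system works like this.

from dataclasses import dataclass


@dataclass
class Outcome:
    description: str
    passengers_at_risk: int
    pedestrians_at_risk: int


def choose_outcome(options: list[Outcome]) -> Outcome:
    """Pick the option that puts the fewest people at risk overall,
    breaking ties in favour of pedestrians."""
    return min(
        options,
        key=lambda o: (o.passengers_at_risk + o.pedestrians_at_risk,
                       o.pedestrians_at_risk),
    )


options = [
    Outcome("brake hard and stay in lane", passengers_at_risk=2, pedestrians_at_risk=1),
    Outcome("swerve into the barrier", passengers_at_risk=2, pedestrians_at_risk=0),
]
print(choose_outcome(options).description)  # -> swerve into the barrier
```

The uncomfortable part is not the code, it is the ranking rule: whoever writes it is deciding, in advance, whose safety counts for how much.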

Besides, there's more to ethics in AI than teaching a machine to make a certain decision. We also need to look at the reasons behind a specific course of action. A clear explanation of machine reasoning is necessary to determine accountability. But can we hold a machine accountable for a decision it made based on programming written by humans? And how can we make sure that AI will evolve to be non-discriminatory?

Asimov's Three Laws of Robotics have been mentioned a lot in the past couple of years, and a large number of projects are tackling the ethical challenge. Among them, initiatives funded by the US Office of Naval Research and the UK government's engineering-funding council address tough scientific questions, such as what kind of intelligence, and how much of it, is needed for ethical decision-making, and how that can be translated into instructions for a machine.

According to the Moral Machine survey released in 2018, many of the moral principles that guide a driver’s decisions vary by country, which means that settling on a universal moral code for autonomous vehicles could be quite a difficult job. Results show that people from relatively prosperous countries with strong institutions were less likely to spare a pedestrian who stepped into traffic illegally.

The study laid out 13 scenarios in which someone’s death was inevitable. Respondents were asked to choose who to spare in situations that involved a mix of variables: young or old, rich or poor, more people or fewer.
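As a rough illustration of how such responses can be turned into numbers, here is a minimal sketch that tallies a "spare rate" per attribute. The scenario structure and the sample responses are made up for this example; the actual Moral Machine dataset and its statistical analysis are far richer.

```python
# Toy sketch: aggregate survey choices into a "spare rate" per attribute value.
# The scenario fields and responses are invented for illustration only.

from collections import defaultdict

# Each response records the attributes of the group the respondent spared
# and of the group they sacrificed.
responses = [
    {"spared": {"age": "young", "count": "more"}, "sacrificed": {"age": "old", "count": "fewer"}},
    {"spared": {"age": "young", "count": "fewer"}, "sacrificed": {"age": "old", "count": "more"}},
    {"spared": {"age": "old", "count": "more"}, "sacrificed": {"age": "young", "count": "fewer"}},
]

appearances = defaultdict(int)  # how often each attribute value appeared on either side
spared = defaultdict(int)       # how often it appeared on the spared side

for r in responses:
    for side in ("spared", "sacrificed"):
        for attr, value in r[side].items():
            appearances[(attr, value)] += 1
            if side == "spared":
                spared[(attr, value)] += 1

for attr, value in sorted(appearances):
    rate = spared[(attr, value)] / appearances[(attr, value)]
    print(f"{attr}={value}: spared {rate:.0%} of the time")
```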

In a different survey, conducted by Iyad Rahwan, a computer scientist at the Massachusetts Institute of Technology in Cambridge, respondents stated they wanted an autonomous vehicle to protect pedestrians even if it meant sacrificing its passengers. However, they also mentioned they wouldn’t buy self-driving vehicles programmed to act this way.

The ethics of automated jobs

Although the Luddite movement ended a long time ago, some people still feel fear and anxiety when it comes to technology and the automation of jobs. And with the development of AI systems, there is an ongoing debate about whether or not machines will steal all the jobs.

Robots assemble a Tesla Model S car in Fremont, California. Courtesy of The Guardian

A McKinsey report indicates that people’s fears might not be unfounded. According to the report, up to 800 million jobs (about 20 percent of the global workforce) could be lost worldwide to automation by 2030. It seems that, for the first time, individuals will actually start competing with machines on a cognitive level. With AI systems able to process data and learn so much faster than we can, many economists are concerned that, as a society, we won’t be able to adapt and might ultimately be left behind.

The ethical question that arises here is not whether we should allow certain tasks and jobs to be taken over by machines. The real question is why we don’t try to provide the people who might lose their jobs with viable, accessible alternatives.

There are a lot of questions we need to take into account when it comes to ethics in building AI systems, and the time to answer them is right now.

“If the current trends continue, the people will rise up before the machines do.” – Andrew McAfee

Thank you for reading.

Ionela Bărbuță

Ionela is an enthusiastic professional with proven experience in the payments and fintech industry. She likes working with people, creating things, and writing about AI, security and fraud.