Self-Driving Cars May Have to Kill to Keep People Safe

As self-driving cars enter the fast lane and automated transport on public roads becomes a very real possibility, the designers of these cars face some serious considerations.

Known as the Trolley Problem, this decades-old thought experiment deals with an ethical dilemma. Say you are driving an old trolley down the track when you see five people haphazardly roaming about, completely unaware of the large trolley barrelling down upon them. Impact is imminent, and the only way to save those five lives is to switch the trolley onto another track. The problem is that on the diverted track lies a person tied down like in an old Western movie. The ethical question: who do you spare? The five or the one?

[Image: the trolley problem. "What do I do?!?!" (image: Wikipedia)]

Now imagine you are in a self-driving car. Totally unaware of their surroundings, three toddlers run out into traffic. The only way to save the children is to swerve quickly off the road and into a ditch, potentially killing the vehicle’s occupant, you. The question has long been debated, but now that self-driving cars have entered testing, the programmers of those cars are in an interesting bind.

Programming the car to save lives, such as those of the three toddlers, simultaneously programs it to kill, should the situation warrant it.
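To make the bind concrete, here is a deliberately toy sketch, in Python, of what "minimize total harm" looks like once it is written down as code. Every name and number below is hypothetical, and no manufacturer has published anything like this; the point is simply that the moment software ranks outcomes by expected deaths, "sacrifice the occupant" becomes a selectable outcome like any other.

```python
# A toy harm-minimizing chooser. All names and numbers are invented
# for illustration; real autonomous-vehicle software is nothing this simple.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_pedestrian_deaths: float
    expected_occupant_deaths: float

def choose(actions: list[Action]) -> Action:
    # Rank purely by total expected deaths. Nothing here privileges
    # the occupant, so "swerve into the ditch" can win.
    return min(actions, key=lambda a: a.expected_pedestrian_deaths
                                      + a.expected_occupant_deaths)

if __name__ == "__main__":
    options = [
        Action("brake and stay in lane",
               expected_pedestrian_deaths=3.0, expected_occupant_deaths=0.0),
        Action("swerve into the ditch",
               expected_pedestrian_deaths=0.0, expected_occupant_deaths=0.8),
    ]
    print(choose(options).name)  # prints "swerve into the ditch"
```

Under this (hypothetical) objective, the car swerves into the ditch, because 0.8 expected deaths beats 3.0. The occupant's death was never a goal, just an acceptable cost of the arithmetic.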

[Image: Google's self-driving car. Let's hope it isn't set to "Kill Mode."]

Still, there are real benefits to automated vehicles. The United States sees roughly 30,000 deaths a year as a result of automobile accidents; worldwide, the toll is over one million. The problem is that humans are generally crummy at a lot of things, and piloting a four-thousand-pound car is no exception. Machines, on the other hand, are calculating and consistent, and do not suffer from road rage, drunkenness, or the distraction of a cute GIF your girlfriend just sent. Looking at the big picture, automated cars will save lives. Many lives.

That being said, technology will never be flawless, and no one can guarantee that self-driving cars will be perfectly safe. Accidents, no matter how many variables are removed, will eventually happen. The problem arises when you have to convince people that, depending on the situation, their car may be forced to kill them.

[Image: an Uber self-driving car. "What big scanners you have. The better to make ethical decisions that are not in your favour, my dear..."]

The vast majority of people, in the toddler situation above, would swerve and send their car, and themselves, into the ditch rather than run down a bunch of helpless little kids. Is it irrational to get jittery when a self-driving car has to make that decision instead? Is taking the life-or-death decision out of the driver's hands a problem, even if removing that split-second human judgment could save thousands of lives every year? It all boils down to what people think, and more importantly feel, and car manufacturers are definitely exploring this factor.

Placing your life in the hands of a machine, especially one programmed to kill you should the situation dictate it, is an uncomfortable thought. For now, though, keep your eyes on the damn road.
