Should Your Driverless Car Kill You To Save a Pedestrian?

One thought experiment stands above the rest for the simple reason that it tackles the most important question of all – when is it okay to take a life?

The experiment is known as the Trolley Problem, and it goes like this: there are five people on a train track and a runaway trolley is about to kill them. The disaster can be averted, however, if you derail the trolley. This can be accomplished by pushing a fat man onto the track. While there have been tweaks to the design over the years – the addition of a rail switch, the “fat man” being responsible for putting the people in peril, the number of victims being raised or lowered – the final question remains the same: do you let the trolley kill? Or do you kill to save?

Generally, answering this kind of question in real life has been reserved for heads of state, but a new group is now being forced to tackle it. With the inevitable spread of driverless cars on the world’s highways, companies like Google, Audi, and Mercedes-Benz are consulting the moralists and ethicists of our day to help answer our generation’s version: if a driverless car is about to run over a pedestrian, should it swerve if it means sacrificing its passenger?

The gut answer is yes: of course the car should swerve out of the way of innocent pedestrians and send the passenger to their death. When you step into a driverless car and allow a machine to drive for you, you should be the one taking the risk. A recent study conducted by Jean-François Bonnefon of the Toulouse School of Economics found that 75% of people think a driverless car should always swerve and kill the passenger, even to save just one pedestrian.

But the answer isn’t that clean. When you start throwing other specifics into the scenario, the question becomes a bit thorny. Does the calculus change if there is one pedestrian versus four passengers? What if the pedestrians are a group of obnoxious jaywalkers who ran into the middle of the road? What if the car swerves, but when it does, it hits another group of innocent people? What if the car’s passengers include children?

And on and on the scenarios go. Of course, just like the Trolley Problem, there is no “right” answer. In fact, the real question being asked by either thought experiment is what the general public finds “acceptable.” And that may point us toward the beginning of a possible solution. What seems acceptable to the general public is to let the individual responsible for the accident die; but the core of the problem is building a technology able to determine who is responsible – and, even more difficult, to determine it in a split second.
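To make that idea concrete, here is a minimal sketch in Python of what a split-second responsibility check might look like. Everything in it is hypothetical – the `Party` record, the `assign_fault` rule, and the sensor-derived fields are invented for illustration; a real system would have to infer these facts from noisy perception data in milliseconds, and deciding what counts as “responsible” is exactly the unsolved part.

```python
from dataclasses import dataclass


@dataclass
class Party:
    """Hypothetical snapshot of one person involved in an imminent collision."""
    role: str              # "passenger" or "pedestrian"
    in_crosswalk: bool     # was the pedestrian crossing in a marked crosswalk?
    signal_obeyed: bool    # did they have the right of way?


def assign_fault(party: Party) -> float:
    """Toy fault score in [0, 1]; higher means more responsible for the danger.

    This is only an illustration of the article's point, not a real standard.
    """
    if party.role == "passenger":
        return 0.0  # the passenger did not create the road hazard
    fault = 0.0
    if not party.in_crosswalk:
        fault += 0.5  # jaywalking
    if not party.signal_obeyed:
        fault += 0.5  # crossed against the signal
    return fault


# Example: a jaywalker who crossed against the light scores 1.0,
# while a pedestrian crossing legally scores 0.0.
print(assign_fault(Party("pedestrian", in_crosswalk=False, signal_obeyed=False)))
print(assign_fault(Party("pedestrian", in_crosswalk=True, signal_obeyed=True)))
```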

See, no one is ever going to be pleased with whatever the hired philosophers at the car companies come up with (which is why, not surprisingly, no manufacturer has announced a stance on the issue). So, the best bet may be to have programmers create a randomization algorithm that activates when the car finds itself in a “no right answer” conundrum, letting the chips fall where they may. It may not be elegant, and it may not seem right, but it might be the fairest option, in a way. What is certain is that engineers and philosophers have never had to work so hand-in-hand on such a heated ethical question before.
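As a rough sketch of what such a tie-breaking routine might look like, here is a short Python example. The `Maneuver` type, the `expected_harm` scores, and the tolerance for calling two options morally indistinguishable are all assumptions made for illustration; the only point is that randomness is reserved for the cases where no option is clearly better.

```python
import random
from dataclasses import dataclass


@dataclass
class Maneuver:
    """Hypothetical candidate action the car could take in an emergency."""
    name: str
    expected_harm: float  # toy aggregate harm estimate, lower is better


def choose_maneuver(options: list[Maneuver], tolerance: float = 0.05) -> Maneuver:
    """Pick the least harmful maneuver; randomize only among near-ties.

    If several options fall within `tolerance` of the best harm score, the
    situation is treated as a "no right answer" conundrum and one of them
    is chosen at random, as the article speculates.
    """
    best = min(m.expected_harm for m in options)
    near_ties = [m for m in options if m.expected_harm - best <= tolerance]
    return random.choice(near_ties)


# Example: braking and swerving left score almost identically, so the choice
# between them is random; the clearly worse option is never selected.
options = [
    Maneuver("brake hard", expected_harm=0.40),
    Maneuver("swerve left", expected_harm=0.42),
    Maneuver("swerve right", expected_harm=0.90),
]
print(choose_maneuver(options).name)
```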

If no answer is found, engineers might just have to invent driverless cars strong enough to take a tumble off a cliff and allow the passenger to walk away without a scratch.

Another challenge.
