Self-driving cars are estimated to save 29,447 lives each year once they become mainstream. Although it’s exciting to imagine the benefits of a computer making the decisions (it’s never tired, it doesn’t drive drunk, it doesn’t have road rage, and it obeys every traffic law, to name a few), there is an aspect of putting our lives in the hands of a computer that is a bit more sinister.
Consider the trolley problem, an ethical dilemma debated since the 1960s, in the context of autonomous vehicles: a trolley is headed towards five people tied to the track, and you have a lever that would divert it onto a different track with only one person on it. Would you pull the lever? For the majority of people surveyed, the answer is an obvious “yes”: saving five lives at the cost of one is both mathematically and socially justifiable. For self-driving cars, this dilemma becomes all too real.
Picture this: a self-driving car is barreling down a road when five pedestrians suddenly step directly into its path. The car cannot brake fast enough, so it has one of two options: hit and kill the five pedestrians, or swerve into a concrete barrier, killing the single passenger inside. In theory, the same logic holds that five lives outweigh one. But tell any passenger that the car they are about to entrust their life to may choose other lives over theirs, and you can guarantee they won’t get in, much less buy one of these potential death traps.
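To make the dilemma concrete, here is a deliberately simplified, purely hypothetical sketch of what a “minimize total fatalities” policy might look like in code. The names (`Outcome`, `choose_action`) and the casualty estimates are our own illustration of the utilitarian logic described above, not anything a real vehicle actually runs.

```python
# Hypothetical sketch of a purely utilitarian collision policy.
# Illustrative only: no real autonomous-driving system exposes
# its decisions in this form.
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str           # e.g. "stay_course" or "swerve"
    expected_deaths: int  # estimated fatalities if this action is taken

def choose_action(outcomes: list[Outcome]) -> Outcome:
    """Pick the action that minimizes expected fatalities."""
    return min(outcomes, key=lambda o: o.expected_deaths)

# The scenario from above: stay the course and hit five pedestrians,
# or swerve into a barrier and kill the one passenger.
scenario = [
    Outcome("stay_course", expected_deaths=5),
    Outcome("swerve", expected_deaths=1),
]
print(choose_action(scenario))  # Outcome(action='swerve', expected_deaths=1)
```

A policy this blunt is exactly what makes passengers uneasy: by construction, it will sacrifice the person who bought the car whenever the arithmetic says so.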
So, what’s the solution? Car manufacturers know they can’t sell a car that will value other lives over their customers’, yet they must also appeal to the greater good, and so far no manufacturer appears to have reconciled the two.
For now, the debate remains wide open. What do you think is the ethical way to program self-driving cars? Would you get in a car that would value other lives over yours?
Subscribe to our newsletter to follow our monthly future of the industry series!