On October 22, 2015, MIT Technology Review published a piece called, “Why Self-Driving Cars Must Be Programmed to Kill”.
It’s hard to imagine the degree of academic detachment required to reduce a philosophical quandary to the level of computer programming and to anthropomorphize said computer and the programs it runs to the extent that a life-and-death scenario could be blithely characterized as:
an impossible ethical dilemma of algorithmic morality.
But that’s exactly what happened because:
Jean-Francois Bonnefon at the Toulouse School of Economics in France and a couple of pals … say that even though there is no right or wrong answer to these questions, public opinion will play a strong role in how, or even whether, self-driving cars become widely accepted.
And professor Bonnefon and his fellow academics concluded:
People are in favor of cars that sacrifice the occupant to save other lives—as long as they don’t have to drive one themselves.
After reading that article, we found ourselves walking, biking, and taking public transportation for fear of coming up on the short end of some algorithmic morality controlling a self-driving killing machine. But we eventually got over ourselves.
Since then, of course, self-driving cars have become all the rage, with some predictable results. Nevertheless, the May/June edition of NU Claims features an article entitled, “Autonomous Vehicles: Predictions vs. Truth”. The article said this, in part:
Autonomous technology doesn’t just apply to private passenger vehicles. With the shortage of truck drivers, having autonomous technology would allow goods to be moved across the country on a 24-hour basis. The autonomous trucks could keep moving without the need to stop for drivers to rest … the size of the vehicle and the logistics of turning and navigating in traffic bring a different element to the task at hand.
When we read that, we couldn’t help thinking of the 1971 Steven Spielberg film, Duel. In a latter-day version, of course, the truck wouldn’t be driven by a dude with terminal road rage. Rather, it would be directed by an algorithmic morality programmed to kill. We’ve come a long way, baby.
After reading the article, we couldn’t decide who we’d least like to be: the guy who comes up on the short end of a car programmed to kill; the actuaries who have to work with algorithmic morality to calculate risk probabilities; or the underwriters who have to throw darts at the wall to determine risk levels until there’s enough claims history to rate and price coverages with any reasonable degree of plausibility, to say nothing of profitability.
This is the kind of thing that gives the phrase, brave new world, entirely new meaning.