This is the first in a series of classic ethical conundrums I’ll twist into fun new shapes with ideas from science fiction and fantasy. To start off the series let’s look at one that isn’t too far off.
Jack and Jill are cruising along in their driverless cars. Neither of them is paying attention, because neither of them needs to; at this point the cars are that good. Just about everyone is driverless now, and all of the vehicles have transponders that communicate with each other, so we’ve got backups upon backups. Everything is as safe as can be.
So Jack is reading a good novel and Jill is playing Uno with her kids in the back seat. (The front seats turn all the way around to face the back, of course, because why wouldn’t they? Car travel is really more like train travel now, including a little table between the front and back seats for playing Uno.) Everyone’s having a nice little road trip.
Until Jack blows a tire. His car has to make a split-second decision—no problem, because it actually makes dozens of safety decisions per second. It has to choose: swerve left, into Jill’s car in the oncoming lane, or do nothing, and crash into a beautiful and majestic redwood tree. Jack’s car instantly knows:
• Cars are engineered for crash safety, but that technology is far from perfect, and a head-on collision will almost certainly cause injury to all parties involved.
• Redwoods are not at all engineered for crash safety, and in fact the more beautiful and majestic they are, the worse they are for you to crash into.
• Jill’s car has more people in it than Jack’s car, and also more people in it than the redwood tree.
Whereupon Jack’s car instantly calculates the probable damages from two choices:
1. Do nothing. Jack hits Jill head-on, risking injury to himself, Jill, and her kids.
2. Swerve. Jack hits the tree head-on, risking very serious injury to himself but keeping everyone else out of harm’s way.
So what should the car be programmed to do?
If you choose 1, you might be an ethical egoist: that is, you believe “morally right” means “whatever maximizes my own personal well-being,” regardless of anyone else’s interests.
If you choose 2, you might be a utilitarian: that is, you believe “morally right” means “whatever maximizes well-being for everyone involved.”
If you choose 1, you’d also have to recognize that you wouldn’t choose 1 if you were Jill; only Jack is better off in 1. In which case you might have to admit you’re a hypocrite.
If you choose 2, you might put a pretty serious dent in new car sales, inasmuch as the old beater I have to drive myself might look pretty appealing if I know that a shiny new driverless car might be willing to sacrifice me, its own owner, to spare somebody else.
If you choose 1, you might be trading your driver’s license for a bus pass. If what you want to do is maximize your own well-being, the best way to ensure that is for all cars to choose 2—that is, to swerve away from you if they can—and since every car gets the same programming, yours included, that means you can’t choose 1.
If you choose 2, you might have a hard time answering people who say that their car, which they paid for with their money, ought to maximize their best interests—or in other words, that every car has a greater obligation (if you can call it that) to protect its owner than to protect anyone else.
If you choose 1, you might also be committed to running over motorcyclists more often than necessary, because they’re light and you’ll probably survive the impact.
If you choose 2, you might also be committed to smashing up Volvos more often than necessary, because they’re so good in a crash. In fact, you’re probably committed to driverless cars getting regular software updates on which vehicles are the best to hit. This would shake up the entire economy around cars, car insurance, etc.
Whether you choose 1 or 2, you probably want to call your senator right now and push for a bill to give the National Transportation Safety Board (NTSB) jurisdiction over driverless car programming. The NTSB is the agency charged with investigating accidents and recommending ways to minimize risk in air travel, and in aviation every airline has to follow the same federal safety rules. If that universal conformity were not the rule with driverless cars, then some auto manufacturers would make cars that were more selfless than others. Some companies might play a little fast and loose with the rules—I’m looking at you, Volkswagen—and that could be very dangerous indeed. Whether you’re a utilitarian or an egoist, you probably want to know what the rules are and also know that everyone has to follow them.
And in case you think all this philosophy stuff has no practical value, consider this: programmers are weighing all of these considerations right now.
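If you’re curious what “weighing those considerations” might look like, here’s a purely hypothetical little sketch. The option names, injury scores, and functions are all mine, made up for illustration; no real crash-avoidance software is this simple. It just shows how the two philosophies boil down to which kind of harm the car is told to minimize:

```python
# Hypothetical sketch of Jack's dilemma. The injury estimates are invented
# for illustration; real systems would weigh far more factors than this.

OPTIONS = {
    "do_nothing": {            # head-on with Jill's car (choice 1)
        "owner_injury": 0.4,   # estimated harm to Jack (0 = none, 1 = fatal)
        "others_injury": 1.2,  # estimated harm to Jill and her kids, combined
    },
    "swerve": {                # head-on with the redwood (choice 2)
        "owner_injury": 0.9,   # very serious injury to Jack
        "others_injury": 0.0,  # everyone else stays out of harm's way
    },
}

def egoist_choice(options):
    """Pick whatever minimizes harm to the car's own occupant."""
    return min(options, key=lambda name: options[name]["owner_injury"])

def utilitarian_choice(options):
    """Pick whatever minimizes total harm to everyone involved."""
    return min(
        options,
        key=lambda name: options[name]["owner_injury"] + options[name]["others_injury"],
    )

print("Egoist car picks:     ", egoist_choice(OPTIONS))       # do_nothing (choice 1)
print("Utilitarian car picks:", utilitarian_choice(OPTIONS))  # swerve (choice 2)
```

Same scenario, same made-up numbers, two different crashes; the only thing that changes is whose well-being the car is programmed to count.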
So what’s your answer, 1 or 2? Pop on over to Novelocity’s Facebook page and make your opinion known!