Category Archives: Steve Bein

Cybermorality: Should we aspire to live forever?

Steve Bein continues his series on philosophy and science fiction. Read past articles here.


When I became a real grownup and got a real grownup job, I got my first life insurance policy. I took the test you have to take and my insurance company predicted I’d live to the age of 121. I was speechless, but then I thought about it. Medicine has changed so radically in the last 40 years that it’s fair to say it’s almost a completely new science. When I was a kid, getting your tonsils out involved two days in the hospital. Now they actually shoot them out of your throat with a laser gun. Science fiction has not only become real, it’s become routine.

All estimates indicate that medicine will advance far more radically in the next 40 years than it did in the last 40. And 40 years from now I’ll only be 83.

A common criticism of Western medicine writ large is that it sees mortality as a curable condition. Rest assured, there are thousands of researchers working on immortality right this minute. So in all probability my insurance company underestimated my life expectancy. In fact, it’s fair to say that no one has the slightest idea what medical technology will be capable of 80 years from now, nor how long the average human life span will be. It’s not unreasonable to predict that we’ll be able to keep a human body alive more or less indefinitely.

To this we must add a caveat: alive and thriving are not the same thing. This is why that estimate of 121 left me speechless: I’m not sure it’s good news. Give me 119 good years and 2 bad ones and I’ll say sign me up. Give me 81 good years and 40 bad ones—which is what our current medical practices would promise me—and I’ll say thanks but no thanks.

And we should add one more observation: while medical technology has drastically expanded the average number of thriving years in a human lifespan, it hasn’t actually extended human lifespan itself all that much. The world’s oldest person today and the world’s oldest person of 100 years ago and 200 years ago are all about the same age. So it’s possible—doubtful, I think, but possible—that we really do cap out as hundred-and-teenagers, and that the only question is whether we can make all of our years good years.

But let’s be optimistic. Let’s say that in the year 2094 I’m the world’s sexiest 121-year-old man. My mountaineering years are long behind me, but I can still write books and hang out with friends and have a basically comfortable, more or less self-sufficient existence.

The question is, is such a world morally good?

There are some reasons to think it might not be. For one thing, just being 100 years old is expensive, so probably only millionaires can be hundred-and-teenagers. That will exacerbate certain kinds of economic inequality and contribute measurably to certain ecological problems. But even if we could somehow make it just as cheap to be 121 as it is to be 21, we’d have other, larger social justice questions.

As people age they tend to get set in their ways, and so an aging but undying demographic would tend to retain its current political beliefs. The unfortunate truth is that much of the political progress in the world only happens when the old guard dies out.

Maybe some conservatives will bristle at that, but consider the following sentence: “I know my grandma is racist, but she’s a really nice person.” That is a completely coherent sentence in modern American society. We tend to forgive older people for old-fashioned beliefs. Why? There are many reasons, but only one is inevitable: even if these geezers never surrender their ideas, in a couple of decades they’ll kick the bucket.

Suppose that stopped being true. Suppose the old guard gets another eighty years before passing the torch. If we all lived to 121, some of the lawmakers who voted against the 19th Amendment—you know, the one that allows women to vote—would still be alive and voting. Some of those guys would have parents old enough to be slave owners. That’s right: we’d only be one generation removed from the Civil War.

We can’t even imagine what the hot-button issues will be in 2094, when I am the world’s sexiest 121-year-old man. If interracial marriage was the big Supreme Court decision in 1967, and if same-sex marriage was the big one in 2015, maybe the one I’ll be upset about is the case way back in 2050 where humans gained the right to marry robots. Maybe my great-great-grandnieces will blush as they make excuses for me: “I know Uncle Steve-o is a human supremacist but he’s a really nice person.”



Reach Steve Bein at @AllBeinMyself or on facebook/philosofiction.


Cybermorality: Number Five Is Alive

Steve Bein continues his series on philosophy and science fiction. Read past articles here.


In our last installment we toyed around with a classic problem in ethics called “the fat man in the cave.” You can re-read it in full here, but here’s a brief recap: you and a bunch of other people are in a cave that’s filling with water. The only way out is blocked by a portly fellow who’s gotten quite stuck. He can’t be removed by any means short of cutting him out (thereby killing him). The only way to save everyone in the cave is to kill this guy, who, it must be stressed, is an innocent person.

Oh, and he’s facing you, so if you don’t kill him he’ll drown anyway.

We faced four options last time:

1) It’s okay to kill the guy because it’s in self defense. This is nonsense, of course. The guy you’re planning to kill can’t possibly harm you. He’s stuck.

2) It’s okay to kill one innocent person to save a greater number of innocents. This is the most popular choice in my ethics classes, but it has some really horrible implications. (Ask Ozymandias in Watchmen how many innocents it’s okay to murder. It’s kind of a lot.)

3) It’s not okay to kill an innocent person, period. This is the choice pretty much everyone says they believe until they’re confronted with a scenario like the fat man in the cave. Then… well, pretty much everyone sells out.

4) It’s okay to kill innocent people only if you have their permission. This is a clever loophole: if you can talk the guy into letting you kill him, it’s not murder, it’s just assisted suicide.

#4 is appealing to a lot of people. It lets you maintain the ban on killing innocents and get out of the cave alive. But what if the guy really doesn’t want to give permission? What if he’d rather drown than let you kill him?

This is where we added a bit of sci-fi: the genius pill, which boosts your intelligence to nigh-superhuman levels. (Ted Chiang played with this idea in “Understand,” as did Daniel Keyes in “Flowers for Algernon.”) Let’s say you have one of these pills with you in the cave. You could take it, making yourself far more intelligent than the human drain plug, and persuade him to “take one for the team.” That gives us a variant on #4:

4.1) It’s okay to kill innocent people only if you have their permission, AND it’s okay to use unfair advantages to secure their permission.

Remember, the point of taking the pill is to talk him into something he’s otherwise unwilling to do. The big question is whether or not that’s coercive.

If the bad guys dose James Bond with truth serum to make him give up secrets, clearly that’s coercive. It’s really this simple: if they were to ask his permission, he’d say no. But that’s not quite what’s happening here. There’s a huge difference between giving this guy a drug to make him dumber and giving yourself a drug to get smarter. The former constitutes assault, the latter doesn’t.

But is it relevant that the results are the same? Whether you’re doping him or geniusing yourself, either way you get him to agree to something he wouldn’t have done otherwise. He expressed his will—a firm no—and you said, hey, let’s keep talking (and hang on a sec while I take this pill).

So let me give you an option #5, one that no one has ever raised in my ethics classes:

5) Give the fat man the pill and see if his newfound genius makes him volunteer to die.

If you’re convinced that he ought to volunteer—so convinced, in fact, that you’re willing to pop a pill to seal the deal—then maybe you ought to see if he finds your super-logic persuasive. Give him the pill. Let him weigh the situation with the benefit of a juiced-up brain. If you’re right, he’ll see that. Right?

I can’t tell whether it’s strange that my ethics students never propose #5. On the one hand, it’s got some serious appeal. Essentially you get #2 and #4 wrapped up in one. On the other hand, you’re handing your fate to someone else. The fact that that person is much, much smarter than you is cold comfort. Especially if that person wasn’t polite enough to volunteer to die in the first place.


Reach Steve Bein at @AllBeinMyself or on facebook/philosofiction.


Cybermorality: The Genius Pill

Steve Bein continues his series on philosophy and science fiction. Read past articles here.


Here’s the sentence I write on the board to kick off one of my Ethics classes:

Murdering an innocent person is wrong.

Then I ask people if they think the statement is true or false. We bat it around a while. We make it clear that the person is innocent in any sense you wish them to be: they’re not hurting you or anyone else, they’re not committing any “victimless crimes,” they really are standing around minding their own business.

In the last installment of Cybermorality, I told you almost all of my students say the sentence is true. Then I give them one scenario and almost all of them recant. It’s called “the fat man in the cave.” (It’s an old problem, created long before we started approaching things like obesity and self-image with sensitivity. Bear with me.)

You’re following a heavyset fellow who is leading a group of people out of a cave near the coast. The tide is rising, the cave is filling with water, and he gets stuck in your only viable exit. He seals it completely. Don’t worry: he’ll be fine. He’s facing upward, out of the cave, so he won’t drown. Unfortunately, the rest of you are not so lucky. All of you will drown—unless, of course, you do something to remove him.

In the original scenario (from the philosopher Philippa Foot) you’re given a stick of dynamite. (Why she picked dynamite I don’t know. A knife seems more plausible for a bunch of spelunkers.) Either way, it matters that this guy is innocent. He didn’t force you in here, and in fact he was trying to get you out. But now he is well and truly stuck. The only way you can save yourself and everyone else in the group is to remove him from the hole. Cut him out or blow him up, either way he dies.

You’re faced with a couple of competing principles:

1) It’s not wrong to kill in self defense. I’m not letting you off that easy. Yes, you’ll die unless you kill this guy, but he’s not the one who’s going to kill you. The water is.

2) It’s not wrong to kill innocents in order to save a greater number of innocents. This is the usual reason people give me when they say it’s okay to kill the poor guy. But let’s be perfectly clear: you’re taking an innocent life and you benefit from his death directly. In any other circumstance that looks a lot like murder.

3) Murdering an innocent person is wrong. Like, always. No matter what. Sounded pretty good a minute ago, didn’t it? The thing is, faced with this scenario about 90% of my students abandon #3 in favor of #2.

For the holdouts we can apply a little more pressure: turn the guy around so he’s facing the water. Now he’s going to die no matter what. If you don’t kill him, he’ll drown with the rest of you. But his innocence hasn’t changed one bit.

I find that about one in thirty students will say #3 is true even in that final scenario, where the innocent person dies no matter what. But once I turn the poor guy around, some crafty people come up with a fourth option:

4) Murdering an innocent person is wrong, but assisting in suicide isn’t. This allows one morally sound way out of the cave: the guy stuck in the hole has to give you permission to kill him. If he can’t do it himself—maybe because his arms are stuck—then you’re just helping him complete a noble suicide.

People tend to like #4, but only if the guy gives you permission to kill him. If you ask him and he says no, then for a lot of people he becomes not only innocent but also vulnerable. It’s not his fault that his only defense against you is words, while it’s totally your fault that you’re standing there with a murder weapon in hand.

Now maybe you don’t agree with those people. Maybe you want to say he’s being a selfish jerk. He’s going to die anyway, so why not go out a hero? (I can think of some good reasons, like how painful it is to be knifed or dynamited to death. I’m told drowning isn’t that bad.) But presumably it’s just as wrong to kill selfish innocents as selfless ones, so I don’t think that gets you anywhere.

You’ve still got one recourse left to you: you can try to talk him into the noble suicide. And this gives us a nice opportunity to see how much weight #4 can bear.

Let’s get science-fictional about this. Maybe you’ve read Daniel Keyes’s beautiful story, “Flowers for Algernon,” and maybe you’ve read Ted Chiang’s chilling story, “Understand.” At the center of both of them is a medical treatment that dramatically increases the patient’s intelligence. So let’s pose a new scenario in which, in addition to the knife or the dynamite, you also get a performance-enhancing drug: the genius pill.

Let’s say you believe #4 is true, so you try to talk the guy into allowing you to kill him. He isn’t having it. He’s as smart as you are, and for every argument you offer he’s got a counterargument. But if you take the pill—and only if you take the pill—you’ll be able to outsmart him.

If you take the pill and convince him to commit suicide, is that any different from an adult convincing a child to run out into traffic? By taking the pill you make a vulnerable person even more vulnerable. On the other hand, you save a lot of lives. But does that offset the cost?

There’s one more alternative no one ever mentions, and I can’t know whether it’s because no one thinks of it or no one who thinks of it wants to say it. Should I tell you what it is?


How about this: I’ll tell you next time. Until then, mull it over, and if you think of anything cool you can reach me @AllBeinMyself, or pop over to facebook/philosofiction or facebook/novelocity and let me know!

Cybermorality: Time travel and killing Hitler, pt. II

Last time on Cybermorality we asked the big question: should you travel back in time to kill Hitler? A fundamental assumption in that debate—one that maybe you accept, maybe you reject, or maybe you didn’t even notice—is that killing him is justified because if he’d never come to power, the world would be much better off.

Let’s examine that assumption. It’s got two parts: (1) killing him is justified because (2) if he’d never come to power, the world would be much better off. I think the truth of (2) is self-evident. It’s (1) that we need to examine more closely.

For one thing, it’s not at all clear that you have to be violent to remove Hitler from power. You could just help him stay in art school. Or pull a Back to the Future and see to it that his parents never meet. Or—my preferred method, since I’m a philosopher—engage him in reasoned debate. See if you can talk him out of his irrational anti-Semitism and ineffective authoritarianism. They’re really stupid positions; arguing against them isn’t hard.

But maybe you want to say that’s impossible. He’s a closed-minded bigot. He’s power-hungry. You can’t reason people out of a position they didn’t reason themselves into. That sort of thing. Personally I place a lot of faith in the power of reason, but I do understand where you’re coming from.

So let’s take it one step further. Let’s postulate that the only way to prevent Hitler from rising to power is to kill him. You can’t reason with him, can’t guide him into becoming a mediocre artist, can’t prevent him from being born, yadda yadda yadda. Let’s say your only options are to let him be (and he comes to power, and horrible things happen) or to execute him (and they don’t).

There’s still another question to be asked: does he deserve to be killed?

Maybe your first thought is, Well, duh. Of course he does. The dude is a genocidal maniac. But keep in mind, you’re going to kill him before he does any of that. In fact, that’s the point: to take him out before he’s guilty of any of his horrendous crimes.

Philip K. Dick toyed with this idea in his short story, “The Minority Report.” (You can read a summary here.) The central question there is whether it’s right to punish people for things they haven’t done yet. We should point out that in some cases the answer might well be yes. For example, if Joe Thug is trying to whack you on the head so he can steal your wallet, lots of people say it’s not wrong to preemptively kick him in the wee-wee and run. You don’t have to wait for him to actually hit you before you hit back.

But that case is far too easy because Joe has already committed assault by threatening you. By kicking him, you’re just preempting his attempt at battery. For it to count in Philip K. Dick’s sense—what he calls “pre-crime”—the cops would have to be able to arrest Joe for assault and battery before he even leaves home.

The Hitler case is legitimate pre-crime. We already know what he did. But maybe even his case is too easy, because his name is synonymous with evil. So let’s take a current hypothetical case, the one that’s on the news every night.

I don’t know what it means about my country that the two most hated people in the nation are the front-runners for the presidency. What I do know is that millions of people fear a Trump presidency in the same way they fear a meteor eradicating all life on Earth, and millions of other people fear a Clinton presidency in exactly the same way. The “argument,” such as it is, goes something like this:

This candidate knows absolutely nothing about national defense, nothing about securing our nuclear arsenal, and nothing about dealing with terrorism. Therefore if this candidate becomes president, we all die screaming in a nuclear fireball.

Trump or Clinton, take your pick; either way, you won’t have to look far to find someone spouting this line of rhetoric. (I recommend ignoring these people. There’s plenty of well-reasoned, well-informed journalism out there too.)

But let’s say it turns out not to be rhetorical. Let’s say you intercept a time traveler who has come back to kill the candidate in question. This person brought along some history textbooks from eighty years in the future, conclusively proving that this candidate is directly responsible for millions of deaths by nuclear fireball. The only solution, your time traveler says, is to kill the candidate.

So you lock this person in the bathroom and call 911. Good idea. But just for argument’s sake, let’s say all of this really is true. The case for killing the candidate (again, you pick which one) is the same as the case for killing Hitler: namely, if this person comes to power, the death toll will run into the millions. But as of today, this person hasn’t come to power, hasn’t got any nuclear weapons, and hasn’t brought about the deaths of millions.

You have at least three options:

1) It is always wrong to kill an innocent person. Even if this candidate will be responsible for millions of deaths, and even if the candidate will deserve execution for that, s/he doesn’t deserve execution now.

2) Killing one to save millions is morally right. But only if no nonviolent means are available, of course. (If, for instance, it would be enough to kidnap the candidate until November, that would be much better than shooting this person.)

3) Both options are equally right and equally wrong. As of today the candidate is innocent, and therefore deserves to live, but it’s also wrong not to kill the candidate if that really is the only way to prevent millions of deaths.

This may take some air out of the basic intuition that it’s obviously right to kill Hitler. It might also leave you with some uncomfortable commitments:

If you chose 3, you still have to land on 1 or 2. You can’t abstain; choosing not to intervene is the same as choosing 1.

If you chose 2, I’ll bet I can talk you down a lot lower than a million lives. Would you kill one innocent to save a thousand others? A hundred? If so, why not kill one to save ten? If you’ll go that far, why not kill one to save two? And if that’s too far, what’s the magic number? More importantly, how do you justify that number? Or is it just an arbitrary choice?

If you chose 1, you’re actually on pretty solid ground, philosophically speaking—if you can stand your ground. Almost all of my ethics students say killing innocents is always wrong, until I pose one case for them; after that, almost all of them say there are exceptions to the ban on killing innocents.

I’ll give you that case on our next installment of Cybermorality. Until then, hop over to facebook/novelocity, facebook/philosofiction, or Twitter @AllBeinMyself and make your opinion known!


Steve Bein

Cybermorality: Should we go back in time and kill Hitler?

Maybe you’ve read the short story “Wikihistory” by Desmond Warzel. It’s the one written in the form of message boards from the International Association of Time Travelers, starting with the post of a new member who proudly announces that he’s gone back in time and killed Hitler.

Minutes later, a senior member goes back and incapacitates him before he can carry out the deed. Why? Because if there’s no Hitler, there’s no World War II, then we get none of the radical technological advances fueled by the war, and without these we’d never have developed—you guessed it—time travel.

None of this is spoilery, since it all happens on page one, but go ahead and read the rest of the story now if you want to. I’ll wait.

Okay, welcome back. My favorite line of the story also comes on page one: “Take it easy on the kid, SilverFox316; everybody kills Hitler on their first trip.” Why? Well, why not? The dude’s name is synonymous with evil. Not many people have earned that distinction. Emperor Nero got that reputation for himself way back when, but by body count he’s a featherweight compared to Hitler.

Now here’s this week’s thought experiment: let’s say you get to go back in time exactly once. Setting Desmond Warzel’s hypothetical concern aside, let’s say we can erase Hitler and still get time travel tech without WWII. (For what it’s worth, I think there are plenty of other factors besides warfare that incentivize us to develop rocketry, electronics, and computers.) You can go back whenever and wherever you want, but you only get to do it once. Let’s give you six months to get your project done, then you come back.

We can ask two questions now: what would you do, and what should you do?

I’ve always liked Patton Oswalt’s answer to this: beat George Lucas to death before he can make Star Wars: Episode I. It’s an admirable choice. I hate that movie so much that it has affected the way I review all other movies. For instance, I give Batman v. Superman 1½ stars: one star for being absolutely terrible, plus half a star for not having Jar-Jar Binks in it.

But as much as I despise that film, as much damage as it (and the other abominable prequels) did to my beloved childhood memories, I have to admit this would be a terribly selfish use of my one chance to change history. One guy ruined my favorite movies, one guy murdered millions. Seems like a straightforward choice.

So let’s stick with the second question. Never mind what you’d like to do with your one opportunity to go back in time. What should you do?

A lot of people will say the right thing to do is to benefit humanity. (Since I really do think erasing Episode I from history would benefit humanity, perhaps we should add that we ought to benefit humanity to the greatest extent possible.) If that’s true, then seeing to it that Hitler stayed in art school isn’t necessarily your best option. Stalin killed millions more than Hitler did. Genghis Khan killed millions more than Stalin—twice as many, in fact. Something like 45 million people, over 10% of the world population at that time.

But maybe preventing genocide isn’t your best option. Maybe you want to go back 95 million years or so and kill every last mosquito you can find. The number that gets thrown around is that about half of all human deaths in history can be attributed to diseases delivered by mosquitoes. So just wipe them out. Don’t worry about the bats that eat them; with no mosquitoes on the menu in the first place, they’ll just evolve to eat something else.

But maybe extinguishing an entire species just for human benefit isn’t your cup of tea. If so, then how about this: tell ancient people about germ theory. You could save a lot more than 40 million people if all the physicians of antiquity knew that sterilizing their instruments in boiling water is a really good idea.

While you’re at it, read up on obstetrics before you go, and teach those same physicians a thing or two about delivering babies. Prior to modern medicine, the maternal mortality rate during childbirth was about 1 for every 100 live births. Today we measure it in deaths per 100,000 live births; the old rate works out to 1,000 per 100,000, while wealthy countries today are down around the tens. So here’s a pretty awesome Mother’s Day present: cut childbed mortality by roughly 99% throughout history.

Or maybe all of this is still too selfish for you. Maybe we should benefit not humanity but rather the entire planet. You’d only have to go back to the 1960s to meet the first scientists making serious headway on climate change. Bring plenty of books with you. You’d catapult our understanding of carbon emissions decades ahead in a matter of weeks. You could even kick off the green energy industry, and then when you got back home you could retire on the massive profits. In the long term, you’d save not only millions of human lives but also dozens—perhaps even hundreds—of plant and animal species.

Or maybe you’ve got a better idea. If so, comment here, or tweet me @AllBeinMyself, or head over to Novelocity’s Facebook page to make your opinion known!

Steve Bein

Cybermorality: Soldierless Warfare, pt. II

In the last installment I addressed the moral cost of engaging in war with robots rather than humans. That was a response to a Rolling Stone article, and I condensed that Novelocity post down to a few sentences and sent it in to Rolling Stone.

Well, they published it. So yes, Novelocity had it first and one of the best known magazines in the country had it second. Looks like we’re pretty cool around here. I’m just sayin’.

Rolling Stone letter

 


Steve Bein

Cybermorality: From Driverless Cars to Soldierless Warfare

Steve Bein continues his fascinating series on the intersection of philosophy and SFF. Previous installments include:
when your car should kill you
if genocide is always wrong
and making moral decisions in a vacuum


A couple of weeks ago on this site we looked at the ethics of the driverless car. This week’s Rolling Stone addresses the same issue in an article called “The Ride of Intelligent Machines,” starting with Google’s car and moving on to military robots in Iraq and Afghanistan.

The selling points of robot warfare are pretty obvious. Human soldiers can bleed, they can die, and these days they’re increasingly able to survive what used to be unsurvivable, then come home to cope with the consequences. Soldiers are expensive to train, house, mobilize, and feed. They have feelings, families, and—seldom mentioned in these discussions—moral values, which are sometimes at odds with the orders they’re given or the causes they’re sent to fight for, especially in modern theaters of war. That has real psychological repercussions, and the mental trauma of modern warfare can be worse than the physical trauma.

Robots aren’t subject to any of that. They don’t get funerals. They rarely make headlines. They’re cheap and getting cheaper. (This, incidentally, should make global superpowers nervous; state-of-the-art combat drones are affordable even for the tiniest nations.) Most importantly, though, their destruction counts as collateral damage, not casualties. Rolling Stone’s Jeff Goodell sums it up this way: “Robots can go into situations where soldiers can’t, potentially saving lives of troops on the ground. […] Since robots don’t come home in caskets, use of smart machines allows military leaders to undertake difficult missions that would be unthinkable otherwise.”

As an ethicist, I think this misses the most important point: right now, in the active war zones of 2016, robotic warfare only reduces the human cost on one side of the conflict. No doubt the day will come when robots and drones fight one another, but we’re not there yet. Today when we speak of robots taking the place of soldiers, we ought to remember that they only save lives on the side that owns the robots.

Reducing the human cost of warfare seems like a good thing, and in some very important respects it is, but we must keep in mind that the human cost of warfare is our primary incentive for pursuing peace. A war without casualties, a war that only costs money, is a war you can fight until you go broke. It follows that the cheaper your robots become, the longer you can wage such a war—if you’re the one with the robots.

If you’re a human combatant, though, it’s a very different war. That war might not look so different from a Terminator movie.

I say this without political judgment. I’m not suggesting that the coalition forces in Afghanistan and Iraq are Skynet or that the Taliban and ISIS are John Connor’s resistance fighters. That’s nonsense. What I’m saying is this: it doesn’t matter who the good guys and bad guys are. When one side fights with robots, that side doesn’t have as much skin in the game. The moral calculations are different.

I think that’s a move in the wrong direction. I think anything that makes it easier to go to war is a colossal mistake. I think war is bad, period, and I think the less of it we have, the better.

I’m not a radical pacifist. I believe in self-defense and proportional response. What concerns me is the possibility that nations could be actively engaged in conflict in which only one side suffers all the casualties. In that scenario—the one we’re moving toward all too quickly—the winning side has little reason to stop fighting. Peace has never been the easiest solution. Anything that makes warfare easier places peace that much further out of reach.


Steve Bein

Cybermorality: Making moral decisions in a vacuum

Steve Bein continues his fascinating series on the intersection of philosophy and SFF. Previous installments ponder when your car should kill you and if genocide is always wrong.


 

This week I’m going to juxtapose two famous short stories, Tom Godwin’s “The Cold Equations” (1954) and Orson Scott Card’s “Kingsmeat” (1978). If you haven’t read them, and if you don’t want them spoiled, go root them up and read them right now.

Godwin and Card both toy with our consequentialist sensibilities. Consequentialism is the moral theory that defines right and wrong in terms of consequences. (Not very creatively titled, is it?) As it turns out, this is how most people describe their moral intuitions: that is, if you ask them to evaluate the morality of, say, kicking a random stranger in the shin, most people will say it’s wrong because it causes needless pain.

There’s an open question about whose consequences matter most. If you’re an ethical egoist, you say your own pain and pleasure are more important than everyone else’s, whereas if you’re a utilitarian, you say everyone counts equally (or, put another way, what’s important is the net amount of pain or pleasure, not who happens to be receiving it). It’s worth noting that the great majority of moral philosophers discount ethical egoism as little more than a sophisticated defense of selfishness. In my experience, most ethics students do too.

Back to Godwin and Card. In “The Cold Equations,” a pilot named Barton encounters a stowaway on his Emergency Dispatch Ship, or EDS. The sole function of the EDS is to send emergency supplies to people who need them, and so it’s only equipped to deliver its payload and a pilot to their destination. Even reserve fuel is too great an expense; every gram of additional propellant is one gram less of life-saving cargo.

The stowaway, inevitably, is young and cute and female, and by modern lights it’s a little depressing how much this matters to the plot. She knows the EDS is headed for a planet where she’s got family, which is why she stole aboard in the first place. She knew what she was doing was against the law. What she didn’t know was that the punishment is death. But Barton’s policy manual is crystal clear: Any stowaway discovered in an EDS shall be jettisoned immediately following discovery.

So Godwin confronts Barton with a choice: doom himself, the girl, and her family, or send the girl out the airlock.

It seems to me this is a false dilemma. Barton could jettison himself instead. But let’s assume the EDS isn’t equipped with a decent autopilot and the stowaway isn’t a trained pilot herself.

Enter Card. In “Kingsmeat,” a human space colony has been conquered by aliens who love delicious human flesh. (In their defense, we do taste exactly like pigs, and are therefore particularly yummy with barbecue sauce. I happen to know this for a fact, but please don’t ask me how.) A character called the shepherd manages to fend off the destruction of the entire colony, but only by teaching the aliens how to maintain a herd. They give him a shepherd’s crook of sorts, part stun gun and part scalpel. With it he can paralyze his fellow colonists, painlessly remove their body parts, and keep the aliens well fed while waiting for the cavalry to arrive and drive them off.

So now let’s say you’re the EDS pilot and in addition to a stowaway you’ve also got the shepherd’s crook. Your options:

1) Follow policy. Shove the stowaway out the airlock. Body count: 1, and it ain’t you.

2) Be chivalrous. Keep her aboard. Body count: you, her, and everyone you were sent to save.

3) Noble suicide. Thank your lucky stars that your stowaway happens to be an expert EDS pilot, then go out the airlock yourself. Body count: 1, but this time you’re the popsicle.

4) Start shepherding. Figure out which body parts you and your stowaway will have to sacrifice. Carve off enough meat and vent it out the airlock and you’ll get the net weight of the EDS down to where it needs to be. Body count: zero.

So now you get to choose which of these options you like. To make it harder, I’ll give you two questions. First, which option do you think is morally best? Second, which one do you think you’d actually be able to go through with?

Here’s what’s fun for me as a philosopher: this decision is actually harder for the ethical egoist than for the utilitarian. The utilitarian can’t tell the difference between 1 and 3, and has to say 4 is obviously the right choice. (This next part isn’t as obvious, but for my money I think you’ve got to start by chopping legs off. Legs are nice and heavy. You’d still be able to use all your hand controls, and your stowaway can lie on the floor and operate the foot pedals for you. After that, maybe lose a kidney apiece. After that… well, weigh in again before you do anything rash.)

If you’re an ethical egoist, it turns out option 1 isn’t the obvious choice. Why? Because you also have to consider your long-term consequences. You’ll meet this poor stowaway’s family as soon as you land. You’ll have to tell them something. If you don’t, they can call your mothership (which you know they can contact, because that’s how you got in this mess in the first place). Whatever you tell them, you’ll be stuck on this planet with them indefinitely. So all of a sudden, even the most selfish person has to take a good, close look at option 4.

Steve Bein

Cybermorality: Do you really think genocide is always wrong?

Steve Bein continues his fascinating series on the intersection of philosophy and SFF. Last month’s first installment ponders when your car should kill you.


 

Do you really think genocide is always wrong?

For the second installment of this series we’re going to take on a biggie: genocide.

Let me put all of my cards on the table right from the get-go. I think it was morally despicable every single time in history that one group of people ever tried to eradicate another group of people for no reason other than the fact that the second group exists. I think it was wrong to attempt it, wrong to carry it out, wrong to cover it up, and wrong to deny it ever happened. I am so totally judgmental about this.

But one of the really cool things about science fiction and fantasy is that they can challenge our moral assumptions in some really interesting ways. Some of those assumptions might be hiding in your current beliefs about genocide, so let’s see what we dig up.

Unless you have some uncommon moral beliefs, you’re willing to take an antibiotic to get rid of your strep throat. It’s worth noting, though, that antibiotics are essentially biological weapons. Their purpose is to eradicate some unwelcome species infecting your body. A microscopic case of genocide, then.

Or maybe not. After all, it’s not as if your taking antibiotics kills all the streptococcus in the world. It only gets rid of your strep throat. Not at all like genocide, then. Closer to kicking a bunch of ne’er-do-wells out of your apartment.

Fair enough, but most people would happily embrace the complete eradication of HIV in all its forms. Not just one species of the virus, and not just in one person: all of it, every strain, everywhere. That’s genocide for sure.

To this maybe you want to say okay, fine, so maybe what I’m opposed to isn’t genocide but ethnocide. That is, what you find morally objectionable isn’t the elimination of a genotype, but rather of a people—that is, a group of creatures with language, culture, history, the capacity to remember and celebrate their history, all that good stuff. HIV and streptococcus don’t count because they’re not people.

Incidentally, it’s not enough to say that those species don’t count as people because they don’t have human DNA. Science fiction and fantasy throw that assumption for a loop, because Chewbacca and Gimli look a lot like people. They have language, culture, history, personality, conscience, you name it. Confronted with the Chewbacca example, pretty much everyone in my ethics classes concedes that “nonhuman people” is a legitimate category, one that contains wookiees, dwarves, and maybe even some terrestrial species too. (Chimpanzees are people-like enough that US courts granted them limited human rights protections, and dolphins and whales are sufficiently people-ish that there’s a growing cetacean rights movement.)

Now suppose there’s a virus capable of language—or proto-language, anyway, maybe to a similar degree that whales have. We can observe the transfer of information in a way that’s not reducible to infection or reproduction. This isn’t just a communicable disease; it’s a disease that communicates.

Let’s also say that this virus can live in human beings, and when it does it makes them sick—let’s say like having a cold. You feel lousy, but usually not so terrible that you miss work. Symptoms are treatable with over-the-counter meds. This can get expensive, depending on your household finances, but it’s not a huge burden on the average middle-class family. For people who can’t afford the cough syrup or whatever, the symptoms are only annoying, not life-threatening.

This particular virus isn’t airborne like a cold, so you don’t have to worry too much about infecting the people around you. But it doesn’t go away of its own accord, so the only way to rid yourself of these yucky symptoms is to take an antiviral drug.

So here’s the question: how many days in a row are you willing to be headachy to keep this tiny, annoying, somewhat intelligent species alive in your body?

Does it matter that the virus that’s in you is one of a kind? Viruses reproduce very quickly, and since this one is talkative, it only takes a few generations—a matter of days—before the viruses in my body and the ones in yours don’t really speak the same language anymore. They’re now genetically and linguistically unique.

Would it matter if researchers predicted an end to your symptoms? Does it matter if the end is a long way off? Let’s say a year from now, either A) virologists will have figured out how to talk this disease into not making people feel lousy, or B) they’ll have created some suitable habitat for it to live in so it doesn’t have to live in a human host, or C) if we still haven’t accomplished A or B, you can just give in and take the antiviral.

Let’s posit a few more things, which we’ll take as given:

1. Having a cold really does suck.

2. It doesn’t suck nearly as much as genocide. Not a millionth of a millionth of a percent as much.

3. Some people would say taking the antiviral—eradicating a genetically and linguistically unique population—counts as genocide at best, ethnocide at worst.

4. Some people would acknowledge 3 and still take the antiviral without a second thought.

There’s a fifth claim we can’t take as given, because it’s subject to some debate:

5. It’s morally wrong to take the antiviral.

So what do you think about 5? Do you buy it or not? Head over to Novelocity’s Facebook page and make your opinion known!

Steve Bein

That moment when maybe your car should kill you

This is the first in a series of classic ethical conundrums I’ll twist into fun new shapes with ideas from science fiction and fantasy. To start off the series let’s look at one that isn’t too far off.

Jack and Jill are cruising along in their driverless cars. Neither of them is paying attention, because neither of them needs to; at this point the cars are that good. Just about everyone is driverless now, and all of the vehicles have transponders that communicate with each other, so we’ve got backups upon backups. Everything is as safe as can be.

So Jack is reading a good novel and Jill is playing Uno with her kids in the back seat. (The front seats turn all the way around to face the back, of course, because why wouldn’t they? Car travel is really more like train travel now, including a little table between the front and back seats for playing Uno.) Everyone’s having a nice little road trip.

Until Jack blows a tire. His car has to make a split-second decision—no problem, because it actually makes dozens of safety decisions per second. It has to choose: swerve left, into Jill’s car in the oncoming lane, or do nothing, and crash into a beautiful and majestic redwood tree. Jack’s car instantly knows:

• Cars are engineered for crash safety, but that technology is far from perfect, and a head-on collision will almost certainly cause injury to all parties involved.
• Redwoods are not at all engineered for crash safety, and in fact the more beautiful and majestic they are, the worse they are for you to crash into.
• Jill’s car has more people in it than Jack’s car, and also more people in it than the redwood tree.

Whereupon Jack’s car instantly calculates the probable damages from two choices:

1. Do nothing. Jack hits Jill head-on, risking injury to himself, Jill, and her kids.
2. Swerve. Jack hits the tree head-on, risking very serious injury to himself but placing everyone else out of harm’s way.

So what should the car be programmed to do?

If you choose 1, you might be an ethical egoist: that is, you believe “morally right” means “whatever maximizes my own personal well-being,” regardless of anyone else’s interests.

If you choose 2, you might be a utilitarian: that is, you believe “morally right” means “whatever maximizes well-being for everyone involved.”

If you choose 1, you’d also have to recognize that you wouldn’t choose 1 if you were Jill; only Jack is better off in 1. In which case you might have to admit you’re a hypocrite.

If you choose 2, you might put a pretty serious dent in new car sales, inasmuch as the old beater I have to drive myself might look pretty appealing if I know that those shiny new driverless cars might be willing to kill me.

If you choose 1, you might be trading your driver’s license for a bus pass. If what you want to do is maximize your own well-being, the best way to ensure that is for all cars to choose 2—that is, to swerve away from you if they can—and that means you can’t choose 1.

If you choose 2, you might have a hard time answering people who say that their car, which they paid for with their money, ought to maximize their best interests—or in other words, that every car has a greater obligation (if you can call it that) to protect its owner than to protect anyone else.

If you choose 1, you might also be committed to running over motorcyclists more often than necessary, because they’re light and you’ll probably survive the impact.

If you choose 2, you might also be committed to smashing up Volvos more often than necessary, because they’re so good in a crash. In fact, you’re probably committed to driverless cars getting regular software updates on which vehicles are the best to hit. This would shake up the entire economy around cars, car insurance, etc.

Whether you choose 1 or 2, you probably want to call your senator right now and push for a bill to give a federal agency, something like the National Transportation Safety Board, jurisdiction over driverless car programming. Air travel is as safe as it is partly because federal regulators investigate every accident and require all airlines to follow the same safety rules. If that universal conformity were not the rule with driverless cars, then some auto manufacturers would make cars that were more selfless than others. Some companies might play a little fast and loose with the rules—I’m looking at you, Volkswagen—and that could be very dangerous indeed. Whether you’re a utilitarian or an egoist, you probably want to know what the rules are and also know that everyone has to follow them.

And in case you think all this philosophy stuff has no practical value, consider this: programmers are weighing all of these considerations right now.
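To make that concrete, here is a minimal sketch, in Python, of what encoding the two policies might look like. Every number, name, and scoring function in it is invented for illustration; this is nobody’s actual crash-avoidance code, just the shape of the choice between option 1 and option 2.

```python
# A toy sketch only: invented numbers and policies, not any manufacturer's
# real crash-avoidance logic.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    p_injury_owner: float   # chance of serious injury to this car's occupants
    p_injury_others: float  # chance of serious injury to everyone else involved
    occupants_owner: int    # people riding in this car
    occupants_others: int   # people in the other vehicle(s) at risk

def egoist_harm(m: Maneuver) -> float:
    """Option 1: count only the expected harm to the car's own occupants."""
    return m.p_injury_owner * m.occupants_owner

def utilitarian_harm(m: Maneuver) -> float:
    """Option 2: count the expected harm to everyone involved equally."""
    return m.p_injury_owner * m.occupants_owner + m.p_injury_others * m.occupants_others

# Jack's two choices after the blowout (all figures made up for illustration):
do_nothing = Maneuver("hit Jill head-on", 0.5, 0.5, occupants_owner=1, occupants_others=3)
swerve = Maneuver("hit the redwood", 0.9, 0.0, occupants_owner=1, occupants_others=3)

for policy in (egoist_harm, utilitarian_harm):
    choice = min((do_nothing, swerve), key=policy)
    print(f"{policy.__name__}: {choice.name}")
# With these numbers the egoist policy keeps Jack on course toward Jill,
# and the utilitarian policy swerves him into the tree.
```

Swap in different probabilities and occupant counts and the two policies can agree or diverge, which is exactly why it matters who gets to set the numbers and the rules.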

So what’s your answer, 1 or 2? Pop on over to Novelocity’s Facebook page and make your opinion known!


Steve Bein