In science fiction, we wrestle with big questions. One of the biggest is the question of what makes humans different from others, whether it be the other species of our planet, the hypothesized creatures of other planets, or some form of artificial intelligence. It's a tricky question, made more difficult by the fact that we can't even agree on the right vocabulary to use. Here are a few of the terms common in science fictional treatments of the subject, and why they all fall a little bit short:
SENTIENT. Sentience is probably the most common term heard in SF circles, and also the most inaccurate. "Sentient" simply means to have senses, to be able to perceive the world around you. A nematode worm is sentient. Your dog is sentient, as are its fleas. For a computer, sentience may be a more interesting addition, but it's still a far cry from the defining concept we're looking for.
INTELLIGENT. This comes a little closer. We say “artificial intelligence” to mean computers that demonstrate at least the simulation of thought. But intelligence is a sliding scale. Mice have a level of intelligence. Dolphins more so. Worse, some people are not very intelligent, and yet still demonstrate many of the qualities we find definitive: a rich emotional life, longing, imagination, a sense of justice and morality, love of beauty, a drive to make sense of the world around us.
SELF-AWARE. Like intelligence, this term is useful, but addresses only part of the picture. Self-awareness refers to our ability to contemplate our own existence. However, it's fiendishly difficult for anyone but the self-aware being itself to know whether it is actually going on. I know that I'm self-aware, but how do I know you are? You claim to be, but I could write a computer program that claims to be self-aware and isn't. As such, it's not a very useful measuring stick for comparing beings. Are dolphins aware of themselves? Are human babies? The concept seems to strike at something we consider important, but its ambiguity limits its usefulness.
SAPIENCE. This may be the best term we have available, though it does a bit of an end-run around the question. Instead of trying to define what we mean, this term riffs off of homo sapiens and essentially means “that thing that humans are.” As such, however, it works where other terms fail. Its only downside is that it’s hard to extrapolate to non-humans. What about a creature that has many of the transcendent traits of humanity, but is very different from humans in other ways? Would we call such a creature sapient? How would we decide whether it is or isn’t? Some have suggested the word “sophont” as an alternative (from the Greek word for “wise”), to disconnect the term from humanity, but that only changes the word itself, not its meaning.
The basic problem we have with settling on a term is that we don't all agree, philosophically, on what does differentiate humanity. If you are a Christian, like I am, you probably see human consciousness as a separate category, something created by God to imitate his own characteristics. If, however, you're a scientific materialist, you probably consider the human mind to be in the same category as that of other animals, differing in details, of course, but not in kind. Depending on your philosophical framework, you might consider the differences between humans and dolphins to be trivial or profound. You might consider computers close to being on par with us, or not even on the same track.
I don't see our vocabulary problem being solved any time soon, but this kind of dilemma is why I love science fiction. It gives us the opportunity to explore this space by imagining creatures, both organic and artificial, that may challenge our ideas of what makes humanity unique. And a philosophical challenge is a great way to exercise that part of you that makes you different… whatever you want to call it.