This essay was written by Jason Schukraft, with contributions from Peter Hurford, Max Carpendale, and Marcus A. Davis.
Other humans merit moral concern. We think many nonhumans merit moral concern too. But how do we know? And which nonhumans? Chimpanzees? Chickens? Bumblebees? Protozoa? Roombas? Rocks? Where and how do we draw a line?
What would it take to justifiably believe that some nonhuman experiences pain (or pleasure) in a morally significant way? This is a tough question, but it is incredibly important to get right. Humans constitute a very, very small fraction of the animal kingdom. If other vertebrate animals experience morally significant pain, then much of our engagement with these animals is deeply immoral. If invertebrate animals experience morally significant pain, then, given the sheer number of invertebrates, an almost incomprehensible amount of morally significant suffering occurs beyond the ken of normal human attention. And if the capacity to experience morally significant pain is not restricted to organic entities, then human civilizations of the future may be capable of producing exponentially more sentient entities than presently exist.
On the other hand, if many, most, or all nonhumans do not experience morally significant pain, then it could be a waste of resources to try to change their condition. Given that there are millions of humans currently experiencing morally significant pain (for whom these resources would be a great aid), the opportunity cost of wasting time, talent, and money on nonhumans appears tremendous.
Figuring out where and whether to allocate resources to help nonhumans is of significant interest to Rethink Priorities. This post is our first in a series on morally significant pain in invertebrates. We focus on invertebrates for two reasons: (1) We are already reasonably confident that mammals, birds, reptiles, amphibians, and most fish feel morally significant pain, and hence must be included in our moral calculations, but we are unsure if more distantly related animals warrant similar concern, and (2) The subject of invertebrate sentience, though recently gaining traction both in the scientific literature and the effective altruism community, appears neglected relative to the subject’s potential import. In future posts we look at which features might be relevant for determining whether an entity is capable of experiencing pain. We also present a detailed table outlining the distribution of these features throughout the animal kingdom.
Of course, we recognize that delineating the phylogenetic distribution of morally significant pain is an extraordinarily complex and difficult task, one that we are extremely unlikely to solve outright. To put it mildly, much more research, at virtually every level of the problem, is needed. Nevertheless, the urgency of the issue compels us to address it now, before all the potentially relevant evidence is in. As grantmakers and charity entrepreneurs, we do not have the luxury to wait. We must decide how to allocate resources now, in our current epistemically incomplete state. Our goal in this series of posts is to determine, to the best of our abilities and within reasonable funding and time constraints, what we should think about morally significant pain in invertebrates, given the current state of the evidence.
To that end, we begin with a review of the philosophical difficulties inherent in the detection of morally significant pain in nonhumans. We discuss eight conceptually sequential steps, alongside their attendant difficulties, needed to identify morally significant pain in nonhumans. The first three steps concern detecting pain; the other five steps concern determining whether (and to what extent) the pain is morally significant.
The Problem of Other Minds
Start with yourself. You experience pleasure and pain. You can be as confident of this fact as you can be of any fact. Why? You have direct introspective access to at least some of your phenomenal states. But there is an asymmetry between you and everything else. You cannot know by direct introspection that someone else is phenomenally conscious. If you are justified in believing that other entities experience pains and pleasures, it must be by some different epistemic strategy. Solipsism is the view that one’s mind is the only mind that exists. If we are to justifiably believe that some nonhuman experiences pain, we must first overcome the challenge of solipsism.
Although philosophers disagree about the appropriate resolution, robust solipsism has few, if any, contemporary defenders. The idea that other humans experience pleasure and pain is very central to our web of beliefs. Any theory that would wage war against such a central belief had better come loaded with powerful ammunition. It is generally held that traditional arguments in favor of solipsism are incapable of providing such ammunition.
Analogical Argument and Inference to the Best Explanation
The most common response to solipsism takes the form of an inference to the best explanation. One begins with an examination of one’s own behavior. For example: when I cut my hand, I cry out, I move my hand away from the sharp object, and I later treat the wound with a clean bandage. Then one considers the behavior of other humans: they also cry out when cut and attend in similar ways to similar wounds. There are a variety of hypotheses which, if true, could explain this behavior. Perhaps they are sophisticated robots programmed to behave as I do. But the simplest and best explanation of the behavior of other humans is that they feel pain like I do.
Of course, this explanation might be mistaken, and we might come to know it is mistaken. If I examined the heads of many fellow humans and in each case found not a brain but a crude artificial device receiving signals from a robotics factory, that would constitute a defeater for my prior explanation. I would then no longer be able to rationally endorse the view that other humans have mental states like I do. Inference to the best explanation tells us that, in the absence of defeaters, we are licensed to prefer the simplest explanation of a phenomenon.
Inference to the best explanation is related to, but distinct from, argument by analogy. The basic structure of an analogical argument is as follows (where E1 is the source domain and E2 is the target domain):
(1) Entity E1 has some properties P1 … Pn
(2) Entity E2 has the same properties P1 … Pn
(3) Entity E1 has some further property Pn+1
(4) Therefore, entity E2 likely has the same property Pn+1
Analogical arguments are by their nature inductive. The wider the inductive base upon which an induction rests, the better the inductive argument. But pain is a private mental state, so when it comes to pain, we each have an inductive base of one (namely, ourselves). Inductive inferences from a base of one are generally weak. So we probably can't come to know that others experience pain by analogical reasoning alone.
Inference to the best explanation, by contrast, is abductive. Abductive arguments are non-deductive like traditional inductive arguments, but, unlike traditional inductive arguments which are, allegedly, justified empirically, abductive arguments are justified a priori. We are justified in using induction because, as a matter of contingent fact, induction has worked well in the past. Instances of abductive reasoning, in contrast, are generally held to instantiate principles of rationality, which, if they are known at all, are known a priori.
Inference to the best explanation can also be applied to nonhumans. If a class of nonhumans exhibits pain behavior sufficiently similar to humans, then, in the absence of defeaters, we are licensed to prefer the explanation that they feel pain to alternate explanations. But what counts as sufficiently similar? And what counts as a defeater?
Consider similarity first. One worry is that the behavior of phylogenetically distant animals (to say nothing of inorganic entities) is so alien that the behavior cannot even be accurately described without resorting to problematic anthropomorphizing. Even when we can accurately describe the behavior of, say, invertebrates without inappropriately anthropomorphizing them, it’s unclear how much similarity we should antecedently expect. Different species of animal, after all, are different. To take a trivial example: most of the time, when humans are in pain, they grimace. But the hard exoskeleton of an insect does not allow for grimacing. Does this difference provide a small bit of evidence that insects don’t feel pain? Presumably not. But that doesn’t mean that grimacing is irrelevant. Consider another example: many times when a human is in pain, she cries out. Again, owing to anatomical differences, we shouldn’t expect this feature to be widespread in invertebrates, even if they do feel pain. But farm animal vocalization has recently been taken to be a good metric of animal welfare in pigs, cows, and chickens.
The general lesson here is that there is no set of features which is universally relevant for the detection of pain in nonhumans. Even if pain experiences are widespread throughout the animal kingdom, the extreme diversity of living organisms suggests that pain experiences might be expressed in behaviorally and neurobiologically distinct ways.
The same problem applies to potential defeaters. It was once widely thought that a neocortex is required for conscious experience. Thus, it was thought, any creature which lacked a neocortex thereby lacked conscious experience. No matter how similar the behavior, the absence of a neocortex in a creature served as a defeater for the view that that creature experienced pain.
Today the picture is more complicated. For starters, evidence is emerging that, even in humans, a neocortex is not required for conscious experience. More importantly, the absence of a neocortex doesn’t imply that there aren’t homologous cells performing the same role in other creatures. The point to appreciate here is that the bar for justifiably believing that some neurological feature is a necessary condition on conscious experience is quite high. Neurological differences surely are relevant, but, in the absence of a general theory of consciousness, the degree to which they can be decisive is limited.
There is a further, more fundamental limitation to investigating consciousness empirically. Although pain states are associated (at least in humans) with various physiological responses, such as elevated heartbeat and increased respiration, pain cannot be defined in terms of these responses. It’s natural to suppose that my experience of pain explains why my heart starts beating faster and my respiration quickens. If pain just is elevated heartbeat and increased respiration (or whatever other physiological responses one favors), then we lose this natural explanation. More importantly, if we define pain in purely physiological terms, we miss the moral significance of pain. Pain is intrinsically morally bad (ceteris paribus) not because it causes or is identical to certain physiological responses. Pain is bad because it feels bad. Measured physical responses can provide evidence that an entity experiences the felt badness of pain, and that evidence can be decisive, but we should not confuse evidence of a phenomenon with the phenomenon itself. To investigate the phenomenon of consciousness directly, we probably ought to turn to philosophy.
Applying a General Theory of Consciousness
Determining whether an entity is phenomenally conscious is probably not a strictly scientific endeavor. At some point, some difficult philosophical theorizing might be needed to help us make appropriate attributions of consciousness. So suppose all the empirical data is in, and it’s still unclear whether a certain nonhuman is conscious. To settle the question, it would be nice to appeal to a well-justified general theory of consciousness. Below, I briefly examine three broad families of views about the relationship between mind and matter: dualism, physicalism, and a hybrid theory. But first, I outline some peculiar initial difficulties we face before embarking on a quest for a theory of mind. Collectively, these subsections show that uncertainty in philosophy of mind will at some point probably infect our credences about which nonhumans experience morally significant pain.
The Common Ground Problem
Theories of consciousness, like all philosophical theories, begin with certain pre-theoretic intuitions about the subject matter. These pre-theoretic intuitions include background framework assumptions about roughly how widespread consciousness is likely to be across phyla. If rival theories begin from radically different starting assumptions, comparing them is unlikely to be illuminating. There has to be sufficient common ground in order for robust theory-comparison to be possible. But there’s some evidence that this common ground is lacking in theories of consciousness. Existing theories of consciousness, from the world’s top researchers, run the gamut from so-called “higher-order theories,” which, due to their metarepresentational requirements on consciousness, seem to deny consciousness to babies and dogs, to panpsychism, which attributes some degree of consciousness not just to plants and unicellular organisms but also to protons and electrons. One might have thought that these consequences could serve as reductios on their respective theories, but apparently this is not the case. So the field must allow an unusually diverse range of initial assumptions. This makes adjudicating between competing theories in philosophy of mind particularly hard.
The Causal Status of Consciousness: Dualism
Every theory of consciousness must grapple with the causal status of consciousness. Epiphenomenalism is the view that mental events are causally inert. According to the epiphenomenalist, pains and pleasures exist, but they are nonphysical states. Because the physical world is causally closed, these nonphysical states have no causal power. All conscious experience could be subtracted from the world, and it would not make any physical difference.
According to epiphenomenalism, conscious experience doesn’t have a causal profile. If conscious experience doesn’t have a causal profile, then empirically investigating features which are allegedly indicative of conscious experience is probably a waste of time. If I cut my finger and cry out immediately thereafter, my cry is not caused by an experience of pain. So my cry is not evidence of pain, at least not in the straightforward way we normally take it to be. The same goes for more complicated physical features, such as brain size, opiate sensitivity, or long-term behavior modification to avoid noxious stimuli.
One motivation for epiphenomenalism is intuitions about so-called “phenomenal zombies.” A phenomenal zombie is a creature whose behavior and physical structure, down to the atomic level, is identical to that of a normal human being but who lacks any conscious experience. David Chalmers famously argues that phenomenal zombies are conceivable and hence metaphysically possible. If zombies are so much as possible, then the physical facts alone do not necessitate consciousness; and if the physical world is causally closed, mental states must be causally inert.
Epiphenomenalism is a natural consequence of many dualistic theories of mind. It seems true that anything that can cause a physical event must itself be a physical event. There are also arguments to the effect that mental states are nonphysical. If those two claims are true, epiphenomenalism seems nigh on inevitable.
If epiphenomenalism is true, it will be very difficult, if not impossible, to determine whether an entity (aside from oneself) is phenomenally conscious. Certainly no amount of empirical information will settle the question. A complete account of the psychophysical laws could perhaps do the trick, but it’s unclear how we could come to justifiably believe that we have such an account. Relatedly, epiphenomenalism seems to undercut the force of the inference to the best explanation strategy for responding to solipsism. If epiphenomenalism is true, then mental states do not explain the behavior of other humans. At best I can infer that other humans have brain states similar to mine, but I am no longer justified in supposing that they are conscious.
Emergentism: A Hybrid Theory
Emergent properties are constituted by more fundamental entities yet are novel or irreducible with respect to them. (In simpler terms, the whole is greater than the sum of its parts.) Emergentism is a position in philosophy of mind that seeks to preserve the intuition that mental events and physical events are distinct without entailing epiphenomenalism. On this view, consciousness is an emergent property of the brain. Sometimes this point is put in epistemic terms: consciousness supervenes on constituent parts of the brain, but complete knowledge of all the brain’s constituent parts would not enable us to justifiably infer the existence of consciousness. (If we could so infer, then consciousness would be reducible to brain states, not emergent from them.)
Emergentism leaves us in a better epistemic position than epiphenomenalism. Because mental states and functional brain states are necessarily connected (phenomenal zombies are physically impossible on this view), we can potentially employ inference to the best explanation to determine whether some nonhuman is conscious. Still, it’s not clear how well emergentism fundamentally avoids the problem of epiphenomenalism. According to the emergentist, although mental states and brain states are necessarily connected, they are metaphysically distinct: no amount of neuroscientific knowledge could explain how the brain gives rise to consciousness. The connection between brain states and mental states has, in the words of Hempel and Oppenheim (1948), “a mysterious quality of absolute unexplainability.” Of course, just because a phenomenon cannot be explained in terms of neuroscience doesn’t mean that the phenomenon can’t be explained at all. It may be possible to explain how the brain gives rise to consciousness in terms of substantive principles of metaphysical grounding. Unfortunately, these principles seem as difficult to ascertain as the psychophysical laws that the epiphenomenalist purports to exist. Thus, this view seems to leave us in a similarly problematic epistemic position.
Semantic Indeterminacy: Physicalism
In contrast to the nonreductive emergentism outlined above, reductive physical accounts of the mind hold that mental states straightforwardly reduce to physical states. Although there are many arguments against reductive physicalism, rehearsing them here is less helpful than considering what is implied by the truth of the view.
Consciousness, even if it is a purely physical feature of the world, is not a simple phenomenon. It’s unlikely that we will learn that consciousness reduces to a single feature of the brain. It’s much more plausible to suppose that consciousness is some complex bundle of physical features. Given the complexity of consciousness, it’s also implausible to suppose that we will be able to describe consciousness in terms of necessary and sufficient conditions. If reductive physicalism is true, consciousness is much more likely to be a cluster concept. Finally, it seems implausible that these features would be coextensive across the animal kingdom. Some entities, such as humans, might possess all the features. Some entities, such as plants, might possess none of the features. And some entities, such as sea hares, might possess some but not all of the features. Thus, if reductive physicalism is true, then at some point on the phylogenetic tree, it will probably be semantically indeterminate whether a given species is conscious. We might know all the physical facts about the species and know the correct theory of mind, and yet still not be able to say definitively whether a creature is conscious. This raises a difficult question: what is the moral status of a creature for whom it is semantically indeterminate whether that creature is conscious?
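The cluster-concept picture can be made concrete with a toy model. The indicator features and thresholds below are invented purely for illustration; nothing in the essay commits to these particular features or cutoffs:

```python
# Toy model of consciousness as a cluster concept: an entity counts as
# conscious if it has most of a bundle of indicator features, non-conscious
# if it has few, and is semantically indeterminate in between.
# All features and thresholds here are hypothetical placeholders.

FEATURES = ["nociceptors", "central_processing", "flexible_learning",
            "opiate_sensitivity", "trade_off_behavior"]

def classify(entity_features, hi=4, lo=2):
    """Return 'conscious', 'not conscious', or 'indeterminate'."""
    score = sum(f in entity_features for f in FEATURES)
    if score >= hi:
        return "conscious"
    if score <= lo:
        return "not conscious"
    return "indeterminate"  # the borderline cases the text worries about

print(classify(set(FEATURES)))                       # all five features
print(classify({"nociceptors"}))                     # a single feature
print(classify({"nociceptors", "central_processing",
                "flexible_learning"}))               # a borderline case
```

The point of the sketch is that no amount of extra factual input changes the verdict in the borderline band: the indeterminacy is semantic, not evidential.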
The Unpleasantness of Pain
I turn now away from the question of whether some nonhumans experience pain to the question of the moral significance of that pain, supposing it exists. As we’ll see, we need not think that all pains are equally morally significant. Indeed, we might reasonably conclude that some pains ought to be ignored completely in our moral calculations.
Suppose we assign some moderately high credence to the claim that certain nonhumans, octopuses say, experience pain. What might these pain-experiences be like? In particular, we would want to know whether octopuses experience the unpleasantness of pain. It might seem like an analytic truth that pain is unpleasant, but there is actually good empirical evidence to suggest this is not necessarily so. Humans with pain asymbolia report experiencing pain without the pain being unpleasant. This dissociation can also be induced pharmacologically, notably with morphine.
It’s possible that pain asymbolia patients are conceptually confused and that pain is necessarily unpleasant. But it’s also possible that pain is a multi-dimensional experience, the unpleasantness of which is only one dimension. Because the unpleasantness of pain almost always accompanies the other dimensions, we may be misled into thinking the various dimensions of pain are necessarily coextensive. To analogize: one might have thought that pains had to be localized in some part of one’s body, at least vaguely so. But phantom limb pain shows that this is not the case.
The unpleasantness of pain is what makes pain experiences non-instrumentally bad. Thus, pain experiences may not be morally significant simpliciter. They may be morally significant only when they are accompanied by the usual (in humans, at least) negatively valenced phenomenology.
Accounting for the unpleasantness of pain has been a recent topic of interest in both philosophy and neuroscience. Take philosophy first. Although there has lately been a proliferation of subtly different theories, two broad strategies stand out. There are desire-theoretic accounts of pain’s unpleasantness, and there are evaluative accounts of pain’s unpleasantness. According to most desire-theoretic accounts of pain’s unpleasantness, a pain’s unpleasantness consists in the pain-bearer having an intrinsic desire that the pain not occur. According to many evaluative accounts of pain’s unpleasantness, a pain’s unpleasantness consists in the pain representing that the bodily damage that the pain represents is bad for you. There’s a lot to unpack in those definitions, but for our purposes the only important aspect to note is that both broad strategies invoke second-order thoughts: in the one instance, a second-order desire; in the other, a second-order representation. It seems unlikely that cognitively unsophisticated reptiles, amphibians, and fish–to say nothing of most invertebrates–are capable of entertaining second-order thoughts.
Perhaps, however, investigating the unpleasantness of pain is better conceived as an empirical matter. In that case, we should turn to the neuroscience. Here, again, we find difficulties. Scientists are beginning to suspect there are two functionally distinct pain pathways, the lateral and the medial. The lateral pathway is responsible for representing the intensity of the pain, the location of the pain, and the modality of the pain. The medial pathway represents the degree of unpleasantness of the pain. Importantly, the medial pathway is mediated by the anterior cingulate cortex, a part of the neocortex, a brain structure found only in mammals. So here again we have some evidence that non-mammalian animals do not experience morally significant pain.
Again, however, the picture is complicated. Pain is a very effective teaching tool. (Indeed, this appears to be the evolutionary role of pain.) Studies show that rats and monkeys with damaged anterior cingulate cortices display almost none of the typical pain-learning behaviors of their undamaged conspecifics. It seems that it is unpleasant pain that is the effective teaching tool. If non-mammalian animals exhibit many of the same pain-learning behaviors as mammals–and there is good reason to think that they do–then that is some evidence that they are capable of experiencing the unpleasantness of pain. Once again, we can’t rule out the possibility that there are homologous brain structures at work representing the felt badness of pain.
The Phenomenal Intensity of Pain
Some pains hurt more than others. Call this dimension the phenomenal intensity of pain. Ceteris paribus, the greater the phenomenal intensity of a pain, the greater its moral significance. If some nonhumans do experience pain, how intense might their pain be?
The first thing to note is that reported phenomenal intensities of pain, as studied in humans, correlate very poorly with external factors. Even under optimal conditions, a small increase in voltage or temperature can double the reported phenomenal intensity of electric shock or heat-induced stimulus. Indeed, phenomenal intensity can be systematically manipulated completely independently of external stimuli, via hypnotic suggestion. On the other hand, the phenomenal intensity of human pain correlates almost perfectly with the firing rates of neurons in the parts of the brain involved in the specific type of pain. If we could get a handle on homologous firing rates in nonhuman animals, we might have a better idea of the intensity of their pain.
Another way to potentially get a handle on the phenomenal intensity of nonhuman pain is to consider again the evolutionary role that pain plays. Pain teaches us which stimuli are noxious, how to avoid those stimuli, and what we ought to do to recover from injury. Because intense pain can be distracting, animals in intense pain are at a selective disadvantage compared to conspecifics not in intense pain. Thus, we might expect evolution to select for creatures with pains just phenomenally intense enough (on average) to play the primary instructive role of pain. Humans are the most cognitively sophisticated animals on the planet, the animals most likely to pick up on patterns in signals only weakly conveyed. Less cognitively sophisticated animals generally require stronger signals for pattern-learning. If pain is the signal, then we might reasonably expect the phenomenal intensity of pain to correlate inversely with cognitive sophistication. If that’s the case, humans experience (on average) the least intense pain in all the animal kingdom.
A final consideration involves not the phenomenal intensity of pain but its phenomenal extension (that is, its felt duration). Due to neurological differences, phenomenal extension might not be directly comparable across species. Consider brain-processing speed and rates of subjective experience, both loosely defined. Animals with faster metabolisms and smaller body sizes tend, according to some metrics, to process information faster. Thus, there is some reason to think that smaller animals have, in general, faster subjective experiences. So a hummingbird might experience one minute of objective time as longer, in some robust, non-subjective sense of the term, than a human would. If that’s true, then a given hummingbird and a given human experiencing a pain of the same phenomenal intensity would not, ceteris paribus, suffer equally during the same objective span of time. The hummingbird would suffer more. Hence, we should not naively equate the phenomenal extension of pain with its duration expressed in objective time. The takeaway here is that the moral significance of pain might be related in important ways to an entity’s processing speed. Such concerns would increase exponentially if we ever created artificial minds capable of conscious experience. As with other areas, more research is needed.
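The arithmetic behind this worry is simple, and a toy calculation makes it vivid. The rate multipliers below are made-up illustrations, not empirical estimates of any species' processing speed:

```python
# Toy calculation of "phenomenal extension": if subjective experience runs
# at different rates across species, equal objective durations of pain need
# not carry equal moral weight. Rates are hypothetical, normalized to humans.

def subjective_duration(objective_seconds, rate_multiplier):
    # rate_multiplier = subjective moments per objective second (human = 1.0)
    return objective_seconds * rate_multiplier

human_rate = 1.0
hummingbird_rate = 4.0  # invented for illustration: faster metabolism,
                        # smaller body, faster information processing

pain_seconds = 60  # one objective minute of equally intense pain
print(subjective_duration(pain_seconds, human_rate))        # 60.0
print(subjective_duration(pain_seconds, hummingbird_rate))  # 240.0
```

On these invented numbers, the hummingbird's minute of pain would be phenomenally equivalent to four human minutes, which is why objective clock time is a poor common currency for suffering.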
Degrees of Consciousness
The moral significance of pain might also depend on the extent to which an entity is aware of (the unpleasantness of) the pain it is in. This is a subtle claim which requires some unpacking. First, distinguish aware of from aware that. I’m not here asserting that the moral significance of pain requires that a pain-bearer be aware that it is in pain. To be aware that one is in pain, one must possess the concept pain. It seems plausible that a creature might experience pain without possessing the concept pain. The extent to which one can be aware of a pain is the extent to which one can attend to a pain. It is the extent to which one is conscious of a pain. And if consciousness comes in degrees, as many neuroscientists believe, then the extent to which one can be aware of pain also comes in degrees, potentially in a morally significant way.
There are several mundane ways in which consciousness can be said to come in degrees. An entity that is conscious might be conscious all the time or only part of the time. (Humans, for example, are unconscious during dreamless sleep and when they undergo general anaesthesia.) For an entity that is currently conscious, consciousness might span many or few modalities. (Some creatures are sensitive to differences in light, sound, temperature, pressure, smell, bodily orientation, and magnetic field. Other creatures are sensitive to fewer sensory modalities.) For an entity that is currently conscious of a given sensory modality, that modality might be coarse-grained or fine-grained. (Within the light modality, some creatures are only sensitive to differences in brightness, while other creatures are sensitive to a wide swath of the electromagnetic spectrum.)
There is a more fundamental sense in which it might be true that consciousness comes in degrees. One of the most striking features of consciousness is its unity. When I step outside my door, I experience the hum of distant machinery, the gray haze of fog, and the smell of fresh cut grass as elements of a unified and densely integrated representation of reality. Sounds, sights, and smells are all experienced as part of the same global workspace. This sort of integrated representation may provide for more open-ended behavioral responses than a comparable amount of information presented in isolated streams. If that’s true, then one of the evolutionary functions of consciousness may be to integrate information.
According to the Integrated Information Theory of consciousness, consciousness just is suitably integrated information. When the effective informational content of a system, mathematically defined in light of the system’s causal profile, is greater than the sum of the informational content of its parts, the system is said to carry integrated information. Integrated information of the relevant sort is conscious, whether that integration occurs in the brain or in a two-dimensional grid. Because integration comes in degrees, so too does consciousness.
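The "whole carries more than the sum of its parts" idea can be illustrated with a much cruder quantity than IIT's actual phi: the mutual information between two halves of a system, which is zero when the halves carry independent information and positive when the whole is more structured than the parts taken separately. This is only a toy proxy, not Tononi's measure:

```python
# Crude, illustrative proxy for integration: mutual information between two
# subsystems, I(A;B) = H(A) + H(B) - H(A,B). Zero means the parts exhaust
# the whole; positive values mean the whole carries extra structure.
# This is NOT IIT's phi, just a demonstration that "degree of integration"
# can be a graded, computable quantity.
from collections import Counter
from math import log2

def entropy(samples):
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

def integration(pairs):
    a = [p[0] for p in pairs]
    b = [p[1] for p in pairs]
    return entropy(a) + entropy(b) - entropy(pairs)

independent = [(0, 0), (0, 1), (1, 0), (1, 1)]   # halves vary independently
integrated  = [(0, 0), (1, 1), (0, 0), (1, 1)]   # halves always agree

print(integration(independent))  # 0.0 bits: no integration
print(integration(integrated))   # 1.0 bits: fully integrated
```

Because the measure is continuous, intermediate systems fall anywhere between these extremes, which is exactly the graded picture of consciousness the theory proposes.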
Intuitively, we might think that creatures like cockroaches and octopuses integrate information to a lesser degree than humans. Headless cockroaches, for example, can be trained to avoid electric shocks. Octopuses trained to discriminate between horizontal and vertical rectangles using only one eye were unable to discriminate between the shapes using the other eye. One natural interpretation of these results is that although cockroaches and octopuses are adept at detecting and responding to various stimuli, the degree to which that information is centrally processed is limited, at least compared to humans.
If a theory of this sort is correct–and Integrated Information Theory is often considered the leading scientific theory of consciousness–then different entities will possess different amounts of consciousness. Although it is unclear what a claim of this sort even means, it is plausible that the moral significance of pain will depend in part on the amount of consciousness that the entity undergoing the pain possesses.
Moral Dignity and Pain
Hedonism is the view (roughly) that the only things that matter morally are pains and pleasures. If hedonism is true, then the (unpleasant) pains and (pleasant) pleasures of nonhumans matter according to their phenomenal intensities and the extent to which the creatures are aware of them. But if hedonism is false, then there may be reasons to regard those nonhuman pains as less morally significant than human pains. Even if some nonhuman experiences the unpleasantness of pain to the same phenomenal intensity and with the same awareness as a neurotypical adult human, there still might be some difference between the nonhuman and the human which mitigates the moral significance of the nonhuman’s pain.
Let us take just one example. Personal autonomy is the ability to, in some sense, govern oneself. Autonomous agents live their lives according to reasons that are their own, and they act according to motivations largely free from distorting external forces. Autonomous agents possess the capacity to reflectively endorse their commitments and change those commitments when they are found to be deficient. The value of personal autonomy features prominently in much of modern Western ethics, and it famously was given central place in Immanuel Kant’s moral philosophy. If personal autonomy is non-instrumentally valuable, we might rate the pain of autonomous agents as worse, ceteris paribus, than the pain of non-autonomous entities, especially if the pain interferes somehow with the agent’s autonomy. Because personal autonomy requires self-reflection, many nonhuman animals are not plausible candidates for instantiating this value. Thus, ceteris paribus, their pain may matter less.
Because ethical theorizing is so hard, we should partition our credences over a fairly wide range of plausible normative theories. This partition need not be equal, but it should assign some non-negligible credence even to views strongly at odds with one’s preferred theory. No one ought to be certain, even in the mere colloquial sense of ‘certain,’ that consequentialism or deontology is false.
All ethical theorizing involves some degree of reflective equilibrium. We have intuitions about particular cases and also intuitions about general principles. When we formulate a general principle, we try to capture as many case intuitions as we can. Sometimes, if we are confident in a general principle, we are willing to adjust our judgments in individual cases. Other times, however, our individual judgments are strong enough that they constitute counterexamples to the general principle.
When our intuitions about case judgments conflict with our intuitions about general principles, we must decide which to privilege and to what degree. According to the terminology of Roderick Chisholm (1973), the philosophical particularist privileges case judgments over general principles when engaging in reflective equilibrium. The philosophical methodist privileges general principles over case judgments when engaging in reflective equilibrium.
Let’s explore a potential conflict. Suppose you believe that the conscious experience of (unpleasant) pain is always morally significant, at least to a small degree. This is a general principle. Suppose you also believe that given the choice between the life of a human child and the lives of a trillion ants, the morally correct action, ceteris paribus, is to save the human child. This is a case judgment. Next, suppose you come to assign, on the basis of solid empirical and philosophical evidence, a small but non-negligible chance to the proposition that ants experience morally significant pain. Because of the sheer number of ants, the amount of expected ant suffering in the world will be quite high. Ameliorating ant suffering suddenly looks like one of the most important issues in the world. This, to say the least, is a surprising result.
How, if at all, should you revise your judgment about whether to save the trillion ants or the single human child? If you do revise your judgment, can you provide an error theory for why the initial judgment was mistaken? If you don’t revise your judgment, does that undercut the general principle? Should you abandon your principle? Or maybe refine it? (Perhaps the aggregation of pain does not consist of mere addition. Or perhaps relatively small instances of pain never sum to relatively and sufficiently big ones.)
Some people may regard it as obvious that one should revise one’s initial case judgment in light of the new information about ant consciousness. Perhaps, but one ought also to be careful not to be pushed down frictionless slopes without proper backstops in place. Here we begin to approach “Pascal’s Mugging” territory. For instance: should one assign a non-zero credence to the proposition that plants feel pain? Probably. After all, panpsychism might be true. But there are far, far more plants than ants. Even with an extremely low credence that plants experience pain (and I’ll remind you that some very smart people endorse panpsychism), expected plant suffering will probably dominate expected ant suffering by several orders of magnitude. Now it looks like ameliorating plant suffering is the most important issue in the world.
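The arithmetic driving this worry can be made explicit with a naive expected-value sketch. All of the credences and population figures below are illustrative placeholders, not empirical estimates, and the simple product used here (probability of sentience times population) is itself a contestable modeling assumption.

```python
# Toy expected-value comparison illustrating the "Pascal's Mugging"
# worry in the text. Every number below is a hypothetical placeholder,
# not an empirical estimate.

def expected_suffering(credence_feels_pain, population, intensity=1.0):
    """Naive expected suffering: P(taxon feels morally significant
    pain) x number of individuals x per-individual intensity."""
    return credence_feels_pain * population * intensity

# Hypothetical figures: a modest credence in ant sentience versus a
# tiny credence in plant sentience, but vastly more plants than ants.
ants = expected_suffering(credence_feels_pain=0.1, population=1e16)
plants = expected_suffering(credence_feels_pain=1e-4, population=1e21)

print(f"expected ant suffering:   {ants:.1e}")
print(f"expected plant suffering: {plants:.1e}")
# Even with a credence three orders of magnitude lower, the far larger
# plant population makes expected plant suffering dominate.
print(plants > ants)  # True
```

Under these made-up numbers, expected plant suffering exceeds expected ant suffering by two orders of magnitude, which is exactly the kind of result that pressures one either to revise the credences or to treat the conclusion as a reductio.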
It’s true that we could continue to adjust our credences downward until we avoid this result, but that somehow feels like cheating. After all, credences are just something we have; they are not the sort of thing we get to set directly. One might reply: “I don’t have infallible epistemic access to all my credences. I know that potential animal suffering is more important than potential plant suffering. I use this information to infer that my credence must be however low it must be in order to avoid the result that expected plant suffering is greater than expected animal suffering.”
This response succeeds up to a point, but ultimately it is unsatisfying. Suppose we discover that we undercounted the number of plants by some 100 quadrillion. (After all, what counts as a “plant” is a somewhat slippery notion.) Then one would have to adjust one’s credence again. At some point these adjustments begin to look ad hoc. A better description of what’s going on looks like this: there are some propositions the entailment of which serves as a reductio ad absurdum on the theory that entails them. That plant suffering matters more than animal suffering is one such proposition. But if we can use the plant-suffering proposition as a reductio on the theory which entails it, why can’t we use the ant-suffering proposition as a reductio on the theory which entails it? After all, didn’t we start with a strong intuition that a trillion ant lives are no more important than a single human life?
The general point here is not that any particular proposition about suffering is absurd or that we should begin our ethical theorizing with any particularly strong views on the worth of ant-lives versus human-lives. The only point I’m trying to make is that bringing one’s theory into reflective equilibrium can be hard. Sometimes there is simply no non-question-begging method to persuade an interlocutor that the equilibrium she has settled on is worse than the equilibrium you have settled on.
Directions for Future Work
To recap: I’ve discussed eight conceptually sequential steps needed to identify morally significant pain in nonhumans. The eight steps are:
- Determine that other minds exist.
- Check to see if the nonhuman entity in question engages in pain behavior. If so, check to see if there are any defeaters for the explanation that the entity in question feels pain.
- Apply one’s best theory of consciousness to see what it says about the likelihood that the entity in question feels pain.
- Assuming that the entity feels pain, check to see if it experiences the felt badness of pain.
- Determine the phenomenal intensity and phenomenal extension of the pain.
- Determine the degree to which the entity is aware of the pain.
- Determine the entity’s moral standing relative to other entities which experience pain.
- Check to see if your final result constitutes a reductio on the whole process.
There is a tremendous amount of uncertainty, both empirical and moral, surrounding the issue of nonhuman pain. Because the subject is so complex, we should ascribe some credence to views which hold that phenomenal consciousness is rare outside humans and also ascribe some credence to views which hold that phenomenal consciousness, though common, is not terribly morally significant.
Nonetheless, my personal view is that even after folding all this uncertainty into our calculations, we are still left with the result that we should take nonhuman pain much more seriously than the average policymaker does. There are good reasons to think that many nonhumans feel pain and that this pain is morally significant. These nonhumans do not have a voice in policy debate and they do not have a vote. They are powerless to stop the harms we inflict on them, and they are powerless to ask us for help. They are not just systematically mistreated; their suffering is almost wholly ignored.
One of the best ways to help these creatures is to reduce the uncertainties surrounding the issue of nonhuman pain. To that end, Rethink Priorities has been working on an ambitious project to analyze and catalogue 60+ features potentially relevant to phenomenal consciousness and morally significant pain. (A project of this sort was suggested by Luke Muehlhauser in his 2017 Report on Consciousness and Moral Patienthood.) We aim to carefully define each feature and explain why and to what degree it might be relevant to consciousness. We have selected 17 representative species from across the animal kingdom and are currently scouring the scientific literature to see whether and to what extent each species exhibits each of the features. Some of the species are intuitively conscious (e.g., cows), while others are intuitively not (e.g., nematodes). In between are a host of interesting edge cases, like honey bees and octopuses. All this information will eventually be compiled into an easily searchable database. Of course, the project won’t definitively settle whether honey bees or octopuses experience morally significant pain. Nonetheless, our hope is that the database will become an invaluable resource for future consciousness research. In our next essay, we explain this approach in more detail.
J.P. Andrew, Elijah Armstrong, Kim Cuddington, Marcus A. Davis, Neil Dullaghan, Sam Fox Krauss, Peter Hurford, David Moss, Katie Plemmons, and Daniela R. Waldhorn provided helpful comments on this essay.