This essay project was written by Jason Schukraft, with contributions from Peter Hurford, Max Carpendale, and Marcus A. Davis.
Other humans merit moral concern. We think many nonhumans merit moral concern too. But how do we know? And which nonhumans? Chimpanzees? Chickens? Bumblebees? Protozoa? Roombas? Rocks? Where and how do we draw a line?
What would it take to justifiably believe that some nonhuman experiences pain (or pleasure) in a morally significant way?1 This is a tough question, but it is incredibly important to get right. Humans constitute a very, very small fraction of the animal kingdom. If other vertebrate animals experience morally significant pain, then much of our engagement with these animals is deeply immoral. If invertebrate animals experience morally significant pain, then, given the sheer number of invertebrates,2 an almost incomprehensible amount of morally significant suffering occurs beyond the ken of normal human attention. And if the capacity to experience morally significant pain is not restricted to organic entities, then human civilizations of the future may be capable of producing vastly more sentient entities than presently exist.
On the other hand, if many, most, or all nonhumans do not experience morally significant pain, then it could be a waste of resources to try to change their condition. Given that there are millions of humans currently experiencing morally significant pain (for whom these resources would be a great aid), the opportunity cost of wasting time, talent, and money on nonhumans appears tremendous.
Figuring out where and whether to allocate resources to help nonhumans is of significant interest to Rethink Priorities. This post is our first in a series on morally significant pain in invertebrates. We focus on invertebrates for two reasons: (1) We are already reasonably confident that mammals, birds, reptiles, amphibians, and most fish feel morally significant pain,3 and hence must be included in our moral calculations, but we are unsure if more distantly related animals warrant similar concern, and (2) The subject of invertebrate sentience, though recently gaining traction both in the scientific literature and the effective altruism community, appears neglected relative to the subject’s potential import. In future posts we look at which features might be relevant for determining whether an entity is capable of experiencing pain. We also present a detailed table outlining the distribution of these features throughout the animal kingdom.4
Of course, we recognize that delineating the phylogenetic distribution of morally significant pain is an extraordinarily complex and difficult task, one that we are extremely unlikely to solve outright. To put it mildly, much more research, at virtually every level of the problem, is needed. Nevertheless, the urgency of the issue compels us to address it now, before all the potentially relevant evidence is in. As grantmakers and charity entrepreneurs, we do not have the luxury to wait. We must decide how to allocate resources now, in our current epistemically incomplete state. Our goal in this series of posts is to determine, to the best of our abilities and within reasonable funding and time constraints, what we should think about morally significant pain in invertebrates, given the current state of the evidence.
To that end, we begin with a review of the philosophical difficulties inherent in the detection of morally significant pain in nonhumans. We discuss eight conceptually sequential steps, alongside their attendant difficulties, needed to identify morally significant pain in nonhumans.5 The first three steps concern detecting pain; the other five steps concern determining whether (and to what extent) the pain is morally significant.6
The Problem of Other Minds
Start with yourself. You experience pleasure and pain. You can be as confident of this fact as you can be of any fact. Why? You have direct introspective access to at least some of your phenomenal states. But there is an asymmetry between you and everything else. You cannot know by direct introspection that someone else is phenomenally conscious.7 If you are justified in believing that other entities experience pains and pleasures, it must be by some different epistemic strategy. Solipsism is the view that one’s mind is the only mind that exists.8 If we are to justifiably believe that some nonhuman experiences pain, we must first overcome the challenge of solipsism.
Although philosophers disagree about the appropriate resolution, robust solipsism has few, if any, contemporary defenders. The idea that other humans experience pleasure and pain is very central to our web of beliefs. Any theory that would wage war against such a central belief had better come loaded with powerful ammunition. It is generally held that traditional arguments in favor of solipsism are incapable of providing such ammunition.9
Analogical Argument and Inference to the Best Explanation
The most common response to solipsism takes the form of an inference to the best explanation.10 One begins with an examination of one’s own behavior. For example: when I cut my hand, I cry out, I move my hand away from the sharp object, and I later treat the wound with a clean bandage. Then one considers the behavior of other humans: they also cry out when cut and attend in similar ways to similar wounds.11 There are a variety of hypotheses which, if true, could explain this behavior. Perhaps they are sophisticated robots programmed to behave as I do. But the simplest and best explanation of the behavior of other humans is that they feel pain like I do.12
Of course, this explanation might be mistaken, and we might come to know it is mistaken. If I examined the heads of many fellow humans and in each case found not a brain but a crude artificial device receiving signals from a robotics factory, that would constitute a defeater for my prior explanation. I would then no longer be able to rationally endorse the view that other humans have mental states like I do. Inference to the best explanation tells us that, in the absence of defeaters, we are licensed to prefer the simplest explanation of a phenomenon.13
Inference to the best explanation is related to, but distinct from, argument by analogy. The basic structure of an analogical argument is as follows (where E1 is the source domain and E2 is the target domain):
(1) Entity E1 has some properties P1 … Pn
(2) Entity E2 has the same properties P1 … Pn
(3) Entity E1 has some further property Pn+1
∴
(4) Therefore, entity E2 likely has the same property Pn+1
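For instance, instantiating the schema with the essay's running example (the particular properties and the target creature are illustrative only):

(1) I withdraw from sharp objects, tend my wounds, and avoid sites of past injury (P1 … Pn)

(2) An octopus withdraws from sharp objects, tends its wounds, and avoids sites of past injury (P1 … Pn)

(3) When I behave this way, I am in pain (Pn+1)

∴

(4) Therefore, the octopus is likely in pain when it behaves this way (Pn+1)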
Analogical arguments are by their nature inductive. The wider the inductive base, the stronger the inductive argument. But pain is a private mental state, so when it comes to pain, we each have an inductive base of one (namely, ourselves). Inductive inferences from a base of one are generally weak. So we probably don’t know that others experience pain by analogical reasoning alone.14
Inference to the best explanation, by contrast, is abductive. Abductive arguments are non-deductive, like traditional inductive arguments, but whereas inductive arguments are (allegedly) justified empirically, abductive arguments are justified a priori. We are justified in using induction because, as a matter of contingent fact, induction has worked well in the past.15 Instances of abductive reasoning, in contrast, are generally held to instantiate principles of rationality, which, if they are known at all, are known a priori.
Inference to the best explanation can also be applied to nonhumans.16 If a class of nonhumans exhibits pain behavior17 sufficiently similar to humans, then, in the absence of defeaters, we are licensed to prefer the explanation that they feel pain to alternate explanations. But what counts as sufficiently similar? And what counts as a defeater?
Consider similarity first. One worry is that the behavior of phylogenetically distant animals (to say nothing of inorganic entities) is so alien that it cannot be accurately described without resorting to problematic anthropomorphizing. Even when we can accurately describe the behavior of, say, invertebrates without inappropriately anthropomorphizing them, it’s unclear how much similarity we should antecedently expect. Different species of animal, after all, are different. To take a trivial example: most of the time, when humans are in pain, they grimace. But the hard exoskeleton of an insect does not allow for grimacing. Does this difference provide a small bit of evidence that insects don’t feel pain? Presumably not. But that doesn’t mean that grimacing is irrelevant. Consider another example: many times when a human is in pain, she cries out. Again, owing to anatomical differences, we shouldn’t expect this feature to be widespread in invertebrates, even if they do feel pain. But vocalization has recently been taken to be a good metric of animal welfare in pigs, cows, and chickens.
The general lesson here is that there is no set of features which is universally relevant for the detection of pain in nonhumans. Even if pain experiences are widespread throughout the animal kingdom, the extreme diversity of living organisms suggests that pain experiences might be expressed in behaviorally and neurobiologically distinct ways.
The same problem applies to potential defeaters. It was once widely thought that a neocortex is required for conscious experience.18 Thus, it was thought, any creature which lacked a neocortex thereby lacked conscious experience.19 No matter how similar the behavior, the absence of a neocortex in a creature served as a defeater for the view that that creature experienced pain.
Today the picture is more complicated. For starters, evidence is emerging that, even in humans, a neocortex is not required for conscious experience.20 More importantly, the absence of a neocortex doesn’t imply that there aren’t homologous cells performing the same role in other creatures.21 The point to appreciate here is that the bar for justifiably believing that some neurological feature is a necessary condition on conscious experience is quite high. Neurological differences surely are relevant, but, in the absence of a general theory of consciousness, the degree to which they can be decisive is limited.
There is a further, more fundamental limitation to investigating consciousness empirically. Although pain states are associated (at least in humans) with various physiological responses, such as elevated heartbeat and increased respiration, pain cannot be defined in terms of these responses. It’s natural to suppose that my experience of pain explains why my heart starts beating faster and my respiration quickens. If pain just is elevated heartbeat and increased respiration (or whatever other physiological responses one favors), then we lose this natural explanation. More importantly, if we define pain in purely physiological terms, we miss the moral significance of pain. Pain is intrinsically morally bad (ceteris paribus) not because it causes or is identical to certain physiological responses. Pain is bad because it feels bad. Measured physical responses can provide evidence that an entity experiences the felt badness of pain, and that evidence can be decisive, but we should not confuse evidence of a phenomenon with the phenomenon itself. To investigate the phenomenon of consciousness directly, we probably ought to turn to philosophy.
Applying a General Theory of Consciousness
Determining whether an entity is phenomenally conscious is probably not a strictly scientific endeavor. At some point, some difficult philosophical theorizing might be needed to help us make appropriate attributions of consciousness. So suppose all the empirical data is in, and it’s still unclear whether a certain nonhuman is conscious. To settle the question, it would be nice to appeal to a well-justified general theory of consciousness. Below, I briefly examine three broad families of views about the relationship between mind and matter: dualism, physicalism, and a hybrid theory. But first, I outline some peculiar initial difficulties we face before embarking on a quest for a theory of mind. Collectively, these subsections show that uncertainty in philosophy of mind will at some point probably infect our credences about which nonhumans experience morally significant pain.
The Common Ground Problem
Theories of consciousness, like all philosophical theories, begin with certain pre-theoretic intuitions about the subject matter. These pre-theoretic intuitions include background framework assumptions about roughly how widespread consciousness is likely to be across phyla. If final theories begin with radically different starting assumptions, comparing the theories won’t really be helpful. There has to be sufficient common ground in order for robust theory-comparison to be possible. But there’s some evidence that this common ground is lacking in theories of consciousness.22 Existing theories of consciousness, from the world’s top researchers, span the board from so-called “higher-order theories,” which, due to their metarepresentational requirements on consciousness, seem to deny consciousness to babies and dogs, to panpsychism, which attributes some degree of consciousness not just to plants and unicellular organisms but also to protons and electrons. One might have thought that these consequences could serve as reductios on their respective theories, but apparently this is not the case.23 So the field must allow an unusually diverse range of initial assumptions.24 This makes adjudicating between competing theories in philosophy of mind particularly hard.
The Causal Status of Consciousness: Dualism
Every theory of consciousness must grapple with the causal status of consciousness. Epiphenomenalism is the view that mental events are causally inert. According to the epiphenomenalist, pains and pleasures exist, but they are nonphysical states. Because the physical world is causally closed, these nonphysical states have no causal power.25 All conscious experience could be subtracted from the world, and it would not make any physical difference.
According to epiphenomenalism, conscious experience doesn’t have a causal profile. If conscious experience doesn’t have a causal profile, then empirically investigating features which are allegedly indicative of conscious experience is probably a waste of time. If I cut my finger and cry out immediately thereafter, my cry is not caused by an experience of pain. So my cry is not evidence of pain, at least not in the straightforward way we normally take it to be.26 The same goes for more complicated physical features, such as brain size, opiate sensitivity, or long-term behavior modification to avoid noxious stimuli.
One motivation for epiphenomenalism is intuitions about so-called “phenomenal zombies.”27 A phenomenal zombie is a creature whose behavior and physical structure, down to the atomic level, is identical to that of a normal human being but who lacks any conscious experience. David Chalmers famously argues that phenomenal zombies are metaphysically possible.28 If phenomenal zombies are so much as possible, then, given the causal closure of the physical, mental states must be causally inert, even if zombies happen to be nonactual.
Epiphenomenalism is a natural consequence of many dualistic theories of mind.29 It seems true that anything that can cause a physical event must itself be a physical event. There are also arguments to the effect that mental states are nonphysical. If those two claims are true, epiphenomenalism seems nigh on inevitable.
If epiphenomenalism is true, it will be very difficult, if not impossible, to determine whether an entity (aside from oneself) is phenomenally conscious. Certainly no amount of empirical information will settle the question. A complete account of the psychophysical laws could perhaps do the trick, but it’s unclear how we could come to justifiably believe that we have such an account. Relatedly, epiphenomenalism seems to undercut the force of the inference to the best explanation strategy for responding to solipsism. If epiphenomenalism is true, then mental states do not explain the behavior of other humans. At best I can infer that other humans have brain states similar to mine, but I am no longer justified in supposing that they are conscious.
Emergentism: A Hybrid Theory
Emergent properties are constituted by more fundamental entities yet are novel or irreducible with respect to them. (In simpler terms, the whole is greater than the sum of its parts.) Emergentism is a position in philosophy of mind that seeks to preserve the intuition that mental events and physical events are distinct without entailing epiphenomenalism. On this view, consciousness is an emergent property of the brain. Sometimes this point is put in epistemic terms: consciousness supervenes on constituent parts of the brain, but complete knowledge of all the brain’s constituent parts would not enable us to justifiably infer the existence of consciousness. (If we could so infer, then consciousness would be reducible to brain states, not emergent from them.)30
Emergentism leaves us in a better epistemic position than epiphenomenalism. Because mental states and functional brain states are necessarily connected (phenomenal zombies are physically impossible on this view), we can potentially employ inference to the best explanation to determine whether some nonhuman is conscious. Still, it’s not clear that emergentism ultimately avoids the epistemic problems facing epiphenomenalism. According to the emergentist, although mental states and brain states are necessarily connected, they are metaphysically distinct: no amount of neuroscientific knowledge could explain how the brain gives rise to consciousness. The connection between brain states and mental states has, in the words of Hempel and Oppenheim (1948), “a mysterious quality of absolute unexplainability.”31 Of course, just because a phenomenon cannot be explained in terms of neuroscience doesn’t mean that the phenomenon can’t be explained at all. It may be possible to explain how the brain gives rise to consciousness in terms of substantive principles of metaphysical grounding. Unfortunately, these principles seem as difficult to ascertain as the psychophysical laws the epiphenomenalist posits. Thus, this view seems to leave us in a similarly problematic epistemic position.
Semantic Indeterminacy: Physicalism
In contrast to the nonreductive emergentism outlined above, reductive physical accounts of the mind hold that mental states straightforwardly reduce to physical states. Although there are many arguments against reductive physicalism, rehearsing them here is less helpful than considering what is implied by the truth of the view.
Consciousness, even if it is a purely physical feature of the world, is not a simple phenomenon. It’s unlikely that we will learn that consciousness reduces to a single feature of the brain. It’s much more plausible to suppose that consciousness is some complex bundle of physical features. Given the complexity of consciousness, it’s also implausible to suppose that we will be able to describe consciousness in terms of necessary and sufficient conditions. If reductive physicalism is true, consciousness is much more likely to be a cluster concept. Finally, it seems implausible that these features would be coextensive across the animal kingdom. Some entities, such as humans, might possess all the features. Some entities, such as plants, might possess none of the features. And some entities, such as sea hares, might possess some but not all of the features.32 Thus, if reductive physicalism is true, then at some point on the phylogenetic tree, it will probably be semantically indeterminate whether a given species is conscious. We might know all the physical facts about the species and know the correct theory of mind, and yet still not be able to say definitively whether a creature is conscious. This raises a difficult question: what is the moral status of a creature for whom it is semantically indeterminate that that creature is conscious?
The Unpleasantness of Pain
I turn now from the question of whether some nonhumans experience pain to the question of the moral significance of that pain, supposing it exists. As we’ll see, we need not think that all pains are equally morally significant. Indeed, we might reasonably conclude that some pains ought to be ignored completely in our moral calculations.
Suppose we assign some moderately high credence to the claim that certain nonhumans (octopuses, say) experience pain. What might these pain-experiences be like? In particular, we would want to know whether octopuses experience the unpleasantness of pain. It might seem like an analytic truth that pain is unpleasant, but there is actually good empirical evidence to suggest this is not necessarily so. Humans with pain asymbolia report experiencing pain without the pain being unpleasant. This dissociation can also be induced pharmacologically, notably with morphine.33
It’s possible that pain asymbolia patients are conceptually confused and that pain is necessarily unpleasant. But it’s also possible that pain is a multi-dimensional experience, the unpleasantness of which is only one dimension. Because the unpleasantness of pain almost always accompanies the other dimensions, we may be misled into thinking the various dimensions of pain are necessarily coextensive. To analogize: one might have thought that pains had to be localized in some part of one’s body, at least vaguely so. But phantom limb pain shows that this is not the case.
The unpleasantness of pain is what makes pain experiences non-instrumentally bad.34 Thus, pain experiences may not be morally significant simpliciter. They may be morally significant only when they are accompanied by the usual (in humans, at least) negatively valenced phenomenology.
Accounting for the unpleasantness of pain has been a recent topic of interest in both philosophy and neuroscience. Take philosophy first. Although there has lately been a proliferation of subtly different theories, two broad strategies stand out.35 There are desire-theoretic accounts of pain’s unpleasantness, and there are evaluative accounts. According to most desire-theoretic accounts, a pain’s unpleasantness consists in the pain-bearer having an intrinsic desire that the pain not occur. According to many evaluative accounts, a pain’s unpleasantness consists in the pain representing the bodily damage it signals as bad for you. There’s a lot to unpack in those definitions, but for our purposes the only important aspect to note is that both broad strategies invoke second-order thoughts: in the one instance, a second-order desire; in the other, a second-order representation. It seems unlikely that cognitively unsophisticated reptiles, amphibians, and fish (to say nothing of most invertebrates) are capable of entertaining second-order thoughts.36
Perhaps, however, investigating the unpleasantness of pain is better conceived as an empirical matter. In that case, we should turn to the neuroscience. Here, again, we find difficulties. Scientists are beginning to suspect there are two functionally distinct pain pathways, the lateral and the medial.37 The lateral pathway is responsible for representing the intensity of the pain, the location of the pain, and the modality of the pain.38 The medial pathway represents the degree of unpleasantness of the pain. Importantly, the medial pathway is mediated by the anterior cingulate cortex, a part of the neocortex, which, as we’ve already seen, is unique to mammals. So here again we have some evidence that non-mammalian animals do not experience morally significant pain.
Again, however, the picture is complicated. Pain is a very effective teaching tool. (Indeed, this appears to be the evolutionary role of pain.) Studies show that rats and monkeys with damaged anterior cingulate cortices display almost none of the typical pain-learning behaviors of their undamaged conspecifics. It seems that it is unpleasant pain that is the effective teaching tool. If non-mammalian animals exhibit many of the same pain-learning behaviors as mammals–and there is good reason to think that they do–then that is some evidence that they are capable of experiencing the unpleasantness of pain. Once again, we can’t rule out the possibility that there are homologous brain structures at work representing the felt badness of pain.
The Phenomenal Intensity of Pain
Some pains hurt more than others. Call this dimension the phenomenal intensity of pain. Ceteris paribus, the greater the phenomenal intensity of a pain, the greater its moral significance. If some nonhumans do experience pain, how intense might their pain be?
The first thing to note is that reported phenomenal intensities of pain, as studied in humans, correlate very poorly with external factors.39 Even under optimal conditions, a small increase in voltage or temperature can double the reported phenomenal intensity of an electric shock or heat stimulus. Indeed, phenomenal intensity can be systematically manipulated completely independently of external stimuli, via hypnotic suggestion. On the other hand, the phenomenal intensity of human pain correlates almost perfectly with the firing rates of neurons in the parts of the brain involved in the specific type of pain. If we could get a handle on homologous firing rates in nonhuman animals, we might have a better idea of the intensity of their pain.40
Another way to potentially get a handle on the phenomenal intensity of nonhuman pain is to consider again the evolutionary role that pain plays. Pain teaches us which stimuli are noxious, how to avoid those stimuli, and what we ought to do to recover from injury. Because intense pain can be distracting, animals in intense pain are at a selective disadvantage compared to conspecifics not in intense pain. Thus, we might expect evolution to select for creatures with pains just phenomenally intense enough (on average) to play the primary instructive role of pain. Humans are the most cognitively sophisticated animals on the planet, the animals most likely to pick up on patterns in signals only weakly conveyed. Less cognitively sophisticated animals generally require stronger signals for pattern-learning. If pain is the signal, then we might reasonably expect the phenomenal intensity of pain to correlate inversely with cognitive sophistication. If that’s the case, humans experience (on average) the least intense pain in all the animal kingdom.41
A final consideration involves not the phenomenal intensity of pain but its phenomenal extension (that is, its felt duration). Due to neurological differences, phenomenal extension might not be directly comparable across species. Consider brain-processing speed and rates of subjective experience, both loosely defined. Animals with faster metabolisms and smaller body sizes tend, according to some metrics, to process information faster. Thus, there is some reason to think that smaller animals have, in general, faster subjective experiences. So a hummingbird might experience one minute of objective time42 as longer, in some robust, non-subjective sense of the term, than a human would. If that’s true, then a given hummingbird and a given human experiencing a pain of the same phenomenal intensity would not, ceteris paribus, suffer equally during the same objective span of time. The hummingbird would suffer more. Hence, we should not naively equate the phenomenal extension of pain with its duration expressed in objective time. The takeaway here is that the moral significance of pain might be related in important ways to an entity’s processing speed. Such concerns would be greatly magnified if we ever created artificial minds capable of conscious experience. As with other areas, more research is needed.
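To make the structure of that worry explicit, consider a toy calculation; the threefold figure below is invented purely for illustration and is not an empirical estimate of any animal’s cognition:

suffering ≈ phenomenal intensity × phenomenal extension

phenomenal extension ≈ objective duration × rate of subjective experience

If a hummingbird’s rate of subjective experience were three times a human’s, then one objective minute of pain at a fixed phenomenal intensity would amount to three “subjective minutes” of suffering for the hummingbird and only one for the human: a threefold difference in moral significance arising from processing speed alone.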
Degrees of Consciousness
The moral significance of pain might also depend on the extent to which an entity is aware of (the unpleasantness of) the pain it is in. This is a subtle claim which requires some unpacking. First, distinguish aware of from aware that. I’m not here asserting that the moral significance of pain requires that a pain-bearer be aware that it is in pain.43 To be aware that one is in pain, one must possess the concept pain. It seems plausible that a creature might experience pain without possessing the concept pain. The extent to which one can be aware of a pain is the extent to which one can attend to a pain. It is the extent to which one is conscious of a pain. And if consciousness comes in degrees, as many neuroscientists believe,44 then the extent to which one can be aware of pain also comes in degrees, potentially in a morally significant way.
There are several mundane ways in which consciousness can be said to come in degrees. An entity that is conscious might be conscious all the time or only part of the time. (Humans, for example, are unconscious during dreamless sleep and when they undergo general anaesthesia.) For an entity that is currently conscious, consciousness might span many or few modalities. (Some creatures are sensitive to differences in light, sound, temperature, pressure, smell, bodily orientation, and magnetic field. Other creatures are sensitive to fewer sensory modalities.) For an entity that is currently conscious of a given sensory modality, that modality might be coarse-grained or fine-grained. (Within the light modality, some creatures are only sensitive to differences in brightness, while other creatures are sensitive to a wide swath of the electromagnetic spectrum.)
There is a more fundamental sense in which it might be true that consciousness comes in degrees. One of the most striking features of consciousness is its unity. When I step outside my door, I experience the hum of distant machinery, the gray haze of fog, and the smell of fresh cut grass as elements of a unified and densely integrated representation of reality. Sounds, sights, and smells are all experienced as part of the same global workspace. This sort of integrated representation may provide for more open-ended behavioral responses than a comparable amount of information presented in isolated streams. If that’s true, then one of the evolutionary functions of consciousness may be to integrate information.
According to the Integrated Information Theory of consciousness, consciousness just is suitably integrated information. When the effective informational content of a system, mathematically defined in light of the system’s causal profile, is greater than the sum of the informational content of its parts, the system is said to carry integrated information. Integrated information of the relevant sort is conscious, whether that integration occurs in the brain or in a two-dimensional graph. Because integration comes in degrees, so too does consciousness.
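In rough schematic form (this is a gloss only; the theory’s actual measure, usually written Φ, is defined mathematically over the partitions of a system’s causal structure and has changed across versions of the theory):

Φ(system) ≈ information generated by the system as a whole − information generated by its parts taken separately, evaluated at the partition that loses the least information

On this gloss, a system whose parts operate independently, such as the million photodiodes of a camera sensor, carries a great deal of information but near-zero integrated information, and so, according to the theory, near-zero consciousness.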
Intuitively, we might think that creatures like cockroaches and octopuses integrate information to a lesser degree than humans.45 Headless cockroaches, for example, can be trained to avoid electric shocks. Octopuses trained to discriminate between horizontal and vertical rectangles using only one eye were unable to discriminate between the shapes using the other eye.46 One natural interpretation of these results is that although cockroaches and octopuses are adept at detecting and responding to various stimuli, the degree to which that information is centrally processed is limited, at least compared to humans.
If a theory of this sort is correct–and Integrated Information Theory is often considered the leading scientific theory of consciousness–then different entities will possess different amounts of consciousness. Although it is unclear what a claim of this sort even means, it is plausible that the moral significance of pain will depend in part on the amount of consciousness that the entity undergoing the pain possesses.
Moral Dignity and Pain
Hedonism is the view (roughly) that the only things that matter morally are pains and pleasures.47 If hedonism is true, then the (unpleasant) pains and (pleasant) pleasures of nonhumans matter according to their phenomenal intensities and the extent to which the creatures are aware of them. But if hedonism is false, then there may be reasons to regard those nonhuman pains as less morally significant than human pains.48 Even if some nonhuman experiences the unpleasantness of pain with the same phenomenal intensity and the same awareness as a neurotypical adult human, there still might be some difference between the nonhuman and the human which mitigates the moral significance of the nonhuman’s pain.
Let us take just one example.49 Personal autonomy is the ability to, in some sense, govern oneself. Autonomous agents live their lives according to reasons that are their own, and they act according to motivations largely free from distorting external forces. Autonomous agents possess the capacity to reflectively endorse their commitments and change those commitments when they are found to be deficient. The value of personal autonomy features prominently in much of modern Western ethics, and it was famously given a central place in Immanuel Kant’s moral philosophy. If personal autonomy is non-instrumentally valuable, we might rate the pain of autonomous agents as worse, ceteris paribus, than the pain of non-autonomous entities, especially if the pain interferes somehow with the agent’s autonomy. Because personal autonomy requires self-reflection, many nonhuman animals are not plausible candidates for instantiating this value.50 Thus, ceteris paribus, their pain may matter less.
Because ethical theorizing is so hard, we should partition our credences over a fairly wide range of plausible normative theories.51 This partition need not be equal, but it should assign some non-negligible credence even to views strongly at odds with one’s preferred theory. No one ought to be certain, even in the mere colloquial sense of ‘certain,’ that consequentialism or deontology is false.
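One simple way to operationalize such a partition, a toy model that deliberately sets aside the notorious problem of comparing value across theories, is to weight each theory’s verdict by one’s credence in it:

expected moral weight of a nonhuman pain = Σ credence(theory) × weight assigned by that theory

With invented numbers: 0.5 credence in hedonism (weight 1.0), 0.3 in an autonomy-centered view (weight 0.2), and 0.2 in a view on which only human pain matters (weight 0) yields 0.5 × 1.0 + 0.3 × 0.2 + 0.2 × 0 = 0.56. Even substantial credence in views that discount nonhuman pain leaves that pain with over half the weight of a comparable human pain.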
Reflective Equilibrium
All ethical theorizing involves some degree of reflective equilibrium. We have intuitions about particular cases and also intuitions about general principles. When we formulate a general principle, we try to capture as many case intuitions as we can. Sometimes, if we are confident in a general principle, we are willing to adjust our judgments in individual cases. Other times, however, our individual judgments are strong enough that they constitute counterexamples to the general principle.52
When our intuitions about case judgments conflict with our intuitions about general principles, we must decide which to privilege and to what degree. According to the terminology of Roderick Chisholm (1973), the philosophical particularist privileges case judgments over general principles when engaging in reflective equilibrium. The philosophical methodist privileges general principles over case judgments when engaging in reflective equilibrium.53
Let’s explore a potential conflict. Suppose you believe that the conscious experience of (unpleasant) pain is always morally significant, at least to a small degree. This is a general principle. Suppose you also believe that given the choice between the life of a human child and the lives of a trillion ants, the morally correct action, ceteris paribus, is to save the human child. This is a case judgment. Next, suppose you come to assign, on the basis of solid empirical and philosophical evidence, a small but non-negligible chance to the proposition that ants experience morally significant pain. Because of the sheer number of ants, the amount of expected ant suffering in the world will be quite high. Ameliorating ant suffering suddenly looks like one of the most important issues in the world. This, to say the least, is a surprising result.
How, if at all, should you revise your judgment about whether to save the trillion ants or the single human child? If you do revise your judgment, can you provide an error theory for why the initial judgment was mistaken? If you don’t revise your judgment, does that undercut the general principle? Should you abandon your principle? Or maybe refine it? (Perhaps the aggregation of pain does not consist of mere addition. Or perhaps relatively small pains, no matter how numerous, never sum to outweigh a sufficiently big one.)
Some people may regard it as obvious that one should revise one’s initial case judgment in light of the new information about ant consciousness. Perhaps, but one ought also to be careful not to be pushed down frictionless slopes without proper backstops in place. Here we begin to approach “Pascal’s Mugging” territory. For instance: should one assign a non-zero credence to the proposition that plants feel pain? Probably. After all, panpsychism might be true. But there are far, far more plants than ants. Even with an extremely low credence that plants experience pain (and I’ll remind you that some very smart people endorse panpsychism), expected plant suffering will probably dominate expected ant suffering by several orders of magnitude. Now it looks like ameliorating plant suffering is the most important issue in the world.54
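The expected-value arithmetic driving this slope is simple. With deliberately invented figures chosen only to exhibit the structure:

expected sufferers = credence in sentience × population

ants: 0.01 × 10^16 = 10^14 expected sufferers

plants: 0.001 × 10^20 = 10^17 expected sufferers

In general, expected plant suffering dominates expected ant suffering whenever the plant-to-ant population ratio exceeds the ant-to-plant credence ratio; here the populations differ by four orders of magnitude while the credences differ by only one.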
It’s true that we could continue to adjust our credences downward until we avoid this result, but that somehow feels like cheating. After all, credences are just something we have; they are not the sort of thing we get to set directly. One might reply: “I don’t have infallible epistemic access to all my credences. I know that potential animal suffering is more important than potential plant suffering. I use this information to infer that my credence must be however low it must be in order to avoid the result that expected plant suffering is greater than expected animal suffering.”
This response succeeds up to a point, but ultimately it is unsatisfying. Suppose we discover that we undercounted the number of plants by some 100 quadrillion. (After all, what counts as a “plant” is a somewhat slippery notion.) Then one would have to adjust one’s credence again. At some point these adjustments begin to look ad hoc. A better description of what’s going on looks like this: there are some propositions the entailment of which serves as a reductio ad absurdum of the theory that entails them. That plant suffering matters more than animal suffering is one such proposition. But if we can use the plant-suffering proposition as a reductio on the theory which entails it, why can’t we use the ant-suffering proposition as a reductio on the theory which entails it? After all, didn’t we start with a strong intuition that a trillion ant lives are no more important than a single human life?
The general point here is not that any particular proposition about suffering is absurd or that we should begin our ethical theorizing with any particularly strong views on the worth of ant-lives versus human-lives. The only point I’m trying to make is that bringing one’s theory into reflective equilibrium can be hard. Sometimes there is simply no non-question-begging method to persuade an interlocutor that the equilibrium she has settled on is worse than the equilibrium you have settled on.
Directions for Future Work
To recap: I’ve discussed eight conceptually sequential steps needed to identify morally significant pain in nonhumans. The eight steps are:
- Determine that other minds exist.
- Check to see if the nonhuman entity in question engages in pain behavior. If so, check to see if there are any defeaters for the explanation that the entity in question feels pain.
- Apply one’s best theory of consciousness to see what it says about the likelihood that the entity in question feels pain.
- Assuming that the entity feels pain, check to see if it experiences the felt badness of pain.
- Determine the phenomenal intensity and phenomenal extension of the pain.
- Determine the degree to which the entity is aware of the pain.
- Determine the entity’s moral standing relative to other entities which experience pain.
- Check to see if your final result constitutes a reductio on the whole process.
There is a tremendous amount of uncertainty, both empirical and moral, surrounding the issue of nonhuman pain. Because the subject is so complex, we should ascribe some credence to views which hold that phenomenal consciousness is rare outside humans and also ascribe some credence to views which hold that phenomenal consciousness, though common, is not terribly morally significant.
Nonetheless, my personal view is that even after folding all this uncertainty into our calculations, we are still left with the result that we should take nonhuman pain much more seriously than the average policymaker does. There are good reasons to think that many nonhumans feel pain and that this pain is morally significant. These nonhumans do not have a voice in policy debate and they do not have a vote. They are powerless to stop the harms we inflict on them, and they are powerless to ask us for help. They are not just systematically mistreated; their suffering is almost wholly ignored.
One of the best ways to help these creatures is to reduce the uncertainties surrounding the issue of nonhuman pain. To that end, Rethink Priorities has been working on an ambitious project to analyze and catalogue 60+ features potentially relevant to phenomenal consciousness and morally significant pain. (A project of this sort was suggested by Luke Muehlhauser in his 2017 Report on Consciousness and Moral Patienthood.) We aim to carefully define each feature and explain why and to what degree it might be relevant to consciousness. We have selected 17 representative species from across the animal kingdom and are currently scouring the scientific literature to see whether and to what extent each species exhibits each of the features. Some of the species are intuitively conscious (e.g., cows), while others are intuitively not (e.g., nematodes). In between are a host of interesting edge cases, like honey bees and octopuses. All this information will eventually be compiled into an easily searchable database. Of course, the project won’t definitively settle whether honey bees or octopuses experience morally significant pain. Nonetheless, our hope is that the database will become an invaluable resource for future consciousness research. In our next essay, we explain this approach in more detail.
Acknowledgments
J.P. Andrew, Elijah Armstrong, Kim Cuddington, Marcus A. Davis, Neil Dullaghan, Sam Fox Krauss, Peter Hurford, David Moss, Katie Plemmons, and Daniela R. Waldhorn provided helpful comments on this essay.
Endnotes
As we’ll see, it’s not enough to demonstrate that nonhumans experience pain. There are a number of ways in which nonhuman pain might be less morally significant than human pain, even to the point that nonhuman pain fails to be morally significant at all. Nonhuman pain might just feel different (along dimensions elaborated below) in a way which renders the pain less morally pressing. ↩
To take just one group of arthropods, there are something like a quintillion insects alive at any given moment, a number which boggles the mind. See C.B. Williams. 1964. Patterns in the Balance of Nature and Related Problems in Quantitative Biology. Academic Press, London: 324. ↩
Elasmobranch fish (i.e., cartilaginous fish, such as sharks) may be an exception. See, inter alia, Ewan Smith and Gary Lewin. 2009. “Nociceptors: A Phylogenetic View.” Journal of Comparative Physiology A Vol. 195, Issue 12: 1096. ↩
For comparison, we also include some non-animals, such as plants and protists. ↩
This list is not exhaustive. Most notably, we’ll set aside difficult questions in metaethics. For example, if moral nihilism is true, then there are no moral facts, and thus no creatures experience morally significant pain, including humans. ↩
Note that not every step is equally problematic. ↩
I here set aside certain conceptually possible but non-actual fanciful devices, such as brain-to-brain hookups. ↩
This is metaphysical solipsism, and the description is not technically correct (it’s a necessary but not sufficient part of the full view). One could believe that one’s mind is the only one which exists without thereby being a solipsist, if, say, one were the sole survivor of some apocalyptic catastrophe. ↩
The ammunition metaphor is adapted from Anil Gupta. 2006. Empiricism and Experience. Oxford University Press: 178. ↩
See, inter alia, Andrew Melnyk. 1994. “Inference to the Best Explanation and Other Minds.” Australasian Journal of Philosophy, 72: 482–91 for a discussion of the issue. ↩
Obviously, this is a simplification. The behavioral similarities run much deeper. ↩
Other explanations are more complicated because they raise more questions than they resolve. Why, for instance, would someone create sophisticated robots programmed to behave as I do? ↩
It’s important to note that one can prefer an explanation without fully believing the explanation. If there are numerous plausible explanations, the best explanation might only warrant a credence of .2. For example, it’s consistent to have a fairly low credence in the claim that invertebrates feel pain and yet think that that explanation of their behavior is more likely than any other explanation of their behavior. See Michael Tye. 2017. Tense Bees and Shell-Shocked Crabs. Oxford University Press: 68. ↩
To put the point another way, ancient hunter-gatherers were justified in believing their fellow humans felt pain, but they didn’t know anything about physiological or neurological similarity. See Tye 2017: 53–56 for more on this point. ↩
This way of formulating the justificatory base leads to the well-known problem of induction, which I here gently set aside. ↩
See, e.g., Tye 2017, especially chapter 5. ↩
Here, “pain behavior” doesn’t mean “behavior caused by pain.” Rather, it is convenient shorthand for “behavioral patterns that, in humans, are caused by pain.” ↩
This is not an ad hoc view. Functional imaging studies show that, in humans, there is a correlation between the phenomenal intensity of pain and activity in the anterior cingulate cortex and the somatosensory cortex. See Devinsky, O., Morrell, M. J., & Vogt, B. A. 1995. “Contributions of Anterior Cingulate Cortex to Behaviour.” Brain: A Journal of Neurology, 118(1), 279–306. ↩
The neocortex is only found in mammalian brains. ↩
See Merker, B. 2007. “Consciousness without a Cerebral Cortex: A Challenge for Neuroscience and Medicine.” Behavioral and Brain Sciences, 30(1), 63–81. It should be noted that this claim only applies to children born without a neocortex. Adults with damaged neocortices remain completely vegetative. ↩
Jarvis ED, Güntürkün O, Bruce L, Csillag A, Karten H, Kuenzel W, et al. 2005. “Avian brains and a new understanding of vertebrate brain evolution.” Nature Reviews. Neuroscience. 6 (2): 151–9. See Tye 2017: 78–84 for a philosophical discussion. ↩
This is actually fairly rare in philosophy. Epistemologists and ethicists often develop radically different theories on the basis of roughly the same common ground. The only other comparable example that comes to mind in philosophy is mereology. ↩
See Schwitzgebel (forthcoming) “Is There Something It’s Like to Be a Garden Snail?” for more on the common ground problem. ↩
An alternate explanation holds that the panpsychist and higher-order theorist begin with the same starting assumptions, but the theoretical virtues of their respective theories lead them to remarkably different conclusions. I am dubious of this explanation. ↩
Thomas Huxley is perhaps the most famous philosophical proponent of epiphenomenalism. See his (1874) “On the Hypothesis that Animals Are Automata.” Victorian Review Vol. 35, No. 1, pp. 50–52. ↩
This is too quick. In weird cases an action can be evidence for some state of affairs without bearing any causal relationship to that state of affairs. But the general in-text point stands. ↩
See David Chalmers. 1996. The Conscious Mind. Oxford University Press: pp. 94–99 for the canonical discussion. Note also that Chalmers hedges on whether his view entails true epiphenomenalism. He admits his view entails “something like epiphenomenalism” (150, emphasis in the original). ↩
Ibid. Although phenomenal zombies get a lot of press, they are, according to Chalmers at least, inessential to his broader arguments. ↩
The exception is so-called “interactionist dualism.” Descartes is probably the most famous interactionist dualist. He believed that nonphysical mental states affect the brain via the pineal gland. ↩
See C.D. Broad. 1925. The Mind and Its Place in Nature. London: Kegan Paul: 125. ↩
Carl G. Hempel and Paul Oppenheim. 1948. “Studies in the Logic of Explanation.” Philosophy of Science 15, no. 2 (Apr., 1948): 119. ↩
Even if the features were coextensive in the animal kingdom, it seems physically possible to deliberately design an entity which possessed some but not all of the features. ↩
It is important to emphasize that, as best we can tell, pain asymbolia patients experience something over and above mere nociception. Nociceptors are special receptors used by the body to detect potentially harmful stimuli. Many creatures, including, for example, the roundworm C. elegans, possess nociceptors. Mere nociception does not have an attendant phenomenology, whereas pain asymbolia patients do report a conscious experience–just not an unpleasant one. ↩
There are many ways in which pain experiences are instrumentally good: they alert us to potential damage, they aid in our recovery from such damage, and they enable us to learn to avoid such damage in the future. We shouldn’t wish to be completely without pain, for humans born in such a condition (known as general congenital analgesia) almost always die young. ↩
See David Bain. 2017. “Why Take Painkillers?” Noûs for a recent representative entry in the debate. ↩
Of course, we might be overestimating the cognitive sophistication required for second-order mental states, especially if we drop the assumption that higher-order cognition must be propositional. ↩
See Adam Shriver. 2006. “Minding Mammals.” Philosophical Psychology Vol. 19: 433–442 (especially sec. 2) for an accessible overview. ↩
“The modality of the pain” refers to the type of pain (e.g., a “cutting” pain, a “burning” pain, a “throbbing” pain). ↩
See Adam Pautz. 2014. “The Real Trouble with Phenomenal Externalism: New Empirical Evidence for a Brain-Based Theory of Consciousness.” in: Brown R. (ed.) Consciousness Inside and Out: Phenomenology, Neuroscience, and the Nature of Experience. Studies in Brain and Mind, vol 6. Springer, Dordrecht, especially sec. 2.3 for an overview. ↩
There has been some initial research along these lines in monkeys, but, perhaps for obvious reasons, the subject is not widely studied. ↩
It should be noted that, more so than other sections, this paragraph is entirely speculative. ↩
For the sticklers out there, I am well aware that Einstein taught us there is no good sense to the term “objective time.” I of course assume here that the hummingbird and the human are in the same reference frame. ↩
Of course, certain second-order thoughts about pain might themselves be morally significant. A human who is aware that she is in pain might also come to believe that the pain is just or unjust. ↩
For the record, most philosophers disagree. ↩
It should be stressed that according to Integrated Information Theory, information integration is defined mathematically, so these intuitive examples may not stand up to greater scrutiny. ↩
Failures of so-called “inter-ocular transfer” have also been found in birds, fish, reptiles, and amphibians. See G. Vallortigara, L.J. Rogers, A. Bisazza. “Possible evolutionary origins of cognitive brain lateralization.” Brain Research Reviews Vol. 30: 164–175 for a scientific discussion. See Peter Godfrey-Smith. 2016. Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness. New York: Farrar, Straus and Giroux: 84–87 for a philosophical discussion. ↩
This is ethical hedonism. Psychological hedonism is the (descriptive) view that only pleasures and pain motivate us. ↩
It should be emphasized that it does not follow from the falsity of hedonism that pains and pleasures are morally insignificant. If hedonism is false, it is almost certainly because there are other things which are valuable as well. ↩
Additional examples could be drawn from certain theistic traditions in which God (allegedly) gives humans dominion over (other) animals. ↩
On this subject Kant writes: “The fact that the human being can have the representation ‘I’ raises him infinitely above all the other beings on earth. By this he is a person... that is, a being altogether different in rank and dignity from things, such as irrational animals, with which one may deal and dispose at one’s discretion.” “Anthropology from a Pragmatic Point of View (1798)” in 2007. Anthropology, History, and Education (Cambridge Edition of the Works of Immanuel Kant). Robert Louden and Günter Zöller (eds. and trans.). Cambridge University Press: 239. Some claim that Kant drew the wrong inference from his own theory. See Christine Korsgaard. 2018. Fellow Creatures: Our Obligations to the Other Animals. Oxford University Press, especially Part Two: “Immanuel Kant and the Animals.” ↩
See, inter alia, Will MacAskill. 2016. “Normative Uncertainty as a Voting Problem.” Mind, Vol. 125: 967–1004 for more on normative uncertainty. ↩
In the words of Nelson Goodman: “[R]ules and particular inferences alike are justified by being brought into agreement with each other. A rule is amended if it yields an inference we are unwilling to accept; an inference is rejected if it violates a rule we are unwilling to amend.” Fact, Fiction, and Forecast. Harvard University Press (1955): 61–2. ↩
Roderick Chisholm. 1973. The Problem of the Criterion. Marquette University Press: 15. ↩
Some members of the effective altruism community go further, positing that “atomic movements, electron orbits, photon collisions, etc. could collectively deserve significant moral weight.” ↩