Thursday, August 14, 2008

Is Coming Into Existence an Agent-Neutral Value?

David Benatar argues that bringing someone into existence is always a harm, and grounds his argument in a particular asymmetry - the "goodness" of absent pain, versus the mere neutrality of absent pleasure where no one is thereby deprived.

Seana Shiffrin, on the other hand, doesn't argue that procreation is always a harm, but she refuses to characterize procreation as a "morally innocent endeavor" and argues for a more equivocal view of bringing people into existence. While procreation is not necessarily always a harm, it is often a harm, and procreators should bear moral responsibility for the harm they do. (Shiffrin, Seana Valentine. "Wrongful Life, Procreative Responsibility, and the Significance of Harm." Legal Theory, 5 (1999), 117–148.) Shiffrin defends her view with a different asymmetry: while it is fine to harm someone in order to prevent a greater harm to him, even without his consent (the rescue case), it is not fine to harm a person without his consent merely to provide him a benefit. Her core example involves a wealthy recluse, Wealthy, who has no other way to help others, dropping $5 million cubes of gold from the air onto a neighboring island. Many receive his presents without complications, but one recipient, Unlucky, is struck by a cube and breaks his arm. While Unlucky might, after the fact, be glad to have been hit with the gold cube and consider the broken arm worth it, intuition suggests that dropping $5 million gold cubes on people is wrong. Unlucky admits that, all things considered, he is better off for receiving the $5 million, despite the injury. In some way he is glad that this happened to him, although he is unsure whether he would have consented to being subjected to the risk of a broken arm (and worse fates) had he been asked in advance; he regards his conjectured ex-ante hesitation as reasonable. Given the shock of the event and the severity of the pain and disability associated with the broken arm, he is not certain whether he would consent to undergo the same experience again.

Shiffrin goes on to flesh out the intuition that Wealthy has wronged Unlucky - for instance, we would say that Wealthy owes Unlucky an apology, and if Wealthy refused to pay for Unlucky's corrective surgery, Unlucky would properly have a cause of action against Wealthy for the cost of his injuries.

Shiffrin's focus on unconsented harm accords well with my thinking on procreation. I wish to question, though, whether it is the benefit/harm distinction that matters when motivating an unconsented harm. In my view, Shiffrin's benefit/harm distinction is unnecessarily confusing and subject to contrary individual interpretations of harm and benefit; the very ideas of harm and benefit are, in my view, too subjective to form the basis for the rightness or wrongness of inflicting unconsented harm. I think it is both more correct and more general to say that unconsented harm may only be done in the service of a genuinely agent-neutral value.

Shiffrin considers, as a possible objection to her framework, the claim that the real reason a rescue is morally right, while Wealthy's action toward Unlucky is morally wrong, is that in the rescue case hypothetical consent may be said to exist, whereas not even hypothetical consent exists in Unlucky's case (he is not sure he would have consented ex ante). Shiffrin argues that it is the asymmetry between harm and benefit that grounds our intuition on hypothetical consent, rather than the other way around. She argues that

there seems to be a harm/benefit asymmetry built into our approaches to hypothetical consent where we lack specific information about the individual’s will. We presume (rebuttably) its presence in cases where greater harm is to be averted; in the cases of harms to bestow greater benefits, the presumption is reversed.

My view is that we can be clearer than this. It is not the harm/benefit distinction that is driving the willingness to infer hypothetical consent; it is the different level of agent-neutrality of the inflicted harm's consequence.

Thomas Nagel introduces the concept of agent-relative and agent-neutral value in The View from Nowhere. Agent-relative values are values which an agent holds, but which no one but the agent has much reason to promote. Agent-neutral values are values which anyone has reason to promote, whether or not the promotion of the values would benefit him directly. An agent's desire to climb Mount Everest would be an agent-relative value; he may place genuine value on it, but I have no reason to assist him in his endeavor. However, relieving pain may be said to be an agent-neutral value; if someone is suffering severe pain, I have good reason to alleviate his pain.

In the rescue case, the rescuer causes harm to a person in order to prevent greater harm - to save his life, or to prevent more serious physical injury. Both saving life and preventing physical injury would probably be classified as agent-neutral values. In Unlucky's case, however, the $5 million gold cube could well be seen as something with only agent-relative value. Shiffrin specifies that inhabitants of Unlucky's island are well provided for even without the gold. While there might be an agent-neutral reason to provide people with a certain minimum level of money or material comfort, beyond this, there is not much reason to give substantial gifts to strangers. A person might want $5 million, but I have no particular reason to see that he gets it, while I do have a reason to ensure that his basic nutritional needs are taken care of.

A major problem with the agent-neutral/agent-relative classification is whether agent-neutral values exist at all. Eric Mack, for example, argues that there are no agent-neutral values ("Against Agent-Neutral Value," Reason Papers 14 (Spring 1989), 76–89). Mack argues that an agent-neutral value must necessarily be an "agent-external" value - something that is valuable in itself, even if no one is ever in a relationship with it so as to value it. Otherwise, all such values are "reducible to [their] value for someone," that is, they are agent-relative (emphasis mine). Few are prepared to claim that there are truly agent-external values in this sense - things that would be valuable even if no sentient beings ever existed. I find the possibility that agent-neutral values do not exist disturbing, calling to mind as it does relativism/subjectivism, though I can imagine an ethical system that recognized only agent-relative values, but also recognized reasons other than personal preference for taking the values of others seriously. Interestingly, Mack refers to the possibility of agent-relative values that are nevertheless, in his words, objective; as long as there can be reasons for taking the (agent-relative) values of others seriously, the project of ethical philosophy doesn't fall into dust.

George R. Carlson (in "Pain and the Quantum Leap to Agent Neutral Value," Ethics, Vol. 100, No. 2 (Jan. 1990), pp. 363–367), while not exactly precluding the possibility of agent-neutral value, argues that Nagel's chief example, pain, fails to be a genuinely agent-neutral value. He argues that while a person might have reason to alleviate the pain of another, these are not agent-neutral reasons. Rather, they are grounded in the perceptions and empathy of the agent.

What I find most troubling about the benefit/harm classification, as well as about claims of agent-neutral value, is that any of the examples so far examined may, depending on individual circumstances, be either a harm or a benefit. Saving a life would generally be seen as an "agent-neutral" value; however, since I am a suicide, a rescuer saving my life would do only harm to me. Preventing pain is seen as an agent-neutral value; however, hiding my friend's car keys so he cannot drive to a club and get beaten up by his dominatrix friend (and thereby preventing him physical pain) would certainly do him harm, not good. And studies of lottery winners seem to indicate that even loads of unnecessary money can do harm. (As J. David Velleman points out, even choice can be a harm.) Can these values really be agent-neutral if they are often harms? Is it not more appropriate to call them the agent-relative values of the majority, rather than genuinely agent-neutral values?

Shiffrin points out a "related asymmetry," from Thomas Scanlon ("Preference and Urgency," 72 J. PHIL. (1975), 655–69). This is the asymmetry between the harm that it is morally correct to inflict on another and the "harm" that a person may inflict on himself. In Shiffrin's words (summarizing Scanlon),

One may reasonably put much greater weight on a project from the first-person perspective than would reasonably be accorded to it from a third-party’s viewpoint. A person may reasonably value her religion’s mission over her health, but the state may reasonably direct its welfare efforts toward her nutrition needs rather than to funding her religious endeavors.

This "related asymmetry" is, it seems to me, concerned with both the problem of consent and, indirectly, with the idea of agent-neutral versus agent-relative values. A person may consent to "harm" for any reason whatever, agent-relative or otherwise; but in order to inflict harm on another without consent, we must either (a) have such a good model of the person's values that we can infer hypothetical consent based on agent-relative values, or (b) act in furtherance of genuinely agent-neutral values.

The ultimate question, of course, is whether coming into existence is the kind of value that it is morally acceptable to inflict harm on others, without their consent, in order to procure for them. Pain, suffering, illness, unrequited love, shame, sexual frustration, sorrow, disappointment, fear, and death are all guaranteed (or nearly so) by the fact of being brought into existence; these are certainly harms. The pronatalist might argue that despite these certain harms, it is not wrong to bring others into existence, because the unconsented harm is in the service of an agent-neutral value: coming into existence. (I find the "hypothetical consent" argument unpersuasive, because we have no model, much less a reliable model, of an agent's future agent-relative values when we contemplate bringing that agent into existence. This is my core problem with R.M. Hare's "Golden Rule" argument that we should bring into existence those who will be happy to exist and not bring into existence those who won't. How do we tell the difference ex ante?)

Is coming into existence an agent-neutral value? The problem we run into at this stage is that we have little theory of what qualifies as an agent-neutral value. Carlson's chief criticism of Nagel seems to be the lack of a theory for determining what counts as an agent-neutral value versus an agent-relative one (other than the unsatisfying "pain is awful"). Indeed, there seems to be a genuine question as to the degree to which agent-neutral values exist at all.

Actually, even under Mack's restrictive definition, I think there is, in some sense, a clear example of a genuinely agent-neutral value - a peculiar value that would retain its value even if no sentient beings ever came into existence to appreciate it. This is the value of no sentient being coming into existence. If no beings exist, no suffering can occur; this is good, even though (and precisely because) no being ever comes into existence to appreciate this pleasant state of affairs. The alternative would be worse; it is good that this worse option does not obtain, even though the only way anyone could perceive its better-ness would be by the worse alternative coming to pass.

There may be disagreement over whether coming into existence is an agent-neutral value. I certainly think that it is not, but I think that an argument could be made in good faith that it is. I think there is a stronger argument, however, that no one coming into existence is an agent-neutral value - perhaps the only such peculiar value - and, under my theory, an agent-neutral value is one in the service of which unconsented harm may be countenanced.

7 comments:

  1. Great stuff.

    I'm not sure if this part is phrased correctly:

    "It is an interesting objection to the concept of an 'agent-neutral value' that seems, to me, to also WEAKEN the case AGAINST ethical philosophy as a practical endeavor. (If the only reasons I can have for doing anything are relative to my own interests, what point or utility is there in making ethical arguments?)"

    Given your explanation, wouldn't Mack's objection tend to strengthen the case against ethical philosophy as a practical endeavor?

  2. Um, yeah, I need to fire my proofreader . . . I changed things around a bit, hopefully clarifying (what I meant is the second thing you said). The discussion of whether agent-neutral values exist is still parenthetical to the piece, but now I'm really interested in it (and disturbed by the implications).

  3. Curator,

    Thanks for the link to this article. I still have to digest it completely, but I do get the second-to-last paragraph, which I "read" as your main point - that not coming into existence is an objectively good thing, mainly because the alternative - coming into existence - is actually worse. After all, we don't mourn the non-existence of the nation Kanalaya, simply because it never existed! It's just a figment of my imagination.

    Now extend this concept to a non-existent canine or feline civilization. We don't mourn the non-existence of such civilizations (fantasy-lovers aside) because they don't exist. If the reverse were true - if dogs or cats had created a technological civilization while we ended up "swinging from the trees" and went no further - those dogs and cats wouldn't mourn the fact that we humans never created a civilization.

    So in the end, it doesn't matter whether we ever exist or not. Therefore, it does us no harm if we are never born.

  4. "...intuition suggests that dropping $5 million gold cubes on people is wrong."

    Gems like this line (especially when taken out of context) are the reason I love philosophy. :)

  5. Two quick thoughts.

    (1)

    If you accept the impossibility of agent-neutral values, you might still have something to fall back on other than preferences. I'm toying with the idea that people cannot be wrong in their assessment of their subjective experiences. That is, people's in-the-moment assessments of their experiences are infallible: we can't enjoy an ice cream sandwich while deeming ourselves to be disgusted by it. Our recollection of those experiences, of course, is unreliable, as is our ability to predict what will lead to various subjective experiences -- that is, our preferences. And of course the goodness of an experience doesn't make it ipso facto a good thing to do what gave us the experience: heroin gives us a blissful subjective experience, but using it is on the whole bad for us.

    There's still the issue of interpersonal comparisons: why is preventing Alice's waterboarding more compelling than preventing Bob from stubbing his toe? An approximate solution would be to say that Alice and Bob's common civilizational and evolutionary history gives us reason to believe that, absent knowledge to the contrary, doing X to one will produce subjective experiences similar in quality and intensity to those it would produce if done to the other, and that they will have similar assessments of similar subjective experiences.


    (bridge)

    It may be possible through meditation techniques to have periods of close-to-zero subjective experience, and then assess these periods relative to a "normal" basket of experiences. But then you have the same recall problems as always.


    (2)

    I too can't accept (thus far) the benefit/harm distinction as being controlling. I'm more open to there being a qualitative difference between joy and suffering, as opposed to one being the other but with opposite sign. And even if they were incommensurable, I think everyone would choose to suffer one lousy day at the office and then have a life of joy and bliss, and we wouldn't even have a problem imposing that fate on others.

    So it's fine to create Unbreakable, and also fine to create Unbreakable-minus, which is the name I'll give to the one-lousy-day-plus-lifetime-of-joy-and-bliss creature. It's not fine to create Austrian Basement. Where's the line in between? I don't have an answer, but we can get some sense of an upper bound on our estimate of where the line should be by asking the people who right now would prefer never to have been (or perhaps those who are least happy with life, or whose assessments of their subjective experiences are the worst among all persons' assessments) what would be necessary for them to change their preference. Similarly, we could get some sense of the lower bound by asking people who prefer their having come into existence to their not having done so how much worse life would have to be for them to prefer never to have been born.

    This is all very inexact, and I think that uncertainty should make us very wary of creating new persons, but since I would create Unbreakable-minus, I can't really say that this is a black-and-white question and so I can't say we should never create new persons, ever.

  6. Some thoughts....

    Shiffrin is likely wrong. Unlucky is obviously suffering from scope insensitivity. If he were in his right mind, I'm sure he'd understand that Wealthy helped him. I know I would. (Assuming of course, that there was no risk Wealthy's gold bars would kill someone. You should add a nod to magic hard hats to the thought experiment to make things clearer.)

    There are plenty of times when it is acceptable to harm someone in order to provide them a greater benefit later. For instance, I once led one of my brothers to a surprise birthday party he greatly enjoyed. I had to lie to him and mislead him to do this.

    Carlson is correct: there are no true agent-neutral values. I think you should replace "agent-relative values" with "egoistic values" and "agent-neutral values" with "empathy-relative values." No agent-neutral values exist, including the value of not creating someone.

    In attempting to decide whether creating someone or not creating someone is an "empathy-relative value," we should consider that our conscience ultimately wants to help people achieve their egoistic values. That means it is good to create people, but not good to create people in situations where it will be impossible for them to achieve many of their values (e.g., Austrian Dungeon and Slumworld). So no antinatalism, but no Repugnant Conclusion either.

    JasonSL, your experiment regarding Unbreakable vs. Austrian Basement illustrates this perfectly. Your inability to define a line is understandable, but let me assure you that just because it's hard to tell where the line is on the continuum between Unbreakable and Austrian Dungeon doesn't mean there is no line. Thomas Sowell calls mistaking a vague line for no line the "precisional fallacy." Commit the precisional fallacy upwards and you get the Repugnant Conclusion; commit it downwards and you get antinatalism.

    Replies
    1. Assuming of course, that there was no risk Wealthy's gold bars would kill someone.

      That's inconsistent. If the death risk is small enough, the same conclusion must hold.

