Monday, April 05, 2021

Guest Post: 'Save the Five: Meeting Taurek’s Challenge'

[My thanks to Zach Barnett for writing the following guest post...]

At its best, philosophy encourages us to challenge our deepest and most passionately held convictions. No paper does this more forcefully than John Taurek’s “Should the Numbers Count?” Taurek’s paper challenges us to justify the importance of numbers in ethics.

Six people are in trouble. We can rescue five of them or just the remaining one. What should we do? This may not seem like a difficult question. Other things equal, you might think, we should save the five. This way, fewer people will die. 

Taurek rejects this reasoning. He denies that the greater number should be given priority. In effect, Taurek challenges us to convince him that the numbers should count. Can we meet his challenge?

You might be pessimistic. Even if you yourself agree that the numbers do count, you might worry that... just as it’s hopeless to try to argue the Global Skeptic out of Global Skepticism... it’s equally hopeless to try to argue someone like Taurek, a Numbers Skeptic, out of Numbers Skepticism. But that’s what I’ll try to do.

Let’s start by examining some different forms that Numbers Skepticism can take. Some Numbers Skeptics are driven by considerations of fairness. Often, they hold that we are required to randomize, to ensure that everyone is given an appropriate chance of rescue. For example, Taurek himself suggests flipping a coin to decide whom to save. 

Here are six human beings. I can empathize with each of them. I would not like to see any of them die. But I cannot save everyone. Why not give each person an equal chance to survive? Perhaps I could flip a coin. Heads, I [save] these five. Tails, I [save] this one. In this way I give each of the six persons a fifty-fifty chance of surviving. Where such an option is open to me it would seem to best express my equal concern and respect for each person. Who among them could complain that I have done wrong? And on what grounds? (p. 303)

So that’s one view: give each stakeholder the same chance of rescue.

Jens Timmermann, in his paper, "The Individualist Lottery," also appeals to randomization, but in a different way. Rather than flipping a coin, he proposes holding a lottery involving the six people and allowing the winner’s interests to decide the outcome.

[Consider] the individualist lottery. To give the claims of A, B and C equal weight, a coin will not do. We need a wheel of fortune with three sectors, each of which bears the name of one islander. The person whose sector comes up is saved. If this person is A, both B and C perish. If B’s sector is selected, B is saved. Having reached the island, the rescuer then incurs an obligation to save C. Similarly, if C wins B is also saved. We neither count, nor aggregate, nor quantify; nor do we arbitrarily assign roles to individual islanders. We need not even count the numbers of people on their respective islands. Crucially, no one’s claim is ever balanced off or discounted. Everyone has the same chance of winning the lottery. In addition, some people stand a good chance of benefiting from someone else’s good luck. Depending on the numbers, it is—more, much more, vastly more—likely that the many will be saved. Being stuck on an island and losing the lottery, whoever perishes will undoubtedly bemoan his ill fortune; but he cannot complain about unfair treatment by the person in charge of the ship. (pp. 110–111)

In the case at hand, Timmermann’s individualist lottery gives the group of five a 5/6 chance of rescue: each of the six is equally likely to win, and if any of the five wins, all five are saved.
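
For readers who want to check the arithmetic, here is a minimal simulation sketch of the individualist lottery in the six-person case (the code and its names are my own illustration, not Timmermann’s):

```python
import random

def individualist_lottery(trials=100_000):
    """Simulate Timmermann's wheel of fortune for six people: one of
    the six wins at random; if the winner is among the five, the
    rescuer saves all five; if the winner is the one, only the one
    is saved. Returns the fraction of trials in which the five live."""
    five_saved = 0
    for _ in range(trials):
        winner = random.randrange(6)  # each person wins with chance 1/6
        if winner < 5:                # the winner is one of the five
            five_saved += 1
    return five_saved / trials

print(individualist_lottery())  # ~0.833, i.e. the five are saved with chance 5/6
```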

Not all forms of Numbers Skepticism call for randomization, though. A simpler view might just permit us to save either group. Such a view could be motivated along the following lines.

Suppose I can save either of two people, one of whom is happier than the other. I’m not required to save the happier one. Nor would I be required to save the healthier one, if one of the two is healthier and expects to live a longer life. Morality is permissive in these cases. Once we reject utilitarian thinking in these simple cases, the thought goes, we should reject it in the five vs. one case as well.

See Tyler Doggett’s paper, “Saving the Few,” for a more sophisticated development of this sort of thinking.

In short, some Numbers Skeptics are motivated by fairness and favor randomization, while others are motivated by a certain sort of non-utilitarian thinking. Next, we’ll see that these forms of Numbers Skepticism encounter serious difficulties when they are applied to some variations on the original scenario. (The argument I’ll give has the same initial premise as Ben Bradley’s paper, “Saving People and Flipping Coins.” And some key points I’ll emphasize are also stressed by Caspar Hare, in his paper, “Should We Wish Well to All?”)


The Ex Ante Case

Let’s think about an ex ante version of the original case. Our six people are currently together, but they are soon to be randomly divided into two groups: Five will be stranded on the Big Island; the sixth will be stranded on the Small Island. We must decide, in advance, where to send the ship. Which island should we choose? Or should we randomize?

In this version of the case, I believe that virtually everyone, even the Numbers Skeptic, should grant that the ship should be sent to the Big Island.

After all, if you’re a Numbers Skeptic who’s concerned with fairness, then you hold that everyone’s chances should be equal. If that’s right, then there’s no objection to sending the ship to the Big Island because doing so gives everyone the same chance of rescue: 5/6. (Alternatively: there’s no need to randomize in the ex ante case, for randomization is already built into the setup. There’s nothing gained by stacking one random event on top of another.)

On the other hand, if you’re more taken by the anti-utilitarian motivation for Numbers Skepticism, we should examine just how far that thinking extends. Suppose one person is in trouble. Option A has a 5/6 chance of saving her, while Option B has a 1/6 chance of doing so. Few would say that choosing B is permissible. And yet, if you send the ship to the Small Island ex ante, that’s in effect what you’re doing, six times over. You’re giving six people a 1/6 chance of rescue, when you could give all six a 5/6 chance of rescue instead.
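
For concreteness, here is a similar sketch of the ex ante case (again my own illustration), checking the chance of rescue that one fixed individual gets under each fixed destination:

```python
import random

def rescue_chance(destination, trials=100_000):
    """Six people are randomly divided, five to the Big Island and one
    to the Small Island, and the ship is sent to `destination` ('big'
    or 'small'). Returns how often a fixed person (person 0) is saved."""
    saved = 0
    for _ in range(trials):
        small_islander = random.randrange(6)  # the random division
        on_small = (small_islander == 0)
        if on_small == (destination == "small"):
            saved += 1
    return saved / trials

print(rescue_chance("big"))    # ~0.833: the Big Island gives everyone a 5/6 chance
print(rescue_chance("small"))  # ~0.167: the Small Island gives everyone a 1/6 chance
```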

So all sides should agree that, ex ante, we should save the five. But ex post—after the six are sent to their respective islands and after we learn where they were sent—the Numbers Skeptic no longer agrees that we are required to save the five. So the key question for the Numbers Skeptic is: What distinguishes the two cases?

There are two options. Is it the fact that, in the original case, when our decision is to be made, the locations of the six people have, in fact, already been settled, metaphysically speaking? Call this the metaphysical view. Or is it the fact that, in the original case, when our decision is to be made, we know who was sent to which island? Call this the epistemic view. I’ll argue that neither horn of this dilemma is particularly attractive.


The Metaphysical View

According to the metaphysical view, what matters is whether the facts about who will be on which island are settled, regardless of whether they are known.

Imagine that the six people will be assigned to their respective islands by the roll of a die. The die will be rolled at midnight, but no one will learn of the result until the following morning. However, we rescuers must decide where to send the ship before the morning comes.

According to the metaphysical view, it matters a great deal whether our decision is made at 11:59pm or 12:01am. If we make the decision at 11:59, we will be required to save the five. If we wait until 12:01, then we may be required to flip a coin, or we may simply be permitted to save the one.

This is odd. The mere fact that the die has been rolled doesn’t seem to change the situation meaningfully. In contrast, learning the result of the die roll—learning who would be sent where—seems to alter the situation dramatically. This suggests that the metaphysical view was not the right path. So the epistemic view seems more promising.


The Epistemic View

According to the epistemic view, the key difference between the ex ante and ex post cases is that only in the ex post case do we know who is on which island.

But what does it take to “know who is on which island”? Suppose that the six people are complete strangers to us, and that they’ve already been sent to their respective islands. At this point, the epistemic view would require us to save the five (since we don’t know who is where).

Now suppose we say to ourselves, “One of the six people is on the Small Island. Let’s call that individual ‘Sam’.” Arguably, by doing this, we successfully refer to the person on the Small Island. But do we now ‘know who is where,’ in the relevant sense? Clearly not. All we did was coin a name. So the situation hasn’t meaningfully changed.

Now, you might object that the fact that ‘Sam’ is on the Small Island is true by definition—thorny reference issues aside. Let’s imagine learning something a bit more substantive.

Suppose, again, that the six people have already been sent to their respective islands. We rescuers decide that we’ll use the letters A through F to refer to the six, assigning those letters alphabetically by surname. We don’t know the six people, but we can now refer to each person uniquely. Next, suppose that, before we decide where to send the ship, we learn something: Person B is on the Small Island. This is not true by definition. It’s a bit more substantive: we’ve learned that the person whose name comes second in an alphabetical list is the one who would die if we save the five. Does this knowledge compel us to randomize? That’s some heavy lifting for such a tiny fact.

At this point, we can ask: What kind of knowledge is sufficient to convert the ex ante case into the ex post case? Would it help if we started with their social security numbers and then learned which of the social security numbers belongs to the person on the Small Island? Would it help if we knew their first names or full names? What about if we were to examine photographs of their faces and then see the face of the person on the Small Island?

You get the point. There’s trouble here, for the epistemic view. What kind of knowledge about the to-be-affected parties must we have, so that Numbers Skepticism goes into effect? I’m skeptical about whether a principled story can be told here.


Concluding Thoughts

Taurek’s paper shows us that it’s difficult to convince someone who doesn’t antecedently care about numbers to start caring about them. But we’ve seen that there’s a close connection between caring about numbers in the sorts of cases Taurek et al. had in mind... and caring about lowering the probability of harm for every member of a group. It may be possible to convince someone to care about the former if they antecedently care about the latter.



References

Bradley, Ben (2009): “Saving People and Flipping Coins,” Journal of Ethics & Social Philosophy 3: pp. 1–13.

Doggett, Tyler (2013): “Saving the Few,” Noûs 47: pp. 302–315.

Hare, Caspar (2016): “Should We Wish Well to All?” Philosophical Review 125: pp. 451–472.

Taurek, John (1977): “Should the Numbers Count?” Philosophy & Public Affairs 6: pp. 293–316.

Timmermann, Jens (2004): “The Individualist Lottery: How People Count, But Not Their Numbers,” Analysis 64: pp. 106–112.


- Zach

25 comments:

  1. Thanks for the great post, Zach! As you might have guessed from our conversations, I think Taurek's best bet is the Epistemic View. And if I had to defend the view, I'd look for companions-in-guilt.

    Your basic objection is that Taurek has to draw an arbitrary line. Sure, it seems reasonable to save the many ex ante, and supposedly it's reasonable to save the few ex post. But how much do I have to know to trigger the switch? There doesn't seem to be any principled answer; the path from total ignorance to being a know-it-all is paved with heaps of insignificant facts.

    But won't there be similar arbitrariness on any view that posits incommensurable values? (In particular, ones that give you insensitivity to mild sweetening, à la Chang 2002?) Suppose I'm choosing between two jobs: call them Job A and Job B. If all I know is that they are jobs, I should be totally indifferent between them, and the moment any point is added in favor of one of them, I should prefer that one. But if I know tons of facts about them -- Job A is a rewarding career in the arts, while Job B is a lucrative career in business -- it might be rational to think of them as on a par, and to keep thinking of them that way even if one is slightly improved (I'll make $5 more a week). I think most ethicists these days are at least open to this kind of incommensurability. But a similar arbitrariness worry looms. How much do I have to know about the jobs before they become incommensurable? (Would it be enough to know that Job A will pay me weekly, whereas Job B will pay me every fortnight? What if I learned the names of the people I'd work with? What if I learned about what I'd be doing on the average Tuesday from 1-1:05pm?)

    So it seems like your objection to Taurek might really be an objection to any kind of incommensurability. If Taurek's in trouble, so are lots of others!

    1. Hi Daniel, great comment! That's a really helpful comparison.

      In the jobs case, I'm inclined to think that, in order to regard the jobs as incommensurable (in the sweetening-resistant sense), you have to actually know about some incommensurable value difference between them. This provides a non-arbitrary boundary. We can straightforwardly rule out value-irrelevant differences (in payroll schedules, names, etc.) as strictly irrelevant to introducing incommensurability.

      So that then makes me wonder whether a similar move could be made in response to Zach's fascinating argument. Perhaps the Taurekian could offer a story about what particular features normatively ground the incommensurability of distinct human lives? Perhaps surprisingly, it cannot just be their bare humanity or personhood, because we already know about that in the ex ante case. But perhaps learning more about the normatively rich details of someone's life -- the distinctive details about what values their life instantiates -- can non-arbitrarily ground incommensurability here.

      Curiously, this would seem to be a matter of degree. Perhaps if you learn a little about the specific value inhering in the one's life, that's enough to make them incommensurable with two -- but still outweighed by five -- and it's not until you get more (relevant) details that they're able to balance out a larger number of other competing lives...?

      Not very Taurekian, admittedly, but it would seem to be the most defensible view in the vicinity.

      (But I'll be curious to hear Zach's thoughts...)

    2. Thanks for commenting Daniel! You do a good job of sticking up for Taurek et al. The analogy to incommensurability is a very insightful one, and I'll have to think about just how far, if at all, the argument generalizes. But in response...

      First, if Taurek et al. take your advice, they'll have to concede that their view is irrelevant to cases involving strangers. This will apply to real-life cases—say, involving distribution of aid to people far away, with whom we have never interacted. On this proposal, Taurek's view is untenable in cases involving strangers, and really is only fully applicable to cases where we are well acquainted with all of the to-be-affected parties. This seems worth acknowledging.

      Second, I'd argue that there's more behind the objection to the epistemic view than an appeal to arbitrary-line-drawing (though admittedly that's part of it). I didn't say so in the post, but part of what's driving the worry is something Richard touched on: some facts we might learn, like social security numbers, seem not just negligible but *irrelevant*. So part of the challenge for the epistemic view proponent is to explain which features are relevant and how.

      Richard actually offered a proposal here—the relevant features are the ones that bear on how much (and what kind of) value the person's life contains or creates. So if we learned that the person on the Small Island had an abnormally value-rich life (maybe ~5x richer than average), then we'd be permitted to save them over the five. I like this view, but I don't think I could invent a less Taurekian view if I tried.

      One final point. When I think about seeing photographs of the six faces... and seeing the face of the person who would not be rescued... that actually *does* get my intuition going for Taurek. At the same time, it's just so obvious that appearance facts shouldn't matter. This combination makes me suspicious of the intuition in the first place.

    3. Oh, just to clarify: the view I was imagining does not require an "abnormally value-rich life" in order to justify saving the one. My thought was that a perfectly ordinary life might be incommensurable in value with five other lives combined, but only once the (ordinarily!) normatively rich details of the life come into view.

      So, for example, I may initially have most reason to save Beth and Case over just Ann, but once I learn about Ann's life projects and what a dedicated mother she is, etc. etc., I could then permissibly choose either option (even if I know that Beth and Case each have lives that are similarly normatively rich -- their value is no longer so precisely comparable as to yield the result that the two determinately outweigh the one).

    4. Ah, right. I was attributing to you the view that made more intuitive sense to me. But I see how the view you actually had in mind is at least somewhat more faithful to the Numbers Skeptic.

      Suppose that five people are on the Big Island, and you are well acquainted with all of them. Either Ann or Beth is on the Small Island, but you don't know which. You're well acquainted with both Ann and Beth. What would your view say about this case?

    5. Hmm, tricky! To avoid the risk of trivialization, I'd guess the view had best require non-disjunctive knowledge of particular values at stake in order to generate incommensurability. (So: save the five in the "Ann or Beth" case.) But that does seem rather ad hoc in its own right.

      (I'm happy for the view to fail, of course -- just trying to play devil's advocate here.)

    6. Agreed. And I guess the point would carry over to a case where we know that it's Ann on the Small Island, but we have two competing conceptions about what Ann is like: either she's a painter or she's a physicist. Even then, we'd have to save the five.

      But interestingly, if Ann and Beth turn out to have very similar profiles, so similar that we take their lives to be instantiating the same sorts of value, in approximately the same amounts... then we'd actually be permitted to save the one (whichever one it turns out to be)?

    7. Ha, yeah, it'd seem so! At least, so long as the other wasn't then amongst the five. For if Ann is on the Small Island, and Beth is on the Big, and they have qualitatively identical value profiles, it'd seem difficult to hold that the incommensurability still holds. ("We can't just let *the painter* die!" "Which painter? There's another just like her over here!")

    8. What a great thread!

      I agree with everything Richard said, and I agree with Zach that this is a serious (and persuasive) problem for Taurek. It sounds to me like his view might only work if the one is someone you "know and like" as opposed to a complete unknown. Well, really, I suppose liking doesn't matter; what matters is that you know enough about the one to get incommensurability. Big open question what counts as "enough."

      The Ann vs. Beth case is interesting. I'm not sure what Taurek (on the Epistemic View) should say in reply. The best heuristic I know is to ask if I can take the perspective of the one. But if I really know the exact *same* things about both people, how can I take one's perspective rather than the other's? (Can I imagine being Ann rather than Beth?) FWIW, I get the sense that some kind of "acquaintance" could make a decisive difference here. Even just eye contact could do.

      Mysterious...

  2. I would have thought the Taurekian has a straightforward response. If there is a fact of the matter about who will be on the small island, then one must randomize. On many views, future truth is compatible with non-maximal present chance. On those views, the Taurekian rejects the ex ante case. Going to the big island without randomizing is not fair to the person who will be on the small island, even though it raises that person's chance of survival. On other (stranger) views on which future truth is not compatible with non-maximal present chance, the Taurekian takes the metaphysical view. On those views, the difference between 11:59 and 12:01 is *huge*. At 11:59 there is no fact of the matter about who will be on the small island, and at 12:01 there is, and all along the Taurekian principle is this: If there is a fact of the matter about who will be on the small island, then one must randomize.

    1. On the sort of picture you're envisioning, what is the Taurekian motivation for requiring the rescuer to flip the coin? I'd have thought that the whole point of Taurek's coin is to give all parties an equal chance. If this is the motivation, then in a case where all parties already have an equal chance, there wouldn't be any reason to flip a coin, and so there would be no reason to flip the coin in the ex ante case. (Based on my reading of the Taurek paper, this seems to be Taurek's own view, judging from his remarks about policies in the final section of the paper. But it may be that the best Taurekian view is not the one Taurek himself would have embraced.)

      Additionally, on the view that allows future truths about random events... isn't there already a fact of the matter about how the coin will land? If so, then given how you're construing fairness, isn't the coin arguably just as "unfair" as saving the five? Suppose it's true now that Zach will perish if the coin is used. If so, then someone could argue that flipping the coin is unfair to Zach, because while it gives him an equal chance of rescue, if the coin is flipped, he will not be saved.

      Regarding the metaphysical view, I'll admit that the argument in the post is pretty thin. I'd like to be able to add a bit more to that intuition pump. I've never felt that the difference between pseudorandomness and true randomness was morally important (as long as the pseudorandomness has the hallmarks of good pseudorandomness—i.e. is practically indistinguishable from true randomness). Somewhere Dennett discusses an example where a lottery number is chosen, and then people buy tickets afterward. Obviously, the result of the drawing would have to be kept secret. If it were all done carefully, would this lottery be unfair in any important sense? I'm inclined to think not. But I am not sure how to argue with someone who disagrees. Do you have any ideas?

    2. [Reposting as a reply...]

      The point was more about the structure than the recommendation. (Insofar as I am inclined to the Taurek view, I actually like the view that both options are permissible. After all, randomization isn't always an option.)

      As for (pseudo)randomness: That's not the distinction I would stress. I think the important distinction is between cases where there is and is not a fact of the matter (at the time of decision) about who will be on the small island. That difference is morally important because if there is a fact of the matter about who will be on the small island, then it is not in their interest for you to go to the big island, even if going to the big island increases their chance of survival.

    3. I thought you were defending the "one must randomize" view specifically. But it sounds like maybe you'd agree that there's a potential problem here for views that require randomization in order to ensure equality of chances. Maybe the Doggett-style view is less vulnerable here, since that view is not committed to the claim that chances are relevant.

      In any case, I do understand the structure of the response. I do think you're right that it will matter to some whether there is a fact (at the time of decision) about who is on the small island. As I see it, here's a question for that view:

      Joe is in trouble. A die has been rolled, but we were not told the result.

      Option A: Save Joe iff 6 was rolled.
      Option B: Save Joe iff 6 was not rolled.

      I think we're required (in the sense of moral requirement I consider most interesting) to choose B, even if this turns out to lead to Joe's demise. This verdict suggests to me that minimizing *epistemic* chances of harm is morally relevant and desirable. And this would seem to count in favor of saving the five once it's settled who is where but before we know who is where, since saving the five minimizes everyone's epistemic chances of harm.

    4. Uncertainty matters when you do not know which of your options are objectively permissible. Your case illustrates that. Another illustration is closer to Taurek's cases. The one is either on small island A or B; I do not know which. The five are on the big island. The Taurekian should say that going to small island A is impermissible, even if the one is in fact on small island A.

      But the Taurekian (who thinks both options are permissible) should think that going to small island A is permissible when I learn that the one is on small island A, even if I do not know which of the six is the one. (Else, Reflection failure.)

      In the case of de re ignorance, I increase each person's epistemic chance of survival (relative to my evidence) by going to the big island. But I know that going to small island A is objectively permissible, so the epistemic chances aren't relevant---or so I think the Taurekian should say.

  3. That makes sense and is a very good response. It explains the difference in a principled way. But it opens the door for one last worry—one which I doubt will be new to you, since you've probably discussed related things with Caspar. Anyway, this is the last thing I'll try... thanks for the thoughtful criticism you've given.

    You are asked to make six decisions in succession. A single die is to be rolled, and that one roll will have consequences for six different people. You know about the whole setup before you begin.

    For the first potential victim, v1, you can make it the case that v1 is saved iff 1 was rolled, or you can make it the case that v1 is saved iff 1 was not rolled. For v2, you can make it the case that v2 is saved iff 2 was rolled, or you can make it the case that v2 is saved iff 2 was not rolled. And so on.

    Making the higher probability choice in each case guarantees that five are saved and one dies (since one and the same die roll is being used across all six potential victims). Making the lower probability choice in each case guarantees that only one is saved. A different package of choices might save all six, or might save none.
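
    To verify the guarantee, here is a quick enumeration sketch (my own illustration; the boolean encoding of the choices is just a convenience):

    ```python
    def saved_counts(save_iff_rolled):
        """save_iff_rolled[i] is True if we choose 'v(i+1) is saved iff
        i+1 was rolled', and False if we choose 'v(i+1) is saved iff
        i+1 was NOT rolled'. Returns the number saved for each roll."""
        return [
            sum(1 for i, iff_rolled in enumerate(save_iff_rolled, start=1)
                if iff_rolled == (roll == i))
            for roll in range(1, 7)
        ]

    print(saved_counts([False] * 6))  # [5, 5, 5, 5, 5, 5]: higher-probability choices
    print(saved_counts([True] * 6))   # [1, 1, 1, 1, 1, 1]: lower-probability choices
    ```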

    Since you agreed that Option B is required in the Joe case, I'm tempted to think that you'd say we are required to make the corresponding choice in each of the six occasions here, since they're not intrinsically different from the Joe case. But in effect, those six decisions, taken together, are equivalent to saving the five in the ex ante case.

    Strictly speaking, it's open to you to say that, in the sequential case, we're required to choose in such a way that will result in the five's being saved... while maintaining that, if we're just making a single decision that will carry through all six situations, then we'd be permitted to save the one. It's a certain kind of incongruity, but maybe it's not bad. My question for you is... can you think of any other context where the intuitively correct view is committed to this same sort of incongruity? Or maybe you'd take a different position on the sequential case than the one I've attributed to you.

    1. A good worry and a good question. Satan's Apple, maybe?


  4. Hi everyone!

    Very interesting post and replies.

    I find the objections pretty good, though I already found the Taurekian view very counterintuitive, to say the least. Just to add two cents regarding the "fairness" view, one of the consequences I see as particularly weird is the doubling down on failure, so to speak, at least if randomizing is seen as obligatory (if it is only permissible, I reckon the consequence is still very weird, even if somewhat less so):

    At first, there are 5 people on the Big Island and 1 on the Small Island. Alice can send the drone ship to one of the islands, but not both. So, she has an obligation to give every person the same chance. She has a coin, so she should toss the coin and send the ship to the Small Island if it's tails, to the Big Island if heads (or a similar procedure, of course). Here's a variant based on the same approach to fairness:


    Before Alice tosses the coin, the computer on board the drone ship tells her that due to rough seas, the chance of reaching the Small Island in time is only about 1/4, whereas the chance of reaching the Big Island in time is almost certain (let's say 1 to simplify the numbers, but it's the same with a bit less if we want to avoid probability 1). What to do? Well, in addition to the coin, Alice has her laptop computer. So, she runs a program that simulates a loaded coin that lands tails with probability 4/5, and she sends the ship to the Small Island if it's tails, to the Big Island if heads. Then everyone has a 1/5 chance of being rescued, so Alice did what she was morally required to do.
    But what if it's 1/10? Then she sends the ship to the Small Island with probability 10/11, and so on. In general, as the probability that the rescue of the single person on the Small Island would fail increases, the probability with which Alice ought to send the ship to the Small Island increases as well - which does look like doubling down on failure to me.
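
    To spell out the arithmetic behind these numbers (a sketch of my own; the function name is hypothetical): if the ship goes to the Small Island with probability q and that rescue succeeds with probability p, equal chances require q*p = 1 - q, i.e. q = 1/(1+p).

    ```python
    def small_island_weight(p):
        """Probability q of sending the ship to the Small Island that
        equalizes everyone's chance of rescue: solve q * p = 1 - q."""
        return 1 / (1 + p)

    print(small_island_weight(1/4))     # 0.8    = 4/5, as in the example
    print(small_island_weight(1/10))    # ~0.909 = 10/11
    print(small_island_weight(1/1000))  # ~0.999: ever more weight on the likely failure
    ```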

    Here's another variant, in which I do not know what this fairness approach says - but any answer is weird! This time, Alice first tosses the coin thinking the ship will make it to the island that she chooses, and it's heads - so, it's the Big Island. Great! The five will be rescued!
    Alas, a few seconds later and before she can send the ship, she learns that the chance of getting to the Small Island in time is only 1/10. Then what? Is she obligated to send the ship to the Big Island because it was heads before, saving the five? Or should she consider the new information and go for the procedure that sends the ship to the Small Island with probability 10/11? Something in between?

    1. That's an interesting criticism.

      My first thought was that the case could be a problem for the coin view but not the lottery view. But then I was thinking that the coin view might be able to handle it as well, and what your case really shows is that my post doesn't accurately capture the motivations for the coin view.

      Let's do the lottery view first. Timmermann would recommend that we hold a lottery among the six people. If the one is chosen, then obviously she will request that we try to save her, no matter how rough the waters. If any of the five is chosen, then we will save the five. This won't maximize expected lives saved, but it seems like a reasonable result, given Timmermann's motivations.

      But then I was thinking... maybe Taurek could say something similar on behalf of the coin? He could say: Here we have two groups that want me to do different things. There's no reason to suppose that the larger group's claim is stronger. So we might as well use a coin. I'll flip a coin and do whatever the winning side wants.

      This doesn't ensure equal chances of rescue for all parties. But Taurek might point out that it would be unreasonable to expect to be given an equal chance of rescue once one is stranded behind such rough waters.

      What this reveals, though, is that what's motivating Taurek's view isn't really a desire to equalize probabilities of rescue (as I intimated in the post), but rather to have a way of adjudicating their competing claims that doesn't simply assume, without argument, that the claim of the many trumps the claim of the few.

      I don't think that this reconstrual of their motivations should affect my criticism of the view, though, since in the ex ante case, everyone wants the same thing, and unanimity seems to be a pretty ideal and unobjectionable way of adjudicating the claims of the six.

    2. I tend to agree that your ex ante case is not negatively affected, though I think it might be positively so. For example, consider Jack's reply that if "there is a fact of the matter about who will be on the small island", one should randomize - a reply that rejects going to the big island without randomizing. But under this reconstrual of Taurek's motivations, what the six unanimously would want you to choose ex-ante trumps facts of the matter about who will be on the small island, and so that would seem to block Jack's reply, if I'm reading this correctly (of course, it might be objected that the six would not want you to do that, but that seems very improbable to me...).

      That aside, I think that, while this alternative motivation avoids the 'doubling down on failure' objection, it creates other difficulties for the Taurekian (in addition to the strengthening of your ex ante case, if I'm reading this right). For example, here the Taurekian mentions the reasonableness of the one's expectations. But one might similarly say it would be unreasonable to expect equal chances of rescue once the alternative to saving you is to save the other five. Why? Because that is not how humans usually behave, as one can tell by plenty of observations (and if the Taurekian denies that, then the view is subject to empirical falsification).

    3. Good points all around. Admittedly, the ex ante unanimity of the six is partly explained by their ignorance. If what matters are their fully informed preferences, then there isn't unanimity in the relevant sense. It would then still be open to Taurek to say that 1 claim against 5 is a wash, and so the coin is called for (not because it gives equal chances, but because it's a way of respecting incompatible claims, which are equal or incomparable). For this reason, the example Jack and I discussed at the end of our exchange might be especially useful in responding to this one particular development of the Taurekian line.

      The second point is a really interesting one. On the one hand, Taurek clearly cannot say chances of rescue must be equal no matter what (if we can save X but not Y, such a view would tell us to save neither to equalize their chances at 0). But if the roughness of the waters counts as a relevant consideration, why not the numbers on either side? I don't doubt that there might be something he could offer here, but it seems like a line of thought worth pursuing further.

    4. Good points; with respect to your example with the sequential choices, I've been thinking about it, and I might be missing something, but I don't see any good answer for the Taurekian: each potential answer seems to run into some trouble.

      In re: actual vs. fully informed preferences, would the answer be the same if they all - rationally - signed a document and/or stated beforehand that they want the ship to be sent to the Big Island?
      Either way, it seems to me this raises a public policy worry:
      Let's say Alice is the mayor of city A. She was elected by a majority. To avoid objections involving promises, let's say she promised to govern to the best of her abilities - not to hold a referendum to see what the majority wants in each case! So, anyway, she sees that there are a number of problems with the police department. Most people want police reform, which would save some lives. A tiny number want police abolition. However, those are their choices resulting from things like limited information and irrationality. If fully informed, most would still want police reform - but not those who would be killed by police negligence or malice, or even unfortunately by police bullets if the police are just doing their job and get engaged in an intense firefight.

      So, does Alice have an obligation to toss a coin, in order to give equal weight to the preferences of those who would be killed by police, and not due to their own faults?

      Or let us say Alice is the president of country A, and country B launches an invasion of A. Should she toss a coin to decide whether to fight back or surrender without fighting? After all, some civilians (in each country) will die if A fights back, but wouldn't otherwise, and vice versa.


      By the way, after further consideration, I think the Taurekian variant consisting in respecting incompatible claims to the same degree doesn't escape the 'doubling down on failure' objection without falling to a similar problem:

      Set SSI := success of the rescue on the Small Island, and FBI := failure of the rescue on the Big Island. Suppose P(FBI)=0 (or 0<P(FBI)<1/1000000000 if one does not want P=0). This view says (if I understand it correctly) that for every n, if P(SSI)=1/n, one still has an obligation to send the ship to the Small Island with probability 1/2. Now presumably, if we had P(SSI)=0, then it would be obligatory to pick the Big Island (with probability 1, or as probable as one can)... yet one should send the ship to the Small Island with P=1/2 if there is some nonzero chance - no matter how remote - of rescuing the person on the Small Island. That seems pretty odd.

      An alternative would be: for some very large n, with P(SSI)=1/n, one ought to send the ship to the Big Island rather than randomize with 1/2 probability. But then, regardless of whether the transition happens at a precise n (knowable or not) or there is no objective fact of the matter as to the specific n, we get a big jump from giving equal respect to the claims of the five and the one (thus sending the ship to the Small Island with P=1/2) to giving zero respect to the claim of the one (thus sending the ship to the Big Island), all happening when P(SSI) is extremely low, and without an account as to why this jump happens. I think the Taurekian could offer a fix, but it would require some argument - and the reply might raise other concerns.
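
      Here is a tiny sketch (my own, with a hypothetical name) to make the jump vivid:

      ```python
      def small_island_weight_equal_claims(p):
          """The 'equal claims' variant: flip a fair coin whenever the one
          has any nonzero chance of rescue; otherwise head straight for
          the Big Island."""
          return 0.5 if p > 0 else 0.0

      print(small_island_weight_equal_claims(1e-9))  # 0.5: still a fair coin
      print(small_island_weight_equal_claims(0.0))   # 0.0: a discontinuous jump at zero
      ```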

    5. Excellent comments. Admittedly, Taurek does point out that if the rescuer is in a position of social responsibility, like a lifeguard, then he/she might have a quasi-contractual obligation to save the greater number, since that's in effect their job. He doesn't intend for his view to cover such cases. This clarification may not completely resolve the issues raised by your mayor case, but it complicates them somewhat.

      I did find your second worry to be really troublesome for the Taurekian view. I was under the impression that the best way forward for the Taurekian was to say that both people's claims count, so long as they have some chance of being saved, and so a coin should be used. But you're right: this leads to a sharp line between cases that don't seem to deserve one. Moreover, if two people are in trouble, there's always at least *some* chance that my decision will cause a successful rescue—maybe aliens are monitoring me and will suddenly materialize if I *try* to save the person who seems otherwise unsaveable. In other words, the chance of rescuing someone is never truly zero... which means I'd have to flip a coin no matter the circumstances.


    6. Nice points!

      In re: public policy, that's fair enough, and I agree Taurek's point complicates the objection, but I think a case can be made as follows:

      The year is 2070. Instead of a Small Island and a Big Island, we have a Small Space Station SSS and a Big Space Station BSS. 5 people are stranded in the BSS, 1 in the SSS. Both are damaged, so they don't have much time left. We have two scenarios:

      S1. Alice is a gazillionaire and has her own self-flying spaceship. She can choose to send it to the SSS or BSS, or randomize.

      S2. In Country A, thanks to advances in encryption, they have a secure online voting system. Every eligible voter has a cell-phone-like device (just much more advanced and with more functions). The government provides a basic model free of charge for anyone who wants it; better models can be bought, and all models are good enough for voting. Using this capacity, Country A has implemented a system of partially direct democracy, by which many decisions are regularly and quickly put to a vote (by the president or by 10% of eligible voters), and people can choose whether to vote and how every day. There is only one ship that can reach either space station in time, and the matter is put to a vote. Bob is a citizen of Country A.

      Q1. In S1, is it permissible for Alice to choose BSS without randomizing?
      Q2. In S2, is it permissible for Bob to choose BSS without randomizing?

      The Taurekian (this variant at least) answers Q1 with a 'no', apparently because that would be unfair to the person on the SSS. But then, the same would seem to apply to S2. After all, if it's unfair to pick BSS without randomizing in S1, it's also unfair in S2. And it seems very odd that it would be impermissible for Alice to choose BSS without randomizing but permissible for Bob. So, if the Taurekian answers Q2 with a 'yes', one would ask: What is the relevant difference? Bob has no official job, and has not been elected. Granted, arguably any voter is in a position of some social responsibility. However, this raises two issues:

      a. Would an individual voter be in a position of greater social responsibility than a person who can bring about on average much bigger social effects than a single vote, even if by private activity?
      b. Why would a position of social responsibility make it permissible to make an unfair choice that would otherwise be impermissible?

      To answer Q2 with a 'yes', the Taurekian would have to answer a. with a 'yes' as well, and give some answer to b. Maybe the Taurekian can meet these two challenges, but it does not look easy to me.

      On the other hand, answering 'no' to Q2 leads to the following problem: back in say 2022, there is a referendum: police reform vs. police abolition. Is it permissible not to randomize one's vote? The Taurekian would have to say 'yes' (a 'no' would be too big a bullet to bite I think, as it leads to randomizing all sorts of votes), but then one asks: what's the relevant difference? The Taurekian might say one is ex-ante, the other ex-post, but then that would mean ex-ante it's permissible not to randomize regardless of whether there is a fact of the matter as to who will be harmed. And then, why not ex-post?

    7. That's an interesting line of argument. Suppose there are three of us on Alice's self-flying spaceship, and we vote to determine which space station to visit. I assume Numbers Skeptics would say that we are not required to vote to send the ship to the BSS.

      So if there is a difference, it probably isn't coming from the presence of voting per se, but the fact that, in a democracy, there are certain rights and responsibilities incumbent on each voter, which might involve something like promotion of the public good—even though, yes, your typical voter has far less control over the outcome than pilots of self-flying spaceships do.


    8. That's an interesting variant. I was considering the Number Skeptic who says it's obligatory for Alice to randomize, but to be fair, Taurek does not seem to hold that randomizing is obligatory, only that it is permissible. So, it seems the Number Skeptic would say it's permissible for Alice, or for three people on the spaceship. But what if Alice controls the ship and calls for a vote among the population (and advertises it sufficiently)? Would there be a difference between the vote organized by the government and that organized by Alice?
      There is a difference in who owns the resources for the rescue, and the arguments Taurek uses to support the distinction between private individuals and people with official jobs seem to be based mostly on the fact that the latter are using a resource that is not theirs, and should use it in the way that those who own the resource want. But the voters are those who own the resource and are deciding how to use it. My impression is that this sort of argument would support the permissibility (but not the obligatoriness) of randomizing the vote in the government-organized referendum too.

      Anyway, I was trying to use this line of argument to boost the ex-ante case by ruling out one of the responses, but it seems it would require further arguments, so I'll leave it at that; nice post and arguments.

