Monday, November 24, 2008

Morality in a Multiverse (revisited)

Chad Orzel argues (ht) that 'multiverse' type hypotheses (e.g. suggesting that every possible cosmos exists concretely) have no moral implications:
Knowing that some alternate-universe version of me will kick a puppy doesn't make it all right for me to kick puppies, any more than knowing that George Bush is down with waterboarding makes it all right to torture prisoners. The right thing to do is the right thing to do, regardless of what anybody else does or doesn't do, in this universe or any other.

This argument suffers from the twin flaws of (i) resting on a blatantly false general principle, and (ii) badly misunderstanding the opposing arguments. I'll tackle the latter point first.

Nobody's arguing that it's okay to do bad things just because someone else does. It's bad to torture people, even if GWB does, for the obvious reason that mimicking his harmful actions will cause even more harm to result. But in the multiverse case, we are supposing that every possibility will be realized exactly once. This leads to a vital point of disanalogy. We are supposing that my otherworldly counterpart will kick a puppy if and only if I don't. So whether I kick the puppy or not makes no difference to the total outcome. Whatever I do, the multiverse as a whole will contain exactly the same number of kicked puppies. That is why, even as a concerned advocate for the welfare of puppies everywhere (and not just in my corner of the multiverse), I have no reason, on the multiverse hypothesis, to refrain from kicking puppies. It's a consequentialist argument, not a "But if Georgie did it why can't I?" whine.

As for the general principle: others' actions are in fact highly relevant to determining what one ought to do. This is because the ultimate outcome of my actions may depend in part on others. For example, if my neighbour is suffering from a rat infestation, I might well offer him some rat poison. But this is no longer so advisable if I learn that he desires to poison his wife. So it's just false to claim that right and wrong are determined "regardless of what anybody else does or doesn't do". Other actors can affect the downstream consequences of my actions, and downstream consequences matter.

Although Chad's stated argument fails, perhaps the underlying idea can be saved. In a non-consequentialist spirit, we might claim that one ought to be virtuous, even if this does no good. For example, some have claimed that we shouldn't invest in unethical industries (tobacco, etc.) even if our abstention drives the price down, causing others to invest correspondingly more in our place. To abstain in this way is to be what Vera Bradova calls a 'Moral Dupe': "one who makes sacrifices on behalf of his/her own conscience... and in so doing aids and abets those who do not."

My views on this haven't significantly changed since writing my earlier post on this topic, so I'll just echo the concluding passage:
The most plausible ethical views allow that actions have other morally relevant features besides consequences, but nevertheless recognize that these other features are ultimately grounded in consequential concerns. Ethics is important because (we assume) our choices can affect others and change the world. If this assumption is false, as modal realism [or the multiverse hypothesis] would have it, then our decisions - and hence the norms governing them, i.e. ethics - are inconsequential, in the most derogatory sense. To care about ethics even when it makes no difference would be arbitrary and fetishistic.

The only escape, I think, is to embrace a more 'partialist' value theory, and say that what matters are the consequences around here, never mind the puppies over yonder. I don't think much of that option, though.


  1. This is a problem for causal decision theory, not consequentialism.

  2. Really interesting post.

    The "multiverse" theory is the most extreme contradiction of Occam's razor I could possibly imagine. You explain away problematic features of this universe by positing an infinite number of alternative universes. And that's a parsimonious solution?

  3. It is important to note the difference between many worlds quantum mechanics and modal realism. It looks to me like you are only talking about the latter, as you say every possibility is instantiated once, rather than that many possibilities are instantiated according to a probability measure. I'm not sure if your argument would work as well against the many worlds interpretation and I would suggest not saying 'multiverse' without explaining which type.


    It depends on how you interpret Occam's razor. I think that plausible versions all concern complexity of the base theory rather than number of objects. The move to the astronomically large universe rather than the ptolemaic model was correct on plausible versions of Occam's razor and so would 'multiverse' theories if they are less complex theories.

  4. Anyone read Quentin Smith's paper on this: “Moral Realism and Infinite Spacetime Imply Moral Nihilism”?

  5. Georges - to reiterate Toby's point: once we've posited electrons, we don't think it matters (to Occam's Razor) how many tokens there are of this type. Why should the principle apply differently to cosmii?

    Toby - sure, though I thought I was pretty explicitly discussing the hypothesis that "every possibility will be realized exactly once". There might be other views in the vicinity which don't claim this, in which case the argument is moot. But I read Chad as dismissing the moral significance of all multiverse-type hypotheses, and not just the ones (if there are any) on which it turns out the ultimate consequences are contingent on our choices after all.

    Se7 - interesting link! QS's argument is far more radical (and, I think, implausible) than the one I've discussed here. He thinks we can derive a form of fatalism even if our actions really do affect the ultimate outcome, so long as either option leads to infinite value. I disagree, because I think one world containing infinite value may yet be judged better than another by means of dominance reasoning (e.g. if some moments or parts of the world are better, and none are worse). So I reject that aspect of his value theory. He also seems to think that every inch of inanimate matter has positive intrinsic value, which I think is even more absurd.

  6. Perhaps I'm not following this as closely as I should, but I think the multiverse idea is irrelevant to morality because there are no consequences caused by it.

    If you refrain from kicking the puppy that doesn't cause 'evil you' to kick the puppy in the other universe. 'Evil you' kicks the puppy in the other universe, you still kick or not-kick the puppy here.

  7. Boonton -- the particular multiverse hypothesis I have in mind is the claim that every possible cosmos is realized exactly once. It's not a matter of "causation", exactly, but if it's necessarily the case that exactly one Richard-counterpart will kick a puppy, then note the following two implications:

    (i) If I kick the puppy, my counterpart won't.

    (ii) If I don't kick the puppy, my counterpart will.

    Again, it's not exactly a claim about causation (in the usual sense). But the practical upshot would seem to be much the same, no?
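The invariance being claimed here can be put in a line or two of code (a toy sketch with an invented helper, `total_kicked`, just to make the point vivid):

```python
# If exactly one of {me, my counterpart} kicks the puppy, the
# multiverse-wide total is fixed at 1 regardless of my choice.

def total_kicked(i_kick: bool) -> int:
    counterpart_kicks = not i_kick  # the counterpart does the opposite
    return int(i_kick) + int(counterpart_kicks)

assert total_kicked(True) == total_kicked(False) == 1
```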

  8. Richard, there is an interesting ambiguity in the description. If you take multiverses to be Lewis-style, causally isolated universes, then each universe is a distinct world. That (strangely) seems to matter more to one's deliberation about what to do. But if the multiverse is the set of all possible universes in a single (perhaps loosely) causally related world, then it matters much less what others do. We are all in the very same world; indeed, in the only possible world. Unlike the case of independent worlds, there is nothing any of us can do to increase or decrease the value of our particular world. It stays the same. Everything that could happen, does happen (in some universe or other), so our world is fatalistic: it is true here that p entails Np, for all p. (suppose for reductio that p and M~p. That's true iff there is some possible non-actual world in which ~p is true. But there is no such world, by hypothesis). So deliberation seems less important. Whatever does happen in our world, must happen, so there is little reason to worry about what I or others do.
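The embedded reductio can be spelled out line by line (a sketch only, keeping the comment's N/M notation for necessity and possibility):

```latex
% Hypothesis: every possible universe is contained in the one actual world.
% Claim: for all p, p -> Np.
\begin{align*}
&\text{1. } p \wedge M\neg p &&\text{assumption, for reductio}\\
&\text{2. } M\neg p \text{ iff } \neg p \text{ holds at some possible, non-actual world} &&\text{semantics of } M\\
&\text{3. No possible world is non-actual} &&\text{by hypothesis}\\
&\text{4. } \neg M\neg p &&\text{from 2, 3}\\
&\text{5. Contradiction with 1; so } p \to Np &&\text{discharging the assumption}
\end{align*}
```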

  9. Great stuff. I think this is a difficult problem for modal realism. Could you say more about your views on the following Richard?

    1. Partialism. Any decent ethics will permit me to save my Mum before a stranger, and plausibly not just because of downstream advantages of closer family ties and the like. So why not save worldmates? I am attracted to a concentric circles view: other things equal (which in the case of colonial histories etc. will rarely be the case) we should care for family first, friends second, acquaintances third, co-nationals fourth, co-earthians fifth. Or rather a more nuanced version of the same idea. Why stop at worldmates? Well, I don't suppose we should. We are free to lament the cruelty of the principle of recombination.

    2. Selfish reasons. One should help others, of course, but one should also help oneself. Suppose it doesn't matter to the multiverse as a whole whether I am well-behaved or not, or whether I am closer or further away from the flourishing people. Still it might matter to me. That's a reason - an ethical reason, in the sense in which ethics is broader than morality - for me to be good, if being good will be good for me. (Combine this with the first point to preserve morality.)

    3. Non-consequentialism. Surely there is some intrinsic value in the performance of the virtuous actions and the possession of virtuous character. If consequentialism is true, it matters not at all whether I perform these actions and have this character, or you do. But surely this can't be right. There seems to be a difference here between the first and the third person perspectives. Perhaps it doesn't matter much whether Tom or Jerry is the virtuous one, but it matters whether Tom is or I am. (Recall Gerry Cohen's discussion of the uncomfortable bourgeois arguments for incentives.)


  10. "Boonton -- the particular multiverse hypothesis I have in mind is the claim that every possible cosmos is realized exactly once. It's not a matter of "causation", exactly, but if it's necessarily the case that exactly one Richard-counterpart will kick a puppy, then note the following two implications:"

    This, I think, can be examined without the multiverse. With 5+ billion souls on earth someone, somewhere is going to kick a puppy. Given that a puppy is going to get kicked, should you kick one too? It would seem the answer is still no as you have the ability to control only your behavior, not everyone else's whether they exist in this universe or others.

    If all possibilities exist then this is really an argument against free will. Every possible you exists, ranging from perfect you to perfectly evil you and trillions of in-between yous. If you kick the puppy you are simply demonstrating that we are in one of the middle universes with an imperfect Richard. If you refrain we may hope we are in the universe with the better Richard.

  11. This comment has been removed by a blog administrator.

  12. Off-topic comments deleted, as per my comments policy.

    Boonton - see the GWB analogy I discuss in the main post. To get a truly analogous one-world case, we need to suppose the ultimate outcome is known (with absolute certainty) in advance. Time travel provides a nice example: if I know that Puppy was kicked exactly once (at time t), and then I travel back in time to just before t, is it wrong for me to kick Puppy? Well, we know history has already settled that Puppy is kicked exactly once: the only remaining question, from our epistemic position, is whether Puppy gets kicked by me or by someone else. That doesn't seem like a difference that matters, at least so far as Puppy's welfare is concerned. The expected outcome is the same whatever I choose.

    Barry - I'm not so taken with the idea that moral truths are beholden to our preconceived notions of what's "decent". But now that you mention it, I think even partialists shouldn't care especially about world-mates. This is because the most plausible version of partialism allows us to grant moral weight to personal relationships (in proportion to their strength, perhaps), but mere geography doesn't ground relationships (in the relevant sense). So all strangers should fall in the same circle. There is not the faintest (non-instrumental) reason to favour a co-national you've never met over an otherwise similar person who lives a few miles further distant. Or so I'd claim.

    Perhaps in addition to personal relationships, we can care about people who have certain qualities -- e.g. who share our culture and values. But again this is not a consideration that distinctively privileges our worldmates over their close counterparts.

    On non-consequentialism, see the quoted passage in my main post. There are typically instrumental reasons to care especially about our own characters. But once we stipulate all that away (as in the multiverse hypothesis) I can't see any further reason to care about such things -- wouldn't that be fetishistic?

  13. Thanks for your reply Richard. By 'decent' I meant to invoke something like Frankena's maxim, elusive though it is to spell out. We get data for ethical theorising *somehow*, and robust intuitions of some sort or other must be methodologically admissible. I have a robust intuition that an ethics which didn't permit me to save my Mum (or yours) before a stranger would be ipso facto less impressive. But this is a little orthogonal.

    More interestingly, if we grant some partiality, there is the question of in virtue of what, exactly, partiality is justified. You're right to say it is surely not just geography. I don't know exactly what to say here. As you suggest, involvement in a shared enterprise of value- and norm-creation is surely a good candidate. Mutual welfare affectance is another. Merely sharing a relevantly similar existential predicament, I reckon, would be enough as well. These are all reasons to care about what happens to others, and for them to care about what happens to you. My concentric circles metaphor is less helpful when we look this closely, for we cannot easily rank cultural involvement, welfare affectance, shared predicament and such like in a way which makes it clear who we should care for the most, who the least, and so on. But we needn't work it out precisely here. Intuitively we can make out some sort of structure.

    I don't want to rest my case on the specifics, which are up for grabs. But for illustration a couple of examples which I would assent to are the following. I think some sort of political republicanism is attractive, according to which states are manageably small, in which civic participation is encouraged, trappings of nationalism such as national orchestras and sports stars and so on are supported, etc. To some extent such countries as New Zealand and Scotland approach this ideal today. To the extent that they do, and perhaps a little more, and other things being equal, one would have obligations to one's fellow Scot or New Zealander before a non-national. Of course other things will rarely be equal, and the practical significance of nationalism in today's world will be at best slight. But I think the theory holds all the same. Another example: before the world got as small as it is today, perhaps the Papua New Guinean really didn't have reason to favour the Colombian over the Next-Worldian. But perhaps in an attenuated sense, he did, since for example such counterfactuals would be true as that were the world under attack and the two were somehow provided with the means, they would each have an interest in the other's preservation. I'm struggling, but I think the theory is safe enough. When we get to the current day, our mutual affectance is such that caring for New Guineans to some extent really is caring for oneself.

    Then there is an argument from demandingness to lower the amount of consideration due to those who get, as it were, further away on whatever scale we manage to come up with.

    So: there are various candidate reasons to care about worldmates. The most significant difference between worldmates and the next-worldians is causal (dis)connection. On the Lewis story there isn't any causal connection between us and them. So all we've got is sympathy for shared predicament. That's not nothing, but given demandingness and our small number of ethical resources, perhaps in practice we need give the next-worldians no regard whatsoever.

  14. Hi Barry, I worry that most of those considerations you point to have an ultimately instrumental (indirect utilitarian) basis, and so no longer apply on the assumption that the same multiverse will result no matter how we act in practice. (Again, see my quoted remarks on the ultimately consequentialist basis for superficially non-consequentialist principles.)

    The exception is insofar as we may have a fundamental, non-instrumental bias towards ourselves and loved ones. (This also relates to the 'selfish reasons' section in your earlier comment.) So let me point out that this kind of partiality + multiverse has potentially quite repugnant implications...

    Although the having of virtuous character might be a pro tanto benefit to one, it seems clear that it could be outweighed by other benefits. So now suppose that you (and those close to you) can obtain a net benefit to well-being by imposing massive costs on those you're less partial to. (Suppose you can enslave the population of Texas, or some such.) Partiality would seem to imply that you have most reason to go ahead and do this: the impartial outcome is unchanged (across the multiverse), but you get to switch those who matter most to you (your this-worldly self and loved ones) into some of the best welfare positions, at cost of switching the positions of this-worldly Texans with some of their worse-off other-worldly counterparts -- a difference you (fundamentally) care much less about.

  15. This seems to raise very general issues about free will and moral responsibility, not specific to modal realism. It's very tempting to read the modal realist as committed to thinking that, given that you live in a certain possible world, there is only one course of action you can take: sentences beginning "if I were to do otherwise..." will have impossible antecedents. This could be true even for those PWs where some things aren't determined by the whole past. On the other hand, maybe facts are indeterministically assigned to PWs as they evolve, but then we're into questions about how indeterminism could support free will.

  16. Imagine that the different possible worlds are different possible starting positions of a very large finite deterministic square 'Game of Life' realm. Many of these starting positions result in sapient Turing machines, of which you happen to be one.

    Now you are deciding whether or not to painfully torture and eat a crying baby (running on GoL physics, of course) you happened across at the roadside. If your internal structure is such that you will do so (or won't), then by deterministic Game of Life physics all identical structures (elsewhere in your world, and in other possible worlds/starting-condition consequences) will also do so. In other words, if it's the case that you will eat the baby then the multiverse has lower total value than it does if you won't. But whether you will eat the baby or not is a necessary, logical truth that could be calculated using the Game of Life rules.

    But this is just the usual problem of morality in a deterministic universe. Do you think that determinism renders decision theory and morality irrelevant?
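The comment's point that identical structures behave identically under deterministic rules can be illustrated in code (a toy sketch, not from the original; the glider pattern and ten-step horizon are arbitrary choices):

```python
from collections import Counter

# In a deterministic Game of Life, two structurally identical patterns
# evolve identically: the rules plus the initial state fix "what you
# will do".

def step(live):
    """Advance a set of live (x, y) cells by one Game of Life step."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is live next step iff it has 3 live neighbours, or
    # 2 live neighbours and is currently live.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
copy_a, copy_b = set(glider), set(glider)
for _ in range(10):
    copy_a, copy_b = step(copy_a), step(copy_b)

assert copy_a == copy_b  # identical structures, identical futures
```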

  17. Carl - no, I don't see how your example relates to my discussion, unless every possible starting position is guaranteed to be instantiated exactly once (which is not at all the "usual problem" of determinism).

  18. If I live in any particular universe, then my actions reflect both contingent and logical truths. The contents and value of the multiverse as a whole (with each possibility realized once) depend on various logical truths that are unknown to us. The particular world in which we find ourselves is a contingent truth.

    Now, a logically omniscient being would know the exact value of the universe (in utilitarian terms), and in that sense the amount of value is fixed. But I don't know the logical truths.

    I may know that IF I take a certain action then the value in the multiverse will be higher than if I don't, which would give me a reason to do so under Evidential Decision Theory (or a superior alternative that avoids all the standard objections while retaining all the usual advantages), but I can't *cause* the logical truths to be different.

    Now consider a universe with reversible deterministic physical laws, which is the only actual universe. My actions reflect a mix of contingent and logical truths. The contingent truths are the laws of physics and the contents of the universe (at some particular time, since the states of the universe before and after any particular time can be deduced from the laws and that state), while the logical truths are mathematical and logical principles with which we could calculate past and future states from the laws and current state.

    Since *I* is a rigid designator attaching to me in this particular deterministic universe, it is either logically necessary that I will eat the baby, or logically necessary that I won't. I can't *cause* mathematical and logical facts to change, but if I don't eat the baby then I will get good news about those logical truths, and if I do eat the baby I'll get bad news. Under Evidential Decision Theory (or a superior alternative) I have a reason not to eat the baby.
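The "good news" reasoning in this comment can be sketched as a toy calculation (illustrative numbers and world-list of my own devising, not the commenter's): evidential decision theory ranks an act by expected value conditional on the act occurring, not by what the act causes.

```python
# Toy sketch of evidential decision theory: an act's "news value" is
# the expected value of the world conditional on that act occurring.

def edt_value(act, worlds):
    """Average value over the epistemically possible worlds in which
    the agent performs this act."""
    values = [v for (a, v) in worlds if a == act]
    return sum(values) / len(values)

# Epistemically open (act, total value) pairs: the unknown logical
# facts either make me refrain (high value) or eat (low value).
worlds = [("refrain", 100), ("eat", -100)]

# Refraining is "good news" about the logical truths, so EDT favours it.
assert edt_value("refrain", worlds) > edt_value("eat", worlds)
```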

  19. "I may know that IF I take a certain action then the value in the multiverse will be higher than if I don't"

    No, we can know the total outcome will be the same no matter which decision I make, since I know my counterpart will make the opposite decision.

  20. "No, we can know the total outcome will be the same no matter which decision I make, since I know my counterpart will make the opposite decision."
    You're switching between the question of which possible world we're in and the question of what worlds are possible, i.e. what the contents and value of the multiverse are. The opposite decision may be logically impossible, or may occur in a greater or lesser number of possible worlds.

    I can have partial knowledge about mathematics, so that I know that either both conjecture X and conjecture Y are true or both are false. I might also be able to prove that, if X is true then the set of all possible worlds will include some amount of happiness, but if X is false then the set of all possible worlds includes less happiness. Finally, I might learn that if Y is true then I will take some action (or most of the entities in my epistemic position across the multiverse will). Under evidential decision theory, that gives me reason to take the action.

  21. "Ethics is important because (we assume) our choices can affect others and change the world. "

    Suppose that I make a thousand exact copies of you in a deterministic universe (maybe a cellular automaton world), and put all 1,001 in boxes. It's logically impossible for you to act differently from your duplicates, given these physics. In front of each of you is a button. If at least 200 of the copies press their buttons, then trillions will experience blissful lives, but if fewer than 200 press their buttons then trillions will be horribly tortured.

    Do you have a reason to press the button? No matter what you do, your button-pressing won't cause the outcome to be better.
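Since exact duplicates in a deterministic world must choose alike, only two outcomes are reachable here, which a minimal sketch (using the comment's numbers) makes explicit:

```python
# The 1,001-duplicates case: deterministic exact copies all mirror my
# choice, so the only reachable outcomes are "all press" or "none press".

COPIES = 1_001
THRESHOLD = 200  # presses needed for the blissful outcome

def outcome(i_press: bool) -> str:
    presses = COPIES if i_press else 0  # duplicates act exactly as I do
    return "bliss" if presses >= THRESHOLD else "torture"

assert outcome(True) == "bliss"     # pressing guarantees the threshold
assert outcome(False) == "torture"  # refraining guarantees failure
```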

  22. Button-pressing or lack thereof.

