Friday, April 21, 2006

Multiversal Ethics

If modal realism is true (i.e. all possible worlds are equally real) then choice is meaningless. You can't change what happens, but only who it happens to. It is already fixed that every possibility will occur in some world or other. If we have choice at all, it is merely that we get to choose which world is ours.

Perhaps I can make it so that I am the good guy rather than the bad guy. But this makes no broader difference. Others will be helped and harmed (respectively) either way. I just get to choose which of 'me' and 'my counterpart' gets assigned to play which role. Put another way, I have various counterparts performing various actions, and I get to choose which one of these guys to locate myself within. This cannot be a morally significant decision. It makes no difference to the worlds. It's like choosing which movie to watch. We don't really influence the events therein. We merely choose which events to view, which pairs of eyes to see out of. The spectator's self-locating decision affects no-one but themselves.

David Lewis denies that his modal realism has such ethically repugnant implications. He claims that we should only care about our own world, and making this a better place. But apart from the crude tribalism, it's not as if we can change the world itself. Each world is what it is, and according to modal realism they are all equally real. So all we can do is change which world is ours. We can make "this world" a better place de dicto, by changing which world 'this' refers to. We don't thereby change the world itself (de re). As I wrote once before:
All we'd be doing is moving ourselves so that we were 'closer' to the well-off people rather than the suffering ones. And that hardly seems like a virtuous move (by my intuitions anyway).

Jeremy doesn't see it this way though:
If my action causes the bad, then I am to blame. If it causes the good, I'm to be evaluated positively. So if I'm the one who does the bad thing, and some duplicate of me in another part of the multiverse does the good thing, then I'm to blame and he is to be congratulated. So my not doing an action might logically (but not causally) entail someone else doing it, and it would lead to a result that's exactly similar to what happens if I do the other action. But that doesn't mean my action isn't bad if it's the bad one. In the world where I do the bad action, my action is bad. In the world where my duplicate does the bad action, his action is bad. The total amount of good or bad in the universe is irrelevant to whether my action right here is bad.

That means we should blame the one who does the bad thing, even if every choice leads to the same resulting future once you factor in the whole multiverse. So even if this view, for which we have no evidence, is true, moral evaluation still makes sense.

By appealing to our standard moral practices of praise and blame, I think Jeremy fails to fully take on board the radical implications of modal realism here. Sure, we can still distinguish the good and bad actions. But an agent's choice between them has no significance, for the reasons explained above. The actions will all take place regardless of the agent's choice. So it makes no sense to say that he "should have done otherwise". He did! (At least, his counterpart did.) At worst you might criticize an agent for the life he chose to locate himself in. Perhaps he has poor aesthetic taste, preferring to experience the bad actions rather than the good ones. But again, it's not as if his self-locating/spectative decision impacts upon anyone else.

Worse, you might have suspicions about these "locative powers" according to which agents have the semi-magical ability to influence in which world their consciousness resides. Perhaps the most coherent interpretation of modal realism would simply deny that the "agents" in each world have any real choice at all. And then moral evaluation goes right out the window. (We can't even justify blame on pragmatic grounds, since nothing can influence multiversal consequences.)

Finally, Jeremy suggests that non-consequentialists are unaffected by the argument:
The argument is that the net result wouldn't be any different if I did an action usually considered better [or] worse, and therefore the action isn't really better or worse because the consequence of either action would be the same. If consequentialism is true, then the only morally relevant features of the action are its consequences, but anyone who denies consequentialism isn't going to buy this.

Again, this seems to underestimate what's going on here. The most plausible ethical views allow that actions have other morally relevant features besides consequences, but nevertheless recognize that these other features are ultimately grounded in consequential concerns. Ethics is important because (we assume) our choices can affect others and change the world. If this assumption is false, as modal realism would have it, then our decisions - and hence the norms governing them, i.e. ethics - are inconsequential, in the most derogatory sense. To care about ethics even when it makes no difference would be arbitrary and fetishistic.


  1. But if actions ought to be evaluated ethically not for their actual consequences but for the consequences the agent can reasonably foresee (as I understand is the case in the most popular forms of consequentialism), then what is the problem? We cannot know if the multiverse of all possible worlds really exists, and any agent who just assumed it existed and therefore didn't care about the consequences (in his world) of his actions would be acting wrongly. Even if God looked from above and knew for a fact that the multiverse existed, he could still morally blame an agent for acting in ways that, for all the agent knew, could have had very bad consequences (in case the agent's world had been the only one, something the agent didn't have any way of knowing was not the case).

    Imagine I am given a gun and offered the choice to shoot at B. I don't know it, but X is watching me and if I choose not to shoot, he will kill B anyway. So my choice makes no difference to the outcome. Does it follow that I am not to blame if I shoot? Of course not; I ought to be blamed because I didn't know that the outcome would be the same. (A non-consequentialist like Jeremy would probably say that I am to be blamed even if I did know that the outcome would be the same, because the action itself is wrong. I am assuming consequentialism, though). Even if David Lewis gave me extremely clever philosophical arguments to persuade me that X will kill B in any case, I don't think I should trust them to the point of thinking it doesn't matter if I pull the trigger.

  2. Actually, I see modal realism as being even worse than you characterize it. It seems to me that given the strong modal realism you criticize, choices do not really exist at all. You choose to make this world better, but you also choose to make it worse in just as strong a sense. It would seem that an event can only be characterized as a choice if it involves the rejection of some alternative, but no alternatives are being rejected here. While it may seem from any perspective that some alternatives were rejected, they really weren't.

    We don't get to choose to see the world from a certain pair of eyes, but instead see the world from all possible individual sets of eyes. In every possible world in which we could exist, we do exist, and as such no choice is involved in the matter at all.

  3. I don't have the time at the moment to look carefully at your argument, but I do want to say one thing about how you frame this. The view I've been talking about is not Lewis' modal realism. Maybe it will end up making no difference with this issue, but Lewis very carefully distinguishes his view from the many-universes scenario that physicists are talking about. On that view, all the possible ways something can be are realized in a way that is causally connected with our spacetime continuum. The actual world, then, contains all the possible worlds that you might have thought there were. Then there are all sorts of other possible worlds that contain only certain parts of this actual multiverse, and those ones don't exist.

    Lewis' view, on the other hand, is simply that all possible worlds are concrete. They are not part of our multiverse (if indeed this is a multiverse). These things are not causally connected in any way. That's crucial for several things Lewis wants to say, so it's important to distinguish between the two views.

    I don't have any sense offhand if the difference between Lewis' view and the one I've been talking about will be relevant to what I want to say about ethics, but I think it's worth pointing out that it wasn't Lewis' view that I had in mind.

  4. Now that I've read the post, I have two more things to say. One is that you also seem to be ignoring individual obligations to individual people. Even if I've got a duplicate in another universe, I have no special obligations to those people, and I do have special obligations to my family. So even if you want to characterize my action of being faithful to my wife or providing sustenance for my kids as entailing that some duplicate does otherwise, it seems to me that my obligation to my own family takes priority. I should first meet my obligations to my own family, even if that entails (non-causally, I would say) that my duplicate treats his family terribly.

    My second point is hard for me to characterize exactly as I want to, but consider the following cases.

    Expanding Misery Case: One person starts with total misery, everyone else (an infinite number of people) with total pleasure. The total misery expands to more people, continuing to expand forever. No one decreases in misery. It just spreads to more people. No one dies. Once they get misery, they keep it forever.

    Expanding Pleasure Case: One person starts with total pleasure, everyone else (an infinite number of people) with total misery. The total pleasure expands to more people, continuing to expand forever. No one decreases in pleasure. It just spreads to more people. No one dies. Once they get pleasure, they keep it forever.

    Which case is better in utilitarian terms? The correct answer depends on something that I think we're disagreeing on. On standard utilitarianism, the Expanding Misery Case seems better. The world is constantly getting worse, but the total number of people without misery and with pleasure is infinite. When you take into account the sum total, you never reach a point when the amount of misery outweighs the amount of pleasure. In the Expanding Pleasure Case, the reverse is true. When the sum total is taken into account, the world is always in the negative, and it's an infinite negative.

    But I just think that's a refutation of utilitarianism as standardly construed. It seems totally wrong to think the Expanding Misery Case is better. I diagnose the problem as follows. Even with a hedonistic consequentialism, it seems to me wrong to care about some abstract total. What we should care about is the status of each person. In these cases, what you say about each person is exactly the opposite of what you say about the total. Each person will eventually get the misery or pleasure that is spreading, even if they don't have it yet. They'll have what they start with for a finite length of time, and then they'll have the other one forever.

    I don't think the kind of view we're talking about is analogous to that. But I do think your argument relies on the sort of thing that the wrong view on these cases relies on: the view that we should care about the total state of the world rather than the particular parts of the world relevant to us. If I pick any person, ask what's good for that person, and then sum up my answers, it gives the opposite result in the expanding misery and pleasure cases from what I get if I just ask about overall good or bad. Here, likewise, I pick any person and ask about the direct consequences of that person's actions, rather than asking about what's true of the world overall given one action or another. Once you move away from thinking about the overall world because of these cases, I think you have to stay true to that and ask what's true for a given person in a given context, and what the results of that person's particular actions are.

    In modal realism and in the many-universes model of the actual world, you don't have my actions causing my duplicate to do anything. You have my actions causing what happens around me. You also have my duplicate's actions causing what happens around him. My modified utilitarian method would then lead me to evaluate my actions according to how they contribute and my duplicate's according to how his contribute, and it simply doesn't matter how the world overall is affected (or in this case isn't). My actions get the positive evaluation, and my duplicate's get the negative, if I do the right thing, and vice versa if I do the bad thing.
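    The inversion in the two expanding cases (snapshot totals favouring the Expanding Misery world, while every individual's long-run welfare says the opposite) can be checked with a finite truncation. The sketch below is my own illustration, not anything from the discussion: it assumes N people, welfare of +1 or -1 per time step, and the spreading state reaching one more person each step.

```python
def snapshot_total(n, t, expanding="misery"):
    # Sum of everyone's welfare at a single moment t, in a population of
    # n people. The spreading state reaches one more person per step,
    # so by time t it has reached min(t + 1, n) people.
    flipped = min(t + 1, n)
    if expanding == "misery":
        return (n - flipped) - flipped   # pleased count at +1, miserable at -1
    return flipped - (n - flipped)       # expanding pleasure: signs reversed


def person_lifetime(i, horizon, expanding="misery"):
    # Person i's welfare summed over the first `horizon` time steps.
    # Person i keeps the initial state until time i, then flips forever
    # (person 0 has the spreading state from the start).
    start, end = (1, -1) if expanding == "misery" else (-1, 1)
    return start * min(i, horizon) + end * max(horizon - i, 0)


# Snapshot totals favour the Expanding Misery world...
assert snapshot_total(10**6, 1000, "misery") > 0
assert snapshot_total(10**6, 1000, "pleasure") < 0
# ...while each person's long-run sum says exactly the opposite.
assert person_lifetime(5, 10**6, "misery") < 0
assert person_lifetime(5, 10**6, "pleasure") > 0
```

    As the population and the time horizon grow, the snapshot totals diverge one way while each person's lifetime sum diverges the other, which is exactly the clash between the "abstract total" and the "status of each person" that the two cases are built to exhibit.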

    I'll have to think a bit more about the foundations of ethics before responding to your last paragraph. I'm not sure what I think about that. I don't generally like consequentialist foundations of ethics, though, even if it leads in practice to something non-consequentialist. I'm not sure what sort of view I do like, though. I don't think ethics is arbitrary. I'm not sure what I think of your fetishistic claim, but I'm not sure I easily will agree.

  5. Sorry, that was me. My wife's been using my computer while hers has been down, and she forgot to log out of my Blogger account.

  6. Hi Jeremy, I'm not sure if modal realism even allows us to improve the lives of particular people. That's the point of all my talk of "location", and improving the world 'de dicto' rather than 'de re'. Consider two wife-counterparts, A and B, where A is treated well by her husband, and B is not. Those are the facts, and nothing you do can change the circumstances of either A or B. At most, you can "relocate" your self so that you are A's husband rather than B's. But you still can't help anyone (and so, a fortiori, can't help any particular individuals whom you might have thought you had obligations towards).

    Now it may be that I'm just looking at modal realism in completely the wrong way; that we really can have a de re influence on our world, and not a merely 'locative' or 'de dicto' power to decide which world to make ours. But that would take some explaining, to show where my picture goes wrong.

    Note that I've actually discussed that Infinite Spheres of Utility puzzle before. The emerging consensus from discussion with Blar was that the two scenarios are simply incommensurable. I don't think the expanding pleasure is better -- after all, at any given moment, there are infinitely many people (real people! nothing abstract about them) suffering, and only a finite number of happy people. But anyway, that's discussed in the other thread so I won't repeat it here.

    Suffice it to say that it's certainly not a "refutation of utilitarianism". The puzzle isn't anything to do with moral theory. It's superficially an issue in value theory (which any plausible value theory will need to address), but is more deeply a mathematical problem (perhaps falling under "decision theory") arising whenever two infinite dimensions exhibit this sort of inverse relationship.

    Finally, you write: "it seems to me wrong to care about some abstract total. What we should care about is the status of each person."

    I think that's pretty well refuted by Parfit's identification of Badness Without Harm. It's wrong to ruin the environment for future generations, even if there's no particular person who is made worse off (because the alternative would be for different people to live better lives in their place).

    In any case, that's all moot if my first point holds.

  7. It's always good to hear some hardcore philosophy from a cabin mistress. ;-)

  8. If modal realism offended deeply enough our sense of agency and causal potency, would that count as intuitive evidence against it, or are such intuitions not relevant to this domain?

  9. I'm a bit suspicious about employing such intuitions. Like when people argue that determinism must be false because they intuitively feel like they have contra-causal free will. It's not as if the thesis is inconsistent with our having those intuitions. It's just inconsistent with the intuitions being true -- but what independent evidence for that do we have? (Though it's surely better to be intuitive than not. I guess it just depends how much weight one gives to unsupported intuition. Hard to see how one could settle the matter either way.)

  10. I agree that modal realism unmakes ethics, but that means that, normatively, we can assume it isn't true, since its being true cannot influence our normative behavior, can we not?

  11. You mean like a pragmatic argument? "Our reasoning is only worth a damn if modal realism is false. So we might as well take this as given -- if we're wrong it doesn't matter anyway!" Yeah, I like that.

  12. For a given decision there is a probability PA that I will choose option A, and a probability PB that I will choose option B. If A is the more moral choice, then I am more moral the greater PA is. The ratio of the number of possible worlds in which I perform action A to the number in which I perform action B is PA/PB. Thus, my being a moral person is equivalent to there being a smaller number of possible worlds in which the immoral action is performed. Or, said another way, the process of our consciousness making moral choices is part of the universe determining what the possible worlds are.

    I don't actually know crap about modal realism, so I apologise if I've missed the point.
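    Ben's proposal, written out in symbols (this is just a transcription of his claim, on the assumption, which he needs, that possible worlds can be counted so that ratios of them make sense):

```latex
\frac{N_A}{N_B} = \frac{P_A}{P_B},
\qquad \text{where } N_X \text{ is the number of worlds in which I perform } X .
```

    On this picture, being a moral person amounts to making \(P_A\) large, and hence making the immoral \(B\)-worlds relatively few.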

  13. Hi Ben, the problem is that according to modal realism every logical possibility is actualized. So there's no room for our choices to influence anything, including "what the possible worlds are."

  14. "You mean like a pragmatic argument? "Our reasoning is only worth a damn if modal realism is false. So we might as well take this as given -- if we're wrong it doesn't matter anyway!" Yeah, I like that."

    I generally dislike these arguments, whether here or in the contexts of consciousness, free will, moral realism, etc. Usually they are an excuse for running away from the fact that our faulty cognitive architecture gives us some intuitions that things we care about are impossible.

    The 'pragmatic' arguer gets attached to one formalization of the faulty intuition (ignoring alternative formalizations that would also have intuitive appeal) and throws away the baby with the bathwater.

    'Utilitarian' does this on free will here:

