Monday, September 24, 2007

Maximizing over Infinite Time

Mathew Wilder asks:
Consequentialism aims at maximizing the good in the long term, or on the whole. But what if the universe is infinite, temporally speaking? Then it seems that there are no actions that maximize the good (or that every action does so) because there will always be an infinite amount of good (and bad) in the future (and in the past as well, if the universe is truly infinite).

I recall reading once about how the notion of a multiverse where every action/decision results in another universe seems to make moral choices worthless, from a consequentialist view-from-nowhere, since every good and bad possibility is an actuality. [Yup - RC.] However, it seems plausible that we could focus the scope of our consideration on the universe in which we live without being open to an accusation of arbitrariness.

But, even if we keep our focus on the only universe which we experience, if it is infinite, then how are we to non-arbitrarily judge what maximizes the good? Should it be what maximizes the good in ten years, or one hundred, or a million? Why should the tenth year matter, but not next year, or all of the infinite years to come?

Now, it is clearly disputable that the universe will continue infinitely, but it certainly seems plausible. Do you think I have hit on an interesting problem, or has this been dealt with before?

Sounds interesting. (If anyone is familiar with the literature on this topic, feel free to provide references in the comments!) Cf. my post on the infinite spheres of utility paradox.

My initial thought is to clarify that what the consequentialist wants to do is to bring about the best world practically possible. And it seems that even when comparing worlds that contain (equally) infinite value, we can judge that some are better than others. For a simple example, consider a case of 'domination', i.e. where one world is (finitely) better than another at each of the infinite moments in time. Clearly, this world is also better overall, even though we cannot attribute a higher quantity of value to it (since both are just countably infinite).
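
To make the domination comparison concrete, here's a minimal sketch in Python. (The dominates() helper, the finite horizon, and the toy per-moment values are purely illustrative assumptions, not part of the argument above.)

```python
from itertools import islice

def dominates(world_a, world_b, horizon=1_000):
    """Check, up to a finite horizon, whether world_a is at least as good as
    world_b at every moment and strictly better at some moment.  Each world
    is an infinite iterable of (finite) per-moment values."""
    strictly_better_somewhere = False
    for va, vb in islice(zip(world_a, world_b), horizon):
        if va < vb:
            return False                  # world_b is better here: no domination
        if va > vb:
            strictly_better_somewhere = True
    return strictly_better_somewhere

def steady(value):
    """A toy world with the same finite value at every moment (infinite total)."""
    while True:
        yield value

print(dominates(steady(2.0), steady(1.0)))   # True: better at every moment
print(dominates(steady(1.0), steady(2.0)))   # False
```

The point is just that a moment-by-moment comparison can still deliver a verdict even when summing up each world's total value would give infinity on both sides.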

[N.B. This is a puzzle for value theory generally, not anything peculiar to consequentialism -- cf. R.M. Hare.]

Anyway, I'll throw open the comments for anyone else who wants to chip in...

7 comments:

  1. http://www.nickbostrom.com/ethics/infinite.pdf

    Personally though, I'm not convinced that we can apply value theories, or any form of decision theory, to an infinite universe.

  2. In the past I have used the infinite multiverse argument to defend God as a benevolent utilitarian.

    I think that that case stands apart from the infinite universe case where you can effectively have a "greater infinity" (at least in any sense that you or I care about).

    GNZ

  3. In more detail:

    > where every action/decision results in another universe seems to make moral choices worthless

    One could value being the good version of yourself as opposed to the evil one, or value making your good world the actual world from your perspective.

    > how are we to non-arbitrarily judge what maximizes the good?

    Statistically, it doesn't much matter: if you maximize welfare over an arbitrary period of time, that has a greater chance of maximizing it over other periods too than trying to minimize welfare would (etc.). Ideally one would take all available information into account.
    The question of "what time period" is itself the arbitrary part of the problem.

    > Now, it is clearly disputable that the universe will continue infinitely, but it certainly seems plausible.

    Maybe - but I suggest not very plausible.

    GNZ

  4. 3 things:
    - I don't see any problem with ordering infinities. We can take the difference in utility between the two options, integrate to infinity, and see whether the result is positive or negative (a rough numerical sketch follows below).
    - If the span of future time over which our current actions will have any influence is finite, this solves the problem.
    - From a practical perspective, someone who values all future times equally probably shouldn't consider far future times, simply because of the large uncertainties in any predictions.
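
    A rough numerical sketch of that first point (the utility_difference_sign helper and the toy decaying difference function below are invented for illustration; the sign test only settles the comparison if the integral actually converges):

    ```python
    import math

    def utility_difference_sign(diff_rate, horizon=50.0, dt=0.001):
        """Crudely integrate diff_rate(t) = u_A(t) - u_B(t) from 0 out to a
        large horizon.  A positive total favours option A, a negative total
        favours option B -- assuming the integral really does converge."""
        total, t = 0.0, 0.0
        while t < horizon:
            total += diff_rate(t) * dt
            t += dt
        return total

    # Toy example: A is better early on, B slightly better later, and the
    # per-moment difference decays exponentially, so the integral converges
    # (to roughly 0.5, favouring A).
    diff = lambda t: (1.0 - 0.5 * t) * math.exp(-t)

    total = utility_difference_sign(diff)
    print("A is better" if total > 0 else "B is better" if total < 0 else "tie")
    ```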

  5. If you want something old, Stoicism presents an infinite world. The line, "it seems plausible that we could focus the scope of our consideration on the universe in which we live without being open to an accusation of arbitrariness" might as well be an echo of the Stoa.

    So let's see, try Seneca "On Constancy" or "On the Good Life".

  6. Relevant?:
    http://www.qwantz.com/archive/000604.html

    I don't think an infinite time sequence entails infinite amounts of good. A deep freeze or some other "one-way" event (e.g. human extinction) seems to mean that the time frame in which goodness can be achieved may be finite.

  7. Here's what I think the consequentialist should be up to. In a choice among relevant actions a through n, they must choose the action with the best consequences. The value C(a) of an action a is a sum over its potential outcomes, C(a) = V(O1)*P(O1|a) + V(O2)*P(O2|a) + ..., where V(O1) is (say) the hedonic quality of possible outcome O1 and P(O1|a) is the probability that O1 happens, given a. This by itself won't solve the problem, because if these sums range over infinitely many outcomes they might not converge on a value.

    But here's the solution: Because we're interested only in picking the best action from among relevant alternatives, we only need to sum over outcomes that get assigned different probabilities conditional on the action. This is why happenings in the far future are correctly ignored: the probability of a good outcome a hundred years from now, conditional on my kicking a puppy, is the same as the probability of a good outcome a hundred years from now conditional on my not kicking the puppy. We just have no reason to think that either of the available actions will have either a positive or a negative impact on the far future. Looking at the far future does not help us choose between a and b in this case. (In some cases it does, like in decisions about whether to permanently sterilize the universe.) What helps us choose between actions a and b lies in the outcomes which acquire one value for their probability conditional on a, and a different value conditional on b. So those are the only ones we need to sum over. We focus on the difference-makers, since the others are known to be a wash. But these differences will not be infinite even in an infinite universe. This has to do with the fact that probabilities of outcomes conditional on specific actions get far hazier in the far future, and pretty quickly the probability of any outcome given a is indistinguishable from the probability of the same outcome given b.
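
    For illustration, here is a minimal sketch of that "sum only over the difference-makers" idea (the outcomes, values, and probabilities below are all invented numbers, not anything from the argument above):

    ```python
    def expected_value_difference(outcomes, p_given_a, p_given_b, value):
        """Compare actions a and b by summing value * probability only over
        outcomes whose probability differs between the two actions.  Outcomes
        with identical conditional probabilities contribute nothing, so they
        can be ignored even if there are vastly many of them."""
        diff = 0.0
        for o in outcomes:
            pa, pb = p_given_a(o), p_given_b(o)
            if pa != pb:                    # only the difference-makers matter
                diff += value(o) * (pa - pb)
        return diff                         # > 0 favours a, < 0 favours b

    # Toy example: the near-term outcomes differ between kicking and not
    # kicking the puppy; the far-future outcome gets the same probability
    # either way, so it drops out despite its huge value.
    outcomes  = ["puppy hurt", "puppy fine", "bystander upset", "good far future"]
    value     = {"puppy hurt": -10, "puppy fine": 5, "bystander upset": -3,
                 "good far future": 1_000_000}.get
    p_kick    = {"puppy hurt": 0.9, "puppy fine": 0.1, "bystander upset": 0.8,
                 "good far future": 0.5}.get
    p_no_kick = {"puppy hurt": 0.0, "puppy fine": 1.0, "bystander upset": 0.1,
                 "good far future": 0.5}.get

    print(expected_value_difference(outcomes, p_kick, p_no_kick, value))  # about -15.6: not kicking wins
    ```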

