Comments on "Expecting Infinity" -- Philosophy, et cetera (Richard Y Chappell)

[2005-08-19 06:41]
Neat, thanks for the links.

And that makes 50 comments! My most productive comment thread yet :)

-- Richard Y Chappell

[2005-08-19 06:31]
Yes, sorry, in retrospect I can see how your earlier comment says that.

After doing some reading on expected utility and diminishing marginal utility (this is a concise introduction to Bernoulli's original expected utility hypothesis with DMU: http://cepa.newschool.edu/het/essays/uncert/bernoulhyp.htm -- a little confusing, since my browser doesn't render the math symbols correctly, but still readable), this seems like exactly the sort of dilemma DMU was invented to solve. If I understand correctly (not a great assumption, I admit), you're arguing that *if* DMU doesn't apply, then we should trust the expected value calculation. That much seems clear, but it's not at all clear to me that DMU doesn't apply. I don't really have an argument for this, except perhaps that DMU not applying leads to what is, to me, the wildly implausible conclusion that one should play the angel's game in your example. I admit that isn't much of an argument.

Aside: the page I linked above is part of a much longer essay on various modern theories of expected utility (http://cepa.newschool.edu/het/essays/uncert/choicecont.htm).
I didn't get much out of its later parts -- my browser's mangling of the math got to be too much for me once it got into serious proofs -- but I wanted to point to the introduction (http://cepa.newschool.edu/het/essays/uncert/intrisk.htm), which has a very interesting comparison of various schools of thought regarding the definition of probability.

-- Anonymous

[2005-08-18 20:37]
Yes, and that's precisely what I was disagreeing about. There is no possible value of N for which the utility difference between $N and $10m is greater than the utility difference between $10m and nothing. Extra money gets less and less valuable, the more of it you have. That's what DMU is all about, and that's why our intuitions in such examples as you point to are worthless when considering expected *utility* (rather than expected *dollars*).

Suppose an angel gave you a one-off chance to bet with units of your own happiness (supposing that happiness can be quantified in such a way). Would it be worth betting 1000 happiness-units (the loss of which, we may suppose, would leave you feeling quite miserable) for a 10% chance to win a million units (which, we may suppose, is happier than any human has ever been before)?

It seems to me that we have no grounds for going against expected utility in such a situation.
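The angel's bet above can be checked numerically -- a minimal sketch, assuming (as the comment stipulates) that utility is linear in happiness-units, with no DMU; the function name is mine:

```python
# Expected utility of the angel's one-off bet, taking the stipulation
# that happiness-units, unlike dollars, have no diminishing marginal
# utility -- i.e. utility is linear in units.
def expected_utility(p_win, win, loss):
    """Two-outcome gamble: gain `win` with probability p_win, else `loss`."""
    return p_win * win + (1 - p_win) * loss

# Stake 1000 units against a 10% chance of a million units.
take_bet = expected_utility(0.10, 1_000_000, -1000)
decline = 0.0  # declining the bet leaves you where you are

print(take_bet)            # 99100.0
print(take_bet > decline)  # True: expected utility favours taking the bet
```

Under the linearity assumption the bet dominates by a wide margin; the whole dispute in the thread is over whether that assumption ever holds.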
Even if the chance of winning is low, this can be compensated for by an appropriately large reward, so long as said reward (unlike material resources) does not exhibit DMU.

-- Richard Y Chappell

[2005-08-18 05:20]
Richard, I think driftwood's point (if I can be pardoned for putting words in someone else's mouth) is that the size of the reward is not relevant because the experiment cannot be repeated.

Consider: walking down the street one day, you are given a choice of two games to play. In game A you have a 50% chance of winning $1 and a 50% chance of losing $1; in game B you have a 10% chance of winning $N and a 90% chance of losing ten million dollars. The catch is that you only get to play once. Which game should you choose? I suggest game A, regardless of the size of N. If you could repeat the experiment, the choice would clearly depend on the size of N; but since you can't, the stakes are too high to play B. (I'm assuming, of course, that "you" can't afford to lose ten million dollars. I know I can't...)

This doesn't necessarily have anything to do with Pascal's wager, at least not directly -- just with the relevance of expected value calculations.

-- Anonymous

[2005-08-18 01:35]
"it seems the odds of a 'win' are of more concern than the size of the payout."

I think our intuitions are corrupted by diminishing marginal utility, even when we try to stipulate that DMU won't apply in a particular example.
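Bernoulli's DMU move, which keeps surfacing in this thread, can be sketched with his classic log-utility function. The wealth and stake figures below are mine, purely illustrative:

```python
import math

# With diminishing marginal utility (here Bernoulli's log form), a gamble
# can have a hugely positive expected *dollar* value and still lower your
# expected *utility*. Illustrative numbers: wealth $100k; the gamble wins
# $10m with probability 0.1, else loses $90k of the $100k.
wealth = 100_000
p, win, loss = 0.10, 10_000_000, -90_000

expected_dollars = p * win + (1 - p) * loss
expected_utils = p * math.log(wealth + win) + (1 - p) * math.log(wealth + loss)
status_quo_utils = math.log(wealth)

print(expected_dollars > 0)               # True: attractive in expected dollars
print(expected_utils > status_quo_utils)  # False: unattractive in utils
```

This is why the thread keeps distinguishing bets in dollars (where DMU plausibly applies) from bets in "utils" (where, by definition, it can't).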
It's just so fundamental to the way we think: the difference between benefits of 1-100 *seems* of greater significance than the difference between 101-200, even though an appropriate definition of utility guarantees that they are (objectively) exactly the same.

So if one ever came across a magic casino where betting was done in 'utils' rather than 'dollars', it isn't clear to me that there's any rational reason to go against the expected utility calculations (at least when only finite values are involved).

But it's pretty difficult to really conceive of utils properly, so I'm not too sure about any of this. If we can't trust our intuitions either way, what else is there to fall back on? (Perhaps generalized theories that are known to work in *other* domains? Can we trust them to work as well in novel situations?)

-- Richard Y Chappell

[2005-08-17 17:18]
That's a really good point, driftwood, and may well be why expected utility calculations lead to such odd results.

-- Anonymous

[2005-08-16 17:47]
My point was just the simple one that a single number -- expected utility -- doesn't account for everything we are interested in. In this "save your soul" case, I was assuming that each "soul" had one shot at the lottery, based on what they believed at time of death.
How other "souls" fare is of little interest.

So all I was pointing out is that the odds of a "win" seem to be of more concern than the size of the payout.

-- driftwood

[2005-08-16 04:43]
driftwood,

First post: my main concern with your theory (which is possible) is that its main justification sounds like "it is what I do". Believing in the theory is, it would seem, a gross breach of your theoretical god's rules, and not very dissimilar to concepts like "I am the messenger for god". Tough, isn't it? ;)

Second post: it is very difficult to properly calibrate your utility scale conceptually -- it is very tempting to start talking about two ice creams being twice as good as one, or something along those lines. I think that is the problem you are having. It is possible for you to argue that there cannot be a utility ten times greater than that of your winning a million dollars -- I think this is the point you are really making.

Furthermore, I would have thought a utilitarian should not maximize utility across just his own life; he should maximize it across EVERYONE's life.
So the numbers are repeated: if an outcome is infinitely unlikely, you would require an infinite number of incidents (or maybe more than an infinite number), but that is possible (I am thinking of physics, as opposed to just philosophy).

-- Genius

[2005-08-15 18:30]
As to the theme of this thread: besides the problem of whether the expected value is well defined, I think what is overlooked is that this is a one-off offer. Expected value reasoning works well when there are repeated cases, but may not be the best tool to get at what you want.

I'd rather take a 1 in 10 chance of winning a million dollars than a 1 in 100 chance of winning a billion. (Ignore for the moment that my utility is not linear here -- it is not.) Although the billion-dollar wager has an expected value 100 times higher, I place more value in merely winning at all. So I'll take the wager that gives me 10 times better odds.

Having an unlimited reward makes this particular problem worse, to the point of making the process meaningless.

-- driftwood

[2005-08-15 18:17]
You have run across the idea that god is a Logical Positivist?
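driftwood's two wagers above can be compared directly -- a small sketch (exact arithmetic via fractions; the framing as a ratio comparison is mine):

```python
from fractions import Fraction

# driftwood's comparison: the billion-dollar wager has 100x the expected
# value, but a 10x lower chance of winning anything at all.
p_million, prize_million = Fraction(1, 10), 1_000_000
p_billion, prize_billion = Fraction(1, 100), 1_000_000_000

ev_million = p_million * prize_million
ev_billion = p_billion * prize_billion

print(ev_billion / ev_million)  # 100: expected value favours the billion wager
print(p_million / p_billion)    # 10: odds of winning favour the million wager
```

The two ratios pull in opposite directions, which is exactly the tension driftwood is pointing at for one-off bets.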
This god designed the universe carefully to give no hint of divine intervention, and gets angry at people for believing in the unsupportable idea that it was created by a god.

-- driftwood

[2005-08-15 05:40]
The reason I included all the limit stuff in the calculation is that I was trying to model an infinite (eternal) reward/punishment. The original Pascal's wager (as described by Richard) relies heavily on the fact that the reward is infinite. I was trying to include discussion of the punishment too, while leaving the "infinite" part of the argument intact. Perhaps it would have made more sense to call the variable "t" and claim it referred to time, but oh well. Ice cream is, of course, heavenly, but unfortunately it is finite, and unless you're talking about some sort of Zeno's paradox in ice cream, I don't see that it makes sense to subdivide it infinitely.

"but that relies on you successfully becoming a......Nihilist"

No, I'm not claiming that it's impossible to pick a religion at all, just that Pascal's wager is not the way to do it.

"That is unless your point is that you have an objective, easy and superior method for determining what religion is correct in which case that does indeed clearly obsolete the argument - by the way please tell me !!!!"

Hehe, no, I can't help you there. I do suggest, however, that everyone interested in religion should check out the Flying Spaghetti Monster (http://www.venganza.org/).

-- Anonymous

[2005-08-15 05:16]
Covaithe!
Usually one tries to avoid doing maths with infinity, but you seem to be intentionally spreading it right through your analysis!

Let's say I have your problem, and the options come out as:
Option 1: -170n
Option 2: -5n
Option 3: -215n

Now I choose my scale of utility and my point of zero utility. My choice of scale determines the size of the numbers, and my choice of "zero" determines where they sit. If I define zero as, say, "1 billion n", then the problem (rounding) looks like this:
Option 1 = "negative 1 billion n"
Option 2 = "negative 1 billion n"
Option 3 = "negative 1 billion n"

So the most logical scale in a decision matrix would seem to be one where you place the zero in the middle, because that is one of the few choices that does not obscure the decision (besides, it doesn't offend the theory of relativity). In that case the numbers rattle off in different directions, and one can hover around zero if you want.

You can apply the same trick in reverse to any problem in order to make it unsolvable. Let's say I have a choice between eating cheese and eating ice cream: ice cream has a utility of 1 "unit" and cheese a relative utility of -1 "unit". I reset the two utilities, reversing what we did above, and say cheese is 1 "unit" and ice cream is 3 "units" (still the same incentive to choose ice cream over cheese). But then we wonder how big a "unit" is; we discover that there is an infinite number of sub-units in it, and that a sub-unit is just as reasonable a measurement as a "unit" -- so we define the rewards as 1*n and 3*n in the limit as n --> infinity. Now each option has a reward that can be called an infinity, and we can't solve for an answer.

One wonders why one would try to make a problem harder to solve -- but then, just after that, you do actually solve it…..
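For finite payoffs, the worry about where to put the zero can be checked directly: shifting every option by the same constant changes the raw numbers but not the differences or the ranking. A sketch using the -170n, -5n, -215n figures at n = 1 (the helper function is mine):

```python
# Utility scales are standardly taken to be unique only up to choice of
# zero and unit: adding one constant to every option's payoff cannot
# change which option ranks best. (The obscuring effect Genius describes
# only bites once the payoffs themselves are sent to infinity.)
options = {"Option 1": -170, "Option 2": -5, "Option 3": -215}

def shift_zero(opts, constant):
    """Re-express every payoff relative to a new zero point."""
    return {name: value + constant for name, value in opts.items()}

shifted = shift_zero(options, -1_000_000_000)  # zero redefined as "1 billion"
best_before = max(options, key=options.get)
best_after = max(shifted, key=shifted.get)
print(best_before == best_after)  # True: the ranking survives the shift
```

The "rounding" step in the comment is where the information is lost; the shifted numbers themselves still order the options identically.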
> It's possible to compare unbounded sequences using l'Hospital's rule

Cripes, that was easier than I thought -- anyway, this is another way to conceptualize what I was addressing above, from another angle.

> The probabilities and the weighting of the various rewards and punishments in this model are completely subjective.

Indeed.

> If nothing else, this example convinces me that actually trying to calculate Pascal's wager for any actual religion would involve so many subjective, unverifiable numbers that any answer would be pretty much useless.

Hmm, your arguments don't seem to me to have led entirely in that direction, but having said that, I can see how you could have concluded it, and it is a reasonable position. In fact it is pretty much bulletproof -- but that relies on you successfully becoming a....... err...... what do they call those people who believe nothing, because they believe it is all useless and not understandable and generally all depressive? Nihilists?

That is unless your point is that you have an objective, easy and superior method for determining which religion is correct, in which case that does indeed clearly obsolete the argument -- by the way, please tell me !!!!

-- Genius

[2005-08-14 20:56]
I've been thinking more about your original post, trying to figure out exactly how the expected reward argument falls apart the way you point out. I think I have an idea: this formulation of the expected utility of believing only takes into account the (possibly infinite) reward if that belief turns out to be correct; it ignores the (also possibly infinite) punishment if that belief turns out to be wrong.
The expected utility calculation shouldn't just be p*Reward; it should be p*Reward + (1-p)*Punishment.

What follows is a simplified attempt to calculate Pascal's wager this way. I don't think it works, and I'm not sure any useful conclusion can be drawn from it, but it may be interesting. Or maybe not. :)

Consider a very (very, very) simplified universe in which there are only three possible belief choices: religions A, B, and C.

Religion A says that if you believe in its god, when you die you will feast forever with him in the hall of heroes, which is supposed to be a really great party. If you don't believe, you will be cast out into the cold emptiness outside the hall to starve and freeze forever.

Religion B says that if you believe in it, when you die you will sing the praises of its god forever, thinking pure thoughts and basking in his radiance; if you don't believe, you will be cast into a burning hellpit to be consumed by flames forever.

Religion C is atheism, which should be fairly self-explanatory.

Suppose that a person named Fred in this world is trying to decide which religion to believe in. After thinking about the various religions for a while, he decides that the probabilities are as follows:
  p(A) = the probability that A is true = 0.3
  p(B) = the probability that B is true = 0.2
  p(C) = the probability that atheism is true = 0.5
Suppose further that Fred really likes parties, isn't very afraid of the cold, thinks that singing and pure thoughts sound kind of boring, but is really afraid of the idea of hellfire.

I suggest that Fred might model the expected utility of believing in, say, religion R as follows. First I'll consider what I call the nth partial reward if R turns out to be true. Obviously this reward is different for believers and nonbelievers.
I arbitrarily set PR(n,A,believers), the nth partial reward for correct believers if A turns out to be true, to 100*n. Since Fred doesn't like singing and thinking pure thoughts as much as he likes parties, I'll set PR(n,B,believers) = 50*n. The reward for atheism is nothing, for believers or nonbelievers: PR(n,C,believers) = PR(n,C,nonbelievers) = 0. Now for the negative rewards. Since Fred is very afraid of hellfire, I'll set PR(n,B,nonbelievers) = -1000*n. He's not so worried about cold and hunger, so PR(n,A,nonbelievers) = -50*n.

Now we can compute an nth partial expected utility for believing in a religion X. I'm not going to write out the general expression because, without a chalkboard, the notation is horrible. So here's the expression for the nth partial expected utility of believing in A:
  Util(A,n) = p(A) * PR(n,A,believers) + p(B) * PR(n,B,nonbelievers) + p(C) * PR(n,C,nonbelievers)
            = .3 * 100n + .2 * -1000n + .5 * 0
            = -170n
Similarly,
  Util(B,n) = .2 * 50n + .3 * -50n + .5 * 0
            = -5n
and
  Util(C,n) = .5 * 0 + .2 * -1000n + .3 * -50n
            = -215n

Then I take the final expected utility as the limit of the partial expected utilities as n increases without bound. It turns out that with the coefficients I've assigned, all the utilities are infinitely negative. Looks like poor Fred is doomed no matter what he does.

Some observations.

1) No matter what probabilities you assign and how you weight the various rewards, all of your final expected utilities are going to come out infinitely positive, infinitely negative, or zero in this model. In retrospect this isn't surprising; we're talking about eternity, after all. It's possible to compare unbounded sequences using l'Hospital's rule (http://mathworld.wolfram.com/LHospitalsRule.html), which I should have remembered earlier, but in this case that puts us right back at the nth partial utilities.
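The partial expected utilities above can be reproduced mechanically -- a sketch of the same calculation (the probabilities and PR coefficients are from the comment; the table layout is mine):

```python
# Fred's nth partial expected utility of believing in a religion,
# computed from the stated probabilities and partial rewards PR.
p = {"A": 0.3, "B": 0.2, "C": 0.5}

# Payoff-per-n to a believer of `chosen`, under each religion that might be true.
PR = {
    "A": {"A": 100, "B": -1000, "C": 0},
    "B": {"A": -50, "B": 50, "C": 0},
    "C": {"A": -50, "B": -1000, "C": 0},
}

def util(chosen, n):
    return sum(p[true] * PR[chosen][true] * n for true in p)

for religion in "ABC":
    print(religion, util(religion, 1))  # coefficients -170, -5, -215

# Every Util(X, n) heads to minus infinity as n grows, but the partial
# utilities at any fixed n (equivalently, comparing growth rates in the
# l'Hospital style) already rank B as Fred's best option.
```

Comparing the finite coefficients is exactly the "finite slices of eternity" move discussed in the next observation.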
Which, maybe, is where we should be: it's not necessarily unreasonable to compare eternities by comparing finite slices of those eternities, especially when the eternities in question are supposed to be unchanging. In that case, this model says Fred should pick religion B.

2) The probabilities and the weighting of the various rewards and punishments in this model are completely subjective. If Fred hadn't been so afraid of fire, it's quite possible that A would have come out on top.

3) If we only consider religions that reward believers and punish nonbelievers, then it's impossible for atheism to come out on top in this model. Unless there is at least one religion that punishes believers and rewards nonbelievers, atheism will always have the lowest partial expected utilities of the available options. Even if you assign atheism a finite reward (e.g. if you claim the atheist has a more satisfying life, or something), this disappears as soon as limits come into play.

If nothing else, this example convinces me that actually trying to calculate Pascal's wager for any actual religion would involve so many subjective, unverifiable numbers that any answer would be pretty much useless.

-- Anonymous

[2005-08-14 08:21]
Genius: No Toastmasters, but my inability to say something concisely goes a way toward it -- if I'm going to be wordy anyway, I might as well make the best of it!

-- Brandon

[2005-08-14 07:35]
Thanks Covaithe, your rephrasing is very helpful in clarifying these things.
:)

-- Richard Y Chappell

[2005-08-14 06:42]
"the Pascalian problem could be restated in more formally acceptable terms anyway. E.g. replace every sloppy instance of the word 'infinity' with the more formal 'limit of n as n tends to infinity'."

Exactly. :) Your discussion of the limits of the sequences n and n/2 is precisely how one should deal with this question, and it leaves the rest of your arguments intact. (And convincing!)

I might rephrase one of your sentences as: "if we have a sequence of numbers that increases without limit, and we divide each of those numbers by any finite positive amount, then the new sequence still increases without limit." "[Tends|goes] to infinity" is of course a common phrase in mathematics when dealing with sequences -- it's how most people, even professors, pronounce the notation "n --> (sideways figure-eight)" -- but I try to avoid it whenever possible, preferring "n increases without limit". Saying that sequences "tend to infinity" suggests that there is some goal (or limit), called "infinity", that they are approaching, when actually the sequence has no limit.

This rephrasing is also relevant to your next comment, wherein you're not quite sure how to show that the "values" of the sequences are the same as they "tend to infinity". Talking about the "value" of the sequences themselves makes it sound like you're talking about some kind of limit or limit-like property, when of course neither sequence has a limit. The property I suspect you're trying to describe is that neither sequence has an upper bound.
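The "increases without limit" point is easy to exhibit: given any proposed bound, both n and n/2 eventually exceed it, so dividing an unbounded sequence by a finite positive constant leaves it unbounded. A small sketch (the helper is mine):

```python
# No finite bound caps either sequence: for any M, the terms n and n/2
# both eventually exceed M -- the second just takes longer to get there.
def first_n_exceeding(bound, divisor=1):
    """Smallest n with n / divisor > bound."""
    n = 1
    while n / divisor <= bound:
        n += 1
    return n

M = 10_000
print(first_n_exceeding(M))     # 10001: n itself passes M here
print(first_n_exceeding(M, 2))  # 20001: n/2 passes M too, just later
```

Neither sequence has a limit; "unbounded above" is the property doing the work.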
When I use the "increases without limit" phrasing, these things all seem much clearer to me.

-- Anonymous

[2005-08-14 06:30]
Brandon is better at academically explaining his position in philosophy than I am, it seems -- and his name makes people less defensive and promotes less arrogance. Having said that, I don't mind arrogance in people I debate -- well, not too much anyway. I could almost accuse Brandon of taking Toastmasters!

-- Genius

[2005-08-14 06:25]
BTW, Brandon seems to be right (about "Pascalians"); I generally support his main point and believe it is a fundamentally superior approach. Also, I note that (2) is flawed, as I explained a couple of times previously, and "one final point" is also pretty intuitively flawed in the context in which you are using it -- as surely you can tell.

-- Genius

[2005-08-14 01:06]
Richard, I can understand the suspicion, but it does make sense: we want our formal apparatus to be universal, but not in the sense that it covers everything; rather, we want it to apply exceptionlessly to a particular domain. And decision theory is generally recognized to presuppose certain things in any given case (it doesn't tell us how to determine the probabilities, for instance).
Also, no formal argument of itself tells you what to do with it; what we do with a given formal argument is a further question, depending on our purposes in appealing to the formal argument in the first place. (And there are other considerations: depending on the risks and utilities, it's possible that decision theory itself might suggest that decision theory is not the most rational way to make a decision in a given case.) But in a sense one doesn't even have to step outside decision theory to give the Pascalian room to maneuver in the face of something like the lottery ticket case, because we can't assume that the decision matrix for wagering about how to wager will turn out to be an implicit decision matrix for wagering about whether God exists. It depends on how one evaluates the risks, gains, etc. of wagering about how to wager.

The question of motivation is an interesting one. I assume it varies from Pascalian to Pascalian. It's noteworthy, though, that in the notes we have of Pascal's own version, it's the *agnostic* who starts the discussion, by saying that since reason doesn't tell us whether God exists, Christians aren't justified in believing that God exists. Since all we have are fragmentary notes, we don't know whether this is how the dialogue Pascal intended to write would actually have started; but it wouldn't be surprising if it did. So in Pascal's case, what motivated his argument was what he took to be a likely objection to Christian belief in his own culture; and he responds to it in terms he thinks the objector would be likely to understand. Pascal himself is very clear that Christians are *not* in the state of minimal evidence assumed by the Wager; while he allows that there are lots of uncertainties, he is very clear that there is 'inside information'.
We have two things to lose (the true, which pertains to reason, and the good, which pertains to the will) and two things to stake (knowledge, which pertains to reason, and happiness, which pertains to the will); the Wager deals only with the will, because he's addressing an agnostic, and one of the things he tries to do with it is give the agnostic a reason to take more seriously the inquiry into whether Christianity is true (i.e., if the agnostic is somewhat persuaded by, but still doubtful about, the Wager, he should seriously and open-mindedly investigate claims to additional evidence that might make the choice easier by suggesting what is actually true).

I suspect that most Pascalians, though, just assume (1) that people are interested in the question of whether God exists, and (2) that they do not have good reasons for either option. (1) is plausible; (2), depending on the actual context, could be much more dubious.

-- Brandon

[2005-08-14 00:25]
Richard,

Please make up your mind what you want me to deal with and I will deal with it. There is absolutely no lack of understanding -- I know what you are saying; it is just you who don't know what I am saying, apparently. And you have probably stopped reading the posts properly.

-- Genius

[2005-08-14 00:22]
Brandon, that's an interesting point. I'm suspicious of it, because I would want our formalisms to be universally applicable (such as general expected utility formulas, which can be applied to any specific situation).
But the idea of separating out different *types* of practical decision, and assessing them against distinct rational standards, is an intriguing one. I guess it could potentially get the Pascalian out of the problem I've raised here.

But how would the Pascalian motivate the original wager, without appeal to some general principle along the lines of (P) above?

-- Richard Y Chappell

[2005-08-14 00:01]
"(which wouldn't necessarily be the case)"

Whoops; I meant "(which wouldn't necessarily be the same as in the case of Pascal-type wagers)." The point is that (for instance) we are not considering the sorts of gains, losses, risks, and uncertainties relevant to wagering about God; we are considering the sorts of gains, losses, risks, and uncertainties relevant to wagering about how to wager.

-- Brandon

[2005-08-13 23:57]
(Sorry Brandon, my previous comment was addressed to Genius.)

-- Richard Y Chappell

[2005-08-13 23:57]
Look, it really isn't that complicated. Here's the problematic reasoning:

1) If given an exclusive choice between two options, it's rational to take the option with the highest expected value.

2) The expected value of a guaranteed infinite reward is no greater than the expected value of a 1-in-a-million chance thereof.
Therefore,
3) It's no more rational to choose the guaranteed infinite reward than it is to *instead* (note that these are meant to be exclusive choices!) choose the improbable one.

Clearly (3) is false. The problem lies in (2). But (2) follows from the Pascalian principle (P) I described in my second comment.

One final point: your recent comments confuse a negative return with the absence of a positive return. These are not the same thing. I suspect this is what's muddling you up.

Now, for goodness' sake, just stop and re-read the main post and the comments so far. If you *still* can't follow the argument, I can't be bothered explaining it any further. This is getting tiresome.

-- Richard Y Chappell

[2005-08-13 23:55]
I wasn't talking for myself but for most people who use Wager arguments. They don't reject the formal apparatus; their use of it is governed, however, by the supposition that an actual practical decision needs to be made, come what may. And therefore a lot is actually going on outside the formal apparatus. This is clear simply from the application of the apparatus itself, for it's generally recognized that one has to construct multiple decision matrices to evaluate the Wager; how these interrelate, and precisely how they should be constructed, depends on a number of suppositions the formal apparatus doesn't involve. So actually coming to a decision involves more than constructing a decision matrix or two; it involves coordinating them and, if they are not equally important to the decision, weighting the conclusions, etc.
For instance, most people who accept Pascal-style arguments are also Jamesian; that is, they'll only look at options that come up as live options for real-life practical purposes.

The lottery ticket case arises only on the assumption that you've already isolated, as an option for practical consideration, wagering on God only if you win the lottery; which means (presumably) that a higher-level decision-theoretic inquiry has already been made (if not, then the *practical* reasons for picking out this option would have to be gleaned from the context). It is not a wager about God; it's a wager about how to wager about God (lottery-ticket-wise), and would have to be compared to other wagers *of this sort* (e.g., dice-wise, coin-wise, asking-a-random-person-wise, reading-tea-leaves-wise) in light of the goal of *that type of decision-making* (which is not the same as in the case of Pascal-type Wagering) and the type of reasons that are both available and relevant to such decision-making (which wouldn't necessarily be the case). So the lottery ticket case doesn't even need to be a part of Pascal-type wagering. Indeed, the Pascal-type Wager may be one of the options in this sort of decision; the Pascal-type Wager is one way to wager about God. But the whole decision matrix for the lottery ticket case becomes otiose if you have reasons to prefer Pascal-type wagering in the first place (and Pascal-type wagerers do have such reasons, e.g., the reasonableness of actually thinking the matter through to see what you might decide on the merits of the options themselves, before going ahead and deciding on the basis of something as dubious as a coin or a lottery ticket). Hajek's argument makes the error of assuming that arbitrarily constructed decision matrices will always be relevant to the practical wagering.
Whether they are relevant actually depends on the practical situation and the goals of one's inquiry.

-- Brandon