Saturday, December 17, 2005

AAPC Summary: Part II

Continuing from Part I...

Alan Hajek introduced the Pasadena Paradox, which arises because the expected value of a 'Pasadena game' is given by a conditionally convergent series, and so (by the Riemann rearrangement theorem) its terms can be rearranged to yield any sum whatsoever. In the absence of any reason to privilege any particular ordering of the terms, we are left with a perfectly well-defined game (with well-defined payoffs) that has no expected value. So decision theory can't tell us how much we should be willing to pay in order to play this game, or, say, whether to accept if offered a million dollars to give it a go. Very puzzling.
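To make the puzzle concrete: in the usual set-up of the game (as I understand it), a fair coin is tossed until it lands heads, and if that first happens on toss n the payoff is $(-1)^(n-1)·2^n/n. Since that outcome has probability 1/2^n, the expectation series is the alternating harmonic series Σ(-1)^(n-1)/n. Here's a small Python sketch (mine, not from the talk) showing how its partial sums land wherever the ordering of the terms sends them:

```python
# Sketch (not from the talk) of the Pasadena expectation series and its rearrangeability.
# Assumed set-up: first heads on toss n pays (-1)**(n-1) * 2**n / n dollars,
# with probability 1 / 2**n, so each expectation term is (-1)**(n-1) / n.

from itertools import count, islice

def expectation_terms():
    """Yield probability * payoff = (-1)**(n-1) / n for n = 1, 2, 3, ..."""
    for n in count(1):
        yield (-1) ** (n - 1) / n

def natural_partial_sum(k):
    """Partial sum of the first k terms in the 'natural' ordering (tends to ln 2)."""
    return sum(islice(expectation_terms(), k))

def rearranged_partial_sum(target, k):
    """Greedy Riemann rearrangement of the same terms: take positive terms
    (1, 1/3, 1/5, ...) while below `target`, negative terms (-1/2, -1/4, ...)
    while above it. The partial sums then home in on `target` instead of ln 2."""
    pos = (1 / n for n in count(1, 2))    # the positive terms
    neg = (-1 / n for n in count(2, 2))   # the negative terms
    total = 0.0
    for _ in range(k):
        total += next(pos) if total < target else next(neg)
    return total

print(natural_partial_sum(100_000))           # ~0.6931 (= ln 2)
print(rearranged_partial_sum(2.0, 100_000))   # ~2.0: same terms, different "expected value"
```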

An interesting distinction Hajek raised was between "undefined" terms, e.g. 1/0, which can take no possible value at all, and cases where there is "no fact of the matter" (NFM), perhaps because the term could take any value whatsoever. The latter cases are slightly less troublesome, as they allow for supervaluation. For example, if someone offers you a choice between (i) the Pasadena game or (ii) the Pasadena game plus $100, then clearly the latter choice is preferable. We can make sense of this because whatever ordering of the terms you choose, (i) will then have some expected value x (which could be any real, depending on the ordering chosen), whereas (ii) will have the greater expected value of x+100. But nothing like this works for undefined terms. There's no sense to be given to the claim that 1/0 + 100 is any more than 1/0. Undefined terms are completely incommensurable, because they can't be assigned any value at all, so there's nothing to compare. NFM terms, by contrast, can be assigned any value whatsoever, so you can make the comparison for each particular case and then supervaluate to the general (NFM) case. I thought that was kinda neat.
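Here's a toy rendering of that supervaluation move (my own gloss, with a finite sample of candidate values standing in for "any real x"): "prefer (ii) to (i)" comes out true on every candidate valuation and so supervaluates to true, whereas a claim like "(ii) is worth more than a dollar" fails on some valuations and so gets no determinate verdict:

```python
# Toy supervaluation over NFM values (my illustration, not Hajek's own example).
# Each admissible ordering assigns game (i) some expected value x; game (ii) then
# gets x + 100, since the extra $100 adds an absolutely convergent
# sum(100 / 2**n) = 100 to the expectation, whichever ordering you pick.

def supertrue(claim, valuations):
    """A claim is super-true iff it holds on every admissible valuation."""
    return all(claim(x) for x in valuations)

def prefer_ii(x):
    """On the valuation assigning game (i) the value x, game (ii) is worth x + 100."""
    return (x + 100) > x

def ii_worth_over_a_dollar(x):
    """A claim that does NOT supervaluate: it fails on very negative valuations."""
    return (x + 100) > 1

# A finite stand-in for "any real x, depending on the ordering chosen".
candidate_values = [-1e6, -3.7, 0.0, 0.693, 42.0, 1e6]

print(supertrue(prefer_ii, candidate_values))               # True: holds on every valuation
print(supertrue(ii_worth_over_a_dollar, candidate_values))  # False: fails at x = -1e6
```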

I've already blogged about Lauren's talk on epistemic defeaters, Adrian Walsh on thought experiments, and Heather Dyke on the Representational Fallacy.

Charles Pigden gave a fun talk in defence of conspiracy theories. He pointed out that many people do in fact conspire, so it is unreasonable to dismiss a theory just because it makes claims of a conspiracy. Al Qaeda conspired to carry out the 9/11 attacks. We all accept this conspiracy theory. Bush, Blair & co. previously held to a conspiracy theory about why inspectors could find no WMD in Iraq (they thought Saddam was conspiring to hide them). Turns out they were wrong, but I doubt they would have been swayed at the time by someone pointing out that they were holding to a (shock horror) conspiracy theory. Indeed, it seems odd to even label it as such. In typical usage, the term "conspiracy theory" is reserved for theories which allege conspiracy on the part of Western governments. Yet history tells us that Western governments are not always ethical or trustworthy, and have been known to conspire against their enemies, so again, it isn't clear why "conspiracy theories" are necessarily illegitimate.

Emily Gill offered a "response-dependent" (or dispositional) theory of explanation, suggesting that S explains D iff epistemically virtuous agents would judge that S explains D. One oddity about this is that it collapses the distinction between true and apparent explanations, making it impossible to be faultlessly mistaken in one's judgment of whether S explains D. I guess the sense of "explanation" she had in mind is the more subjective one, perhaps tied to the notion of a hypothesis, or what I would call a possible explanation of the phenomenon. But my linguistic intuitions are pulled towards a more objective or factive sense of the word, whereby "S explains D" entails "D because S". Explanations provide "reasons why", and as I understand it, "S explains D" means that S is in fact a reason why D occurred, and not merely that S could or would (counterfactually) be a reason why D. On my view, false (attempted) explanations are no explanation at all. Put another way, not all possible explanations are actual explanations.

Matthew Minehan tackled the intersection of ethics with metaphysics. It was an interesting and novel approach, though I found his arguments unconvincing. First he argued that consequentialism has a "supervenience problem", because two intrinsically identical actions might have different moral status depending on their disparate consequences. But the obvious response for the consequentialist is to point out that the two actions actually have different (relational) non-moral properties too. For example, one might have the property of 'producing more happiness than any available alternative action', when the other does not. Such differences are what explain the moral difference between the actions.

Minehan's other main argument was that consequentialism collapses into ethical egoism under trope theory. Consequentialism states that we should maximize goodness, which is unproblematic if 'goodness' is a universal shared by all lives. But if each 'goodness' is an abstract particular (trope), then (the argument goes) 'goodness' is a different thing for each person, and consequentialism will just tell us to maximize that particular which is 'goodness' for us. But this doesn't follow. There's no reason why consequentialism couldn't simply tell us to give moral consideration to all the particular 'goodness' tropes (in relation to their 'weight'), without regard for whose they are.

Finally, Dave Chalmers argued that some ontological questions (e.g. whether there are mereological sums of arbitrary objects) might not have any determinate answers. He distinguished between "ordinary" and "ontological" assertions, mirroring Carnap's internal/external distinction. For example, we might ordinarily say that Santa lives at the North Pole (speaking within the Christmas mythology framework), or that Santa doesn't exist (within the framework of actual concrete objects), but philosophers might argue about whether fictional beings exist (perhaps abstractly) in some 'absolute' sense.

Now, the problem for ontology is that the world might not come with a built-in "absolute domain" which exhaustively specifies all the objects in the world. And if not, then we can't apply the existential quantifier to it, or make existence claims with determinate truth values. Instead, Chalmers suggests, we need to add a "furnishing function" which maps from worlds to domains. Some of these will be 'inadmissible' (for whatever reason), but perhaps there are multiple 'admissible' functions that could associate our world with a domain. If so, then supervaluation might yield at least some determinate answers (say, if no admissibly furnished worlds contain concrete unicorns, then we can hold "concrete unicorns exist" to be determinately false). But in other cases the answer may be indeterminate, e.g. if one admissibly furnished world contains numbers, and another doesn't, then there's simply no fact of the matter whether numbers exist in our bare world. Interesting stuff.
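A toy model of that picture (my own reconstruction, with made-up domain contents rather than anything Chalmers proposed): a furnishing function maps the bare world to a domain, and an existence claim gets a determinate verdict only when every admissible furnishing agrees on it:

```python
# Toy supervaluation over furnishing functions (my reconstruction, not Chalmers's formalism).
# A furnishing function maps a bare world to a domain of objects; a claim "Fs exist"
# is determinately true/false if all admissible furnishings agree, else indeterminate.

BARE_WORLD = "our world"   # stand-in: the bare world fixes no domain by itself

# Two hypothetical admissible furnishings of the same bare world:
ADMISSIBLE_FURNISHINGS = {
    "nominalist": lambda w: {"tables", "electrons"},
    "platonist":  lambda w: {"tables", "electrons", "numbers"},
}

def verdict(kind_of_thing):
    """Supervaluate the claim '<kind_of_thing> exist' across admissible furnishings."""
    answers = {kind_of_thing in furnish(BARE_WORLD)
               for furnish in ADMISSIBLE_FURNISHINGS.values()}
    if answers == {True}:
        return "determinately true"
    if answers == {False}:
        return "determinately false"
    return "indeterminate"

print(verdict("tables"))             # determinately true: in every admissibly furnished domain
print(verdict("concrete unicorns"))  # determinately false: in no admissibly furnished domain
print(verdict("numbers"))            # indeterminate: admissible furnishings disagree
```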

1 comment:

  1. Thanks for this Richard, missing the AAP NZ for the first time in 5 years hurts, and reading at least some summary of the papers you went to has been very interesting.

    Cheers
    David Hunter
