Monday, May 29, 2006

Ends and Evidence

We would usually take ourselves to be going wrong if our beliefs conflict with the available evidence. But I think it would be a mistake to conclude from this that evidence has any intrinsic normative significance. We shouldn't (fundamentally) aim to have justified beliefs. Rather, we should aim to have true beliefs, and evidence is merely our best means of assessment. To think otherwise would amount to a fetishistic confusion of means and ends. Rationality is like wealth: useful, but easily mistaken for something more.

The point generalizes. Consider any worthy goal G. What matters is achieving the goal. To improve our chances of achieving it, we should tend to respond rationally to evidence, and act in a way that is "justified" in this conditional sense. But it would be a mistake to think that this justification is what matters. To say, "You ought, given the evidence, to X" does not necessarily mean that you ought (objectively) to X. The evidence may be misleading. So if we interpret it as a kind of wide-scope requirement [as Clayton suggests], it may be that you ought to reject the evidence rather than do X. (That might be irrational. Does this matter? We're supposing the normative goal here is G, not "being rational".)

Even if there's something to be said for being rational (perhaps it is an intrinsically valuable character trait, for example), it at least seems to be "counter-deliberative", as Clayton explains:
Let us say that the rational choice is the choice that sides with what the agent not unreasonably takes the balance of reasons to require. Whenever the agent is in a choice situation, she will not be able to distinguish in thought the choice that is on balance best supported by the reasons from the one that is best supported by the reasons as she takes them to be. But which one is the BASIS of her choice from her perspective? The reasons and not the reasons as she takes them to be.

This ties in with Kolodny's transparency account: the rational choice is that which it seems we ought to do. From the first-personal perspective, we cannot distinguish appearance from reality. (To believe that P is to consider P true.) In deliberation, we hope to identify the best choice. We do this by settling on what seems best. But we wouldn't treat the appearances as having any independent force, over and above the objective facts. I can treat the fact of P as a reason, but not (typically) the fact of my believing P. Note the strangeness of the following reasoning: "I can't decide whether it would be best to X or to Y. I suppose X-ing would be more rational. Oh, so that's another thing in its favour! I shall do X!" Also, Clayton points out that we would advise people to believe what's true, not what they have evidence for.

That's all to suggest that we shouldn't confuse the normativity of ends and their rational means. I'm sure there was some further reason why I wanted to write about this topic, but I can't for the life of me remember what it was. So I'll finish with a quote from Hare:
The winner of a game of backgammon is the player who first bears off all his pieces in accordance with the rules of the game, not the one who follows the best strategies. Similarly in morals, the principles which we have to follow if we are to give ourselves the best chance of acting rightly are not definitive of 'the right act'; but if we wish to act rightly we shall do well, all the same, to follow them.


  1. Okay! I grant the whole post except for the part I really think has gone wrong.
    What is that elusive something, Richard?
    Normativity? How could something be irrational and yet normative?!
    I consider myself a philosopher, so I'm extremely curious to hear anything non-rational from you!
    Although I suppose rationality serves as a means, I daresay true beliefs must be justifiable enough all the same!
    In the meantime, rationality isn't necessarily or merely evidential.

  2. While the ultimate goal may be G, couldn't we say that it is another of our goals, derivative though it may be, to choose the best strategy for accomplishing G? And wouldn't following the evidence be the best strategy? While I agree that you are right concerning the "objective" standpoint, my worry is that there is no such thing as an objective standpoint from which anyone can see things. Yes, objectively the goal and the strategy are different, but I simply do not see any reason for each person, from their non-objective standpoint, not to equate the two.

  3. You have mirror neurons, so you can see other people's points of view. If you want to be objective, then, I suggest you just need to cultivate them, along with a healthy cynicism towards your own positions.

    Of course, many people are terrible at it, and some are almost entirely incapable (e.g. sociopaths).

  4. I think truth is indeed the ultimate goal of philosophers, scientists, and probably all life forms. It is quite a slippery quality, though, and in pure form it pertains only to hypothetical statements. Statements about the world all fall short, in that there may be error.

    Perhaps the best way to view truth about the real world is as an asymptote: we can talk of more and less true, even if we can never talk of 100% truth. The more information we have, the closer to 100% we will get. We may even reach 100% but never know it for sure; we can't even know how close we are.

    A good analogy comes from my work in optimization using 'hill climbing'. For a highly complex problem, we start with a 'feasible solution', meaning it satisfies the constraints and could be optimal (some problems have no feasible solutions at all, by the way). Next we attempt to find a better solution. So long as we have a clear problem definition, it is easy to show whether one solution is better than, worse than, or equal to another. If the newly generated solution is better, we can discard the old one (or keep it for the record) and continue from there. This means our solutions can only get better. But it also means we can't know how far we are from the optimal solution, nor how long it will take to get there. We may be stuck eternally in a local optimum.

    I think our progression towards truth is like that. We are climbing a mountain range cloaked in cloud, and even if we think we're at the top, we could just be at the top of a lesser peak. Our altimeter tells us we're higher than we've ever been, but ultimately we can't be sure that we're even on the right mountain. The more people doing it, the longer we do it, the more sure we get that we're at the top, but ultimately we can't really be sure that we're not just halfway.

    Belief is the point we are currently at: the set of statements we currently hold, made as consistent as we can get them with the evidence. Or, in the other analogy, the current feasible solution.

    So, Richard, you are quite right that justified belief is not a particularly ambitious goal. Justification is a tool for improving our position, and it is therefore very useful to have, but having perfect justification is not our aim. That merely takes us to the limit possible from any given set of statements and evidence. We have had an exquisite ability to justify things since language began, yet our knowledge of what is true has steadily increased nonetheless. This is because evidence has continually increased and slotted into the web of justifications, leading us up the mountain of truth.
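For readers unfamiliar with the optimization analogy in the comment above, here is a minimal sketch of hill climbing in Python. All the names (`hill_climb`, `score`, `neighbour`) and the toy objective are illustrative, not from the original comment: the point is just that each accepted move improves on the last, yet the procedure can never certify how far it is from the true optimum.

```python
import random

def hill_climb(score, neighbour, start, steps=10_000):
    """Keep a feasible solution; repeatedly propose a neighbouring
    solution and move only if it scores strictly better. The result
    can never get worse, but it may be a local optimum rather than
    the global one -- and we have no way of telling which."""
    current = start
    for _ in range(steps):
        candidate = neighbour(current)
        if score(candidate) > score(current):
            current = candidate  # discard the old solution
    return current

# Toy example: maximise f(x) = -(x - 3)^2, whose true peak is at x = 3.
best = hill_climb(
    score=lambda x: -(x - 3) ** 2,
    neighbour=lambda x: x + random.uniform(-0.1, 0.1),
    start=0.0,
)
```

Note that the caller only ever sees a sequence of improvements; nothing in the procedure itself reveals whether `best` sits atop the mountain or merely a lesser peak.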

