Why, when asking oneself whether to believe some proposition p, must one immediately recognize that this question is settled by answering the question whether p is true? Truth is not an optional end for first-person doxastic deliberation, providing an instrumental or extrinsic reason that an agent may take or leave at will. Otherwise there would be an inferential step between discovering the truth with respect to p and determining whether to believe p, involving a bridge premise that it is good to believe the truth with respect to p. But there is no such gap between the two questions within the first-personal deliberative perspective; the question whether to believe p seems to collapse into the question whether p is true.
Shah calls this phenomenon "transparency". I deny that it exists. Sure, we usually take truth to settle the question of what to believe, but there is no "must" about it. Consider the following scenario:
(Rage) Suppose that shopkeeper Apu suffers from a rare mental illness such that if he forms the belief that someone is a thief, his anger will overwhelm him and he will violently attack (and possibly kill) the accused. Apu knows this, and knows that such behaviour is wrong, but he cannot help it. One day he finds some food missing from the store. He wonders whether Nelson - a young troublemaker - is to blame. Apu deliberates over the matter within arm's reach of two large buttons, which activate and control his brain implants. Apu knows that if he hits the red button, the implants will cause him to form the belief that Nelson is a thief. But if he hits the blue button, they will cause him to form the belief that Nelson is innocent. (In each case, the implants will also wipe any prior beliefs that are inconsistent with this conclusion, including Apu's memory of pressing the button and of deciding to do so.)
Now, Apu is trying to decide what to believe. He decides that he has conclusive moral reasons to avoid falling into a violent rage, and so (given his illness) these are also reasons to believe that Nelson is innocent. However, upon considering the evidence, he begins to get the sneaking suspicion that Nelson is in fact guilty. Before the rage can overwhelm him, Apu desperately hits the blue button. He then believes that Nelson is innocent, and life goes on as normal. The End.
Isn't it obvious that, in the (Rage) scenario, the question of whether Nelson was truly guilty does not settle the question of what Apu should have believed? Certainly Apu, as I've described him, did not experience the phenomenon of "transparency". He was deliberating over what to believe, but for him this did not merely collapse into the question of what is true. Indeed, the result of his deliberations was that he should believe something that he (presently) thought was false. (That's why he needed to press the button: so that the brain implants could do what was psychologically impossible for him by willpower alone.)
So I think Shah is simply wrong about transparency. There can be reasons for believing other than truth-indicative ones, and this may be recognized by agents who are deliberating over what to believe.
I instead agree with Dretske, who claims that false beliefs are bad in much the same way as foul weather is bad. We just happen to dislike both of them, and understandably so: they can muck up our plans! But there is nothing essentially normative about belief. A false belief is not necessarily a bad one, and even if our beliefs do usually aim at truth, it simply isn't true that they must do so -- as the (Rage) scenario demonstrates. It is an unfortunately narrow view of doxastic rationality that takes only truth-indicative reasons into consideration. Why exclude moral and practical reasons in such a way? The sphere of reasons is larger and more diverse than Shah makes out.
Some might respond that a belief must, by definition, be governed by evidence or indicators of truth. Otherwise it's not a belief, but some other mental state: perhaps a 'delusion'. I don't think that's a helpful definition. The sort of mental state I'm concerned with is the type through which the agent represents the world as being a certain way, which influences the agent's behaviour, and which causes him to say things like "I think that such and such is the case." It is a separate question how these states are formed, or how responsive they are to evidence. But answers to those questions are not going to affect whether or not something qualifies as the sort of mental state I've described. Now, I think the word "belief" is a pretty good name for such states, but if others disagree, we can call them "sbeliefs" instead. What I'm interested in here is how agents (should) deliberate over how to represent the world, not the terminology we use to describe it.
Must sbelief aim at truth? Well, my earlier arguments suggest not. There are other (e.g. moral) sorts of reasons that can influence how we should represent the world to be. And that - rather than how we define the word 'belief' - is the important thing.