If we want to know whether a certain policy or legal change would be a good idea, we should presumably consider the expected consequences. Very roughly speaking (abstracting away from uncertainty and hence the need to weight various possible options by their probability), we should carefully assess what would happen if the policy is / isn't instituted, and we can then assess which result is morally better. (I don't mean to assume consequentialism here -- if a policy violates rights, for example, one might deem it morally "worse" on those grounds.) Call this approach "thinking realistically about policy outcomes." (Also known as "thinking like an economist.")
It's surprisingly rare (excepting economists, of course). Most people seem to just think about the most salient aspect of a proposed policy, and form a positive or negative impression based on that. E.g., (1) Americans love the mortgage tax deduction -- it helps you to afford a house! Never mind what a regressive policy this is, or how it distorts the housing market. (2) The Farm Bill's agricultural subsidies help hard-working farmers! Again, never mind that they're opposed by basically every economist from across the political spectrum as wasteful, distortionary, and anti-competitive (harming the global poor). (3) A revenue-neutral gasoline tax would cost me at the pump! Never mind that it would help disincentivize harmful pollution, congestion, etc., and reward those who use less.
But this lack of realistic thinking doesn't just affect the unreflective "folk". Academics too -- perhaps especially the more left-leaning ones -- commonly seem to echo this mistake, albeit in more sophisticated ways. A few examples spring to mind: (1) Bioethicists regularly neglect the "unseen harms" of over-regulation. (2) My previous post on sex selection highlighted the fallacy of moving from moral qualms about people's having certain preferences to the policy conclusion that acting on such preferences should be banned. The first comment responding to my post then repeated the very same mistake! (3) A Facebook friend suggested that kidney markets are bad because they create incentives for the rich to keep people poor (on the off chance that they one day need an organ, and so could get it for slightly cheaper, I guess?). This was suggested by a very intelligent philosopher (and 'liked' by another). But is there any realistic story to tell of how this "incentive" could be anything other than completely negligible in magnitude? I worry that a certain style of ideological thought is substituting for carefully considered empirical predictions about the particular situation at hand.
These various examples are all quite different from one another. But they all serve to illustrate how easily we can be led to (possibly misguided) policy conclusions by methods other than thinking carefully about the likely outcomes of policies. I find the ubiquity of this tendency both surprising and worrying. Anyway, it seems useful to identify the general phenomenon, so as to more easily guard against it in future.