Saturday, April 25, 2015

The Best Case for Voting

To follow up on my last post, let's consider a Regan-esque case for voting.

The set-up: Suppose there are two candidates, Good and Bad, and a large population (e.g. several million voters).  90% of the population are unreasoning voters, and suppose that each such voter is (independently) 0.55 likely to vote for Bad, and 0.45 likely to vote for Good.  Suppose that the remaining 10% of the population consists of utilitarians, who are initially disposed not to vote (unless their voting will be instrumental to changing the result from Bad to Good).  I am one such, and I wonder whether I should bother voting.

The verdict: Any one such utilitarian, reasoning in isolation, can be extremely confident that their individual vote will make no difference, to the point that the expected value of voting is effectively zero (cf. J. Brennan, The Ethics of Voting, pp. 19-20).  That slight bias in favour of Bad, played out over millions of independent chance events (as we're imagining "votes" in this situation to be -- note that this is not an accurate model of real elections), creates a probability distribution with a vanishingly small chance of a tied result (much smaller than if everyone's votes were entirely random).
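
(For the numerically inclined, here is a quick sketch of that calculation in Python, assuming a hypothetical population of one million -- a figure the post doesn't fix. It computes the chance that the 900,000 unreasoning voters split exactly evenly, the only scenario in which one extra Good vote could decide the result.)

```python
# A minimal sketch of the stipulated model (explicitly not a model of real
# elections), assuming a hypothetical population of 1,000,000.
import math

N_UNREASONING = 900_000   # 90% of the assumed 1,000,000-person population
P_BAD = 0.55              # each unreasoning voter's independent chance of voting Bad

def log_binom_pmf(k: int, n: int, p: float) -> float:
    """Natural log of the Binomial(n, p) probability mass at k."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

# An exact tie requires exactly half of the unreasoning voters to pick Bad.
log10_p_tie = log_binom_pmf(N_UNREASONING // 2, N_UNREASONING, P_BAD) / math.log(10)
print(f"log10 P(exact tie) ~= {log10_p_tie:.0f}")   # roughly -1967 with these numbers
```

For comparison, re-running the same calculation with P_BAD = 0.5 (every vote a fair coin flip) gives a tie probability of roughly one in 1,200 on these numbers: still tiny, but astronomically larger than the biased case above.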

However, suppose that the utilitarians have all been persuaded of Regan's cooperative utilitarianism.  And we know this about each other.  Then we're disposed to play our role in producing the best collectively possible outcome, holding fixed the behaviour of others (who aren't disposed to cooperate with us to this end).  In effect, our power has been magnified, as we can reason not just for ourselves but for the whole group.  And whereas any one individual has no chance of overcoming the odds created by the rest of the population, the group as a whole very well can.

Bad is expected to receive votes from 49.5% of the total population (give or take a very little, to accommodate chanciness -- but the variation swiftly decreases as the population size increases). Before our group intervenes, Good is expected to receive votes from 40.5% of the total population (again, give or take a little).  Our group makes up 10% of the population.  If we all vote, we bring Good up to an expected 50.5% of votes, i.e. an expected victory (indeed, an almost certain victory given the stipulated odds and population size).  If the expected benefits of Good's electoral victory outweigh the aggregate costs of our all voting, it would seem the collectively best outcome is for us all to vote, and so we shall.  Since it's reasonable to play your part in producing the best collectively attainable outcome, it is reasonable for you to vote in this situation.
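
(Continuing the same illustrative numbers: here is a quick normal-approximation check of just how certain that victory is, assuming all 100,000 utilitarians turn out.)

```python
# Same toy numbers as above: if all 100,000 utilitarians vote for Good, Good
# wins unless the unreasoning voters' Bad count B exceeds 500,000 of 900,000.
import math

N_UNREASONING = 900_000
N_UTILITARIANS = 100_000
P_BAD = 0.55

mean_bad = N_UNREASONING * P_BAD                            # 495,000
sd_bad = math.sqrt(N_UNREASONING * P_BAD * (1 - P_BAD))     # ~472

# Good's total = utilitarians + (unreasoning voters who didn't pick Bad),
# so Good wins iff B < (N_UNREASONING + N_UTILITARIANS) / 2 = 500,000.
threshold = (N_UNREASONING + N_UTILITARIANS) / 2
z = (threshold - mean_bad) / sd_bad                         # ~10.6 standard deviations
p_good_wins = 0.5 * (1 + math.erf(z / math.sqrt(2)))
print(f"z ~= {z:.1f}; P(Good wins) ~= {p_good_wins}")       # indistinguishable from 1
```

On these numbers, Bad would need to outperform its expectation by more than ten standard deviations, so the "almost certain victory" really is almost certain.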

Almost.  One wrinkle: Not everyone in the group needs to vote in order to effectively ensure Good's victory.  So if there's any cost to voting (be it puppy torture or lollipops), the group might do better to have merely some portion of its members (between 90% and 100%) vote.  The exact result will depend on the details of population size, the expected benefits of Good's victory, and the expected costs of voting. I'm not really interested in the math here, though, so let's suppose it works out that expected utility is maximized if the group's members have a .91 chance of voting.  Then here's what you (and each other member of the group) should do: Roll a 100-sided die. If it lands on 91 or less, vote.  Otherwise, don't.
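
(Again, just for illustration: a rough expected-utility scan over the group's voting probability, using a normal approximation and entirely made-up benefit and cost figures. The .91 in the post is a stipulation; these particular numbers happen to land a little higher, around .93.)

```python
# A rough sketch of the "partial participation" idea: scan over the probability
# p with which each utilitarian votes, and pick the p that maximizes expected
# utility.  The benefit and cost figures below are made up for illustration.
import math

N_UNREASONING = 900_000
N_UTILITARIANS = 100_000
P_BAD = 0.55
BENEFIT_OF_GOOD_WINNING = 1_000_000   # hypothetical aggregate benefit of Good's victory
COST_PER_VOTE = 1.0                   # hypothetical aggregate cost per vote cast

def p_good_wins(p_vote: float) -> float:
    """Normal-approximate P(Good wins) if each utilitarian votes with probability p_vote."""
    # Good wins iff U + (N_UNREASONING - B) > B, i.e. U - 2B + N_UNREASONING > 0,
    # where U ~ Bin(N_UTILITARIANS, p_vote) and B ~ Bin(N_UNREASONING, P_BAD).
    mean = N_UTILITARIANS * p_vote - 2 * N_UNREASONING * P_BAD + N_UNREASONING
    var = (N_UTILITARIANS * p_vote * (1 - p_vote)
           + 4 * N_UNREASONING * P_BAD * (1 - P_BAD))
    return 0.5 * (1 + math.erf(mean / math.sqrt(2 * var)))

def expected_utility(p_vote: float) -> float:
    expected_voters = N_UTILITARIANS * p_vote
    return (BENEFIT_OF_GOOD_WINNING * p_good_wins(p_vote)
            - COST_PER_VOTE * expected_voters)

best_p = max((i / 1000 for i in range(1001)), key=expected_utility)
print(f"EU-maximizing voting probability ~= {best_p:.2f}")  # about 0.93 with these figures
```

The die-roll step then just implements whatever probability comes out of a calculation like this: each member independently votes iff their random draw comes in under that threshold.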

Of course, the real world is rather messier.  Our probability function over possible outcomes should be much "wider" than the above model would suggest. (We can't so confidently assign odds to the general populace's voting behaviour.)  The number of people who are even aware of, let alone convinced of, Regan's cooperative utilitarianism is probably under 100, which is not a large enough voting bloc to have a significant impact, and they aren't all aware of each other.

But perhaps some "rough enough" approximation could still apply.  There could be a large-ish bloc of citizens that you regard as relevantly like-minded to yourself when it comes to electoral decisions.  You might accept the common-sense moral principle that it makes sense to cooperate with like-minded individuals to collectively achieve better results, be disposed accordingly, and trust that others do likewise (at least inchoately).  And you might figure that, even if your individual vote can't make a difference, the collective votes of your like-minded bloc of citizens can make a difference. And so you do your bit, voting -- exercising your cooperative disposition -- in the hopes that the others (of like disposition) will do the same. (Due to the wider spread of your probability distribution, uncertainty regarding the size of your "like-minded" bloc, and the absence of evil demons causing trouble just to make a point, you likely won't have grounds for wanting less than 100% of your group to vote.  But if you did, I guess you could always add in the dice-roll step, after adjusting for the risk of fellow bloc members failing to follow through on the above reasoning, even inchoately...)

If the expectations that went into the above line of reasoning were all reasonable, then it seems the resulting behaviour -- voting -- is thereby rationalized.  Indeed, it may even be irrational not to vote, insofar as neglecting salient opportunities for collectively achieving better results is a failure of rationality -- and an election is surely a salient such opportunity, if anything is!

2 comments:

  1. "Then we're disposed to play our role in producing the best collectively possible outcome..."

    I argued against Parfit's version of that principle here: http://www.jstor.org/stable/2265292

    Replies
    1. That's very different. You discuss a view on which we should forego marginal benefits in order to act in the same way others do when a group of people performing that act collectively has good results. I agree this is a bad view. The Reganesque view I propound here does not have that feature. Your role in producing the collectively best outcome could be to perform a different action from what others are directed to do. This will be so if your performing the different act has higher marginal value than your performing the shared act. The only difference from purely individualistic Act Consequentialism is that it has all morally-motivated agents co-ordinate on the optimal equilibrium point, rather than allowing them to possibly fall into suboptimal equilibria. (See here for an introductory explanation.)

      [P.S. The Parfit 'group beneficence' view is motivated by cases whereby it seems "a group of individuals makes a difference even though benefits never diminish when one fewer person than any number of persons in that group contributes." I think such cases are misdescribed in a way that makes them incoherent. So the view you (rightly) reject is, in my view, unmotivated to begin with.]

