Friday, November 20, 2020

What Should Editors Ask of Referees?

I've previously discussed how frustrating confused referee reports can be for the author, and how the system might actually be made more efficient by allowing authors to (briefly!) respond to these reports before a verdict is reached.  But I think there's a more systematic problem, in that too many referees (seemingly) base their verdicts on bad criteria, such as whether they can think of an objection to the paper. (One otherwise-brilliant philosopher once told me that he has a deliberate policy of rejecting any paper that he disagrees with!  Few would explicitly endorse this, I imagine, but many more may follow a similar rule de facto.)  So I've been wondering what steps a journal editor could feasibly take to try to counteract this.  In particular, are there specific questions that it would be worth asking referees to explicitly address in their reports, questions that would better reveal the truth about a paper's merits?

Wednesday, November 11, 2020

Legality is No Excuse

Suppose you discover that your elected representative is a (literal) Nazi, who enjoys using racist slurs and openly advocates for reinstituting slavery and apartheid.  Horrible, right?  Further suppose that whenever anyone objects to this, his co-partisans excuse it on the grounds that it isn't illegal: he is "100% within his rights" to have, and advocate for, atrocious views.  This would be a ridiculous defense.  It neglects the obvious fact that it's possible to exercise your legal rights in ways that are morally wrong.

Unfortunately, people seem extremely prone to conflating ethics and legality in just this way.  This can then be exploited by politicians to deflect criticism without offering any actual justification or defense.  Witness Mitch McConnell: "President Trump is 100 percent within his rights to look into allegations of irregularities and weigh his legal options."

It's nuts that a line like this has any rhetorical force.  If only our media, and our citizenry, were more philosophically competent!

Thursday, November 05, 2020

Political Beliefs, Uncertainty, and the Expected Value of Paralysis

Jason Brennan argues that most people can't have the faintest clue about the expected value of voting for either candidate in an election:

They don't know the difference in the value between the candidates and they don't know the probability of being decisive. If it's not rational to buy a lottery ticket in such situations, why would it be rational to vote?
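To see the structure of the worry, it helps to put it in expected-value terms (this is a schematic reconstruction, not Brennan's own formulation):

EV(voting for A) ≈ p × (V_A - V_B) - c

where p is the probability that your vote is decisive, V_A - V_B is the difference in value between candidate A winning rather than B, and c is the cost of voting.  The claim is that since ordinary voters can estimate neither p nor V_A - V_B, they are in no position to show that the first term outweighs the cost.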

But I think his framing is misleading.  It's true that a robust, precise, and dialectically persuasive estimate would take a lot of work.  But it would also take a lot of work (much more than Brennan does here!) to show that most people have no good reason to think that one candidate would be better than the other, or that they epistemically must be indifferent.  Yet that is what Brennan really needs to show, if he is to undermine the rationality of voting.

Tuesday, November 03, 2020

Hedonism, Egoism, and Implausible Restrictions

Rational Egoism claims that self-interest is uniquely rational; concern for anything besides your own wellbeing is (on this view) strictly irrational or unwarranted.  Value hedonists claim that pleasure is the only thing that has value: it follows that caring about anything besides pleasure is strictly irrational and unwarranted. (Though, due to the paradox of hedonism, it may be rational to try to acquire such irrational concerns, if this would actually serve to better promote your happiness.)  Both views are, I think, deeply implausible, in a distinctive kind of way.

Compare subjectivist (preferentist) views, which place no substantive restrictions on the content of what we may rationally prefer and pursue.  While these, too, strike me as implausible -- for broadly Parfitian reasons -- I can at least get myself into the mindset of appreciating their liberality.  "Who's to say what's really worth pursuing?"

But Hedonism and Egoism are different. These views purport to offer objective normative constraints on what really matters, but their substantive content just seems ludicrously unmotivated.  I mean, one's own wellbeing is indeed something that matters.  And pleasure is certainly of value.  So that much is right. But why on Earth would anyone think that either of these was the only thing that matters? (What could be more obvious than that love also matters?  That we shouldn't -- or at least needn't -- be indifferent to the possibility of our loved ones being secretly replaced by robots?) 

Thursday, September 17, 2020

The Optimal Use of Suboptimal Vaccines

There's an interesting piece in the NYT warning against hasty FDA approval of a (potentially) suboptimal covid vaccine.  I'm especially interested in the third reason they offer:

Third, the F.D.A. must consider the impact of an emergency authorization on existing vaccine studies. There are almost 40 continuing vaccine clinical trials, and several have been identified as promising. Those trials are blind, which means that the participants do not know whether they are getting the vaccine or a placebo. Would some participants drop out and opt for the newly authorized vaccine, undermining those studies? Would the trials be able to recruit additional participants who would risk getting a placebo once there was an authorized vaccine?

It would seem pretty messed up if we had to deprive millions of people of a potentially life-saving vaccine (say, one that might cut covid fatalities by half) just to ensure that we could still find unvaccinated volunteers to test (potentially) better vaccines -- ones that might ultimately bring about the swiftest end to the pandemic.  Ideally, we should want most people to make use of the suboptimal vaccine in the meantime, whilst incentivizing/compensating research participants to forgo the current vaccine and instead stick with their trials, to help society find an even better vaccine.

If only our society had invented some means of exchange that could be used to provide the necessary incentive/compensation!  Failing that, I guess we must simply watch as more people die...

Thursday, September 10, 2020

Against Prudish Research Ethics

We're all familiar with prudishness as it applies to sexual ethics: the prude thinks certain sex acts are immoral, even between happily consenting adults.  They also hold that sex work is inherently degrading, and that others should not be allowed to offer monetary compensation in exchange for one's sexual labour.  The prude is not willing to tolerate others engaging in consensual and mutually beneficial exchanges in this arena if they don't stem from what the prude regards as the "right" motivations and take place within "approved" institutional arrangements (e.g. marriage).  It's a deeply illiberal perspective that has thankfully fallen out of favour in recent decades.  We may, of course, have reasonable concerns about the exploitation of sex workers in practice.  But it's increasingly recognized that the best response to such practical concerns is to improve the options available to those in desperate circumstances, not to deprive them of (what they evidently regard as) their current best option.  So I think it's fair to say that liberals have won out over sexual prudes in our current cultural milieu.

Sadly, the reverse appears true within the arena of research ethics.  Research prudes think that certain kinds of medical research (e.g. involving voluntary infection) are unethical, even when all involved are happily consenting adults.  They disapprove of offering monetary compensation to research participants, to make participation worth one's while when it otherwise would not be.  They are not willing to tolerate others' engaging in consensual and mutually beneficial research arrangements if participation doesn't stem from what the research prude regards as the "right" (i.e. non-financial) motivations.  It's a deeply illiberal view that unfortunately still predominates, with cultural bastions like the New York Times routinely dismissing controversial ("queer") research possibilities as "unethical", without argument.  We may, of course, have reasonable concerns about the exploitation of research participants in practice.  But it's depressing how hastily people assume that the best response to such concerns is to paternalistically deprive others of an option that they might well have reasonably preferred over their available alternatives.

Perhaps the most important difference between the two arenas is that the research prude's illiberalism is vastly more harmful.  Medical research has immense positive externalities.  So preventing it has immense negative externalities.  You're not just harming the would-be research participants (not to mention undermining their autonomy), you're also harming all those who end up suffering from medical conditions that could have been cured or prevented had the research gone ahead.  Missed opportunities are rarely salient, and so do not provoke the outrage that they truly deserve.  But on any reasonable estimate, the death toll of research prudishness is surely monstrous.

Monday, August 24, 2020

Synopsis of Parfit's Ethics

I previously shared my manuscript on Parfit's Ethics.  But I figure I might as well offer a bit more detail about what interesting content can be found in each chapter, in hopes of enticing a few more readers to give it a look.  (I should be able to take into account any comments received within the next few days; after that, I'm not sure whether I'll get a chance for further revisions.)

1. Rationality and Objectivity - A simple summary of Parfit's arguments against Rational Egoism and Normative Subjectivism.  Briefly evaluates the arguments against Parfit's non-naturalist normative realism.

2. Distributive Justice - Explains Parfit's priority view, and suggests a way to improve upon it (that basic goods or welfare contributors, rather than welfare itself, might have diminishing marginal value).  Explains away the arguments of anti-aggregationists.  Summarizes Parfit's views on "moral mathematics" and collective harm. 

3. Character and Consequence - Explains "rational irrationality", and extends it to critique Parfit's understanding of "blameless wrongdoing" (or virtuously-acquired viciousness). Defends self-effacing moral theories.  Assesses Parfit's argument that common-sense morality is directly self-defeating.

4. The Triple Theory - Assesses Parfit's Triple Theory, including a critique of the underlying motivation for his convergence-seeking project.

5. Personal Identity - Summarizes Parfit's key arguments for reductionism about personal identity, and adds a related "container/content" argument of my own (in section 5.1).  Along the way, also argues (i) that Parfit was mistaken to view reductionism as metaphysically contingent, and (ii) that Lewis' 4-D view is just a terminological variant of Parfit's reductionism.

6. Population Ethics - Briefly surveys the Non-Identity Problem and the Repugnant Conclusion (specifically, whether it can be avoided without having even worse implications).

Wednesday, August 19, 2020

Vulcan Interests and Moral Status

Inspired by David Chalmers' recent Zoom talk on 'Consciousness and Moral Status': consider affectless (but otherwise phenomenally conscious) vulcans. They can perceive, and think, but have no positive or negative feelings of any kind.  Do they matter?  Is there anything in their lives that is (intrinsically) good or bad for them?

I think these are importantly distinct questions.  One way to see this is to note that even if (as I am inclined to think) their vulcanized lives contain no basic goods or bads, this very fact might be (extrinsically, comparatively) bad for them.  We would have strong moral reasons to devulcanize them, if possible, and provide them with the capacity for valenced experiences.  Crucially, this moral reason stems from concern for the very being that already exists: It is importantly different from simply bringing into existence a new conscious being where none was before.  So vulcans have one important kind of moral status -- they are morally considerable individuals -- even though in the ordinary run of things (i.e., while remaining a vulcan) nothing we did could be basically good or bad for them: their welfare is stuck at zero.

Thursday, August 13, 2020

Innocuous vs Unjust Systemic Discrimination

It's now widely recognized that problematic discrimination need not involve malicious attitudes: certain political structures might systematically disregard the interests of ethnic minorities, for example, even if nobody involved was "racist" in the traditional sense of harbouring prejudicial attitudes.  Still, sometimes people -- even highly-respected philosophers! -- move from this to the opposite error of assuming that any disparity in group outcomes is in itself constitutive of unjust discrimination against the disadvantaged group.  I've found this especially common in debates about QALYs. (One may, of course, raise reasonable questions about how QALY values are determined in practice: perhaps they fail to accurately track the welfare facts in some cases, adjusting down for certain disabilities that are actually harmless. But my target here is the more sweeping complaint that any form of the metric will be "ageist" and "ableist" simply in virtue of its being systematically disadvantageous for the elderly and (detrimentally) disabled, relative to an alternative system that sought to indiscriminately save as many lives as possible.)
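To illustrate the structure of the metric with a toy calculation (the numbers here are made up purely for illustration): QALYs gained = life-years saved × quality weight per year.  Saving a healthy 20-year-old might be scored at roughly 60 × 1.0 = 60 QALYs, whereas saving an 80-year-old with a serious chronic condition might be scored at roughly 5 × 0.7 = 3.5 QALYs.  Any QALY-maximizing allocation will therefore systematically favour the first patient over the second.  That disparity is built into the metric; the question at issue is whether the disparity, by itself, constitutes unjust discrimination.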

Tuesday, August 11, 2020

Moral Theory and Motivational Contingency

Parfit sometimes suggested that Act Consequentialism might best be understood, not as a moral theory, but as an external rival to morality.  The implicit thought seems to be that morality involves some essentially social mode of thought (perhaps concerned with public codes, or norms of praise and blame, or what principles you'd want everyone to accept), that AC does not depend upon any such mode of thought, but that it nonetheless offers an account of what you really ought (rather than just "morally ought") to do.

I'm not a fan of that conception of morality, largely because it seems to devalue morality, depriving it of its interest and significance -- if it is not what you really ought to do, then who cares what you "morally ought" to do?  Putting that worry aside, though, it does seem interesting that (only?) consequentialists are apt to regard their normative commitments as not contingent upon their theory's being the true moral theory.  That is, if I learned that Kantianism was true, my response (besides incredulity) would be to say, "So much the worse for morality."  In that sense, I don't particularly care about morality, and don't think anyone else should either.  (You should lie to the murderer at the door to save lives, and I don't care about any sense of "wrong" in which doing more good is "wrong".)  But I suspect that, if the roles were reversed, Kantians would not react analogously.  (Can any Kantian readers confirm?)

If that's right, I wonder if anything interesting follows from this observation.  Might it suggest that consequentialism matters more than other theories?  Others seem to depend upon inheriting the normative halo of "morality", whereas consequentialist properties have such clear intrinsic significance that no further halo is required!  Ha, maybe.  More neutrally, it may just be that other theories are so closely tied to some essentially social mode of thought that it's harder to make sense of them mattering without also being what matters morally -- and such a connection needn't have any implications for how much they matter.

A related question (which I owe to David Enoch): how -- if at all -- would (and should) your motivations change upon learning that the meta-normative error theory is correct, and nothing objectively matters at all?  Insofar as you had proper de re desires for what you previously took to be good, it isn't clear why these would be contingent or conditional upon your normative (or meta-normative) beliefs.  Our concern for each individual's welfare, at least, presumably should not be conditional in such a way.  Perhaps deontological concerns -- e.g. to avoid violating rights even if it would overall help people more -- are more appropriately conditional on their actually being right, leaving deontological moral motivation more vulnerable to the threat of moral scepticism?

Another possible exception, even for consequentialists, might involve distant-future non-identity cases (about which it seems difficult to muster direct concern and motivation).  Consider some case where we can either help the already-existing global poor (with no expected downstream effects), or we can bring it about that a whole generation of people a million years hence will be much better off (than the different generation that would otherwise exist in their place).  For purely theoretical reasons, I currently prefer the greater distant good.  But if I came to believe in error theory, I would be much more tempted to just go with my gut, indulge my narrow sympathetic impulses, and help the already-existing people.  (Perhaps a more virtuous consequentialist would have a less theoretically-contingent commitment to the distant future!)

Any thoughts?  Which of your moral motivations do you regard as theoretically contingent in these ways?