My Epistemic Argument against Vigilantism assumes that if the vigilante is probably mistaken in thinking that X is immoral, then they shouldn't coercively interfere. Alex denied this, suggesting that if the stakes are high enough, even a low probability that X is morally outrageous could suffice to justify such coercion. Better safe than sorry, you might say. But this is a deeply misguided way of thinking.
First, note that this approach leads to absurd consequences. Suppose I'm 99% sure that abortion is perfectly permissible, or at least nowhere near as bad as genocide (as some pro-lifers claim it is). But I do think that preventing genocide is over 100 times as important as the right to abort. Does my 1% concession to the possibility that abortion might be atrocious therefore commit me to denying women the right to abort? (If blasphemy might be the ultimate sin, should we outlaw it just in case?) Surely not! What's atrocious is to force baseless (or probably mistaken) moral views on others.
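To make the absurdity concrete, here is the expected-value arithmetic the "better safe than sorry" reasoner would have to run, using the stipulated numbers above (99% credence in permissibility; the feared atrocity at least 100 times as bad as violating the right to abort; the units of "badness" are arbitrary):

```latex
% Stipulations: P(permissible) = 0.99, so P(atrocious) = 0.01.
% Disvalue of wrongly banning (violating the right to abort): 1 unit.
% Disvalue of the feared atrocity: at least 100 units.
\begin{align*}
  \mathbb{E}[\text{disvalue} \mid \text{ban}]
    &= 0.99 \times 1 \;+\; 0.01 \times 0 \;=\; 0.99 \\
  \mathbb{E}[\text{disvalue} \mid \text{permit}]
    &\geq 0.99 \times 0 \;+\; 0.01 \times 100 \;=\; 1.00
\end{align*}
```

Since the second quantity exceeds the first, this style of reasoning recommends the ban, despite a 99% credence that banning is precisely what violates people's rights.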
There's something bizarre about trying to calculate the "expected value" of possible deontic outcomes. Despite the superficial similarity, it's not really like an expected utility calculation. Let us distinguish real probabilistic reasons from mere probabilities of real reasons. If Chaos Theorists determined that there was a 1% chance that U.S. abortions would cause a real genocide on the other side of the world, then that might justify banning abortion. There is a real reason here that definitely exists, namely the unlikely chance to prevent genocide. But in the earlier ("abortion is genocide") case, it is unlikely that there is any real reason at all. (If abortion is permissible, as I assume is most likely, then there's nothing at all to be said for banning it.)
It's difficult to formalize the exact difference here (suggestions welcome!), but I hope you get the intuitive idea. "Expected utility" and "expected deonty" (for want of a better term) invoke two different moral 'levels'. EU is purely first-order, attempting to determine the one true answer to what we have most reason to do. ED is second-order, weighing up competing responses to the former question, by balancing the likelihood and putative force of merely possible reasons.
But here's the thing: we should do what we have most reason to do. That's all. There's no need to insure ourselves against merely possible reasons. They have no actual normative force. It doesn't matter how incredibly forceful a counterfactual reason is; it carries no weight in the actual world.
Stronger still: no moral hedging is allowed. We must work with the reasons as we take them to be. Hence, the contribution of moral beliefs to practical reasoning is all or nothing. If I reasonably believe that abortion is permissible, then my moral reasoning will start from the unhedged premise that abortion is permissible. Not "99% likely to be permissible"; just "permissible", simpliciter. Beliefs may come in degrees, but reasons don't, and we base our decisions on the latter.
This shouldn't be confused with dogmatism. We should question our moral beliefs, of course. And as our credence falls, we should become more reluctant to use them as basic premises in our practical reasoning. We may come to doubt that there are real reasons there after all, and hence reject the premise entirely. Nevertheless, it is always a binary decision. We can reason from the assumption that abortion is permissible, or we can refrain from making that assumption. What we can't do is reason from the probability of its permissibility. This is because we must judge what actual reasons we think there are, and work from there. Rational agents cannot reason from what they take to be merely possible reasons.
Hence, if you judge that X is in fact permissible, you should give no weight at all in your practical reasoning to the possibility that X is the gravest evil in the universe. [But don't let English grammar mislead you: that 'should' needs to take wide scope!] Of course, the possibility might influence your theoretical reasoning -- you should take extra care when judging X's permissibility, since you wouldn't want to get this one wrong! But given that you are reasonably sure (I mean, reasonable in being mostly sure) that X is permissible, the alternative possibilities provide you with no reasons whatsoever.
It's worth noting that this theoretical issue has important practical implications. It's not unusual for both pro-lifers and vegetarians to argue from uncertain obligations to actual ones. (That is, they argue from the seriousness of "getting it wrong", to the conclusion that a responsible moral agent will "play it safe" and oppose the killing of fetuses and/or animals.) But if I'm right here, then we should reject such arguments.