Friday, February 05, 2021

The Parochialism of Metaethical Naturalism

I've previously suggested that naturalism can't account for substantive boundary disputes (and I mean to turn that into a proper paper sometime soon).  But as I've been working on my Moral 2-Dism paper I've found another sense in which metaethical naturalism entails a troubling kind of parochialism.  It's this: to avoid the Open Question Argument, naturalists now hold that there is an a posteriori identity between certain moral and natural properties (on the model of water and H2O).  This entails that moral terms are 2-D asymmetric, i.e. have differing primary and secondary intensions.  This in turn means that what our moral terms pick out at a world may differ depending on whether we consider the world 'as actual' or 'as counterfactual'. But this is objectionably parochial: (our assessments of) the moral facts should not differ depending on our location in modal space.

Compare 'water'.  On Twin Earth, the watery stuff is something other than H2O.  Given that our watery stuff is H2O, we judge that Twin Earth lacks water.  But suppose an oracle informs you that you've been deceived: actually you've been on Twin Earth all along, and the actual watery stuff of your acquaintance has never been composed of H2O.  You'll now reconsider, and judge that Twin Earth (but not H2O-Earth) has water. Our 'water'-judgments are, in this way, "parochial": they depend upon our (historical) location in modal space.  We may need to revise them upon revising our beliefs about which possible world is actual.  And that seems fine for the term 'water'.  Natural kind terms are inherently parochial, insofar as they're about those kinds of things we found around here.
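To make the asymmetry vivid, here is a toy sketch (purely illustrative: the world labels and little functions below are my own made-up stand-ins, not part of any formal semantic framework):

```python
# Toy model of the 2-D asymmetry of 'water' (made-up labels, purely illustrative).
# Each world is summarised by the chemical composition of its watery stuff.
WORLDS = {"H2O-Earth": "H2O", "Twin-Earth": "XYZ"}

def considered_as_actual(world):
    """Primary intension: 'water' picks out whatever plays the watery role in that world."""
    return WORLDS[world]

def considered_as_counterfactual(world, actual_world):
    """Secondary intension: 'water' rigidly tracks the actual watery stuff, so a world
    considered as counterfactual contains water only if its watery stuff matches."""
    return WORLDS[world] == WORLDS[actual_world]

print(considered_as_actual("Twin-Earth"))  # 'XYZ' -- considered as actual, that stuff is water
# Whether Twin Earth counts as containing water flips with which world we take to be actual:
print(considered_as_counterfactual("Twin-Earth", actual_world="H2O-Earth"))   # False
print(considered_as_counterfactual("Twin-Earth", actual_world="Twin-Earth"))  # True
```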

Ethics should not be so parochial. In principle, we can assess the normative truths Ni that apply to any given possible world Wi: "If Wi, then Ni."  Such conditional normative judgments must be a priori if knowable at all: they should not suddenly jump around if an oracle informs us, "By the way, you're actually in W1 yourself. . . just kidding, really W2!" If I judged that N1 applied to W1 beforehand, I should continue to believe this (of W1) even after learning that I'm actually in W2. To do otherwise would seem to violate a kind of anti-parochial or universalizability constraint on moral reasoning. (Compare the non-universalizing thinker who, when asked to assess the morality of a hypothetical action, first demands to know, "But is the agent in question me?") Such constraints thus commit us to 2-D Symmetry.

To avoid parochialism, then, one must reject synthetic / a posteriori metaethical naturalism.  (See my paper for more details.)

23 comments:


  1. Hi Richard,

    It's an interesting argument, though I tend not to agree, because it seems to me that morality is in this sense 'parochial' (though the word seems kind of loaded to me), regardless of whether naturalism is true.

    More precisely, we have the following parallel.

    1. In the case of water, the assessment one makes about whether H2O is water in all possible worlds (viewed as counterfactual) is sensitive to one's information regarding whether, in the actual world, the watery stuff is composed of H2O. For example, if an oracle tells you the watery stuff around us is not composed of H2O, then you assign very high probability to the hypothesis that stuff made of H2O is not water in any possible world, viewed as counterfactual (some philosophers disagree with that, btw, and assign a much lower probability, which may be relevant to the Open Question Argument, but I'll leave it aside for now).

    2. Let X be any behavior. For example, we may take 'X:=someone with a UK middle class income, ordinary amount of info, access to the internet, etc., chooses to buy in a supermarket and eat meat because she likes it', and then we would have to make it more precise, but any general behavior X will do.
    Then, it seems the assessment one makes about whether X is immoral in all possible worlds (viewed as counterfactual) is also sensitive to some portion of one's information about the actual world, in particular human psychology. Consider the following hypothesis:


    H1:
    Humans have a mental system ("MS" for short) that gives an output associated with moral reprobation (to different degrees) given certain informational input. MS is of course imperfect, and it's not literally universal: for example, people committed to psychiatric institutions might have a sense that is widely different from the rest of the population, and there are very slight differences among the general population, as there are in other systems, e.g., color vision. But in general, MS is a species-wide system that reliably gives some specific outputs given certain inputs. That is one reason why people are not fighting each other all the time on the streets: moral disagreement is very salient, but it is a drop in an ocean of agreement, which happens in daily life without humans even realizing it. Also, the vast majority of disagreements are the result of differences in the input, even if those who disagree are not aware of that. This happens because the input is extremely complex, and the differences in the experiences of two people often result in different inputs in cases of disagreement.

    Now compare the two following scenarios:

    S1: You are told by an oracle (with sufficient probability, or however you make that assessment in your water examples) that you are in W11, in which H1 holds, and MS yields a 'moral reprobation' outcome when assessing behavior X, even as information increases arbitrarily (at least, as much as a human can hold).


    S2: You are told by an oracle (with sufficient probability, or however you make that assessment in your water examples) that you are in W12, in which H1 holds, and MS yields a 'not moral reprobation' outcome when assessing behavior X, even as information increases arbitrarily (at least, as much as a human can hold).


    It seems to me (again, assuming one can trust the oracle, but we're doing the same in the other case) that one should assign higher probability to X's being immoral in S1 than one should in S2. Then, it seems to me morality is 'parochial'.

    It might be objected that the above is an epistemic matter, but then, as far as I can tell so is the water case. I don't see a way around this one.

    1. Hi Angra, I don't think our moral verdicts should be at all conditional on conventional dispositions of moral judgment or 'reprobation'. What you describe strikes me as just another example of objectionable parochialism.


    2. Why?

      I mean, you are human. You make moral judgments using a human moral sense (or whatever you call it). If the human moral sense yields a 'wrong' verdict on X and yours does not, it's difficult to see how yours is not malfunctioning. It's not that you reasoned better about some property of the action (e.g., intent or consequences), given the "information increases arbitrarily" condition in my stipulation.

    3. My views on philosophical deference are explained here. I make moral judgments using my moral sense. I see zero reason to evaluate this relative to some species standard. I have no reason, independently of my own judgments, to expect the "human moral sense" to be accurate.



  2. More generally, I would argue that a priori judgments are also sensitive to a posteriori information. For example, I may reckon a priori that a certain statement TH is a theorem in, say, Number Theory, though it has not yet been published. I make that assessment a priori. If a good mathematician tells me she encountered a subtle error in the proof, I will properly lower my probabilistic assessment that TH is a theorem, and probably by a significant margin. If, in the future, a superhuman AI that previously made billions of assessments that humans later tested, all of which turned out true, tells me it found a subtle error in the proof of a complicated statement that has already been published as a theorem (even one I read and reckoned in the past I understood), and further can prove it is false, then I will very significantly lower the probability I assign, and with good reason.
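
    To make the kind of update I have in mind concrete, here is a toy Bayesian calculation (the numbers are entirely made up, just to illustrate the direction and size of the shift):

    ```python
    # Toy Bayesian update with made-up numbers: how testimony about a subtle error
    # should lower an a-priori-formed credence that TH is a theorem.
    prior = 0.95                    # assumed prior credence that TH is a theorem
    p_report_if_theorem = 0.10      # assumed chance of an error report although the proof is fine
    p_report_if_not_theorem = 0.90  # assumed chance of an error report when the proof really fails

    posterior = (p_report_if_theorem * prior) / (
        p_report_if_theorem * prior + p_report_if_not_theorem * (1 - prior)
    )
    print(round(posterior, 3))  # ~0.679: a significant drop; a near-infallible source
                                # (like the superhuman AI) would lower it much further.
    ```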

    1. Yes, I agree about the possibility of testimonial evidence for (or against) a priori knowable claims. For Chalmers' 2-D semantics, what's relevant to the primary intension must abstract away from that and invoke a more idealized kind of judgment -- what you would judge if logically omniscient, say. (For example, the primary intension of 'water' should be XYZ at Twin Earth even if we build into the description of Twin Earth that all the "experts" there insist that water is H2O. A logically omniscient agent would feel no pressure to defer to such fallible beings.)

    2. I think there are problems with an omniscience clause.

      For example, take rules of behavior in wolf societies. That is an empirical matter for humans - it is a posteriori. It is however a priori for wolves. Sure, they need some experience in wolf society to develop normal minds that will grasp those rules. But the same goes for the development of our minds to grasp any human-a-priori knowledge. If advanced aliens (say, a-wolves) evolved from wolf-like beings, one would expect their a-priori knowledge would be pretty different from ours (in particular, I'd say they would not know morality without studying humans or sufficiently similar primates, though they'd have a priori knowledge of a-wolf-morality; I know you disagree, but we do not need the morality example for this to work, but just different a priori knowledge).

      So, in re: the logically omniscient judge, if that includes omniscience of a priori knowledge (if it's just logic, it doesn't capture morality, so that would not work), one would have to specify a species, but that has the following problems:
      1. There are no sharp boundaries between species, but the omniscience stipulation seems to require something like a unique and sharp class of all things that are a priori knowable.

      2. The term "species" above shouldn't be limited to some biology definition. It encompasses all sorts of different kinds of minds. Just as rules of behavior in a social species A can be a priori for members of A but not for members of B (even if they're both about equally capable of, say, building advanced spaceships), there are all sorts of other things that can be a priori given a certain mind. I do not see why 'water is H2O' would not be so. Rather, I can see why it would almost certainly not be so in practice, in flesh-and-blood aliens in our actual universe (unless it's infinite, then I don't know). But I don't see why it would not be a priori for very different possible beings. In fact, if we move aside from flesh and blood and include future AI (even actual ones), maybe it will be a priori for them too.
      Of course, you might argue that the above is mistaken, that 'water is H2O', or the rules of behavior, etc., are not a priori to any potential being, and so on. But it seems to me that that would require some argument.
      At least, I do not know that moral naturalists generally would agree with you on those matters.

    3. If by "rules of behaviour" you mean the empirical sociological matter of what norms typically guide behaviour in the society, then these simply aren't a priori.

      If you instead mean something normative, like what rules ought to guide behaviour in certain stipulated circumstances, these are a priori and not limited to any given species. (I can do hypothetical ethics on behalf of aliens, not just humans.)

      So I don't see any problem here. That said, I am presupposing a broadly Chalmersian 2-D picture. Anyone who thinks that it's a priori knowable (to some other creature) that water is H2O is not going to find this framework comprehensible.


    4. I mean by 'rules of behavior' the sort of rules that are specific to each species, especially social ones but not limited to them, without taking a stance on whether they are normative in the sense you have in mind.

      For example, consider first simpler, non-social beings, say frogs, or some snakes. They do not need empirical observations to learn, say, the actual rules of sexual interaction between females and males of their species. They instinctively know them, even if they need some more or less standard environment to develop their minds (at least they need oxygen!). The same goes for rules of dominance/submission, cues that are taken by others of their species, etc. They do not learn these things empirically, though we humans would have to study them to know them. When brains/minds get more complicated, that sort of non-empirical knowledge does not go away, and indeed it composes a lot of what they understand about the rules of social interaction, even if more social stimuli are needed for the development of some of them, and also there are some rules that need to be learned because they're cultural or proto-cultural, or whatever one calls it.

      The above limitations are also the result of the way evolution works, but for other possible agents, such as some AI, it would not be like that. They could have non-empirical knowledge of all sorts of things, and they would not even need any kind of social stimuli for the proper development of them.

      Still, since you say that that is not a priori, we may consider the following classification:

      P is non-empirical for agent A iff agent A apprehends or has the capacity for apprehending P without any empirical observations. Otherwise, P is empirical for agent A.

      Clearly, by what you say, the non-empirical/empirical classification is not the same as the a-priori/a-posteriori classification you make. But a relevant question here would be: do moral naturalists who identify some moral properties a posteriori with properties that can be described in non-moral terms ("natural" properties if you like, though I don't believe the name is a good choice) make the same a-priori/a-posteriori classification you make, or something closer to the non-empirical/empirical classification above?

    5. Animals perform various behaviours instinctively. What proposition P do they thereby know? I don't see any reason to attribute propositional knowledge to them here. We might formulate some "rule" that describes the animals' behaviour. But nobody, of any species, could know the proposition that frogs behave according to rule R except through empirical observation. (We could imagine some agent that innately believes this proposition, but it wouldn't be a justified belief, and so could not constitute knowledge, until they actually go out and observe some frogs.)

      Also, don't forget that introspection is a form of observation. So self-knowledge (from "I exist" to "I naturally interpret these cues like so...") is also best understood as empirical and a posteriori.

    6. I was not thinking only of an agent A having the explicit thought "frogs behave according to rule R" in some language, but rather of A observing a frog F under circumstances C and expecting F to do B - an expectation based on some rule of frog behavior R that A grasps non-empirically. In other words, A intuitively assigns an extremely high probability to the scenario in which F does B (or has a very high credence, or however you construe that), regardless of whether A thinks in terms of probability, or frogs, or thinks linguistically at all.

      Now, if the above knowledge is possible, suppose that A has the capability for linguistic reasoning and knows rule R in the sense above, but not linguistically. Then A figures out linguistically that frogs behave according to rule R, by considering hypothetical scenarios with frogs and predicting non-empirically how the frogs will behave. If that holds, then an agent can know 'frogs behave according to rule R' non-empirically, propositionally, and linguistically. (Side note: funnily, plenty of theists believe that that sort of thing can be known non-empirically (though of course for very different reasons) by God (and some of them believe that God's knowledge is also propositional), angels, or whoever God chooses to give that knowledge to. Of course, I'm not relying on anything like theism.)

      Granted, there is the objection that the above is not knowledge as long as A has not made empirical observations (and so, allegedly, the extremely high probability assignment, credence, etc., is not justified). But that would seem to deny that the non-linguistic, non-empirical grasp of R and the intuitive probabilistic assessments/credences based on it constitute knowledge too (as A used them to reason its way to the linguistic version of the rule, and there don't need to be errors going from one to the other as far as I can tell), which raises a number of other issues.

      Moreover, even if the above objection works against knowledge of rules of frog behavior, as long as there is something else it does not work against, one can make the same argument that what is non-empirically knowable depends on the agent; so it seems the objection would have to be globally successful. But it's difficult to see how it would not fail when we're talking about non-empirical high credence in the reliability of an agent's own senses, faculties, and so on.

      Side note 2: even if an agent's high credence in the reliability of any of its own faculties also needs empirical confirmation to be justified (but I'm not sure how that would avoid being self-defeating?), suppose A makes the same kind of assessment it does for frogs for many other species S1, ..., Sn, then observes all of those and empirically confirms (because A somehow can justifiably trust its eyes, even if not its predicting abilities for unobserved species) the reliability of its non-empirical prediction mechanism, without a single failure. On the basis of that, A empirically reckons its own mental system for making predictions about unobserved species is reliable, and then on the basis of that comes to know linguistically that frogs behave according to rule R, without ever observing a frog, or being in contact with anyone who ever observed a frog, directly or indirectly.

    7. Here is a further argument based on the example above, which gets the result that possible non-empirical knowledge depends on species (or generally, the kind of mind of the agent) under weaker hypotheses:

      As before, agent A has an intuitive, non-empirical grasp of rule R of frog behavior. Based on that, A considers hypothetical scenarios involving frogs, and then doesn't go as far as acquiring the belief that frogs behave according to rule R, but rather only that 'probably, frogs behave according to rule R'. Now, if this is possible, then it seems that 'very probably' would be possible too, and so on, so knowledge that

      P1: Frogs behave according to rule R

      is also possibly acquired. An alternative would be that there is a probability limit, so an agent may gain non-empirical knowledge that probably, P1, but not probable enough to be certain. But that in turn results in two possibilities:

      a. There is a precise limit, so it's possible to gain non-empirically the knowledge that the probability that P1 is 0.53 (for example) but not 0.54 (or some other precise limit)
      b. There is a fuzzy limit.

      I think a. is weird, and b. results in continuity and no sharp boundaries between empirical and non-empirical knowledge, which also is interesting.

      Moreover, regardless of whether knowledge that P1 obtains is possible, or whether a. or b. holds, if at least the knowledge that probably, P1 is possibly acquired non-empirically, that is enough to conclude that there is some possible non-empirical knowledge about the behavior of frogs, and also that non-empirical knowledge varies with species (or, generally, kinds of minds). Furthermore, one doesn't even need to get to probably, P1 to obtain this result. All one needs is that A can justifiably attribute to P1 a higher probability than, say, a human or some other entity who does not have that intuitive non-empirical grasp of rule R.

    8. "A intuitively assigns an extremely high probability to the scenario in which F does B"

      I think there are two ways that such expectations could be justified. Firstly, it could be that F-doing-B worlds are just objectively more (a priori) probable than F-not-doing-B worlds, and any agent's prior should ideally reflect this. It could then be a priori knowable to any agent: if they have the right prior, they're justified in their expectation.

      Alternatively (as I expect you instead intend), the grounds for regarding F-doing-B-in-C as probable might be entirely contingent (based on details about how Fs are, that easily could have been otherwise). Perhaps one is oneself an F, and just has innate expectations of this sort about F-behaviour.

      One way to justify such an expectation would be by generalizing from introspection. One "internally" observes one's own dispositions, and reasons that others of one's species are likely to be similar. That's empirical a posteriori knowledge.

      Or one might, as you suggest, observe that one's innate expectations (stemming from cognitive module M) have proved reliable in the past; observe that M now yields an expectation of B, and so conclude that B is likely. Again, this justification is entirely based in observation.

      Finally, one could just trust one's innate expectations or "intuitions" about contingent matters of fact despite their contents not being objectively a priori probable, and despite not having any empirical basis for regarding the expectations as having stemmed from a reliable source. But then they are plainly not justified at all.

      In no case is there species-specific non-empirical justification for contingent claims that aren't generally a priori knowable.


    9. > "One way to justify such an expectation would be by generalizing from introspection. One "internally" observes one's own dispositions, and reasons that others of one's species are likely to be similar. That's empirical a posteriori knowledge."

      That would not seem to work in practice, for the following reasons:

      a. This would not seem to work for other species, e.g., expectations about the behaviors of predators on the basis of what they look like; i.e., these expectations would be unjustified.

      b. This would not seem to work for the behavior of inanimate objects (e.g., intuitive physics); i.e., also unjustified.



      > "Finally, one could just trust one's innate expectations or "intuitions" about contingent matters of fact despite their contents not being objectively a priori probable, and despite not having any empirical basis for regarding the expectations as having stemmed from a reliable source. But then they are plainly not justified at all."
      > "In no case is there species-specific non-empirical justification for contingent claims that aren't generally a priori knowable."

      Agents generally rely on their own intuitive probabilistic assessments, their own senses, their own scheme for classifying objects, and so on. Humans for example will intuitively (even as babies) classify separate objects, recognize human faces (implicitly assign minds to them), trust their own memories, and so on. Nearly all if not all of the other assessments are then built upon things like that (+ some other things, but at least one of those). They also trust their ability to do logic and make probabilistic assessments. Since knowledge is possible for such agents, this whole thing generally does not fall apart, so it seems this sort of reliance is at least generally justified.

      But if so, then the objection to the other case I presented seems odd. Agents are already, with justification, trusting their senses, and their faculties, and their intuitive scheme for classifying objects, and so on. Why not the expectations about the behavior of said objects? (including, for example, frogs, or water, or anything else).

      I guess it could be argued that it's a priori more likely that an agent's faculties are generally reliable, and that helps with trusting one's senses, intuitive, non-empirical expectations, etc.? That, however, does not seem to be how in practice things work: agents trust those things without reasoning like that, and indeed before they could do so (fortunately, as they would be unable to navigate their world otherwise).

      Moreover, if the reason it is justified to trust one's own senses and faculties generally is that those are a priori probably reliable, one might then mirror the above and ask whether moral knowledge is only possible a posteriori.

      Indeed, a posteriori, B (a moral agent) reasons that its own faculties are generally reliable, and given that B's own moral faculty (or whatever one calls it) says X is immoral, that gives a posteriori evidence that it is immoral. But why would B be justified in skipping that step and just judging a priori that X is immoral, yet not justified in also reckoning a priori that, say, objects that look like L generally behave according to rule S, instead requiring the intermediate 'a posteriori' step: 'My faculties are probably reliable, and they give me the distinct impression that L-looking objects generally behave by S, so that's probably the case'?

      Sure, one assessment is about a contingent matter, whereas the other one is not, but I do not see why that ontological distinction would have an impact on the epistemic justification of the agent's assessments. After all, whether an agent has reliable faculties is a contingent matter, and that applies also to the moral sense.

    10. > "it seems this sort of reliance is at least generally justified"

      We need to be careful here. I agree that it's generally legitimate for agents to rely upon default trust in their epistemic dispositions, in the sense that they may be justified even without offering an explicit defense of their dispositions. But it may be too lax to endorse the opposite extreme that they are automatically justified in such reliance. Instead, I think it depends upon the further question (which the agent themselves may not know the answer to) as to whether there is some good justification available -- either a priori based on the content of their judgments, or else indirectly based on more general reliability considerations (which could be either a priori or a posteriori in nature).

      I don't think it's obvious that unreflective animals have rationally justified expectations about predator behaviour (when experienced for the first time). Insofar as they're just relying on evolved instincts, we can understand how it serves them well in practice without attributing epistemic rationality to them. But I don't have a firm view on that.

      These are really interesting issues, so I appreciate your taking the time to discuss them here. I should probably return to some of these issues in a future post.

      But for present purposes, I don't think any of this much affects the argument of the OP. For again, the relevant 2-D judgments involve a distinctive kind of idealization that serves (I believe) to bracket these issues. For the idea of the "primary" dimension is that it models a kind of deep epistemic necessity. So even if one was a priori justified in some probabilistic expectations, so long as P remains a priori possible, there will be a possible world that, "considered as actual", verifies P.

      Going back to the start of this thread then: the relevant kind of "logical omniscience" clause is that it just builds in knowledge that some agent could know a priori "for certain", so to speak. Whereas any possible species-relative "a priori knowledge" would be, I take it, provisional and uncertain: a mere expectation which the agent, if rational, would have to regard as merely contingent such that they could easily conceive of how things could turn out otherwise. (By contrast, I cannot conceive of how it could actually turn out that the watery stuff of my acquaintance isn't water. That's a much deeper epistemic necessity.)

      Thanks again for the interesting discussion!

    11. Oh, I should also address the following:

      > "Moreover, if the reason that it is justified to trust one's own senses and generally faculties is that those are a priori probably reliable, one might then mirror the above and ask whether moral knowledge is only possible a posteriori."

      I actually don't think there's any a priori reason to expect moral faculties of arbitrary agents to be reliable. (It seems easy enough to imagine evolutionary circumstances which select for norms, and associated normative beliefs, that are objectively awful.) The only possible justification, on my view, is a priori based on the specific content of the normative beliefs (and whether it corresponds to self-evident normative truths). See my paper 'Knowing What Matters' for more on my views about moral epistemology, and why I think our first-order normative judgments have a kind of essential epistemic priority over our normative reliability judgments.

    12. Thanks for the replies and the links. Even though my views on the central issues are different (e.g., I think that, conditional on the human moral sense not being generally reliable, P(either an epistemic moral error theory or a substantive moral error theory is true) is almost 1; but I think the human moral sense is generally reliable, aliens would have morality*, etc.), I find your arguments very interesting.

      In re: the distinction between things an agent could know a priori for certain, and things they could not, I think there are a couple of potential difficulties:

      1. How probable is 'for certain'? A difficulty here seems to be that in order to define an agent with all a-priori-for-certain knowledge, there would have to be some sharp line between a-priori-for-certain (APC) and a-priori-but-not-for-certain (APNC).

      2. In re: APNC knowledge, here again we would need some limits that appear odd, e.g., there is some proposition S1 and some 0 < p1 < 1 for which it is possible to be justified in holding a priori P(S1) = p1, but not P(S1) = (1+epsilon)p1 for any positive epsilon (if that is not needed, I'm not sure how one avoids a fuzzy transition between APNC and APC, which it seems to me would be problematic for the "logical omniscience" clause).

      That aside, I can think of some potential answers for the a posteriori naturalist that do not rely on objections to the definition of the ideal agent you propose. One of them would be as follows:


      First, they might say that what is a priori for some kinds of minds is a posteriori for other kinds of minds. So, even granting for the sake of the argument that the distinction between APC and APNC holds, which property moral wrongness is would be a posteriori for humans (and probably, for any actual agents) even though it would be a priori for some ideal agent with all APC knowledge.

      Second, they might hold that something like the example in my first post in this discussion works, i.e., that one should assign higher probability to X's being immoral in S1 than one should in S2. I get that you disagree with that, but it would not have to be a problem within their own framework as far as I can tell (it might if they add further features).

      Granted, you reckon that that is objectionable parochialism, but it strikes some of us as correct (i.e., I think morality is in that sense 'parochial', though I would prefer the expression 'species-specific' or something like that, as I think 'parochial' is negatively loaded), so there is a chance they might come up with a similar response. Then again, I don't know whether a posteriori naturalists hold views that are compatible with the reply above (I have a number of background views similar to those of naturalists, but I do not take a stance on this matter, due to uncertainty about what it means for property A and property B to be identical, and the OQA).


      Side note:


      >"
      I actually don't think there's any a priori reason to expect moral faculties of arbitrary agents to be reliable. (It seems easy enough to imagine evolutionary circumstances which select for norms, and associated normative beliefs, that are objectively awful.)"

      As I see it, they would very probably just be talking about something else, e.g., species#842734 would be talking about 842734-moral-wrongness (and very probably, detecting it in a generally reliable fashion), not about moral wrongness, etc. Though I do of course think it's possible for some aliens to have some unreliable moral-like sense, I think it's pretty improbable in an evolutionary framework - though I can see how, within your framework, you would not expect general reliability.

    13. Interesting. It may be that a sharp distinction here requires that APC knowledge can be held with p=1. While that would seem crazy for fallible thinkers like ourselves, it may not seem so bad to think that a logically omniscient agent should be absolutely certain of logical truths and other a priori necessities. (One difficulty: surely they could not reasonably be certain of their logical infallibility? This suggests that objectively certain a priori justification should be distinguished from what any person could reasonably believe with absolute certainty. The talk of logically omniscient agents is a rough shorthand -- possibly useful for getting the rough idea, but not really the right way to explicate the fundamental concept.)

      > "which property moral wrongness is would be a posteriori for humans"

      I'm fine with that in principle -- my primary concern is to establish that it's a priori in principle. But again, I actually don't think that we do have any a posteriori basis for moral knowledge (that doesn't already presuppose that we have a decent a priori grasp of basic normative truths), so I think that to deny us a priori moral knowledge is to condemn us to moral skepticism.

      On your remaining disagreements: fair enough! I think we've probably reached bedrock there.

    14. Yes, I get the impression we have.

      There is one thing we agree about that I think I've not been clear about, though: I do think that we have a priori moral knowledge, and in fact that our moral knowledge is generally a priori (though sometimes it can be a posteriori). My earlier suggestion regarding whether all moral knowledge would be a posteriori was raised under some conditions that I do not think hold (the reasons I take no stance on whether there is an a posteriori identity of the sort under discussion have to do with the general concept of property identity, and with the OQA).

  3. Hi Prof Chappell, hi Angra

    An off-topic point but relevant to what you said above, Prof Chappell, namely that “Animals perform various behaviours instinctively. What proposition P do they thereby know? I don't see any reason to attribute propositional knowledge to them here”.

    I had read at Daily Nous a view of Prof David Wallace that had resonated with my pretheoretic understanding of knowledge attributions:


    https://dailynous.com/2020/11/17/things-philosophers-know-science-dont/#comment-link



    “But fish know a lot about hydrodynamics! You can tell because they’re so good at swimming. Their knowledge of hydrodynamics is tacit, not explicit, but it’s not any less real for that. Fish know way more about hydrodynamics than I do, and I have a physics PhD.”


    I was immediately drawn to Prof Wallace’s take that fish can indeed be said to “know hydrodynamics”, not in the sense that they know more hydrodynamics than physicists, but at least in the sense that they have some sort of tacit propositional knowledge of the form, say, “if the water current moves in direction x, I should move in direction y to avoid the predator”. All of this arose in my mind then because I was trying to figure out whether an agent who had in the past completed a rational practical deliberation to the effect that she should be φ-ing, but has now forgotten her reasons for φ-ing, can still be said to have tacit knowledge of her reasons even if she cannot access them by memory, but is still now cognizant of the fact that she had rationally deliberated in the past and that her now-forgotten reasons warrant φ-ing now.

    1. Hi dionissis,

      That's an interesting take on it. I tend to think that that is propositional and also that it works for 'should' knowledge as in your example, but I do not make those assumptions in my examples in order to avoid some of the potential objections.

      But going with your example, I would continue as follows: imagine a super-smart fish SF with the same knowledge. SF could figure out linguistically 'if the water current moves in direction x, I should move in direction y to avoid the predator' by contemplating such hypothetical scenarios and reckoning where its own non-linguistic intuition tells it to go.

      And I would say at least part of the knowledge in your scenario is acquired by SF in a non-empirical fashion.

    2. Hi Angra, thanks for the response.
      I saw in the comments that you had come in, and I thought of saying “hi” by posting something tangentially relevant to one sentence of Prof Chappell's (Prof Wallace’s comment). Your discussion with Prof Chappell is intellectually inaccessible to me: I understand the referent of “Twin Earth” and the definitions of “a posteriori”, “contingent”, etc., but the gist of the discussion and the philosophical issues it touches upon are all Greek to me (in Greek we express the same proposition by saying “it’s all Chinese to me”. 😊)
      I am saying all this to explain that I have nothing of substance to respond to your welcome answer!


  4. Hi dionissis,

    Thanks for the response and the "hi", and nice to see you again btw. :)

    Usually I don't have time to post much anymore, but there are still exceptions.

    In re: the discussion, if it's all Chinese to you (well, not all, since you got that point right), chances are I'm not being very clear, so sorry about that. Please let me know if you'd like me to clarify any point.

