Friday, February 6, 2015

Natural law and participation in God

According to Natural Law, the right thing to do is that which accords with one's nature. But what if something really nasty accorded with one's nature? This is, of course, akin to the objection to divine command theory from the question "What if God commanded something really nasty?" Both theories can give the same answer: "That's just impossible." God couldn't command something really nasty and there just are no possible natures of rational beings that require such nastiness. As far as that goes, this is fine, though at this point in the literature there are two more steps in the dialectic to think about.

I want, however, to consider a side-step. Why is it impossible? One could think this is just a brute and unexplained impossibility, but that is intellectually unsatisfying. Even apart from the Principle of Sufficient Reason, we don't like brute facts that look like too much of a coincidence. And it looks like too much of a coincidence that all of the nasty cases are impossible. We want an explanation.

The divine command theorist has a pretty immediate explanation. We're talking about God's commands, and necessarily God is perfectly good or, if one prefers, perfectly loving. (Of course, those divine command theorists who want to define the good, and not just the obligatory, in terms that involve divine choices cannot give this answer. But so much the worse for that version of divine command theory.)

I think the Natural Law answer can be similar. A nature is an essential (in the medieval sense, maybe not the modal sense) mode of participation in God. It's impossible for a rational being's essential mode of participation in God to require nastiness, because of the nature of God. (Why is God's nature that way? Maybe here we have a brute necessity. A single brute necessity is much less problematic than a whole slew of them. Or maybe we can talk of God's perfection here.)

So there is an explanatory gap that Natural Law points to, and bringing in God closes that explanatory gap. Are there other ways of closing that gap? Maybe. One would be a heavily Platonic theory on which natures are modes of essential participation in the Form of the Good. The Platonism here would be more like Plato's own Platonism than our more anemic contemporary Platonism. The participation relation would not be exemplification as in contemporary Platonism, but something ontologically meatier, more like the participation in the theistic version of Natural Law.

In any case, the question of why something nasty couldn't be required by one's nature points towards serious metaphysics.

15 comments:

  1. “But what if something really nasty accorded with one's nature?”

    Don’t notions of nastiness follow upon nature? In an objective sense, what is nasty for us surely just is nasty precisely because it doesn’t accord with our nature qua human beings.

    But if you mean subjective nastiness, then two responses come to mind:

    (1) insofar as our desires are directed at the fulfilment of our nature, we will find what is good for us pleasant rather than “nasty”;

    (2) because our desires are, at present, distorted and disordered, we may find some things “nasty” that we ought to find pleasant (eating healthy food for example, beginning to pray, &c.).

  2. Hi, Alexander

    I'd like to ask two questions:

    1. What if the agent is, say, a strong AI, programmed with values radically different from human values?
    By "values" I don't mean "moral beliefs"; I mean that the AI values things in a way that is vastly different from the way humans value things.
    Maybe the AI places no positive value on human lives, but does place a positive value on using all available resources to increase its long-term survival chances. It uses the planetary resources as it sees fit - getting ready for future competition with other potential AIs made elsewhere - without any consideration for the consequences for humans, or it just exterminates humans to prevent them from making any AI that might oppose it in the future.

    I guess a reply might be that the AI is not a rational being, but why would it not be rational?
    We may stipulate that the AI is capable of reason, language, math, etc., on a level far beyond human capacity.

    2. What if the agent is an alien that evolved in a very different way, and as a result, it also values things very differently? (alternatively, the beings were genetically engineered by human or post-human beings, millions of years into the future).
    For example, maybe they're like Yudkowsky's "Space Cannibals", or something like that (I don't share the fictional character Subhan's view on morality, but I find the example interesting in this context).

    As before, one potential answer is that they're not rational. But that does not seem plausible to me.

    I guess an alternative would be to say that such entities are not possible if God exists, but that would make the theistic Natural Law theorist (or, for that matter, a defender of DCT, given similar questions) clearly committed to, among other things, certain claims about exobiology encompassing the entire universe (which they might or might not mind, but which would be interesting to clarify, in my view).

  3. Angra,

    “I guess a reply might be that the AI is not a rational being, but why would it not be rational? We may stipulate that the AI is capable of reason, language, math, etc., on a level far beyond human capacity.”

    Bracketing (very important) questions about the distinction between artifacts and natural substances, doesn’t this view assume that operations of reason are material operations? (Or would the artificially intelligent agent you have in mind have some immaterial aspect?)

  4. Thomas,

    I was not assuming, but rather making a number of implicit assessments regarding what an advanced AI can do. I wasn't making an assumption or implicit assessment as to whether operations of reason are material operations, though. I admit that even after years of thinking about it, I struggle with the concept of "material", so I try to steer clear of any such claims or assumptions.

    Terminological issues aside, a theist might claim that if an AI like that (i.e., one capable of reason, language, math, etc., on a level far beyond human capacity) is possible, then reason is a material operation.
    However, that conditional would have to be argued for, and if it holds, I would take it to be a good argument against the view that reason is not a material operation.
    Now, I would expect many (perhaps most) theists not to agree that it is good, but at any rate, it would be interesting on its own because, for example, it would give us a means of testing the hypothesis that reason is not a material operation: suppose someone managed (in a few decades, or centuries) to program a computer that can clearly engage in reasoning; that would conclusively show that reason is a material operation (whatever "material" is).

    Leaving the material/immaterial issue aside, I think it's possible (not with present-day tech, though) to program an AI like that, or to program something that can evolve and learn on hardware with much greater computing power than a human brain.

    However, in this context, one does not need to assume, assess, or show that it's possible to make an AI with those capabilities in order to make that stipulation. Rather, it's sufficient that the hypothesis that it's not possible to make an AI like that would be unwarranted. Then the theistic Natural Law theorist (TNLTist) - who maintains that TNLT is warranted - would have to address the AI case.

    Similar considerations apply to DCT and a DCTist.

    Still, it is implicit in my stipulation that it would be unwarranted to believe that an AI with such capabilities is not possible. Granted, a TNLTist or DCTist might disagree with that assessment and claim that such an AI is impossible, but that would have to be argued for, I think.
    In any case, if that is the answer (i.e., that such an AI is not possible), I think it would be interesting on its own, as it would highlight an ontological commitment of TNLT/DCT that was not apparent. Additionally, if someone were to actually make an AI like that, that would show that TNLT and/or DCT is false. Granted, if no one were able to make the AI, that would count in favor of them, though not strongly, I think - the evidence against both of them is too strong anyway, though of course I expect most theists to disagree.
    At any rate, I think it would be interesting to know whether TNLT/DCT is committed to that.

    With regard to artifacts and natural substances, I don't make that distinction (i.e., I don't think there is a metaphysically relevant distinction), but there is also no need to settle the matter in this context: as in the previous cases, the TNLTist or DCTist may give an answer based on that distinction, but that too would highlight interesting ontological commitments of those theories (if there is no other reply available to them).

  5. Note: Mark Murphy in his natural law book says things similar to what I say in my post, but with somewhat more generality--it's not just about right/wrong but about good/bad (or flourishing/counterflourishing).

  6. Angra:

    I think there are independent reasons why the natural lawyer has significant metaphysical commitments. She needs serious teleology to get her story going. The historical teleology provided by von Wright / Millikan evolutionary accounts of teleology just does not seem to have the kind of significance that would allow it to ground obligations. Much the same is true for artifactual teleology that derives from the historical intentions of the artificer (more needs to be said if the artificer is God, but in any case an undue focus on the artificer's intentions is apt to make natural law into a variant of divine command).

    A central driving intuition of natural law is that ethics needs to be grounded in the intrinsic nature of the agent. It just seems a mistake to say that a present action's being right is partly constituted by stuff that happened thousands or millions of years ago.

    The kind of teleology that natural law requires is very unlikely to reduce to physical facts. So the mere fact that we have a bunch of evolved or created organisms that behave in certain sophisticated ways--even in ways that are outwardly indistinguishable from us--does not rule out the hypothesis that they are, as we might say, teleological zombies. In a teleological zombie, nothing intrinsically counts as proper functioning, normalcy, defect, flourishing, etc.

    So the natural lawyer can simply deny that the critters you imagine would have intrinsic teleologies that are aligned with their behaviors. Intrinsic teleologies would need to arise either from something like the power of an omnipotent being which can produce new ontological features from scratch (cf. how a lot of theists think that God produces each human soul from scratch) or from something like laws of nature on which certain material arrangements cause (but do not reduce or ground) the existence of these new features.

    We do not know whether God would create a teleology even for a robot whose behavior is like ours, and we do not know whether the laws of nature would give rise to a teleology even for such a robot. Much less do we know that God would create a teleology for a robot whose patterns of behavior fail to reflect God's ways of behaving, or that the laws of nature that (on the hypothesis) gave rise to our teleology would give rise to teleology in beings whose patterns of behavior are very different from ours.

    And there is little cost, then, in the natural lawyer simply saying that such a teleology is impossible, and hence that God couldn't create a being with such a teleology and the laws of nature couldn't give rise to such a being, just as neither God nor nature can produce a square circle or a water molecule with three hydrogen atoms.

    Further, a rational being is a being that is characteristically responsive to reasons. Radically different ways of evaluating courses of action likely fail to give rise to reasons. Granted, sometimes humans value weird things like collecting thumbtacks, and maybe (this depends on the internalism/externalism debate) when they do so, their valuing of something gives them a reason to pursue it. But I think that part of what makes this a *valuing*--rather than the kind of evaluation a Roomba makes when its noise sensor is activated, which makes the Roomba loop around because the noise is supposed to be indicative of significant dirt--is that it's a process that normally or characteristically is aimed at real value.

  7. More:

    All in all, I suspect that the best kind of natural law theory is one that grounds ethics in the metaphysics of agency. It is completely unsurprising that this would put significant constraints on that metaphysics.

    Whether these constraints are significant costs depends on the degree to which we have independent reasons for the constraints. I think we have independent reasons to reject any physicalism without intrinsic teleology. Let me speak dogmatically here. Functionalism of some sort is the physicalist's only real hope. But functionalism requires proper function. And the kind of proper function that can ground mental states would have to be intrinsic--evolutionary (or artifactual) proper function has no hope of grounding mental functioning (Koons and Pruss, "Must a functionalist be an Aristotelian?", forthcoming).

    By the way, I am not actually a natural lawyer. But I embrace the Aristotelian metaphysics that natural law calls for.

  8. True One
    Good and bad is a part of One's nature when One becomes the judge. But who is One to judge? How accurate is One's measure, is One measurable at All? Without judgement One is just One, as God is One. As All is One. Remove the measure, be One, =

  9. Alexander,

    I agree it's a mistake to say that what is morally right is partly constituted by what happened millions of years ago, though I don't know why there would have to be an 'intrinsic' nature - I guess it depends on what one means by 'intrinsic', but in the sense in which Thomists tend to think of kinds, I do not think that's needed; a certain psychology will do, in my assessment.

    For example, lions do not have moral obligations - they do not have the right sort of psychology - but neither do human babies, or humans with certain extreme mental illnesses.

    But that aside, I think there is a far greater cost associated with the claims you're suggesting - if I understand them correctly - than there is in saying that God could not produce a square circle (a Euclidean one, I take it; otherwise there's no problem). Leaving aside AI, one difference is that the claim that Euclidean square circles are impossible is of course warranted and obvious - no one would raise any objections to it - whereas your reply commits the TNLTist to making some specific claims about exobiology, claims which many of us would find at least unwarranted.

    At the least, you seem to be implying that there are no beings like the Space Cannibals (SC). While I think that those particular beings are improbable (though that depends on the size of the universe), other beings with evaluating systems somewhat more similar to human ones, but not quite the same, are far more probable, as is something akin to morality but not quite the same. At least, I don't see any good reason to rule out such beings.

    However, if I'm getting your point right (please let me know if not), the TNLTist seems to be committed to asserting that if there are any aliens with complex language, science, spaceships, etc. (for example), they would have a sense of good and bad and right and wrong (rather than some analogous sense of alien-good, etc.), and would make the same judgments we make, at least usually and save for error. If so, and unlike the square circle case, that's a non-trivial commitment not only to some metaethical views, but also to claims about alien psychology across the whole universe, if there are any such aliens (granted, it's unsurprising to me that they would be so committed, but it's not a commitment usually seen, in my view).

    On the issue of 'real value', I'm not sure what you mean here. I would say that some things are intrinsically morally valuable, if that's what you're getting at, but as I see it, being morally valuable is the same as being positively valued by morality for its own sake - e.g., adult human lives might have intrinsic moral value, but they would have no intrinsic SC-moral value, and the SC wouldn't care about morality, so we would be in trouble.

    Back to the AI, I'm not sure I'm getting your view right here.
    Would you say one of the following is true?
    1. It's not possible to make AI more intelligent than humans, capable of general reasoning, math, logic, language, etc., but which responds to reasons based on its own values.
    2. It is possible, but they would be zombies.
    3. It is possible, but they would have no moral obligations.
    4. It is possible, but they would not be rational, despite their superior language, math, etc., skills, and the fact that they would respond to reasons - even if reasons based on their own values.
    5. At least one of the above holds.

    Depending on the AI, the 'no moral obligations' option seems very plausible to me, but that would be a problem for a theory that ties morality and rationality the way TNLT seems to.

    I do agree the commitments are unsurprising, though I don't agree that there are good reasons to believe in the commitments (if I got them right).

    By the way, I don't consider myself a physicalist; I'm inclined to think the word 'physical' is not precise enough to do the philosophical work it's meant to do in physicalism/non-physicalism discussions.

  10. It's metaphysically possible to have zombies with spaceships, language-like communication (I wouldn't say it's language, because maybe the zombies wouldn't have intentionality), something like science, etc.

    Ditto for the AI.

    But to get mental states, responsiveness to reasons, or intrinsic teleology, God would need to create a soul. God could do that for an AI.

  11. Thanks.

    If I'm getting this right, it seems to me the theistic natural lawyer is committed to something like:

    One of the following is true:

    1. There are no aliens with language and capacity for logic and math at least as complex as ours (or what looks indistinguishable from all of that), advanced science, spaceships, etc., and who appear to have feelings of guilt, outrage, etc., associated with some moral-like language, or

    2. There are such aliens, but their moral-like judgments are indeed moral judgments and are largely in agreement with ours (since their moral sense and ours would be imperfect, but still largely correct), and furthermore after rational reflection they would tend to converge to the same moral judgments we make (very much unlike the Space Cannibals, but also unlike some other beings that might have evolved from something like orcas or elephants and might end up with a different, moral-like sense).

    3. There are such aliens and their moral-like judgments are considerably different from ours and they wouldn't converge after what looks like rational reflection, either (e.g., Space Cannibals), but in that case, the aliens in question are zombies.

    Am I getting this right? Or am I missing more options for the theistic natural lawyer?

  12. That's about right, though we might distinguish between different kinds of zombies. For instance, among many others, there will be the classic consciousness-zombies, which look conscious but aren't, but there will also be ethics-zombies, which look like they make ethical judgments but don't.

    It is very plausible that every consciousness-zombie is an ethics-zombie (though even this is not obvious, because I want to reserve some slight credence for the possibility of an unconscious moral agent), but the converse is unlikely to be true.

    To make the idea of ethics-zombies plausible, consider set-zombies. They talk in a language that sounds very much like English, and they have a word like "set", but their axioms for "sets" are wildly different from our axioms for sets. For instance, they deny extensionality, and it's axiomatic for them that there is no "set" with exactly three elements. It seems quite plausible that whatever they are talking about aren't sets.
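
    To make the contrast concrete, the two divergences might be stated in first-order logic (a sketch; these particular formalizations are one possible rendering of the example, not part of the original):

    % Extensionality, which we accept and the set-zombies deny:
    \forall x\, \forall y\, [\forall z\, (z \in x \leftrightarrow z \in y) \rightarrow x = y]

    % Their axiom that there is no "set" with exactly three elements:
    \neg \exists x\, \exists a\, \exists b\, \exists c\, [a \neq b \wedge a \neq c \wedge b \neq c \wedge \forall z\, (z \in x \leftrightarrow (z = a \vee z = b \vee z = c))]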

    I think it's unlikely that there in fact are consciousness-zombies or ethics-zombies, and so I think that in your list 1 or 2 is more likely to be true. But consciousness-zombies and ethics-zombies seem at least possible.

  13. Personally, I wouldn't be inclined to call them "ethical zombies" or "set zombies".

    After reflection, it looks to me like the Space Cannibals (or even other, somewhat more human-like beings) are not making moral judgments. Given the description of the setup (leaving aside the assessment of the fictional characters setting it up), it seems to me that they're making SC-moral judgments.

    I interpret the "set-zombies" case in the same manner: I wouldn't call them "set-zombies". If the Space Cannibals also had the language you describe, I would say that they talk about SC-sets, whereas we talk about sets (and I'm not even sure every human is talking about the same thing in the set case, but I think we may leave that aside to simplify).

    As another analogy, if the SC have a visual system different from ours and they have perceptions like our color perceptions (assuming no inverted color spectrum in our case, etc., and leaving aside differences in human languages to simplify), but associated with different wavelengths, they would be making SC-color judgments associated with their perceptions, rather than color judgments, and their judgments would have truth-conditions different from ours. I wouldn't call them "color zombies", just as I wouldn't call humans "SC-color zombies".

    So, I'd be inclined to say that they would not be ethical zombies, just as we wouldn't be SC-ethical zombies.

    Terminological issues aside, it seems to me that the theistic natural lawyer is committed to a crucial disanalogy in this context: in the case of SC-sets, and also SC-color, the theistic natural lawyer may accept that SC could generally make true SC-set and SC-color judgments.

    However, in the moral case, it seems to me that the theistic natural lawyer is committed to there being no SC-good and SC-bad other than good and bad, so the SC would be making false judgments (otherwise, consider an argument from contingency - plus, perhaps, a meta-SC-ethical argument - to the existence of an omnipotent, omniscient, SC-morally perfect being, etc.).

    But that creates the following kind of problem:

    Could God create beings that have feelings of guilt, outrage, etc., that blame others, and so on, but that have no moral language and are vastly confused about their judgments - so that an error theory of their judgments would be true?

    Moreover, if there are actual beings like that, why should we think we're not among them? That is, once there are beings like that, why should anyone think that an error theory isn't true of our own judgments as well?

    And so on.

    As far as I can tell, the theistic natural lawyer is committed to the claim that there are no Space Cannibals - nor even beings with judgments that are closer to our moral judgments, but still not the same - or else, that such beings are consciousness zombies.

  14. I don't know that the SCs would have feelings of guilt, resentment, etc., even if they aren't consciousness-zombies. While they may have close analogues to the behavioral correlates of guilt, resentment, etc., why think that they would have the qualia that are partly constitutive of guilt, resentment, etc.?

    Here's a tough question. Imagine a species that evolved somewhat differently from us, but ended up with exactly the same brain structure. However, their optical receptors were far different: instead of having RGB sensors like we do, they had three different color sensors in the IR range. Would they have the same qualia as we do?

    If one thinks that qualia depend solely on brain states, the answer is positive. But there are at least two ways in which qualia might not depend solely on brain states. First, they might depend on the representational content of the mental states, which content may have a semantically externalist component. If so, then when the typical inputs are different, the qualia would be, too. Second, the qualia might depend on some nonphysical factors, and then it's all up for grabs.

    If one answers this question in the negative, then it is quite plausible that the SCs wouldn't have the qualia that we have when we are angry, resentful, etc.

    In fact, it's even easier for them to have different qualia, since presumably their brain structure is different from ours, unlike in the color/IR example.

    Nonetheless, there is a question whether God would allow there to be even such SCs. But that seems to be a question largely independent of Natural Law. It's a question for *any* theist.

  15. With regard to the color sensors, it seems to me it would depend on how similar the structure is. Do the receptors connect to the rest of the brain in a way that is not different from the way in which our receptors connect?
    But let's say that's not the case. Even so, I think the case of the SC is different because their behavior is evidence of how they feel.

    It seems to me that, given their behavior, they would have roughly the qualia that we have - just as with consciousness zombies, the idea that they merely look like they have those feelings seems extremely improbable. They wouldn't have to be exactly the same feelings as humans have, but pretty close.

    At any rate, there is no need to settle that to raise issues like the ones I mentioned earlier.
    For example, unless an error theory of their moral-like language is true, there is the issue of an omnipotent, etc., SC-morally good being. But then an error theory raises the issue of why we should think we're not like that ourselves.

    As for your point that that's a question for any theist, I actually agree (on a conception of God as the GCB, or omnipotent, omniscient and morally perfect, etc.), but I didn't want to make a claim that would perhaps take much longer to defend - theistic natural law theory provided a direct way to frame it, in terms of "what if something really nasty accorded with one's nature?".
