Tuesday, October 27, 2015

Edge cases, the moral sense and evolutionary debunking

It has been argued that if we are the product of unguided evolution, we would not expect our moral sense to get the moral facts right. I think there is a lot to those arguments, but let's suppose that they fail, so that there really is a good evolutionary story about how we would get a reliable moral sense.

There is, nonetheless, still a serious problem for the common method of cases as used in analytic moral philosophy. Even when a reliable process is properly functioning, its reliability and proper function only yield the expectation of correct results in normal cases. A process can be reliable and properly functioning and still quite unreliable in edge cases. Consider, for instance, the myriad of illusions that our visual system is prone to even when properly functioning. And yet our visual system is reliable.

This wouldn't matter much if ethical inquiry restricted itself to considering normal cases. But often ethical inquiry proceeds by thinking through hypothetical cases. These cases are carefully crafted to separate one relevant feature from others, and this crafting makes the cases abnormal. For instance, when arguing against utilitarianism, one considers such cases as that of the transplant doctor who is able to murder a patient and use her organs to save three others, and we carefully craft the case to rule out the normal utilitarian arguments against this action: nobody can find out about the murder, the doctor's moral sensibilities are not damaged by this, etc. But we know from how visual illusions work that a reliable cognitive system often operates by heuristics rather than by algorithms designed to function robustly in edge cases as well.

Now one traditional guiding principle in ethical inquiry, at least since Aristotle, has been to put a special weight on the opinions of the virtuous. However, while an agent's being virtuous may guarantee that her moral sense is properly functioning--that there is no malfunction--typical cognitive systems will give wrong answers in edge cases even when properly functioning. The heuristics embodied in the visual system that give rise to visual illusions are a part of the system's proper functioning: they enable the system to use fewer resources and respond faster in the more typical cases.

We now see that there is a serious problem for the method of cases in ethics, even if the moral sense is reliable and properly functioning. Even if we have good reason to think that the moral sense evolved to get moral facts right, we should not expect it to get edge case facts right. In fact, we would expect systematic error in edge cases, even among the truly virtuous. At most, we would expect evolution to impose a safety feature which ensures that failure in edge cases isn't too catastrophic (e.g., so that someone who is presented with a very weird case doesn't conclude that the right solution is to burn down her village).

Yet it may not be possible to do ethics successfully without the method of cases, including far-out cases, especially now that medical science is on the verge of making some of these cases no longer be hypothetical.

I think there are two solutions that let one keep the method of cases. The first is to say that we are not the product of unguided evolution, but that we are designed to have consciences that, when properly functioning (as they are in the truly virtuous), are good guides not just in typical cases but in all the vicissitudes of life, including those arising from future technological progress. This might still place limits on the method of cases, but the limits will be more modest. The second is to say that the moral facts are at least partly grounded in facts about what our moral judgment would say were it properly functioning--this is a kind of natural law approach. (Of course, if one drops the "properly functioning" qualifier, we get relativism.)

8 comments:

  1. I think the basic problem will arise so long as one holds that our moral judgments use heuristics. But this is extremely plausible before we get to any story about where those judgments come from.

    I expect it is for this reason that every moral theory has to bite the bullet in some hard cases, and we tend to justify this in terms of reflective equilibrium: sometimes the principles outweigh the intuitions. The analytic philosopher’s ethical method is not pure induction from cases.

  2. While it's not pure induction from cases, there is enough appeal to edge cases that we need a fair amount of reliability in edge cases. We don't need 100% reliability.

    A particularly worrisome thing is this. We tend not to worry about reliability when there is no disagreement about a case. But we shouldn't be surprised if our common heuristics have systematic errors.

  3. Personally, I would simply accept that there is currently no solution to that problem: there are no strong reasons for thinking that we have a good way of reaching the right answer about edge cases, just as there are no strong reasons for thinking that we can solve each and every one of the most difficult and complex philosophical problems.

  4. A few points:

    1. Our visual system is generally reliable, and while it fails in some cases like visual illusions, that seems to happen not because of a feature of the objects that we're looking at, but because of the way in which we're looking at them. For example, if we are in good health, etc., and we get close to a mirage, the mirage disappears. So, even in those cases, it turns out that we get around the problem by analyzing the matter more carefully, but still relying on our visual system to do so. In other words, we know that there are optical illusions by means of using some of our faculties, including our visual system, to see that our visual system was giving us the wrong impression in some cases. We just need to put the objects in a context in which our visual system works effectively.

    2. Our moral system is generally reliable, but sometimes it also gives mistaken verdicts. Examples of that are the cases of moral disagreement. Some of those cases - in my assessment, most - result from giving our moral sense the wrong input, so to speak. More precisely, people mistakenly assign to others beliefs and/or intentions that those other people do not have, and as a result, they make mistaken moral assessments. Those are moral errors resulting from mistakes about non-moral facts.
    In other cases, the matter is more difficult, since it's not clear that there is a non-moral fact causing the moral error. I think in many of those apparent cases, if one digs deep enough, there are such non-moral facts. But there is also the possibility that our moral sense is affected - at least partially - in different ways, for example by holding a mistaken and unwarranted moral theory of greater or lesser generality, or by bias towards some people we care about (real or believed to be real), and so on.
    That would require more work, but I don't see why the moral system in combination with other faculties of ours would not be able to get around that after reflection - as in the case of the visual system.
    Leaving aside scenarios that are too complicated for human comprehension (many of which reverse engineering plus AI could still resolve), and vagueness (which I'll set aside for now because otherwise this gets too long), in general we would just have to look at the matter from a perspective that makes our moral sense work effectively (as we do in the visual case), plausibly by means of analogies.

    3. Even cases regarding future science are cases in which one needs to assess whether a human being would behave immorally, not immorally, etc., if she were to behave in some way or another.
    Now, that is something one should expect our intuitions to handle. For example, plausibly (and again setting vagueness aside), immoral human behavior is some sort of behavior that is done with some kind of intention, or without rationally assessing the potential consequences, etc. But then, the problem is one of assessing intentions, potential consequences, etc., properly. That might be difficult, but it does not seem to be a problem for our moral faculty as far as I can tell.
    An analogy: our visual system works accurately even without further analysis not only in cases that are usual or were usual in the ancestral environment (e.g., when looking at a melon), but also in unusual cases that have the same relevant properties (for example, if we look at a plane, or a space telescope, or a future spaceship without cloaking devices). Similarly, we can tell that killing people for fun with a laser weapon is immoral, just as easily as we can tell that killing people for fun with stones is immoral. The fact that the laser is a much more advanced way of killing would not be a problem.

    There is a deeper issue at play here - a sense in which the algorithm can't be mistaken, I think. But I'll leave that one for later if needed, for the sake of brevity.

  5. I share the intuition in the last paragraph about the algorithm being in some sense infallible. It's hard, but not impossible, to hold that while remaining a moral realist.

  6. Why do you think it's hard? Could you elaborate, please?
    My impression is that it depends on the definition of "moral realism" - and there are so many!
    For example, on the definition of "moral realism" given in the SEP entry on moral realism (by Sayre-McCord), it's not problematic. Similarly, on, say, Copp's conception of realism, it's not a problem.
    But on the other hand, if one goes by something like Street's definition of "uncompromising normative realism", it may be very difficult. In that sense, I'm actually not a realist - though I think you might be.

    Do you have a specific definition in mind?

    Regarding my infallibility point, I think the color analogy may be helpful as a means of elaborating a bit. It might also help explain better one of the reasons I don't find your argument, or other evolutionary debunking arguments, persuasive, at least as long as they're meant to establish that our moral faculty is not generally reliable if it resulted from unguided evolution:

    There are some cases in which our color perception fails. But even then, we can correct the matter by looking at an object in a different context, under a different light, etc.
    It's impossible, in my assessment, that an object should look red under normal light conditions (no tricks, cloaks, nothing) to a healthy human observer and yet not be red.
    In the evolutionary case, evolution resulted in a certain kind of visual perception, which reacts to certain features of the world (e.g., red stuff, green stuff); our language ended up tracking and referring to those things. That's not a problem (I'm oversimplifying to get around the variation in the meaning of color terms among languages, which I think doesn't happen in the case of at least basic moral terms, but the idea is hopefully clear enough).

    A similar story would work in the moral case, so if we get the moral algorithm right, then under ideal conditions (i.e., all of the necessary info about non-moral facts, no errors of epistemic rationality), the algorithm can't fail.

    However, that's not problematic for my view, just as it's not problematic in the color case (of course, I disagree with conceptual analyses of our moral terms like, say, Mackie's; I don't think our moral language is committed to what he describes as the queerness of moral properties). What would be problematic for me would be if the algorithm itself varied widely among humans, even under ideal conditions; depending on the case, that would yield either cultural relativism (but then, that would seem to be a form of realism under the SEP definition! Still, I'm not a cultural relativist, either), or an error theory. But I don't think that's probable, given present-day evidence.

    That's perhaps the crux of why I think evolutionary debunking arguments fail, at least if they intend to establish that our moral sense is unreliable (substantively or epistemically) if unguided evolution has happened (which I believe it has). The moral sense only needs to track (reasonably reliably) some human mental states/properties/generally mental stuff (or similar stuff in relevantly similar beings), and perhaps some other features of the environment (and relevantly similar ones), and that's that. Given a lack of telepathic abilities, we track mental stuff by means of behavioral cues, and that's an important source of errors, but that's not decisive, just as it's not decisive when we intuitively track other mental stuff - like whether a person is cruel, kind, in love, etc.

    On the other hand, I do think evolutionary debunking arguments succeed against what Street calls "uncompromising normative realism", or something similar to that (actually, I think Street's argument fails because of the way she makes it, but a successful argument is in the vicinity, using pretty much the same facts she uses).

  7. I would expect that my moral realism is less compromising than yours. :-) I certainly think moral knowledge is knowledge of queer non-natural facts. Even on the natural law view where it might be the case that properly functioning moral judgment is guaranteed to give correct outputs given correct inputs, the relevant notion of proper function will not be a naturalistically analyzable one.

  8. I don't think my view is "compromising" :-)

    That aside, I don't find the term "natural" (and so, "non-natural") clear, so I'm not sure I'm getting your point right (if not, please let me know). But if you're using "natural" in the sense in which Moore talked about the "naturalistic fallacy" (as I understand his point), I don't hold that moral terms (by this I mean usual terms like "morally good", "morally wrong", "morally permissible", "immoral", etc.) can be defined in terms of non-moral ones (i.e., terms usually not regarded as moral ones) by means of analytical statements. For all I know, they can't. But for that matter, for all I know color terms can't be defined in that fashion, either (i.e., analytically, in terms of non-color terms). Or a number of other terms. There are difficult matters of linguistics involved (though I think that some terms are not analyzable like that at all, else there would be infinite regress or circularity). But I don't think there is anything specifically problematic and/or queer about moral terms.
