Tuesday, November 10, 2015

Parameters in ethics

In physical laws, there are a number of numerical parameters. Some of these parameters are famously part of the fine-tuning problem, but all of them are puzzling. It would be really cool if we could derive the parameters from elegant laws that lack arbitrary-seeming parameters, but as far as I can tell most physicists doubt this will happen. The parameters look deeply contingent: other values for them seem very much possible. Thus people try to come up either with plenitude-based explanations where all values of parameters are exemplified in some universe or other, or with causal explanations, say in terms of universes budding off other universes or a God who causes universes.

Ethics also has parameters. To further spell out an example from Aquinas' discussion of the order of charity, fix a set of specific circumstances involving yourself, your father and a stranger, where both your father and the stranger are in average financial circumstances, but are in danger of a financial loss, and you can save one, but not both, of them from the loss. If it's a choice between saving your father from a ten dollar loss or the stranger from an eleven dollar loss, you should save your father from the loss. But if it's a choice between saving your father from a ten dollar loss or the stranger from a ten thousand dollar loss, you should save the stranger from the larger loss. As the loss to the stranger increases, at some point the wise and virtuous agent will switch from benefiting the father to benefiting the stranger. The location of the switch-over is a parameter.

Or consider questions of imposition of risk. To save one stranger's life, it is permissible to impose a small risk of death on another stranger, say a risk of one in a million. For instance, an ambulance driver can drive fast to save someone's life, even though this endangers other people along the way. But to save a stranger's life, it is not permissible to impose a 99% risk of death on another stranger. Somewhere there is a switch-over.

There are epistemic problems with such switch-overs. Aquinas says that there is no rule we can give for when we benefit our father and when we benefit a stranger, but we must judge as the prudent person would. However I am not interested right now in the epistemic problem, but in the explanatory problem. Why do the parameters have the values they do? Now, granted, the particular switchover points in my examples are probably not fundamental parameters. The amount of money that a stranger needs to face in order that you should help the stranger rather than saving your father from a loss of $10 is surely not a fundamental parameter, especially since it depends on many of the background conditions (just how well off your father and the stranger are; what exactly your relationship with your father is; etc.). Likewise, the saving-risking switchover may well not be fundamental. But just as physicists doubt that one can derive the value of, say, the fine-structure constant (which measures the strength of electromagnetic interactions between charged particles) from laws of nature that contain no parameters other than elegant ones like 2 and π, even though it is surely a very serious possibility that the fine-structure constant isn't truly fundamental, so too it is doubtful that the switchover points in these examples can be derived from fundamental laws of ethics that contain no parameters other than elegant ones. If utilitarianism were correct, it would be an example of a parameter-free theory providing such a derivation. But utilitarianism predicts the incorrect values for the parameters. For instance, it incorrectly predicts that the risk value at which you need to stop risking a stranger's life to certainly save another stranger is 1, so that you should put one stranger in a position of a 99.9999% chance of death if doing so is certain to save another stranger.
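The utilitarian prediction can be made explicit with a toy expected-value calculation (an illustration only; the function name and the sample risk values are mine, not part of the original argument):

```python
# A minimal sketch of the utilitarian calculus described above:
# imposing a risk p of death on one stranger in order to save
# another stranger with certainty.
# Expected net lives saved = 1 (the life certainly saved) - p
# (the expected deaths imposed).

def expected_net_lives_saved(p: float) -> float:
    """Expected net lives saved when a death risk p is imposed on one
    stranger to save another stranger with certainty."""
    return 1.0 - p

# On a pure expected-value calculus, imposing the risk comes out ahead
# for every p < 1, even p = 0.999999, so the predicted switchover is 1.
for p in [1e-6, 0.5, 0.999999]:
    print(p, expected_net_lives_saved(p) > 0)
```

The point of the sketch is just that nothing in the calculation supplies a switchover below 1: the sign of the expected value flips only at p = 1, which is the value the post says is incorrect.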

So we have good reason to think that the fundamental laws of ethics contain parameters that suffer from the same sort of apparent contingency that the physical ones do. These parameters, thus, appear to call for an explanation, just as the physical ones do.

But let's pause for a second in regard to the contingency. For there is one prominent proposal on which the laws of physics end up being necessary: the Aristotelian account of laws as grounded in the essences of things. On such an account, for instance, the value of the fine-structure constant may be grounded in the natures of charged particles, or maybe in the nature of charge tropes. However, such an account really does not remove contingency. For on this theory, while it is not contingent that electromagnetic interactions between, say, electrons have the magnitude they do, it is contingent that the universe contains electrons rather than shmelectrons, which are just like electrons, but engage in shmelectromagnetic interactions that are just like electromagnetic interactions but with a different quantity playing the role analogous to the fine-structure constant. In a case like this, while technically the laws of physics are necessary, there is still a contingency in the constants, in that it is contingent that we have particles which behave according to this value rather than other particles that would behave differently. Similarly, one might say that it is a necessary truth that such-and-such preferences are to be had between a father and a stranger, and that this necessary truth is grounded in the essence of humanity or in the nature of a paternity trope. But there is still a contingency that our world contains humans and fathers rather than something functionally very similar to humans and fathers but with different normative parameters.

So in any case we have a contingency. We need a meta-ethics with a serious dose of contingency, contingency not just derivable from the sorts of functional behavior the agents exhibit, but contingency at the normative level: for instance, contingency as to appropriate endangering-saving risk tradeoffs. This contingency undercuts the intuitions behind the thesis that the moral supervenes on the non-moral. Here, both Natural Law and Divine Command rise to the challenge. Just as the natures of contingently existing charged objects can ground the fine-structure constants governing their behavior, the natures of contingently existing agents can ground the saving-risking switchover values governing their behavior. And just as occasionalism can have God's causation ground the arbitrary-seeming parameters in the laws of physics, so God's commands can ground the arbitrary-seeming parameters in ethics (the illuminating analogy between occasionalism and Divine Command is due to Mark Murphy). Can other theories rise to the challenge? Maybe. But in any case, it is a genuine challenge.

It would be particularly interesting if there were an analogue to the fine-tuning argument in this case. The fine-tuning argument arises because in some sense "most" of the possible combinations of values of parameters in the laws of physics do not allow for life, or at least for robust, long-lasting and interesting life. I wonder if there isn't a similar argument on the ethics side, say that for "most" of the possible combinations of parameters, we aren't going to have the good moral communities (the good could be prior to the moral, so there may be no circularity in the evaluation)? I don't know. But this would be an interesting research project for a graduate student to think about.

Objection: The switchover points are vague.

Response: I didn't say they weren't. The puzzle is present either way. Vagueness doesn't remove arbitrariness. With a sharp switchover point, only its value is arbitrary. But with a vague switchover point, we have a vagueness profile: here something is definitely vaguely obligatory, here it is definitely vaguely vaguely obligatory, here it is vaguely vaguely vaguely obligatory, etc. In fact, vagueness may even multiply arbitrariness, in that there are a lot more degrees of freedom in a vagueness profile than in a single sharp value.

15 comments:

Heath White said...

I have a better story to tell in the father-stranger case than in the stranger-risking case, so here goes. Humans, unlike most animals, are born way before they are ready to live independently, and they require the support of both parents in order to flourish. This is a contingent fact about human nature, explained by the evolutionary pressure to develop large brains combined with the difficulty of gestating a human being until its brain is large enough, and then passing it through the vaginal canal. Consequently, fathers and mothers are very important to children, and this is the source of moral obligations to them. If we were like bears, where the father is irrelevant to the child’s upbringing, then moral duties to fathers would be different.

I think this sort of story (plus elaborations on it) will explain as much as can be explained about why the switchover is where it is. Maybe that answers the “moral fine-tuning” question.

Here is a social, rather than biological, version of the problem: both Greek culture (in Plato’s Euthyphro and in Sophocles) and Chinese culture (in Confucius and Mencius) raise questions about whether one should turn in one’s father to the state for committing a crime. In all cultures, there will be a switchover somewhere, in that turning in one’s father for very minor crimes will be regarded as unfilial while failing to turn him in for heinous crimes will be unjust. But where the switchover is (or where it is felt to be) has a lot to do with the relative importance of the family and the state to the maintenance of social order. In Euthyphro and in Confucius, for example, the crime under consideration is homicide and it is considered doubtful in both cases whether one ought to turn a father in; today I think the issue would be much more clear-cut. It is no accident that both societies were evolving from clan-based to state-based governance, whereas we live under a strong central state.

I did not follow the step of the argument that moved from the “normative parameters are contingent” to “the moral does not supervene on the non-moral.” If the supervenience base is contingent, the supervening states will be contingent too. What am I missing?

Alexander R Pruss said...

I think your story about fathers is a plausible one, but I suspect that these facts about human life will underdetermine the switchover points. Considerations like the ones you give might give us reason to think that the switchover point would be expected to lie somewhere between two values y1 and y2. But the particular value (or vagueness profile) of the switchover point is likely to be underdetermined.

"I did not follow the step of the argument that moved from the 'normative parameters are contingent' to 'the moral does not supervene on the non-moral.'"

The reason you didn't follow it is that there was no argument. All I said is that thinking about this stuff undercuts the intuitions behind the supervenience claim. I guess I should have said that it undercuts *my* intuitions behind the supervenience claim. You're right that contingency doesn't do that by itself. But when we think about these issues, I think we are apt to see underdetermination.

Alexander R Pruss said...

By the way, in the past I've blogged about similar issues in epistemology.

Heath White said...

Considerations like the ones you give might give us reason to think that the switchover point would be expected to lie somewhere between two values y1 and y2. But the particular value (or vagueness profile) of the switchover point is likely to be underdetermined.

I think I lack the intuition that there is a more definite switchover point than the one that intelligibly (in some sense) supervenes on the non-moral facts. (ALL the non-moral facts.) Also in epistemology. Maybe this is a defect in my intuitions. On the other hand, why do you think there is one?

Alexander R Pruss said...

How about this: If one is to translate the vague non-numerical considerations about the importance of the father's role into numbers, somewhere in the line of reasoning numbers will have to appear for the first time. Where will they come from?

I suppose you could say that the data on the father's role already comes with numbers, like the amount of time spent by a normal father with children (note, though, that once we talk of a normal father, we are already in the realm of the normative). But then we need conversion factors of some sort from times to switchover points.

Alexander R Pruss said...

I think the risk case is a more effective support for my position.

Angra Mainyu said...

Alex:

Just a few brief points:

1. "And just as occasionalism can have God's causation ground the arbitrary-seeming parameters in the laws of physics, so God's commands can ground the arbitrary-seeming parameters in ethics (the illuminating analogy between occasionalism and Divine Command is due to Mark Murphy). Can other theories rise to the challenge? Maybe. But in any case, it is a genuine challenge."
But according to those theories, God is morally perfect. But if there are arbitrary-seeming parameters in ethics, then that translates to arbitrary-seeming parameters in God's own nature, commands, etc. For example, on DCT, God commands that we save n people in such-and-such circumstances, etc. But then again, those are the arbitrary-seeming parameters (assuming there are arbitrary-seeming parameters; in a sense of "arbitrary", there aren't, because "arbitrary" means "morally arbitrary", and moral parameters aren't morally arbitrary. But I get you don't mean that by "arbitrary", though I'm not entirely sure what you mean).

Granted, theists claim that God is not contingent. But I don't see why an arbitrary-looking alleged necessity is in better shape than an arbitrary-looking alleged contingency. On that note, and while I don't think the FT argument works, given that you do, do you think the argument would be deflated by the hypothesis that the universe exists necessarily (where "universe" doesn't necessarily mean the portion we're familiar with, but maybe some much bigger multiverse)?

2. Take the case of color. Color also has somewhat fuzzy parameters that are seemingly arbitrary (assuming ethical parameters are seemingly arbitrary). For example, green light has a wavelength of roughly between 495 and 570 nm (or combinations of other colors, but let's simplify). But why not 500 and 600, or some other range?
It seems also that it's necessary (due to some H2O-like referent-fixing quality) that if light has a wavelength not (roughly) in that range, it's not green.
Would the color case be a challenge?
I think clearly not. Different animals (including humans) evolved different visual systems. We got color vision. Dogs got dog-color vision. And so on.
While the info we have about our evolutionary past is insufficient to get the specific color parameters, that's only to be expected given how much info about it was lost (i.e., we have some fossils and a few other things, but that's clearly very limited info, and that's even without counting our very limited knowledge about the specific links between genes and proteins, etc.).

But why would the moral case be a challenge?

3. But there is still a contingency that our world contains humans and fathers rather than something functionally very similar to humans and fathers but with different normative parameters.
But that seems to assume that there are no aliens similar to humans to different degrees, but with different normative parameters. Am I misreading?
If there are aliens as intelligent as humans or more, I would expect them to have probably different normative (moral-like) parameters, and different color-like parameters if they have something akin to that, though probably a greater overlap should be expected on the normative case, due to the fact that they had to solve similar problems, whereas we already know (from other animal species on Earth) that color-like parameters are widely variable.
Granted, there are philosophers who do believe there is only one set or class of actual moral parameters - which would extend to aliens, AI, alien AI, etc. -, so I guess my points 2. and 3. would not work from their perspective. But from my perspective (or from that of philosophers who also have a relevantly similar perspective), I don't see the force of the challenge for the reasons explained in 2. and 3. Unless I misunderstood something?

Alexander R Pruss said...

1. "But if there are arbitrarily-seeming parameters in ethics, then that translates to arbitrarily-seeming parameters in God's own nature, commands, etc."

I would think that there would be tradeoffs between incommensurable values. For instance, there is a value in having egalitarian behavior and a value in tight-knit relationships. On the version of divine command theory that I am imagining, a perfectly good God then simply chooses the degree of balance between promoting tight-knit relationships and promoting egalitarianism. He doesn't choose an optimal balance point, as there is no optimal balance point.

2. Maybe the color analogy trades on the idea that the aliens with different behavior wouldn't have moral properties, but would only have moral-like properties.

3. "But that seems to assume that there are no aliens similar to humans to different degrees, but with different normative parameters. Am I misreading?"

There may be such aliens. But unless all combinations are realized, there is a question of why *these* combinations of moral parameters (the ones in us, the ones in the aliens, etc.) are realized and not others.

Angra Mainyu said...

1. I'm not sure I get how your reply on this point is supposed to work.
I'm granting for the sake of the argument that the parameters (the actual parameters) are seemingly arbitrary. But if that is so, then if God exists, by being morally perfect he positively values those particular arbitrary-seeming parameters, and if he gives a command, the command will be in line with those parameters.
For example, let's say that - as in your example - the actual moral parameters (AMP: risk) are seemingly arbitrary. Yet, God, by being morally perfect, has a preference for the arbitrary-seeming AMP:risk, and will command that people behave in accordance with a seemingly arbitrary command.
Now, let's say that God chooses the degree of balance between promoting tight-knit relationships and promoting egalitarianism. Still the choice he makes is based on arbitrary-seeming parameters.
Generally, I don't see how God would eliminate the seeming arbitrariness, given that God is morally perfect, and the arbitrariness is built into morality itself.

2. It's related to that, but not committed to it.
For example, the aliens with different visual system would still have color, even though they wouldn't have color language (they'd have some color-like language, with different truth conditions).
Similarly, some aliens with different norms (say, some "alien squid" who evolved from something similar to squid) would probably (but see below) not have moral language on this account. They would have some squid-moral language, with different truth conditions: what's squid-morally good isn't always morally good, etc.
Whether they would have moral properties depends on factors such as whether agents without a moral sense can be morally good, morally bad, etc., and if so, what sort of mind is required for that. At the very least, they would have moral rights, just as non-human animals do, but the color analogy is silent on whether alien squid or some other alien would have moral obligations (for example), could be morally good, etc.
There is a variant: maybe they have moral obligations and language to talk about their moral obligations (rather than squid-moral obligations), only that the moral obligations of alien squid are not linked to the good, but to the squid-good. However, I find this variant less probable.

3. "There may be such aliens. But unless all combinations are realized, there is a question of why *these* combinations of moral parameters (the ones in us, the ones in the aliens, etc.) are realized and not others."
I don't see that as a problem. One might similarly say that unless all combinations are realized, there is the question of why *these* color-like parameters (the ones in us, in eagles, octopuses, dogs, etc.) are realized and not others.
The answer is that that is how evolution went, but we don't have sufficient information about the details in order to explain it. But then again, that's often the case, and it's to be expected. The fossil record and other pieces of evidence only provide a very small fraction of the information of everything that happened. For example, if someone were to ask why Allosaurus no longer existed 145 million years ago, but some other species did, we do not have a specific account. It's very probable that we never will, as our information about the relevant ecosystems is very limited, and will remain so because the data was just lost alongside those animals, plants, etc.

Angra Mainyu said...

Just one thing I'd like to add with regard to point 3: loss of evolutionary info is a sufficient condition for our being unable to explain in detail why we got the parameters we got and not others, but it's not a necessary one. In fact, even if we had a lot more information, we could only get a better approximation of what happened; figuring out from evolutionary information (even DNA data, environmental conditions, etc.) what specific parameters (even if they're fuzzy) we would get would require not only a much better understanding of genetics, but also a capacity for processing information that is far beyond our computational limits (even adding our best computers to our brains), and will very likely always be so.
The alternative of explaining it in terms of particle interactions (i.e., if we had information about particles in the past) will also not be doable.
All of this is, of course, even assuming that uncertainty resulting from quantum phenomena would not block such detailed explanation regardless of computational capability and input.

But the basic idea is the same - namely, that we don't and won't know enough to explain in detail why those parameters, even though we do get the rough idea: it's what evolution gave us.

Alexander R Pruss said...

I am afraid I don't see how given perfect knowledge of the evolutionary process one would even start to determine a parameter like the risk-saving switchover point. What kind of evolutionary facts are relevant to it?

I guess my intuition here is akin to the intuition that zombies are possible. It seems to me that you can fix all the non-normative facts and vary the normative ones at least a bit.

Angra Mainyu said...

I don't think there is a switchover point, at least in many cases (I think it's fuzzy), but that aside, facts about the evolutionary process (or, perhaps, about particles) would determine (if the universe is deterministic) what sort of beings result from the process; if it's not deterministic, such facts would still influence the outcome significantly.
That would determine, in particular, what parameters we got, both in our visual system (i.e., color in this case) and our normative system (i.e., morality; if you use "norms" more broadly, I'm talking specifically about that part of the normative system).

That said - and to clarify just in case - I don't think that if evolution had gone differently and some other entities (say, more intelligent orcas) had evolved and were dominant on the planet (as much as we are) instead, then grass (of the usual variety in usual conditions) would not be green, or that it would not be necessarily true that if a human being tortures another one for fun, that's immoral.
Rather, I think the smart orcas probably wouldn't have a system tracking greenness (they would have orca-colors, or something like that, and some sound-based representation of the world as well), and some other normative (orca-moral) system, significantly overlapping with but not the same as morality.

But maybe you think this does not answer your challenge?
If so, maybe I'm missing something. Do you think the challenge also holds if we focus on color parameters, instead of moral parameters, or do you think it's a particularly moral challenge?

Side note: I do have the impression that zombies are metaphysically possible, but I don't think they're nomologically possible.

Alexander R Pruss said...

Angra:

I think this is our main disagreement: "Rather, I think the smart orcas probably wouldn't have a system tracking greenness (they would have orca-colors, or something like that, and some sound-based representation of the world as well), and some other normative (orca-moral) system, significantly overlapping with but not the same as morality."

I suspect the orcas would have morality simpliciter, but moral obligations depend on circumstances, and the agent's species is one of the circumstances.

Angra Mainyu said...

Maybe the following will clarify my reply:

1. I'm not trying to reply to either the argument from contingency or the fine-tuning argument in this thread. I interpret your challenge as meant to be a different challenge, and so I take it for granted that there are contingent beings, and that there is a universe (contingent or not; I needn't take a stance on that) with such-and-such constants, rules, etc.

2. In the context of 1., I'm addressing the following question: Why did we end up with the moral normative system, out of the many metaphysically possible moral-like normative systems?

3. I'm making a parallel between the question in point 2., and the question: Why did we end up with color vision, out of the many metaphysically possible color-like sorts of vision?

4. In both cases, I'm saying that the answer - on one level - lies in the evolutionary process, though on a deeper level, it's something related to particles, but either way, we don't have enough information about the details.

If I'm missing one or more of your points, I would like to ask for clarification on them (else, if I get them right but you disagree that the reply is adequate, I'd like to ask for more info, so that I can address the objections).

Angra Mainyu said...

Oops, sorry - I sent my reply one minute after you posted yours, and I hadn't seen it yet.

The possibility you mention (namely, that they have moral obligations) is also compatible with the reply I'm giving, though I find it less likely; it's a matter of moral semantics whether the obligations of the orcas would also be moral obligations.
One (semantic) reason I think that's not the case is that moral obligations seem to be tied to what's good or bad (e.g., preventing bad things), even if there is more to moral obligations than that. But alien squid or smart orcas wouldn't talk about good or bad, but about squid-bad, or orca-good, etc.
Else, one may ask: what's a worse situation (and how much worse): that an elephant is torn apart by lions and suffers horribly, or that the pride of lions starve to death after perhaps some cannibalism?

I would expect the judgments of alien elephants (smart things who evolved from something like elephants) and alien squid, or smart orcas, to be probably significantly different. That's no problem, because elephant-worse is not the same as squid-worse or orca-worse, or just worse, etc.
So, if smart orcas (or alien squid, etc.) have moral obligations rather than orca-moral obligations, then the moral obligations are probably not tied to the good or bad, but to the orca-good and orca-bad. I'm inclined to say that the orca-moral obligations are tied to the orca-good and orca-bad, regardless of whether smart orcas also have moral obligations (that are tied to the good and the bad) that they neither know nor care about (whether it's possible for an agent to have moral obligations if it does not have a moral sense is not a matter I need to address in this context).

That aside, I'm not sure how the challenge is supposed to work. For example, consider the Aristotelian account you sketched. I'm not an Aristotelian, but in that framework, the answer to your point "But there is still a contingency that our world contains humans and fathers rather than something functionally very similar to humans and fathers but with different normative parameters" would be that it's how evolution happened, and at a deeper level, something about particles. You can still raise the fine-tuning or contingency argument challenges, but those are distinct challenges - I'm not trying to address them here.

My reply is akin to that, just without the Aristotelian ontological commitments.