It looks like the laws of physics include "free parameters", constants that we feel could well have had other values than they do. It does not appear that these parameters can all be derived from more fundamental laws that have no free parameters.
Analogous phenomena occur in ethics. If to save your life, I must suffer a minor pain for a second, it is my duty to make the sacrifice. If to save your life, I must suffer constant torture for decades, it is not my duty to do so. As one increases the amount of my suffering to be weighed against your life, at some point one transitions from duty to supererogation (and perhaps eventually to just imprudence). Similar phenomena come up when deciding between goods to people with whom one has different relationships (saving n of one's cousins versus m strangers), when deciding between risks and certainties, etc. It does not appear that these parameters can all be derived from more fundamental laws that have no free parameters. (The best proposal for doing so is utilitarianism, and that just doesn't fit the moral data.)
And there are analogous phenomena in epistemology. For instance, there is the question of how quickly one should make inductive generalizations (in the Bayesian setting, this comes to questions like how high one's priors for generalizations should be).
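The Bayesian version of this free parameter can be made concrete with a toy sketch (my illustration, not anything in the post): in a Beta-Bernoulli model, the prior pseudocounts `a` and `b` are a free parameter governing how fast evidence drives one toward a generalization. Nothing in the probability axioms fixes their value.

```python
# Toy illustration of a "free parameter" in inductive generalization:
# under a Beta(a, b) prior on a Bernoulli rate, the pseudocounts a and b
# set how quickly observations push the posterior toward a generalization.

def posterior_mean(successes: int, trials: int, a: float = 1.0, b: float = 1.0) -> float:
    """Posterior mean of the rate after observing `successes` out of `trials`."""
    return (successes + a) / (trials + a + b)

# After 5 green emeralds out of 5, how confident that the next is green?
print(posterior_mean(5, 5, a=1.0, b=1.0))   # uniform prior: 6/7, a slow generalizer
print(posterior_mean(5, 5, a=0.1, b=0.1))   # weak prior: much quicker to generalize
```

Both choices are probabilistically coherent; which pseudocounts are epistemically right is exactly the sort of parameter the post says cannot be derived from more fundamental laws.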
In physics, the existence of free parameters is strong evidence for some sort of contingency in the laws. There are two ways to have such contingency. The first is to say that there could have instead been other laws. The second is an Aristotelian story on which the laws of physics are necessary but are conditional on the natures of things (e.g., if x is an electron, it behaves thus-and-so), and there could have been other things in the universe with other natures (e.g., shmelectrons instead of electrons) and then other laws--those with antecedents concerning the things with the other natures--would have been relevant.
The first approach raises a problem of explanation: Why are these the laws? The second approach reduces the explanatory question to a different explanatory question that we had anyway: Why are these the entities that exist?
In ethics and epistemology there are two options that can't be taken seriously in physics. One might, for instance, be a subjectivist of some stripe about the parameters (say, by being a subjectivist about all of ethics or epistemology, or just about the parameters). Or one might try to bring in vagueness to solve the problem--maybe it's vague at what point the needs of a larger number of strangers take precedence over a smaller number of cousins.
Vagueness does not, I think, solve the problem. For even if it's vague what the parameters are, it's not completely vague. It's non-vaguely true, for instance, that one should save a billion innocent strangers over one close relative. And subjectivism gives up too quickly.
It would be nice if one could give the same account of the free parameters problem in all three disciplines. Some accounts do not have much hope of doing that. For instance, one might solve the free parameters problem in physics by supposing that there is a multiverse with many different laws, either selected at random or with all possible laws exhibited, and it's just rock bottom that these laws are the laws where they are laws. The idea that there would be such variation in the moral or epistemological laws, with no explanation of the variation, is very unattractive.
There is a uniform Aristotelian story about all three free parameter problems. The parameters are necessarily what they are given the natures of the beings (physical beings, moral agents and epistemic agents) involved. The explanatory burden then shifts to the question of why these are the beings that exist. There is also a uniform divine choice story: God sets the parameters in the laws of physics, ethics and epistemology in a way that makes for a particularly good universe.
But there are, of course, non-uniform stories. One might, for instance, take the Aristotelian story about laws of physics, and a divine choice story about ethics and epistemology. But uniform stories are to be preferred.
7 comments:
Even if one takes a uniform Aristotelian approach, the only answer to the question "why are these the entities that exist?" is God (this is the argument from contingency; or from "unrealized possibilities", as Leftow puts it). Either way, it seems we're back to God.
I am curious, though, if the "free parameters" in ethics are really analogous to the ones in physics. The free parameters in physics are of the sort "x force/constant/quantity could have been different, and then the physics of the world would have been dramatically altered". The free parameters in ethics are not of that sort at all (at least, not in any obvious way...). Indeed, we would need more of an argument that there are in fact any "free" parameters in ethics. It could be that there is an absolute dividing line in each of the problematic cases you mentioned; just because we don't know where that line is, or it seems vague to us, does not mean that the line could have been somewhere else (as in the case of physical constants and quantities which could have been otherwise).
I think that vagueness does solve the problem, and the reason you think it does not is that you don't take the reality of vagueness sufficiently seriously.
As I have said before, all words are vague, and consequently all claims are vague. That includes claims like, "One should save the lives of one billion strangers rather than of one friend."
In my view, if we discovered that there was another rational species and that we killed millions of them every time we took a step, that would not make it moral to lie down and die, and it would not make it immoral to continue to go about our lives as usual. I realize you disagree with that, but if I am right, then there are possible situations where it is right to save the life of a friend rather than of one billion strangers. So the truth of the claim about the billion is not non-vague.
entirelyuseless: Which term is vague in "one should save the lives of one billion strangers rather than of one friend" leading to the vagueness of the overall claim? Is it "one" or "strangers" or what?
I confess, I don't fully understand this issue of "vagueness". Of course, I am a Wittgensteinian (as much as a total layperson can be), so I don't think words have meanings at all, but uses in particular language games. That being said, I don't see how differences of use for any of the words in Pruss' sentence change the moral outcome.
As to your scenario, I do think there would be a rational imperative to find a way not to kill those millions by taking our steps in life. I can't imagine any rule which (without special pleading) can make those rational beings irrelevant and yet my fellow humans as salient as I know them to be. Is it size? Would we accept that if we were the tiny ones, and a planet-sized creature was running around destroying us by the millions with its every step?
Michael: every term in the claim is vague. In fact, even if there were no sense in which it could ever be true, that would not prevent the words themselves from being vague. I was pointing to the fact that Pruss believes that if some things are definitely true, and others vaguely true, there must be an absolutely definite point where you pass from the definite to the vague. But that is false: if we want to speak of "definite" and "vague", the line between them will itself be vague. The process can repeat as you please. You will never bottom out at a point where you have something entirely non-vague.
Once that is understood, it does not really matter whether there are ever situations where it is ok to kill a billion people to save one acquaintance. Even if there were not, that could be for entirely objective reasons that could not have been otherwise.
The rule in that situation is the same as the rule in general: human morality is what promotes human flourishing. That is not special pleading; that is just what it means to be moral. If you want an economic analysis, it could not be true that we were killing millions with every step without them being far more numerous, and therefore far less valuable economically. I agree, by the way, that if we found out that to be the case, we would begin to work on ways to help them, in time. But I disagree that we would be morally bound to immediately lie down and die; Pruss has directly affirmed that we would be. Likewise, I accept that if there were a planet sized rational creature that was destroying planets with every step, it might randomly destroy us, and I would not blame it for that. I would not expect it to lie down and die.
Human flourishing is a completely arbitrary grounding for morality. That is NOT "what it means to be moral". If we met an alien species that was as sentient and rational as us, our moral inclination would be to treat them well even if it has absolutely zero bearing on the flourishing of humans. THEIR flourishing would be relevant. Likewise if we create full-blown AI.
Economic analyses only work on certain sorts of moral theories (the type which tend to utterly fail in lining up with our moral intuitions, in my humble opinion). If EACH individual sentient person has intrinsic value, then that value is not diminished by being a member of a very large species. If you grounded morality in humankind's OVERALL flourishing, then you could run this sort of economic analysis, but that is totally arbitrary grounding and is not really a moral grounding at all (it's a genetically natural one and a pragmatic one, at best). So, for example, if it benefitted human flourishing for us to kill off all the unfit bloodlines (by some actually good standard of "unfit") that would not make it a moral thing to do, and everyone knows that.
I don't think anyone is suggesting we "lie down and die", but that we figure out a way to live our lives without destroying theirs. Pruss' analogy is pretty far-fetched, but think of the case made for veganism: Animals are sentient and should not be obliged to suffer needlessly. So, people should alter their behavior in a way that reduces needless suffering in animals. Plain and simple, even if it is really inconvenient and requires a radical change in lifestyle (it's actually perfectly easy and much healthier for us, but that's beside the point). Again, even if there were no issue of human flourishing (though there is, in this case, but imagine that there weren't), it would still be a moral dilemma and we would still know that, ceteris paribus, it is better to reduce the needless suffering of animals to the best of our ability.
Let's start with some big givens...
We have duties authored by God.
Our first duty is to God.
Some of our duties are to the honoring and the development of ourselves.
Some of our duties are to others.
We are by nature social and political creatures such that the good of the individual is poorly understood by studying the individual in isolation, even if that is the primary locus of responsibility and agency.
With these givens I find the idea of a single story unappealing or implausible, because it seems to me that there is casuistry which takes into account parameters generated in and through subjectivity and intersubjectivity (and thus universal in kind but not content), overlaid on parameters which God might (in a modal sense) have set differently. Attempts to clearly distinguish the boundaries between these types (particularly between the intersubjective and the divine, non-necessary) seem unreliable... Whereas distinguishing the physical parameters from the intersubjective descriptions of them (which are at least in part indicative of and responsive to the finite scale and typical speed and length of human lives) seems more plausible?
Perhaps I have missed the point or this is too tangled.