Friday, June 16, 2017

Brute necessities and supervenience

There is something very unappealing about unexplained, i.e., brute, metaphysical necessities that are “arbitrary”. For instance, suppose that someone said that some constant in a law of nature had the precise value it does by metaphysical necessity. If that constant were 1 or π or something like that, we could maybe buy that. But what if the constant couldn’t be put in any neat form, couldn’t be derived from deeper metaphysical necessities, but just happened necessarily to be exactly 1.847192019... (in a natural unit system), and so on for some infinite string of digits? Nah! It would be much more satisfactory to posit a theory on which that constant has that value contingently. “Arbitrariness” of this sort is evidence of contingency, though it is a hard question exactly why.

Here is an application of this epistemic principle. It seems very likely that any view on which mental properties supervene of metaphysical necessity on physical ones will involve brute metaphysical necessities that are “arbitrary”.

For instance, consider a continuum of physical arrangements, starting with a paradigmatic healthy adult human and ending with a rock of the same mass. The adult human has conscious mental properties. The rock does not. Given metaphysically necessary supervenience, there must be a necessary truth as to where on the continuum the transition from consciousness to lack of consciousness occurs or, if the transition is vague, a necessary truth as to how the physical continuum maps to a vagueness profile. But it is very likely that any such transition point will be “arbitrary” rather than “natural”.

Or consider this. The best naturalist views make mental properties depend on computational function. But now consider how to define the computational function of something, say of a device that has two numerical inputs and one numerical output. We might say that if 99.999% of the time when given two numbers the device produces the sum of the numbers, and there is no simple formula that gives a higher degree of fit, then the computational function of the device is addition. But just how often does the device need to produce the sum of the numbers to count as an adder? Will 99.99% suffice? What about 99.9%? The reliability cut-off in defining computational function seems entirely arbitrary.
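To make the arbitrariness vivid, here is a minimal sketch in Python. Everything in it is a hypothetical illustration rather than anything from the argument itself: a noisy_adder device with a made-up error rate, a couple of candidate formulas, and two candidate reliability cut-offs. The point is just that the verdict about what the device computes flips with the cut-off while the device itself stays fixed:

```python
import random

def noisy_adder(x, y, error_rate=0.0005):
    """A hypothetical device: returns x + y most of the time, a near-miss otherwise."""
    if random.random() < error_rate:
        return x + y + random.choice([-1, 1])  # occasional miscomputation
    return x + y

def fit(device, formula, trials=100_000):
    """Estimate the fraction of random inputs on which the device matches the formula."""
    hits = 0
    for _ in range(trials):
        x, y = random.randrange(1000), random.randrange(1000)
        if device(x, y) == formula(x, y):
            hits += 1
    return hits / trials

def computational_function(device, candidates, cutoff):
    """Name the best-fitting candidate formula, provided its fit clears the
    reliability cut-off; otherwise report that the device computes no determinate
    function."""
    name, best = max(((n, fit(device, f)) for n, f in candidates.items()),
                     key=lambda pair: pair[1])
    return name if best >= cutoff else None

candidates = {
    "addition": lambda x, y: x + y,
    "multiplication": lambda x, y: x * y,
}

# Nothing about the device changes between these two calls; only the cut-off does.
print(computational_function(noisy_adder, candidates, cutoff=0.9999))  # very likely None
print(computational_function(noisy_adder, candidates, cutoff=0.999))   # very likely 'addition'
```

On the 99.99% cut-off the device computes nothing determinate; on the 99.9% cut-off it is an adder. Any particular choice between such cut-offs looks exactly like the “arbitrary” brute necessity worried about above.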

It may be that there is some supervenience theory that doesn’t involve arbitrary maps, arbitrary cut-offs, etc. But I suspect we have no idea how such a theory would go. It’s just pie in the sky.

If supervenience theories appear to require “arbitrary” stuff, then it is reasonable to infer that any supervenience is metaphysically contingent—perhaps it is only nomic supervenience.

This line of argument is plausible, but to make it strong one would need to say more about the notion of the “arbitrary” that it involves.

7 comments:

  1. I've been thinking a bit lately about anti-physicalist arguments along these lines---roughly: (i) assuming the falsity of panpsychism, there will be something arbitrary about the location of the line between physical systems that are conscious and those that aren't. (ii) There is precedent for this sort of arbitrariness in contingent laws of nature. (iii) There is no precedent for this sort of arbitrariness in metaphysically necessary "grounding laws," and moreover this sort of arbitrariness seems inappropriate in necessary grounding laws. (iv) So, probably, the line is determined by contingent laws of nature.

    But it's not clear to me that this style of argument works against reductive physicalist views of consciousness, i.e. views that assert that consciousness is identical to some physical/functional property. Suppose it turns out that physical systems are conscious when, and only when, they integrate exactly n bits of information (measured by Giulio Tononi's PHI, say), where n is some arbitrary-looking number like 3,201,211.432318... A reductive physicalist might say that this is because consciousness = the property of integrating at least n bits of information. There's a temptation to respond that this identity, if true, would be *arbitrary*. (Why isn't consciousness identical to the property of integrating n+1 bits of information, or n-4 bits, or ...?) But I'm not sure it makes sense to say that an identity is arbitrary. (Is it arbitrary that I'm identical to me and not you? Or that scarlet is identical to scarlet and not crimson? Or that water is identical to H2O and not H3O?)

  2. correction: "...when, and only when, they integrate *at least* n bits of information"

  3. How about bringing in ethics? There is necessarily a value things have in virtue of being conscious. Why should that value be associated with consciousness rather than consciousness*, the property of integrating n+1 bits?
    One could make the same move and say that there is value and value*, each of which is identical to some physical property. But I don't think this move is plausible.
    There is also a semantic problem: why does our language refer to consciousness rather than consciousness*, when both are equally nonnatural?

  4. I think bringing in ethics might be promising here. We might imagine two physically similar systems, one that integrates n bits and one that integrates n-1 bits, where both are in a functional state characteristic of enormous pain. Intuitively, there's a huge axiological difference between being in conscious pain, and being in a completely unconscious state with a pain-like functional profile, and it would be weird if this huge axiological difference were grounded in a tiny natural difference.

    I agree there are semantic problems---it's hard to see how a reductive physicalist about consciousness can avoid a large (and, I think, implausible) amount of semantic indeterminacy in "is phenomenally conscious." There are also issues about substantivity (in Sider's sense) even if the semantic indeterminacy can be resolved. Parfit says somewhere that if we come upon an alien life form that wriggles vigorously when we poke it, it can't be an empty or merely verbal question whether the creature is conscious and in great pain, or merely an unconscious automaton. This seems really plausible. But it's hard to see how the reductive materialist can avoid the conclusion that such questions are sometimes empty/non-substantive/merely verbal. Whatever physical/functional property is identical with consciousness, there will be nearby, very similar physical/functional properties that don't differ much in terms of their naturalness/joint-carving-ness. In that case, given a Sider-style account of what makes a question substantive, it looks like reductive physicalism will entail the falsity of Parfit's substantivity intuition.

  5. I think it could be merely verbal whether some alien quale is a pain, but it cannot be merely verbal whether it is to be avoided or to be pursued.

  6. I agree it could be merely verbal whether an alien quale is a pain (e.g. it might be on the borderline between a mere sensation of heat, and a sensation of heat that constitutes a pain), but I don't think it could be a merely verbal question whether a creature is in pain *or* entirely unconscious (where that's an "alternative" question whose candidate answers are (i) "it is in pain" and (ii) "it is entirely unconscious," rather than a yes/no question whose candidate answers are "it is in pain" and "it is not in pain").

  7. Maybe. Consciousness is sufficiently mysterious that I am open to the possibility that there is something akin to consciousness which is morally on par with it, and hence that there is something like conscious pain which isn't conscious or a pain, but which has similar moral implications.

    I don't know what that thing would be like. But here's an analogy that makes me open to it. Suppose we knew no art form other than music, and we had just the one relevant word "musart" whose extension, as far as we knew, coincided with music. We might then come to reasonably speculate that there could be something like musart that involves no sound but exhibits similar values to paradigmatic cases of musart. And then we might ask whether: (a) there is a form of musart that doesn't involve sound or (b) there is something other than musart that doesn't involve sound but is sufficiently like musart to exhibit the same kinds of values. And we would have a very hard time settling this question. And of course this could well be a merely verbal question.

    I guess that my openness to these possibilities means that I can't run the standard multiple realizability arguments against type-type identity theory. Maybe pain can only exist in brains like ours. Maybe octopi feel no pain. But if so, then what I care about as a philosopher isn't pain, but stuff that has the same unfortunate ethical properties that pain does, so the multiple-realizability problem comes back.
