Wednesday, January 17, 2018

Free will, randomness and functionalism

Plausibly, there is some function from the strengths of my motivations (reasons, desires, etc.) to my chances of decision, so that I am more likely to choose that towards which I am more strongly motivated. Now imagine a machine I can plug my brain into such that when I am deliberating between options A and B, the machine measures the strengths of my motivations, applies my strengths-to-chances function, randomly selects between A and B in accordance with the output of the strengths-to-chances function, and then forces me to do the selected option.
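
To make the setup concrete, here is a minimal sketch of such a decision machine in Python. The particular strengths-to-chances function used (chances simply proportional to the measured strengths) is my own assumption for illustration, as are the function names; whatever the real function is, it would slot into the same place.

```python
import random

def strengths_to_chances(strength_a, strength_b):
    """Hypothetical strengths-to-chances function: here the chance of an
    option is assumed to be proportional to the strength of the motivation
    for it. The real function could be anything of this general shape."""
    total = strength_a + strength_b
    return strength_a / total, strength_b / total

def decision_machine(strength_a, strength_b):
    """Measure the strengths, apply the strengths-to-chances function, and
    randomly select between options A and B in accordance with the output."""
    chance_a, _ = strengths_to_chances(strength_a, strength_b)
    return "A" if random.random() < chance_a else "B"

# Example: the motivation for A is twice as strong as that for B,
# so the machine forces A roughly two-thirds of the time.
print(decision_machine(2.0, 1.0))
```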

Here then is a vivid way to put the randomness objection to libertarianism (or more generally to a compatibilism between freedom and indeterminism): How do my decisions differ from my being attached to the decision machine? The difference does not lie in the chances of outcomes.

That the machine is external to me does not seem to matter. For we could imagine that the machine comes to be a part of me, say because it is made of organic matter that grows into my body. That doesn’t seem to make any difference.

But when the randomness problem is put this way, I am not sure it is distinctively a problem for the libertarian. The compatibilist has, it seems, an exactly analogous problem: Why not replace the deliberation by a machine that makes one act according to one’s strongest motivation (or, more generally, whatever motivation it is that would have been determined to win out in deliberation)?

This suggests (weakly) that the randomness problem in general may not be specific to libertarianism, but may be a special case of a deeper problem that both compatibilists and libertarians face.

It seems that both need to say that it deeply matters just how the decision is made, not just its functional characteristics. And hence both need to deny functionalism.

9 comments:

Heath White said...

I like this way of putting the problem. And I think it highlights what (I am coming to believe) is the deep difference between in/compatibilists.

All parties agree that if "the self" is making the decision (=performing the role of the strengths-to-chances function) then the action is free. What they differ on is what can be part of the self. If the machine is explicitly external to a person, then clearly the action is not free, on any view. And that would be because if the action is immoral, for example, it is the machine that needs to be fixed, not the person. Hence the person is not responsible and therefore not free.

But if the machine is part of the self (for a certain kind of libertarian: per impossibile) then it is (part of) the self that needs fixing in case of an immoral decision. And so the person is responsible and therefore free.

The big difference, I think, is whether such a machine CAN be part of the self. And the view that it can/not is a premise, not a conclusion, in an argument for in/compatibilism.

Walter Van den Acker said...

Heath

I could agree that the self needs fixing in case of an immoral action, but that does not mean the self is responsible.
Suppose I design an android with "the machine" as part of it. Now, I program the machine in such a way that it makes the android perform an immoral action. So, the android needs fixing.
But it seems to me I am responsible for the android's immoral actions.

Martin Cooke said...

If the android could fix itself, then it could be responsible.

Doug said...

A potential difficulty is that "random" is not an ontological attribute, but an epistemological one. That is, when we call something random, we are short-handing the fact that the causal factors are too many or too difficult to integrate. In other words, randomness is a description of *our uncertainty* about the *actual* cause. A machine making "truly random" decisions does not exist, and cannot exist.

Walter Van den Acker said...

Martin

It could fix itself if it was programmed to fix itself.



Alexander R Pruss said...

Heath:

"If the machine is explicitly external to a person, then clearly the action is not free, on any view."

I am not sure that is the case. Suppose the function is deterministic and super-simple: it just chooses the strongest motivation and goes with it. And imagine that this function is normally computed by the brain, but that Alice has that part of her brain damaged, and gets a prosthesis that does the simple computation. It seems to me that replacing just the one part of the brain that computes a simple deterministic function with an equivalent prosthesis should not affect whether one has moral responsibility. (Yes, one may worry about a sorites here. But suppose we just do that one replacement and stop.)

I suppose you could say that the prosthesis becomes a part of Alice. But I wouldn't think that a prosthesis immediately comes to be a part of one.

Walter Van den Acker said...

Dr Pruss

If the prosthesis is the sole thing that accounts for Alice's moral choices, then once it is implanted it is morally responsible.
You could argue that a non-living thing cannot be morally responsible, but then the relevant part of Alice's brain before the damage didn't have moral responsibility either, since the only difference between that part of the brain and the prosthesis replacing it is in the material it is composed of. Hence, in this scenario, Alice did not have moral responsibility, at least not for the choices made as a result of the computation of her brain.

Now suppose the same prosthesis were implanted in your brain; then it would "make the same choices" as Alice would.



Heath White said...

Alex,

It seems to me that if you got a robot arm and figured out how to use it smoothly, it would for all practical purposes be a part of you. (As much as your other arm.) And similarly for (parts of) the brain.

I think most compatibilists would agree that if the function is simple enough, Alice doesn't have MR to start with anyway.

Alexander R Pruss said...

Heath:

"I think most compatibilists would agree that if the function is simple enough, Alice doesn't have MR to start with anyway."

I am not sure about that. Couldn't this be a perfectly fine compatibilist system of moral responsibility?
Step 1. Calculate the strengths of the desires for the different options, in the light of all the motivations and considerations.
Step 2. Choose the option corresponding to the strongest calculated desire.

Step 1 is very complicated. But the decision happens at Step 2. And Step 2 is very simple. It is Step 2 that I am thinking about replacing.
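
Here is a toy sketch of that two-step system, just to show how trivial Step 2 is relative to Step 1. Step 1 is stubbed out with an assumed additive weighing of motivations; the function names and the particular weighing are my own illustrations, not part of the proposal.

```python
def calculate_desire_strengths(options, motivations):
    """Step 1 (the complicated part, crudely stubbed here): weigh all the
    motivations and considerations and return a desire strength per option."""
    return {opt: sum(m.get(opt, 0.0) for m in motivations) for opt in options}

def decide(strengths):
    """Step 2 (the simple part, the one being considered for replacement by
    a prosthesis): choose the option with the strongest calculated desire."""
    return max(strengths, key=strengths.get)

# Example with two hypothetical motivations bearing on options A and B.
strengths = calculate_desire_strengths(
    ["A", "B"],
    [{"A": 0.4, "B": 0.1}, {"A": 0.2, "B": 0.6}],
)
print(decide(strengths))  # -> "B"
```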

I don't think the question is whether something is *for all practical purposes* a part of me, but whether it is a part of me. Plausibly, my hair is a part of me, but a wig is not. Yet when you stick on a wig, you immediately know how to use it smoothly--there is nothing to it if the glue works. Similarly, if I am getting heart surgery, and some gigantic piece of medical technology pumps blood through me, that gigantic piece of medical technology probably isn't a piece of me, even if it's hooked up so as to be controlled by my brain stem. (For one, I necessarily own all my body parts; but it is the hospital that owns the machine.)

Walter:

It may be (I am not sure) that a prosthesis can eventually grow into being a part of a person, but I don't think it *immediately* becomes a part of the person. Suppose I am trying on a variety of artificial legs in an artificial leg store. It seems implausible that I am doing a sequence of organ transplants. (Besides, there is the ownership argument in my comment to Heath.)