Plausibly, there is some function from the strengths of my motivations (reasons, desires, etc.) to my chances of decision, so that I am more likely to choose that towards which I am more strongly motivated. Now imagine a machine I can plug my brain into such that when I am deliberating between options A and B, the machine measures the strengths of my motivations, applies my strengths-to-chances function, randomly selects between A and B in accordance with the output of the strengths-to-chances function, and then forces me to do the selected option.
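To make the setup concrete, here is a minimal Python sketch of such a decision machine. The particular strengths-to-chances function (simple normalization) and the option names are only illustrative assumptions; the argument does not depend on them.

```python
import random

def strengths_to_chances(strengths):
    """Illustrative strengths-to-chances function: normalize the measured
    motivation strengths so that they sum to 1. Any mapping on which stronger
    motivation yields a higher chance would serve the thought experiment."""
    total = sum(strengths.values())
    return {option: s / total for option, s in strengths.items()}

def decision_machine(strengths):
    """Measure the strengths, apply the strengths-to-chances function, and
    randomly select an option in accordance with the resulting chances."""
    chances = strengths_to_chances(strengths)
    options = list(chances)
    weights = [chances[o] for o in options]
    return random.choices(options, weights=weights, k=1)[0]

# Hypothetical example: I am more strongly motivated toward A, so the machine
# is more likely to force me to do A.
print(decision_machine({"A": 0.7, "B": 0.3}))
```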
Here then is a vivid way to put the randomness objection to libertarianism (or, more generally, to any view on which freedom is compatible with indeterminism): How do my ordinary decisions differ from what happens when I am attached to the decision machine? The difference does not lie in the chances of the outcomes.
That the machine is external to me does not seem to matter. For we could imagine that the machine comes to be a part of me, say because it is made of organic matter that grows into my body. That doesn’t seem to make any difference.
But when the randomness problem is put this way, I am not sure it is distinctively a problem for the libertarian. The compatibilist has, it seems, an exactly analogous problem: Why not replace the deliberation by a machine that makes one act according to one’s strongest motivation (or, more generally, whatever motivation it is that would have been determined to win out in deliberation)?
This suggests (weakly) that the randomness problem may not be specific to libertarianism, but may be a special case of a deeper problem that both compatibilists and libertarians face.
It seems that both need to say that it deeply matters just how the decision is made, not just its functional characteristics. And hence both need to deny functionalism.
I like this way of putting the problem. And I think it highlights what (I am coming to believe) is the deep difference between in/compatibilists.
All parties agree that if "the self" is making the decision (=performing the role of the strengths-to-chances function) then the action is free. What they differ on is what can be part of the self. If the machine is explicitly external to a person, then clearly the action is not free, on any view. And that would be because if the action is immoral, for example, it is the machine that needs to be fixed, not the person. Hence the person is not responsible and therefore not free.
But if the machine is part of the self (for a certain kind of libertarian: per impossibile) then it is (part of) the self that needs fixing in case of an immoral decision. And so the person is responsible and therefore free.
The big difference, I think, is whether such a machine CAN be part of the self. And the view that it can/not is a premise, not a conclusion, in an argument for in/compatibilism.
Heath
I could agree that the self needs fixing in case of an immoral action, but that does not mean the self is responsible.
Suppose I design an android with "the machine" as part of it. Now, I program the machine in such a way that it makes the android perform an immoral action. So, the android needs fixing.
But it seems to me I am responsible for the android's immoral actions.
If the android could fix itself, then it could be responsible.
A potential difficulty is that "random" is not an ontological attribute, but an epistemological one. That is, when we call something random, we are short-handing the fact that the causal factors are too many or too difficult to integrate. In other words, randomness is a description of *our uncertainty* about the *actual* cause. A machine making "truly random" decisions does not exist, and cannot exist.
Martin
It could fix itself if it was programmed to fix itself.
Heath:
"If the machine is explicitly external to a person, then clearly the action is not free, on any view."
I am not sure that is the case. Suppose the function is deterministic and super-simple: it just chooses the strongest motivation and goes with it. And imagine that this function is normally computed by the brain, but that Alice has that part of her brain damaged, and gets a prosthesis that does the simple computation. It seems to me that replacing just the one part of the brain that computes a simple deterministic function with an equivalent prosthesis should not affect whether one has moral responsibility. (Yes, one may worry about a sorites here. But suppose we just do that one replacement and stop.)
I suppose you could say that the prosthesis becomes a part of Alice. But I wouldn't think that a prosthesis immediately comes to be a part of one.
Dr Pruss
If the prosthesis is the sole thing that accounts for Alice's moral choices, then once it is implanted it is morally responsible.
You could argue that a non-living thing cannot be morally responsible, but then the relevant part of Alice's brain before the damage didn't have moral responsibility either, since the only difference between that part of the brain and the prosthetic replacing it is in the material it is composed of. Hence, in this scenario, Alice did not have moral responsibility, at least not for the choices made as a result of the computation of her brain.
Now suppose the same prosthesis were implanted in your brain; then it would "make the same choices" as Alice would.
Alex,
It seems to me that if you got a robot arm and figured out how to use it smoothly, it would for all practical purposes be a part of you. (As much as your other arm.) And similarly for (parts of) the brain.
I think most compatibilists would agree that if the function is simple enough, Alice doesn't have MR to start with anyway.
Heath:
"I think most compatibilists would agree that if the function is simple enough, Alice doesn't have MR to start with anyway."
I am not sure about that. Couldn't this be a perfectly fine compatibilist system of moral responsibility?
Step 1. Calculate the strengths of the desires for the different options, in the light of all the motivations and considerations.
Step 2. Choose the option corresponding to the strongest calculated desire.
Step 1 is very complicated. But the decision happens at Step 2. And Step 2 is very simple. It is Step 2 that I am thinking about replacing.
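For concreteness, here is a minimal Python sketch of the two steps; the way Step 1 scores the options is a made-up placeholder, since only the simplicity of Step 2 matters here:

```python
def calculate_desire_strengths(options, motivations):
    """Step 1 (placeholder): weigh all the motivations and considerations to
    assign each option a desire strength. In reality this is the complicated
    part; here each motivation is just a function scoring an option."""
    return {o: sum(m(o) for m in motivations) for o in options}

def choose(strengths):
    """Step 2: pick the option with the strongest calculated desire. This is
    the very simple, deterministic step being considered for replacement."""
    return max(strengths, key=strengths.get)

# Hypothetical example with two motivations scoring options A and B.
motivations = [lambda o: {"A": 0.6, "B": 0.4}[o],
               lambda o: {"A": 0.1, "B": 0.2}[o]]
print(choose(calculate_desire_strengths(["A", "B"], motivations)))  # prints "A"
```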
I don't think the question is whether something is *for all practical purposes* a part of me, but whether it is a part of me. Plausibly, my hair is a part of me, but a wig is not. Yet when you stick on a wig, you immediately know how to use it smoothly--there is nothing to it if the glue works. Similarly, if I am getting heart surgery, and some gigantic piece of medical technology pumps blood through me, that gigantic piece of medical technology probably isn't a piece of me, even if it's hooked up so as to be controlled by my brain stem. (For one, I necessarily own all my body parts; but it is the hospital that owns the machine.)
Walter:
It may be (I am not sure) that a prosthesis can eventually grow into being a part of a person, but I don't think it *immediately* becomes a part of the person. Suppose I am trying on a variety of artificial legs in an artificial leg store. It seems implausible that I am doing a sequence of organ transplants. (Besides, there is the ownership argument in my comment to Heath.)
Two questions:
1) Say indeterminism for free will means that if you replayed time, you could just as easily have made a different decision from the one you made previously, even though all external and internal facts are identical; and maybe in the replay you would.
But you can just as easily imagine a person having a really strong preference for something (a certain pleasure, or avoiding a certain pain), and no matter how many times the scene is replayed he always chooses in line with that preference. Similar to this would be someone you love being in danger and your being able to save them effortlessly - and no matter how many times the scene is replayed you never choose otherwise (assuming your character isn't so fixed as to make refraining impossible).
A similar story would be your desiring something so intensely that you find it nearly irresistible and thus always choose it, and with great intensity, even though your will isn't drawn to it absolutely, since only God is an infinite unqualified good. That is, you COULD resist it in principle, since the intensity isn't directed at an infinite good, but the probability of your not choosing it is so small as to be virtually zero - just like flipping a fair coin infinitely many times and getting only heads. While that is technically possible, the realistic chances are zero.
The above three stories seem intuitively consistent with having free will. So how would this (always freely choosing the same thing indefinitely many times, or with such intensity that the probability of rejecting it is realistically zero, though still possible) square with indeterminism's insistence that you could just as easily have done otherwise if things were replayed?
2) If randomness and chance aren't the same phenomenon, and can exist independently of each other, as this Stanford Encyclopedia of Philosophy article argues: https://plato.stanford.edu/entries/chance-randomness/
Would this mean that if a random process were indeterministic but not chancy (not even chance zero), it would then produce its effects in a saturated non-measurable way, with no probability at all attached to any of the effects it can or does produce?
Note on 1): That is, indeterminism seems to require that you definitely WOULD eventually choose otherwise, given the option of refraining. It's not that you merely COULD choose otherwise but for various reasons simply never do; it seems to imply that indeterministic free will means you MUST eventually choose otherwise at some point.
What do you think?
What do you think of the above two questions on free will, repetition, and the chance-randomness distinction?
@Alex,
3) Does indeterminism necessarily imply non-contrastivity? Because we often make decisions in which we prefer one option over another, and would always choose the one over the other even if the decision were replayed infinitely many times, which is contrastive - yet we still have free will even in those cases, as our wills aren't necessitated or determined towards the preferred choice.
So if indeterminism can exist along with contrastivity, what would make non-contrastivity different? Is there an additional principle of indeterminism that allows there to be non-contrastive explanations / causes?
I don't see why you think we would choose the one over the other always.
Contrastivity is understood in many ways: http://alexanderpruss.blogspot.com/2019/09/ten-varieties-of-contrastive-explanation.html
@Alex,
Well, it's easy to imagine how we can always choose one over the other. Say you really dislike one option and the other is your favorite; absent any other possible reasons or benefits for the disliked option, it's easy to see why we'd always choose the favorite.
But even if we had possible reasons to choose the bad option (asceticism, proving you have free will by choosing the bad, people-pleasing, etc.), we can imagine those reasons not applying and our always preferring the favorite. Yet even in that case we still have free will.
If you have no reason to choose the alternative, choosing the alternative would violate the PSR, and hence would be impossible. But if the alternative is impossible, then we are determined.
But I agree that it is *possible* that we might always choose the first option even if we had reasons for the alternative, just as it is possible that a fair coin should always come up heads. However, the probability of our doing so is zero.
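To put a number on the coin analogy (a sketch; the flip counts are arbitrary): the chance of getting heads on every one of the first n flips of a fair coin is (1/2)^n, which shrinks toward zero and equals zero in the limit of infinitely many flips.

```python
def prob_all_heads(n):
    """Probability that a fair coin lands heads on every one of the first n flips."""
    return 0.5 ** n

for n in (10, 100, 1000):
    print(n, prob_all_heads(n))
# 10 -> about 9.8e-04
# 100 -> about 7.9e-31
# 1000 -> about 9.3e-302
# The probability never reaches zero for any finite n, but its limit is zero.
```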
@Alex,
Sorry, I should have worded that more precisely. I meant that if one had no reason to choose the painful option, and the only other option were very enjoyable and one's favorite, then it makes sense to think one would always choose the better option over the painful alternative.
To repeat a question from one of my above comments - does indeterminism require that we WILL do differently if a scene of us choosing were replayed? Because we often have preferences which make us choose one thing over another many different times, maybe even daily.
I agree about the painful option, absent special reasons to choose a painful one. But if so, then there is no reason to choose the painful one, and since it is impossible to choose something without a reason to do so, choosing the painful one is not possible.
Indeterminism does not require that we WILL do differently, but does require that we MIGHT do differently.
@Alex
1) Interestingly, it seems intuitive to say that even if we have no reason to choose the painful option, our choosing the enjoyable favorite option is still free in some way - perhaps because our will is drawn to it in a finite and non-necessary mode, since even the enjoyable favorite option isn't an infinite good. What do you think? Is it still free will in cases where you have only one option and only reasons for that one?
2) As for the point that having no reason to choose the painful one keeps things in line with the PSR - that's fair. But we can still have a similar scenario in which we DO have some reasons for choosing the painful option; it's just that those reasons are extremely weak and don't impress us, so we will still always stick with the other option no matter how many times the scene is replayed.
So even when we do have reasons to choose A, we can still always prefer B - and if time were replayed somehow, we'd make the same decision for the same reasons with the same thoughts. What do you think?
3) So would it be possible under indeterminism to always prefer to choose the enjoyable B over the not-as-enjoyable A - even though we COULD still freely choose A, metaphysically speaking?
That is, we often make choices that are predictable, and we can say roughly that a person we know would always choose to do this over that - such language is coherent, and still compatible with free will.
Also, what would be your thoughts on the difference objection / question to indeterminism? Say indeterminism means that the same cause, in an environment that is identical both externally and internally, can still produce different effects.
But it seems there would still be a difference - indeterministically causing A is different from causing B, so there must be a difference either external or internal to the cause, besides the existence of different effects. Since the cause acts primarily from its own internal powers, the difference should be internal - say the substance elicits an act within itself which causes A, and the eliciting of the act is what's indeterministic.
The question then is: is this still compatible with the indeterministic maxim that the same internally identical cause can produce different effects? And does this lead to an infinite regress - since if the difference lies in the specific act elicited, one may go on to ask what makes it the case that that act was elicited, and so on?
Because denying any internal or external difference seems to deny either the principle of identity by which different events / acts are different, or the PSR, since there is nothing intrinsic which specifies the act towards the effect.
I think one possible response could be to say that the internally elicited act is the same, but the effect produced is different - but in that case, doesn't logic demand that there still be a difference SOMEWHERE other than the effect produced, since we don't want to say that the connection between act and effect is wholly arbitrary? What do you think?
What do you think of the above two comments?