Friday, September 17, 2021

Medical recommendations and informed consent

It is widely accepted that medical treatments require informed consent from the patient. This requires medical professionals to educate patients, to a reasonable degree, on the relevant scientific aspects of the treatment.

Interestingly, I have been told by a medical ethicist that it is not widely accepted that medical recommendations, whether from one’s individual physician or from a government body, are governed by similar informed consent standards. Thus, before giving you an injection, the physician is required to inform you of both the medical pros and cons of the injection, but if the physician recommends exercise to you, there is no such education requirement (e.g., the physician is not required to tell a clueless patient that exercise can result in joint pain).

This view seems wrong to me. The main reason for requiring informed consent is patient autonomy. But autonomy can be compromised just as much by recommendations omitting salient information as by actual treatment. Let’s say that Jeeves is annoyed by Wooster’s ugly mustache, and recommends to Wooster the deliciousness of a particular brand of chocolate, having heard from the factory owner’s valet that this brand has been contaminated with a chemical that makes one’s facial hair fall out. Jeeves has violated Wooster’s bodily autonomy through the recommendation almost as much as if Jeeves had shaved Wooster in the night.

Thursday, September 16, 2021

An ontological argument from the possible nondefectiveness of modality

  1. Necessarily, if it is necessary that there is no God, then modal reality is bad. (Making the existence of God impossible is terrible!)

  2. Necessarily, if something is bad, it is possible for it not to be bad. (The bad is a flaw in something that ought to be better than it is, and what ought to be can be.)

  3. So, if modal reality is necessarily bad, then it is possible for modal reality not to be bad. (by 2)

  4. So, if modal reality is necessarily bad, then modal reality is not necessarily bad. (by 3)

  5. So, modal reality is not necessarily bad. (by 4)

  6. So, possibly, modal reality is not bad. (by 5)

  7. So, possibly, it is not necessary that there is no God. (by 1 and 6)

  8. So, possibly, it is possible that God exists. (by 7)

  9. So, it is possible that God exists. (by 8 and S4)

  10. Necessarily, if God exists, it is necessary that God exists. (God is a necessary being and essentially divine.)

  11. So, it is possible that it is necessary that God exists. (by 9 and 10)

  12. So, God exists. (by 11 and Brouwer)
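
For readers who want the modal bookkeeping explicit, here is a sketch of the final steps in bare modal notation (my own shorthand, with G for “there is a God”; S4 and Brouwer are the usual schemas):

    S4: \Diamond\Diamond p \to \Diamond p        Brouwer: \Diamond\Box p \to p

    7.  \Diamond\neg\Box\neg G
    8.  \Diamond\Diamond G            (from 7, by the duality of \Box and \Diamond)
    9.  \Diamond G                    (from 8, by S4)
    10. \Box(G \to \Box G)
    11. \Diamond\Box G                (from 9 and 10, by K)
    12. G                             (from 11, by Brouwer)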

Wednesday, September 15, 2021

Two common intuitions

Here are two very common intuitions in the philosophy of mind:

  1. Our experiences of the same things are approximately qualitatively the same: your perceptual experiences of white, or squareness, or the beat of a drum are approximately like mine.

  2. It is metaphysically possible to remap all of one’s qualia, so that one could have had all the color perceptions in one’s life rotated by 120 degrees, say.

I find myself somewhat sceptical of each. Moreover, each claim makes the other less likely, so the probability that both are true is less than the product of the probabilities of each.

Of the two claims, the first seems fairly plausible to me, because I am attracted to the idea that the qualitative properties of my perceptions arise from typical interconnections (including, but perhaps not limited to, inferential ones) between them, and we all have roughly the same ones. But this line of thought, while supporting (1), also supports the denial of (2).

Moreover, our use of the same word “red” for your and my experiences of red tomatoes suggests that (1) is a part of our ordinary pre-theoretic beliefs. And I am inclined to trust our ordinary pre-theoretic beliefs.

On the other hand, it could turn out that (1) is false because it could turn out that how red things look is partly a function of features of brain organization that differ from individual to individual (and in the same individual over time). If so, then we might want to disambiguate ordinary language’s “looks the same” relation to mean either having the same qualitative experience or having an experience with the same representative content, so that we could continue to say that when you and I are looking at a red tomato, it looks the same to us in the representative but not qualitative sense.

But in any case all this is deeply mysterious stuff. I am strongly inclined to the idea that we should try to figure out the best theory of mind and perception we can, and then use that to figure out if (1) and (2) are true, rather than using (1) and (2) as constraints.

An ontological argument from justice

Buras and Cantrell have given a very clever ontological argument for the existence of God based on a desire for happiness. Here is a variant of their argument based on justice.

  1. Ought implies (metaphysical) possibility.

  2. There ought to be justice for humans.

  3. Necessarily, if there is justice for humans, it is possible that there is a human who has happiness.

  4. Necessarily, if there is a human who has happiness, God exists.

  5. So, possibly God exists. (1-4 and S5)

  6. So, God exists. (by 5, S5, and the premise that God is essentially divine and a necessary being.)

I want to expand a little on 3 and 4.

In any world where there is justice for humans, there is (a) a practical possibility of a human being innocent, and (b) a system that reliably rewards innocent humans with happiness. Items (a) and (b) taken together plausibly imply a practical, and hence metaphysical, possibility of happiness. That gives us (3).

Buras and Cantrell defend claim (4). My favorite defense of claim (4) is that human happiness, when we think through our deep desire for eternal life as well as the danger of boredom in eternal life, requires some sort of friendship with God.
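
In the same shorthand as in the sketch above (J for “there is justice for humans”, H for “some human has happiness”, G for “God exists”), the modal core of the argument runs roughly as follows:

    \Diamond J                              (from 1 and 2: ought implies possibility)
    \Box(J \to \Diamond H)                  (premise 3)
    \Diamond\Diamond H, hence \Diamond H    (by K, then S4, both available in S5)
    \Box(H \to G)                           (premise 4)
    \Diamond G                              (by K)
    \Box(G \to \Box G), so \Diamond\Box G, so G    (by K and the Brouwer/S5 step)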

Monday, September 13, 2021

Virtue ethics and peer disagreement

Aristotelian ethics is committed to the claim that the virtuous person knows what actions and habits are virtuous and is justified in holding on to that knowledge, and indeed should hold on to it. There is a deep stability to virtue. This means that an Aristotelian virtuous person ought not adopt a conciliationist response to those who disagree as to what is virtuous, suspending judgment over the disagreed-upon items.

Indeed, one imagines that Aristotle’s virtuous person could say of those who disagree: “They are not virtuous, and hence do not see the truth about moral matters.” Aristotle’s virtuous person would reject the idea that someone who disagrees with them about virtue could be an epistemic peer. Virtuous habits give epistemic access to moral (and not only moral) truth.

Of course, the disagreer may think themselves virtuous as well, and may think the same thing about the virtuous person as the virtuous person thinks about them. But that does not shake the Aristotelian virtuous person.

This means that if Aristotelian virtue ethics is correct, there is a clear thing that a Christian can say about religious disagreement. The Christian thinks faith is a virtue, albeit an infused rather than natural one. As such, faith gives epistemic access, and someone lacking faith is simply not an epistemic peer, since they lack a source of truth. The fact that a person lacking faith thinks they have the virtue of faith should not move the person who actually has the virtue.

Of course, one might turn all this around and use it as an argument against virtue ethics. But I think Aristotle’s picture seems exactly correct as to the kind of firmness of moral knowledge that the virtuous person exhibits, the kind of spine that lets them say, without pride or vanity, to vast numbers of others that they are simply wrong.

Moral bindingness and levels of jurisdiction

In the US, you are sometimes told that something “violates federal law”, and it is said in a way that suggests that violating federal law is somehow particularly bad.

This raises a moral question. I will assume, contrary to philosophical anarchists, that valid and reasonable laws are in some way morally binding. Other things being equal, is it morally worse to violate the laws that operate at broader levels of organization? In the US, an affirmative answer would imply that federal law is morally worse to break than state law, and state law than county law, and county law than city law.

One might think this: the power to make laws belongs to more local levels of organization by delegation from broader levels of organization, and hence violating the laws of a more local jurisdiction is less morally bad. But this argument does not fit with what I understand is the US constitutional system’s idea that sovereignty starts with the states, which permanently delegate some of their authority to the federal system. And, in any case, it is not clear why it would be less bad to go against the laws of a more delegated authority: if x delegates some authority to y, then relevant disobedience to y is also disobedience to x.

A perhaps more plausible argument in favor of the laws of broader jurisdictions being morally more strongly binding is that in violating a law, one offends against the body of citizens. With a broader jurisdiction, that body of citizens is larger, and hence the offense is worse. But this can’t be right. It is not morally less bad to commit federal tax fraud in Canada than in the US just because in Canada the population is smaller! (This observation perhaps suggests that if we do adopt the view that violating the law offends against the body of citizens, we should not view the “offense against the body of citizens” as meaning an offense against the citizens taken severally—to offend against a body is different from offending against the body’s constituents taken severally, or else punching a bigger person would be worse than punching a smaller one, just because the bigger person’s body has more cells. Or, perhaps, we have to say that the offensiveness of breaking a law is diluted among the citizenry, so that in a larger body, each citizen is less offended against.)

I want to suggest that the idea that it is worse to offend against broader jurisdictions is backwards for multiple reasons:

  1. An offense against a narrower jurisdiction is an offense against a body of citizens who are more closely related to one, and hence is a greater breach of the duties of civic friendship.

  2. The laws of narrower jurisdictions can be reasonably expected to be on the whole better fitted to the community, because there is less variation in circumstance within a narrower jurisdiction.

  3. One has a greater say in the laws of the narrower jurisdiction, and hence they better fit with the autonomy of the governed.

  4. It is typically less burdensome to choose which narrower jurisdiction one lives under than which wider one: it is easier to move to a different city than to a different country. Therefore, any implied consent to local laws is greater than to wider laws.

These considerations suggest that offending against a narrower body is worse. Interestingly, (3) suggests that in my earlier example of tax fraud in the US and Canada, it is even worse to commit tax fraud in Canada, because doing so violates laws one has a greater say in. That actually sounds right to me, but I do not feel the difference in moral badness is a very big one, so (3) is probably not a major factor (of course, in the special case of tax fraud, a lot of the immorality comes from the immorality of lying, which precedes law).

(These same considerations support the principle of subsidiarity.)

So far I have been thinking about geographically defined jurisdictions. But consider a very different jurisdiction: the body of a profession, such as physicians or lawyers or electricians. The standards of such a body have a great deal of moral force. When a doctor says that disclosing some information about a patient violates medical ethics, that carries a great deal of moral force. And yet it really is “just” a violation of the law of a body, because there would be no such moral duty of confidentiality without the standards of the body of physicians (there would be more limited duties of confidentiality, say when the doctor specifically promised the patient not to disclose something). The laws of the professional jurisdictions have a lot of moral force, and it is not implausible that 1-4 are at least partly explanatory of that force.

Saturday, September 11, 2021

Mice

It feels like I am constantly fixing computer mice. The most common issue is the wires breaking near the mouse, requiring me to shorten the cable and resolder it to the PCB. I usually also add some glue or heat shrink tubing as strain relief if I haven’t done so already. Switching the household to wireless mice would solve the problem, at the expense of having yet more batteries to deal with.

I started off today with a new fix: the plastic axle from a mouse wheel broke off. I drilled through the mouse wheel and put a toothpick in instead. We’ll see how long that holds up. If it doesn’t, I’ll have to see if I can find a screw or nail of the right diameter instead.

Likely near future task: my daughter complains of a mouse double clicking. When it earlier had that problem, it seemed to me that it was generating an extra click on release, which sounds to me like a debouncing failure. But then I took it apart and put it back together and the problem disappeared, so I didn’t have a chance to fix it. Apparently the problem has come back, but I can’t duplicate it. If and when I duplicate it, I plan to hook it up to an oscilloscope, and play around with capacitors to debounce the release.
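
(For what it’s worth, the back-of-the-envelope math for the capacitor approach is simple; the component values here are made up purely for illustration. With a pull-up resistor R and a capacitor C across the switch contacts, the filter’s time constant is

    \tau = RC, \quad \text{e.g. } R = 10\,\text{k}\Omega,\ C = 100\,\text{nF}\ \Rightarrow\ \tau = 1\,\text{ms},

which is on the order of typical contact-bounce times, so a spurious release pulse much shorter than that should get smoothed out.)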

I wonder if problems would decrease if we bought more expensive mice.

Friday, September 10, 2021

Comparing the epistemic relevance of measurements

Suppose P is a regular probability on (the powerset of) a finite space Ω representing my credences. A measurement M is a partition of Ω into disjoint events E₁, ..., Eₙ, with the result of the experiment being one of these events. In a given context, my primary interest is some subalgebra F of the powerset of Ω.

Note that a measurement can be epistemically relevant to my primary interest without any of the events in the measurement being something I have a primary interest in. If I am interested in figuring out whether taller people smile more, my primary interest will be some algebra F generated by a number of hypotheses about the degree to which height and smiliness are correlated in the population. Then, the measurement of Alice’s height and smiliness will not be a part of my primary interest, but it will be epistemically relevant to my primary interest.

Now, some measurements will be more relevant with respect to my primary interest than others. Measuring Alice’s height and smiliness will intuitively be more relevant to my primary interest about height/smile correlation, while measuring Alice’s mass and eye color will be less so.

The point of this post is to provide a relevance-based partial ordering on possible measurements. In fact, I will offer three, but I believe they are equivalent.

First, we have a pragmatic ordering. A measurement M₁ is at least as pragmatically relevant to F as a measurement M₂, relative to our current (prior) credence assignment P, just in case for every possible F-based wager W, the P-expected utility of wagering on W after a Bayesian update on the result of M₁ is at least as big as that of wagering on W after updating on the result of M₂; and M₁ is more relevant if, in addition, for some wager W the expected utility of wagering after updating on the result of M₁ is strictly greater.

Second, we have an accuracy ordering. A measurement M₁ is at least as accuracy relevant to F as a measurement M₂ just in case for every proper scoring rule s on F, the expected score of updating on the result of M₁ is better than or equal to the expected score of updating on the result of M₂, and M₁ is more relevant when for some scoring rule the expected score is better in the case of M₁.

Third, we have a geometric ordering. Let H_{P,F}(M) be the horizon of a measurement M, namely the set of all possible posterior credence assignments on F obtained by starting with P, conditionalizing on one of the possible events that M partitions Ω into, and restricting to F. Then we say that M₁ is at least as (more) geometrically relevant to F as M₂ just in case the convex hull of the horizon of M₁ contains (strictly contains) the convex hull of the horizon of M₂.

I have not written out the details, but I am pretty sure that all three orderings are equivalent, which suggests that I am on to something with these concepts.
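
Here is a minimal Python sketch of the three quantities on a toy example, in the special case where F is generated by a single hypothesis H (so a horizon is just a set of posterior credences in H and its convex hull is an interval). The sample space, the two measurements, the wager, and the use of the Brier rule as the scoring rule are all my own illustrative assumptions, not part of the claim above.

    # Toy illustration of the pragmatic, accuracy, and geometric notions
    # (not a proof of their equivalence), with F generated by one hypothesis H.
    from fractions import Fraction as Fr

    omega = range(6)                      # finite sample space
    P = {w: Fr(1, 6) for w in omega}      # a regular prior

    H = {0, 1, 2}                         # hypothesis generating the algebra F
    M1 = [{0, 3}, {1, 4}, {2, 5}]         # one measurement (a partition of omega)
    M2 = [{0, 1, 2}, {3, 4, 5}]           # another measurement

    def prob(E):
        return sum(P[w] for w in E)

    def posterior(E):
        # Credence in H after updating on the result E.
        return sum(P[w] for w in E if w in H) / prob(E)

    def hull(M):
        # Convex hull of the horizon of M, restricted to F: an interval of
        # posterior credences in H.
        h = [posterior(E) for E in M]
        return (min(h), max(h))

    def expected_brier_penalty(M):
        # Expected Brier penalty on F after updating on M (lower is better);
        # one concrete proper scoring rule standing in for "every" such rule.
        return sum(prob(E) * posterior(E) * (1 - posterior(E)) for E in M)

    def value_of_information(M, stake=Fr(1), price=Fr(1, 2)):
        # One simple F-based wager: pay `price` for a ticket worth `stake` if H.
        # After seeing the result of M, take the wager iff its posterior
        # expected utility is nonnegative.
        return sum(prob(E) * max(Fr(0), posterior(E) * stake - price) for E in M)

    for name, M in [("M1", M1), ("M2", M2)]:
        print(name, hull(M), expected_brier_penalty(M), value_of_information(M))

On this toy example the second measurement comes out more relevant on all three readings, and the first is completely uninformative about H; but that is just one example, not a check of the general equivalence.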

An interesting special case is when one’s interest is binary, an algebra generated by a single hypothesis H, and the measurements are binary, i.e., partitions into two sets. In that case, I think, a measurement M₁ is at least as (more) relevant as a measurement M₂ if and only if the interval whose endpoints are the Bayes factors of the events in M₁ contains (strictly contains) the interval whose endpoints are the Bayes factors of the events in M₂.
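
A companion sketch for this binary case: computing the Bayes factor interval of a binary measurement and the containment check that the stated criterion describes. The prior and the two partitions are again made-up illustrations, and the code merely implements the criterion; it does not prove the claim.

    # Bayes factor interval of a binary measurement, for a binary interest H.
    from fractions import Fraction as Fr

    omega = {0, 1, 2, 3}
    P = {0: Fr(4, 10), 1: Fr(1, 10), 2: Fr(2, 10), 3: Fr(3, 10)}
    H = {0, 1}                            # the single hypothesis of interest

    def bayes_factor(E):
        # P(E | H) / P(E | not-H)
        p_H = sum(P[w] for w in H)
        p_E_given_H = sum(P[w] for w in E if w in H) / p_H
        p_E_given_not_H = sum(P[w] for w in E if w not in H) / (1 - p_H)
        return p_E_given_H / p_E_given_not_H

    def bf_interval(M):
        # M is a binary partition [E, omega - E] of the sample space.
        bfs = [bayes_factor(E) for E in M]
        return (min(bfs), max(bfs))

    def contains(i, j):
        # Does interval i contain interval j?
        return i[0] <= j[0] and j[1] <= i[1]

    M1 = [{0, 2}, {1, 3}]
    M2 = [{0, 3}, {1, 2}]
    i1, i2 = bf_interval(M1), bf_interval(M2)
    print(i1, i2, contains(i1, i2))       # here M1's interval contains M2's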

Thursday, September 9, 2021

What question should I ask?

Epistemology is heavily focused on the question of evaluating a doxastic state given a set of evidence: is the state rational or irrational, is it knowledge or opinion, etc. This can be useful to the epistemic life of an agent, but there is something else that is at least as useful and does not get discussed nearly as much: the question of how we should go about gathering evidence or, equivalently, what experiments (broadly understood) we should perform.

[The rest of the post was based on a mathematical error and has been deleted. The next post is an attempt to fix the error.]

Wednesday, September 8, 2021

Has cultural relativism about norms of etiquette really been established?

Imagine a philosopher who argued that the norms of assertion are relative to culture on the grounds that in England we have the norm:

  1. Only assert “It’s snowing” when it’s snowing

while in France we have the norm:

  2. Only assert “Il neige” when it’s snowing.

This would be silly for multiple reasons. Foremost among these is that (1) and (2) are mere consequences of the norm of assertion:

  3. Only assert what is true.

(Of course, you may disagree that truth is the norm of assertion. You may prefer a knowledge or justified belief or belief or high credence norm. But an analogous point will apply.)

It is widely held that while the norm of assertion is essentially the same across cultures, norms of etiquette vary widely. But the main reason people give for believing that the norms of etiquette vary widely is akin to the terrible argument about norms of assertion I began the post with. People note such things as that in some countries when one meets acquaintances one bows, and in others one waves; or that in some one eats fish with two forks and in others with a fork and knife.

But just as the fact that in England one should follow (1) and in France (2) is compatible with the universality of norms of assertion, likewise the variation in greeting and eating rituals can be compatible with the universality of norms of etiquette. It could, for instance, be that the need to eat fish with two forks in Poland and with a fork and knife in the USA derives simply from a universal norm of etiquette:

  4. Express respect for your fellow diners.

But just as one asserts the truth with different words in different languages, one expresses respect for one’s fellow diners with different gestures in different cultures.

Indeed, presumably nobody thinks that the fact that in France one says “Merci” and in England “Thank you” implies a cultural relativism in etiquette. In both cases one is thanking, but the words that symbolize thanks are different. But what goes for words here also applies to many gestures (there may turn out to be universal gestures, like pointing).

One might object that among the norms of etiquette there are norms that specify which gestures signify, say, respect or thanks. But a specification of what signifies what is not the specification of a norm. That “Merci” signifies gratitude and that eating fish with two forks signifies respect are not norms, because norms tell us what to do, and these do not.

  5. “Merci” signifies thanks

is grammatically not a norm but a statement of fact. We might try to make it sound more like a norm by saying:

  6. Signify thanks with “merci”!

But that is bad advice when taken literally. For thanks are not to be signified always, but only when thanks are appropriate. A more correct norm would be:

  7. When a service has been done for you, signify thanks with “merci”!

But this is just a consequence of the general norm of etiquette:

  8. When a service has been done for you, signify thanks!

together with the fact (5).

So, we see that the mere variation in rituals should not be taken to imply that there is cultural relativity of norms of etiquette.

If there is to be a cultural relativity of norms of etiquette, it will have to be at a higher level. If in some cultures etiquette requires one to show respect for all fellow diners and in others to show disrespect for some—say, those from an underprivileged group—then that would indeed be a genuine relativity of norms of etiquette.

But it’s not clear to me that in a culture where one is expected to show disrespect to fellow diners in some underprivileged group, that expectation is actually a norm of etiquette. Not all social expectations, after all, are actually norms of etiquette, or even norms at all. A norm (of behavior) gives norm-based reasons. But an expectation that one show disrespect to members of an underprivileged group has no reason-giving force at all.

We can imagine a culture where there is no way to symbolize respect for members of an underprivileged group when dining. On the view I wish to defend, such a lack would not exempt one from the duty to show respect to all one’s fellow diners—it would just make it more difficult to do so, because it would require one to create new ways of showing respect (say, by adapting the forms of showing respect to members of privileged groups, much as in some European languages the polite forms of address are derived from forms in which one used to address nobility in less democratic times).

I am not sure if there is cultural variation in norms of etiquette. But if there is, that variation will not be proved by shallow differences between rituals, and may not even follow from deeper variation, such as a culture where it is not appropriate to thank one’s subordinates for work well done. For in the case of deeper variation, it could simply be that in some cultures violation of certain norms of etiquette is nearly universal, and there are no accepted ways to show the relevant kind of respect.

In fact, it could even be the case that there is only one norm of etiquette, and it is culturally universal:

  9. Signify respect to other persons you interact with in ways fitted to the situation.

If this is right, then social rules designed to show disrespect, no matter how widespread, are not norms of etiquette.

Reasons from the value of true belief

Two soccer teams are facing off, with a billion fans watching on TV. Brazil has a score of 2 and Belgium has a score of 0, and there are 15 minutes remaining. The fans nearly unanimously think Brazil will win. Suddenly, there is a giant lightning strike, and all electrical devices near the stadium fail, taking the game off the air. Coincidentally, during the glitch, Brazil’s two best players get red cards, and now Belgium has a very real chance to win if they try hard.

But the captain of the Brazilian team yells out this argument to the Belgians: “If you win, you will make a billion fans have a false belief. A false belief is bad, and when you multiply the badness by a billion, the result is very bad. So, don’t win!”

Great hilarity ensues among the Belgians and they proceed to trounce the Brazilians.

The Belgians are right to laugh: the consideration that the belief of a billion fans will be falsified by their effort carries little to no moral weight.

Why? Is it that false belief carries little to no disvalue? No. For suppose that now the game is over. At this point, the broadcast teams have a pretty strong moral reason to try to get back on the air in order to inform the billion fans that they were mistaken about the result of the game.

In other words, we have a much stronger reason to shift people’s beliefs to match reality than to shift reality to match people’s beliefs. Yet in both cases the relevant effect on the good and bad in the world can be the same: there is less of the bad of false beliefs and more of the good of true beliefs. An immediate consequence of this is that consequentialism about moral reasons is false: the weight of moral reasons depends on more than the value of the consequences.

It is often said that belief has a mind-to-world direction of fit. It is interesting that this not only has repercussions for the agent’s own epistemic life, but for the moral life of other parties. We have much more reason to help others to true belief by affecting their beliefs than by affecting the truth and falsity of the content of the beliefs.

Do the Belgians have any moral reason to lose, in light of the fact that losing will make the fans have correct belief? I am inclined to think so: producing a better state of affairs is always worthwhile. But the force of the reason is exceedingly small. (Nor do the numbers matter: the reason’s force would remain exceedingly small even if there were trillions of fans because Earth soccer was famous throughout the galaxy.)

There is a connection between the good and the right, but it is quite complex indeed.

Two spinners and infinitesimal probabilities

Suppose you do two independent experiments, A and B, each of which uniformly generates a number in the interval I = [0, 1).

Here are some properties we would like to have on our probability assignment P:

  1. There is a value α such that P(A = x) = P(B = x) = α for all x ∈ I and P((A, B) = z) = α² for all z ∈ I².

  2. For every subset U of I² consisting of a finite union of straight lines, P((A, B) ∈ U) is well-defined.

  3. For any measurable U ⊆ I², if P((A, B) ∈ U | A = x) = y for all x ∈ I, then P((A, B) ∈ U) = y.

  4. For any measurable U ⊆ I², if P((A, B) ∈ U | B = x) = y for all x ∈ I, then P((A, B) ∈ U) = y.

  5. The assignment P satisfies the axioms of finitely additive probability with values in some real field.

Here is an interesting consequence. Let U consist of two line segments, one from (0, 0) to (1, 1/2) and the other from (0, 1/2) to (1, 1). Then every vertical line in I² intersects U in exactly two points. By (2), P((A, B) ∈ U) is well-defined. It follows from (1) and (5) that P((A, B) ∈ U | A = x) = 2α for all x ∈ I. Thus, P((A, B) ∈ U) = 2α by (3). On the other hand, every horizontal line in I² meets U in exactly one point, so P((A, B) ∈ U | B = x) = α by (1), and P((A, B) ∈ U) = α by (4). Thus, 2α = α, and so α = 0.
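
Spelled out symbolically (just a restatement of the computation above, noting that the two segments are the graphs of B = A/2 and B = 1/2 + A/2 on I):

    P((A,B) \in U \mid A = x) = P(B \in \{x/2,\ 1/2 + x/2\} \mid A = x) = 2\alpha
    P((A,B) \in U \mid B = y) = P(A = 2y \mid B = y) = \alpha            (if y < 1/2)
    P((A,B) \in U \mid B = y) = P(A = 2y - 1 \mid B = y) = \alpha        (if y \ge 1/2)

so that (3) forces P((A, B) ∈ U) = 2α while (4) forces P((A, B) ∈ U) = α.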

In other words, if we require (1)-(5) to hold, then the probability of every single point outcome of either experiment must be exactly zero. In particular, it is not possible for the probability of a single point outcome to be a positive infinitesimal.

Cognoscenti of these kinds of arguments will recognize (3) and (4) as special cases of conglomerability, and are likely to say that we cannot expect conglomerability when dealing with infinitesimal probabilities. Maybe so: but (3) and (4) are only a special case of conglomerability, and they feel particularly intuitive to me, in that we are partitioning the sample space I2 on the basis of the values of one of the two independent random variables that generate the sample space. The setup—say, two independent spinners—seems perfectly natural and unparadoxical, the partitions seem perfectly natural, and the set U to which we apply (3) and (4) is also a perfectly natural set, a union of two line segments. Yet even in this very natural setup, the friend of infinitesimal probabilities has to embrace a counterintuitive violation of (3) and (4).

Tuesday, September 7, 2021

Naturalism and lovability

  1. If naturalism is true, Stalin is not lovable.

  2. Everyone is lovable.

  3. So, naturalism is not true.

Here, by “lovable”, I don’t mean that it is possible to love the person, but that it is not inappropriate to do so.

Premise 2 follows from the intuition that it is permissible for every parent to love their children. It also follows from the more controversial claim that everyone should love everyone.

The intuition behind premise 1 is something like this: Stalin’s actions were so horrible that the only plausible hypotheses on which he is lovable are that there is some deeply mysterious and highly valuable metaphysical fact about his being, such as that he is in the image and likeness of God, or that his Atman is Brahman, a fact incompatible with naturalism. For if all we have are the ordinary naturalistic goods in Stalin, these goods are easily outweighed by the horrors of his wickedness.

Friday, September 3, 2021

Impairment and saving lives

Bob and Carl are drowning and you can save only one of them. Bob is a human being in the prime of life, physically and mentally healthy, highly intelligent, and leading a happy and fulfilling life as a physicist committed to lifelong celibacy. To look at him, Carl is Bob’s identical twin. Carl has the same physical and mental powers as Bob, and leads a very similar happy and fulfilling life as a physicist committed to lifelong celibacy.

But there is one crucial difference that you know about, but Carl does not. Carl is actually a member of a superintelligent humanoid alien species. However, due to an unfortunate untreatable genetic condition, Carl suffers from a severe intellectual impairment, having merely the intelligence of a highly intelligent human. In order that Carl might avoid the stigma of the impairment, his parents had some highly sophisticated surgery done on him to make him fit into human society, and arranged for him to be adopted by a human family and raised as a human. No one except for you on earth will ever know that Carl isn’t human. You know because you happened to see the aliens arranging this (but you haven’t told anyone, because you don’t want people to think you are crazy).

Should you save Bob or Carl from drowning? My intuition is that if the above is all that you know, you have no reason to prefer saving one over the other. If one of them is slightly more likely to be saved by you (e.g., they are slightly closer to you), you should go for that one, but otherwise it’s a toss-up.

But notice that if you save Carl, there will be more natural evil in the world: There will be a severe intellectual impairment, which won’t be present if you choose to save Bob instead. It seems pretty plausible that:

  1. If you have a choice between two otherwise permissible courses of action, which result in the same goods, but one of them results in exactly one additional evil, you have a moral reason to choose the course of action that does not result in the evil.

Thus, it seems, you should save Bob.

So there is something paradoxical here. On the one hand, there seems to be no reason to pick Bob over Carl. On the other hand, the plausible general ethical principle (1) suggests you should pick Bob.

How can we get out of this paradox? Here are two options.

First, one could say that impairment is not an evil at all. As long as Carl leads a fulfilling life—even if it is merely fulfilling by human standards and not those of his species—his impairment is no evil. Indeed, we might even take the above story to be a reductio ad absurdum of an Aristotelian picture on which species have norms attached to them, such that it is a harm to fall short of these norms.

Second, one might argue that principle (1) does not actually apply to the case. For there is a difference of goods in saving Carl: you are saving a member of a superintelligent species, while in the case of saving Bob, you are saving a mere human. For this to fit with the intuition that it’s a toss-up whether to save Bob or Carl, it has to be the case that what the superintelligence of his species adds to the reasons for saving Carl is balanced by what his abnormally low intelligence subtracts from these reasons.

Of these options, I am more attracted to the second. And the second has an interesting and important consequence: "mere" membership in a natural kind can have significant value. This has important repercussions for the status of the human fetus.

Wednesday, September 1, 2021

Models of libertarian agency, and some more on divine simplicity

Here is a standard libertarian picture of free and responsible choice. I am choosing between two non-mental actions, A and B. I deliberate on the basis of the reasons for A and the reasons for B. This deliberation indeterministically causes an inner mental state W(A), which is the will or resolve or intention to produce A. And then W(A) causes, either deterministically or with high probability, the extra-mental action A.

Now notice two things. First, notice that my production of the state W(A) is itself something I am morally responsible for. Imagine that I have resolved myself to gratuitously insult you. If it turns out that my vocal cords are paralyzed, my resolve W(insult) is itself enough to make me guilty.

Second, note that my production of W(A) could involve the production of a prior second-order state of will or resolve, a willing W(W(A)) to will to produce A. For there are times when it’s hard to resolve ourselves to do something, and in those cases we might resolve ourselves to resolve ourselves first. But at the same time, to avoid an infinite regress, we should not adopt a view on which every time we responsibly produce something, we do so by forming a prior state of willing or resolve or intention. In light of this, although my production of W(A) could involve the production of a prior second-order state W(W(A)), it need not do so. In fact, phenomenologically, it seems more plausible to think that in typical cases of free choice, we do not go to the meta level of producing W(W(A)). We only go to the meta level in special cases, such as when we have to “steel” ourselves to gain the resolve to do the action.

Thus we have seen that, assuming libertarianism, it is possible for me to be responsible for indeterministically producing a state of affairs W(A) without producing a prior state of willing or resolving or intending in favor of W(A). The state W(A) is admittedly an inner mental state. But the responsibility for W(A) does not seem to have anything to do with the innerness of W(A). We are responsible for W(A) because our deliberation indeterministically but non-aberrantly results in W(A).

Here is a question: Could there be cases where we have libertarian-free actions where instead of our deliberation indeterministically non-aberrantly resulting in W(A), and thereby making us responsible for W(A) as well as A, our deliberation directly indeterministically and non-aberrantly results in the extra-mental action A, without an intervening inner mental state W(A) that deterministically or with high probability causes A, but with us nonetheless being responsible for A?

Once we have admitted—as a libertarian has to, on pain of a regress of willings—that we can be responsible for producing a state of affairs without a prior willing of that state of affairs, then it seems hard to categorically deny the possibility of us producing an extra-mental state of affairs responsibly without an intervening prior willing. And in fact phenomenology fits quite well with the hypothesis that we do that. We do many things intentionally and responsibly without being aware of a willing, resolve or intention to do them. If we stick to the initial libertarian model on which there must be an intervening mental state W(A), we have to say that either the state W(A) is hidden from us—unconscious—or that these actions are only free in a derivative way. Neither is a particularly attractive hypothesis. Why not, simply, admit that sometimes deliberation results in an extra-mental action that we are responsible for without an intervening willing, resolve or intention?

Well, I can think of one reason:

  1. It seems that we can only be responsible for what we do intentionally, and we cannot do something intentionally without intending something.

But note that if this reason undercuts the possibility of our responsibly directly doing A without an intervening act W(A) of intention, it likewise undercuts the possibility of our responsibly directly producing W(A) without an intervening W(W(A)) act, and sets us on a vicious regress.

I actually think (1) can be accepted. In that case, when we directly responsibly produce W(A), the intentionality in the production of W(A) is constituted by the non-aberrant causal connection between deliberation and W(A), rather than by some regress-engendering intention-for-W(A) prior to W(A). And the occurrence of W(A) means that we are intending something, namely A.

But what would it be like if we were to directly responsibly produce A, without an intervening act of intention W(A)? How would that be reconciled with (1)? Again, the intentionality of the production of A would be constituted by the non-aberrant causal connection between deliberation and A. And the content of the intention would supervene on the actual occurrence of A as well as on the reasons favoring A that were instrumental in the deliberation. (There are some complications about excluded reasons. Maybe in those cases deliberation can have an earlier stage where one freely decides whether to exclude some reasons.)

Call the cases where we thus directly and responsibly produce an extra-mental action A cases of direct agency.

A libertarian need not believe we exhibit direct agency. Perhaps we always have one level of resolve, willing or intention as an inner mental state. But the libertarian should not be dogmatic here, given the above arguments.

Our phenomenology suggests that we do exhibit direct agency, and indeed do so quite commonly. And if God is simple, and hence does not have contingent inner states, all of God’s indeterministic free actions are cases of direct agency.

In fact, independently of divine simplicity, we may have some reason to prefer the direct agency model in the case of God. Consider why it is that sometimes we go to the meta level of W(W(A)): we do so because of the weakness of our wills, we have to will ourselves to will ourselves to produce A. It seems that a perfect being would never have reason to go to the meta level of W(W(A)). So, the remaining question is whether a perfect being would ever have reason to go to the W(A) level. I think there is some plausibility in the idea that just as going to the W(W(A)) level is a sign of weakness, a sign of a need for self-control, going to the W(A) level is also a sign of imperfection—a sign that one needs a tool, even if an intra-mental tool, for the production of A. It seems plausible, thus, that if this is possible and compatible with freedom and responsibility, a perfect being would simply directly produce A (where A is, say, the action of the being’s causing horses to exist). And I have argued that it is possible, and it is compatible with freedom and responsibility.