Wednesday, October 5, 2022

Induction to the Causal Principle?

I’m curious whether one can inductively infer the causal principle C, that everything that comes into existence has a cause, on the basis of our observations of things with causes.

There are a couple of issues with such an inference. First, let’s think about the inductive evidence about causes globally. It seems to consist primarily in these two observations:

  1. we have found causes for many things that come into existence, but

  2. there are many things that come into existence for which we have yet to find causes.

It is worth noting that in terms of individuals, the cases under (2) vastly outnumber those under (1). Consider insects. Of the myriad insects that we come into contact with daily, we have found the causes of very few. Of course, we assume that the others have causes, causes that we suppose to be parent insects, but we haven’t found the parents.

For observations (1) and (2) to support C, these observations have to be more likely on C than on C’s negation. But now we have two problems. First, on the negation of C it doesn’t seem like we can make any sense of the probability that some item has or does not have a cause: causeless events have no probabilities. Second, even if we somehow assign such a probability, it is far from clear that observations (1) and (2) are more to be expected on C than on not-C.

Second, I suspect that often when we claim to have found y to be the cause of x, our reason for believing that y is the cause of x depends on our assumption of C. Our best candidate for a cause of x is y, so we take y to be the cause. But I suspect this inference is often based on our simply dismissing the possibility that x has no cause at all.

None of this is meant to impugn C. I certainly think C is true. But I think the reasons for believing C are metaphysical or philosophical rather than based on inductive observation.

Monday, October 3, 2022

The Church-Turing Thesis and generalized Molinism

The physical Church-Turing (PCT) thesis says that anything that can be physically computed can be computed by a Turing machine.

If generalized Molinism—the thesis that for any sufficiently precisely described counterfactual situation, there is a fact of the matter as to what would happen in that situation—is true, and indeterminism is true, then PCT seems very likely false. For imagine the function f from the natural numbers to {0, 1} such that f(n) is 1 if and only if the coin toss on day n would be heads, were I to live forever and daily toss a fair coin—with whatever other details need to be put in to get a “sufficiently precisely described” situation. Only countably many functions are Turing computable, so with probability one, an infinite sequence of coin tosses would define a Turing non-computable function. But f is physically computable: I could just do the experiment.
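To fix ideas, here is a minimal Python sketch of f, with a seeded pseudorandom generator standing in for the genuinely indeterministic tosses of the argument (the seed and the modeling choice are mine, not the post’s). Memoization fixes each day’s value on first query, mirroring the Molinist idea that there is a determinate fact about each counterfactual toss:

```python
import random
from functools import lru_cache

rng = random.Random(0)  # seeded stand-in for genuinely indeterministic tosses

@lru_cache(maxsize=None)
def f(n: int) -> int:
    """f(n) = 1 iff the coin toss on day n would be heads."""
    # Memoization fixes each value on first query: asking about day n
    # twice gives the same answer, as the counterfactual fact requires.
    return rng.getrandbits(1)

# Any finite prefix of f is trivially computable, but there are uncountably
# many total 0/1 sequences and only countably many Turing machines, so a
# truly random f would almost surely be Turing non-computable.
prefix = [f(n) for n in range(10)]
```

The sketch only illustrates the setup: nothing in the code, of course, captures the non-computability claim itself, which rests on the counting argument in the text.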

But wait: I’m going to die, and even if there is an afterlife, it doesn’t seem right to characterize whatever happens in the afterlife as physical computation. So all I can compute is f(n) for n < 30000 or so.

Fair enough. But if we say this, then the PCT becomes trivial. For given finite life-spans of human beings and of any machinery in an expanding universe with increasing entropy, only finitely many values of any given function can be physically computed. And any function defined on a finite set can, of course, be trivially computed by a Turing machine via a lookup-table.
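The lookup-table point can be made concrete. In this toy Python sketch, the finite table of values (an arbitrary placeholder pattern, not real data) is built once, and “computation” is mere retrieval, just as a Turing machine could hard-code the table in its transition function:

```python
N = 30000  # roughly the number of days in a human lifespan, as in the post

# Hypothetical finite record of values; any function on a finite domain
# can be tabled like this.
table = {n: n % 2 for n in range(N)}

def lookup_f(n: int) -> int:
    # Computing a function on a finite domain reduces to table lookup,
    # which a Turing machine can implement with a hard-coded table.
    return table[n]
```

This is why restricting “physically computed” to what finite agents actually do trivializes PCT: every finitely-tabled function is Turing computable.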

So, either we trivialize PCT by insisting on the facts of our physical universe that put a finite limit on our computations, or in our notion of “physically computed” we allow for idealizations that make it possible to go on forever. If we do allow for such idealizations, then my argument works: generalized Molinism makes PCT unlikely to be true.

Saturday, October 1, 2022

Vagueness and moral obligation

It sure seems like there is vagueness in moral obligation. For instance, torture of the innocent is always wrong, making an innocent person’s life mildly unpleasant for a good cause is not always wrong, and in between we can run a Sorites sequence.

What view could a moral realist have about this? Here are four standard things that people say about a vague term “ϕ”.

  1. Error theory: nothing is or could be ϕ; or maybe “ϕ” is nonsense.

  2. Non-classical logic: there are cases where attributions of “ϕ” are neither true nor false.

  3. Supervaluationism: there are a lot of decent candidates for the meaning of “ϕ”, and no one of them is the meaning.

  4. Standard epistemicism: there are a lot of decent candidates for the meaning of “ϕ”, and one of them is the meaning, but we don’t know which one, because we don’t know the true semantic theory and the details of our linguistic usage.

If “ϕ” is “moral obligation”, and we maintain moral realism, then (1) is out. I think (3) and (4) are possible options only if we have a watered-down moral realism. For on a robust moral realism, moral obligations are really central to our lives, and nothing else could play the kind of central role in our lives that they do. On a robust moral realism, moral obligation is not one thing among many that just as well or almost as well fit our linguistic usage. Here is another way to put the point. On both (3) and (4), the question of what exact content “ϕ” has is a merely verbal question, like the question of how much hair someone can have and still be bald: we could decide to use “bald” differently, with no loss. But questions about moral obligation are not merely verbal in this way.

This means that given robust moral realism, of the standard views of vagueness all we have available is non-classical logic. But non-classical logic is just illogical (thumps table, hard)! :-)

So we need something else. If we deny (1)-(3), we have to say that ultimately “moral obligation” is sharp, but of course we can’t help but admit that there are Sorites sequences and we can’t tell where moral obligation begins and ends in them. But we cannot explain our ignorance in the semantic way of standard epistemicism. What we need is something like epistemicism, but where moral obligation facts are uniquely distinguished from other facts—they have this central overriding role in our lives—and yet there are moral facts that are likely beyond human ken. One might want to call this fifth view “non-standard epistemicism about vagueness” or “denial of vagueness”—whether we call it one or the other may just be a verbal question. :-)

In any case, I find it quite interesting that to save robust moral realism, we need either non-classical logic or something that we might call “denial of vagueness”.

Thursday, September 29, 2022

The structure of morality

In physics, we hope for the following unification: there is a small set of simple laws, and all the rest of physics derives logically from these laws and the contingencies of the arrangement of stuff.

In ethics, a similar ideal has often manifested itself. While I have hope for the ideal being realized in physics, I have come to be more pessimistic about the ideal in ethics. Instead, I think we can have a looser unificatory structure. We can have a multilevel hierarchy of more general laws, and then more specific laws that specify or implement the more general laws.

I suspect the looser structure is what we have in Aquinas’s Natural Law. At the highest level we have the general law that the good is to be pursued and the bad to be avoided. This is then specified into three laws about promoting the goods of existence, species-specific life and reason. These three laws, I think, are then further specified.

There is thus a structure to the moral law, but it is not a deductive structure. The higher-level laws make the lower-level laws fitting, but do not necessitate them.

Sunday, September 25, 2022

A strict propriety argument for probabilism without any continuity assumptions

Here’s an accuracy-theoretic argument for probabilism (the thesis that only probabilities are rationally admissible credences) on finite spaces that does not make any continuity assumptions on the scoring rule. I will assume all credence functions take values in [0,1].

  1. All probabilities are rationally admissible credences.

  2. If any non-probabilities are rationally admissible, then all non-probabilities satisfying Normalization (the whole space has credence 1) and Subadditivity (P(A) + P(B) ≤ P(A ∪ B) when A and B are disjoint) are rationally admissible, with the appropriate prevision being given by a level set integral [correction: actually, I need LSI, not the version of LSI in the earlier blog post].

  3. A rationally appropriate scoring rule s satisfies strict propriety for all rationally admissible credences with an appropriate prevision: if V_u is the appropriate prevision for a credence u, then V_u(s(u)) is better than V_u(s(v)) whenever u and v are different rationally admissible credences.

  4. There is a rationally appropriate scoring rule.

But now we have a cute theorem:

  • On any finite space Ω with at least two points, no scoring rule satisfies strict propriety for the credences with Normalization and Subadditivity and level set integral prevision.

It follows that no non-probabilities are rationally admissible.

Is this a good argument? I find (2) somewhat plausible—it’s hard to think of a less problematic weakening of the axioms of probability than from Additivity to Subadditivity, and I have not been able to find a better prevision than the level set integral one. Standard arguments for probabilism assume strict propriety for all probabilities. But it seems to me that a non-probabilist will find strict propriety for all probabilities plausible only insofar as they find strict propriety for all admissible credences plausible. Thus (3) is dialectically as good as the usual strict propriety assumption.

I think the non-probabilist’s best way out is to deny strict propriety or to deny that there is a rationally appropriate scoring rule. Both of these ways out work just as well against more standard arguments for probabilism, and I think both are good ways out.

Technically speaking, the advantage of this argument over standard arguments for probabilism is that it makes no assumptions of continuity.

Friday, September 23, 2022

Discontinuous epistemic utilities

I used to take it for granted that it’s reasonable to make epistemic utilities be continuous functions of credences. But this is not so clear to me right now. Consider a proposition really central to a person’s worldview, such as:

  • life has (or does not have) a meaning

  • God does (or does not) exist

  • we live (or do not live) in a simulation

  • morality is (or is not) objective.

I think a case can be made that if a proposition like that is in fact true, then there is a discontinuous upward jump in epistemic utility as one goes from assigning a credence less than 1/2 to assigning a credence more than 1/2.
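As a toy illustration, here is one way such a discontinuous epistemic utility might look in Python: an ordinary Brier-style accuracy score, plus a bonus awarded exactly when the credence lands on the correct side of 1/2. The Brier base and the bonus magnitude are my illustrative choices, not anything argued for in the post:

```python
def epistemic_utility(credence: float, truth: bool, jump: float = 0.25) -> float:
    """Brier-style accuracy plus a discontinuous bonus at credence 1/2."""
    target = 1.0 if truth else 0.0
    score = -(credence - target) ** 2  # continuous accuracy component
    # Discontinuous upward jump when the credence is on the right side of 1/2:
    if (truth and credence > 0.5) or (not truth and credence < 0.5):
        score += jump
    return score
```

On this sketch, moving a credence from 0.49 to 0.51 on a true worldview proposition produces a jump in utility that dwarfs the continuous change from the accuracy term.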

Thursday, September 22, 2022

Video: Three Mysteries of the Concrete

Alexander Pruss, “Three mysteries of the concrete: Causation, mind and normativity”, Christian Philosophy 2022, online, Cracow, Poland, September, 2022.

Monday, September 19, 2022

More on proportionality in Double Effect and prevention

In my previous post, I discuss cases where someone is doing an evil for the sake of producing significantly greater goods—say, murdering one patient to save four others with the one’s organs—and note that a straightforward reading of the Principle of Double Effect’s proportionality condition seems to forbid one from stopping that evil. I offer the suggestion, due to a graduate student, that failure to stop the evil in such cases implies complicity with the evils.

I now think that complicity doesn’t solve the problem, because we can imagine a case where there is no relevant evildoer. Take a trolley problem where the trolley is coming to a fork and about to turn onto the left track and kill Alice. There is no one on the right track. So far this is straightforward and doesn’t involve Double Effect at all—you should obviously redirect the trolley. But now add that if Alice dies, four people will be saved with her organs, and if Alice lives, they will die.

Among the results of redirecting the trolley, now, are the deaths of the four who won’t be saved, and hence Double Effect does apply. To save one person at the expense of four is disproportionate, and so it seems that one violates Double Effect in saving the one. And in this case, a failure to save Alice would not involve any complicity in anyone else’s evildoing.

It is tempting to say that the deaths of the four are due to their medical condition and not the result of the trolley redirection, and hence do not count for Double Effect proportionality purposes. But now imagine that the four people can be saved with synthetic organs, though only if the surgery happens very quickly. However, the only four surgeons in the region are all on an automated trolley, which is heading towards the hospital along the left track, is expected to kill Alice along the way, but will continue on until it stops at the hospital. If the trolley is redirected onto the right track, it will go far away and not reach the hospital in time.

In this case, it does seem correct to say that Double Effect forbids one from redirecting the trolley—you should not stop the surgeons’ trolley even if a person is expected to die from a trolley accident along the way. (Perhaps you are unconvinced if the number of patients needing to be saved is only four. If so, increase the number.) But for Double Effect to have this consequence, the deaths of the patients in the hospital have to count as effects of your trolley redirection.

And if the deaths count in this case, they should count in the original case where Alice’s organs are needed. After all, in both cases the patients die of their medical condition because the trolley redirection has prevented the only possible way of saving them.

Here’s another tempting response. In the original version of the story, if one refrains from redirecting the trolley in light of the people needing Alice’s organs, one is intending that Alice die as a means to saving the four, and hence one is violating Double Effect. But this response would not save Double Effect: it would make Double Effect be in conflict with itself. For if my earlier argument that Double Effect prohibits redirecting the trolley stands, and this response does nothing to counter it, then Double Effect both prohibits redirecting and prohibits refraining from redirecting!

I think what we need is some careful way of computing proportionality in Double Effect. Here is a thought. Start by saying in both versions of the case that the deaths of the four patients are not the effects of the trolley redirection. This was very intuitive, but seemed to cause a problem in the delayed-surgeons version. However, there is a fairly natural way to reconstrue things. Take it that leaving the trolley to go along the left track results in the good of saving the four patients. So far we’ve only shifted whether we count the deaths of the four as an evil on the redirection side of the ledger or the saving of the four as a good on the non-redirection side. This makes no difference to the comparison. But now add one more move: don’t count goods that result from evils in the ledger at all. This second move doesn’t affect the delayed-surgeons case. For the good of saving lives in that case is not a result of Alice’s death, and the proportionality calculation is unaffected. In particular, in that case we still get the correct result that you should not redirect the trolley, since the events relevant to proportionality are the evil of Alice’s death and the good of saving four lives, and so preventing Alice’s death is disproportionate. But in the organ case, the good of saving lives is a result of Alice’s death. So in that case, Double Effect’s proportionality calculation does not include the lives saved, and hence, quite correctly, we conclude that you should redirect to save Alice’s life.

Maybe. But I am not sure. Maybe my initial intuition is wrong, and one should not redirect the trolley in the organ case. What pulls me the other way is the hungry bear case here.

Friday, September 16, 2022

Proportionality in Double Effect and prevention cases

Suppose you are visiting a hospital and you see Bob, a nurse, sneaking into Alice’s hospital room. Unnoticed, you look at what is going on, and you see that Bob is about to add a lethal drug to Alice’s IV, a drug that would undetectably kill Alice while leaving her organs intact. You recall with horror that two days ago you had a conversation with Bob and he described to you how compelling he finds the argument that it is sometimes obligatory to kill one patient in order to provide organs to save multiple other patients, when this can be done secretly. At the time, you unsuccessfully tried to persuade Bob that the consequentialism behind the argument was implausible. You happen to know that if Alice were to die right now, then four people could be saved. You could now yell, push Bob away, and prevent Alice’s murder.

Here is a Double Effect argument that you shouldn’t stop the murder. Your action of pushing Bob away has two sets of effects: (a) Alice isn’t murdered and (b) four patients who would be saved by Alice’s organs die. Of these, (a) is an intended good and (b) is an unintended evil. So your action is an action to which Double Effect is relevant: it is an action with two effects, an intended good and an unintended evil. But Double Effect makes it a necessary condition for the permissibility of an action that the evils not be disproportionate to the goods. And here the evils are disproportionate to the goods. So you shouldn’t stop Bob, it seems.

Now, one might question the proportionality judgment. Maybe while four deaths are disproportionate to one death, four deaths are not disproportionate to one murder? This is mistaken, however. For suppose you see an assassin trying to murder someone with a long-range shot, and you see four innocent people near the assassin. The only way you have to stop the assassin is with a hand-grenade, which would kill the four innocents as well. It is clear that four deaths of innocents are disproportionate to the one murder: you should not stop the murder by blowing up the assassin.

Suppose you bite the bullet and agree that you shouldn’t stop Bob. Then I have an even more problematic version. Go back to your disquieting conversation with Bob about killing patients for their organs. Suppose that Bob disclosed to you in the course of that conversation that it wasn’t a merely hypothetical question, as you assumed, but that he was actually planning on acting on it. It seems completely clear that you should try to persuade him out of this murderous plan. But the exact same Double Effect argument seems to apply here: There are two sets of effects of your persuading Bob not to do it—one person isn’t murdered and a number of people die. The bad effects are disproportionate to the good ones, so Double Effect seems to prohibit you from persuading Bob out of his plan.

Maybe, though, this second case is different from the first, in that it is one of the basic tasks of a fellow human being to persuade others to act well—this is a central part of our human communal interaction. So it may be that once we take into account the good of persuading others to act well, and add that good to the intended goods, the four deaths are no longer disproportionate. But now increase the numbers. Perhaps Alice has some weird mutation in her heart tissue such that culturing her heart tissue would save a thousand lives. Now the deaths of a thousand seem clearly disproportionate to preventing one murder and obtaining the goods of persuading others to act well. Imagine that I had a choice between preventing an explosion that would completely destroy a ship with a thousand people on board and persuading someone not to commit an “ordinary” murder. I should prevent the destruction of the ship. Yet even in the thousand-patient case I have the intuition—admittedly, now weaker—that I should try to persuade Bob not to murder Alice, or at least that it is permissible to do so. Especially if Bob is my friend.

What’s going on? Is it the case that when we consider the good of persuading someone to act well, we should not count against that any goods that would result from their acting badly? Is it—a graduate student suggested this to me—that if I fail to persuade them to act well in order to obtain the goods that would result from their acting badly, then I become complicit in their bad action? I think there is something to this idea. It may even apply in my earlier case of not stopping Bob physically from the murder, but it seems particularly plausible in the case of refraining from persuading.

In any case, if I am right that it is right to persuade Bob out of his plan to murder Alice, we really do need to understand the proportionality condition in Double Effect very carefully. That condition seems to become significantly context-sensitive. Double Effect is not a simple structural principle by any means.

Objection: When it’s a matter of stopping Bob’s murder of Alice, you don’t cause the deaths of the patients who need Alice’s organs to live. The patients die of whatever conditions they die of, rather than from your action. So those deaths don’t figure in the Double Effect proportionality calculus.

Response: Imagine that I could stop an ordinary murder, but to do that I would have to park my car in a place that would block an ambulance from getting to the scene of an unrelated accident, where a number of people would die of their injuries if the ambulance were not to get there in time. When considering my action of parking my car, I do need to consider the deaths of the people the ambulance would save, even though they die from their injuries rather than from my action. If the number of people the ambulance would save is large enough, I ought not block the ambulance’s path to prevent one murder.

Thursday, September 15, 2022

Consent and inner acts

Some people think that a constituent (whole or partial) of consent is some sort of inner mental act of agreement with the thing one consents to. Here is an argument against this:

  1. A request or command does not require an inner mental act of agreement.

  2. Someone who requests or commands something necessarily consents to its performance.

  3. So, consent does not require an inner mental act of agreement.

(One can also qualify the requests, commands and consents as valid in all the premises, and the argument remains sound, I think.)

That said, consent does require some inner component, as does request or command. Consent requires a relevant communicative act to be performed intentionally. Similarly, to request or command something is not just to utter some sounds (or make some gestures, etc.), but to do so intending to be taken as requesting or commanding.

Tuesday, September 13, 2022

"Pun not intended"

Some people think that an outcome of an action foreseen with practical certainty is also intended. If so, then pretty much every case where someone writes “pun not intended” is a case where what they write is false. For one foresees with practical certainty that by disseminating the message one is punning.

Monday, September 12, 2022

Humean laws and constants

On Mill-Ramsey-Lewis accounts of laws of nature, the laws are the propositions that best balance informativeness and brevity (in a language that cuts nature precisely at the joints).

The laws of nature include constants, such as the fine-structure constant, whose current best measured value is 1/137.035999206. Now, we might be lucky, and it might turn out that the fine-structure constant has some neat and elegant precise value. There is a history of speculation that it has such a value—for a while, there was hope it was exactly 1/137, and then other guesses took over. But suppose we don’t get so lucky. Suppose it just is some messy number with no simple expression. That should, after all, be a serious possibility.

In that case, the exact value of the fine-structure constant cannot be a part of the Mill-Ramsey-Lewis “world in a nutshell” system of laws, since the system would then be infinitely long, and we lose our hope of defining laws in terms of brevity.

So we have two options. First, the system of laws might not include any specific information on the value of the fine-structure constant, but might instead be of the form ∃α F(α), where F(α) says nothing about what α is, except maybe that it’s real-valued and positive. If we go for this option, then we have to say that all the things that depend on the actual value of the fine-structure constant—and that apparently includes all of chemistry—are not in fact laws of nature. This will likely fail to yield some counterfactuals that we want, and while the laws will be briefer, they will be far less informative than if they had something to say about the value of α.

So that moves us to the second option, which is that the laws are of the form ∃α F(α), where F(α) includes some constraints on α, such as that it lies between 1/137.04 and 1/137.03. These constraints are sufficiently tight to generate the nomic implications we need for chemistry and biology. But while this result seems a better fit for science, it is metaphysically very strange. For it is very strange to think that the laws allow the fine-structure constant to have any of an infinite number of values, but these values must lie in a narrow range.
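The second option can be stated concretely. In this Python sketch, the law’s constraint F is just the interval condition mentioned above; the measured value of α (taken from the post) satisfies it, while nearby “neat” values like 1/137 do not:

```python
ALPHA = 1 / 137.035999206  # current best measured value, from the post

def F(alpha: float) -> bool:
    # The constraint the law places on the constant: a narrow interval,
    # tight enough for chemistry, but not completely specific.
    return 1 / 137.04 < alpha < 1 / 137.03
```

The metaphysical oddity is visible in the code: the law ∃α F(α) is satisfied by infinitely many values in the interval, yet rules out everything outside it.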

Furthermore, the exact narrow range for α would be determined by fine details (I am not sure if the pun is intended) of exactly how informativeness and brevity are balanced in the definition of the laws.

The same issue comes up for other constants in the laws of nature. Either Mill-Ramsey-Lewis laws do not include anything about the values of constants or else they include oddly specific, but not completely specific, ranges.

Thursday, September 8, 2022

Christian panpsychism

I’ve just realized that there is something rather attractive about panpsychism from a Christian point of view. All things God has created are in the image of God. Panpsychism allows them to be in the image of God in a very concrete way: by being minded. There seems to be something fitting about all things having an awareness of reality. And there is Luke 19:40. See also this paper.

That said, I think this falls in the general category of speculative arguments about what God would be expected to create, alongside such arguments as that we would expect God to create a multiverse, or Leibniz’s idea that we would expect a world that is infinitely nested in both the macro and the micro directions. Such arguments need to be extremely tentative.

Motivating panpsychism

There is something attractive about an ontology where all the properties are powers, but it seems objectionable.

First, a power is partly defined by the properties it can produce. But if these in turn are powers, then we have a vicious regress or circularity.

At the same time, mental properties do not seem to be purely powers: they seem to have a categorical qualitative character that is not captured by the power to produce something else.

What is attractive about a pure powers ontology is the conceptual simplicity, and the fact that categorical properties seem really mysterious.

There is, however, a modification we can make to a pure powers ontology that gets us out of the problem. There are two kinds of properties: powers and qualia. The mysteriousness objection does not apply to qualia, because we experience them. On this ontology, powers bottom out in the ability to produce qualia.

For this to avoid implausible anthropocentrism, we need panpsychism—only then will there be enough qualia outside of living things for the powers of fundamental physics to bottom out in. So we have an interesting motivation for panpsychism: it yields an attractive ontology for reasons that have nothing to do with the usual concerns in the philosophy of mind.

It’s worth noting that this ontology is similar to Leibniz’s. Leibniz had two kinds of properties: appetitions and perceptions. The appetitions are (deterministic) powers. Perceptions are similar to qualia, but not quite the same, because (a) perceptions need not be conscious, and (b) perceptions are always representational. Unfortunately, the representational aspect leads to a regress or circularity problem, much as the pure powers ontology did, since representationality will define a perception in terms of other appetitions and perceptions.

Tuesday, September 6, 2022

Trolleys and chaos

Suppose that determinism is true and Alice is about to roll a twenty-sided die to determine which of twenty innocent prisoners to murder. There is nothing you can do to stop her. You are in Alice’s field of view. Now, a die roll, even if deterministic, is very sensitive to the initial conditions. A small change in Alice’s throw is apt to affect the outcome. And any behavior of yours is apt to affect Alice’s throw. You frown, and Alice becomes slightly tenser when she throws. You smile, and Alice pauses a little wondering what you’re smiling about, and then she throws differently. You turn around not to watch, and Alice grows annoyed or pleased, and her throw is affected.

So it’s quite reasonable to think that whatever you do has a pretty good chance, indeed close to a 95% chance, of changing which of the prisoners will die. In other words, with about 95% probability, each of your actions is akin to redirecting a trolley heading down a track with one person onto a different track with a different person.
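The sensitivity claim can be illustrated with a toy deterministic model: a Python sketch using the logistic map in its chaotic regime as a stand-in for the die-roll dynamics (the map, the iteration count, and the perturbation size are my choices, not a physical model of a throw):

```python
def die_outcome(x0: float, steps: int = 100) -> int:
    """Deterministic 'd20 roll': iterate a chaotic map, then read off 1..20."""
    x = x0
    for _ in range(steps):
        x = 3.99 * x * (1 - x)  # logistic map in its chaotic regime
    return min(int(x * 20) + 1, 20)  # map the final state to a face 1..20

# Tiny perturbations of the initial condition (a frown, a smile) are
# typically enough to change which face comes up:
a = die_outcome(0.123456789)
b = die_outcome(0.123456790)
```

Since small differences grow exponentially under the map, a perturbation of one part in a billion is typically amplified to a completely different outcome well before 100 iterations, while the process remains perfectly deterministic.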

Some people—a minority—think that it is wrong to redirect a trolley heading for five people to a track with only one person. I wonder what they could say should be done in the Alice case. If it’s wrong to redirect a trolley from five people to one person, it seems even more wrong to redirect a trolley from one person to another person. So since any discernible action is likely to effectively be a trolley redirection in the Alice case, it seems you should do nothing. But what does “do nothing” mean? Does it mean: stop all external bodily motion? But stopping all external bodily motion is itself an effortful action (as anybody who played Lotus Focus on the Wii knows). Or does it mean: do what comes naturally? But if one were in the situation described, one would likely become self-conscious and unable to do anything “naturally”.

The Alice case is highly contrived. But if determinism is true, then it is very likely that many ordinary actions affect who lives and who dies. You talk for a little longer to a colleague, and they start to drive home a little later, which has a domino effect on the timing of people’s behaviors in traffic today, which then slightly affects when people go to sleep, how they feel when they wake up, and eventually likely affects who dies and who does not die in a car accident. Furthermore, minor differences in timing affect the timing of human reproductive activity, which is likely to affect which sperm reaches the ovum, which then affects the personalities of people in the next generation, and eventually affects who lives and who dies. Thus, if we live in a deterministic world, we are constantly “randomly” (as far as we are concerned, since we don’t know the effects) redirecting trolleys between paths with unknown numbers of people.

Hence, if we live in a deterministic world, then we are all the time in trolley situations. If we think that trolley redirection is morally wrong, then we will be morally paralyzed all the time. So, in a deterministic world, we had better think that it’s OK to redirect trolleys.

Of course, science (as well as the correct theology and philosophy) gives us good reason to think we live in an indeterministic world. But here is an intuition: when we deal with the external world, it shouldn’t make a difference whether we have real randomness or the quasi-randomness that determinism allows. It really shouldn’t matter whether Alice is flipping an indeterministic die or a deterministic but unpredictable one. So our conclusions should apply to our indeterministic world as well.