Monday, December 27, 2021

Essentially evil organizations

Start with this argument:

  1. Everything that exists is God or is created and sustained by God.

  2. God does not create and sustain anything essentially evil.

  3. The KKK is essentially evil.

  4. The KKK is not God.

  5. So, the KKK does not exist.

Now we have a choice-point. We could say:

  6. If the KKK does not exist, no organization exists.

  7. So, no organization exists.

After all, it may seem reasonable to think that the ontology of social groups should not depend on whether the groups are good, neutral or bad.

But I think it’s not unreasonable to deny (6), and to say that the being of a social group is defined by its teleology, and there is no teleology without a good telos. A similar move would allow for a way out of the previous argument.

Thursday, December 23, 2021

Yet another argument against artifacts

  1. If any complex artifacts really exist, instruments of torture really exist.

  2. Instruments of torture are essentially evil.

  3. Nothing that is essentially evil really exists.

  4. So, instruments of torture do not really exist. (2 and 3)

  5. So, no complex artifacts really exist. (1 and 4)

One argument for (3) is from the privation theory of evil.

Another is a direct argument from theism:

  1. Everything that really exists is created by God.

  2. Nothing created by God is essentially evil.

  3. So, nothing that is essentially evil really exists.

Tuesday, December 21, 2021

Divine simplicity and divine knowledge of contingent facts

One of the big puzzles about divine simplicity which I have been exploring is that of God’s knowledge of contingent facts. A sloppy way to put the question is:

  1. How can God know p in one world and not know p in another, even though God is intrinsically the same in both worlds?

But that’s not really a question about divine simplicity, since the same is often true for us. Yesterday you knew that today the sun would rise. Yet there is a possible world w2 which up to yesterday was exactly the same as our actual world w1, but due to a miracle or weird quantum stuff, the sun did not rise today in w2. Yesterday, you were intrinsically the same in w1 and w2, but only in w1 did you know that today the sun would rise. For, of course, you can’t know something that isn’t true.

So perhaps the real question is:

  1. How can God believe p in one world and not believe p in another, even though God is intrinsically the same in both worlds?

I wonder, however, if there isn’t a possibility of a really radical answer: it is false that God believes p in one world and not in another, because in fact God doesn’t have any beliefs in any world—he only knows.

In our case, belief seems to be an essential component of knowledge. But God’s knowledge is only analogical to our knowledge, and hence it should not be a big surprise if the constitutive structure of God’s knowledge is different from our knowledge.

And even in our case, it is not clear that belief is an essential component of knowledge. Anscombe famously thought that there was such a thing as intentional knowledge—knowledge of what you are intentionally doing—and it seems that on her story, the role played in ordinary knowledge by belief was played by an intention. If she is right about that, then an immediate lesson is that belief is not an essential component of knowledge. And in fact even the following claim would not be true:

  1. If one knows p, then one believes or intends p.

For suppose that I intentionally know that I am writing a blog post. Then I presumably also know that I am writing a blog post on a sunny day. But I don’t intentionally know that I am writing a blog post on a sunny day, since the sunniness of the day is not a part of the intention. Instead, my knowledge is based in part on the intention to write a blog post and in part on the belief that it is a sunny day. Thus, knowledge of p can be based on belief that p, intention that p, or a complex combination of belief and intention. But once we have seen this, then we should be quite open to a lot of complexity in the structure of knowledge.

Of course, Anscombe might be wrong about there being such a thing as knowledge not constituted by belief. But her view is still intelligible. And its very intelligibility implies a great deal of flexibility in the concept of knowledge. The idea of knowledge without belief is not nonsense in the way that the idea of a fork without tines is.

The same point can be supported in other ways. We can imagine concluding that we have no beliefs, but we have other kinds of representational states, such as credences, and that we nonetheless have knowledge. We are not in the realm of tineless forks here.

Now, it is true that all the examples I can think of for other ways that knowledge could be constituted in us besides being based on belief still imply intrinsic differences given different contents (beyond the issues of semantic externalism due to twinearthability). But the point is just that knowledge is a flexible enough concept that we should be open to God having something analogous to our knowledge but without any contingent intrinsic state being needed. (One model of this possibility is here.)

Thursday, December 16, 2021

When truth makes you do less well

One might think that being closer to the truth is guaranteed to get one to make better decisions. Not so. Say that a probability assignment p2 is at least as true as a probability assignment p1 at a world or situation ω provided that for every event E holding at ω we have p2(E)≥p1(E) and for every event E not holding at ω we have p2(E)≤p1(E). And say that p2 is truer than p1 provided that strict inequality holds in at least one case.

Suppose that a secret integer has been picked among 1, 2 and 3, and p1 assigns the respective probabilities 0.5, 0.3, 0.2 to the three possibilities while p2 assigns them 0.7, 0.1, 0.2. Then if the true situation is 1, it is easy to check that p2 is truer than p1. But now suppose that you are offered a choice between the following games:

  • W1: on 1 win $2, on 2 win $1100, and on 3 win $1000.

  • W2: on 1 win $1, on 2 win $1000, and on 3 win $1100

If you are going by p1, you will choose W1 and if you are going by p2, you will choose W2. But if the true number is 1, you would be better off picking W1 (getting $2 instead of $1), so the truer probabilities will lead to a worse payoff. C’est la vie.
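The arithmetic is easy to check (a quick Python sketch of the example above):

```python
# Possible secret numbers 1, 2, 3; the true number is 1.
p1 = {1: 0.5, 2: 0.3, 3: 0.2}
p2 = {1: 0.7, 2: 0.1, 3: 0.2}  # truer than p1 at 1

W1 = {1: 2, 2: 1100, 3: 1000}
W2 = {1: 1, 2: 1000, 3: 1100}

def expected(p, wager):
    return sum(p[n] * wager[n] for n in p)

assert expected(p1, W1) > expected(p1, W2)  # 531 vs 520.5: p1 picks W1
assert expected(p2, W2) > expected(p2, W1)  # 320.7 vs 311.4: p2 picks W2

# At the true number 1, the truer p2 leads to the worse payoff.
assert W2[1] < W1[1]  # $1 < $2
```

The same computation yields the scoring rule of the next paragraph: the payoff of whichever wager p prefers, evaluated at the true number, is exactly s(p)(n).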

Say that a scoring rule for probabilities is truth-directed if it never assigns a poorer score for a truer set of probabilities. The above example shows that a proper scoring rule need not be truth-directed. For let s(p)(n) be the payoff you will get if the secret number is n and you make your decision between W1 and W2 rationally on the basis of probability assignment p (with ties broken in favor of W1, say). Then s is a proper (accuracy) scoring rule but the above considerations show that s(p2)(1)<s(p1)(1), even though p2 is truer at 1. In fact, we can get a strictly proper scoring rule that isn’t truth-directed if we want: just add a tiny multiple of a Brier accuracy score to s.

Intuitively we would want our scoring rules to be both proper and truth-directed. But given that sometimes we are pragmatically better off for having less true probabilities, it is not clear that scoring rules should be truth-directed. I find myself of divided mind in this regard.

How common is this phenomenon? Roughly it happens whenever the truer and less-true probabilities disagree on ratios of probabilities of non-actual events.

Proposition: Suppose probability assignments p1 and p2 and a situation ω1 are such that there are events E1 and E2 with probabilities strictly between 0 and 1 on both assignments, with ω1 in neither event, and such that the ratio p1(E1)/p1(E2) is different from the ratio p2(E1)/p2(E2). Then there are wagers W1 and W2 such that p1 prefers W1 and p2 prefers W2, but W1 pays better than W2 at ω1.

Monday, December 13, 2021

An introduction to simple motion detection with Python and OpenCV

One of my kids really liked a cool thing in a children's museum where they had a camera pointed down a hallway, with a screen, and if you waved your hands in certain areas you got to play music. So I decided to make something like this myself. I found this tutorial and with its help produced some simple code. I then wrote up an Instructable explaining how the code works.
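The core of such a setup is just frame differencing. Here is a minimal sketch of the idea in plain NumPy (the function name and thresholds are my own, not the tutorial's; in the real project the frames come from OpenCV):

```python
import numpy as np

def motion_in_region(prev_frame, frame, region, threshold=25, min_fraction=0.02):
    """Frame differencing: does `region` = (y0, y1, x0, x1) contain motion?

    A pixel counts as moved if its grayscale value changed by more than
    `threshold`; the region triggers when more than `min_fraction` of its
    pixels moved.
    """
    y0, y1, x0, x1 = region
    a = prev_frame[y0:y1, x0:x1].astype(int)
    b = frame[y0:y1, x0:x1].astype(int)
    return (np.abs(b - a) > threshold).mean() > min_fraction

# Synthetic demo: a "hand" appears in the top-left corner of a 10x10 frame.
prev = np.zeros((10, 10), dtype=np.uint8)
cur = prev.copy()
cur[2:5, 2:5] = 255
assert motion_in_region(prev, cur, (0, 10, 0, 10))       # motion detected
assert not motion_in_region(prev, cur, (6, 10, 6, 10))   # quiet corner
```

In the actual setup one grabs successive frames from `cv2.VideoCapture`, converts them to grayscale, and plays a different note for each screen region that triggers.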

Truth directed scoring rules on an infinite space

A credence assignment c on a space Ω of situations is a function from the powerset of Ω to [0, 1], with c(E) representing one’s degree of belief in E ⊆ Ω.

An accuracy scoring rule s assigns to a credence assignment c on a space Ω and situation ω the epistemic utility s(c)(ω) of having credence assignment c when in truth we are in ω. Epistemic utilities are extended real numbers.

The scoring rule is strictly truth directed provided that if credence assignment c2 is strictly truer than c1 at ω, then s(c2)(ω)>s(c1)(ω). We say that c2 is strictly truer than c1 if and only if for every event E that happens at ω, c2(E)≥c1(E) and for every event E that does not happen at ω, c2(E)≤c1(E), and in at least one case there is strict inequality.

A credence assignment c is extreme provided that c(E) is 0 or 1 for every E.

Proposition. If the probability space Ω is infinite, then there is no strictly truth directed scoring rule defined for all credences, or even for all extreme credences.


This proposition uses the following result that my colleague Daniel Herden essentially gave me a proof of:

Lemma. If PX is the power set of X, then there is no function f : PX → X such that f(A)≠f(B) whenever A ⊂ B.

Now, we prove the Proposition. Fix ω ∈ Ω. Let s be a strictly truth directed scoring rule defined for all extreme credences. For any subset A of PΩ, define cA to be the extreme credence function that is correct at ω at all and only the events in A, i.e., cA(E)=1 if and only if ω ∈ E and E ∈ A or ω ∉ E and E ∉ A, and otherwise cA(E)=0. Note that cB is strictly truer than cA if and only if A ⊂ B. For any subset A of PΩ, let f(A)=s(cA)(ω).

Then f(A)<f(B) whenever A ⊂ B. Hence f is a strictly monotonic function from PPΩ to the extended reals. Now, if Ω is infinite, then the reals (and hence the extended reals) can be embedded in PΩ (by the axiom of countable choice, Ω contains a countably infinite subset, and hence PΩ has cardinality at least that of the continuum). Composing f with such an embedding yields a function from PPΩ to PΩ satisfying f(A)≠f(B) whenever A ⊂ B, which is exactly the kind of function the Lemma rules out (with X = PΩ), a contradiction.

Note: This suggests that if we want strict truth directedness of a scoring rule, the scoring rule had better take values in a set whose cardinality is greater than that of the continuum, e.g., the hyperreals.

Proof of Lemma (essentially due to Daniel Herden): Suppose we have f as in the statement of the Lemma. Let ON be the class of ordinals. Define a function F : ON → X by transfinite induction:

  • F(0)=f(⌀)

  • F(α)=f({F(β):β < α}) whenever α is a successor or limit ordinal.

I claim that this function is one-to-one.

Let Hα = {F(δ):δ < α}.

Suppose F is one-to-one on β for all β < α. If α is a limit ordinal, then it follows that F is one-to-one on α. Suppose instead that α is a successor of β. I claim that F is one-to-one on α, too. The only possible failure of injectivity on α could be if F(β)=F(γ) for some γ < β. Now, F(β)=f(Hβ) and F(γ)=f(Hγ). Note that Hγ ⊂ Hβ since F is one-to-one on β. Hence f(Hβ)≠f(Hγ) by the assumption of the Lemma. So, F is one-to-one on ON by transfinite induction.

But of course we can’t embed ON in a set (Burali-Forti).
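For finite Ω, the correspondence at the heart of the proof of the Proposition, that cB is strictly truer than cA exactly when A ⊂ B, can be verified by brute force (a sketch; the helper names are my own):

```python
from itertools import combinations

def powerset(xs):
    xs = list(xs)
    return [frozenset(s) for r in range(len(xs) + 1)
            for s in combinations(xs, r)]

Omega = {0, 1, 2}
omega = 0                    # the actual situation
events = powerset(Omega)     # the 8 events E subset of Omega

def c(A):
    """The extreme credence that is correct at omega on exactly the events in A."""
    return {E: 1 if (omega in E) == (E in A) else 0 for E in events}

def strictly_truer(c2, c1):
    ge = all(c2[E] >= c1[E] for E in events if omega in E)
    le = all(c2[E] <= c1[E] for E in events if omega not in E)
    return ge and le and any(c2[E] != c1[E] for E in events)

families = powerset(events)            # the 256 subsets A of the powerset of Omega
creds = {A: c(A) for A in families}
for A in families:
    for B in families:
        assert strictly_truer(creds[B], creds[A]) == (A < B)  # A < B: proper subset
```

So the map A ↦ cA turns proper inclusion among subsets of PΩ into strict increases in truth, which is what lets the proof feed all of PPΩ into a strictly truth-directed score.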

Friday, December 10, 2021

Unforgivable offenses that aren't all that terrible

When we talk of something as an unforgivable offense, we usually mean it is really a terrible thing. But if God doesn’t exist, then some very minor things are unforgivable and some very major things are forgivable.

Suppose that I read on the Internet about a person in Ruritania who has done something I politically disapprove of. I investigate to find out their address, and mail them a package of chocolates laced with a mild laxative. The package comes back to me from the post office, because my prospective victim was fictional and there is no such country as Ruritania.

If God doesn’t exist, I have done something unforgivable and beyond punishment. For there is no one with the standing to either forgive or punish me (I assume that the country I live in has a doctrine of impossible attempts on which attempts to harm non-existent persons are not legally punishable). Yet much worse things than this have been forgiven by the mercy of victims.

I ought to feel guilty for my attempt to make the life of my Ruritanian nemesis miserable. And if there is no God, there is no way out of guilt open to me: I cannot be forgiven nor can the offense be expiated by punishment.

The intuition that at least for relatively minor offenses there is an appropriate way to escape from guilt, thus, implies the existence of God—a being such that all offenses are ultimately against him.

Thursday, December 9, 2021

Yet another account of life

I think a really interesting philosophical question is the definition of life. Standard biological accounts fail to work for God and angels.

Here is a suggestion:

  • x has life if and only if it has a well-being.

For living things, one can talk meaningfully of how well or poorly off they are. And that’s what makes them be living.

I think this is a simple and attractive account. I don’t like it myself, because I am inclined to think that everything has a well-being—even fundamental particles. But for those who do not have such a crazy view, I think it is an attractively simple solution to a deep philosophical puzzle.

In search of real parthood

In contemporary mereology, it is usual to have two parthood relations: parthood and proper parthood. On this orthodoxy, it is trivially true that each thing is a part of itself and that nothing can be a proper part of itself.

I feel that this orthodoxy has failed to identify the truly fundamental mereological relation.

If it is trivial that each thing is a part of itself, then that suggests that parthood is a disjunctive relation: x is a part of y if and only if x = y or x is a part* of y, where parthood* is a more fundamental relation. But what then is parthood*? It is attractive to identify it with proper parthood. But if we do that, we can now turn to the trivial claim that nothing can be a proper part of itself. The triviality of this claim suggests that proper parthood is a conjunctive relation, namely a conjunction of distinctness with some parthood relation. And on pain of circularity, that parthood relation cannot be the disjunctively defined parthood we started with.

In other words, I find it attractive to think that there is some more fundamental relation than either of the two relations of contemporary mereology. And once we have that more fundamental relation, we can define contemporary mereological parthood as the disjunction of the more fundamental relation with identity and contemporary mereological proper parthood as the conjunction of the more fundamental relation with distinctness.

But I am open to the possibility that the more fundamental relation just is one of parthood and proper parthood, in which case the claim that everything is a part of itself or the claim that nothing is a part of itself is respectively non-trivial.

I will call the more fundamental relation “real parthood”. It is a relation that underlies paradigmatic instances of proper parthood. And now genuine metaphysical questions open up about identity, distinctness and real parthood. We have three possibilities:

  1. Necessarily, each thing is a real part of itself.

  2. Necessarily, nothing is a real part of itself.

  3. Possibly something is a real part of itself and possibly something is not a real part of itself.

If (1) is true, then real parthood is necessarily coextensive with contemporary mereological parthood. If (2) is true, then real parthood is necessarily coextensive with contemporary mereological proper parthood.

My own guess is that if there is such a thing as parthood at all, then (3) is true.

For the more fundamental a relation, the more I want to be able to recombine where it holds. Why shouldn’t God be able to induce the relation between two distinct things or refuse to induce it between a thing and itself? And it’s really uncomfortable to think that whatever the real parthood relation is, God has to be in that relation to himself.

Perhaps, though, the real parthood relation is a kind of dependency relation. If so, then since nothing can be dependent on itself, we couldn’t have a thing being a real part of itself, and real parthood would be coextensive with proper parthood.

All this is making me think that either real parthood is necessarily coextensive with proper parthood, or it is not necessarily coextensive with either of the two relations of contemporary mereology.

Monday, December 6, 2021

Samuel Clarke on our ignorance of the essence of God

At times I am made uncomfortable by this objection to arguments for the existence of God: there feels like there is something fishy about inferring the existence of a being about which we know so very little. It may be that theism is the only reasonable explanation of the universe’s existence, but if we know so very little about that explanation, can the inference to the truth of that explanation be a genuine version of inference to best explanation?

Newton's disciple Samuel Clarke has a nice answer to this objection:

There is not so mean and contemptible a plant or animal, that does not confound the most enlarged understanding upon earth; nay, even the simplest and plainest of all inanimate beings have their essence or substance hidden from us in the deepest and most impenetrable obscurity.

In other words, all our ordinary day-to-day inferences are to things whose essence is hidden.

It may be thought that now that we know about DNA, we do know the essences of plants and animals. But even if that is true, which I am sceptical of, it doesn’t matter: for belief in plants and animals was quite reasonable even before our superior science. And even this day, our knowledge of the essences of the fundamental entities of physics (e.g., particles, fields, wavefunctions) is basically nil. All we know is some facts about the effects of these entities.

Thursday, December 2, 2021

Misleadingness simpliciter

It is quite routine that learning a truth leads to rationally believing new falsehoods. For we all rationally believe many falsehoods. Suppose I rationally believe a falsehood p and I don’t believe a truth q. Then, presumably, I don’t believe the conjunction of p and q. But suppose I learn q. Then, typically, I will rationally come to believe the conjunction of p and q, a falsehood I did not previously believe.

Thus there is a trivial sense in which every truth I learn is misleading. But a definition of misleadingness on which every truth is misleading doesn’t seem right. Or at least it’s not right to say that every truth is misleading simpliciter. What could misleadingness simpliciter be?

In a pair of papers (see references here) Lewis and Fallis argue that we should assign epistemic utilities to our credences in such a way that conditioning on the truth should never be bad for us epistemically speaking—that it should not decrease our actual epistemic utility.

I think this is an implausible constraint. Suppose a highly beneficial medication has been taken by a billion people. I randomly sample a hundred thousand of these people and see what happened to them in the week after receiving the medication. Now, out of a billion people, we can expect about two hundred thousand to die in any given week. Suppose that my random sampling is really, really unlucky, and I find that fifty thousand of the people in my sample died within a week of taking the medication. Completely coincidentally, of course, since as I said the medication is highly beneficial.

Based on my data, I rationally come to believe the importantly false claim that the medication is very harmful. I also come to believe the true claim that half of my random sample died a week after taking the medication. But while that claim is true, it is quite unimportant except as misleading evidence for the harmfulness of the medication. It is intuitively very plausible that after learning the truth about half of the people in my sample dying, I am worse off epistemically.

It seems clear that in the medication case, my data is true and misleading in a non-trivial way. This suggests a definition of misleadingness simpliciter:

  • A proposition p is misleading simpliciter if and only if one’s overall epistemic utility goes down when one updates on p.

And this account of misleadingness is non-trivial. If we measure epistemic utility using strictly proper scoring rules, and if our credences are consistent, then the expected epistemic value of updating on the outcome of a non-trivial observation is positive. So we should not expect the typical truth to be misleading in the above sense. But some are misleading.
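Here is a toy numerical illustration (the numbers are my own): measure epistemic utility by the Brier penalty over the three singleton hypotheses (whose negation is a strictly proper score on that partition), and updating on a truth can still lower actual accuracy.

```python
# Worlds 1, 2, 3; world 1 is actual. (Toy numbers of my own choosing.)
prior = {1: 0.1, 2: 0.6, 3: 0.3}
E = {1, 2}                      # true at world 1, yet misleading

pE = sum(prior[w] for w in E)
posterior = {w: (prior[w] / pE if w in E else 0.0) for w in prior}

def brier_penalty(p, actual):
    # Squared distance from the truth over the singletons; lower is better.
    return sum((p[w] - (1.0 if w == actual else 0.0)) ** 2 for w in p)

before = brier_penalty(prior, 1)       # 0.81 + 0.36 + 0.09 = 1.26
after = brier_penalty(posterior, 1)    # 72/49, about 1.47
assert after > before   # conditioning on the true E lowered actual accuracy
```

The expected Brier penalty still goes down under updating, as propriety guarantees; it is only the actual penalty, at the unlucky world 1, that goes up. That is misleadingness simpliciter.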

From this point of view, Lewis and Fallis are making a serious mistake: they are trying to measure epistemic utilities in such a way as to rule out the possibility of misleading truths.

By the way, I think I can prove that for any measure of epistemic utility obtained by summing a single strictly proper score across all events, there will be a possibility of misleadingness simpliciter.

Final note: We don’t need to buy into the formal mechanism of epistemic utilities to go with the above definition. We could just say that something is misleading iff coming to believe it would rationally make one worse off epistemically.

Wednesday, December 1, 2021

Investigative scoring rules

Let s be an accuracy scoring rule on a finite probability space. Thus, s(P) is a random variable measuring how close a probability assignment P is to the truth. Here are two reasonable conditions on the rule (the name for the second is made up):

  1. Propriety: EPs(Q)≤EPs(P) for any distinct probability assignments P and Q.

  2. Investigativeness: EPs(P)≤P(A)EPAs(PA)+P(Ac)EPAcs(PAc) whenever 0 < P(A)<1.

where EP is expected value with respect to P, PA is short for P(⋅|A), and Ac is the complement of A. Propriety says that if we are trying to maximize expected accuracy, we will never have reason to evidencelessly switch to a different credence. Investigativeness says that expected accuracy maximization never requires one to close one’s eyes to evidence because the expected accuracy after conditionalizing on learning whether A holds is at least as good as the currently expected accuracy. And we have strict versions of the two conditions provided the inequalities are always strict.

It is well-known that propriety implies investigativeness, and ditto for the strict variants.

One might guess that the other direction holds as well: that investigativeness implies propriety. But (perhaps surprisingly) not! In fact, strict investigativeness does not imply propriety.

Let s(P) be the following score: s(P)(w)=|{A : P(A)=1 and w ∈ A}|. In other words, s(P) measures how many true propositions P assigns probability 1 to. It is easy to see that s(PA)≥s(P) everywhere on A, and ditto for Ac in place of A, so the right-hand side in (2) is at least as big as P(A)EPAs(P)+P(Ac)EPAcs(P)=EPs(P).

But propriety does not hold as long as our probability space has at least two points. For let P be any regular probability—one that assigns a non-zero value to every non-empty set—and let Q be any probability concentrated at one point w0. Then s(P)=1 everywhere (the only subset P assigns probability 1 to is the whole space) while EPs(Q)≥1 + P({w0}) > 1 (since Q assigns probability 1 to {w0} and to the whole space), and so we don’t have propriety.

If we want strict investigativeness, just replace s with s + ϵs′ where s′ is a Brier score and ϵ is small and positive. Then we will have strict investigativeness for s′, and hence for s + ϵs′ as well, but if ϵ is sufficiently small, we won’t have propriety.
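The counterexample is small enough to verify by brute force on a three-point space (a sketch; the particular regular probability is my own choice):

```python
from itertools import combinations

Omega = [0, 1, 2]
P = {0: 0.5, 1: 0.3, 2: 0.2}   # a regular probability
events = [frozenset(s) for r in range(4) for s in combinations(Omega, r)]

def prob(p, A):
    return sum(p[w] for w in A)

def s(p):
    # s(p)(w) = number of events with p-probability 1 that are true at w
    return {w: sum(1 for A in events if abs(prob(p, A) - 1) < 1e-9 and w in A)
            for w in Omega}

def expect(p, f):
    return sum(p[w] * f[w] for w in Omega)

def condition(p, A):
    pA = prob(p, A)
    return {w: (p[w] / pA if w in A else 0.0) for w in Omega}

# Investigativeness holds for every A with 0 < P(A) < 1:
lhs = expect(P, s(P))          # = 1: only the whole space has P-probability 1
for A in events:
    pA = prob(P, A)
    if 0 < pA < 1:
        Ac = frozenset(Omega) - A
        rhs = (pA * expect(condition(P, A), s(condition(P, A)))
               + (1 - pA) * expect(condition(P, Ac), s(condition(P, Ac))))
        assert lhs <= rhs + 1e-9

# ...but propriety fails: by P's lights, the point mass Q at 0 scores better.
Q = {0: 1.0, 1: 0.0, 2: 0.0}
assert expect(P, s(Q)) > expect(P, s(P))    # 3.0 > 1.0
```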

It is interesting to ask whether investigativeness plus some additional plausible condition might imply propriety. A very plausible further condition is that if P is at least as close to the truth as Q for every event, then P gets a no-worse score. Another plausible condition is additivity. But my examples satisfy both conditions. I don’t see other plausible conditions to add, besides propriety as such.

Monday, November 29, 2021

Simultaneous causation and occasionalism

In an earlier post, I said that an account that insists that all fundamental causation is simultaneous but secures the diachronic aspects of causal series by means of divine conservation is “a close cousin to occasionalism”. For a diachronic causal series on this theory has two kinds of links: creaturely causal links that function instantaneously and divine conservation links that preserve objects “in between” the instants at which creaturely causation acts. This sounds like occasionalism, in that the temporal extension of the series is entirely due to God working alone, without any contribution from creatures.

I now think there is an interesting way to blunt the force of this objection by giving another role to creatures using a probabilistic trick that I used in my previous post. This trick allows created reality to control how long diachronic causal series take, even though all creaturely causation is simultaneous. And if created reality were to control how long diachronic causal series take, a significant aspect of the diachronicity of diachronic causal series would involve creatures, and hence the whole thing would look rather less occasionalist.

Let me explain the trick again. Suppose time is discrete, being divided into lots of equally-spaced moments. Now imagine an event A1 that has a probability 1/2 of producing an event A2 during any instant that A1 exists in, as long as A1 hasn’t already produced A2. Suppose A1 is conserved for as long as it takes to produce A2. Then the probability that it will take n units of time for A2 to be produced is (1/2)^(n+1). Consequently, the expected wait time for A2 to happen is:

  • (1/2)⋅0 + (1/4)⋅1 + (1/8)⋅2 + (1/16)⋅3 + ... = 1.

We can then similarly set things up so that A2 causes A3 on average in one unit of time, and A3 causes A4 on average in one unit of time, and so on. If n is large enough, then by the Central Limit Theorem, it is likely that the lag time between A1 and An will be approximately n units of time (plus or minus an error on the order of n^(1/2) units), and if the units of time are short enough, we can get arbitrarily good precision in the lag time with arbitrarily high probability.

If the probability of each event triggering the next at an instant is made bigger than 1/2, then the expected lag time from A1 to An will be less than n, and if the probability is smaller than 1/2, the expected lag time will be bigger than n. Thus the creaturely trigger probability parameter, which we can think of as measuring the “strength” of the causal power, controls how long it takes to get to An through the “magic” of probabilistic causation and the Central Limit Theorem. Thus, the diachronic time scale is controlled precisely by creaturely causation—even though divine conservation is responsible for Ai persisting until it can cause Ai + 1. This is a more significant creaturely input than I thought before, and hence it is one that makes for rather less in the way of occasionalism.
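The probabilistic trick is easy to simulate (a minimal sketch; a pseudorandom coin flip per instant stands in for each event's causal power):

```python
import random

def wait(p, rng):
    """Units of time before the trigger fires (0 = fires at the very first instant)."""
    n = 0
    while rng.random() >= p:   # the event fires at a given instant with probability p
        n += 1
    return n

rng = random.Random(0)       # fixed seed for reproducibility
p = 0.5                      # per-instant trigger probability
links = 10_000               # A1 causes A2 causes ... causes A10001

total = sum(wait(p, rng) for _ in range(links))

# Each link waits (1 - p)/p = 1 unit on average, so the total lag concentrates
# near `links` units; the CLT puts the spread on the order of links**0.5.
assert abs(total - links) < 10 * links ** 0.5
```

Raising p shortens the simulated lag and lowering it lengthens the lag, which is the sense in which the creaturely "strength" parameter controls the time scale.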

This looks like a pretty cool theory to me. I don’t believe it to be true, because I don’t buy the idea of all causation being simultaneous, but I think it gives a really nice option to the defender of simultaneous causation who wants to avoid occasionalism.

Simultaneous causation and determinism

Consider the Causal Simultaneity Thesis (CST) that all causation is simultaneous. Assume that simultaneity is absolute (rather than relative). Assume there is change. Here is a consequence I will argue for: determinism is false. In fact, more strongly, there are no diachronic deterministic causal series. What is surprising is that we get this consequence without any considerations of free will or quantum mechanics.

Since there is a very plausible argument from presentism to CST (a non-simultaneous fundamental causal relation could never obtain between two existent things given presentism), we get an argument from presentism to indeterminism.

Personally, I am inclined to think of this argument as a bit of evidence against CST and hence against presentism, because it seems to me that there could be a deterministic world, even though there isn’t. But tastes differ.

Now the argument for the central thesis. The idea is simple. On CST, as soon as the deterministic causes of an effect are in place, their effect is in place. Any delay in the effect would mean a violation of the determinism. There can be nothing in the deterministic causes to explain how much delay happens, because all the causes work simultaneously. And so if determinism is true—i.e., if everything has a deterministic cause—then all the effects happen all at once, and everything is already in the final state at the first moment of time. Thus there is no change if we have determinism and CST.

The point becomes clearer when we think about how it is an adherent of CST explains diachronic causal series. We have an item A that starts existing at time t1, persists through time t2 (kept in existence not by its own causal power, as that would require a diachronic causal relation, but either by a conserver or a principle of existential inertia), then causes an item B, which then persists through time t3 and then causes an item C, and so on. While any two successive items in the causal series A, B, C, ... must overlap temporally (i.e., there must be a time at which they both exist), we need not have temporal overlap between A and C, say. We can thus have things perishing and new things coming into being after them.

But if the causation is deterministic, then as soon as A exists, it will cause B, which will cause C, and so on, thereby forcing the whole series to exist at once, and destroying change.

In an earlier post, I thought this made for a serious objection to CST. I asked: “Why does A ‘wait’ until t2 to cause B?” But once we realize that the issue above has to do with determinism, we see that an answer is available. All we need to do is to suppose there is probabilistic causation.

For simplicity (and because this is what fits best with causal finitism) suppose time is discrete. Then we may suppose that at each moment of time at which A exists it has a certain low probability pAB of causing B if B does not already exist. Then the probability that A will cause B precisely after n units of time is (1 − pAB)^n pAB. It follows mathematically that “on average” it will cause B after (1 − pAB)/pAB fundamental units of time.

It follows that for any desired average time delay, a designer of the universe can design a cause that has that delay. Let’s say that we want B to come into existence on average u fundamental units of time after A has come into existence. Then the designer can give A a causal power of producing B at any given moment of time at which B does not already exist with probability pAB = 1/(1 + u).

The resulting setup will be indeterministic, and in particular we can expect significant random variation in how long it takes to get B from A. But if the designer wants more precise timing, that can be arranged as well. Let’s say that our designer wants B to happen very close to precisely one second after A. The designer can then ensure that, say, there are a million instants of time in a second, and that A has the power to produce an event A1 with a probability at any given instant such that the expected wait time will be 0.0001 seconds (i.e., 100 fundamental units of time), and A1 the power to produce A2 with the same probability, and so on, with A10000 = B. Then by the Central Limit Theorem, the actual wait time between A and B can be expected to be fairly close to 10000 × 0.0001 = 1 second, and the designer can get arbitrarily high confidence of an arbitrarily high precision of delay by inserting more instants in each second, and more intermediate causes between A and B, with each intermediate cause having an average delay time of 100 fundamental units (say). (This uses the fact that the geometric distribution has a finite third moment and the Berry-Esseen version of the Central Limit Theorem.)
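The designer's arithmetic can be spelled out from the mean and variance of the geometric waiting time (a sketch with the numbers from the example; the variable names are mine):

```python
# Geometric waiting time with per-instant success probability p:
#   mean = (1 - p)/p,  variance = (1 - p)/p**2.
instants_per_second = 10**6
links = 10**4                 # intermediate causes between A and B
target = 100                  # desired mean wait per link, in instants
p = 1 / (1 + target)          # gives mean (1 - p)/p = target

mean_per_link = (1 - p) / p
var_per_link = (1 - p) / p**2

total_mean = links * mean_per_link          # 10^6 instants = 1 second
total_sd = (links * var_per_link) ** 0.5    # about 10050 instants, i.e. ~0.01 s

assert abs(total_mean - instants_per_second) < 1e-3
relative_spread = total_sd / total_mean     # about 0.01: roughly 1% precision
```

Adding more instants per second, with proportionally more intermediate causes, shrinks the relative spread like 1/sqrt(links), which is how arbitrarily high precision is achieved.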

Thus, a designer of the universe can make an arbitrarily precise and reliable near-deterministic changing universe despite CST. And that really blunts the force of my anti-deterministic observation as a consideration against CST.

Tuesday, November 23, 2021

Plural and singular grounding

Here’s a tempting principle:

  1. If x and y ground z, then the fusion of x and y grounds z.

In other words, we don’t need proper pluralities for grounding—their fusions do the job just as well.

But the principle is false. For the principle is only plausible if any two things have a fusion. But if x and y do not overlap, then x and y ground their fusion. And then (1) would say that the fusion grounds itself, which is absurd.

This makes it very plausible to think that plural objectual grounding does not reduce to singular objectual grounding.

Failures of supervenience on Lewis's system

Start with the concept of “narrowly physical” for facts about the arrangement of physical entities and first-order physical properties such as “charge” and “mass”.

Here are two observations I have not seen made:

  1. On Lewis-Ramsey accounts of laws, laws of nature concerning narrowly physical facts do not supervene on narrowly physical facts.

  2. On Lewis’s account of causation, causal facts about narrowly physical events do not supervene on narrowly physical facts.

This means that in a Lewisian system we have at least four things we could mean by “physical”:

  1. narrowly physical

  2. grounded in the laws concerning narrowly physical facts and/or the narrowly physical facts themselves

  3. grounded in the causal facts about narrowly physical events and/or the narrowly physical facts themselves

  4. grounded in the causal facts about narrowly physical events, the laws concerning narrowly physical facts and/or the narrowly physical facts themselves.

Here’s a corollary for the philosophy of mind:

  • On a Lewisian system, we should not even expect the mental properties of purely narrowly physical beings to supervene on narrowly physical facts.

Argument for (1): The laws are the optimal systematization of particular facts. But now imagine a possible world where there is just a coin that is tossed a trillion times, and with no discernible pattern lands heads about half the time. In the best systematization, we attribute a chance of 1/2 to the coin landing heads. But now imagine a possible world with the same narrowly physical facts, but where there is an angel that thought about ℵ3 about a million times—each time, with a good prior mental explanation of the train of thought—and each of these times was a time just before the coin landed heads. Then the best systematization of the coin tosses will no longer make them simply have a chance of 1/2 of landing heads. Rather, they will have a chance 1/2 of landing heads when the angel didn’t just think about ℵ3.

Argument for (2): Add to the world in the above argument some cats and suppose that on any day when the fattest cat in the world eats n mice, that leads the angel to think about ℵn, though there are other things that can get the angel to think about ℵn. We can set things up so that the fattest cat’s eating three mice in a day causes the coin to land heads on the Lewisian counterfactual account of causation, but if we subtract the angel from the story, this will no longer be the case.

Monday, November 22, 2021

Functionalism implies the possibility of zombies

Endicott has observed that functionalism in the philosophy of mind contradicts the widely accepted supervenience of the mental on the physical, because you can have worlds where the functional features are realized by non-physical processes.

My own view is that a functionalist physicalist shouldn’t worry about this much. It seems to be a strength of a functionalist view that it makes it possible to have non-physical minds, and the physicalist should only hold that in the actual world all the minds are physical (call this “actual-world physicalism”).

But here is something that might worry a physicalist a little bit more.

  • If functionalism and actual-world physicalism are true, there is a possible world which is physically exactly like ours but where there is no pain.

Here is why. On functionalism, pain is constituted by some functional roles. No doubt an essential part of that role is the detection of damage and the production of aversive behavior. Let’s suppose for simplicity that this role is realized in C-fiber firing in all beings capable of pain (the argument generalizes straightforwardly if there are multiple realizers). Now imagine a possible world physically just like this one, but with two modifications: there are lots of blissful non-physical angels, and all C-fiber-equipped brains have an additional non-physical causal power to trigger C-fiber firing whenever an angel thinks about that brain. It is no longer true that the functional trigger for C-fiber firing is damage. Now, the functional trigger for C-fiber firing is the disjunction of damage and being thought about by an angel, and hence C-fiber firing no longer fulfills the functional role of pain. But now add that the angels never actually think about a brain while that brain is alive, though they easily could. Then the world is physically just like ours, but nobody feels any pain.

One might object that a functional role of a detector is unchanged by adding a disjunct to what is being detected. But that is mistaken. After all, imagine that we modify the hookups in a brain so that C-fiber firing is triggered by damage and lack of damage. Then clearly we’ve changed the functional role of C-fiber firing—now, the C-fibers are triggered 100% of the time, no matter what—even though we’ve just added a disjunct.

We can also set up a story where it is the aversive behavior side of the causal role that is removed. For instance, we may suppose that there is a magical non-physical aura normally present everywhere in the universe, and C-fiber firing interacts with this aura to magically move human beings in the opposite direction to the one their muscles are moving them to. The aura does nothing else. Thus, if the aura is present and you receive a painful stimulus, you now move closer to the stimulus; if the aura is absent, you move further away. It is no longer the case that C-fibers have the function of producing aversive behavior. However, we may further imagine that at times random abnormal holes appear in the aura, perhaps due to a sport played by non-physical pain-free imps, and completely coincidentally a hole has always appeared around any animal while its C-fibers were firing. Thus, the physical aspects of that world can be exactly the same as in ours, but there is no pain.

The arguments generalize to show that functionalists are committed to zombies: beings physically just like us but without any conscious states. Interestingly, these are implemented as the reverse of the zombies dualists think up. The dualist’s zombies lack non-physical properties that the dualist (rightly) thinks we have, and this lack makes them not be conscious. But my new zombies are non-conscious precisely because they have additional non-physical properties.

Note that the arguments assume the standard physicalist-based functionalism, rather than Koons-Pruss Aristotelian functionalism.

Friday, November 19, 2021

A privation theory of evil without lacks of entities

Taking the privation theory literally, evil is constituted by the non-existence of something that should exist. This leads to a lot of puzzling questions of what that “something” is in cases such as error and pain.

But I am now wondering whether one couldn’t have a privation theory of evil on which evil is a lack of something, but not of an entity. What do I mean? Well, imagine you’re a thoroughgoing nominalist, believing in neither tropes nor universals. Then you think that there is no such thing as red, but of course you can say that sometimes a red sign fades to gray. It is natural to say that the faded sign is lacking the due color red, and the nominalist should be able to say this, too.

Suppose that in addition to being a thoroughgoing nominalist, you are a classical theist. Then you will want to say this: the sign used to participate in God by being red, but now it no longer thusly participates in God (though it still otherwise participates in God). Even though you can’t be a literal privation theorist, and hold that some entity has perished from the sign, you can be a privation theorist of sorts, by saying that the sign has in one respect stopped participating in God.

A lot of what I said in the previous two paragraphs is fishy. The “thusly” seems to refer to redness, and “one respect” seems to involve a quantification over respects. But presumably nominalists say stuff like that in contexts other than God and evil. So they probably think they have a story to tell about such statements. Why not here, then?

Furthermore, imagine that instead of a nominalist we have a Platonist who does not believe in tropes (not even the trope of participating). Then the problems of the “thusly” and “one respect” and the like can be solved. But it is still the case that there is no entity missing from the sign. Yet we still recognizably have a privation theory.

This makes me wonder: could it be that a privation theory that wasn’t committed to missing entities solve some of the problems that more literal privation theories face?

An omnipotence principle from Aquinas

Aquinas believes that it follows from omnipotence that:

  1. Any being that depends on creatures can be created by God without its depending on creatures.

But, plausibly:

  2. If x and y are a couple z, then z depends on x and y.

  3. If x and y are a couple z, then necessarily if z exists, z depends on x and y.

  4. Jill and Joe Biden are a couple.

  5. Jill and Joe Biden are creatures.

But this leads to a contradiction. By (4), we have a couple, call it “the Bidens”, consisting of Jill and Joe Biden, and by (2) that couple depends on Jill and Joe Biden. By (1) and (5), God can create the Bidens without either Jill or Joe Biden. But that contradicts (3).

So, Aquinas’ principle (1) implies that there are no couples. More generally, it implies that there are no beings that necessarily depend on other creatures. All our artifacts would be like that: they would depend on parts. Thus, Aquinas’ principle implies there are no artifacts.

Thomists are sometimes tempted to say that artifacts, heaps and the like are accidental beings. But the above argument shows that that won’t do. God’s power extends to all being, and whatever being creatures can bestow, God can bestow absent the creatures. If the accidental beings are beings, God can create them without their parts. But a universe with a heap and yet nothing heaped is absurd. So, I think, we need to deny the existence of accidental beings.

If we lean on (1) further, we get an argument for survivalism. Either Socrates depends on his body or not. If Socrates does not depend on his body, he can surely survive without his body after death. But if Socrates does depend on his body, then by (1) God can create Socrates disembodied, since Socrates’ body is a creature. But if God can create Socrates disembodied, surely God can sustain Socrates disembodied, and so Socrates can survive without his body. In fact, the argument does not apply merely to humans but to every embodied being: bacteria, trees and wolves can all survive death if God so pleases.

Things get even stranger once we get to the compositional structure of substances. Socrates presumably depends on his act of being. But Socrates’ act of being is itself a creature. Thus, by (1), God could create Socrates without creating Socrates’ act of being. Then Socrates would exist without having any existence.

I like the sound of (1), but the last conclusion seems disastrous. Perhaps, though, the lesson we get from this is that the esse of Socrates isn’t an entity? Or perhaps we need to reject (1)?

Valuing and behavioral tendencies

It is tempting to say that I value a wager W at x provided that I would be willing to pay any amount up to x for W and unwilling to pay an amount larger than x. But that’s not quite right. For often the fact that a wager is being offered to me would itself be relevant information that would affect how I value the wager.

Let’s say that you tossed a fair coin. Then I value a wager that pays ten dollars on heads at five dollars. But if you were to try to sell me that wager for a dollar, I wouldn’t buy it, because your offering it to me at that price would be strong evidence that you saw the coin landing tails.
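To see numerically how the offer itself can move the wager’s value, here is a back-of-the-envelope Bayesian sketch in Python. The likelihoods (how often a seller who saw the coin would offer the wager at $1 on heads versus tails) are made-up numbers, purely for illustration:

```python
# Prior: fair coin.
p_heads = 0.5

# Hypothetical likelihoods of being offered the wager for $1:
# a seller who saw tails is much more likely to offer it cheap.
p_offer_given_heads = 0.05
p_offer_given_tails = 0.9

# Bayes' theorem: P(heads | offered at $1).
p_offer = p_heads * p_offer_given_heads + (1 - p_heads) * p_offer_given_tails
p_heads_given_offer = p_heads * p_offer_given_heads / p_offer

expected_payout = 10 * p_heads_given_offer
print(round(p_heads_given_offer, 3))  # about 0.053
print(round(expected_payout, 2))      # about 0.53: well under the $1 price
```

So although the unconditional value of the wager is $5, buying it for $1 from this seller would be a losing deal.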

Thus, if we want to define how much I value a wager at in terms of what I would be willing to pay for it, we have to talk about what I would be willing to pay for it were the fact that the wager is being offered statistically independent of the events in the wager.

But sometimes this conditional does not help. Imagine a wager W that pays $123.45 if p is true, where p is the proposition that at some point in my life I get offered a wager that pays $123.45 on some eventuality. My probability of p is quite low: it is unlikely anybody will offer me such a wager. Consequently, it is right to say that I value the wager at some small amount, maybe a few dollars.

Now consider the question of what I would be willing to pay for W were the fact that the wager is being offered statistically independent of the events in the wager, i.e., independent of p. Since my being offered W entails p, the only way we can have the statistical independence is if my being offered W has credence zero or p has credence one. It is reasonable to say that the closest possible world where one of these two scenarios holds is a world where p has credence one because some wager involving a $123.45 payout has already been offered to me. In that world, however, I am willing to pay up to $123.45 for W. Yet that is not what I value W at.

Maybe when we ask what we would be willing to pay for a wager, we mean: what we would be willing to pay provided that our credences stayed unchanged despite the offer. But a scenario where our credences stay unchanged despite the offer is a very weird one. Obviously, when an offer is made, your credence that the offer is made goes up, unless you’re seriously irrational. So this new counterfactual question asks us what we would decide in worlds where we are seriously irrational. And that’s not relevant to the question of how we value the wager.

Maybe instead of asking about the prices at which I would accept an offer, I should instead ask about the prices at which I would make an offer. But that doesn't help either. Go back to the fair coin case. I value a wager that pays you ten dollars on heads at negative five dollars. But I might not offer it to you for eight dollars, because it is likely that you would pay eight dollars for this wager only if you actually saw that the coin turned out heads, in which case this would be a losing proposition for me.

The upshot is, I think, that the question of what one values a wager at is not to be defined in terms of simple behavioral tendencies or even simple counterfactualized behavioral tendencies. Perhaps we can do better with a holistic best-fit analysis.

Thursday, November 18, 2021

The Paradox of Charity

We might call the following three statements "the Paradox of Charity":

  1. In charity, we love our neighbor primarily because of our neighbor’s relation to God.

  2. In the best kind of love, we love our neighbor primarily because of our neighbor’s intrinsic properties.

  3. Charity is the best kind of love.

I think this paradox discloses something very deep.

Note that the above three statements do not by themselves constitute a strictly logical contradiction. To get a strictly logical contradiction we need a premise like:

  4. No intrinsic property of our neighbor is a relation to God.

Now, let’s think (2) through. I think our best reason for accepting (2) is not abstract considerations of intrinsicness, but particular cases of properties. In the best kind of love, perhaps, we love our neighbor because our neighbor is a human being, is a finite person, has a potential for human flourishing, etc. We may think that these features are intrinsic to our neighbor, but we directly see them as apt reasons for the best kind of love, without depending on their intrinsicness.

But suppose ontological investigation of such paradigm properties for which one loves one’s neighbor with the best kind of love showed that these properties are actually relational rather than intrinsic. Would that make us doubt that these properties are a fit reason for the best kind of love? Not at all! Rather, if we were to learn that, we would simply deny (2). (And notice that plenty of continentally-inclined philosophers do think that personhood is relational.)

And that is my solution. I think (1), (3) and (4) are true. I also think that the best kind of neighbor love is motivated by reasons such as that our neighbor is a human being, or a person, or has a potential for human flourishing. I conclude from (1), (3) and (4) that these properties are relations to God.

But how could these be relations to God? Well, all the reality in a finite being is a participation in God. Thus, being human, being a finite person and having a potential for human flourishing are all ways of participating in God, and hence are relations to God. Indeed, I think:

  5. Every property of every creature is a relation to God.

It follows that no creature has any intrinsic property. The closest we come to having intrinsic properties are what one might call “almost intrinsic properties”—properties that are relational to God alone.

We can now come back to the original argument. Once we have seen that all creaturely properties are participations in God, we have no reason to affirm (2). But we can still affirm, if we like:

  6. In the best kind of love, we love our neighbor primarily because of our neighbor’s almost intrinsic properties, i.e., our neighbor’s relations only to God.

And there is no tension with (1) any more.

Wednesday, November 17, 2021

First person survivalship bias?

Suppose I take a nasty fall while biking and hit my helmeted head. But I remain conscious. Here is the obvious first thing for a formal epistemologist to do: increase my credence in the effectiveness of this brand of helmets. But by how much?

In an ordinary case of evidence gathering, I simply conditionalize on my evidence. But this is not an ordinary case, because if things had gone otherwise—namely, if I did not remain conscious—I wouldn’t be able to update or think in any way. It seems like I am now subject to a survivorship bias. What should I do about that? Should I simply dismiss the evidence entirely, and leave unchanged my credence in the effectiveness of helmets? No! For I cannot deny that I am still conscious—my credence for that is now forced to be one. If I leave all my other credences unchanged, my credences will become inconsistent, assuming they were consistent before, and so I have to do something to my other credences to maintain consistency.

It is tempting to think that perhaps I need to compensate for survivorship bias in some way, perhaps updating my credence in the effectiveness of the helmet to be bigger than my priors but smaller than the posteriors of a bystander who had the same priors as I did but got to observe my continued consciousness without a similar bias, since they would have been able to continue to think even were I to become unconscious.

But, no. What I should do is simply update on my consciousness (and on the impact, but if I am a perfect Bayesian agent, I have already done that as soon as it was evident that I would hit the ground), and not worry about the fact that if I weren’t conscious, I wouldn’t be around to update on it. In other words, there is no such problem as survivorship bias in the first person, or at least not in cases like this.

To see this, let’s generalize the case. We have a situation where the probability space is partitioned into outcomes E1, ..., En, each with non-zero prior credence. I will call an outcome Ei normal if on that outcome you would know for sure that Ei has happened, you would have no memory loss, and would be able to maintain rationality. But some of the outcomes may be abnormal. I will have a bit more to say about the kinds of abnormality my argument can handle in a moment.

We can now approach the problem as follows: Prior to the experiment—i.e., prior to the potentially incapacitating observation—you decide rationally what kind of evidence update procedures to adopt. On the normal outcomes, you get to stick to these procedures. On the abnormal ones, you won’t be able to—you will lose rationality, and in particular your update will be statistically independent of the procedure you rationally adopted. This independence assumption is pretty restrictive, but it plausibly applies in the bike crash case. For in that case, if you become unconscious, your credences become fixed at the point of impact or become scrambled in some random way, and you have no evidence of any connection between the type of scrambling and the rational update procedure you adopted. My story can even handle cases where on some of the abnormal outcomes you don’t have any credences, say because your brain is completely wiped or you cease to exist, again assuming that this is independent of the update procedure you adopted for the normal outcomes.

It turns out to be a theorem that under conditions like this, given some additional technical assumptions, you maximize expected epistemic utility by conditionalizing when you can, i.e., whenever a normal outcome occurs. And epistemic utility arguments are formally interchangeable with pragmatic arguments (because rational decisions about wager adoption yield a proper epistemic utility), so we also get a pragmatic argument. The theorem will be given at the end of this post.

This result means we don’t have to worry in firing squad cases that you wouldn’t be there if you weren’t hit: you can just happily update your credences (say, regarding the number of empty guns, the accuracy of the shooters, etc.) on your not being hit. Similarly, you can update on your not getting Alzheimer’s (which is, e.g., evidence against your siblings getting it), on your not having fallen asleep yet (which may be evidence that a sleeping pill isn’t effective), etc., much as a third party who would have been able to observe you on both outcomes should. Whether this applies to cases where you wouldn’t have existed in the first place on one of the items in the partition—i.e., whether you can update on your existence, as in fine-tuning cases—is a more difficult question, but the result makes some progress towards a positive answer. (Of course, it wouldn’t surprise me if all this were known. It’s more fun to prove things oneself than to search the literature.)

Here is the promised result.

Theorem. Assume a finite probability space. Let I be the set of i such that Ei is normal. Suppose that epistemic utility is measured by a proper accuracy scoring rule si when Ei happens for i ∈ I, so that the epistemic utility of a credence assignment ci is si(ci) on Ei. Suppose that epistemic utility is measured by a random variable Ui on Ei (not dependent on the choice of the cj for j ∈ I) for i ∉ I. Let U(c) = ∑i ∈ I 1Ei ⋅ si(ci) + ∑i ∉ I 1Ei ⋅ Ui. Assume you have consistent priors p that assign non-zero credence to each normal Ei, and that the expectation Ep of the second sum with respect to these priors is well defined. Then the expected value of U(c) with respect to p is maximized when ci(A) = p(A ∣ Ei) for i ∈ I. If additionally the scoring rules are strictly proper, and the p-expectation of the second sum is finite, then the expected value of U(c) is uniquely maximized by that choice of ci.

This is one of those theorems that are shorter to prove than to state, because they are pretty obvious once fairly clearly formulated.

Normally, all the si will be the same. It's worth thinking about whether any useful generalization is gained by allowing them to be different. Perhaps there is. We could imagine situations where, depending on what happens to you, your epistemic priorities rightly change. Thus, if an accident leaves you with some medical condition, knowing more about that medical condition will be valuable, while if you don't get that medical condition, the value of knowing more about it will be low. Taking that into account with a single scoring rule is apt to make the scoring rule improper. But in the case where you are conditioning on that medical condition itself, the use of different but individually proper scoring rules when the condition eventuates and when it does not can model the situation rather nicely.

Proof of Theorem: Let ci be the result of conditionalizing p on Ei. Then the expectation of si(ci′) with respect to ci is maximized when (and only when, if the conditions of the last sentence of the theorem hold) ci′=ci by propriety of si. But the expectation of si(ci′) with respect to ci equals 1/p(Ei) times the expectation of 1Ei ⋅ si(ci′) with respect to p. So the latter expectation is maximized when (and only when, given the additional conditions) ci′=ci.
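As a numerical sanity check of the theorem, here is a small Python simulation using the Brier score as the strictly proper scoring rule; the toy prior and the two-cell all-normal partition are my own illustrative choices:

```python
import random

# Toy finite probability space: four atoms with a consistent prior.
atoms = [0, 1, 2, 3]
prior = {0: 0.1, 1: 0.3, 2: 0.4, 3: 0.2}
E1, E2 = {0, 1}, {2, 3}   # both cells of the partition are "normal"

def conditional(p, E):
    # Conditionalize the prior p on the event E.
    pe = sum(p[w] for w in E)
    return {w: (p[w] / pe if w in E else 0.0) for w in p}

def brier(c, w):
    # Brier score of credences c at world w (higher is better); strictly proper.
    return -sum((c[v] - (1.0 if v == w else 0.0)) ** 2 for v in c)

def expected_score(c1, c2):
    # Prior expectation of the epistemic utility of adopting c1 on E1, c2 on E2.
    return sum(prior[w] * brier(c1 if w in E1 else c2, w) for w in atoms)

cond1, cond2 = conditional(prior, E1), conditional(prior, E2)
best = expected_score(cond1, cond2)

# Perturbed update rules never beat conditionalization in prior expectation.
random.seed(1)
for _ in range(100):
    noisy = {w: max(0.0, cond1[w] + random.uniform(-0.1, 0.1)) for w in atoms}
    total = sum(noisy.values())
    noisy = {w: noisy[w] / total for w in noisy}
    assert expected_score(noisy, cond2) <= best + 1e-9

print(round(best, 4))
```

The loop of random perturbations never beats conditionalizing, exactly as propriety predicts; the strict version of the theorem says the maximizer is unique.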

Tuesday, November 16, 2021

Functionalism and multiple realizability

Functionalism holds that two (deterministic) minds think the same thoughts when they engage in the same computation and have the same inputs. What does it mean for them to engage in the same computation?

This is a hard question. Suppose two computers run programs that sort a series of names in alphabetical order, but they use different sorting algorithms. Given the same inputs, are the two computers engaging in the same computation?

If we say “no”, then functionalism doesn’t have the degree of multiple realizability that we thought it did. We have no guarantee that aliens who behave very much like us think very much like us, or even think at all, since the alien brains may have evolved to compute using different algorithms from us.

If we say “yes”, then it seems we are much better off with respect to multiple realizability. However, there is a tricky issue here: What counts as the inputs and outputs? We just said that the computers using different sorting algorithms engage in the same computation. But the computer using a quicksort typically returns an answer sooner than a computer using a bubble sort, and heats up less. In some cases, the time at which an output is produced itself counts as an output (think of a game where timing is everything). And heat is a kind of output, too.
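The point about outputs can be made concrete. In the Python sketch below (my own toy illustration), a bubble sort and a simple quicksort have identical input-output behavior on a list of names, yet perform different numbers of comparisons, and differing comparison counts are exactly the sort of thing that surfaces as timing and heat:

```python
def bubble_sort(names):
    # O(n^2) sort; counts comparisons as a proxy for time/heat.
    a, comparisons = list(names), 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

def quicksort(names):
    # Simple first-element-pivot quicksort, also counting comparisons.
    comparisons = 0
    def qs(a):
        nonlocal comparisons
        if len(a) <= 1:
            return a
        pivot, rest = a[0], a[1:]
        comparisons += len(rest)
        return (qs([x for x in rest if x < pivot]) + [pivot]
                + qs([x for x in rest if x >= pivot]))
    return qs(list(names)), comparisons

names = ["carol", "alice", "eve", "bob", "dave", "frank", "grace", "heidi"]
out1, n1 = bubble_sort(names)
out2, n2 = quicksort(names)
print(out1 == out2)   # same input-output behavior
print(n1, n2)         # different comparison counts
```

Whether these two count as “the same computation” is precisely the question at issue.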

In my toy sorting algorithm example, presumably we didn’t count the timing and the heat as features of the outputs because we assumed that to the human designers and/or users of the computers the timing and heat have no semantic value, but are merely matters of convenience (sooner and cooler are better). But when we don’t have a designer or user to define the outputs, as in the case where functionalism is applied to randomly evolved brains, things are much more difficult.

So, in practice, even if we answered “yes” in the toy sorting algorithm case, in a real-life case where we have evolved brains, it is far from clear what counts as an output, and hence far from clear what counts as “engaging in the same computation”. As a result, the degree to which functionalism yields multiple realizability is much less clear.

Subjective guilt and war

One of the well-known challenges in accounting for killing in a just war is the thought that even soldiers fighting on a side without justice think they have justice on their side, hence are subjectively innocent, and thus it seems wrong to kill them.

But I wonder if there isn’t an opposite problem. As is well-known, human beings have a very strong visceral opposition to killing. Even those who kill with justice on their side are apt to feel guilty, and it wouldn’t be surprising if often they not only feel guilty but judge themselves to have done wrong. Thus, it could well be that soldiers who kill on both sides of a war have a tendency to be subjectively guilty, even if one of the sides is waging a just war.

Or perhaps things work out this way: Soldiers who kill tend to be subjectively guilty unless they are waging a clearly just war. If so, then those who are on a side without justice are indeed apt to be subjectively guilty, since rarely does a side without justice appear manifestly just. And those who are on a side with justice may very well also be subjectively guilty, unless the war is one of those where justice is manifest (as was the case for the Allies in World War II).

I doubt that things work out all that neatly.

In any case, the above considerations do show that a side with justice has very strong moral reason to make that justice as manifest as possible to the soldiers. And when that is not possible, those in charge should be persons of such evident integrity that it is easy to trust their judgment.

Monday, November 15, 2021

Intrinsic evil

Consider this argument:

  1. An action is intrinsically evil if and only if it is wrong to do no matter what.

  2. In doing anything wrong, one does something (at least) prima facie bad with insufficient moral reason.

  3. No matter what, it is wrong to do something prima facie bad with insufficient moral reason.

  4. So in doing anything wrong, one performs an intrinsically evil action.

This conclusion seems mistaken. Lightly slapping a stranger on a bus in the face is wrong, but not intrinsically wrong, because if a malefactor was going to kill everyone on the bus who wasn’t slapped by you, then you should go and slap everybody. Yet the argument would imply that in lightly slapping a stranger on a bus you do something intrinsically wrong, namely slap a stranger with insufficient moral reason. But it seems mistaken to think that in slapping a stranger lightly you perform an intrinsically evil action.

The above argument threatens to eviscerate the traditional Christian distinction between intrinsic and extrinsic evil. What should we say?

Here is a suggestion. Perhaps we should abandon (1) and instead distinguish between reasons why an action is wrong. Intrinsically evil actions are wrong for reasons that do not depend on consideration of consequences and extrinsically evil actions are wrong but not for any reasons that do not depend on consideration of consequences.

Thus, lightly slapping a stranger with insufficient moral reason is extrinsically evil because any reason that makes it wrong is a reason that depends on consideration of consequences. On the other hand, one can completely explain what makes an act of murder wrong without adverting to consequences.

But isn’t the death of the victim a crucial part of the wrongness of murder, and yet a consequence? After all, if the cause of death is murder, then the death is a consequence of the murder. Fortunately we can solve this: the act is no less wrong if the victim does not die. It is the intention of death, not the actuality of death, that is a part of the reasons for wrongness.

So, when we distinguish between acts made wrong by consequences and wrong acts not made wrong by consequences, by “consequences” we do not mean intended consequences, but only actual or foreseen or risked consequences.

But what if Alice slaps Bob with the intention of producing an on-balance bad outcome? That act is wrong for reasons that have nothing to do with actual, foreseen or risked consequences, but only with her intention. Here I think we can bite the bullet: to slap an innocent stranger with the intention of producing an on-balance bad outcome is intrinsically wrong, just as it is intrinsically wrong to slap an innocent stranger with the intention of causing death.

Note that this would show that an intrinsically evil action need not be very evil. A light slap with the intention of producing an on-balance slightly bad outcome is wrong, but not very wrong. (Similarly, the Christian tradition holds that every lie is intrinsically evil, but some lies are only slight wrongs.)

Here is another advantage of running the distinction in this way, given the Jewish and Christian tradition. If an intrinsically evil action is one that is evil independently of consequences, it could be that such an action could still be turned into a permissible one on the basis of circumstantial factors not based in consequences. And God’s commands can be such circumstantial factors. Thus, when God commands Abraham to kill Isaac, the killing of Isaac becomes right not because of any new consequences, but because of the circumstance of God commanding the killing.

Could we maybe narrow down the scope of intrinsically evil actions even more, by saying that not just consequences, but circumstances in general, aren’t supposed to be among the reasons for wrongness? But if we do that, then most paradigm cases of intrinsically evil actions will fail: for instance, that the victim of a murder is innocent is a circumstance (it is not a part of the agent’s intention).

Trust and scepticism

To avoid scepticism, we need to trust that human epistemic practices and reality match up. This trust is clearly at least a part of a central epistemic virtue.

Now, trusting persons is a virtue, the virtue of faith. But trusting in general, apart from trusting persons, is not. Theism can thus neatly account for how the trusting that is at the heart of human epistemic practices is virtuous: it is an implicit trust in our creator.

Friday, November 12, 2021

Another way out of the metaphysical problem of evil

The metaphysical problem of evil consists in the contradiction between:

  1. Everything that exists is God or is created by God.

  2. God is not an evil.

  3. God does not create anything that is an evil.

  4. There exists an evil.

The classic Augustinian response is to deny (4) by saying that evil “is” just a lack of a due good. This has serious problems with evil positive actions, errors, pains, etc.

Here is a different way out. Say that a non-fundamental object x is an object x such that the proposition that x exists is wholly grounded in some proposition that makes no reference to x. Now we deny (3) and replace it with:

  5. God does not create anything fundamental that is an evil.

How could God create something non-fundamental that is an evil? By a combination of creative acts and refrainings from creative acts whose joint outcome grounds the existence of the non-fundamental evil, while foreseeing without intending the non-fundamental evil. Of course, this requires the kind of story about intention that the Principle of Double Effect uses.

Thus, consider George Shaw’s (initial) erroneous belief that there are no platypuses. God creates George Shaw. He creates Shaw’s belief. He creates platypuses. The belief isn’t an evil. The platypuses aren’t an evil. The combination of the belief and the platypuses is an error. But the combination of the two is not a fundamental entity (even if the belief and the platypuses are). God can intend the belief to exist and the platypuses to exist without intending the combination to exist.

A variant virtue ethic centered on virtues and not persons

I’ve been thinking a bit about the virtue ethical claim that the right (i.e., obligatory) action is one that a virtuous person would do and the wrong one is one that a virtuous person wouldn’t do. I’ve argued in my previous posts that this is a problematic claim, since given either naturalism or the Hebrew Scriptures, it is possible for a virtuous person to do something wrong.

Maybe instead of focusing on the person, the virtue ethicist can focus on the virtues. Here is an option:

  1. An action is wrong if and only if it could not properly (non-aberrantly) flow from the relevant virtues.

This principle is compatible with a virtuous person doing something wrong, as long as that wrong thing doesn’t flow from virtue.

The “properly” in (1) is an “in the right way” condition. Once we have allowed, as I think we should, that a virtuous person can do the wrong thing, we should also allow that a wrong action can flow from virtue in some aberrant way. For instance, we can imagine a wholly virtuous person falling prey to a temptation to brag about being wholly virtuous (and instantly losing the virtue, of course). The bragging flows from the virtue—but aberrantly.

A down-side of (1) is that it is a pretty strong condition on permissibility. One might think that there are some permissible morally neutral actions which can be done by a perfectly virtuous person but which do not flow from their virtue. If we accept (1), then in effect we are saying that there are no morally neutral actions. I think that is the right thing to say.

The big problem with (1) is the “properly”.

Naturalists shouldn't be virtue ethicists

Virtue ethics is committed to this claim:

  1. A choice of A is wrong if and only if a person who had the relevant virtues explanatorily prior to having chosen A and was in these circumstances would not have chosen A.

But (1) implies this generalization:

  2. A person who has the relevant virtues explanatorily prior to a choice never chooses wrongly.

In my previous post I argued that Aristotelian Jews and Christians should deny (2), and hence (1).

Additionally, I think naturalists should deny (1). For we live in a fundamentally indeterministic world given quantum mechanics. If a virtuous person were placed in a position of choosing between aiding and insulting a stranger, there will always be a tiny probability of their choosing to insult the stranger. We shouldn’t say that they wouldn’t insult the stranger, only that they would be very unlikely to do so (this is inspired by Alan Hajek’s argument against counterfactuals).

And (2) itself is dubious, unless we have such a high standard of virtue that very few people have virtues. For in our messy chaotic world, very little is at 100%. Rare exceptions should be expected when human behavior is involved.

(Perhaps a dualist virtue ethicist who does not accept the Hebrew Scriptures could accept (1) and (2), holding that a virtuous soul makes the choices and is not subject to the indeterminacy of quantum mechanics and the chaos of the world.)

There is a natural way out of the above arguments, and that is to change (1) to a probabilistic claim:

  3. A choice of A is wrong if and only if a person who had the relevant virtues explanatorily prior to having chosen A and was in these circumstances would be very unlikely to have chosen A.

But (3) is false. Suppose that Alice is a virtuous person who has a choice to help exactly one of a million strangers. Whichever stranger she chooses to help, she does no wrong. But it is mathematically guaranteed that there is at least one stranger such that her chance of helping them is at most one in a million (for if p_n is her chance of helping stranger number n, then p_1 + ... + p_1000000 ≤ 1, since she cannot help more than one; given that 0 ≤ p_n for all n, it follows mathematically that for some n we have p_n ≤ 1/1000000). So her helping such a stranger is very unlikely, but isn’t wrong.
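The parenthetical pigeonhole step can be sanity-checked numerically. This is a minimal sketch in which a made-up random vector stands in for Alice’s helping chances:

```python
import random

# Pigeonhole check: Alice helps at most one of N strangers, so her helping
# probabilities p_1, ..., p_N are nonnegative and sum to (at most) 1.
# Hence at least one p_n must be at most 1/N.
N = 1_000_000
raw = [random.random() for _ in range(N)]  # hypothetical, made-up chances
total = sum(raw)
p = [x / total for x in raw]               # normalize so the chances sum to 1

# If every p_n exceeded 1/N, the sum would exceed 1, which is impossible.
assert min(p) <= 1 / N
```

Whatever the distribution of the chances, the assertion cannot fail: the bound follows from the sum constraint alone.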

Or for a less weighty case, suppose I say something perfectly morally innocent to start off a conversation. Yet it is very unlikely that a virtuous person would have said so. Why? Because there are so very many perfectly morally innocent ways to start off a conversation, it is very unlikely that they would have chosen the same one I did.

Christians and Jews should not be Aristotelian virtue ethicists

If virtue ethics is correct:

  1. A choice is wrong if and only if a person with the relevant virtues and in these circumstances wouldn’t have made that choice. (Premise)

If Aristotelian virtue ethics is correct:

  2. An adult lacking a virtue is defective. (Premise)

  3. Humans became defective because of the choice of Adam and Eve to eat the forbidden fruit. (Premise)

And it seems that:

  4. Adam and Eve were adult humans when they chose to eat the forbidden fruit. (Premise)

Thus it seems:

  5. When Adam and Eve chose to eat the forbidden fruit, they were not lacking relevant virtues. (By 2–4)

  6. Thus, persons (namely Adam and Eve!) with the relevant virtues and in their circumstances did choose to eat the forbidden fruit. (By 5)

  7. Thus, their choice to eat the forbidden fruit wasn’t wrong. (1 and 6)

  8. But their choice was wrong. (Premise)

  9. Contradiction!

Here is one thing the classic virtue ethicist can question about this argument: the derivation of (5) depends on how we read premise (1). We could read (1) as:

  10. A choice of A is wrong if and only if a person who had the relevant virtues explanatorily prior to having chosen A and was in these circumstances would not have chosen A

or as:

  11. A choice of A is wrong if and only if a person who had the relevant virtues while having chosen A and was in these circumstances would not have chosen A.

If we opt for (10), the derivation of (5) works, and the argument stands. But if we opt for (11) then we can say that as soon as Adam and Eve chose to eat the fruit, they no longer counted as virtuous.

Could the virtue ethicist thus opt for (11) in place of (10)? I don’t think so. It seems central to virtue ethics that the right choices are ones that result from virtue. And that is what (10) captures. To a great extent (11) would trivialize virtue ethics, in that obviously in doing a bad thing one isn’t virtuous.

Ethics and multiverse interpretations of quantum mechanics

Somehow it hasn’t occurred to me until yesterday that quantum multiverse theories (without the traveling minds tweak) undercut half of ethics, just as Lewis’s extreme modal realism does.

For whatever we do, total reality is the same, and hence no suffering is relieved, no joy is added, etc. The part of ethics where consequences matter is all destroyed. There is no point to preventing any evil, since doing so just shifts which branch of the multiverse one inhabits.

At most what is left of ethics is agent-centered stuff, like deontology. But that’s only about half of ethics.

Moreover, even the agent-centered stuff may be seriously damaged, depending on how one interprets personal identity in the quantum multiverse.

Consider three theories.

On the first, I go to all the outgoing branches, with a split consciousness. On this view, no matter what, there will be branches where I act well and branches where I act badly. So much or all of the agent-centered parts of ethics will be destroyed.

On the second, whenever branching happens, the persons in the branches are new persons. If so, then there are no agent-centered outcomes—if I am deliberating between insulting or comforting a suffering person, no matter what, I will do neither, but instead a descendant of me will insult and another descendant will comfort. Again, it’s hard to fit this with the agent-centered parts of ethics.

The third is the infinitely many minds theory on which there are infinitely many minds inhabiting my body, and whenever a branching happens, infinitely many move into each branch. In particular, I will move into one particular branch. On this theory, if somehow I can control which branch I go down (which is not clear), there is room for agent-centered outcomes. But this is not the most prominent of the multiverse theories.

Thursday, November 11, 2021

Forwards causal closure and dualist theories of perception

A standard dualist theory of perception goes like this:

  1. You sense stuff physically, the data goes to the brain, the brain processes the data, and out of the processed data produces qualia.

There is a lot of discussion of the “causal closure” of the physical. What people generally mean by this is that the physical is causally backwards-closed: the cause of a physical thing is itself physical. This is a controversial doctrine, not least because it seems to imply that some physical things are uncaused. But what doesn’t get discussed much is a more plausible doctrine we might call the forwards causal closure of the physical: physical causes only have physical effects. Forwards causal closure of the physical is, I think, a very plausible candidate for a conceptual truth. The physical isn’t spooky—and it is spooky to have the power of producing something spooky. (One could leave this at this conceptual argument, or one could add the scholastic maxim that one cannot cause what one does not in some sense have.)

By forwards closure, on the standard dualist theory, the brain is not a physical thing. This is a problem. It is supposed to be one of the advantages of the standard dualist theory that it is compatible with property dualism on which people are physical but have non-physical properties. But if the brain is not physical, there is no hope for people to be physical! Personally, I don’t mind losing property dualism, but it sure sounds absurd to hold that the brain is not physical.

Recently, I have been thinking about a non-causal dualist theory that goes like this:

  1. You sense stuff physically, the data goes to the brain, the brain processes the data, and the soul “observes” the brain’s processed data. (Or, perhaps more precisely, the person "feels" the neural data through the soul.)

To expand on this, what makes one feel pain is not the existence of a pain quale, but a sui generis “observation” relation between the soul and the brain’s processed data. This observation relation is not caused by the data, but takes place whether there is data there or not (if there isn’t, we have a perceptual blank slate). The soul is not changed intrinsically by the data: the “observation” of a particular datum—say, a datum representing a sharp pain in a toe—is an extrinsic feature of the soul. Note that unlike the standard theory, this up-front requires substance dualism of some sort, since the observing entity is not physical given the sui generis nature of the “observation” relation.

The non-causal dualist theory allows one to maintain forwards closure of the physical and the physicality of the brain. For the brain doesn’t cause a non-physical effect. The brain simply gets “observed”.

It is however possible that the soul causes an effect in the brain—for instance, the “observation” relation may trigger quantum collapse. Thus, the theory may violate backwards closure. And that’s fine by me. Backwards closure does not follow conceptually from the concept of the physical—a physical thing doesn’t become spooky for having a spooky cause.

There is a difficulty here, however. Suppose that the soul acts on the “observed” data, say by causing one to say “You stepped on my foot.” Wouldn’t we want to say that the brain data correlated with the pain caused one to say “You stepped on my foot”?

I think this temptation is resistible. Ridiculously oversimplifying, we can imagine that the soul has a conditional causal power to cause an utterance of “You stepped on my foot” under the condition of “observing” a certain kind of pain-correlated neural state. And while it is tempting to say that the satisfied conditions of a conditional causal power cause the causal power to go off, we need not say that. We can, simply, say that the causal power goes off, and the cause is not the condition, but the thing that has the causal power, in this case the soul.

On this story, if you step on my foot, you don’t cause me to say “You stepped on my foot”, though you do cause the condition of my conditional causal power to say so. We might say that in an extended sense there is a “causal explanation” of my utterance in terms of your stepping, and your stepping is “causally prior” to my utterance, even though this causal explanation is not itself an instance of causation simpliciter. If so, then all the stuff I say in my infinity book on causation should get translated into the language of causal explanation or causal priority. Or we can just say that there is a broad and a narrow sense of “cause”, and in the broad sense you cause me to speak and in the narrow you do not.

I think there is a very good theological reason to think this makes sense. For we shouldn’t say that our actions cause God to act. The idea of causing God to do anything seems directly contrary to divine transcendence. God is beyond our causal scope! Just as by forwards closure a physical thing cannot cause a spiritual effect, so too by transcendence a created thing cannot cause a divine effect. Yet, of course, our actions explain God’s actions. God answers prayers, rewards the just and punishes the unrepentant wicked. There is, thus, some sort of quasi-causal explanatory relation here that can be used just as much for non-causal dualist perception.

Wednesday, November 10, 2021

Online talk: A Norm-Based Design Argument

Thursday November 11, 2021, at 4 pm Eastern (3 pm Central), the Rutgers Center for Philosophy of Religion and the Princeton Project in Philosophy of Religion present a joint colloquium: Alex Pruss (Baylor), "A Norm-Based Design Argument".

The location will be

"The whole is bigger than the part"

Some people don’t like Cantorian ways of comparing the sizes of sets because they want to have a “whole is bigger than the (proper) part” principle, the denial of which they consider counterintuitive.

Suppose that there is a relation ≤ which provides a way of comparing the sizes of sets of real numbers (or just the sizes of countable sets of real numbers) such that:

  a. the comparison satisfies the “whole is bigger than the part” principle, so that if A is a proper subset of B, then A < B

  b. there are no incommensurable sets: given any A and B, at least one of A ≤ B and B ≤ A holds

  c. the relation ≤ is transitive and reflexive.

Then the Banach-Tarski paradox follows from (a)–(c) without any use of the Axiom of Choice: there is a way to decompose a ball into a finite number of pieces and move them around to form two balls of the same size as the original. And Banach-Tarski feels like a direct violation of the "whole is bigger" principle!

Thus, intuitive as the “whole is bigger” principle is, the price of being able to compare the sizes of sets of real numbers in conformity with the principle is quite high. I suspect that most people who think that denying the “whole is bigger” principle is counterintuitive also think Banach-Tarski is super problematic.

For our next observation, let’s add one more highly plausible condition:

  d. the relation ≤ is weakly invariant under reflections of the real line: for any reflection ρ, we have A ≤ B if and only if ρA ≤ ρB.

Proposition: Conditions (a)–(d) are contradictory.

So, I think we should deny that, in the context of comparing the number of elements of a set, the whole is bigger than the proper part.

Proof of Proposition: Write A ∼ B iff A ≤ B and B ≤ A. Then I claim we have A ∼ ρA for any reflection ρ. For otherwise we either have A < ρA or ρA < A by (b). If we have A < ρA, then we also have ρA < ρ²A by (d), and since ρ²A = A, we have ρA < A, a contradiction. If we have ρA < A, then we have ρ²A < ρA by (d), and hence A < ρA, again a contradiction.

Since any translation τ can be made out of two reflections, it follows that A ∼ τA as well. Let τ be translation by one unit to the right. Then {0, 1, 2, ...} ∼ τ{0, 1, 2, ...} = {1, 2, 3, ...}, which contradicts (a).
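The claim that any translation is a composition of two reflections can be made explicit. Writing ρ_a for reflection of the real line about the point a:

```latex
% Reflection about a, then composition of two reflections:
\[
\rho_a(x) = 2a - x, \qquad
\rho_b(\rho_a(x)) = 2b - (2a - x) = x + 2(b - a).
\]
```

Choosing b − a = 1/2 (say a = 0 and b = 1/2) gives translation by one unit, so τ = ρ_b ∘ ρ_a, and A ∼ ρ_a A ∼ ρ_b ρ_a A = τA by two applications of the reflection claim.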

Monday, November 8, 2021

Infinite Dedekind finite sets

Most paradoxes of actual infinities, such as Hilbert’s Hotel, depend on the intuition that:

  1. A collection is bigger than any proper subcollection.

A Dedekind infinite set is one that has the same cardinality as some proper subset of itself. In other words, a Dedekind infinite set is precisely one that violates (1).
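For a concrete witness, the successor map n ↦ n + 1 puts the naturals in one-to-one correspondence with the proper subset {1, 2, 3, ...}. A minimal sketch, checking the two defining features on a finite initial segment (illustrative only, of course, since no finite check establishes the claim for all of ℕ):

```python
# The successor map witnesses that the naturals are Dedekind infinite:
# it is an injection from N onto the proper subset {1, 2, 3, ...}.
def successor(n: int) -> int:
    return n + 1

sample = range(1000)                    # finite illustrative sample
images = [successor(n) for n in sample]

assert len(set(images)) == len(images)  # injective: no two inputs collide
assert 0 not in images                  # image omits 0, so it is a proper subset
```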

In Zermelo-Fraenkel (ZF) set theory, it is easy to prove that any Dedekind infinite set is infinite. More interestingly, assuming the consistency of ZF, there are models of ZF with infinite sets that are Dedekind finite.

It is easy to check that if A is a Dedekind finite set, then A and every subset of A satisfy (1). Thus an infinite but Dedekind finite set escapes most if not all of the standard paradoxes of infinity. Perhaps enemies of actual infinity should thus object only to Dedekind infinities, not to all infinities?

However, infinite Dedekind finite sets are paradoxical in their own special way: they have no countably infinite subsets—no subsets that can be put into one-to-one correspondence with the natural numbers. You might think this is absurd: shouldn’t you be able to take one element of an infinite Dedekind finite set, then another, then another, and since you’ll never run out of elements (if you did, the set would be finite), you’d form a countably infinite sequence of elements? But, no: the problem is that repeating the “taking” requires the Axiom of Choice, and infinite Dedekind finite sets only live in set-theoretic universes without the Axiom of Choice.

In fact, I think infinite Dedekind finite sets are much more paradoxical than run-of-the-mill Dedekind infinite sets.

Do we learn anything philosophical here? I am not sure, but perhaps. If infinite Dedekind finite sets are extremely paradoxical, then by the same token (1) seems an unreasonable condition in the infinite case. For Dedekind finitude is precisely defined by (1).