Monday, January 10, 2022

A horizontal aspect to transubstantiation

The Eucharist has a vertical dimension of our union with Christ and a horizontal dimension of our union with our fellow Christians. The doctrine of transubstantiation ensures the vertical dimension in an obvious way. But yesterday, while at a Thomistic Institute retreat on the Eucharist, I was struck by the way that transubstantiation also deeply enhances the horizontal dimension of the Eucharist as a common meal.

Normally, in a common meal we eat together. Sometimes we eat portions cut from one loaf or carved from one animal, and that makes the meal even more unifying. But according to transubstantiation, in the Eucharist we have a common meal where miraculously we each eat not just a portion of the same food, but the numerically very same portion: the whole of Christ. That is as deep a unity as we can have in eating.

Consider how there is less unity on the main alternatives to transubstantiation:

  • On symbolic views, we eat and drink different portions of bread and wine with the same symbolism.

  • On consubstantiation, we eat the same Christ along with different portions of bread and wine.

  • On Leibnizian views (where the bread and wine become a part of Christ), we eat different parts of the same Christ.

The transubstantiation view has as much substantial unity in what is eaten as is logically possible. (Though there is some accidental disunity, in that the accidents—shape, color, position—are different for different communicants.)

Monday, December 27, 2021

Essentially evil organizations

Start with this argument:

  1. Everything that exists is God or is created and sustained by God.

  2. God does not create and sustain anything essentially evil.

  3. The KKK is essentially evil.

  4. The KKK is not God.

  5. So, the KKK does not exist.

Now we have a choice-point. We could say:

  6. If the KKK does not exist, no organization exists.

  7. So, no organization exists.

After all, it may seem reasonable to think that the ontology of social groups should not depend on whether the groups are good, neutral or bad.

But I think it’s not unreasonable to deny (6), and to say that the being of a social group is defined by its teleology, and there is no teleology without a good telos. A similar move would allow for a way out of the previous post’s argument against artifacts.

Thursday, December 23, 2021

Yet another argument against artifacts

  1. If any complex artifacts really exist, instruments of torture really exist.

  2. Instruments of torture are essentially evil.

  3. Nothing that is essentially evil really exists.

  4. So, instruments of torture do not really exist. (2 and 3)

  5. So, no complex artifacts really exist. (1 and 4)

One argument for (3) is from the privation theory of evil.

Another is a direct argument from theism:

  1. Everything that really exists is created by God.

  2. Nothing created by God is essentially evil.

  3. So, nothing that is essentially evil really exists.

Tuesday, December 21, 2021

Divine simplicity and divine knowledge of contingent facts

One of the big puzzles about divine simplicity which I have been exploring is that of God’s knowledge of contingent facts. A sloppy way to put the question is:

  1. How can God know p in one world and not know p in another, even though God is intrinsically the same in both worlds?

But that’s not really a question about divine simplicity, since the same is often true for us. Yesterday you knew that today the sun would rise. Yet there is a possible world w2 which up to yesterday was exactly the same as our actual world w1, but due to a miracle or weird quantum stuff, the sun did not rise today in w2. Yesterday, you were intrinsically the same in w1 and w2, but only in w1 did you know that today the sun would rise. For, of course, you can’t know something that isn’t true.

So perhaps the real question is:

  1. How can God believe p in one world and not believe p in another, even though God is intrinsically the same in both worlds?

I wonder, however, if there isn’t a possibility of a really radical answer: it is false that God believes p in one world and not in another, because in fact God doesn’t have any beliefs in any world—he only knows.

In our case, belief seems to be an essential component of knowledge. But God’s knowledge is only analogical to our knowledge, and hence it should not be a big surprise if the constitutive structure of God’s knowledge is different from our knowledge.

And even in our case, it is not clear that belief is an essential component of knowledge. Anscombe famously thought that there was such a thing as intentional knowledge—knowledge of what you are intentionally doing—and it seems that on her story, the role played in ordinary knowledge by belief was played by an intention. If she is right about that, then an immediate lesson is that belief is not an essential component of knowledge. And in fact even the following claim would not be true:

  1. If one knows p, then one believes or intends p.

For suppose that I intentionally know that I am writing a blog post. Then I presumably also know that I am writing a blog post on a sunny day. But I don’t intentionally know that I am writing a blog post on a sunny day, since the sunniness of the day is not a part of the intention. Instead, my knowledge is based in part on the intention to write a blog post and in part on the belief that it is a sunny day. Thus, knowledge of p can be based on belief that p, intention that p, or a complex combination of belief and intention. But once we have seen this, then we should be quite open to a lot of complexity in the structure of knowledge.

Of course, Anscombe might be wrong about there being such a thing as knowledge not constituted by belief. But her view is still intelligible. And its very intelligibility implies a great deal of flexibility in the concept of knowledge. The idea of knowledge without belief is not nonsense in the way that the idea of a fork without tines is.

The same point can be supported in other ways. We can imagine concluding that we have no beliefs, but we have other kinds of representational states, such as credences, and that we nonetheless have knowledge. We are not in the realm of tineless forks here.

Now, it is true that all the examples I can think of for other ways that knowledge could be constituted in us besides being based on belief still imply intrinsic differences given different contents (beyond the issues of semantic externalism due to twinearthability). But the point is just that knowledge is a flexible enough concept that we should be open to God having something analogous to our knowledge but without any contingent intrinsic state being needed. (One model of this possibility is here.)

Thursday, December 16, 2021

When truth makes you do less well

One might think that being closer to the truth is guaranteed to get one to make better decisions. Not so. Say that a probability assignment p2 is at least as true as a probability assignment p1 at a world or situation ω provided that for every event E holding at ω we have p2(E)≥p1(E) and for every event E not holding at ω we have p2(E)≤p1(E). And say that p2 is truer than p1 provided that strict inequality holds in at least one case.

Suppose that a secret integer has been picked among 1, 2 and 3, and p1 assigns the respective probabilities 0.5, 0.3, 0.2 to the three possibilities while p2 assigns them 0.7, 0.1, 0.2. Then if the true situation is 1, it is easy to check that p2 is truer than p1. But now suppose that you are offered a choice between the following games:

  • W1: on 1 win $2, on 2 win $1100, and on 3 win $1000.

  • W2: on 1 win $1, on 2 win $1000, and on 3 win $1100.

If you are going by p1, you will choose W1 and if you are going by p2, you will choose W2. But if the true number is 1, you would be better off picking W1 (getting $2 instead of $1), so the truer probabilities will lead to a worse payoff. C’est la vie.

Say that a scoring rule for probabilities is truth-directed if it never assigns a poorer score for a truer set of probabilities. The above example shows that a proper scoring rule need not be truth-directed. For let s(p)(n) be the payoff you will get if the secret number is n and you make your decision between W1 and W2 rationally on the basis of probability assignment p (with ties broken in favor of W1, say). Then s is a proper (accuracy) scoring rule but the above considerations show that s(p2)(1)<s(p1)(1), even though p2 is truer at 1. In fact, we can get a strictly proper scoring rule that isn’t truth-directed if we want: just add a tiny multiple of a Brier accuracy score to s.
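Here is a minimal Python sketch of the arithmetic behind this example and the induced decision-based score s; nothing in it goes beyond the numbers already given above.

```python
# A minimal check of the worked example: which wager each credence picks,
# and the decision-based score s(p)(n) it induces.
p1 = {1: 0.5, 2: 0.3, 3: 0.2}
p2 = {1: 0.7, 2: 0.1, 3: 0.2}
W1 = {1: 2, 2: 1100, 3: 1000}
W2 = {1: 1, 2: 1000, 3: 1100}

def expected(p, wager):
    """Expected payoff of a wager under probability assignment p."""
    return sum(p[n] * wager[n] for n in p)

def s(p):
    """The payoff function of whichever wager an expected-payoff maximizer
    with credences p would choose (ties broken in favor of W1)."""
    return W1 if expected(p, W1) >= expected(p, W2) else W2

print(expected(p1, W1), expected(p1, W2))   # about 531 vs 520.5, so p1 picks W1
print(expected(p2, W1), expected(p2, W2))   # about 311.4 vs 320.7, so p2 picks W2
print(s(p1)[1], s(p2)[1])                   # 2 vs 1: the truer p2 scores worse at 1
```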

Intuitively we would want our scoring rules to be both proper and truth-directed. But given that sometimes we are pragmatically better off for having less true probabilities, it is not clear that scoring rules should be truth-directed. I find myself of divided mind in this regard.

How common is this phenomenon? Roughly it happens whenever the truer and less-true probabilities disagree on ratios of probabilities of non-actual events.

Proposition: Suppose two probability assignments p1 and p2 are such that there are events E1 and E2 with probabilities strictly between 0 and 1, and a situation ω1 in neither event, such that the ratio p1(E1)/p1(E2) is different from the ratio p2(E1)/p2(E2). Then there are wagers W1 and W2 such that p1 prefers W1 and p2 prefers W2, but W1 pays better than W2 at ω1.

Monday, December 13, 2021

An introduction to simple motion detection with Python and OpenCV

One of my kids really liked a cool exhibit at a children's museum: a camera pointed down a hallway, hooked up to a screen, and if you waved your hands in certain areas you got to play music. So I decided to make something like this myself. I found this tutorial and with its help produced some simple code. I then wrote up an Instructable explaining how the code works.
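The Instructable has the actual write-up. Purely as an illustration of the general idea (not the code from the tutorial or the Instructable), here is a minimal frame-differencing sketch: compare each webcam frame with the previous one and count how many pixels changed inside a trigger region. The region, the thresholds, and the print call standing in for playing a sound are all arbitrary placeholders.

```python
# A minimal frame-differencing sketch (not the Instructable's code): compare each
# webcam frame to the previous one and count changed pixels in a trigger region.
import cv2

cap = cv2.VideoCapture(0)              # default webcam
prev = None
region = (100, 100, 200, 200)          # x, y, w, h of an arbitrary "trigger" area

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)
    if prev is not None:
        diff = cv2.absdiff(prev, gray)
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        x, y, w, h = region
        if cv2.countNonZero(mask[y:y+h, x:x+w]) > 500:   # enough changed pixels?
            print("motion in region")  # here one would trigger a sound instead
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    prev = gray
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```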



Truth directed scoring rules on an infinite space

A credence assignment c on a space Ω of situations is a function from the powerset of Ω to [0, 1], with c(E) representing one’s degree of belief in E ⊆ Ω.

An accuracy scoring rule s assigns to a credence assignment c on a space Ω and situation ω the epistemic utility s(c)(ω) of having credence assignment c when in truth we are in ω. Epistemic utilities are extended real numbers.

The scoring rule is strictly truth directed provided that if credence assignment c2 is strictly truer than c1 at ω, then s(c2)(ω)>s(c1)(ω). We say that c2 is strictly truer than c1 at ω if and only if for every event E that happens at ω, c2(E)≥c1(E), for every event E that does not happen at ω, c2(E)≤c1(E), and in at least one case there is strict inequality.

A credence assignment c is extreme provided that c(E) is 0 or 1 for every E.

Proposition. If the probability space Ω is infinite, then there is no strictly truth directed scoring rule defined for all credences, or even for all extreme credences.

In fact, there is not even a scoring rule that is strictly truth directed when restricted to extreme credences.

This proposition uses the following result that my colleague Daniel Herden essentially gave me a proof of:

Lemma. If PX is the power set of X, then there is no function f : PX → X such that f(A)≠f(B) whenever A ⊂ B.

Now, we prove the Proposition. Fix ω ∈ Ω. Let s be a strictly truth directed scoring rule defined for all extreme credences. For any subset A of PΩ, define cA to be the extreme credence function that is correct at ω at all and only the events in A, i.e., cA(E)=1 if and only if ω ∈ E and E ∈ A or ω ∉ E and E ∉ A, and otherwise cA(E)=0. Note that cB is strictly truer than cA if and only if A ⊂ B. For any subset A of PΩ, let f(A)=s(cA)(ω).

Then f(A)<f(B) whenever A ⊂ B. Hence f is a strictly monotonic function from PPΩ to the extended reals. Now, if Ω is infinite, then the extended reals can be embedded in PΩ (by the axiom of countable choice, Ω contains a countably infinite subset, and hence PΩ has cardinality at least that of the continuum). Composing f with such an embedding yields a function from PPΩ to PΩ that takes different values on A and B whenever A ⊂ B, which is exactly the kind of function whose existence the Lemma denies: a contradiction.

Note: This suggests that if we want strict truth directedness of a scoring rule, the scoring rule had better take values in a set whose cardinality is greater than that of the continuum, e.g., a sufficiently large field of hyperreals.

Proof of Lemma (essentially due to Daniel Herden): Suppose we have f as in the statement of the Lemma. Let ON be the class of ordinals. Define a function F : ON → X by transfinite induction:

  • F(0)=f(∅)

  • F(α)=f({F(β):β < α}) whenever α is a successor or limit ordinal.

I claim that this function is one-to-one.

Let Hα = {F(δ):δ < α}.

Suppose F is one-to-one on β for all β < α. If α is a limit ordinal, then it follows that F is one-to-one on α. Suppose instead that α is the successor of β. I claim that F is one-to-one on α, too. The only possible failure of injectivity on α would be if F(β)=F(γ) for some γ < β. Now, F(β)=f(Hβ) and F(γ)=f(Hγ). Since F is one-to-one on β, we have Hγ ⊂ Hβ, and hence f(Hβ)≠f(Hγ) by the assumption of the Lemma, contradicting F(β)=F(γ). So, F is one-to-one on α, and hence one-to-one on ON by transfinite induction.

But of course we can’t embed ON in a set (Burali-Forti).

Friday, December 10, 2021

Unforgivable offenses that aren't all that terrible

When we talk of something as an unforgivable offense, we usually mean it is really a terrible thing. But if God doesn’t exist, then some very minor things are unforgivable and some very major things are forgivable.

Suppose that I read on the Internet about a person in Ruritania who has done something I politically disapprove of. I investigate to find out their address, and mail them a package of chocolates laced with a mild laxative. The package comes back to me from the post office, because my prospective victim was fictional and there is no such country as Ruritania.

If God doesn’t exist, I have done something unforgivable and beyond punishment. For there is no one with the standing to either forgive or punish me (I assume that the country I live in has a doctrine of impossible attempts on which attempts to harm non-existent persons are not legally punishable). Yet much worse things than this have been forgiven by the mercy of victims.

I ought to feel guilty for my attempt to make the life of my Ruritanian nemesis miserable. And if there is no God, there is no way out of guilt open to me: I cannot be forgiven nor can the offense be expiated by punishment.

The intuition that at least for relatively minor offenses there is an appropriate way to escape from guilt, thus, implies the existence of God—a being such that all offenses are ultimately against him.

Thursday, December 9, 2021

Yet another account of life

I think a really interesting philosophical question is the definition of life. Standard biological accounts fail to work for God and angels.

Here is a suggestion:

  • x has life if and only if it has a well-being.

For living things, one can talk meaningfully of how well or poorly off they are. And that’s what makes them living things.

I think this is a simple and attractive account. I don’t like it myself, because I am inclined to think that everything has a well-being—even fundamental particles. But for those who do not have such a crazy view, I think it is an attractively simple solution to a deep philosophical puzzle.

In search of real parthood

In contemporary mereology, it is usual to have two parthood relations: parthood and proper parthood. On this orthodoxy, it is trivially true that each thing is a part of itself and that nothing can be a proper part of itself.

I feel that this orthodoxy has failed to identify the truly fundamental mereological relation.

If it is trivial that each thing is a part of itself, then that suggests that parthood is a disjunctive relation: x is a part of y if and only if x = y or x is a part* of y, where parthood* is a more fundamental relation. But what then is parthood*? It is attractive to identify it with proper parthood. But if we do that, we can now turn to the trivial claim that nothing can be a proper part of itself. The triviality of this claim suggests that proper parthood is a conjunctive relation, namely a conjunction of distinctness with some parthood relation. And on pain of circularity, that parthood relation cannot be the disjunctively defined parthood we started with.

In other words, I find it attractive to think that there is some more fundamental relation than either of the two relations of contemporary mereology. And once we have that more fundamental relation, we can define contemporary mereological parthood as the disjunction of the more fundamental relation with identity and contemporary mereological proper parthood as the conjunction of the more fundamental relation with distinctness.

But I am open to the possibility that the more fundamental relation just is parthood or proper parthood, in which case, respectively, the claim that everything is a part of itself or the claim that nothing is a proper part of itself turns out to be non-trivial.

I will call the more fundamental relation “real parthood”. It is a relation that underlies paradigmatic instances of proper parthood. And now genuine metaphysical questions open up about identity, distinctness and real parthood. We have three possibilities:

  1. Necessarily, each thing is a real part of itself.

  2. Necessarily, nothing is a real part of itself.

  3. Possibly something is a real part of itself and possibly something is not a real part of itself.

If (1) is true, then real parthood is necessarily coextensive with contemporary mereological parthood. If (2) is true, then real parthood is necessarily coextensive with contemporary mereological proper parthood.

My own guess is that if there is such a thing as parthood at all, then (3) is true.

For the more fundamental a relation, the more I want to be able to recombine where it holds. Why shouldn’t God be able to induce the relation between two distinct things or refuse to induce it between a thing and itself? And it’s really uncomfortable to think that whatever the real parthood relation is, God has to be in that relation to himself.

Perhaps, though, the real parthood relation is a kind of dependency relation. If so, then since nothing can be dependent on itself, we couldn’t have a thing being a real part of itself, and real parthood would be coextensive with proper parthood.

All this is making me think that either real parthood is necessarily coextensive with proper parthood, or it is not necessarily coextensive with either of the two relations of contemporary mereology.

Monday, December 6, 2021

Samuel Clarke on our ignorance of the essence of God

At times I am made uncomfortable by this objection to arguments for the existence of God: it feels like there is something fishy about inferring the existence of a being about which we know so very little. It may be that theism is the only reasonable explanation of the universe’s existence, but if we know so very little about that explanation, can the inference to the truth of that explanation be a genuine version of inference to best explanation?

Newton's disciple Samuel Clarke has a nice answer to this objection:

There is not so mean and contemptible a plant or animal, that does not confound the most enlarged understanding upon earth; nay, even the simplest and plainest of all inanimate beings have their essence or substance hidden from us in the deepest and most impenetrable obscurity.

In other words, all our ordinary day-to-day inferences are to things whose essence is hidden.

It may be thought that now that we know about DNA, we do know the essences of plants and animals. But even if that is true, which I am sceptical of, it doesn’t matter: for belief in plants and animals was quite reasonable even before our superior science. And even today, our knowledge of the essences of the fundamental entities of physics (e.g., particles, fields, wavefunctions) is basically nil. All we know is some facts about the effects of these entities.

Thursday, December 2, 2021

Misleadingness simpliciter

It is quite routine that learning a truth leads to rationally believing new falsehoods. For we all rationally believe many falsehoods. Suppose I rationally believe a falsehood p and I don’t believe a truth q. Then, presumably, I don’t believe the conjunction of p and q. But suppose I learn q. Then, typically, I will rationally come to believe the conjunction of p and q, a falsehood I did not previously believe.

Thus there is a trivial sense in which every truth I learn is misleading. But a definition of misleadingness on which every truth is misleading doesn’t seem right. Or at least it’s not right to say that every truth is misleading simpliciter. What could misleadingness simpliciter be?

In a pair of papers (see references here) Lewis and Fallis argue that we should assign epistemic utilities to our credences in such a way that conditioning on the truth should never be bad for us epistemically speaking—that it should not decrease our actual epistemic utility.

I think this is an implausible constraint. Suppose a highly beneficial medication has been taken by a billion people. I randomly sample a hundred thousand of these people and see what happened to them in the week after receiving the medication. Now, out of a billion people, we can expect about two hundred thousand to die in any given week. Suppose that my random sampling is really, really unlucky, and I find that fifty thousand of the people in my sample died within a week of taking the medication. Completely coincidentally, of course, since as I said the medication is highly beneficial.

Based on my data, I rationally come to believe the importantly false claim that the medication is very harmful. I also come to believe the true claim that half of my random sample died within a week of taking the medication. But while that claim is true, it is quite unimportant except as misleading evidence for the harmfulness of the medication. It is intuitively very plausible that after learning the truth about half of the people in my sample dying, I am worse off epistemically.

It seems clear that in the medication case, my data is true and misleading in a non-trivial way. This suggests a definition of misleadingness simpliciter:

  • A proposition p is misleading simpliciter if and only if one’s overall epistemic utility goes down when one updates on p.

And this account of misleadingness is non-trivial. If we measure epistemic utility using strictly proper scoring rules, and if our credences are consistent, then the expected epistemic value of updating on the outcome of a non-trivial observation is positive. So we should not expect the typical truth to be misleading in the above sense. But some are misleading.
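Here is a toy numerical sketch of such a case (an invented twelve-world space, not the medication example), assuming we measure epistemic utility by the negative Brier penalty over the singleton hypotheses, which is strictly proper. The proposition E = {1, 2} is true at the actual world 1, yet conditioning on it concentrates nearly all the credence on the false world 2 and lowers the score.

```python
# Toy illustration: a true proposition that is misleading simpliciter, with epistemic
# utility measured by the negative Brier penalty over singleton hypotheses.
actual = 1
worlds = list(range(1, 13))
prior = {w: (0.01 if w == 1 else 0.09) for w in worlds}   # sums to 1

def utility(p):
    return -sum((p[w] - (1.0 if w == actual else 0.0)) ** 2 for w in worlds)

E = {1, 2}                      # true at the actual world, but concentrated on world 2
pE = sum(prior[w] for w in E)
posterior = {w: (prior[w] / pE if w in E else 0.0) for w in worlds}

print(utility(prior))           # about -1.07
print(utility(posterior))       # about -1.62: updating on the truth E lowered the utility
```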

From this point of view, Lewis and Fallis are making a serious mistake: they are trying to measure epistemic utilities in such a way as to rule out the possibility of misleading truths.

By the way, I think I can prove that for any measure of epistemic utility obtained by summing a single strictly proper score across all events, there will be a possibility of misleadingness simpliciter.

Final note: We don’t need to buy into the formal mechanism of epistemic utilities to go with the above definition. We could just say that something is misleading iff coming to believe it would rationally make one worse off epistemically.

Wednesday, December 1, 2021

Investigative scoring rules

Let s be an accuracy scoring rule on a finite probability space. Thus, s(P) is a random variable measuring how close a probability assignment P is to the truth. Here are two reasonable conditions on the rule (the name for the second is made up):

  1. Propriety: EPs(Q)≤EPs(P) for any distinct probability assignments P and Q.

  2. Investigativeness: EPs(P)≤P(A)EPAs(PA)+P(Ac)EPAcs(PAc) whenever 0 < P(A)<1.

where EP is expected value with respect to P, PA is short for P(⋅|A), and Ac is the complement of A. Propriety says that if we are trying to maximize expected accuracy, we will never have reason to evidencelessly switch to a different credence. Investigativeness says that expected accuracy maximization never requires one to close one’s eyes to evidence because the expected accuracy after conditionalizing on learning whether A holds is at least as good as the currently expected accuracy. And we have strict versions of the two conditions provided the inequalities are always strict.

It is well-known that propriety implies investigativeness, and ditto for the strict variants.

One might guess that the other direction holds as well: that investigativeness implies propriety. But (perhaps surprisingly) not! In fact, strict investigativeness does not imply propriety.

Let s(P) be the following score: s(P)(w)=|{A : P(A)=1 and w ∈ A}|. In other words, s(P) measures how many true propositions P assigns probability 1 to. It is easy to see that s(PA)≥s(P) everywhere on A, and ditto for Ac in place of A, so the right-hand side in (2) is at least as big as P(A)EPAs(P)+P(Ac)EPAcs(P)=EPs(P).

But propriety does not hold as long as our probability space has at least two points. For let P be any regular probability—one that assigns a non-zero value to every non-empty set—and let Q be any probability concentrated at one point w0. Then s(P)=1 everywhere (the only subset P assigns probability 1 to is the whole space) while EPs(Q)≥1 + P({w0}) > 1 (since Q assigns probability 1 to {w0} and to the whole space), and so we don’t have propriety.
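In the finite case these claims are easy to check by brute force. Here is a small sanity-check sketch on a three-point space (the particular regular P and point-mass Q are arbitrary choices; the code only illustrates the claims above):

```python
# Brute-force check of the counting score on a three-point space, using exact
# fractions so that "probability exactly 1" is well defined.
from fractions import Fraction as F
from itertools import chain, combinations

Omega = [0, 1, 2]
events = [frozenset(c) for c in chain.from_iterable(combinations(Omega, r) for r in range(4))]

def prob(p, A):                  # probability of an event
    return sum(p[w] for w in A)

def score(p, w):                 # s(P)(w): number of true events given probability 1
    return sum(1 for A in events if prob(p, A) == 1 and w in A)

def expect(p, f):                # E_P f
    return sum(p[w] * f(w) for w in Omega)

P = {0: F(1, 2), 1: F(1, 3), 2: F(1, 6)}    # a regular credence
Q = {0: F(1), 1: F(0), 2: F(0)}             # concentrated at the point 0

# Propriety fails: by P's own lights, the point-mass Q scores better in expectation.
print(expect(P, lambda w: score(P, w)), expect(P, lambda w: score(Q, w)))   # 1 vs 3

# Investigativeness holds: conditioning on any non-trivial A never lowers expected score.
for A in events:
    pA = prob(P, A)
    if 0 < pA < 1:
        PA = {w: (P[w] / pA if w in A else F(0)) for w in Omega}
        PAc = {w: (P[w] / (1 - pA) if w not in A else F(0)) for w in Omega}
        rhs = pA * expect(PA, lambda w: score(PA, w)) + (1 - pA) * expect(PAc, lambda w: score(PAc, w))
        assert rhs >= expect(P, lambda w: score(P, w))
```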

If we want strict investigativeness, just replace s with s + ϵs′ where s′ is a Brier score and ϵ is small and positive. Since the Brier score is strictly proper, it is strictly investigative, so we will have strict investigativeness for ϵs′, and hence for s + ϵs′ as well; but if ϵ is sufficiently small, we won’t have propriety.

It is interesting to ask whether investigativeness plus some additional plausible condition might imply propriety. A very plausible further condition is that if P is at least as close to the truth as Q for every event, then P gets a no-worse score. Another plausible condition is additivity. But my examples satisfy both conditions. I don’t see other plausible conditions to add, besides propriety as such.

Monday, November 29, 2021

Simultaneous causation and occasionalism

In an earlier post, I said that an account that insists that all fundamental causation is simultaneous but secures the diachronic aspects of causal series by means of divine conservation is “a close cousin to occasionalism”. For a diachronic causal series on this theory has two kinds of links: creaturely causal links that function instantaneously and divine conservation links that preserve objects “in between” the instants at which creaturely causation acts. This sounds like occasionalism, in that the temporal extension of the series is entirely due to God working alone, without any contribution from creatures.

I now think there is an interesting way to blunt the force of this objection by giving another role to creatures using a probabilistic trick that I used in my previous post. This trick allows created reality to control how long diachronic causal series take, even though all creaturely causation is simultaneous. And if created reality were to control how long diachronic causal series take, a significant aspect of the diachronicity of diachronic causal series would involve creatures, and hence the whole thing would look rather less occasionalist.

Let me explain the trick again. Suppose time is discrete, being divided into lots of equally-spaced moments. Now imagine an event A1 that has a probability 1/2 of producing an event A2 during any instant that A1 exists in, as long as A1 hasn’t already produced A2. Suppose A1 is conserved for as long as it takes to produce A2. Then the probability that it will take n units of time for A2 to be produced is (1/2)^(n+1). Consequently, the expected wait time for A2 to happen is:

  • (1/2)⋅0 + (1/4)⋅1 + (1/8)⋅2 + (1/16)⋅3 + ... = 1.

We can then similarly set things up so that A2 causes A3 on average in one unit of time, and A3 causes A4 on average in one unit of time, and so on. If n is large enough, then by the Central Limit Theorem, it is likely that the lag time between A1 and An will be approximately n units of time (plus or minus an error on the order of √n units), and if the units of time are short enough, we can get arbitrarily good precision in the lag time with arbitrarily high probability.

If the probability of each event triggering the next at an instant is made bigger than 1/2, then the expected lag time from A1 to An will be less than n, and if the probability is smaller than 1/2, the expected lag time will be bigger than n. Thus the creaturely trigger probability parameter, which we can think of as measuring the “strength” of the causal power, controls how long it takes to get to An through the “magic” of probabilistic causation and the Central Limit Theorem. In this way, the diachronic time scale is controlled precisely by creaturely causation—even though divine conservation is responsible for Ai persisting until it can cause Ai+1. This is a more significant creaturely input than I thought before, and hence it is one that makes for rather less in the way of occasionalism.
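A quick simulation (a sketch under the assumptions above: discrete time, a chain of 100 links, and a per-instant trigger probability p) shows how the strength parameter p sets the pace:

```python
# Simulate a chain A1 -> A2 -> ... where each link fires with probability p per instant,
# and compare the average total lag with the expected value n(1-p)/p.
import random

def lag(n_links, p):
    total = 0
    for _ in range(n_links):
        while random.random() >= p:   # count the instants of waiting before the link fires
            total += 1
    return total

n, trials = 100, 2000
for p in (0.5, 0.3, 0.7):
    mean = sum(lag(n, p) for _ in range(trials)) / trials
    print(p, round(mean, 1), round(n * (1 - p) / p, 1))   # simulated vs expected lag
```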

This looks like a pretty cool theory to me. I don’t believe it to be true, because I don’t buy the idea of all causation being simultaneous, but I think it gives the defender of simultaneous causation a really nice way to blunt the charge of occasionalism.

Simultaneous causation and determinism

Consider the Causal Simultaneity Thesis (CST) that all causation is simultaneous. Assume that simultaneity is absolute (rather than relative). Assume there is change. Here is a consequence I will argue for: determinism is false. In fact, more strongly, there are no diachronic deterministic causal series. What is surprising is that we get this consequence without any considerations of free will or quantum mechanics.

Since there is a very plausible argument from presentism to CST (a non-simultaneous fundamental causal relation could never obtain between two existent things given presentism), we get an argument from presentism to indeterminism.

Personally, I am inclined to think of this argument as a bit of evidence against CST and hence against presentism, because it seems to me that there could be a deterministic world, even though there isn’t. But tastes differ.

Now the argument for the central thesis. The idea is simple. On CST, as soon as the deterministic causes of an effect are in place, their effect is in place. Any delay in the effect would mean a violation of the determinism. There can be nothing in the deterministic causes to explain how much delay happens, because all the causes work simultaneously. And so if determinism is true—i.e., if everything has a deterministic cause—then all the effects happen all at once, and everything is already in the final state at the first moment of time. Thus there is no change if we have determinism and CST.

The point becomes clearer when we think about how an adherent of CST explains diachronic causal series. We have an item A that starts existing at time t1, persists through time t2 (kept in existence not by its own causal power, as that would require a diachronic causal relation, but either by a conserver or a principle of existential inertia), then causes an item B, which then persists through time t3 and then causes an item C, and so on. While any two successive items in the causal series A, B, C, ... must overlap temporally (i.e., there must be a time at which they both exist), we need not have temporal overlap between A and C, say. We can thus have things perishing and new things coming into being after them.

But if the causation is deterministic, then as soon as A exists, it will cause B, which will cause C, and so on, thereby forcing the whole series to exist at once, and destroying change.

In an earlier post, I thought this made for a serious objection to CST. I asked: “Why does A ‘wait’ until t2 to cause B?” But once we realize that the issue above has to do with determinism, we see that an answer is available. All we need to do is to suppose there is probabilistic causation.

For simplicity (and because this is what fits best with causal finitism) suppose time is discrete. Then we may suppose that at each moment of time at which A exists it has a certain low probability pAB of causing B if B does not already exist. Then the probability that A will cause B precisely after n units of time is (1 − pAB)^n · pAB. It follows mathematically that “on average” it will cause B after (1 − pAB)/pAB fundamental units of time.

It follows that for any desired average time delay, a designer of the universe can design a cause that has that delay. Let’s say that we want B to come into existence on average u fundamental units of time after A has come into existence. Then the designer can give A a causal power of producing B at any given moment of time at which B does not already exist with probability pAB = 1/(1 + u).

The resulting setup will be indeterministic, and in particular we can expect significant random variation in how long it takes to get B from A. But if the designer wants more precise timing, that can be arranged as well. Let’s say that our designer wants B to happen very close to precisely one second after A. The designer can then ensure that, say, there are a million instants of time in a second, and that A has the power to produce an event A1 with a probability at any given instant such that the expected wait time will be 0.0001 seconds (i.e., 100 fundamental units of time), and A1 the power to produce A2 with the same probability, and so on, with A10000 = B. Then by the Central Limit Theorem, the wait time between A and B can be expected to be fairly close to 10000 × 0.0001 = 1 second, and the designer can get arbitrarily high confidence of an arbitrarily high precision of delay by inserting more instants in each second, and more intermediate causes between A and B, with each intermediate cause having an average delay time of 100 fundamental units (say). (This uses the fact that the geometric distribution has a finite third moment and the Berry-Esseen version of the Central Limit Theorem.)
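Here is a numerical sketch of that recipe (using numpy's geometric sampler in place of instant-by-instant coin flips; the seed and the number of simulated chains are arbitrary):

```python
# The designer's recipe: a million instants per second, 10,000 intermediate causes,
# each firing at any given instant with probability 1/101 (expected delay 100 units).
import numpy as np

units_per_second = 1_000_000
links, mean_units = 10_000, 100
p = 1 / (1 + mean_units)

rng = np.random.default_rng(0)
# rng.geometric counts trials up to and including the success, so subtract 1 to get
# the number of instants of waiting before each link fires.
waits = rng.geometric(p, size=(500, links)) - 1
delays = waits.sum(axis=1) / units_per_second   # total delay of each chain, in seconds
print(delays.mean(), delays.std())              # mean close to 1 s, spread around 0.01 s
```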

Thus, a designer of the universe can make an arbitrarily precise and reliable near-deterministic changing universe despite CST. And that really blunts the force of my anti-deterministic observation as a consideration against CST.