Showing posts with label Central Limit Theorem.

Monday, November 29, 2021

Simultaneous causation and occasionalism

In an earlier post, I said that an account that insists that all fundamental causation is simultaneous but secures the diachronic aspects of causal series by means of divine conservation is “a close cousin to occasionalism”. For a diachronic causal series on this theory has two kinds of links: creaturely causal links that function instantaneously and divine conservation links that preserve objects “in between” the instants at which creaturely causation acts. This sounds like occasionalism, in that the temporal extension of the series is entirely due to God working alone, without any contribution from creatures.

I now think there is an interesting way to blunt the force of this objection by giving another role to creatures using a probabilistic trick that I used in my previous post. This trick allows created reality to control how long diachronic causal series take, even though all creaturely causation is simultaneous. And if created reality were to control how long diachronic causal series take, a significant aspect of the diachronicity of diachronic causal series would involve creatures, and hence the whole thing would look rather less occasionalist.

Let me explain the trick again. Suppose time is discrete, being divided into lots of equally-spaced moments. Now imagine an event A1 that has a probability 1/2 of producing an event A2 during any instant that A1 exists in, as long as A1 hasn’t already produced A2. Suppose A1 is conserved for as long as it takes to produce A2. Then the probability that it will take n units of time for A2 to be produced is (1/2)^(n+1). Consequently, the expected wait time for A2 to happen is:

  • (1/2)⋅0 + (1/4)⋅1 + (1/8)⋅2 + (1/16)⋅3 + ... = 1.
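This arithmetic is easy to check by simulation. Here is a hypothetical Python sketch (the per-instant probability of 1/2 is the one from the setup above; the function name and trial count are mine):

```python
import random

def wait_time(p=0.5):
    """Instants that pass before A1 produces A2, with per-instant probability p."""
    n = 0
    while random.random() >= p:  # A2 not produced at this instant
        n += 1
    return n

random.seed(0)
trials = [wait_time() for _ in range(100_000)]
mean_wait = sum(trials) / len(trials)
print(mean_wait)  # close to the expected wait of 1 unit
```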

We can then similarly set things up so that A2 causes A3 on average in one unit of time, and A3 causes A4 on average in one unit of time, and so on. If n is large enough, then by the Central Limit Theorem, it is likely that the lag time between A1 and An will be approximately n units of time (plus or minus an error on the order of n^(1/2) units), and if the units of time are short enough, we can get arbitrarily good precision in the lag time with arbitrarily high probability.
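The concentration the Central Limit Theorem predicts can be seen in a short simulation (a hypothetical Python sketch; the choice of 400 links and 2000 runs is mine, just for illustration):

```python
import random, statistics

def link_wait(p=0.5):
    # Instants before one event triggers the next, per-instant probability p.
    n = 0
    while random.random() >= p:
        n += 1
    return n

random.seed(1)
n_links = 400
lags = [sum(link_wait() for _ in range(n_links)) for _ in range(2000)]
print(statistics.mean(lags))   # near n_links = 400 units
print(statistics.stdev(lags))  # on the order of sqrt(2 * n_links), about 28
```

So each chain's total lag clusters around n units with a spread of order n^(1/2), as the post claims.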

If the probability of each event triggering the next at an instant is made bigger than 1/2, then the expected lag time from A1 to An will be less than n, and if the probability is smaller than 1/2, the expected lag time will be bigger than n. Thus the creaturely trigger probability parameter, which we can think of as measuring the “strength” of the causal power, controls how long it takes to get to An through the “magic” of probabilistic causation and the Central Limit Theorem. Thus, the diachronic time scale is controlled precisely by creaturely causation—even though divine conservation is responsible for Ai persisting until it can cause Ai+1. This is a more significant creaturely input than I thought before, and hence it is one that makes for rather less in the way of occasionalism.
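The dependence of the time scale on the trigger probability is just the mean of a geometric distribution: the expected per-link lag is (1 − p)/p. A hypothetical snippet makes the pattern explicit (the sample values of p are mine):

```python
# Expected per-link lag is (1 - p) / p: a stronger causal power (larger p)
# means a shorter expected wait before the next event is triggered.
expected_lag = {p: (1 - p) / p for p in (0.25, 0.5, 0.75)}
for p, lag in expected_lag.items():
    print(p, lag)
```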

This looks like a pretty cool theory to me. I don’t believe it to be true, because I don’t buy the idea of all causation being simultaneous, but I think it gives a really nice picture of how a simultaneous-causation view can keep its distance from occasionalism.

Simultaneous causation and determinism

Consider the Causal Simultaneity Thesis (CST) that all causation is simultaneous. Assume that simultaneity is absolute (rather than relative). Assume there is change. Here is a consequence I will argue for: determinism is false. In fact, more strongly, there are no diachronic deterministic causal series. What is surprising is that we get this consequence without any considerations of free will or quantum mechanics.

Since there is a very plausible argument from presentism to CST (a non-simultaneous fundamental causal relation could never obtain between two existent things given presentism), we get an argument from presentism to indeterminism.

Personally, I am inclined to think of this argument as a bit of evidence against CST and hence against presentism, because it seems to me that there could be a deterministic world, even though there isn’t. But tastes differ.

Now the argument for the central thesis. The idea is simple. On CST, as soon as the deterministic causes of an effect are in place, their effect is in place. Any delay in the effect would mean a violation of the determinism. There can be nothing in the deterministic causes to explain how much delay happens, because all the causes work simultaneously. And so if determinism is true—i.e., if everything has a deterministic cause—then all the effects happen all at once, and everything is already in the final state at the first moment of time. Thus there is no change if we have determinism and CST.

The point becomes clearer when we think about how an adherent of CST explains diachronic causal series. We have an item A that starts existing at time t1, persists through time t2 (kept in existence not by its own causal power, as that would require a diachronic causal relation, but either by a conserver or a principle of existential inertia), then causes an item B, which then persists through time t3 and then causes an item C, and so on. While any two successive items in the causal series A, B, C, ... must overlap temporally (i.e., there must be a time at which they both exist), we need not have temporal overlap between A and C, say. We can thus have things perishing and new things coming into being after them.

But if the causation is deterministic, then as soon as A exists, it will cause B, which will cause C, and so on, thereby forcing the whole series to exist at once, and destroying change.

In an earlier post, I thought this made for a serious objection to CST. I asked: “Why does A ‘wait’ until t2 to cause B?” But once we realize that the issue above has to do with determinism, we see that an answer is available. All we need to do is to suppose there is probabilistic causation.

For simplicity (and because this is what fits best with causal finitism) suppose time is discrete. Then we may suppose that at each moment of time at which A exists it has a certain low probability pAB of causing B if B does not already exist. Then the probability that A will cause B precisely after n units of time is (1 − pAB)^n pAB. It follows mathematically that “on average” it will cause B after (1 − pAB)/pAB fundamental units of time.
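This is the probability mass function and mean of a geometric distribution, and the mean can be checked numerically (a hypothetical sketch; the value pAB = 0.2 is an arbitrary choice of mine):

```python
p = 0.2  # hypothetical value of the trigger probability pAB
# P(A causes B after exactly n units) = (1 - p)**n * p; sum n * P(n) for the mean.
mean_delay = sum(n * (1 - p)**n * p for n in range(5_000))
print(mean_delay)  # matches (1 - p) / p = 4.0
```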

It follows that for any desired average time delay, a designer of the universe can design a cause that has that delay. Let’s say that we want B to come into existence on average u fundamental units of time after A has come into existence. Then the designer can give A a causal power of producing B at any given moment of time at which B does not already exist with probability pAB = 1/(1 + u).
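One can verify that this choice of probability yields the desired mean delay (hypothetical Python; u = 99 is an arbitrary example value):

```python
u = 99             # hypothetical desired average delay, in fundamental units
p = 1 / (1 + u)    # the designer's trigger probability pAB
expected_delay = (1 - p) / p  # mean of the geometric waiting time
print(expected_delay)  # equals u
```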

The resulting setup will be indeterministic, and in particular we can expect significant random variation in how long it takes to get B from A. But if the designer wants more precise timing, that can be arranged as well. Let’s say that our designer wants B to happen very close to precisely one second after A. The designer can then ensure that, say, there are a million instants of time in a second, and that A has the power to produce an event A1 with a probability at any given instant such that the expected wait time will be 0.0001 seconds (i.e., 100 fundamental units of time), and A1 the power to produce A2 with the same probability, and so on, with A10000 = B. Then by the Central Limit Theorem, the wait time between A and B can be expected to be fairly close to 10000 × 0.0001 = 1 second, and the designer can get arbitrarily high confidence of an arbitrarily precise delay by inserting more instants in each second, and more intermediate causes between A and B, with each intermediate cause having an average delay time of 100 fundamental units (say). (This uses the fact that the geometric distribution has a finite third moment and the Berry-Esseen version of the Central Limit Theorem.)
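A simulation of this construction shows how tight the timing gets (a hypothetical sketch using the same million-instants-per-second and 10,000-link figures as above; the per-link mean of 100 units comes from taking p = 1/101, and the 200-run count is mine):

```python
import math, random

random.seed(2)
UNITS_PER_SECOND = 1_000_000  # a million instants per second
LINKS = 10_000                # intermediate causes between A and B
p = 1 / 101                   # per-instant probability: mean per-link lag of 100 units

def geometric(p):
    # Inverse-transform sample of one link's lag: P(n) = (1 - p)**n * p.
    return int(math.log(1.0 - random.random()) / math.log(1.0 - p))

lags = [sum(geometric(p) for _ in range(LINKS)) / UNITS_PER_SECOND
        for _ in range(200)]
print(min(lags), max(lags))  # every run lands close to 1 second
```

With these figures the standard deviation of the total lag is only about 0.01 seconds, so the delay is near-deterministic despite every link being chancy.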

Thus, a designer of the universe can make an arbitrarily precise and reliable near-deterministic changing universe despite CST. And that really blunts the force of my anti-deterministic observation as a consideration against CST.

Tuesday, November 8, 2011

Attitudes to risk and the law of large numbers

People do things that seem to be irrational in respect of maximizing expected utilities. For instance, art collectors buy insurance, even though it seems that the expected payoff of buying insurance is negative—or else the insurance company wouldn't be selling it (some cases of insurance can be handled by distinguishing utilities from dollar amounts, as I do here, but I am inclined to think luxury items like art are not a case like that). Likewise, people buy lottery tickets, and choose the "wrong" option in the Allais Paradox.

Now, there are all sorts of clever decision-theoretic ways of modeling these phenomena and coming up with variations on utility-maximization that handle them. But rather than doing that I want to say something else about these cases.

Why is it good to maximize expected utilities in our choices (and let's bracket all deontic constraints here—let's suppose that none of the choices are deontically significant)? Well, a standard and plausible justification involves the Law of Large Numbers [LLN] (I actually wonder if we shouldn't be using the Central Limit Theorem instead—that might even strengthen the point I am going to make). Suppose you choose between option A and option B in a large number of independent trials. Then, on moderate assumptions on A and B, the LLN applies and says that if the number of trials N is large, probably the payoff for choosing A each time will be relatively close to NE[A] and the payoff for choosing B each time will be relatively close to NE[B], where E[A] and E[B] are the expected utilities of A and B, respectively. And so if E[A]>E[B], you will probably do better in the long run by choosing A rather than by choosing B, and you can (on moderate assumptions on A and B, again) make the probability that you will do better by choosing A as high as you like by making the number of trials large.
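The LLN-based point can be illustrated with a toy simulation (hypothetical options and probabilities of my own choosing, not from the post):

```python
import random

random.seed(3)
N = 100_000  # number of independent trials
# Toy options: A pays 1 with probability 0.6 (E[A] = 0.6),
#              B pays 1 with probability 0.5 (E[B] = 0.5).
total_A = sum(random.random() < 0.6 for _ in range(N))
total_B = sum(random.random() < 0.5 for _ in range(N))
print(total_A, total_B)  # total_A reliably exceeds total_B for large N
```

With well-behaved distributions like these, the averages converge quickly, so choosing the higher-expectation option nearly always wins out over many trials.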

But here's the thing. My earthly life is finite (and I have no idea how decision theory is going to apply in the next life). I am not going to have an infinite number of trials. So how well this LLN-based argument works depends on how fast the convergence of observed average payoff to the statistically expected payoff in the LLN is. If the convergence is too slow relative to the expected number of A/B-type choices in my life, the argument is irrelevant. But now here's the kicker. The rate of convergence in the LLN depends on the shape of the distributions of A and B, and does so in such a way that the lop-sided distributions involved in the problems mentioned in the first paragraph of this post are going to give particularly slow convergence. In other words, the standard LLN-based argument for expected utility maximization applies poorly precisely to the sorts of cases where people don't go for expected utility maximization.

That said, I don't actually think this cuts it as a justification of people's attitudes towards things like lotteries and insurance. Here is why. Take the case of lotteries. With a small number of repetitions, the observed average payoff of playing the lottery will likely be rather smaller than the expected value of the payoff, because the expected value of the payoff depends on winning, and probably you won't win with a small number of repetitions. So taking into account the deviation from the LLN actually disfavors playing the lottery. The same goes for insurance and Allais: taking into account the deviation from the LLN should, if anything, tell against insuring and choosing the "wrong" gamble in Allais.
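The lottery case can be made vivid with a toy simulation (hypothetical numbers of my own: a ticket costing 1 unit that pays 10,000 with probability 1/20,000, so the expected net payoff per play is −0.5):

```python
import random

random.seed(4)

def lifetime_net(plays=1_000):
    # Net payoff from a realistic, finite number of lottery plays.
    return sum((10_000 if random.random() < 1 / 20_000 else 0) - 1
               for _ in range(plays))

outcomes = [lifetime_net() for _ in range(1_000)]
no_win = sum(o == -1_000 for o in outcomes) / len(outcomes)
print(no_win)  # roughly 95% of "lifetimes" never win at all
```

Most simulated lifetimes lose the full 1,000 units, well below the expected −500, because the expectation is propped up by rare wins that a finite player will probably never see.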

Maybe there is a more complex explanation—but not justification—here. Maybe people sense (consciously or not—there might be some evolutionary mechanism here) that these cases don't play nice with the LLN, and so they don't do expected utility maximization, but do something heuristic, and the heuristic fails.