
Tuesday, April 8, 2025

Empirical mathematics

Suppose I want to figure out a good approximation to the eigenvalues of a certain Hamiltonian involving a moderately large number of Coulomb potentials. It could well be the case that the best way to do so is to synthesize a molecule with that Hamiltonian and then measure its spectrum. In other words, there are mathematical problems where our best solution to the problem uses scientific methods rather than mathematical proof.
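For contrast, here is what the purely mathematical-numerical route looks like in a toy one-dimensional case. This is only a sketch under made-up assumptions (the well positions, the softening parameter, and the grid are all arbitrary choices), but it shows the kind of computation whose cost explodes as the number of wells and dimensions grows, which is exactly when synthesis-and-measurement might win:

```python
# A toy numerical approximation of the lowest eigenvalues of a 1D Hamiltonian
# with several softened Coulomb wells: H = -d^2/dx^2 + sum_k -1/sqrt((x-c_k)^2 + a).
# All parameters below are hypothetical illustrative choices.
import numpy as np

n, L = 2000, 40.0                       # grid points, box half-width
x = np.linspace(-L, L, n)
dx = x[1] - x[0]

centers = [-8.0, -3.0, 0.0, 4.0, 9.0]   # made-up "nuclear" positions
V = sum(-1.0 / np.sqrt((x - c) ** 2 + 0.1) for c in centers)

# Kinetic term -d^2/dx^2 by second-order finite differences (tridiagonal)
T = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / dx ** 2
H = T + np.diag(V)

print(np.linalg.eigvalsh(H)[:5])        # five lowest approximate eigenvalues
```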

Friday, March 22, 2024

Tables and organisms

A common-sense response to Eddington’s two table problem is that a table just is composed of molecules. This leads to difficult questions of exactly which molecules it is composed of. I assume that at table boundaries, molecules fly off all the time (that’s why one can smell a wooden table!).

But I think we could have an ontology of tables where we deny that tables are composed of molecules. Instead, we simply say that tables are grounded in the global wavefunction of the universe. We then deny precise localization for tables, recognizing that nothing is localized in our quantum universe. There is some approximate shape of the table, but this shape should not be understood as precise—there is no such thing as “the set of spacetime points occupied by the table”, unless perhaps we mean something truly vast (since the tails of wavefunctions spread out very far very fast).

That said, I don’t believe in tables, so I don’t have skin in the game.

But I do believe in organisms. Similar issues come up for organisms as for tables, except that organisms (I think) also have forms or souls. So I wouldn’t want to even initially say that organisms are composed of molecules, but that organisms are partly composed of molecules (and partly of form). That still generates the same problem of which exact molecules they are composed of. And in a quantum universe where there are no sharp facts about particle number, there probably is no hope for a good answer to that question.

So maybe it would be better to say that organisms are not even partly composed of molecules, but are instead partly grounded in the global wavefunction of the universe, and partly in the form. The form delineates which aspects of the global wavefunction are relevant to the organism in question.

Wednesday, December 6, 2023

Some quantum semiholisms

I’ve been naively thinking about what a reductive physicalist quantum ontology that matches the Hilbert-space formalism in the Schroedinger picture might look like.

My first thought is something like this. “Space” is (the surface of) a sphere in a separable Hilbert space, with an inner product structure (perhaps derived from a more primitive linearity and metric structure using the polarization identity) and “the universe” is a point particle walking on that sphere.

But that description is missing crucial structure, because when described as above, all the points on the sphere are on par. Although the universe-particle was at a different location on the sphere 13 billion years ago than where it is now, there is nothing to distinguish these two points in the story, and hence nothing to ground the vast changes in the universe between then and now. What we need to do is to paint the sphere with additional structure.

There are multiple ways of having the additional structure. Here are two.

Option I. Introduce a number of additional causally impotent “point particles” living on the sphere but not moving around, to serve as “markers”, and define the rich intuitive structure of our universe from the inner-product relationships between the universe-particle and the marker-particles. Here are two variants on this option.

  • (Ia): There are countably many point particles corresponding to basis vectors in some privileged countable Hilbert space basis, and these “marker-particles” are then located at a set of points on our sphere that form an orthonormal basis. For instance, if we “think of” the Hilbert space for a system of N particles as L²(R^{3N}), we might have a different static marker-particle for each 3N-dimensional Hermite polynomial.

  • (Ib): There are uncountably many marker-particles, and they are located at a set of points of the sphere such that the closure of their span is the whole Hilbert space, but they are not orthogonal. For instance, in our N-particle case, we might think of each marker-particle as corresponding to a normalized indicator function of a subset of R^{3N} with non-zero Lebesgue measure, and require them to be located on our Hilbert space sphere in places which give them the “right” inner product relationships for normalized indicator functions.

Note that since what is physically significant are the inner products between the positions of the marker-particles and the universe-particle, we need not think of the particles as having “absolute positions” on the sphere—we can have a “relationalist” version where all we need is the inner-product relationships between the particles (marker and universe). Or, if we want something more like the Heisenberg picture, we could suppose absolute positions, keep the universe particle static, and make the marker particles move. There are many variants.
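Here is a finite-dimensional cartoon of the relationalist reading of Option Ia, purely for vividness (the dimension and the sample state are arbitrary assumptions, and the real space would be infinite-dimensional): the universe-particle is a unit vector, the marker-particles sit at an orthonormal basis, and the only physically significant data are the inner products between them.

```python
# Toy Option Ia: one "universe-particle" on the unit sphere of an
# 8-dimensional complex Hilbert space, with static "marker-particles"
# at an orthonormal basis. Dimension and state are arbitrary choices.
import numpy as np

dim = 8
rng = np.random.default_rng(0)

markers = np.eye(dim)                       # marker-particles: an orthonormal basis
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)                  # universe-particle: a point on the sphere

amplitudes = markers.conj() @ psi           # the relationalist data: <marker_k | psi>
print(np.round(amplitudes, 3))
print(np.isclose(np.sum(np.abs(amplitudes) ** 2), 1.0))  # confirms psi is on the sphere
```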

Option II. We enrich the structure of our “space” (i.e., the surface of the Hilbert space sphere) by adding fundamental binary relations between points on that sphere that correspond to some privileged collection of operators (e.g., normalized projections onto subsets of R^{3N} with non-zero measure).

Anyway, here is an interesting feature of these two stories. On neither of them do we have Schaffer-style holism. On Option I, we have an infinite number of fundamental “particles” in “space” (i.e., on our infinite-dimensional sphere), though only one of them is moving, and we may or may not have the “space” itself. On Option II, we have two fundamental entities: the universe-particle and the sphere itself, with the universe-particle having merely positional structure, while the sphere has a complex operator structure.

We might call these stories semiholistic. Of course, there are fully holistic stories one can tell as well. But one doesn’t have to.

Monday, December 4, 2023

Metaphysical semiholism

For a while I’ve speculated that making ontological sense of quantum mechanics requires introducing a global entity into our ontology to ground the value of the wavefunction throughout the universe.

One alternative is to divide up the grounding task among the local entities (particles and/or Aristotelian substances). For instance, on a Bohmian story, one could divide up 3N-dimensional configuration space into N cells, one cell for each of the N particles, with each particle grounding the values of the wavefunction in its own cell. But it seems impossible to find a non-arbitrary way to divide up configuration space into such cells without massive overdetermination. (Perhaps the easiest way to think about the problem is to ask which particle gets to determine the value of the wavefunction in a small neighborhood of the current position in configuration space. They all intuitively have “equal rights” to it.)

It just seems neater to suppose a global entity to do the job.

A similar issue comes up in theories that require a global field, like an electromagnetic field or a gravitational field (even if these are to be identified with spacetime).

Here is another, rather different task for a global entity in an Aristotelian context. At many times in evolutionary history, new types of organisms have arisen, with new forms. For instance, from a dinosaur whose form did not require feathers, we got a dinosaur whose form did require feathers. Where did the new form come from? Or suppose that one day in the lab we synthesize something molecularly indistinguishable from a duck embryo. It is plausible to suppose that once it grows up, it will not only walk and quack like a duck, but it will be a duck. But where did it get its duck form from?

We could suppose that particles have a much more complex nature than the one that physics assigns to them, including the power to generate the forms of all possible organisms (or at least all possible non-personal organisms—there is at least theological reason to make that distinction). But it does not seem plausible to suppose that encoded in all the particles we have the forms of ducks, elephants, oak trees, and presumably a vast array of non-actual organisms. Also, it is somewhat difficult to see how the vast number of particles involved in the production of a duck embryo would “divide up” the task of producing a duck form. This is reminiscent of the problem of dividing up the wavefunction grounding among Bohmian particles.

I am now finding somewhat attractive the idea that a global entity carries the powers of producing a vast array of forms, so that if we synthesize something just like a duck embryo in the lab, the global entity makes it into a duck.

Of course, we could suppose the global entity to be God. But that may be too occasionalistic, and too much of a God-of-the-gaps solution. Moreover, we may want to be able to say that there is some kind of natural necessity in these productions of organisms.

We could suppose several global entities: a wavefunction, a spacetime, and a form-generator.

But we could also suppose them to be one entity that plays several roles. There are two main ways of doing this:

  1. The global entity is the Universe, and all the local entities, like ducks and people and particles (if there are any), are parts of it or otherwise grounded in it. (This is Jonathan Schaffer’s holism.)

  2. Local entities are ontologically independent of the global entity.

I rather like option (2). We might call this semi-holism.

But I don’t know if there is anything to be gained by supposing there to be one global entity rather than several.

Wednesday, November 15, 2023

A tweak to Bohmianism

I think there is a sense in which it is correct to say that:

  1. Bohmian quantum mechanics is only known to work empirically if we suppose that the initial configuration of the particles is fine-tuned.

Yet there are famous results that show that:

  2. For typical initial configurations, Bohmian quantum mechanics yields standard quantum Born rule predictions, which we know to work empirically.

It seems that (1) and (2) contradict each other. But that is not so. For the typicality in (2) is measured using a typicality measure P_ψ defined in terms of the initial wavefunction ψ of the universe (specifically, I believe, P_ψ(A) = ∫_A |ψ(q)|² dq for an event A). And a configuration typical relative to P_ψ₁ need not be typical relative to P_ψ₂. In fact, if ψ₁ and ψ₂ are significantly different, then a P_ψ₁-typical configuration will be P_ψ₂-atypical.

The fine-tuning I am thinking of in (1) is thus that the initial configuration of particles needs to be fitted to the initial wavefunction ψ: a configuration typical for one wavefunction is not typical for another.

I think there is an interesting solution to the Bohmian fine-tuning which I haven’t heard discussed, either because it’s crazy, or because nobody else worries about this fine-tuning, or maybe just because I don’t talk to philosophers of quantum mechanics enough. Suppose that the wavefunction of the universe (or, more precisely, the aspect of physical reality that is represented by the mathematics of the wavefunction) has a special causal power in the first moment of its existence, and only then: an indeterministic power to produce a particle configuration, with the power’s stochastic propensities being modeled by P_ψ.
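As a toy illustration of what that one-time power would amount to, here is a minimal sketch for a one-particle universe in one dimension (the Gaussian ψ and the grid discretization are made-up assumptions). The power produces an initial position with propensities given by P_ψ, i.e. with density |ψ(q)|²; I draw several samples only to exhibit the propensity profile, though in the story the power fires exactly once.

```python
# Toy model of the proposed initial causal power: the initial wavefunction psi
# (an arbitrary 1D Gaussian on a grid) produces a particle position with
# propensities P_psi, i.e. density |psi(q)|^2.
import numpy as np

rng = np.random.default_rng(42)
q = np.linspace(-10.0, 10.0, 2001)
psi = np.exp(-((q - 1.5) ** 2) / 4.0)    # unnormalized toy initial wavefunction
p = np.abs(psi) ** 2
p /= p.sum()                             # discretized P_psi measure

samples = rng.choice(q, size=5, p=p)     # five possible "firings" of the power
print(samples)                           # clustered around q = 1.5, as |psi|^2 demands
```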

This adds a little bit of metaphysical complexity to the Bohmian story, but I think it significantly increases the explanatory power in two ways: first, by giving us a proper stochastic ground for the statistical probabilities and, second, by unifying the cause of the initial particle configuration and the cause of the dynamics (admittedly at the expense of a complexity in that, in that cause, there is a causal power that goes away or becomes irrelevant).

(Maybe this is not necessary. Maybe there are, or can be, some typicality results that don’t require fine-tuning to the initial wavefunction. Or maybe I just misunderstand the framework of the typicality results. I don’t know much about Bohmianism.)

Friday, September 1, 2023

Where are we?

Unless something like the Bohmian interpretation or a spatial collapse theory is right, quantum mechanics gives us good reason to think that the position wavefunction of all our particles is spread across pretty much all of the observable universe. Of course, except in the close vicinity of what we pre-theoretically call “our body”, the wavefunction is incredibly tiny.

What are we to make of that for the “Where am I?” question? One move is to say that we all overlap spatially, occupying most of the observable universe. On a view like this, we better not have position do serious metaphysical or ethical work, such as individuating substances or making moral distinctions based on whether one individual (say, a fetus) is within the space occupied by another.

The other move is to say I am where the wavefunction of my particles is not small. On a view like this, my location is something that comes in degrees depending on what our cut-off for “small” is. We get to save the intuition that we don’t overlap spatially. But the cost of this is that our location is far from a fundamental thing. It is a vague concept, dependent on a cut-off. A more precise thing would be to say things like: “Here I am up to 0.99, and here I am up to 0.50.”

Monday, August 7, 2023

A deterministic collapsing local quantum mechanics without hidden variables beyond the wavefunction

I will give a really, really wacky version of quantum mechanics as a proof of concept that if one wants, one can have all of the following:

  1. Compatibility with experiment

  2. Determinism

  3. Collapse

  4. No “hidden variables” beyond the wavefunction: the wavefunction encompasses all the information about the world

  5. Locality

  6. Schroedinger evolution between collapses.

Here’s the idea. We suppose that the Hilbert space for quantum mechanics is separable (i.e., has a countable orthonormal basis). A separable Hilbert space has continuum-many vectors, so each quantum state vector can be encoded as a single real number. We suppose, further, that collapse occurs countably many times over the history of the universe. We can now encode all the times and outcomes of the collapses over the history of the universe as a single real number: the outcome of a collapse is a quantum state vector, encodable as a real number, the time of collapse is of course a real number, and a countable sequence of pairs of real numbers can be encoded as a single real number.
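The encoding step is elementary; as a proof of concept, here is the digit-interleaving trick for packing two reals in [0,1) into one, recoverably. (Extending it to signs, integer parts, countable sequences, and the trailing-nines ambiguity takes more bookkeeping but no new ideas.)

```python
# Proof of concept: interleave the decimal digits of two reals in [0,1)
# into a single real, and recover them again.
def interleave(a: str, b: str) -> str:
    """Interleave two equal-length decimal digit strings."""
    return ''.join(x + y for x, y in zip(a, b))

def deinterleave(c: str) -> tuple:
    return c[0::2], c[1::2]

code = interleave('1415926', '7182818')  # digit strings of two sample reals
print('0.' + code)
print(deinterleave(code))                # ('1415926', '7182818')
```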

We now consider the wavefunction ψ of the universe. For simplicity, consider this as a function on R^{3n} × R where n is the number of particles (if the number of particles changes over time, we will need to tweak this). Say that x ∈ R^{3n} is rational provided that every coordinate of it is a rational number. We now add a new law of nature: ψ(x,t) has the same value for every rational x and every time t, which value encodes the history of all the collapses that ever happen in the history of the universe.

Since standard quantum mechanics does not care about what happens to the wavefunction on sets of measure zero, and the set of rational points of R^{3n} has measure zero, this does not affect Schroedinger evolution between collapses, and so we have (6). We also clearly have (2), (3) and (4). If we suppose a prior probability distribution on the collapses that fits with the Born rule, we get (1). We also have (5), since any open region of space that contains an experiment will also contain the real number encoding the collapse history.

Of course, this is rather nutty. It just shows that because the wavefunction has more room for information than just the quantum state vector—the quantum state vector can be thought of as an equivalence class of wavefunctions differing on sets of measure zero—we can stuff the hidden variables into the wavefunction. Those of us who think that the state vector is the real thing, not the wavefunction, will be quite unimpressed.

Sunday, July 9, 2023

Open futurism and many-worlds quantum mechanics

I’ve been thinking about some odd parallels between the many-worlds interpretation of quantum mechanics and open future views.

On both sets of views, in the case of genuinely chancy future events there is strictly no fact of the matter about what will turn out. On many-worlds, the wavefunction provides a big superposition of the options, but for no one option is it true that it will eventuate. The same is true for open future views, except that what we have instead of a superposition depends on the particular temporal logic chosen.

Yet, despite no fact about outcomes, on both sets of views one would like to be able to make probabilistic predictions about “the outcome”. For instance, one wants to say that if one tosses an indeterministic coin, it is moderately likely that the coin will land on heads and extremely unlikely that it will land on edge. In both cases, this is highly problematic, because on both views it is certain that it is not true that the coin will land on heads. So how can something that is certainly not going to happen be more likely than another event? In both cases, there is a literature trying to answer this problem (and I am not convinced by it).

Anyway, I wonder how far we can take the parallel. The wavefunction in the many-worlds interpretation is a superposition of many options about what the present is like, and is interpreted as a plurality of worlds in which different options are true. Why not do the same in the open-future case? Why not just say that there are now many worlds, including some where the coin will land on heads, some where the coin will land on tails, and some where it will land on edge? After all, if it is reasonable to interpret the superposition this way, why is it not reasonable to interpret the temporal logic this way?

There is, however, one crucial difference. The open futurist insists that reality will collapse: that once the coin lands, there will be a fact about which way it landed. On many-worlds, there is no collapse: there is never a fact about how the coin landed. Nonetheless, this could be accommodated in a many-worlds interpretation of an open-future view: we just suppose that once the coin lands, a lot of the worlds disappear.

So what if there is a parallel? Why does it matter?

Well, here are some things that we might say.

First, in both cases, there is an underlying metaphysics (a non-classical truth assignment to future facts, or a giant superposition), and then we need to interpret that underlying metaphysics. I wonder if it might not be true:

  1. A many-worlds interpretation of the underlying metaphysics is reasonable in the quantum case if and only if it is reasonable in the open-future case.

Suppose (1) is true. Most people think a many-worlds interpretation of open-future is absurd. But then why isn’t the many-worlds interpretation of quantum mechanics (or, more precisely, a quantum mechanics with exceptionlessly unitary evolution and all the facts supervening on the wavefunction) also absurd?

Second, it may well be that the open-futurist finds plausible the standard criticism of many-worlds interpretations that it does not make sense of probabilistic predictions. If so, then they should probably find equally problematic probabilistic predictions on open-future views.

Thursday, June 1, 2023

Might there have been less randomness earlier?

In my previous post, I noted that a branching view of possibility, when continued into an infinite past, leads to the counterintuitive consequence that there is less and less randomness the further back we go.

In this post I want to note that this counterintuitive consequence may in fact be right even with a finite past, given a certain interpretation of quantum mechanics.

Start with the naive consciousness causes collapse (ccc) interpretation of quantum mechanics. On naive ccc, at each moment of time, the laws of nature prevent the world from evolving into a superposition of states that differ with respect to consciousness. Thus, there cannot be a superposition between one’s feeling hot and one’s not feeling hot, or between a cat being aware of its surroundings and a cat being asleep or dead. This is assured by constant collapse with respect to a global consciousness operator C.

Unfortunately, as it stands this is untenable, because it corresponds to a setup where there is constant observation of C, and constant observation of an observable precludes change with respect to that observable by the quantum Zeno effect. In other words, if we had naive ccc, then conscious states would never change, which is empirically absurd.

Here is one way to fix this problem. Suppose that there are special moments in time, which I’ll poetically call “cosmic heartbeats”. Collapse with respect to C only occurs at cosmic heartbeats. If the cosmic “heart rate” is not very fast (i.e., the spacing between the heartbeats is big enough), then the quantum Zeno effect will be negligible, and we needn’t worry about it. And we hypothesize that consciousness only occurs at cosmic heartbeats.
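The Zeno tradeoff can be made quantitative in a toy case. For a two-level system with Hamiltonian σ_x (units with ħ = 1), measured N times over a total time T, the probability that the state never changes is cos(T/N)^(2N). The sketch below (T is an arbitrary choice) shows a fast heart rate freezing the state and a slow one letting it change:

```python
# Quantum Zeno back-of-envelope: survival probability cos(T/N)^(2N) for a
# two-level system with H = sigma_x, measured N times over total time T.
import numpy as np

T = np.pi / 2                    # arbitrary total evolution time
for N in [1, 10, 100, 1000, 10000]:
    survival = np.cos(T / N) ** (2 * N)
    print(f"N = {N:5d} measurements: P(state frozen) = {survival:.4f}")
```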

But now let’s consider the history of our universe. In the early universe, the only way to get a non-empty consciousness state is by some ridiculously unlikely feat of quantum tunnelling generating a Boltzmann brain or the like. Thus the only randomness we will have in the early universe will be that induced by pruning away the components of the global wavefunction corresponding to such ridiculously unlikely feats. And that is only a tiny bit of randomness. But as things evolve, we get components of the wave function with significant weight corresponding to the evolution of various conscious critters. Now the periodic collapse will be “deciding” between states of comparable likelihood (e.g., life on earth versus life on some other planet formed from some of the same materials orbiting the sun) rather than just pruning away extremely unlikely options.

One would need to know a lot more physics (and perhaps neuroscience?) to figure out what the cosmic heart rate needs to be to make the theory work. An upper bound is given by the quantum Zeno effect: if the cosmic heart rate is too fast, then we could predict a slowdown of consciousness. A lower bound is given by introspection: the cosmic heart rate had better be at least as fast as the speed at which our conscious states are observed to change.

I wonder if a similar decrease of randomness in the past wouldn’t be predicted by GRW collapse theories.

Tuesday, February 21, 2023

Achievement in a quantum world

Suppose Alice gives Bob a gift of five lottery tickets, and Bob buys himself a sixth one. Bob then wins the lottery. Intuitively, if one of the tickets that Alice bought for Bob wins, then Bob’s win is Alice’s achievement, but if the winning ticket is not one of the ones that Alice bought for Bob, then Bob’s win is not Alice’s achievement.

But now suppose that there is no fact of the matter as to which ticket won, but only that Bob won. For instance, maybe the way the game works is that there is a giant roulette wheel. You hand in your tickets, and then an equal number of depressions on the wheel gets your name. If the ball ends in a depression with your name, you win. But they don’t write your name down on the depressions ticket-by-ticket. Instead, they count up how many tickets you hand them, and then write your name down on the same number of depressions.

In this case, it seems that Bob’s win isn’t Alice’s achievement, because there is no fact of the matter that it was one of Alice’s tickets that got Bob his win. Nor does this depend on the probabilities. Even if Alice gave Bob a thousand tickets, and Bob contributed only one, it seems that Bob’s win isn’t Alice’s achievement.

Yet in a world run on quantum mechanics, it seems that our agential connection to the external world is like Alice’s to Bob’s win. All we can do is tweak the probabilities, perhaps overwhelmingly so, but there is no fact of the matter about the outcome being truly ours. So it seems that nothing is ever our achievement.

That is an unacceptable consequence, I think.

I think there are two possible ways out. One is to shift our interpretation of “achievement” and say that Bob’s win is Alice’s achievement in the original case even when it was the ticket that Bob bought for himself that won. Achievement is just sufficient increase of probability followed by the occurrence of the thus probabilified event.

The second is heavy duty metaphysics. Perhaps our causal activity marks the world in such a way that there is always a trace of what happened due to what. Events come marked with their actual causal history. Sometimes, but not always, that causal history specifies what was actually the cause. Perhaps I turn a quantum probability dial from 0.01 to 0.40, and you turn it from 0.40 to 0.79, and then the event happens, and the event comes metaphysically marked with its cause. Or perhaps when I turn the quantum probability dial, I imbue it with some of my teleology, and when you turn it, you imbue it with some of yours, and there is a fact of the matter as to whether an effect further down the line comes from your teleology or mine.

I find the metaphysical answer hard to believe, but I find the probabilistic one conceptually problematic.

Friday, December 16, 2022

Panteleology: A few preliminary notes

Panteleology holds that teleology is ubiquitous. Every substance aims at some end.

The main objection to panteleology is the same as that to panpsychism: the incredulous stare. I think a part of the puzzlement comes from the thought that things that are neither biological nor artifactual “just do what they do”, and there is no such thing as failure. But this seems to me to be a mistake. Imagine a miracle where a rock fails to fall down, despite being unsupported and in a gravitational field. It seems very natural to say that in that case the rock failed to do what rocks should do! So it may be that away from the biological realm (namely organisms and stuff made by organisms) failure takes a miracle, but the logical possibility of such a miracle makes it not implausible to think that there really is a directedness.

That said, I think the quantum realm provides room for saying that things don’t “just do what they do”. If an electron is in a mixed spin up/down state, it seems right to think about it as having a directedness at a pure spin-up state and a directedness at a pure spin-down state, and only one of these directednesses will succeed.

Panteleology seems to be exactly what we would expect in a world created by God. Everything should glorify God.

Panteleology is also entailed by a panpsychism that follows Leibniz in including the ubiquity of “appetitions” and not just perceptions. And it seems to me that if we think through the kinds of reasons people have for panpsychism, these reasons extend to appetitions—just as a discontinuity in perception is mysterious, a discontinuity in action-driving is mysterious.

Tuesday, August 23, 2022

Collapse and unitarity

Quantum collapse is often said to “violate unitarity”. Either I’m confused or this phrasing is misleading or both.

A bounded linear operator P on a Hilbert space H is said to be unitary iff it is surjective and preserves inner products. But as I understand it, quantum collapse is not even an operator. An operator on H is a function from H to H. But a function f, given a specific input |ψ⟩, yields a unique output f(|ψ⟩). Quantum collapse does no such thing. It is an indeterministic process. Sometimes given input 2^{−1/2}(|ψ₁⟩ + |ψ₂⟩) (where |ψ₁⟩ and |ψ₂⟩ are eigenvectors of the observable we are collapsing with respect to) it gives output |ψ₁⟩ and sometimes it gives output |ψ₂⟩.

While strictly speaking if some process is not modeled by an operator, it is not modeled by a unitary operator, to call that a violation of unitarity is misleading. It is better to say it’s a violation of operationality or functionality. We cannot even say what it would mean for a process not modeled by an operator to be unitary, just as we cannot say what it would mean for a frog to be unitary or a linear operator to be a vertebrate.

One might try to say what it would mean to have unitarity for a non-deterministic evolution. Suppose that |ψ⟩ would collapse to |ψ′⟩ and |ϕ⟩ would collapse to |ϕ′⟩ under some measurement. Then one could claim that unitarity would say that ⟨ϕ′|ψ′⟩ = ⟨ϕ|ψ⟩. But this assumes that there is a fact of the matter as to what |ψ⟩ and |ϕ⟩ would collapse to. Now, if |ψ⟩ in fact collapses to |ψ′⟩, it might make sense to say that |ψ⟩ would collapse to |ψ′⟩. But for unitarity we need the identity ⟨ϕ′|ψ′⟩ = ⟨ϕ|ψ⟩ for all inputs |ψ⟩ and |ϕ⟩, not just for the ones that actually occurred.

I suppose one could have a generalized Molinist thesis that there is always a fact of the matter as to what a given wavefunction would collapse to, so that we might be able to define a collapse operator. And then we could say that unitarity fails. But it would still likely be misleading to say that unitarity fails, since we would expect linearity to fail, not merely unitarity. And in any case, such a generalized Molinist thesis is quite dubious.
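One can also see concretely why linearity, rather than unitarity, is what fails: even the deterministic core of collapse, projecting onto an eigenspace and renormalizing, is a nonlinear map. A quick check with arbitrary vectors and an arbitrary projection:

```python
# Projection-plus-renormalization is not additive, hence not linear,
# hence a fortiori not unitary. Vectors and projection are arbitrary.
import numpy as np

P = np.diag([1.0, 0.0])                 # projection onto the first coordinate

def collapse(v):
    w = P @ v
    return w / np.linalg.norm(w)        # renormalize after projecting

psi = np.array([3 / 5, 4 / 5])
phi = np.array([4 / 5, -3 / 5])
lhs = collapse(psi + phi)
rhs = collapse(psi) + collapse(phi)
print(lhs, rhs, np.allclose(lhs, rhs))  # [1. 0.] [2. 0.] False
```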

But I know very little about quantum mechanics, and so I may simply be confused.

Wednesday, July 13, 2022

Two difficulties for wavefunction realism

According to wavefunction realism, we should think of the wavefunction of the universe—considered as a square-integrable function on R^{3n} where n is the number of particles—as a kind of fundamental physical field.

Here are two interesting consequences of wavefunction realism. First, it seems like it should be logically possible for the fundamental physical field to take any logically coherent combination of values on R^{3n}. But now imagine that the initial conditions of the wavefunction “field” have it take a combination of values that is not a square-integrable function, either because it is nonmeasurable or because it is measurable but non-square-integrable. Then the Schroedinger equation “wouldn’t know” what to do with the wavefunction. In other words, for quantum physics to work, given wavefunction realism, we need a very special initial combination of values of the “wavefunction field”. This is not a knockdown argument, but it does suggest an underexplored need for fine-tuning of initial conditions.

Second, the solutions to the Schroedinger equation, understood distributionally, are only defined up to sets of measure zero. In other words, even though the Schroedinger equation is generally considered to be deterministic (any indeterminism in quantum mechanics comes in elsewhere, say in collapse), nonetheless the solutions to the equation are underdetermined when they are considered as square-integrable fields on R^{3n}—if ψ(⋅,t) is a solution for a given set of initial conditions, so is any function that differs from ψ(⋅,t) only on a set of measure zero. Granted, any two candidates for the wavefunction that differ only on a set of measure zero provide the exact same empirical predictions. However, it is still troubling to think that so much of physical reality would be ungoverned by the laws. (There might be a solution using the lifting theorem mentioned in footnote 6 here, though.)

Thursday, June 30, 2022

Predictions and Everett

Imagine this unfortunate sequence of events will certainly befall you in a classical universe:

  1. You will be made to fall asleep.

  2. Upon waking up, you will be shown a red square.

  3. You will be made to fall asleep again.

  4. While asleep, your memory will be reset to that which you had in step (1).

  5. Upon waking up, you will be shown a green triangle.

  6. You will be made to fall asleep for a third time.

  7. While asleep, your memory will be reset again to that which you had in step (1).

  8. Upon waking up, you will be shown a green circle.

  9. You will then be permanently annihilated.

Questions:

  10. How likely is it that you will be shown a green shape?

  11. How likely is it that you will be shown a red shape?

The answers to these questions are obviously: one and one. You will be shown a green shape twice and a red shape once, and that’s certain.

Now consider a variant story where personal identity is not maintained in sleep. Perhaps each time in sleep the person who fell asleep will be annihilated and replaced by something that is in fact an exact duplicate, but that isn’t identical with the original according to the correct metaphysics of diachronic personal identity. (We can make this work on pretty much any metaphysics of diachronic personal identity. For example, we can make it work on a materialist memory theory as follows. We just suppose that before step (1), you happen to have three exact duplicates alive, who are not you. Then during the nth sleep cycle, the sleeper is annihilated, and a fresh brain is prepared and memories will be copied into it from your nth doppelganger. Since these memories don’t come from you, the resulting brain isn’t yours.)

And in the variant story, let’s ask the questions (10) and (11) again. What will the answers be? Again, it’s easy and obvious: zero and zero. You won’t be shown any shapes, because you will be annihilated in your sleep before any shapes are shown.

Now consider Everettian branching quantum mechanics. Suppose there is a quantum process that will result in your going to sleep in an equal superposition of states between having a red square, a green triangle and a green circle in front of your head, so that upon waking up an observation of the shape will be made. Now ask questions (10) and (11) again.

I contend that this is just as easy as in my classical universe story. Either the branching preserves personal identity or not. If it preserves personal identity, the answer to the questions is one and one. If it fails to preserve personal identity, the answer to the questions is zero and zero. The only relevant ontological difference between the quantum and classical stories is that in the quantum stories the wakeups might count as simultaneous while in the classical story the wakeups are sequential. And that really makes no difference.

In none of the four cases—the classical story with or without personal identity and the branching story with or without personal identity—are the answers to the questions 2/3 and 1/3. But those are in fact the right answers in the quantum case, contrary to the Everett model.

Now, one might object that we care more about decisions than predictions. Suppose that you have a choice between playing a game with one of two three-sided fair quantum dice:

  • Die A is marked: red square, green triangle, green circle.

  • Die B is marked: green square, red triangle, red circle.

And suppose pain will be induced if and only if the die comes up red. Which die should you prudentially choose for playing the game? Again, it depends on whether personal identity is preserved. If not, it makes no difference. If yes, clearly you should go for die A on the Everett model—and that is indeed the intuitively correct answer. But the reason for going for die A on the Everett model is different from the reason for going for it on a non-branching quantum mechanics. On the Everett model, the reason for going for die A is that it’s better to get pain once (die A) rather than twice (die B).

So far so good. But now suppose that you’ve additionally been told that if you go for die A, then before you roll A, an irrelevant twenty-sided die will be rolled. (This is a variant of an example Peter van Inwagen sent me years ago, which was due to a student of his.) Then, intuitively, if you go for die A, there will be twenty red branches and forty green branches on Everett. So on die A, you get pain twenty times if personal identity is preserved, and on die B you get pain only twice. And so you should surely go for die B, which is absurd.

One might reasonably object that there are in fact infinitely many branches no matter what. But then on the no-identity version, the choice is still irrelevant to you prudentially, while on the identity version, no matter what you do, you get pain infinitely many times no matter what you choose. And that doesn’t work, either. And if there is no fact about how many branches there will be, then the answer is just that there is no fact about which option is preferable on the identity version, and on the no-identity version, indifference still follows.

This is all basically well-known stuff. But I like the above way of making it vivid by thinking about classically sequentializing the story.

Thursday, June 23, 2022

What I think is wrong with Everettian quantum mechanics

One can think of Everettian multiverse quantum mechanics as beginning by proposing two theses:

  1. The global wavefunction evolves according to the Schroedinger equation.

  2. Superpositions in the global wavefunction can be correctly interpreted as equally real branches in a multiverse.

But prima facie, these two theses don’t fit with observation. If one prepares a quantum system in a (3/5)|↑⟩ + (4/5)|↓⟩ spin state, and then observes the spin, one will observe spin up in (3/5)² = 9/25 of cases and spin down in (4/5)² = 16/25 of cases. But (roughly speaking) there will be two equally real branches corresponding to this result, and so prima facie one would expect equally likely observations, which doesn’t fit observation (the numerical sketch below makes the mismatch vivid). But the Everettian adds a third thesis:

  3. One ought to make predictions as to which branch one will observe proportionately to the square of the modulus of the coefficients that the branch has in the global wavefunction.
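Here is the mismatch numerically (sample size arbitrary): Born-rule sampling for the (3/5)|↑⟩ + (4/5)|↓⟩ state gives roughly 36% up and 64% down, while treating the two branches as equally real suggests 50/50.

```python
# Born-rule frequencies versus naive equal-branch counting for the
# state (3/5)|up> + (4/5)|down>. Sample size is an arbitrary choice.
import numpy as np

rng = np.random.default_rng(1)
born = rng.choice(['up', 'down'], size=100_000, p=[9 / 25, 16 / 25])
print('Born-rule frequencies:   up', np.mean(born == 'up'),
      ' down', np.mean(born == 'down'))   # ~0.36 and ~0.64
print('Equal-branch prediction: up 0.5    down 0.5')
```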

Since Aristotelian science has been abandoned, there has been a fruitful division of labor between natural science and philosophy, where investigation of normative phenomena has been relegated to philosophy while science concerned itself with the non-normative. From that point of view, while (1) and (less clearly but arguably) (2) belong to the domain of science, (3) does not. Instead, (3) belongs to epistemology, which is the study of the norms of thought.

This point is not a criticism. Just as a doctor who has spent much time dealing sensitively with complex cases will have unique insights into bioethics, a scientist who has spent much time dealing sensitively with evidence will have unique insights into scientific epistemology. But it is useful, because the division of intellectual labor is useful, to remember that (3) is not a scientific claim in the modern sense. And there is nothing wrong with that as such, since many non-scientific claims, such as that one shouldn’t lie and that one should update by conditionalization, are true and important to the practice of the scientific enterprise.

But (3) is a non-scientific claim that is absurd. Imagine that a biologist came up with a theory that predicted, on the basis of their genetics and environment, that:

  4. There are equal numbers of male and female infant spider monkeys.

You might have thought that this theory is empirically disproved by observations of a lot more female than male infant spider monkeys. But our biologist is clever, and comes up with this epistemological theory:

  5. One ought to make predictions as to the sex of an infant spider monkey one will observe in inverse proportion to the ninth power of the average weight of that sex of spider monkeys.

And now, because male spider monkeys are slightly larger than females, we will make predictions that roughly fit our observations.

Here’s what went wrong in our silly biological example. The biologist’s epistemological claim (5) was not fitted to the actual ontology of the biologist’s theory. Instead, basically, the biologist said: when making predictions of future observations, make them in the way that you should if you thought the sex ratios were inversely proportional to the ninth power of the average weights, even though they aren’t.

This is silly. But exactly the same thing is going on in the Everett case. We are being told to make predictions in the way you should if the modulus squares of the weights in the superposition were chances of collapse. But they are not.

It is notorious that any scientific theory can be saved from empirical disconfirmation by adding enough auxiliary scientific hypotheses. But one can also save any scientific theory from empirical disconfirmation by adding an auxiliary philosophical hypothesis as to how confirmation or disconfirmation ought to proceed. And doing that may be worse than obstinately adding auxiliary scientific hypotheses. For auxiliary scientific hypotheses can often be tested and disproved. But an auxiliary epistemological hypothesis may simply close the door to refutation.

To put it positively, we want a certain degree of independence between epistemological principles and the ontology of a theory so that the ontology of the theory can be judged by the principles.

Friday, November 12, 2021

Naturalists shouldn't be virtue ethicists

Virtue ethics is committed to this claim:

  1. A choice of A is wrong if and only if a person who had the relevant virtues explanatorily prior to having chosen A and was in these circumstances would not have chosen A.

But (1) implies this generalization:

  2. A person who has the relevant virtues explanatorily prior to a choice never chooses wrongly.

In my previous post I argued that Aristotelian Jews and Christians should deny (2), and hence (1).

Additionally, I think naturalists should deny (1). For we live in a fundamentally indeterministic world given quantum mechanics. If a virtuous person were placed in a position of choosing between aiding and insulting a stranger, there would always be a tiny probability of their choosing to insult the stranger. We shouldn’t say that they wouldn’t insult the stranger, only that they would be very unlikely to do so (this is inspired by Alan Hajek’s argument against counterfactuals).

And (2) itself is dubious, unless we have such a high standard of virtue that very few people have virtues. For in our messy chaotic world, very little is at 100%. Rare exceptions should be expected when human behavior is involved.

(Perhaps a dualist virtue ethicist who does not accept the Hebrew Scriptures could accept (1) and (2), holding that a virtuous soul makes the choices and is not subject to the indeterminacy of quantum mechanics and the chaos of the world.)

There is a natural way out of the above arguments, and that is to change (1) to a probabilistic claim:

  3. A choice of A is wrong if and only if a person who had the relevant virtues explanatorily prior to having chosen A and was in these circumstances would be very unlikely to have chosen A.

But (3) is false. Suppose that Alice is a virtuous person who has a choice to help exactly one of a million strangers. Whichever stranger she chooses to help, she does no wrong. But it is mathematically guaranteed that there is at least one stranger such that her chance of helping them is at most one in a million (for if pₙ is her chance of helping stranger number n, then p₁ + ... + p₁₀₀₀₀₀₀ ≤ 1, since she cannot help more than one; given that 0 ≤ pₙ for all n, it follows mathematically that for some n we have pₙ ≤ 1/1000000). So helping that particular stranger is something a virtuous person would be very unlikely to choose, but it isn’t wrong.

Or for a less weighty case, suppose I say something perfectly morally innocent to start off a conversation. Yet it is very unlikely that a virtuous person would have said so. Why? Because there are so very many perfectly morally innocent ways to start off a conversation, it is very unlikely that they would have chosen the same one I did.

Ethics and multiverse interpretations of quantum mechanics

Somehow it hasn’t occurred to me until yesterday that quantum multiverse theories (without the traveling minds tweak) undercut half of ethics, just as Lewis’s extreme modal realism does.

For whatever we do, total reality is the same, and hence no suffering is relieved, no joy is added, etc. The part of ethics where consequences matter is all destroyed. There is no point to preventing any evil, since doing so just shifts which branch of the multiverse one inhabits.

At most what is left of ethics is agent-centered stuff, like deontology. But that’s only about half of ethics.

Moreover, even the agent-centered stuff may be seriously damaged, depending on how one interprets personal identity in the quantum multiverse.

Consider three theories.

On the first, I go to all the outgoing branches, with a split consciousness. On this view, no matter what, there will be branches where I act well and branches where I act badly. So much or all of the agent-centered parts of ethics will be destroyed.

On the second, whenever branching happens, the persons in the branches are new persons. If so, then there are no agent-centered outcomes—if I am deliberating between insulting or comforting a suffering person, no matter what, I will do neither, but instead a descendant of me will insult and another descendant will comfort. Again, it’s hard to fit this with the agent-centered parts of ethics.

The third is the infinitely many minds theory on which there are infinitely many minds inhabiting my body, and whenever a branching happens, infinitely many move into each branch. In particular, I will move into one particular branch. On this theory, if somehow I can control which branch I go down (which is not clear), there is room for agent-centered outcomes. But this is not the most prominent of the multiverse theories.

Thursday, August 19, 2021

A philosophical advantage of quantum mechanics over Newtonian mechanics

We often talk as if quantum mechanics were philosophically much more puzzling than classical mechanics. But there is also a deep philosophical puzzle about Newtonian mechanics as originally formulated—the puzzle of velocities—which disappears on quantum mechanics.

The puzzle of velocities is this. To give a causal explanation of a Newtonian system’s behavior, we have to give the initial conditions for that system. These initial conditions have to include the positions and velocities (or momenta) of all the bodies in the system.

To see why this is puzzling, let’s imagine that t₀ is the first moment of the universe’s existence. Then the conditions at t₀ explain how things are at all times t > t₀. But how can there be velocities at t₀? A velocity is a rate of change of position over time. But if t₀ is the first moment of the universe’s existence, there were no earlier positions. Granted, there are later positions. But these later positions, given Newtonian dynamics, depend on the velocities at t₀ and hence cannot help determine what these velocities are.

One might try to solve this by saying that Newtonian dynamics implies that there cannot be a first moment of physical reality, that physical reality has to have always existed or that it exists on an interval of times open at the lower end. On either option, then, Newtonian dynamics would have to be committed to an infinite temporal regress, and that seems implausible.

Another solution would be to make velocities (or, more elegantly, momenta) equally primitive with positions (indeed, some mathematical formulations do that). On this view, that the velocity is the rate of change of position would no longer be a definition but a law of nature. This increases the number of laws of nature and the fundamental properties of things. And if it is a mere law of nature that velocity is the rate of change of position, then it would be metaphysically possible, by a miracle, for an object standing perfectly still for days to nonetheless have a high velocity. If that seems wrong, we could just introduce a technical term, say “movement propensity” (that’s kind of what “momentum” is), in place of “velocity”, and it would sound better. In any case, while the resulting theory would be mathematically equivalent to Newton’s, and it would solve the velocity problem, it would be a metaphysically different theory, since it would have different fundamental properties.

On the other hand, the whole problem is absent in quantum mechanics. The Schroedinger equation determines the values of the wavefunction at times later than t₀ simply on the basis of the values of the wavefunction at t₀. Granted, the cost is that we have a wavefunction instead of just positions. And in a way it is really a variant of the making-momenta-primitive solution to the Newtonian problem, because the wavefunction encodes all the information on positions and momenta.
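To make the contrast concrete, here is a crude sketch (explicit Euler step, free particle, periodic grid; all parameters are arbitrary choices): the Schroedinger step is computable from ψ at t₀ alone, while the Newtonian step needs position and velocity as two separate pieces of initial data.

```python
# One crude time step of the free Schroedinger equation, i dpsi/dt = -(1/2) psi'',
# from psi(t0) alone, versus one Newtonian step, which needs both x(t0) and v(t0).
import numpy as np

n, dx, dt = 256, 0.1, 1e-4
x = np.arange(n) * dx
psi0 = np.exp(-((x - 12.8) ** 2)).astype(complex)
psi0 /= np.linalg.norm(psi0)               # the ONLY initial datum needed

lap = (np.roll(psi0, 1) - 2 * psi0 + np.roll(psi0, -1)) / dx ** 2
psi1 = psi0 + dt * 0.5j * lap              # psi at t0 + dt, from psi at t0 alone

x0, v0 = 0.0, 3.0                          # Newton: x(t0) alone underdetermines x(t0 + dt)
x1 = x0 + v0 * dt
print(np.linalg.norm(psi1), x1)            # note: Euler is not exactly norm-preserving
```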

Wednesday, January 20, 2021

I can jump 100 feet up in the air

Consider a possible world w1 which is just like the actual world, except in one respect. In w1, in exactly a minute, I jump up with all my strength. And then consider a possible world w2 which is just like w1, but where moments after I leave the ground, a quantum fluctuation causes 99% of the earth’s mass to quantum tunnel far away. As a result, my jump takes me 100 feet in the air. (Then I start floating down, and eventually I die of lack of oxygen as the earth’s atmosphere seeps away.)

Here is something I do in w2: I jump 100 feet in the air.

Now, from my actually doing something it follows that I was able to do it. Thus, in w2, I have the ability to jump 100 feet in the air.

When do I have this ability? Presumably at the moment at which I am pushing myself off from the ground. For that is when I am acting. Once I leave the ground, the rest of the jump is up to air friction and gravity. So my ability to jump 100 feet in the air is something I have in w2 prior to the catastrophic quantum fluctuation.

But w1 is just like w2 prior to that fluctuation. So, in w1 I have the ability to jump 100 feet in the air. But whatever ability to jump I have in w1 at the moment of jumping is one that I already had before I decided to jump. And before the decision to jump, world w1 is just like the actual world. So in the actual world, I have the ability to jump 100 feet in the air.

Of course, my success in jumping 100 feet depends on quantum events turning out a certain way. But so does my success in jumping one foot in the air, and I would surely say that I have the ability to jump one foot. The only principled difference is that in the one foot case the quantum events are very likely to turn out to be cooperative.

The conclusion is paradoxical. What are we to make of it? I think it’s this. In ordinary language, if something is really unlikely, we say it’s impossible. Thus, we say that it’s impossible for me to beat Kasparov at chess. Strictly speaking, however, it’s quite possible, just very unlikely: there is enough randomness in my very poor chess play that I could easily make the kinds of moves Deep Blue made when it beat him. Similarly, when my ability to do something has extremely low reliability, we simply say that I do not have the ability.

One might think that the question of whether one is able to do something is really important for questions of moral responsibility. But if I am right in the above, then it’s not. Imagine that I could avert some tragedy only by jumping 100 feet in the air. I am no more responsible for failing to avert that tragedy than if the only way to avert it would be by squaring a circle. Yet I can jump 100 feet in the air, while no one can square a circle.

It seems, thus, that what matters for moral responsibility is not so much the answer to the question of whether one can do something, but rather answers to questions like:

  1. How reliably can one do it?

  2. How reliably does one think (or justifiably think or know) one can do it?

  3. What would be the cost of doing it?

Thursday, October 8, 2020

Microphysics and philosophy of mind

Much (but not all) contemporary philosophy of mind is written as if microphysics were fundamental physics. But as far as I know, only on those interpretations of quantum mechanics that disallow indeterminacy as to the number of particles can microphysics be fundamental physics. The most prominent such interpretation is Bohmianism. On most other interpretations, the most we can say about the number of particles is that we are in a superposition between states with different numbers of particles. But reality has to have determinate numbers of fundamental entities. The picture of reality we get from both relativity theory and mainstream interpretations of quantum mechanics other than Bohmianism and its close cousins is that fundamental physical reality consists of global entities such as the spacetime manifold or the wavefunction of the universe rather than microscopic entities like particles. (I am attracted to a non-mainstream interpretation on which the fundamental physical entities may include mid-sized things like dogs and trees.)

Sometimes, pretending microphysics is fundamental physics is excusable. For certain discussions, it doesn’t matter what the fundamental physics is—the arguments work equally well for global and local fundamental entities. In other cases, all that matters is relative fundamentality. Thus, facts about chemistry might be held to be more fundamental relative to biology, and facts about microphysics might be fundamental relative to chemistry, even if the microphysics facts themselves are not fundamental simpliciter, being reducible, say, to facts about global fields.

But even when the arguments do not formally rely on fundamental physics being microphysics, it is risky in a field so reliant on intuition to let one’s intuitions be guided by acting as if fundamental physics were microphysics. And doing this is likely to mis-focus one’s critical attention, say focusing one more on the puzzle of why the functioning of various neurons produces a unified consciousness than on the puzzle of how the functioning of a handful of global entities results in the existence of billions of minded persons.