Showing posts with label fine-tuning.

Thursday, September 12, 2024

Three-dimensionality

It seems surprising that space is three-dimensional. Why so few dimensions?

An anthropic answer seems implausible. Anthropic considerations might explain why we don’t have one or two dimensions—perhaps it’s hard to have life in one or two dimensions, Planiverse notwithstanding—but they don’t explain why we don’t have thirty or a billion dimensions.

A simplicity answer has some hope. Maybe it’s hard to have life in one or two dimensions, and three dimensions is the lowest dimensionality in which life is easy. But normally when we engage in simplicity arguments, mere counting of things of the same sort doesn’t matter much. If you have a theory on which in 2050 there will be 9.0 billion people, your theory doesn’t count as simpler in the relevant sense than a theory on which there will be 9.6 billion then. So why should counting of dimensions matter?

There is something especially mathematically lovely about three dimensions. Three-dimensional rotations are neatly representable by quaternions (just as two-dimensional ones are by complex numbers). There is a cross-product in three dimensions (admittedly as well as in seven!). Maybe the three-dimensionality of the world suggests that it was made by a mathematician or for mathematicians? (But a certain kind of mathematician might prefer an infinite-dimensional space?)
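For the curious, here is a minimal Python sketch of the quaternion point: a rotation of three-dimensional space can be packaged as a unit quaternion q and applied to a vector v by conjugation, v' = q v q*. (The function names and the sample rotation are merely illustrative.)

    import math

    def quat_mult(a, b):
        # Hamilton product of quaternions a = (w, x, y, z) and b = (w, x, y, z).
        aw, ax, ay, az = a
        bw, bx, by, bz = b
        return (aw*bw - ax*bx - ay*by - az*bz,
                aw*bx + ax*bw + ay*bz - az*by,
                aw*by - ax*bz + ay*bw + az*bx,
                aw*bz + ax*by - ay*bx + az*bw)

    def rotate(v, axis, angle):
        # Unit quaternion encoding a rotation by `angle` about the unit vector `axis`,
        # applied to v by conjugation: v' = q v q*.
        s = math.sin(angle / 2)
        q = (math.cos(angle / 2), axis[0]*s, axis[1]*s, axis[2]*s)
        q_conj = (q[0], -q[1], -q[2], -q[3])
        w, x, y, z = quat_mult(quat_mult(q, (0.0, *v)), q_conj)
        return (x, y, z)

    # Rotating (1, 0, 0) by 90 degrees about the z-axis gives roughly (0, 1, 0).
    print(rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2))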

Tuesday, December 5, 2023

Fields and finetuning

Here is an interesting fine-tuning issue, inspired by a talk I heard from Brian Cutter at the 2023 ACPA meeting.

It seems likely that physical reality will involve one or more fields: objects that assign values to points in space (“ordinary” space or configuration space), which values then govern the evolution of the universe.

The fine-tuning issue is this. A plausible rearrangement principle should allow any mathematical assignment of values of the field to the points in space as metaphysically possible. But intuitively “most” such assignments result in a configuration that cannot meaningfully evolve according to our laws of nature. So we want an explanation of the fine-tuning—why are we so lucky as to have an assignment that plays nicely with the laws of nature?

For a toy example, consider an electric field, which is a vector field E that generates a force F = qE on a particle of charge q. Intuitively, “most” vector fields will be nonmeasurable. But for a nonmeasurable electric field, we have no hope for a meaningful solution to the differential equations of motion. (OK, I’m ignoring the evolution of the field itself.)

For another example, suppose we think of the quantum wavefunction as a function over configuration space rather than as a vector in Hilbert space (though I prefer the latter formulation). If that function is nonmeasurable—and intuitively “most” are nonmeasurable—then we have no way to use quantum mechanics to predict the further evolution of this wavefunction. And if that function, while measurable, is not square integrable (I don’t know if there is a sense of “most” that applies here), then we have no way to use the Born rule to generate measurement predictions.

Wednesday, April 26, 2023

Multiverses as a skeptical hypothesis

  1. A multiverse hypothesis that counters the fine-tuning argument posits laws of nature that vary across physical reality.

  2. A hypothesis that posits laws of nature that vary across physical reality contradicts the uniformity of nature.

  3. A hypothesis that contradicts the uniformity of nature is a global skeptical hypothesis.

  4. Global skeptical hypotheses should be denied.

  5. So, a multiverse hypothesis that counters the fine-tuning argument should be denied.

The thought behind (1) is that the constants in the laws of nature are part and parcel of the laws. This can be denied. But still, the above argument seems to have some plausibility.

Thursday, July 5, 2018

Existence and arbitrary parameters

Suppose vague existence and vague identity are impossible. Consider cases where a seemingly insignificant difference makes a difference as to existence. For instance, imagine that a tomato plant is slowly crushed. At some point, what is there is no longer identical with the original plant. (One can run the story diachronically, or modify it and run it across worlds.)

There will thus be facts that determine when exactly the tomato plant ceased to exist. Moreover, these facts seem to call out for an explanation: Why should this precise degree of crushing make the plant not exist any more?

This degree of crushing seems to be an arbitrary parameter, either a contingent or a necessary one. One reaction to such an arbitrary parameter is to reject the assumption that there is no vagueness in existence or identity. But a theist has another option: The parameter is there, but it is wisely chosen by God.

Note 1: It may seem that an Aristotelian has an answer: The plant ceases to exist when its form departs. But that only pushes the question back to: Why does this precise degree of crushing make the form depart?

Note 2: There could be an indeterministic law of nature that says that given a degree of crushing there is a chance of the tomato plant ceasing to exist. But such a law would have seemingly arbitrary parameters, too.

Wednesday, March 21, 2018

Bohmianism and God

Bohmian mechanics is a rather nice way of side-stepping the measurement problem by having a deterministic dynamics that generates the same experimental predictions as more orthodox interpretations of Quantum Mechanics.

Famously, however, Bohmian mechanics suffers from having to make the quantum equilibrium hypothesis (QEH) that the initial distribution of the particles matches the wavefunction, i.e., that the initial particle density is given by (at least approximately) |ψ|². In other words, Bohmian mechanics requires the initial conditions to be fine-tuned for the theory to work, and we can then think of Bohmian mechanics as deterministic Bohmian dynamics plus QEH.

Can we give a fine-tuning argument for the existence of God on the basis of the QEH, assuming Bohmian dynamics? I think so. Given the QEH, nature becomes predictable at the quantum level, and God would have good reason to provide such predictability. Thus if God were to opt for Bohmian dynamics, he would be likely to make QEH true. On the other hand, in a naturalistic setting, QEH seems to be no better than an exceedingly lucky coincidence. So, given Bohmian dynamics, QEH does support theism over naturalism.

Theism makes it possible to be an intellectually fulfilled Bohmian. But I don’t know that we have good reason to be Bohmian.

Thursday, November 2, 2017

Four problems and a unified solution

A similar problem occurs in at least four different areas.

  1. Physics: What explains the values of the constants in the laws of nature?

  2. Ethics: What explains parameters in moral laws, such as the degree to which we should favor benefits to our parents over benefits to strangers?

  3. Epistemology: What explains parameters in epistemic principles, such as the parameters in how quickly we should take our evidence to justify inductive generalizations, or how much epistemic weight we should put on simplicity?

  4. Semantics: What explains where the lines are drawn for the extensions of our words?

There are some solutions that have a hope of working in some but not all the areas. For instance, a view on which there is a universe-spawning mechanism that induces random values of constants in laws of nature solves the physics problem, but does little for the other three.

On the other hand, vagueness solutions to 2-4 have little hope of helping in the physics case. Actually, though, vagueness doesn’t help much in 2-4, because there will still be the question of explaining why the vague regions are where they are and why they are fuzzy in the way they are—we just shift the parameter question.

In some areas, there might be some hope of having a theory on which there are no objective parameters. For instance, Bayesianism holds that the parameters are set by the priors, and subjective Bayesianism then says that there are no objective priors. Non-realist ethical theories do something similar. But such a move in the case of physics is implausible.

In each area, there might be some hope that there are simple and elegant principles that of necessity give rise to and explain the values of the parameters. But that hope has yet to be borne out in any of the four cases.

In each area, one can opt for a brute necessity. But that should be a last resort.

In each area, there are things that can be said that simply shift the question about parameters to a similar question about other parameters. For instance, objective Bayesianism shifts the question of how much epistemic weight we should put on simplicity to the question of priors.

When the questions are so similar, there is significant value in giving a uniform solution. The theist can do that. She does so by opting for these views:

  1. Physics: God makes the universe have the fundamental laws of nature it does.

  2. Ethics: God institutes the fundamental moral principles.

  3. Epistemology: God institutes the fundamental epistemic principles for us.

  4. Semantics: God institutes some fundamental level of our language.

In each of the four cases there is a question of how God does this. And in each there is a “divine command” style answer and a “natural law” style answer, and likely others.

In physics, the “divine command” style answer is occasionalism; in ethics and epistemology it just is “divine command”; and in semantics it is a view on which God is the first speaker and his meanings for fundamental linguistic structures are normative. None of these appeal very much to me, and for the same reason: they all make the relevant features extrinsic to us.

In physics, the “natural law” answer is theistic Aristotelianism: laws supervene on the natures of things, and God chooses which natures to instantiate. Theistic natural law is a well-developed ethical theory, and there are analogues in epistemology and semantics, albeit not very popular ones.

Tuesday, February 28, 2017

An unimpressive fine-tuning argument

One of the forces of nature that the physicists don’t talk about is the flexi force, whose value between two particles of masses m₁ and m₂ a distance r apart is given by F = km₁m₂r and which is radial. If k were too positive the universe would fall apart and if k were too negative the universe would collapse. There is a sweet spot of life-permissivity where k is very close to zero. And, in fact, as far as we know, k is exactly zero. :-)

Indeed, there are infinitely many forces like this, all of which have a “narrow” life-permitting range around zero, and where as far as we know the force constant is zero. But somehow this fine-tuning does not impress as much as the more standard examples of fine-tuning. Why not?

Probably it’s this: For any force, we have a high prior probability, independent of theism, that it has a strength of zero. This is a part of our epistemic preference for simpler theories. Similarly, if k is a constant in the laws of nature expressed in a natural unit system, we have a relatively high prior probability that k is exactly 1 or exactly 2 (thought experiment: in the lab you measure k up to six decimal places and get 2.000000; you will now think that it’s probably exactly 2; but if you had uniform priors, your posterior that it’s exactly 2 would be zero).
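Here is a minimal Python sketch of that thought experiment. The particular numbers (a 50% prior lump on k being exactly 2, the rest of the prior spread uniformly over [0, 4], and measurement noise of one part in a million) are made up for illustration.

    import math

    def normal_pdf(x, mu, sigma):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    obs, sigma = 2.000000, 1e-6      # the measurement and its (assumed) noise
    lump = 0.5                       # prior probability that k is exactly 2
    flat = (1 - lump) / 4.0          # the rest of the prior, spread uniformly over [0, 4]

    # Likelihood of the observation if k is exactly 2, versus the (approximate)
    # marginal likelihood under the flat component: since sigma is tiny, integrating
    # flat * N(obs; k, sigma) over [0, 4] gives approximately flat.
    like_exact = normal_pdf(obs, 2.0, sigma)
    posterior_exact = lump * like_exact / (lump * like_exact + flat)
    print(posterior_exact)  # very close to 1

    # With a purely uniform prior (lump = 0), the posterior probability that k is
    # *exactly* 2 is zero: a continuous density puts no mass on a single point.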

But this in turn leads to a different explanatory question: Why is it the case that we ought to—as surely we ought, pace subjective Bayesianism—have such a preference, and such oddly non-uniform priors?

Tuesday, November 10, 2015

Parameters in ethics

In physical laws, there are a number of numerical parameters. Some of these parameters are famously part of the fine-tuning problem, but all of them are puzzling. It would be really cool if we could derive the parameters from elegant laws that lack arbitrary-seeming parameters, but as far as I can tell most physicists doubt this will happen. The parameters look deeply contingent: other values for them seem very much possible. Thus people try to come up either with plenitude-based explanations where all values of parameters are exemplified in some universe or other, or with causal explanations, say in terms of universes budding off other universes or a God who causes universes.

Ethics also has parameters. To further spell out an example from Aquinas' discussion of the order of charity, fix a set of specific circumstances involving yourself, your father and a stranger, where both your father and the stranger are in average financial circumstances, but are in danger of a financial loss, and you can save one, but not both, of them from the loss. If it's a choice between saving your father from a ten dollar loss or the stranger from an eleven dollar loss, you should save your father from the loss. But if it's a choice between saving your father from a ten dollar loss or the stranger from a ten thousand dollar loss, you should save the stranger from the larger loss. As the loss to the stranger increases, at some point the wise and virtuous agent will switch from benefiting the father to benefiting the stranger. The location of the switch-over is a parameter.

Or consider questions of imposition of risk. To save one stranger's life, it is permissible to impose a small risk of death on another stranger, say a risk of one in a million. For instance, an ambulance driver can drive fast to save someone's life, even though this endangers other people along the way. But to save a stranger's life, it is not permissible to impose a 99% risk of death on another stranger. Somewhere there is a switch-over.

There are epistemic problems with such switch-overs. Aquinas says that there is no rule we can give for when we benefit our father and when we benefit a stranger, but we must judge as the prudent person would. However, I am not interested right now in the epistemic problem, but in the explanatory problem. Why do the parameters have the values they do? Now, granted, the particular switchover points in my examples are probably not fundamental parameters. The amount of money that a stranger needs to face in order that you should help the stranger rather than saving your father from a loss of $10 is surely not a fundamental parameter, especially since it depends on many of the background conditions (just how well off are your father and the stranger; what exactly is your relationship with your father; etc.). Likewise, the saving-risking switchover may well not be fundamental. But just as physicists doubt that one can derive the value of, say, the fine-structure constant (which measures the strength of electromagnetic interactions between charged particles) from laws of nature that contain no parameters other than elegant ones like 2 and π, even though it is surely a very serious possibility that the fine-structure constant isn't truly fundamental, so too it is doubtful that the switchover points in these examples can be derived from fundamental laws of ethics that contain no parameters other than elegant ones. If utilitarianism were correct, it would be an example of a parameter-free theory providing such a derivation. But utilitarianism predicts the incorrect values for the parameters. For instance, it incorrectly predicts that the risk value at which you need to stop risking a stranger's life to certainly save another stranger is 1, so that you should put one stranger in a position of 99.9999% chance of death if that has a certainty of saving another stranger.

So we have good reason to think that the fundamental laws of ethics contain parameters that suffer from the same sort of apparent contingency that the physical ones do. These parameters, thus, appear to call for an explanation, just as the physical ones do.

But let's pause for a second in regard to the contingency. For there is one prominent proposal on which the laws of physics end up being necessary: the Aristotelian account of laws as grounded in the essences of things. On such an account, for instance, the value of the fine-structure constant may be grounded in the natures of charged particles, or maybe in the nature of charge tropes. However, such an account really does not remove contingency. For on this theory, while it is not contingent that electromagnetic interactions between, say, electrons have the magnitude they do, it is contingent that the universe contains electrons rather than shmelectrons, which are just like electrons except that they engage in shmelectromagnetic interactions, which are just like electromagnetic interactions but with a different quantity playing the role analogous to the fine-structure constant. In a case like this, while technically the laws of physics are necessary, there is still a contingency in the constants, in that it is contingent that we have particles which behave according to this value rather than other particles that would behave differently. Similarly, one might say that it is a necessary truth that such-and-such preferences are to be had between a father and a stranger, and that this necessary truth is grounded in the essence of humanity or in the nature of a paternity trope. But there is still a contingency that our world contains humans and fathers rather than something functionally very similar to humans and fathers but with different normative parameters.

So in any case we have a contingency. We need a meta-ethics with a serious dose of contingency, contingency not just derivable from the sorts of functional behavior the agents exhibit, but contingency at the normative level--for instance, contingency as to appropriate endangering-saving risk tradeoffs. This contingency undercuts the intuitions behind the thesis that the moral supervenes on the non-moral. Here, both Natural Law and Divine Command rise to the challenge. Just as the natures of contingently existing charged objects can ground the fine-structure constants governing their behavior, the natures of contingently existing agents can ground the saving-risking switchover values governing their behavior. And just as occasionalism can have God's causation ground the arbitrary-seeming parameters in the laws of physics, so God's commands can ground the arbitrary-seeming parameters in ethics (the illuminating analogy between occasionalism and Divine Command is due to Mark Murphy). Can other theories rise to the challenge? Maybe. But in any case, it is a genuine challenge.

It would be particularly interesting if there were an analogue to the fine-tuning argument in this case. The fine-tuning argument arises because in some sense "most" of the possible combinations of values of parameters in the laws of physics do not allow for life, or at least for robust, long-lasting and interesting life. I wonder if there isn't a similar argument on the ethics side, say that for "most" of the possible combinations of parameters, we aren't going to have the good moral communities (the good could be prior to the moral, so there may be no circularity in the evaluation)? I don't know. But this would be an interesting research project for a graduate student to think about.

Objection: The switchover points are vague.

Response: I didn't say they weren't. The puzzle is present either way. Vagueness doesn't remove arbitrariness. With a sharp switchover point, just the value of it is arbitrary. But with a vague switchover point, we have a vagueness profile: here something is definitely vaguely obligatory, here it is definitely vaguely vaguely obligatory, here it is vaguely vaguely vaguely obligatory, etc. In fact, vagueness may even multiply arbitrariness, in that there are a lot more degrees of freedom in a vagueness profile than in a single sharp value.

Monday, October 19, 2015

Correcting Bayesian calculations

Normally, we take a given measurement to be a sample from a bell-curve distribution centered on the true value. But we have to be careful. Suppose I report to you the volume of a cubical cup. What the error distribution is like depends on how I measured it. Suppose I weighed the cup before and after filling it with water. Then the error might well have the normal distribution we associate with the error of a scale. But suppose instead I measure the (inner) length of one of the sides of the cup, and then take the cube of that length. Then the measurement of the length will be normally distributed, but not the measurement of the volume. Suppose that what I mean by "my best estimate" of a value is the mathematical expectation of that value with respect to my credences. Then it turns out that my best estimate of the volume shouldn't be the cube of the side length, but rather it should be L³ + 3Lσ², where L is the side-length and σ is the standard deviation in the side-length measurements. Intuitively, here's what happens. Suppose I measure the side length at 5 cm. Now, it's equally likely that the actual side length is 4 cm as that it is 6 cm. But 4³ = 64 and 6³ = 216. The average of these two equally-likely values is 140, which is actually more than 5³ = 125. So if by best-estimate I mean the estimate that is the mathematical expectation of the value with respect to my credences, the best-estimate for the volume should be higher than the cube of the best-estimate for the side-length. (I'm ignoring complications due to the question whether the side-length could be negative; in effect, I'm assuming that σ is quite a bit smaller than L.)
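A quick Monte Carlo check of the L³ + 3Lσ² claim, in Python, with made-up numbers (L = 5 cm, σ = 0.5 cm):

    import random

    L, sigma, n = 5.0, 0.5, 1_000_000

    # Average of (measured side length)**3 when the measurement has normal error.
    mc_estimate = sum(random.gauss(L, sigma) ** 3 for _ in range(n)) / n

    print(mc_estimate)                  # roughly 125.75
    print(L ** 3 + 3 * L * sigma ** 2)  # exactly 125.75
    print(L ** 3)                       # 125: the naive estimate, which is too low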

There is a very general point here. Suppose that by the best estimate of a quantity I mean the mathematical expectation of that quantity. Suppose that the quantity y I am interested in is given by the formula y = f(x), where x is something I directly measure and where my measurement of x has a symmetric error distribution (errors of the same magnitude in either direction are equally likely). Then if f is a strictly convex function, my best estimate for y should actually be bigger than f(x): simply taking my best estimate for x and applying f will underestimate y. On the other hand, if f is strictly concave, then my best estimate for y should be smaller than f(x).

But now let's consider something different: estimating the weight of evidence. Suppose I make a bunch of observations and update in a Bayesian way on the basis of them to arrive at a final credence. Now, it turns out that when you formulate Bayes' theorem in terms of the log-odds-ratio, it becomes a neat additive theorem:

  • posterior log-odds-ratio = prior log-odds-ratio + log-likelihood-ratio.
[If p is the probability, the log-odds-ratio is log(p/(1−p)). If E is the evidence and H is the hypothesis, the log-likelihood-ratio is log(P(E|H)/P(E|~H)).] As we keep adding new evidence into the mix, we keep on adding new log-likelihood-ratios to the log-odds-ratio. Assuming competency in doing addition, there are two or three sources of error--sources of potential divergence between my actual credences and the rational credences given the evidence. First, I could have stupid priors. Second, I could have the wrong likelihoods. Third, perhaps, I could fail to identify the evidence correctly. Given the additivity of these errors, it's not unreasonable to think that error in the log-odds-ratio will be approximately normally distributed. (All I will need for my argument is that it has a distribution symmetric around some value.)
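Here is a small Python check that the additive log-odds form agrees with Bayes' theorem computed directly; the prior and likelihoods are arbitrary placeholders.

    import math

    def log_odds(p):
        return math.log(p / (1 - p))

    def from_log_odds(x):
        return math.exp(x) / (math.exp(x) + 1)

    prior_H = 0.2                  # P(H)
    like_H, like_not_H = 0.7, 0.1  # P(E|H), P(E|~H)

    # Bayes' theorem computed directly.
    posterior_direct = prior_H * like_H / (prior_H * like_H + (1 - prior_H) * like_not_H)

    # The additive form: prior log-odds plus the log-likelihood-ratio.
    posterior_additive = from_log_odds(log_odds(prior_H) + math.log(like_H / like_not_H))

    print(posterior_direct, posterior_additive)  # both about 0.636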

But as the case of the cubical cup shows, it does not follow that the error in the credence will be normally distributed. If x is the log-odds-ratio and p is the probability or credence, then p = eˣ/(eˣ + 1). This is a very pretty function. It is concave for log-odds-ratios bigger than 0, corresponding to probabilities bigger than 1/2, and convex for log-odds-ratios smaller than 0, corresponding to probabilities less than 1/2, though it is actually fairly linear over a range of probabilities from about 0.3 to 0.7.

We can now calculate an estimate of the rational credence by applying the function eˣ/(eˣ + 1) to the log-odds-ratio. This will be equivalent to the standard Bayesian calculation of the rational credence. But as we learn from the cube case, we don't in general get the best estimate of a quantity y that is a mathematical function of another quantity x by measuring x with normally distributed error and computing the corresponding y. When the function in question is convex, my best estimate for y will be higher than what I get in this way. When the function is concave, I should lower it. Thus, as long as we are dealing with small normal error in the log-odds-ratio, when we are dealing with probabilities bigger than around 0.7, I should lower my credence from that yielded by the Bayesian calculation, and when we are dealing with probabilities smaller than around 0.3, I should raise my credence relative to the Bayesian calculation. When my credence is between 0.3 and 0.7, to a decent approximation I can stick to the Bayesian credence, as the transformation function between log-odds-ratios and probabilities is pretty linear there.

How much difference does this correction to Bayesianism make? That depends on what the actual normally distributed error in log-odds-ratios is. Let's make up some numbers and plug into Derive. Suppose my standard deviation in log-odds-ratio is 0.4, which corresponds to an error of about 0.1 in probabilities when around 0.5. Then the correction makes almost no difference: it replaces a Bayesian's calculation of a credence 0.01 with a slightly more cautious 0.0108, say. On the other hand, if my log-odds-ratio standard deviation is 1, which corresponds with a variation of probability of around plus or minus 0.23 when centered on 0.5, then the correction changes a Bayesian's calculation of 0.01 to the definitively more cautious 0.016. But if my log-odds-ratio standard deviation is 2, corresponding to a variation of probability of 0.38 when centered on 0.5, then the correction changes a Bayesian's calculation of 0.01 to 0.04. That's a big difference.
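For those without Derive at hand, here is a rough Python sketch of the correction: it numerically integrates eˣ/(eˣ + 1) against a normal distribution on the log-odds scale, centered at the log-odds of the Bayesian credence. The standard deviations are the made-up values from above, and the outputs should land in the neighborhood of the corrected credences just discussed.

    import math

    def logistic(x):
        return math.exp(x) / (math.exp(x) + 1)

    def corrected_credence(p, sd, steps=20001, width=10):
        # Riemann-sum integration of logistic(x) * N(x; mu, sd) over mu +/- width*sd.
        mu = math.log(p / (1 - p))
        step = 2 * width * sd / (steps - 1)
        total = 0.0
        for i in range(steps):
            x = mu - width * sd + i * step
            density = math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
            total += logistic(x) * density
        return total * step

    # Expect outputs in the neighborhood of 0.011, 0.016, and 0.04, as discussed above.
    for sd in (0.4, 1.0, 2.0):
        print(sd, corrected_credence(0.01, sd))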

There is an important lesson here. When I am badly unsure of the priors and/or likelihoods, I shouldn't just run with my best guesses and plug them into Bayes' theorem. I need to correct for the fact that my uncertainty about priors and/or likelihoods is apt to be normally (or at least symmetrically about the right value) distributed on the log-odds scale, not on the probability scale.

This could be relevant to the puzzle that some calculations in the fine-tuning argument yield way more confirmation than is intuitively right (I am grateful to Mike Rota for drawing my attention to the last puzzle, in a talk he gave at the ACPA).

Monday, July 22, 2013

Fine-tuning and best-systems accounts of laws

According to best-systems accounts of laws, the laws are the theorems of the best system correctly describing our world. The best system, roughly, is one that optimizes for informativeness (telling us as much as possible about our world) and brevity of expression.

Now, suppose that there is some dimensionless constant α, say the fine-structure constant, which needs to be in some narrowish range to have a universe looking like ours in terms of whether stars form, etc. Simplify by supposing that there is only one such constant (in our world, there are probably more). Suppose also, as might well be the case, that this constant is a typical real number in that it is not capable of a finite description (in the way that e, π, 1, −8489/919074/7 are)—to express it one needs something like an infinite decimal expansion. The best system will then not contain a statement of the exact value for α. An exact value would require an infinitely long statement, and that would destroy the brevity of the best system. But specifying no value at all would militate against informativeness. By specifying a value to sufficient precision to ensure fine-tuning, the best system thereby also specifies that there are stars, etc.

Suppose the correct value of α is 0.0029735.... That's too much precision to include in the best system—it goes against brevity. But including in the best system that 0.0029 < α < 0.0030 might be very informative—suppose, for instance, that it implies fine-tuning for stars.

But then on the best-systems account of laws, it would be required by law that the first four digits of α after the decimal point be 0029, but there would be no law for the further digits. But surely that is wrong. Surely either all the digits of α are law-required or none of them are.

Tuesday, September 25, 2012

Sacrificing the fine-tuning argument to the argument from evil

The argument from evil is no stronger an argument than the fine-tuning argument. Moreover, the two are nicely paired up. Just as the fine-tuning argument seems to be seriously weakened by supposing a multiverse (since if there are infinitely many worlds, it's less surprising that some support life), so too the argument from evil is seriously weakened by supposing a multiverse of all creation-worthy worlds (since then there will presumably exist infinitely many worlds with lots of evils, as long as they are creation-worthy).

So here is a dialectical move a theist can make. Just sacrifice the fine-tuning argument to the argument from evil. Let the two cancel out! That still leaves the theist with a number of powerful arguments such as the cosmological argument, the argument from religious experience, the argument from moral epistemology, the argument from plausible miracle reports, the argument from consciousness and the argument from nomic regularity. The atheist, however, is left with little ammunition, besides some minor arguments concerning the exact formulation of divine attributes, which minor arguments can be balanced off with less weighty arguments for theism, like the ontological argument or the argument from the experience of our lives as planned by another.

And so the balance of evidence, even if one does not take particular theistic arguments as apodeictic (I think one should do that in the case of the cosmological argument), strongly favors theism.

Monday, March 1, 2010

The multiverse and fine-tuning

I was telling a friend about the multiverse explanation for fine-tuning. He asked me a question that I had never thought about: Why assume that the conditions in different universes would be different? Maybe it's all the same, and so the multiverse does not help with fine-tuning.

In fact, it seems the point can be strengthened. The constants in the laws of nature appear to be the same on earth, on the moon, in M 110 and around distant quasars. By induction we should assume they are the same everywhere. Granted, on some theories other island universes are not connected to ours (though on other theories, there is a containing de Sitter space, and on some theories the other island universes are just very far away). But while that may weaken the induction, it does not destroy it. Even before Europeans heard about Australia and Australians heard about Europe, each group had reason to suppose that apparently basic constants in the laws of nature would be the same in the other place, even though the two places are not landwise connected. Granted, however, the judgment whether some constant is basic is defeasible—thus, if one mistakenly takes the local gravitational acceleration to be a basic constant, one will mistakenly think it is the same on a high mountain as in a valley. But while a judgment of basicality is defeasible, it can still be reasonable.

Now, some multiverse theories grow out of a particular physical theory that implies a variation of constants, say because some universe-generating process is posited. So the point does not damage all multiverse-based explanations of fine-tuning. But it does raise the evidential bar: the defeasible presumption is that if there are other universes, they are very much like ours.