Thursday, September 28, 2023

A schematic argument against naturalistic reductions

Here is an argument schema.

  1. If X is reducible to the natural, then likely the vast improvement of natural science over the last three hundred years would have led to a much better knowledge of X.

  2. If X is not reducible to the natural, then it is not likely that the vast improvement of natural science over the last three hundred years would have led to a much better knowledge of X.

  3. The vast improvement of natural science over the last three hundred years has not led to a much better knowledge of X.

  4. So, probably, X is not reducible to the natural.

Some options for X:

  • ethics

  • aesthetics

  • value in general.

I think the best response would be to dispute (1), by saying that (1) is only plausible if we know how to do the reduction. The mere existence of a reduction, when we do not know how to run it, is not enough.

Maybe. But I still think we get some evidence against reductionistic theories in ethics, aesthetics and value in general from the fact that great progress in science hasn’t led to great progress in these areas.

Humeanism about causation and functionalism about mind

Suppose we combine a Humean account of causation on which causation is a function of the pattern of intrinsically acausal events in reality with a functionalist account of consciousness. (David Lewis, for instance, accepted both.)

Here is an interesting consequence. Whether you are now conscious depends on what will happen in the future. For if the world were to radically change 14 billion years from the Big Bang, i.e., 200 million years from now, in such a way that the regularities that held for the first 14 billion years would not be laws, then the causal connections that require these regularities to be laws would not obtain either, and hence (unless we got lucky and new regularities did the job) our brains would lack the kind of causal interconnections that are required for a functionalist theory of mind.

This dependence of whether we are now conscious on what will happen in the future is intuitively absurd.

But suppose we embrace it. Then if functionalism is the necessary truth about the nature of mind, the fact that we are now conscious necessarily implies that the future will not be such as to disturb the lawlike regularities on which our consciousness is founded. In other words, on the basis of the fact that there are now mental states, one can a priori conclude things about the arrangement of physical objects in the future.

Indeed, this opens up the way for specific reasoning of the following sort. Given the constitution of human brains, and given functionalism, for these brains to exhibit mental states of the sort they do, such-and-such generalizations must be special cases of laws of nature. But for there to be such laws of nature, the future must be such-and-such. So, we now have room for substantive a priori predictions of the future.

This all sounds very un-Humean. Indeed, it sounds like a direct contradiction to the Humean idea that reasoning from present to future is merely probabilistic. But while it is very counterintuitive, it is not actually a contradiction to the Humean idea. For on functionalism plus Humeanism about causation, facts about present mental states are not facts about the present—they are facts about the universe as a whole!

(This was sparked by some related ideas by Harrison Jennings.)

Monday, September 25, 2023

The Principles of Sufficient and Partial Reasons

I have argued that the causal account of metaphysical possibility implies the Principle of Sufficient Reason (see Section 2.2.6.6 here). The argument was basically this: If p is contingently true but unexplained, then let q be the proposition that p is unexplained but true. Consider now a world w where p is false. In w, the proposition q will be possible (by the Brouwer axiom). So by the causal account of modality, something can start a chain of causes leading to q being true. Which, I claimed, is absurd, since that chain would lead both to p being true and to p being unexplained. But the chain would explain p, so we have absurdity.

But it isn’t absurd, or at least not immediately! For the chain need not explain p. It might only explain the aspects of p that do not obtain in w. For a concrete example, suppose that p is a conjunction of p1 and p2, and p1 is false in w but p2 is true. Then a chain that leads to p being true need not explain p2: it might only explain p1, and might leave p2 as is.

I think what my argument for the PSR establishes is a weaker conclusion than the PSR: the Principle of Partial Reason (PPR), that every contingent truth has a partial explanation.

I am pretty sure that PPR plus causal finitism implies PSR, and so the modality argument for PSR can be rescued, albeit at the cost of assuming causal finitism. And, intuitively, it would be weird if PPR were true but PSR were not.

Thursday, September 21, 2023

Dry eternity

Koons and I have used causal paradoxes of infinity, such as Grim Reapers, to argue against infinite causal chains, and hence against an infinite causally-interconnected past. A couple of times people have asked me what I think of Alex Malpass’s Dry Eternity paradox, which is supposed to show that similar problems arise if you have God and an infinite future. The idea is that God is going to stop drinking (holy water, apparently!) at some point, and so he determines henceforth to act by the following rule:

  1. “Every day, God will check his comprehensive knowledge of all future events to see if he will ever drink again. If he finds that he does not ever drink again, he will celebrate with his final drink. On the other hand, if he finds that his final drink is at some day in the future, he does not reward himself in any way (specifically, he does not have a drink all day).”

This leads to a contradiction. (Either there is or is not a day n such that God does not drink on any day after n. If there is such a day, then on day n + 1 God sees that he does not drink on any day after n + 1 and so by the rule God drinks on day n + 1. Contradiction! If there is no such day, then on every day n God sees that he will drink on a day later than n, and so he doesn’t drink on n, and hence he doesn’t ever drink, so that today is a day such that God does not drink on any day after it. Contradiction, again!)

Is this a problem for an infinite future? I don’t think so. For consider this rule.

  2. On Monday, God will drink if and only if he foresees that he won’t drink on Tuesday. On Tuesday, God will drink if and only if he remembers that he drank on Monday.

Obviously, this is a rule God cannot adopt for Monday and Tuesday, since then God drinks on Monday if and only if God doesn’t drink on Monday. But this paradox doesn’t involve an infinite future, just two days.

What’s going on? Well, it looks like in (2) there are two divine-knowledge-based rules—one for Monday and one for Tuesday—each of which can be adopted individually, but which cannot both be adopted, much like in (1) there are infinitely many divine-knowledge-based rules—one for each future day—any finite number of which can be adopted, but where one cannot adopt infinitely many of them.
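The two-day case can even be checked mechanically. Here is a minimal sketch (the names are mine, not Malpass’s) that brute-forces the four possible drinking assignments:

```python
from itertools import product

# Booleans say whether God drinks on the given day.
monday_rule = lambda mon, tue: mon == (not tue)   # drink Mon iff no drink Tue
tuesday_rule = lambda mon, tue: tue == mon        # drink Tue iff drank Mon

def satisfying(*rules):
    """All (mon, tue) assignments consistent with every given rule."""
    return [(m, t) for m, t in product([True, False], repeat=2)
            if all(rule(m, t) for rule in rules)]
```

Each rule alone has consistent assignments, but `satisfying(monday_rule, tuesday_rule)` comes back empty, mirroring the contradiction derived above.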

What we learn from (2) is that there are logical limits to the ways that God can make use of divine foreknowledge. From (2), we seem to learn that one of these logical limits is that circularity needs to be avoided: a decision on Monday that depends on a decision on Tuesday and vice versa. From (1), we seem to learn that another one of these logical limits is that ungrounded decisional regresses need to be avoided: a decision that depends on a decision that depends on a decision and so on ad infinitum. This last is a divine analogue to causal finitism (the doctrine that nothing can have infinitely many things in its causal history), while what we got from (2) was a divine analogue to the rejection of causal circularity. It would be nice if there were some set of principles that would encompass both the divine and the non-divine cases. But in any case, Malpass’s clever paradox does no harm to causal finitism, and only suggests that causal finitism is a special case of a more general theory that I have yet to discover the formulation of.

The infinite future problem for causal accounts of metaphysical possibility

Starting with my dissertation, I’ve defended an account of metaphysical possibility on which it is nothing other than causal possibility. I would try to define this as follows:

  • p is possible₀ iff p is actually true

  • p is possibleₙ₊₁ iff things have the causal power to make it be that p is possibleₙ.

  • p is possible iff p is possibleₙ for some n.

I eventually realized that this runs into problems with infinite future cases. Suppose a coin will be tossed infinitely many times, and, as we expect, will come up heads infinitely many times and tails infinitely many times. Let p be the proposition that all the tosses will be heads. Then p is false but possible. Moreover, it is easy to convince oneself that it’s not possibleₙ for any finite n. Possibilityₙ involves n branchings from the actual world, while p requires infinitely many branchings from the actual world.
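The recursion can be made vivid with a toy reachability model. In this sketch (the world-graph and proposition are invented purely for illustration), each exercise of a causal power is one branching step, so the n-th level of possibility is just n-step reachability:

```python
# Toy world-graph: powers[w] lists the worlds that things in w have the
# causal power to bring about (invented data).
powers = {
    "w0": ["w1"],
    "w1": ["w2"],
    "w2": [],
}

def possible_k(p, world, k):
    """possible_0: p is true at the world; possible_(k+1): something
    there can bring about a world at which p is possible_k."""
    if k == 0:
        return p(world)
    return any(possible_k(p, w, k - 1) for w in powers[world])

def possible(p, world, max_k=10):
    """p is possible iff it is possible_k for some finite k (searched
    here up to a cutoff). A proposition that needs infinitely many
    branchings is true at no finite level, so the union over n misses it.
    """
    return any(possible_k(p, world, k) for k in range(max_k + 1))
```

The coin-toss proposition is problematic precisely because it escapes every finite level of this hierarchy, and hence the union.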

This has worried me for years, and I still don’t have a satisfying solution.

But yesterday I realized a delightful fact. This problem does nothing to undercut the basic insight of my account of metaphysical possibility, namely that metaphysical possibility is causal possibility. All the problem does is undercut one initially plausible way to give an account of causal possibility. But if we agree that there is such a thing as causal possibility, and I think we should, then we can still say that metaphysical possibility is causal possibility, even if we do not know exactly how to define causal possibility in terms of causal powers.

(There is one danger. Maybe the true account of causal possibility depends on metaphysical possibility.)

Wednesday, September 20, 2023

A dilemma for best-systems accounts of laws

Here is a dilemma for best-systems accounts of laws.

Either:

  1. law-based scientific explanations invoke the lawlike generalization itself as part of the explanation, or

  2. they invoke the further fact that this generalization is a law.

Thus, if it is a law that all electrons are charged, and Bob is an electron, on (1) we explain Bob’s charge as follows:

  3. All electrons are charged.

  4. Bob is an electron.

  5. So, and that’s why, Bob is charged.

But on (2), we replace (3) with:

  6. It is a law that all electrons are charged.

Both options provide the Humean with problems.

If it is just the lawlike generalization that explains, then the explanation is fishy. The explanation of why Bob is charged in terms of all electrons being charged seems too close to explaining a proposition by a conjunction that includes it:

  7. Bob is charged because Bob is charged and Alice is charged.

Indeed both (3)–(5) and (7) are objectionable cases of explaining the mysterious by the more mysterious: the conjunction is more mysterious than its conjunct and the universal generalization is more mysterious than its instances.

On the other hand, suppose that our explanation of why Bob is charged is that it’s a law that all electrons are charged. This sounds correct in general, but is not appealing on a best-systems view. For on a best-systems view, what the claim that it’s a law that all electrons are charged adds to the claim that all electrons are charged is that the generalization that all electrons are charged is sufficiently informative and brief to make it into the best system. But the fact that it is thus informative and brief does not help it explain anything.

Moreover, if the problem with (3)–(5) was that universal generalizations are too much like conjunctions, the problem will not be relieved by adding more conjuncts to the explanation, namely that the generalization is sufficiently informative and brief.

Gratuitous and objective evil

Suppose, highly controversially, that no defensible atheist account of objective value is possible. Now consider a paradigmatic apparently gratuitous horrendous evil E—say, one of the really awful things done to children described by Ivan in the Brothers Karamazov. The following two claims are both intuitive:

  1. E is gratuitous

  2. E is objectively evil.

But if there is no defensible account of objective evil on atheism, then (1) and (2) are in serious tension. For if there cannot be objective evil on atheism, then (2) cannot be true on atheism. Thus, (2) implies theism. But on the other hand, (1) implies atheism, since E is gratuitous just in case, if God existed, E would be an evil that God has conclusive moral reason to prevent.

On our initial assumption about atheism, then, we need to choose between (1) and (2). And here there is no difficulty. That the things described by Ivan are objectively evil is way more clear than that God would have conclusive moral reason to prevent them, even if the latter claim is very likely in isolation.

Is a defensible atheist account of objective value possible? I used to think there was no special difficulty, but I’ve since come to be convinced that probably the only tenable account of objective value is an Aristotelian one based on form, and that human form requires something like a divine source. That said, even if objective value is something the atheist can defend, nonetheless knowledge of objective value is very difficult for the atheist. For objective value has to be (I know this is controversial) non-natural, and on atheism it is very difficult to explain how we could acquire the power to get in touch with non-natural aspects of reality.

But if knowledge of objective value is very difficult for the atheist, then we have tension between:

  1. E is gratuitous

  3. I know that E is objectively evil.

And (3) is still, I think, significantly more plausible than (1).

Tuesday, September 19, 2023

The evidential force of there being at least one gratuitous evil is low

Suppose we keep fixed in our epistemic background K general facts about human life and the breadth and depth of evil in the world, and consider the impact on theism of the additional piece of evidence that at least one of the evils is apparently gratuitous—i.e., one for which strenuous investigation has failed to find a theodicy.

Now, clearly, finding that there is not even one apparently gratuitous evil would be extremely good evidence for the existence of God—for if there is no God, it is amazing if, of the many evils there are, none is apparently gratuitous, but less amazing if there is a God. And hence, by a standard Bayesian theorem, finding that there is at least one apparently gratuitous evil must be some evidence against the existence of God. But at the same time, the fact that F is strong evidence for T does not mean that the absence of F is strong evidence against T. Whether it is or is not depends on details.

But the background K contains some relevant facts. One of these is that we are limited knowers, and while we have had spectacular successes in our ability to understand the world and events around us, it is not incredibly uncommon to find things that have (so far) defeated our strenuous investigation. Some of these are scientific questions, and some are interpersonal questions—“Why did he do that?” Given this, it seems unsurprising, even if God exists, that we would sometimes be stymied in figuring out why God did something, including why he failed to prevent some evils. Thus, the probability of at least one of the vast numbers of evils in K being apparently gratuitous, given the existence of God, is pretty high, though slightly lower than given the non-existence of God. This means that the evidential force for atheism of there being at least one apparently gratuitous evil is fairly low.
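The asymmetry can be made concrete with a toy Bayes-factor calculation (all probabilities below are invented for illustration):

```python
def bayes_factor(p_e_given_theism, p_e_given_atheism):
    """Likelihood ratio for evidence E: values > 1 favor theism,
    values < 1 favor atheism."""
    return p_e_given_theism / p_e_given_atheism

# Invented numbers: no apparently gratuitous evil at all would be only
# somewhat surprising on theism, but astonishing on atheism.
p_none_theism, p_none_atheism = 0.3, 0.001

bf_none = bayes_factor(p_none_theism, p_none_atheism)                  # ≈ 300
bf_at_least_one = bayes_factor(1 - p_none_theism, 1 - p_none_atheism)  # ≈ 0.7
```

On these made-up numbers, finding no apparently gratuitous evil would be strong evidence for theism (factor around 300), while finding at least one is only weak evidence against it (factor around 0.7)—exactly the Bayesian pattern at issue.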

Furthermore, one can come up with a theodicy for the gratuitous part of a gratuitous evil. When a person’s motives are not transparent to us, we are thereby provided with an opportunity for exercising the virtue of trust. And conversely, a person’s always explaining themselves when they have seemed unjustified builds not trust but suspicion. Given the evils themselves as part of the background K, that some of them be apparently gratuitous provides us with an opportunity to exercise trust in God in a way that we could not if none of the evils were apparently gratuitous. Given K (which presumably includes facts about our not being always in the luminous presence of God), it would be somewhat surprising if God always made sure we could figure out why he allowed evils. Again, this makes the evidential force for atheism of the apparent gratuity of evil fairly low.

Now, it may well be that when we consider the number or the type (perhaps they are of a type where divine explanations of permission would be reasonably expected) of apparently gratuitous evils, things change. Nothing I have said in this post undermines that claim. My only point is that the mere existence of an apparently gratuitous evil is very little evidence against theism.

Monday, September 18, 2023

Hiddenness and evil

There is some discussion in the literature about whether the problem of hiddenness is a species of the problem of evil. I think the theist should say that it is, and can even identify the type of evil it is. There are some important propositions which it is normal for a human being to know, or at least to believe, ignorance of which is constitutive of not having a flourishing life. Examples include not realizing that one’s fellows are persons, not realizing that one is a person, not possessing basic moral truths, etc. If God in fact exists, then the proposition that God exists falls in the same category of propositions: ones it is normal for humans to believe, and without believing which we cannot flourish.

A corollary of this is that if we can find a good theodicy for other cases of ignorance of truths needed for a flourishing human life, then we have hope that that theodicy would apply to ignorance of the existence of God.

Wednesday, September 13, 2023

Ontology and duck typing

Some computer languages (notably Python) favor duck-typing: instead of relying on checking whether an object officially falls under a type like duck, one checks whether it quacks, i.e., whether it has the capabilities of a duck object. You can have a dog object that behaves like a vector, and a vector object that behaves like a dog.
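For instance, here is a minimal Python sketch (class and function names are my own) of a dog object that “quacks like” a 2D vector:

```python
# A dog that supports vector operations: Python never checks that it IS
# a Vector, only that it responds to the operations vectors respond to.
class Dog:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):          # vector addition
        return Dog(self.x + other.x, self.y + other.y)

    def __mul__(self, scalar):         # scalar multiplication
        return Dog(self.x * scalar, self.y * scalar)

    def speak(self):                   # and it still behaves like a dog
        return "Woof!"

def midpoint(a, b):
    """Works on anything with + and *, regardless of its official type."""
    return (a + b) * 0.5

m = midpoint(Dog(0, 0), Dog(4, 2))     # a Dog used as a vector
```

Nothing here checks whether the arguments officially fall under a vector type; all that matters is that they have the relevant capabilities.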

It would be useful to explore how well one could develop an ontology based on duck-typing rather than on categories. For instance, instead of some kind of categorical distinction between particulars and universals, one simply distinguishes between objects that have the capability to instantiate and objects that have the capability to be instantiated, without any prior insistence that if you can be instantiated, then you are abstract, non-spatiotemporal, etc. Now it may turn out that either contingently or necessarily none of the things that are spatiotemporal can be instantiated, but on the paradigm I am suggesting, the explanation of this would not lie in a categorical difference between spatiotemporal entities and entities that have the capability of being instantiated. It may lie in some incompatibility between the capabilities of being instantiated and occupying spacetime (though it’s hard to see what that incompatibility would be) or it may just be a contingent fact that no object has both capabilities.

As a theist, I think there is a limit to the duck typing. There will, at least, need to be a categorical difference between God and creature. But what if that’s the only categorical difference?

Tuesday, September 12, 2023

On two problems for non-Humean accounts of laws

There are three main views of laws:

  • Humeanism: Laws are a summing up of the most important patterns in the arrangement of things in spacetime.

  • Nomism: Laws are necessary relations between universals.

  • Powerism: Laws are grounded in the essential powers of things.

The deficiencies of Humeanism are well known. There are also deficiencies in nomism and powerism, and I want to focus on two.

The first is that they counterintuitively imply that laws are metaphysically necessary. This is well-known.

The second is perhaps less well-known. Nomism and powerism work great for fundamental laws, and for those non-fundamental laws that are logical deductions from the fundamental laws. But there is a category of non-fundamental laws, which I will call impure laws, which are not derivable solely from the fundamental laws, but from the fundamental laws conjoined with certain facts about the arrangement of things in spacetime.

The most notorious of the impure laws is the second law of thermodynamics, that entropy tends to increase. To derive this from the fundamental laws, we need to add some fact about the initial conditions, such as that they have a low entropy. The nomic relations between universals and the essential powers of things do not yield the second law of thermodynamics unless they are combined with facts about which universals are instantiated or which things with which essential powers exist.

A less obvious example of an impure law seems to be conservation of energy. The necessary relations between universals will tell us that in interactions between things with precisely such-and-such universals energy is conserved. And it might well be that the physical things in our world only have these kinds of energy-conserving universals. But things whose universals don’t conserve energy are surely metaphysically possible, and the fact that such things don’t exist is a contingent fact, not grounded in the necessary relations between universals. Similarly, substances with causal powers that do not conserve energy are metaphysically possible, and the non-existence of such things is at best a contingent fact. Thus, to derive the law of conservation of energy, we need not only the fundamental laws grounded in relations between universals or essential powers, but we also need the contingent fact that conservation-violators don’t exist.

Finally, the special sciences (geology, biology, etc.) are surely full of impure laws, some of them perhaps even merely local.

One might bite the bullet and say that the impure laws are not laws at all. But that makes the nomist and powerist accounts inadequate to how “law” gets used in science.

The Humean stands in a different position. If they can account for fundamental laws, impure laws are easy, since the additional grounding is precisely a function of patterns of arrangement. The Humean’s difficulty is with the fundamental laws.

There is a solution, and this is for the nomist and powerist to say that “law of nature” is spoken in many ways, analogically. The primary sense is the fundamental laws that the theories nicely account for. But there are also non-fundamental laws. The pure ones are logical consequences of the fundamental laws, and the impure ones are particularly important consequences of the fundamental laws conjoined with important patterns of things in nature. In other words, impure laws are to be accounted for by a hybrid of the non-Humean theory and the Humean theory.

Now let’s come back to the other difficulty: the necessity worry. I submit that our intuitions about the contingency of laws of nature are much stronger in the case of impure laws than fundamental laws or pure non-fundamental laws. It is not much of a bullet to bite to say that matching charges metaphysically cannot attract—it is quite plausible that this is explained by the very nature of charge. It is the impure laws where contingency is most obvious: it is metaphysically possible for entropy to decrease (funnily enough, many Humeans deny this, because they define the direction of time in terms of the increase of entropy), and it is metaphysically possible for energy conservation to be violated. But on our hybrid account, the contingency of impure laws is accounted for by the Humean element in them.

Of course, we have to check whether the objections to Humeanism apply to the hybrid theory. Perhaps the most powerful objection to a Humean account of laws is that it only sums up and does not explain. But the hybrid theory can explain, because it doesn’t just sum up—it also cites some fundamental laws. Moreover, the patterns that need to be added to get the impure laws may just be initial conditions, such as that the initial entropy is low or that no conservation-violators come into existence. But fundamental law plus initial conditions is a perfectly respectable form of explanation.

Ontology as a contingent science

Consider major dividing lines in ontology, such as between trope theory and Platonism. Assume theism. Then all possibilities for everything other than God are grounded in God.

If God is ontologically like us, and in particular not simple, then it is reasonable to think that the correct ontological theory is necessarily determined by God’s nature. For instance, if God has tropes, then necessarily trope theory holds for creatures. If God participates in distinct Platonic forms like Divinity and Wisdom, then necessarily Platonism holds for creatures.

But the orthodox view (at least in Christianity and Judaism) is that God is absolutely simple, and predication works for God very differently from how it works for us. In light of this, why should we think that God had to create a tropist world rather than a Platonic one, or a Platonic one rather than a tropist one? Neither seems more or less suited to being created by God. It seems natural, in light of the radical difference between God and creatures, to think that God could create either kind of world.

If so, then many ontological questions seem to become contingent. And that’s surprising and counterintuitive.

Well, maybe. But I think there is still a way—perhaps not fully satisfactory—of bringing some of these questions back to the realm of necessity. Our language is tied to our reality. Suppose that we live in a tropist world. It seems that the correct account of predication is then a tropist one: A creature is wise if and only if it has a wisdom trope. A Platonic world has no wisdom tropes, and hence no wise creatures. Indeed, nothing can be predicated of any creature in it. What might be going on in the Platonic world is that there are things there that are structurally analogous to wise things, or to predication. We can now understand our words “wise” and “predicated” narrowly, in the way they apply to creatures in our world, or we can understand them broadly as including anything structurally analogous to these meanings. If we understand them narrowly, then it is correct to say that “Nothing in the Platonist world is wise” and “Nothing is correctly predicated of anything in the Platonist world.” But in the wide, analogical sense, there are wise things and there is predication in the Platonist world. Note, too, that even in our world it is correct to say “God is wise” and “Something is correctly predicated of God” only in the wide senses of the terms.

On this account, necessity returns to ontology—when we understand things narrowly. But the pretensions of ontology should be chastened by realizing that God could have made a radically different world.

And maybe there is an advantage to this contingentism. Our reasoning in ontology is always somewhat driven by principles of parsimony. But while one can understand why parsimony is appropriately pursued in study of the contingent—for God can be expected to create the contingent parsimoniously, both for aesthetic reasons and to fit reality to our understanding—I have always been mystified why it is appropriately pursued in the study of the necessary. But if ontology is largely a matter of divine creative choice, then parsimony is to be sought in ontological theories just as in physical ones, and with the same theological justification.

The above sounds plausible. But I have a hard time believing in ontology as a contingent science.

Thursday, September 7, 2023

Reverse valve masking

I was exposed to Covid recently, so by University rules I need to mask for a while. I don't particularly love masking at the gym, but I found a nice solution. My go-to mask for physical activity during the pandemic was the Trend Air Stealth N100 respirator, with the valve replaced by a 3D-printed blocker. But now I don't need to protect myself, just others. So I simply put the valve back in, but in reverse, so I get clear air intake but my exhalations go through the N100 filters. The respirator was already pretty breathable, but now it's even better, though it still looks super-weird and I need to remember not to use the respirator with this modification when I actually need protection, e.g., when doing woodworking.



Wednesday, September 6, 2023

On the plurality of bestnesses

According to the best-systems account of laws (BSA), the fundamental laws of nature are the axioms of the system that are true and optimize a balance of informativeness and brevity in a perfectly natural language (i.e., the language cuts reality perfectly at the joints). There are some complications in probabilistic cases, but those will only make my argument below more compelling.

Here is the issue I want to think about: There are many reasonable ways of defining the “balance of informativeness and brevity”.

First, in the case of theories that rule out all but a finite number of worlds, we can say that a theory is more informative if it is compatible with fewer worlds. In such a case, there may be some natural information-theoretic way of measuring informativeness. But in fact, we do not expect the laws of nature to rule out all but a finite number of worlds. We expect them to be compatible with an infinite number of worlds.

Perhaps, though, we get lucky and the laws place restrictions on the determinables in such a way that provides for a natural state space. Then we can try to measure what proportion of that state space is compatible with the laws. This is going to be technically quite difficult. The state space may well turn out to be unbounded and/or infinite dimensional, without a natural volume measure. But even if there is a natural volume measure, it is quite likely that the restrictions placed by the laws make the permitted subset of the state space have zero volume (e.g., if the state space includes inertial and gravitational mass, then laws that say that inertial mass equals gravitational mass will reduce the number of dimensions of the state space, and the reduced space is apt to have zero volume relative to the full space). So we need some way of comparing subsets with zero volume. And mathematically there are many, many tools for this.
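The zero-volume point can be illustrated numerically. In this toy sketch (the state space and “law” are invented for illustration), the set where inertial mass exactly equals gravitational mass is a line in the unit square, so random sampling never lands on it, and even an ε-thickening of it shrinks with ε:

```python
import random

random.seed(0)

def fraction_near_diagonal(eps, samples=100_000):
    """Fraction of uniform (m_inertial, m_grav) pairs in the unit square
    lying within eps of the 'law-permitted' line m_inertial == m_grav."""
    hits = sum(abs(random.random() - random.random()) < eps
               for _ in range(samples))
    return hits / samples

exact = fraction_near_diagonal(0.0)   # the line itself: measure zero
```

The permitted subset thus gets probability zero from the natural volume measure, which is why some finer-grained tool is needed to compare such subsets.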

Second, brevity is always measured relative to a language. And while the requirement that the language be perfectly natural, i.e., that it cut nature at the joints, rules out some languages, there will be a lot of options remaining. Minimally, we will have a choice point about grouping, Polish notation, dot notation, parentheses, and a slew of other options we haven’t thought of yet, and we will have choice points about the primitive logical operators.

Finally, we have a lot of freedom in how we combine the informativeness and brevity measures. This is especially true since it is unlikely that the informativeness measure is a simple numerical measure, given the zero-volume issue.

We could suppose that there is some objective fact, unknowable to humans, as to what is the right way to define the informativeness and brevity balance, a fact that yields the truth about the laws of nature. This seems implausible. Absent such a fact, what the laws are will be relative to the choice of informativeness and brevity measure ρ. We might have gotten lucky, and in our world all the measures yield the same laws, but we have little reason to hope for that, and even if this is correct, that’s just our world.

Thus, the story implies that for each reasonable informativeness and brevity measure ρ, we have a corresponding concept of a lawρ. This in itself sounds a bit wrong: it makes the concept of a law not sound objective enough. Moreover, corresponding to each reasonable choice of ρ, it seems we will have a potentially different way of giving a scientific explanation, and so the objectivity of scientific explanations is also endangered.

But perhaps worst of all, what BSA had going for it was simplicity: we don’t need any fundamental law or causal concepts, just a Humean mosaic of the distribution of powerless properties. However, the above shows that there is enormous complexity in the account of laws. This is not ideological complexity, but it is great complexity nonetheless. If I am right in my preceding post that at least on probabilistic BSA the fact that something is a law actually enters into explanation, and if I am right in this post that the BSA concept of law has great complexity, then this will end up greatly complicating not just philosophy of science, but scientific explanations.

On probabilistic best-systems accounts, laws aren't propositions

According to the probabilistic best-systems account of laws (PBSA), the fundamental laws of nature are the axioms of the system that optimizes a balance of probabilistic fit to reality, informativeness, and brevity in a perfectly natural language.

But here is a tricky little thing. Probabilistic laws include statements about chances, such as that an event of a certain type E has a chance of 1/3. But on PBSA, chances are themselves defined by PBSA. What it means to say “E has a chance of 1/3” seems to be that the best system entails that E has a chance of 1/3. On its face, this is circular: chance is defined in terms of entailment of chance.

I think there may be a way out of this, but it requires the fundamental laws to be sentences that need not express propositions. Here’s the idea. The fundamental laws are sentences in a formal language (with terms having perfectly natural meanings) supplemented with an uninterpreted chance operator. There are a bunch of choice points here: Is the chance operator unary (unconditional) or binary (conditional)? Is it a function? Does it apply to formulas, sentences, event tokens, event types or propositions? For simplicity, I will suppose it’s a unary function applying to event types, even though that’s likely not the best solution in the final analysis. We now say that the laws are the sentences provable from the axioms of our best system. These sentences include the uninterpreted chance(x) function. We then say stuff like this:

  1. When a sentence that does not use the chance operator is provable from the axioms, that sentence contributes to informativeness; but when that sentence is in fact false, the fit of the whole system becomes −∞.

  2. When a sentence of the form chance(E) = p is provable from the axioms, then the closeness of the frequency of event type E to p contributes to fit (unless the fit is −∞ because of the previous rule), and the statement as such contributes to informativeness.

I have no idea how fit is to be measured when instead of being able to prove things like chance(E) = p, we can prove less precise statements like chance(E) = chance(F) or chance(E) ≥ p. Perhaps we need clauses to cover cases like that, or maybe we can hope that we don’t need to deal with this.
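The two rules above can be turned into a toy scoring function. This is only an illustrative sketch under assumptions the post leaves open: I use negated absolute error as the "closeness" measure, and the names (`fit_score`, `chance_claims`, and so on) are mine, not part of any standard formulation of PBSA.

```python
import math

def fit_score(nonchance_consequences, chance_claims, frequencies):
    """Toy fit score for a candidate system (illustrative only).

    nonchance_consequences: list of booleans -- the truth values of the
        chance-free sentences provable from the axioms (rule 1).
    chance_claims: dict mapping event type -> stated chance p, one entry
        per provable sentence of the form chance(E) = p (rule 2).
    frequencies: dict mapping event type -> observed frequency.
    """
    # Rule 1: any false chance-free consequence sinks the whole system.
    if not all(nonchance_consequences):
        return -math.inf
    # Rule 2: reward closeness of observed frequency to stated chance.
    # (Negated absolute error is one crude choice of "closeness".)
    return -sum(abs(frequencies[E] - p) for E, p in chance_claims.items())

# A system claiming chance(heads) = 1/2 against an observed 0.48 frequency:
good = fit_score([True, True], {"heads": 0.5}, {"heads": 0.48})
# The same chance claims, but with one false chance-free consequence:
bad = fit_score([True, False], {"heads": 0.5}, {"heads": 0.48})
```

Even this toy version shows why the imprecise cases mentioned above are hard: a provable sentence like chance(E) ≥ p picks out no single number to compare a frequency against, so some further clause would be needed.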

An immediate problem with this approach is that the laws are no longer propositions. We can no longer say that the laws explain, because sentences in a language that is not fully interpreted do not explain. But we can form propositions from the sentences: instead of invoking a law s as itself an explanation, we can invoke as our explanation the second order fact that s is a law, i.e., that s is provable from the axioms of the best system.

This is counterintuitive. The explanation of the evolution of amoebae should not include meta-linguistic facts about a formal language!

Friday, September 1, 2023

Where are we?

Unless something like the Bohmian interpretation or a spontaneous collapse theory is right, quantum mechanics gives us good reason to think that the position wavefunction of all our particles is spread across pretty much all of the observable universe. Of course, except in the close vicinity of what we pre-theoretically call “our body”, the wavefunction is incredibly tiny.

What are we to make of that for the “Where am I?” question? One move is to say that we all overlap spatially, occupying most of the observable universe. On a view like this, we had better not have position do serious metaphysical or ethical work, such as individuating substances or making moral distinctions based on whether one individual (say, a fetus) is within the space occupied by another.

The other move is to say I am where the wavefunction of my particles is not small. On a view like this, my location is something that comes in degrees depending on what our cut-off for “small” is. We get to save the intuition that we don’t overlap spatially. But the cost of this is that our location is far from a fundamental thing. It is a vague concept, dependent on a cut-off. A more precise thing would be to say things like: “Here I am up to 0.99, and here I am up to 0.50.”
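One way to cash out “here I am up to q” is as the smallest central region carrying probability q of the position distribution. As a toy illustration, assuming a hypothetical one-dimensional Gaussian position density (the function name and parameters are mine, not from the post):

```python
from statistics import NormalDist

def location_interval(q, mu=0.0, sigma=1.0):
    """Smallest interval centered on mu containing probability q of a
    Gaussian position density -- one way to make cutoff-relative
    location ("here I am up to q") precise."""
    d = NormalDist(mu, sigma)
    # Upper endpoint of the central interval with total probability q:
    z = d.inv_cdf((1 + q) / 2)
    return (2 * mu - z, z)

# The region where "I am up to 0.50" is much narrower than the
# region where "I am up to 0.99":
lo50, hi50 = location_interval(0.50)
lo99, hi99 = location_interval(0.99)
```

The intervals grow without bound as q approaches 1, which mirrors the point above: on this view there is no non-arbitrary place where I simply stop.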