Showing posts with label Ockham's razor.

Thursday, June 20, 2024

Panomnipsychism

We have good empirical ways of determining the presence of a significant amount of gold and we also have good empirical ways of determining the absence of a significant amount of gold.

Not so with consciousness. While I can tell that some chunks of matter exhibit significant consciousness (especially the chunk that I am made of), telling that a chunk of matter—say, a rock or a tree—does not exhibit significant consciousness relies very heavily on pre-theoretical intuition.

This makes it very hard to study consciousness scientifically. In science, we want to come up with conditions that help us explain why a phenomenon occurs where it occurs and doesn’t occur where it doesn’t occur. But if we can’t observe where consciousness does not occur, things are apt to get very hard.

Consider panomnipsychism: every chunk of matter exhibits every possible conscious state at every moment of its existence. This explains all our observations of consciousness. And since we don’t observe any absences of consciousness, panomnipsychism is not refutable by observation. Moreover, panomnipsychism is much simpler than any competing theory, since competing theories will have to give nontrivial psychophysical laws that say what conscious states are correlated with what physical states. It’s just that panomnipsychism doesn’t fit with our intuitions that rocks and trees aren’t conscious.

One might object that panomnipsychism incorrectly predicts that I am right now having an experience of hang gliding, and I can tell that I am not having any such experience. Not so! Panomnipsychism does predict that the chunk of matter making me up currently is having an experience of hang-gliding-while-not-writing-a-post, and that this chunk is also having an experience of writing-a-post-while-not-hang-gliding. But these experiences are not unified with each other on panomnipsychism: they are separate strands of conscious experience attached to a single chunk of matter. My observation of writing without gliding is among the predictions of panomnipsychism.

It is tempting to say that panomnipsychism violates Ockham’s razor. Whether it does will depend on whether we understand Ockham’s razor in terms of theoretical complexity or in terms of the number of entities (such as acts of consciousness). If we understand it in terms of theoretical complexity, then, as noted, panomnipsychism beats its competitors. But if we understand Ockham’s razor in terms of the number of entities, then we should reject Ockham’s razor, for we shouldn’t have a general preference for theories with fewer entities. For instance, the argument that the world will soon come to an end, on the grounds that otherwise there would be more human beings in spacetime, is surely a bad one.

I think there is nothing wrong with relying on intuition, including our intuitions about the absence of consciousness. But it is interesting to note how much we need to.

Tuesday, May 11, 2021

The weird view that particles don't survive substantial change

I have a weird view: when a dog or another substance ceases to exist, all its particles cease to exist, being replaced by new particles with very similar physical parameters (with the new physical parameters being predictable via the laws of nature). Similar things happen when a new substance comes into existence, and when a particle is incorporated into or leaves a substance: no particles survive such things.

I have good Aristotelian reasons for this view. Particles are not substances, since substances cannot have substances as parts; hence particles ontologically depend on substances for their existence. Thus, when the substance perishes, the particles do as well.

The view seems preposterously unparsimonious. I disagree. Let’s compare the view to some competitors. First of all, it’s clear to me that some version of four-dimensionalism is true, so let’s start with four-dimensionalist views.

A standard four-dimensionalism is perdurantism: four-dimensional objects are made up of instantaneous temporal parts—infinitely many of them if time is continuous. These instantaneous temporal parts come in and out of existence all the time, with very similar physical parameters to their predecessors. My weird view is compatible with the idea that particles actually all exist only instantaneously, akin to the perdurantist’s temporal parts. Such a view could be more parsimonious than standard perdurantism for two reasons: first, it needn’t posit temporal parts of substances, and, second, it needn’t posit wholes made up of the instantaneous particles.

An alternate version of my weird view says that particles do not survive change of substance, but live as long as they recognizably remain in the same substance. Imagine a particle that is eaten by a dog and some months later sloughed off. On my view, there are three particle-like objects in the story: the pre-dog particle, the in-dog particle, and the post-dog particle. On standard perdurantism, there are as many particle-like objects as moments of time in this story. Granted, some may think it weirder that the temporal boundaries in the existence of particles are determined by their allegiances to substances rather than by instants of time. But there is nothing weird about that if one takes seriously the priority of substances to their parts.

My view is admittedly less parsimonious than a four-dimensionalist view on which substances and particles are temporally extended and have no temporal parts, and particles outlast their substances. But such a four-dimensionalism has an implausible consequence. Many people find it plausible that in some exceptional cases substances can share parts: conjoined twins are a standard example. But on this version of four-dimensionalism, it is now a matter of course that distinct substances share parts. The dog dies and some of its particles become parts of a flower: so the dog and the flower, considered as four-dimensional entities, have these particles as common parts. You and I probably share parts with dinosaurs. So while my weird view is less parsimonious than a no-temporal-parts four-dimensionalism with particles that outlive substances, it is not less plausible.

The main alternative to four-dimensionalism is presentism. Is a presentist version of my view less parsimonious than a typical competing presentist view? In one sense, no. For at the present time, my view doesn’t posit additional present particles over and beyond those posited by competing presentist views. And only present particles exist according to presentism! But more seriously, my view does posit that particles cease to exist and come into existence more often than typical presentist alternatives allow. So in that sense it is less parsimonious.

Thus, parsimony cuts against my view on presentism, but it may actually favor it on four-dimensionalism.

Monday, May 10, 2021

Is our universe of sets minimal?

Our physics is based on the real numbers. Physicists use the real numbers all over the place: quantum mechanics takes place in a complex Hilbert space, and the complex numbers are isomorphic to pairs of real numbers, while relativity theory takes place in a manifold that is locally isomorphic to a Lorentzian four-dimensional real space.

The real numbers are one of an infinite family of mathematical objects known as real closed fields. Real closed fields other than the real numbers could be used in physics instead—for instance, the hyperreals—and I think we would have the same empirical predictions. But the real numbers are simpler and more elegant: for instance, they are the only Dedekind-complete real closed field and the minimal Cauchy-complete one.
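To make the contrast concrete, here is the standard statement of Dedekind completeness, the property that singles out the reals among real closed fields (a well-known fact, sketched in LaTeX only for illustration):

    % Dedekind completeness: every nonempty subset of R that is bounded above
    % has a least upper bound.
    \forall S \subseteq \mathbb{R}\; \Bigl[ \bigl( S \neq \emptyset \wedge \exists b\, \forall x \in S\, (x \le b) \bigr) \rightarrow \exists s\, \bigl( \forall x \in S\, (x \le s) \wedge \forall b\, ( \forall x \in S\, (x \le b) \rightarrow s \le b ) \bigr) \Bigr]
    % The hyperreals fail this: the set of infinitesimals is nonempty and bounded
    % above (for instance, by 1), yet it has no least upper bound.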

At the same time, the mathematics behind our physics lives within a set theoretic universe. That set theoretic universe is generally not assumed to be particularly special. For instance, I know of no one who assumes that our set theoretic universe is isomorphic to Shepherdson’s/Cohen’s minimal model of set theory. On the contrary, it is widely assumed that our set theoretic universe has a standard transitive set model, which implies that it is not minimal, and few people seem to believe the Axiom of Constructibility which would hold in a minimal model.

This seems to me to be rationally inconsistent. If we are justified in thinking that the mathematics underlying the physical world is based on a particularly elegant real closed field even though other fields fit our empirical data, we would also be justified in thinking it’s based on a particularly elegant universe of sets even though other universes fit our empirical data.

(According to Shipman, the resulting set theory would be one equivalent to ZF + V=L + “There is no standard model”.)

Monday, September 21, 2015

Platonism and Ockham's razor

One of the main objections against Platonism is that it offends against Ockham's razor by positing a large number of fundamental entities. But the Platonist can give the following response: By positing these fundamental entities, I can reduce the number of fundamental predicates to one, namely instantiation. I don't need fundamental predicates like "... is charged" or "... loves ...". All I need is a single multigrade fundamental predicate "... instantiate(s) ...", and I can just reduce the claim that Jones is charged to the claim that Jones instantiates charge, and the claim that Juliet loves Romeo to the claim that Juliet and Romeo instantiate loving. In other words, the Platonist's offenses against Ockham's razor in respect of ontology are largely compensated for by a corresponding reduction of ideology.

Largely, but so far not entirely. For the Platonist does need to introduce the "... instantiate(s) ..." predicate which the nominalist has no need for. On pain of a Bradley-type regress, the Platonist cannot handle that predicate using her general schema.

(But maybe the Platonist can go one step further. She can eliminate single quantifiers from her ideology, too, using the Fregean move of replacing, say, ∃xF(x) with Instantiates(Fness, instantiatedness). Extending this to nested quantifiers is hard, but perhaps not impossible. If that task can be completed, then it seems that our Platonist has gained a decisive advantage over the nominalist: she has only one fundamental predicate and no quantifiers other than names (if names count as quantifiers). Not so, though! For this move needs to be able to handle complex predicates F, and the property Fness corresponding to such a complex predicate will probably have to stand in various structural relations to other properties, and so we have a complication.)
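For concreteness, the reduction schema at issue can be set out like this (a quick sketch in LaTeX notation, using only the examples from this post):

    % Reducing fundamental predicates to a single multigrade Instantiates:
    \mathrm{Charged}(\mathrm{jones}) \rightsquigarrow \mathrm{Instantiates}(\mathrm{jones}, \mathrm{charge})
    \mathrm{Loves}(\mathrm{juliet}, \mathrm{romeo}) \rightsquigarrow \mathrm{Instantiates}(\mathrm{juliet}, \mathrm{romeo}, \mathrm{loving})
    % The Fregean elimination of a single (unnested) quantifier:
    \exists x\, F(x) \rightsquigarrow \mathrm{Instantiates}(F\mathrm{ness}, \mathrm{instantiatedness})
    % But the schema cannot be applied to Instantiates itself: rewriting
    % Instantiates(jones, charge) as Instantiates(jones, charge, instantiation)
    % merely launches a Bradley-type regress.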

Friday, January 9, 2015

If you're going to be a Platonist dualist, why not be an idealist?

Let's try another exercise in philosophical imagination. Suppose Platonism and dualism are true. Then consider a theory on which our souls actually inhabit a purely mathematical universe. All the things we ever observe—dust, brains, bodies, stars and the like—are just mathematical entities. As our souls go through life, they become "attached" to different bits and pieces of the mathematical universe. This may happen according to a deterministic schedule, but it could also happen in an indeterministic way: today you're attached to part of a mathematical object A1, and tomorrow you might be attached to B2 or C2 instead. You might even have free will. One model for this is the traveling minds story, but with mathematical reality in the place of physical reality.

This is a realist idealism. The physical reality around us on this story is really real. It's just not intrinsically different from other bits of Platonic mathematical reality. The only difference between our universe and some imaginary 17-dimensional toroidal universe is that the mathematical entities constituting our universe are connected with souls, while those constituting that one are not.

One might wonder if this is really a form of idealism. After all, it really does posit physical reality. But physical reality ends up being nothing but Platonic reality.

The view is akin to Tegmark's ultimate ensemble picture, supplemented with dualism.

Given Platonism and dualism, this story is an attractive consequence of Ockham's Razor. Why have two kinds of things—the physical universe and the mathematical entities that represent the physical universe? Why not suppose they are the same thing? And look how neatly we solve the problem of how we have mathematical knowledge—we are acquainted with mathematical objects much as we are with tables and chairs.

"But we can just see that chairs and tables aren't made of mathematical entities?" you may ask. This, I think, confuses not seeing that chairs and tables are made of mathematical entities with seeing that they are not made of them. Likewise, we do not see that chairs and tables are made of fundamental particles, but neither do we see that they are not made of them. The fundamental structure of much of physical reality is hidden from our senses.

So what do we learn from this exercise? The view is, surely, absurd. Yet given Platonism and dualism, Ockham's razor strongly pulls toward it. Does this give us reason to reject Platonism or dualism? Quite possibly.

Monday, February 24, 2014

"If there are so many, then probably there are more"

Suppose the police have found one person involved in the JFK assassination. Then simplicity grounds may give us significant reason to think that that one person is the sole killer. But suppose that they have found 15 people involved. Then while the hypothesis H15 that there were exactly 15 conspirators is simpler than the hypothesis Hn that there were exactly n for n>15, nonetheless barring special evidence that they got them all, we should suspect that there are more conspirators at large. With that large number, it's just not that likely that all were caught.

Why is this? I think it's because even though prior probabilities decrease with complexity, the increment of complexity from H15 to, say, H16 or H17 is much smaller than the increment of complexity from H1 to H2. Maybe P(H2)≈0.2P(H1). But surely we do not have P(H16)≈0.2P(H15). Rather, we have a modest decrease, maybe P(H16)≈0.9P(H15) and P(H17)≈0.9P(H16). If so, then P(H16)+P(H17)≈1.7P(H15). Unless we receive specific evidence that favors H15 over H16 and H17, something like this will be true of the posterior probabilities, and so the disjunction of H16 and H17 will be significantly more likely than H15.
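Here is a toy numerical check of that arithmetic in Python (the 0.9 decay ratio is just the made-up figure above, not a principled prior):

    # Toy check: a made-up prior where each conspirator beyond 15 costs a factor of 0.9.
    r = 0.9
    p15 = 1.0          # unnormalized prior for H15
    p16 = r * p15      # about 0.9
    p17 = r * p16      # about 0.81

    print(p16 + p17)   # about 1.71: the disjunction of H16 and H17 beats H15 by ~1.7x

    # If the 0.9 ratio persisted indefinitely, then conditional on "at least 15
    # conspirators", the chance that there are more than 15 is given by the geometric tail:
    tail = sum(r**k for k in range(1, 200))   # about 9.0 (close to r/(1-r))
    print(tail / (p15 + tail))                # about 0.9: probably they did not get them all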

Thus we have a heuristic. If our information is that there are at least n items of some kind, but we have no evidence that there are no more, then when n is small, say 1 or 2 or maybe 3, it may be reasonable to think there are no more items of that kind. But if n is bigger—my intuition is that the switch-around is around 6—then under these conditions it is reasonable to think there are more. If there are so many, then probably there are more. And this just follows from the fact that the increase in complexity from 1 to 2 is great, and from 2 to 3 is significant, but from 6 to 7 or maybe even 4 to 5 it's not very large.

This is all just intuitive, since I do not have any precise way to assign prior probabilities. But staying at this intuitive level, we get some nice intuitive applications:

  • If after thorough investigation we have found only one kind of good that could justify God's permitting evil, then we have significant evidence that it's the only such good. And if some evil is not justified by that kind of good, then that gives significant evidence that it's not justified. But suppose we've found six, say. And it's easy to find at least six: (1) exercise of virtues that deal with evils; (2) significant freedom; (3) preservation of laws of nature; (4) opportunities to go beyond justice via forgiveness[note 1]; (5) adding variety to life; (6) punishment; (7) the great goods of the Incarnation and sacrifice of the cross. So we have good reason to think there are more permission-of-evil justifying goods that we have not yet found. (Alston makes this point.)
  • Suppose our best definition of knowledge has three clauses. Then we might reasonably suspect that we've got the definition. But it is likely, given Gettier stuff, that one needs at least four clauses. And for any proposed definition with four clauses, we should be much more cautious about thinking we've got them all.
  • Suppose we think we have four fundamental kinds of truths, as Chalmers does (physics, qualia, indexicals and that's all). Then we shouldn't be confident that we've got them all. But once we realize that the list leaves out several kinds (e.g., morality, mathematics, intentions and intentionality, pace Chalmers), our confidence that we have them all should be low.
  • If our best physics says that there are two fundamental laws, we have some reason to think we've got it all. But if it says that there are six, we should be dubious.