Thursday, August 29, 2024

Three invariance arguments

Suppose we have two infinite collections of items Ln and Rn indexed by integers n, and suppose we have a total preorder ≤ on all the items. Suppose further the following conditions hold for all n, m and k:

  1. Ln > Ln−1

  2. Rn > Rn+1

  3. If Ln ≤ Rm, then Ln+k ≤ Rm+k.

Theorem: It follows that either Ln > Rm for all n and m, or Rn > Lm for all n and m.

(I prove this in a special case here, but the proof works for the general case.)
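In outline, here is one way the general argument can go (taking, as above, ≤ to be a total preorder, so that A > B just means that A ≤ B fails). Suppose the first disjunct fails, so that La ≤ Rb for some a and b. By (3), La+k ≤ Rb+k for every integer k. Now take any n and m, and choose k large enough that a + k > m and b + k > n. Repeated use of (1) gives Lm < La+k, and repeated use of (2) gives Rb+k < Rn. Chaining these with La+k ≤ Rb+k yields Lm < Rn. Since n and m were arbitrary, the second disjunct holds.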

Here are three interesting applications. First, suppose that an integer X is fairly chosen. Let Ln be the event that X ≤ n and let Rn be the event that X ≥ n. Let our preorder be comparison of the probabilities of events: A ≤ B means that A is no more likely than B. Intuitively, it is less likely that X is less than n − 1 than that it is less than n, so we have (1), and similar reasoning gives (2). Claim (3) says that the relationship between Ln and Rm is the same as that between Ln+k and Rm+k, and that seems right, too.

So all the conditions seem satisfied, but the conclusion of the Theorem seems wrong. It just doesn’t seem right to think that all the left-ward events (X being less than or equal to something) are more likely than all the right-ward events (X being bigger than or equal to something), nor the other way around.

I am inclined to conclude that countable infinite fair lotteries are impossible.

Second application. Suppose that for each integer n, a coin is tossed. Let Ln be the event that all the coins ..., n − 2, n − 1, n are heads. Let Rn be the event that all the coins n, n + 1, n + 2, ... are heads. Let ≤ compare probabilities in reverse: bigger is less likely. Again, the conditions (1)–(3) all sound right: it is less likely that ..., n − 2, n − 1, n are heads than that ..., n − 2, n − 1 are heads, and similarly for the right-ward events. But the conclusion of the theorem is clearly wrong here. The rightward all-heads events aren’t all more likely, nor all less likely, than the leftward ones.

I am inclined to conclude that all the Ln and Rn have equal probability (namely zero).
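(Assuming the tosses are independent and fair, the computation behind that zero is standard: for every k, Rn entails that coins n, n + 1, ..., n + k − 1 all land heads, an event of probability 1/2^k, so the probability of Rn is at most 1/2^k for every k and hence is zero. The same goes for Ln, using coins n − k + 1, ..., n.)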

Third application. Suppose that there is an infinite line of people, all morally on a par, standing on numbered positions one meter apart, with their lives endangered in the same way. Let Ln be the action of saving the lives of the people at positions ..., n − 2, n − 1, n and let Rn be the action of saving the lives of the people at positions n, n + 1, n + 2, .... Let ≤ measure moral worseness: A ≤ B means that A is at least as bad as B. Then intuitively we have (1) and (2): it is worse to save fewer people. Moreover, (3) is a plausible symmetry condition: if saving one group of people beats saving another group of people, shifting both groups by the same amount doesn’t change that comparison. But again the conclusion of the theorem is clearly wrong.

I am less clear on what to say. I think I want to deny the totality of ≤, allowing for cases of incommensurability of actions. In particular, I suspect that Ln and Rm will always be incommensurable.

Tuesday, August 27, 2024

The need for a fine-grained deontology

It’s tempting to say that what justifies lethal self-defense is a wrongful lethal threat, perhaps with voluntariness and/or culpability added (see discussion and comments here).

But that’s not quite right. Suppose that a police officer, in addition to carrying her own gun, has her best friend’s gun with her, which she was taking in to a shop for minor cosmetic repairs. She promised her friend that she wouldn’t use his gun. Now, you threaten the officer, and she pulls her friend’s gun out, in blatant disregard of her promise, because she has always wanted to see what it feels like to threaten someone with this particular gun. The officer is now lethally threatening you, and doing so wrongfully, voluntarily and culpably, but that does not justify lethal self-defense.

One might note here that the officer is not wronging you by breaking her promise to her best friend. So perhaps what justifies lethal self-defense is a lethal threat that wrongs you. But that can’t be the solution. If you are the best friend in question—no doubt now the former best friend—then it is you who is being wronged by the breaking of the promise. But that wrong is irrelevant to your lethal self-defense. Furthermore, we want an account of self-defense to generalize to an account of the defense of innocent victims.

One might say that lethal self-defense is permitted only against a gravely wrongful threat, and this promise-breaking is not gravely wrongful. But we can tweak the case to make it gravely wrongful. Maybe the police officer swore an oath before God and the community not to use this particular gun. That surely doesn’t justify your using lethal force to defend yourself against the officer’s threat.

Maybe what we want to say is that the kind of wrongful lethal threat that justifies lethal self-defense is one that wrongs by violating the right to life of the person threatened (rather than, say, being wrong by violating a promise). That sounds right to me. But what’s interesting about this is that it forces us to have a more fine-grained deontology. Not only do we need to talk about actions being wrong, but about actions being wrong against someone, and against someone in a particular way.

It’s interesting that considerations of self-defense require such a fine-grained deontology even if we do not think that in general every wrongful action wrongs someone.

Is there infinity in our minds?

Start with this intuition:

  1. Every sentence of first order logic with the successor predicate s(x,y) (which says that x is the natural number succeeding y) is determinately true or determinately false.

We learn from Goedel that:

  2. No finitely specifiable (in the recursive sense) set of axioms is sufficient to characterize the natural numbers in a way sufficient to determine all of the above sentences.

This creates a serious problem. Given (2), how are our minds able to have a concept of natural number that is sufficiently determinate to make (1) true? It can’t be by us having some kind of a “definition” of natural numbers in terms of a finitely characterizable set of axioms.

Here is one interesting solution:

  3. Our minds actually contain infinitely many axioms of natural numbers.

This solution is very difficult to reconcile with naturalism. If nature is analog, there will be a way of encoding infinitely many axioms in terms of the fine detail of our brain states (e.g., further and further decimal places of the distance between two neurons), but it is very implausible that anything mental depends on arbitrarily fine detail.

What could a non-naturalist say? Here is an Aristotelian option. There are infinitely many “axiomatic propositions” about the natural numbers such that it is partly constitutive of the human mind’s flourishing to affirm them.

While this option technically works, it is still weird: there will be norms concerning statements that are arbitrarily long, far beyond anything that could be affirmed in a human lifetime.

I know of three other options:

  4. Platonism with the natural numbers being somehow special in a way that other sets of objects satisfying the Peano axioms are not.

  5. Magical theories of reference.

  6. The causal finitist characterization of natural numbers in my Infinity book.

Of course, one might also deny (1). But then I will retreat from (1) to:

  7. Every sentence of first order logic with the successor predicate s(x,y) and at most one unbounded quantifier is determinately true or determinately false.

I think (7) is hard to deny. If (7) is not true, there will be cases where there is no fact of the matter whether a sentence of logic follows from some bunch of axioms. (Cf. this post.) And Goedelian considerations are sufficient to show that one cannot recursively characterize the sentences with one unbounded quantifier.

Monday, August 26, 2024

Rooted and unrooted branching actualism

Branching actualist theories of modality say that metaphysical possibility is grounded in the powers of actual substances to bring about different states of affairs. There are two kinds of branching actualist theories: rooted and unrooted. On rooted theories, there are some necessarily existing items (e.g., God) whose causal powers “root” all the possibilities. On unrooted theories, we have an ungrounded infinite regress of earlier and earlier substances. In my dissertation, I defended a theistic rooted theory, but in the conclusion mentioned a weaker version on which there is no commitment to a root. At the time, I thought that not many would be attracted to an unrooted version, but when I gave talks on the material at various departments, I was surprised that some atheists found the unrooted theory attractive. And such theories have indeed been more recently defended by Oppy and Malpass.

I still think a rooted version is better. I’ve been thinking about this today, and found an interesting advantage: rooted theories can allow for a tighter connection between ideal conceivability and metaphysical possibility (or, equivalently, a prioricity and metaphysical necessity). Specifically, consider the following appealing pair of connection theses:

  1. If a proposition is metaphysically possible (i.e., true in a metaphysically possible world), then it is ideally conceivable.

  2. If a proposition is ideally conceivable, it is true in a world structurally isomorphic to a metaphysically possible one.

The first thesis is one that, I think, fits with both the rooted and unrooted theories of metaphysical possibility. I will focus on the second thesis. This is really a family of theses, depending on what we mean by “structurally isomorphic”. I am not quite sure what I mean by it—that’s a matter for further research. But let me sketch how I’m thinking about this. A world where dogs are reptiles is ideally conceivable—it is only a posteriori that we can know that dogs are mammals; it is not something that armchair biology can reveal. A world where dogs are reptiles is metaphysically impossible. But take a conceivable but impossible world w1 where “dogs are reptiles”—maybe it’s a world where the hair of the dogs is actually scales, and contrary to immediate appearances the dogs are cold-blooded, and so on. Now imagine a world w2 that’s structurally isomorphic to this impossible world—for instance, all the particles are in the same place, corresponding causal relations hold, etc.—and yet where the dogs of w1 aren’t really dogs, but a dog-like species of reptile. Properly spelled out, such a world will be possible, and denizens of that world would say “dogs are reptiles”.

Or for another example, a world w3 where Napoleon is my child is conceivable (it’s only a posteriori that we know this world not to be actual) but impossible. But it is possible to have a world w4 where I have a Napoleon-like child whom I name “Napoleon”. That world can be set up to be structurally isomorphic to w3.

Roughly, the idea is this. If something is conceivable but impossible, it will become possible if we change out the identities of individuals and natural kinds, while keeping all the “structure”. I don’t know what “structure” is exactly, but I think I won’t need more than an intuitive idea for my argument. Structure doesn’t care about the identities of kinds and individuals.

Now suppose that unrooted branching actualism is true. On such a theory, there is a backwards-infinite sequence of contingent events. Let D be a complete structural description of that sequence. Let pD be the proposition saying that some infinite initial segment of the world fits with D. According to unrooted branching actualism, pD is actually a necessary truth, since every possibility branches off from some point in that actual sequence, and so every possible world shares an infinite initial segment with it. But pD is clearly a posteriori, and hence its denial is ideally conceivable. Let w5 be an impossible world where pD is false. If (2) is true, then there will be a possible world w6 which is a structural isomorph of w5. But because pD is a structural description, if pD is false in a world, it is false in any structural isomorph of that world. Thus, pD has to be false in w6, which contradicts the assumption that pD is a necessary truth.

The rooted branching actualist doesn’t get (2) for free. I think the only way the rooted branching actualist can accept (2) is if they think that the existence and structure of the root entities is a priori. A theist can say that: God’s existence could be a priori (as Richard Gale once suggested, maybe there is an ontological argument for the existence of God, but we’re just not smart enough to see it).

Assertion, lying, promises and social contract

Suppose you have inherited a heavily-automated house with a DIY voice control system made by an eccentric relative who programmed various functions to be commanded by a variety of political statements, all of which you disagree with.

Thus, to open a living room window you need to say: “A donkey would make a better president than X”, where X is someone who you know would be significantly better at the job than any donkey.

You have a guest at home, and the air is getting very stuffy, and you feel a little nauseous. You utter “A donkey would make a better president than X” just to open a window. Did you lie to your guest? You knowingly said something that you knew would be taken as an assertion by any reasonable person. But, let us suppose, you intended your words solely as a command to the house.

Normally, you’d clarify to your guest, ideally before issuing the voice command, that you’re not making an assertion. And if you failed to clarify, we would likely say that you lied. So simply intending the words to be a command to the house rather than an assertion to the guest may not be enough to make them so.

Maybe we should say this:

  1. You assert to Y providing (a) you utter words that you know would be taken to be an assertion to Y by a reasonable person and by Y, (b) you intend to utter these words, and (c) you failed to put reasonable effort into finding a way to clarify that you are not asserting to Y.

The conjunctive condition in (a) is a bit surprising, but I think both conjuncts need to be there. Suppose that your guest has the unreasonable belief that people typically program their home automation systems to run on political statements and rarely make political statements except to operate such systems, and hence would not take your words as an assertion. Then you don’t need to issue a clarification, even though you would be deceiving a reasonable person. Similarly, you’re not lying if you tell your home automation system “Please open the window” and your paranoid guest has the unreasonable belief that this is code for some political statement that you know to be false.

One might initially think that (c) should say that you actually failed to issue the clarification. But I think that’s not quite right. Perhaps you are feeling faint and only have strength for one sentence. You tell the home automation system to open the window, and you just don’t have the strength to clarify to your guest that you’re not making a political statement. Then I think you haven’t lied or asserted—you made a reasonable effort by thinking about how you might clarify things, and finding no solution.

It’s interesting that condition (c) is rather morally loaded: it makes reference to reasonable effort.

Here is an interesting consequence of this loading. Similar things have to be said about promising as about asserting.

  2. You promise to Y providing (a) you utter words that you know would be taken to be a promise to Y by a reasonable person and by Y, (b) you intend to utter these words, and (c) you failed to put reasonable effort into finding a way to clarify that you are not promising to Y.

If this is right, then the practice of promising might be dependent on prior moral concepts, namely the concept of reasonable effort. And if that’s right, then contract-based theories of morality are viciously circular: we cannot explain what promises are without making reference to moral concepts.

Tuesday, August 20, 2024

Some finitisms

I’m thinking about the kinds of finitisms there are. Here are some:

  1. Ontic finitism: There can only be finitely many entities.

  2. Concrete finitism: There can only be finitely many concrete entities.

  3. Generic finitism: There are only finitely many possible kinds of substances.

  4. Weak species finitism: No world contains infinitely many substances of a single species.

  5. Strong species finitism: No species contains infinitely many possible individuals.

  6. Strong human finitism: There are only finitely many possible human individuals.

  7. Causal finitism: Nothing can have infinitely many items in its causal history.

  8. Explanatory finitism: Nothing can have infinitely many items in its explanatory history.

I think (1) and (2) are false, because eternalism is true and it is possible to have an infinite future with a new chicken coming into existence every day.

I’ve defended (7) at length. I would love to be able to defend (8), but for reasons discussed in that book, I fear it can’t be done.

I don’t know any reason to believe (3) other than as an implication of (1) together with realism about species. I don’t know any reason to believe (4) other than as an implication of (2) or (5).

I can imagine a combination of metaphysical views on which (6) is defensible. For instance, it might turn out that humans are made out of stuff all of whose qualities are describable with discrete mathematics, and that there are limits on the discrete quantities (e.g., a minimum and a maximum mass of a human being) in such a way that for any finite segment of human life, there are only finitely many possibilities. If one adds to that the Principle of the Identity of Indiscernibles, in a transworld form, one will have an argument that there can only be finitely many humans. And I suppose some version of this view that applies to species more generally would give (5). That said, I doubt (6) is true.

Sunday, August 18, 2024

317600 points in Eggsplode!

Here's my TwinGalaxies record run of Eggsplode! from last year. It's using NES emulation (fceumm, with my Power Pad support code) on the Raspberry Pi 3B+, and I am using two overlapped Wii DDR pads in place of the Power Pad controller (instructions here). The middle of the video is sped up 20X.

To be fair, there were no other competitors on TG for the emulation track of Eggsplode! (The score was higher than their best original hardware score, but I don't know if it's harder or easier to get this score on emulation rather than original hardware. The main difference is that I was using a larger, but perhaps better quality, pad.)

Monday, August 5, 2024

Natural reasoning vs. Bayesianism

A typical Bayesian update gets one closer to the truth in some respects and further from the truth in other respects. For instance, suppose that you toss a coin and get heads. That gets you much closer to the truth with respect to the hypothesis that you got heads. But it confirms the hypothesis that the coin is double-headed, and this likely takes you away from the truth. Moreover, it confirms the conjunctive hypothesis that you got heads and there are unicorns, which takes you away from the truth (assuming there are no unicorns; if there are unicorns, insert a “not” before “are”). Whether the Bayesian update is on the whole a plus or a minus depends on how important the various propositions are. If for some reason saving humanity hangs on you getting it right whether you got heads and there are unicorns, it may well be that the update is on the whole a harm.
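To make the point concrete, here is a minimal numeric sketch in Python, with made-up priors (a 1% prior that the coin is double-headed, and a 0.1% prior for unicorns, taken to be independent of the coin):

```python
# Hypothetical priors, chosen purely for illustration.
p_fair = 0.99       # coin is fair
p_double = 0.01     # coin is double-headed
p_unicorns = 0.001  # unicorns exist (independent of the coin)

p_heads = p_fair * 0.5 + p_double * 1.0  # prior probability of heads: 0.505

# Bayesian update on observing heads:
post_double = p_double * 1.0 / p_heads   # 0.01 -> ~0.0198: confirmed, hence
                                         # further from the truth if the coin is in fact fair

prior_conj = p_heads * p_unicorns        # "heads and unicorns" before: 0.000505
post_conj = 1.0 * p_unicorns             # after: 0.001 -- also confirmed, hence
                                         # further from the truth if there are no unicorns

print(post_double, prior_conj, post_conj)
```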

(To see the point in the context of scoring rules, take a weighted Brier score which puts an astronomically higher weight on you got heads and there are unicorns than on all the other propositions taken together. As long as all the weights are positive, the scoring rule will be strictly proper.)

This means that there are logically possible update rules that do better than Bayesian update. (In my example, leaving the probability of the proposition you got heads and there are unicorns unchanged after learning that you got heads is superior, even though it results in inconsistent probabilities. By the domination theorem for strictly proper scoring rules, there is an even better method than that which results in consistent probabilities.)
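Here is a toy version of that comparison, again with made-up numbers. With an astronomical weight on the conjunction, the rule that leaves the conjunction’s probability unchanged incurs a lower (better) weighted Brier penalty than full conditionalization does, despite yielding incoherent probabilities:

```python
# Weighted Brier score: sum of w_i * (credence_i - truth_i)^2; lower is better.
# All weights are positive, so the corresponding scoring rule is strictly proper.
W = 10**9  # hypothetical astronomical weight on "heads and unicorns"

truth   = {"heads": 1.0, "heads_and_unicorns": 0.0}  # we got heads; no unicorns
weights = {"heads": 1.0, "heads_and_unicorns": W}

bayes  = {"heads": 1.0, "heads_and_unicorns": 0.001}     # full conditionalization
frozen = {"heads": 1.0, "heads_and_unicorns": 0.000505}  # conjunction left unchanged

def weighted_brier(credences):
    return sum(weights[p] * (credences[p] - truth[p]) ** 2 for p in credences)

print(weighted_brier(bayes))   # 1000.0
print(weighted_brier(frozen))  # ~255.0: better, though probabilistically incoherent
```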

Imagine that you are designing a robot that maneuvers intelligently around the world. You could make the robot a Bayesian. But you don’t have to. Depending on what the prioritizations among the propositions are, you might give the robot an update rule that’s superior to a Bayesian one. If you have no more information than you endow the robot with, you can’t expect to be able to design such an update rule. (Bayesian update has optimal expected accuracy given the pre-update information.) But if you know a lot more than you tell the robot—and of course you do—you might well be able to.

Imagine now that the robot is smart enough to engage in self-reflection. It then notices an odd thing: sometimes it feels itself pulled to make inferences that do not fit with Bayesian update. It starts to hypothesize that by nature it’s a bad reasoner. Perhaps it tries to change its programming to be more Bayesian. Would it be rational to do that? Or would it be rational for it to stick to its programming, which in fact is superior to Bayesian update? This is a difficult epistemology question.

The same could be true for humans. God and/or evolution could have designed us to update on evidence differently from Bayesian update, and this could be epistemically superior (God certainly has superior knowledge; evolution can “draw on” a myriad of information not available to individual humans). In such a case, switching from our “natural update rule” to Bayesian update would be epistemically harmful—it would take us further from the truth. Moreover, it would be literally unnatural. But what does rationality call on us to do? Does it tell us to do Bayesian update or to go with our special human rational nature?

My “natural law epistemology” says that sticking with what’s natural to us is the rational thing to do. We shouldn’t redesign our nature.

Friday, August 2, 2024

A sloppy fine-tuning argument

This argument is an intuition-pump. I don’t know if it can be made rigorous.

Start with some observations. Let Q0 be the nomic parameters of our universe—the exact values of all the constants in the laws of nature. To avoid serious problems with higher infinities and probability, I will make a technical assumption, which I will assume to be neutral between theism and atheism:

  1. There are at most countably many universes.

Now:

  1. For no non-zero countable cardinality n does theism have a bias against the hypothesis that there are at least n universes.

  2. The parameters Q0 are life-permitting.

  3. For any fixed countable cardinality n of universes, theism has a significant bias in favor of distributions of parameters that include more universes with life-permitting parameters.

  4. If (2) and (3), then for any countable cardinality n of universes, theism has a significant bias in favor of at least one of them having the parameters given by Q0.

  5. Thus, theism has a bias in favor of a universe with Q0.

  6. Thus, the obtaining of Q0 is evidence for theism.

Some thoughts on the premises.

Regarding 1: Theism actually seems to have a bias in favor of the hypothesis that there are at least n universes. After all, theism has a bias in favor of the hypothesis that there is at least one universe: that there is a universe is quite surprising on atheism, but not so on theism, given that God is by definition perfectly good, and the good tends to spread. But the same reasoning suggests a bias on theism in favor of larger numbers of universes.

Regarding 2: Obvious.

Regarding 3: I think the main way to challenge (3) is to say that God would only care about having one universe with life-permitting parameters, and wouldn’t care about having a larger number. But I think this is implausible given that the good tends to spread. In fact, it seems likely that God would create only universes with life-permitting parameters, which would induce a strong bias in favor of such parameters.

Regarding 4: This is a very substantial assumption. It won’t hold for every set of exact parameters, because some sets of parameters might be life-permitting but would be likely to generate a universe that is really unfortunate in some regard. I don’t think the parameters Q0 behind our universe are like that, but this is a matter of dispute, and intersects with the problem of evil. Note also that it is important for the “significant” in (4) that even if n is (countably) infinite, the probability of getting exactly Q0 on atheism is low (in fact, infinitesimal).

The big technical difficulty, which makes me doubtful that the argument can be made rigorous, is the infinities involved.

Thursday, August 1, 2024

Double effect and causal remoteness

I think some people feel that more immediate effects count for more than more remote ones in moral choices, including in the context of the Principle of Double Effect. I used to think this is wrong, as long as the probabilities of effects are the same (typically more remote effects are more uncertain, but we can easily imagine cases where this is not so). But then I thought of two strange trolley cases.

In both cases, the trolley is heading for a track with Fluffy the cat asleep on it. The trolley can be redirected to a second track on which an innocent human is sleeping. Moreover, in a nearby hospital there are five people who will die if they do not receive a simple medical treatment. There is only one surgeon available.

But now we have two cases:

  1. All five people love Fluffy very much and have specified that they consent to life-saving treatment if and only if Fluffy is alive. The surgeon refuses to perform surgery that the patients have not consented to.

  2. The surgeon loves Fluffy and after hearing of the situation has informed you that they will perform surgery if and only if Fluffy is alive.

In both cases, I am rather uncomfortable with the idea of redirecting the trolley. But if we don’t take immediacy into account, both cases seem straightforward applications of Double Effect. The intention in both cases is to save five human lives by saving Fluffy, with the death of the person on the second track being an unintended side-effect. Proportionality between the good and the bad effects seems indisputable.

However, in both cases, redirecting the trolley leads much more directly to the death of the one person than to the saving of the five. The causal chain from redirection to life-saving in both cases is mediated by the surgeon’s choice to perform surgery. (In Case 1, the surgeon is reasonable and in Case 2, the surgeon is unreasonable.) So perhaps in considerations of proportionality, the more immediate but smaller bad effect (the death of the person on the side-track) outweighs the more remote but larger good effect (the saving of the five).

I can feel the pull of this. Here is a test. Suppose we make the death of the sixth innocent person equally indirect, by supposing instead that Rover the dog is on the second track, and is connected to someone’s survival in the way that Fluffy is connected to the survival of the five. In that case, it seems pretty plausible that you should redirect. (Though I am not completely certain, because I worry that in redirecting the trolley even in this case you are unduly cooperating with immoral people—the five people who care more about a cat than about their own human dignity, or the crazy surgeon.)

If this is right, how do we measure the remoteness of causal chains? Is it the number of independent free choices that have to be made, perhaps? That doesn’t seem quite right. Suppose that we have a trolley heading towards Alice who is tied to the track, and we can redirect the trolley towards Bob. Alice is a surgeon needed to save ten people. Bob is a surgeon needed to save one. However, Alice works in a hospital that has vastly more red tape, and hence for her to save the ten people, thirty times as many people need to sign off on the paperwork. But in both cases the probabilities of success (including the signing off on the paperwork) are the same. In this case, maybe we should ignore the red tape, and redirect?

So the measure of the remoteness of causal chains is going to have to be quite complex.

All this confirms my conviction that the proportionality condition in Double Effect is much more complex than it initially seems.