Thursday, June 5, 2025

What is an existential quantifier?

What is an existential quantifier?

The inferentialist answer is that an existential quantifier is any symbol that has the syntactic features of a one-place quantifier and obeys the logical rules of an existential quantifier (we can precisely specify both the syntax and the logic, of course). Since Carnap, we’ve had good reason to reject this answer (see, e.g., here).

Here is a modified suggestion. Consider all possible symbols that have the syntactic features of a one-place quantifier and obey the rules of an existential quantifier. Now say that a symbol is an existential quantifier provided that, among these symbols, it maximizes naturalness, in the David Lewis sense of “naturalness”.

Moreover, this provides the quantifier variantist or pluralist (who thinks there are multiple existential quantifiers, none of them being the existential quantifier) with an answer to a thorny problem: Why not simply disjoin all the existential quantifiers to make a truly unrestricted existential quantifier, and say that that is the existential quantifier? The quantifier variantist can say: Go ahead and disjoin them, but a disjunction of quantifiers is less natural than its disjuncts and hence isn’t an existential quantifier.

This account also allows for quantifier variance, the possibility that there is more than one existential quantifier, as long as none of these existential quantifiers is more natural than any other. But it also fits with quantifier invariance as long as there is a unique maximizer of naturalness.

Until today, I thought that the problem of characterizing existential quantifiers was insoluble for a quantifier variantist. I was mistaken.

It is tempting to take the above to say something deep about the nature of an existential quantifier, and maybe even the nature of being. But I don’t think it quite does. We have a characterization of existential quantifiers among all possible symbols, but this characterization doesn’t really tell us what they mean, just how they behave.

Tuesday, June 3, 2025

Combining epistemic utilities

Suppose that the right way to combine epistemic utilities or scores across individuals is averaging, and I am an epistemic act expected-utility utilitarian—I act for the sake of expected overall epistemic utility. Now suppose I am considering two different hypotheses:

  • Many: There are many epistemic agents (e.g., because I live in a multiverse).

  • Few: There are few epistemic agents (e.g., because I live in a relatively small universe).

If Many is true, then given averaging my credence makes very little difference to overall epistemic utility. On Few, my credence makes much more of a difference. So I should have a high credence in Few. For while a high credence in Few would have an unfortunate impact on overall epistemic utility if Many is true, that impact is diluted by the large number of agents on Many, so I can largely ignore the Many hypothesis.

In other words, given epistemic act utilitarianism and averaging as a way of combining epistemic utilities, we get a strong epistemic preference for hypotheses with fewer agents. (One can make this precise with strictly proper scoring rules.) This is weird, and does not match any of the standard methods (self-sampling, self-indication, etc.) for accounting for self-locating evidence.
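
Here is a minimal numerical sketch of how that scoring-rule precisification might go, using the Brier score (a strictly proper score). The agent counts and the evidential credence q below are made-up illustrative values of mine, not part of the argument itself:

```python
# Sketch: your credence p in Few is Brier-scored, and (given averaging) your
# score is diluted by the number of agents on each hypothesis. N_FEW, N_MANY,
# and q are made-up illustrative values.
import numpy as np

N_FEW, N_MANY = 10, 10**9   # hypothetical agent counts on Few and on Many
q = 0.5                     # your evidential credence that Few is true

def expected_overall_utility(p):
    """Expected contribution of your credence p in Few to the averaged
    overall epistemic utility (terms not depending on p are dropped)."""
    score_if_few = -(p - 1) ** 2    # Brier score of p if Few is true
    score_if_many = -(p - 0) ** 2   # Brier score of p if Many is true
    return q * score_if_few / N_FEW + (1 - q) * score_if_many / N_MANY

ps = np.linspace(0, 1, 1001)
best_p = ps[np.argmax([expected_overall_utility(p) for p in ps])]
print(best_p)  # ~1.0: the expected-utility maximizer is nearly certain of Few
```

Making N_MANY larger only pushes the maximizer closer to 1, for any q above zero.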

(I should note that I once thought I had a serious objection to the above argument, but I can't remember what it was.)

Here’s another argument against averaging epistemic utilities. It is a live hypothesis that there are infinitely many people. But on that hypothesis, given averaging, my epistemic utility makes no difference to overall epistemic utility: my score is just one term in an infinite average. So I might as well believe anything on that hypothesis.

One might toy with another option. Instead of averaging epistemic utilities, we could average credences across agents, and then calculate the overall epistemic utility by applying a proper scoring rule to the average credence. This has a different problematic result. Given that there are at least billions of agents, for any of the standard scoring rules, as long as the average credence of agents other than you is neither very near zero nor very near one, your own credence’s contribution to the overall score will be approximately linear. But the expected value of something approximately linear in your credence is maximized at or near an endpoint, so to maximize expected overall epistemic utility you will typically make your credence extreme, which isn’t right.
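
Here is a quick sketch of that behavior, again with the Brier score; the agent count N, the others’ average credence a, and your evidential credence q are made-up illustrative values:

```python
# Sketch: average credences across agents, then Brier-score the average. With a
# billion agents whose average credence is neither near 0 nor near 1, the
# overall score is approximately linear in your own credence p, so the
# expected-score maximizer is extreme. N, a, and q are made-up values.
import numpy as np

N = 10**9   # number of agents other than you
a = 0.7     # their average credence in some proposition H
q = 0.6     # your evidential credence in H

def overall_score(p, truth):
    c = (N * a + p) / (N + 1)   # group-averaged credence, including yours
    return -(c - truth) ** 2    # Brier score of the averaged credence

ps = np.linspace(0, 1, 1001)
expected = [q * overall_score(p, 1) + (1 - q) * overall_score(p, 0) for p in ps]
print(ps[np.argmax(expected)])  # 0.0 here: an extreme credence, not q
```

Whether the extreme lands at 0 or at 1 depends on whether the others’ average credence sits above or below your evidential credence; either way, your own evidence is ignored.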

If not averaging, then what? Summing is the main alternative.

Closed time loop

Imagine two scenarios:

  1. An infinitely long life consisting of endless repetitions of a session of meaningful pleasure, each followed by a memory wipe.

  2. A closed time loop involving one session of the meaningful pleasure followed by a memory wipe.

Scenario (1) involves infinitely many sessions of the meaningful pleasure. This seems better than having only one session as in (2). But subjectively, I have a hard time feeling any preference for (1). In both cases, you have your pleasure, and it’s true that you will have it again.

I suppose this is some evidence that we’re not meant to live in a closed time loop. :-)

Monday, June 2, 2025

Shuffling an infinite deck

Suppose infinitely many blindfolded people, including yourself, are uniformly randomly arranged on positions one meter apart numbered 1, 2, 3, 4, ….

Intuition: The probability that you’re on an even-numbered position is 1/2 and that you’re on a position divisible by four is 1/4.

But then, while they are asleep, the people are rearranged according to the following rule. The person at each even-numbered position 2n is moved to position 4n. The people at odd-numbered positions are then shifted leftward, preserving their order, to fill the positions not divisible by 4. Thus, we have the following movements:

  • 1 → 1

  • 2 → 4

  • 3 → 2

  • 4 → 8

  • 5 → 3

  • 6 → 12

  • 7 → 5

  • 8 → 16

  • 9 → 6

  • and so on.

If the initial intuition was correct, then the probability that now you’re on a position that’s divisible by four is 1/2, since you’re now on a position divisible by four if and only if initially you were on a position divisible by two. Thus it seems that now people are no longer uniformly randomly arranged, since for a uniform arrangement you’d expect your probability of being in a position divisible by four to be 1/4.
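
Here is a small sketch of the rearrangement; the function name is mine, and it just encodes the rule above. It reproduces the table and checks the divisibility claim on a large initial segment:

```python
def new_position(p):
    """Forward map of the shuffle: where the person initially at position p ends up."""
    if p % 2 == 0:           # the person at even position 2n moves to position 4n
        return 2 * p
    k = (p + 1) // 2         # p is the k-th odd position
    q, r = divmod(k - 1, 3)  # non-multiples of 4 come three per block of four
    return 4 * q + r + 1     # the k-th positive integer not divisible by 4

# Reproduce the table of movements above.
for p in range(1, 10):
    print(p, "->", new_position(p))

# New position divisible by 4 iff old position divisible by 2 (checked, of
# course, only on an initial segment, not on the whole infinite arrangement).
assert all((new_position(p) % 4 == 0) == (p % 2 == 0) for p in range(1, 10**5))
```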

This shows an interesting difference between shuffling a finite and an infinite deck of cards. If you shuffle a finite deck of cards that’s already uniformly distributed, it remains uniformly distributed no matter what algorithm you use to shuffle it, as long as you do so in a content-agnostic way (i.e., you don’t look at the faces of the cards). But if you shuffle an infinite deck of distinct cards that’s uniformly distributed in a content-agnostic way, you can destroy the uniform distribution, for instance by doubling the probability that a specific card is in a position divisible by four.

I am inclined to take this as evidence that the whole concept of a “uniformly shuffled” infinite deck of cards is confused.