Wednesday, March 19, 2025

Reducing promises to assertions

To promise something, I need to communicate something to you. What is that thing that I need to communicate to you? To a first approximation, what I need to communicate to you is that I am promising. But that’s circular: it says that promising is communicating that I am promising. This circularity is vicious, because it doesn’t distinguish promising from asking: asking is communicating that I am asking.

But now imagine I have a voice-controlled robot named Robby, and I have programmed him in such a way that I command him by asserting that Robby will do something because I have said he will do it. Thus, to get him to vacuum the living room, I assert “Robby will immediately vacuum the living room because I say so.” As long as what I say is within the range of Robby’s abilities, any statement I make in Robby’s vicinity about what he will do because I say he will do it is automatically true. This is all easily imaginable.
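
Here is a toy sketch of how such programming might look (this is purely my illustration; the capability list and the exact phrasing are made up):

    # Toy sketch (my illustration): Robby executes any action he is told he will
    # perform "because I say so", provided it is within his abilities, thereby
    # making the assertion true.

    CAPABILITIES = {"vacuum the living room", "water the plants"}

    def perform(action):
        print("Robby is now doing:", action)

    def handle_utterance(utterance):
        prefix = "Robby will immediately "
        suffix = " because I say so"
        if utterance.startswith(prefix) and utterance.endswith(suffix):
            action = utterance[len(prefix):-len(suffix)]
            if action in CAPABILITIES:
                perform(action)  # executing the action is what makes the statement true

    handle_utterance("Robby will immediately vacuum the living room because I say so")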

Now, back to promises. Perhaps it works like this. I have a limited power to control the normative sphere. This normative power generates an effect in normative space precisely when I communicate that I am generating that effect. Thus, I can promise to buy you lunch by asserting “I will be obligated to you to buy you lunch.” And I permit you to perform heart surgery by asserting “You will cease to have a duty of respect for my autonomy not to perform heart surgery on me.” As long as what I say is within my normative capabilities, by communicating that I am making it true by communicating it, I make it be true, just as Robby will do what I assert he will do because of my say-so, as long as it is within his physical capabilities.

This solves the circularity problem for promising because what I am communicating is not that I am promising, but the normative effect of the promising:

  1. x promises to ϕ to y if and only if x successfully exercises a communicative normative power to gain an obligation-to-y to ϕ

  2. a communicative normative power for a normative effect F is a normative power whose object is F and whose successful exercise requires the circumstance that one express that one is producing F by communicating that one is so doing.

There are probably some further tweaks to be made.

Of course, in practice, we communicate the normative effect not by describing it explicitly, but by using set phrases, contextual cues, etc.

This technique allows us to reduce promising, consenting, requesting, commanding and other illocutionary forces to normative power and communicating, which is basically a generalized version of assertion. But we cannot account for communicating or asserting in this way—if we try to do that, we do get vicious circularity.

Tuesday, March 18, 2025

A curious poker variant

In some games like Mafia, uttering falsehoods is a part of the game mechanic. These falsehoods are no more lies than falsehoods uttered by an actor in a performance are lies.

Now consider a variant of poker where a player is permitted to utter falsehoods when and only when they have a Joker in hand. In this case when the player utters a falsehood with Joker in hand, there is no lie. The basic communicative effect of uttering s is equivalent to asserting “s or I have a Joker in hand (or both)”, though there may be additional information conveyed by bodily expression, tone of voice, or context.

If this analysis of the poker variant is correct, then the following seems to follow by analogy. Suppose, as many people think, that it is morally permissible to utter falsehoods in “assertoric contexts” to save innocent lives. (An assertoric context is roughly one where the speaker is appropriately taken to be asserting.) Given that we are always playing the “morality game”, by analogy this would mean that in paradigm instances when we utter a declarative sentence s, we are actually communicating something like “s or I am speaking to save innocent lives.” If this is right, then it is impossible to lie to save innocent lives, just as in my poker variant it is impossible to lie when one knows one has the Joker in hand (unless maybe one is really bad at logic).

The above argument supports this premise:

  1. If it is morally permissible to utter falsehoods in assertoric contexts to save innocent lives, it is not possible to lie to save innocent lives.

But:

  2. It is possible to lie to save innocent lives.

I conclude:

  3. It is not morally permissible to utter falsehoods in assertoric contexts to save innocent lives.

In short: lying is wrong, even to save innocent lives.

Monday, March 17, 2025

Evolution of my views on mathematics

I have for a long time inclined towards ifthenism in mathematics: the idea that mathematics discovers truths of the form "If these axioms are true, then this thing is true as well."

Two things have weakened my inclination to ifthenism.

The first is that there really seems to be a privileged natural number structure. For any consistent, sufficiently rich, recursive axiomatization A of the natural numbers, by Gödel’s Second Incompleteness Theorem (plus Completeness) there is a natural number structure satisfying A according to which A is inconsistent and there is a natural number structure satisfying A according to which A is consistent. These two structures can’t be on par—one of them needs to be privileged.
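
Spelled out a little (my gloss, with the extra assumption that A is true in the standard model, so that A + Con(A) is consistent as well):

  • By Gödel II, A does not prove Con(A); hence A + ¬Con(A) is consistent, and by Completeness it has a model: a structure satisfying A according to which A is inconsistent.

  • By the extra assumption, A + Con(A) is also consistent, and by Completeness it too has a model: a structure satisfying A according to which A is consistent.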

The second is an insight I got from Linnebo’s philosophy of mathematics book: humans did mathematics before they did axiomatic mathematics. Sophisticated but apparently non-axiomatic Babylonian mathematics came before Greek axiomatic geometry. It is awkward to think that the Babylonians were discovering ifthenist truths, given that they didn’t have a clear idea of the antecedents of the ifthenist conditionals.

I am now toying with the idea that there is a metaphysically privileged natural number structure but we have ifthenism for everything else in mathematics.

How is the natural number structure privileged? I think as follows: the order structure of the natural numbers is a possible order structure for a causal sequence. Causal finitism, by requiring all initial segments under the causal relation to be finite, requires the order type of the natural numbers to be ω. But once we have fixed the order type to be ω, we have fixed the natural number structure to be standard.

Thursday, March 6, 2025

Definitions

In the previous post, I offered a criticism of defining logical consequence by means of proofs. A more precise way to put my criticism would be:

  1. Logical consequence is equally well defined by (i) tree-proofs or by (ii) Fitch-proofs.

  2. If (1), then logical consequence is either correctly defined by (i) and correctly defined by (ii) or it is not correctly defined by either.

  3. If logical consequence is correctly defined by one of (i) and (ii), it is not correctly defined by the other.

  4. Logical consequence is not both correctly defined by (i) and correctly defined by (ii). (By 3)

  5. Logical consequence is neither correctly defined by (i) nor by (ii). (By 1, 2, and 4)

When writing the post I had a disquiet about the argument, which I think amounts to a worry that there are parallel arguments that are bad. Consider the parallel argument against the standard definition of a bachelor:

  6. A bachelor is equally well defined as (iii) an unmarried individual that is a man or as (iv) a man that is unmarried.

  7. If (6), then a bachelor is either correctly defined by (iii) and correctly defined by (iv) or it is not correctly defined by either.

  8. If a bachelor is correctly defined by one of (iii) and (iv), it is not correctly defined by the other.

  9. A bachelor is not both correctly defined by (iii) and correctly defined by (iv). (By 8)

  10. A bachelor is neither correctly defined by (iii) nor by (iv). (By 6, 7, and 9)

Whatever the problems of the standard definition of a bachelor (is a pope or a widower a bachelor?), this argument is not a problem. Premise (8) is false: there is no problem with saying that both (iii) and (iv) are good definitions, given that they are equivalent as definitions.

But now can’t the inferentialist say the same thing about premise (3) of my original argument?

No. Here’s why. That ψ has a tree-proof from ϕ is a different fact from the fact that ψ has a Fitch-proof from ϕ. It’s a different fact because it depends on the existence of a different entity—a tree-proof versus a Fitch-proof. We can put the point here in terms of grounding or truth-making: the grounds of one involve one entity and the grounds of the other involve a different entity. On the other hand, that Bob is an unmarried individual who is a man and that Bob is a man who is unmarried are the same fact, and have the same grounds: Bob’s being unmarried and Bob’s being a man.
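
To make the point vivid, here is a toy sketch (mine, not the post’s) of the same entailment witnessed by two structurally different proof objects:

    # The inference "from p ∧ q, infer q" represented two ways.

    # Tree-style proof: a nested structure whose leaves are premises and whose
    # internal nodes are rule applications.
    tree_proof = ("and_elim_right", ("premise", "p ∧ q"))

    # Fitch-style proof: a flat sequence of numbered lines with justifications.
    fitch_proof = [
        (1, "p ∧ q", "premise"),
        (2, "q", "∧E 1"),
    ]

    # As objects the two proofs are plainly distinct, even though each one
    # witnesses the very same consequence relation between "p ∧ q" and "q".
    print(tree_proof == fitch_proof)  # False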

Suppose one polytheist believes in two necessarily existing and essentially omniscient gods, A and B, and defines truth as what A believes, while her coreligionist defines truth as what B believes. The two thinkers genuinely disagree as to what truth is, since for the first thinker the grounds of a proposition’s being true are beliefs by A while for the second the grounds are beliefs by B. That necessarily each definition picks out the same truth facts does not save the definition. A good definition has to be hyperintensionally correct.

Logical consequence

There are two main accounts of ψ being a logical consequence of ϕ:

  • Inferentialist: there is a proof from ϕ to ψ

  • Model theoretic: every model of ϕ is a model of ψ.

Both suffer from a related problem.

On inferentialism, the problem is that there are many different concepts of proof, all of which yield an equivalent relation between ϕ and ψ. First, we have a distinction as to how the structure of a proof is indicated: is it a tree, a sequence of statements set off by subproof indentation, or something else? Second, we have a distinction as to the choice of primitive rules. Do we, for instance, have only pure rules like disjunction-introduction, or do we allow mixed rules like De Morgan? Do we allow conveniences like ternary conjunction-elimination, or idempotent rules? Which truth-functional symbols do we take as undefined primitives and which ones do we take as abbreviations for others (e.g., maybe we just have a Sheffer stroke)?

It is tempting to say that it doesn’t matter: any reasonable answers to these questions make exactly the same ψ be a logical consequence of the same ϕ.

Yes, of course! But that’s the point. All of these proof systems have something in common which makes them “reasonable”; other proof systems, like ones including the rule of arbitrary statement introduction, are not reasonable. What makes them reasonable is that the proofs they yield capture logical consequence: they yield a proof from ϕ to ψ precisely when ψ logically follows from ϕ. The concept of logical consequence is thus something that goes beyond them.

None of these are the definition of proof. This is just like the point we learn from Benacerraf that none of the set-theoretic “constructions of the natural numbers” like 3 = {0, 1, 2} or 3 = {{{0}}} gives the definition of the natural numbers. The set theoretic constructions give a model of the natural numbers, but our interest is in the structure they all have in common. Likewise with proof.
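
A small computational illustration (my own, not from the post) of the Benacerraf point:

    # Two set-theoretic "constructions" of 3: von Neumann's 3 = {0, 1, 2} and
    # Zermelo's 3 = {{{0}}}. They model the same number but are different sets.

    def von_neumann(n):
        s = frozenset()
        for _ in range(n):
            s = s | frozenset([s])   # successor(x) = x ∪ {x}
        return s

    def zermelo(n):
        s = frozenset()
        for _ in range(n):
            s = frozenset([s])       # successor(x) = {x}
        return s

    print(von_neumann(3) == zermelo(3))          # False: distinct sets
    print(len(von_neumann(3)), len(zermelo(3)))  # 3 1: they even have different sizes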

The problem becomes even worse if we take a nominalist approach to proof like Goodman and Quine do, where proofs are concrete inscriptions. For then what counts as a proof depends on our latitude with regard to the choice of font!

The model theoretic approach has a similar issue. A model, on the modern understanding, is a triple (M,R,I) where M is a set of objects, R is a set of relations and I is an interpretation. We immediately have the Benacerraf problem that there are many set-theoretic ways to define triples, relations and interpretations. And, besides that, why should sets be the only allowed models?

One alternative is to take logical consequence to be primitive.

Another is not to worry, but to take the important and fundamental relation to be metaphysical consequence, and be happy with logical consequence being relative to a particular logical system rather than something absolute. We can still insist that not everything goes for logical consequence: some logical systems are good and some are bad. The good ones are the ones with the property that if ψ follows from ϕ in the system, then it is metaphysically necessary that if ϕ then ψ.

Wednesday, March 5, 2025

A praise-blame asymmetry

There is a certain kind of symmetry between praise and blame. We praise someone who incurs a cost to themselves by going above and beyond obligation and thereby benefitting another. We blame someone who benefits themselves by failing to fulfill an obligation and thereby harming another.

But here is a fun asymmetry to note. We praise the benefactor in proportion to the cost to the benefactor. But we do not blame the malefactor in proportion to the benefit to the malefactor. On the contrary, when the benefit to the malefactor is really small, we think the malefactor is more to be blamed.

Realism about arithmetical truth

It seems very plausible that for any specific Turing machine M there is a fact of the matter about whether M would halt. We can just imagine running the experiment in an idealized world with an infinite future, and surely either it will halt or it won’t halt. No supertasks are needed.

This commits one to realism about Σ1 arithmetical propositions: for every proposition expressible in the form ∃n ϕ(n), where ϕ(n) has only bounded quantifiers, there is a fact of the matter whether the proposition is true. For there is a Turing machine that halts if and only if ∃n ϕ(n).
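
A minimal sketch (my illustration) of the correspondence between Σ1 truth and halting:

    # A Σ1 sentence ∃n ϕ(n), with ϕ having only bounded quantifiers (hence decidable),
    # corresponds to an unbounded search that halts exactly when the sentence is true.

    def sigma1_search(phi):
        n = 0
        while True:
            if phi(n):       # checking ϕ(n) always terminates
                return n     # halting means the sentence is true
            n += 1

    # Toy instance: "there is an even number greater than 10" is true, so this halts.
    print(sigma1_search(lambda n: n > 10 and n % 2 == 0))  # 12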

But now consider a Π2 proposition, one expressible in the form ∀m ∃n ϕ(m,n), where again ϕ(m,n) has only bounded quantifiers. For each fixed m, there is a Turing machine Mm whose halting is equivalent to ∃n ϕ(m,n). Imagine now a scenario where on day m of an infinite future you build and start Mm. Then there surely will be a fact of the matter whether all of these Turing machines will halt, a fact equivalent to ∀m ∃n ϕ(m,n).

What about a Σ3 proposition, one expressible in the form ∃r ∀m ∃n ϕ(r,m,n)? Well, we could imagine for each fixed r running the above experiment starting on day r in the future to determine whether the Π2 proposition ∀m ∃n ϕ(r,m,n) is true, and then there surely is a fact of the matter whether at least one of these experiments gives a positive answer.

And so on. Thus there is a fact of the matter whether any statement in the arithmetical hierarchy—and hence any statement in the language of arithmetic—is true or false.

This argument presupposes a realism about deterministic idealized machine counterfactuals: if I were to build such and such a sequence of deterministic idealized machines, they would behave in such and such a way.

The argument also presupposes that we have a concept of the finite and of countable infinity: it is essential that our Turing machines be run for a countable sequence of steps in the future and that the tape begin with a finite number of symbols on it. If we have causal finitism, we can get the concept of the finite out of the metaphysics of the world, and a discrete future-directed causal sequence of steps is guaranteed to be countable.

Tuesday, March 4, 2025

Degrees of gratitude

How grateful x should be to y for ϕing depends on:

  1. The expected benefit to x

  2. The actual benefit to x

  3. The expected cost to y

  4. The actual deontic status of y’s ϕing

  5. The believed deontic status of y’s ϕing.

The greater the expected benefit, the greater the appropriate gratitude. Zeroing the expected benefit zeroes the appropriate gratitude: if someone completely accidentally benefited me, no gratitude is appropriate.

I think the actual benefit increases the expected gratitude, even when the expected benefit is fixed. If you try to do something nice for me, I owe you thanks, but I owe even more thanks when I am an actual beneficiary. However, zeroing the actual benefit does not zero the expected gratitude—I should still be grateful for your trying.

The more costly the gift to the giver, the more gratitude is appropriate. But zeroing the cost does not zero the expected gratitude: I owe God gratitude for creating me even though it took no effort. I think that in terms of costs, it is only the expected and not the actual cost that matters for determining the appropriate gratitude. If you bring flowers to your beloved and slip and fall on the way back from the florist and break your leg, it doesn’t seem to me that more gratitude is appropriate.

I think of deontic status here as on a scale that includes four ranges:

  1. Wrong (negative)

  2. Merely permissible (neither obligatory nor supererogatory) (zero)

  3. Obligatory (positive)

  4. Supererogatory (super positive)

In cases where both the actual and the believed deontic status fall in category (1), no gratitude is appropriate. Gratitude is only appropriate for praiseworthy actions.

The cases of supererogation call for more gratitude than the cases of obligation, other things being equal. But nonetheless cases of obligatory benefiting also call for gratitude. While y might say “I just did my job”, that fact does not undercut the need for gratitude.

Cases where believed and actual deontic status come apart are complicated. Suppose that a do-not-resuscitate order is written in messy handwriting, and a doctor misreads it as a resuscitate order, and then engages in heroic effort to resuscitate, succeeds, and in fact benefits the patient. (Maybe the patient thought that they would not be benefited by resuscitation, but in fact they are.) I think gratitude is appropriate, even if the action was actually wrong.

There is presumably some very complicated function from factors (1)–(5) (and perhaps others) to the degree of appropriate gratitude.

I am really grateful to Juliana Kazemi for a conversation on relevant topics.

Wednesday, February 26, 2025

Against full panpsychism

I have access to two kinds of information about consciousness: I know the occasions on which I am conscious and the occasions on which I am not. Focusing on the second, we get this argument:

  1. If panpsychism is true, everything is always conscious.

  2. In dreamless sleep, I exist and am not conscious.

  3. So, panpsychism is false.

One response is to retreat to a weaker panpsychism on which everything is either conscious or has a conscious part. On the weaker panpsychism, one can say that in dreamless sleep, I have some conscious parts, say particles in my big toe.

But suppose we want to stick to full panpsychism that holds that everything is always conscious. This leaves two options.

First, one could deny that we exist in dreamless sleep. But if we don’t exist in dreamless sleep, then it is not possible to murder someone in dreamless sleep, and yet it obviously is.

Second, one could hold that we are conscious in dreamless sleep but the consciousness is not recorded to memory. This seems a dubious skeptical hypothesis. But let’s think about it a bit more. Presumably, the same applies under general anaesthesia. Now, while I’m far from expert on this, it seems plausible that the brain functioning under general anaesthesia is a proper subset of my present brain functioning. This makes it plausible that my experiences under general anaesthesia are a proper subset of my present wakeful experiences. But none of my present wakeful experiences—high level cognition, sensory experience, etc.—are a plausible candidate for an experience that I might have under general anaesthesia.

Tuesday, February 25, 2025

Being known

The obvious analysis of “p is known” is:

  1. There is someone who knows p.

But this obvious analysis doesn’t seem correct, or at least there is an interesting use of “is known” that doesn’t fit (1). Imagine a mathematics paper that says: “The necessary and sufficient conditions for q are known (Smith, 1967).” But what if the conditions are long and complicated, so that no one can keep them all in mind? What if no one who read Smith’s 1967 paper remembers all the conditions? Then no one knows the conditions, even though it is still true that the conditions “are known”.

Thus, (1) is not necessary for a proposition to be known. Nor is this a rare case. I expect that more than half of the mathematics articles from half a century ago contain some theorem or at least lemma that is known but which no one knows any more.

I suspect that (1) is not sufficient either. Suppose Alice is dying of thirst on a desert island. Someone, namely Alice, knows that she is dying of thirst, but it doesn’t seem right to say that it is known that she is dying of thirst.

So if it is neither necessary nor sufficient for p to be known that someone knows p, what does it mean to say that p is known? Roughly, I think, it has something to do with accessibility. Very roughly:

  2. Somebody has known p, and the knowledge is accessible to anyone who has appropriate skill and time.

It’s really hard to specify the appropriateness condition, however.

Does all this matter?

I suspect so. There is a value to something being known. When we talk of scientists advancing “human knowledge”, it is something like this “being known” that we are talking about.

Imagine that a scientist discovers p. She presents p at a conference where 20 experts learn p from her. Then she publishes it in a journal, whereupon 100 more people learn it. Then a Youtuber picks it up and now a million people know it.

If we understand the value of knowledge as something like the sum of epistemic utilities across humankind, then the successive increments in value go like this: first, we have a move from zero to some positive value V when the scientist discovers p. Then at the conference, the value jumps from V to 21V. Then after publication it goes from 21V to 121V. Then, given Youtube, it goes from 121V to 1000121V. The jump at initial discovery is by far the smallest, and the biggest leap is when the discovery is publicized. This strikes me as wrong. The big leap in value is when p becomes known, which either happens when the scientist discovers it or when it is presented at the conference. The rest is valuable, but not so big in terms of the value of “human knowledge”.

Monday, February 24, 2025

Epistemically paternalistic lies

Suppose Alice and Bob are students and co-religionists. Alice is struggling with a subject and asks Bob to pray that she might do fine on the exam. She gets 91%. Alice also knows that Bob’s credence in their religion is a bit lower than her own. When Bob asks her how she did, she lies that she got 94%, in order to boost Bob’s credence in their religion a bit more.

Whether a religion is correct is very epistemically important to Bob. But whether Alice got 91% or 94% is not at all epistemically important to Bob except as evidence for whether the religion is correct. The case can be so set up that by Alice’s lights—remember, she is more confident that the religion is correct than Bob is—Bob can be expected to be better off epistemically for boosting his credence in the religion. Moreover, we can suppose that there is no plausible way for Bob to find out that Alice lied. Thus, this is an epistemically paternalistic lie expected to make Bob be better off epistemically.

And this lie is clearly morally wrong. Thus, our communicative behavior is not merely governed by maximization of epistemic utility.

More on averaging to combine epistemic utilities

Suppose that the right way to combine epistemic utilities across people is averaging: the overall epistemic utility of the human race is the average of the individual epistemic utilities. Suppose, further, that each individual epistemic utility is strictly proper, and you’re a “humanitarian” agent who wants to optimize overall epistemic utility.

Suppose you’re now thinking about two hypotheses about how many people exist: the two possible numbers are m and n, which are not equal. All things considered, you have credence 0 < p0 < 1 in the hypothesis Hm that there are m people and 1 − p0 in the hypothesis Hn that there are n people. You now want to optimize overall epistemic utility. On an averaging view, if Hm is true and your credence is p1, your contribution to overall epistemic utility will be:

  • (1/m)T(p1)

and if Hm is false, your contribution will be:

  • (1/n)F(p1),

where your strictly proper scoring rule is given by T, F. Since your credence is p0, by your lights the expected value after changing your credence to p1 will be:

  • p0(1/m)T(p1) + (1−p0)(1/n)F(p1) + Q(p0)

where Q(p0) is the contribution of other people’s credences, which I assume you do not affect with your choice of p1. If m ≠ n and T, F is strictly proper, the expected value will be maximized at

  • p1 = (p0/m)/(p0/m+(1−p0)/n) = np0/(np0+m(1−p0)).

If m > n, then p1 < p0 and if m < n, then p1 > p0. In other words, as long as n ≠ m, if you’re an epistemic humanitarian aiming to improve overall epistemic utility, any credence strictly between 0 and 1 will be unstable: you will need to change it. And indeed your credence will converge to 0 if m > n and to 1 if m < n. This is absurd.
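
A quick numerical check of this instability (my sketch, using the Brier score as the strictly proper rule and made-up values m = 10 and n = 2):

    # With averaging, the humanitarian-optimal credence given current credence p0 is
    # p1 = n*p0 / (n*p0 + m*(1 - p0)); iterating drives the credence toward 0 when m > n.

    m, n = 10, 2

    def expected_avg_utility(p1, p0):
        T = lambda p: -(1 - p) ** 2   # Brier accuracy if Hm is true
        F = lambda p: -p ** 2         # Brier accuracy if Hm is false
        return p0 * (1 / m) * T(p1) + (1 - p0) * (1 / n) * F(p1)

    p0 = 0.5
    for step in range(4):
        p1 = max((i / 10000 for i in range(10001)),
                 key=lambda q: expected_avg_utility(q, p0))
        print(step, round(p1, 4))     # roughly 0.1667, 0.0385, 0.0079, 0.0016
        p0 = p1                       # the new credence is again unstable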

I conclude that we shouldn’t combine epistemic utilities across people by averaging the utilities.

Idea: What about combining them by computing the epistemic utilities of the average credences, and then applying a strictly proper scoring rule, in effect imagining that humanity is one big committee and that a committee’s credence is the average of the individual credences?

This is even worse, because it leads to problems even without considering hypotheses on which the number of people varies. Suppose that you’ve just counted some large number nobody cares about, such as the number of cars crossing some intersection in New York City during a specific day. The number you got is even, but because the number is big, you might well have made a mistake, and so your credence that the number is even is still fairly low, say 0.7. The billions of other people on earth all have credence 0.5, and because nobody cares about your count, you won’t be able to inform them of your “study”, and their credences won’t change.

If combined epistemic utility is given by applying a proper scoring rule to the average credence, then by your lights the expected value of the combined epistemic utility will increase the bigger you can budge the average credence, as long as you don’t get it above your credence. Since you can really only affect your own credence, as an epistemic humanitarian your best bet is to set your credence to 1, thereby increasing overall human credence from 0.5 to around 0.5000000001, and making a tiny improvement in the expected value of the combined epistemic utility of humankind. In doing so, you sacrifice your own epistemic good for the epistemic good of the whole. This is absurd!
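
A small numerical illustration (my sketch, using the Brier score and a population of 1000 standing in for billions) of why overshooting pays on this proposal:

    # Combined utility = proper score of the AVERAGE credence. My credence in H is 0.7,
    # everyone else sits at 0.5. By my lights, the expected score of the average is
    # maximized by adopting credence 1, since the average stays below 0.7 no matter what.

    N = 1000

    def brier(p, truth):
        return -(p - truth) ** 2

    def expected_group_score(x, my_credence=0.7):
        avg = (x + (N - 1) * 0.5) / N          # committee credence if I adopt credence x
        return my_credence * brier(avg, 1) + (1 - my_credence) * brier(avg, 0)

    best = max((i / 100 for i in range(101)), key=expected_group_score)
    print(best)                                 # 1.0: I do best by exaggerating to certainty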

I think the idea of averaging to produce overall epistemic utilities is just wrong.

Friday, February 21, 2025

Adding or averaging epistemic utilities?

Suppose for simplicity that everyone is a good Bayesian and has the same priors for a hypothesis H, and also the same epistemic interests with respect to H. I now observe some evidence E relevant to H. My credence now diverges from everyone else’s, because I have new evidence. Suppose I could share this evidence with everyone. It seems obvious that if epistemic considerations are the only ones, I should share the evidence. (If the priors are not equal, then considerations in my previous post might lead me to withhold information, if I am willing to embrace epistemic paternalism.)

Besides the obvious value of revealing the truth, here are two ways to reason for this highly intuitive conclusion.

First, good Bayesians will always expect to benefit from more evidence. If my place and that of some other agent, say Alice, were switched, I’d want the information regarding E to be released. So by the Golden Rule, I should release the information.

Second, good Bayesians’ epistemic utilities are measured by a strictly proper scoring rule. Suppose Alice’s epistemic utilities for H are measured by a strictly proper (accuracy) scoring rule s that assigns an epistemic utility s(p,t) to a credence p when the actual truth value of H is t, which can be zero or one. By definition of strict propriety, the expectation by my lights of Alice’s epistemic utility for a given credence is strictly maximized when that credence equals my credence. Since Alice shares the priors I had before I observed E, if I can make E evident to her, her new posteriors will match my current ones, and so revealing E to her will maximize my expectation of her epistemic utility.

So far so good. But now suppose that the hypothesis H = HN is that there exist N people other than me, and my priors assign probability 1/2 to there being N and 1/2 to its being n, where N is much larger than n. Suppose further that my evidence E ends up significantly supporting hypothesis Hn, so that my posterior p in HN is smaller than 1/2.

Now, my expectation of the total epistemic utility of other people if I reveal E is:

  • UR = pNs(p,1) + (1−p)ns(p,0).

And if I conceal E, my expectation is:

  • UC = pNs(1/2,1) + (1−p)ns(1/2,0).

If we had N = n, then it would be guaranteed by strict propriety that UR > UC, and so I should reveal. But we have N > n. Moreover, s(1/2,1) > s(p,1): if some hypothesis is true, a strictly proper accuracy scoring rule increases strictly monotonically with the credence. If N/n is sufficiently large, the first terms of UR and UC will dominate, and hence we will have UC > UR, and thus I should conceal.

The intuition behind this technical argument is this. If I reveal the evidence, I decrease people’s credence in HN. If it turns out that the number of people other than me actually is N, I have done a lot of harm, because I have decreased the credence of a very large number N of people. Since N is much larger than n, this consideration trumps considerations of what happens if the number of people is n.
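
A numerical check of the technical argument (my sketch, with the Brier score and made-up values p = 0.3 and n = 10):

    # UR: everyone else updates to my posterior p; UC: everyone else stays at the prior 1/2.
    # Summing utilities, concealing wins once N/n is large enough.

    def brier(q, truth):
        return -(q - truth) ** 2

    p, n = 0.3, 10

    for N in (10, 100, 100000):
        UR = p * N * brier(p, 1) + (1 - p) * n * brier(p, 0)
        UC = p * N * brier(0.5, 1) + (1 - p) * n * brier(0.5, 0)
        print(N, "reveal" if UR > UC else "conceal")
    # prints: 10 reveal, 100 conceal, 100000 conceal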

I take it that this is the wrong conclusion. On epistemic grounds, if everyone’s priors are equal, we should release evidence. (See my previous post for what happens if priors are not equal.)

So what should we do? Well, one option is to opt for averaging rather than summing of epistemic utilities. But the problem reappears. For suppose that I can only communicate with members of my own local community, and we as a community have equal credence 1/2 for the hypothesis Hn that our local community of n people contains all agents, and credence 1/2 for the hypothesis Hn+N that there is also a number N of agents outside our community much greater than n. Suppose, further, that my priors are such that I am certain that all the agents outside our community know the truth about these hypotheses. I receive a piece of evidence E disfavoring Hn and leading to credence p < 1/2. Since my revelation of E only affects the members of my own community, depending on which hypothesis is true, where p is my credence after updating on E, the relevant part of the expected contribution to the utility of revealing E with regard to hypothesis Hn is:

  • UR = p((n−1)/n)s(p,1) + (1−p)((n−1)/(n+N))s(p,0).

And if I conceal E, my expected contribution is:

  • UC = p((n−1)/n)s(1/2,1) + (1−p)((n−1)/(n+N))s(1/2,0).

If N is sufficiently large, again UC will beat UR.

I take it that there is something wrong with epistemic utilitarianism.

Bayesianism and epistemic paternalism

Suppose that your priors for some hypothesis H are 3/4 while my priors for it are 1/2. I now find some piece of evidence E for H which raises my credence in H to 3/4 and would raise yours above 3/4. If my concern is for your epistemic good, should I reveal this evidence E?

Here is an interesting reason for a negative answer. For any strictly proper (accuracy) scoring rule, my expected value for the score of a credence is uniquely maximized when the credence is 3/4. I assume your epistemic utility is governed by a strictly proper scoring rule. So the expected epistemic utility, by my lights, of your credence is maximized when your credence is 3/4. But if I reveal E to you, your credence will go above 3/4. So I shouldn’t reveal it.
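
A quick check with the Brier score (my sketch) that, by my lights, the expected score of your credence peaks at my credence of 3/4:

    # Expected score of your credence q, computed with my credence 0.75 in H.

    def brier(q, truth):
        return -(q - truth) ** 2

    def my_expected_score_of(q, my_credence=0.75):
        return my_credence * brier(q, 1) + (1 - my_credence) * brier(q, 0)

    for q in (0.70, 0.75, 0.80, 0.85):
        print(q, round(my_expected_score_of(q), 4))
    # peaks at q = 0.75 (score -0.1875) and declines above it, e.g. -0.19 at q = 0.80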

This is epistemic paternalism. So, it seems, expected epistemic utility maximization (which I take it has to employ a strictly proper scoring rule) forces one to adopt epistemic paternalism. This is not a happy conclusion for expected epistemic utility maximization.

Tuesday, February 18, 2025

An example of a value-driven epistemological approach to metaphysics

  1. Everything that exists is intrinsically valuable.

  2. Shadows and holes are not intrinsically valuable.

  3. So, neither shadows nor holes exist.