Wednesday, May 25, 2022

Anti-Bayesian update and scoring rules in infinite spaces

Bayesian update on evidence E is transitioning from a credence function P to the credence function P(⋅∣E). Anti-Bayesian update on E is moving from P to P(⋅∣Ec) (where Ec is the complement of E). Whether or not one thinks that Bayesian update is rationally required, it is clear that Bayesian update is better than anti-Bayesian update.

But here is a fun fact (assuming the Axiom of Choice). For any scoring rule on an infinite space, there is a finitely additive probability function P and an event E with 0 < P(E) < 1 such that P(⋅∣E) and P(⋅∣Ec) get exactly the same score everywhere in the probability space. It follows that when dealing with finitely additive probabilities on infinite spaces, a scoring rule will not always be able to distinguish Bayesian update from anti-Bayesian update. This is a severe limitation of scoring rules as a tool for evaluating the accuracy of a credence function in infinite cases.

Here’s a proof of the fun fact. Let s be a scoring rule. Say that a credence function is maximally opinionated provided that it assigns 0 or 1 to every event. It is known (given the Axiom of Choice) that there are two different maximally opinionated finitely additive probability functions p and q such that s(p) = s(q) everywhere. Let P = (p+q)/2 be their average. Let E be an event such that p(E) = 1 and q(E) = 0 (such an event exists because p and q are maximally opinionated and yet different). Then P(E) = 1/2, while P(⋅∣E) = p and P(⋅∣Ec) = q. For, given any event A, P(A∣E) = P(A∩E)/P(E) = p(A∩E) + q(A∩E) = p(A∩E) = p(A), since q(A∩E) ≤ q(E) = 0 and p(A∩Ec) ≤ p(Ec) = 0; the case of Ec is symmetric. Hence conditionalization on E and conditionalization on Ec get exactly the same score.
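The averaging step can be checked concretely. Here is a minimal Python sketch on a three-point toy space (an illustration of the bookkeeping only: the equal-scores part of the fun fact requires an infinite space, and on a finite space the maximally opinionated finitely additive probabilities are just the point masses):

```python
from itertools import chain, combinations

Omega = [0, 1, 2]

def powerset(xs):
    # All subsets of xs, as frozensets (so they can serve as dict keys).
    return [frozenset(c) for c in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

def point_mass(x):
    # The maximally opinionated probability concentrated at x.
    return {A: (1.0 if x in A else 0.0) for A in powerset(Omega)}

p = point_mass(0)
q = point_mass(1)
P = {A: (p[A] + q[A]) / 2 for A in powerset(Omega)}  # P = (p+q)/2

E = frozenset({0, 2})  # p(E) = 1 and q(E) = 0, so P(E) = 1/2
cond_E = {A: P[A & E] / P[E] for A in powerset(Omega)}         # P(.|E)
cond_Ec = {A: P[A - E] / (1 - P[E]) for A in powerset(Omega)}  # P(.|Ec)

assert P[E] == 0.5 and cond_E == p and cond_Ec == q
print("P(.|E) = p and P(.|Ec) = q, exactly as in the proof")
```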

One might take this as some evidence that finite additivity is not good enough.

Tuesday, May 24, 2022

Physicalism and the progress of science

People sometimes use the progress of science to argue for physicalism about the mind. But it seems to me that Dostoevskii made more progress in understanding the human mind by existential reflection than anybody has by studying the brain directly. More generally, if we want to understand human minds, we should turn to literature and the spiritual masters rather than to neuroscience.

Thus, any argument for physicalism about the mind from the progress of science is seriously flawed. And perhaps we even have some evidence against physicalism. For if the mind is the brain, it is a surprising fact that we learn more about the mind by the methods of the humanities than by studying the brain.

GreaseWeazle

I'm trying to thin the herd of old computers at home. I realized that the only real reason I had a 20-year-old Linux box at home was that I might someday want to use the 3.5" drive in it to deal with floppies for various systems, especially my HP 1653B oscilloscope (I could get a USB floppy drive for one of the laptops at home, but those aren't usually compatible with non-DOS disk formats).

As it happened, the 3.5" drive in the computer wasn't even working. Aligning the heads on the drive solved that problem, and then I assembled a GreaseWeazle using one of the blue pill microcontroller boards I have lying around. Then I made a 3D-printable case for the messy assembly.

And now I can read and copy floppies for my oscilloscope on my laptop. :-)



Wednesday, May 18, 2022

Dog whistles

From time to time I’ve had occasion to make use of examples where someone says different things to two different interlocutors in a single utterance. My favorite examples were pointing to a bottle and saying “Gift!” (which in German means “poison”), which would mean a very different thing to a German speaker and to an English speaker, or using coded language while speaking to someone while knowing that a spy is overhearing. Such examples illustrate the interesting fact that we cannot identify propositions with equivalence classes of utterance tokens, because a single utterance token can express different propositions.

But arguments based on such contrived cases have a tendency to be less than convincing. However, it has just occurred to me that dog whistles in politics are a real-life example of the same phenomenon, and one technically within a single language.

By the way, if we’re looking for equivalence classes that function like propositions, I guess that instead of looking at equivalence classes of utterance tokens, we should look at equivalence classes of context-token pairs, where a context includes the language and dialect as well as the (actual? intended?) audience.

Tuesday, May 17, 2022

A near lie

Alice knows that her friend Bob has no pets and no experience with birds. While recommending Bob for a birdkeeping job at a zoo, and having discovered her interlocutor to be surprisingly ignorant about birds, she says:

  1. Bob has a fine collection of Southern yellow-beaked triggles.

It seems that Alice is lying. Yet it seems that to lie one must assert, and to assert one must express a proposition. But Alice’s sentence does not express a proposition since “triggle” is meaningless.

Sentence (1) seems to entail the falsehood:

  2. Bob owns some birds.

But entailment is a relation between propositions, and (1) neither is nor expresses a proposition. We might want to say that if it did express a proposition, it would express a proposition entailing (2). But even that isn’t so clear. After all, maybe a world where “triggle” denotes a science-fictional beaked reptile is closer than a world where it denotes a kind of bird (imagine that some science-fiction writer almost wrote Southern yellow-beaked triggles as reptiles into a story but stopped themselves at the last moment).

Here is what I think I want to say about what Alice did. According to Jorge Garcia, what makes lying bad is that one linguistically solicits trust that what one is saying is true, while at the same time betraying that trust. Alice did exactly that, but without asserting. So, while Alice did not lie, she did something that is wrong for the same reason that lying is.

Wednesday, May 11, 2022

Chinese Room thought experiments

Thought experiments like Searle’s Chinese Room are supposed to show that understanding and consciousness are not reducible to computation. For if they are, then a bored monolingual English-speaking clerk who moves around pieces of paper with Chinese characters—or photographic memories of them in his head—according to a fixed set of rules counts as understanding Chinese and having the consciousness that goes with that.

I used to find this an extremely convincing argument. But I am finding it less so over time. Anybody who thinks that computers could have understanding and consciousness will think that a computer can run two different simultaneous processes of understanding and consciousness sandboxed apart from one another. Neither process will have the understanding and consciousness of what is going on in the other process. And that’s very much what the functionalist should say about the Chinese Room. We have two processes running in the clerk’s head. One process is English-based and the other is a Chinese-based process running in an emulation layer. There is limited communication between the two, and hence understanding and consciousness do not leak between them.

If we accept the possibility of strong Artificial Intelligence, we have two choices of what to say about sandboxed intelligent processes running on the same hardware. We can say that there is one person with two centers of consciousness/understanding or that there are two persons each with one center. On the one person with two mental centers view, we can say that the clerk does understand Chinese and does have the corresponding consciousness, but that understanding is sandboxed away from the English-based processing, and in particular the clerk will not talk about it (much as in the computer case, we could imagine the two processes communicating with a user through different on-screen windows). On the two person view, we would say that the clerk does not understand Chinese, but that a new person comes into existence who does understand Chinese.

I am not saying that the proponent of strong AI is home free. I think both the one-person-two-centers and two-person views have problems. But these are problems that arise purely in the computer case, without any Chinese room kind of stuff going on.

The one-person-two-centers view of multiple intelligent processes running on one piece of hardware gives rise to insoluble questions about the unity of a piece of hardware. (If each process runs on a different processor core, do we count as having one piece of hardware or not? If not, what if the processes are constantly switching between cores? If yes, what if we separate the cores onto separate pieces of silicon that are glued along an edge?) The two-person view, on the other hand, is incompatible with animalism in our own case. Moreover, it ends up identifying persons with software processes, which leads to the unfortunate conclusion that when the processes are put to sleep, the persons temporarily cease to exist—and hence that we do not exist when sufficiently deeply asleep.

These are real problems, but no additional difficulty comes from the Chinese room case that I can see.

Tuesday, May 10, 2022

Towards a static solution to Wordle

A static solution to Wordle would be a sequence of five guess words which would distinguish all the answer words. I've run parallel, nearly-brute-force C code (with some time-saving heuristics) to try to see if there is a static solution. No luck so far. The closest I have is flitt dawds vughy kerel combo paean, which leaves two pairs undistinguished (spine/snipe and gauge/gauze). There may be a full solution, but I don't have it. (Note: I am working with the original answer list, not the modified New York Times one.)
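For concreteness, here is a minimal Python sketch (not the C code mentioned above) of the underlying check: a sequence of guesses is a static solution just in case every answer word yields a distinct tuple of feedback patterns. The variable `answers` is assumed to hold the answer list:

```python
from collections import Counter

def pattern(guess, answer):
    # Wordle feedback: 2 = green, 1 = yellow, 0 = gray, with the standard
    # handling of repeated letters (greens assigned first, then yellows).
    fb = [0] * len(guess)
    counts = Counter()
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            fb[i] = 2
        else:
            counts[a] += 1
    for i, g in enumerate(guess):
        if fb[i] == 0 and counts[g] > 0:
            fb[i] = 1
            counts[g] -= 1
    return tuple(fb)

def undistinguished_groups(guesses, answers):
    # Bucket answers by their joint feedback signature; any bucket with
    # more than one word is a group the guesses fail to distinguish.
    buckets = {}
    for ans in answers:
        sig = tuple(pattern(g, ans) for g in guesses)
        buckets.setdefault(sig, []).append(ans)
    return [grp for grp in buckets.values() if len(grp) > 1]

# Example use, with `answers` loaded from the original answer list:
# print(undistinguished_groups(["flitt", "dawds", "vughy", "kerel", "combo"], answers))
```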

Friday, May 6, 2022

Punishment and the law

Here’s a valid argument:

  1. It is only permissible to punish a person for doing what is morally wrong.

  2. It is permissible for the state to punish a person for disobeying law.

  3. Therefore, disobeying law is morally wrong.

This is already an interesting and somewhat controversial conclusion. It pushes us to the view that when the law forbids something that is innately morally permissible—such as driving on the left side of the road—that thing becomes morally impermissible.

We can then continue arguing to another controversial conclusion:

  4. It is not morally wrong to disobey unjust requirements.

  5. Therefore, no unjust requirement is law.

I suppose all this focuses one’s attention on (1). The opposing view would be that it is permissible to punish a person for doing things that are legally wrong even when they are merely legally wrong. But this seems mistaken. A person who fulfills all moral imperatives is perfectly innocent. But it is wrong to punish a perfectly innocent person.

Note that the first argument implies that it is highly problematic to take literally the idea of what some Catholic authors called “purely penal laws”: laws under which there is no moral obligation to obey, just an obligation to pay the penalty if one is caught disobeying. For if a law is penal, it imposes a punishment, and it is wrong to impose a punishment for what isn’t wrong to do. That said, it may be that the idea of “purely penal laws” is just a misuse of the word “penal”. We can think of them as laws that simply impose a special fee if one is caught disobeying, where that fee is not a punishment. We can imagine, for instance, a setup where there is a set fee for traveling by bus with a ticket and a larger fee for traveling without one, levied at random, namely when a ticket checker happens to be present. (I remember that buses in Poland once displayed a sign detailing a with-ticket price and a without-ticket price, the second being an order of magnitude higher.) But it is a difficult question when something is a fee and when it is a punishment. This question famously came up for Obamacare.

Wednesday, May 4, 2022

Evils that are evidence for theism

It’s mildly interesting to note, when evaluating the evidential impact of evil, that there can be evil events that would be evidence for the existence of God. For instance, suppose that three Roman soldiers who witnessed Christ’s resurrection conspired to lie that they didn’t see Christ get resurrected. That they lied that they didn’t see Christ get resurrected entails that they thought they witnessed the resurrection, and that would be strong evidence for the existence of God, even after factoring in the counterevidence coming from the evil of the lie. (After all, we already knew that there are lots of lies in the world, so learning of one more won’t make much of a difference.)

In fact, this is true even for horrendous and apparently gratuitous evils. We could imagine that the three soldiers’ lies crush someone’s hopes for the coming of the Messiah, and that could be a horrendous evil. And it could also be the case that we can’t see any possible good from the lie, and hence the lie is apparently gratuitous.

Monday, May 2, 2022

An argument for probabilism without assuming strict propriety

Suppose that s is a proper scoring rule on a finite space Ω that is continuous on the probabilities, and suppose that for no probability p is the expectation Eₚs(p) infinitely bad (i.e., no probability is infinitely bad by its own lights). Suppose further that s is probability distinguishing: there is no non-probability c and probability p such that s(c) = s(p) everywhere. Then any non-probability credence c is weakly s-dominated by some probability p: i.e., s(p)(ω) is at least as good as s(c)(ω) for all ω, and strictly better for at least one ω. (This follows from the fact that Lemma 1 of this short piece holds with the same proof when q is a non-probability.)
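The dominance phenomenon itself is easy to see numerically. Here is a hypothetical Python illustration using the Brier score (which is in fact strictly proper, so it is not the sort of rule at issue above; it merely shows a probability strictly dominating a non-probability):

```python
# Omega = {0, 1}; a credence is the pair (c({0}), c({1})).
# For the Brier score, lower is better.

def brier(c, omega):
    truth = (1.0, 0.0) if omega == 0 else (0.0, 1.0)
    return (c[0] - truth[0]) ** 2 + (c[1] - truth[1]) ** 2

c = (0.75, 0.5)           # non-probability: 0.75 + 0.5 != 1
t = (1 - (c[0] + c[1])) / 2
p = (c[0] + t, c[1] + t)  # (0.625, 0.375): the nearest probability

for omega in (0, 1):
    assert brier(p, omega) < brier(c, omega)  # p strictly dominates c
print(brier(c, 0), brier(p, 0), brier(c, 1), brier(p, 1))
```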

If one thinks that one should always switch to a weakly dominating option, then this conclusion provides an argument for probabilism.

One might, however, reasonably think that one is only required to switch to a weakly dominating option when one assigns non-zero probability to the weakly dominating option being better. If so, then we get a weaker conclusion: your credences should either be irregular (i.e., assign zero to some non-empty set) or probabilistic. But a view that permits violations of the axioms of probability only when one has irregular credences seems really implausible. So your credences should be probabilistic.

The big question is whether probability distinguishing is any more plausible as a condition on a scoring rule than strictness of propriety. I think it has some plausibility, but I am not quite sure how to argue for it.

Truth-directedness and propriety of scoring rules do not imply strict propriety

A scoring rule assigns a score to a credence assignment (which can but need not satisfy the axioms of probability), where a score is a random variable measuring how close the credence assignment is to the truth.

A scoring rule is strictly truth-directed provided that if c′ is a credence assignment that is closer to the truth at ω than a credence assignment c is, then c′ gets a better score at ω. A scoring rule is proper provided that for every probability p, the p-expected value of the score of p is at least as good as the p-expected value of the score of any other credence, and strictly proper provided that, moreover, it is strictly better whenever the other credence differs from p.
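In symbols, as a sketch of the two conditions, with ⪰ read as “at least as good as” and ≻ as “strictly better than”:

$$\text{proper:}\quad \mathbb{E}_p\,s(p) \succeq \mathbb{E}_p\,s(c)\ \text{ for every credence } c; \qquad \text{strictly proper:}\quad \mathbb{E}_p\,s(p) \succ \mathbb{E}_p\,s(c)\ \text{ whenever } c \neq p.$$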

Propriety for a scoring rule is a pretty plausible condition, but it’s a bit harder to argue philosophically for strict propriety. But scoring-rule based philosophical arguments for probabilism—the doctrine that credences ought to be probabilities—require strict propriety.

In a clever move, Campbell-Moore and Levinstein showed that propriety plus strict truth-directedness and additivity (the idea that the score can be decomposed into a sum of single-event scores) implies strict propriety.

Here’s an interesting fact I will show: propriety plus strict truth-directedness do not imply strict propriety in the absence of additivity. Further, my counterexample will be bounded, infinitely differentiable and strictly proper on the probabilities. Personally, I don’t find additivity all that plausible, so I conclude that the Campbell-Moore and Levinstein move does not move the discussion of strict propriety and probabilism ahead much.

Let Ω = {0, 1}. Given a credence function c (with values in [0,1]) on the powerset of Ω, define the credence function c* which has the same value as c on the empty set and on Ω, but where c*({0}) is the number z in [0,1] that minimizes (c({0})−z)² + (c({1})−(1−z))², and where c*({1}) = 1 − c*({0}). In other words, c* is the credence function closest to c in the Euclidean metric such that c*({0}) + c*({1}) = 1.
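For reference, writing x = c({0}) and y = c({1}), the minimizing z comes out by elementary calculus:

$$\frac{d}{dz}\Big[(x-z)^2 + \big(y-(1-z)\big)^2\Big] = -2(x-z) + 2(y-1+z) = 0 \quad\Longrightarrow\quad z = \frac{1+x-y}{2},$$

and z lies in [0,1] whenever x and y do (the critical point is a minimum, the objective being convex in z). This is the formula for c*({0}) used below.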

Now let b be the Brier score and let b*(c) = b(c*). Since c* = c whenever c is a probability, b* agrees with b on the probabilities, and hence is strictly proper on them. Further, b* is proper: for any probability p and credence c, the p-expected value of b*(c) is the p-expected Brier score of c*, which by the propriety of the Brier score is at most as good as the p-expected Brier score of p, i.e., as the p-expected value of b*(p).

We now check that b* is strictly truth-directed. Brier scores are strictly truth-directed. Thus, replacing a credence function with one that is closer to the truth on Ω or on the empty set will improve the b* score. Moreover, a short computation shows that c*({0}) = (1+c({0})−c({1}))/2. It follows that if we tweak c({0}) to move it closer to the truth at some fixed ω ∈ {0, 1}, then c* will be closer to the truth at ω as well, and similarly if we tweak c({1}) to be closer to the truth at ω; in both cases we improve the score by the strict truth-directedness of Brier scores.

Finally, however, note that b* is not strictly proper, and no domination theorem of the sort used in arguments for probabilism holds for it. For consider any credence c that gets the right values on the empty set and on Ω (zero and one, respectively) but fails to be a probability because c({0}) + c({1}) ≠ 1. The b*-score of c is everywhere equal to the b*-score of c*, and in this case c* is a probability.
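Here is a quick numerical check in Python (a sketch restricted, as in the argument above, to credences that assign zero to the empty set and one to Ω, so that only the values on {0} and {1} matter):

```python
def star(c):
    # Project (c({0}), c({1})) onto the line c*({0}) + c*({1}) = 1.
    z = (1 + c[0] - c[1]) / 2
    return (z, 1 - z)

def brier(c, omega):
    truth = (1.0, 0.0) if omega == 0 else (0.0, 1.0)
    return (c[0] - truth[0]) ** 2 + (c[1] - truth[1]) ** 2

def b_star(c, omega):
    return brier(star(c), omega)

c = (0.75, 0.5)  # not a probability: 0.75 + 0.5 != 1
p = star(c)      # (0.625, 0.375): a probability
for omega in (0, 1):
    assert b_star(c, omega) == b_star(p, omega)
# c and its projection p get identical b*-scores at every world: b* is not
# strictly proper, and the weak-dominance argument cannot get a grip on c.
```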

Note that in the example above we don't have quasi-strict propriety either.