Wednesday, January 27, 2021

Nonadditive strictly proper scoring rules and arguments for probabilism

[This post uses the wrong concept of a strictly proper score. See the comments.]

A scoring rule for a credence assignment is a measure of the inaccuracy of the credences: the lower the value, the better.

A proper scoring rule is a scoring rule with the property that for each probabilistically consistent credence assignment P, the expected value according to P of the score of a consistent credence assignment Q is minimized at Q = P. If it is minimized uniquely at Q = P, the scoring rule is said to be strictly proper.

A scoring rule is additive provided that it is the sum of scoring rules each of which depends only on the credence assigned to a single proposition and the truth value of that proposition.
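
For a concrete instance of an additive strictly proper rule, the Brier score works. Here is a minimal numeric check rather than a proof (a sketch of my own, on a hypothetical two-point space, in Python):

    # Brier score on a two-point space {w_1, w_2}: the inaccuracy of credences
    # (c, d) in the two singletons, summed across them (hence additive).
    def brier(c, d, world):
        t1, t2 = (1.0, 0.0) if world == 1 else (0.0, 1.0)
        return (c - t1) ** 2 + (d - t2) ** 2

    p = 0.3  # a consistent credence: p in {w_1}, 1 - p in {w_2}

    def expected_score(c, d):
        return p * brier(c, d, 1) + (1 - p) * brier(c, d, 2)

    grid = [i / 100 for i in range(101)]
    best = min(((c, d) for c in grid for d in grid), key=lambda cd: expected_score(*cd))
    print(best)  # (0.3, 0.7): expected inaccuracy is minimized at p itself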

The formal epistemology literature has a lot of discussion of a strict domination theorem: given an additive strictly proper scoring rule, you will do better to have a credence assignment that is probabilistically consistent. Indeed, for any inconsistent credence assignment, there is another credence assignment that gives a better score in every possible world.

The assumption of strict propriety gets a fair amount of discussion. Not so the assumption of additivity.

It turns out that if you drop additivity, the theorem fails. Indeed, this is trivial. Consider any strictly proper scoring rule s, and modify it to a rule s* that assigns the score −∞ to any inconsistent credence. Then any inconsistent credence receives the best possible score in every possible world. Moreover, since the definition of strict propriety only involves the behavior of the scoring rule as applied to consistent credences, s* is strictly proper if and only if s is. And, of course, s* is not additive.

But of course my rule s* is very much ad hoc: it is gerrymandered to reward inconsistency. Can we make a non-additive scoring rule for which the domination theorem fails that lacks such gerrymandering and is somewhat natural?

I think so. Consider a finite probability space Ω, with n points ω_1, ..., ω_n in it. Now, consider a scoring rule generated as follows.

Say that a simple gamble g on Ω is an assignment of values to the n points. Let G be a set of simple gambles. Imagine an agent who decides which simple gamble g in G to take by the following natural method: she calculates ∑_i P({ω_i})g(ω_i), where P is her credence assignment, and chooses the gamble g that maximizes this sum. If there is a tie, she has some tie-resolution mechanism. Then, we can say that the G-score of her credences is the negative of the utility gained from the gamble she chose. In other words, her G-score at location ω_i is −g(ω_i) where g is a maximally auspicious gamble according to her credences.
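
A minimal sketch of this construction in Python (my illustration; the three-point space and the particular finite gamble set G are hypothetical):

    # G-score: choose the gamble with the highest credence-weighted payoff,
    # then score the negative of that gamble's payoff at the actual world.
    def g_score(credences, gambles, actual):
        # credences[i] is the credence in the singleton {w_i}; ties are broken
        # crudely by whichever maximizer max() happens to return first.
        chosen = max(gambles, key=lambda g: sum(c * x for c, x in zip(credences, g)))
        return -chosen[actual]

    G = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (0.5, 0.5, 0.5)]  # hypothetical gamble set
    print(g_score([0.6, 0.3, 0.1], G, actual=0))  # picks (1, 0, 0); score is -1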

It is easy to see that G-score is a proper score. Moreover, if there are never any ties in choosing the maximally auspicious gamble, the score is strictly proper.

This is a very natural way to generate a score: we look at how well you would do when acting on the credences in the face of a practical decision. But no score generated in this way will satisfy the domination theorem. Here’s why: the scoring rule scores any inconsistent non-negative credence P that is non-zero on some singleton exactly as it scores the consistent credence P* defined by P*(A) = ∑_{ω ∈ A} P({ω}) / ∑_{ω ∈ Ω} P({ω}). Thus, the domination theorem will fail to apply to any scoring rule generated in the above way, since consistent credences are not themselves dominated.

The only thing that remains is to check that there is some natural strictly proper rule that can be generated using the above method. Here’s one. Let G_n be the set of simple gambles that assign to the n points of Ω values that lie in the n-dimensional unit ball. In other words, each simple gamble g ∈ G_n is such that ∑_i (g(ω_i))² ≤ 1.

A bit of easy constrained maximization using Lagrange multipliers shows that if P is a credence assignment on Ω such that P({ω_i}) ≠ 0 for at least one point ω_i ∈ Ω, then there is a unique maximally auspicious gamble g, and it is given by g(ω_j) = P({ω_j}) / (∑_i (P({ω_i}))²)^(1/2). Because of the uniqueness, we have a strictly proper scoring rule.

The G_n-score of a credence assignment P is then s(P, ω_j) = −P({ω_j}) / (∑_i (P({ω_i}))²)^(1/2).
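
A numeric sanity check of the Lagrange-multiplier claim (a sketch under the definitions above, with an illustrative three-point credence): random gambles from the unit ball never beat the closed-form maximizer, and rescaling the credences leaves the chosen gamble unchanged, which is the scale-invariance behind the failure of domination.

    import math, random

    def gn_gamble(P):
        # Closed-form maximizer over the unit ball: g(w_j) = P({w_j}) / ||P||_2.
        norm = math.sqrt(sum(p * p for p in P))
        return [p / norm for p in P]

    P = [0.5, 0.3, 0.2]  # a consistent credence on a three-point space
    best = sum(p * x for p, x in zip(P, gn_gamble(P)))

    for _ in range(10000):
        v = [random.gauss(0, 1) for _ in P]
        scale = random.random() ** (1 / len(P)) / math.sqrt(sum(x * x for x in v))
        candidate = [x * scale for x in v]  # a random gamble inside the unit ball
        assert sum(p * x for p, x in zip(P, candidate)) <= best + 1e-12

    # An inconsistent credence proportional to P picks the very same gamble.
    assert all(abs(a - b) < 1e-12 for a, b in zip(gn_gamble([2 * p for p in P]), gn_gamble(P)))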

This looks fairly natural, and so does the choice of G_n. There is no gerrymandering going on. And yet the domination theorem fails for the G_n-score. (I think any strictly convex set of simple gambles works in place of G_n, actually.)

Thus, absent some good argument for why the G_n-score is a bad way to score credences, it seems that the scoring rule domination argument isn’t persuasive.

More generally, consider any credence-based procedure for deciding between finite sets of gambles that has the following two properties:

  1. The procedure yields a gamble that maximizes expected utility in the case of consistent credences, and

  2. The procedure never recommends a gamble that is dominated by another gamble.

There are such procedures that apply to interesting classes of inconsistent credences and that are nonetheless pretty natural. Given any such procedure, we can extend it arbitrarily to apply to all inconsistent credences, assign a score to a credence assignment as the negative of the value of the selected gamble, and thereby obtain a proper score to which the domination theorem doesn’t apply. And if we make our set of gambles be the n-ball G_n, then the score is strictly proper.

Monday, January 25, 2021

Killing and letting die

  1. It is murder to disconnect a patient who can only survive with a ventilator, without consent and in order to inherit from them.

  2. Every murder is a killing.

  3. So, it is a killing to disconnect a patient who can only survive with a ventilator, without consent and in order to inherit from them.

  4. Whether an act is a killing does not depend on consent or intentions.

  5. So, it is a killing to disconnect a patient who can only survive with a ventilator.

Of course, whether such a disconnection is permissible or not is a further question, since not every killing is wrong (e.g., an accidental killing need not be wrong).

Learning whether p by bringing it about that p

Alice is driving to an appointment she doesn’t care much about. She is, however, curious whether she will arrive on time. To satisfy her curiosity, she stops driving, since she knows that if she stops driving, she won’t arrive on time.

It seems a bit perverse to bring it about that p in order to know whether p. Yet there are cases where people do that.

A straightforward family of cases is very pragmatic. You can only make preparations for something if you know what will happen, so you force a particular thing to happen. For instance, you can only book vacation travel when you know where you will decide to go—so, you decide where to go.

Another family of cases is linked to anxiety. Not knowing whether p can induce a lot of anxiety, and knowing for sure can relieve that anxiety. This is, presumably, one of the reasons why people turn themselves in for crimes: to relieve the anxiety of not knowing whether one will be arrested today, one ensures that one is arrested today.

Another family is scientific. One arranges a laboratory setup in part precisely to know what the experimental setup is like.

But the Alice case is different from all these. In all of the above cases, you seek knowledge whether p for the sake of something other than knowledge whether p: to buy plane tickets, to relieve anxiety, or to learn some other scientific facts.

What seems perverse, then, is to bring it about that p for the sake of knowing whether p, where that knowledge is sought for its own sake (“[t]o satisfy her curiosity”, I said of Alice).

I wonder, now, whether Alice is really being perverse. Maybe it’s just this: there are very few things that we can bring about where there is significant non-instrumental value in knowing them. There is very little value in knowing whether one will arrive on time to the appointment apart from instrumental considerations. Most of the things knowledge of which has significant non-instrumental value are out of our hands: theological, philosophical and scientific facts. But if the knowledge is of very little value, it’s not worth much trouble. If an appointment is of so little value that it’s worth missing in order to know whether one will make it on time, it’s probably not worth going to in the first place!

Wednesday, January 20, 2021

Jan 26 Bios Centre Talk: Defining Murder

On January 26, 2021 at 18:30 GMT / 12:30 PM Central / 1:30 PM Eastern, I will be giving a work-in-progress Zoom talk on Defining Murder at the Bios Centre in London. I will have interesting cases and various questions, but I don't know if I'll have any good answers.

Everyone is welcome, but you need to contact the organizer to sign up: amccarthy@bioscentre.org.

I can jump 100 feet up in the air

Consider a possible world w_1 which is just like the actual world, except in one respect. In w_1, in exactly a minute, I jump up with all my strength. And then consider a possible world w_2 which is just like w_1, but where moments after I leave the ground, a quantum fluctuation causes 99% of the earth’s mass to quantum tunnel far away. As a result, my jump takes me 100 feet in the air. (Then I start floating down, and eventually I die of lack of oxygen as the earth’s atmosphere seeps away.)

Here is something I do in w_2: I jump 100 feet in the air.

Now, from my actually doing something it follows that I was able to do it. Thus, in w_2, I have the ability to jump 100 feet in the air.

When do I have this ability? Presumably at the moment at which I am pushing myself off from the ground. For that is when I am acting. Once I leave the ground, the rest of the jump is up to air friction and gravity. So my ability to jump 100 feet in the air is something I have in w_2 prior to the catastrophic quantum fluctuation.

But w_1 is just like w_2 prior to that fluctuation. So, in w_1 I have the ability to jump 100 feet in the air. But whatever ability to jump I have in w_1 at the moment of jumping is one that I already had before I decided to jump. And before the decision to jump, world w_1 is just like the actual world. So in the actual world, I have the ability to jump 100 feet in the air.

Of course, my success in jumping 100 feet depends on quantum events turning out a certain way. But so does my success in jumping one foot in the air, and I would surely say that I have the ability to jump one foot. The only principled difference is that in the one foot case the quantum events are very likely to turn out to be cooperative.

The conclusion is paradoxical. What are we to make of it? I think it’s this. In ordinary language, if something is really unlikely, we say it’s impossible. Thus, we say that it’s impossible for me to beat Kasparov at chess. Strictly speaking, however, it’s quite possible, just very unlikely: there is enough randomness in my very poor chess play that I could, in principle, make the kinds of moves Deep Blue made when it beat him. Similarly, when my ability to do something has extremely low reliability, we simply say that I do not have the ability.

One might think that the question of whether one is able to do something is really important for questions of moral responsibility. But if I am right in the above, then it’s not. Imagine that I could avert some tragedy only by jumping 100 feet in the air. I am no more responsible for failing to avert that tragedy than if the only way to avert it would be by squaring a circle. Yet I can jump 100 feet in the air, while no one can square a circle.

It seems, thus, that what matters for moral responsibility is not so much the answer to the question of whether one can do something, but rather answers to questions like:

  1. How reliably can one do it?

  2. How reliably does one think (or justifiably think or know) one can do it?

  3. What would be the cost of doing it?

Tuesday, January 19, 2021

Sheep in sheep's clothing

Suppose you know the following facts. In County X, about 40% of sheep wear sheep costumes. There is also the occasional trickster who puts a sheep costume on a dog, but that’s really rare: so rare that 99.9% of animals that look like sheep are sheep, most of them being ordinary sheep but a large minority being sheep dressed up as sheep.

You know you’re in County X, and you come across a field with an animal that looks like a sheep. There are three possibilities:

  1. It’s an ordinary sheep. Probability: 59.94%.

  2. It’s a sheep in sheep costume. Probability: 39.96%.

  3. It’s some other animal in sheep costume. Probability: 0.10%.

You’re justified in believing that (1) or (2) is the case, i.e., that the animal is a sheep. And if it turns out that you’re right, then I take it you know that it’s a sheep. You know this regardless of whether it’s an ordinary sheep or a sheep in sheep costume.

But now consider County Y which is much more like the real world. You know that in County Y, only about 0.1% of sheep wear sheep costumes. And there is the occasional trickster who puts a sheep costume on a dog. In County Y, once again, 99.9% of animals that look like sheep are sheep, and 99.9% of those are ordinary sheep without sheep’s costumes.

Now you know you’re in County Y and you come across an animal that looks like a sheep. You have three possibilities again, but with different probabilities (see the sketch after this list):

  1. It’s an ordinary sheep. Probability: 99.80%.

  2. It’s a sheep in sheep costume. Probability: 0.10%.

  3. It’s some other animal in sheep costume. Probability: 0.10%.
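
The percentages in both lists are just the stated rates multiplied together; here is a quick check in Python (a sketch):

    def sheep_probs(costume_rate, p_sheep_given_looks=0.999):
        ordinary = p_sheep_given_looks * (1 - costume_rate)  # sheep, no costume
        costumed = p_sheep_given_looks * costume_rate        # sheep in sheep costume
        imposter = 1 - p_sheep_given_looks                   # non-sheep in costume
        return ordinary, costumed, imposter

    print(sheep_probs(0.4))    # County X: approx. (0.5994, 0.3996, 0.0010)
    print(sheep_probs(0.001))  # County Y: approx. (0.9980, 0.0010, 0.0010)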

In any case, the probability that it’s a sheep of some sort is 99.9%. It seems to me that just as in County X, in County Y you know that what you’re facing is a sheep regardless of whether it’s an ordinary sheep or a sheep in sheep costume.

But if what you’re facing is a sheep dressed up as a sheep, then you are in something very much like a standard Gettier case. So in some standard Gettier cases, if you reason probabilistically, it is possible to know.

Friday, January 15, 2021

Defining supererogation

Sometimes supererogation is defined by a conjunction of a positive evaluation of performing the action and a denial of a negative evaluation of non-performance. For instance:

  1. The action is good to do but not bad not to do.

  2. The action is good to do but not wrong not to do.

  3. The action is praiseworthy but omitting it is not blameworthy.

It seems to me that all such definitions fail in cases where there are two or more actions each of which satisfies one’s obligations.

Suppose a grenade has been thrown at a group of people that includes me. There is a heavy blanket nearby. Throwing the blanket on the grenade is unlikely to save lives but has some chance of doing so, while jumping on the grenade is much more likely to save multiple lives. I am obligated to do one of the two things (there is no time to do both, of course).

I throw the blanket on the grenade. In doing so, I do something good and praiseworthy. And omission of throwing the blanket is neither bad, nor wrong, nor blameworthy, since it is compatible with my jumping on the grenade. But clearly throwing the blanket on the grenade is not supererogatory!

One might object that we should be comparing the throwing of the blanket to not doing anything at all. And if we do that, then the action of throwing the blanket does not satisfy the definitions of supererogation: for it is good to throw the blanket, but bad not to do anything at all. However, if that’s how we read (1)–(3), then jumping on the grenade isn’t supererogatory either. For while it is good to jump on the grenade, to do nothing at all is bad, wrong and blameworthy.

It is clear what goes wrong here. In a case where two or more actions satisfy one’s obligations, it can’t be that all the actions are supererogatory. The supererogatory action must go above the call of duty. It seems we need a comparative element, such as:

  4. Action A is better or more praiseworthy than some alternative that satisfies one’s obligations.

I think (4) is not good enough. For it misses the altruistic aspect of the supererogatory. Consider a case where I can choose to make some sacrifice for you to bestow some good on you, and I am morally required to make some minimal sacrifice s_0. However, there is a non-linear relationship between the degree of sacrifice and the good bestowed, such that the good bestowed increases asymptotically, approaching some value v, while the degree of sacrifice can increase without bound. Once the amount of sacrifice is increased too much, the action becomes bad: it becomes imprudent and contrary to one’s obligations to oneself. But as the amount of sacrifice is increased, presumably what eventually starts happening is that before the action becomes actually bad, it simply ceases to be praiseworthy.

Let s_1 indicate such a disproportionate degree of sacrifice: s_1 is not praiseworthy but neither is it blameworthy or contrary to one’s obligations. Then, s_0—the minimal amount of sacrifice—becomes supererogatory by (4). For s_0 is praiseworthy, since it is praiseworthy to make a morally required sacrifice, and hence it is more praiseworthy than s_1, since s_1 is not praiseworthy. But s_1 satisfies one’s obligations. So, the minimal degree of permissible sacrifice, s_0, satisfies the definition of the supererogatory. But that’s surely not right.

I don’t know how to fix (4).

Thursday, January 14, 2021

Probabilistic reasoning and disjunctive Gettier cases

A disjunctive Gettier case looks like this. You have a justified belief in p, you have no reason to believe q, and you justifiedly believe the disjunction p or q. But it turns out that p is false and q is true. Then you have a justified true belief in p or q, but that belief doesn’t seem to be knowledge.

Some philosophers, like myself, accept Lottery Knowledge: we think that in a sufficiently large lottery with sufficiently few winning tickets, for any ticket n that in fact won’t win, one knows that n won’t win on the probabilistic grounds that it is very unlikely to win.

Interestingly, assuming Lottery Knowledge, in at least some disjunctive Gettier cases one has knowledge of the disjunction. For suppose that 99.8% is a sufficient probability for knowledge in lottery cases. Consider a lottery with 1000 tickets, numbered 1–1000, and one winner. I will then have a justified belief that the winning ticket is among tickets 1 through 998 (inclusive). Let this be p. Suppose that unbeknownst to me, p is false and the winning ticket is number 999. Let q be the proposition that the winning ticket is number 999.

Then I have the structure of a disjunctive Gettier case: I have a justified belief in p, I have no reason to believe q, and I justifiedly believe p or q.

Now given Lottery Knowledge, I know that ticket 1000 doesn’t win. But p or q is equivalent to the claim that ticket 1000 doesn’t win, so I know p or q.

Thus, given Lottery Knowledge, I can have a case with the structure of a disjunctive Gettier case and yet know.

Note that usually one thinks in disjunctive Gettier cases that one’s belief in the true disjunction is inferred from one’s belief in the false disjunct p. But that’s not actually how I would think about such a lottery. My credence in the false disjunct p is 0.998. But my credence in the disjunction is higher: it’s 0.999. So I didn’t actually derive the disjunction from the disjunct.
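
For what it’s worth, the arithmetic (a sketch, assuming a fair 1000-ticket lottery with one winner):

    tickets = 1000
    p = 998 / tickets   # credence that the winner is among tickets 1-998
    q = 1 / tickets     # credence that the winner is ticket 999
    print(p, q, p + q)  # 0.998, 0.001, and (up to rounding) 0.999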

So, someone who thinks probabilistically can have knowledge in at least some disjunctive Gettier cases.

Even more interestingly, the point seems to carry over to more typical Gettier cases that are not probabilistic in nature. Consider, for instance, the standard disjunctive Gettier case. I have good evidence that Jones owns a Ford. I have no idea where Brown is. But since I accept that Jones owns a Ford, I accept that Jones owns a Ford or Brown is in Barcelona. It turns out that Jones doesn’t own a Ford, but Brown is in Barcelona. So I have a justified true belief that Jones owns a Ford or Brown is in Barcelona, but it’s not knowledge.

However, if I think about things probabilistically, my belief in the disjunction is not simply derived from my belief that Jones owns a Ford. For my credence in the disjunction is higher than my credence that Jones owns a Ford: after all, no matter how unlikely it is that Brown is in Barcelona, it is still more likely that Jones owns a Ford or Brown is in Barcelona than that Jones owns a Ford.

So it seems that I have a good inference that Jones owns a Ford or Brown is in Barcelona from the high probability of the disjunction. Of course, a good deal of the probability of the disjunction comes from the probability of the false disjunct. However, that doesn’t rule out knowledge if there is Lottery Knowledge: after all, a good deal of the probability of the disjunction in our lottery case could have been seen as coming from the false disjunct that the winning number is between 1 and 998.

Perhaps the difference is this. In the lottery case, there were alternate paths to the high probability of the true disjunction. As I told the story, it seemed like most of the probability that the winning ticket was either from 1 to 998 (p) or equal to 999 (q) came from the first disjunct. But the disjunction is equivalent to many other similar disjunctions, such as that the ticket is in the set {2, 3, ..., 999} or is equal to 1, and in the case of the latter disjunction, the high probability disjunct is true. But in the Ford/Barcelona case, there doesn’t seem to be an alternate path to the high probability of the disjunction that doesn’t depend on the high probability of the false disjunct.

But it’s not clear to me that this difference makes for a difference between knowledge and lack of knowledge.

And it’s not clear that one can’t rework the Ford/Barcelona case to make it more like the lottery case. Let’s consider one way to fill out the story about how my mistake in thinking Jones owns a Ford came about. I saw Jones driving a Ford F-150 at a few minutes past midnight yesterday, and I knew that he owned that Ford because I drove him to the car dealership when he bought it five years ago. Unbeknownst to me, Jones sold the Ford yesterday and bought a Mazda. Now, it is standard practice that when people buy cars, they eventually sell them: few people keep owning the same car for life.

So, my belief that Jones owned a Ford came from my knowledge that Jones owned a Ford early in the morning yesterday and my false belief that he didn’t sell it later yesterday or today. But now we are in the realm of a lottery case. For from my point of view, the day on which Jones sells the car is something random. It’s unlikely that that day was yesterday, because there are so many other days on which he could sell the car: tomorrow, the day after tomorrow, and so on, as well as the low probability option of his never selling it.

Now consider this giant exclusive disjunction, which I know to be true in light of my knowledge that Jones hadn’t yet sold the Ford as of early morning yesterday.

  1. Jones sold the Ford yesterday and Brown is not in Barcelona, or Jones sold the Ford today and Brown is not in Barcelona, or Jones is now selling the Ford and Brown is not in Barcelona, or Jones will sell the Ford later today and Brown is not in Barcelona, or Jones will sell the Ford tomorrow and Brown is not in Barcelona, or … (ad infinitum), or Jones will never sell the Ford and Brown is not in Barcelona, or Brown is in Barcelona.

Each disjunct in (1) is of low probability, but I know some disjunct is true. This is now very much like a lottery case. Its being a lottery case means that I should—assuming the probabilities are good enough—be able to know that one of the disjuncts other than the first two is true. But if I can know that one of the disjuncts other than the first two is true, then I should be able to know—again, assuming the probabilities are good enough—that Jones hasn’t sold the Ford yet or Brown is in Barcelona. And if I can know that, then there should be no problem about my knowing that Jones owns a Ford or Brown is in Barcelona.

So, it’s looking like I can have knowledge in typical disjunctive Gettier cases if I reason probabilistically.

Wednesday, January 13, 2021

Epistemology and the presumption of (im)permissibility

Normally, our overt behavior has the presumption of moral permissibility: an action is morally permissible unless there is some specific reason why it would be morally impermissible.

Oddly, this is not so in epistemology. Our doxastic behavior seems to come along with a presumption of epistemic impermissibility. A belief or inference is only justified when there is a specific reason for that justification.

In ethics, there are two main ways of losing the presumption of moral permissibility in an area of activity.

The first is that actions falling in that area are prima facie bad, and hence a special justification is needed for them. Violence is an example: a violent action is by default impermissible, unless we have a special reason that makes it permissible. The second family of cases is areas of action that are dangerous. When we go into a nuclear power facility or a functioning temple, we are surrounded by danger—physical or religious—and we should refrain from actions unless we have special reason to think they are safe.

Belief isn’t prima facie bad. But maybe it is prima facie dangerous? But the presumption of impermissibility is not limited to some special areas. There indeed are dangerous areas of our doxastic lives: having the wrong religious beliefs can seriously damage us psychologically and spiritually while having the wrong beliefs about nutrition and medicine can kill us. But there seem to be safe areas of our doxastic lives: whatever I believe about the last digit in the number of hairs on my head or about the generalized continuum hypothesis seems quite safe. Yet, having the unevidenced belief that the last digit in the number of hairs on my head is three is just as impermissible as having the unevidenced belief that milk cures cancer.

Perhaps it is simply that moral and epistemic normativity are not as analogous as they have seemed to some.

But there is another option. Perhaps, despite what I said, our doxastic lives are always dangerous. Here is one way to suggest this. Perhaps truth is sacred, and so dealing with truth is dangerous just as it is dangerous to be in a temple. We need reason to think that the rituals we perform are right when we are in a temple—we should not proceed by whim or by trial and error in religion—and perhaps similarly we need reasons to think that our beliefs are true, precisely because our doxastic lives always, no matter how “secular” the content, concern the sacred. Our beliefs may be practically safe, but the category of the sacred always implicates a danger, and hence a presumption of impermissibility.

I can think of two ways our doxastic lives could always concern the sacred:

  1. God is truth.

  2. All truth is about God: every truth is contingent or necessary; contingent truths tell us about what God did or permitted; necessary truths are all grounded in the nature of God.

All this also fits with an area of our moral lives where there is a presumption of impermissibility: assertion. One should only make assertions when one has reason to think they are true. Otherwise, one is lying or engaging in BS. Yet assertion is not always dangerous in any practical sense of “dangerous”: making unwarranted assertions about the number of hairs on one’s head or the generalized continuum hypothesis is pretty safe practically speaking. But perhaps assertion also concerns the truth, which is something sacred, and where we are dealing with the sacred, there we have spiritual danger and a presumption of impermissibility.

Tuesday, January 12, 2021

More on Bostock

In Bostock, the Supreme Court held that a refusal to hire, say, a man who is attracted to men is discrimination on the basis of sex if one wouldn’t refuse to hire a woman who is attracted to men.

The idea is that a rule is discriminatory if it precludes a man from doing something that a woman is permitted to do or vice versa.

This would have the curious consequence that various laws that seem on their face to be non-discriminatory would nonetheless be discriminatory. Here are three examples:

  • Laws against perjury and against lying to law enforcement prohibit, in certain circumstances, a man from saying “I am a woman”, but do not prohibit, in the same circumstances, a woman from saying the very same words.

  • Laws against incitement of violence will often prohibit a male speaker from yelling to a crowd: “If I am a man, go riot!” but will not prohibit a female speaker from yelling the very same words to the same crowd.

  • Libel laws make me liable for asserting “Either colleague x is a plagiarist or I am a woman”, when I know x to be innocent, but do not make my female colleagues liable for saying the very same words under the same circumstances.

These cases show that it is quite difficult to define discrimination.

Posteriors and subjective Bayesianism

Rough question: How much of a constraint does subjective Bayesianism put on the posteriors?

Let’s make the question precise. Suppose I started with some consistent and regular prior probabilities on some countable sample space, gathered some evidence E (a non-empty subset of the sample space), applied Bayesian conditionalization, and obtained a posterior probability distribution P_E.

Precise question: What constraints do the above claims put on P_E?

Well, here are some constraints that clearly follow from the above story:

  1. P_E is a consistent probability distribution. (Bayesian conditionalization preserves the axioms of probability.)

  2. P_E(E) = 1. (Obvious.)

  3. If A ∩ E is non-empty, then P_E(A) > 0. (Follows from the regularity of the priors.)

And it turns out that the above constraints are the only ones that my initial story places on P_E:

  4. Let P_E be any function satisfying (1)–(3). Then there is a consistent and regular probability function P such that P_E(A) = P(A|E) for all A.

Proof: Either E is all of the sample space or not. If E is all of the sample space, then let P = P_E and we are done. Otherwise, let Q be some probability function that assigns a non-zero value to every point outside E and zero to every point of E. Let P = (1/2)P_E + (1/2)Q. Then P is regular, since by (3) P_E is positive at each point of E and Q is positive at each point outside E; and conditionalizing P on E gives back P_E, since P(A ∩ E) = (1/2)P_E(A) and P(E) = 1/2.
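
Here is a numeric check of the construction (a sketch on a hypothetical four-point space, using exact rational arithmetic):

    from fractions import Fraction as F
    from itertools import chain, combinations

    omega = ['a', 'b', 'c', 'd']
    E = {'a', 'b'}

    # A posterior P_E satisfying (1)-(3): concentrated on E, positive on each point of E.
    P_E = {'a': F(1, 3), 'b': F(2, 3), 'c': F(0), 'd': F(0)}
    # Q: zero on E, positive at every point outside E.
    Q = {'a': F(0), 'b': F(0), 'c': F(1, 2), 'd': F(1, 2)}

    P = {w: (P_E[w] + Q[w]) / 2 for w in omega}  # the prior (1/2)P_E + (1/2)Q
    assert all(P[w] > 0 for w in omega)          # P is regular

    def prob(dist, A):
        return sum(dist[w] for w in A)

    # Conditionalizing the prior P on E recovers P_E on every non-empty event A.
    for A in chain.from_iterable(combinations(omega, r) for r in range(1, 5)):
        assert prob(P, set(A) & E) / prob(P, E) == prob(P_E, set(A))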

Thus, (1)–(3) are the only constraints subjective Bayesianism places on our posteriors.

I knew that subjective Bayesianism placed very little in the way of constraint on our posteriors, but I didn’t realize just how little.

Change without a plurality of times

Assume presentism. Then Aristotle’s definition of change as the actuality of a potentiality seems to have a serious logical problem. For consider a precise statement of that definition:

  1. There is change just in case there is a potentiality P and an actuality A and A is the actuality of P.

Given presentism, quantification has to be over present items. Thus, the potentiality P and the actuality A are both present items (presumably, accidents of some substance). But if the actuality and potentiality can be simultaneous, then Aristotle’s definition of change does not logically require multiple times: one can have a moment t at which there is an actuality A of a potentiality P, and t could be the only time at which the underlying substance exists. But it seems obvious that if something changes, it exists at more than one time.

One way out of this problem is to deny presentism. I would like that, but Aristotle was probably a presentist.

A second way out is to be careful with tensing:

  2. There is change just in case there was a potentiality P and there is an actuality A and A is the actuality of P.

This makes “being the actuality of” a cross-time relation. Cross-time relations are awkward for a presentist, but probably unavoidable anyway, so this isn’t so terrible. However, there are other problems with (2). First, it seems that tense depends on time, and for Aristotle, time depends on change, so (2) becomes circular. Second, if we can help ourselves to tense, we can just define change as being in a state in which one previously was not.

I want to suggest a more radical way out of the problem for (1). This more radical way starts by embracing the idea that a substance can change even if it exists only at one time. One way to motivate that is to think of Newtonian physics. Suppose that the universe consists of a number of particles that come into existence at time t_0. We may further suppose the state of the Newtonian universe at times after t_0 is deterministically caused by the state at t_0 (barring things like Norton’s dome). But this is only true if the state of the universe at t_0 includes the momenta of the particles, some of which we can assume to be initially non-zero. In other words, the fact about what the momenta are has to be a fact about what the universe is like at t_0, in the sense that even if God annihilated the universe right after t_0, it would still be true that the particles had the momenta at t_0 that they do. Thus, having a non-zero momentum at a time does not require existing at other times. But if one has non-zero momentum, then one is in motion. Hence, being in motion does not require existing at more than one time.
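
A toy illustration of the physics here (my own sketch, not anything in Aristotle): in a Newtonian simulation, the state at t_0 must include the momenta, since position alone does not fix the future, while position plus momentum does.

    # Free particles: the state is (positions, momenta), and momentum alone
    # drives the subsequent change of position.
    def evolve(positions, momenta, masses, dt=0.01, steps=100):
        for _ in range(steps):
            positions = [x + (p / m) * dt for x, p, m in zip(positions, momenta, masses)]
        return positions

    # Same position, different momenta at t_0: the futures come apart, so the
    # momentum is already a fact about the universe at t_0.
    assert evolve([0.0], [1.0], [1.0]) != evolve([0.0], [2.0], [1.0])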

This sounds quite paradoxical, but I think it makes sense if we think of motion as that which explains the succession of states rather than as that which arises from the succession of states.

Next, let’s slightly tweak the English translation of Aristotle’s definition of change:

  3. Change is the actualizing of potentiality.

One can be actualizing a potentiality without ever being in a state of having actualized it. Imagine a substance that is falling, and thus on Aristotle’s account in the process of actualizing the potentiality for being in the center of the universe, and yet which never reaches the center of the universe. At every moment of its existence, that substance is striving to be in the center. That striving, that actualizing of its potential, is what makes it be in motion. It would be in motion even if it only existed for an instant.

One cannot, I take it, have actualized a potentiality while still having the potentiality. But one can be actualizing it while still having it. One is actualizing it until one has actualized it, and once one has actualized it, one is no longer actualizing it.

Granted, on this view, change does not entail a plurality of times. It is possible to have a changing universe that exists only for an instant. This complicates the Aristotelian project of grounding time in change: change is not sufficient for time. Nor does Aristotle say it is. He says that time is a kind of number for change. But a single change may not be enough for number (Aristotle thought that one is not a number: number, for him, requires plurality). Thus, the single-moment universe may have change, but not enough change to have time on Aristotle’s view.

Monday, January 11, 2021

Change and potentiality

Aristotle defines motion or, more generally, change as the actuality of potentiality.

Imagine a helicopter hovering in one location, x. Its being at the same location x at time t_2 as it was at time t_1 is an actualization of its potentiality at t_1: namely, its potentiality to keep itself hovering in the same place by counteracting the force of gravity. Thus, by Aristotle’s definition it seems that the helicopter’s motionless hovering is motion.

Perhaps, though, we need to distinguish between potentiality and power. The helicopter, unlike a rock, has a power to stay in one place in mid-air. But neither the helicopter nor the rock has a potentiality to stay in one place, because a potentiality is necessarily for a state that does not yet obtain.

This suggests a view of potentiality like the following:

  1. An object a has a potentiality for a state F just in case the object a has a possibility of being in state F and a is not in state F.

Here, “possibility” is used in the modern sense as not excluding actuality.

The helicopter has a possibility of being at location x in the future, but since it is already at location x, that possibility is not a potentiality.

Now, let’s go back to Aristotle’s definition. When are the actuality and potentiality predicated? Given that, as we saw, a necessary condition for a potentiality is lack of the corresponding actuality, it seems they cannot be predicated at the same time. This suggests that the Aristotelian account is:

  2. An object changes provided it has a potentiality at one time and at some other time actualizes that potentiality.

But now consider the simple at-at theory of change.

  3. An object changes provided that it has a state at one time and lacks it at another.

We might call (2) “Aristotelian change” and (3) “at-at change”.

The following is trivially true:

  4. Aristotelian change entails at-at change.

But what is curious is that the converse also seems to be true:

  5. At-at change entails Aristotelian change.

For suppose that an object a is in state F at one time and not in state F at another. Swapping F and non-F if needed, we may assume for simplicity that it is in state non-F at the earlier time. Let t_1 be the earlier time and t_2 the later. Since the object will later be in state F, at t_1 it has a possibility for being in state F. That possibility is a potentiality by (1). And at t_2 that possibility is realized and hence is actual. Thus, at one time a has a potentiality for F and at another that potentiality is actualized. Hence, we have Aristotelian change.

So:

  6. Necessarily, at-at change occurs if and only if Aristotelian change occurs.

So what does the Aristotelian account add?

Perhaps, though, we might say that (1) is too simplistic an account of potentiality. Perhaps not every unrealized possibility is a potentiality, but only an unrealized internally-grounded possibility. For instance, I have an internally-grounded possibility of standing up. But I do not have an internally-grounded possibility of instantly doubling in mass: rather, this possibility is grounded in the power of God.

On this view, however, the Aristotelian account of change appears to be false. For suppose that I have a possibility for a non-actual state F, but that possibility is not internally-grounded. Then if that possibility comes to be realized, clearly I have changed. Thus, if God miraculously doubles my mass, I have grown more massive, and that’s a change. But that change isn’t a realization of an internally-grounded possibility.

One can escape this objection by insisting that every possibility for an object has to be internally-grounded. If so, then the Aristotelian account of change applies precisely to the same cases as the at-at account does, once again, but it adds a richer claim that change is always related to an internally-grounded possibility.

Thursday, January 7, 2021

Two kinds of moral relativism

A moral relativist has a fundamental choice whether to define moral concepts in terms of moral beliefs or non-doxastic moral attitudes such as disapproval.

In my previous post, I argued that defining moral concepts in terms of moral beliefs is logically unacceptable.

I now want to suggest that neither option is really very appealing. Consider first this case:

  1. Bob believes he ought to turn Carl in for being a runaway slave. But his emotions and attitudes do not match that belief. He hides Carl and feels morally good about hiding Carl despite his belief. (Bob may or may not be like Huck Finn.)

A relativist who defines morality in terms of beliefs has to say that Bob is doing wrong in hiding Carl. That seems mistaken. It seems that mere belief is less important than actual attitudes. Thus, if something is to define morality for Bob, it is his attitudes, not his mere beliefs.

So far, we have support for a relativist’s defining moral concepts in terms of non-doxastic moral attitudes. But now consider:

  2. Alice thinks of herself as a progressive, and thinks that racism is wrong. Nonetheless, her moral attitudes do not evince genuine disapproval of racist behavior, say when she is with friends who tell racist jokes.

If we define right and wrong in terms of non-doxastic moral attitudes, then our implicit biases unacceptably affect what is morally right and wrong, so that racist behavior turns out to be permissible for Alice, her beliefs to the contrary notwithstanding.

So, neither approach is satisfactory.

A vicious circularity in one kind of moral relativism

It’s just occurred to me that simple moral relativism on its face just makes no sense as a metaethical position. It holds:

  1. What it is for an action to be morally required is that one believes the action to be morally required.

But here we have an account of a property, namely moral requirement, where the account makes use of that very property.

Here’s another way to put the point. The moral relativist presumably also accepts:

  2. What it is for an action to be morally forbidden is that one believes the action to be morally forbidden.

Now, claim (1) says the same thing about the morally required as (2) says about the morally forbidden. Thus, (1) cannot be a correct account of the morally required, since the morally required and the morally forbidden are different. A correct account of X cannot say about X the same thing that a correct account of Y says about Y when X and Y are different!

In other words, the morally relativist metaethicist needs some replacement for “believes the action to be morally required/forbidden” in (1) and (2) that does not employ the concepts of the morally required and forbidden.

Related point: Here is a way to see that (1) is not the right definition of moral requirement. Suppose I am wrong about what I believe, and so I believe that I believe that A is morally required, but in fact I don’t believe that A is morally required. (Perhaps I would like to be the sort of person who believes that A is morally required, and by wishful thinking I come to believe that I believe it, but in fact my actions belie my alleged belief, and I don’t actually believe that A is morally required.) But if to be morally required is defined as to be believed to be morally required, then believing that A is believed to be morally required is believing that A is morally required. So, I cannot have a case where I am wrong in believing that I believe that A is morally required. But clearly I am fallible in my introspection!

A better move, then, for the relativist seems to be to replace beliefs with other attitudes, such as:

  3. What it is for an action to be morally required is that one have an attitude of moral disapproval towards refraining from the action.

On this version of relativism, our moral beliefs can be incorrect. For it is quite possible for our attitudes of moral disapproval to fail to match our moral beliefs. And this is especially true if (3) is the right account of moral requirement. For we can easily be self-deceived about whether in fact we exemplify an attitude of moral disapproval, and hence we can be self-deceived about whether the action is wrong. The moral relativist now owes us an account of moral disapproval that does not depend on first order moral concepts, and that’s tough, but at least we don’t have a vicious circularity like in (1).

Wednesday, January 6, 2021

Four-bit microcomputer trainer in Scratch

When I was a kid, I had a Radio Shack Microcomputer Trainer. This device was programmable in machine code, and was a 4-bit system with 112 nibbles(!) of RAM. It actually ran as a virtual machine on a more powerful TMS-1x00 4-bit processor. These days, kids learn coding with much higher-level languages than machine code, including graphical languages like Scratch. So I had some fun over the break and bridged the gap between then and now by making an emulator (or maybe more precisely a simulator) of the trainer in Scratch. For computations (though probably not for I/O), it runs in the browser faster than the original did, according to my benchmark (shift-click the flag to get Turbo Mode, though).

[Scratch link.]

[Instructables link with usage instructions.]