Friday, August 30, 2019

Credence and belief

For years, I’ve been inclining towards the view that belief is just high credence, but this morning the following argument is swaying me away from this:

  1. False belief is an evil.

  2. High credence in a falsehood is not an evil.

  3. So, high credence is not belief.

I don’t have a great argument for (1), but it sounds true to me. As for (2), my argument is this: There is no evil in having the right priors, but having the right priors implies having lots of high credences in falsehoods.

Maybe I should abandon (1) instead?

Thursday, August 29, 2019

The unavoidability of misleading evidence

Three definitional assumptions:

  1. E is only evidence if there is some hypothesis H to which E makes an evidential difference, i.e., P(H|E)≠P(H).

  2. E is incomplete if and only if it is evidence such that there is a hypothesis H such that 0 < P(H|E)<1, i.e., E doesn’t make everything certain.

  3. E is misleading with respect to a hypothesis H if and only if either H is true and E is evidence against H (i.e., P(H|E)<P(H)) or H is false and E is evidence for H (i.e., P(H|E)>P(H)).

Then:

  1. Every piece of incomplete evidence is misleading (with respect to some hypothesis).

[Proof: Suppose E is incomplete evidence. Either E is or is not true. If it is not true, it is misleading, since it raises the probability of the falsehood E to one (equivalently, it lowers the probability of the true hypothesis ∼E to zero). So, suppose that E is true. Let H1 be a hypothesis such that 0 < P(H1|E)<1. Replacing H1 by its negation if necessary, we can assume H1 is true. Note that the fact that E is evidence implies that 0 < P(E)<1. Let H be the disjunctive hypothesis: ∼E or (H1&E). This is true as the second disjunct is true. Now, note that P(H1&E)<P(E) as P(H1|E)<1. Thus, (1 − P(E))P(H1&E)<(1 − P(E))P(E). Adding P(E)P(H1&E) to both sides, P(H1&E)<P(E)P(H1&E)+(1 − P(E))P(E). Dividing by P(E): P(H1|E)=P(H1&E)/P(E)<P(H1&E)+(1 − P(E)) = P(H1&E)+P(∼E)=P(H). Since P(H|E)=P(H1|E) (conditional on E, the disjunct ∼E drops out), E is evidence against H even though H is true.]
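To make the construction concrete, here is a minimal numeric check (my own illustration, not part of the proof): a four-world toy space in which a true but incomplete piece of evidence E is evidence against the true disjunctive hypothesis H = ∼E or (H1&E).

```python
from fractions import Fraction as F

# Toy space: four equiprobable worlds, coded as (E true?, H1 true?).
worlds = [(True, True), (True, False), (False, True), (False, False)]
prob = {w: F(1, 4) for w in worlds}  # uniform prior

def P(pred):
    return sum(prob[w] for w in worlds if pred(w))

E  = lambda w: w[0]
H1 = lambda w: w[1]
H  = lambda w: (not E(w)) or (H1(w) and E(w))  # ~E or (H1 & E)

P_H         = P(H)                                # 3/4
P_H_given_E = P(lambda w: H(w) and E(w)) / P(E)   # 1/2

print(P_H, P_H_given_E)
assert P_H_given_E < P_H  # E is evidence against H...
# ...even though H is true in the actual world (E, H1) = (True, True).
```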

In particular, we should not take misleadingness of evidence to be an evil. Misleadingness of evidence is a normal part of reasoning with incomplete information.

Wednesday, August 28, 2019

A hybrid view of laws

The big divide about laws of nature is whether the laws are pushy or descriptive.

It seems to me that a plausible view is that some are pushy and some are descriptive. This is what I think one gets on an Aristotelian view: there are laws describing which mutually harmonious natures are instantiated, and the instantiated natures then push stuff around in lawlike ways. For instance, there may be a descriptive law that says that all particles have natures of the quantum sort (rather than, say, of the Newtonian sort), and there are pushy laws that, say, prohibit two electrons from sharing the same state.

Dutch Books and update rationality

It is often said that if you depart from correct Bayesian update, you are subject to a diachronic Dutch Book—a sequence of bets each of which you will be rationally required to accept but which jointly guarantee a loss—and this is supposed to indicate a lack of rationality. That may be, but I want to point out that the lack of rationality is not constituted by being subject to a Dutch Book: being subject to a Dutch Book is merely a symptom. I expect most people working on this stuff know this, but perhaps it’s worth giving an explicit argument for it.

Here is why. Alice, Bob and Carl are observing a coin that is either double-headed (D) or fair (F). Their prior probabilities for the two hypotheses are 1/2 each, and they have the reasonable and consistent derived priors: they assign probability 3/4 to heads showing up, and so on. The coin is flipped and the result is observed. If the coin lands tails, all three correctly update their probability for D to 0. If the coin lands heads, Alice, Bob and Carl each follow a different rule for updating their credence for D. Alice updates to 2/3 in accordance with Bayes’ theorem. Bob updates to 3/4 as that intuitively seems right to him. Carl, on the other hand, initiates a process in his brain which randomly updates his credence to a uniformly chosen value between 1/2 and 1.

Alice is not subject to a Dutch Book.

Bob is.

But Carl, once again, is not. [Proof: For any betting book, there is a non-zero chance that Carl would be rationally permitted to respond to that book in a way in which it would be rationally permitted for Alice to respond. For Carl and Alice differ in their credences only in post-toss bets dependent on D in the special case that the first toss is heads, and the direction in which they differ is random: Carl has a non-zero chance of having a lower credence than Alice in D at this point and a non-zero chance of having a higher one. If a bet is rationally permissible to take at Alice’s credence of 2/3, then either (a) it is permissible to take at all credences lower than 2/3, or (b) it is permissible to take at all credences higher than 2/3, since the expected payoffs are linear functions of the credence. But there is a non-zero chance that Carl’s credence is lower than Alice’s and a non-zero chance that it is higher. Thus, there is a non-zero chance that Carl can permissibly take the bet, if Alice can permissibly take the bet. And the same argument applies if Alice can permissibly refuse the bet.]

However, Carl is not more rational than Bob, despite not being subject to a Dutch Book (thanks to his unpredictability). Hence, being subject to a Dutch Book is only a symptom of irrationality, not constitutive of it.
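For concreteness, here is one standard Lewis-style book against Bob (my own construction; the post does not spell one out). Each of the three bets is fair by Bob’s lights at the time it is offered, yet he loses in every possible outcome:

```python
from fractions import Fraction as F

# Bob at t0: P(D) = 1/2, P(heads) = 3/4, P(D | heads) = 2/3.
# After heads, Bob's credence in D becomes 3/4 instead of 2/3.
def bob_net(result, coin):
    net = F(0)
    # Bet 1 (t0): Bob sells for 2/3 a ticket paying $1 if D,
    # called off (price refunded) if the coin lands tails.
    # Fair for Bob: (3/4)(2/3) - P(D) = 0.
    if result == 'heads':
        net += F(2, 3) - (1 if coin == 'D' else 0)
    # Bet 2 (t0): Bob sells for 1/24 a ticket paying $1/6 on tails.
    # Fair for Bob: 1/24 - (1/4)(1/6) = 0.
    net += F(1, 24) - (F(1, 6) if result == 'tails' else 0)
    # Bet 3 (t1, only after heads): Bob buys a ticket paying $1 if D
    # at his updated credence of 3/4. Fair by his (new) lights.
    if result == 'heads':
        net += (1 if coin == 'D' else 0) - F(3, 4)
    return net

# Tails is impossible if the coin is double-headed (D).
for result, coin in [('heads', 'D'), ('heads', 'F'), ('tails', 'F')]:
    print(result, coin, bob_net(result, coin))  # -1/24, -1/24, -1/8
```

Alice, whose post-heads credence is the conditional probability 2/3, would refuse bet 3 at the price of 3/4, so no analogous book can be assembled against her.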

Monday, August 26, 2019

Functionalism and imperfect reliability

Suppose a naturalistic computational theory of mind is true: To have mental states of a given kind is to engage in a particular kind of computation. Now imagine a conscious computer thinking various thoughts and built around standard logic gates. Modify the computer to have an adjustment knob on each of its logic gates. The adjustment knob can be set to any number between 0 and 1, such that if the knob is set to p, then the chance (say, over a clock cycle) that the gate produces the right output is p. Thus, with the knob at 1, the gate always produces the right output; with the knob at 0, it always produces the opposite output; and with the knob at 0.5, it functions like a fair coin. Make all the randomness independent.

Now, let Cp be the resulting computer with all of its adjustment knobs set to p. On our computational theory of mind, C1 is a conscious computer thinking various thoughts. Now, C0.5 is not computing anything: it is simply giving random outputs. This is true even if in fact, by an extremely unlikely chance, these outputs always match the ones that C1 gives. The reason for this is that we cannot really characterize the components of C0.5 as the logic gates that they would need to be for C0.5 to be computing the same functions as C1. Something that has a probability 0.5 of producing a 1 and a probability 0.5 of producing a 0, regardless of inputs, is no more an and-gate than it is a nand-gate, say.

So, on a computational theory of mind, C0.5 is mindless. It’s not computing. Now imagine a sequence of computers Cp as p ranges from 0.5 to 1. Suppose that it so happens that the corresponding “logic gates” of all of them always happen to give the same answer as the logic gates of C1. Now, for p sufficiently close to 1, any plausible computational theory of mind will have to say that Cp is thinking just as C1 is. Granted, Cp’s gates are less reliable than C1’s, but imperfect reliability cannot destroy thought: if it did, nothing physical in a quantum universe would think, and the naturalistic computational theorist of mind surely won’t want to accept that conclusion.

So, for p close to 1, we have thought. For p = 0.5, we do not. It seems very plausible that if p is very close to 0.5, we still have no thought. So, somewhere strictly between p = 0.5 and p = 1, a transition is made from no-thought to thought. It seems implausible to think that there is such a transition, and that is a count against computational theories of mind.
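The smoothness of the underlying physical change can be illustrated with a little simulation (my own sketch; the eight-input NAND-tree circuit and all parameters are invented for illustration). The noisy circuit’s agreement with its ideal counterpart falls off smoothly from 1 toward chance as p drops from 1 to 0.5, which makes a sharp no-thought/thought threshold along the way look arbitrary:

```python
import random

def noisy_nand(a, b, p, rng):
    """NAND gate that produces the correct output with probability p."""
    correct = not (a and b)
    return correct if rng.random() < p else not correct

def circuit(bits, p, rng):
    """A toy 8-input tree of noisy NAND gates; returns one output bit."""
    layer = bits
    while len(layer) > 1:
        layer = [noisy_nand(layer[i], layer[i + 1], p, rng)
                 for i in range(0, len(layer), 2)]
    return layer[0]

rng = random.Random(0)
trials = 10_000
for p in [0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99, 1.0]:
    agree = 0
    for _ in range(trials):
        bits = [rng.random() < 0.5 for _ in range(8)]
        agree += circuit(bits, p, rng) == circuit(bits, 1.0, rng)
    print(f"p = {p:.2f}: agreement with ideal circuit = {agree / trials:.3f}")
```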

Moreover, because all the gates actually happen to fire in the same way in all the computers in the Cp sequence, and consciousness is, on the computational theory, a function of the content of the computation, it is plausible that for all the values of p < 1 for which Cp has conscious states, Cp has the same conscious states as C1. Either Cp does not count as computing anything interesting enough for consciousness or it counts as imperfectly reliably computing the same thing as C1. Thus, the transition from C0.5 to C1 is not like gradually waking up from unconsciousness. For when we gradually wake up from unconsciousness, we have an apparently continuous sequence of more and more intense conscious states. But the intensity of a conscious state is to be accounted for computationally on a computational theory of mind: the intensity is a central aspect of the qualia. Thus, the intensity has to be a function of what is being computed. And if there is only one relevant thing computed by all the Cp that are computing something conscious-making, then what we have as p goes from 0.5 to 1 is a sudden jump from zero intensity to full intensity. This seems implausible.

Axiom T for physical possibility

Here is an argument for naturalism:

  1. Only states that can be described by physics are physically possible.

  2. Non-natural states cannot be described by physics.

  3. Physical possibility satisfies Axiom T of modal logic: If something is true, then it’s physically possible.

  4. So, non-natural states are physically impossible. (1 and 2)

  5. So, non-natural states do not occur. (3 and 4)

I am inclined to think (1) is true, though it is something worth pushing back on. I think (2) is close to trivial.

That leaves me a choice: accept naturalism or deny that Axiom T applies to physical possibility.

I want to deny that Axiom T is a good axiom for physical possibility. The reason isn’t just that I think (as I do) that naturalism is actually false. The reason is that I think the axioms of physical possibility should hold as a matter of metaphysical necessity. But if Axiom T for physical possibility held as a matter of metaphysical necessity, then naturalism would be metaphysically necessary. And that is really implausible.

Yet Axiom T is very plausible. What should we do about it? Here is one potential move: Axiom T holds when we restrict our statements to ones formulated in the language of physics. This escapes the implausible conclusion that non-natural states are metaphysically impossible. But holding even this restricted axiom to be an axiom, and hence metaphysically necessary, still rules out the metaphysical possibility of certain kinds of miracles that I think should be metaphysically possible. So I think my best bet is to throw out Axiom T for physical possibility altogether. As a contingent matter of fact, it holds typically for statements formulated in the language of the correct physics. But that’s all.

Physical possibility

Here is an interesting question: How can one tell from a physics theory whether some event is physically possible according to that theory?

A sufficient condition for physical possibility is that the physics assigns a non-zero chance to it. But this is surely not a necessary condition. After all, it is possible that you will get heads on each of infinitely many tosses of an indeterministic fair coin, while the chance of that is zero.

Plausibly, a necessary condition is that the event should be describable within the state space of the theory. Thus, the state space of classical mechanics simply cannot describe an electron being in a superposition of two position states, and hence such a superposition is physically impossible. But this necessary condition is not sufficient, as Newtonian mechanics bans various transitions that can be described within the state space of classical mechanics.

So, we have a necessary condition and a sufficient condition for physical possibility relative to a physics theory. It would be nice to have a necessary and sufficient condition.

Friday, August 23, 2019

Utility monster meat farming

Suppose that:

  1. Intense pleasure is very good in itself.

  2. Consequentialism applies to non-rational animals.

Then here is a modest proposal: Have all feedlot animals outfitted with electrical stimulators of brain pleasure centers. Sufficient stimulation of pleasure centers can outweigh the pains that the animals suffer in the feedlot, and indeed can hedonically (and also with regard to desire-satisfaction) beat the pleasures of a happy life on the range. The animals may not live very long lives in that setting, but this shorter length of life could well be outweighed by the intense pleasure that they will enjoy. It seems like a win-win: there are more happy non-rational animals and we have more yummy meat for rational omnivores. It seems to me that utilitarian vegetarians whose vegetarianism is based in concern for the welfare of the animals—rather than, say, ecological worries—should support this proposal.

Perhaps, though, the repugnance some people may feel at the modest proposal gives evidence that the proposal is a reductio of the conjunction of (1) and (2). I myself deny (1): I do not think empty pleasures have any intrinsic value, even in non-rational animals. (That said, even if (1) is false, intense empty pleasure may still be very instrumentally valuable as a pain-killer, which might yet provide some consideration in favor of the proposal.) I am also somewhat dubious about (2).

Thursday, August 22, 2019

Red cars and playdough

A red chunk of playdough needs to be red through and through. A red car need only be red on the outside. Peanut butter, to be smooth, must be smooth all the way through. But a mattress need only be smooth on its upper side to be smooth.

In other words, predicates like “is smooth” and “is red” apply to objects in different ways. A seemingly arbitrary decision needs to be made as to how to apply them to a particular kind of object.

But perhaps this is only the case because chunks of playdough, cars, blobs of peanut butter and mattresses are not substances. Perhaps we can hope that for substances such decisions do not need to be made? But that hope is quickly dashed when we realize that a decision has to be made whether to call an electron a wave or a particle or both or neither, and that a decision has to be made which of a horse’s muscles are relevant to saying that the horse is strong (does it need to have strong eyelid muscles? tail muscles?).

Maybe when we descend to the level of applying fundamental predicates to substances, then the problem disappears. But that’s not clear. Position predicates seem to be fundamental but there is arbitrariness in deciding how to apply them to quantum objects when they are not in an eigenstate of position.

Perhaps where the arbitrariness disappears is when we consider cases where a fundamental predicate fundamentally applies to a substance. A fundamental predicate might non-fundamentally apply to a substance: thus, a dog might be negatively charged, and “is negatively charged” might be fundamental, but the dog is not fundamentally negatively charged—rather, it is negatively charged in virtue of mathematical facts about the overall distribution of fundamental charge properties throughout its body.

Tuesday, August 20, 2019

Why the Five Ways don't prove the existence of five (or more!) deities

Here is a potential problem for Aquinas’ Five Ways. Each of them proves the existence of a very special being. But do they each prove the existence of the same being?

After giving the Five Ways in Summa Theologica I, Aquinas goes on to argue that the being he proved the existence of has the attributes that are needed for it to be the God of Western monotheism. But the problem now is this: What if the attributes are not all the attributes of the same being? What if, say, the being proved with the Fourth Way is good but not simple, while the being proved with the First Way is simple but not good?

I now think I see how Aquinas avoids the multiplicity problem. He does this by not relying on Ways 3–5 in his arguments for the attributes of God, even when doing so would make the argument much simpler. An excellent example is Question 6, Article 1, “Whether God is good?” Since the conclusion of the Fourth Way is that there is a maximally good being, it would have been trivial for Aquinas to just give a back-reference to the Fourth Way. But instead Thomas gives a compressed but complex argument that “the first effective cause of all things” must be desirable and hence good. In doing so, Aquinas is working not with the Fourth Way, but the Second Way, the argument from efficient causes.

Admittedly, at other times, as in his arguments for simplicity, St. Thomas relies on God not having any potentiality, something that comes directly from the First Way’s prime mover argument.

This reduces the specter of the attributes being scattered between five beings, corresponding to the Five Ways, to a worry about the attributes being scattered between two beings, corresponding to the First and Second Ways. But the First and Second Ways are probably too closely logically connected for the latter to be a serious worry. The First Way shows that there is a being that is first in the order of the actualizing of the potentiality for change, an unchanged changer, a prime mover. The Second Way shows that there is a being that is first in the order of efficient causation. But to actualize the potentiality for change is a form of efficient causation. Thus, the first being in the order of efficient causation will also be a prime mover. So there is a simple—so simple that I don’t recall Aquinas stating it in the Summa Theologica—argument from the conclusion of the Second Way to the same being satisfying the conclusion of the First Way.

Consequently, in the arguments for the attributes of God, Aquinas needs only to work with the conclusion of the Second Way, and all the attributes he establishes, he establishes as present in any being of the sort the Second Way talks about.

That still leaves a multiplicity problem within the scope of a single Way. What if there are multiple first efficient causes (one for earth, one for the moon, and so on, say)? Here Thomas has three solutions: any first being has to be utterly simple, and only one being can be that on metaphysical grounds; any being that is pure actuality has to be perfect, and only one being can be that; and the world has a unity and harmony that requires a unified first cause rather than a plurality of first causes.

Finally, when all the attributes of God have been established, we can—though Aquinas apparently does not, perhaps because he thinks it’s too easy?—come back to Ways Three through Five and ask whether the being established by these Ways is the same one God. The ultimate orderers of the world in the Fifth Way are surely to be identified with the first cause of the Second Way once that first cause is shown to be one, perfect, intelligent, and cause of all other than himself. Plausibly, the maximally good being of the Fourth Way has to be perfect, and Aquinas has given us an argument that there is only one perfect being. Finally, the being in the conclusion of the Third Way is also a first cause, and hence all that has been said about the conclusion of the Second Way applies there. So, Aquinas has the resources to solve the multiplicity problem.

All this leaves an interesting question. As I read the text, the Second Way is central, and Aquinas’ subsequent natural theology in the Summa Theologica tries to show that every being that can satisfy the conclusion of the Second Way has the standard attributes of God and there is only one such being. But could Aquinas have started with the Third Way, or the Fourth, or the Fifth, instead of the First and Second, in the arguments for the divine attributes? Would doing so be easier or harder?

Monday, August 19, 2019

Two ways to pursue y for the sake of z

The phrase

  1. x pursues y as a means to z

is ambiguous between two readings:

  2. x pursues y-as-a-means-to-z

and:

  3. x’s pursuit of y is a means to z.

Case (2) is the standard case of means-end relationships: Alice goes on the exercise bike to keep herself healthy.

But (3) can be a different beast. Bob’s psychologist has told him that it would be good for him to secrete more adrenaline; maybe striving to win at tennis is the most efficient of the safe methods for secreting adrenaline available to Bob; so, Bob relentlessly pursues victory in tennis. It is not the victory, however, that releases the adrenaline in my hypothetical story: it is the pursuit of that victory. In that case, it is Bob’s pursuit of victory that is a means to (mental) health. Moreover, it could be the case that what secretes adrenaline most effectively is the non-instrumental pursuit of victory.

It looks to me like in all these cases what we have are instances of final causation, where y’s endhood is caused by z’s endhood. In case (2), it is y’s instrumental endhood that is caused by z’s endhood, while in some cases of (3), like Bob’s adrenaline-releasing pursuit of victory, it is y’s non-instrumental endhood that is caused by z’s endhood.

There can also be cases where y’s instrumental endhood is caused by z’s endhood, but y is not a means to z. For instance, we could imagine that Bob’s psychologist told him that given his peculiar motivational structure, the most efficient way for him to release adrenaline would be to strive to gain money by winning at tennis. In that case, Bob pursues winning at tennis instrumentally for the sake of gaining money, but this pursuit is finally caused by his pursuit of adrenaline. So, the victory’s instrumental endhood is finally caused by adrenaline’s endhood, but the victory is instrumental to money, not adrenaline.

Note, also, that normally a case of (2) is also a case of (3): when x pursues y-as-a-means-to-z, then x’s pursuit of y is also a means to z. But there are pathological cases where this is not so.

Instances of (3) that are not instances of (2) look like cases of higher order reasons. But they need not be cases of reasons at all. For case (3) can be subdivided into at least two subcases:

  a. x voluntarily chooses to pursue y in order that z might be achieved by the pursuit

  b. The unchosen teleological structure of x (e.g., the nature of x) is such that x’s pursuit of y is ordered to z.

In type (a) cases, indeed z can provide a higher order reason. But in type (b) cases, there need be no reasons involved. Lion cubs pursue play in order that they might grow strong, let’s say. But growing strong doesn’t provide lion cubs with a reason to pursue play, because lion cubs are not (let us suppose) the sorts of beings that can be responsive to higher order reasons. Nonetheless, there is final causation: the end of strength causes play to be an end.

Friday, August 16, 2019

Why do the basic human goods hang together as they do?

According to prominent Natural Law theories, the human good includes a number of basic non-instrumental goods, such as health, contemplation, truth, friendship and play. Now, there is a sense in which the inclusion of some items on the list of basic goods is more puzzling than the inclusion of others. There does not seem to be anything deeply mysterious about the inclusion of health, but the inclusion of play is puzzling.

Yet there is an elegant metaphysical explanation of why these goods are included in the human good, and this explanation works just as well for play as for health:

  1. The human good includes play (or health) because it is a fundamental telos in the human form to pursue play (or health).

This explanation tells us what makes it be the case that play is a basic human good. But I think it leaves something else quite unexplained. Compare to this the unhelpfulness of the answer

  2. Because its molecules have a high mean kinetic energy

to the question

  3. Why is my phone hot?

Now, in the case of my hot phone, the reason (2) is unhelpful is that, when I am puzzled about my phone being hot, I am puzzled about something like the efficient cause of the phone’s heat, and (2) does not provide that.

That’s not quite what is going on in the case of play. When we ask with puzzlement:

  4. Why is play a basic non-instrumental human good?

we are not looking for an efficient cause of play being a basic human good. Indeed, it is dubious that there could even be an efficient causal answer to (4): it seems to be a necessary truth that play is a basic human good, as this is grounded in the essential teleological structure of the human form. I think that when we ask (4), we are not actually clear on what sort of an explanation we are looking for—but if the puzzlement is the kind I am thinking about, the desired explanation is not the one given by (1).

We do become a bit less puzzled about play being a basic human good once it is pointed out to us how play promotes various other human goods like health and friendship. When we ask questions like (4), a part of what we are looking for is a story of how play hangs together with the other basic goods. If, as many Natural Lawyers think, there is a greatest human good (e.g., loving knowledge of God), then we hope that a significant part of the story will tell us how the good of play fits with that greatest good.

But now we have a curious meta-question:

  5. Why is it that telling a story about how play hangs together with the other basic goods contributes to an answer to (4), given that play’s promotion of other basic goods seems to only make play be an instrumental good?

Here is another part of the story that helps with (5). Not only does engaging in play promote the other goods, but engaging in play as an end in itself promotes the other goods more effectively. Playing Dominion with a friend purely instrumentally to friendship just wouldn’t promote friendship as effectively as playing in a way that appreciates the game as valuable in and of itself. Thus, a part of our story is now that it would be beneficial vis-à-vis the other goods if play were in fact to be non-instrumentally good, as then it could be pursued as an end in itself without this pursuit being a perversion of the will (it is, I take it, a perversion of the will to pursue mere means as if they were ends).

But it is still puzzling why even this enriched story is an answer to our question. The enriched story might make us wish that play were intrinsically good, but it doesn’t make play be intrinsically good. How does the enriched story help with our question, then?

I think that here is one of those places where Natural Law needs theism. It is a good thing for God to make beings whose basic goods exhibit unity in diversity. Thus, amongst the infinity of possible kinds of beings that could have been created, God chose to create beings with the human form in part because the basic goods encoded in the teleological structure of that form hang together in a beautiful way. God could have instead created beings where play was merely instrumentally good, but the teleological structure of such beings, first, wouldn’t exhibit the same valuable unity in diversity and, second, such beings would not as effectively achieve the other basic goods: for either they would be perversely pursuing a means as an end, or they would be missing out on the benefits of pursuing play as an end.

In other words, the story about how the goods hang together provides a genuine answer to questions like (4) given God’s wise selection of the natures to be instantiated. It is difficult to see a plausible alternative story (here’s an implausible one: there are no possible natures where goods don’t hang together; here’s another implausible one: we live in the best of all possible worlds). Thus, answering questions like (4) seems to call for theism.

Wednesday, August 14, 2019

Why did Alice make this lectern?

Converse essentiality of qualitative origins holds that if possible objects x and y have the same qualitative causal history—i.e., their initial state is qualitatively the same and the causes of that are qualitatively the same, etc.—then x = y. Kripke’s lectern argument basically makes it plausible to think that if converse essentiality of qualitative origins holds, so does essentiality of origins—the thesis that an object couldn’t have had a different qualitative causal history than it did.

If we reject converse essentiality of qualitative origins, then we have a thorny explanatory problem: When Alice took a piece of wood W and shaped it into a lectern with shape S, what explains why lectern L1 rather than, say, lectern L2 resulted?

One way out of this explanatory problem is a partial occasionalism: Whenever an object comes into existence, while creatures may decide what the qualities of the object are, God causes the specific haecceity.

Another way out is to replace converse essentiality of qualitative origins with a converse essentiality of full origins thesis: if possible objects have qualitatively and numerically the same causal history (apart possibly from their own identity), then they are identical. Then when Alice takes W and shapes it into a lectern with shape S, only L1 (say) can result. But if Alice’s identical twin Barbara did it, it would have been (say) L2.

We thus seem to have three options as to the explanation of why Alice produced L1 rather than L2.

  1. converse essentiality of qualitative origins

  2. converse essentiality of full origins

  3. partially occasionalistic haecceitism.

Maybe there are other good ones.

Disjunctions and differential equations

It is plausible that:

  1. Some of the fundamental dynamic laws of nature are given by differential equations.

  2. All fundamental dynamic laws of nature provide fundamental causal explanations.

  3. Facts that involve disjunction do not enter into fundamental causal explanations.

But one cannot consistently believe all of (1)–(3). For:

  4. Facts about derivatives are facts about limits.

And:

  5. Facts about limits are infinite conjunctions of infinite disjunctions of infinite conjunctions.

For the limit of f(x) as x → y equals z if and only if for every neighborhood N of z there is a neighborhood M of y such that for all u ∈ M with u ≠ y we have f(u)∈N. Universal quantification is a kind of conjunction and existential quantification is a kind of disjunction.
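To make the quantifier structure explicit, here is the ε–δ form written out as conjunctions and disjunctions (a sketch; restricting ε and δ to positive rationals keeps the conjunctions and disjunctions merely countably infinite):

```latex
\lim_{x \to y} f(x) = z
\iff
\bigwedge_{\varepsilon \in \mathbb{Q}^+}
\bigvee_{\delta \in \mathbb{Q}^+}
\bigwedge_{0 < |u - y| < \delta}
|f(u) - z| < \varepsilon
```

This exhibits exactly the conjunction-of-disjunctions-of-conjunctions structure claimed in (5).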

I am inclined to reject (1).

The present doesn't ground the past

I will run an argument against the thesis that facts about the past are grounded in the present on the basis of the intuition that that would be a problematically backwards explanation.

Suppose for a reductio:

  1. Necessarily, facts about the past are fully grounded in facts about the present.

Add the plausible premises:

  2. Necessarily, if fact C is fully grounded in some facts, the Bs, and the Bs are fully causally explained by fact A, then fact A causally explains fact C.

As an illustration, suppose that the full causal explanation of why the Nobel committee gave the Nobel prize to Bob is that Alice persuaded them to. Bob’s being a Nobel prize winner is fully grounded in his being awarded the Nobel prize by the Nobel committee. So, Alice’s persuasion fully causally explains why Bob is the Nobel prize winner.

  3. It is possible to have a Newtonian world such that:

    a. All the facts about the world at any one time are fully causally explained by the complete state of the universe at any earlier time.

    b. There are no temporally backwards causal explanations.

    c. There are at least three times.

Now, consider such a Newtonian world, and let t1 < t2 < t3 be three times (by (3c)).

Suppose that t3 is now present. Let Ui be the fact that the complete state of the universe at time ti is (or will be or was) as it is (or will be or was). Then:

  4. Fact U1 is fully grounded in some facts about the present. (By (1))

Call these facts the Bs.

  5. The Bs are fully causally explained by U2. (As (3a) holds in our assumed world)

Therefore:

  6. Fact U1 is fully causally explained by U2. (By (2), (4) and (5))

  7. So, there is backwards causal explanation. (By (6), since t2 is later than t1)

  8. Contradiction! (By (7) and as (3b) holds in our assumed world)

I think we should reject (1), and either opt for eternalism or for Merricks’ version of presentism on which facts about the past are ungrounded.

Thursday, August 8, 2019

Erring on the side of moderation leads to erring on the side of extremism, at least epistemically

One might think that having a less extreme (i.e., further from 0 and 1, and closer to 1/2) credence than is justified by the evidence is pretty safe epistemically. So, if one wants to be safe, one should move one’s credences closer to 1/2: moderation is safer than extremism.

But if one is to be consistent, this doesn’t work. For instance, suppose that the evidence points to clearly independent hypotheses A and B each having probability 0.6, but in the name of safety one assigns them 0.5. Then consistency requires one to assign their conjunction 0.5 × 0.5 = 0.25, whereas the evidence pointed to their conjunction having probability 0.6 × 0.6 = 0.36. In other words, by being more moderate about A and B, one is more extreme about their conjunction.
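A quick numeric check (a minimal sketch using the numbers from the example above):

```python
evidence_A = evidence_B = 0.6    # what the evidence supports
moderated_A = moderated_B = 0.5  # "safe" moderated credences

evidence_conj  = evidence_A * evidence_B    # 0.36
moderated_conj = moderated_A * moderated_B  # 0.25

# Distance from the midpoint 1/2 measures extremism.
print(abs(evidence_conj - 0.5))   # 0.14
print(abs(moderated_conj - 0.5))  # 0.25: the moderated conjunction is
                                  # further from 1/2, i.e., more extreme.
```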

In other words, once we have done our best in evaluating all the available evidence, we should go with the credence the evidence points to, rather than adding fudge factors to make our credences more moderate. (Of course, in particular cases, the existence of some kind of a fudge factor may be a part of the available evidence.)

Monday, August 5, 2019

Assertion threshold

Some people, like me, assert things starting at a credence of around 0.95. Other people are more restrictive and only assert at a higher credence, say 0.98. Is there a general fact as to what credence one should assert at? I am not sure. It seems to me that this is an area where decent and reasonable people can differ, within some range (no one should assert at 0.55, and no one should refuse to assert at 0.999999999). Maybe what is going on here is that there is an idiolect-like phenomenon at the level of illocutionary force. And somehow we get by with these different idiolects, but with some inductive heuristics like “Alice only speaks when she is quite sure”.

Sunday, August 4, 2019

More on credences of randomly chosen propositions

For a number of years I’ve been interested in what one might call “the credence of a random proposition”. Today, I saw that once precisely formulated, this is pretty easy to work out in a special case, and it has some interesting consequences.

The basic idea is this: Fix a particular rational agent and a subject matter the agent thinks about, and then ask what can be said about the credence of a uniformly randomly chosen proposition on that subject matter. The mean value of the credence will be, of course, 1/2, since for every proposition p, its negation is just as likely to be chosen.

It turned out that on the simplifying assumption that all the situations (or worlds) talked about have equal priors, the distribution of the posterior credence among the randomly chosen propositions is binomial, and hence approximately normal. This was very easy to show once I saw how to formulate the question. But it still wasn’t very intuitive to me why the distribution of the credences is approximately normal.

Now, however, I see it. Let μ be any probability measure on a finite set Ω—say, the posterior credence function on the set of all situations. Let p be a uniformly chosen random proposition, where one identifies propositions with subsets of Ω. We want to know the distribution of μ(p).

Let the distinct members (“situations”) of Ω be ω1, ..., ωn. A proposition q can be identified with a sequence q1, ..., qn of zeroes and/or ones, where qi is 1 if and only if ωi ∈ q (“q is true in situation ωi”). If p is a uniformly chosen random proposition, then p1, ..., pn will be independent identically distributed random variables with P(pi = 0)=P(pi = 1)=1/2, and p will be the set of the ωi for which pi is 1.

Then we have this nice formula:

  1. μ(p)=μ(ω1)p1 + ... + μ(ωn)pn.

This formula shows that μ(p) is the sum of independent random variables, with the ith variable taking on the possible values 0 and μ(ωi) with equal probability.

The special case in my first post today was one where the priors for all the ωi are equal, and hence the non-zero posteriors are all equal. Thus, as long as there are lots of non-zero posteriors—i.e., as long as there is a lot we don’t know—the posterior credence is by (1) a rescaling of a sum of lots of independent identically distributed Bernoulli random variables. That is, of course, a binomial distribution and approximately a normal distribution.

But what if we drop the assumption that all the situations have equal priors? Let’s suppose, for simplicity, that our empirical data precisely rules out situations ωm+1, ..., ωn (otherwise, renumber the situations). Let ν be the prior probabilities on Ω. Then μ is directly proportional to ν on {ω1, ..., ωm} and is zero outside of it, and:

  2. μ(p)=c(ν(ω1)p1 + ... + ν(ωm)pm)

where c = 1/(ν(ω1)+ ... + ν(ωm)). Thus, μ(p) is the sum of m independent but perhaps no longer identically distributed random variables. Nonetheless, the mean of μ(p) will still be 1/2, as is easy to verify. Moreover, if the ν(ωi) do not differ too radically among each other (say, are of the same order of magnitude), and m is large, we will still be close to a normal distribution by the Berry–Esseen inequality and its refinements.
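Here is a minimal simulation sketch of formula (2) (my own illustration; the Dirichlet choice of unequal-but-comparable priors is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 200                            # situations surviving the evidence
nu = rng.dirichlet(np.ones(m))     # unequal but comparable priors
mu = nu / nu.sum()                 # posterior over surviving situations

# Sample uniformly random propositions p as 0/1 indicator vectors.
samples = 100_000
p = rng.integers(0, 2, size=(samples, m))
cred = p @ mu                      # formula (2): mu(p) = sum_i mu(w_i) p_i

print(cred.mean())                 # ~0.5, as claimed
print(cred.std(), np.sqrt((mu**2).sum()) / 2)  # matches sum of variances
# If approximately normal, ~68% and ~95% should lie within 1 and 2 sigma.
for k in (1, 2):
    print((np.abs(cred - 0.5) < k * cred.std()).mean())
```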

In other words, as long as our priors are not too far from uniform, and there is a lot we don’t know (i.e., m is large), the distribution of credences among randomly chosen propositions is approximately normal. And to get estimates on the distribution of credences, we can make use of the vast mathematical literature on sums of independent random variables. This literature is available even without the “approximate uniformity” condition on the priors (which I haven’t bothered to formulate precisely).

Belief, testimony and trust

Suppose that to believe a proposition is to have a credence in that proposition above some (perhaps contextual) threshold pb where pb is bigger than 1/2 (I think it’s somewhere around 0.95 to 0.98). Then by the results of my previous post, because of the very fast decay of the normal distribution, most propositions with credence above the threshold pb have a credence extremely close to pb.

Now suppose I assert precisely when my credence is above the threshold pb. If you trusted my rationality and honesty perfectly and had no further relevant evidence, it would make sense to set your credences to mine when I tell you something. But normally, we don’t tell each other our credences. We just assert. From the fact that I assert, given perfect trust, you could conclude that my credence is probably very slightly above pb. Thus you would set your credence to slightly above pb, and in particular you would believe the proposition I asserted.

But in practice, we don’t trust each other perfectly. Thus, you might think something like this about my assertion:

If Alex was honest and a good measurer of his own credences, his credence was probably a tiny bit above pb, and if I were certain of that, I’d make that my credence. But he might not have been honest, or he might have been self-deceived, in which case his credence could very well be significantly below pb, especially given the fast decay in the distribution of credences, which yields high priors for the credence being significantly below pb.

Since the chance of dishonesty or self-deceit is normally not all that tiny, your overall credence would be below pb. Note that this is the case even for people we take to be decent and careful interlocutors. Thus, in typical circumstances, if we assert at the threshold for belief, even interlocutors who think of us as ordinarily rational and honest shouldn’t believe us.
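To illustrate with made-up numbers (a sketch only: assume credences among propositions are normally distributed with mean 1/2 and σ = 0.05, a threshold pb = 0.95, and a 10% chance that the asserter’s credence is really below the threshold):

```python
from scipy.stats import norm

sigma = 0.05   # spread of credences among propositions (assumed)
pb = 0.95      # belief/assertion threshold (assumed)
honest = 0.9   # probability the asserter's credence really is above pb

a = (pb - 0.5) / sigma
# Mean credence given that it is truly above pb (truncated normal):
# only barely above pb, because of the fast normal decay.
above = 0.5 + sigma * norm.pdf(a) / norm.sf(a)
# Crude stand-in for the dishonest/self-deceived case: the mean
# credence conditional on being below pb, which is near 1/2.
below = 0.5 - sigma * norm.pdf(a) / norm.cdf(a)

listener = honest * above + (1 - honest) * below
print(above)     # ~0.956: asserted credences cluster just above pb
print(listener)  # ~0.91: below pb, so the hearer ends up not believing
```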

This seems to me to be an unacceptable consequence. It seems to me that if someone we take to be at least ordinarily rational and honest tells us something, we should believe it, absent defeaters. Given the above argument, it seems that the credential threshold for assertion has to be significantly higher than the credential threshold for belief. In particular, it seems, the belief norm of assertion is insufficiently strong.

Intuitively, the knowledge norm of assertion is strong enough (maybe it’s too strong). If this is right, then it follows that knowledge has a credential threshold significantly above that for belief. Then, if someone asserts, we will think that their credence is just slightly above the threshold for knowledge, and even if we discount that because of worries that even an ordinarily decent person might not be reporting their credence correctly, we will likely stay above the threshold for belief. The conclusion will be that in ordinary circumstances if someone asserts something, we will be able to believe it—but not know it.

I am not happy with this. I would like to be able to say that we can go from another’s assertion to our knowledge, in cases of ordinary degrees of trust. I could just be wrong about that. Maybe I am too credulous.

Here is a way of going beyond this. Perhaps the norms of assertion should be seen not as all-or-nothing, but as more complex:

  1. When the credence is at or below pb, we are forbidden to assert.

  2. When the credence is above pb, but close to pb, we have permission to assert, but we also have a strong defeasible reason not to assert, with the strength of that reason increasing without bound the closer we get to pb.

If someone abides by these norms, they will be unlikely to assert a proposition whose credence is only slightly above pb, because they will have a strong reason not to. Thus, their asserting in accordance with the norms will give us evidence that their credence is significantly above pb. And hence we will be able to believe, given a decent degree of trust.

Note, however, that the second norm will not apply if there is a qualifier like “I think” or “I believe”. In that case, the earlier argument will still work. Thus, we have this interesting consequence: If someone trustworthy merely says that they believe something, that testimony is still insufficient for our belief. But if they assert it outright, that is sufficient for our belief.

This line of thought arose out of conversations I had with Trent Dougherty a number of years ago and my wife more recently. I don’t know if either would endorse my conclusions, though.

The credence distribution among rational beliefs

Let’s say that there are n initially epistemically relevant possible situations, w1, ..., wn, one of which (say, w1) is the true one. We can identify the propositions about these situations with subsets of the set W of possible situations. Additionally, we have some evidence concerning the possible situations. Here is a question that interests me:

  1. What is the distribution of (posterior) credences among the propositions like?

My intuition says that most of our credences are in the middle range, between 1/4 and 3/4, and that very few will be close to 0 and 1. I would speculate that the distribution of credences would look like a bell curve (when I told him about the problem, my son also had the same speculation).

Can we make some progress on the question? I think so.

For simplicity, let’s suppose our priors are uniform on W: each possible situation is equally likely.

Our evidence basically restricts the set of initially epistemically relevant situations W to a subset e of W, presumably a subset containing the true situation. Let m = |e| be the number of elements of e.

Consider an arbitrary proposition about the situations. This proposition can be identified with a subset p of W. Then the posterior P(p|e) will equal |p ∩ e|/|e| = |p ∩ e|/m, because the priors are uniform.

So now our question is:

  2. What is the distribution of |p ∩ e|/m amongst the propositions p?

There are 2^n propositions, and the possible values of |p ∩ e|/m are 0/m, 1/m, ..., m/m. We can thus just ask about the distribution of |p ∩ e| among the propositions p. Here is one easy fact:

  3. If k > m, then there are zero propositions p such that |p ∩ e|=k.

So let’s now consider 0 ≤ k ≤ m and ask how many propositions there are such that |p ∩ e|=k. This is not so hard. Such a proposition consists of a subset of e of size k and an arbitrary subset of W − e. There are C(m, k) subsets of e of size k (where C(m, k) is the binomial coefficient) and there are 2^(n − m) subsets of W − e. Thus:

  4. There are 2^(n − m)C(m, k) propositions p such that |p ∩ e|=k.

Since C(m, k)=0 unless 0 ≤ k ≤ m, claim (4) is true even without restricting the values of k.
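A brute-force check of claim (4) for small numbers (a sketch; n = 10 and m = 6 are arbitrary):

```python
from itertools import combinations
from math import comb

n, m = 10, 6                       # |W| = n, |e| = m
W = range(n)
e = set(range(m))                  # evidence: the first m situations survive

# Count propositions (subsets of W) by the size of their overlap with e.
counts = [0] * (m + 1)
for r in range(n + 1):
    for p in combinations(W, r):
        counts[len(e & set(p))] += 1

# Claim (4): exactly 2^(n-m) * C(m, k) propositions have |p ∩ e| = k.
for k in range(m + 1):
    assert counts[k] == 2 ** (n - m) * comb(m, k)
print(counts)   # the binomial pattern C(6, k), scaled by 2^4 = 16
```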

Lesson learned:

  5. The distribution of |p ∩ e| is a symmetric binomial distribution on 0, ..., m.

This binomial distribution has standard deviation √m/2. Now, we can answer (2):

  6. The distribution of |p ∩ e|/m, i.e., the posterior probability, amongst the propositions p is a symmetric binomial distribution scaled to have range from 0 to 1, mean 1/2 and standard deviation σ = 1/(2√m).

Since the binomial distribution is close to a normal distribution for large m (and in life m will often be large: it is the number of situations that remain epistemically relevant given our evidence), my conjecture that most credences are in the middle range and that we have a bell curve turns out correct. (By the way, I’ve been doing the math while writing this post. So I wrote down the speculation before I knew that it was correct.)

Note that in practice we can restrict our situations to concern some particular subject matter, say the order of cards in a deck, the outcomes of die throws, the possible scientific hypotheses about the evolution of bipedality, etc. And as long as we are sufficiently fine-grained that the number of still-in-play hypotheses is large, the above result applies.

So:

  7. We would expect the vast majority of a rational agent’s credences to cluster around 1/2. A very small minority will have credences near 0 and 1, and we have fast decay in the number of propositions with a given credence as we get closer to 0 or 1.