
Wednesday, May 21, 2025

Doxastic moral relativism

Reductive doxastic moral relativism is the view that an action type’s being morally wrong is nothing but an individual or society’s belief that the action type is morally wrong.

But this is viciously circular, since we reduce wrongness to a belief about wrongness. Indeed, it now seems that murder is wrong provided that it is believed that it is believed that it is believed ad infinitum.

A non-reductive biconditional moral relativism fares better. This is a theory on which (a) there is such a property as moral wrongness and (b) necessarily, an action type has that property if and only if it is believed that it does. Compare this: There is such a property as mass, and necessarily an object has mass if and only if God believes that it has mass.

There is a biconditional-explanatory version. On this theory (a) there is such a property as moral wrongness and (b) necessarily, an action type has that property if and only if, and if so then because, it is believed that it does.

While both the biconditional and biconditional-explanatory versions appear logically coherent, I think they are not particularly plausible. If there really is such a property as moral wrongness, and it does not reduce to our beliefs, it is hard to see why it would obtain solely because of our beliefs, or obtain necessarily if and only if we believe it does. The only clear and non-gerrymandered examples we have of properties that obtain solely because of our beliefs, or necessarily if and only if we believe they do, are properties that reduce to our beliefs.

All this suggests to me that if one wishes to be a relativist, one should base the relativism on a different attitude than belief.

Monday, January 20, 2025

Open-mindedness and epistemic thresholds

Fix a proposition p, and let T(r) and F(r) be the utilities of assigning credence r to p when p is true and false, respectively. The utilities here might be epistemic or of some other sort, like prudential, overall human, etc. We can call the pair T and F the score for p.

Say that the score T and F is open-minded provided that expected utility calculations based on T and F can never require you to ignore evidence, assuming that evidence is updated on in a Bayesian way. Assuming the technical condition that there is another logically independent event (else it doesn’t make sense to talk about updating on evidence), this turns out to be equivalent to saying that the function G(r) = rT(r) + (1−r)F(r) is convex. The function G(r) represents your expected value for your utility when your credence is r.

If G is a convex function, then it is continuous on the open interval (0,1). This implies that if one of the functions T or F has a discontinuity somewhere in (0,1), then the other function has a discontinuity at the same location. In particular, the points I made in yesterday’s post about the value of knowledge and anti-knowledge carry through for open-minded and not just proper scoring rules, assuming our technical condition.

Moreover, we can quantify this discontinuity. Given open-mindedness and our technical condition, if T has a jump of size δ at credence r (e.g., in the sense that the one-sided limits exist and differ by δ), then F has a jump of size rδ/(1−r) at the same point. (For since G must be continuous at r, the jump that the jumps in T and F would induce in G, namely rΔT + (1−r)ΔF, must vanish, and with |ΔT| = δ this forces |ΔF| = rδ/(1−r).) In particular, if r > 1/2, then if T has a jump of a given size at r, F has a larger jump at r.

I think this gives one some reason to deny that there are epistemically important thresholds strictly between 1/2 and 1, such as the threshold between non-belief and belief, or between non-knowledge and knowledge, even if the location of the thresholds depends on the proposition in question. For if there are such thresholds, imagine propositions p with the property that it is very important to reach the threshold if p is true, while one’s credence matters very little if p is false. In such a case, T will have a larger jump at the threshold than F, and so we will have a violation of open-mindedness.

Here are three examples of such propositions:

  • There are objective norms

  • God exists

  • I am not a Boltzmann brain.

There are two directions to move from here. The first is to conclude that because open-mindedness is so plausible, we should deny that there are epistemically important thresholds. The second is to say that in the case of such special propositions, open-mindedness is not a requirement.

I wondered initially whether a similar argument doesn’t apply in the absence of discontinuities. Could one have T and F be open-minded even though T continuously increases a lot faster than F decreases? The answer is positive. For instance, the pair T(r) = e^(10r) and F(r) = −r is open-minded (though not proper), even though T increases a lot faster than F decreases. (Of course, there are other things to be said against this pair. If that pair is your utility, and you find yourself with credence 1/2, you will increase your expected utility by switching your credence to 1 without any evidence.)
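
Here is a quick numerical check of that example (a sketch of mine, not part of the post): it confirms that G(r) = rT(r) + (1−r)F(r) is convex for this pair, and that by this pair's lights an agent with credence 1/2 expects to do better by jumping to credence 1.

```python
import numpy as np

# The example pair: T(r) = e^(10r), F(r) = -r.
T = lambda r: np.exp(10 * r)
F = lambda r: -r
G = lambda r: r * T(r) + (1 - r) * F(r)

# Convexity check: second differences of G on a uniform grid are nonnegative.
r = np.linspace(0.001, 0.999, 2001)
g = G(r)
print(bool(np.all(g[2:] - 2 * g[1:-1] + g[:-2] >= 0)))   # True

# Impropriety: with actual credence 1/2, the expected utility of holding
# credence s is 0.5*T(s) + 0.5*F(s), which is far higher at s = 1 than at s = 1/2.
expected = lambda s: 0.5 * T(s) + 0.5 * F(s)
print(expected(0.5), expected(1.0))   # about 74 vs. about 11013
```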

Tuesday, January 24, 2023

Thresholds and precision

In a recent post, I noted that it is possible to cook up a Bayesian setup where you don’t meet some threshold, say for belief or knowledge, with respect to some proposition, but you do meet the same threshold with respect to the claim that after you examine a piece of evidence you will meet the threshold. This is counterintuitive: it seems to imply that you can know that you will have enough evidence to know something even though you don’t yet. In a comment, Ian noted that one way out of this is to say that beliefs do not correspond to sharp credences. It then occurred to me that one could use the setup to probe the question of how sharp our credences are and what the thresholds for things like belief and knowledge are, perhaps complementarily to the considerations in this paper.

For suppose we have a credence threshold r and that our intuitions agree that we can’t have a situation where:

  a. we have transparency as to our credences,

  b. we don’t meet r with respect to some proposition p, but

  c. we meet r with respect to the proposition that we will meet the threshold with respect to p after we examine evidence E.

Let α > 0 be the “squishiness” of our credences. Let’s say that for one credence to be definitely bigger than another, their difference has to be at least α, and that to definitely meet (fail to meet) a threshold, we must be at least α above (below) it. We assume that our threshold r is definitely less than one: r + α ≤ 1.

We now want this constraint on r and α:

  1. We cannot have a case where (a), (b) and (c) definitely hold.

What does this tell us about r and α? We can actually figure this out. Consider a test for p that has no false negatives but has a false positive rate of β. Let E be a positive test result. Our best bet for generating a counterexample to (a)–(c) will be if the prior for p is as close to r as possible while yet definitely below it, i.e., if the prior for p is r − α. For setting the prior there makes (c) easier to definitely satisfy while keeping (b) definitely satisfied. Since there are no false negatives, the posterior for p will be:

  2. P(p|E) = P(p)/P(E) = (r−α)/((r−α) + β(1−(r−α))).

Let z = (r−α) + β(1−(r−α)) = (1−β)(r−α) + β. This is the prior probability of a positive test result. We will definitely meet r on a positive test result just in case we have (r−α)/z = P(p|E) ≥ r + α, i.e., just in case

  3. z ≤ (r−α)/(r+α).

(We definitely won’t meet r on a negative test result.) Thus to get (c) definitely true, we need (3) to hold as well as the probability of a positive test result to be at least r + α:

  4. z ≥ r + α.

Note that by appropriate choice of β, we can make z be anything between r − α and 1, and the right-hand-side of (3) is at least r − α since r + α ≤ 1. Thus we can make (c) definitely hold as long as the right-hand-side of (3) is bigger than or equal to the right-hand-side of (4), i.e., if and only if:

  5. (r+α)² ≤ r − α

or, equivalently:

  6. α ≤ (1/2)((1+6r−3r²)^(1/2) − 1 − r).

It’s in fact not hard to see that (6) is necessary and sufficient for the existence of a case where (a)–(c) definitely hold.
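
To make the construction concrete, here is a minimal numerical sketch. The particular values of r, α and β are my own illustrative choices, not anything from the post:

```python
# A minimal numerical instance of the construction above. The values r = 0.9,
# alpha = 0.03 and beta = 0.5 are illustrative; the test has no false negatives
# and false positive rate beta.
r, alpha = 0.9, 0.03
prior = r - alpha                 # as close to r as possible while definitely below it
beta = 0.5                        # false positive rate, chosen so that z lands in the right range

z = prior + beta * (1 - prior)    # P(E): prior probability of a positive test result
posterior = prior / z             # P(p|E), by Bayes' theorem, since P(E|p) = 1

print(f"prior  = {prior:.3f}   (definitely below r: at most r - alpha = {r - alpha:.3f})")
print(f"P(p|E) = {posterior:.4f}  (definitely meets r: at least r + alpha = {r + alpha:.3f})")
print(f"P(E)   = {z:.4f}  (credence that we will come to meet the threshold; definitely meets r)")
```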

We thus have our joint constraint on the squishiness of our credences: bad things happen if our credences are so precise as to make (6) true with respect to a threshold r for which we don’t want (a)–(c) to definitely hold; and the easiest scenario for making (a)–(c) definitely hold will be a binary test with no false negatives. What exactly that says about α depends on where the relevant threshold lies. If the threshold r is 1/2, the squishiness-for-paradox α is about 0.15. That’s surely higher than the actual squishiness of our credences. So if we are concerned merely with the threshold being more-likely-than-not, then we can’t avoid the paradox, because there will be cases where our credence is definitely below the threshold and it’s definitely above the threshold that examination of the evidence will push us above the threshold.

But what’s a reasonable threshold for belief? Maybe something like 0.9 or 0.95. At r = 0.9, the squishiness needed for paradox is α = 0.046. I suspect our credences are more precise than that. If we agree that the squishiness of our credences is less than 4.6%, then we have an argument that the threshold for belief is more than 0.9. On the other hand, at r = 0.95, the squishiness needed for paradox is 2.4%. At this point, it becomes more plausible that our credences lack that kind of precision, but it’s not clear. At r = 0.98, the squishiness needed for paradox dips below 1%. Depending on how precise we think our credences are, we get an argument that the threshold for belief is something like 0.95 or 0.98.
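
Here is a short sketch that tabulates the right-hand side of (6) at the thresholds discussed above; it reproduces the figures quoted in this paragraph.

```python
from math import sqrt

# Right-hand side of (6): the largest squishiness alpha at which the paradox
# can be generated for a given threshold r.
def squishiness_for_paradox(r):
    return 0.5 * (sqrt(1 + 6 * r - 3 * r ** 2) - 1 - r)

for r in (0.5, 0.9, 0.95, 0.98):
    print(f"r = {r:.2f}: alpha = {squishiness_for_paradox(r):.3f}")
# r = 0.50: alpha = 0.151
# r = 0.90: alpha = 0.046
# r = 0.95: alpha = 0.024
# r = 0.98: alpha = 0.010
```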

Here's a graph of the squishiness-for-paradox α against the threshold r:

Note that the squishiness of our credences likely varies with where the credences lie on the line from 0 to 1, i.e., varies with respect to the relevant threshold. For we can tell the difference between 0.999 and 1.000, but we probably can’t tell the difference between 0.700 and 0.701. So the squishiness should probably be counted relative to the threshold. Or perhaps it should be correlated to log-odds. But I need to get to looking at grad admissions files now.

Friday, October 28, 2022

Does our ignorance always grow when we learn?

Here is an odd thesis:

  1. Whenever you gain a true belief, you gain a false belief.

This follows from:

  2. Whenever you gain a belief, you gain a false belief.

The argument for (2) is:

  3. You always have at least one false belief.

  4. You believe a conjunction if and only if you believe the conjuncts.

  5. Suppose you just gained a belief p.

  6. There is now some false belief q that you have. (By (3))

  7. Before you gained the belief p you didn’t believe the conjunction of p and q. (By (4))

  8. So, you just gained the belief in the conjunction of p and q. (By (5) and (7))

  9. The conjunction of p and q is false. (By (6))

  10. So, you just gained a false belief. (By (8) and (9))

I am not sure I accept (4), though.

Thursday, February 10, 2022

It can be rational to act as if one's beliefs were more likely true than the evidence makes them out to be

Consider this toy story about belief. It’s inconvenient to store probabilities in our minds. So instead of storing the probability of a proposition p, once we have evaluated the evidence to come up with a probability r for p, we store that we believe p if r ≥ 0.95, that we disbelieve p if r ≤ 0.05, and otherwise that we are undecided. (Of course, the “0.95” is only for the sake of an example.)

Now, here is a curious thing. Suppose I come across a belief p in my mind, having long forgotten the probability it came with, and I need to make some decision to which p is relevant. What probability should I treat p as having in my decision? A natural first guess is 0.95, which is my probabilistic threshold for belief. But that is a mistake. For the average probability of my beliefs, if I follow the above practice perfectly, is bigger than 0.95. For I don’t just believe things that have probability 0.95. I also believe things that have probability 0.96, 0.97 and even 0.999999. Intuitively, however, I would expect that there are fewer and fewer propositions with higher and higher probability. So, intuitively, I would expect the average probability of a believed proposition to be somewhat above 0.95. How far above, I don’t know. And the average probability of a believed proposition is the probability that if I pick a believed proposition out of my mental hat, it will be true.
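
A quick simulation sketch illustrates the point. The uniform distribution of evidential probabilities is purely an illustrative assumption of mine, not part of the toy model; a distribution that thins out toward 1 would give a smaller, but still positive, excess over the threshold.

```python
import random

# Toy simulation: evaluate many propositions, store "believe" when the
# evidential probability is at least 0.95, then ask how probable the believed
# propositions are on average.
random.seed(0)
threshold = 0.95
probs = [random.random() for _ in range(1_000_000)]
believed = [p for p in probs if p >= threshold]
print(sum(believed) / len(believed))   # roughly 0.975, i.e., above the 0.95 threshold
```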

So even though my threshold for belief is 0.95 in this toy model, I should treat my beliefs as if they had a slightly higher probability than that.

This could provide an explanation for why people can sometimes treat their beliefs as having more evidence than they do, without positing any irrationality on their part (assuming that the process of not storing probabilities but only storing disbelieve/suspend/believe is not irrational).

Objection 1: I make mistakes. So I should take into account the fact that sometimes I evaluated the evidence wrong and believed things whose actual evidential probability was less than 0.95.

Response: We can both overestimate and underestimate probabilities. Without evidence that one kind of error is more common than the other, we can just ignore this.

Objection 2: We have more fine-grained data storage than disbelieve/suspend/believe. We confidently disbelieve some things, confidently believe others, are inclined or disinclined to believe some, etc.

Response: Sure. But the point remains. Let’s say that we add “confidently disbelieve” and “confidently believe”. It’ll still be true that we should treat the things in the “believe but not confidently” bin as having slightly higher probability than the threshold for “believe”, and the things in the “confidently believe” bin as having slightly higher probability than the threshold for “confidently believe”.

Tuesday, January 18, 2022

A physicalist argument for proper functions in biology

  1. We have beliefs.

  2. A belief is a mental state with the proper function of reflecting reality.

  3. Our mental states are biological states. (Follows from standard physicalism)

  4. So, some biological states have a proper function.

Tuesday, December 21, 2021

Divine simplicity and divine knowledge of contingent facts

One of the big puzzles about divine simplicity which I have been exploring is that of God’s knowledge of contingent facts. A sloppy way to put the question is:

  1. How can God know p in one world and not know p in another, even though God is intrinsically the same in both worlds?

But that’s not really a question about divine simplicity, since the same is often true for us. Yesterday you knew that today the sun would rise. Yet there is a possible world w2 which up to yesterday was exactly the same as our actual world w1, but due to a miracle or weird quantum stuff, the sun did not rise today in w2. Yesterday, you were intrinsically the same in w1 and w2, but only in w1 did you know that today the sun would rise. For, of course, you can’t know something that isn’t true.

So perhaps the real question is:

  2. How can God believe p in one world and not believe p in another, even though God is intrinsically the same in both worlds?

I wonder, however, if there isn’t a possibility of a really radical answer: it is false that God believes p in one world and not in another, because in fact God doesn’t have any beliefs in any world—he only knows.

In our case, belief seems to be an essential component of knowledge. But God’s knowledge is only analogical to our knowledge, and hence it should not be a big surprise if the constitutive structure of God’s knowledge is different from our knowledge.

And even in our case, it is not clear that belief is an essential component of knowledge. Anscombe famously thought that there was such a thing as intentional knowledge—knowledge of what you are intentionally doing—and it seems that on her story, the role played in ordinary knowledge by belief was played by an intention. If she is right about that, then an immediate lesson is that belief is not an essential component of knowledge. And in fact even the following claim would not be true:

  3. If one knows p, then one believes or intends p.

For suppose that I intentionally know that I am writing a blog post. Then I presumably also know that I am writing a blog post on a sunny day. But I don’t intentionally know that I am writing a blog post on a sunny day, since the sunniness of the day is not a part of the intention. Instead, my knowledge is based in part on the intention to write a blog post and in part on the belief that it is a sunny day. Thus, knowledge of p can be based on belief that p, intention that p, or a complex combination of belief and intention. But once we have seen this, then we should be quite open to a lot of complexity in the structure of knowledge.

Of course, Anscombe might be wrong about there being such a thing as knowledge not constituted by belief. But her view is still intelligible. And its very intelligibility implies a great deal of flexibility in the concept of knowledge. The idea of knowledge without belief is not nonsense in the way that the idea of a fork without tines is.

The same point can be supported in other ways. We can imagine concluding that we have no beliefs, but we have other kinds of representational states, such as credences, and that we nonetheless have knowledge. We are not in the realm of tineless forks here.

Now, it is true that all the examples I can think of for other ways that knowledge could be constituted in us besides being based on belief still imply intrinsic differences given different contents (beyond the issues of semantic externalism due to twinearthability). But the point is just that knowledge is a flexible enough concept that we should be open to God having something analogous to our knowledge but without any contingent intrinsic state being needed. (One model of this possibility is here.)

Wednesday, September 8, 2021

Reasons from the value of true belief

Two soccer teams are facing off, with a billion fans watching on TV. Brazil has a score of 2 and Belgium has a score of 0, and there are 15 minutes remaining. The fans nearly unanimously think Brazil will win. Suddenly, there is a giant lightning strike, and all electrical devices near the stadium fail, taking the game off the air. Coincidentally, during the glitch, Brazil’s two best players get red cards, and now Belgium has a very real chance to win if they try hard.

But the captain of the Brazilian team yells out this argument to the Belgians: “If you win, you will make a billion fans have a false belief. A false belief is bad, and when you multiply the badness by a billion, the result is very bad. So, don’t win!”

Great hilarity ensues among the Belgians and they proceed to trounce the Brazilians.

The Belgians are right to laugh: the consideration that the belief of a billion fans will be falsified by their effort carries little to no moral weight.

Why? Is it that false belief carries little to no disvalue? No. For suppose that now the game is over. At this point, the broadcast teams have a pretty strong moral reason to try to get back on the air in order to inform the billion fans that they were mistaken about the result of the game.

In other words, we have a much stronger reason to shift people’s beliefs to match reality than to shift reality to match people’s beliefs. Yet in both cases the relevant effect on the good and bad in the world can be the same: there is less of the bad of false beliefs and more of the good of true beliefs. An immediate consequence of this is that consequentialism about moral reasons is false: the weight of moral reasons depends on more than the value of the consequences.

It is often said that belief has a mind-to-world direction of fit. It is interesting that this not only has repercussions for the agent’s own epistemic life, but for the moral life of other parties. We have much more reason to help others to true belief by affecting their beliefs than by affecting the truth and falsity of the content of the beliefs.

Do the Belgians have any moral reason to lose, in light of the fact that losing will make the fans have correct belief? I am inclined to think so: producing a better state of affairs is always worthwhile. But the force of the reason is exceedingly small. (Nor do the numbers matter: the reason’s force would remain exceedingly small even if there are trillions of fans because Earth soccer was famous through the galaxy.)

There is a connection between the good and the right, but it is quite complex indeed.

Monday, July 26, 2021

Divine simplicity and knowledge of contingent truth

I think the hardest problem for divine simplicity is the problem of God’s contingent beliefs. In our world, God believes there are horses. In a horseless world, God doesn’t believe there are horses. Yet according to divine simplicity, God has the same intrinsic features in both the horsey and the horseless worlds.

There is only one thing the defender of simplicity can say: God’s contingent beliefs are not intrinsic features of God. The difficult task is to make this claim easier to believe.

It’s worth noting that our beliefs are partly extrinsic. Consider a world just like ours, but where a mischievous alien did some genetic modification work to make cows that look and behave just like horses to the eyes of humans before modern science, and where humans thought and talked about them just as in our world they talked about horses. If a 14th century Englishman in the fake-horse world sincerely said he believed he owned a “horse”, he would be expressing a different belief from a 14th century Englishman in our world who uttered the same sounds, since “horse” in the fake-horse world doesn’t refer to horses but to genetically modified cows. Their beliefs about rideable animals would be different, but inside their minds, intrinsically, there need be no difference between their thought processes.

But it is difficult to stretch this story to the case of God, since it relies on observational limitations. Moreover, it is hard to extend the story to more major differences. If instead of fake horses, the alien produced tauntauns, no doubt the minds of the people in that world would be intrinsically different in thinking about riding tauntauns from our minds when we think about riding horses (even if accidentally their English speakers used “horse” to denote a tauntaun).

While our beliefs are partly extrinsic, God’s contingent beliefs are radically extrinsic according to divine simplicity. There are no intrinsic differences in God no matter how radical the differences in belief are.

This feels hard to accept. Still, once we have accepted that beliefs can be partly extrinsic, it is difficult to mount a principled argument against radical extrinsicness of divine belief. All we really have is that this extrinsicness is counterintuitive—but given God’s radical difference from creatures, we should expect God to be counterintuitive in many (infinitely many!) ways.

But I want to share a thought that has helped me be more accepting of the radical extrinsicness thesis about divine belief. There is something awkward in talking of God’s having beliefs. The much more natural way to talk is of God’s having knowledge. But knowledge is way more extrinsic in us than belief is. For you to know something, that something has to be true. So what you know depends very heavily on the external world. You know that your car is in the garage in part precisely in virtue of the fact that your car is in the garage. If your car weren’t in the garage, you wouldn’t have this knowledge.

In us, belief and knowledge are separable. Belief is much more of an intrinsic state, while knowledge is much more of an extrinsic one. When we know something outside ourselves, what makes it be the case that we know it is both a state of belief and a state of the external world. This separation makes error possible: it is possible to have the belief without the external world matching up.

But in a being that is epistemically perfect, there is no possibility of belief without knowledge. I want to suggest the plausibility of this thesis: in a being that is epistemically perfect, there is not even a metaphysical separation between knowledge and belief. For such a being, to believe is to know. But knowledge of contingent external states of affairs is significantly extrinsic. So if to believe for such a being is to know, then we would expect beliefs about contingent external states of affairs to be significantly extrinsic as well.

In other words, the extrinsicness of belief that divine simplicity requires matches up with an extrinsicness that is quite plausible given considerations of the perfection of divine epistemology.

Monday, June 21, 2021

Self-locating beliefs in the Trinity

Here is a difficulty for the doctrine of the Trinity that I don’t remember coming across before:

  1. The Father and the Son have the numerically same divine mind.

  2. If x and y have the numerically same divine mind, then x and y have the same divine beliefs.

  3. The Father has an “I am the Father” divine belief.

  4. So, the Son has an “I am the Father” divine belief. (1–3)

  5. An “I am the Father” divine belief in the Son would be false.

  6. There are no false divine beliefs.

  7. So, the Son has no “I am the Father” divine belief. (5–6)

  8. Contradiction!

Here, premise (1) follows from the heuristic that what there are two of in Christ, there is one of in the Trinity: there are two minds in Christ, so one mind in the Trinity. Non-heuristically, if there are two minds in Christ—the human and the divine mind—the mind must be a function of the nature, and as there is one divine nature in the Trinity, there is one mind in the Trinity.

There is a quick way out of the paradox: Restrict premise (2) to propositional beliefs rather than de se or self-locating beliefs. The belief that would be expressed in English by “I am the Father” is a de se or self-locating belief. There are corresponding propositional beliefs, such as the belief that the Father is the Father and the Son is the Son, but these are unproblematically had in common by the Father, the Son and the Holy Spirit.

However, while this quick way gets one out of the argument, it nonetheless raises the difficult question of how it is that the Father knows de se that he is the Father and the Son knows de se that he is not the Father, while yet there is one mind.

The solution had better be in terms of the relations between the divine persons, for there is no difference between the persons of the Trinity except the relational. I am reminded here of Thomas’s discussion of creation and the Trinity:

And therefore to create belongs to God according to His being, that is, His essence, which is common to the three Persons. Hence to create is not proper to any one Person, but is common to the whole Trinity.

Nevertheless the divine Persons, according to the nature of their procession, have a causality respecting the creation of things. For as was said above, when treating of the knowledge and will of God, God is the cause of things by His intellect and will, just as the craftsman is cause of the things made by his craft. Now the craftsman works through the word conceived in his mind, and through the love of his will regarding some object. Hence also God the Father made the creature through His Word, which is His Son; and through His Love, which is the Holy Ghost. And so the processions of the Persons are the type of the productions of creatures inasmuch as they include the essential attributes, knowledge and will.

Thus, each divine person is fully the Creator, but is fully the Creator in a way that takes into account the relationship that defines the person in the Trinity: the Father creates in a Fatherly way, the Son as the Logos through which creation is done, and the Spirit as the Love in which creation is inspired. What makes it be the case that the Father creates in a Fatherly way is just that the Father creates and he stands in the relations that constitute him as Father; what makes it be the case that the Son creates in a Filial way is just that the Son creates and he stands in the relations that constitute him as Son; and similarly for the Holy Spirit.

We might thus imagine the following story. There is a state F of the divine mind such that the Father’s Fatherly instantiation of F constitutes F into a belief that he (de se) is the Father. The Son instantiates the numerically same state F in a Filial way. But while a Fatherly instantiation of F is correctly described in English as constituting an “I am the Father” belief, a Filial instantiation of F is not aptly so described. Perhaps, a Filial instantiation of F is aptly described as a believing of “I am the Son of the one who is the Father.” Thus, the de se beliefs of the persons of the Trinity are constituted by mental states common to the Trinity and the relations constituting the persons.

Wednesday, January 13, 2021

Epistemology and the presumption of (im)permissibility

Normally, our overt behavior has the presumption of moral permissibility: an action is morally permissible unless there is some specific reason why it would be morally impermissible.

Oddly, this is not so in epistemology. Our doxastic behavior seems to come along with a presumption of epistemic impermissibility. A belief or inference is only justified when there is a specific reason for that justification.

In ethics, there are two main ways of losing the presumption of moral permissibility in an area of activity.

The first is that actions falling in that area are prima facie bad, and hence a special justification is needed for them. Violence is an example: a violent action is by default impermissible, unless we have a special reason that makes it permissible. The second family of cases is areas of action that are dangerous. When we go into a nuclear power facility or a functioning temple, we are surrounded by danger—physical or religious—and we should refrain from actions unless we have special reason to think they are safe.

Belief isn’t prima facie bad. But maybe it is prima facie dangerous? But the presumption of impermissibility is not limited to some special areas. There indeed are dangerous areas of our doxastic lives: having the wrong religious beliefs can seriously damage us psychologically and spiritually while having the wrong beliefs about nutrition and medicine can kill us. But there seem to be safe areas of our doxastic lives: whatever I believe about the last digit in the number of hairs on my head or about the generalized continuum hypothesis seems quite safe. Yet, having the unevidenced belief that the last digit in the number of hairs on my head is three is just as impermissible as having the unevidenced belief that milk cures cancer.

Perhaps it is simply that moral and epistemic normativity are not as analogous as they have seemed to some.

But there is another option. Perhaps, despite what I said, our doxastic lives are always dangerous. Here is one way to suggest this. Perhaps truth is sacred, and so dealing with truth is dangerous just as it is dangerous to be in a temple. We need reason to think that the rituals we perform are right when we are in a temple—we should not proceed by whim or by trial and error in religion—and perhaps similarly we need reasons to think that our beliefs are true, precisely because our doxastic lives always, no matter how “secular” the content, concern the sacred. Our beliefs may be practically safe, but the category of the sacred always implicates a danger, and hence a presumption of impermissibility.

I can think of two ways our doxastic lives could always concern the sacred:

  1. God is truth.

  2. All truth is about God: every truth is contingent or necessary; contingent truths tell us about what God did or permitted; necessary truths are all grounded in the nature of God.

All this also fits with an area of our moral lives where there is a presumption of impermissibility: assertion. One should only make assertions when one has reason to think they are true. Otherwise, one is lying or engaging in BS. Yet assertion is not always dangerous in any practical sense of “dangerous”: making unwarranted assertions about the number of hairs on one’s head or the generalized continuum hypothesis is pretty safe practically speaking. But perhaps assertion also concerns the truth, which is something sacred, and where we are dealing with the sacred, there we have spiritual danger and a presumption of impermissibility.

Friday, August 30, 2019

Credence and belief

For years, I’ve been inclining towards the view that belief is just high credence, but this morning the following argument is swaying me away from this:

  1. False belief is an evil.

  2. High credence in a falsehood is not an evil.

  3. So, high credence is not belief.

I don’t have a great argument for (1), but it sounds true to me. As for (2), my argument is this: There is no evil in having the right priors, but having the right priors implies having lots of high credences in falsehoods.

Maybe I should abandon (1) instead?

Sunday, August 4, 2019

Belief, testimony and trust

Suppose that to believe a proposition is to have a credence in that proposition above some (perhaps contextual) threshold pb where pb is bigger than 1/2 (I think it’s somewhere around 0.95 to 0.98). Then by the results of my previous post, because of the very fast decay of the normal distribution, most propositions with credence above the threshold pb have a credence extremely close to pb.

Now suppose I assert precisely when my credence is above the threshold pb. If you trusted my rationality and honesty perfectly and had no further relevant evidence, it would make sense to set your credences to mine when I tell you something. But normally, we don’t tell each other our credences. We just assert. From the fact that I assert, given perfect trust, you could conclude that my credence is probably very slightly above pb. Thus you would set your credence to slightly above pb, and in particular you would believe the proposition I asserted.

But in practice, we don’t trust each other perfectly. Thus, you might think something like this about my assertion:

If Alex was honest and a good measurer of his own credences, his credence was probably a tiny bit above pb, and if I was certain of that, I’d make that be my credence. But he might not have been honest or he might have been self-deceived, in which case his credence could very well be significantly below pb, especially given the fast decay in the distribution of credences, which yields high priors for the credence being significantly below pb.

Since the chance of dishonesty or self-deceit is normally not all that tiny, your overall credence would be below pb. Note that this is the case even for people we take to be decent and careful interlocutors. Thus, in typical circumstances, if we assert at the threshold for belief, even interlocutors who think of us as ordinarily rational and honest shouldn’t believe us.

This seems to me to be an unacceptable consequence. It seems to me that if someone we take to be at least ordinarily rational and honest tells us something, we should believe it, absent defeaters. Given the above argument, it seems that the credential threshold for assertion has to be significantly higher than the credential threshold for belief. In particular, it seems, the belief norm of assertion is insufficiently strong.

Intuitively, the knowledge norm of assertion is strong enough (maybe it’s too strong). If this is right, then it follows that knowledge has a credential threshold significantly above that for belief. Then, if someone asserts, we will think that their credence is just slightly above the threshold for knowledge, and even if we discount that because of worries that even an ordinarily decent person might not be reporting their credence correctly, we will likely stay above the threshold for belief. The conclusion will be that in ordinary circumstances if someone asserts something, we will be able to believe it—but not know it.

I am not happy with this. I would like to be able to say that we can go from another’s assertion to our knowledge, in cases of ordinary degrees of trust. I could just be wrong about that. Maybe I am too credulous.

Here is a way of going beyond this. Perhaps the norms of assertion should be seen not as all-or-nothing, but as more complex:

  1. When the credence is at or below pb, we are forbidden to assert.

  2. When the credence is above pb, but close to pb, we have permission to assert, but we also have a strong defeasible reason not to assert, with the strength of that reason increasing to infinity as we get closer to pb.

If someone abides by these, they will be unlikely to assert a proposition whose credence is only slightly above pb, because they will have a strong reason not to. Thus, their asserting in accordance with the norms will give us evidence that their credence is not insignificantly above pb. And hence we will be able to believe, given a decent degree of trust.

Note, however, that the second norm will not apply if there is a qualifier like “I think” or “I believe”. In that case, the earlier argument will still work. Thus, we have this interesting consequence: If someone trustworthy merely says that they believe something, that testimony is still insufficient for our belief. But if they assert it outright, that is sufficient for our belief.

This line of thought arose out of conversations I had with Trent Dougherty a number of years ago and my wife more recently. I don’t know if either would endorse my conclusions, though.

Wednesday, May 1, 2019

The Bayesian false belief pandemic

Suppose that a credence greater than 95% suffices to count as a belief, and that you are a rational agent who tossed ten fair coins but did not see the results. Then you have at least 638 false beliefs about coin toss outcomes.

To see this, for simplicity, suppose first that all the coins came up heads. Let Tn be the proposition that the nth coin is tails. Then any disjunction of five or more of the Tn has probability at least 1 − (1/2)⁵ ≈ 97%, and so you believe every disjunction of five or more of the Tn. Each such belief is false, because all the coins in fact came up heads. There are 638 (pairwise logically inequivalent) disjunctions of five or more of the Tn. So, you have at least 638 false beliefs here (even if we are counting up to logical equivalence).

Things are slightly more complicated if not all the coins come up heads, but exactly the same conclusion is still true: for each coin, take the false proposition about its outcome; you then believe all 638 disjunctions of five or more of these false single-coin-outcome propositions, and each of these disjunctions is false.
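
A two-line check of the counting and the probability bound (a sketch of mine, not from the post):

```python
from math import comb

# Each disjunction of k >= 5 of the T_n is true with probability 1 - 2**(-k),
# and there are C(10, k) ways to choose its disjuncts.
print(1 - 2 ** -5)                                # 0.96875 > 0.95, so each such disjunction is believed
print(sum(comb(10, k) for k in range(5, 11)))     # 638
```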

But it seems that nothing went wrong in the coin toss situation: everything is as it should be. There is no evil present. So, it seems, reasonable false belief is not an evil.

I am not sure what to make of this conclusion, since it also seems to me that it is the telos of our beliefs to correctly represent reality, and a failure to do that seems an evil.

Perhaps the thing to say is this: the belief itself is bad, but having a bad belief isn’t always intrinsically bad for the agent? This seems strange, but I think it can happen.

Consider a rather different case. I want to trigger an alarm given the presence of radiation above a certain threshold. I have a radiation sensor that has practically no chance of being triggered when the radiation is below the threshold but has a 5% independent failure rate when the radiation is above the threshold. And a 5% false negative rate is not good enough for my application. So I build a device with five independent sensors, and have the alarm be triggered if any one sensor goes off. My false negative rate goes down to 3 ⋅ 10⁻⁷. Suppose now four sensors are triggered and the fifth is not. The device is working correctly and triggers the alarm, even though one sensor has failed. The failure of the sensor is bad for the sensor but not bad for the device.

Another move is to say that there is an evil present in the false belief case, but it’s tiny.

And yet another move is to deny that one should have a belief when the credence rises above a threshold.

Wednesday, March 6, 2019

Another dilemma?

Following up on my posts (this and this) regarding puzzles generated by moral uncertainty, here is another curious case.

Dr. Alice Kowalska believes that a steroid injection will be good for her patient, Bob. However, due to a failure of introspection, she also believes that she does not believe that a steroid injection will be beneficial to Bob. Should she administer the steroid injection?

In other words: Should Dr. Kowalska do what she thinks is good for her patient, or should she do what she thinks she thinks is good for her patient?

The earlier posts pushed me in the direction of thinking that subjective obligation takes precedence over objective obligation. That would suggest that she should do what she thinks she thinks is good for her patient.

But doesn’t this seem mistaken? After all, we don’t want Dr. Kowalska to be gazing at her own navel, trying to figure out what she thinks is good for the patient. We want her to be looking at the patient, trying to figure out what is good for the patient. So, likewise, it seems that her action should be guided by what she thinks is good for the patient, not what she thinks she thinks is good for the patient.

How, though, to reconcile this with the action-guiding precedence that the subjective seems to have in my previous posts? Maybe it’s this. What should be relevant to Dr. Kowalska is not so much what she believes, but what her evidence is. And here the case is underdescribed. Here is one story compatible with what I said above:

  1. Dr. Kowalska has lots of evidence that steroid injections are good for patients of this sort. But her psychologist has informed her that because of a traumatic experience involving a steroid injection, she has been unable to form the belief that naturally goes with this evidence. However, Dr. Kowalska’s psychologist is incompetent, and Dr. Kowalska indeed has the belief in question, but trusts her psychologist and hence thinks she does not have it.

In this case, it doesn’t matter whether Dr. Kowalska believes the injection would be good for the patient. What matters is that she has lots of evidence, and she should inject.

Here is another story compatible with the setup, however:

  2. Dr. Kowalska knows there is no evidence that steroid injections are good for patients of this sort. However, her retirement savings are invested in a pharmaceutical company that specializes in these kinds of steroids, and wishful thinking has led to her subconsciously and epistemically akratically forming the belief that these injections are beneficial. Dr. Kowalska does not, however, realize that she has formed this subconscious belief.

In this case, intuitively, again it doesn’t matter that Dr. Kowalska has this subconscious belief. What matters is that she knows there is no evidence that the injections are good for patients of this sort, and given this, she should not inject.

If I am right in my judgments about 1 and 2, the original story left out crucial details.

Maybe we can tell the original story simply in terms of evidence. Maybe Dr. Kowalska on balance has evidence that the injection is good, while at the same time on balance having evidence that she does not on balance have evidence that the injection is good. I am not sure this is possible, though. The higher order evidence seems to undercut the lower order evidence, and hence I suspect that as soon as she gained evidence that she does not on balance have evidence, it would be the case that on balance she does not have evidence.

Here is another line of thought suggesting that what matters is evidence, not belief. Imagine that Dr. Kowalska and Dr. Schmidt both have the same evidence that it is 92% likely that the injections would be beneficial. Dr. Schmidt thereupon forms the belief that the injections would be beneficial, but Dr. Kowalska is more doxastically cautious and does not form this belief. But there is no disagreement between them as to the probabilities on the evidence. Then I think there should be no disagreement between them as to what course of action should be taken. What matters is whether 92% likelihood of benefit is enough to outweigh the cost, discomfort and side-effects, and whether the doctor additionally believes in the benefit is quite irrelevant.

Monday, January 28, 2019

Lying to prevent great evils

Consider this argument:

  1. It is permissible to lie to prevent great evils.

  2. Not believing in God is a great evil.

  3. So, it is permissible to lie to get people to believe in God (e.g., by offering false testimony to miracles).

But the conclusion is absurd. So we need to reject (1) or (2). I think (2) is secure. Thus we should reject (1).

I suppose one could try to calibrate some great level E of evil such that it is permissible to lie (a) to prevent evils at levels greater than E but (b) not to prevent evils lesser than E. I am sceptical that one can do this in a plausible way, given that not believing in God is indeed a great evil, since it makes it very difficult to achieve the primary goal of human life.

Perhaps a more promising way out of the argument is to formulate some subject-specific principle, such as that it is wrong to lie in religious matters or for religious ends. But it is hard to do this plausibly.

It seems better to me to just deny (1), and be an absolutist about lying: lying is always wrong.

Friday, January 25, 2019

Nonsummativism about group belief

Here is a quick argument that a group can believe something no individual does. You hire a team of three consultants to tell you whether a potential employee, Alice, is smart and honest. The team takes on the task. The team leader first leads a discussion as to which of the other two team members is best qualified to investigate which attribute, and unanimous agreement is reached on that question. Both of these then investigate and come to a decision. The team leader writes “Alice is” on a piece of paper, and then passes the piece of paper around to the second team member, who writes down the attribute she investigated or its negation, depending on what she found, followed by “and”. The leader then passes the piece of paper to the third team member, who writes down the attribute they investigated or its negation, followed by a period, without reading (and hence being biased by) what was written already. Job done, the leader without reading folds the paper in half and hands it to you, saying: “Here’s what we think.”

You open the paper and read the verdict of the consulting team: “Alice is smart and not honest.” The team agrees unanimously that the division of labor was the right way to produce an epistemically responsible group verdict, but nobody on the consulting team believes or even knows the verdict. The team leader has no opinions on Alice: she delegated the opinions to the intelligence and integrity experts. The intelligence expert has no view on Alice’s integrity and vice versa.

One could say that the team doesn’t believe its verdict. But to issue a verdict that one does not believe is to fail in sincerity. Yet there need be no such failure in the above procedure.

(My own view is that when we say the team “believes” something, we are using “believes” in an analogical sense. But the points stand.)

Thursday, December 13, 2018

Group "belief"

Even though nobody thinks Strong AI has been achieved, we attribute beliefs to computer systems and software:

  • Microsoft Word thinks that I mistyped that word.

  • Google knows where I’ve been shopping.

The attribution is communicatively useful and natural, but is not literal.

It seems to me, however, that the difference in kind between the beliefs of computers and the beliefs of persons is no greater than the difference in kind between the beliefs of groups and the beliefs of persons.

Given this, the attribution of beliefs to groups should also not be taken to be literal.

Friday, November 30, 2018

Believing of God that he exists

One formulation of Schellenberg’s argument from hiddenness depends on the premise:

(4) If for any capable finite person S and time t, God is at t open to being in a personal relationship with S at t, then for any capable finite person S and time t, it is not the case that S is at t nonresistantly in a state of nonbelief in relation to the proposition that God exists.

Schellenberg argues that God is always open to personal relationships if he exists, and that there are people nonresistantly in a state of nonbelief to the proposition that God exists, and so God doesn’t exist.

I want to worry about a logical problem behind (4). Schellenberg attempts to derive (4) from a principle he calls Not Open that says, with some important provisos that won’t matter for this post, that “if a person A … is … in a state of nonbelief in relation to the proposition that B exists” but B could have gotten A to believe that B exists, “then it is not the case that B is … open … to having a personal relationship with A”.

It seems that Schellenberg gets (4) by substituting “God” for “B” in Not Open. But “the proposition that B exists” creates a hyperintensional context for “B”, and hence one cannot blithely substitute equals for equals, or even necessarily coextensive expressions, in Not Open.

Compare: If I have a personal relationship with Clark Kent, I then automatically have a personal relationship with Superman, even if I do not believe the proposition that Superman exists, because Superman and Clark Kent are in fact the same person. It is perhaps necessary for a personal relationship with Superman that I believe of Superman that he exists, but I need not believe it of him under the description “Superman”.

So it seems to me that the only thing Schellenberg can get from Not Open is something like:

(4*) If for any capable finite person S and time t, God is at t open to being in a personal relationship with S at t, then for any capable finite person S and time t, it is not the case that S is at t nonresistantly in a state where he does not believe of God that he (or it) exists.

Now, to believe of x that it exists is to believe, for some y such that in fact y = x, that y exists.

But then all that’s needed to believe of God that he exists is to believe in the existence of something that is in fact coextensive with God. For instance, suppose an atheist believes that her mother is the being that loves her most. Then she presumably believes that the being that loves her most exists. In doing so, she believes of the being that loves her most that it exists. But in fact, assuming theism is true, the being that loves her most is God. So she believes of God that it (or he) exists.

At this point it is really hard to find non-controversial cases of the relevant kind of nonbelief that (4*) expresses. By “non-controversial”, I mean cases that do not presuppose the non-existence of God. For if God does in fact exist, he falls under many descriptions: “The being who loves me most”, “The existent being that Jean Vanier loves the most”, “The most powerful conscious being active on earth”, etc.

It is true that Schellenberg needs only one case. So even if it is true, on the assumption that God exists, that the typical atheist or agnostic believes of God that he exists, perhaps there are some people who don’t. But they will be hard to find—most atheists, I take it, think there is someone who loves them most (or loves them most in some particular respect), etc. I think the most plausible examples are small children and the developmentally challenged. But those aren’t the cases Schellenberg’s argument focuses on, so I assume that’s not the line he would want to push.

The above shows that the doxastic prerequisite for a personal relationship with B is not just believing of B that it exists, since that’s too easy to get. What seems needed (at least if the whole doxastic line is to get off the ground—which I am not confident it does) is to believe of B that it exists and to believe it under a description sufficiently relevant to the relationship. For instance, suppose Alice falsely believes that her brother no longer exists, and suppose that not only does Alice’s brother still exist but he has been working out in secret and is now the fastest man alive. Alice believes that the fastest man alive exists, and mistakenly thinks he is Usain Bolt rather than her brother. So she does count as believing of her brother that he exists, but because she believes this under the description “the fastest man alive”, a description that she wrongly attaches to Bolt, her belief doesn’t help her have a relationship with her brother.

So probably (4*) should be revised to:

(4**) If for any capable finite person S and time t, God is at t open to being in a personal relationship with S at t, then for any capable finite person S and time t, it is not the case that S is at t nonresistantly in a state where he does not believe of God that he (or it) exists, under a description relevant to his personal relationship with God.

This doesn’t destroy the hiddenness argument. But it does make the hiddenness argument harder to defend, for one must find someone who does not believe in anything that would be coextensive with God if God exists under a description that would be relevant to a personal relationship with God. But there are, plausibly, many descriptions of God that would be so relevant.

A different move is to say that there can be descriptions D that in fact are descriptions precisely of x but some cases of believing that D exists are not cases of believing of x that it exists. Again, one will need to introduce some relevance criterion for the descriptions, though.

[Note added later: This was, of course, written before the revelations about Jean Vanier's abusiveness. I would certainly have chosen a different example if I were writing this post now.]

Friday, October 26, 2018

Groups and roles

I’ve had a grad student, Nathan Mueller, do an independent study in social epistemology in the hope of learning from him about the area (and indeed, I have learned much from him), so I’ve been thinking about group stuff once a week (at least). Here’s something that hit me today during our meeting. There is an interesting disanalogy between individuals and groups. Each group is partly but centrally defined by a role, with different groups often having different defining roles. The American Philosophical Association has a role defined by joint philosophical engagement, while the Huaco Bowmen have a role defined by joint archery. But this is not the case for individuals. While individuals have roles, the only roles that it is very plausible to say that they are partly and centrally defined by are general roles that all human beings have, roles like human being or child of God.

This means that if we try to draw analogies between group and individual concepts such as belief or intention, we should be careful to draw the analogy between the group concept and the concept as it applies not just to an individual but to an individual-in-a-role. Thus, the analogy is not between, say, the APA believing some proposition and my believing some proposition, but between the APA believing some proposition and my believing that proposition qua father (or qua philosopher or qua mathematician).

If this is right, then it suggests an interesting research program: Study the attribution of mental properties to individuals-in-roles as a way of making progress on the attribution of analogous properties to groups. For instance, there are well-founded worries in the social epistemology literature about simple ways of moving from the belief of the members of the group to the belief of the group (e.g., attributing to the group any belief held by the majority of the members). These might be seen to parallel the obvious fact that one cannot move from my believing p to my believing p qua father (or qua mathematician). And perhaps if we better understand what one needs to add to my believing p to get that I believe p qua father, this addition will help us understand the group case.

(I should say, for completeness, that my claim that the only roles that human beings are partly and centrally defined by are general roles like human being is controversial. Our recent graduate Mengyao Yan in her very interesting dissertation argues that we are centrally defined by token roles like child of x. She may even be right about the specific case of descent-based roles like child of x, given essentiality of origins, but I do not think it is helpful to analyze the attribution of mental properties to us in general in terms of us having these roles.)