Tuesday, October 16, 2018

Two tenure track jobs at Baylor

We have two tenure-track jobs at Baylor. Both are open, but we have different preferred (but not required) specializations for them:

  • Job 1: roughly, LEMM and its history

  • Job 2: roughly, non-LEMM and its history

Yet another reason we need social epistemology

Consider forty rational people each individually keeping track of the ethnicities and virtue/vice of the people they interact with and hear about (admittedly, one wonders why a rational person would do that!). Even if there is no statistical connection—positive or negative—between being Polish and being morally vicious, random variation in samples means that we would expect two of the forty people to gain evidence that there is a statistically significant connection—positive or negative—between being Polish and being morally vicious at the p = 0.05 level. We would, further, intuitively expect that one in the forty would come to conclude on the basis of their individual data that there is a statistically significant negative connection between Polishness and vice and one that there is a statistically significant positive connection.

It seems to follow that for any particular ethnic or racial or other group, at the fairly standard p = 0.05 significance level, we would expect about one in forty rational people to have a rational racist-type view about any particular group’s virtue or vice (or any other qualities).

If this line of reasoning is correct, it seems that it is uncharitable to assume that a particular racist’s views are irrational. For there is a not insignificant chance that they are just one of the unlucky rational people who got spurious p = 0.05 level confirmation.

Of course, the prevalence of racism in the US appears to be far above the 1/40 number above. However, there is a multiplicity of groups one can be a racist about, and the 1/40 number is for any one particular group. With five groups, we would expect approximately 5/40 = 1/8 of rational people (more precisely, a fraction of 1 − (39/40)^5 ≈ 0.12) to get p = 0.05 confirmation of a racist-type hypothesis about at least one of the groups. That’s still presumably significantly below the actual prevalence of racism.
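Here is a minimal Monte Carlo sketch of this arithmetic. The sample sizes, the base rate of vice, and the use of a chi-square test are illustrative assumptions of mine, not part of the argument above:

```python
# 40 rational observers each test for an association between group
# membership and vice, when in fact there is none.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
n_observers, n_per_group, base_rate = 40, 200, 0.2

false_positives = 0
for _ in range(n_observers):
    # Both groups have the SAME true rate of vice.
    vicious_a = rng.binomial(n_per_group, base_rate)
    vicious_b = rng.binomial(n_per_group, base_rate)
    table = [[vicious_a, n_per_group - vicious_a],
             [vicious_b, n_per_group - vicious_b]]
    _, p, _, _ = chi2_contingency(table, correction=False)
    if p < 0.05:
        false_positives += 1  # a spurious "significant" connection

print(false_positives, "of", n_observers, "observers found a connection")
# Expect about 0.05 * 40 = 2, roughly one in each direction.

# Chance that a given observer gets a spurious result for at least one
# of five groups:
print(1 - (39 / 40) ** 5)  # about 0.119, i.e., roughly 1/8
```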

But in any case this line of reasoning is not correct. For we are not individual data gatherers. We have access to other people’s data. The widespread agreement about the falsity of racist-type claims is also evidence, evidence that would not be undercut by a mere p = 0.05 level result of one’s individual study.

So, we need social epistemology to combat racism.

Sunday, October 14, 2018

A simple reductive theory of consciousness

I think it is possible for one mind to have multiple spheres of consciousness. One kind of case is diachronic: there need be no unity of consciousness between my awareness at t1 and my awareness at t2. Split-brain patients provide a synchronic example. (I suppose in both cases one can question whether there is really only one mind, but I’ll assume so.)

What if, then, it turned out that we do not actually have any unconscious mental states? Perhaps what I call “unconscious mental states” are actually conscious states that exist in a sphere of consciousness other than the one connected to my linguistic productions. Maybe it is the sphere of consciousness connected to my linguistic productions that I identify as the “conscious I”, but both spheres are equally mine.

An advantage of such a view would be that we could then accept the following simple reductive account of consciousness:

  • A conscious state just is a mental state.

Of course, this is only a partial reduction: the conscious is reduced to the mental. I am happy with that, as I doubt that the mental can be reduced to the non-mental. But it would be really cool if the mystery of the conscious could be reduced.

However, the above story still doesn’t fully solve the problem of consciousness. For it replaces the puzzle of what makes some of my mental states conscious and others unconscious with the puzzle of what makes a plurality of mental states co-conscious, i.e., part of the same sphere of consciousness. Perhaps, though, this latter problem is more tractable than the problem of what makes a state conscious?

Friday, October 12, 2018

Scepticism about culpability

I rarely take myself to know that someone is culpable for some particular wrongdoing. There are three main groups of exceptions:

  1. my own wrongdoings, so many of which I know by introspection to be culpable

  2. cases where others give me insight into their culpability through their testimony, their expressions of repentance, etc.

  3. cases where divine revelation affirms or implies culpability (e.g., Adam and David).

In type 2 cases, I am also not all that confident, because unless I know a lot about the person, I will worry that they are being unfair to themselves.

I am amazed that a number of people have great confidence that various infamous malefactors are culpable for their grave injustices. Maybe they are, but it seems easier to believe in culpability in the case of more minor offenses than greater ones. For the greater the offense, the further the departure from rationality, and hence the more reason there is to worry about something like temporary or permanent insanity or just crazy beliefs.

I don’t doubt that most people culpably do many bad things, and even that most people on some occasion culpably do something really bad. But I am sceptical of my ability to know which of the really bad things people do they are culpable for.

The difficulty with all this is how it intersects with the penal system. Is there maybe a shallower kind of culpability that is easier to determine and that is sufficient for punishment? I don’t know.

Being mistaken about what you believe

Consider:

  1. I don’t believe (1).

Add that I am opinionated on what I believe:

  2. For each proposition p, I either believe that I believe p or believe that I do not believe p.

Finally, add:

  3. My beliefs are closed under entailment.

Now I either believe (1) or not. Suppose first that I do not believe (1). Then I don’t believe that I don’t believe (1): for that I don’t believe (1) is equivalent to (1), so by (3), if I believed that I don’t believe (1), I would believe (1). Thus, by (2), I do believe that I do believe (1). Hence in this case:

  4. I am mistaken about what I do or do not believe.

Now suppose I do believe (1). Then I believe that I don’t believe (1), by closure and by what (1) says. So, (4) is still true.

Thus, we have an argument that if I am opinionated on what I believe and my beliefs are closed under entailment, then I am mistaken as to what I believe.
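For readers who like such arguments machine-checked, here is a minimal Lean 4 sketch of the case analysis. It is my gloss, not part of the original argument: belief is modeled as an operator B on propositions, and the self-reference in (1) is replaced by an explicit equivalence hypothesis, which the diagonal lemma would make rigorous:

```lean
-- B models "I believe"; s models sentence (1), with its self-reference
-- replaced by the hypothesis diag : s ↔ ¬ B s.
theorem mistaken_about_belief
    (B : Prop → Prop) (s : Prop)
    (diag : s ↔ ¬ B s)                          -- (1): s says "I don't believe s"
    (opin : ∀ p : Prop, B (B p) ∨ B (¬ B p))    -- (2): opinionation
    (clos : ∀ p q : Prop, (p → q) → B p → B q)  -- (3): closure under entailment
    : (B (B s) ∧ ¬ B s) ∨ (B (¬ B s) ∧ B s) := by  -- (4): a mistake about belief
  cases Classical.em (B s) with
  | inl h =>
    -- I believe s; s entails ¬ B s, so by closure I believe ¬ B s,
    -- even though B s holds: mistaken.
    exact Or.inr ⟨clos s (¬ B s) diag.mp h, h⟩
  | inr h =>
    -- I don't believe s. Believing ¬ B s would, by closure, yield B s.
    have h1 : ¬ B (¬ B s) := fun hb => h (clos (¬ B s) s diag.mpr hb)
    cases opin s with
    | inl h2 => exact Or.inl ⟨h2, h⟩  -- I believe that I believe s, but don't.
    | inr h2 => exact absurd h2 h1
```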

(Again, we need some way of getting God out of this paradox. Maybe the fact that God’s knowledge is non-discursive helps.)

Wednesday, October 10, 2018

Socratic perfection is impossible

Socrates thought it was important that if you didn't know something, you knew you didn't know it. And he thought that it was important to know what followed from what. Say that an agent is Socratically perfect provided that (a) for every proposition p that she doesn't know, she knows that she doesn't know p, and (b) her knowledge is closed under entailment.

Suppose Sally is Socratically perfect and consider:

  1. Sally doesn’t know the proposition expressed by (1).

If Sally knows the proposition expressed by (1), then (1) is true, and so Sally doesn’t know the proposition expressed by (1). Contradiction!

If Sally doesn’t know the proposition expressed by (1), then she knows that she doesn’t know it. But that she doesn’t know the proposition expressed by (1) just is the proposition expressed by (1). So Sally knows the proposition expressed by (1) after all. Contradiction!

So it seems it is impossible to have a Socratically perfect agent.

(Technical note: A careful reader will notice that I never used closure of Sally’s knowledge. That’s because (1) involves dubious self-reference, and to handle that rigorously, one needs to use Goedel’s diagonal lemma, and once one does that, the modified argument will use closure.)
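On that note, here is a minimal Lean 4 sketch of the argument on my gloss of the assumptions. Exactly as the technical note predicts, once the self-reference is modeled by an explicit equivalence hypothesis rather than by literal identity, closure gets used:

```lean
-- K models Sally's knowledge; s models sentence (1).
theorem no_socratic_perfection
    (K : Prop → Prop) (s : Prop)
    (diag : s ↔ ¬ K s)                          -- (1): s says "Sally doesn't know s"
    (fact : ∀ p : Prop, K p → p)                -- knowledge is factive
    (socr : ∀ p : Prop, ¬ K p → K (¬ K p))      -- (a): knowing what she doesn't know
    (clos : ∀ p q : Prop, (p → q) → K p → K q)  -- (b): closure under entailment
    : False := by
  -- Sally can't know s: knowing s would make s true, i.e., make s unknown.
  have hns : ¬ K s := fun h => diag.mp (fact s h) h
  -- So she knows ¬ K s, and by closure she knows s. Contradiction.
  exact hns (clos (¬ K s) s diag.mpr (socr s hns))
```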

But what about God? After all, God is Socratically perfect, since he knows all truths. Well, in the case of God, knowledge is equivalent to truth, so (1)-type sentences just are liar sentences, and so the problem above just is the liar paradox. Alternately, maybe the above argument works for discursive knowledge, while God’s knowledge is non-discursive.

Tuesday, October 9, 2018

Epistemic scores and consistency

Scoring rules measure the distance between a credence and the truth value, where true = 1 and false = 0. You want this distance to be as small as possible.

Here’s a fun paradox. Consider this sentence:

  1. At t1, my credence for (1) is less than 0.1.

(If you want more rigor, use Goedel’s diagonalization lemma to remove the self-reference.) It’s now a moment before t1, and I am trying to figure out what credence I should assign to (1) at t1. If I assign a credence less than 0.1, then (1) will be true, and the epistemic distance between my credence and 1 will be large on any reasonable scoring rule. So, I should assign a credence greater than or equal to 0.1. In that case, (1) will be false, and I want to minimize the epistemic distance between the credence and 0. I do that by letting the credence be exactly 0.1.
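A quick numeric check, using the Brier score (the squared distance between credence and truth value); the specific choice of scoring rule is my illustrative assumption:

```python
# Brier score of a credence c for sentence (1), which is true iff c < 0.1.
def brier(credence: float, truth: int) -> float:
    return (credence - truth) ** 2

for c in [0.05, 0.09, 0.10, 0.11, 0.50]:
    truth = 1 if c < 0.1 else 0  # (1)'s truth value depends on the credence
    print(f"credence {c:.2f}: truth {truth}, score {brier(c, truth):.4f}")

# Any credence below 0.1 makes (1) true and scores worse than
# (1 - 0.1)^2 = 0.81; any credence of at least 0.1 makes (1) false and
# scores credence^2, which is minimized at exactly 0.1 (score 0.01).
```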

So, I should set my credence to be exactly 0.1 to optimize epistemic score. Suppose, however, that at t1 I will remember with near-certainty that I was setting my credence to 0.1. Thus, at t1 I will be in a position to know with near-certainty that my credence for (1) is not less than 0.1, and hence I will have evidence showing with near-certainty that (1) is false. And yet my credence for (1) will be 0.1. Thus, my credential state at t1 will be probabilistically inconsistent.

Hence, there are times when optimizing epistemic score leads to inconsistency.

There are, of course, theorems on the books showing that optimizing epistemic score requires consistency. But those theorems do not apply to cases where the truth of the matter depends on your credence, as in (1).

Monday, October 8, 2018

Evidentialism, and self-defeating and self-guaranteeing beliefs

Consider this modified version of William James’ mountaineer case: The mountaineer’s survival depends on his jumping over a crevasse, and the mountaineer knows that he will succeed in jumping over the crevasse if he believes he will succeed, but doesn’t know that he will succeed as he doesn’t know whether he will come to believe that he will succeed.

James used his version of the case to argue that pragmatic reasons can legitimately override lack of epistemic reasons.

But what is interesting to me in my variant is the way it provides a counterexample to evidentialism. Evidentialists say that you epistemically should form your beliefs only on the basis of evidence. But notice that although the belief that he will succeed at the jump needs to be formed in the absence of evidence for its truth, as soon as it is formed, the belief itself becomes its own evidence to the point that it turns into knowledge. The belief is self-guaranteeing. So there seems to be nothing to criticize epistemically about the formation of the belief, even though the formation is independent of evidence. In fact, it seems, there is a good epistemic reason to believe, since by believing the mountaineer increases the stock of his knowledge.

Moreover, we can even make the case be one where the evidence on balance points against the proposition. Perhaps the mountaineer has attempted, in safer circumstances, to get himself to believe that he can make such a jump, and seven times out of ten he has failed both at the self-induction of belief and at the jump. But in the remaining three times out of ten, he succeeded at both. So, then, the mountaineer has non-conclusive evidence that he won’t manage to believe that he will succeed (and that he won’t succeed). If he comes to believe that he will succeed, he comes to believe this against the evidence—but, still, in so doing, he increases his stock of knowledge, since the belief, once believed, is self-guaranteeing.
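Here is a tiny numeric gloss, on my illustrative reading of the history as supplying base rates:

```python
# The mountaineer's history: in 3 of 10 past attempts he managed to form
# the belief, and exactly in those cases he made the jump.
p_belief = 0.3                 # chance he manages to believe he'll succeed
p_success_given_belief = 1.0   # the belief is self-guaranteeing
p_success_given_no_belief = 0.0

# Before the belief is formed, the total evidence points against success:
p_success = (p_belief * p_success_given_belief
             + (1 - p_belief) * p_success_given_no_belief)
print(p_success)  # 0.3: on balance, evidence that he won't succeed

# Conditional on the belief actually being formed, success is certain:
print(p_success_given_belief)  # 1.0: the belief is its own evidence
```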

(This phenomenon of self-guaranteeing belief reminds me of things that Kierkegaard says about faith, where faith itself is a miracle that hence is evidence for its truth.)

Interestingly, we might also be able to construct cases of well-evidenced but self-defeating beliefs. Consider a jeweler who has noticed that she is successful at cutting a diamond if and only if she believes she will be unsuccessful. Her theory is that belief in her success makes her insufficiently careful. Over time, she has learned to suspend judgment about her success, and hence to be successful. But now she reflects on her history, and she finds herself with evidence that she will be successful in cutting the next diamond. Yet if she believes on this evidence, this will render her overconfident, and hence render the belief false!

This is related to the examples in this paper on lying.

So perhaps what the evidentialist needs to say is that you epistemically may believe p if and only if the evidence says that if you believe p, p is true?

Friday, October 5, 2018

"The" natural numbers

Benacerraf famously pointed out that there are infinitely many isomorphic mathematical structures that could equally well be the referent of “the natural numbers”. Mathematicians are generally not bothered by this underdetermination of the concept of “the natural numbers”, precisely because the different structures are isomorphic.

What is more worrying are the infinitely many elementarily inequivalent mathematical structures that, it seems, could count as “the natural numbers”. (This becomes particularly worrying given that we’ve learned from Goedel that these structures give rise to different notions of provability. For instance, if arithmetic is consistent, some models of arithmetic satisfy a consistency statement like Con(PA) while others deny it, the latter counting some non-standard “proof” of a contradiction as a proof.)

(I suppose this is a kind of instance of the Kripke-Wittgenstein puzzles.)

In response, here is a start of a story. Those claims about the natural numbers that differ in truth value between models are vague. We can then understand this vagueness epistemically or in some more beefy way.

An attractive second step is to understand it epistemically, and then say that God connects us up with his preferred class of equivalent models of the naturals.

Color-remapping

I’ve often wondered what would happen if one wore color-remapping glasses. Would the brain adapt, so that everything would soon start looking normal, much as it does when you wear vision-inverting prisms? Well, it turns out that there is an interesting two-subject study (the subjects were the investigators) using LCD glasses linked to a camera that applied a rotation of color space. They found interesting results, but no adaptation over six days: stop signs still looked blue, the sky still looked green, and broccoli was red. This is not conclusive, since six days might just not be long enough. I would love to see the results of a longer study.

The philosophical relevance, of course, is to inverted-spectrum thought experiments.

Thursday, October 4, 2018

Panrepresentationalism

Panpsychists, as the term is commonly understood, think everything is conscious. An attractive but underexplored view is that everything nonderivatively represents. This was Leibniz's view, I suspect. One can add to this a reduction of experience to representation, but one does not have to.

Tuesday, October 2, 2018

When God doesn't act for some reason

Here’s an argument for a thesis that pushes one closer to omnirationality.

  1. God is responsible for all contingent facts about his will.

  2. No one is responsible for anything that isn’t an action (perhaps internal) done for a reason or the result of such an action.

  3. If God doesn’t act on a reason R that he could have acted on, that’s a contingent fact about his will.

  4. So, if God doesn’t act on a reason R, then either (a) God couldn’t have acted on R, or (b) God’s not acting on R itself has a reason S behind it.