Sunday, October 14, 2018

A simple reductive theory of consciousness

I think it is possible for one mind to have multiple spheres of consciousness. One kind of case is diachronic: there need be no unity of consciousness between my awareness at t1 and my awareness at t2. Split-brain patients provide a synchronic example. (I suppose in both cases one can question whether there is really only one mind, but I’ll assume so.)

What if, then, it turned out that we do not actually have any unconscious mental states? Perhaps what I call “unconscious mental states” are actually conscious states that exist in a sphere of consciousness other than the one connected to my linguistic productions. Maybe it is the sphere of consciousness connected to my linguistic productions that I identify as the “conscious I”, but both spheres are equally mine.

An advantage of such a view would be that we could then accept the following simple reductive account of consciousness:

  • A conscious state just is a mental state.

Of course, this is only a partial reduction: the conscious is reduced to the mental. I am happy with that, as I doubt that the mental can be reduced to the non-mental. But it would be really cool if the mystery of the conscious could be reduced.

However, the above story still doesn’t fully solve the problem of consciousness. For it replaces the puzzle of what makes some of my mental states conscious and others unconscious with the puzzle of what makes a plurality of mental states co-conscious, i.e., part of the same sphere of consciousness. Perhaps, though, this problem is more tractable than the original problem of what makes a state conscious.

Friday, October 12, 2018

Scepticism about culpability

I rarely take myself to know that someone is culpable for some particular wrongdoing. There are three main groups of exception:

  1. my own wrongdoings, so many of which I know by introspection to be culpable

  2. cases where others give me insight into their culpability through their testimony, their expressions of repentance, etc.

  3. cases where divine revelation affirms or implies culpability (e.g., Adam and David).

In type 2 cases, I am also not all that confident, because unless I know a lot about the person, I will worry that they are being unfair to themselves.

I am amazed that a number of people have great confidence that various infamous malefactors are culpable for their grave injustices. Maybe they are, but it seems easier to believe in culpability in the case of more minor offenses than greater ones. For the greater the offense, the further the departure from rationality, and hence the more reason there is to worry about something like temporary or permanent insanity or just crazy beliefs.

I don’t doubt that most people culpably do many bad things, and even that most people on some occasion culpably do something really bad. But I am sceptical of my ability to know which of the really bad things people do they are culpable for.

The difficulty with all this is how it intersects with the penal system. Is there maybe a shallower kind of culpability that is easier to determine and that is sufficient for punishment? I don’t know.

Being mistaken about what you believe

Consider:

  1. I don’t believe (1).

Add that I am opinionated on what I believe:

  2. For each proposition p, I either believe that I believe p or believe that I do not believe p.

Finally, add:

  3. My beliefs are closed under entailment.

Now I either believe (1) or not. If I do not believe (1), then I don’t believe that I don’t believe (1): otherwise, by closure, I would believe (1), since that I don’t believe (1) is just what (1) says. Thus, by (2), I believe that I believe (1). Hence in this case:

  4. I am mistaken about what I do or do not believe.

Now suppose I do believe (1). Then I believe that I don’t believe (1), by closure and by what (1) says. So, (4) is still true.

Thus, we have an argument that if I am opinionated on what I believe and my beliefs are closed under entailment, then I am mistaken as to what I believe.
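
In symbols, as a compressed sketch, writing $B(p)$ for “I believe $p$” and using the fact that (1) just is the proposition that I don’t believe (1):

\[
\begin{array}{ll}
\text{Case } \neg B(1): & \neg B(\neg B(1)) \quad \text{(by (3): else I would believe (1))} \\
 & B(B(1)) \quad \text{(by (2))} \\
 & \text{so } B(B(1)) \wedge \neg B(1)\text{: (4) holds.} \\[4pt]
\text{Case } B(1): & B(\neg B(1)) \quad \text{(by (3) and what (1) says)} \\
 & \text{so } B(\neg B(1)) \wedge B(1)\text{: (4) holds.}
\end{array}
\]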

(Again, we need some way of getting God out of this paradox. Maybe the fact that God’s knowledge is non-discursive helps.)

Wednesday, October 10, 2018

Socratic perfection is impossible

Socrates thought it was important that if you didn't know something, you knew you didn't know it. And he thought that it was important to know what followed from what. Say that an agent is Socratically perfect provided that (a) for every proposition p that she doesn't know, she knows that she doesn't know p, and (b) her knowledge is closed under entailment.

Suppose Sally is Socratically perfect and consider:

  1. Sally doesn’t know the proposition expressed by (1).

If Sally knows the proposition expressed by (1), then (1) is true, and so Sally doesn’t know the proposition expressed by (1). Contradiction!

If Sally doesn’t know the proposition expressed by (1), then she knows that she doesn’t know it. But that she doesn’t know the proposition expressed by (1) just is the proposition expressed by (1). So Sally knows the proposition expressed by (1). Contradiction!

So it seems it is impossible to have a Socratically perfect agent.

(Technical note: A careful reader will notice that I never used closure of Sally’s knowledge. That’s because (1) involves dubious self-reference, and to handle that rigorously, one needs to use Goedel’s diagonal lemma, and once one does that, the modified argument will use closure.)
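
For the curious, here is a sketch of how the diagonalized version might run, writing $K$ for Sally’s knowledge predicate; note that closure now does real work at the penultimate step:

\[
\begin{array}{lll}
1. & K\ulcorner\lambda\urcorner \rightarrow \lambda & \text{(factivity of knowledge)} \\
2. & \lambda \leftrightarrow \neg K\ulcorner\lambda\urcorner & \text{(diagonal lemma)} \\
3. & K\ulcorner\lambda\urcorner \rightarrow \neg K\ulcorner\lambda\urcorner & \text{(1, 2)} \\
4. & \neg K\ulcorner\lambda\urcorner & \text{(3)} \\
5. & K\ulcorner\neg K\ulcorner\lambda\urcorner\urcorner & \text{(4, Socratic condition (a))} \\
6. & K\ulcorner\lambda\urcorner & \text{(5, 2, closure (b))} \\
7. & \text{Contradiction} & \text{(4, 6)}
\end{array}
\]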

But what about God? After all, God is Socratically perfect, since he knows all truths. Well, in the case of God, knowledge is equivalent to truth, so (1)-type sentences just are liar sentences, and so the problem above just is the liar paradox. Alternately, maybe the above argument works for discursive knowledge, while God’s knowledge is non-discursive.

Tuesday, October 9, 2018

Epistemic scores and consistency

Scoring rules measure the distance between a credence and the truth value, where true=1 and false=0. You want this distance to be as low as possible.
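
For instance, the Brier score, a standard (strictly proper) scoring rule, takes the distance to be the squared difference between the credence c and the truth value v:

\[
s(c, v) = (c - v)^2, \qquad v \in \{0, 1\}.
\]

So a credence of 0.9 in a truth scores 0.01, while a credence of 0.9 in a falsehood scores 0.81.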

Here’s a fun paradox. Consider this sentence:

  1. At t1, my credence for (1) is less than 0.1.

(If you want more rigor, use Goedel’s diagonalization lemma to remove the self-reference.) It’s now a moment before t1, and I am trying to figure out what credence I should assign to (1) at t1. If I assign a credence less than 0.1, then (1) will be true, and the epistemic distance between my credence and 1 will be large on any reasonable scoring rule. So, I should assign a credence greater than or equal to 0.1. In that case, (1) will be false, and I want to minimize the epistemic distance between the credence and 0. I do that by letting the credence be exactly 0.1.
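
To make the optimization concrete, here is a minimal Python sketch, assuming the Brier score (squared distance) as the scoring rule; any reasonable rule gives the same verdict:

  # Score of assigning credence c to (1) at t1, where (1) says:
  # "At t1, my credence for (1) is less than 0.1."
  # The truth value of (1) is fixed by c itself: (1) is true iff c < 0.1.
  def brier_score(c):
      truth = 1.0 if c < 0.1 else 0.0  # truth value of (1), given credence c
      return (c - truth) ** 2          # squared distance from the truth value

  for c in [0.0, 0.05, 0.0999, 0.1, 0.11, 0.5]:
      print(f"credence {c:.4f} -> score {brier_score(c):.4f}")

  # Any c < 0.1 makes (1) true and scores above (1 - 0.1)**2 = 0.81;
  # any c >= 0.1 makes (1) false and scores c**2, minimized at c = 0.1.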

So, I should set my credence to be exactly 0.1 to optimize epistemic score. Suppose, however, that at t1 I will remember with near-certainty that I was setting my credence to 0.1. Thus, at t1 I will be in a position to know with near-certainty that my credence for (1) is not less than 0.1, and hence I will have evidence showing with near-certainty that (1) is false. And yet my credence for (1) will be 0.1. Thus, my credal state at t1 will be probabilistically inconsistent.

Hence, there are times when optimizing epistemic score leads to inconsistency.

There are, of course, theorems on the books that optimizing epistemic score requires consistency. But the theorems do not apply to cases where the truth of the matter depends on your credence, as in (1).

Monday, October 8, 2018

Evidentialism, and self-defeating and self-guaranteeing beliefs

Consider this modified version of William James’ mountaineer case: The mountaineer’s survival depends on his jumping over a crevasse, and the mountaineer knows that he will succeed in jumping over the crevasse if he believes he will succeed, but doesn’t know that he will succeed as he doesn’t know whether he will come to believe that he will succeed.

James used his version of the case to argue that pragmatic reasons can legitimately override lack of epistemic reasons.

But what is interesting to me in my variant is the way it provides a counterexample to evidentialism. Evidentialists say that you epistemically should form your beliefs only on the basis of evidence. But notice that although the belief that he will succeed at the jump needs to be formed in the absence of evidence for its truth, as soon as it is formed, the belief itself becomes its own evidence to the point that it turns into knowledge. The belief is self-guaranteeing. So there seems to be nothing to criticize epistemically about the formation of the belief, even though the formation is independent of evidence. In fact, it seems, there is a good epistemic reason to believe, since by believing the mountaineer increases the stock of his knowledge.

Moreover, we can even make the case one where the evidence on balance points against the proposition. Perhaps the mountaineer has attempted, in safer circumstances, to get himself to believe that he can make such a jump, and seven times out of ten he has failed both at the self-induction of belief and at the jump. But in the remaining three times out of ten, he succeeded at both. So, then, the mountaineer has non-conclusive evidence that he won’t manage to believe that he will succeed (and that he won’t succeed). If he comes to believe that he will succeed, he comes to believe this against the evidence—but, still, in so doing, he increases his stock of knowledge, since the belief, once believed, is self-guaranteeing.

(This phenomenon of self-guaranteeing belief reminds me of things that Kierkegaard says about faith, where faith itself is a miracle that hence is evidence for its truth.)

Interestingly, we might also be able to construct cases of well-evidenced but self-defeating beliefs. Consider a jeweler who has noticed that she is successful at cutting a diamond if and only if she believes she will be unsuccessful. Her theory is that belief in her success makes her insufficiently careful. Over time, she has learned to suspend judgment about her success, and hence to be successful. But now she reflects on her history, and she finds herself with evidence that she will be successful in cutting the next diamond. Yet if she believes on this evidence, this will render her overconfident, and hence render the belief false!

This is related to the examples in this paper on lying.

So perhaps what the evidentialist needs to say is that you epistemically may believe p if and only if the evidence says that if you believe p, p is true?

Friday, October 5, 2018

"The" natural numbers

Benacerraf famously pointed out that there are infinitely many isomorphic mathematical structures that could equally well be the referent of “the natural numbers”. Mathematicians are generally not bothered by this underdetermination of the concept of “the natural numbers”, precisely because the different structures are isomorphic.

What is more worrying are the infinitely many elementarily inequivalent mathematical structures that, it seems, could count as “the natural numbers”. (This becomes particularly worrying given that we’ve learned from Goedel that these structures give rise to different notions of provability.)
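
One concrete illustration, assuming the consistency (indeed truth) of PA: the standard model satisfies PA + Con(PA), while by Goedel’s second incompleteness theorem PA + ¬Con(PA) is also consistent and so, by the completeness theorem, has a model:

\[
\mathcal{M}_1 \models \mathrm{PA} + \mathrm{Con}(\mathrm{PA}), \qquad \mathcal{M}_2 \models \mathrm{PA} + \neg\mathrm{Con}(\mathrm{PA}).
\]

These two structures satisfy all the usual axioms yet disagree on an arithmetical sentence, and hence are elementarily inequivalent.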

(I suppose this is a kind of instance of the Kripke-Wittgenstein puzzles.)

In response, here is a start of a story. Those claims about the natural numbers that differ in truth value between models are vague. We can then understand this vagueness epistemically or in some more beefy way.

An attractive second step is to understand it epistemically, and then say that God connects us up with his preferred class of equivalent models of the naturals.

Color-remapping

I’ve often wondered what would happen if one wore color-remapping glasses. Would the brain adapt, so that everything would soon start looking normal, much as it does when you wear vision-inverting prisms? Well, it turns out that there is an interesting two-subject study (the subjects were the investigators) using LCD glasses linked to a camera that applied a rotation of color space. They found interesting results, but over six days there was no adaptation: stop signs still looked blue, the sky still looked green, and broccoli was red. This is not conclusive, since six days might just not be long enough. I would love to see the results of a longer study.

The philosophical relevance, of course, is to inverted-spectrum thought experiments.

Thursday, October 4, 2018

Panrepresentationalism

Panpsychists, as the term is commonly understood, think everything is conscious. An attractive but underexplored view is that everything nonderivatively represents. This was Leibniz's view, I suspect. One can add to this a reduction of experience to representation, but one does not have to.

Tuesday, October 2, 2018

When God doesn't act for some reason

Here’s an argument for a thesis that pushes one closer to omnirationality.

  1. God is responsible for all contingent facts about his will.

  2. No one is responsible for anything that isn’t an action (perhaps internal) done for a reason or the result of such an action.

  3. If God doesn’t act on a reason R that he could have acted on, that’s a contingent fact about his will.

  4. So, if God doesn’t act on a reason R, then either (a) God couldn’t have acted on R, or (b) God’s not acting on R itself has a reason S behind it.

Thursday, September 27, 2018

Learning without change in beliefs

There are books of weird mathematical things (e.g., functions with strange properties) to draw on for the sake of generating counterexamples to claims. This post is in the spirit of an entry in one of these books, but it’s philosophy, not mathematics.

Surprising fact: You can learn something without gaining or losing any beliefs.

For suppose proposition q in fact follows from proposition p, and at t1 you have an intellectual experience as of seeing q to follow from p. On the basis of that experience you form the justified and true belief that q follows from p. This belief would be knowledge, but alas the intellectual experience came from a chemical imbalance in the brain rather than from your mastery of logic. So you don’t know that q follows from p.

Years later, you consider q and p again, and you once again have an experience of q following from p. This time, however, the experience does come from your mastery of logic. This time you see, and not just think you see, that q follows from p. Your belief is now overdetermined: there is a Gettiered path to it and a new non-Gettiered path to it. The new path makes the belief be knowledge. But to gain knowledge is to learn.

But this gain of knowledge need not be accompanied by the loss of any beliefs. For instance, the new experience of q following from p doesn’t yield a belief that your previous experience was flawed. Nor need there be any gain of beliefs. For while you might form the second-order belief that you see q following from p, you need not. You might just see that q follows from p, and form merely the belief that q follows from p, without forming any belief about your inner state. After all, this is surely more the rule than the exception in the case of sensory perception. When I see my colleague in the hallway, I will often form the belief that she is in the hallway rather than the self-regarding belief that I see her in the hallway. (Indeed, likely, small children and most non-human animals never form the “I see” belief.) And surely this phenomenon is not confined to the case of sensory perception. At least, it is possible to have intellectual perceptions where we form only the first-order belief, and no self-regarding second-order belief.

So, it is possible to learn something without gaining or losing beliefs.

In fact, plausibly, the original flawed experience could have been so clear that we were fully certain that q follows from p. In that case, the new experience not only need not change any of our beliefs, but need not even change our credences. The credence was 1 before, and it can’t go up from there.

OK, so we have a counterexample. Can we learn anything from it?

Well, here are two things. One might use the story to buttress the idea that even knowledge of important matters—after all, the relation between q and p might be important—is of little value. For it seems of very little value to gain knowledge when it doesn’t change how one thinks about anything. One might also use it to argue that either understanding doesn’t require knowledge or that understanding doesn’t have much value. For if understanding does require knowledge, then one could set up a story where by learning that q follows from p one gains understanding—without that learning resulting in any change in how one thinks about things. Such a change seems of little worth, and hence the understanding gained is of little worth.

Tuesday, September 25, 2018

Faith and belief

Christians are called to have faith in Jesus Christ.

The Old Testament, however, is big on not putting our faith in anything other than God.

Thus, someone who has faith in Jesus Christ but does not believe that Jesus Christ is God risks violating a central principle of the Old Testament.

Moreover, faith in Jesus requires submission to Jesus. But Jesus wants his followers to obey the central principles of the Old Testament.

Thus, for someone aware of these observations, it is not possible to have faith in Jesus Christ without believing that he is God. This is a serious problem for accounts of faith that claim that a Christian need not have any doctrinal beliefs.

Friday, September 21, 2018

Lottery cases and Bayesianism

Here’s a curious thing. The ideal Bayesian reasons about all contingent cases just as she reasons about lottery cases. If the reasoning doesn’t yield knowledge in lottery cases (i.e., if the ideal Bayesian can’t know that she won’t win the lottery), it doesn’t yield knowledge in any contingent case. So, if the ideal Bayesian doesn’t know in lottery cases, she knows nothing contingent, which is absurd. So she knows in lottery cases, I say.

Wednesday, September 19, 2018

Gettier and lottery cases

I am going to give some cases that support this thesis.

  1. If you can know that you won’t win the lottery, then in typical Gettier cases you are in a position to know, in the sense that there is a line of statistical reasoning such that if you engage in it, then you know.

There are two conclusions you could draw. You might think, as I do, that you know you won’t win the lottery (assuming that in fact you won’t win). In that case, (1) will offer something rather cool: it will tell you that if you reason in a statistical sort of way, you can get knowledge where instead you would have had mere justified true belief. If knowledge is valuable, that will be a good reason to reason in that statistical way rather than the more usual way.

Or you might think the consequent of (1) is absurd, and conclude that you don’t know you won’t win the lottery.

Start with the colorful lottery case. In this case, there are 1000 lottery tickets, each in a sealed envelope. Of these, 990 are printed on white paper, and they are all losing tickets. The remaining 10 tickets are printed on red paper, and 9 of them are winning tickets. You’re handed a sealed envelope. So, with 990 + 1 = 991 losing tickets out of 1000, you have a 99.1% probability of losing. I will assume that that’s good enough for knowledge. (If not, tweak the numbers. More on that issue later.) Suppose you come to believe you will lose on the basis of this argument:

  2. You have a ticket.

  3. If you have a ticket, it’s very likely that it’s a white losing ticket or a red losing ticket.

  4. So, very likely it’s a losing ticket.

  5. So (ampliatively) it’s a losing ticket.

Suppose, further, that in fact you’re right—it is a losing ticket. Then, assuming you know in lottery cases, your belief that you will lose is knowledge. And I further submit that it doesn’t matter for knowledge whether your ticket is actually printed on white or red paper. All that matters for knowledge is that it is in fact a losing ticket. The reason it doesn’t matter is that the color of the tickets is just noise in the story. Clearly, if your ticket is white, you know you’ll lose. But you also know it if it’s red. For regardless of which losing ticket you have, there is always some property (perhaps a disjunctive one) such that your losing ticket and all nine winning tickets have it. That that property is redness doesn’t seem to matter at all.

So, I take it that if you can know that you won’t win the lottery, then in the colorful lottery case you know you won’t win when in fact you have a losing ticket—even if that losing ticket is red.
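
As a sanity check on the case’s arithmetic, here is a small Python sketch simulating the ticket counts stipulated above:

  import random

  # Colorful lottery: 990 white losing, 1 red losing, 9 red winning tickets.
  tickets = ([("white", "lose")] * 990
             + [("red", "lose")] * 1
             + [("red", "win")] * 9)

  trials = 100_000
  losses = sum(random.choice(tickets)[1] == "lose" for _ in range(trials))
  print(f"estimated P(lose) = {losses / trials:.3f}")  # about 0.991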

Now let’s move to Gettier cases. Take a standard Gettier case where you see what looks like a sheep in the field, and you come to believe that it is a sheep, and so you conclude that there is a sheep in the field. But in fact what you see isn’t a sheep but a dog standing in front of a sheep. So, you have a justified true belief that there is a sheep, but you don’t know it. But suppose that instead of reasoning through the claim that you see a sheep, you reason as follows:

  6. You seem to see a sheep.

  7. If you seem to see a sheep, it’s very likely that you see a sheep in the field or there is an unseen sheep in the field.

  8. So, very likely there is a sheep in the field.

  9. So (ampliatively) there is a sheep in the field.

This seems to be perfectly good reasoning. It clearly would be good reasoning if the consequent of (7) were just “you see a sheep in the field”. But adding the “unseen sheep” disjunct only makes the reasoning better. Moreover, this reasoning is exactly parallel to the colorful lottery case. So just as in the colorful lottery case you know that you have a losing ticket regardless of whether you have a white ticket or a red losing ticket, likewise in this case, you know that there is a sheep in the field regardless of whether you see a sheep in the field or there is an unseen sheep in the field.

So, it seems that although reasoning via the claim that what you see is a sheep would not lead to knowledge—Gettier is right about that—you had an alternate statistical form of reasoning (6)–(9) that would have given you knowledge.

If knowledge is valuable—something I doubt—that’s reason to prefer such statistical forms of reasoning.

Objection 1: In lottery cases you only know when the probabilities are overwhelming while in ordinary perceptual knowledge cases you know with much smaller probabilities. Thus, perhaps, you only know in lottery cases when the chance of winning is something like one in a million, but in perceptual cases you can know even if there is a 1% chance of the observations going wrong. If so, then standard Gettier cases may not be parallel to those (colorized) lottery cases where there is knowledge, because the “very likely” in (3) will have to be much stronger for knowledge than in (7).

Response: If you can know in perceptual cases at, say, 99% probability but need a much higher probability in lottery cases, you have the absurd consequence that you can say things like: “I think it’s a lot more likely that I lost the lottery than that I see a sheep. But I know that I see a sheep and I don’t know that I lost the lottery.” Now, this sort of reasoning will not appeal to everyone. But I think it is one of the main intuitions behind the thought that you know in lottery cases at all. And my argument was predicated on knowing in lottery cases.

Moreover, even if there is the sort of difference that the objection claims, we still get the very interesting result that in some Gettier cases—say, ones that involve highly reliable perception—we can gain knowledge if we switch to statistical reasoning.

Objection 2: You don’t know you will lose in the colorful lottery case when in fact you have a red losing ticket but you do know when in fact you have a white ticket.

Response: If that case doesn’t convince you, consider this variant. A doctor has a treatment for a condition that occasionally, but rarely, goes away on its own. The treatment is highly effective. Most of the time it fixes the condition. The doctor reasons:

  10. You will get the treatment.

  11. If you will get the treatment, very likely either you will recover because of it or you will recover spontaneously.

  12. So, very likely you will recover.

  13. So (ampliatively) you will recover.

And this reasoning seems to yield knowledge—at least if we admit knowledge of chancy future things, as friends of knowledge in lottery cases must—as long as you will in fact recover, and it will yield knowledge regardless of whether you recover because of the treatment or spontaneously.

Monday, September 17, 2018

Non-propositional conveyance

One sometimes hears claims like:

  1. There are things that can be conveyed through X (poetry, novels, film, art, music, etc.) that cannot be conveyed propositionally.

But what kind of a thing are those things? Facts? Not quite. For while some of the “things that can be conveyed … that cannot be conveyed propositionally” are in fact real and true, some are not. Leni Riefenstahl’s Triumph of the Will and Fritz Lang’s M are both good candidates for conveying “things … that cannot be conveyed propositionally”. But Triumph in doing so conveys falsehoods about the Nazi Party while M conveys truths about the human condition. But facts just are. So, the “things” are not just facts.

What I said about Triumph and M is very natural. But if we take it literally, the “things” must then be the sorts of things that can be true or false. But the primary bearers of truth are propositions. So when we dig deeper, (1) is undermined. For surely we don’t want to say that Triumph and M convey propositions that cannot be conveyed propositionally.

Perhaps, though, this was too quick. While I did talk of truth and falsehood initially, perhaps I could have talked of obtaining and not obtaining. If I did that, then maybe the “things” would have turned out to be states of affairs (technically, of the abstract Plantinga sort, not of the Armstrong sort). But I think there is good reason to prefer propositions to states of affairs here. First, it is dubious whether there are impossible states of affairs. But not only can X convey things that aren’t so, it can also convey things that couldn’t be so. A novel or film might convey ethical stuff that not only is wrong, but couldn’t be right. Second, what is conveyed is very fine-grained, and it seems unlikely to me that states of affairs are fine-grained enough. The right candidate seems to be not only propositions, but Fregean propositions.

But (1) still seems to be getting at something true. I think (1) is confusing “propositionally” with “by means of literalistic fact-stating affirmative sentences”. Indeed:

  2. There are things that can be conveyed through X (poetry, novels, film, art, music, etc.) that cannot be conveyed by means of literalistic fact-stating affirmative sentences.

(Note the importance of the word “conveyed”. If we had “expressed”, that might be false, because for any of the “things”, we could stipulate a zero-place predicate, say “xyzzies”, and then express it with “It xyzzies.” But while that sentence manages to express the proposition, it doesn’t convey it.)