Wednesday, March 28, 2018

A responsibility remover

Suppose soft determinism is true: the world is deterministic and yet we are responsible for our actions.

Now imagine a device that can be activated at a time when an agent is about to make a decision. The device reads the agent’s mind, figures out which action the agent is determined to choose, and then modifies the agent’s mind so the agent doesn’t make any decision but is instead compelled to perform the very action that they would otherwise have chosen. Call the device the Forcer.

Suppose you are about to make a difficult choice: whether or not to post a slanderous anonymous accusation about an enemy of yours that will go viral and ruin his life. It is known that once the message is posted, there will be no way to undo the bad effects. Neither you nor I know how you will choose. I now activate the Forcer on you, and it makes you post the slander. Your enemy’s life is ruined. But you are not responsible for ruining it, because you didn’t choose to ruin it. You didn’t choose anything. The Forcer made you do it. Granted, you would have done it anyway. So it seems you have just had a rather marvelous piece of luck: you avoided culpability for a grave wrong and your enemy’s life is irreparably ruined.

What about me? Am I responsible for ruining your enemy’s life? Well, first, I did not know that my activation of the Forcer would cause this ruin. And, second, I knew that my activation of the Forcer would make no difference to your enemy: he would have been ruined given the activation if and only if he would have been ruined without it. So it seems that I, too, have escaped responsibility for ruining your enemy’s life. I am, however, culpable for infringing on your autonomy. But given how glad you are that your enemy’s life was ruined without your having any culpability, no doubt you will forgive me.

Now imagine instead that you activated the Forcer on yourself, and it made you post the slander. Then for exactly the same reasons as before, you aren’t culpable for ruining your enemy’s life. For you didn’t choose to post the slander. And you didn’t know that activating the Forcer would cause this ruin, while you did know that the activation wouldn’t make any difference to your enemy: activating the Forcer on yourself would not affect whether the message would be posted. Moreover, the charge of infringing on autonomy has much less force when you activated the Forcer yourself.

It is true that by activating the Forcer you lost something: you lost the possibility of being praiseworthy for choosing not to post the slander. But that’s a loss that you might judge worthwhile.

So, given soft determinism, it is in principle possible to avoid culpability while still getting the exact same results whenever you don’t know prior to deliberation how you will choose. This seems absurd, and the absurdity gives us a reason to reject the compatibility of determinism and responsibility.

But the above story can be changed to worry libertarians, too. Suppose the Forcer reads off its patient’s mind the probabilities (i.e., chances) of the various choices, and then randomly selects an action with the probabilities of the various options exactly the same as the patient would have had. Then in activating the Forcer, it can still be true that you didn’t know how things would turn out. And while there is no longer a guarantee that things would turn out with the Forcer as they would have without it, it is true that activating the Forcer doesn’t affect the probabilities of the various actions. In particular, in the cases above, activating the Forcer does nothing to make it more likely that your enemy would be slandered. So it seems that once again activating the Forcer on yourself is a successful way of avoiding responsibility.

But while that is true, it is also true that if libertarianism is true, regular activation of the Forcer will change the shape of one’s life, because there is no guarantee that the Forcer will decide just like you would have decided. So while on the soft determinist story, regular use of the Forcer lets one get exactly the same outcome as one would otherwise have had, on the libertarian version, that is no longer true. Regular use of the Forcer on libertarianism should be scary—for it is only a matter of chance what outcome will happen. But on compatibilism, we have a guarantee that use of the Forcer won’t change what action one does. (Granted, one may worry that regular use of the Forcer will change one’s desires in ways that are bad for one. If we are worried about that, we can suppose that the Forcer erases one’s memory of using it. That has the disadvantage that one may feel guilty when one isn’t.)

I don’t know that libertarians are wholly off the hook. Just as the Forcer thought experiment makes it implausible to think that responsibility is compatible with determinism, it also makes it implausible to think that responsibility is compatible with there being precise objective chances of what choices one will make. So perhaps the libertarian would do well to adopt the view that there are no precise objective chances of choices (though there might be imprecise ones).

Tuesday, March 27, 2018

Closure for credence thresholds is atypical

In an earlier post, I speculated about thresholds and closure without doing any calculations. Now it’s time to do some calculations.

The Question: If you have two propositions that meet a credential threshold, how likely is it that their conjunction does as well? I.e., how likely is closure to hold for pairs of propositions meeting the threshold?

Model 1: Take a probability space with N points. Assign a credence to each of the N points by uniformly choosing a random number in some fixed range, and then normalizing so total probability is 1. Now among the 2^N (up to equivalence) propositions about points in the probability space, choose two at random subject to the constraint that they both meet the threshold condition. Check if their conjunction meets the threshold condition. Repeat. The source code is here (MIT license).
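The linked source code is not reproduced here, but the procedure can be sketched in Python along the following lines (the function name, default trial count, and brute-force subset enumeration are my choices, not the original implementation):

```python
import random

def closure_rate(N, threshold, trials=200):
    """Estimate how often the conjunction of two randomly chosen
    above-threshold propositions itself meets the threshold."""
    hits = 0
    for _ in range(trials):
        # Random credence for each of the N points, normalized to sum to 1.
        w = [random.random() for _ in range(N)]
        total = sum(w)
        p = [x / total for x in w]

        # A proposition is a subset of the N points, encoded as a bitmask;
        # its probability is the sum of the credences of its points.
        def prob(s):
            return sum(p[i] for i in range(N) if (s >> i) & 1)

        # All propositions meeting the threshold; the full set always does,
        # so this list is never empty.
        above = [s for s in range(1 << N) if prob(s) >= threshold]

        # Pick two at random; as in the original model, they may coincide.
        a, b = random.choice(above), random.choice(above)

        # Conjunction of propositions = intersection of subsets.
        if prob(a & b) >= threshold:
            hits += 1
    return hits / trials
```

Brute-force enumeration of all 2^N subsets is only feasible for small N; for N = 24 one would need a smarter sampling scheme than this sketch uses.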

The Results: With thresholds of 0.85, 0.90, and 0.95, as N increases, the probability of the conjunction meeting the threshold goes down. At N = 16, for all three thresholds, it is below 0.5. At N = 24, for all three thresholds, it is below 0.21. In other words, for randomly chosen propositions, we can expect closure to be atypical.

Note: The original model allows the two random propositions to turn out to be the same one. Otherwise, for combinations of N and the threshold t0 on which only one proposition meets the threshold, the probability of closure would be undefined, as it would be impossible to generate two distinct propositions that meet the threshold condition. Requiring the two random propositions to be distinct will only make the probability of closure smaller. Here (also MIT license) is the modified code that does this. The results are here.

Final Remarks: This suggests that if the justification condition for knowledge is expressed in terms of a credence threshold, closure for knowledge will be atypical: i.e., for a random pair of propositions one knows, it will be unlikely that one will know the conjunction. Of course, it could be that the other conditions for knowledge, besides justification, will affect this, by making closure somewhat more likely. But I don’t have reason to think it will make an enormous difference. So, if one thinks closure should be typical, one shouldn’t think that justification is described by a credence threshold.

I go the other way: I think justification is described by a credence threshold, and now I think that closure is unlikely to be typical.

A limitation in the above models is that the propositions we normally talk about are not randomly chosen from the 2^N propositions describing the probability space.

Monday, March 26, 2018

Thresholds and credence

Suppose we have some doxastic or epistemic status—say, belief or knowledge—that involves a credence threshold, such as that to count as believing p, you need to assign a credence of, say, at least 0.9 to p. I used to think that propositions that meet the threshold are apt to have credences distributed somewhat uniformly between the threshold and 1. But now I think this may be completely wrong.

Toy model: A perfectly rational agent has a probability space with N options and assigns equal credence to each option. There are 2^N propositions (up to logical equivalence) that can be formed concerning the N options, e.g., “option 1 or option 2 or option 3”, one for each subset of the N options.

Given the toy model, for a threshold that is not too close to 0.5, and for a moderately large N (say, 10 or more), most of the 2^N propositions that meet the threshold condition meet it just barely. The reason for that is this. A proposition can be identified with a subset of {1, ..., N}. The probability of the proposition is k/N where k is the number of elements in the subset. For any integer k between 0 and N, the number of propositions that have probability k/N will then be the binomial coefficient N!/(k!(N − k)!). But when we look at this as a function of k, it will have roughly a normal distribution with standard deviation σ = √N/2 and center at N/2, and that distribution decays very fast, so most of the propositions that have probability at least k/N will have probability pretty close to k/N if k/N − 1/2 is significantly bigger than 1/√N.

I should have some graphs here, but it’s a really busy week.

Friday, March 23, 2018

Conjunctions and thresholds

Consider some positive epistemic or doxastic concept E, say knowledge or belief. Suppose that (maybe for a fixed context) E requires a credence threshold t0: a proposition only falls under E when the credence is t0 or higher.

Unless the non-credential stuff really, really cooperates, we wouldn’t expect to have closure under conjunction for all cases of E. For if p and q are cases of E that just barely satisfy the credential threshold condition, we wouldn’t expect their conjunction to satisfy it.

Question: Do we have any right to expect closure under conjunction typically, at least with respect to the credential condition? I.e., if p and q are randomly chosen distinct cases of E, is it reasonable to expect that their conjunction falls above the threshold?

Simple Model: The credences of our Es can fall anywhere between t0 and 1. Let’s suppose that the distribution of the credences is uniform between t0 and 1. Suppose, too, that distinct Es are statistically independent, so that the probability of the conjunction is the product of the probabilities.

Then there is a simple formula for the probability that the conjunction of randomly chosen distinct Es satisfies the credential threshold condition: (t0 log t0 + (1 − t0))/(1 − t0)². (Fix one credence between t0 and 1, and calculate the probability that the other credence satisfies the condition; then integrate from t0 to 1 and divide by 1 − t0.) We can plug some numbers in.

  • At threshold 0.5, probability of conjunction above threshold: 0.61

  • At threshold 0.75, probability of conjunction above threshold: 0.55

  • At threshold 0.9, probability of conjunction above threshold: 0.52

  • At threshold 0.95, probability of conjunction above threshold: 0.51

  • At threshold 0.99, probability of conjunction above threshold: 0.502

And the limit as threshold approaches 1 is 1/2.
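The closed form and the limit can be checked numerically; here is a sketch (function names mine) that evaluates the formula and verifies it by simulation under the Simple Model’s uniformity and independence assumptions:

```python
import random
from math import log

def closure_prob_formula(t0):
    """Probability that the product of two independent credences,
    each uniform on [t0, 1], is at least t0 (the closed form above)."""
    return (t0 * log(t0) + (1 - t0)) / (1 - t0) ** 2

def closure_prob_mc(t0, trials=100000):
    """Monte Carlo estimate of the same probability."""
    hits = sum(1 for _ in range(trials)
               if random.uniform(t0, 1) * random.uniform(t0, 1) >= t0)
    return hits / trials
```

closure_prob_formula(0.5) comes out to about 0.614, matching the first entry of the list above, and the Monte Carlo estimate agrees to within sampling error.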

So, it’s more likely than not that the conjunction satisfies the credential threshold, but on the other hand the probability is not high enough for us to say that the conjunction typically satisfies the threshold.

But the model has two limitations which will affect the above.

Limitation 1: Intuitively, propositions with positive epistemic or doxastic status are more likely to have a credence closer to the low end of the [t0, 1] interval, rather than being uniformly distributed over it. This is going to make the probability of the conjunction meeting the threshold be lower than the Simple Model predicts.

Limitation 2: Even without being coherentists, we would expect our doxastic states to “hang together”. Thus, if p and q are propositions that each have a credence significantly above 1/2, then typically p and q will have a positive statistical correlation (with respect to credences), so that P(p ∧ q) > P(p)P(q), rather than being independent. This means that the Simple Model underestimates how often the conjunction is above the threshold. In the extreme case that all our doxastic states are logically equivalent, the conjunction will always meet the threshold condition. In more typical cases, the correlation will be weaker, but we would still expect a significant credential correlation.

So it may well be that even if one takes into account Limitation 1, taking into account Limitation 2 will allow one to say that typically conjunctions of Es meet the threshold condition.

Acknowledgment: I am grateful to John Hawthorne for a discussion of closure and thresholds.

Thursday, March 22, 2018

Necessary Existence, now in the US

Josh Rasmussen's and my Necessary Existence book is now released in the US.

Wednesday, March 21, 2018

Bohmianism and God

Bohmian mechanics is a rather nice way of side-stepping the measurement problem by having a deterministic dynamics that generates the same experimental predictions as more orthodox interpretations of Quantum Mechanics.

Famously, however, Bohmian mechanics suffers from having to make the quantum equilibrium hypothesis (QEH) that the initial distribution of the particles matches the wavefunction, i.e., that the initial particle density is given by (at least approximately) |ψ|². In other words, Bohmian mechanics requires the initial conditions to be fine-tuned for the theory to work, and we can then think of Bohmian mechanics as deterministic Bohmian dynamics plus QEH.

Can we give a fine-tuning argument for the existence of God on the basis of the QEH, assuming Bohmian dynamics? I think so. Given the QEH, nature becomes predictable at the quantum level, and God would have good reason to provide such predictability. Thus if God were to opt for Bohmian dynamics, he would be likely to make QEH true. On the other hand, in a naturalistic setting, QEH seems to be no better than an exceedingly lucky coincidence. So, given Bohmian dynamics, QEH does support theism over naturalism.

Theism makes it possible to be an intellectually fulfilled Bohmian. But I don’t know that we have good reason to be Bohmian.

Tuesday, March 20, 2018

Pruss and Rasmussen, Necessary Existence

Josh Rasmussen's and my Necessary Existence (OUP) book is out, both in Europe and in the US. I wish the price was much lower. The authors don't have a say over that, I think.

The great cover was designed by Rachel Rasmussen (Josh's talented artist wife).

Monday, March 19, 2018

"Before I formed you in the womb I knew you" (Jeremiah 1:5)

  1. Always: If x (objectually) knows y, then y exists (simpliciter). (Premise)

  2. Before I came into existence, it was true that God (objectually) knows me. (Premise)

  3. Thus, before I came into existence, it was true that I exist (simpliciter). (1 and 2)

  4. If 3, then eternalism is true. (Premise)

  5. Thus, eternalism is true. (3 and 4)

A variant of this argument uses “has a rightly ordered love for” in place of “(objectually) knows”.

Thursday, March 15, 2018

Something that has no reasonable numerical epistemic probability

I think I can give an example of something that has no reasonable (numerical) epistemic probability.

Consider Goedel’s Axiom of Constructibility. Goedel proved that if the Zermelo-Fraenkel (ZF) axioms are consistent, they are also consistent with Constructibility (C). We don’t have any strong arguments against C.

Now, either we have a reasonable epistemic probability for C or we don’t.

If we don’t, here is my example of something that has no reasonable epistemic probability: C.

If we do, then note that Goedel showed that ZF + C implies the Axiom of Choice, and hence implies the existence of non-measurable sets. Moreover, C implies that there is a well-ordering W on the universe of all sets that is explicitly definable in the language of set theory.

Now consider some physical quantity Q where we know that Q lies in some interval [x − δ, x + δ], but we have no more precise knowledge. If C is true, let U be the W-smallest non-measurable subset of [x − δ, x + δ].

Assuming that we do have a reasonable epistemic probability for C, here is my example of something that has no reasonable epistemic probability: C is false or Q is a member of U.

Logical closure accounts of necessity

A family of views of necessity (e.g., Peacocke, Sider, Swinburne, and maybe Chalmers) identifies a family F of special true statements that get counted as necessary—say, statements giving the facts about the constitution of natural kinds, the axioms of mathematics, etc.—and then says that a statement is necessary if and only if it can be proved from F. Call these “logical closure accounts of necessity”. There are two importantly different variants: on one “F” is a definite description of the family and on the other “F” is a name for the family.

Here is a problem. Consider:

  1. Statement (1) cannot be proved from F.

If you are worried about the explicit self-reference in (1), I should be able to get rid of it by a technique similar to the diagonal lemma in Goedel’s incompleteness theorem. Now, either (1) is true or it’s false. If it’s false, then it can be proved from F. Since F is a family of truths, it follows that a falsehood can be proved from truths, and that would be the end of the world. So it’s true. Thus it cannot be proved from F. But if it cannot be proved from F, then it is contingently true.

Thus (1) is true but there is a possible world w where (1) is false. In that world, (1) can be proved from F, and hence in that world (1) is necessary. Hence, (1) is false but possibly necessary, in violation of the Brouwer Axiom of modal logic (and hence of S5). Thus:

  2. Logical closure accounts of necessity require the denial of the Brouwer Axiom and S5.

But things get even worse for logical closure accounts. For an account of necessity had better itself not be a contingent truth. Thus, a logical closure account of necessity if true in the actual world will also be true in w. Now in w run the earlier argument showing that (1) is true. Thus, (1) is true in w. But (1) was false in w. Contradiction! So:

  3. Logical closure accounts of necessity can at best be contingently true.

Objection: This is basically the Liar Paradox.

Response: This is indeed my main worry about the argument. I am hoping, however, that it is more like Goedel’s Incompleteness Theorems than like the Liar Paradox.

Here's how I think the hope can be satisfied. The Liar Paradox and its relatives arise from unbounded application of semantic predicates like “is (not) true”. By “unbounded”, I mean that one is free to apply the semantic predicates to any sentence one wishes. Now, if F is a name for a family of statements, then it seems that (1) (or its definite description variant akin to that produced by the diagonal lemma) has no semantic vocabulary in it at all. If F is a description of a family of statements, there might be some semantic predicates there. For instance, it could be that F is explicitly said to include “all true mathematical claims” (Chalmers will do that). But then it seems that the semantic predicates are bounded—they need only be applied in the special kinds of cases that come up within F. It is a central feature of logical closure accounts of necessity that the statements in F be a limited class of statements.

Well, not quite. There is still a possible hitch. It may be that there is semantic vocabulary built into “proved”. Perhaps there are rules of proof that involve semantic vocabulary, such as Tarski’s T-schema, and perhaps these rules involve unbounded application of a semantic predicate. But if so, then the notion of “proof” involved in the account is a pretty problematic one and liable to license Liar Paradoxes.

One might also worry that my argument that (1) is true explicitly used semantic vocabulary. Yes: but that argument is in the metalanguage.

Tuesday, March 13, 2018

A third kind of moral argument

The most common kind of moral argument for theism is that theism better fits with there being moral truths (either moral truths in general, or some specific kind of moral truths, like that there are obligations) than alternative theories do. Often, though not always, this argument is coupled with a divine command theory.

A somewhat less common kind of argument is that theism better explains how we know moral truths. This argument is likely to be coupled with an evolutionary debunking argument to argue that if naturalism and evolution were true, our moral beliefs might be true, and might even be reliable, but wouldn’t be knowledge.

But there is a third kind of moral argument that one doesn’t meet much at all in philosophical circles—though I suspect it is not uncommon popularly—and it is that theism better explains why we have moral beliefs. The reason we don’t meet this argument much in philosophical circles is probably that there seems to be very plausible evolutionary explanations of moral beliefs in terms of kin selection and/or cultural selection. Social animals as clever as we are benefit as a group from moral beliefs to discourage secret anti-cooperative selfishness.

I want to try to rescue the third kind of moral argument in this post in two ways. First, note that moral beliefs are only one of several solutions to the problem of discouraging secret selfishness. Here are three others:

  • belief in karmic laws of nature on which uncooperative individuals get very undesirable reincarnatory outcomes

  • belief in an afterlife judgment by a deity on which uncooperative individuals get very unpleasant outcomes

  • a credence around 1/2 to an afterlife judgment by a deity on which uncooperative individuals get an infinitely bad outcome (cf. Pascal’s Wager).

These three options make one think that cooperativeness is prudent, but not that it is morally required. Moreover, they are arguably more robust drivers of cooperative behavior than beliefs about moral requirement. Admittedly, though, the first two of the above might lead to moral beliefs as part of a theory about the operation of the karmic laws or the afterlife judgment.

Let’s assume that there are important moral truths. Still, P(moral beliefs | naturalism) is not going to exceed 1/2. On the other hand, P(moral beliefs | God) is going to be high, because moral truths are exactly the sort of thing we would expect God to ensure our belief in (through evolutionary means, perhaps). So, the fact of moral belief will be evidence for theism over naturalism.

The second approach to rescuing the moral argument is deeper and I think more interesting. Moreover, it generalizes beyond the moral case. This approach says that a necessary condition for moral beliefs is being able to have moral concepts. But to have moral concepts requires semantic access to moral properties. And it is difficult to explain on contemporary naturalistic grounds how we have semantic access to moral properties. Our best naturalistic theories of reference are causal, but moral properties on contemporary naturalism (as opposed to, say, the views of a Plato or an Aristotle) are causally inert. Theism, however, can nicely accommodate our semantic access to moral properties. The two main theistic approaches to morality ground morality in God or in an Aristotelian teleology. Aristotelian teleology allows us to have a causal connection to moral properties—but then Aristotelian teleology itself calls for an explanation of our teleological properties that theism is best suited to give. And approaches that ground morality in God give God direct semantic access to moral properties, which semantic access God can extend to us.

This generalizes to other kinds of normativity, such as epistemic and aesthetic: theism is better suited to providing an explanation of how we have semantic access to the properties in question.

Conscious computers and reliability

Suppose the ACME AI company manufactures an intelligent, conscious and perfectly reliable computer, C0. (I assume that the computers in this post are mere computers, rather than objects endowed with soul.) But then a clone company manufactures a clone, C1, of C0 out of slightly less reliable components. And another clone company makes a slightly less reliable clone, C2, of C1. And so on. At some point in the cloning sequence, say at C10000, we reach a point where the components produce completely random outputs.

Now, imagine that all the devices from C0 through C10000 happen to get the same inputs over a certain day, and that all their components do the same things. In the case of C10000, this is astronomically unlikely, as the super-unreliable components of C10000 produce completely random outputs.

Now, C10000 is not computing. Its outputs are no more the results of intelligence than the copy of Hamlet typed by the monkeys is the result of intelligent authorship. By the same token, C10000 is not conscious on computational theories of consciousness.

On the other hand, C0’s outputs are the results of intelligence and C0 is conscious. The same is true for C1, since if intelligence or consciousness required complete reliability, we wouldn’t be intelligent and conscious. So somewhere in the sequence from C0 to C10000 there must be a transition from intelligence to lack thereof and somewhere (perhaps somewhere else) a transition from consciousness to lack thereof.

Now, intelligence could plausibly be a vague property. But it is not plausible that consciousness is a vague property. So, there must be some precise transition point in reliability needed for computation to yield consciousness, so that a slight decrease in reliability—even when the actual functioning is unchanged (remember that the Ci are all functioning in the same way)—will remove consciousness.

More generally, this means that given functionalism about mind, there must be a dividing line in measures of reliability between cases of consciousness and ones of unconsciousness.

I wonder if this is a problem. I suppose if the dividing line is somehow natural, it’s not a problem. I wonder if a natural dividing line of reliability can in fact be specified, though.

Monday, March 12, 2018

The usefulness of having two kinds of quantifiers

A central Aristotelian insight is that substances exist in a primary way and other things—say, accidents—in a derivative way. This insight implies that use of a single existential quantifier ∃x for both substances and forms does not cut nature at the joints as well as it can be cut.

Here are two pieces of terminology that together not only capture the above insight about existence, but do a lot of other (but closely related) ontological work:

  1. a fundamental quantifier ∃u over substances.

  2. for any y, a quantifier ∃_y x over all the (immediate) modes (tropes) of y.

We can now define:

  • a is a substance iff ∃u(u = a)

  • b is an (immediate) mode of a iff ∃_a x(x = b)

  • f is a substantial form of a substance a iff a is a substance and ∃_a x(x = f): substantial forms are immediate modes of substances

  • b is a (first-level) accident of a substance a iff a is a substance and ∃_a x ∃_x y(y = b & y ≠ x): first-level accidents are immediate modes of substantial forms, distinct from these forms (this qualifier is needed so that God wouldn’t count as having any accidents)

  • f is a substantial form iff ∃u ∃_u x(x = f)

  • b is a (first-level) accident iff ∃u ∃_u x ∃_x y(y = b & y ≠ x).

This is a close variant on the suggestion here.

Friday, March 9, 2018

A regress of qualitative difference

According to heavyweight Platonism, qualitative differences arise from differences between the universals being instantiated. There is a qualitative difference between my seeing yellow and your smelling a rose. This difference has to come from the difference between the universals seeing yellow (Y) and smelling a rose (R). But one doesn’t get a qualitative difference from being related in the same way to numerically but not qualitatively different things (compare: being taller than Alice is not qualitatively different from being taller than Bea if Alice and Bea are qualitatively the same—and in particular, of the same height). Thus, if the qualitative difference between my seeing yellow and your smelling a rose comes from being related by instantiation to different things, namely Y and R, then this presupposes that the two things are themselves qualitatively different. But this qualitative difference between Y and R depends on Y and R exemplifying different—and indeed qualitatively different—properties. And so on, in a regress!

Intrinsic attribution

  1. If heavyweight Platonism is true, all attribution of attributes to a subject is grounded in facts relating the subject to abstracta.

  2. Intrinsic attribution is never grounded in facts relating a subject to something distinct from itself.

  3. There are cases of intrinsic attribution with a non-abstract subject.

  4. If heavyweight Platonism is true, each case of intrinsic attribution to a non-abstract subject is grounded in facts relating that object to something other than itself. (By 1 and 2)

  5. So, if heavyweight Platonism is true, there are no cases of intrinsic attribution to a non-abstract subject. (2 and 4)

  6. So, heavyweight Platonism is not true. (By 3 and 5)

Here, however, is a problem with 3. All cases of attribution to a creature are grounded in the creature’s participation in God. Hence, no creature is a subject of intrinsic attribution. And God’s attributes are grounded in a relation between God and the Godhead. But by divine simplicity, God is the Godhead. Since the Godhead is abstract, God is abstract (as well as being concrete) and hence God does not provide an example of intrinsic attribution with a non-abstract subject.

I still feel that there is something to the above argument. Maybe the sense in which a creature’s attributes are grounded in the creature’s participation in God is different from the sense of grounding in 2.

Friday, March 2, 2018

Wishful thinking

Start with this observation:

  1. Commonly used forms of fallacious reasoning are typically distortions of good forms of reasoning.

For instance, affirming the consequent is a distortion of the probabilistic fact that if we are sure that if p then q, then learning q is some evidence for p (unless q already had probability 1 or p had probability 0 or 1). The ad hominem fallacy of appeal to irrelevant features in an arguer is a distortion of a reasonable questioning of a person’s reliability on the basis of relevant features. Begging the question is, I suspect, a distortion of an appeal to the obviousness of the conclusion: “Murder is wrong. Look: it’s clear that it is!”

Now:

  2. Wishful thinking is a commonly used form of fallacious reasoning.

  3. So, wishful thinking is probably a distortion of a good form of reasoning.

I suppose one could think that wishful thinking is one of the exceptions to rule (1). But to be honest, I am far from sure there are any exceptions to rule (1), despite my cautious use of “typically”. And we should avoid positing exceptions to generally correct rules unless we have to.

So, if wishful thinking is a distortion of a good form of reasoning, what is that good form of reasoning?

My best answer is that wishful thinking is a distortion of correct probabilistic reasoning on the basis of the true claim that:

  4. Typically, things go right.

The distortion consists in the fact that in the fallacy of wishful thinking one is reasoning poorly, likely because one is doing one or more of the following:

  a. confusing things going as one wishes them to go with things going right,

  b. ignoring defeaters in the particular case, or

  c. overestimating the typicality mentioned in (4).
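The contrast between the good form of reasoning and its wishful distortion can be put in a toy numerical form (the base rate and defeater numbers are arbitrary illustrations):

```python
# Toy contrast (all numbers are arbitrary illustrations). The good reasoning
# starts from the base rate "typically, things go right" and discounts it by
# defeaters known in the particular case; the wishful distortion keeps the
# rosy base rate and ignores the defeaters.

BASE_RATE_RIGHT = 0.9  # illustrative stand-in for "typically, things go right"

def sober_estimate(defeater_strength):
    """Discount the base rate by the case-specific defeaters."""
    return BASE_RATE_RIGHT * (1 - defeater_strength)

def wishful_estimate(defeater_strength):
    """Ignore the defeaters in the particular case."""
    return BASE_RATE_RIGHT

strong_defeater = 0.8  # e.g., clear evidence that this case is atypical
assert wishful_estimate(strong_defeater) > sober_estimate(strong_defeater)
```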

Suppose I am right about (4) being true. Then the truth of (4) calls out for an explanation. I know of four potential explanations of (4):

  i. Theism: God creates a good world.

  ii. Optimalism: everything is for the best.

  iii. Aristotelianism: rightness is a matter of lining up with the telos, and causal powers normally succeed at attaining what they aim at.

  iv. Statisticalism: norms are defined by what is typically the case.

I think (iv) is untenable, so that leaves (i)-(iii).

Now, optimalism gives strong evidence for theism. First, theism would provide an excellent explanation for optimalism (Leibniz). Second, if optimalism is true, then there is a God, because that’s for the best (Rescher).

Aristotelianism also provides evidence for theism, because it is difficult to explain naturalistically where teleology comes from.

So, thinking through the fallacy of wishful thinking provides some evidence for theism.

Thursday, March 1, 2018

Superpositions of conscious states

Consider this thesis:

  1. Reality is never in a superposition of two states that differ with respect to what, if anything, observers are conscious of.

This is one of the motivators for collapse interpretations of quantum mechanics. Now, suppose that S is an observable that describes some facet of conscious experience. Then according to (1), reality is always in some eigenstate of S.

Suppose that at the beginning t0 of some interval I of time, reality is in eigenstate ψ0. Now, suppose that collapse does not occur during I. By continuity considerations, then, over I reality cannot evolve to a state orthogonal to ψ0 without passing through a state that is a superposition of ψ0 and something else. In other words, over a collapse-free interval of time, the conscious experience described by S cannot change if (1) is true.
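The continuity point can be illustrated with a toy two-level system (the rotation "dynamics" below are an arbitrary illustration, not a model of any real observable):

```python
import numpy as np

# Toy illustration of the continuity point: a two-level system evolving
# continuously from eigenstate psi0 to an orthogonal eigenstate psi1 must
# pass through superpositions of the two. The rotation is an arbitrary
# stand-in for collapse-free evolution over a unit interval of time.

psi0 = np.array([1.0, 0.0])
psi1 = np.array([0.0, 1.0])

def evolve(t):
    """Rotate psi0 toward psi1; at t = 1 the state is exactly psi1."""
    theta = t * np.pi / 2
    return np.array([np.cos(theta), np.sin(theta)])

for t in (0.25, 0.5, 0.75):
    psi = evolve(t)
    # At every intermediate time, both components are nonzero,
    # so the state is a superposition, contrary to (1):
    assert abs(psi @ psi0) > 0.1 and abs(psi @ psi1) > 0.1

# Only at the endpoints is the state an eigenstate rather than a superposition:
assert np.allclose(evolve(0.0), psi0) and np.allclose(evolve(1.0), psi1)
```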

What if collapse happens? That doesn’t seem to help. There are two plausible options: collapses are either temporally discrete or temporally dense. If they are temporally dense, then by the quantum Zeno effect, with probability one there is no change with respect to S. If they are temporally discrete, then suppose that t1 is the first time after t0 at which collapse causes the system to enter a state ψ1 orthogonal to ψ0. But for collapse to be able to do that, the state just prior to the collapse would have had to assign some weight to ψ1 while still assigning some weight to ψ0, and that would violate (1).
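The quantum Zeno point can be sketched numerically for a toy two-level system that rotates from one eigenstate toward an orthogonal one over a unit interval (the rotation and the closed-form survival probability are illustrative of the standard effect, not of any particular physical observable):

```python
import numpy as np

# Quantum Zeno sketch: a toy two-level state rotates from eigenstate psi0
# toward an orthogonal eigenstate over a unit interval. With n equally
# spaced projective measurements, the probability that every measurement
# finds the system still in psi0 is cos(pi/(2n))**(2n), which tends to 1
# as the measurements become dense.

def survival_probability(n):
    """Probability that all n measurements find the state still in psi0."""
    return np.cos(np.pi / (2 * n)) ** (2 * n)

probs = [survival_probability(n) for n in (1, 10, 100, 1000)]
assert probs[0] < 1e-12                              # one final measurement: state has flipped
assert all(a < b for a, b in zip(probs, probs[1:]))  # denser measurement, more freezing
assert probs[-1] > 0.99                              # dense measurement pins the state to psi0
```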

(There might also be some messy story on which some collapses are temporally dense and some temporally isolated. I haven’t figured out exactly what to say about that, other than that it is in danger of being ad hoc.)

So, whether collapse happens or not, it seems that (1) implies that there is no change with respect to conscious experience. But clearly the universe changes with respect to conscious experience. So, it seems we need to reject (1). And this rejection seems to force us into some kind of weird many-worlds interpretation on which we have superpositions of incompatible experiences.

There are, however, at least two places where this argument can be attacked.

First, the thesis that conscious experience is described by observables understood (implicitly) as Hermitian operators can be questioned. Instead, one might think that conscious states correspond to subsets of the Hilbert space, subsets that may not even be linear subspaces.

Second, one might say that (1) is false, but nothing weird happens. We get weirdness from the denial of (1) if we think that a superposition of, say, seeing a square and seeing a circle is some weird state that has a seeing-a-square aspect and a seeing-a-circle aspect (this is weird in different ways depending on whether you take a multiverse interpretation). But we need not think that. We need not think that if a quantum state ψ1 corresponds to an experience E1 and a state ψ2 corresponds to an experience E2, then ψ = a1ψ1 + a2ψ2 corresponds to some weird mix of E1 and E2. Perhaps the correspondence between physical and mental states in this case goes like this:

  a. when |a1| ≫ |a2|, the state ψ still gives rise to E1,

  b. when |a1| ≪ |a2|, the state ψ gives rise to E2, and

  c. when a1 and a2 are similar in magnitude, the state ψ gives rise to no conscious experience at all (or gives rise to some other experience, perhaps one related to E1 and E2, or perhaps one that is entirely unrelated).
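This three-clause correspondence can be written as a toy rule (the dominance factor deciding when one amplitude swamps the other is an arbitrary illustrative choice, not part of the proposal):

```python
# Toy model of the proposed correspondence for psi = a1*psi1 + a2*psi2.
# The dominance factor marking when amplitudes count as "similar in
# magnitude" is an arbitrary illustrative choice, not part of the proposal.

DOMINANCE = 10.0

def experience(a1, a2):
    """Map amplitudes to the experience (if any) the state gives rise to."""
    if abs(a1) > DOMINANCE * abs(a2):
        return "E1"  # a1 dominates: the state still gives rise to E1
    if abs(a2) > DOMINANCE * abs(a1):
        return "E2"  # a2 dominates: the state gives rise to E2
    return None      # similar magnitudes: no experience (or some other one)

assert experience(1.0, 0.01) == "E1"
assert experience(0.01, 1.0) == "E2"
assert experience(0.7, 0.7) is None
```

On this rule, every physical state determines a definite answer to what, if anything, is experienced, even though superpositions occur.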

After all, we know very little about which conscious states are correlated with which physical states. So, it could be that there is always a definite conscious state in the universe. I suppose, though, that this approach also ends up denying that we should think of conscious states as corresponding in the most natural way to the eigenvectors of a Hermitian operator.