Thursday, October 29, 2015

A weakly-fallibilist evidentialist can't be an evidential Bayesian

The title is provocative, but the thesis is less provocative (and in essence well-known: Hawthorne's work on the deeply contingent a priori is relevant) once I spell out what I stipulatively mean by the terms. By evidential Bayesianism, I mean the view that evidence should only impact our credences by conditionalization. By evidentialism, I mean the view that high credence in contingent matters should not be had except by evidence (most evidentialists make a stronger claim). By weak fallibilism, I mean the view that a correctly functioning epistemic agent would sometimes appropriately have high credence on the basis of non-entailing evidence. These three theses cannot all be true.

For suppose that they are all true, and I am a correctly functioning epistemic agent who has appropriate high credence in a contingent matter H, and yet my total evidence E does not entail H. By evidentialism, my credence comes from the evidence. By evidential Bayesianism, if P measures my prior probabilities, then P(H|E) is high. But it is a theorem that P(H|E) is less than or equal to P(E→H), where the arrow is a material conditional. So the prior probability of E→H is high. This conditional is not necessary, as E does not entail H. Hence, I have high prior credence in a contingent matter. Prior probabilities are by definition independent of my total evidence. So evidentialism is violated.
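The theorem invoked here is easy to verify numerically. A minimal Python sketch (my own illustration, not from the post) draws random probability assignments over the four E/H atoms and checks that P(H|E) never exceeds P(E→H) = P(~E) + P(E&H):

```python
import random

def check_inequality(trials=10000, seed=0):
    """Verify P(H|E) <= P(E -> H) on random finite probability spaces.

    The space has four atoms: E&H, E&~H, ~E&H, ~E&~H. The material
    conditional E -> H holds on every atom except E&~H, so its
    probability is P(~E) + P(E&H).
    """
    rng = random.Random(seed)
    for _ in range(trials):
        raw = [rng.random() for _ in range(4)]
        total = sum(raw)
        p_eh, p_enh, p_neh, p_nenh = (x / total for x in raw)
        p_e = p_eh + p_enh
        p_h_given_e = p_eh / p_e                 # P(H|E)
        p_material = (p_neh + p_nenh) + p_eh     # P(E -> H)
        assert p_h_given_e <= p_material + 1e-12
    return True
```

The inequality is strict whenever P(E) < 1 and P(H|E) < 1, which is why a high P(H|E) forces a high prior on the contingent conditional.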

Tuesday, October 27, 2015

Edge cases, the moral sense and evolutionary debunking

It has been argued that if we are the product of unguided evolution, we would not expect our moral sense to get the moral facts right. I think there is a lot to those arguments, but let's suppose that they fail, so that there really is a good evolutionary story about how we would get a reliable moral sense.

There is, nonetheless, still a serious problem for the common method of cases as used in analytic moral philosophy. Even when a reliable process is properly functioning, its reliability and proper function only yield the expectation of correct results in normal cases. A process can be reliable and properly functioning and still quite unreliable in edge cases. Consider, for instance, the myriad of illusions that our visual system is prone to even when properly functioning. And yet our visual system is reliable.

This wouldn't matter much if ethical inquiry restricted itself to considering normal cases. But often ethical inquiry proceeds by thinking through hypothetical cases. These cases are carefully crafted to separate one relevant feature from others, and this crafting makes the cases abnormal. For instance, when arguing against utilitarianism, one considers such cases as that of the transplant doctor who is able to murder a patient and use her organs to save three others, and we carefully craft the case to rule out the normal utilitarian arguments against this action: nobody can find out about the murder, the doctor's moral sensibilities are not damaged by this, etc. But we know from how visual illusions work that often a reliable cognitive system concludes by heuristics rather than algorithms designed to function robustly in edge cases as well.

Now one traditional guiding principle in ethical inquiry, at least since Aristotle, has been to put a special weight on the opinions of the virtuous. However, while an agent's being virtuous may guarantee that her moral sense is properly functioning--that there is no malfunction--typical cognitive systems will give wrong answers in edge cases even when properly functioning. The heuristics embodied in the visual system that give rise to visual illusions are a part of the system's proper functioning: they enable the system to use fewer resources and respond faster in the more typical cases.

We now see that there is a serious problem for the method of cases in ethics, even if the moral sense is reliable and properly functioning. Even if we have good reason to think that the moral sense evolved to get moral facts right, we should not expect it to get edge case facts right. In fact, we would expect systematic error in edge cases, even among the truly virtuous. At most, we would expect evolution to impose a safety feature which ensures that failure in edge cases isn't too catastrophic (e.g., so that someone who is presented with a very weird case doesn't conclude that the right solution is to burn down her village).

Yet it may not be possible to do ethics successfully without the method of cases, including far-out cases, especially now that medical science is on the verge of making some of these cases no longer be hypothetical.

I think there are two solutions that let one keep the method of cases. The first is to say that we are not the product of unguided evolution, but that we are designed to have consciences that, when properly functioning (as they are in the truly virtuous), are good guides not just in typical cases but in all the vicissitudes of life, including those arising from future technological progress. This might still place limits on the method of cases, but the limits will be more modest. The second is to say that our moral judgments are at least partly grounded in facts about what our moral judgment would say were it properly functioning--this is a kind of natural law approach. (Of course, if one drops the "properly functioning" qualifier, we get relativism.)

Monday, October 26, 2015

Reverse engineering conscience

I was thinking about the method of cases in ethics, and it made me think of what we do when we apply the method as a kind of reverse engineering of conscience. Reverse engineering of software has been one of the most fun things in my life. When I reverse engineer software, in order to figure out what the software does (e.g., how it stores data in an undocumented file format), I typically employ anywhere between one and three of the following methods:

  1. Observe the outputs in the ordinary course of operation.
  2. Observe the outputs given carefully crafted inputs.
  3. Look under the hood: disassemble the software, trace through the execution, do experiments with modifying the software, etc.
In ethics, there are obvious analogues to (1) and (2): looking at what our conscience says about actual cases that come up in our lives and looking at what our conscience says when fed carefully crafted imaginary cases. Reverse engineering of conscience suffers from two difficulties. The first is that method (3) is largely unavailable. The second is that conscience malfunctions more often than production software typically does, and does so in systematic ways. We can control for the second by reverse engineering the conscience of virtuous people (assuming we have--as I think we do--some independent access to who is virtuous).

But now suppose that this all works, that we really do succeed in reverse engineering conscience, and find out by what principles a properly functioning conscience decides whether an action is right or wrong. Why think this gives us anything of ethical interest? If we have a divine command theory, we have a nice answer: The same being whose commands constitute rightness and wrongness made that conscience, and it is plausible to think that he made it in order to communicate his commands to us. Perhaps more generally theistic theories other than divine command can give us a good answer, in that the faculty of conscience is designed by a being who cares immensely about right behavior. Likewise, if we have a natural law theory, we also have a nice answer: The faculty of conscience is part of our nature, and our nature defines what is right and wrong for us.

But what if conscience is simply the product of unguided evolution? Then by reverse engineering conscience we would not expect to find out anything other than facts about what kinds of behavior-guiding algorithms help us to pass on our genes.

So if all we do in the method of cases is this kind of reverse engineering, then outside of a theistic or natural law context we really should eschew use of the method in ethics.

Divorce, remarriage and communion

I've been thinking a bit about one of the key issues of the recent Synod on the Family, whether Catholics who have divorced and attempted remarriage without an annulment should be allowed to receive communion. As I understand the disagreement (I found this quite helpful), it's not really about the nature of marriage.

The basic case to think about is this:

Jack believes himself to be married to Jill, and publicly lives with her as husband and wife. But the Church knows, although Jack does not, that Jack is either unmarried or married to Suzy.
Should Jack be allowed to receive communion? After all, Jack is committing adultery (if he is actually married to Suzy) or fornication (if he's not actually married to Suzy) with Jill, and that's public wrongdoing. However, Jack is deluded into thinking that he's actually married to Jill. So Jack isn't aware that he's committing adultery or fornication. Jack may or may not be innocent in his delusion. If he is innocent in his delusion, then he is not culpably sinning ("formally sinning", as we Catholics say) in his adultery or fornication.

This is a hard question. On the one hand, given the spiritual benefits of the Eucharist, the Church should strive to avoid denying communion to an innocent person, and Jack might be innocent. On the other hand, letting Jack receive communion reinforces his delusion of being married to Jill, making him think that all is well with this aspect of his life, and committing adultery and fornication is good neither for Jack nor for Jill, even if they are ignorant of the fact that their relationship is adulterous or fornicatory.

One thing should be clear: this is not a clear case. There really are serious considerations in both directions, considerations fully faithful to the teaching of Scripture and Tradition that adultery and fornication are gravely wrong and that one should not receive communion when one is guilty of grave wrong.

One may think that the above way of spinning the case is not a fair reflection of real-world divorce and remarriage cases. What I said above makes it sound like Jack has hallucinated a wedding with Jill and may have amnesia about a wedding with Suzy. And indeed it is a difficult and far from clear pastoral question what to do with congregants who are suffering from hallucinations and amnesia. But in the real-life cases under debate, Jack really does remember exchanging vows with Suzy, and yet he has later exchanged other vows, in a non-Catholic ceremony, with Jill. Moreover, Jack knows that the Church teaches things that imply that he isn't really married to Jill. Does this make the basic case clear?

Well, to fill out the case, we also need to add the further information that the culture, at both the popular and elite levels, is telling Jack that he is married to Jill. And Jack thinks that the Church is wrong and the culture is right. I doubt we can draw a bright line between cases of mental aberration and those of being misled by the opinions of others. We are social animals, after all. (If "everyone" were to tell me that I never had a PhD thesis defense, I would start doubting my memories.)

At the same time, the cultural point plays in two opposite ways. On the one hand, it makes it more likely that Jack's ignorance is not culpable. On the other hand, it makes it imperative--not just for Jack and Jill's sake, but now also that of many others--not to act in ways that reinforce the ignorance and delusion. Moreover, the issue for Jack's spiritual health isn't just about his relationship with Jill. If Jack puts more weight in the culture than in Catholic teaching, Jack has other problems, and may need a serious jolt. But even that's not clear: that jolt might push him even further away from where he should be.

So I don't really know what the Church should do, and I hope the Holy Spirit will guide Pope Francis to act wisely.

In any case, I think my point stands that this isn't really about the nature of marriage. One can have complete agreement that adultery and fornication are wrong and that Jack isn't married to Jill, without it being clear what to do.

Friday, October 23, 2015


I have a strong theoretical commitment to:

  1. To feel pain is to perceive something as if it were bad.
  2. Veridical perception is non-instrumentally good.
On the other hand, I also have the strong intuition that:
  3. Particularly intense physical pain is always non-instrumentally bad.
Thus, (1) and (2) commit me to veridical pains being non-instrumentally good. But (3) commits me to particularly intense physical pain, whether veridical or not, being non-instrumentally bad. This has always been very uncomfortable for me, though not as uncomfortable as intense physical pain is.

But today I realized that there is no real contradiction between (1), (2) and (3). Rather than deriving a contradiction from (1)-(3), what we should conclude is:

  4. No instance of particularly intense physical pain is veridical.
And I don't have a very strong intuition against (4). And here is a story supporting (4). We systematically underestimate spiritual goods and bads, while we systematically overestimate physical goods and bads. Arguably, the worst of the physical bads is death, and yet both Christianity and ancient philosophy emphasize that we overestimate the badness of death. It is not particularly surprising that our perceptions suffer from a similar overestimation, and in particular that they typically present physical bads as worse than they are. If so, then it could well be that no merely physical bad is so bad as to be accurately represented by a particularly intense physical pain.

One difficulty is that the plausibility of my position depends on how one understands "particularly intense". If one has a high enough standard for that, then (4) is plausible, but it also becomes plausible that pains that just fall short of the standard still are non-instrumentally bad. If one has a lower standard for "particularly intense", then (4) becomes less plausible. I am hoping that there is a sweet spot (well, actually, a miserable spot!) where the position works.

Thursday, October 22, 2015

Countably infinite fair lotteries and sorting

There is nothing essentially new here, but it is a particularly vivid way to put an observation by Paul Bartha.

You are going to receive a sequence of a hundred tickets from a countably infinite fair lottery. When you get the first ticket, you will be nearly certain (your probability will be 1 or 1 minus an infinitesimal) that the next ticket will have a bigger number. When you get the second, you will be nearly certain that the third will be bigger than it. And so on. Thus, throughout the sequence you will be nearly certain that the next ticket will be bigger.

But surely at some point you will be wrong. After all, it's incredibly unlikely that a hundred tickets from a lottery will be sorted in ascending order. To make the point clear, suppose that the way the sequence of tickets is picked is as follows. First, a hundred tickets are picked via a countably infinite fair lottery, either the same lottery, in which case they are guaranteed to be different, or independent lotteries, in which case they are nearly certain to be all different. Then the hundred tickets are shuffled, and you're given them one by one. Nonetheless, the above argument is unaffected by the shuffling: at each point you will be nearly certain that the next ticket you get will have a bigger number, there being only finitely many options for that to fail and infinitely many for it to succeed, and with all the options being equally likely.

Yet if you take a hundred numbers and shuffle them, it's extremely unlikely that they will be in ascending order. So you will be nearly certain of something, and yet very likely wrong in a number of the cases. And even while you are nearly certain of it, you will be able to go through this argument, see that in many of the judgments that the next number is bigger you will be wrong, and yet this won't affect your near certainty that the next number is bigger.
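The infinite lottery itself can't be simulated, but the finite side of the puzzle can. A quick Python sketch (my own illustration) shuffles a hundred distinct numbers and counts how often "the next number is bigger" actually holds:

```python
import random

def ascent_fraction(n=100, trials=2000, seed=1):
    """Shuffle n distinct numbers and return the average fraction of
    positions at which the next number is bigger than the current one.

    For a uniformly random shuffle each adjacent pair is equally likely
    to be in either order, so the fraction averages 1/2: roughly half
    of the n-1 'the next ticket will be bigger' judgments fail."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        seq = list(range(n))
        rng.shuffle(seq)
        total += sum(1 for a, b in zip(seq, seq[1:]) if b > a)
    return total / (trials * (n - 1))
```

On average about half of the ninety-nine judgments fail, which is exactly the tension: near-certainty at each step, guaranteed frequent error across the sequence.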

Russian roulette

Intuitively, imposing a game of Russian roulette on an innocent victim is constitutive of twice as much moral depravity when there are two bullets in the six-shooter as when there is only one. If so, then a one-bullet game of Russian roulette will carry about a sixth of the moral depravity of a six-bullet game, and hence about a sixth of the depravity of plain murder.

I am not so sure, though. The person imposing the game of Russian roulette is, I shall suppose, intending a conditional:

  1. If the bullet ends up in the barrel, the victim will die.
And then the above intuition suggests that the moral depravity in intending such a conditional is proportional to the probability of the antecedent. But consider other impositions of conditionals. Take, for instance, the mob boss who orders:
  2. If you can't pay the mayor off, get rid of him.
The amount of moral depravity that this order constitutes does not appear to be proportional to the probability of the mayor's rectitude (either in actual fact or as judged by the boss). If the underling is unable to bribe the mayor and kills him, the mob boss seems to be guilty of murder. But moral depravity should not depend on what happens after one's action--that would give too much scope for moral luck. So the depravity in giving the order is tantamount to murder, plus an additional dollop of depravity in corrupting public officials.

Perhaps, though, this judgment about the moral depravity of issuing order (2) is based on the thought that the kind of person who issues this order doesn't care much if the probability of integrity is 0.001 or 0.1 or 1. But the person who intends (1) may well care about the probability that the bullet ends up in the barrel. So perhaps the mob boss response doesn't quite do the job.

Here's another thought. It is gravely wrong to play Russian roulette with a single bullet and a revolver with six thousand chambers. It doesn't seem that the moral depravity of this is a thousandth of the moral depravity of "standard" Russian roulette. And it sure doesn't sound like the moral depravity goes down by a factor of ten as the number of chambers goes up by a factor of ten.

Here, then, is an alternate suggestion. The person playing Russian roulette, like the mob boss, sets her heart on the death of an innocent person under certain circumstances. This setting of one's heart on someone's death is constitutive of a grave moral depravity, regardless of how likely the circumstances are. It could be that this is wrong even when I know the circumstances won't obtain. For instance, it would be morally depraved to set one's heart on killing the Tooth Fairy if she turns out to exist, even when one knows that she doesn't exist. There is then an additional dollop of depravity proportional to the subjective probability that the circumstances obtain. That additional dollop comes from the risk one takes that someone will die and the risk one takes that one will become an actual murderer. As a result, very roughly (in the end, the numerical evaluations are very much a toy model), the moral depravity in willing a conditional like (1) or (2) is something like:

  • A + pB
where p is the probability of the antecedent, and both A and B are large.
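As a toy illustration (the magnitudes of A and B below are placeholder values of my own, not from the post), the model makes the depravity ratio between games nearly flat in p when the fixed term A dominates:

```python
def depravity(p, A=100.0, B=100.0):
    """Toy model: a fixed cost A for setting one's heart on the death,
    plus a dollop p*B proportional to the probability p that the
    triggering circumstances obtain."""
    return A + p * B

# One bullet in six chambers vs. one bullet in six thousand chambers:
# on the purely proportional view the ratio would be 1000; here the
# fixed term A keeps it close to 1.
ratio = depravity(1 / 6) / depravity(1 / 6000)
```

This matches the intuition above: shrinking the probability by a factor of a thousand barely reduces the depravity.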

Wednesday, October 21, 2015

Art, perceptual deployment and triviality

Works of art are designed to be observed through a particular perceptual apparatus deployed in a particular way. A music CD may be shiny and pretty to the eye, but this is orthogonal to the relevant aesthetic qualities which are meant to be experienced through the ear. A beautiful painting made for a trichromat human would be apt to look ugly to people with the five color pigments (including ultraviolet!) of a pigeon. A sculpture is meant to be observed with visible light, rather than x-rays, and a specific set of points of view are intended--for instance, most sculptures are meant to be looked at from the outside rather than the inside (the inside of a beautiful statue can be ugly). So when we evaluate the aesthetic qualities of a work of art, we evaluate a pair: "the object itself" and the set of intended deployments of perception. But "perception" here must be understood broadly enough to include language processing. The same sequence of sounds can be nonsense in one language, an exquisite metaphor in another, and trite in a third. And once we include language processing, it's hard to see where to stop in the degree of cognitive update to be specified in the set of deployments of perception (think, for instance, about the background knowledge needed to appreciate many works).

Furthermore, for every physical object, there is a possible deployment of a possible perceptual apparatus that decodes the object into something with the structure of the Mona Lisa or of War and Peace. We already pretty much have the technology to make smart goggles that turn water bottles in the visual field into copies of Michelangelo's David, and someone could make sculptures designed to be seen only through those goggles. (Indeed, the first exhibit could just be a single water bottle.) And if one insists that art must be viewable without mechanical aids--an implausible restriction--one could in principle genetically engineer a human who sees in such a way.

Thus any object could be beautiful, sublime or ugly, when paired with the right set of deployments of perceptual apparatus, including of cognitive faculties. This sounds very subjectivistic, but it's not. For the story is quite compatible with there being a non-trivial objective fact about which pairs of object and set of perceptual deployments exhibit which aesthetic qualities.

Still, the story does make for trivialization. I could draw a scribble on the board and then specify: "This scribble must be seen through a perceptual deployment that makes it into an intricate work of beautiful symmetry." On the above view, I will have created a beautiful work of art relative to the intended perceptual deployments. But I will have outsourced all of the creative burden onto the viewer who will need to, say, design distorting lenses that give rise to a beautiful symmetry when trained on the scribble. That's like marketing a pair of chopsticks as a device that is guaranteed to rid one's home of mosquitoes if the directions are followed, where the directions say: "Catch mosquito with chopsticks, squish, repeat until done." One just isn't being helpful.

Tuesday, October 20, 2015

Final causation

A standard picture of final causation is this. A has a teleological directedness to engaging in activity F for the sake of producing B. Then B explains A's engaging in F by final causation. This picture is mistaken for one or two reasons. First, suppose that an interfering cause prevents B from arising from activity F. The existence of an interfering cause at this point does nothing to make it less explicable why F occurred. But it destroys the explanation in terms of B, since there is no B. Second, and more speculatively, it is tokens of things and events that enter into explanations, but teleology typically involves types not tokens. Thus, if B enters into the explanation of F, it will be a token, but then A's engagement in F won't be directed at B, but at something of such-and-such a type. In other words, we shouldn't take the "for the sake of" in statements like "A engaged in F for the sake of ___" to be an explanation. For if it's to be an explanation, the blank will need to be filled out with a particular token, say "B", but true "for the sake of" claims (at least in paradigmatic cases) have the right hand side filled in with an instance of a type, say "a G".

I think there is something in the vicinity of final causation, but it's not a weird backwards causation. Rather, in some cases A's engagement in F produces B in a way that is a fulfillment of a teleological directedness in A. In that case the engagement in F to produce B in fulfillment of a teleology in A is explained by that teleology. In less successful cases--say, ones where an interfering cause is present--we can at least say that A's engagement in F is explained by that teleology. In these less successful cases, there is in one way less to be explained--success is absent and hence does not need to be explained--but there is still a teleological explanation (and there will also be an explanation of the lack of success, due to interfering causes). But in any case, there is no backwards-looking causation.

Monday, October 19, 2015

Being trusting

This is a followup on the preceding post.

1. Whenever the rational credence of p is 0.5 on some evidence base E, at least 50% of human agents who assign a credence to p on E will assign a credence between 0.25 and 0.75.

2. The log-odds of the credence assigned by human agents given an evidence base can be appropriately modeled by the log-odds of the rational credence on that evidence base plus a normally distributed error whose standard deviation is small enough to guarantee the truth of 1.

3. Therefore, if I have no evidence about a proposition p other than that some agent assigned credence r on her evidence base, I should assign a credence at least as far from 0.5 as F(r), where:

  • F(0.5) = 0.5
  • F(0.6) = 0.57
  • F(0.7) = 0.64
  • F(0.8) = 0.72
  • F(0.9) = 0.82
  • F(0.95) = 0.89
  • F(0.98) = 0.95
  • F(0.99) = 0.97

4. This is a pretty trusting attitude.

5. So, it is rational to be pretty trusting.

The trick behind the argument is to note that (1) and (2) guarantee that the standard deviation of the normally distributed error on the log-odds is less than 1.63, and then we just do some numerical integration (with Derive) to compute the expected value of the rational credence.
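A minimal Python reconstruction of that computation (the post used Derive; the trapezoidal integration scheme and the flat prior on the rational log-odds are my assumptions) approximately recovers the table:

```python
import math

# Premise 1 caps the noise: at least half of the error mass on the
# log-odds scale must lie within log(3) of zero (i.e. keep the credence
# between 0.25 and 0.75), so sigma <= log(3) / 0.6745, where 0.6745 is
# the 75th percentile of the standard normal. That gives about 1.63.
SIGMA = math.log(3) / 0.6745

def sigmoid(x):
    """Convert log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def F(r, sigma=SIGMA, steps=4000):
    """Expected rational credence given a reported credence r.

    Model: reported log-odds = rational log-odds + normal error with
    standard deviation sigma; with a flat prior on the rational
    log-odds, the posterior over the rational log-odds is
    N(logit(r), sigma^2). Trapezoidal numerical integration."""
    mu = math.log(r / (1.0 - r))
    lo, hi = mu - 10.0 * sigma, mu + 10.0 * sigma
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        x = lo + i * h
        weight = 0.5 if i in (0, steps) else 1.0
        density = math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) \
            / (sigma * math.sqrt(2.0 * math.pi))
        total += weight * sigmoid(x) * density * h
    return total
```

On this model F(0.7) comes out near 0.64 and F(0.9) near 0.82, close to the values in the table above.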

Correcting Bayesian calculations

Normally, we take a given measurement to be a sample from a bell-curve distribution centered on the true value. But we have to be careful. Suppose I report to you the volume of a cubical cup. What the error distribution is like depends on how I measured it. Suppose I weighed the cup before and after filling it with water. Then the error might well have the normal distribution we associate with the error of a scale. But suppose instead I measure the (inner) length of one of the sides of the cup, and then take the cube of that length. Then the measurement of the length will be normally distributed, but not the measurement of the volume. Suppose that what I mean by "my best estimate" of a value is the mathematical expectation of that value with respect to my credences. Then it turns out that my best estimate of the volume shouldn't be the cube of the side length, but rather it should be L^3 + 3Lσ^2, where L is the side-length and σ is the standard deviation in the side-length measurements. Intuitively, here's what happens. Suppose I measure the side length at 5 cm. Now, it's equally likely that the actual side length is 4 cm as that it is 6 cm. But 4^3 = 64 and 6^3 = 216. The average of these two equally-likely values is 140, which is actually more than 5^3 = 125. So if by best estimate I mean the estimate that is the mathematical expectation of the value with respect to my credences, the best estimate for the volume should be higher than the cube of the best estimate for the side-length. (I'm ignoring complications due to the question whether the side-length could be negative; in effect, I'm assuming that σ is quite a bit smaller than L.)
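The L^3 + 3Lσ^2 formula can be checked by simulation; in this Python sketch the measured side length and error scale are my own illustrative numbers:

```python
import random

def expected_volume(L=5.0, sigma=0.5, n=200_000, seed=0):
    """Monte Carlo estimate of E[(L + e)^3] for normal error e with
    mean 0 and standard deviation sigma, to compare against the closed
    form L^3 + 3*L*sigma^2.

    (The odd-order terms 3*L^2*E[e] and E[e^3] vanish because the error
    distribution is symmetric about zero, leaving only the 3*L*sigma^2
    correction to the naive cube.)"""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        e = rng.gauss(0.0, sigma)
        total += (L + e) ** 3
    return total / n

# Closed form for L = 5, sigma = 0.5: 125 + 3*5*0.25 = 128.75,
# noticeably above the naive 5**3 = 125.
```

The simulated mean lands near 128.75 rather than 125, confirming that cubing the best length estimate underestimates the expected volume.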

There is a very general point here. Suppose that by the best estimate of a quantity I mean the mathematical expectation of that quantity. Suppose that the quantity y I am interested in is given by the formula y = f(x), where x is something I directly measure and where my measurement of x has a symmetric error distribution (errors of the same magnitude in either direction are equally likely). Then if f is a strictly convex function, my best estimate for y should actually be bigger than f(x): simply taking my best estimate for x and applying f will underestimate y. On the other hand, if f is strictly concave, then my best estimate for y should be smaller than f(x).

But now let's consider something different: estimating the weight of evidence. Suppose I make a bunch of observations and update in a Bayesian way on the basis of them to arrive at a final credence. Now, it turns out that when you formulate Bayes' theorem in terms of the log-odds-ratio, it becomes a neat additive theorem:

  • posterior log-odds-ratio = prior log-odds-ratio + log-likelihood-ratio.
[If p is the probability, the log-odds-ratio is log(p/(1−p)). If E is the evidence and H is the hypothesis, the log-likelihood-ratio is log(P(E|H)/P(E|~H)).] As we keep adding new evidence into the mix, we keep adding new log-likelihood-ratios to the log-odds-ratio. Assuming competency in doing addition, there are two or three sources of error--sources of potential divergence between my actual credences and the rational credences given the evidence. First, I could have stupid priors. Second, I could have the wrong likelihoods. Third, perhaps, I could fail to identify the evidence correctly. Given the additivity between these errors, it's not unreasonable to think that error in the log-odds-ratio will be approximately normally distributed. (All I will need for my argument is that it has a distribution symmetric around some value.)
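The additive form is algebraically equivalent to ordinary Bayes' theorem, as a short sketch verifies (the particular priors and likelihoods are placeholders):

```python
import math

def posterior_via_bayes(prior, likelihood_h, likelihood_not_h):
    """Ordinary Bayes' theorem:
    P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))."""
    num = likelihood_h * prior
    return num / (num + likelihood_not_h * (1.0 - prior))

def posterior_via_log_odds(prior, likelihood_h, likelihood_not_h):
    """Additive form: posterior log-odds-ratio =
    prior log-odds-ratio + log-likelihood-ratio."""
    log_odds = math.log(prior / (1.0 - prior)) \
        + math.log(likelihood_h / likelihood_not_h)
    return 1.0 / (1.0 + math.exp(-log_odds))
```

Each new independent piece of evidence just adds another log-likelihood-ratio term, which is what makes the error analysis on the log-odds scale natural.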

But as the case of the cubical cup shows, it does not follow that the error in the credence will be normally distributed. If x is the log-odds-ratio and p is the probability or credence, then p = e^x/(e^x + 1). This is a very pretty function. It is concave for log-odds-ratios bigger than 0, corresponding to probabilities bigger than 1/2, and convex for log-odds-ratios smaller than 0, corresponding to probabilities less than 1/2, though it is actually fairly linear over a range of probabilities from about 0.3 to 0.7.

We can now calculate an estimate of the rational credence by applying the function e^x/(e^x + 1) to the log-odds-ratio. This will be equivalent to the standard Bayesian calculation of the rational credence. But as we learn from the cube case, we don't in general get the best estimate of a quantity y that is a mathematical function of another quantity x by measuring x with normally distributed error and computing the corresponding y. When the function in question is convex, my best estimate for y will be higher than what I get in this way. When the function is concave, I should lower it. Thus, as long as we are dealing with small normal error in the log-odds-ratio, when we are dealing with probabilities bigger than around 0.7, I should lower my credence from that yielded by the Bayesian calculation, and when we are dealing with probabilities smaller than around 0.3, I should raise my credence relative to the Bayesian calculation. When my credence is between 0.3 and 0.7, to a decent approximation I can stick to the Bayesian credence, as the transformation function between log-odds-ratios and probabilities is pretty linear there.

How much difference does this correction to Bayesianism make? That depends on what the actual normally distributed error in log-odds-ratios is. Let's make up some numbers and plug into Derive. Suppose my standard deviation in log-odds-ratio is 0.4, which corresponds to an error of about 0.1 in probabilities when around 0.5. Then the correction makes almost no difference: it replaces a Bayesian's calculation of a credence 0.01 with a slightly more cautious 0.0108, say. On the other hand, if my log-odds-ratio standard deviation is 1, which corresponds with a variation of probability of around plus or minus 0.23 when centered on 0.5, then the correction changes a Bayesian's calculation of 0.01 to the definitively more cautious 0.016. But if my log-odds-ratio standard deviation is 2, corresponding to a variation of probability of 0.38 when centered on 0.5, then the correction changes a Bayesian's calculation of 0.01 to 0.04. That's a big difference.
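These numbers can be reproduced by numerically integrating the logistic transform against a normal distribution on the log-odds scale. The sketch below is my reconstruction of that calculation (the post's own numbers came from Derive, and the flat prior on the rational log-odds is my assumption):

```python
import math

def corrected_credence(p, sigma, steps=4000):
    """Expected rational credence when the Bayesian calculation yields
    credence p but the log-odds-ratio carries normal error with
    standard deviation sigma.

    Integrates e^x/(e^x + 1) against a normal density centered at
    logit(p) by the trapezoid rule."""
    mu = math.log(p / (1.0 - p))
    lo, hi = mu - 10.0 * sigma, mu + 10.0 * sigma
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        x = lo + i * h
        weight = 0.5 if i in (0, steps) else 1.0
        density = math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) \
            / (sigma * math.sqrt(2.0 * math.pi))
        total += weight * (1.0 / (1.0 + math.exp(-x))) * density * h
    return total

# corrected_credence(0.01, 0.4) is about 0.011, with sigma = 1 about
# 0.016, and with sigma = 2 roughly 0.04, matching the figures above.
```

Note how the correction only bites when the log-odds uncertainty is large; for sigma around 0.4 the Bayesian answer is essentially untouched.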

There is an important lesson here. When I am badly unsure of the priors and/or likelihoods, I shouldn't just run with my best guesses and plug them into Bayes' theorem. I need to correct for the fact that my uncertainty about priors and/or likelihoods is apt to be normally distributed (or at least symmetrically distributed about the right value) on the log-odds scale, not on the probability scale.

This could be relevant to the puzzle that some calculations in the fine-tuning argument yield way more confirmation than is intuitively right (I am grateful to Mike Rota for drawing my attention to this puzzle, in a talk he gave at the ACPA).

A puzzle about testimony

You weigh a bag of marbles on a scale about whose accuracy you have no information, and the scale says that the bag weighs 342 grams. If you have no background information about the bag of marbles, your best estimate of the weight of the bag is 342 grams. It would be confused to say: "I should discount for the unreliability of the scale and take my best estimate to be 300 grams." For if one has no information about the scale's accuracy, one should not assume that the scale is more likely to overestimate than to underestimate by a given amount. So far so good. Now, suppose that instead of your using the scale, you give me the bag, I hold it in my hand and say: "That feels like 340 grams." Again, your best estimate of the weight will now be 340 grams. You don't know whether I am apt to overestimate or underestimate, so it's reasonable to just go with what I said.

But now consider a different case. You have no background information about my epistemic reliability and you have no evidence regarding a proposition p, but I inform you that I have some relevant evidence and that I estimate the weight of that evidence at 0.8. It seems that the same argument as before should make you estimate the weight of the evidence available to me at 0.8. But that's all the evidence available right now to either of us, so you should thus assign a credence of 0.8 to p. The puzzle is that this is surely much too trusting. Given no information about my reliability, you would surely discount, maybe assigning a credence of 0.55 (but probably not much less). Yet doesn't the previous argument go through? I could be overestimating the weight of the evidence. But I could also be underestimating it. By discounting the probability, you are overestimating the probability of the denial of p, and that's bad.

There is, however, a difference between the weight of evidence and the weight of marbles. The weight of marbles can be any positive real number. And if we take really seriously the claim that there is no background information about the marbles, it could be a negative number as well. So we can reasonably say that I or the scale could equally well be mistaken in the upward or the downward direction. However, if we know anything about probabilities, we know that they range between 0 and 1. So my estimate of 0.8 has more room to be an overestimate than to be an underestimate. It could, for instance, be too high by 0.3, with the correct estimate of the weight of my evidence being 0.5, but it couldn't be too low by 0.3, for then the correct estimate would be 1.1. We can thus block the puzzling argument for trust. Though that doesn't mean the conclusion of the argument is wrong.
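
The asymmetry can be made vivid with a toy simulation (the uniform prior over the true weight of evidence and the noise level of 0.2 are stipulations of mine, not part of the argument): condition on my reporting approximately 0.8, and the best estimate of the true weight falls below 0.8.

```python
import random

random.seed(1)

true_vals = []
for _ in range(400000):
    t = random.random()                  # true weight of evidence, uniform on [0, 1]
    report = t + random.gauss(0.0, 0.2)  # symmetric noise on the reported weight
    if abs(report - 0.8) < 0.02:         # condition on a report near 0.8
        true_vals.append(t)

posterior_mean = sum(true_vals) / len(true_vals)
# Because the true weight is confined to [0, 1], a report of 0.8 is more
# likely to be an overestimate than an underestimate, so the posterior
# mean falls below 0.8 even though the noise itself is symmetric.
assert 0.70 < posterior_mean < 0.78
```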

Friday, October 16, 2015

Musings on mathematics, logical implication and metaphysical entailment

I intuitively find the following picture very plausible. On the one hand, there are mathematical claims, like the Banach-Tarski Theorem or Euclid's Theorem on the Infinitude of the Primes. These are mysterious (especially the former!), and tempt one to some sort of non-realism. On the other hand, there are purely logical claims, like the claim that the ZFC axioms logically entail the Banach-Tarski Claim or that the Peano Axioms logically entail the Infinitude of the Primes. Pushed further, this intuition leads to something like logicism, which we all know has been refuted by Goedel. But I want to note that the whole picture is misleading. What does it mean to say that p logically entails q? Well, there are two stories. One is that every model of p is a model of q. That's a story about models, which are mathematical entities (sets or classes). Claims about models are mathematical claims in their own right, claims in principle just as tied to set-theoretic axioms as the Banach-Tarski Theorem. The other reading is that there is a proof from p to q. But proofs are sequences of symbols, and sequences of symbols are mathematical objects, and facts about the existence or non-existence of proofs are once again mathematical facts, tied to axioms and subject to the same foundational worries as other mathematical facts. So the idea that there is some radical difference between first-order mathematical claims and claims about what logically entails what, such that the latter are innocent of deep philosophy of mathematics issues (like Platonism), is untenable.

Interestingly, however, what I said is no longer true if we replace logical entailment with metaphysical entailment. The claim that the ZFC axioms metaphysically entail the Banach-Tarski Claim is not a claim of mathematics per se. So one could make a distinction between the mysterious claims of mathematics and the unmysterious claims of metaphysical entailment--if the latter are unmysterious. (They are unmysterious if one accepts the causal theory of them.)

This line of thought suggests an interesting thing: the philosophy of mathematics may require metaphysical entailment.

Tuesday, October 13, 2015

The afterlife and horrendously evil universes

  1. The moral intuitions of people who are a constituent part of a horrendously evil whole shouldn't be trusted absent significant evidence for the trustworthiness of these intuitions independent of these intuitions.
  2. Our moral intuitions should be trusted.
  3. We do not have significant evidence for the trustworthiness of these intuitions independent of these intuitions.
  4. We are constituent parts of the universe.
  5. A universe in which good persons permanently cease to exist is horrendously evil. (Think of the incredible unrepeatable value of persons.)
  6. So, we do not permanently cease to exist.
This is loosely based on an insight by Gabriel Marcel about the connection between 2, 5 and 6.

Monday, October 12, 2015

Virtue epistemology and Bayesian priors

I wonder if virtue epistemology isn't particularly well-poised to solve the problem of prior probabilities. To a first approximation, you should adopt those prior probabilities that a virtuous agent would in a situation with no information. This is perhaps untenable, because maybe it's impossible to have a virtuous agent in a situation with no information (maybe one needs information to develop virtue). If so, then, to a second approximation, you should adopt those prior probabilities that are implicit in a virtuous agent's epistemic practices. Obviously a lot of work is needed to work out various details. And I suspect that the result will end up being kind-relative, like natural law epistemology (of which this might be a species).

Exploring the moon with Minetest

I had a really good time at the ACPA over the weekend, but I also had some spare time at the airport, on the plane and in the hotel, so I entertained myself by finishing off a lunar mod for Minetest (a free Minecraft-like game, which works a lot better than Minecraft on old hardware) that uses real-world data from NASA's LRO spacecraft to generate lunar terrain. To make this more Minetest-y, I flattened the moon out into two pancakes. And I added some sky background textures from SpaceEngine. (The Enterprise isn't a part of this mod, but is generated with the script for RaspberryJamMod.)

Sunday, October 11, 2015

An Aristotelian argument for a causal principle

Start with these assumptions:

  1. Laws of nature are grounded in the powers of things. (I.e., Aristotelian picture of laws.)
  2. Space can be infinite.
  3. Newtonian physics is metaphysically possible.
There is a somewhat hand-waving argument that if (1)-(3) are true, then an object cannot come into existence ex nihilo for no cause at all, and hence we have a causal principle.

Here's why. Say that a gridpoint in a Newtonian three-dimensional space is a point with coordinates (x,y,z) where x, y, and z are integers (in some fixed unit system).

Given (1)-(3) and assuming that objects can pop into existence ex nihilo, it should be possible to start with a universe of finite total mass and then for Newtonian particles of equal non-zero mass to simultaneously pop into existence at all and only those gridpoints (x,y,z) where z is positive, with nothing popping into existence elsewhere. Here's why. At each gridpoint, an object should be able to pop into existence. But objects that pop into existence causelessly at one location in space would be doing so in complete oblivion of what happens at other gridpoints. There should be total logical independence between all the poppings into existence. If so, then any combination of poppings or non-poppings should be able to happen at the gridpoints, and in particular, it should be possible to have particles of equal mass pop into existence at the gridpoints with positive z-coordinates but nowhere else. But if this happened, then each particle would experience an infinite force in the direction of the z-axis (this follows from Newton's shell theorem and some approximation work), which would result in an infinite acceleration, which is absurd.
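
The divergence of the force can also be seen by brute numerical summation, without the shell theorem. Here is a sketch of mine (in units where Gm² = 1, summing the z-component of the gravitational force at the origin, just below the filled half-lattice, out to a finite cutoff N):

```python
def z_force(N):
    """z-component of the gravitational force at the origin from unit
    masses at gridpoints (x, y, z) with |x|, |y| <= N and 1 <= z <= N,
    in units where G*m*m = 1. Each mass contributes z / r^3 (inverse-square
    magnitude 1/r^2 times the direction cosine z/r)."""
    total = 0.0
    for x in range(-N, N + 1):
        for y in range(-N, N + 1):
            for z in range(1, N + 1):
                r2 = x * x + y * y + z * z
                total += z / r2 ** 1.5
    return total

f10, f20, f40 = z_force(10), z_force(20), z_force(40)
# The partial sums keep growing roughly linearly in the cutoff:
assert f10 < f20 < f40
assert f40 > 1.7 * f20  # no sign of convergence as the cutoff doubles
```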

A relativistic version of this argument would require that spacetime can be infinite, so we could arrange the particles popping into existence along a single backwards light-cone.

There is a more general point here. The above example will remind regular readers of an argument I recently gave for causal finitism. I think many paradox-based arguments for causal finitism can be turned into arguments for causal principles in something like the above way. If this is right, this is very cool, because then we can get both premises of a Kalaam Cosmological Argument out of the paradoxes.

Friday, October 9, 2015

Prudential rationality

Prudential rationality is about what an agent should do in the light of what is good or bad for the agent. Prudential or self-interested rationality is a major philosophical topic and is considered fairly philosophically fundamental. Why? There are many (infinitely many) other categories of goods and bads, and for each category it makes sense to ask what one should do in the light of that category. For instance, family rationality is concerned with what an agent should do in the light of what is good or bad for people in the agent's family; leftward rationality is concerned with the good and bad for the people located to the agent's left; nearest-neighbor rationality with the good or bad for the person other than the agent whose center of mass is closest to the agent's center of mass; green-eye rationality with the good or bad for green-eyed people; and descendant rationality with the good or bad for one's descendants. Why should prudential rationality get singled out as a topic?

It's true that in terms of agent-relative categories, the agent is particularly natural. But the agent's descendants is also a quite natural agent-relative category.

This question reminds me of this thought (inspired by Nancy Cartwright's work). Physicists study things that don't exist. They study the motion of objects in isolated gravitational systems, in isolated quantum systems, and so on. But there are no isolated systems, and in any real system a number of other forces are at work. It is, however, sometimes useful to study the influences that particular forces would have on their own.

However, in the end what we want to predict in physics is how real things move. And they move in the light of all the forces. And likewise in action theory we want to figure out how real people should act. And they should act in the light of all the goods and bads. We get useful insight into how and why real things move by studying how they would move if they were isolated or if only one force was relevant. We likewise get useful insight into how and why real people should act by studying what actions would be appropriate if they were isolated or if only one set of considerations were relevant. As a result we have people who study prudential rationality and people who study epistemic rationality.

It is nonetheless crucial not to forget that the study of how one should act in the light of a subset of the goods and bads is not a study of how one should act, but only a study of how one would need to act if that subset were all that's relevant, just as the study of gravitational systems is not a study of how things move, but only of how things would move if gravity were all that's relevant.

That said, I am not sure prudential rationality is actually that useful to study. Its main value is that it restricts the goods and bads to one person, thereby avoiding the difficult problem of balancing goods and bads between persons (and maybe even non-persons). But that value can be had by studying not prudential or self-interested rationality, but one-recipient rationality, where one studies how one should act in the light of the goods and bads to a single recipient, whether that recipient is or is not the agent.

It might seem innocent to make the simplifying assumption that the single recipient is the agent. But I think that doing this has a tendency to hide important questions that become clearer when we do not make this assumption. For instance, when one studies risk-averseness, one loses sight of the crucially important question of whose risk-averseness is relevant: the agent's or the recipient's? Presumably both, but they need to interact in a subtle and important way. To study risk-averseness in the special case where the recipient is the agent risks losing sight of something crucial in the phenomenon, just as one loses a lot of structure when instead of studying a mathematical function of two variables, say, f(x,y)=sin x cos y, one studies merely how that function behaves in the special case where the variables are equal. Although one does simplify by not studying the interaction between the agent's and the recipient's risk-averseness, one does so at the cost of conflating the two and not knowing which aspect of one's results is due to the risk-averseness of the person qua agent and which part is due to the risk-averseness of the person qua recipient.
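
The mathematical point is easy to verify (the second function is an example of mine): two quite different functions of two variables can agree everywhere on the diagonal x = y, so the diagonal restriction cannot recover the two-variable structure.

```python
import math

def f1(x, y):
    return math.sin(x) * math.cos(y)

def f2(x, y):
    return math.sin(y) * math.cos(x)

# On the diagonal x = y the two functions are indistinguishable...
for k in range(100):
    x = k * 0.1
    assert abs(f1(x, x) - f2(x, x)) < 1e-12
# ...but off the diagonal they come apart.
assert abs(f1(1.0, 2.0) - f2(1.0, 2.0)) > 0.5
```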

Similarly, when one is interested--as decision theorists centrally are--in decision-making under conditions of uncertainty, it is important to distinguish between the relevance of the uncertainty of the person qua agent and the uncertainty of the person qua recipient. When we do that, we might discover a structure that was hidden in the special case where the agent and recipient are the same. For instance, we may discover that with respect to means the agent's uncertainty is much more important than the recipient's, but with respect to ends the recipient's uncertainty is very important.

To go back to the gravitational analogy, it's very useful to consider the gravitational interaction between particles x and y. But we lose an enormous amount of structure when we restrict our attention to the case where x=y. We would do better to make the simplifying assumption that we're considering two different particles, and then think of the one-particle case as a limiting case. Likewise for rationality. While we do need to study simplified cases, we need to choose the cases in a way that does not lose too much structure.

Of course, if we have an Aristotelian theory on which all one's actions are fundamentally aimed at one's own good, then what I say above will be unhelpful. For in that case, prudential rationality does capture the central structure of rationality. But such theories are simply false.

Wednesday, October 7, 2015

Fundamental chaos

By fundamental chaos I mean a violation of the Principle of Sufficient Reason, a situation that occurs for no cause at all, something brute. What would we expect fundamental chaos to look like? Suppose that for no cause at all a world made of a variety of blocks came into existence. Intuitively, we'd expect it to look something like the first image.

But there is no reason why it wouldn't instead look like the second image. After all, by hypothesis, there is no reason for it to look one way rather than another.

One might think that because most worlds look messy, we would expect the brutish world to look messy. But there are two problems with this argument. The technical problem is that while in my two images the worlds were created out of a finite variety of blocks within a finite universe, in reality there are infinitely many possible arrangements, and there are just as many neat-looking as messy-looking ones (after all, there are infinitely many worlds that look like the dragon world, differing in fine-scale details of what's inside the dragon).

But more seriously, even if there are more messy than neat worlds, it only follows that we should expect a messy world if the worlds are all equally probable. But when the worlds come about for no cause at all, in violation of the Principle of Sufficient Reason, there are no probabilities for the worlds, and so we cannot say that they are equally probable.

What this means is that the chaos hypothesis must be refuted a priori, not a posteriori. We need the Principle of Sufficient Reason.

Monday, October 5, 2015

The Principles of Indifference and Sufficient Reason

The Principle of Indifference says that we should assign equal probabilities to outcomes that are on par. Why? The thing to say is surely: "Well, there is no reason for one outcome to be more likely than another." But the equal probability of outcomes only follows from this remark if the Principle of Sufficient Reason holds so that when there is no reason for something, it doesn't happen. So it seems that the Principle of Indifference presupposes some version of the Principle of Sufficient Reason.

Personal identity, memory and fission

Suppose that (a) memory connections are constitutive of personal identity and (b) fission of memories destroys a person. If one accepts (a), then (b) is very plausible, so (a) is the crucial assumption.

Now consider this case:

  • At 4 pm, due to trauma, Sam suffers complete and irreversible amnesia with respect to events between 2 pm and 4 pm.

Then the 5 pm Sam has first-person memories of the 1 pm Sam, and it seems thus that:

  1. The 5 pm Sam is identical with the 1 pm Sam.
But the 3 pm Sam also has first-person memories of the 1 pm Sam, and by the same token:
  2. The 3 pm Sam is identical with the 1 pm Sam.
By symmetry and transitivity:
  3. The 3 pm Sam is identical with the 5 pm Sam.
There is as yet no absurdity here. There is, after all, a chain of memory connections between the 3 pm Sam and the 5 pm Sam, though the connections don't run in the same direction (3 pm Sam remembers 1 pm Sam who is remembered by 5 pm Sam). But I think there is a tension between (3) and (b), the claim about fission. For now imagine a different case:
  • At 2 pm, Sam's memories are copied into a spare brain, call it Bissam, and Bissam immediately time travels forward to 4 pm. (Forward time travel does not seem metaphysically problematic.) At 4 pm, Sam is killed.
This is clearly a case of fission, and so the 1 pm Sam no longer exists at 5 pm. But in terms of the structure of memories, this case is exactly the same as the initial amnesia case. The 5 pm Bissam remembers (or quasi-remembers, if we want to nitpick) the 1 pm Sam but not the 3 pm Sam. Likewise, in the original story, the 5 pm Sam remembers the 1 pm Sam but not the 3 pm Sam. In both stories the 3 pm Sam remembers the 1 pm Sam. So it seems that in both cases the 5 pm person and the 3 pm person are the results of the fission of the pre-2 pm person. Well, almost. Bissam exists for an instant at 2 pm while the memories are copied into him. But that isn't essential. We could imagine that the copying process works in such a way that the memories are only fully seated once Bissam arrives at 4 pm.

So the memory theorist who thinks that fission kills a person should think that total amnesia with respect to a short time period also kills one.

But if that's right, then we don't survive those nights where we do not remember our dreams upon waking up. For the dreaming person has memories (skill memories at least; but also temporarily inaccessible episodic memories) of the person who went to bed. But the waking person doesn't have memories of the dreaming person, though she does have memories of the person who went to bed. So the person who went to bed fissions into the person who dreams and the person who wakes up.

This means that the memory theorist shouldn't think that fission kills. (Another standard argument for this conclusion: If fission kills and identity is constituted by memory, then you can be killed by having your brain scanned and the data put into another brain; but you can't be killed by a process that doesn't affect your body.) But if fission doesn't kill, then it seems that the best view is that in cases of fission there have always been two persons. And that leads to various absurdities, too.

Friday, October 2, 2015

Thinking derivatively

In a lovely recent paper, Andrew Bailey has argued for the priority principle, that we think our thoughts not derivatively from another entity's thinking them--for instance, we don't think our thoughts derivatively from a brain's thinking them, or from a soul's thinking them, or a temporal part's thinking them, etc. I think this is all correct, but it seems to me that arguments of this sort don't go as far as they at first sight seem (this isn't a disagreement with anything Bailey writes).

For it has long appeared to me that the philosopher who is inclined to say that our thinking is derivative from the activity of a proper part of us should deny that the relevant activity of the proper part is thinking. Instead, there is some activity of the proper part which we might call "thinking*", and our thinking is derivative from the part's thinking*. For instance, a materialist might say that our brains think* (and analogously believe*, choose*, etc.), and that to think is to be a maximal organic whole that has a part that thinks*. This seems exactly right. For even if materialism is true, our brains don't think, but we think with our brains. And what our brains do when we think with them isn't thinking, and the materialist shouldn't say it is. (Likewise, when I nail something with a hammer, it is I who nail and not the hammer that nails. Nonetheless, the hammer does something which we can call nailing*, and I nail with a hammer if and only if I stand in the right kind of complex relationship to a hammer's nailing*.)

Once we see things this way, it unfortunately undercuts various arguments that otherwise I would be quite fond of. For instance, Trenton Merricks has a wonderfully clever argument against temporal parts on the grounds that if I have many temporal parts, then I don't know my age, since most of my present temporal parts are younger than me, and yet they think the same thoughts about age as I do. But wonderful as this argument is, and correct as its conclusion is (I don't have any proper temporal parts), the temporal part theorist should (though generally doesn't) say that my proper temporal parts have no opinions as to age, but only opinions*.

Could one strengthen Bailey's priority principle and say that my mental properties are fundamental? That's too strong. Plausibly some mental properties are not fundamental but are grounded in others. Maybe, though, we can say that some mental property is fundamental? That sounds right to me, but it's hard to argue for.

Thursday, October 1, 2015

Two ways of violating causal finitism

Causal finitism holds that the causal history of anything is finite. On purely formal grounds (and assuming the Axiom of Choice--or at least Dependent Choice), it turns out that there are exactly two ways that a world could violate causal finitism:

  1. The world contains an infinite regress.
  2. Some effect is caused by infinitely many causes.
Historically, a lot of attention has been paid to the first option, with arguments back and forth on whether this is possible. Not much attention has been paid to the second. But notice that in an infinite universe with Newtonian physics, we do have a type (2) violation of causal finitism, in that the motion of each object is instantaneously gravitationally caused by the pull of infinitely many objects. I suppose that insofar as that sort of a world seems logically possible, that's an argument against causal finitism, though not a decisive one.

But perhaps the last observation can be turned into an argument for causal finitism. For if it is possible to have infinitely many objects working together causally, it should be possible to have an infinite Newtonian universe. But it would be strange to suppose that some but not all infinite arrangements of physical objects are compossible with the Newtonian laws. After all, we can imagine asking: "What would happen if angels shuffled stuff?" So it should be possible to suppose a universe that has nothing in the half to my left, but an infinite number of objects arranged with uniform density throughout the half of the universe to my right. If that happened, I would experience an infinite force to the right (think of the gravitational force of a solid ball of uniform density at the surface: the Newtonian law makes the force proportional to the ball's radius, as the cube-dependence of the mass beats out the inverse-square-dependence), and I would accelerate infinitely to the right. That's impossible.
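
The point about the cube-dependence beating the inverse square can be sketched in a couple of lines (the density value is illustrative):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def surface_gravity(rho, R):
    """Gravitational acceleration at the surface of a uniform ball:
    g = G*M/R^2 with M = (4/3)*pi*rho*R^3, so g = (4/3)*pi*G*rho*R,
    which grows linearly in R."""
    M = (4.0 / 3.0) * math.pi * rho * R ** 3
    return G * M / R ** 2

rho = 5500.0  # kg/m^3, roughly Earth's mean density
# Doubling the radius doubles the surface gravity, so as R grows
# without bound, so does the force:
assert abs(surface_gravity(rho, 2e6) / surface_gravity(rho, 1e6) - 2.0) < 1e-9
```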