Friday, April 20, 2018

Non-instrumental pursuit

I pursue money instrumentally—for the sake of what it can buy—but I pursue fun non-instrumentally.

Here’s a tempting picture of the instrumental/non-instrumental difference as embodied in the money fun example:

  1. Non-instrumental pursuit is a negative concept: it is instrumental pursuit minus the instrumentality.

But (1) is mistaken for at least two reasons. The shallower reason is an observation we get from the ancients: it is possible to simultaneously pursue the same goal both instrumentally and non-instrumentally. You might have fun both non-instrumentally and in order to rest. But then lack of instrumentality is not necessary for non-instrumental pursuit.

The deeper reason is this. Suppose I am purely instrumentally pursuing money for the sake of what it can buy, but I then remove the instrumentality, either by ceasing to pursue things that can be bought or by ceasing to believe that money can buy things, without adding any new motivations to my will. Then clearly the pursuit of money rationally needs to disappear—if it remains, that is a clear case of irrationality. But if non-instrumental pursuit were simply an instrumental pursuit minus the instrumentality, then why wouldn’t the removal of the instrumentality from my pursuit of money leave me non-instrumentally and rationally pursuing money, just as I non-instrumentally and rationally pursue fun?

There is a positive element in my pursuit of fun, a positive element that would be lacking in my pursuit of money if I started with instrumental pursuit of money and took away the instrumentality and somehow (perhaps per impossibile) continued (but now irrationally) pursuing money. It is thus more accurate to talk of “pursuit of a goal for its own sake” than to talk of “non-instrumental pursuit”, as the latter suggests something negative.

The difference here is somewhat like the difference between the concepts of an uncaused being and a self-existent being. If you take away the cause of a brick and yet keep the brick (perhaps per impossibile), you have a mere uncaused being. That’s not a self-existent being like God is said to be.

Thursday, April 19, 2018

Affronts to human dignity

Some evils are not just very bad. They are affronts to human dignity. But those evils, paradoxically, provide an argument for the existence of God. We do not know what human dignity consists in, but it isn’t just being an agent, being really smart, etc. For human dignity to play the sort of moral role it does, it needs to be something beyond the physical, something numinous, something like a divine spark. And on our best theories of what things are like if there is no God, there is nothing like that.

So:

  1. There are affronts to human dignity.

  2. If there are affronts to human dignity, there is human dignity.

  3. If there is human dignity, there is a God.

  4. So, there is a God.

This argument is very close to the one I made here, but manages to avoid some rabbit-holes.

Wednesday, April 18, 2018

Van Inwagen on evil

Peter van Inwagen argues that because a little less evil would always serve God’s ends just as well, there is no minimum to the amount of evil needed to achieve God’s ends, and hence the arguer from evil cannot complain that God could have achieved his ends with less evil. Van Inwagen gives a nice analogy of a 10-year prison sentence: clearly, he thinks, a 10-year sentence can be just even if 10 years less a day would achieve all the purposes of the punishment just as well.

I am not convinced about either the punishment or the evil case. Perhaps the judge really shouldn’t choose a punishment where a day less would serve the purposes just as well. I imagine that if we graph the satisfaction of the purposes of punishment against the amount of punishment, we initially get an increase, then a level area, and then eventually a drop-off. Van Inwagen is thinking that the judge is choosing a punishment in the level area. But maybe instead the judge should choose a punishment in the increase area, since only then will it be the case that a lower punishment would serve the purposes of the punishment less well. The down-side of choosing the punishment in that area is that a higher punishment would serve the purposes of the punishment better. But perhaps there is a moral imperative to sacrifice the purposes of punishment to some degree, in the name of not punishing more than is necessary. Mercy is more important than retribution, etc.

Similarly, perhaps, God should choose to permit an amount of evil that sacrifices some of his ends (ends other than the minimization of evil), in order to ensure that the amount of evil that he permits is such that any decrease in the evil would result in a decrease in the satisfaction of God’s other ends. If van Inwagen is right about there not being sharp cut-offs, then this may require God to choose to permit an amount of evil such that more evil would have served God’s other ends better.

The above fits with a picture on which decrease of evil takes a certain priority over the increase of good.

Tuesday, April 17, 2018

In vitro fertilization and Artificial Intelligence

The Catholic Church teaches that it is wrong for us to intentionally reproduce by any means other than marital intercourse (though things can be done to make marital intercourse more fertile than it otherwise would be). In particular, human in vitro fertilization is wrong.

But there is clearly nothing wrong with our engaging in in vitro fertilization of plants. And I have never heard a Catholic moralist object to the in vitro fertilization of farm animals.

Suppose we met intelligent aliens. Would it be permissible for us to reproduce them in vitro? I think the question hinges on whether what is wrong with in vitro fertilization has to do with the fact that the creature that is reproduced is one of us or has to do with the fact that it is a person. I suspect it has to do with the fact that it is a person, and hence our reproducing non-human persons in vitro would be wrong, too. Otherwise, we would have the absurd situation where we might permissibly reproduce an alien in vitro, and they would permissibly reproduce a human in vitro, and then we would swap babies.

But if what is problematic is our reproducing persons in vitro, then we need to look for a relevant moral principle. I think it may have something to do with the sacredness of persons. When something is sacred, we are not surprised that there are restrictions. Sacred acts are often restricted by agent, location and time. They are something whose significance goes beyond humanity, and hence we do not have the authority to engage in them willy-nilly. It may be that the production of persons is sacred in this way, and hence we need the authority to produce persons. Our nature testifies to us that we have this authority in the context of marital intercourse. We have no data telling us that we are authorized to produce persons in any other way, and without such data we should not do it.

This would have a serious repercussion for artificial intelligence research. If we think there is a significant chance that strong AI might be possible, we should stay away from research that might well produce a software person.

The independence of the attributes in Spinoza

According to Spinoza, all of reality—namely, deus sive natura and its modes—can be independently understood under each of (at least) two attributes: thought and extension. Under the attribute of thought, we have a world of ideas, and under the attribute of extension, we have a world of bodies. There is identity between the two worlds: each idea is about a body. We have a beautiful account of the aboutness relation: the idea is identical to the body it is about, but the idea and body are understood under different attributes.

But here is a problem. It seems that to understand an idea, one needs to understand what the idea is about. But this seems to damage the conceptual independence of the attributes of thought and extension, in that one cannot fully understand the aboutness of the ideas without understanding extension.

I am not sure what to do about this.

Monday, April 16, 2018

The Repugnant Conclusion and Strong AI

Derek Parfit’s Repugnant Conclusion says that, on standard utilitarian assumptions, if n is sufficiently large, then n lives of some minimal level of flourishing will be better than any fixed-size society of individuals who greatly flourish.

I’ve been thinking about the interesting things that you can get if you combine the Repugnant Conclusion argument with strong Artificial Intelligence.

Assume utilitarianism first.

Given strong Artificial Intelligence, it should be possible to make a computer system that achieves some minimal level of human-like flourishing. Once that is achieved, economies of scale become possible, and I expect it should be possible to replicate that system a vast number of times, and to do so much more cheaply per copy than the cost of supporting a single human being. Note that the replication can be done both synchronically and diachronically: we should optimize the hardware and software in such a way as both to make lots of instances of the hardware and to run as many flourishing lives per day as possible. Once the program is written, since an exact copy is being run for each instance with the same inputs, we can assure equal happiness for all.

If strong AI is possible, generating such minimally flourishing AI and making a vast number of replicates seems a more promising way to increase utility than fighting disease and poverty among humans. Indeed, it would likely be more efficient to decrease the number of humans to the minimum needed to serve the great number of duplicates. At that point, the morally best thing for humans to do will be to optimize the hardware to allow us to build more computers running the happy-ish software and to run each life in as short an amount of external time as possible, and to work to increase the amount of flourishing in the software.

Now note an interesting difference from the traditional Repugnant Conclusion. It seems not unlikely that if strong AI is achieved, we will be able to repeatably, safely and cheaply achieve in software not just the minimal levels of human-like flourishing, but high levels of human-like flourishing, even of forms of flourishing other than the pleasure or desire fulfillment that classical utilitarian theories talk about. We could make a piece of software that quickly and cheaply enjoys the life of a classical music aficionado, enjoying the best examples of human classical music culture, and that has no hankering for anything more. And if compatibilism is true (and it is likely that it is true if strong AI is true), then we could make a piece of software that reliably engages in acts of great moral heroism in its simulated world. We lose a bit of value from the fact that these acts only affect a simulated world, but we gain by being able to ensure that no immoral activity mars the value. If we are not certain of the correct axiology, we could hedge our bets by making a software life that is quite flourishing on any plausible axiology: say one that combines pleasure, desire satisfaction, enjoyment of the arts and virtuous activity. And then just run vast numbers of copies of that life per day.

It is plausible that, unless there is some deep spiritual component to human flourishing (of a sort that is unlikely to be there given the materialism that seems needed for strong AI to be possible), we will not only be able to more efficiently increase the sum good by running lots of copies of a happy life than by improving human life, but we will be able to more efficiently improve on the average good.

But one thing is unchanged. The conclusion is still repugnant. A picture of our highest moral imperative being the servicing of a single computer program run on as many machines as possible, repeatedly and as quickly as possible, is repugnant.

A tempting objection is to say that multiple copies of the same life count as just one. That’s easily fixed: a well-controlled amount of algorithmic variation can be introduced into lives.

Observe, too, that the above line of thought is much more practical than the original Repugnant Conclusion. The original Repugnant Conclusion is highly theoretical, in that it is difficult to imagine putting into place the kind of society that is described in it without a significant risk of utility-destroying revolution. But right now rich philanthropists could switch their resources from benefiting the human race to working to develop a happy AI (I hesitate to write this sentence, with a slight fear that someone might actually make that switch—but the likelihood of my blog having such an effect seems small). One might respond to the Repugnant Conclusion that all ethical theories give implausible answers in some hypothetical cases. But the case here is not hypothetical.

We can take the above, just as the original Repugnant Conclusion, to be a reductio ad absurdum against utilitarianism. But it seems to be more than that. Any plausible ethics has to have a consequentialist component, even if pursuit of the consequences is restricted by deontic considerations. So on many competing ethical theories, there will still be a pull to the conclusion, given the vast amount of total value, and the respectable amount of average (and median) value achieved in the repugnant proposal. And one won’t be able to resist the pull by denying the picture of value that underwrites utilitarianism, because as noted above, “deeper” values can be achieved in software, given strong AI.

I can think of three plausible ways out of the strong AI version of the Repugnant Conclusion:

  1. The correct axiology lays great stress on the value of deep differences between lives, deeper than can be reliably and safely achieved through algorithmic variation (if there is too much variation, we risk producing misery).

  2. There is a deontic restriction prohibiting the production of software-based persons, perhaps because it is wrong for us to have such a total influence over the life of another person or because it is wrong for us to produce persons by any process other than natural reproduction.

  3. Strong AI is impossible.

I am inclined to think all three are true. :-)

Friday, April 13, 2018

Impairment and non-human organisms

Consider a horse with three legs, a bird with one wing, an oak tree without bark, and a yeast cell unable to reproduce. There is something that all four have in common with each other, and which they also have in common with the human who has only one leg. And it seems to me to be important for an account of disability to acknowledge that which all these five organisms have in common. If the right account of disability is completely disjoined from anything that happens in non-human organisms—or even from anything that happens in non-social organisms—then there is another concept in the neighborhood that we really should also be studying in addition to disability, maybe “impairment”.

Moreover, it seems clear the thing that the five organisms in my examples have in common is bad as far as it goes, though of course it might be good for the organism on balance (the one-winged bird might be taken into a zoo, and thereby saved from a predator).

Thursday, April 12, 2018

Divine authority over us

Imagine a custody battle between Alice and Bob over their child Carl. Suppose the court finds that Alice loves Carl much more than Bob does, that Alice is much wiser than Bob, and that Alice knows Carl and his needs much better than Bob does. Moreover, it is discovered that Bob has knowingly unjustifiedly harmed Carl, while Alice has never done that. In the light of these, it is obvious that Alice is a more fitting candidate to have authority over Carl than Bob is.

But now, suppose x is some individual. Then God loves x much more than I love x, God is much wiser than I, God knows x and his needs much better than I do. Moreover, suppose that I have knowingly unjustifiedly harmed x, while God has never done that. In light of these, it should be plausible that God is a more fitting candidate to have authority over x than I am.

Suppose, however, that I am x. The above is still true. God loves me much more than I love myself; God is much wiser than I; God knows me and my needs much better than I do. And I have on a number of occasions knowingly unjustifiedly harmed myself—indeed, in typical cases when I sin, that’s what has happened—while God has never knowingly unjustifiedly harmed me. So, it seems that God is a more fitting candidate to have authority over me than I am.

I am not endorsing a general principle that if someone loves me more than I love myself, etc., then they are more fit to have authority over me. For the someone might be someone that has little intuitive standing to have authority over me—a complete stranger who inexplicably enormously cares about me might not have much authority over me. But it is prima facie plausible that God has significant authority over me, for the same sorts of reasons that my parents had authority over me when I was a child. And the above considerations suggest that God’s authority over me is likely to be greater than my own authority over myself.

If it is correct that God, if he existed, would have greater authority over me than I have over myself, then that would have significant repercussions for the problem of evil. For a part of the problem involves the question of whether it is permissible for God to allow a person to suffer horrendously even for the sake of greater (or incommensurable but proportionate) goods to them or (especially) another. But it would be permissible for me to allow myself to suffer horrendously for the sake of greater (or incommensurable but proportionate) goods for me or another. If God has greater authority over me than I have over myself, then it would likewise be permissible for God.

This does not of course solve the problem of evil. There is still the question whether allowing the sufferings people undergo has the right connection with greater (or incommensurable but proportionate) goods, and much of the literature on the problem of evil has focused on that. But it does help significantly with the deontic component of the question. (Though even with respect to the deontic aspects, there is still the question of divine intentions—it would I think be wrong even for God to intend an evil for the sake of a good. So care is still needed in theodicy to ensure that the theodicy doesn’t make God out to be intending evils for the sake of goods.)

Wednesday, April 11, 2018

A parable about sceptical theism and moral paralysis

Consider a game. The organizers place a $20 bill in one box and a $100 bill in another box. They seal the boxes. Then they put a $1 bill on top of one of the boxes, chosen at random fairly, and a $5 on top of the other box. The player of the game gets to choose a box, in which case she gets both what’s in the box and what’s on top of the box. Everyone knows that that’s how the game works.

If you are an ordinary person playing the game, you will be self-interestedly rational to choose the box with the $5 on top of it. The expected payoff for the box with the $5 on it is $65, while the expected payoff for the other box is $61, when one has no information about which box contains the $20 and which contains the $100.

If Alice is an ordinary person playing the game and she chooses the box with the $1 on top of it, that’s very good reason to doubt that Alice is self-interestedly rational.

But now suppose that I am considering the hypothesis that Bob is a self-interestedly rational being who has X-ray vision that can distinguish a $20 bill from a $100 bill inside the box. Then if I see Bob choose the box with the $1 on top of it, that’s no evidence at all against the hypothesis that he is such a being, i.e., a self-interestedly rational being with X-ray vision. In repeated playings, we’ll see Bob choose the $1 box half the time and the $5 box half the time, if he is such a being, and if we didn't know that Bob has X-ray vision, we would think that Bob is indifferent to money.
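A minimal simulation sketch of the game (my own illustration in Python; the dollar amounts and the two strategies are just those described in the parable) shows the pattern: the ordinary player never takes the $1 box, while an X-ray-equipped Bob takes it about half the time even though his average payoff is higher.

```python
import random

# Toy simulation of the box game (a sketch; values from the parable above).

def play(choose, trials=100_000):
    total_payoff = 0.0
    one_dollar_picks = 0
    for _ in range(trials):
        contents = [20, 100]
        random.shuffle(contents)       # which box hides the $20 and which the $100
        tops = [1, 5]
        random.shuffle(tops)           # which box gets the $1 on top, which the $5
        i = choose(tops, contents)     # index of the chosen box
        total_payoff += contents[i] + tops[i]
        one_dollar_picks += (tops[i] == 1)
    return total_payoff / trials, one_dollar_picks / trials

ordinary = lambda tops, contents: tops.index(5)        # sees only the tops: takes the $5 box
xray_bob = lambda tops, contents: contents.index(100)  # sees the contents: takes the $100 box

print(play(ordinary))  # roughly (65.0, 0.0): expected $65, never the $1 box
print(play(xray_bob))  # roughly (103.0, 0.5): expected $103, the $1 box half the time
```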

Sceptical theism and the infinity of God

I’ve never been very sympathetic to sceptical theism until I thought of this line of reasoning, which isn’t really new, but I’ve just never quite put it together in this way.

There are radically different types of goods. At perhaps the highest level—call it level A—there are types of goods like the moral, the aesthetic and the epistemic. At a slightly lower level—call it level B—there are types of goods like the goods of moral rightness, praiseworthiness, autonomy, virtue, beauty, sublimity, pleasure, truth, knowledge, understanding, etc. And there will be even lower levels.

Now, it is plausible that a perfect being, a God, would be infinitely good in infinitely many ways. He would thus infinitely exemplify infinitely many types of goods at each level, either literally or by analogy. If so, then:

  1. If God exists, there are infinitely many types of good at each level.

Moreover:

  2. We only have concepts of a finite number of types of good at each level.

Thus:

  3. There are infinitely many types of good at each level that we have no concept of.

Now, let’s think what would likely be the case if God were to create a world. From the limited theodicies we have, we know of cases where certain types of goods would justify allowing certain evils. So we wouldn’t be surprised if there were evils in the world, though of course all evils would be justified, in the sense that God would have a justification for allowing them. But we would have little reason to think that God would limit his design of the world to only allowing those evils that are justified by the finite number of types of good that we have concepts of. The other types of good are still types of good. Given that there are infinitely many such goods, and that we have concepts of only finitely many of them, it would not be significantly unlikely that if God exists, a significant proportion—perhaps a majority—of the evils that have a justification would have a justification in terms of goods that we have no concept of.

And so when we observe a large proportion of evils that we can find no justification for, we observe something that is not significantly unlikely on the hypothesis that God exists. But if something is not significantly unlikely on a hypothesis, it’s not significant evidence against that hypothesis. Hence, the fact that we cannot find justifications for a significant proportion of the evils in the world is not significant evidence against the existence of God.

Sceptical theism has a tendency to undercut design arguments for the existence of God. I do not think this version of sceptical theism has that tendency, but that’s matter for another discussion (perhaps in the comments).

Bayesianism and the multitude of mathematical structures

It seems that every mathematical structure (there are some technicalities as to how to define it) could in fact be the correct description of fundamental physical structure. This means that making Bayesianism be the whole story about epistemology—even for idealized agents—is a hopeless endeavor. For there is no hope for an epistemologically useful probability measure over the collection of all mathematical structures unless we rule out the vast majority of structures as having zero probability.

A natural law or divine command appendix to Bayesianism can solve this problem by requiring us to assign zero probability to some structures that are metaphysically possible but that our Creator wants us to be able to rule out a priori.

Monday, April 9, 2018

Reincarnation and theodicy

As I was teaching on the problem of evil today, I was struck by how nicely reincarnation could provide theodicies for recalcitrant cases. “Why is the fawn dying in the forest fire? Well, for all we know, it’s a reincarnation of someone who committed genocide and is undergoing the just punishment for this, a punishment whose restorative effect will only be seen in the next life.” “Why is Sam suffering with no improvement to his soul? Well, maybe the improvement will only manifest in the next life.”

Of course, I don’t believe in reincarnation. But if the problem of evil is aimed at theism in general, then it seems fair to say that for all that theism in general says, reincarnation could be true.

Here is a particular dialectical context where bringing in reincarnation could be helpful. The theist presses the fine-tuning argument. The atheist instead of embracing a multiverse (as is usual) responds with the argument from evil. The theist now says: While reincarnation may seem unlikely, it surely has at least a one in a million probability conditionally on theism; on the other hand, fine-tuning has a much, much smaller probability than one in a million conditionally on single-universe atheism. So theism wins.

Friday, April 6, 2018

Peer disagreement and models of error

You and I are epistemic peers and we calculate a 15% tip on a very expensive restaurant bill for a very large party. As shared background information, add that calculation mistakes for you and me are pretty much random rather than systematic. As I am calculating, I get a nagging feeling of lack of confidence in my calculation, which results in $435.51, and I assign a credence of 0.3 to that being the tip. You then tell me that you’re not sure what the answer is, but that you assign a credence of 0.2 to its being $435.51.

I now think to myself. No doubt you had a similar kind of nagging lack of confidence to mine, but your confidence in the end was lower. So if all each of us had was their own individual calculation, we’d each have good reason to doubt that the tip is $435.51. But it would be unlikely that we would both make the same kind of mistake, given that our mistakes are random. So, the best explanation of why we both got $435.51 is that we didn’t make a mistake, and I now believe that $435.51 is right. (This story works better with larger numbers, as there are more possible randomly erroneous outputs, which is why the example uses a large bill.)

Hence, your lower reported credence of 0.2 not only did not push me down from my credence of 0.3, but it pushed me all the way up into the belief range.

Here’s the moral of the story: When faced with disagreement, instead of moving closer to the other person’s credence, we should formulate (perhaps implicitly) a model of the sources of error, and apply standard methods of reasoning based on that model and the evidence of the other’s credence. In the case at hand, the model was that error tends to be random, and hence it is very unlikely that an error would result in the particular number that was reported.
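Here is a small numerical sketch of that kind of model (my own illustration; the error model and the number K of possible wrong outputs are assumed parameters, not from the post). If each of us is right with the probability given by our own credence, and a botched calculation outputs one of K possible wrong figures at random, then agreement on the very same figure makes it very likely that the figure is right:

```python
# Toy model of random calculation error (illustrative assumptions only).

def posterior_given_agreement(p_me, p_you, K):
    """Posterior that the shared figure is the true tip, given that two
    independent calculations produced the very same number.

    p_me, p_you: each person's credence that their own calculation was correct.
    K: number of distinct wrong figures a botched calculation could output,
       assumed equally likely (the "errors are random" model)."""
    match_if_both_right = p_me * p_you                   # both correct: same true figure
    match_if_both_wrong = (1 - p_me) * (1 - p_you) / K   # both wrong and coincidentally equal
    return match_if_both_right / (match_if_both_right + match_if_both_wrong)

print(posterior_given_agreement(0.3, 0.2, 1000))  # about 0.99: agreement pushes us into the belief range
```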

Thursday, April 5, 2018

Defeaters and the death penalty

I want to argue that one can at least somewhat reasonably hold this paradoxical thesis:

  • The best retributive justice arguments in favor of the death penalty are sound and there are no cases where the death penalty is permissible.

Here is one way in which one could hold the thesis: One could simply think that nobody commits the sorts of crimes that call for the death penalty. For instance, one could hold that nobody commits murder, etc. But it’s pretty hard to be reasonable in thinking that: one would have to deny vast amounts of data. A little less crazily, one could think that the mens rea conditions for the crimes that call for the death penalty are so strict that nobody actually meets them. Perhaps every murderer is innocent by reason of insanity. That’s an improvement over the vast amount of denial that would be involved in saying there are no murders, but it’s still really implausible.

But now notice that the best retributive justice arguments in favor of the death penalty had better not establish that there are crimes such that it is absolutely morally required that one execute the criminal. First, no matter how great the crime, there are circumstances which could morally require us to let the criminal go. If aliens were to come and threaten to destroy all life on earth unless we spared a mass murderer, we would surely have to just leave the mass murderer to divine justice. Second, if the arguments in favor of the death penalty are to be plausible, they had better be compatible with the possibility of clemency.

Thus, the most the best of the arguments can be expected to establish is that there are crimes which generate strong moral reasons of justice to execute the criminal, but the reasons had better be defeasible. One could, however, think that defeaters occur in all actual cases. Of course, some stories about defeaters are unlikely to be reasonable: one is not likely to reasonably hold that aliens will destroy all of us if we execute someone.

But there could be defeaters that could be more reasonably believed in. Here are some such things that one could believe:

  • God commanded us to show a clemency to criminals that in fact precludes the death penalty.

  • Criminals being executed are helpless, and killing helpless people—even justly—causes a harm to the killer’s soul that is a defeater for the reasons for the death penalty.

  • We are all guilty of offenses that deserve the death penalty—say, mortal sins—and executing someone when one oneself deserves the death penalty is harmful to one’s character in a way that is a defeater for the reasons for the death penalty.

(I myself am open to the possibility that the first of these could actually be the case in New Testament times.)

Wednesday, April 4, 2018

Group impairment and Aristotelianism

Aristotelians have a metaphysical ground for claims about what is normal and abnormal in an individual: the form of a substance grounds the development of individuals in a teleological way and specifies what the substance should be like. Thus a one-eyed owl is impaired—while it is an owl, it falls short of the specification in its form.

But there is another set of normalcy claims that are harder to ground in form: claims about the proportions of characteristics in a population. Sex ratios are perhaps the most prevalent example: if all the foals born over the next twenty years were, say, male, then that would be disastrous for the horse as a species. And yet it seems that each individual foal could still be a perfect instance of its kind, since both a male and a female can be a perfect instance of horsehood. Caste in social insects is another example: it would be disastrous for a bee hive if all the females developed into workers, even though each one could be a perfect bee.

The two cases are different. The sex of a horse is genetically determined, while social insect caste is largely or wholly environmental. Still, both are similar in that the species not only has norms as to what individuals should be like but also as to what the distribution of types of individuals should be. There is not only the possibility of individual impairment but also of group impairment. But what is the metaphysics behind these norms?

Infamously, Aristotle interpreters differ on whether forms are individual or common: whether two members of the same species have a merely exactly similar or a numerically identical form. Here is a place where taking forms to be common would help: for then the form could not only dictate the variation between the parts of each organism’s body but also the variation between the organisms in the species. But taking forms to be common would be ethically disastrous, because it would mean that all humans have the same soul, since the soul is the form of the human being.

Here’s my best solution to the puzzle. The form specifies the conditions of the flourishing of an individual. But these conditions can be social in addition to individual. Thus, a perfectly healthy and well-nourished male foal would not be flourishing if it lacks a society with potential future mates. And while each worker bee can internally be a fulfilled worker bee, it is not flourishing if its work does not in fact help support a queen. These social conditions for flourishing are constitutive. It’s not that the lack of a queen will cause the worker bee to die sooner (though for all I know, it might), but that the lack of a queen is constitutive of the worker bee being poorly off.

Once we see that there can be constitutive social conditions for flourishing, it is natural to think that there will be constitutive environmental conditions for flourishing. And this could be the start of an Aristotelian philosophy of ecology.

A multiple-realizability problem for computational theories of mind

Consider a computational theory of mind overlaid on a reductive physicalist ontology. Here, I think, is how the story would have to work. We need a mapping between a physical system (PS) and an abstract model of computation (AMC), because on a computational theory of mind, thoughts need to be defined in terms of the functioning of an AMC associated with a PS. But there are infinitely many mappings between PSs and AMCs. If thought is defined by computation and yet if we are to avoid a hyper-panpsychism on which every physical system thinks infinitely many thoughts, we need to heavily restrict the mappings between PSs and AMCs. I know of only one promising strategy of mapping restriction, and that is to require that if we specify the PSs using a truly fundamental language—one whose primitives are “structural” in Sider’s sense—the mapping can be sufficiently briefly described.

If we were dealing with infinite PSs and infinite AMCs, there would be a nice non-arbitrary way to do this: we could require that the mapping description be finite (assuming the language has expressive resources like recursion). But with finite PSs and AMCs, that will still generate hyper-panpsychism, since there will be infinitely many finite AMCs that can be assigned to a given PS.

This means that we have to restrict the mapping description not merely to a finite description, but to a short finite description. Once we do that, we will specify that a PS x thinks the thoughts that are associated with an AMC y if and only if the mapping between x and y is short. One obvious problem here is the seeming arbitrariness of whatever threshold of shortness we have.

But there is another interesting problem. This approach will violate the multiple realizability intuition that leads many people to computational theories of mind. For imagine a reductive physicalist world w* which is just like ours at the macroscopic level, and even at the atomic level, but whose microscopic reduction goes a number of extra levels down, with the reductions being quite complex. Thus, although in our world facts about electrons may be fundamental, in w* these facts are far from fundamental, being reducible to facts about much more fundamental things and reducible in a complex way. Multiple realizability intuitions lead one to think that macroscopic entities in a world like w* that behave just like humans down to the atomic level could think like we do. But if the reduction from the atomic level to the fundamental level in w* is sufficiently complicated, then the brain to human-like AMC mapping in w* will fail to meet the brevity condition, and hence the beings won’t think, or at least not like we do.

The problem is that it is really hard to both avoid hyper-panpsychism and allow for multiple realizability intuitions while staying within the confines of a reductive physicalist computational theory of mind. A dualist, of course, has no such difficulty: a soul can be attached to w*’s human-like organisms with no more difficulty than it can to our world’s human organisms.

Suppose the computationalist denies that multiple realizability extends to worlds like w*. Then there is a new and interesting feature of fine-tuning in our world that calls out for explanation: our world’s fundamental level is sufficiently easily mapped to a neural level to allow the neural level to count as engaging in thoughtful computation.

Tuesday, April 3, 2018

Divine command and natural law epistemology

I am impressed by the idea that other kinds of beings from humans can appropriately have different doxastic practices from ours, in light of:

  a. a different environment which makes different practices truth-conducive, and

  b. different proper goals for their doxastic practices (e.g., a difference of emphasis on explanation versus prediction; a difference in what subject matter is more important).

Option (a) is captured by reliabilism, but reliabilism does not by itself do much to help with (b), and suffers from an insuperable reference class problem.

I know of two epistemological theories that nicely capture the differences between epistemic practices in the light of both (a) and (b):

  • divine command epistemology: a doxastic practice is required just in case God commands it (variant: commands it in light of truth-based goods)

  • natural law epistemology: a doxastic practice is required just in case it is natural to its practitioner (variant: natural and ordered towards truth-based goods).

Both of these theories have an interesting meta-theoretic consequence: they make particularly weird thought experiments less useful in epistemology. For God’s reasons for requiring a doxastic practice may well be linked to our typical way of life, and a practice that is natural in one ecological niche may have unfortunate consequences outside that niche. (That’s sad for me, since making up weird thought experiments is something I particularly enjoy!)

(Note, however, that both of these theories have nothing to say on the question of knowledge. That’s a feature, not a bug. I think we don’t need a concept of (propositional) knowledge, just as we don’t need a concept of baldness. Anything worth saying using the language of “knowledge” or “baldness” can be more precisely said without it—one can talk of degrees of belief and justification, amount of scalp coverage, etc.—and while it’s an amusing question how exactly to analyze knowledge or baldness, it’s just that.)

Wednesday, March 28, 2018

A responsibility remover

Suppose soft determinism is true: the world is deterministic and yet we are responsible for our actions.

Now imagine a device that can be activated at a time when an agent is about to make a decision. The device reads the agent’s mind, figures out which action the agent is determined to choose, and then modifies the agent’s mind so the agent doesn’t make any decision but is instead compelled to perform the very action that they would otherwise have chosen. Call the device the Forcer.

Suppose you are about to make a difficult choice between posting a slanderous anonymous accusation about an enemy of yours that will go viral and ruin his life and not posting it. It is known that once the message is posted, there will be no way to undo the bad effects. Neither you nor I know how you will choose. I now activate the Forcer on you, and it makes you post the slander. Your enemy’s life is ruined. But you are not responsible for ruining it, because you didn’t choose to ruin it. You didn’t choose anything. The Forcer made you do it. Granted, you would have done it anyway. So it seems you have just had a rather marvelous piece of luck: you avoided culpability for a grave wrong and your enemy’s life is irreparably ruined.

What about me? Am I responsible for ruining your enemy’s life? Well, first, I did not know that my activation of the Forcer would cause this ruin. And, second, I knew that my activation of the Forcer would make no difference to your enemy: he would have been ruined given the activation if and only if he would have been ruined without it. So it seems that I, too, have escaped responsibility for ruining your enemy’s life. I am, however, culpable for infringing on your autonomy. But given how glad you are that your enemy’s life is ruined without your having any culpability, no doubt you will forgive me.

Now imagine instead that you activated the Forcer on yourself, and it made you post the slander. Then for exactly the same reasons as before, you aren’t culpable for ruining your enemy’s life. For you didn’t choose to post the slander. And you didn’t know that activating the Forcer would cause this ruin, while you did know that the activation wouldn’t make any difference to your enemy—activating the Forcer on yourself would not affect whether the message would be posted. Moreover, the charge of infringing on autonomy has much less force when you activated the Forcer yourself.

It is true that by activating the Forcer you lost something: you lost the possibility of being praiseworthy for choosing not to post the slander. But that’s a loss that you might judge worthwhile.

So, given soft determinism, it is in principle possible to avoid culpability while still getting the exact same results whenever you don’t know prior to deliberation how you will choose. This seems absurd, and the absurdity gives us a reason to reject the compatibility of determinism and responsibility.

But the above story can be changed to worry libertarians, too. Suppose the Forcer reads off its patient’s mind the probabilities (i.e., chances) of the various choices, and then randomly selects an action with the probabilities of the various options exactly the same as the patient would have had. Then in activating the Forcer, it can still be true that you didn’t know how things would turn out. And while there is no longer a guarantee that things would turn out with the Forcer as they would have without it, it is true that activating the Forcer doesn’t affect the probabilities of the various actions. In particular, in the cases above, activating the Forcer does nothing to make it more likely that your enemy would be slandered. So it seems that once again activating the Forcer on yourself is a successful way of avoiding responsibility.

But while that is true, it is also true that if libertarianism is true, regular activation of the Forcer will change the shape of one’s life, because there is no guarantee that the Forcer will decide just like you would have decided. So while on the soft determinist story, regular use of the Forcer lets one get exactly the same outcome as one would otherwise have had, on the libertarian version, that is no longer true. Regular use of the Forcer on libertarianism should be scary—for it is only a matter of chance what outcome will happen. But on compatibilism, we have a guarantee that use of the Forcer won’t change what action one does. (Granted, one may worry that regular use of the Forcer will change one’s desires in ways that are bad for one. If we are worried about that, we can suppose that the Forcer erases one’s memory of using it. That has the disadvantage that one may feel guilty when one isn’t.)

I don’t know that libertarians are wholly off the hook. Just as the Forcer thought experiment makes it implausible to think that responsibility is compatible with determinism, it also makes it implausible to think that responsibility is compatible with there being precise objective chances of what choices one will make. So perhaps the libertarian would do well to adopt the view that there are no precise objective chances of choices (though there might be imprecise ones).

Tuesday, March 27, 2018

Closure for credence thresholds is atypical

In an earlier post, I speculated about thresholds and closure without doing any calculations. Now it’s time to do some calculations.

The Question: If you have two propositions that meet a credential threshold, how likely is it that their conjunction does as well? I.e., how likely is closure to hold for pairs of propositions meeting the threshold?

Model 1: Take a probability space with N points. Assign a credence to each of the N points by uniformly choosing a random number in some fixed range, and then normalizing so total probability is 1. Now among the 2^N (up to equivalence) propositions about points in the probability space, choose two at random subject to the constraint that they both meet the threshold condition. Check if their conjunction meets the threshold condition. Repeat. The source code is here (MIT license).
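The linked code is the authoritative version; the following is an independent minimal re-implementation sketch of Model 1 in Python (my own, so details such as the uniform [0, 1] weights and the trial count are assumptions), which reproduces the downward trend reported below:

```python
import random
from itertools import chain, combinations

# Sketch of Model 1: random credences on N points, then test whether the
# conjunction of two randomly chosen threshold-meeting propositions also
# meets the threshold.

def all_subsets(n):
    return chain.from_iterable(combinations(range(n), k) for k in range(n + 1))

def closure_rate(N, threshold, trials=200):
    successes = 0
    for _ in range(trials):
        w = [random.random() for _ in range(N)]   # random weights...
        total = sum(w)
        p = [x / total for x in w]                # ...normalized to a credence function
        prob = lambda s: sum(p[i] for i in s)
        # all propositions (subsets of the N points) meeting the threshold
        qualifying = [set(s) for s in all_subsets(N) if prob(s) >= threshold]
        a, b = random.choice(qualifying), random.choice(qualifying)  # may coincide, as in the original model
        successes += prob(a & b) >= threshold     # conjunction = intersection
    return successes / trials

print(closure_rate(12, 0.9))  # already well below 1; the rate keeps dropping as N grows (runtime grows as 2^N)
```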

The Results: With thresholds ranging from 0.85 to 0.95, as N increases, the probability of the conjunction meeting the threshold goes down. At N = 16, for all three thresholds, it is below 0.5. At N = 24, for all three thresholds, it is below 0.21. In other words, for randomly chosen propositions, we can expect closure to be atypical.

Note: The original model allows the two random propositions to turn out to be the same one. Otherwise, for N small enough (roughly, when 1/N > 1 − t0, where t0 is the threshold), the probability of closure could be undefined, as it might be impossible to generate two distinct propositions that meet the threshold condition. Requiring the two random propositions to be distinct will only make the probability of closure smaller. Here (also MIT license) is the modified code that does this. The results are here.

Final Remarks: This suggests that if the justification condition for knowledge is expressed in terms of a credence threshold, closure for knowledge will be atypical: i.e., for a random pair of propositions one knows, it will be unlikely that one will know the conjunction. Of course, it could be that the other conditions for knowledge, besides justification, will affect this, by making closure somewhat more likely. But I don’t have reason to think it will make an enormous difference. So, if one thinks closure should be typical, one shouldn’t think that justification is described by a credence threshold.

I go the other way: I think justification is described by a credence threshold, and now I think that closure is unlikely to be typical.

A limitation in the above models is that the propositions we normally talk about are not randomly chosen from the 2^N propositions describing the probability space.

Monday, March 26, 2018

Thresholds and credence

Suppose we have some doxastic or epistemic status—say, belief or knowledge—that involves a credence threshold, such as that to count as believing p, you need to assign a credence of, say, at least 0.9 to p. I used to think that propositions that meet the threshold are apt to have credences distributed somewhat uniformly between the threshold and 1. But now I think this may be completely wrong.

Toy model: A perfectly rational agent has a probability space with N options and assigns equal credence to each option. There are 2^N propositions (up to logical equivalence) that can be formed concerning the N options, e.g., “option 1 or option 2 or option 3”, one for each subset of the N options.

Given the toy model, for a threshold that is not too close to 0.5, and for a moderately large N (say, 10 or more), most of the 2^N propositions that meet the threshold condition meet it just barely. The reason for that is this. A proposition can be identified with a subset of {1, ..., N}. The probability of the proposition is k/N where k is the number of elements in the subset. For any integer k between 0 and N, the number of propositions that have probability k/N will then be the binomial coefficient N!/(k!(N − k)!). But when we look at this as a function of k, it has roughly the shape of a normal distribution with standard deviation σ = N^(1/2)/2 and center at N/2, and that distribution decays very fast, so most of the propositions that have probability at least k/N will have probability pretty close to k/N if k/N − 1/2 is significantly bigger than 1/N^(1/2).
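A quick way to check this is to count, among the propositions meeting the threshold, those that sit at the lowest qualifying probability. Here is a small sketch (my own illustration; treating only the lowest qualifying level k/N as “just barely” meeting the threshold is an arbitrary but natural choice):

```python
from math import comb

# Fraction of threshold-meeting propositions that sit at the lowest qualifying
# probability level, for an agent with N equally weighted options.

def barely_fraction(N, threshold):
    k_min = next(k for k in range(N + 1) if k / N >= threshold)  # lowest qualifying k
    meeting = sum(comb(N, k) for k in range(k_min, N + 1))       # all propositions with prob >= threshold
    return comb(N, k_min) / meeting                              # those just at the threshold level

for N in (10, 20, 50, 100):
    print(N, round(barely_fraction(N, 0.9), 3))  # around 0.9 in each case: most qualifiers just barely qualify
```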

I should have some graphs here, but it’s a really busy week.

Friday, March 23, 2018

Conjunctions and thresholds

Consider some positive epistemic or doxastic concept E, say knowledge or belief. Suppose that (maybe for a fixed context) E requires a credence threshold t0: a proposition only falls under E when the credence is t0 or higher.

Unless the non-credential stuff really, really cooperates, we wouldn’t expect to have closure under conjunction for all cases of E. For if p and q are cases of E that just barely satisfy the credential threshold condition, we wouldn’t expect their conjunction to satisfy it.

Question: Do we have any right to expect closure under conjunction typically, at least with respect to the credential condition? I.e., if p and q are randomly chosen distinct cases of E, is it reasonable to expect that their conjunction falls above the threshold?

Simple Model: The credences of our Es can fall anywhere between t0 and 1. Let’s suppose that the distribution of the credences is uniform between t0 and 1. Suppose, too, that distinct Es are statistically independent, so that the probability of the conjunction is the product of the probabilities.

Then there is a simple formula for the probability that the conjunction of randomly chosen distinct Es satisfies the credential threshold condition: (t0 log t0 + (1 − t0))/(1 − t0)^2. (Fix one credence between t0 and 1, and calculate the probability that the other credence satisfies the condition; then integrate from t0 to 1 and divide by 1 − t0.) We can plug some numbers in.

  • At threshold 0.5, probability of conjunction above threshold: 0.61

  • At threshold 0.75, probability of conjunction above threshold: 0.55

  • At threshold 0.9, probability of conjunction above threshold: 0.52

  • At threshold 0.95, probability of conjunction above threshold: 0.51

  • At threshold 0.99, probability of conjunction above threshold: 0.502

And the limit as threshold approaches 1 is 1/2.
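The closed-form values above can be checked against a quick simulation; here is a sketch (my own, using the Simple Model’s assumptions of uniform credences on [t0, 1] and independence):

```python
import random
from math import log

# Probability that the conjunction of two threshold-meeting propositions also
# meets the threshold, under the Simple Model's assumptions.

def closed_form(t0):
    return (t0 * log(t0) + (1 - t0)) / (1 - t0) ** 2

def simulate(t0, trials=1_000_000):
    hits = 0
    for _ in range(trials):
        p = random.uniform(t0, 1.0)   # credence of the first proposition
        q = random.uniform(t0, 1.0)   # credence of the second, independent of the first
        hits += p * q >= t0           # conjunction's credence is the product, by independence
    return hits / trials

for t0 in (0.5, 0.75, 0.9, 0.95, 0.99):
    print(t0, round(closed_form(t0), 3), round(simulate(t0), 3))  # the two columns agree
```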

So, it’s more likely than not that the conjunction satisfies the credential threshold, but on the other hand the probability is not high enough for us to say that the conjunction typically satisfies the threshold.

But the model has two limitations which will affect the above.

Limitation 1: Intuitively, propositions with positive epistemic or doxastic status are more likely to have a credence closer to the low end of the [t0, 1] interval, rather than being uniformly distributed over it. This is going to make the probability of the conjunction meeting the threshold lower than the Simple Model predicts.

Limitation 2: Even without being coherentists, we would expect our doxastic states to “hang together”. Thus, typically, we would expect that if p and q are propositions that have a credence significantly above 1/2, then p and q will have a positive statistical correlation (with respect to credences), so that P(p ∧ q) > P(p)P(q), rather than their being independent. This means that the Simple Model underestimates how often the conjunction is above the threshold. In the extreme case that all our doxastic states are logically equivalent, the conjunction will always meet the threshold condition. In more typical cases, the correlation will be weaker, but we would still expect a significant credential correlation.

So it may well be that even if one takes into account Limitation 1, taking into account Limitation 2 will allow one to say that typically conjunctions of Es meet the threshold condition.

Acknowledgment: I am grateful to John Hawthorne for a discussion of closure and thresholds.

Thursday, March 22, 2018

Wednesday, March 21, 2018

Bohmianism and God

Bohmian mechanics is a rather nice way of side-stepping the measurement problem by having a deterministic dynamics that generates the same experimental predictions as more orthodox interpretations of Quantum Mechanics.

Famously, however, Bohmian mechanics suffers from having to make the quantum equilibrium hypothesis (QEH) that the initial distribution of the particles matches the wavefunction, i.e., that the initial particle density is given by (at least approximately) |ψ|^2. In other words, Bohmian mechanics requires the initial conditions to be fine-tuned for the theory to work, and we can then think of Bohmian mechanics as deterministic Bohmian dynamics plus QEH.

Can we give a fine-tuning argument for the existence of God on the basis of the QEH, assuming Bohmian dynamics? I think so. Given the QEH, nature becomes predictable at the quantum level, and God would have good reason to provide such predictability. Thus if God were to opt for Bohmian dynamics, he would be likely to make QEH true. On the other hand, in a naturalistic setting, QEH seems to be no better than an exceedingly lucky coincidence. So, given Bohmian dynamics, QEH does support theism over naturalism.

Theism makes it possible to be an intellectually fulfilled Bohmian. But I don’t know that we have good reason to be Bohmian.

Tuesday, March 20, 2018

Pruss and Rasmussen, Necessary Existence

Josh Rasmussen's and my Necessary Existence (OUP) book is out, both in Europe and in the US. I wish the price was much lower. The authors don't have a say over that, I think.

The great cover was designed by Rachel Rasmussen (Josh's talented artist wife).

Monday, March 19, 2018

"Before I formed you in the womb I knew you" (Jeremiah 1:5)

  1. Always: If x (objectually) knows y, then y exists (simpliciter). (Premise)

  2. Before I came into existence, it was true that God (objectually) knows me. (Premise)

  3. Thus, before I came into existence, it was true that I exist (simpliciter). (1 and 2)

  4. If 3, then eternalism is true. (Premise)

  5. Thus, eternalism is true. (3 and 4)

A variant of this argument uses “has a rightly ordered love for” in place of “(objectually) knows”.

Thursday, March 15, 2018

Something that has no reasonable numerical epistemic probability

I think I can give an example of something that has no reasonable (numerical) epistemic probability.

Consider Goedel’s Axiom of Constructibility. Goedel proved that if the Zermelo-Fraenkel (ZF) axioms are consistent, they are also consistent with Constructibility (C). We don’t have any strong arguments against C.

Now, either we have a reasonable epistemic probability for C or we don’t.

If we don’t, here is my example of something that has no reasonable epistemic probability: C.

If we do, then note that Goedel showed that ZF + C implies the Axiom of Choice, and hence implies the existence of non-measurable sets. Moreover, C implies that there is a well-ordering W on the universe of all sets that is explicitly definable in the language of set theory.

Now consider some physical quantity Q where we know that Q lies in some interval [x − δ, x + δ], but we have no more precise knowledge. If C is true, let U be the W-smallest non-measurable subset of [x − δ, x + δ].

Assuming that we do have a reasonable epistemic probability for C, here is my example of something that has no reasonable epistemic probability: C is false or Q is a member of U.

Logical closure accounts of necessity

A family of views of necessity (e.g., Peacocke, Sider, Swinburne, and maybe Chalmers) identifies a family F of special true statements that get counted as necessary—say, statements giving the facts about the constitution of natural kinds, the axioms of mathematics, etc.—and then says that a statement is necessary if and only if it can be proved from F. Call these “logical closure accounts of necessity”. There are two importantly different variants: on one “F” is a definite description of the family and on the other “F” is a name for the family.

Here is a problem. Consider:

  1. Statement (1) cannot be proved from F.

If you are worried about the explicit self-reference in (1), I should be able to get rid of it by a technique similar to the diagonal lemma in Goedel’s incompleteness theorem. Now, either (1) is true or it’s false. If it’s false, then it can be proved from F. Since F is a family of truths, it follows that a falsehood can be proved from truths, and that would be the end of the world. So it’s true. Thus it cannot be proved from F. But if it cannot be proved from F, then it is contingently true.

Thus (1) is true but there is a possible world w where (1) is false. In that world, (1) can be proved from F, and hence in that world (1) is necessary. Hence, (1) is false but possibly necessary, in violation of the Brouwer Axiom of modal logic (and hence of S5). Thus:

  2. Logical closure accounts of necessity require the denial of the Brouwer Axiom and S5.

But things get even worse for logical closure accounts. For an account of necessity had better itself not be a contingent truth. Thus, a logical closure account of necessity, if true in the actual world, will also be true in w. Now, in w, run the earlier argument showing that (1) is true. Thus, (1) is true in w. But (1) was false in w. Contradiction! So:

  3. Logical closure accounts of necessity can at best be contingently true.

Objection: This is basically the Liar Paradox.

Response: This is indeed my main worry about the argument. I am hoping, however, that it is more like Goedel’s Incompleteness Theorems than like the Liar Paradox.

Here's how I think the hope can be satisfied. The Liar Paradox and its relatives arise from unbounded application of semantic predicates like “is (not) true”. By “unbounded”, I mean that one is free to apply the semantic predicates to any sentence one wishes. Now, if F is a name for a family of statements, then it seems that (1) (or its definite description variant akin to that produced by the diagonal lemma) has no semantic vocabulary in it at all. If F is a description of a family of statements, there might be some semantic predicates there. For instance, it could be that F is explicitly said to include “all true mathematical claims” (Chalmers will do that). But then it seems that the semantic predicates are bounded—they need only be applied in the special kinds of cases that come up within F. It is a central feature of logical closure accounts of necessity that the statements in F be a limited class of statements.

Well, not quite. There is still a possible hitch. It may be that there is semantic vocabulary built into “proved”. Perhaps there are rules of proof that involve semantic vocabulary, such as Tarski’s T-schema, and perhaps these rules involve unbounded application of a semantic predicate. But if so, then the notion of “proof” involved in the account is a pretty problematic one and liable to license Liar Paradoxes.

One might also worry that my argument that (1) is true explicitly used semantic vocabulary. Yes: but that argument is in the metalanguage.

Tuesday, March 13, 2018

A third kind of moral argument

The most common kind of moral argument for theism is that theism better fits with there being moral truths (either moral truths in general, or some specific kind of moral truths, like that there are obligations) than alternative theories do. Often, though not always, this argument is coupled with a divine command theory.

A somewhat less common kind of argument is that theism better explains how we know moral truths. This argument is likely to be coupled with an evolutionary debunking argument to argue that if naturalism and evolution were true, our moral beliefs might be true, and might even be reliable, but wouldn’t be knowledge.

But there is a third kind of moral argument that one doesn’t meet much at all in philosophical circles—though I suspect it is not uncommon popularly—and it is that theism better explains why we have moral beliefs. The reason we don’t meet this argument much in philosophical circles is probably that there seem to be very plausible evolutionary explanations of moral beliefs in terms of kin selection and/or cultural selection. Social animals as clever as we are benefit as a group from moral beliefs that discourage secret anti-cooperative selfishness.

I want to try to rescue the third kind of moral argument in this post in two ways. First, note that moral beliefs are only one of several solutions to the problem of discouraging secret selfishness. Here are three others:

  • belief in karmic laws of nature on which uncooperative individuals get very undesirable reincarnatory outcomes

  • belief in an afterlife judgment by a deity on which uncooperative individuals get very unpleasant outcomes

  • a credence of around 1/2 in an afterlife judgment by a deity on which uncooperative individuals get an infinitely bad outcome (cf. Pascal’s Wager).

These three options make one think that cooperativeness is prudent, but not that it is morally required. Moreover, they are arguably more robust drivers of cooperative behavior than beliefs about moral requirement. Admittedly, though, the first two of the above might lead to moral beliefs as part of a theory about the operation of the karmic laws or the afterlife judgment.

Let’s assume that there are important moral truths. Still, since moral belief is only one of several mechanisms by which evolution could have discouraged secret selfishness, P(moral beliefs | naturalism) is not going to exceed 1/2. On the other hand, P(moral beliefs | God) is going to be high, because moral truths are exactly the sort of thing we would expect God to ensure our belief in (through evolutionary means, perhaps). So, the fact of moral belief will be evidence for theism over naturalism.
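In odds form, and with an illustrative number standing in for “high” (the 0.9 below is my stand-in, not a figure from any source), the update is:

    \frac{P(T \mid M)}{P(N \mid M)} \;=\; \frac{P(M \mid T)}{P(M \mid N)} \cdot \frac{P(T)}{P(N)} \;\gtrsim\; \frac{0.9}{0.5} \cdot \frac{P(T)}{P(N)}

where T is theism, N is naturalism, and M is the fact that we have moral beliefs. On these hedged numbers, the fact of moral belief at least roughly doubles the odds in favor of theism over naturalism.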

The second approach to rescuing the moral argument is deeper and I think more interesting. Moreover, it generalizes beyond the moral case. This approach says that a necessary condition for moral beliefs is being able to have moral concepts. But to have moral concepts requires semantic access to moral properties. And it is difficult to explain on contemporary naturalistic grounds how we have semantic access to moral properties. Our best naturalistic theories of reference are causal, but moral properties on contemporary naturalism (as opposed to, say, the views of a Plato or an Aristotle) are causally inert. Theism, however, can nicely accommodate our semantic access to moral properties. The two main theistic approaches to morality ground morality in God or in an Aristotelian teleology. Aristotelian teleology allows us to have a causal connection to moral properties—but then Aristotelian teleology itself calls for an explanation of our teleological properties that theism is best suited to give. And approaches that ground morality in God give God direct semantic access to moral properties, which semantic access God can extend to us.

This generalizes to other kinds of normativity, such as epistemic and aesthetic: theism is better suited to providing an explanation of how we have semantic access to the properties in question.

Conscious computers and reliability

Suppose the ACME AI company manufactures an intelligent, conscious and perfectly reliable computer, C0. (I assume that the computers in this post are mere computers, rather than objects endowed with a soul.) But then a clone company manufactures a clone of C0, call it C1, out of slightly less reliable components. And another clone company makes a slightly less reliable clone of C1, namely C2. And so on. At some point in the cloning sequence, say at C10000, the components produce completely random outputs.

Now, imagine that all the devices from C0 through C10000 happen to get the same inputs over a certain day, and that all their components do the same things. In the case of C10000, this is astronomically unlikely, as the super-unreliable components of C10000 produce completely random outputs.

Now, C10000 is not computing. Its outputs are no more the results of intelligence than a copy of Hamlet typed by the proverbial monkeys is the result of intelligent authorship. By the same token, C10000 is not conscious on computational theories of consciousness.

On the other hand, C0’s outputs are the results of intelligence and C0 is conscious. The same is true for C1, since if intelligence or consciousness required complete reliability, we wouldn’t be intelligent and conscious. So somewhere in the sequence from C0 to C10000 there must be a transition from intelligence to lack thereof and somewhere (perhaps somewhere else) a transition from consciousness to lack thereof.

Now, intelligence could plausibly be a vague property. But it is not plausible that consciousness is a vague property. So, there must be some precise transition point in reliability needed for computation to yield consciousness, so that a slight decrease in reliability—even when the actual functioning is unchanged (remember that the Ci are all functioning in the same way)—will remove consciousness.

More generally, this means that given functionalism about mind, there must be a dividing line in measures of reliability between cases of consciousness and ones of unconsciousness.

I wonder if this is a problem. I suppose if the dividing line is somehow natural, it’s not a problem. I wonder if a natural dividing line of reliability can in fact be specified, though.

Monday, March 12, 2018

The usefulness of having two kinds of quantifiers

A central Aristotelian insight is that substances exist in a primary way and other things—say, accidents—in a derivative way. This insight implies that use of a single existential quantifier ∃x for both substances and forms does not cut nature at the joints as well as it can be cut.

Here are two pieces of terminology that together not only capture the above insight about existence, but do a lot of other (but closely related) ontological work:

  1. a fundamental quantifier ∃u over substances.

  2. for any y, a quantifier ∃_y x over all the (immediate) modes (tropes) of y.

We can now define:

  • a is a substance iff ∃u(u = a)

  • b is an (immediate) mode of a iff ∃_a x(x = b)

  • f is a substantial form of a substance a iff a is a substance and ∃_a x(x = f): substantial forms are immediate modes of substances

  • b is a (first-level) accident of a substance a iff a is a substance and ∃_a x ∃_x y(y = b & y ≠ x): first-level accidents are immediate modes of substantial forms, distinct from these forms (this qualifier is needed so that God wouldn’t count as having any accidents)

  • f is a substantial form iff ∃u ∃_u x(x = f)

  • b is a (first-level) accident iff ∃u ∃_u x ∃_x y(y = b & y ≠ x).

This is a close variant on the suggestion here.

Friday, March 9, 2018

A regress of qualitative difference

According to heavyweight Platonism, qualitative differences arise from differences between the universals being instantiated. There is a qualitative difference between my seeing yellow and your smelling a rose. This difference has to come from the difference between the universals seeing yellow (Y) and smelling a rose (R). But one doesn’t get a qualitative difference from being related in the same way to numerically but not qualitatively different things (compare: being taller than Alice is not qualitatively different from being taller than Bea if Alice and Bea are qualitatively the same—and in particular, of the same height). Thus, if the qualitative difference between my seeing yellow and your smelling a rose comes from being related by instantiation to different things, namely Y and R, then this presupposes that the two things are themselves qualitatively different. But this qualitative difference between Y and R depends on Y and R exemplifying different—and indeed qualitatively different—properties. And so on, in a regress!

Intrinsic attribution

  1. If heavyweight Platonism is true, all attribution of attributes to a subject is grounded in facts relating the subject to abstracta.

  2. Intrinsic attribution is never grounded in facts relating a subject to something distinct from itself.

  3. There are cases of intrinsic attribution with a non-abstract subject.

  4. If heavyweight Platonism is true, each case of intrinsic attribution to a non-abstract subject is grounded in facts relating that object to something other than itself. (By 1)

  5. So, if heavyweight Platonism is true, there are no cases of intrinsic attribution to a non-abstract subject. (2 and 4)

  6. So, heavyweight Platonism is not true. (By 3 and 5)

Here, however, is a problem with 3. All cases of attribution to a creature are grounded in the creature’s participation in God. Hence, no creature is a subject of intrinsic attribution. And God’s attributes are grounded in a relation between God and the Godhead. But by divine simplicity, God is the Godhead. Since the Godhead is abstract, God is abstract (as well as being concrete) and hence God does not provide an example of intrinsic attribution with a non-abstract subject.

I still feel that there is something to the above argument. Maybe the sense in which a creature’s attributes are grounded in the creature’s participation in God is different from the sense of grounding in 2.

Friday, March 2, 2018

Wishful thinking

Start with this observation:

  1. Commonly used forms of fallacious reasoning are typically distortions of good forms of reasoning.

For instance, affirming the consequent is a distortion of the probabilistic fact that if we are sure that if p then q, then learning q is some evidence for p (unless q already had probability 1 or p had probability 0 or 1). The ad hominem fallacy of appeal to irrelevant features in an arguer is a distortion of a reasonable questioning of a person’s reliability on the basis of relevant features. Begging the question is, I suspect, a distortion of an appeal to the obviousness of the conclusion: “Murder is wrong. Look: it’s clear that it is!”

Now:

  2. Wishful thinking is a commonly used form of fallacious reasoning.

  3. So, wishful thinking is probably a distortion of a good form of reasoning.

I suppose one could think that wishful thinking is one of the exceptions to rule (1). But to be honest, I am far from sure there are any exceptions to rule (1), despite my cautious use of “typically”. And we should avoid positing exceptions to generally correct rules unless we have to.

So, if wishful thinking is a distortion of a good form of reasoning, what is that good form of reasoning?

My best answer is that wishful thinking is a distortion of correct probabilistic reasoning on the basis of the true claim that:

  4. Typically, things go right.

The distortion consists in the fact that in the fallacy of wishful thinking one is reasoning poorly, likely because one is doing one or more of the following:

  5. confusing things going as one wishes them to go with things going right,

  6. ignoring defeaters to the particular case, or

  7. overestimating the typicality mentioned in (4).

Suppose I am right about (4) being true. Then the truth of (4) calls out for an explanation. I know of four potential explanations of (4):

  i. Theism: God creates a good world.

  ii. Optimalism: everything is for the best.

  iii. Aristotelianism: rightness is a matter of lining up with the telos, and causal powers normally succeed at getting at what they are aiming at.

  iv. Statisticalism: norms are defined by what is typically the case.

I think (iv) is untenable, so that leaves (i)-(iii).

Now, optimalism gives strong evidence for theism. First, theism would provide an excellent explanation for optimalism (Leibniz). Second, if optimalism is true, then there is a God, because that’s for the best (Rescher).

Aristotelianism also provides evidence for theism, because it is difficult to explain naturalistically where teleology comes from.

So, thinking through the fallacy of wishful thinking provides some evidence for theism.

Thursday, March 1, 2018

Superpositions of conscious states

Consider this thesis:

  1. Reality is never in a superposition of two states that differ with respect to what, if anything, observers are conscious of.

This is one of the motivators for collapse interpretations of quantum mechanics. Now, suppose that S is an observable that describes some facet of conscious experience. Then according to (1), reality is always in some eigenstate of S.

Suppose that at the beginning t0 of some interval I of times, reality is in eigenstate ψ0. Now, suppose that collapse does not occur during I. By continuity considerations, then, over I reality cannot evolve to a state orthogonal to ψ0 without passing through a state that is a superposition of ψ0 and something else. In other words, over a collapse-free interval of time, the conscious experience that is described by S cannot change if (1) is true.
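To make the continuity point explicit (the notation here is mine, not the post’s): during the collapse-free interval, write the state as

    \psi(t) \;=\; c_0(t)\,\psi_0 \;+\; c_\perp(t)\,\psi_\perp(t), \qquad |c_0(t)|^2 + |c_\perp(t)|^2 = 1,

where ψ⊥(t) is a unit vector orthogonal to ψ0. The overlap c0(t) = ⟨ψ0, ψ(t)⟩ is continuous and equals 1 at t0, so for the state ever to become orthogonal to ψ0 it must pass through times at which both coefficients are nonzero—a superposition of ψ0 and something else, which is exactly the kind of state (1) forbids when the components differ in the experience S describes.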

What if collapse happens? That doesn’t seem to help. There are two plausible options. Either collapses are temporally discrete or temporally dense. If they are temporally dense, then by the quantum Zeno effect with probability one we have no change with respect to S. If they are temporally discrete, then suppose that t1 is the first time after t0 at which collapse causes the system to enter a state ψ1 orthogonal to ψ0. But for collapse to be able to do that, the state would have had to have assigned some weight to ψ1 prior to the collapse, while yet assigning some weight to ψ0, and that would violate (1).

(There might also be some messy story where there are some temporally dense and some temporally isolated collapses. I haven’t figured out exactly what to say about that, other than that it is in danger of being ad hoc.)

So, whether collapse happens or not, it seems that (1) implies that there is no change with respect to conscious experience. But clearly the universe changes with respect to conscious experience. So, it seems we need to reject (1). And this rejection seems to force us into some kind of weird many-worlds interpretation on which we have superpositions of incompatible experiences.

There are, however, at least two places where this argument can be attacked.

First, the thesis that conscious experience is described by observables understood (implicitly) as Hermitian operators can be questioned. Instead, one might think that conscious states correspond to subsets of the Hilbert space, subsets that may not even be linear subspaces.

Second, one might say that (1) is false, but nothing weird happens. We get weirdness from the denial of (1) if we think that a superposition of, say, seeing a square and seeing a circle is some weird state that has a seeing-a-square aspect and a seeing-a-circle aspect (this is weird in different ways depending on whether you take a multiverse interpretation). But we need not think that. We need not think that if a quantum state ψ1 corresponds to an experience E1 and a state ψ2 corresponds to an experience E2, then ψ = a1ψ1 + a2ψ2 corresponds to some weird mix of E1 and E2. Perhaps the correspondence between physical and mental states in this case goes like this:

  1. when |a1| ≫ |a2|, the state ψ still gives rise to E1

  2. when |a1| ≪ |a2|, the state ψ gives rise to E2

  3. when a1 and a2 are similar in magnitude, the state ψ gives rise to no conscious experience at all (or gives rise to some other experience, perhaps one related to E1 and E2, or perhaps one that is entirely unrelated).

After all, we know very little about which conscious states are correlated with which physical states. So, it could be that there is always a definite conscious state in the universe. I suppose, though, that this approach also ends up denying that we should think of conscious states as corresponding in the most natural way to the eigenvectors of a Hermitian operator.

Wednesday, February 28, 2018

More on pain and presentism

Imagine two worlds, in both of which I am presently in excruciating pain. In world w1, this pain began a nanosecond ago and will end in a nanosecond. In w2, the pain began an hour ago and will end in an hour.

In world w1, I am hardly harmed if I am harmed at all. Two nanoseconds of pain, no matter how bad, are just about harmless. It would be rational to accept two nanoseconds of excruciating pain in exchange for any non-trivial good. But in world w2, things are really bad for me.

An eternalist has a simple explanation of this: even if each of the two-nanosecond pains has only a tiny amount of badness, in w2 I really have 3.6 × 10^12 of them (two hours contain 7.2 × 10^12 nanoseconds), and that’s really bad.

It seems hard, however, for a presentist to explain the difference between the two worlds. For of the 3.6 × 10^12 two-nanosecond pains I receive in w2, only one really exists. And there is one that really exists in w1. Where is the difference? Granted, in w2, I have received billions of these pains and will receive billions more. But right now only one pain exists. And throughout the two hours of pain, at any given time, only one of the pains exists—and that one pain is insignificant.

Here is my best way of trying to get the presentist out of this difficulty. Pain is like audible sound. You cannot attribute an audible sound to an object in virtue of how the object is at one moment of time, or even a very, very short interval of times. You need at least 50 microseconds to get an audible sound, since you need one complete period of air vibration, and the highest audible frequency is about 20 kHz, whose period is 1/20,000 s = 50 microseconds (I am assuming that 50 microseconds doesn’t count as “very, very short”). When the presentist says that there is an audible sound at t, she must mean that there was air vibration going on some time before t and/or there will be air vibration going on for some time after t. Likewise, to be in pain at t requires a non-trivial period of time, much longer than two nanoseconds, during which some unpleasant mental activity is going on.

How long is that period? I don’t know. A tenth of a second, maybe? But maybe for an excruciating pain, that activity needs to go for longer, say half a second. Suppose so. Can I re-run the original argument, but using a half-second pulse of excruciating pain in place of the two-nanosecond excruciating pain? I am not sure. For a half-second of excruciating pain is not insignificant.

Collapse and the continuity of consciousness

One version of the quantum Zeno effect is that if you collapse a system’s wavefunction with respect to a measurement often enough, the measurement is not going to change.

Thus, if observation causes collapse, and you look at a pot of water on the stove often enough, it won’t boil. In particular, if you are continuously (or just at a dense set of times) observing the pot of water, then it won’t boil.
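Here is a minimal numerical sketch of the effect, using a toy two-level system of my own (nothing in it is specific to pots or consciousness): the state rotates from an “unboiled” state toward the orthogonal “boiled” state through a total angle of π/2, but is projectively measured n times along the way, so each measurement finds it still “unboiled” with probability cos²(π/(2n)).

    import math

    def prob_never_boils(n):
        # n equally spaced projective measurements during a rotation by a total
        # angle of pi/2 from "unboiled" toward the orthogonal "boiled" state;
        # each measurement finds "unboiled" with probability cos^2(pi/(2n)).
        return math.cos(math.pi / (2 * n)) ** (2 * n)

    for n in (1, 10, 100, 1000, 10000):
        print(n, round(prob_never_boils(n), 4))
    # The output climbs toward 1 as n grows: with frequent enough measurement,
    # the watched pot (almost) never "boils" -- the quantum Zeno effect.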

But of course watched pots do boil. Hence:

  • If observation causes collapse, consciousness is not temporally continuous (or temporally dense).

And the conclusion is what we would expect if causal finitism were true. :-)

Tuesday, February 27, 2018

A problem for Goedelian ontological arguments

Goedelian ontological arguments (e.g., mine) depend on axioms of positivity. Crucially to the argument, these axioms entail that any two positive properties are compatible (i.e., something can have both).

But I now worry whether it is true that any two positive properties are compatible. Let w0 be our world—where worlds encompass all contingent reality. Then, plausibly, actualizing w0 is a positive property that God actually has. But now consider another world, w1, which is no worse than ours. Then actualizing w1 is a positive property, albeit one that God does not actually have. But it is impossible that a being actualize both w0 and w1, since worlds encompass all contingent reality and hence it is impossible for two of them to be actual. (Of course, God can create two or more universes, but then a universe won’t encompass all contingent reality.) Thus, we have two positive properties that are incompatible.

Another example. Let E be the ovum and S1 the sperm from which Socrates originated. There is another possible world, w2, at which E and a different sperm, S2, result in Kassandra, a philosopher every bit as good and virtuous as Socrates. Plausibly, being friends with Socrates is a positive property. And being friends with Kassandra is a positive property. But also plausibly there is no possible world where both Socrates and Kassandra exist, and you can’t be friends with someone who doesn’t exist (we can make that stipulative). So, being friends with Socrates and being friends with Kassandra are incompatible and yet positive.

I am not completely confident of the counterexamples. But if they do work, then the best fix I know for the Goedelian arguments is to restrict the relevant axioms to strongly positive properties, where a property is strongly positive just in case having the property essentially is positive. (One may need some further tweaks.) Essentially actualizing w0 precludes one from actualizing any other world, and hence isn’t positive. Likewise, essentially being friends with Socrates limits one to existing only in worlds where Socrates does, and hence isn’t positive. But, alas, the argument becomes more complicated and hence less plausible with the modification.

Another fix might be to restrict attention to positive non-relational properties, but I am less confident that that will work.

Voluntariness of beliefs

The following claims are incompatible:

  1. Beliefs are never under our direct voluntary control.

  2. Beliefs are high credences.

  3. Credences are defined by corresponding decisional dispositions.

  4. Sometimes, the decisional dispositions that correspond to a high credence are under our direct voluntary control.

Here is a reason to believe 4: We have the power to resolve to act a certain way. When successful, exercising the power of resolution results in a disposition to act in accordance with the resolution. Among the things that in some cases we can resolve to do is to make the decisions that would correspond to a high credence.

So, I think we should reject at least one of 1-3. My inclination is to reject both 1 and 3.

Friday, February 23, 2018

More on wobbling of priors

In two recent posts (here and here), I made arguments based on the idea that wobbliness in priors translates to wobbliness in posteriors. The posts, while mathematically correct, neglect an epistemologically important fact: a wobble in a prior may be offset by a countervailing wobble in a Bayes’ factor, resulting in a steady posterior.

Here is an example of this phenomenon. Either a fair coin or a two-headed coin was tossed by Carl. Alice thinks Carl is a normally pretty honest guy, and so she thinks it’s 90% likely that a fair coin was tossed. Bob thinks Carl is tricky, and so he thinks there is only a 50% chance that Carl tossed the fair coin. So:

  • Alice’s prior for heads is (0.9)(0.5)+(0.1)(1.0) = 0.55

  • Bob’s prior for heads is (0.5)(0.5)+(0.5)(1.0) = 0.75.

But now Carl picks up the coin, mixes up which side was at the top, and both Alice and Bob have a look at it. It sure looks to them like there is a head on one side and a tail on the other. As a result, they both come to believe that the coin is very, very likely to be fair, and when they update their credences on their observation of the coin, they both come to have credence 0.5 that the coin landed heads.

But a difference in priors should translate to a corresponding difference in posteriors given the same evidence, since the force of evidence is just the addition of the logarithm of the Bayes’ factor to the logarithm of the prior odds ratio. How could they both have had such very different priors for heads, and yet a very similar posterior, given the same evidence?

The answer is this. If the only relevant difference between Alice’s and Bob’s beliefs were their priors for heads, then indeed they couldn’t get the same evidence and both end up very close to 0.5. But their Bayes’ factors also differ.

  • For Alice: P(looks fair | heads)≈0.82; P(looks fair | tails)≈1; Bayes’ factor for heads vs. tails ≈0.82

  • For Bob: P(looks fair | heads)≈0.33; P(looks fair | tails)≈1; Bayes’ factor for heads vs. tails ≈0.33.

Thus, for Alice, that the coin looks fair is pretty weak evidence against heads, lowering her credence from 0.55 to around 0.5, while for Bob, that the coin looks fair is moderate evidence against heads, lowering his credence from 0.75 to around 0.5. Both end up at roughly the same point.
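Here is a minimal sketch reproducing these numbers (the tiny chance that a two-headed coin would nonetheless “look fair” on inspection is my own placeholder, set to 0.001 purely for illustration):

    # Hypotheses: the coin is fair or two-headed; evidence: it "looks fair" on inspection.
    EPS = 0.001  # assumed chance that a two-headed coin nevertheless looks fair

    def analyze(p_fair):
        p_heads = p_fair * 0.5 + (1 - p_fair) * 1.0          # prior for heads
        p_lf_and_heads = p_fair * 0.5 + (1 - p_fair) * EPS    # P(looks fair & heads)
        p_lf_and_tails = p_fair * 0.5                         # P(looks fair & tails)
        p_lf_given_heads = p_lf_and_heads / p_heads
        p_lf_given_tails = p_lf_and_tails / (1 - p_heads)
        bayes_factor = p_lf_given_heads / p_lf_given_tails    # heads vs. tails
        posterior_heads = p_lf_and_heads / (p_lf_and_heads + p_lf_and_tails)
        return p_heads, p_lf_given_heads, bayes_factor, posterior_heads

    for name, p_fair in (("Alice", 0.9), ("Bob", 0.5)):
        prior, lf_h, bf, post = analyze(p_fair)
        print(name, round(prior, 2), round(lf_h, 2), round(bf, 2), round(post, 3))
    # Alice: prior 0.55, P(looks fair|heads) ~0.82, Bayes' factor ~0.82, posterior ~0.5
    # Bob:   prior 0.75, P(looks fair|heads) ~0.33, Bayes' factor ~0.33, posterior ~0.5

Different priors, different Bayes’ factors, nearly identical posteriors.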

Thus, we cannot assume that a difference with respect to a proposition in the priors translates to a corresponding difference in the posteriors. For there may also be a corresponding difference in the Bayes’ factors.

I don’t know if the puzzling phenomena in my two posts can be explained away in this way. But I don’t know that they can’t.

A slightly different causal finitist approach to finitude

The existence of non-standard models of arithmetic makes defining finitude problematic. A finite set is normally defined as one that can be numbered by a natural number, but what is a natural number? The Peano axioms sadly underdetermine the answer: there are non-standard models.

Now, causal finitism is the metaphysical doctrine that nothing can have an infinite causal history. Causal finitism allows for a very neat and pretty intuitive metaphysical account of what a natural number is:

  • A natural number is a number one can causally count to starting with zero.

Causal counting is counting where each step is causally dependent on the preceding one. Thus, you say “one” because you remember saying “zero”, and so on. The causal part of causal counting excludes a case where monkeys are typing at random and by chance type up 0, 1, 2, 3, 4. If causal finitism is false, the above account is apt to fail: it may be possible to count to infinite numbers, given infinite causal sequences.

While we can then plug this into the standard definition of a finite set, we can also define finitude directly:

  • A finite set or plurality is one whose elements can be causally counted.

One of the reasons we want an account of the finite is so we get an account of proof. Imagine that every day of a past eternity I said: “And thus I am the Queen of England.” Each day my statement followed from what I said before, by reiteration. And trivially all premises were true, since there were no premises. Yet the conclusion is false. How can that be? Well, because what I gave wasn’t a proof, as proofs need to be finite. (I expect we often don’t bother to mention this point explicitly in logic classes.)

The above account of finitude gives an account of the finitude of proof. But interestingly, given causal finitism, we can give an account of proof that doesn’t make use of finitude:

  • To causally prove a conclusion from some assumptions is to utter a sequence of steps, where each step’s being uttered is causally dependent on its being in accordance with the rules of the logical system.

  • A proof is a sequence of steps that could be uttered in causally proving.

My infinite “proof” that I am the Queen of England cannot be causally given if causal finitism is true, because then each day’s utterance will be causally dependent on the previous day’s utterance, in violation of causal finitism. However, interestingly, the above account of proof does not guarantee that a proof is finite. A proof could contain an infinite number of steps. For instance, uttering an axiom or stating a premise does not need to causally depend on previous steps, but only on one’s knowledge of what the axioms and premises are, and so causal finitism does not preclude having written down an infinite number of axioms or premises. However, what causal finitism does guarantee is that the conclusion will only depend on a finite number of the steps—and that’s all we need to make the proof be a good one.

What is particularly nice about this approach is that the restriction of proofs to being finite can sound ad hoc. But it is very natural to think of the process of proving as a causal process, and of proofs as abstractions from the process of proving. And given causal finitism, that’s all we need.

Wobbly priors and posteriors

Here’s a problem for Bayesianism and/or our rationality that I am not sure what exactly to do about.

Take a proposition that we are now pretty confident of, but which was highly counterintuitive so our priors were tiny. This will be a case where we were really surprised. Examples:

  1. Simultaneity is relative.

  2. Physical reality is indeterministic.

Let’s say our current level of credence is 0.95, but our priors were 0.001. Now, here is the problem. Currently we (let’s assume) believe the proposition. But if our priors were 0.0001, our credence would have been only 0.65, given the same evidence, and so we wouldn’t believe the claim. (Whatever the cut-off for belief is, it’s clearly higher than 2/3: nobody should believe on tossing a die that they will get 4 or less.)
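A minimal check of these numbers (my own sketch; the Bayes’ factor below is just whatever it takes to move a 0.001 prior to a 0.95 posterior):

    def posterior(prior, bayes_factor):
        # Update on the odds scale: posterior odds = prior odds * Bayes' factor.
        odds = prior / (1 - prior) * bayes_factor
        return odds / (1 + odds)

    # Bayes' factor needed to take a 0.001 prior to a 0.95 posterior.
    bf = (0.95 / 0.05) / (0.001 / 0.999)     # roughly 19,000
    print(round(posterior(0.001, bf), 3))    # 0.95, by construction
    print(round(posterior(0.0001, bf), 3))   # about 0.65
    print(round(posterior(0.00001, bf), 3))  # about 0.16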

Here is the problem. It’s really hard for us to tell the difference in counterintuitiveness between 0.001 and 0.0001. Such differences are psychologically wobbly. If we just squint a little differently when looking mentally a priori at (1) and (2), our credence can go up or down by an order of magnitude. And when our priors are even lower, say 0.00001, then an order of magnitude difference in counterintuitiveness is even harder to distinguish—yet an order of magnitude difference in priors is what makes the difference between a believable 0.95 posterior and an unbelievable 0.65 posterior. And yet our posteriors, I assume, don’t wobble between the two.

In other words, the problem is this: it seems that the tiny priors have an order of magnitude wobble, but our moderate posteriors don’t exhibit a corresponding wobble.

If our posteriors were higher, this wouldn’t be a problem. At a posterior of 0.9999, an order of magnitude wobble in priors results in a wobble between 0.9999 and 0.999, and that isn’t very psychologically noticeable (except maybe when we have really high payoffs).

There is a solution to this problem. Perhaps our priors in claims aren’t tiny just because the claims are counterintuitive. It makes perfect sense to have tiny priors for reasons of indifference. My prior in winning a lottery with a million tickets and one winner is about one in a million, but my intuitive wobbliness on the prior is less than an order of magnitude (I might have some uncertainty about whether the lottery is fair, etc.). But mere counterintuitiveness should not lead to such tiny priors. The counterintuitive happens all too often! So, perhaps, our priors in (1) and (2) were, or should have been, more like 0.10. And then the wobble in the priors will probably be rather smaller: it might vary between 0.05 and 0.15, which results in a less noticeable wobble in the posterior, namely between 0.90 and 0.97.

Simple hypotheses like (1) and (2), thus, will have at worst moderately low priors, even if they are quite counterintuitive.

And here is an interesting corollary. The God hypothesis is a simple hypothesis—it says that there is something that has all perfections. Thus even if it is counterintuitive (as it is to many atheists), it still doesn’t have really tiny priors.

But perhaps we are irrational in not having our posteriors wobble in cases like (1) and (2).

Objection: When we apply our intuitions, we generate posteriors, not priors. So our priors in (1) and (2) can be moderate, maybe even 1/2, but then when we updated on the counterintuitiveness of (1) and (2), we got something small. And then when we updated on the physics data, we got to 0.95.

Response: This objection is based on a merely verbal disagreement. For whatever wobble there is in the priors on the account I gave in the post will correspond to a similar wobble in the counterintuitiveness-based update in the objection.

Thursday, February 22, 2018

In practice priors do not wash out often enough

Bayesian reasoning starts with prior probabilities and gathers evidence that leads to posterior probabilities. It is occasionally said that prior probabilities do not matter much, because they wash out as evidence comes in.

It is true that in the cases where there is convergence of probability to 0 or to 1, the priors do wash out. But much of our life—scientific, philosophical and practical—deals with cases where our probabilities are not that close to 0 or 1. And in those cases priors matter.

Let’s take a case which clearly matters: climate change. (I am not doing this to make any first-order comment on climate change.) The 2013 IPCC report defines several likelihood levels:

  • virtually certain: 99-100%

  • very likely: 90-100%

  • likely: 66-100%

  • about as likely as not: 33-66%

  • unlikely: 0-33%

  • very unlikely: 0-10%

  • exceptionally unlikely: 0-1%.

They then assess that a human contribution to warmer and/or more frequent warm days over most land areas was “very likely”, and no higher likelihood level occurs in their policymaker summary table SPM.1. Let’s suppose that this “very likely” corresponds to the middle of its range, namely a credence of 0.95. How sensitive is this “very likely” to priors?

On a Bayesian reconstruction, there was some actual prior probability p0 for the claim, which, given the evidence, led to the posterior of (we’re assuming) 0.95. If that prior probability had been lower, the posterior would have been lower as well. So we can ask questions like this: How much lower than p0 would the prior have had to be for…

  • …the posterior to no longer be in the “very likely” range?

  • …the posterior to fall into the “about as likely as not” range?

These are precise and pretty simple mathematical questions. The Bayesian effect of evidence is purely additive when we work with log odds instead of probabilities, i.e., with log p/(1 − p) in place of p, so a difference in prior log odds generates an equal difference in posterior log odds. We can thus get a formula for what kinds of changes of priors translate to what kinds of changes in posteriors. Given an actual posterior of q0 and an actual prior of p0, to have got a posterior of q1, the prior would have to have been (1 − q0)p0q1/[(q1 − q0)p0 + (1 − q1)q0], or so says Derive.

We can now plug in a few numbers, all assuming that our actual confidence is 0.95:

  • If our actual prior was 0.10, to leave the “very likely” range, our prior would have needed to be below 0.05.

  • If our actual prior was 0.50, to leave the “very likely” range, our prior would have needed to be below 0.32.

  • If our actual prior was 0.10, to get to the “about as likely as not” range, our prior would have needed to be below 0.01.

  • If our actual prior was 0.50, to get to the “about as likely as not” range, our prior would have needed to be below 0.09.
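These thresholds can be checked with a minimal sketch of my own, working on the odds scale:

    def required_prior(p0, q0, q1):
        # If prior p0 led to posterior q0, what prior would have led to posterior q1
        # on the same evidence?  Posterior odds = prior odds * Bayes' factor.
        bayes_factor = (q0 / (1 - q0)) / (p0 / (1 - p0))
        target_odds = (q1 / (1 - q1)) / bayes_factor
        return target_odds / (1 + target_odds)

    # Actual posterior 0.95; "very likely" starts at 0.90, "about as likely as not" at 0.66.
    for p0 in (0.10, 0.50):
        print(p0,
              round(required_prior(p0, 0.95, 0.90), 2),
              round(required_prior(p0, 0.95, 0.66), 2))
    # 0.1 -> 0.05 (to leave "very likely") and 0.01 (to reach "about as likely as not")
    # 0.5 -> 0.32 (to leave "very likely") and 0.09 (to reach "about as likely as not")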

Now, we don’t know what our actual prior was, but we can see from the above that variation of priors well within an order of magnitude can push us out of the “very likely” range and into the merely “likely”. And it seems quite plausible that the difference between the “very likely” and merely “likely” matters practically, given the costs involved. And a variation in priors of about one order of magnitude moves us from “very likely” to “about as likely as not”.

Thus, as an empirical matter of fact, priors have not washed out in the case of global warming. Of course, if we observe long enough, eventually our evidence about global warming is likely to converge to 1. But by then it will be too late for us to act on that evidence!

And there is nothing special about global warming here. Plausibly, many scientific and ordinary beliefs that we need to act on have a confidence level of no more than about 0.95. And so priors matter, and can matter a lot.

We can give a rough estimate of how differences in priors make a difference regarding posteriors using the IPCC likelihood classifications. Roughly speaking, a change from one category to the next (e.g., from “exceptionally unlikely” to “very unlikely”) in the priors results in a change from one category to the next (e.g., from “likely” to “very likely”) in the posteriors.

The only cases where priors have washed out are those where our credences have converged very close to 0 or to 1. There are many scientific and ordinary claims in this category, but not nearly enough for us to be satisfied. We do need to worry about priors, and we had better not be subjective Bayesians.