A very rough draft of Infinity, Causation and Paradox is now on github. Chapter 7 is the weakest, I think. I welcome comments about everything, by email or, preferably, with the issue tracker on github.
Thursday, December 31, 2015
You are a police officer, and it is looking to you like Glossop is about to kill Fink-Nottle with a shotgun. You hear Glossop say to Fink-Nottle: "This will pay you back for stealing Madeline's affections." Your justified credence that Glossop is about to murder Fink-Nottle is, say, 0.9999. Though there is some small chance that, say, Glossop and Fink-Nottle are practicing a fight scene for an amateur theatrical. The only thing you can see that you can do to save Fink-Nottle is to shoot Glossop dead (you can't yell at them, as you're too far away for them to hear you--you only know what Glossop is saying because you can read his lips). This seems to be the right thing to do, even though you risk a probability of 0.0001 that you are killing an innocent man.
On the other hand, it is not permissible to kill someone you know for sure to be innocent in order to save 9999 others.
There is an apparent tension between these two judgments. Standard decision-theoretic considerations suggest that if it is worth taking a 0.0001 probability risk of an adverse outcome (the killing of an innocent person, in this case) in order to secure a 0.9999 chance of some benefit (saving an innocent's life) then the disvalue of the adverse outcome must be less than 9999 times the value of the benefit. Thus, it would follow from the first judgment that the disvalue of killing an innocent person is less than 9999 times the value of saving the life of an innocent person. But if so, then it seems it would be worthwhile to kill one innocent to save 9999 others.
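The decision-theoretic arithmetic here can be made explicit. The following is a sketch of my own (the function name and unit choices are illustrative, not from the post): shooting is worthwhile exactly when the expected value is positive, which pins the disvalue threshold at 0.9999/0.0001 = 9999 times the value of a saved life.

```python
def expected_value(p_guilty, v_save, d_kill_innocent):
    """Expected value of shooting: with probability p_guilty you save an
    innocent life; with probability 1 - p_guilty you kill an innocent."""
    return p_guilty * v_save - (1 - p_guilty) * d_kill_innocent

# With p = 0.9999, shooting has positive expected value exactly when the
# disvalue of killing an innocent is below 9999 times the value of saving
# a life.
just_below_threshold = expected_value(0.9999, 1.0, 9998.0)
just_above_threshold = expected_value(0.9999, 1.0, 10000.0)
assert just_below_threshold > 0
assert just_above_threshold < 0
```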
Risk aversion is relevant to such judgments. But risk aversion tends to reduce the choiceworthiness (or at least apparent choiceworthiness) of actions involving uncertainty, so it's going to make it harder to justify killing Glossop, and that only strengthens the argument that if it's permissible to kill Glossop, it's permissible to kill one innocent to save 9999.
The deontologist might use the above line of argument to challenge the applicability of standard decision-theoretic considerations to moral questions. The person committed to such a decision theory might, instead, use the line of argument to undermine deontology.
But the above line of thought is fallacious. For in killing Glossop, you accept a risk of 0.0001 of
- killing an innocent who looks guilty to you,
and not a risk of
- killing someone you know for sure to be innocent.
These are different adverse outcomes, and the disvalue of the first may well be far smaller than the disvalue of the second. So the first judgment only establishes that the disvalue of killing an innocent who looks guilty is less than 9999 times the value of saving a life; it establishes nothing about the disvalue of knowingly killing an innocent, and hence no conflict with the second judgment follows.
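To see numerically that the two judgments can be held consistently once the outcomes are distinguished, here is a toy model of my own; the disvalue figures are illustrative assumptions, not anything from the post.

```python
# Illustrative (assumed) disvalues, in units of "value of saving one life":
V_SAVE = 1.0
D_LOOKS_GUILTY = 5_000.0        # killing an innocent who looks guilty to you
D_KNOWN_INNOCENT = 1_000_000.0  # killing someone known for sure to be innocent

# Judgment 1: shooting Glossop (a 0.0001 risk of the *first* outcome)
# comes out worthwhile.
ev_shoot = 0.9999 * V_SAVE - 0.0001 * D_LOOKS_GUILTY
assert ev_shoot > 0  # 0.9999 - 0.5 > 0

# Judgment 2: certainly killing a known innocent to save 9999 does not.
ev_sacrifice = 9999 * V_SAVE - 1.0 * D_KNOWN_INNOCENT
assert ev_sacrifice < 0  # 9999 - 1,000,000 < 0
```

Since the two adverse outcomes get different disvalues, no contradiction arises between permitting the first act and forbidding the second.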
Tuesday, December 29, 2015
I just updated RaspberryJamMod, which allows one to run Python code in Minecraft (using a variant of the Raspberry Pi Minecraft API), to work with Minecraft 1.8.8 (with the latest beta of Forge). Merry Christmas!
Here's a rebel fighter from Space Janitors (using a mesh in their Janitor's Closet) generated with the render.py script.
Two nights ago I had a dream. I was in the military, and we were being deployed, and I suddenly got worried about something like this line of thought (I am filling in some details--it was more inchoate in the dream). I wasn't in a position to figure out on my own whether the particular actions I was going to be commanded to do are morally permissible. And these actions would include killing, and to kill permissibly one needs to be pretty confident that the killing is permissible. Moreover, only the leaders had in their possession sufficient information to make the judgment, so I would have to rely on their judgment. But I didn't actually trust the moral judgment of the leaders, particularly the president. My main reason in the dream for not trusting them was that the president is pro-choice, and someone whose moral judgment is so badly mistaken as to think that killing the unborn is permissible is not to be trusted in moral judgments relating to life and death. As a result, I refused to participate, accepting whatever penalties the military would impose. (I didn't get to find out what these were, as I woke up.)
Upon waking up and thinking this through, I wasn't so impressed by the particular reason for not trusting the leadership. A mistake about the morality of abortion may not be due to a mistake about the ethics of killing, but due to a mistake about the metaphysics of early human development, a mistake that shouldn't affect one's judgments about typical cases of wartime killing.
But the issue generalizes beyond abortion. In a pluralistic society, a random pair of people is likely to differ on many moral issues. The probability of disagreement will be lower when one of the persons is a member of a population that elected the other, but the probability of disagreement is still non-negligible. One worries that a significant percentage of soldiers have moral views that differ from those of the leadership to such a degree that if the soldiers had the same information as the leaders do, the soldiers would come to a different moral evaluation of whether the war and particular lethal acts in it are permissible. So any particular soldier who is legitimately confident of her moral views has reason to worry that she is being commanded things that are impermissible, unless she has good reason to think that her moral views align well with the leaders'. This seems to me to be a quite serious structural problem for military service in a pluralistic society, as well as a serious existential problem.
The particular problem here is not the more familiar one where the individual soldier actually evaluates the situation differently from her leaders. Rather, it arises from a particular way of solving the more familiar problem. Either the soldier has sufficient information by her lights to evaluate the situation or she does not. If she does, and she judges that the war or a lethal action is morally wrong, then of course conscience requires her to refuse, accepting any consequences for herself. Absent sufficient information, she needs to rely on her leaders. But here we have the problem above.
How to solve the problem? I don't know. One possibility is that even though there are wide disparities between moral systems, the particular judgments of these moral systems tend to agree on typical acts. Even though utilitarianism is wrong and Catholic ethics is right, the utilitarian and the Catholic moralist tend to agree about most particular cases that come up. Thus, for a typical action, a Catholic who hears the testimony of a well-informed utilitarian that an action is permissible can infer that the action is probably permissible. But war brings out differences between moral systems in a particularly vivid way. If bombing civilians in Hiroshima and Nagasaki is likely to get the emperor to surrender and save many lives, then the utilitarian is likely to say that the action is permissible while the Catholic will say it's mass murder.
It could, however, be that there are some heuristics that could be used by the soldier. If a war is against a clear aggressor, then perhaps the soldier should just trust the leadership to ensure that the other ius ad bellum conditions (besides the justness of the cause) are met. If a lethal action does not result in disproportionate civilian deaths, then there is a good chance that the judgments of various moral systems will agree.
But what about cases where the heuristics don't apply? For instance, suppose that a Christian is ordered to drop a bomb on an area that appears to be primarily civilian, and no information is given. It could be that the leaders have discovered an important military installation in the area that needs to be destroyed, and that this is intelligence that cannot be disclosed to those who will carry out the bombing. But it could also be that the leaders want to terrorize the population into surrender or engage in retribution for enemy acts aimed at civilians. Given that there is a significant probability, even if it does not exceed 1/2, that the action is a case of mass murder rather than an act of just war, is it permissible to engage in the action? I don't know.
Perhaps knowledge of prevailing military ethical and legal doctrine can help in such cases. The Christian may know, for instance, that aiming at civilians is forbidden by that doctrine. In that case, as long as she has enough reason to think that the leadership actually obeys the doctrine, she might be justified in trusting in their judgment. This is, I suppose, an argument for militaries to make clear their ethical doctrines and the integrity of their officers. For if they don't, then there may be cases where too much disobedience of orders is called for.
I also don't know what probability of permissibility is needed for someone to permissibly engage in a killing.
I don't work in military ethics. So I really know very little about the above. It's just an ethical reflection occasioned by a dream...
Monday, December 28, 2015
Back when I was a grad student, my dad gave me a Sharp OZ 730 organizer. It had a Z80 processor, and could (slowly) run BASIC programs. Some folks figured out how to run assembly code on it, and then I figured out how to target it with C code, and I wrote most of a library, designed a file system that hid in hidden recesses of the device, prepared an SDK and wrote or co-wrote a bunch of apps: games, a serial terminal emulator, a function grapher (it's really nice and polished, as I just discovered on trying it out after many years), an ebook reader, a Greek New Testament, some utilities. I got a couple more Sharp organizers, as a guy from Sharp UK liked what I was doing. (I think he liked that one could hook up the device to a modem and use the terminal emulator to check email.) Eventually, the unit got discontinued.
Recently, I've been digging out my code archives. They're mostly now on github, though perhaps not always the latest version. The code is a messy mix of Z80 assembly and C. The SDK worked by launching a CP/M emulator, running the Hi-Tech C compiler in the emulator, and then running a peephole optimizer. With limited memory and CPU power, one had to do a lot of optimization. I remember looking carefully at the code-cycle charts for the assembly, and then doing nasty little things like storing variables inside self-modifying code (instead of allocating two bytes for a global variable and then accessing it with, say, ld bc,(address), the first time the variable is accessed, I can do ld bc,value and store the variable right inside the immediate value), and implementing some of these in the peephole optimizer.
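The immediate-operand trick can be sketched like this (hypothetical Z80 assembly of my own, not code from the actual SDK; byte and T-state counts are from the standard Z80 instruction tables):

```asm
; Conventional approach: a 2-byte global plus an indirect load.
counter: defw 0
         ld bc,(counter)        ; 4 bytes, 20 T-states

; Self-modifying approach: the variable lives in the load's immediate field.
load_counter:
         ld bc,0                ; 3 bytes, 10 T-states; the 0 IS the variable
         ; ... use BC ...

; To store a new value, overwrite the 16-bit immediate in place:
         ld (load_counter+1),hl ; write HL into the immediate bytes
```

The saving is one byte and ten T-states per read, which adds up in tight loops on a machine this constrained; the cost is that the code can no longer live in ROM.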
My big kids have one organizer unit each, so a couple of weeks ago I ordered a $2 USB-serial adapter from banggood. It came today and I can once again transfer the apps to the device (the hard drive on the computer I used to use for development of this stuff failed, and my laptops don't have a serial port). Sharp still has the downloader software here (I run it on Win 8 x64 in Win98+Admin compatibility mode). I even put a ZMachine emulator (not written by me) on my son's so he can play Zork (purchased from GOG).
Monday, December 21, 2015
Friday, December 18, 2015
If determinism were true, then since each state could be projected from the initial state, we could simply suppose that the whole four-dimensional shebang came into existence causally "all at once", so that there would be no causal relations within the four-dimensional universe. The only relevant causation would be that of God's causing the universe as a whole--and an atheist might just think the four-dimensional universe to be uncaused.
I think that this acausal picture could be adapted to give an attractive picture of the role of causation in a collapse interpretation of quantum mechanics (whether the collapse is of the GRW-type or of the consciousness-caused type). On a collapse picture, we have an alternation between a deterministic evolution governed by the Schroedinger equation and an indeterministic collapse. Why not suppose, then, that there is no causation within the deterministic evolution? We could instead suppose that the state of the universe at collapse causes the whole of the four-dimensional block between that collapse and the next. As long as collapse isn't too frequent, this could allow occasions of causation to be discrete, with only a finite number of such occasions within any interval of time. And this would let us reconcile quantum physics with causal finitism even with a continuous time. (Relativity would require more work.)
Wednesday, December 16, 2015
Start with this argument:
1. Christians ought to forgive all wrongdoing.
2. Forgiving includes foregoing retribution.
3. All ought to forego retribution in the absence of wrongdoing.
4. Therefore, Christians ought in all cases to forego retribution.
5. Retributive justice is central to the concept of punishment.
6. Punishment is often needed for the public good.
What is the Christian to do? Well, one thought is that we should weaken (1). Thomas Aquinas in his sermons on the Lord's Prayer says that we are only required to forgive those who ask us for forgiveness. (After all, Christ tells us to expect God to forgive us as we forgive others; but we do ask God for forgiveness.) Forgiving the unrepentant is supererogatory, he says. That weakens the conflict between the displayed claims. Nonetheless, there are times when the criminal justice system needs to punish someone who we have good reason to think is repentant, because the risks to society of letting her go free may be unacceptably high. Furthermore, Aquinas's modification of (1) doesn't help all that much because the supererogatory is by definition always right. So even with Aquinas's modification of (1), we still seem to get a conflict between forgiveness and the needs of the public good.
Another move, and it may be the most promising, is to distinguish between the individual and the community. Forgiveness is the individual Christian's duty (or at least supererogation--but for brevity I won't consider that option any more), but there are wrongs that, on account of the public good, the community should not forgive. I think this is a quite promising option, but I am not completely convinced. One reason I am not convinced is that Catholic social teaching allows for the possibility of a Christian state, with Vatican City being an example. And there is some plausibility in thinking that the Christian state should behave rather like the Christian individual, but a Christian state has need of punishment for safeguarding the public good. Now maybe forgiveness and punishment are one of the things that varies between the Christian individual and the Christian state, so that the individual should forgive while the state sometimes is not permitted to do so. But it would be good to have another approach.
Think about sports and victory. The very concept of a fencing match cannot be understood apart from seeing it as a practice whose internal end is getting to a score of five before the opponent does. Nonetheless, it is possible to have a friendly and honorable match where no one is intentionally pursuing victory. Rather, the players are exercising their skills in excellent ways that tend to promote victory without actually seeking victory. (The clearest case may be a parent fencing with a child and hoping that the child's skills are so good that the parent will be defeated; but one can have cases where each wants the other to win.)
Similarly, perhaps, just as sports cannot be understood apart from victory, punishment cannot be understood apart from retribution. But just as there are reasons besides victory to play, there are reasons apart from retribution to punish. In those cases, punishment is not intended by the agent. (Another example: I have argued that sex cannot be understood apart from its reproductive end; however, agents can permissibly refrain from pursuing reproduction in particular cases of sex.) This suggests that perhaps we should weaken premise (2) of the initial argument to:
- Forgiving includes refraining from pursuit of retribution.
This may be a part of why John Paul II says in Evangelium Vitae that for the death penalty to be justified in some particular (and presumably very rare) case it must be justified on grounds of protection of society. In other words, it is the protection of society, rather than retribution, that is to be sought.
Monday, December 14, 2015
If physicalism is true, then the nature of human beings is probably essentially tied to the nature of our brains, which in turn is essentially tied to the laws of nature. So a human being couldn't have a radically transformed brain. But there are limits on the information storage of our brains. This makes plausible the first premise of the following valid argument:
- If physicalism is true, there are only finitely many integers that it is metaphysically possible for humans to think about (without externalist crutches).
- It is metaphysically possible for humans to think about any integer (without externalist crutches).
- So, physicalism is not true.
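A toy way to see the force of premise (1) is by a pigeonhole count; the following sketch is my own, and the bit count is a stand-in for whatever the true physical bound would be.

```python
# If internalistically distinct thoughts require physically distinct brain
# configurations, and a law-governed human brain admits at most 2**N_BITS
# distinguishable configurations, then at most 2**N_BITS integers can ever
# be singled out in thought: distinct thoughts need distinct states.
N_BITS = 10  # toy figure; the real bound would be astronomically larger
max_thinkable_integers = 2 ** N_BITS
assert max_thinkable_integers == 1024

# But the integers are infinite, so some integer (indeed, all but finitely
# many) is unthinkable -- contradicting premise (2).
assert max_thinkable_integers < float("inf")
```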
Friday, December 11, 2015
The statement that God is truth is deeply mysterious. Now the statement that God is love is also mysterious, but it is easier to get a start at what is being said:
- God isn't identical with our loving, but rather God is identical with his own loving, and our loving is but a participation in God's loving.
- God isn't identical with our truthing, but rather God is identical with his own truthing, and our truthing is but a participation in God's truthing.
I see two options: one reactive and one active. The reactive option is an activity that finds reality and unveils it (Heidegger suggests unveiling is at the heart of the etymology of the Greek aletheia--I have no idea if he's right as a matter of philology). The active one makes truths true. If we're looking for something in God, the active one is more promising.
Here is a suggestion. God makes all truths true, though not always in the sense of "truthmaking". Propositions are divine ideas. Their ground is identical with God by divine simplicity. True propositions divide into the necessary and the contingent. Necessary propositions are made true by God himself: God is their ultimate truthmaker, and he makes them true by his being. So in the case of necessary truths, truthing is God's activity of making necessary propositions be true in virtue of himself. In the case of contingent truths, truthing is God's activity of making contingent propositions be true by creating the reality that grounds them.
Our truthing, on the other hand, is both active and reactive. Some truths we make true in the active way by being or creating the reality that grounds them. Others we merely react to. In both cases, our truthing is derivative from God's: our creative abilities are mere participations in God's, and require God's constant cooperation, and our reactions are ultimately reactions to God.
I do not have a great amount of confidence in this speculative analysis.
Monday, December 7, 2015
Question 1: Are there marriages in heaven?
Answer: No. Jesus explicitly says that there is no marrying or giving in marriage in heaven (Luke 20:27-38). So no new marriages are entered into in heaven. That might seem to leave open the question whether existing marriages might not continue. But the context of the discussion is of a woman who was married to a sequence of brothers and the paradoxical consequences of that if these marriages continue in the afterlife. Jesus' answer only solves the paradox if we accept the implicature that there is no marriage in heaven. Moreover, it would be an odd view if existing marriages persisted in heaven but new ones couldn't be entered into. If marriage in heaven would be a good thing for us, this kind of a setup just wouldn't be fair to a loving couple who was murdered minutes before their wedding was to take place.
Question 2: Why is there no marriage in heaven?
Answer: It's good that way. Well, that's true, but not informative: it applies to everything about heavenly life. But we can think a bit about why it's good for it to be so.
One thought is that only marriage for eternity would be fitting for heavenly life. Divorce just doesn't seem the sort of thing that is fitting for heavenly life. But I think it would be problematic for humans to bind themselves to one another for an eternal heavenly life. Heavenly life is unimaginable to us now, radically transcending our current knowledge. Because of this, it would not be appropriate for us to commit ourselves now to be bound to another person for an infinite length of time in that utterly different life. Now one might think that once in heaven the problem disappears. But I suspect it does not. I suspect that the heavenly life is a life of eternal and non-asymptotic growth (hence my recent swollen head argument against Christian physicalism). I don't know if the beatific vision itself increases eternally, but I suspect that at least the finite goods of heavenly life do. So some of the reasons why it would not be fitting for us to bind ourselves now to one another for eternity would apply in heaven: in year ten of heavenly life, perhaps, one cannot imagine what year one billion will be like; in year one billion, one cannot imagine what year 10^100 will be like; and so on. Eternity is long.
A second thought is that there is something about the exclusivity of marriage that is not fitted for heavenly life. The advocates of free love had something right: there is something limiting about exclusivity. This limitation is entirely fitting given the nature of marriage and its innate links to reproduction and sexual union as one body. But nonetheless it seems plausible to me that an innately exclusive form of love is not fitted to heaven.
A third and most speculative thought: There is a link--worth exploring in much greater depth than I am going to do here--between the exclusivity of marriage and the appropriateness of according privacy to sexual activity. But I suspect that there is a great transparency in heavenly life. What is hidden is revealed. It is a life of truth rather than of withholding of information. In regard to the emotional life, those in heaven will have highly developed faculties of empathy. After all, my model is one of continued growth. People can grow immensely in empathy in this life--how much more will they likely grow in heaven! These faculties of empathy would enable people to have great empathetic insight into the sexual lives of others, to a point where that insight could make them empathetic participants in that life, in a way incompatible with the exclusivity of marriage and the privacy of sexuality.
Friday, December 4, 2015
Assuming physicalism, for any fixed volume V, there are only finitely many internalistically-distinct mental states that a human brain of volume less than or equal to V can exhibit. (There may be infinitely many brain states, space and time perhaps being continuous, but brain states that are close enough together will not yield mentally relevant differences. There may also be infinitely many externalistically-distinct states given semantic externalism.) Therefore, given physicalism and a robust supervenience of the mental on the physical, in heaven either the volume of the human head will swell beyond all bounds, or else eventually we will only have reruns of the internalistically-same mental states. Neither option is acceptable. So, Christians should reject physicalism.
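The rerun half of the dilemma is just the pigeonhole principle. Here is a toy simulation of my own (the state counts and dynamics are made up purely for illustration):

```python
from itertools import islice

def experience_stream(num_states):
    """An endless stream of mental states drawn from a finite repertoire."""
    i = 0
    while True:
        yield i % num_states  # deterministic toy dynamics
        i += 1

# With only finitely many internalistically distinct states available in a
# bounded volume, any infinite future must revisit a state: among the first
# num_states + 1 experiences, two are already identical.
states = list(islice(experience_stream(5), 6))
assert len(set(states)) < len(states)  # a rerun has occurred
```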
Thursday, December 3, 2015
In my previous post, I showed that given a backwards infinite sequence of coin tosses, there is a simple strategy leveraging data about infinitely many past coin flips that guarantees that you guessed correctly infinitely often. I then suggested that this supports the idea that one can't leverage an infinite amount of past data, and that in turn supports causal finitism--the denial of the possibility of infinite causal histories. But there is a gap in that argument: Maybe there is some strategy that guarantees infinitely many correct guesses that doesn't require the guesser to make use of data about infinitely many past coin flips. If so, then the paradox doesn't have much to do with infinite amounts of data.
Fortunately for me, that gap can be filled (modulo the Axiom of Choice). Given the Axiom of Choice, it's a theorem that there is no strategy leveraging merely a finite amount of past data at each step that guarantees getting any guesses right. In other words, for every strategy that leverages a finite amount of past data, there is a sequence of coin flips such that that sequence would result in the guesser getting every guess wrong. The proof uses the Compactness Theorem for First-Order Logic.
Wednesday, December 2, 2015
You've existed for an infinite amount of time, and each day until the year 2100 a coin is tossed. You always know the results of the past tosses, and before each toss you are asked to guess the next toss. Given the Axiom of Choice, there is a mathematical strategy that guarantees you make only a finite number of mistakes.
Here's a simpler fact, no doubt well-known, but not dependent on the Axiom of Choice. There is a mathematical strategy that guarantees that you guess correctly infinitely often. This is surprising. Granted, it's not surprising that you guess correctly infinitely often--that is what you would expect. But what is surprising is that there is a guarantee of it! Here's the simple strategy:
- If among the past tosses, there were infinitely many heads, guess "heads".
- Otherwise, guess "tails".
I take the paradoxical existence of this mathematical strategy to be evidence for causal finitism: causal finitism rules out the possibility of your having observational information from infinitely many past tosses. Thus the strategy remains purely mathematical: it cannot be implemented in practice.
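For intuition, the strategy can be simulated in the special case where the backwards-infinite sequence differs from a constant tail at only finitely many days. This encoding is my own; as the post says, the general case cannot be implemented, since it needs data about infinitely many past flips.

```python
def toss(tail, exceptions, day):
    """Outcome on a given day: `tail`, except at finitely many listed days."""
    return exceptions.get(day, tail)

def guess(tail, exceptions, day):
    """The post's strategy: guess 'H' iff the past (days < day) contains
    infinitely many heads -- which, for this encoding, holds iff tail == 'H'."""
    return 'H' if tail == 'H' else 'T'

tail = 'T'
exceptions = {-7: 'H', -3: 'H'}  # finitely many deviant days
results = [guess(tail, exceptions, d) == toss(tail, exceptions, d)
           for d in range(-100, 0)]
# Wrong only on the finitely many exceptional days; correct everywhere else,
# hence correct infinitely often on the full backwards-infinite sequence.
assert results.count(False) == len(exceptions)
```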
Tuesday, December 1, 2015
Suppose, as Christian materialists believe, that materialism is true and yet some people have eternal life in heaven. Good experiences happen daily in heaven, and bad things never do. It is a bad thing to fail to remember a good experience. So in heaven people will have more and more good experiences that they remember. But it is plausible that there is a maximum information density in our brains, and given materialism, all the information in memory is stored in the brain. Thus, it follows that those who will be in heaven will have their heads swell without bound. Humans will eventually have heads that are millions of light-years in diameter, just to hold all the good experiences that have happened to them. But a life with such big heads just doesn't seem to be the life of human fulfillment.
Objection 1: Perhaps there are patterns to the good experiences in heaven such that the total information content in the infinite future of good experiences is finite.
Response: If the total information content is finite, then it seems likely that one will eventually get bored. Moreover, plausibly, human flourishing involves continual growth in knowledge, and it would not be fitting for heaven if this growth were to slow down eventually in order to ensure an upper bound on the total information content.
Objection 2: The laws of nature will be different in heaven, and while there is maximum information density in our current brains, heavenly brains will be made of a different kind of matter, a matter that either has infinitely many particles in any finite volume or that is infinitely subdivisible. After all, the Christian tradition does hold that we will function differently--there is speculation that we may be able to go through solid walls as Jesus apparently did after the resurrection, move really fast, see really far, etc.
Response: This seems to me to be the best materialist response. But given that on materialism the brain is central to the kinds of beings we are, there is a worry that such a radical reworking of its structure into a different kind of matter would create beings that aren't human. The dualist can allow for a more radical change in the physical aspects of the body while allowing that we still have the same kind of being, since the kind of being could be defined by the soul (this is clearest in the hylomorphic theory).
Objection 3: The dualists face the same problem given that we have good reason to think that memories are stored in the brain.
Response: Maybe memories are not entirely stored in the brain. And see the response to Objection 2: the finer-matter response is more defensible in the case of the dualist.
Monday, November 30, 2015
I've argued that typically the person who is hiding Jews from Nazis and is asked by a Nazi if there are Jews in her house tells the truth, and does not lie, if she says: "No." That argument might be wrong, and even if it's right, it probably doesn't apply to all cases.
So, let's think about the case of the Nazi asking Helga if she is hiding Jews, when she is in fact hiding Jews, and when it would be a lie for her to say "No" (i.e., when there isn't the sort of disparity of languages that I argued is normally present). The Christian tradition has typically held lying to be always wrong, including thus in cases like this. I want to say some things to make it a bit more palatable that Helga does the right thing by refusing to lie.
The Nazi is a fellow human being. Language, and the trust that underwrites it (I was reading this morning that one of the most difficult questions in the origins of language is about the origination of the trust essential to language's usefulness), is central to our humanity. By refusing to betray the Nazi's trust in her through lying, Helga is affirming the dignity of all humans in the particular case of someone who needs it greatly--a human being who has been dehumanized by his own choices and the influence of an inhuman ideology. By attempting to dehumanize Jews, the Nazi dehumanized himself to a much greater extent. Refusing to lie, Helga gives her witness to a tattered child of God, a being created to know and live by the truth in a community of trust, and she gives him a choice whether to embrace that community of trust or persevere on the road of self-destruction through alienation from what is centrally human. She does this by treating him as a trusting human rather than a machine to be manipulated. She does this in sadness, knowing that it is very likely that her gift of community will be refused, and will result in her own death and the deaths of those she is protecting. In so doing she upholds the dignity of everyone.
When I think about this in this way, I think of the sorts of things Christian pacifists say about their eschatological witness. But while I do embrace the idea that we should never lie, I do not embrace the pacifist rejection of violence. For I think that just violence can uphold the dignity of those we do violence to, in a way in which lying cannot. Just violence--even of an intentionally lethal sort--can accept the dignity of an evildoer as someone who has chosen a path that is wrong. We have failed to sway him by persuasion, but we treat him as a fellow member of the human community by violently preventing him from destroying the community that his own wellbeing is tied to, rather than by betraying with a lie the shattered remains of the trustful connection he has to that community.
I don't think the above is sufficient as an argument that lying is always wrong. But I think it gives some plausibility to that claim.
Saturday, November 28, 2015
A lot of worthwhile texts, both fiction and nonfiction, make direct reference to particular sports or use sports analogies or metaphors. These are difficult to understand for readers who do not know the rudiments of these sports. But there is not enough education on these sports in school, except in the context of actual participation in them. But I suspect that only a minority of children in English-speaking countries participates in all of the culturally important sports that figure in English-language texts, sports such as American football, baseball, cricket, golf, hockey and soccer, understanding of which is needed for basic cultural literacy among readers of English (I have to confess to lacking that understanding in the case of most of these sports--my own school education was deficient in this respect). Thus, either there should be broader participation--but that is unsafe in the case of American football and likely impractical in the case of golf--or there should be teaching about the rules of sports outside of contexts of participation, say in English or history class.
This post is inspired by my daughter's noting her difficulties in reading the cricket-related bits of a P. G. Wodehouse novel.
Tuesday, November 24, 2015
Suppose we have an infinite sequence of independent and fair coins. A betting portfolio is a finite list of subsets of the space of outcomes (heads-tails sequences) together with a payoff for each subset. Assume:
- Permutation: If a rational agent would be happy to pay x for a betting portfolio, and A is one of the subsets in the betting portfolio, then she would also be happy to pay x for a betting portfolio that is exactly the same but with A replaced by A*, where A* is isomorphic to A under a permutation of the coins.
- Equivalence: A rational agent who is happy to pay x for one betting portfolio will be willing to accept an equivalent betting portfolio--one that is certain to give the same payoff for each outcome--for the same price.
- Great Deal: A rational agent will be happy to pay $1.00 for a betting portfolio where she wins $1.25 as long as the outcome is not all-heads or all-tails.
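A quick way to see why Great Deal looks rational (a finite-coin sketch of my own, not part of the post's argument): with n fair coins, the chance of all-heads-or-all-tails is 2^(1−n), so the expected profit of the $1.00-for-$1.25 bet quickly approaches $0.25 as n grows.

```python
from fractions import Fraction

def great_deal_ev(n, price=Fraction(1), payoff=Fraction(5, 4)):
    """Expected profit from paying `price` for a bet that pays `payoff`
    unless all n fair, independent coins land heads or all land tails."""
    p_win = 1 - Fraction(2, 2**n)   # 1 - P(all heads) - P(all tails)
    return p_win * payoff - price

for n in (2, 5, 10, 20):
    print(n, float(great_deal_ev(n)))
```

The infinite-coin case is the limit, where the all-heads and all-tails outcomes have probability zero and the expected profit is exactly $0.25.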
Monday, November 23, 2015
Consider a day in a human life that is just barely worth living. Now consider the life of Beethoven. For no finite n would having n of the barely-worth-living days be better than having all of the life of Beethoven. This suggests that values in human life cannot be modeled by real numbers. For if a and b are positive numbers, then there is always a positive integer n such that nb>a. (I am assuming additiveness between the barely-liveable days. Perhaps memory wiping is needed to ensure additiveness, to avoid tedium?)
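One standard non-real-valued alternative is lexicographic ordering. A toy sketch (my illustration; the pair representation is an assumption, not something argued for in the post):

```python
# Toy model (my own): a value is a pair (great_goods, ordinary_days),
# compared lexicographically, so no finite number of barely-worth-living
# days ever adds up to one Beethoven-grade life. Real numbers can't model
# this: by the Archimedean property, for reals a, b > 0 there is always
# an integer n with n*b > a.
beethoven = (1, 0)        # one great life, no extra ordinary days

def barely_good_days(n):
    return (0, n)         # n barely-worth-living days, no great life

# Python tuples compare lexicographically:
assert barely_good_days(10**9) < beethoven
```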
Friday, November 20, 2015
Wednesday, November 18, 2015
Consider a countably infinite sequence of fair and independent coin tosses. Given the Axiom of Choice, there is no finitely additive probability measure that satisfies these conditions:
- It is defined for all sets of outcomes.
- It agrees with the classical probabilities where these are defined.
- It is invariant under permutations of coins.
Monday, November 16, 2015
If I promise to visit you for dinner, but then it turns out that I have a nasty case of the flu, I don't need to come, and indeed shouldn't come. But I could instead promise to meet you for dinner even if I have a nasty case of the flu; then, if the promise is valid, I need to come even if I have the flu. I suspect, however, that typically such a promise would be immoral: I should not spread disease. But one can imagine cases where it would be valid--say, if you really would like to get the flu for a serious medical experiment on yourself.
In my previous post, I gave a case where it would be beneficial to have a promise that binds even when fulfilling it costs multiple lives. Thus, there is some reason to think that one could have promises with pretty drastic "even if" clauses such as "even if a terrorist kills ten people as a result of this." But clearly not every "even if" clause is valid. For instance, if I promise to visit you for dinner even if I have to endanger many lives by driving unsafely fast, my "even if" clause is not valid under normal circumstances (if we know that my coming to dinner would save lives, though, then it might be).
One can try to handle the question of distinguishing valid from invalid "even if" clauses by saying that the invalid case is where it is impermissible to do the promised thing under the indicated conditions. The difficulty, however, is that whether doing the promised thing is or is not permissible can depend on whether one has promised it. Again, the example from my previous post could apply, but there are more humdrum cases where one would have an on-balance moral reason to spend the evening with one's family had one not promised to visit a friend.
Maybe this is akin to laws. In order to be valid, a law has to have a minimal rationality considered as an ordinance for the common good. In order to be valid, maybe a promise has to have a minimal rationality considered as an ordinance for the common human good with a special focus on the promisee? To promise to come to an ordinary dinner even if it costs lives does not satisfy that condition, while to promise to bring someone out of general anesthesia even if a terrorist kills people as a result could satisfy it under some circumstances. It would be nice to be able to say more, but maybe that can't be done.
Assume deontology. Can one make a promise so strong that one shouldn't break it even if breaking it saves a number of lives? I don't know for sure, but there are cases where such promises would be useful, assuming deontology.
Fred needs emergency eye-surgery. If he doesn't have the surgery this week, he will live, but he will lose sight in one eye. The surgery will be done under general anesthesia, and if Fred is not brought out of the general anesthesia he will die. But there is a complication. A terrorist has announced that if Fred lives out this week, ten random innocent people will die.
Prima facie here are the main options:
1. Kill Fred. Nine lives on balance are saved.
2. Do nothing. Fred loses binocular vision, and ten people are killed.
3. Perform surgery as usual. Fred's full vision is saved, he lives, but ten people are killed.
(A legalistic deontologist might try to suggest another option: Perform surgery, but don't bring Fred out of general anesthesia. The thought is that the surgery is morally permissible, and bringing Fred out of general anesthesia will be prohibited, but Fred's death is never intended. It is simply not prevented, and the reason for not preventing it is not in order to save lives, but simply because Double Effect prohibits us from bringing Fred out of general anesthesia. The obvious reason why this sophistical solution fails is that there is no rational justification for the surgery if Fred isn't going to be brought back to consciousness. I am reminded of a perhaps mythical Jesuit in the 1960s who would suggest to a married woman with irregular cycles that worked poorly with rhythm--or maybe NFP--that she could go on the Pill to regularize her cycles in order to make rhythm/NFP work better, and that once she did that, she wouldn't need rhythm/NFP any more. That's sophistical in a similar way.)
What we need for cases like this is promises that bind even at the expense of lives. Thus, the anesthesiologist could make such a strong promise to bring Fred back. As a result, Double Effect is not violated in solution (3), because proportionality holds: granted, on balance nine lives are lost, but also a promise is kept.
In practice, I think this is the solution we actually adopt. Maybe we do this through the Hippocratic Oath, or perhaps through an implicit taking on of professional obligations by the anesthesiologist. But it is crucial in cases like the above that such promises bind even in hard cases where lives hang on it.
All that said, the fact that it would be useful to have promises like this does not entail that there are promises like that. Still, it is evidence for it. And here we get an important metametaethical constraint: metaethical theories should be such that the usefulness of a putative moral fact counts as evidence for it.
Friday, November 13, 2015
Suppose I am considering two different hypotheses, and I am sure exactly one of them is true. On H, the coin I toss is chancy, with different tosses being independent, and has a chance 1/2 of landing heads and a chance 1/2 of landing tails. On N, the way the coin falls is completely brute and unexplained--it's "fundamental chaos", in the sense of my ACPA talk. So, now, you observe n instances of the coin being tossed, about half of which are heads and half of which are tails. Intuitively, that should support H. But if N is an option--if the prior probability of N is non-zero--we actually get Bayesian divergence as n increases: we get further and further from confirmation of H.
Here's why. Let E be my total evidence--the full sequence of n observed tosses. By Bayes' Theorem we should have:
P(H|E) = P(E|H)P(H)/[P(E|H)P(H) + P(E|N)P(N)].
But there is a problem: P(E|N) is undefined--indeed, completely undefined. What shall we do about this? We should take it to be an interval of probabilities, the full interval [0,1] from 0 to 1. The posterior probability P(H|E) will then also be an interval, ranging between
P(E|H)P(H)/[P(E|H)P(H) + (0)·P(N)] = 1
and
P(E|H)P(H)/[P(E|H)P(H) + (1)·P(N)] ≤ P(E|H)/P(N) = 2^(−n)/P(N).
(Remember that E is a particular sequence of n fair and independent tosses if H is true, so P(E|H) = 2^(−n).) Thus, as the number of observations increases, the posterior probability for the "sensible" hypothesis H gets to be an interval [a,1], where a is very small. But something whose probability is almost the whole interval [0,1] is not rationally confirmed. So the more data we have, the further we are from confirmation.
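To make the divergence concrete, here's a small numerical sketch (my own; it just plugs P(E|H) = 2^(−n) into the two endpoint formulas, with equal priors assumed for illustration):

```python
from fractions import Fraction

def posterior_interval(n, prior_H=Fraction(1, 2)):
    """Interval of possible values of P(H|E) as P(E|N) ranges over [0, 1].
    E is a particular sequence of n tosses, so P(E|H) = 2**-n under H."""
    prior_N = 1 - prior_H
    p_E_H = Fraction(1, 2**n)
    lower = p_E_H * prior_H / (p_E_H * prior_H + 1 * prior_N)  # P(E|N) = 1
    upper = p_E_H * prior_H / (p_E_H * prior_H + 0 * prior_N)  # P(E|N) = 0, gives 1
    return float(lower), float(upper)

for n in (1, 10, 50):
    print(n, posterior_interval(n))
```

The upper endpoint is always 1, while the lower endpoint shrinks exponentially in n, so the interval swallows nearly all of [0,1] as data accumulates.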
This means that no-explanation hypotheses like N are pernicious to Bayesians: if they are not ruled out as having zero or infinitesimal probability from the outset, they undercut science in a way that is worse and worse the more data we get.
Fortunately, we have the Principle of Sufficient Reason which can rule out hypotheses like N.
Thursday, November 12, 2015
Wednesday, November 11, 2015
Suppose we want to explain why one tortoise doesn't fall down, and we explain this by saying that it's standing on two tortoises. And then, to explain why the two lower tortoises don't fall down, we suppose that each stands on two tortoises. And so on. That's terrible: we're constantly explaining one puzzling thing by two that are just as puzzling.
Now suppose we try to explain the puzzle of the transition from bald to non-bald in a Sorites sequence of heads of hair (no hair, one hair, two hairs, etc.). We do this by saying that there are going to be vague cases of baldness. But this is just like the tortoise case. For while previously we had one puzzling transition, from bald to non-bald, now we have two puzzling transitions: from definitely bald to vaguely bald and from vaguely bald to definitely bald. So we repeat the move with higher levels of vagueness. The transition from definitely bald to vaguely bald yields a transition from definitely bald to vaguely vaguely bald and a transition from vaguely vaguely bald to definitely vaguely bald, and similarly for the transition from vaguely bald to definitely bald. At each stage, each transition is replaced with two. We're constantly explaining one puzzling thing by two that are just as puzzling.
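The doubling can be made explicit (a trivial count, my own): after k rounds of positing higher-order vagueness, the single bald/non-bald transition has become 2^k equally puzzling transitions.

```python
# Each round of positing vagueness replaces every puzzling transition
# with two, so the count doubles at every level.
def transitions(level):
    return 1 if level == 0 else 2 * transitions(level - 1)

assert [transitions(k) for k in range(5)] == [1, 2, 4, 8, 16]
```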
That said, it is possible with care to stand a tortoise on two tortoises, and we could have evidence that a particular tortoise is doing that. In that case, the two tortoises aren't posited to solve a puzzle, but simply because we have evidence that they are there. A similar thing could be the case with baldness. We might just have direct evidence that there is vagueness in the sequence. But as we go a level deeper, I suspect the evidence peters out. After all, in ordinary discourse we don't talk of vague vagueness and the like. So perhaps we might have a view on which there is one level of vagueness--and then epistemicism, i.e., there is a sharp transition from definitely non-bald to vaguely bald, and another from vaguely bald to definitely bald. But the more levels we posit, the more we offend against parsimony.
Tuesday, November 10, 2015
In physical laws, there are a number of numerical parameters. Some of these parameters are famously part of the fine-tuning problem, but all of them are puzzling. It would be really cool if we could derive the parameters from elegant laws that lack arbitrary-seeming parameters, but as far as I can tell most physicists doubt this will happen. The parameters look deeply contingent: other values for them seem very much possible. Thus people try to come up either with plenitude-based explanations where all values of parameters are exemplified in some universe or other, or with causal explanations, say in terms of universes budding off other universes or a God who causes universes.
Ethics also has parameters. To further spell out an example from Aquinas' discussion of the order of charity, fix a set of specific circumstances involving yourself, your father and a stranger, where both your father and the stranger are in average financial circumstances, but are in danger of a financial loss, and you can save one, but not both, of them from the loss. If it's a choice between saving your father from a ten dollar loss or the stranger from an eleven dollar loss, you should save your father from the loss. But if it's a choice between saving your father from a ten dollar loss or the stranger from a ten thousand dollar loss, you should save the stranger from the larger loss. As the loss to the stranger increases, at some point the wise and virtuous agent will switch from benefiting the father to benefiting the stranger. The location of the switch-over is a parameter.
Or consider questions of imposition of risk. To save one stranger's life, it is permissible to impose a small risk of death on another stranger, say a risk of one in a million. For instance, an ambulance driver can drive fast to save someone's life, even though this endangers other people along the way. But to save a stranger's life, it is not permissible to impose a 99% risk of death on another stranger. Somewhere there is a switch-over.
There are epistemic problems with such switch-overs. Aquinas says that there is no rule we can give for when we benefit our father and when we benefit a stranger, but we must judge as the prudent person would. However, I am not interested right now in the epistemic problem, but in the explanatory problem. Why do the parameters have the values they do? Now, granted, the particular switchover points in my examples are probably not fundamental parameters. The size of loss a stranger needs to face before you should help the stranger rather than save your father from a loss of $10 is surely not a fundamental parameter, especially since it depends on many of the background conditions (just how well off your father and the stranger are; what exactly your relationship with your father is; etc.). Likewise, the saving-risking switchover may well not be fundamental. But just as physicists doubt that one can derive the value of, say, the fine-structure constant (which measures the strength of electromagnetic interactions between charged particles) from laws of nature that contain no parameters other than elegant ones like 2 and π, even though it is surely a very serious possibility that the fine-structure constant isn't truly fundamental, so too it is doubtful that the switchover points in these examples can be derived from fundamental laws of ethics that contain no parameters other than elegant ones. If utilitarianism were correct, it would be an example of a parameter-free theory providing such a derivation. But utilitarianism predicts the incorrect values for the parameters. For instance, it incorrectly predicts that the risk value at which you need to stop risking a stranger's life to certainly save another stranger is 1, so that you should put one stranger in a position of 99.9999% chance of death if that has a certainty of saving another stranger.
So we have good reason to think that the fundamental laws of ethics contain parameters that suffer from the same sort of apparent contingency that the physical ones do. These parameters, thus, appear to call for an explanation, just as the physical ones do.
But let's pause for a second in regard to the contingency. For there is one prominent proposal on which the laws of physics end up being necessary: the Aristotelian account of laws as grounded in the essences of things. On such an account, for instance, the value of the fine-structure constant may be grounded in the natures of charged particles, or maybe in the nature of charge tropes. However, such an account really does not remove contingency. For on this theory, while it is not contingent that electromagnetic interactions between, say, electrons have the magnitude they do, it is contingent that the universe contains electrons rather than shmelectrons, which are just like electrons, but they engage in shmelectromagnetic interactions that are just like electromagnetic interactions but with a different quantity playing the role analogous to the fine-structure constant. In a case like this, while technically the laws of physics are necessary, there is still a contingency in the constants, in that it is contingent that we have particles which behave according to this value rather than other particles that would behave differently. Similarly, one might say that it is a necessary truth that such-and-such preferences are to be had between a father and a stranger, and that this necessary truth is grounded in the essence of humanity or in the nature of a paternity trope. But there is still a contingency that our world contains humans and fathers rather than something functionally very similar to humans and fathers but with different normative parameters.
So in any case we have a contingency. We need a meta-ethics with a serious dose of contingency, contingency not just derivable from the sorts of functional behavior the agents exhibit, but contingency at the normative level--for instance, contingency as to appropriate endangering-saving risk tradeoffs. This contingency undercuts the intuitions behind the thesis that the moral supervenes on the non-moral. Here, both Natural Law and Divine Command rise to the challenge. Just as the natures of contingently existing charged objects can ground the fine-structure constants governing their behavior, the natures of contingently existing agents can ground the saving-risking switchover values governing their behavior. And just as occasionalism can have God's causation ground the arbitrary-seeming parameters in the laws of physics, so God's commands can ground the arbitrary-seeming parameters in ethics (the illuminating analogy between occasionalism and Divine Command is due to Mark Murphy). Can other theories rise to the challenge? Maybe. But in any case, it is a genuine challenge.
It would be particularly interesting if there were an analogue to the fine-tuning argument in this case. The fine-tuning argument arises because in some sense "most" of the possible combinations of values of parameters in the laws of physics do not allow for life, or at least for robust, long-lasting and interesting life. I wonder if there isn't a similar argument on the ethics side, say that for "most" of the possible combinations of parameters, we aren't going to have the good moral communities (the good could be prior to the moral, so there may be no circularity in the evaluation)? I don't know. But this would be an interesting research project for a graduate student to think about.
Objection: The switchover points are vague.
Response: I didn't say they weren't. The puzzle is present either way. Vagueness doesn't remove arbitrariness. With a sharp switchover point, just the value of it is arbitrary. But with a vague switchover point, we have a vagueness profile: here something is definitely vaguely obligatory, here it is definitely vaguely vaguely obligatory, here it is vaguely vaguely vaguely obligatory, etc. In fact, vagueness may even multiply arbitrariness, in that there are a lot more degrees of freedom in a vagueness profile than in a single sharp value.
Monday, November 9, 2015
Start with the standard scenario: trolley speeding towards five innocent strangers, and you can flip a lever to redirect it to a side-track with only one innocent stranger. Here are four arguments each making it plausible that redirecting the trolley is right. [Unfortunately, as you can see from the comments, the first three arguments, at least, are very weak. - ARP]
1. Back and forth: Suppose there is just enough time to flip the lever to redirect and then flip it back--but no more time than that. Assuming one shouldn't redirect, there is nothing wrong with flipping the lever if one has a firm plan to flip it back immediately. After all, nobody is harmed by such a there-and-back movement. The action may seem flippant (pun not intended--I just can't think of a better term), but we could suppose that there is good reason for it (maybe it cures a terrible pain in your arm). But now suppose that you're half-way through this action. You've flipped the lever. The trolley is now speeding towards the one innocent. At this point it is clearly wrong for you to flip it back: everyone agrees that a trolley speeding towards one innocent stranger can't be redirected towards five. This seems paradoxical: the compound action would be permissible, but you'd be obligated to stop half way through. If redirecting the trolley is the right thing to do, we can block the paradox by saying that it's wrong to flip it there and back, because it is your duty to flip it there.
2. Undoing. If you can undo a wrong action, getting everything back to the status quo ante, you probably should. So if it's wrong to flip the lever, then if you've flipped the lever, you probably should flip it back, to undo the action. But flipping it back is uncontroversially wrong. So, probably, flipping the lever isn't wrong.
3. Advice and prevention. Typically, it's permissible to dissuade people who are resolved on a wrong action. But if someone is set on flipping the lever, it's wrong to dissuade her. For once she is resolved on flipping the lever, it is the one person on the side-track who is set to die, and so dissuading the person from flipping the lever redirects death onto the five again. But it's clearly wrong to redirect death onto the five. So, probably, flipping the lever isn't wrong. Similarly, typically one should prevent wrongdoing. But to prevent the flipping of the lever is to redirect the danger onto the five, and that's wrong.
4. Advice and prevention (reversed). The trolley is speeding towards the side-track with one person, and you see someone about to redirect the trolley onto the main track with five persons. Clearly you should try to talk the person out of it. But talking her out of it redirects death from the five innocents to the one. Hence it's right to engage in such redirection. Similarly, it's clear that if you can physically prevent the person from redirecting the trolley onto the main track, you should. But that's redirection of danger from five to one.
Start with the standard trolley scenario: trolley is heading towards five innocent people but you can redirect it towards one. Suppose you think that it is wrong to redirect. Now add to the case the following: You're restrained in the control booth, and the button that redirects the trolley is very sensitive, so if you breathe a single breath over the next 20 seconds, the trolley will be redirected towards the one person.
To breathe or not to breathe, that is the question. If you breathe, you redirect. Suppose you hold your breath, thinking that redirecting is wrong. Why are you holding your breath, then? To keep the trolley away from the one person. But by holding your breath, you're also keeping the trolley on course towards the five. If in the original case it was wrong to redirect the trolley towards the one, why isn't it wrong to hold your breath so as to keep the trolley on course towards the five? So perhaps you need to breathe. But if you breathe, your breathing redirects the trolley, and you thought that was wrong.
I suppose the intuition behind not redirecting in the original case is a killing vs. letting die intuition: By redirecting, you kill the one. By not redirecting, you let the five die, but you don't kill them. However, when the redirection is controlled by the wonky button, things perhaps change. For perhaps holding one's breath is a positive action, and not just a refraining. So in the wonky button version, holding one's breath is killing, while breathing is letting die. So perhaps the person who thinks it's wrong to redirect in the original case can consistently say that in the breath case, it's obligatory to breathe and redirect.
But things aren't so simple. It's true that normally breathing is automatic, and that it is the holding of one's breath rather than the breathing that is a positive action. But if lives hung on it, you'd no doubt become extremely conscious of your breathing. So conscious, I suspect, that every breath would be a positive decision. So to breathe would then be a positive action. And so if redirecting in the original case is wrong, it's wrong to breathe in this case. Yet holding one's breath is generally a decision, too, a positive action. So now it's looking like in the breath-activated case, whatever happens, you do a positive action, and so you kill in both cases. It's better to kill one than to kill five, so you should breathe.
But this approach makes what is right and wrong depend too much on your habits. Suppose that you have been trained for rescue operations by a utilitarian organization, so that it became second nature to you to redirect trolleys towards the smaller number of people. But now you've come to realize that utilitarianism is false, and you haven't been convinced by the Double Effect arguments for redirecting trolleys. Still, your instincts remain. You see the trolley, and you have an instinct to redirect. You would have to stop yourself from it. But stopping yourself is a positive action, just as holding your breath is. So by stopping yourself, you'd be killing the five. And by letting yourself go, you'd be killing the one. So by the above reasoning, you should let yourself go. Yet, surely, whether you should redirect or not doesn't depend on which action is more ingrained in you.
Where is this heading? Well, I think it's a roundabout reductio ad absurdum of the idea that you shouldn't redirect. The view that you should redirect is much more stable under such tweaks. If, on the other hand, you say in the original case that you should redirect, then you can say the same thing about all the other cases.
I think the above line of thought should make one suspicious of other cases where people want to employ the distinction between killing and letting-die. (Perhaps instead one should employ Double Effect or the distinction between ordinary and extraordinary means of sustenance.)
Friday, November 6, 2015
In the standard trolley case, a runaway trolley is heading towards five innocent people, but can be redirected onto a side-track where there is only one innocent person. I will suppose that the redirection is permissible. This is hard to deny. If redirection here is impermissible, it's impermissible to mass-manufacture vaccines, since mass vaccinations redirect death from a larger number of potentially sick people to a handful of people who die of vaccine-related complications. But vaccinations are good, so redirection is permissible.
I will now suggest that it is difficult to be a pacifist if one agrees with what I just said.
Imagine a variant where the one person on the side-track isn't innocent at all. Indeed, she is the person who set the trolley in motion against the five innocents, and now she's sitting on the side-track, hoping that you'll be unwilling to get your hands dirty by redirecting the trolley at her. Surely the fact that she's a malefactor doesn't make it wrong to direct the trolley at the side-track she's on. So it is permissible to protect innocents by activity that is lethal to malefactors.
This conclusion should make a pacifist already a bit uncomfortable, but perhaps a pacifist can say that it is wrong to protect innocents by violence that is lethal to malefactors. I don't think this can be sustained. For protecting innocents by non-lethal violence is surely permissible. It would be absurd to say a woman can't pepper-spray a rapist. But now modify the trolley case a little more. The malefactor is holding a remote control for the track switch, and will not give it to you unless you violently extract it from her grasp. You also realize that when you violently extract the remote control from the malefactor, in the process of extracting it the button that switches the tracks will be pressed. Thus your violent extraction of the remote will redirect the trolley at the malefactor. Yet surely if it is permissible to do violence to the malefactor and it is permissible to redirect the trolley, it is permissible to redirect the trolley by violence done to the malefactor. But if you do that, you will do a violent action that is lethal to the malefactor.
So it is permissible to protect innocents by violence that is lethal to malefactors. Now, perhaps, it is contended that in the last trolley case, the death of the malefactor is incidental to the violence. But the same is true when one justifies lethal violence in self-defense by means of the Principle of Double Effect. For instance, one can hit an attacker with a club intending to stop the malefactor, with the malefactor's death being an unintended side-effect.
This means that if it is permissible to redirect the trolley, some lethal violence is permissible. What is left untouched, however, by this argument is a pacifism that says that it is always impermissible to intend a malefactor's death. I disagree with that pacifism, too, but this argument doesn't affect it.
Thursday, November 5, 2015
To pervert a social practice is to engage in it while subverting a defining end. Sports and other games are social practices one of whose internal ends is score (which generalizes victory). To throw a match is, thus, a form of perversion: it subverts the defining end of score.
Interestingly, cheating is actually a case of throwing a match. To score one must follow the rules. Thus cheaters don't win: they at most appear to. Their cheating subverts score, a defining end of the game. Cheating is, thus, a form of perversion.
The difference between the cheat and the paradigm case of throwing a match is that the cheat seeks to make it look like he did well at the game while the ordinary thrower of a match doesn't.
It might turn out to be like this: There is no significant difference between matter and antimatter, except insofar as they are related to one another. A proton is attracted to an antiproton, while each is repelled by its own kind. Our universe, as a contingent matter of fact, has more matter than antimatter. But, perhaps, if one swapped the matter and antimatter, the resulting universe wouldn't be different in any significant way. If this is true, we might say that there is a relational matter-antimatter essentialism. It is of great importance to matter and to antimatter that they are matter and antimatter, respectively, but it is important only because of the relation between the two, not because of intrinsic differences.
I don't know if it's like that with matter and antimatter, but I do know that it's like that with the Father, the Son and the Holy Spirit. The only important non-contingent differences are those constituted by the relationships between them. (There are also contingent extrinsic differences.)
Could it be like that with men and women? The special relation between men and women--say, that man is for woman and woman for man, or that one of each is needed for procreation--is essential and important to men and women. But there are no important non-contingent intrinsic differences on this theory.
There might, however, be important contingent theological differences due to some symmetry-breaking contingent event or events. Maybe, when the Logos became one human being, the Logos had to become either a man or a woman. If the relation between men and women is important, the decision whether to become a man or to become a woman might have been a kind of symmetry-breaking, with other differences in salvation history following on it. In itself, that decision could have been unimportant. If the Logos had become a woman, we would have a salvation history very much like ours, except now Sarah would have been asked to sacrifice a daughter, we would have had an all-female priesthood, and so on.
Or perhaps the symmetry-breaking came from the contingent structure of our sinfulness. Perhaps the contingent fact that men tended to oppress women more than the other way around made it appropriate for the Logos to become a man, so as to provide the more sorely needed example of a man becoming the servant of all and sacrificing himself for all, with the other differences following in turn.
I don't know if relational gender essentialism is the right picture. But it's a picture worth thinking about.
Wednesday, November 4, 2015
A perfect Bayesian agent is really quite simple. It has a database of probability assignments, a utility function, an input system and an output system. Inputs change the probability assignments according to simple rules. It computes which outputs maximize expected utility (either causally or evidentially--it won't matter for this post). And it does that (in cases of ties, it can take the lexicographically first option).
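The agent described above can be sketched in a few lines of Python. This is only an illustrative toy of my own devising, not any standard library: a finite set of states, conditionalization on a likelihood function, and expected-utility maximization with lexicographic tie-breaking.

```python
# A toy sketch of the perfect Bayesian agent described above.
# All names are illustrative; states, actions, and evidence are assumed finite.

class BayesianAgent:
    def __init__(self, prior, utility, actions):
        self.p = dict(prior)            # the "database": state -> probability
        self.utility = utility          # (action, state) -> real number
        self.actions = sorted(actions)  # sorted, so ties break lexicographically

    def observe(self, likelihood):
        # The input system: conditionalize on evidence by multiplying
        # each state's probability by P(evidence | state) and renormalizing.
        post = {s: pr * likelihood(s) for s, pr in self.p.items()}
        total = sum(post.values())
        self.p = {s: pr / total for s, pr in post.items()}

    def act(self):
        # The output system: pick the action maximizing expected utility;
        # max() returns the first maximizer, i.e. the lexicographically first.
        def eu(a):
            return sum(pr * self.utility(a, s) for s, pr in self.p.items())
        return max(self.actions, key=eu)
```

For instance, an agent with a 50/50 prior on rain that observes clouds (more likely given rain) updates toward rain and then takes whichever action maximizes expected utility under the posterior.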
In particular, there is no need for consciousness, freedom, reflection, moral constraints, etc. Moreover, apart perhaps from gerrymandered cases (Newcomb?), for maximizing expected utility of a fixed utility function, the perfect Bayesian agent is as good as one can hope to get.
So, if we are the product of entirely unguided evolution, why did we get consciousness and these other things that the perfect Bayesian agent doesn't need, rather than just a database of probability assignments, a utility function keyed to reproductive potential, and finely-honed input and output systems? Perhaps it is as some sort of compensation for our not being perfect Bayesian agents. There is an interesting research program available here: find out how these things we have compensate for the shortfalls, say, by allowing lossy compression of the database of probability assignments or providing heuristics in lieu of full optimizations. I think that some of the things the perfect Bayesian agent doesn't need can fit into these categories (some reflection and some moral constraints). But I doubt consciousness is on that list.
Consciousness, I think, points towards a very different utility function than one we would expect in an unguidedly evolutionarily produced system. Say, a utility function where contemplation is a highest good, and our everyday consciousness (and even that of animals) is a mirror of that contemplation.
Monday, November 2, 2015
Consider three cases of inappropriate pains:
- The deep sorrow of a morally culpable racist at social progress in racial integration.
- Someone's great pain at minor "first world problems" in their life.
- The deep sorrow of a parent who has been misinformed that their child died.
Let's say that full empathy involves feeling something similar to the pain that the person being empathized with feels. In the parent case, full empathy is the right reaction by a third party, even a third party who knows that the child had not died (but, say, is unable to communicate this to the parent). But in the racist and first-world-problem cases, full empathy is inappropriate. We should feel sorry for those who have the sorrows, but I think we don't need to "feel their pain", except in a remote way. Instead, what should be the object of our sorrow is the value system that gives rise to the pain, something in which the person takes no pain.
I think that in appropriate empathy, one feels something analogous to what the person one empathizes with feels. But the kind of analogy that exists is going to depend on the kind of pain that is involved. In particular, I think the following three cases will all involve different analogies: morally appropriate psychological pain; morally inappropriate psychological pain; physical pain. I suspect that "full empathy", where the analogy involves significant similarity, should only occur in the first of the three cases.
I am working away at Infinity, Causation and Paradox, and am about halfway to the first draft. As an experiment, I am putting all my in-progress materials on github. There is both TeX source and a periodically updated pdf file of the whole thing (click on "document.pdf" and choose "Raw").
To report bugs, i.e., philosophical errors, nonsequiturs, typos, etc., or to make improvement suggestions, click on "Issues" at the top of the github page (preferred), or just email me.
I will be committing new material to the repository after each day's work on the book. Click on the commit description to see what I've added. If you must quote from the manuscript, explicitly say that you are quoting from "an in-progress early draft".
Please bear in mind that this is super rough. A gradually growing first draft.
Please note that while you're permitted to download the material for your personal use only, you are not permitted to redistribute any material. The repository may disappear at any time (and in particular when the draft is ready for initial submission).
Thursday, October 29, 2015
The title is provocative, but the thesis is less provocative (and in essence well-known: Hawthorne's work on the deeply contingent a priori is relevant) once I spell out what I stipulatively mean by the terms. By evidential Bayesianism, I mean the view that evidence should only impact our credences by conditionalization. By evidentialism, I mean the view that high credence in contingent matters should not be had except by evidence (most evidentialists make a stronger claim). By weak fallibilism, I mean that sometimes a correctly functioning epistemic agent appropriately would have high credence on the basis of non-entailing evidence. These three theses cannot all be true.
For suppose that they are all true, and I am a correctly functioning epistemic agent who has appropriate high credence in a contingent matter H, and yet my total evidence E does not entail H. By evidentialism, my credence comes from the evidence. By evidential Bayesianism, if P measures my prior probabilities, then P(H|E) is high. But it is a theorem that P(H|E) is less than or equal to P(E→H), where the arrow is a material conditional. So the prior probability of E→H is high. This conditional is not necessary as E does not entail H. Hence, I have high prior credence in a contingent matter. Prior probabilities are by definition independent of my total evidence. So evidentialism is violated.
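The theorem appealed to here is easy to verify: P(E→H) = 1 − P(E)(1 − P(H|E)) ≥ 1 − (1 − P(H|E)) = P(H|E), since P(E) ≤ 1. As an illustrative sketch (the code is mine, just a numerical spot-check), one can confirm the inequality over random probability assignments to the four atoms E&H, E&~H, ~E&H, ~E&~H:

```python
# Check numerically that P(H|E) <= P(E -> H), where E -> H is the material
# conditional, so P(E -> H) = P(~E or H) = P(~E&H) + P(~E&~H) + P(E&H).
import random

random.seed(0)
for _ in range(1000):
    # Random probability assignment over the four atoms.
    w = [random.random() + 1e-9 for _ in range(4)]
    total = sum(w)
    p_eh, p_enh, p_neh, p_nenh = (x / total for x in w)
    p_h_given_e = p_eh / (p_eh + p_enh)
    p_material = p_neh + p_nenh + p_eh
    assert p_h_given_e <= p_material + 1e-12
```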
Tuesday, October 27, 2015
It has been argued that if we are the product of unguided evolution, we would not expect our moral sense to get the moral facts right. I think there is a lot to those arguments, but let's suppose that they fail, so that there really is a good evolutionary story about how we would get a reliable moral sense.
There is, nonetheless, still a serious problem for the common method of cases as used in analytic moral philosophy. Even when a reliable process is properly functioning, its reliability and proper function only yield the expectation of correct results in normal cases. A process can be reliable and properly functioning and still quite unreliable in edge cases. Consider, for instance, the myriad of illusions that our visual system is prone to even when properly functioning. And yet our visual system is reliable.
This wouldn't matter much if ethical inquiry restricted itself to considering normal cases. But often ethical inquiry proceeds by thinking through hypothetical cases. These cases are carefully crafted to separate one relevant feature from others, and this crafting makes the cases abnormal. For instance, when arguing against utilitarianism, one considers such cases as that of the transplant doctor who is able to murder a patient and use her organs to save three others, and we carefully craft the case to rule out the normal utilitarian arguments against this action: nobody can find out about the murder, the doctor's moral sensibilities are not damaged by this, etc. But we know from how visual illusions work that often a reliable cognitive system concludes by heuristics rather than algorithms designed to function robustly in edge cases as well.
Now one traditional guiding principle in ethical inquiry, at least since Aristotle, has been to put a special weight on the opinions of the virtuous. However, while an agent's being virtuous may guarantee that her moral sense is properly functioning--that there is no malfunction--typical cognitive systems will give wrong answers in edge cases even when properly functioning. The heuristics embodied in the visual system that give rise to visual illusions are a part of the system's proper functioning: they enable the system to use fewer resources and respond faster in the more typical cases.
We now see that there is a serious problem for the method of cases in ethics, even if the moral sense is reliable and properly functioning. Even if we have good reason to think that the moral sense evolved to get moral facts right, we should not expect it to get edge case facts right. In fact, we would expect systematic error in edge cases, even among the truly virtuous. At most, we would expect evolution to impose a safety feature which ensures that failure in edge cases isn't too catastrophic (e.g., so that someone who is presented with a very weird case doesn't conclude that the right solution is to burn down her village).
Yet it may not be possible to do ethics successfully without the method of cases, including far-out cases, especially now that medical science is on the verge of making some of these cases no longer be hypothetical.
I think there are two solutions that let one keep the method of cases. The first is to say that we are not the product of unguided evolution, but that we are designed to have consciences that, when properly functioning (as they are in the truly virtuous), are good guides not just in typical cases but in all the vicissitudes of life, including those arising from future technological progress. This might still place limits on the method of cases, but the limits will be more modest. The second is to say that our moral judgments are at least partly grounded in facts about what our moral judgment would say were it properly functioning--this is a kind of natural law approach. (Of course, if one drops the "properly functioning" qualifier, we get relativism.)
Monday, October 26, 2015
I was thinking about the method of cases in ethics, and it made me think of what we do when we apply the method as a reverse engineering of conscience. Reverse engineering of software has been one of the most fun things in my life. When I reverse engineer software, in order to figure out what the software does (e.g., how it stores data in an undocumented file format), I typically employ anywhere between one and three of the following methods:
- Observe the outputs in the ordinary course of operation.
- Observe the outputs given carefully crafted inputs.
- Look under the hood: disassemble the software, trace through the execution, do experiments with modifying the software, etc.
But now suppose that this all works, that we really do succeed in reverse engineering conscience, and find out by what principles a properly functioning conscience decides whether an action is right or wrong. Why think this gives us anything of ethical interest? If we have a divine command theory, we have a nice answer: The same being whose commands constitute rightness and wrongness made that conscience, and it is plausible to think that he made it in order to communicate his commands to us. Perhaps more generally theistic theories other than divine command can give us a good answer, in that the faculty of conscience is designed by a being who cares immensely about right behavior. Likewise, if we have a natural law theory, we also have a nice answer: The faculty of conscience is part of our nature, and our nature defines what is right and wrong for us.
But what if conscience is simply the product of unguided evolution? Then by reverse engineering conscience we would not expect to find out anything other than facts about what kinds of behavior-guiding algorithms help us to pass on our genes.
So if all we do in the method of cases is this kind of reverse engineering, then outside of a theistic or natural law context we really should eschew use of the method in ethics.
I've been thinking a bit about one of the key issues of the recent Synod on the Family, whether Catholics who have divorced and attempted remarriage without an annulment should be allowed to receive communion. As I understand the disagreement (I found this quite helpful), it's not really about the nature of marriage.
The basic case to think about is this:
Jack believes himself to be married to Jill, and publicly lives with her as husband and wife. But the Church knows, although Jack does not, that Jack is either unmarried or married to Suzy.

Should Jack be allowed to receive communion? After all, Jack is committing adultery (if he is actually married to Suzy) or fornication (if he's not actually married to Suzy) with Jill, and that's public wrongdoing. However, Jack is deluded into thinking that he's actually married to Jill. So Jack isn't aware that he's committing adultery or fornication. Jack may or may not be innocent in his delusion. If he is innocent in his delusion, then he is not culpably sinning ("formally sinning", as we Catholics say) in his adultery or fornication.
This is a hard question. On the one hand, given the spiritual benefits of the Eucharist, the Church should strive to avoid denying communion to an innocent person, and Jack might be innocent. On the other hand, letting Jack receive communion reinforces his delusion of being married to Jill, making him think that all is well with this aspect of his life, and committing adultery and fornication is good neither for Jack nor for Jill, even if they are ignorant of the fact that their relationship is adulterous or fornicatory.
One thing should be clear: this is not a clear case. There really are serious considerations in both directions, considerations fully faithful to the teaching of Scripture and Tradition that adultery and fornication are gravely wrong and that one should not receive communion when one is guilty of grave wrong.
One may think that the above way of spinning the case is not a fair reflection of real-world divorce and remarriage cases. What I said above makes it sound like Jack has hallucinated a wedding with Jill and may have amnesia about a wedding with Suzy. And indeed it is a difficult and far from clear pastoral question what to do with congregants who are suffering from hallucinations and amnesia. But in the real-life cases under debate, Jack really does remember exchanging vows with Suzy, and yet he has later exchanged other vows, in a non-Catholic ceremony, with Jill. Moreover, Jack knows that the Church teaches things that imply that he isn't really married to Jill. Does this make the basic case clear?
Well, to fill out the case, we also need to add the further information that the culture, at both the popular and elite levels, is telling Jack that he is married to Jill. And Jack thinks that the Church is wrong and the culture is right. I doubt we can draw a bright line between cases of mental aberration and those of being misled by the opinions of others. We are social animals, after all. (If "everyone" were to tell me that I never had a PhD thesis defense, I would start doubting my memories.)
At the same time, the cultural point plays in two opposite ways. On the one hand, it makes it more likely that Jack's ignorance is not culpable. On the other hand, it makes it imperative--not just for Jack and Jill's sake, but now also for the sake of many others--not to act in ways that reinforce the ignorance and delusion. Moreover, the issue for Jack's spiritual health isn't just about his relationship with Jill. If Jack puts more trust in the culture than in Catholic teaching, Jack has other problems, and may need a serious jolt. But even that's not clear: that jolt might push him even further away from where he should be.
So I don't really know what the Church should do, and I hope the Holy Spirit will guide Pope Francis to act wisely.
In any case, I think my point stands that this isn't really about the nature of marriage. One can have complete agreement that adultery and fornication are wrong and that Jack isn't married to Jill, without it being clear what to do.
Friday, October 23, 2015
I have a strong theoretical commitment to:
1. To feel pain is to perceive something as if it were bad.
2. Veridical perception is non-instrumentally good.
3. Particularly intense physical pain is always non-instrumentally bad.
But today I realized that there is no real contradiction between (1), (2) and (3). Rather than deriving a contradiction from (1)-(3), what we should conclude is:
4. No instance of particularly intense physical pain is veridical.
One difficulty is that the plausibility of my position depends on how one understands "particularly intense". If one has a high enough standard for that, then (4) is plausible, but it also becomes plausible that pains that just fall short of the standard still are non-instrumentally bad. If one has a lower standard for "particularly intense", then (4) becomes less plausible. I am hoping that there is a sweet spot (well, actually, a miserable spot!) where the position works.
Thursday, October 22, 2015
There is nothing essentially new here, but it is a particularly vivid way to put an observation by Paul Bartha.
You are going to receive a sequence of a hundred tickets from a countably infinite fair lottery. When you get the first ticket, you will be nearly certain (your probability will be 1 or 1 minus an infinitesimal) that the next ticket will have a bigger number. When you get the second, you will be nearly certain that the third will be bigger than it. And so on. Thus, throughout the sequence you will be nearly certain that the next ticket will be bigger.
But surely at some point you will be wrong. After all, it's incredibly unlikely that a hundred tickets from a lottery will be sorted in ascending order. To make the point clear, suppose that the way the sequence of tickets is picked is as follows. First, a hundred tickets are picked via a countably infinite fair lottery, either the same lottery, in which case they are guaranteed to be different, or independent lotteries, in which case they are nearly certain to be all different. Then the hundred tickets are shuffled, and you're given them one by one. Nonetheless, the above argument is unaffected by the shuffling: at each point you will be nearly certain that the next ticket you get will have a bigger number, there being only finitely many options for that to fail and infinitely many for it to succeed, and with all the options being equally likely.
Yet if you take a hundred numbers and shuffle them, it's extremely unlikely that they will be in ascending order. So you will be nearly certain of something, and yet very likely wrong in a number of the cases. And even while you are nearly certain of it, you will be able to go through this argument, see that in many of the judgments that the next number is bigger you will be wrong, and yet this won't affect your near certainty that the next number is bigger.
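The point about shuffling can be checked with a finite simulation (my own illustrative code; the infinite lottery itself of course cannot be simulated): among a hundred shuffled distinct numbers, each adjacent pair is ascending with probability 1/2 by symmetry, so about half of the "next is bigger" judgments come out wrong.

```python
# Shuffle a hundred distinct "ticket numbers" many times and count how
# often the next number in the sequence is bigger than the current one.
import random

random.seed(1)
trials = 2000
ascents = comparisons = 0
for _ in range(trials):
    seq = random.sample(range(10**9), 100)  # 100 distinct numbers in shuffled order
    for a, b in zip(seq, seq[1:]):
        comparisons += 1
        ascents += (b > a)

rate = ascents / comparisons  # close to 0.5, not close to 1
```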
Intuitively, imposing a game of Russian roulette on an innocent victim is constitutive of twice as much moral depravity when there are two bullets in the six-shooter as when there is only one. If so, then a one-bullet game of Russian roulette will carry about a sixth of the moral depravity of a six-bullet game, and hence about a sixth of the depravity of plain murder.
I am not so sure, though. The person imposing the game of Russian roulette is, I shall suppose, intending a conditional:

1. If the bullet ends up in the barrel, the victim will die.

Compare a mob boss who tells a henchman:

2. If you can't pay the mayor off, get rid of him.

Issuing order (2) seems gravely morally depraved, and equally depraved regardless of how likely it is that the mayor can't be bought.
Perhaps, though, this judgment about the moral depravity of issuing order (2) is based on the thought that the kind of person who issues this order doesn't care much whether the probability of the mayor's integrity (i.e., of his refusing to be bought) is 0.001 or 0.1 or 1. But the person who intends (1) may well care about the probability that the bullet ends up in the barrel. So perhaps the mob boss response doesn't quite do the job.
Here's another thought. It is gravely wrong to play Russian roulette with a single bullet in a revolver with six thousand chambers. It doesn't seem that the moral depravity of this is a thousandth of the moral depravity of "standard" Russian roulette. And it sure doesn't sound like the moral depravity goes down by a factor of ten as the number of chambers goes up by a factor of ten.
Here, then, is an alternate suggestion. The person playing Russian roulette, like the mob boss, sets her heart on the death of an innocent person under certain circumstances. This setting of one's heart on someone's death is constitutive of a grave moral depravity, regardless of how likely the circumstances are. It could even be that this is wrong even when I know the circumstances won't obtain. For instance, it would be morally depraved to set one's heart on killing the Tooth Fairy if she turns out to exist, even when one knows that she doesn't exist. There is then an additional dollop of depravity proportional to the subjective probability that the circumstances obtain. That additional dollop comes from the risk one takes that someone will die and the risk one takes that one will become an actual murderer. As a result, very roughly (in the end, the numerical evaluations are very much a toy model), the moral depravity in willing a conditional like (1) or (2) is something like:
- A + pB, where A is the fixed depravity of setting one's heart on an innocent's death under the circumstances, p is the subjective probability that the circumstances obtain, and B scales the additional risk-based depravity.
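As a sanity check on this toy model, here is a small illustrative computation (the particular values of A and B are made up for illustration): with depravity A + pB, dividing the probability by a thousand barely reduces the total, because the fixed term A dominates.

```python
# Toy model: moral depravity of conditionally willing a death = A + p*B,
# where A is the fixed depravity of setting one's heart on the death,
# p the subjective probability the condition obtains, and B the scale of
# the additional risk-based depravity. A and B are arbitrary made-up values.
def depravity(bullets, chambers, A=10.0, B=2.0):
    p = bullets / chambers
    return A + p * B

standard = depravity(1, 6)     # standard Russian roulette
huge = depravity(1, 6000)      # six-thousand-chamber revolver: barely less depraved
```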
Wednesday, October 21, 2015
Works of art are designed to be observed through a particular perceptual apparatus deployed in a particular way. A music CD may be shiny and pretty to the eye, but this is orthogonal to the relevant aesthetic qualities, which are meant to be experienced through the ear. A beautiful painting made for a trichromat human would be apt to look ugly to people with the five color pigments (including ultraviolet!) of a pigeon. A sculpture is meant to be observed with visible light, rather than x-rays, and a specific set of points of view are intended--for instance, most sculptures are meant to be looked at from the outside rather than the inside (the inside of a beautiful statue can be ugly). So when we evaluate the aesthetic qualities of a work of art, we evaluate a pair: "the object itself" and the set of intended deployments of perception. But "perception" here must be understood broadly enough to include language processing. The same sequence of sounds can be nonsense in one language, an exquisite metaphor in another, and trite in a third. And once we include language processing, it's hard to see where to stop in the degree of cognitive processing to be specified in the set of deployments of perception (think, for instance, about the background knowledge needed to appreciate many works).
Furthermore, for every physical object, there is a possible deployment of a possible perceptual apparatus that decodes the object into something with the structure of the Mona Lisa or of War and Peace. We already pretty much have the technology to make smart goggles that turn water bottles in the visual field into copies of Michelangelo's David, and someone could make sculptures designed to be seen only through those goggles. (Indeed, the first exhibit could just be a single water bottle.) And if one insists that art must be viewable without mechanical aids--an implausible restriction--one could in principle genetically engineer a human who sees in such a way.
Thus any object could be beautiful, sublime or ugly, when paired with the right set of deployments of perceptual apparatus, including of cognitive faculties. This sounds very subjectivistic, but it's not. For the story is quite compatible with there being a non-trivial objective fact about which pairs of object and set of perceptual deployments exhibit which aesthetic qualities.
Still, the story does make for trivialization. I could draw a scribble on the board and then specify: "This scribble must be seen through a perceptual deployment that makes it into an intricate work of beautiful symmetry." On the above view, I will have created a beautiful work of art relative to the intended perceptual deployments. But I will have outsourced all of the creative burden onto the viewer who will need to, say, design distorting lenses that give rise to a beautiful symmetry when trained on the scribble. That's like marketing a pair of chopsticks as a device that is guaranteed to rid one's home of mosquitoes if the directions are followed, where the directions say: "Catch mosquito with chopsticks, squish, repeat until done." One just isn't being helpful.