Monday, October 31, 2022

Transsubstantiation and magnets

On Thomistic accounts of transsubstantiation, the accidents of bread and wine continue to exist even when the substance no longer does (having been turned into the substance of Christ’s body and blood). This seems problematic.

Here is an analogy that occurred to me. Consider a magnet. It’s not crazy to think of the magnet’s magnetic field as an accident of the magnet. But the magnetic field extends spatially beyond the magnet. Thus, it exists in places where the magnet does not.

Now, according to four-dimensionalism, time is rather like space. If so, then an accident existing when its substance does not is rather like an accident existing where its substance does not. Hence to the four-dimensionalist, the magnet analogy should be quite helpful.

Actually, if we throw relativity into the mix, then we can get an even closer analogy, assuming still that a magnet’s field is an accident of the magnet. Imagine that the magnet is annihilated. The magnetic field disappears, but gradually, starting near the magnet, because all effects propagate at most at the speed of light. Thus, even when the magnet is destroyed, for a short period its magnetic field still exists.

That said, I don’t know if the magnet’s field is an accident of it. (Rob Koons in conversation suggested it might be.) But it’s comprehensible to think of it as such, and hence the analogy makes Thomistic transsubstantiation comprehensible, I think.

Friday, October 28, 2022

Does our ignorance always grow when we learn?

Here is an odd thesis:

  1. Whenever you gain a true belief, you gain a false belief.

This follows from:

  2. Whenever you gain a belief, you gain a false belief.

The argument for (2) is:

  3. You always have at least one false belief.

  4. You believe a conjunction if and only if you believe the conjuncts.

  5. Suppose you just gained a belief p.

  6. There is now some false belief q that you have. (By (3))

  7. Before you gained the belief p you didn’t believe the conjunction of p and q. (By (4))

  8. So, you just gained the belief in the conjunction of p and q. (By (4)–(7))

  9. The conjunction of p and q is false. (By (6))

  10. So, you just gained a false belief. (By (8) and (9))

I am not sure I accept (4), though.

“Accuracy, probabilism and Bayesian update in infinite domains”

The paper has just come out online in Synthese.

Abstract: Scoring rules measure the accuracy or epistemic utility of a credence assignment. A significant literature uses plausible conditions on scoring rules on finite sample spaces to argue for both probabilism—the doctrine that credences ought to satisfy the axioms of probability—and for the optimality of Bayesian update as a response to evidence. I prove a number of formal results regarding scoring rules on infinite sample spaces that impact the extension of these arguments to infinite sample spaces. A common condition in the arguments for probabilism and Bayesian update is strict propriety: that according to each probabilistic credence, the expected accuracy of any other credence is worse. Much of the discussion needs to divide depending on whether we require finite or countable additivity of our probabilities. I show that in a number of natural infinite finitely additive cases, there simply do not exist strictly proper scoring rules, and the prospects for arguments for probabilism and Bayesian update are limited. In many natural infinite countably additive cases, on the other hand, there do exist strictly proper scoring rules that are continuous on the probabilities, and which support arguments for Bayesian update, but which do not support arguments for probabilism. There may be more hope for accuracy-based arguments if we drop the assumption that scores are extended-real-valued. I sketch a framework for scoring rules whose values are nets of extended reals, and show the existence of a strictly proper net-valued scoring rule in all infinite cases, both for f.a. and c.a. probabilities. These can be used in an argument for Bayesian update, but it is not at present known what is to be said about probabilism in this case.
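To give a concrete sense of strict propriety in the simplest finite case (a standard illustration, not an excerpt from the paper): under the Brier score, each probability uniquely expects itself to be the most accurate credence.

```python
# A standard finite-case illustration of strict propriety (not from the paper):
# under the Brier (squared-error) score, the p-expected inaccuracy of a credence
# c for a binary event is uniquely minimized at c = p.
def brier_inaccuracy(credence, outcome):
    return (credence - outcome) ** 2

def expected_inaccuracy(p, credence):
    return p * brier_inaccuracy(credence, 1) + (1 - p) * brier_inaccuracy(credence, 0)

p = 0.3
candidates = [i / 100 for i in range(101)]
best = min(candidates, key=lambda c: expected_inaccuracy(p, c))
print(best)  # 0.3: the honest credence minimizes expected inaccuracy
```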

Choices on a spectrum

My usual story about how to reconcile libertarianism with the Principle of Sufficient Reason is that when we choose, we choose on the basis of incommensurable reasons, some of which favor the choice we made and others of which favor other choices. Moreover, this is a kind of contrastive explanation.

This story, though it has some difficulties, is designed for choices between options that promote significantly different goods—say, whether to read a book or go for a walk or write a paper.

But a different kind of situation comes up for choices of a point on a spectrum. For instance, suppose I am deciding how much homework to assign, how hard a question to ask on an exam, or how long a walk to go for. What is going on there?

Well, here is a model that applies to a number of cases. There are two incommensurable goods, one better served as one goes in one direction on the spectrum and the other better served as one goes in the other direction. Let’s say that we can quantify the spectrum as one from less to more with respect to some quantity Q (amount of homework, difficulty of a question, or length of a walk), and good A is promoted by less of Q and incommensurable good B is promoted by more of Q. For instance, with homework, A is the student’s having time for other classes and for non-academic pursuits and B is the student’s learning more about the subject at hand. With exam difficulty, A may be avoiding frustration and B is giving a worthy challenge. With a walk, A is reducing fatigue and B is increasing health benefits. (Note that the claim that A is promoted by less Q and B is promoted by more Q may only be correct within a certain range of Q. A walk that is too long leads to injury rather than health.)

So, now, suppose we choose Q = Q1. Why did we choose that? It is odd to say that we chose Q1 on account of goods A and B that are opposed to each other—that sounds inconsistent.

Here is one suggestion. Take the choice to make Q equal to Q1 to be the conjunction of two (implicit?) choices:

  a. Make Q at most Q1.

  b. Make Q at least Q1.

Now, we can explain choice (a) in terms of (a) serving good A better than the alternative, which would be to make Q be bigger than Q1. And we can explain (b) in terms of (b) serving good B better than the alternative of making Q be smaller.

Here is a variant suggestion. Partition the set of alternative options into two ranges: R1, consisting of options where Q < Q1, and R2, consisting of options where Q > Q1. Why did I choose Q = Q1? Well, I chose Q1 over all the options in R1 because Q1 better promotes B than anything in R1, and I chose Q1 over all the options in R2 because Q1 better promotes A than anything in R2.

On both approaches, the apparent inconsistency of citing opposed goods disappears because they are cited to explain different contrasts.

Note that nothing in the above explanatory stories requires any commitment to there being some sort of third good, a good of balance or compromise between A and B. There is no commitment to Q1 being the best way to position Q.

Simplicity and gravity

I like to illustrate the evidential force of simplicity by noting that for about two hundred years people justifiably believed that the force of gravity was Gm1m2/r^2 even though Gm1m2/r^2 + ϵ would have fit the observational data better for a small enough but non-zero ϵ. A minor point about this struck me yesterday. There is doubtless some p ≠ 2 such that Gm1m2/r^p would have fit the observational data better. For in general when you make sufficiently high precision measurements, you never find exactly the correct value. So if someone bothered to collate all the observational data and figure out exactly which p is the best fit (e.g., which one is exactly in the middle of the normal distribution that best fits all the observations), the chance that that number would be 2 up to the requisite number of significant figures would be vanishingly small, even if in fact the true value is p = 2. So simplicity is not merely a tie-breaker.
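Here is a toy simulation of the point (my own made-up data and noise level, not any real measurements): generate data from an exact inverse-square law, add a little measurement noise, and fit the exponent. The best-fit exponent comes out close to 2, but essentially never exactly 2.

```python
# Toy illustration: fit the exponent p in F = k/r^p to noisy inverse-square data.
import math
import random

random.seed(1)
k = 1.0                                     # lump G*m1*m2 into one constant
radii = [1.0 + 0.5 * i for i in range(40)]
forces = [k / r ** 2 * (1 + random.gauss(0, 0.001)) for r in radii]  # 0.1% noise

# Least-squares fit of log F = log k - p * log r
xs = [math.log(r) for r in radii]
ys = [math.log(f) for f in forces]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
print(-slope)  # close to 2 (e.g. 1.9998 or 2.0003), but not exactly 2
```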

Note that our preference for simplicity here is actually infinite. For if we were to collate the data, there would not just be one real number that fits the data better than 2 does, but a whole range J of real numbers that fit the data better than 2. And J contains uncountably many real numbers. Yet we rightly think that the hypothesis that the exponent is 2 is more likely than the hypothesis that the true exponent is somewhere in J, so 2 must be infinitely more likely than most of the numbers in J.

Bayesian reasoning isn't our duty

Ought implies can. Most people can’t do Bayesian reasoning correctly. So Bayesian reasoning is not how they ought to reason. In particular, a reduction of the epistemic ought to the kinds of probability functions that are involved in Bayesian reasoning fails.

I suppose the main worry with this argument is that perhaps only an ought governing voluntary activity implies can. But the epistemic life is in large part involuntary. An eye ought to transmit visual information, but some eyes cannot—and that is not a problem because seeing is involuntary.

However, it is implausible to think that we humans ought to do something that nobody has been able to do until recently and even now only a few can do, and only in limited cases, even if the something is involuntary.

If Bayesian reasoning isn’t how we ought to reason, what’s the point of it? I am inclined to think it is a useful tool for figuring out the truth in those particular cases to which it is well suited. There are different tools for reasoning in different situations.
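For instance, here is a minimal sketch of the kind of well-suited case in question (the numbers are made up): a single diagnostic-style update where the arithmetic, rather than intuition, does the work.

```python
# A made-up diagnostic-test case where Bayesian updating is the right tool.
prior = 0.01            # prior probability of the hypothesis
sensitivity = 0.95      # P(evidence | hypothesis)
false_positive = 0.05   # P(evidence | no hypothesis)

posterior = (sensitivity * prior) / (sensitivity * prior + false_positive * (1 - prior))
print(round(posterior, 3))  # about 0.161, far lower than untutored intuition tends to say
```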

Thursday, October 27, 2022

Probabilistic trolleys

Suppose a trolley is heading towards five people, and you can redirect it towards one. But the trolley needs to go up a hill before it can roll down it to hit the five people, and your best estimate of its probability of making it up the hill is 1/4. On the other hand, if you redirect it, it’s a straight path to the one person, who is certain to be killed. Do you redirect? Expected utilities: −1.25 lives for not redirecting and −1 life for redirecting.

Or suppose you are driving a fire truck to a place where five people are about to die in a fire, and you know that you have a 1/4 chance of putting out the fire and saving them if you get there in time. Moreover, there is a person sleeping on the only road to the fire, and if you stop to remove the person from the road, it will be too late for the five. Do you brake? Expected utilities: −5 lives for braking and −1 − 3.75 = −4.75 lives for continuing to the fire and running over the person on the road.
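Spelling out the arithmetic behind those expected utilities (nothing here beyond the numbers already given):

```python
p_uphill = 0.25                      # chance the trolley makes it up the hill

ev_not_redirect = -(p_uphill * 5)    # -1.25 expected deaths
ev_redirect = -1                     # the one person certainly dies
print(ev_not_redirect, ev_redirect)

p_save = 0.25                        # chance of putting out the fire in time
ev_brake = -5                        # the five die for sure
ev_continue = -1 - (1 - p_save) * 5  # -1 - 3.75 = -4.75
print(ev_brake, ev_continue)
```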

I think you shouldn’t redirect and you should brake. There is something morally obnoxious about certainly causing death for a highly uncertain benefit when the expected values are close. This complicates the proportionality condition in the Principle of Double Effect even more, and provides further evidence against expected-value utilitarianism.

Wednesday, October 26, 2022

The Law of Large Numbers and infinite run payoffs

In discussions of maximization of expected value, the Law of Large Numbers is sometimes invoked, at times—especially by me—off-handedly. According to the Strong Law of Large Numbers (SLLN), if you have an infinite sequence of independent random variables X1, X2, ... satisfying some conditions (e.g., in the Kolmogorov version, ∑n σn^2/n^2 < ∞, where σn^2 is the variance of Xn), then with probability one, the average of the random variables converges to the average of the mathematical expectations of the random variables. The thought is that in that case, if the expectation of each Xn is positive, it is rationally required to accept the bet represented by Xn.

In a recent post, I showed how in some cases where the conditions of the Strong Law of Large Numbers are not met, in an infinite run it can be disastrous to bet in each case according to expected value.

Here I want to make a minor observation. The fact that the SLLN applies to some sequence of independent random variables is itself not sufficient to make it rational to bet in each case according to the expectations in an infinite run. Let Xn be 2^n/n with probability 1/2^n and −1/(2n) with probability 1 − 1/2^n. Then

  • EXn = (1/2^n)(2^n/n) − (1/(2n))(1 − 1/2^n) = (1/n)(1 − (1/2)(1 − 1/2^n)).

Clearly EXn > 0. So in individual decisions based on expected value, each Xn will be a required bet.

Now, just as in my previous post, almost surely (i.e., with probability one) only finitely many of the bets Xn will have the positive payoff. Thus, with a finite number of exceptions, our sequence of payoffs will be the sequence −1/2, −1/4, −1/6, −1/8, .... Therefore, almost surely, the average of the first n payoffs converges to zero. Moreover, the average of the first n mathematical expectations converges to zero. Hence the variables X1, X2, ... satisfy the Strong Law of Large Numbers. But what is the infinite run payoff of accepting all the bets? Well, given that almost surely there are only a finite number of n such that the payoff of bet n is not of the form −1/(2n), it follows that almost surely the infinite run payoff differs by a finite amount from −1/2 − 1/4 − 1/6 − 1/8 − ... = −∞. Thus the infinite run payoff is negative infinity, a disaster.

Hence even when the SLLN applies, we can have cases where almost surely there are only finitely many positive payments, infinitely many negative ones, and the negative ones add up to  − ∞.
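Here is a quick Monte Carlo sketch of the example (my own code, purely for illustration): each Xn pays 2^n/n with probability 1/2^n and −1/(2n) otherwise, and the running total of payoffs drifts downward without bound even though every E[Xn] is positive.

```python
import random

def run(days, seed):
    rng = random.Random(seed)
    total = 0.0
    for n in range(1, days + 1):
        if rng.random() < 2.0 ** -n:   # win 2^n/n with probability 2^-n
            total += 2.0 ** n / n
        else:                          # otherwise lose 1/(2n)
            total -= 1.0 / (2 * n)
    return total

print([round(run(100_000, s), 2) for s in range(3)])
# Each run comes out to roughly -(1/2)(ln 100000 + 0.58) ≈ -6, plus whatever was
# won in the first few days; the totals keep drifting down as the horizon grows.
```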

In the above example, while the variables satisfy the SLLN, they do not satisfy the conditions for the Kolmogorov version of the SLLN: the variances grow exponentially. It is somewhat interesting to ask if the variance condition in the Kolmogorov Law is enough to prevent this pathology. It’s not. Generalize my example by supposing that a1, a2, ... is a sequence of numbers strictly between 0 and 1 with finite sum. Let Xn be 1/(n·an) with probability an and −1/(2n) with probability 1 − an. As before, the expected value is positive, and by Borel-Cantelli (given that the sum of the an is finite) almost surely the payoffs are −1/(2n) with finitely many exceptions, and hence there is a finite positive payoff and an infinite negative one in the infinite run.

But the variance σn^2 is less than an/(n·an)^2 + 1 = 1/(n^2·an) + 1. If we let an = 1/n^2 (the sum of these is finite), then each variance is at most 2, and so the conditions of the Kolmogorov version of the SLLN are satisfied.

In an earlier post, I suggested that perhaps the Central Limit Theorem (CLT) rather than the Law of Large Numbers is what one should use to justify betting according to expected utilities. If the variables X1, X2, ... satisfy the conditions of the CLT, and have non-negative expectations, then P(X1+...+Xn≥0) will eventually exceed any number less than 1/2. In particular, we won’t have the kind of disastrous situation where the overall payoffs almost surely go negative, and so no example like my above one can satisfy the conditions of the CLT.

Tuesday, October 25, 2022

Learning from what you know to be false

Here’s an odd phenomenon. Someone tells you something. You know it’s false, but their telling it to you raises the probability of it.

For instance, suppose at the beginning of a science class you are teaching your students about significant figures, and you ask a student to tell you the mass of a textbook in kilograms. They put it on a scale calibrated in pounds, look up on the internet that a pound is exactly 0.45359237 kg, and report that the mass of the object is 1.496854821 kg.

Now, you know that the classroom scale is not accurate to ten significant figures. The chance that the student’s measurement was right to ten significant figures is tiny. You know that the student’s statement is wrong, assuming that it is in fact wrong.

Nonetheless, even though you know the statement is wrong, it raises the probability that the textbook’s mass is 1.496854821 kg (to ten significant figures). For while most of the digits are garbage, the first couple are likely close. Before you heard the student’s statement, you might have estimated the mass as somewhere between one and two kilograms. Now you estimate it as between 1.45 and 1.55 kg, say. That raises the probability that in fact, up to ten significant figures, the mass is 1.496854821 kg by about a factor of ten.

So, you know that what the student says is false, but your credence in the content has just gone up by a factor of ten.
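As a back-of-the-envelope check on the factor of ten (my own rough uniform-density model of the credences):

```python
prior_width = 2.0 - 1.0        # before: mass somewhere between 1 and 2 kg
posterior_width = 1.55 - 1.45  # after: roughly between 1.45 and 1.55 kg
band = 1e-9                    # masses within a band about this wide round to 1.496854821 kg

p_before = band / prior_width
p_after = band / posterior_width
print(round(p_after / p_before))  # 10: about a factor-of-ten boost in credence
```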

Of course, some people will want to turn this story into an argument that you don’t know that the student’s statement is wrong. My preference is just to make this statement another example of why knowledge is an unhelpful category.

Thursday, October 20, 2022

Double punishment

Suppose Alice deserves a punishment of degree d, and Bob and Carl each impose on her a different punishment of degree d. Who unjustly punished Alice?

If one punishment came before the other, we can say that the second punishment was unjust, since it was the punishment of a person who no longer deserved punishment. But what if the two punishments are simultaneous?

Maybe we can say that each of Bob and Carl contributed to an unjust punishment. But what each contributed was just! Still, the contribution story seems best to me.

Wednesday, October 19, 2022

More on independence

Suppose that I uniformly randomly choose a number x between 0, inclusive, and 1, exclusive. I then look at the bits b1, b2, ... after the binary point in the binary expansion x = 0.b1b2.... Each bit has equal probability 1/2 of being 0 or 1, and the bits are independent by the standard mathematical definition of independence.

Now, what I said is actually underspecified. For some numbers have two binary expansions. E.g., 1/2 can be written as 0.100000... or as 0.011111... (compare how in decimal we have 1/2 = 0.50000... = 0.49999...). So when I talk of “the” binary expansion, I need to choose one of the two. Suppose I do the intuitive thing, and consistently choose the expansion that ends with an infinite string of zeroes over the expansion that ends with an infinite string of ones.
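Here is a small sketch of that convention (my own illustration): the procedure below always yields the expansion that ends in zeroes.

```python
from fractions import Fraction

def bits(x, n):
    """First n bits of the binary expansion of x in [0,1) that ends in zeroes."""
    out = []
    for _ in range(n):
        x *= 2
        bit = int(x >= 1)
        out.append(bit)
        x -= bit
    return out

print(bits(Fraction(1, 2), 8))  # [1, 0, 0, 0, 0, 0, 0, 0]
# The other expansion of 1/2 is 0.0111111...; the convention in the text never produces it.
```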

This fine point doesn’t affect anything I said about independence, given the standard mathematical definition thereof. But there is an intuitive sense of independence in which we can now see that the bits are not independent. For instance, while each bit can be 1 on its own, it is impossible to have all the bits be 1 (this is actually impossible regardless of how I decided on choosing the expansion, because x = 1 is excluded), and indeed impossible to have all the bits be 1 from some point on. There is a very subtle dependence between the bits that we cannot define within classical probability, a dependence that would be lacking if we tossed an infinite number of "really" independent fair coins.

Tuesday, October 18, 2022

Expected utility maximization

Suppose every day for eternity you will be offered a gamble, where on day n ≥ 1 you can choose to pay half a unit of utility to get a chance of 2^(−n) at winning 2^n units of utility.

At each step, the expected winnings are 2^(−n) ⋅ 2^n = 1 unit of utility, and at the price of half a unit, it looks like a good deal.

Here’s what will happen if you always go for this gamble. It is almost sure (i.e., it has probability one) that you will only win a finite number of times. This follows from the Borel-Cantelli lemma and the fact that ∑ 2^(−n) < ∞. So you will pay the price of half a unit of utility every day for eternity, and win only a finite amount. That’s a bad deal.
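A quick simulation sketch of the trend (my own code; nothing hangs on the particular seeds):

```python
import random

def run(days, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for n in range(1, days + 1):
        total -= 0.5                  # the price of playing on day n
        if rng.random() < 2.0 ** -n:  # win 2^n with probability 2^-n
            total += 2.0 ** n
    return total

for d in (10, 100, 1000):
    print(d, run(d))
# Wins beyond the first few days are astronomically unlikely, so the total is
# roughly -0.5 * days plus whatever was won early on, echoing the Borel-Cantelli point.
```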

Granted, this assumes you will in fact play an infinite number of times. But it is enough to show that expected utility maximization in individual choices is not always the best policy (and suggests a limitation in the argument here).

Objection: All this has to do with aggregating an infinite number of payments, or traversing an infinite future, and hence is just another paradox of infinity.

Response: Actually the crucial point can be made without aggregating infinitely many payments. Suppose you adopt the policy of accepting the gamble. Then, with probability one, there will come a day M after which you never win again. By day M, you may well have won some (maybe very large) finite amount. But after that day, you will keep on paying to play and never win again. After some further finite number of days, your losses will overtake your winnings, and after that you will just fall further and further behind every day. This unhappy fate is almost sure if you always accept the gamble, and hence if you adopt expected utility maximization in individual decisions as your policy. And the unhappiness of this fate does not depend on aggregation of infinitely many utilities.

Question: What if the game ends after a fixed large finite number of steps?

Response: In any finite number of steps, of course the expected winnings are higher than the price you pay. But nonetheless as the number of steps gets large, the chance at those expected winnings shrinks. Imagine that the game goes on for 200 days, the game on day 100 has finished, and you’re now choosing your policy for the next 100 days. The expected utility of playing for the next 100 days is 50 units. However, assuming you accept this policy, the probability that you will win anything over the next 100 days is less than 2^(−100), and if you don’t win anything, you lose 50 units of utility. So it doesn’t seem crazy to think that the no-playing policy is better, even though it has worse expected utility. In fact, it seems like quite a reasonable thing to neglect that tiny probability of winning, less than 2^(−100), and refuse to play. And knowing that the expected utility reasoning when extended for infinite time leads to disaster (infinite loss!) should make one feel better about the decision to violate expected utility maximization.
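The finite-horizon numbers check out exactly (a small verification using exact rational arithmetic):

```python
from fractions import Fraction

days = range(101, 201)
expected_value = sum(Fraction(1, 2) ** n * 2 ** n - Fraction(1, 2) for n in days)
print(expected_value)  # 50

p_win_anything_bound = sum(Fraction(1, 2) ** n for n in days)  # union bound on winning at all
print(p_win_anything_bound < Fraction(1, 2) ** 100)            # True: less than 2^(-100)
```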

Final remark: It is worth considering what happens in interpersonal cases, too. Suppose infinitely many people numbered 1, 2, 3, ... are given the opportunity to play the game, with person n being given the opportunity of winning 2^n units with probability 2^(−n). If everyone goes for the game, then almost surely a finite number of people will win a finite amount while an infinite number pay the half-unit price. That’s disastrous: an infinite price is being paid for a finite benefit.

Monday, October 17, 2022

Probabilistic trolleys

Generally people think that if a trolley is heading for a bunch of people, it’s wrong to push an innocent bystander in front of the trolley to stop it before it kills the other people, with the innocent bystander dying from the impact.

But imagine that it is 99% likely that the bystander will survive the impact, but 100% certain that the five people further down the track would die. Perhaps the trolley is accelerating downhill, and currently it only has a 1% chance of lethality, but by the time it reaches the five people at the bottom of the hill, it has a 100% chance of lethality. Or perhaps the five people are more fragile, or the bystander is well-armored. For simplicity, let’s also suppose that the trolley cannot inflict any major injury other than death. At this point, it seems plausible that it is permissible to push the bystander in front of the trolley.

But now let’s suppose the situation is repeated over and over, with new people at the bottom of the track but the same unfortunate bystander. Eventually the bystander dies, and the situation stops (maybe that death is what convinces the railroad company to fix the brakes on their trolleys). We can expect about 500 people to be saved at this point. However, it seems that in the case where the bystander wasn’t going to survive the impact, it would have been wrong to push them even to save 500.
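A quick check of the “about 500” figure (my own back-of-the-envelope): the number of pushes until the bystander dies is geometrically distributed with mean 1/0.01 = 100, and each push saves five people.

```python
p_death_per_push = 0.01
expected_pushes = 1 / p_death_per_push  # geometric mean: about 100 pushes
print(expected_pushes * 5)              # about 500 people saved in expectation
```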

There are at least two non-consequentialist ways out of this puzzle.

  1. It is not wrong to push the bystander in front of the trolley even in the original case where doing so is fatal. After all, one is not intending the bystander’s death, but only their absorption of kinetic energy. In my 2013 paper, I argued that this constitutes wrongful lethal endangerment when the bystander does not consent, even if it is not an intentional killing. But perhaps that judgment is wrong.

  2. It is wrong to push the bystander to save five, but not wrong to push them to save five hundred. While this is a special case of threshold deontology, one can make this move without embracing threshold deontology. One can say that no matter how many are saved, it is wrong to intentionally kill the innocent bystander, but lethal endangerment becomes permissible once the number of people saved is high enough.

Initially, I also thought the following was an appealing solution: It matters whether it is the same bystander who is pushed in front of the trolley each time or a different one. Pushing the same bystander repeatedly unjustly imposes a likely-lethal burden on them, and that is wrong. But it would be permissible to push a different bystander each time onto the track, even though it is still almost certain that eventually a bystander will die. The problem with this solution is this. When the sad situation is repeated with different bystanders, by adopting the policy of pushing the bystander, we are basically setting up a lethal lottery for the bystanders—one of them will be killed. But if we can do that, then it seems we could set up a lethal lottery a different way: Choose a random bystander out of, say, 500, and then keep on pushing that bystander. (Remember that the way the story was set up, death is the only possible injury, so don’t think of that bystander as getting more and more bruised; they are unscathed until they die.) But that doesn’t seem any different from just pushing the same bystander without any lottery, because it is pretty much random which human being will end up being the bystander.

Friday, October 14, 2022

Another thought on consequentializing deontology

One strategy for accounting for deontology while allowing the tools of decision theory to be used is to set such a high disvalue on violations of deontic constraints that we end up having to obey the constraints.

I think this leads to a very implausible consequence. Suppose you shouldn’t violate a deontic constraint to save a million lives. But now imagine you’re in a situation where you need to ϕ to save ten thousand lives, and suppose that the non-deontic-consequence badness of ϕing is negligible as compared to ten thousand lives. Further, you think it’s pretty likely that there is no deontic constraint against ϕing, but you’ve heard that a small number of morally sensitive people think there is. You conclude that there is a 1% chance that there is a deontic constraint against ϕing. If we account for the fact that you shouldn’t violate a deontic constraint to save a million lives by setting a disvalue on violation of deontic constraints greater than the disvalue of a million deaths, then a 1% risk of violating a deontic constraint is worse than ten thousand deaths, and so you shouldn’t ϕ because of the 1% risk of violating a deontic constraint. But this is surely the wrong result. One understands a person of principle refusing to do something that clearly violates a deontic constraint to save lots of lives. But to refuse to do something that has a 99% chance of not violating a deontic constraint to save lots of lives, solely because of that 1% chance of deontic violation, is very implausible.
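Spelled out, the consequentializer’s bookkeeping in that case runs roughly as follows (units of one life = 1; the numbers are just the ones in the paragraph above):

```python
disvalue_of_violation = 1_000_000  # set at least as high as a million deaths
p_violation = 0.01                 # your credence that phi-ing violates a constraint
lives_saved_by_phi = 10_000

expected_disvalue = p_violation * disvalue_of_violation
print(expected_disvalue)                        # 10000.0
print(expected_disvalue >= lives_saved_by_phi)  # True: so the model tells you not to phi
```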

While I think this argument is basically correct, it is also puzzling. Why is it that it is so morally awful to knowingly violate a deontic constraint, but a small risk of violation can be tolerated? My guess is it has to do with where deontic constraints come from: they come from the fact that in certain prohibited actions one is setting one’s will against a basic good, like the life of the innocent. In cases where violation is very likely, one simply is setting one’s will against the good. But when it is unlikely, one simply is not.

Objection: The above argument assumes that the disvalue of deaths varies linearly in the number of deaths and that expected utility maximization is the way to go.

Response: Vary the case. Imagine that there is a ticking bomb that has a 99% chance of being defective and a 1% chance of being functional. If it’s functional, then when the timer goes off a million people die. And now suppose that the only way to disarm the bomb is to do something that has a 1% chance of violating a deontic constraint, with the two chances (functionality of the bomb and violation of constraint) being independent. It seems plausible that you should take the 1% risk of violating a deontic constraint to avoid a 1% chance of a million people dying.

Thursday, October 13, 2022

On monkeys and exemplar theories of salvation

On “exemplar” theories of salvation, Christ’s work of the cross saves us by providing a deeply inspiring example of love, sacrifice, or the like.

Such theories of salvation have the following unsavory consequence: they imply that it would be possible for us to be saved by a monkey.

For imagine that a monkey typing on a typewriter at random wrote a fictitious story of a life in morally relevant respects like that of Christ, and people started believing that story. If Christ saves us by providing an inspiring example, then we could have gotten the very same effect by reading that fictitious story typed at random by a monkey and erroneously thinking the story to be true.

Of course, that’s just a particularly vivid way of putting the standard objection against exemplar theories that they are Pelagian. I have nothing against monkeys except that they are creatures, and so if it is possible to be saved by a monkey, then it is possible to be saved by creatures, which is Pelagianism.

Wednesday, October 12, 2022

Compatibilism and servitude

Suppose determinism and compatibilism are true. Imagine that a clever alien crafted a human embryo and the conditions on earth so as to produce a human, Alice, who would end up living in ways that served the alien’s purposes, but whose decisions to serve the alien had the right kind of connection with higher-order desires, reasons, decision-making faculties, etc. so that a compatibilist would count them as right. Would Alice's decisions be free?

The answer depends on whether we include among the compatibilist conditions on freedom the condition that the agent’s actions are not intentionally determined by another agent. If we include that condition, then Alice is not free. But it is my impression that defenders of compatibilism these days (e.g., Mele) have been inclining towards not requiring such a non-determination-by-another-agent condition. So I will take it that there is no such condition, and Alice is free.

If this is right, then, given determinism and compatibilism, it would be in principle possible to produce a group of people who would economically function just like slaves, but who would be fully free. Their higher-order desires, purposes and values would be chosen through processes that the compatibilist takes to be free, but these desires, purposes and values would leave them freely giving all of their waking hours to producing phones for a mega-corporation in exchange for a bare minimum of sustenance, and with no possibility of choosing otherwise.

That's not freedom. I conclude, of course, that compatibilism is false.

Divine permission ethics

There are two ways of thinking about the ethics of consent.

On the first approach, there are complex prohibitions against non-consensual treatment in a number of areas of life, with details varying depending on the area of life (e.g., the prohibitions are even more severe in sexual ethics than in medicine). Thus, this is a picture where we start with a default permission, and layer prohibitions on top of it.

On the second, we start with a default autonomy-based prohibition on one person doing anything that affects another. That, of course, ends up prohibiting pretty much everything. But then we layer exceptions on that. The first is a blanket exception for when the affected person consents in the fullest way. And then we add lots and lots more exceptions, such as when the effect is insignificant, when one has a special right to the action, etc.

The second approach is interesting. Most ethical systems start with a default of permission, and then have prohibitions on top of that. But the second system starts with a default of prohibitions, and then has permissions on top of that.

The second approach raises this question. Given that the default prohibition on other-affecting actions is grounded in autonomy, how could anything but the other’s consent override that prohibition? I think one direction this question points is towards something I’ve never heard explored: divine permission ethics. God’s permission seems our best candidate for what could override an autonomy-based prohibition. So we might get this picture of ethics. There is a default prohibition on all other-affecting actions, followed by two exceptions: when the affected person consents and when God permits.

I still prefer the first approach.

Thursday, October 6, 2022

Having to do what one thinks is very likely wrong

Suppose Alice borrowed some money from Bob and promised to give it back in ten years, and this month it is time to give it back. Alice’s friend Carl is in dire financial need, however, and Alice promised Carl that at the end of the month, she will give him any of her income this month that she hasn’t spent on necessities. Paying a debt is, of course, a necessity.

Now, suppose neither Alice nor Bob remember how much Alice borrowed. They just remember that it was some amount of money between $300 and $500. Now, obviously in light of her promise to Bob:

  1. It is wrong for Alice to give less to Bob than she borrowed.

But because of her promise to Carl, and because any amount above the owed debt is not a necessity:

  2. It is wrong for Alice to give more to Bob than she borrowed.

And now we have a puzzle. Whatever amount between $300 and $500 Alice gives to Bob, she can be extremely confident that it is either less or more than she borrowed, and in either case she does wrong. Thus whatever Alice does, she is confident she is doing wrong.

What should Alice do? I think it’s intuitive that she should do something like minimize the expected amount of wrong.
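Here is one toy way to make that concrete (the model is entirely my own assumption, not anything argued for above): suppose the debt is uniformly distributed over $300–$500 and the amount of wrong is proportional to the gap between what Alice pays and what she owed. Then the expected wrong is minimized by paying the median of the prior, $400.

```python
# Toy model (my assumption): wrong = |payment - owed|, owed ~ Uniform(300, 500).
import random

random.seed(0)
owed_samples = [random.uniform(300, 500) for _ in range(20_000)]

def expected_wrong(payment):
    return sum(abs(payment - owed) for owed in owed_samples) / len(owed_samples)

best = min(range(300, 501), key=expected_wrong)
print(best)  # about 400: the median of the prior minimizes the expected gap
```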

Wednesday, October 5, 2022

Induction to the Causal Principle?

I’m curious whether one can infer the causal principle C that everything that comes into existence has a cause inductively on the basis of our observations of things with causes.

There are a couple of issues with such an inference. First, let’s think about the inductive evidence about causes globally. It seems to consist primarily in these two observations:

  a. we have found causes for many things that come into existence, but

  b. there are many things that come into existence for which we have yet to find causes.

It is worth noting that in terms of individuals, (b) vastly outnumbers (a). Consider insects. Of the myriad insects that we come into contact with daily, we have found the causes of very few. Of course, we assume that the others have causes, causes that we suppose to be parent insects, but we haven’t found the parents.

For observations (a) and (b) to support C, these observations have to be more likely on C than on C’s negation. But now we have two problems. First, on the negation of C it doesn’t seem like we can make any sense of the probability that some item has or does not have a cause. Causeless events have no probabilities. Second, even if we somehow assign such a probability, it is far from clear that the observations of (a) and (b) are more to be expected on C than on not-C.

Second, I suspect that often when we claim to have found y to be the cause of x, our reason for belief that y is the cause of x depends on our assumption of C. Our best candidate for a cause of x is y, so we take y to be the cause. But I wonder how often this inference isn’t based on our dismissing the possibility that x just has no cause.

None of this is meant to impugn C. I certainly think C is true. But I think the reasons for believing C are metaphysical or philosophical rather than inductive observation.

Monday, October 3, 2022

The Church-Turing Thesis and generalized Molinism

The physical Church-Turing (PCT) thesis says that anything that can be physically computed can be computed by a Turing machine.

If generalized Molinism—the thesis that for any sufficiently precisely described counterfactual situation, there is a fact of the matter what would happen in that situation—is true, and indeterminism is true, then PCT seems very likely false. For imagine the function f from the natural numbers to {0, 1} such that f(n) is 1 if and only if the coin toss on day n would be heads, were I to live forever and daily toss a fair coin—with whatever other details need to be put in to get the "sufficiently precisely described". But only countably many functions are Turing computable, so with probability one, an infinite sequence of coin tosses would define a Turing non-computable function. But f is physically computable: I could just do the experiment.

But wait: I’m going to die, and even if there is an afterlife, it doesn’t seem right to characterize whatever happens in the afterlife as physical computation. So all I can compute is f(n) for n < 30000 or so.

Fair enough. But if we say this, then the PCT becomes trivial. For given finite life-spans of human beings and of any machinery in an expanding universe with increasing entropy, only finitely many values of any given function can be physically computed. And any function defined on a finite set can, of course, be trivially computed by a Turing machine via a lookup-table.
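For what it’s worth, the lookup-table point is as trivial as it sounds (a made-up finite fragment of f, just for illustration):

```python
finite_f = {0: 1, 1: 0, 2: 1, 3: 1}  # hypothetical first few values of f

def compute(n):
    return finite_f[n]               # pure table lookup, no algorithm needed

print([compute(n) for n in range(4)])
```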

So, either we trivialize PCT by insisting on the facts of our physical universe that put a finite limit on our computations, or in our notion of “physically computed” we allow for idealizations that make it possible to go on forever. If we do allow for such idealizations, then my argument works: generalized Molinism makes PCT unlikely to be true.

Saturday, October 1, 2022

Vagueness and moral obligation

It sure seems like there is vagueness in moral obligation. For instance, torture of the innocent is always wrong, making an innocent person’s life mildly unpleasant for a good cause is not always wrong, and in between we can run a Sorites sequence.

What view could a moral realist have about this? Here are four standard things that people say about a vague term “ϕ”.

  1. Error theory: nothing is or could be ϕ; or maybe “ϕ” is nonsense.

  2. Non-classical logic: there are cases where attributions of “ϕ” are neither true nor false.

  3. Supervaluationism: there are a lot of decent candidates for the meaning of “ϕ”, and no one of them is the meaning.

  4. Standard epistemicism: there are a lot of decent candidates for the meaning of “ϕ”, and one of them is the meaning, but we don’t know which one, because we don’t know the true semantic theory and the details of our linguistic usage.

If “ϕ” is “moral obligation”, and we maintain moral realism, then (1) is out. I think (3) and (4) are only possible options if we have a watered-down moral realism. For on a robust moral realism, moral obligations are really central to our lives, and nothing else could play the kind of central role in our lives that they do. On a robust moral realism, moral obligation is not one thing among many that just as well or almost as well fit our linguistic usage. Here is another way to put the point. On both (3) and (4), the question of what exact content “ϕ” has is a merely verbal question, like the question of how much hair someone can have and still be bald: we could decide to use “bald” differently, with no loss. But questions about moral obligation are not merely verbal in this way.

This means that given robust moral realism, of the standard views of vagueness all we have available is non-classical logic. But non-classical logic is just illogical (thumps table, hard)! :-)

So we need something else. If we deny (1)-(3), we have to say that ultimately “moral obligation” is sharp, but of course we can’t help but admit that there are Sorites sequences and we can’t tell where moral obligation begins and ends in them. But we cannot explain our ignorance in the semantic way of standard epistemicism. What we need is something like epistemicism, but where moral obligation facts are uniquely distinguished from other facts—they have this central overriding role in our lives—and yet there are moral facts that are likely beyond human ken. One might want to call this fifth view “non-standard epistemicism about vagueness” or “denial of vagueness”—whether we call it one or the other may just be a verbal question. :-)

In any case, I find it quite interesting that to save robust moral realism, we need either non-classical logic or something that we might call “denial of vagueness”.