Thursday, March 21, 2019

If open futurism is true, then there are possible worlds that can't ever be actual

Assume open futurism, so that, necessarily, undetermined future-tensed “will” statements are either all false or all lack truth value. Then there are possible worlds containing me such that it is impossible for it to be true that I am ever in those worlds. What do I mean?

Consider possible worlds where I flip an indeterministic fair coin on infinitely many days, starting with day 1. Among these worlds, there is a possible world wH where the coin always comes up heads. But it is impossible for it to be true that I am in that world. For that I ever am in that world entails that infinitely many future indeterministic fair coin tosses will be heads. But a proposition reporting future indeterministic events cannot be true given an open future. So, likewise, it cannot be true that I am ever in that world.

But isn’t it absurd that there be a possible world with me such that it is impossible that it be true that I am in it?

My presence is an unnecessary part of the above argument. The point can also be put this way. If open futurism is true, there are possible worlds (such as wH) that can’t possibly ever be actual.

If classical theism rules out open theism, then classical theism rules out presentism

On presentism, and on most if not all other versions of the A-theory, propositions change in truth value. For instance, on presentism, in the time of the dinosaurs it was not true that horses exist, but now it is true; on growing block, ten years ago the year 2019 wasn't at the leading edge of reality, but now it is. The following argument seems to show that such views are incompatible with classical theism.

  1. God never comes to know anything.

  2. If at t1, x doesn’t know a proposition p but at t2 > t1, x knows p, then x comes to know p.

  3. If propositions change in truth value, then there are times t1 < t2 and a proposition p such that p is not true at t1 and p is true at t2.

  4. It is always the case that God knows every true proposition.

  5. It is never the case that anyone knows any proposition that isn’t true.

  6. So, if propositions change in truth value, then there are times t1 < t2 and a proposition p such that God doesn’t know p at t1 but God does know p at t2. (by 3-5)

  7. So, if propositions change in truth value, God comes to know something. (by 2 and 6)

  8. So, propositions do not change in truth value. (by 1 and 7)

I think the only controversial proposition is (1). Of course, some non-classical theists—say, open theists—will deny (1). But non-classical theists aren’t the target of the argument.

However, there is a way for classical theists to try to get out of (1) as well. They could say that the content of God’s knowledge changes, even though God and God’s act of knowing are unchanging. The move would be like this. We classical theists accept divine simplicity, and hence hold that God would not have been intrinsically any different had he created otherwise than he did. But had God created otherwise than he did, the content of his knowledge would have been different (since God knows what he creates). So the content of God’s knowledge needs to be partially constituted by created reality. (This could be a radical semantic externalism, say.) Thus, had God created otherwise than he did, God (and his act of knowledge which is identical to God) would have been merely extrinsically different.

But exactly the same move allows one to reconcile the denial of (1) with immutability. The content of God’s knowledge is partially constituted by created reality, and hence as created reality changes, the content of God’s knowledge changes, but the change in God is merely extrinsic, like a mother’s change from being taller than her daughter to being shorter than her daughter solely due to her daughter’s growth.

I agree that denying (1) is compatible with God’s being intrinsically unchanging. For a long time I thought that this observation destroyed the argument (1)-(8). But I now think not. For I am now thinking that even if the denial of (1) is compatible with immutability, (1) is still a part of classical theism. For it is a part of classical theism that God doesn’t learn in any way, and coming to know is a kind of learning.

Here is one way to see that (1) is a part of classical theism. Classical theists want to reject any open theist views. But here is one open theist view, probably the best one. The future is open and propositions reporting what people will freely do tomorrow are now either false or neither-true-nor-false, but tomorrow they come to be true. An omniscient being knows all true propositions, but it is no shortcoming of omniscience to fail to know propositions that aren’t true. Then, our open theist says, God learns these propositions as soon as they become true. This is all that omniscience calls for.

Now, classical theists will want to reject this open theist view on the grounds of its violating immutability. But they cannot do so if they themselves reject (1). For if the presentist (say) classical theist can reject (1) without violating immutability, so can our open theist. Indeed, our open theist can say exactly the same thing I suggested earlier: God changes extrinsically as time progresses, and the content of God’s knowledge changes, but God remains intrinsically the same.

So, what do I think the classical theist should say to our open theist? I think this: that God doesn’t come to know is not just a consequence of the doctrine of immutability, but is itself a part of the doctrine of immutability. A God who learns is mutable in an objectionable way even if this learning is not an intrinsic change in God. But if we say this, then of course we are committed to (1), and we cannot be presentists or accept any other of the theories of time on which propositions change in truth value.

I think the best response on the part of the classical theist who is an entrenched presentist would be to deny (1) and concede that classical theism does not rule out open theism. Instead, open theism is ruled out by divine revelation, and revelation here adds to classical theism. But it seems very strange to say that classical theism does not rule out open theism.

Wednesday, March 20, 2019

God and the B-theory of time

  1. All reality is such that it can be known perfectly from the point of view of God.

  2. The point of view of God is eternal and timeless.

  3. Thus, all reality is such that it can be known perfectly from an eternal and timeless point of view.

  4. If all reality is such that it can be known perfectly from an eternal and timeless point of view, then the B-theory of time is true.

  5. So, the B-theory of time is true.

I am not sure of premise (4), however.

Tuesday, March 19, 2019

Will dogs live forever?

Suppose a dog lives forever. Assuming the dog stays roughly dog-sized, there is only a finite number of possible configurations of the dog’s matter (disregarding insignificant differences on the order of magnitude of a Planck length, say). Then, eventually, all of the dog’s matter configurations will be re-runs, as we will run out of possible new configurations. Whatever the dog is feeling, remembering or doing is something the dog has already felt, remembered or done. It will be literally impossible to teach the dog a new trick (without swelling the dog beyond normal dog size).

But a dog’s life is a material life, unlike perhaps the life of a person. Plausibly, a dog’s mental states are determined by the configuration of the dog’s (brain) matter. So, eventually, every one of the dog’s mental states will be a re-run, too.

And then we will run out of states that have been re-run only once, and the dog will have only states that are on their second or later re-run. And so on. There will come a day when whatever the dog is feeling, remembering or doing is something the dog has felt, remembered or done a billion times: and there is still eternity to go.
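Here is a toy simulation of the pigeonhole point; the numbers (a thousand possible configurations, a million days) are invented, but any finite state count combined with a long enough run gives the same shape:

```python
# Toy pigeonhole illustration: a "dog" with finitely many possible total
# configurations, observed over a long run of days. The numbers are invented.
import random
from collections import Counter

N_STATES = 1000        # hypothetical number of possible dog-configurations
N_DAYS = 1_000_000     # a long finite stand-in for an unending life

random.seed(0)
seen = Counter()
new_days = 0
for day in range(N_DAYS):
    state = random.randrange(N_STATES)   # today's total configuration
    if seen[state] == 0:
        new_days += 1                    # a genuinely new experience
    seen[state] += 1

print("days with a never-before-seen configuration:", new_days)
print("days that were re-runs:", N_DAYS - new_days)
print("even the least-repeated configuration occurred", min(seen.values()), "times")
```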

Moreover, we’re not just talking about momentary re-runs. Eventually, every day of the dog’s life will be an identical re-run of an earlier day of the dog’s life (at least insofar as the dog is concerned: things beyond the power of the dog’s sensory apparatus might change). And then eventually every year of the dog’s life will be a re-run of an earlier year. And then there will come a year when every coming year of the dog’s life will already have been done a billion times already.

This doesn’t strike me as a particularly flourishing life for a dog. Indeed, it strikes me that it would be a more flourishing life for the dog to cut out the nth re-runs, and have the dog’s life come to a peaceful end.

Granted, the dog won’t be bored by the re-runs. In fact, probably the dog won’t know that things are being re-run over and over. In any case, dogs don’t mind repetition. But there is still something grotesque about such a life of re-runs. That’s just not the temporal shape a dog’s life should have, much as a dog shouldn’t be cubical or pyramidal in spatial shape.

If this is right, then considerations of a dog’s well-being do not lead to the desirability of eternal life for the dog. As far as God’s love for dogs goes, we shouldn’t expect God to make the dogs live forever.

This is, of course, the swollen head argument, transposed to dogs, from naturalist accounts of humans.

But maybe God would make dogs live forever because of his love for their human friends, not because of his love for the dogs themselves? Here, I think there is a better case for eternal life for dogs. But I am still sceptical. For the humans would presumably know that from the dog’s point of view, everything is an endless re-run. The dog has already taken a walk that looked and felt just like this one a billion times, and there is an infinite number of walks ahead that look and feel just like this one to the dog. Maybe to the human they feel different: the human can think about new things each time, because naturalism is false of humans, and so differences in human mental states don’t require differences in neural states (or so those of us who believe in an eternal afterlife for humans should say). But to the dog it’s just as before. And knowing that on the dog’s side it’s just endless repetition would, I think, be disquieting and dissatisfying to us. It seems to me that it is not fitting for a human to be tied down for an eternity of friendship with a finite being that eventually has nothing new to exhibit in its life.

So, I doubt that God would make dogs live forever because of his love for us, either. And the same goes for other brute animals. So, I don’t think brute animals live forever.

All this neglects Dougherty’s speculative suggestion that in the afterlife animals may be transformed, Narnia-like, so that they become persons. If he’s right, then the naturalistic supervenience assumption will be no more true for the animals than for us, and the repetition argument above against dogs living forever will fail. But the argument above will still show that we shouldn’t expect brute animals to live forever. And I am dubious of the transformation hypothesis, too.

At the same time, I want to note that I think it is not unlikely that there will be brute animals on the New Earth. But if so, I expect they will have finite lifespans. For while an upper temporal limit to the life of a human would be an evil, an upper temporal limit to the life of a brute animal seems perfectly fitting.

Monday, March 18, 2019

Disliking

It is a staple of sermons on love that we are required to love our neighbor, not like them. I think this is true. But it seems to me that in many cases, perhaps even most cases, disliking people is a moral flaw. My argument below has holes, but I still think there is something to the line of thought. I am sharing it because it has helped me identify what seems to be a flaw in myself, and it may be a flaw that you share.

Just about everyone has some dislikable feature. After all, just about everyone has a moral flaw, and every moral flaw is dislikable. Moreover, there are many dislikable features that are not moral flaws: a voice that is too hoarse, a face that is too asymmetrical, an intellect that is too slow, etc. However, that Alice has a dislikable feature F need not justify my disliking Alice: on its face it only justifies my disliking F. For the feature to justify disliking Alice, it would have to be a feature sufficiently central to Alice as a person. And only moral flaws or faults would have the relevant centrality, I think.

If I dislike persons because they have a disability or because of their gender or their race, that is a moral flaw in me, even if I act justly towards them. This suggests that dislikes cannot have an arbitrary basis. There must be a good reason for disliking. And it is hard to see how anything other than a moral flaw could form the right kind of basis.

Moreover, not just any moral flaw is sufficient to justify dislike of the person. It has to be a flaw that goes significantly beyond the degree of flawedness that people ordinarily exhibit. Here is a quick line of thought. Few people should dislike themselves. (Maybe Hitler should. And I don’t deny that almost everyone should be dissatisfied with themselves.) Hence few people are dislikable. Granted, there is a leap here: a move from being dislikable to oneself to being dislikable to another. But if the basis of dislikability is moral flaws, it seems to me that there would be something objectionably arbitrary about disliking someone who isn’t dislikable simpliciter.

Yet I find myself disliking people on the basis of features that aren’t moral flaws or at least aren’t moral flaws significantly bigger than flaws I myself have. Indeed, often the basis is a flaw smaller than flaws I know myself to have, and sometimes it is a flaw I myself share. This disliking is itself a flaw.

I may love the people I unfairly dislike. But I don’t love them enough. For unfair disliking goes against the appreciative aspect of love (unless, of course, the person is so flawed as to be really dislikable—in which case the appreciative aspect may be largely limited to an appreciation of what they ought to be rather than what they now are).

I used to be rather laissez-faire about my dislikes, on the fallacious ground that love is not the same thing as liking. Enough. Time to fight the good fight against dislike of persons and hence for a more appreciative love. Pray for me.

That said, there is nothing wrong in disliking particular dislikable features in others. But when they are dislikable, one should also dislike them in oneself.

Σ₁⁰ alethic Platonism

Here is an interesting metaphysical thesis about mathematics: Σ₁⁰ alethic Platonism. According to Σ₁⁰ alethic Platonism, every sentence about arithmetic with only one unbounded existential quantifier (i.e., an existential quantifier that ranges over all natural numbers, rather than all the natural numbers up to some bound), i.e., every Σ₁⁰ sentence, has an objective truth value. (And we automatically get Π₁⁰ alethic Platonism, as Π₁⁰ sentences are equivalent to negations of Σ₁⁰ sentences.)

Note that Σ₁⁰ alethic Platonism is sufficient to underwrite a weak logicism that says that mathematics is about what statements (narrowly) logically follow from what recursive axiomatizations. For Σ₁⁰ alethic Platonism is equivalent to the thesis that there is always a fact of the matter about what logically follows from what recursive axiomatization.

Of course, every alethic Platonist is a Σ₁⁰ alethic Platonist. But I think there is something particularly compelling about Σ₁⁰ alethic Platonism. Any Σ₁⁰ sentence, after all, can be rephrased into a sentence saying that a certain abstract Turing machine will halt. And it does seem like it should be possible to embody an abstract Turing machine as a physical Turing machine in some metaphysically possible world with an infinite future and infinite physical resources, and then there should be a fact of the matter whether that machine would in fact halt.
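To make the rephrasing concrete, here is a toy sketch of the kind of witness-search program whose halting is equivalent to the truth of a Σ₁⁰ sentence; the particular example sentence (“some even number greater than 2 is not a sum of two primes”) is just an illustrative choice of decidable matrix:

```python
# Sketch: a Sigma_1^0 sentence "there is an n such that P(n)", with P decidable,
# is true exactly when this witness-searching program halts.
def is_prime(k: int) -> bool:
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k ** 0.5) + 1))

def P(n: int) -> bool:
    # Decidable matrix: n is an even number > 2 that is not a sum of two primes.
    if n <= 2 or n % 2:
        return False
    return not any(is_prime(a) and is_prime(n - a) for a in range(2, n // 2 + 1))

def search():
    # Halts (returning a witness) iff the existential sentence is true;
    # here, it runs forever just in case Goldbach's conjecture holds.
    n = 0
    while True:
        if P(n):
            return n
        n += 1
```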

There is a hitch in this line of thought. We need to worry about worlds with “non-standard” embodiments of the Turing machine, embodiments where the “physical Turing machine” is performing an infinite task (a supertask, in fact an infinitely iterated supertask). To rule those worlds out in a non-arbitrary way requires an account of the finite and the infinite, and that account is apt to presuppose Platonism about the natural numbers (since the standard mathematical definition of the finite is that a finite set is one whose cardinality is a natural number). We causal finitists, however, do not need to worry, as we think that it is impossible for Turing machines to perform infinite tasks. This means that causal finitists—as well as anyone else who has a good account of the difference between the finite and the infinite—have good reason to accept Σ₁⁰ alethic Platonism.

I haven't done any surveys, but I suspect that most mathematicians would be correctly identified as at least being Σ₁⁰ alethic Platonists.

Logicism and Goedel

Famously, Goedel’s incompleteness theorems refuted (naive) logicism, the view that mathematical truth is just provability.

But one doesn’t need all of the technical machinery of the incompleteness theorems to refute that. All one needs is Goedel’s simple but powerful insight that proofs are themselves mathematical objects—sequences of symbols (an insight emphasized by Goedel numbering). For once we see that, then the logicist view is that what makes a mathematical proposition true is that a certain kind of mathematical object—a proof—exists. But the latter claim is itself a mathematical claim, and so we are off on a vicious regress.
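For illustration, here is a toy version of Goedel numbering, with an invented little alphabet: a finite sequence of symbols becomes a single natural number, so that “a proof of p exists” is itself an arithmetical existence claim:

```python
# Toy Goedel numbering: a proof, viewed as a finite sequence of symbols, becomes
# a single natural number via prime-power coding. Purely illustrative.
SYMBOLS = ["(", ")", "->", "~", "A", "B", "|-"]   # an invented little alphabet

def primes(n):
    """Return the first n primes (trial division is fine for a toy example)."""
    found = []
    k = 2
    while len(found) < n:
        if all(k % p for p in found):
            found.append(k)
        k += 1
    return found

def encode(seq):
    """Code the sequence s_1, ..., s_m as the product of p_i ** (code(s_i) + 1)."""
    n = 1
    for p, s in zip(primes(len(seq)), seq):
        n *= p ** (SYMBOLS.index(s) + 1)
    return n

def decode(n):
    """Recover the symbol sequence from its code number."""
    seq, k = [], 2
    while n > 1:
        exponent = 0
        while n % k == 0:
            n //= k
            exponent += 1
        if exponent:
            seq.append(SYMBOLS[exponent - 1])
        k += 1
    return seq

assert decode(encode(["A", "->", "B"])) == ["A", "->", "B"]
```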

Friday, March 8, 2019

Obligations of friendship

We are said to have various obligations, especially of benevolence, to our friends precisely because they are our friends. Yet this seems mistaken to me if friendship is by definition mutual.

Suppose you and I think we really are friends. We do all the things good friends do together. We think we are friends. And you really exhibited with respect to me, externally and internally, all the things that good friends exhibit. But one day I realize that the behavior of my heart has not met the minimal constitutive standards for friendship. Perhaps, though I had done things to benefit you, they were all done for selfish ends. And thus I was never your friend, and if friendship is mutual, it follows that we weren’t ever friends.

At the same time, I learn that you are in precisely the kind of need that triggers onerous obligations of benevolence in friends. And so I think to myself: “Whew! I thought I would have an obligation to help, but since I was always selfish in the relationship, and not a real friend, I don’t.”

This thought would surely be a further moral corruption. Granted, if I found out that you had never acted towards me as a friend does, but had always been selfish, that might undercut my obligation to you. But it would be very odd to think that finding out that I was selfish would give me permission for further selfishness!

So, I think, in the case above I still would have towards you the kinds of obligations of benevolence that one has towards one’s friends. Therefore, it seems, these obligations do not arise precisely from friendship. The two-sided appearance of friendship coupled with one-sided (on your side) reality is enough to generate these obligations.

Variant case: For years I’ve been pretending to be your friend for the sake of political gain, while you were sincerely doing what a friend does. And now you need my help. Surely I owe it to you!

I am not saying that these sorts of fake friendships give rise to all the obligations normally attributed to friendship. For instance, one of the obligations normally attributed to friendship is to be willing to admit that one is friends with the other person (Peter violated this obligation when he denied Jesus). But this obligation requires real friendship. Moreover, certain obligations to socialize with one’s friends depend on the friendship being real.

A tempting thought: Even if friendship is mutual, there is a non-mutual relation of “being a friend to”. You can be a friend to someone who isn’t a friend to you. Perhaps in the above cases, my obligation to you arises not from our friendship, which does not exist, but from your being a friend to me. But I think that’s not quite right. For then we could force people to have obligations towards us by being friends to them, and that doesn’t seem right.

Maybe what happens is this. In friendship, we invite our friends’ trust in us. This invitation of trust, rather than the friendship itself, is what gives rise to the obligations of benevolence. And in fake friendships, the invitation of trust—even if insincere—also gives rise to obligations of benevolence.

So, we can say that we have obligations of benevolence to our friends because they are our friends, but not precisely because they are our friends. Rather, the obligations arise from a part of friendship, the invitation of trust, a part that can exist apart from friendship.

Wednesday, March 6, 2019

Another dilemma?

Following up on my posts (this and this) regarding puzzles generated by moral uncertainty, here is another curious case.

Dr. Alice Kowalska believes that a steroid injection will be good for her patient, Bob. However, due to a failure of introspection, she also believes that she does not believe that a steroid injection will be beneficial to Bob. Should she administer the steroid injection?

In other words: Should Dr. Kowalska do what she thinks is good for her patient, or should she do what she thinks she thinks is good for her patient?

The earlier posts pushed me in the direction of thinking that subjective obligation takes precedence over objective obligation. That would suggest that she should do what she thinks she thinks is good for her patient.

But doesn’t this seem mistaken? After all, we don’t want Dr. Kowalska to be gazing at her own navel, trying to figure out what she thinks is good for the patient. We want her to be looking at the patient, trying to figure out what is good for the patient. So, likewise, it seems that her action should be guided by what she thinks is good for the patient, not what she thinks she thinks is good for the patient.

How, though, to reconcile this with the action-guiding precedence that the subjective seems to have in my previous posts? Maybe it’s this. What should be relevant to Dr. Kowalska is not so much what she believes, but what her evidence is. And here the case is underdescribed. Here is one story compatible with what I said above:

  1. Dr. Kowalska has lots of evidence that steroid injections are good for patients of this sort. But her psychologist has informed her that because of a traumatic experience involving a steroid injection, she has been unable to form the belief that naturally goes with this evidence. However, Dr. Kowalska’s psychologist is incompetent, and Dr. Kowalska indeed has the belief in question, but trusts her psychologist and hence thinks she does not have it.

In this case, it doesn’t matter whether Dr. Kowalska believes the injection would be good for the patient. What matters is that she has lots of evidence, and she should inject.

Here is another story compatible with the setup, however:

  2. Dr. Kowalska knows there is no evidence that steroid injections are good for patients of this sort. However, her retirement savings are invested in a pharmaceutical company that specializes in these kinds of steroids, and wishful thinking has led to her subconsciously and epistemically akratically forming the belief that these injections are beneficial. Dr. Kowalska does not, however, realize that she has formed this subconscious belief.

In this case, intuitively, again it doesn’t matter that Dr. Kowalska has this subconscious belief. What matters is that she knows there is no evidence that the injections are good for patients of this sort, and given this, she should not inject.

If I am right in my judgments about 1 and 2, the original story left out crucial details.

Maybe we can tell the original story simply in terms of evidence. Maybe Dr. Kowalska on balance has evidence that the injection is good, while at the same time on balance having evidence that she does not on balance have evidence that the injection is good. I am not sure this is possible, though. The higher order evidence seems to undercut the lower order evidence, and hence I suspect that as soon as she gained evidence that she does not on balance have evidence, it would be the case that on balance she does not have evidence.

Here is another line of thought suggesting that what matters is evidence, not belief. Imagine that Dr. Kowalska and Dr. Schmidt both have the same evidence that it is 92% likely that the injections would be beneficial. Dr. Schmidt thereupon forms the belief that the injections would be beneficial, but Dr. Kowalska is more doxastically cautious and does not form this belief. But there is no disagreement between them as to the probabilities on the evidence. Then I think there should be no disagreement between them as to what course of action should be taken. What matters is whether 92% likelihood of benefit is enough to outweigh the cost, discomfort and side-effects, and whether the doctor additionally believes in the benefit is quite irrelevant.
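To put this in the form of a toy decision rule (with invented numbers for the stakes): the verdict depends only on the evidential probability and the costs and benefits, so a flag recording what the doctor additionally believes has nowhere to enter.

```python
# Toy decision rule: the verdict depends only on the evidential probability and
# the (invented) stakes; a "the doctor believes it" flag has nowhere to enter.
def should_inject(prob_benefit, benefit, cost, doctor_believes):
    expected_gain = prob_benefit * benefit - cost   # cost: discomfort, side-effects, etc.
    return expected_gain > 0                        # doctor_believes is deliberately unused

print(should_inject(0.92, benefit=10.0, cost=3.0, doctor_believes=True))    # Dr. Schmidt
print(should_inject(0.92, benefit=10.0, cost=3.0, doctor_believes=False))   # Dr. Kowalska
```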

Tuesday, March 5, 2019

More on moral risk

You are the captain of a small damaged spaceship two light years from Earth, with a crew of ten. Your hyperdrive is failing. You can activate it right now, in a last burst of energy, and then get home. If you delay activating the hyperdrive, it will become irreparable, and you will have to travel to earth at sublight speed, which will take 10 years, causing severe disruption to the personal lives of the crew.

The problem is this. When such a failing hyperdrive is activated, everything within a million kilometers of the spaceship’s position will be briefly bathed in lethal radiation, though the spaceship itself will be protected and the radiation will quickly dissipate. Your scanners, fortunately, show no planets or spaceships within a million kilometers, but they do show one large asteroid. You know there are two asteroids that pass through that area of space: one of them is inhabited, with a population of 10 million, while the other is barren. You turn your telescope to the asteroid. It looks like the uninhabited asteroid.

So, you come to believe there is no life within a million kilometers. Moreover, you believe that as the captain of the ship you have a responsibility to get the crew home in a reasonable amount of time, unless of course this causes undue harm. Thus, you believe:

  1. You are obligated to activate the hyperdrive.

You reflect, however, on the fact that ship’s captains have made mistakes in asteroid identification before. You pull up the training database, and find that at this distance, captains with your level of training make the relevant mistake only once in a million times. So you still believe that this is the lifeless asteroid, but now you get worried. You imagine a million starship captains making the same kind of decision as you. As a result, 10 million crew members get home on time to their friends and families, but in one case, 10 million people are wiped out on an asteroid. You conclude, reasonably, that this is an unacceptable level of risk. One in a million isn’t good enough. So, you conclude:

  2. You are obligated not to activate the hyperdrive.

This reflection on the possibility of perceptual error does not remove your belief in (1), indeed your knowledge of (1). After all, a one in a million chance of error is less than the chance of error in many cases of ordinary everyday perceptual knowledge—and, indeed, asteroid identification just is a case of everyday perceptual knowledge for a captain like yourself.

Maybe this is just a case of your knowing you are in a real moral dilemma: you have two conflicting duties, one to activate the hyperdrive and the other not to. But this fails to account for the asymmetry in the case, namely that caution should prevail, and there has to be an important sense of “right” in which the right decision is not to activate the hyperdrive.

I don’t know what to say about cases like this. Here is my best start. First, make a distinction between subjective and objective obligations. This disambiguates (1) and (2) as:

  3. You are objectively obligated to activate the hyperdrive.

  4. You are subjectively obligated not to activate the hyperdrive.

Second, deny the plausible bridge principle:

  5. If you believe you are objectively obligated to ϕ, then you are subjectively obligated to ϕ.

You need to deny (5), since you believe (3), and if (5) were true, then it would follow that you are subjectively obligated to activate the hyperdrive, and we would once again have lost sight of the asymmetric “right” on which the right thing is not to activate.

This works as far as it goes, though we need some sort of a replacement for (5), some other principle bridging from the objective to the subjective. What that principle is is not clear to me. A first try is some sort of an analogue to expected utility calculations, where instead of utilities we have the moral weights of non-violated duties. But I doubt that these weights can be handled numerically.
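Still, just to fix ideas, here is a toy version of that first try, granting purely for illustration that such weights could be given numbers (the weights below are invented; only the one-in-a-million credence and the crew figures come from the story):

```python
# Toy version of the "first try": score each action by the credence-weighted
# moral weight of the duties it would violate, and pick the lower score.
WEIGHT_PER_DEATH = 1000.0       # invented weight of violating "do not kill"
WEIGHT_PER_CREW_YEAR = 1.0      # invented weight of a crew member's disrupted year

p_inhabited = 1e-6              # credence that the asteroid is the inhabited one

badness_activate = p_inhabited * 10_000_000 * WEIGHT_PER_DEATH   # = 10000.0
badness_delay = 10 * 10 * WEIGHT_PER_CREW_YEAR                   # = 100.0

# On these invented numbers the expected moral weight favors not activating,
# matching the cautionary judgment in (2).
```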

And I still don’t know how to handle the problem of ignorance of the bridge principles between the objective and the subjective.

It seems there is some complex function from one’s total mental state to one’s full-stop subjective obligation. This complex function is one which is not known to us at present. (Which is a bit weird, in that it is the function that governs subjective obligation.)

A way out of this mess would be to have some sort of infallibilism about subjective obligation. Perhaps there is some specially epistemically illuminated state that we are in when we are subjectively obligated, a state that is a deliverance of a conscience that is at least infallible with respect to subjective obligation. I see difficulties for this approach, but maybe there is some hope, too.

Objection: Because of pragmatic encroachment, the standards for knowledge go up heavily when ten million lives are at stake, and you don’t know that the asteroid is uninhabited when lives depend on this. Thus, you don’t know (1), whereas you do know (2), which restores the crucial action-guiding asymmetry.

Response: I don’t buy pragmatic encroachment. I think the only rational process by which you lose knowledge is getting counterevidence; the stakes going up does not make for counterevidence.

But this is a big discussion in epistemology. I think I can avoid it by supposing (as I expect is true) that you are no more than 99.9999% sure of the risk principles underlying the cautionary judgment in (2). Moreover, the stakes go up for that judgment just as much as they do for (1). Hence, I can suppose that you know neither (1) nor (2), but are merely very confident, and rationally so, of both. This restores the symmetry between (1) and (2).

Monday, March 4, 2019

Isomorphism of inputs

For simplicity, I’ll stick to deterministic systems in this post. Functionalists think that if A is a conscious system, and B is functionally isomorphic to A, then when B receives valid inputs that correspond under the isomorphism to A’s valid inputs, B has exactly the same conscious states as A does.

Crucial to this is the notion of a functional isomorphism. A paradigmatic example would be a computer built of electronics and a hydraulic computer, with the same software. The electronic computer has electrical buttons as inputs and the hydraulic computer uses valves. Perhaps a pressed state of a button has as its isomorph an open valve.

But I think the notion of a functional isomorphism is a dubious one. Start with two electronic systems.

  • System A: Has 16 toggle switches, in two rows of 8, a momentary button, and 9 LEDs. When the button is pressed, the LEDs indicate the sum of the binary numbers encoded in the obvious way by the two rows of toggle switches.

  • System B: Has 25 toggle switches, in three rows, of 8, 8 and 9, respectively, a momentary button, and 9 LEDs. When the momentary button is pressed, the LEDs indicate the positions of the toggle switches in the third row. The toggle switches in the first two rows are not electrically connected to anything.

These two systems seem to be clearly non-isomorphic. The first seems to be an 8-bit adder and the second is just nine directly controlled lights.

But now imagine that the systems come with these instructions:

  • A: 8-bit adder. To use, move the toggle switches in the two rows to correspond to the bits in the two input numbers (down=1, up=0), and press the momentary button. The input state is only validly defined when the momentary button is pressed.

  • B: 8-bit adder. To use, move the toggle switches in the two rows to correspond to the bits in the two input numbers (down=1, up=0), move the toggle switches in the third row to correspond to the bits in the sum of the two input numbers, and press the momentary button. The input state is only validly defined when the momentary button is pressed and the third row of switches contains the sum of the numbers in the first two rows.

There is now an isomorphism between valid inputs of A and B. Thus, the valid input of A:

  • 00000001,00000001,momentary pressed

corresponds to the valid input of B:

  • 00000001,00000001,000000010,momentary pressed.

Moreover, the outputs given the isomorphically corresponding valid inputs match: given the above inputs, both devices show (left to right) seven LEDs off, one LED on, and one LED off.

So it seems that whether A and B count as functionally isomorphic depends on what the instruction manuals specify as valid inputs. If the only valid inputs of B are ones where the third row of inputs corresponds to the sum of the first two, then B is an 8-bit adder. If that restriction is removed, then B is no longer an adder, but something much less interesting.
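A toy model of the two systems may make this dependence on the manual vivid; the modeling choices (bit-tuples, the particular mapping of valid inputs) are mine and purely illustrative:

```python
# Toy models of Systems A and B; rows are tuples of bits, outputs are the 9 LEDs.
# (The momentary button press is left implicit.)
from itertools import product

def output_A(row1, row2):
    """System A: the LEDs display the 9-bit sum of the two 8-bit rows."""
    total = int("".join(map(str, row1)), 2) + int("".join(map(str, row2)), 2)
    return tuple(int(b) for b in format(total, "09b"))

def output_B(row1, row2, row3):
    """System B: the LEDs just echo the third row; rows 1 and 2 are wired to nothing."""
    return tuple(row3)

def corresponding_valid_input_B(row1, row2):
    """Per B's manual, the only valid third-row setting is the sum of rows 1 and 2."""
    return row1, row2, output_A(row1, row2)

# Under the manuals' mapping of valid inputs, the outputs always agree:
for row1, row2 in product(product((0, 1), repeat=8), repeat=2):
    assert output_A(row1, row2) == output_B(*corresponding_valid_input_B(row1, row2))
```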

This point generalizes. Any computational system can be made isomorphic to a much simpler system with a more complex instruction manual.

This is all well and good if we are dealing with computers and software that come with specifications and manuals. But it is disastrous for the functionalist project. For the functionalist project is supposed to be a naturalistic account of our minds, and our brains do not come with specifications or manuals if contemporary naturalism is true. (If we have Aristotelian naturalism instead, we might get something akin to specifications or manuals embedded in our teleology.)

Objection 1: We need only allow those systems where the specification of valid inputs is relatively simple in a language whose linguistic structure corresponds to what is perfectly natural (Lewis) or structural (Sider), or only count as an isomorphism something that can be described in relatively simple ways in such a language.

Response: First, where is the line of the “relatively simple” to be drawn? Precise specification of the position of a toggle switch or water valve in the language of fundamental physics will be very complicated.

Second, System A is a bona fide electronic 8-bit adder. Imagine System A* is a very similar bona fide hydraulic 8-bit adder. It is very likely that a specification of what counts as a depressed toggle switch or an open valve in the language of microphysics is quite complex (just describing electricity or the flow of water in microphysics is really hard). It is also quite likely that the specification of one of these inputs is quite a bit more complex than the specification of the other. Let’s suppose, for simplicity, that A* is the system where the microphysical specification of how valid inputs work is quite a bit more complicated. Intuitively, fluid dynamics is further from the microphysics than electricity. Then the specification of the valid input states of System B may well turn out to be closer in complexity to the specification of the valid input states of System A than that of the hydraulic A*. If so, then counting A* as isomorphic to A would force one to likewise count B as isomorphic to A.

Objection 2: The trick in the argument above was to use the notion of a valid input. But perhaps functional isomorphism needs a correspondence between all inputs, not just valid ones.

Response: This is implausible. Amongst invalid inputs to a human brain is a bullet, which produces a variety of outputs, namely death or a wide variety of forms of damage (and corresponding mutations of other behaviors), depending on the bullet trajectory. It is too stringent a requirement on an isomorph of the human brain that it should have the possibility of being damaged in precisely the ways that a bullet would damage a human brain, with exactly isomorphic mutations of behaviors.

More generally, the variety of invalid inputs is just too great to insist on isomorphism. Think of our electronic and hydraulic case. The kind of output you get when you press a toggle switch too hard, or too lightly, is unlikely to correspond to the kind of output you get when you open a valve too much, or too little, and such correspondence should not be required for isomorphism.

Conclusions: We need a manual or other source of specifications to talk of functional isomorphism. Functionalism, thus, requires a robust notion of function that is incompatible with contemporary naturalism.

Friday, March 1, 2019

Between subjective and objective obligation

I fear that a correct account of the moral life will require both objective and subjective obligations. That’s not too bad. But I’m also afraid that there may be a whole range of hybrid things that we will need to take into account.

Let’s start with clear examples of objective and subjective obligations. If Bob promised Alice to give her $10 but misremembers the promise and instead thinks he promised never to give her any money, then:

  1. Bob is objectively required to give Alice $10.

  2. Bob is subjectively required not to give Alice any money.

These cases come from a mistake about particular fact. There are also cases arising from mistakes about general facts. Helmut is a soldier in the German army in 1944 who knows the war is unjust but mistakenly believes that because he is a soldier, he is morally required to kill enemy combatants. Then:

  3. Helmut is objectively required to refrain from shooting Allied combatants.

  4. Helmut is subjectively required to kill Allied combatants.

But there are interesting cases of mistakes elsewhere in the reasoning that generate curious cases that aren’t neatly classified in the objective/subjective schema.

Consider moral principles about what one should subjectively do in cases of moral risk. For instance, suppose that Carl and his young daughter are stuck on a desert island for the next three months. The island is full of chickens. Carl believes it is 25% likely that chickens have the same rights as humans, and he needs to feed his daughter. His daughter has a mild allergy to the only other protein source on the island: her eyes will sting and her nose run for the next three months if she doesn’t live on chicken. Carl thus thinks that if chickens have the same rights as humans, he is forbidden from feeding chicken to his daughter; but if they don’t, then he is obligated to feed chicken to her.

Carl could now accept one of these two moral risk principles (obviously, these will be derivative from more general principles):

  5. An action that has a 75% probability of being required, and a 25% chance of being forbidden, should always be done.

  6. An action that has a 25% probability of being forbidden with a moral weight on par with the prohibition on multiple homicides and a 75% probability of being required with a moral weight on par with that of preventing one’s child’s mild allergic symptoms for three months should never be done.

Suppose that in fact chickens have very little in the way of rights. Then, probably:

  7. Carl is objectively required to feed chicken to his daughter.

Suppose further that Carl’s evidence leads him to be sure that (5) is true, and hence he concludes that he is required to feed chicken to his daughter. Then:

  8. Carl is subjectively required to feed chicken to his daughter.

This is a subjective requirement: it comes from what Carl thinks about the probabilities of rights, moral principles about what to do in cases of risk, etc. It is independent of the objective obligation in (7), though in this example it agrees with it.

But suppose, as is very plausible, that (5) is false, and that (6) is the right moral principle here. (To see the point, suppose that he sees a large mammal in the woods that would suffice to feed his daughter for three months. If the chance that that mammal is a human being is 25%, that’s too high a risk to take.) Then Carl’s reasoning is mistaken. Instead, given his uncertainty:

  9. Carl is required to refrain from killing chickens.

But what kind of an obligation is (9)? Both (8) and (9) are independent of the objective facts about the rights of chickens and depend on Carl’s beliefs, so it sounds like (9) is subjective, like (8). But (8) has some additional subjectivity in it: (8) is based on Carl’s mistaken belief about what his obligations are in cases of moral risk, while (9) is based on what Carl’s obligations (but of what sort?) “really are” in those cases.

It seems that (9) is some sort of a hybrid objective-subjective obligation.

And the kinds of hybrid obligations can be multiplied. For we could ask about what we should do when we are not sure which principle of deciding in circumstances of moral risk we should adopt. And we could be right or we could be wrong about that.

We could try to deny (9), and say that all we have are (7) and (8). But consider this familiar line of reasoning: Both Bob and Helmut are mistaken about their obligations; they are not mistaken about their subjective obligations; so, there must be some other kinds of obligations they are mistaken about, namely objective ones. Similarly, Carl is mistaken about something. He isn’t mistaken about his subjective obligation to feed chicken. Moreover, his mistake does not rest in a deviation between subjective and objective obligation, as in Bob’s and Helmut’s case, because in fact objectively Carl should feed chicken to his daughter, as in fact (I assume for the sake of the argument) chickens have no rights. So just as we needed to suppose an objective obligation that Bob and Helmut got wrong, we need a hybrid objective-subjective one that Carl got wrong.

Here’s another way to see the problem. Bob thinks he is objectively obligated to give no money to Alice and Helmut thinks he is objectively obligated to kill enemy soldiers. But when Carl applies (5), what does he come to think? He doesn’t come to think that he is objectively required to feed chicken to his daughter. He already thought that this was 75% likely, and (5) does not affect that judgment at all. It seems that just as Bob and Helmut have a belief about something other than mere subjective obligation, Carl does as well, but in his case that’s not objective obligation. So it seems Carl has to be judging, and doing so incorrectly, about some sort of a hybrid obligation.

This makes me really, really want an account of obligation that doesn’t involve two different kinds. But I don’t know a really good one.

Thursday, February 28, 2019

A reading of 1 Corinthians 14:33b-34a

1 Corinthians 14:33b-34a is one of the “hard texts” of the New Testament. The RSV translates it as:

As in all the churches of the saints, the women should keep silence in the churches.

Besides the fact that this is a hard saying, a textual difficulty is that earlier in the letter, at 11:5, Paul has no objection to women prophesying or praying (it seems very likely that praying would be out loud), though it has been suggested that this was outside of a liturgical context. Nor does later Church practice prohibit women from joining in vocal prayer during the liturgy.

I assume that the second "the churches" means "the churches of Corinth", while the first "the churches" refers to the churches more generally. And yesterday at our Department Bible study, I was struck by the fact that the “As” (Greek hōs) that begins the text can be read as “In the manner of”. On that reading, the first sentence of the hard text does not say that women should keep silent in the Corinthian churches. Rather, it says that women should keep silent in the Corinthian churches in the way and to the extent to which they keep silent in the other churches. In other words, women should only speak up in Corinthian liturgies at the points at which women speak up in non-Corinthian liturgies. This is compatible with women having various speaking roles—but only as long as they have these roles in “all the churches of the saints.”

(Note, however, that some versions punctuate differently, and make “As in all the churches of the saints” qualify what came earlier rather than what comes afterwards. My reading requires the RSV’s punctuation. Of course, the original has no punctuation.)

On this reading, the first sentence of the text is an application of a principle of liturgical uniformity between the churches, and Paul could equally well have said the same thing about the men. But the text suggests to me that there was some particular problem, which we can only speculate about, that specifically involved disorderly liturgical participation by Corinthian women, in addition to other problems of disorderly participation that Paul discusses earlier in the chapter.

The difficulty for my reading is the next sentence, however:

For they are not permitted to speak, but should be subordinate, as even the law says. (1 Cor. 14:34b, RSV)

I would want to read this with “speak” restricted to the kinds of speech not found in the other churches. Perhaps in the other churches, there was no “chatting in the pews”, or socializing during the liturgy (Mowczko in a very nice summary of interpretations notes that this is St. John Chrysostom’s interpretation).

Another interpretation is that “the law” here is Roman law or Corinthian custom (though I don’t know that in Koine Greek “nomos” can still cover custom, like it can in classical Greek), so that Paul is reprising a motif of noting that the Corinthians are behaving badly even by their own cultural standards.

I don’t know that my reading is right. I think it is a little bit more natural to read the Greek as having a complete prohibition on women speaking, but my reading seems to be grammatically permissible, and one must balance naturalness of language with consistency in a text (in this case, consistency with 11:5). And in the case of a Biblical text, I also want an interpretation compatible with divine inspiration.

Wednesday, February 27, 2019

White lies

Suppose Bob is known by Alice to be an act utilitarian. Then Alice won’t believe Bob when he asserts p in cases where Alice knows that, by Bob’s lights, even if p is false, the utility of getting Alice to believe p exceeds the utility of Alice’s knowing that p is false. For in such cases an act utilitarian is apt to lie, and so his testimony to p is of little worth.

Such cases are not uncommon in daily life. Alice feels bad about a presentation she just made. Bob praises it. Alice dismisses the praise on the grounds that even if her presentation was bad, getting her to feel better outweighs the utility of her having a correct estimate of the presentation, at least by Bob’s lights.

Praise from an act utilitarian is of little value: instead of being direct evidence for the proposition that one did well, it is direct evidence for the proposition that it would be good for one to believe that one did well. Now, that it would be good for one to believe that one did well is some evidence that one did well, but it is fairly weak evidence given facts about human psychology.
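The evidential point can be put in Bayesian terms, with invented likelihoods: a known act utilitarian praises whether or not the performance was good, so the praise barely moves the hearer’s credence, whereas a known deontologist’s praise moves it a lot.

```python
# Toy Bayesian update on "the presentation was good" upon hearing praise.
# The likelihoods are invented for illustration.
def posterior(prior, p_praise_if_good, p_praise_if_bad):
    evidence = prior * p_praise_if_good + (1 - prior) * p_praise_if_bad
    return prior * p_praise_if_good / evidence

prior = 0.5
# A known act utilitarian praises whether or not the performance was good:
print(posterior(prior, p_praise_if_good=0.95, p_praise_if_bad=0.85))  # about 0.53
# A known deontologist praises only when praise is deserved:
print(posterior(prior, p_praise_if_good=0.95, p_praise_if_bad=0.05))  # 0.95
```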

And so in cases where praise is deserved, the known act utilitarian is not going to promote utility for friends as effectively as a known deontologist, since the deontologist’s praise is going to get a lot more credence. Such cases are not rare: it is quite common for human performances to deserve praise and for the agent to be such that they would benefit from being uplifted by praise. While, on the other hand, in cases where praise is undeserved, the known act utilitarian’s praise does little to uplift the spirit.

These kinds of ordinary interactions are such a large part of our lives that I think a case can be made that just on the basis of these, by the lights of act utilitarianism, an act utilitarian should either hide their act utilitarianism from others or else should convert to some other normative ethical view (say, by self-brainwashing). Since the relevant interactions are often with friends, and it is unlikely one can hide one’s character from one’s friends over a significant period of time, and since doing so is likely to be damaging to one’s character in ways that even the act utilitarian will object to, this seems to be yet another of the cases where act utilitarianism pushes one not to be an act utilitarian.

Such arguments have been made before in other contexts (e.g., worries that the demandingness of act utilitarianism would sap our energies). They are not definitive refutations of act utilitarianism. As Parfit has convincingly argued, it is logically consistent to hold that an ethical theory is true but that one morally should not believe it. But still we get the conclusion that everybody morally should be something other than an act utilitarian. For if act utilitarianism is false, you surely shouldn’t be an act utilitarian. And if it’s true, you shouldn’t, either.

The above, I think, is more generally relevant to any view on which everyday white lies are acceptable. For the only justifications available for white lies are consequentialist ones. But hiding from one’s friends that one is the sort of person who engages in white lies is costly and difficult, whereas letting it be known undercuts the benefits of the white lies, while at the same time removing the benefits of parallel white truths. Thus, we should all reject white lies in our lives, and make it clear that we do so.

Here, I use “white lie” in a sense in which it is a lie. I do not think “Fine” is a lie, white or otherwise, when answering “How are you?” even when you are not fine, because this is not a case of assertion but of a standardized greeting. (There is no inconsistency in an atheist saying “Good-bye”, even though it’s a contraction of “God be with you.”) One way to see this isn’t a lie is to note that while it is generally considered rude (but sometimes required) to suggest that one’s interlocutor lied, there is nothing rude about saying to someone who answered “Fine”: “Are you sure? You look really tired.” At that point, we do move into the assertion category. The friend who persists in the “Fine” answer but isn’t fine now is lying.

Tuesday, February 26, 2019

The reportable and the assertible

I’ve just had a long conversation with a grad student about (inter alia) reporting and asserting. My first thought was that asserting is a special case of reporting, but one can report without asserting. For instance, I might have a graduate assistant write a report on some aspect of the graduate program, and then I could sign and submit that report without reading it. I would then be reporting various things (whether responsibly so would depend on how strong my reasons to trust the student were), but it doesn’t seem right to say that I would be asserting these things.

But then I came to think that just as one can report without asserting, one can assert without reporting. For instance, there is no problem with asserting facts about the future, such as that the sun will rise tomorrow. But I can’t report such facts, even though I know them.

It’s not really a question of time. For (a) I also cannot report that the sun rose a million years ago, and (b) if I were to time-travel to the future, observe the sunrise, and come back, then I could report that the sun will rise tomorrow.

And it’s not a distinction with respect to the quantity of evidence. After all, I can legitimately report what I had for dinner yesterday, but it’s not likely that I have as good evidence about that as I do that the sun will rise tomorrow.

I suspect it’s a distinction as to the kind of evidence that is involved. I am a legally bound reporter of illegal activity on campus. But I can’t appropriately report that a violation of liquor laws occurred in the dorms over the weekend if I know it only on the basis of the general claim that such violations, surely, occur every weekend. The kind of evidence that memory provides is typically appropriate for reporting, while the kind of evidence that induction provides is at least typically not.

Interestingly, although I can’t appropriately report that tomorrow the sun will rise, I can appropriately report that I know that the sun will rise tomorrow. This means that the reportable is not closed under obvious entailment.

Lying and consequences

Suppose Alice never lies while Bob lies to save innocent lives.

Consider circumstances where Alice and Bob know that getting Carl to believe a proposition p would save an innocent life, and suppose that Alice and Bob know whether p is true.

In some cases of this sort, Bob is likely to do better with respect to innocent lives:

  1. p is false and Carl doesn’t know Alice and Bob’s character.

  2. p is false and Carl doesn’t know that Alice and Bob know that getting Carl to believe p would save an innocent life.

For in cases 1 and 2, Bob is likely to succeed in getting Carl to believe p, while Alice is not.

But in one family of cases, Alice is likely to do better:

  3. p is true and Carl knows Alice and Bob’s character and knows that they believe that getting Carl to believe p would save an innocent life.

For in these cases, Carl wouldn’t be likely to believe Bob with regard to p, as he would know that Bob would affirm p whether p was true or false, as Bob is the sort of person who lies to save innocent lives, while Carl would surely believe Alice.

Are cases of type (1) and (2) more or less common than cases of type (3)?

I suppose standard cases where an aggressor at the door is asking whether a prospective victim is in the house may fall under category (1) when the aggressor knows that they are known to be an aggressor and will fall under category (2) when the aggressor doesn’t know that they are known to be an aggressor (Korsgaard discusses this case in a paper on Kant on lying).

On the other hand, category (3) includes some death penalty cases where (a) the life of the accused depends on some true testimony being believed and (b) the testifier is someone likely to think the accused to be innocent independently of the testimony (say, because the accused is a friend). For in such a case, Bob would just give the testimony whether it’s true or false, while Alice would only give it if it were true (or at least she thought it was), and so Bob’s testimony carries no weight while Alice’s does.

Category (3) also includes some cases where an aggressor at the door knows the character of their interlocutor in the house, and knows that they are known to be an aggressor, and where the prospective victim is not in the house, but a search of the house would reveal other prospective victims. For instance, suppose a Gestapo officer is asking whether there are Jews in the house, which there aren’t, but there are Roma refugees in the house. The Gestapo officer may know that Bob would say there aren’t any Jews even if there were, and so he searches the house and finds the Roma if Bob is at the door; but he believes Alice, and doesn’t search, and the Roma survive.

Roughly, the question of whether Alice or Bob’s character is better consequentialistically comes down to the question whether it is more useful, with respect to innocent life, to be more believable and always honest (Alice) or to be less believable and able to lie (Bob).

More on grounding of universals

The standard First Order Logic translation of “All As are Bs” is:

  1. ∀x(A(x)→B(x)).

Suppose we accept this translation and we further accept the principle:

  2. Universal facts are always partially grounded in their instances.

Then we have the oddity that the fact that all ravens are black seems to be partially grounded in my garbage can being black. Let R(x) and B(x) say that x is a raven and black, respectively, and let g be my garbage can. Then an instance of ∀x(R(x)→B(x)) is R(g)→B(g), and the latter material conditional is definable as ¬R(g)∨B(g). But a disjunction is grounded in its true disjuncts, and hence this one will be grounded in B(g) (as well as in ¬R(g)).
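
Spelled out as a chain (my compact restatement, not the post’s own notation, reading “≺” as “partially grounds” and assuming the transitivity of partial grounding):

```latex
B(g) \;\prec\; \lnot R(g) \lor B(g) \;=\; R(g) \to B(g) \;\prec\; \forall x\,(R(x) \to B(x))
```

So, by transitivity, the blackness of my garbage can ends up among the partial grounds of the fact that all ravens are black.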

There are three things one might dispute here: the translation (1), the grounding principle (2), and the claim that a material conditional is grounded in its consequent whenever that consequent is true. Of these, I am most suspicious of (1), the translation of the two-place universal quantifier, and of the grounding principle (2).

Friday, February 22, 2019

Grounding of universals and partial grounding

It is common to claim that:

  1. The fact that everything is F is partially grounded in the fact that a1 is F and in the fact that a2 is F and so on, for all the objects ai in the world.

But this can’t be right if partial grounds are parts of full grounds. For suppose you live in a world with only two objects, a and b, which are both sapient. Then everything is sapient, and by (1) it follows that:

  2. The fact that everything is sapient is partially grounded in a being sapient and in b being sapient.

But suppose partial grounds are parts of full grounds. The facts that a is sapient and that b is sapient are not a full ground of the fact that everything is sapient, because the full grounds of a fact entail that fact, and a’s being sapient and b’s being sapient do not entail that everything is sapient (since it’s possible for a to be sapient and b to be sapient and yet for there to exist a c that is not).

So we need to be able to add something to the two particular sapience facts to get full grounds. The most obvious thing to add is:

  3. Everything is a or b.

Clearly fact (3) together with the facts that a is sapient and b is sapient will entail that everything is sapient.
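
In symbols, with S for sapience (just a restatement of the entailment claim):

```latex
\{\, S(a),\; S(b),\; \forall x\,(x = a \lor x = b) \,\} \;\models\; \forall x\, S(x)
```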

But applying (1) to (3), we get:

  4. Fact (3) is partially grounded in the facts that a is a or b and that b is a or b.

But, once again, if partial grounds are parts of full grounds, then we need a fact to add to the two facts on the right hand side of the grounding relation in (4) such that together these facts will entail (3). But the obvious candidate to add is:

  5. Everything is a or b.

And that yields circularity.

So it seems that either we should reject the particular-grounds-universal principle (1) or we should reject the principle that partial grounds are parts of full grounds.

Here is a reason for the latter move. Maybe we should say that God’s creating me is partially grounded in God. But that’s merely a partial grounding, since God’s existence doesn’t entail that God created me. And it seems that the only good candidate for a further fact to be added to the grounds so as to entail that God created me would be my existence. (One might try to add the fact that God willed that I exist. But by divine simplicity, that fact has to be partly constituted by my existence or the like.) But my existence is grounded in God’s creating me, so that would be viciously circular.

Are desires really different from wishes?

It is tempting to conflate what is worth desiring with what is worth pursuing. But there seem to be cases where things are worth desiring but not worth pursuing:

  1. Having a surprising good happen to you completely gratuitously—i.e., without your having done anything to invite it—seems worth desiring but the pursuit of it doesn’t seem to make sense.

  2. If I have published a paper claiming a certain mathematical result, and I have come to realize that the result is false, it seems to make perfect sense to desire that the result be true, but it makes no sense to pursue that.

The standard response to cases like 1 and 2 is to distinguish wishes from desires, and say that it makes sense to wish for things that it makes no sense to pursue, but it does not make sense to desire such things.

But consider this. Suppose that in case 2 I came to be convinced that God has power over mathematics, and that if I pray that the result be true, God might make it true. Then the affective state I have in case 2 would motivate me to pray. But the nature of the affective state need not have changed upon my coming to think that God has power over mathematics. Thus, either (a) I would be motivated to pray by a mere wish, or else (b) wishes and desires are the same thing. But the wish/desire distinction does not fit with (a), which leaves (b).

I suppose one could claim that a desire just is a wish plus a belief that the object is attainable. But that makes desires too gerrymandered.

Wednesday, February 20, 2019

Three places for beauty in representational art

There seem to be three senses in which beauty can be found in a piece of representational art:

  1. The piece represents something as beautiful.

  2. The piece in and of itself is beautiful.

  3. The task of representing is performed beautifully.

One can have any one of the three without the others. For instance, the one-line poem “The kitty was pretty” satisfies 1 but fails 2 and 3. Though, to be precise, I think sense 1 is not a real case of something being beautiful, but only of something being represented as beautiful. The kitty could be ugly and yet described as pretty.

I think 3 is particularly interesting. It opens up the way for works of art that are in themselves not beautiful and that do not represent beauty, but which do a beautiful job of representing their objects (Sartwell says that Picasso’s Guernica may be beautiful; I think my aspect 3 of the beauty of representational art may explain this). Note that “beautiful” here does not merely mean “accurate”, as the case of my one-line poem shows, since that poem may represent the beauty of a cat with perfect accuracy, but there is very little of the beautiful about how it accomplishes this.

Fundamental bearers of aesthetic properties

I am finding myself frustrated trying to figure out whether the fundamental bearers of aesthetic properties are mental states or things out in the world. When I think about the fact that there does not seem to be any significant difference between the beauty of music that one actually listens to with one’s ears versus “music” that is directly piped to the auditory center of the brain, that makes me think that the fundamental bearers of aesthetic properties are mental states.

But on the other hand, when I think about the beauty of character exhibited by a Mother Teresa, I find it hard to think that it is my mental states—say, my thoughts about Mother Teresa—that bear the fundamental aesthetic properties. If I thought that it was my mental states that are the bearers of aesthetic properties, then I would think that a fictional Mother Teresa is just as beautiful as a real one. But it seems to me that a part of the beauty of the real Mother Teresa is that she is real.

Perhaps the fundamental bearers of aesthetic properties vary. For music and film, perhaps, the fundamental bearers are mental states: the experiences one paradigmatically has when listening and viewing (but which one could also have by direct brain input). For the characters of real people, perhaps, the fundamental bearers are the people themselves or their characters. For the characters of fictional people, perhaps, the fundamental bearers are mentally constituted (in the mind of the author or that of the audience or both).

Maybe the beauty of a real person is a different thing from the beauty of a fictional character. This kind of makes sense. For we might imagine an author who creates a beautiful work of literature portraying a nasty person: the nasty person qua fictional character is beautiful, but would have been ugly in real life, perhaps.

But I hate views on which we have such a pluralism of fundamental bearers of a property.

Tuesday, February 19, 2019

Conciliationism and natural law epistemology

Suppose we have a group of perfect Bayesian agents with the same evidence who nonetheless disagree. By definition of “perfect Bayesian agent”, the disagreement must be rooted in differences in priors between these peers. Here is a natural-sounding recipe for conciliating their disagreement: the agents go back to their priors, replace them with the arithmetic average of the priors within the group, and then re-update on all the evidence that they had previously received. (And in so doing, they lose their status as perfect Bayesian agents, since this procedure is not a Bayesian update.)

Since the average of consistent probability functions is a consistent probability function, we maintain consistency. Moreover, the recipe is a conciliation in the following sense: whenever the agents previously all agreed on some posterior, they still agree on it after the procedure, and with the same credence as before. Whenever the agents disagreed on something, they now agree, and their new credence is strictly between the lowest and highest posteriors that the group assigned prior to conciliation.
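
Here is a quick check of both claims, a sketch under the assumption that every agent assigns positive probability to the shared evidence E (the weights wᵢ are just shorthand introduced here):

```latex
\bar{P} = \frac{1}{n}\sum_{i=1}^{n} P_i, \qquad
\bar{P}(A \mid E)
  = \frac{\sum_i P_i(A \cap E)}{\sum_i P_i(E)}
  = \sum_i w_i\, P_i(A \mid E),
  \quad\text{where } w_i = \frac{P_i(E)}{\sum_j P_j(E)}.
```

Since the wᵢ are nonnegative and sum to 1, the post-conciliation credence in A is a weighted average of the old posteriors: it equals the common value when the agents all agreed, and lies strictly between the lowest and highest posteriors when they did not.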

Here is a theory that can give a justification for this natural-sounding procedure. Start with natural law Bayesianism, an Aristotelian theory on which human nature sets constraints on what priors count as natural to human beings. Thus, just as it is unnatural for a human being to be ten feet tall, it is unnatural for a human being to have a prior of 10⁻¹⁰⁰ for there being mathematically elegant laws of nature. And just as there is a range of heights that is natural for a mature human being, there is a range of priors that is natural for the proposition that there are mathematically elegant laws.

Aristotelian natures, however, are connected with the actual propensities of the beings that have them. Thus, humans have a propensity to develop a natural height. Because of this propensity, an average height is likely to be a natural height. More generally, for any numerical attribute governed by a nature of kind K, the average value of that attribute amongst the Ks is likely to be within the natural range. Likely, but not certain. It is possible, for instance, to have a species whose average weight is too high or too low. But it’s unlikely.

Consequently, we would expect that if we average the values of the prior for a given proposition q over the human population, the average would be within the natural range for that prior. Moreover, as the size of a group increases, we expect the average value of an attribute over the group to approach the average value the attribute has in the full population. Then, if I am a member of the group of disagreeing evidence-sharing Bayesians, it is more likely that the average of the group members’ priors for q lies within the natural human range than that my own prior for q does. It is more likely that I have an unnatural height or weight than that the average in a larger group falls outside the natural range for height or weight.

Thus, the prior-averaging recipe is likely to replace priors that are defectively outside the normal human range with priors within the normal human range. And that’s to the good rationally speaking, because on a natural law epistemology, the rational way for humans to reason is the same as the normal way for humans to reason.

It’s an interesting question how this procedure compares to the procedure of simply averaging the posteriors. Philosophically, there does not seem to be a good justification of the latter. It turns out, however, that typically the two procedures give the same result. For instance, I had my computer randomly generate 100,000 pairs of four-point prior probability spaces, and compare the result of prior- to posterior-averaging. The average of the absolute value of the difference in the outputs was 0.028. So the intuitive, but philosophically unjustified, averaging of posteriors is close to what I think is the more principled averaging of priors.
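
For what it’s worth, here is a minimal sketch of that sort of comparison (not the original script; the particular choice of hypothesis A and evidence E below is my own, made only for illustration):

```python
# A sketch comparing conciliation by prior-averaging (then updating) with the
# naive averaging of posteriors, over random four-point probability spaces.
import random

def random_prior(n=4):
    """Random probability distribution over n points."""
    weights = [random.random() for _ in range(n)]
    total = sum(weights)
    return [w / total for w in weights]

def conditional(prior, A, E):
    """P(A | E) for events A and E given as sets of point indices."""
    p_E = sum(prior[i] for i in E)
    p_AE = sum(prior[i] for i in A & E)
    return p_AE / p_E

A, E = {0, 1}, {1, 2, 3}   # illustrative events (my assumption, not the post's)
diffs = []
for _ in range(100_000):
    p1, p2 = random_prior(), random_prior()
    # conciliation by averaging the priors and then updating on E
    avg_prior = [(x + y) / 2 for x, y in zip(p1, p2)]
    via_priors = conditional(avg_prior, A, E)
    # naive conciliation by averaging the posteriors directly
    via_posteriors = (conditional(p1, A, E) + conditional(p2, A, E)) / 2
    diffs.append(abs(via_priors - via_posteriors))

print(sum(diffs) / len(diffs))   # typically a small number
```

The exact average difference depends on the choice of A and E, but on setups like this it comes out small, in line with the point that posterior-averaging approximates the more principled prior-averaging.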

The procedure also has an obvious generalization from the case where the agents share the same evidence to the case where they do not. What’s needed is for the agents to make a collective list of all their evidence, replace their priors by averaged priors, and then update on all the items in the collective list.

Monday, February 18, 2019

Musical beauty and virtual music

We have beautiful music at home on a hard drive. But wait: the arrangement of magnetic dipoles on a disc is not musically beautiful! So it seems inaccurate to say that there is music on the hard drive. Rather, the computer, hard drive, speakers and the orientations of magnetic dipoles jointly form a device that can produce the sound of beautiful music on demand.

One day, however, I expect many people will have direct brain-computer interfaces. When they “listen to music”, no sounds will be emitted (other than the quiet hum of computer cooling fans, say). Yet I do not think this will significantly change anything of aesthetic significance. Thus, the production of musical sounds seems accidental to the enjoyment of music. Indeed, we can imagine a world where neither composers nor performers nor audiences produce or consume any relevant sounds.

Perhaps, then, we should say that what is of aesthetic significance about my computer, with its arrangements of magnetic dipoles, is that it is a device that can produce musical experiences.

But where does the musical beauty lie? Is it that the computer (or the arrangement of magnetic dipoles on its drive) is musically beautiful? That seems wrong: it seems to be the wrong kind of thing to be musically beautiful. Is it the musical experiences that are musically beautiful? But that seems wrong, too. After all, a musical performance—of the ordinary, audible sort—can be musically beautiful, and yet it too gives rise to a musical experience, and surely we don’t want to say that there are two things that are musically beautiful there.

Perhaps a Platonic answer works well here: maybe it is some Platonic entities that are truly musically beautiful, and sometimes their beauty is experienced in and through an audible performance and sometimes directly in the brain?

Another possibility I am drawn to is that there is a property that isn’t exactly beauty, call it beauty*, which is had by the musical experiences in the mind. And it is this property that is the aesthetically valuable one.

And of course what goes for musical beauty goes for visual beauty, etc.

Friday, February 15, 2019

Natural law: Between objectivism and subjectivism

Aristotelian natural law approaches provide an attractive middle road between objectivist and subjectivist answers to various normative questions: the answers to the questions are relative to the kind of entity that they concern, but not to the particular entity.

For instance, a natural law approach to aesthetics would not make the claim that there is one objective beauty for humans, klingons, vulcans and angels. But it would make the absolutist claim that there is one beauty for Alice, Bob, Carl and Davita, as long as they are all humans. The natural lawyer aesthetician could take a subjectivist account of beauty in terms of, say, disinterested pleasure, but give it a species-relative normative twist: the beautiful to members of kind K (say, humans or klingons) is what should give members of kind K disinterested pleasure. The human who fails to find that pleasure in a Monet painting suffers from a defect, but a klingon might suffer from a defect if she found pleasure in the Monet.

Supervenience and omniscience

Problem: It seems that if God necessarily exists, then the moral automatically supervenes on the non-moral. For, any two worlds that differ in moral facts also differ in what God believes about moral facts, and presumably belief facts are non-moral. This trivializes the mechanism of supervenience for theists.

Potential Solution: Divine simplicity makes God’s beliefs about God-external facts be externally constituted. Thus, a part of what makes it true that God believes that there are sheep are the sheep. If so, then perhaps a part of what makes it true that God believes a moral fact is that very moral fact. Thus, God’s beliefs about moral facts are partly constituted by the moral facts, and hence are not themselves non-moral.

Wednesday, February 13, 2019

Anti-reductionism and supervenience

In the philosophy of mind, those who take anti-reductionism really seriously will also reject the supervenience of the mental on the non-mental. After all, if a mental property does not reduce to the non-mental, we should be able to apply a rearrangement principle to fix the non-mental properties but change the mental one, much as one can fix the shape of an object but change its electrical charge, precisely because charge doesn’t reduce to shape or shape to charge. There might be some necessary connections, of course. Perhaps some shapes are incompatible with some charges, and perhaps similarly some mental states are incompatible with some physical arrangement. But it would be surprising, in the absence of a reduction, if fixing physical arrangement were to fix the mental state.

Yet it seems that in metaethics, even the staunchest anti-reductionists tend to want to preserve the supervenience of the normative on the non-normative. That is surprising, I think. After all, the same kind of rearrangement reasoning should apply if the normative properties do not reduce to the non-normative ones or vice versa: we should be able to fix the non-normative ones and change the normative ones at least to some degree.

Here’s something in the vicinity I’ve just been thinking about. Suppose that A-type properties supervene on B-type properties, and consider an A-type property Q. Then consider the property QB of being such that the nexus of all B-type properties is logically compatible with having Q. For any Q and B, having QB is necessary for having Q. But if Q supervenes on B-type properties, then having QB is also sufficient for having Q. Moreover, QB seems to be a B-type property in our paradigmatic cases: if B is the physical properties, then QB is a physical property, and if B is the non-normative properties, then QB is a non-normative property. (Interestingly, it is a physical or non-normative property defined in terms of mental or normative properties.)
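
One way to regiment the construction (my notation, with the text’s QB written here with a subscript, and “same B-profile” abbreviating sharing all B-type properties, reading logical compatibility as compossibility):

```latex
Q_B(x) \;:=\; \Diamond\,\exists y\,\big(\text{$y$ has the same $B$-profile as $x$} \,\wedge\, Q(y)\big)
```

Having Q trivially suffices for Q_B (take y = x); and given the supervenience of the A-type properties on the B-type ones, anything whose B-profile is compatible with Q must actually have Q, so Q_B also suffices for Q, which is the point in the text.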

But now isn’t it just as weird for a staunch anti-reductionist to think that there is a non-normative property that is necessary and sufficient for, say, being obligated to dance as it is for a staunch anti-reductionist to think there is a physical property that is necessary and sufficient for feeling pain?

Tuesday, February 12, 2019

Supervenience and natural law

The B-properties supervene on the A-properties provided that any two possible worlds with the same A-properties have the same B-properties.

It is a widely accepted constraint in metaethics that normative properties supervene on non-normative ones. Does natural law meet the constraint?

As I read natural law, the right action is one that goes along with the teleological properties of the will. Teleological properties, in turn, are normative in nature and (sometimes) fundamental. As far as I can see, it is possible to have zombie-like phenomena, where two substances look and behave in exactly the same way but have different teleological properties. Thus, one could have animals that are physically indistinguishable from our world’s sheep, and in particular have four legs, but, unlike the sheep, have the property of being normally six-legged. In other words, they would all be defective, in lacking two of their six legs.

This suggests that natural law theories depend on a metaphysics that rejects the supervenience of the normative. But I think that is too quick. For in an Aristotelian metaphysics, the teleological properties are not purely teleological. A sheep’s being naturally four-legged simultaneously explains the normative fact that a sheep should have four legs and the non-normative statistical fact that most sheep in fact have four legs. For the teleological structures are not just normative but also efficiently causal: they efficiently guide the embryonic development of the sheep, say.

In fact, on the Koons-Pruss reading of teleology, the teleological properties just are causal powers. The causal power to ϕ in circumstances C is teleological and dispositional: it is both a teleological directedness towards ϕing in C and a disposition to ϕ in C. And there is no metaphysical way of separating these aspects, as they are both features of the very same property.

Our naturally-six-but-actually-four-legged quasi-sheep, then, would differ from the actual world’s sheep in not having the same dispositions to develop quadrupedality. This seems to save supervenience, by exhibiting a difference in non-normative properties between the sheep and the quasi-sheep.

But I think it doesn’t actually save it. For the disposition to develop four (or six) legs is the same property as the teleological directedness to quadrupedality in sheep. And this property is a normative property, though not just normative. We might say this: the sheep and the quasi-sheep differ in a non-normative respect, but they do not differ in a non-normative property. For the disposition is a normative property.

Perhaps this suggests that the natural lawyer should weaken the supervenience claim and talk of differences in features or respects rather than properties. That would allow one to save a version of supervenience. But notice that if we do that, we preserve supervenience but not the intuition behind it. For the intuition behind the supervenience of the normative on the non-normative is that the normative is explained by the non-normative. But on our Aristotelian metaphysics, it is the teleological properties that explain the actual non-normative behavior of things.

Thursday, February 7, 2019

Properties, relations and functions

Many philosophical discussions presuppose a picture of reality on which, fundamentally, there are objects which have properties and stand in relations. But if we look to how science describes the world, it might be more natural to bring (partial) functions in at the ground level.

Objects have attributes like mass, momentum, charge, DNA sequence, size and shape. These attributes associate values, like 3.4 kg, 15 kg·m/s north-east, 5 C, TTCGAAAAG, 5 m and sphericity, with the objects. The usual philosophical way of modeling such attributes is through the mechanism of determinables and determinates. Thus, an object may have the determinable property of having mass and its determinate having mass 3.4 kg. We then have a metaphysical law that prohibits objects from having multiple same-level determinates of the same determinable.

A special challenge arises from the numerical or vector structure of many of the values of the attributes. I suppose what we would say is that the set of lowest-level determinates of a determinable “naturally” has the mathematical structure of a subset of a complete ordered field (i.e., of something isomorphic to the set of real numbers) or of a vector space over such a field, so that momenta can be added, masses can be multiplied, etc. There is a lot of duplication here, however: there is one addition operator on the space of lowest-level momentum determinates and another addition operator on the space of lowest-level position determinates in the Newtonian picture. Moreover, for science to work, we need to be able to combine the values of various attributes: we need to be able to divide products of masses by squares of distances to make sense of Newton’s law of gravitation. But it doesn’t seem to make sense to divide mass properties, or their products, by distance properties, or their squares. The operations themselves would have to be modeled as higher-level relations, so that momentum addition would be modeled as a ternary relation between momenta, and there would be parallel algebraic laws for momentum addition and position addition. All this can be done, one operation at a time, but it’s not very elegant.

Wouldn’t it be more elegant if instead we thought of the attributes as partial functions? Thus, mass would be a partial function from objects to the positive real numbers (using a natural unit system), and both Newtonian position and momentum would be partial functions from objects to Euclidean three-dimensional space. One doesn’t need separate operations for the addition of positions and of momenta any more. Moreover, one doesn’t need to model addition as a ternary relation but as a function of two arguments.
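
Here is a minimal sketch of the picture (my illustration, not a proposal from the post): attributes are partial functions from objects to mathematical values, and because the values are ordinary numbers and vectors, combining different attributes, as in Newton’s law of gravitation, is straightforward.

```python
# Attributes as partial functions from objects to values, modeled as dicts:
# an object missing from a dict simply has no value for that attribute.
from math import dist

G = 6.674e-11  # gravitational constant, SI units

mass = {"earth": 5.972e24, "moon": 7.348e22}          # kg
position = {"earth": (0.0, 0.0, 0.0),
            "moon": (3.844e8, 0.0, 0.0)}              # metres
# Momentum would take values in the same three-dimensional space as position,
# so one and the same vector addition serves both attributes.

def gravitational_force(a, b):
    """Magnitude of Newtonian gravity between a and b, where both attributes are defined."""
    r = dist(position[a], position[b])
    return G * mass[a] * mass[b] / r**2

print(gravitational_force("earth", "moon"))  # roughly 2e20 N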

There is a second reason to admit functions as first-class citizens into our metaphysics, and this reason comes from intuition. Properties make intuitive sense. But I think there is something intuitively metaphysically puzzling about relations that are not merely to be analyzed into a property of a plurality (such as being arranged in a ball, or having a total mass of 5kg), but where the order of the relata matters. I think we can make sense of binary non-symmetric relations in terms of the analogy of agents and patients: x does something to y (e.g. causes it). But ternary relations that don’t reduce to a property of a plurality, but where order matters, seem puzzling. There are two main technical ways to solve this. One is to reduce such relations to properties of tuples, where tuples are special abstract objects formed from concrete objects. The other is Josh Rasmussen’s introduction of structured mereological wholes. Both are clever, but they do complicate the ontology.

But unary partial functions—i.e., unary attributes—are all we need to reduce both properties and relations of arbitrary finite arity. And unary attributes like mass and velocity make perfect intuitive sense.

First, properties can simply be reduced to partial functions to some set with only one object (say, the number “1” or the truth-value “true” or the empty partial function): the property is had by an object provided that the object is in the domain of the partial function.

Second, n-ary relations can be reduced to n-ary partial functions in exactly the same way: x1, ..., xn stand in the relation if and only if the n-tuple (x1, ..., xn) lies in the domain of the partial function.

Third, n-ary partial functions for finite n > 1 can be reduced to unary partial functions by currying. For instance, a binary partial function f can be modeled as a unary function g that assigns to each object x (or, better, each object x such that f(x, y) is defined for some y) a unary function g(x) such that (g(x))(y)=f(x, y) precisely whenever the latter is defined. Generalizing this lets one reduce n-ary partial functions to (n − 1)-ary ones, and so on down to unary ones.
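
A small sketch of these reductions (my illustration): a binary relation is treated as a partial function into a one-element set, and currying turns it into a unary function whose values are themselves unary partial functions.

```python
# Reducing a binary relation to unary partial functions via currying.
def curry(defined_pairs):
    """Treat a binary relation (a set of pairs, i.e., a partial function into {1})
    as a unary function g with (g(x))(y) == 1 exactly when (x, y) is in the relation."""
    def g(x):
        def gx(y):
            if (x, y) not in defined_pairs:
                raise KeyError((x, y))   # undefined: the pair is outside the domain
            return 1
        return gx
    return g

# "x is taller than y", given by the set of pairs on which the partial function is defined
taller_pairs = {("alice", "bob"), ("bob", "carl")}
taller = curry(taller_pairs)

print(taller("alice")("bob"))    # 1: the relation holds of (alice, bob)
# taller("carl")("alice") would raise KeyError: the relation does not hold there.
# (As the post notes, one could also restrict g itself to objects that occur as the
#  first coordinate of some defined pair, rather than leaving g total as it is here.)
```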

There is, however, an important possible hitch. It could turn out that a property/relation ontology is more easily amenable to nominalist reduction than a function ontology. If so, then for those of us like me who are suspicious of Platonism, this could be a decisive consideration in favor of the more traditional approach.

Moreover, some people might be suspicious of the idea that purely mathematical objects, like numbers, are so intimately involved in the real world. After all, such involvement does bring up the Benacerraf problem. But maybe we should say: it solves it! What are the genuine real numbers? They are the values that charge and mass can take. And the genuine natural numbers are then the naturals amongst the genuine reals.

Friday, February 1, 2019

God, probabilities and causal propensities

Suppose a poor and good person is forced to flip a fair and indeterministic coin in circumstances where heads means utter ruin and tails means financial redemption. If either Molinism or Thomism is true, we would expect that, even without taking into account miracles:

  1. P(H)<P(T).

After all, God is good, and so he is more likely to try to get the good outcome for the person. (Of course, there are other considerations involved, so the boost in probability in favor of tails may be small.)

The Molinist can give this story. God knows how the coin would come out in various circumstances. He is more likely to ensure the occurrence of circumstances in which the subjunctive conditionals say that tails would come up. The Thomist, on the other hand, will say that God’s primary causation determines what effect the secondary creaturely causation has, while at the same time ensuring that the secondary causation is genuinely doing its causal job.

But given (1), how can we say that the coin is fair? Here is a possibility. The probabilities in (1) take God’s dispositions into account. But we can also look simply at the causal propensities of the coin. The causal propensities of the coin are equibalanced between heads and tails. In addition to the probabilities in (1), which take everything including God into account, we can talk of coin-grounded causal chances, which are basically determined by the ratios of strength in the causal propensities. And the coin-grounded causal chances are 1/2 for heads and 1/2 for tails. But given Molinism or Thomism, these chances are not wholly determinative of the probabilities and the frequencies in repeat experiments, since the latter need to take into account the skewing due to God’s preference for the good.

So we get two sets of probabilities: The all-things-considered probabilities P that take God into account and that yield (1) and the creatures-only-considered probabilities Pc on which:

  2. Pc(H)=Pc(T)=1/2.

Here, however, is something that I think is a little troubling about both the Molinist and Thomist lines. The creatures-only-considered probabilities are obviously close to the observed frequencies. Why? I think the Molinist and Thomist have to say this: they are close because God chooses to act in such ways that the actual frequencies are approximately proportional to the strengths of the causal propensities that Pc is based on. But then the frequencies of coin toss outcomes are not directly due to the causal propensities of the coin; they match them only because God chooses to make the frequencies match. This doesn’t seem right, and it is a reason why I want to adopt neither Molinism nor Thomism but a version of mere foreknowledge.

Thursday, January 31, 2019

Can our cells be substances?

A standard Aristotelian principle says:

  1. No substance is a part of another substance.

I was just struck by how (1) says less than it seems to. One interesting philosophy of biology question is whether our symbiont bacteria are part of us. But:

  2. All bacteria are substances.

  3. We are substances.

  4. We are not bacteria.

  5. So no bacteria are parts of us. (By 1-4)

This argument is fine as far as it goes. But there is a metaphysical possibility that its conclusion leaves open which it is easy to forget.

Let’s grant that our symbiont bacteria are not a part of us. But perhaps their matter is a part of us. In other words, maybe the bacteria are matter-form composites just as we are, but their matter is a part of our matter, whereas their form is not a part of us at all, and hence they as wholes are not parts of us. They merely overlap us in matter.

And the point can be generalized. Before I noticed this point today, I used to think that the Aristotelian commitment to (1) requires us to deny that our cells are substances. But (1) leaves open the possibility that our cells are substances whose matter is a part of us, while the cells as wholes are not parts of us.

I don’t really want to say this. I would like to supplement (1) with this principle, which has generally been a large part of my reason for affirming (1):

  6. The matter of one substance is never a part of another substance.

My reason for accepting (6) has been that the identity of the matter is grounded in its substance, and if the matter had its identity doubly grounded, it wouldn’t be one thing, but two, and so it wouldn’t be the same matter in each substance.

In fact, (6) is a special case of a stronger claim:

  7. No two substances have any matter in common.

Here is an argument that establishes (7) directly. Start with this plausible thesis:

  8. No two material substances have all of their matter in common.

But now if (7) is false, then it should be possible to have two plants that have some matter in common. We could further imagine that the non-common matter perishes, but both plants survive. If so, then we would have a violation of (8). So, it’s plausible that if (7) is false, so is (8).

Here is a different line of thought in favor of (7):

  9. Matter is grounded in the accidents of a substance.

  10. Two substances cannot have any accident in common.

  11. If x is an entity grounded in a and y is an entity grounded in b and a ≠ b, then x ≠ y.

  12. So, two substances cannot have any matter in common.

So, all in all, while (1) leaves open the possibility of our cells and bacteria being substances and yet having their matter be a part of us, we have good reason to deny this possibility on other grounds.

It would be very neat if one could derive (1) from (7). From (7) we do directly get:

  13. No substance with matter is a part of another substance.

But it would take more argument to drop the “with matter” qualifier.

Can free will be grounded in quantum mechanics?

Robert Kane famously physicalistically grounds free will in quantum events in the brain. Free choice, on Kane’s view, is constituted by rational deliberation involving conflicting motivational structures with a resolution by an indeterministic causal process—a causal process that Kane thinks is in fact physical.

Here is a problem. Suppose Kane’s view is true. But now imagine a possible world with a physics that is like our quantum physics, but where panpsychism is true. The particles are conscious, and some of them engage in libertarian free choices, with chances of choices exactly matching up with what quantum mechanics predicts. The world still has people with brains, in addition to particle-sized people. The people with brains have particles that are persons in their brains. Moreover, it turns out that those indeterministic causal processes in the brains that constitute free choice are in fact the free actions of the particle-sized people in the brains.

All of Kane’s conditions for freedom will be satisfied by the people with brains. For the only relevant difference is that the quantum-style causal processes are choice processes (of the particle people). But these processes are just as indeterministic as in our world, and it’s the indeterminism that matters.

But the actions of the brain possessors in that world wouldn’t be free, because they would be under the control of the particle people in the brains. We could even suppose, if we like, that the particle people know about brains and want to direct the big people in some particular direction.

One could add to Kane’s account the further condition that the indeterministic causal processes in the brain are not constituted by the free choices of another person. But this seems ad hoc, and it is not clear why this one particular way for the indeterministic causal processes to be constituted is forbidden while any other way for them to be constituted is acceptable. The details of how quantum indeterministic processes work, as long as they are truly indeterministic and follow the quantum statistics, should not matter for free will.

This problem applies to any physicalist account on which free choices are grounded in quantum processes.

There is a way out of the problem. One could accept a pair of Aristotelian dicta:

  1. All persons are substances.

  2. No substance is a part of another substance.

But it is not clear whether the acceptance of these dicta is plausible apart from the fuller Aristotelian metaphysics which holds that all substances are partially made of non-physical forms. In other words, it is not clear that acceptance of (1) and (2) can be well motivated within a physicalist metaphysics.