1. If God exists, then all fundamental entities are intrinsically good.
2. Pain qualia are not intrinsically good.
3. So, pain qualia are not fundamental entities.
This argument is valid. I am not sure if premise (2) is true, though.
Here is a simple reductive account of right and wrong that now seems to me to be obviously correct:
(1) An action is wrong if and only if it is a bad action, and right if and only if it is not a bad action.
Think, after all, how easily we move between saying that someone acted badly and that someone acted wrongly.
If (1) is a correct reduction, then we can reduce facts about right and wrong to facts about the value of particular kinds of things, namely actions.
By the way, if we accept (1), then consequentialism is equivalent to the following thesis:
An action is non-instrumentally good if and only if it is on balance best.
But it is quite strange to think that there would be an entity that is non-instrumentally good if and only if it is on balance best.
Even though nobody thinks Strong AI has been achieved, we attribute beliefs to computer systems and software:
Microsoft Word thinks that I mistyped that word.
Google knows where I’ve been shopping.
The attribution is communicatively useful and natural, but is not literal.
It seems to me, however, that the difference in kind between the beliefs of computers and the beliefs of persons is no greater than the difference in kind between the beliefs of groups and the beliefs of persons.
Given this, the attribution of beliefs to groups should also not be taken to be literal.
In a performative, a social fact is instituted by a statement that simultaneously announces it:
I hereby apply for the position.
I dub this ship the Star of the South.
I promise to pay you back tomorrow.
It seems we can distinguish two cases of institution of a social fact. Some social facts do not essentially require that any party besides the instituter be apprised of the fact, and it is only the current contingent convention that those facts are instituted by an announcement. For instance, the naming of persons is done by a public act in our society, but we could imagine (as happens in some piece of science fiction I vaguely recall) a society where people name themselves mentally, and then only reveal the name to their intimates. In that case, name facts would already obtain prior to their announcement, being instituted by a purely private mental act. In fact, in our society we handle the naming of animals in this way. You don’t need to tell anybody—not even Goldy—that your goldfish’s name is Goldy for the name to be that.
In the case of social facts that do not require anybody besides the instituter to be apprised of them, if we in fact institute them by means of a performative, that is a mere accident.
But some social facts of their very nature seem to require that some relevant party besides the instituter be apprised of the fact. For instance, it seems one cannot apply for a position without informing the organization in charge of the position, and one cannot promise without communicating this to the promisee. In those cases, it seems that the fact must be instituted by a performative.
That’s not quite right, though. The social fact of applying for a position can also be instituted by a pair of things: a performative instituting a conditional application and the truth of the antecedent of the conditional. “I hereby apply if no other applications come in by Wednesday night.” And in that case, the social fact can obtain without anyone other than God being apprised of it: even if no one yet knows that no other applications have come in by Wednesday night, it is a fact that one has applied. It seems that every social fact that is instituted by a performative announcing that very fact could be instituted by an appropriate conditional performative plus the obtaining of the antecedent.
But perhaps we can say something weaker. There seem to be social facts that logically require that they be partially instituted by someone’s apprising someone of something—but not necessarily of the social fact in question. So while perhaps no particular performative is essential to instituting a particular social fact, some social facts may require some performative or other.
There is a discussion among political theorists on whether religious liberty should be taken as special, or just another aspect of some standard liberty like personal autonomy.
Here’s an interesting line of thought. If God exists, then religious liberty is extremely objectively important, indeed infinitely important. Now maybe a secular state should not presuppose that God exists. There are strong philosophical arguments on both sides, and while I think the ones on the side of theism are conclusive, that is a controversial claim. However, on the basis of the arguments, it seems that even a secular state should think that it is a very serious possibility that God exists, with a probability around 1/2. But if there is a probability around 1/2 that religious liberty is infinitely important, then religious liberty is special.
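The underlying reasoning is a simple expected-value calculation. Schematically, with p the probability that God exists and v any finite level of importance religious liberty would have if God does not exist:

```latex
E[\text{importance of religious liberty}]
  = p \cdot \infty + (1 - p) \cdot v
  = \infty \qquad \text{for any } p > 0.
```

On this schematic reading, nothing turns on the probability being exactly 1/2: any non-negligible probability of infinite importance swamps every finite consideration.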
One formulation of Schellenberg’s argument from hiddenness depends on the premise:
(4) If for any capable finite person S and time t, God is at t open to being in a personal relationship with S at t, then for any capable finite person S and time t, it is not the case that S is at t nonresistantly in a state of nonbelief in relation to the proposition that God exists.
Schellenberg argues that God is always open to personal relationships if he exists, and that there are people nonresistantly in a state of nonbelief in relation to the proposition that God exists, and so God doesn’t exist.
I want to worry about a logical problem behind (4). Schellenberg attempts to derive (4) from a principle he calls Not Open that says, with some important provisos that won’t matter for this post, that “if a person A … is … in a state of nonbelief in relation to the proposition that B exists” but B could have gotten A to believe that B exists, “then it is not the case that B is … open … to having a personal relationship with A”.
It seems that Schellenberg gets (4) by substituting “God” for “B” in Not Open. But “the proposition that B exists” creates a hyperintensional context for “B”, and hence one cannot blithely substitute equals for equals, or even necessarily coextensive expressions, in Not Open.
Compare: if I have a personal relationship with Clark Kent, then I automatically have a personal relationship with Superman, even if I do not believe the proposition that Superman exists, because Superman and Clark Kent are in fact the same person. It is perhaps necessary for a personal relationship with Superman that I believe of Superman that he exists, but I need not believe it of him under the description “Superman”.
So it seems to me that the only thing Schellenberg can get from Not Open is something like:
(4*) If for any capable finite person S and time t, God is at t open to being in a personal relationship with S at t, then for any capable finite person S and time t, it is not the case that S is at t nonresistantly in a state where he does not believe of God that he (or it) exists.
Now, to believe of x that it exists is to believe, for some y such that in fact y = x, that y exists.
But then all that’s needed to believe of God that he exists is to believe in the existence of something that is in fact coextensive with God. For instance, suppose an atheist believes that her mother is the being that loves her most. Then she presumably believes that the being that loves her most exists. In doing so, she believes of the being that loves her most that it exists. But in fact, assuming theism is true, the being that loves her most is God. So she believes of God that it (or he) exists.
At this point it is really hard to find non-controversial cases of the relevant kind of nonbelief that (4*) expresses. By “non-controversial”, I mean cases that do not presuppose the non-existence of God. For if God does in fact exist, he falls under many descriptions: “The being who loves me most”, “The existent being that Jean Vanier loves the most”, “The most powerful conscious being active on earth”, etc.
It is true that Schellenberg needs only one case. So even if it is true, on the assumption that God exists, that the typical atheist or agnostic believes of God that he exists, perhaps there are some people who don’t. But they will be hard to find—most atheists, I take it, think there is someone who loves them most (or loves them most in some particular respect), etc. I think the most plausible examples are small children and the developmentally challenged. But those aren’t the cases Schellenberg’s argument focuses on, so I assume that’s not the line he would want to push.
The above shows that the doxastic prerequisite for a personal relationship with B is not just believing of B that it exists, since that’s too easy to get. What seems needed (at least if the whole doxastic line is to get off the ground—which I am not confident it does) is to believe of B that it exists and to believe it under a description sufficiently relevant to the relationship. For instance, suppose Alice falsely believes that her brother no longer exists, and suppose that not only does Alice’s brother still exist but he has been working out in secret and is now the fastest man alive. Alice believes that the fastest man alive exists, and mistakenly thinks he is Usain Bolt rather than her brother. So she does count as believing of her brother that he exists, but because she believes this under the description “the fastest man alive”, a description that she wrongly attaches to Bolt, her belief doesn’t help her have a relationship with her brother.
So probably (4*) should be revised to:
(4**) If for any capable finite person S and time t, God is at t open to being in a personal relationship with S at t, then for any capable finite person S and time t, it is not the case that S is at t nonresistantly in a state where he does not believe of God that he (or it) exists, under a description relevant to his personal relationship with God.
This doesn’t destroy the hiddenness argument. But it does make the hiddenness argument harder to defend, for one must find someone who does not believe, under a description that would be relevant to a personal relationship with God, in anything that would be coextensive with God if God exists. And there are, plausibly, many descriptions of God that would be so relevant.
A different move is to say that there can be descriptions D that in fact are descriptions precisely of x but some cases of believing that D exists are not cases of believing of x that it exists. Again, one will need to introduce some relevance criterion for the descriptions, though.
[Note added later: This was, of course, written before the revelations about Jean Vanier's abusiveness. I would certainly have chosen a different example if I were writing this post now.]

Here’s a fun argument for dualism.
1. What is a part of the body is a matter of social convention.
2. Persons are explanatorily prior to social conventions.
3. So, probably, persons are not bodies.
I think (2) is undeniable. And (1) is a not uncommon view among people thinking about prostheses, implants, transplants and the like.
That said, I think (1) is just false.
If God exists, there are many evils that God doesn’t prevent, even though it seems that we would have been obligated to prevent them if we could.
A sceptical theist move is that God knows something about the situations that we don’t. For instance, it may seem to us that the evil is pointless, but God sees it as interwoven with greater goods.
An interesting response to this is that even if we knew about the greater goods, we would be obligated to prevent the evil. Say, Carl sees Alice about to torture Bob, and Carl somehow knows (maybe God told him) that one day Alice will repent of the evil in response to a beautiful offer of forgiveness from Bob. Then I am inclined to think Carl should still prevent Alice from torturing Bob, even if repentance and forgiveness are goods so great that it would have been better for both Alice and Bob if the torture happened.
Here is an interesting sceptical theist response to this response. Normally, we don’t know the future well enough to know that great goods would arise from our permitting an evil. Because of this, our moral obligations to prevent grave evils have a bias in them towards what is causally closer to us. Moreover, this bias in the obligations, although it is explained by the fact that normally we don’t know the future very well, is present even in the exceptional cases where we do know the future sufficiently well, as in the Carl, Alice and Bob case.
This move requires an ethical system where a moral rule that applies in all circumstances can be explained by its usefulness in normal circumstances. Rule utilitarianism is of course such an ethical system. Divine command theory is as well: God can be motivated to issue an exceptionless rule because of the fact that normally the rule is a good one and it might not be good for us to be trying to figure out whether a case at hand is an exception to the rule (this is something I learned from Steve Evans). And St. Thomas Aquinas in his argument against nonmarital sex holds that natural law is also like that (he argues that typically nonmarital sex is bad for the offspring, and concludes that it is wrong even in the exceptional cases where it’s not bad for the offspring, because, as he says, laws are made with regard to the typical case).
Historically, this approach tends to be used to derive or explain deontic prohibitions (e.g., Aquinas’ prohibition on nonmarital sex). But the move from typical beneficiality of a rule to its holding always does not require that the rule be a deontic prohibition. A rule that weights nearer causal consequences more heavily could just as easily be justified in such a way, even if the rule did not amount to a deontic prohibition.
Similarly, one might use typical facts about our relationships with those closer to us—that we know what is good for them better than for strangers, that they are more likely to accept our help, that the material benefits of our help enhance the relationship—to explain why helping those closer to us should be more heavily weighted in our moral calculus than helping strangers, even in those cases where the typical facts do not obtain. Once again, this isn’t a deontic case.
One might even have such typical-case-justified rules in prudential reasoning (perhaps a bias towards the nearer future is not irrational after all) and maybe even in theoretical reasoning (perhaps we shouldn’t be perfect Bayesian agents after all, because that’s not in our nature, given that normally Bayesian reasoning is too hard for us).
We have a simple procedure for recognizing finite sequences. We start at the beginning and go through the sequence one item at a time (e.g., by scanning with our eyes). If we reach the end, we are confident the sequence was finite. This procedure can be relied on if and only if there are no supertasks—i.e., if and only if it is impossible to have an infinite sequence of tasks started and completed.
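The procedure can be sketched in code (my illustration, not anything in the original argument): a scan that issues a verdict of finitude exactly when it terminates.

```python
def certify_finite(seq):
    """Scan a sequence one item at a time; report it finite if the scan ends.

    The verdict is reliable only if supertasks are impossible: on an
    infinite iterable the loop simply never terminates, so the procedure
    can never mistakenly certify an infinite sequence as finite -- but
    neither can it ever return a negative verdict.
    """
    count = 0
    for _ in seq:       # step through the sequence, one item at a time
        count += 1
    return True, count  # reaching this line is the evidence of finitude

print(certify_finite([3, 1, 4]))  # (True, 3)
```

Note the asymmetry: the procedure can only ever say "finite"; it has no way of announcing "infinite".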
How do we know that there are no supertasks? Either empirically or a priori. To know it empirically, we would have to know that the various tasks we’ve completed were finite. But how would we know of any tasks we’ve completed that it’s finite if not by the above procedure?
So we have to know it a priori.
And the only story I know of how we could do that is by a priori cognizing some anti-infinity principle like Causal Finitism.
I am not sure how strong the above argument is. It is a little too close to standard sceptical worries for comfort.
It is common in our culture to see religion as a matter of faith. Indeed, religions are sometimes even called “faiths”.
Here is a reason why one should be cautious with conceptualizing things in this way. Faith is a specifically Christian concept, with Christianity being centrally conceptualized as a matter of faith in Jesus Christ. To think about all religions in terms of faith is to presuppose that the Christian understanding of what is central to Christianity yields a correct way of understanding the life of other religions.
Either Christianity is or is not basically true.
If Christianity is basically true, then its self-understanding in terms of faith is likely correct. However, the truth of Christianity does not give one good reason to think other religions, with the possible exception of Judaism, would be rightly understood in terms of the concept of faith.
If Christianity is not basically true, then we should be cautious even about its own self-characterization. Self-understanding is an epistemic achievement, and if Christianity is not basically true, then we should not take it for granted that faith has the central role it is claimed to have. And we should certainly not expect that the self-characterization of a religion that is not true should also apply to other religions.
Three is a finite number. How do we know this?
Here’s a proof that three is finite:
a. 0 is finite. (Axiom)
b. For all n, if n is finite, then n + 1 is finite. (Axiom)
c. 3 = 0 + 1 + 1 + 1. (Axiom)
d. So, 0 + 1 is finite. (By a and b)
e. So, 0 + 1 + 1 is finite. (By b and d)
f. So, 0 + 1 + 1 + 1 is finite. (By b and e)
g. So, 3 is finite. (By c and f)
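The derivation can be mirrored in a proof assistant. Here is a Lean 4 sketch (my formalization, with `Fin'` a toy predicate whose constructors are just axioms (a) and (b)):

```lean
-- Axioms (a) and (b) as constructors of an inductive predicate:
inductive Fin' : Nat → Prop
  | zero : Fin' 0                          -- (a): 0 is finite
  | succ : ∀ n, Fin' n → Fin' (n + 1)      -- (b): if n is finite, so is n + 1

-- Steps (d)–(g): three applications of (b), starting from (a).
theorem three_finite : Fin' 3 :=
  Fin'.succ 2 (Fin'.succ 1 (Fin'.succ 0 Fin'.zero))
```

Of course, the checker's acceptance of this proof term itself presupposes that the term is finite, which is just the worry the post goes on to raise.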
Let’s assume we can answer the difficult question of how we know axioms (a) and (b), and allow that (c) is just true by definition.
I want to raise a different issue. To know that three is finite by means of the above argument, it seems we have to know that the argument is a proof.
One might think this is easy: a proof is a sequence of statements such that each non-axiomatic statement logically follows from the preceding ones, and it’s clear that (d)-(g) each follow from the previous by well-established rules of logic.
One could ask about how we know these rules of logic to be correct—but I won’t do that here. Instead, I want to note that it is false that every sequence of statements such that each non-axiomatic statement logically follows from the preceding ones is a proof. This is the case only for finite sequences of statements. The following infinite sequence of statements is not a proof, even though every statement follows from preceding ones: “…, so I am Napoleon, so I am Napoleon, so I am Napoleon.”
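A mechanical checker makes the point concrete. On the usual definition a proof is a finite sequence, and a checker tacitly relies on that: the sketch below (my own, with hypothetical `is_axiom` and `follows_from` callbacks) delivers a verdict only because iterating over `steps` terminates.

```python
def is_proof(steps, is_axiom, follows_from):
    """Certify that each non-axiomatic step follows from earlier steps.

    Hidden assumption: `steps` is finite. Handed the infinite sequence
    "..., so I am Napoleon, so I am Napoleon, ...", the loop would never
    finish, so nothing infinite ever gets certified as a proof.
    """
    for i, step in enumerate(steps):
        if not (is_axiom(step) or follows_from(step, steps[:i])):
            return False
    return True

# Toy instance: "A" is an axiom, and "B" follows from anything containing "A".
print(is_proof(
    ["A", "B"],
    is_axiom=lambda s: s == "A",
    follows_from=lambda s, prev: s == "B" and "A" in prev,
))  # True
```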
Very well, so to know that (a)-(g) is a proof, I need to know that (a)-(g) are only finitely many statements. OK, let’s count: (a)-(g) are seven statements. So it seems we have to know that seven is finite (or something just as hard to know) in order to use the proof to know that three is finite.
This, of course, would be paradoxical. For to use a proof analogous to (a)-(g) to show that seven is finite, we would need a proof of eleven steps, and so we would need to know that eleven is finite to know that the proof is a proof.
Maybe we can just see that seven is finite? But then we gain nothing by (a)-(g), since the knowledge-by-proof will depend on just seeing that seven is finite, and it would be simpler and more reliable just to directly see that three is finite.
It might be better to say that we can just see that the proof exhibited above, namely (a)-(g), is finite.
It seems that knowledge-by-proof in general depends on recognition of the finite. Or else on causal finitism.
Pluralists about ways of being say that there are multiple ways to be (e.g., substance and accident, divine being and finite being, the ten categories, or maybe even some indefinitely extendible list) and there is no such thing as being apart from being according to one of the ways of being. Each way of being comes with its own quantifiers, and there is no overarching quantifier.
A part of the theory is that everything that exists exists in a way of being. But it seems we cannot state this in the theory, because the "everything" seems to be a quantifier transcending the quantifiers over the particular ways of being. (Merricks, for instance, makes this criticism.)
I think there is a simple solution. The pluralist can concede that there are overarching unrestricted quantifiers ∀ and ∃, but they are not fundamental. They are, instead, defined in terms of more fundamental way-of-being-restricted quantifiers in the system:
(1) ∀xF(x) if and only if ∀BWoBb∀bxF(x).
(2) ∃xF(x) if and only if ∃BWoBb∃bxF(x).
The idea here is that for each way of being b, there are ∀b and ∃b quantifiers. But, the pluralist can say, one of the ways of being is being a way of being (BWoB). So, to use Merricks’ example, to say that there are no unicorns at all, one can just say that no way of being b is such that a unicorn b-exists.
Note that being a way of being is itself a way of being, and hence BWoB itself BWoB-exists.
The claim that everything that exists exists in a way of being can now be put as follows:
(3) ∀x∃BWoBb∃by(y=x).
Of course, (3) will be a theorem of the appropriate ways-of-being logic if we expand out "∀x" in accordance with (1). So (3) may seem trivial. But the objection of triviality seems exactly parallel to worrying that it is trivial on the JTB+ account of knowledge that if you know something, you believe it. Whether we have triviality depends on whether the account of generic existence or knowledge, respectively, is stipulative or meant to be a genuine account of a pre-theoretic notion. And nothing constrains the pluralist to making (1) and (2) be merely stipulative.
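The reduction can be modeled set-theoretically (a toy model of my own construction): give each way of being its own domain, let BWoB's domain be the set of ways of being themselves, and define the overarching quantifiers by definitions (1) and (2).

```python
# Each way of being gets its own domain; BWoB's domain is the set of
# ways of being themselves (so BWoB is in its own domain, as in the post).
domains = {
    "substance": {"Socrates", "Bucephalus"},
    "accident": {"Socrates-wisdom"},
    "BWoB": {"substance", "accident", "BWoB"},
}

def forall_b(b, F):  # the way-of-being-restricted quantifier ∀b
    return all(F(x) for x in domains[b])

def exists_b(b, F):  # the way-of-being-restricted quantifier ∃b
    return any(F(x) for x in domains[b])

# Definitions (1) and (2): the non-fundamental overarching quantifiers.
def forall_all(F):
    return forall_b("BWoB", lambda b: forall_b(b, F))

def exists_all(F):
    return exists_b("BWoB", lambda b: exists_b(b, F))

# Merricks' example: "there are no unicorns at all".
print(exists_all(lambda x: x == "a unicorn"))  # False

# Everything that exists exists in some way of being:
print(forall_all(lambda x: exists_b(
    "BWoB", lambda b: exists_b(b, lambda y: y == x))))  # True
```

The second printed claim holds in any such model by construction, which is the sense in which (3) comes out a theorem once the overarching quantifier is defined away.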
Suppose, however, your motivations for pluralism are theological: you don’t want to say that God and humans exist in the same way. You might then have the following further theological thought: Let G be a fundamental way of being that God is in. Then by transcendence, G has to be a category that is special to God, having only God in it. Moreover, by simplicity, G has to be God. Thus, the only way of being that God can be in is God. But this means there cannot be a fundamental category of ways of being that includes divine and non-divine ways of being.
However, note that even apart from theological considerations, the BWoB-quantifiers need not be fundamental. For instance, perhaps, among the ways of being there might be being an abstract object, and one could hold that ways of being are abstract objects. If so, then ∀BWoBbG(b) could be defined as ∀BAb(WoB(b)→G(b)), where BA is being abstract and WoB(x) says that x is a way of being.
Coming back to the theological considerations, one could suppose there is a fundamental category of being a finite way of being (BFWoB) and a fundamental category of being a divine way of being (BDWoB). By simplicity, BDWoB=God. And then we could define:
∀BWoBbF(b) if and only if ∀BDWoBbF(b) and ∀BFWoBbF(b).
∃BWoBbF(b) if and only if ∃BDWoBbF(b) or ∃BFWoBbF(b).
Note that we can rewrite ∀BDWoBbF(b) and ∃BDWoBbF(b) as just F(God).
After reading O’Connor and Churchill’s piece on emergence, one of my very smart undergraduate students commented that it follows from such emergentist views that one could know the mental facts from the physical facts. Here I will argue for this and discuss an unhappy consequence for the causal emergentist.
The causal emergentist thinks that mental properties are not physical, but they causally emerge from complexes of physical properties of a physical entity.
So, now, suppose that physical entity e has a causal power C to produce mental property M when it has a complex P of physical properties. This causal power C will then either be a physical or a non-physical property of e. If it is a physical property of e, then by knowing the physical properties of e, one can know that e has the causal power to produce M. And that, in turn, means M is knowable from physical properties. On the other hand, if C is non-physical, then we do not have emergence of the mental from the physical: we have emergence of the mental from the physical and non-physical. So, if we have genuine emergence of the mental from the physical, then in knowing the physical, we will know the mental.
The unhappy consequence of this is that qualia-based epistemological gap arguments against physicalism apply against causal emergence, since we could suppose M is a quale, and then knowing all about C will include knowing all about M.
Causal emergence may fare a little better with respect to zombie-type arguments. If an entity has an exact duplicate of your physical properties, it will have an exact duplicate of the physically-based causal powers, and hence it will have the causal power to make mental properties emerge. However, it is logically possible that these mental properties will in fact fail to emerge, because it is logically possible that some external causal power blocks the causal powers of the duplicate from achieving their effects. One could even imagine a whole world that is an exact physical duplicate of this one but where nobody physical has mental powers, because some non-physical entity blocks the mental-emergence powers of all the physical beings. So I guess this does some justice to zombie intuitions. But note that if the possibility-of-zombies intuition is satisfied by a non-physical entity blocking mental powers, then a dispositional functionalist could do justice to the zombie intuition by imagining a world just like this one, but where a non-physical entity changes our dispositional properties in the way of Frankfurt’s neurosurgeon. And it’s not clear that that really does justice to the zombie intuition. Maybe.
The above argument against causal emergentism supposes that knowing a cause implies knowing the range of its effects. That is correct on causal powers views of causation. It is not true on Humean views of causation. So a causal emergentist could simply adopt a Humean view of causation. It is also not true on views on which causation depends on laws of nature extrinsic to the particular things in the world. But the causal powers view is the correct one. (And it is one that O’Connor and Churchill embrace.)
What if the emergence relation is not causal in nature? Then it is still a dispositional fact about our physical entity e that it comes to have mental property M when it comes to have a complex P of physical properties. This fact seems like it should be grounded in the properties of e. These properties had better be physical, because the motivation for the theory seems to be that our non-physical properties emerge from our physical ones. And now we still have the danger that by knowing these physical grounds, one can come to know the dispositional fact, and hence come to know M. Perhaps there is a way out of this danger.
Perhaps the best way out for the emergentist, causal or not, is to acknowledge a non-emergent non-physical property in each minded entity grounding the emergence dispositions.
Of course, none of this is a problem if one is unimpressed by qualia-based epistemological gap arguments.
After listening to a talk by Christopher Kaczor, and the ensuing discussion, I want to offer a defense of a moderate position on the state not compelling healthcare professionals to violate their conscience, even when their conscience is unreasonably mistaken. I think a stronger position than the moderate position may be true, but I won’t be defending that.
This is the central insight:
(1) Acting against one’s conscience is gravely bad for one.
One reason that (1) is true is the Socratic insight that it is much better to suffer wrong than to do wrong, together with the Conscience Principle that to act against conscience is always wrong.
My argument will need something a bit more precise than (1). For convenience, I will stipulate that I use “grave” for normative considerations, goods, bads and harms whose importance is at least of the order of magnitude of the value of a human life. The coincidence that “grave” not only means very serious but also a place of burial in English—even though the etymologies are quite different—should remind us of this. Whenever you read “grave” and cognates in what follows, don’t just read “serious”: also imagine a grave.
Then what I need is this:
(2) A conscientious professional’s gravely violating her conscience comes at a grave cost to her.
(I suspect this is true even if one drops the “conscientious” and “gravely”, but I am only defending a moderate position.) The reasons for (2) are moral and psychological. The moral reasons are based on the aforementioned Socratic insight about the importance of avoiding wrongdoing. But there are also psychological reasons. A conscientious person identifies with their conscience in such a way that gravely violating this conscience is shattering to the individual’s identity. It is a kind of death. It is no coincidence that the Catholic tradition talks of some sins as “mortal”.
Next, here is another reasonable principle:
(3) Normally, the state should not require a healthcare professional to offer care that comes at a grave cost to the professional.
For instance, the state should not require a healthcare professional to donate her own kidney to save a patient. For a less extreme case that I will consider some variations of, neither should the state require a professional who has a severe bee allergy to pass through a cloud of bees to help a patient when allergy reaction drugs are unavailable and when other professionals lacking such an allergy are available.
In order for (3) to be useful in practice, we need some way of getting rid of the “Normally” in it.
Notice that (3) is true even when the grave cost to the professional results from the professional’s irrationality. For instance, normally a healthcare professional who has a grave phobia of bees should not be required to pass through the cloud of bees, even if it is known that the professional would not be seriously physically harmed. In other words, that the cost results from irrationality does count as an abnormality in (3).
Under what abnormal conditions, then, may the state require the professional to offer care that comes at grave cost to the professional? This is clearly a necessary condition:
(4) The patient has a grave need for the care.
But even if the need is grave, if someone else can offer the care for whom offering the care does not come at a grave cost, they should offer it instead. If the way to save a patient’s life is for one doctor to pass through a cloud of bees, and there is a doctor available who is not allergic to bee stings, then a doctor who is allergic should not be made to do it. Thus, we have this condition:
(5) There is no one available to offer the care for whom offering it does not come at a grave cost.
We can combine these two conditions into a neater condition (which may also be a bit weaker than the conjunction of (4) and (5)):
(6) The patient has a grave need for the care, and the care cannot be obtained except at a grave cost to someone.
This suggests some principle like this:
(7) The state may require a professional to offer care that comes at a grave cost to the professional only if the patient has a grave need for the care and the care cannot be obtained except at a grave cost to someone.
Now we go back to (2), the claim about the grave cost of violating conscience. Let us charitably assume that most medical professionals are conscientious, so that any given medical professional is likely to be conscientious. Then we get something like this:
(8) The state should not require a healthcare professional to offer care that would gravely violate her conscience unless the patient has a grave need for the care and the care cannot be obtained except at a grave cost to someone.
But this cannot be the whole story. For there are also conditions that render one incapable of doing central parts of one’s job. For instance, someone with a grave phobia of fires should not be allowed to be a fire fighter. And while a fire fighter with that grave phobia should not be made to fight a fire when someone else is available, if they had the phobia at the time of hiring, they should not have been hired in the first place. And if they hid this phobia at the time of hiring, they should be fired.
We have, however, a well-developed societal model for dealing with such conditions: the reasonable accommodations model of disability legislation like the Americans with Disabilities Act. It is reasonable to require an office building to put in a ramp for an employee in a wheelchair who is unable to walk; it would be unreasonable for a bank to have to hire a guard specially to watch a kleptomaniac teller. What is and is not a reasonable accommodation depends on the centrality of an aspect of a job, the costs to the employer, and so on.
So my moderate proposal is that we handle the worry that a particular conscientious objection renders a professional incapable of doing their job by analogy with the reasonable and unreasonable accommodations model, and qualify (8) by allowing, in hiring or licensure, the requirement that the accommodations for a conscientious restriction on practice be reasonable in ways analogous to reasonable disability accommodations. A healthcare professional who has only one hand could, I assume, be reasonably accommodated in a number of specialties, but likely not as a surgeon.
The disability case should also push us towards a less judgmental attitude towards a healthcare professional whose conscientious objections are unreasonably mistaken. That an employee became a paraplegic from unreasonable daredevil recreational activity does not render the employee ineligible for otherwise reasonable accommodations.
What about the worry about the rare cases where a healthcare professional has morally repugnant conscientious views that would require discriminatory care, such as refusing to care for patients of a particular race? Could one argue that if patients of that race are rare in a given area, then allowing a restriction of practice on the basis of race could be a reasonable accommodation? We might imagine an employee who has panic attacks triggered by a particular rare configuration of a client’s personal appearance, and that does seem like a case for reasonable accommodations, after all.
Here I think there is a different thing to be said. We want our healthcare professionals to have certain relevant moral virtues to a reasonable degree. Moral virtues go beyond obedience to conscience. Someone with a mistaken conscience may not be to blame for the wrongs they do, but they may nonetheless lack certain virtues. The case of the conscientious racist is one of those. So it is not so much because the conscientious racist would refuse to care for patients of a particular race that they should not be a healthcare professional, but because they fail to have the right kind of respect for the dignity of all human beings.
One may think that this consideration makes the account not very useful. After all, a pro-life individual is apt to be accused of not caring enough for women. Here I just think we need to be honest and reasonably charitable. Holding that the embryo and fetus have human dignity does not render it less likely that one cares about women. Compare this case: A vegan physician believes that all higher animal life is sacred, and hence refuses to prescribe medication whose production essentially involves serious suffering of higher animals. Even if such a physician’s actions might cause harm to patients who need such (hypothetical?) medication, the belief that all higher animal life is sacred is not evidence that the physician does not care about such patients; indeed, it seems to render it more likely that the physician thinks the patients’ lives to be sacred as well, and hence to be cared for. There may be specialties where accommodation is unreasonable, but the mere fact of the belief is not evidence of a lack of the relevant virtues.
Let #s be the Goedel number of s. The following fact is useful for thinking about the foundations of mathematics:
Proposition. There is a finite fragment A of Peano Arithmetic such that if T is a recursively axiomatizable theory, then there is an arithmetical formula PT(n) such that for all arithmetical sentences s, A → PT(#s) is a theorem of FOL if and only if T proves s.
The Proposition allows us to replace the provability of a sentence from an infinite recursive theory by the provability of a sentence from a finite theory.
Sketch of Proof of Proposition. Let M be a Turing machine that given a sentence as an input goes through all possible proofs from T and halts if it arrives at one that is a proof of the given sentence.
We can encode a history of a halting (and hence finite) run of M as a natural number such that there will be a predicate HM(m, n) and a finite fragment A of Peano Arithmetic independent of M (I expect that Robinson arithmetic will suffice) such that (a) m is a history of a halting run of M with input n if and only if HM(m, n) and (b) for all m and n, A proves whether HM(m, n).
Now, let PT(n) be ∃mHM(m, n). Then A proves PT(#s) if and only if there is an m0 such that A proves HM(m0, #s). (If A proves PT(#s), then because A is true, there is an m0 such that HM(m0, #s), and then A will prove HM(m0, #s). Conversely, if A proves HM(m0, #s), then it proves ∃mHM(m, #s).) And so A proves PT(#s) if and only if T proves s.
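The reduction in the proof can be pictured computationally. Here is a minimal, purely illustrative Python sketch (my own, not the post’s arithmetization): provability from a recursively axiomatizable theory is the halting of a machine that enumerates all candidate proofs and checks each one. The proof system here is a trivial stand-in (`is_proof_of` just compares strings), so the names and details are illustrative assumptions only.

```python
# Toy sketch: provability as halting of a proof-search machine.
from itertools import count, product

ALPHABET = "pq->()"

def candidate_proofs():
    """Enumerate all finite strings over ALPHABET, shortest first."""
    for n in count(1):
        for chars in product(ALPHABET, repeat=n):
            yield "".join(chars)

def is_proof_of(candidate: str, sentence: str) -> bool:
    """Hypothetical decidable proof-checking predicate for T.
    Trivial stand-in: a 'proof' is just the sentence itself."""
    return candidate == sentence

def proof_search(sentence: str, max_candidates: int = 100_000):
    """Halts with a proof iff T proves `sentence`; the cutoff is only
    so this demo terminates on unprovable inputs."""
    for i, cand in enumerate(candidate_proofs()):
        if is_proof_of(cand, sentence):
            return cand
        if i >= max_candidates:
            return None  # the real search machine would run forever
    return None

print(proof_search("p->q"))  # finds "p->q" (a 'proof' in the toy system)
```

The real construction replaces `is_proof_of` with a genuine decidable proof-checker for T; the point is only that the search machine halts exactly on the provable sentences.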
The relativity of FOL-validity is the fact that whether a sentence ϕ of First Order Logic is valid (equivalently, provable from no axioms beyond any axioms of FOL itself) sometimes depends on the axioms of set theory, once we encode validity arithmetically as per Goedel.
More concretely, if Zermelo-Fraenkel-Choice (ZFC) set theory is consistent, then there is an FOL formula ϕ that is FOL-provable according to some but not other models of ZFC. So which model of ZFC should real provability be relativized to?
Here is a putative solution that occurred to me today:
If this solution works, then the relativity of proof is quite innocent: it doesn’t matter in which model of ZFC our proofs live, because proofs in any ZFC model do the job for us.
It follows from incompleteness (cf. the link above) that real provability is strictly weaker than provability, assuming ZFC is true and consistent. Therefore, some really provable ϕ will fail to be valid, and hence there will be models of the falsity of ϕ. The idea that one can really prove a ϕ such that there is a model of the falsity of ϕ seems to me to show that my proposed notion of “really provable” is really confused.
Once one absorbs the lessons of the Goedel incompleteness theorems, a formalist view of mathematics as just about logical relationships such as provability becomes unsupportable (for me the strongest indication of this is the independence of logical validity). Platonism thereby becomes more plausible (but even Platonism is not unproblematic, because mathematical Platonism tends towards plenitude, and given plenitude it is difficult to identify which natural numbers we mean).
But there is another way to see post-Goedelian mathematics, as an empirical and even experimental inquiry into the question of what can be proved by beings like us. While the abstract notion of provability is subject to Goedelian concerns, the notion of provability by beings like us does not seem to be, because it is not mathematically formalizable.
We can mathematically formalize a necessary condition for something to be proved by us which we can call “stepwise validity”: each non-axiomatic step follows from the preceding steps by such-and-such formal rules. To say that something can be proved by beings like us, then, would be to say that beings like us can produce (in speech or writing or some other relevantly similar medium) a stepwise valid sequence of steps that starts with the axioms and ends with the conclusion. This is a question about our causal powers of linguistic production, and hence can be seen as empirical.
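Stepwise validity as just described is mechanically checkable. The following toy Python sketch (my own construction, not the post’s; the tuple encoding of formulas and the single rule of modus ponens are illustrative assumptions) verifies that each non-axiomatic line of a sequence follows from earlier lines:

```python
# Toy stepwise-validity checker: formulas are nested tuples,
# with ("->", A, B) standing for the conditional A -> B.

def follows_by_mp(line, earlier):
    """Check whether `line` follows from two earlier lines by modus ponens."""
    for a in earlier:
        for b in earlier:
            if b == ("->", a, line):
                return True
    return False

def stepwise_valid(sequence, axioms):
    """True iff every line is an axiom or follows from earlier lines."""
    for i, line in enumerate(sequence):
        if line in axioms:
            continue
        if not follows_by_mp(line, sequence[:i]):
            return False
    return True

# A two-step derivation of q from the axioms p and p -> q:
axioms = {("p",), ("->", ("p",), ("q",))}
proof = [("p",), ("->", ("p",), ("q",)), ("q",)]
print(stepwise_valid(proof, axioms))  # True
```

The empirical question in the post is then whether beings like us can physically produce a sequence that such a checker would pass, not whether the checking itself is formalizable.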
Perhaps the surest way to settle the question of provability by beings like us is for us to actually produce the stepwise valid sequence of steps, and check its stepwise validity. But in practice mathematicians usually don’t: they skip obvious steps in the sequence. In doing so, they are producing a meta-argument that makes it plausible that beings like us could produce the stepwise valid sequence if they really wanted to.
This might seem to lead to a non-realist view of mathematics. Whether it does so depends, however, on our epistemology. If in fact provability by beings like us tracks metaphysical necessity—i.e., if B is provable by beings like us from A1, ..., An, then it is not possible to have A1, ..., An without B—then by means of provability by beings like us we discover metaphysical necessities.
Exclusion arguments against dualism, and sometimes against nonreductive physicalism, go something like this.
1. Every physical effect has a sufficient microphysical cause.
2. Some microphysical effects have non-overdetermined mental causes.
3. If an event E has two distinct causes A and B, with A sufficient, it is overdetermined.
4. So, some mental causes are identical to microphysical causes.
But (3) is just false as it stands. It neglects such cases of non-overdetermining distinct causes A and B as:
5. A is a sufficient cause of E and B is a proper part of A, or vice versa. (Example: E=window breaking; A=rock hitting window; B=front three quarters of rock hitting window.)
6. A is a sufficient cause of B and B is a sufficient cause of E, or vice versa, with these instances of sufficient causation being transitive. (Example: E=window breaking; A=Jones throwing rock at window; B=rock impacting window.)
7. B is an insufficient cause of A and A is a sufficient cause of E, with these instances of causation being transitive. (Example: E=window breaking; B=Jones throwing rock in general direction of window; A=rock impacting window.)
8. A and B are distinct fine-grained events which correspond to one coarse-grained event.
To take care of (6) and (7), we could replace “cause” with “immediate cause” in the argument. This would require the rejection of causation by a dense sequence of causes (e.g., the state of a Newtonian system at 3 pm is caused by its state at 2:30 pm, its state at 2:45 pm, at 2:52.5 pm, and so on, with no “immediate” cause). I defend such a rejection in my infinity book. But the price of taking on board the arguments in my infinity book is that one then has very good reason to accept the Kalaam argument, and hence to deny (1) (since the first physical state will then have a divine, and hence non-microphysical, cause).
We could take care of (5) and (8) by replacing “distinct” with “non-overlapping” in (3). But then the conclusion of the argument becomes much weaker, namely that some mental causes overlap microphysical causes. And that’s something that both the nonreductive physicalist and hylomorphic dualist can accept for different reasons: the nonreductive physicalist may hold that mental causes totally overlap with microphysical causes; the hylomorphist will say that the form is a part of both the mental cause and of the microphysical cause. Maybe we still have an argument against substance dualism, though.
There are two kinds of functionalism about the mind.
One kind upholds the thesis that if two systems exhibit the same overall function, i.e., the same overall functional mapping between sequences of system inputs and sequences of system outputs, then they have the same mental states if any. Call this systemic functionalism.
The other kind says that mental properties depend not just on overall system function, but also on the functional properties of the internal states and/or subsystems of the system. Call this subsystemic functionalism. The subsystemic functionalist allows that two systems may have the same overall function, but because the internal architectures (whether software or hardware) that achieve this overall function are different, the mental states of the systems could be different.
Systemic functionalism allows for a greater degree of multiple realizability. If we have subsystemic functionalism, we might meet up with aliens who behave just like we do, but who nonetheless have no mental states or mental states very different from ours, because the algorithms that are used to implement the input-to-output mappings in them are sufficiently different.
If subsystemic functionalism is true, then it seems impossible for us to figure out what functional properties constitute mental states, except via self-experimentation.
For instance, we would want to know whether the functional properties that constitute mental states are neuronal-or-above or subneuronal. If they are neuronal-or-above, then replacing neurons with prostheses that have the same input-to-output mappings will preserve mental states. If they are subneuronal, such replacement will only preserve mental states if the prostheses not only have the same input-to-output mappings, but also are functionally isomorphic at the relevant (and unknown to us) subneuronal level.
But how could we figure out which is the case? Here is the obvious thing to try: Replace neurons with prostheses whose internal architecture does not have much functional resemblance to neurons but which have the same input-to-output mappings. But assuming standard physicalist claims about there not being “swervy” top-down causation (top-down causation that is unpredictable from the microphysical laws), we know ahead of the experiment that the subject will behave exactly as before. Yet if we have rejected systemic functionalism, sameness of behavior does not guarantee sameness of mental states, or any mental states at all. So doing the experiment seems pointless: we already know what we will find (assuming we know there is no swervy top-down causation), and it doesn’t answer our question.
Well, not quite. If I have the experiment done on me, then if I continue to have conscious states after complete neuronal prosthetic replacement, I will know (in a Cartesian way) that I have mental states, and get significant evidence that the relevant system level is neuronal-or-above. But I won’t be able to inform anybody of this. If I tell people: “I am still conscious”, if they have rejected systemic functionalism, they will just say: “Yeah, he/it would say that even if he/it weren’t, because we have preserved the systemic input-to-output mappings.” And there will be significant limits to what even I can know. While I could surely know that I am conscious, I doubt that I would be able to trust my memory to know that my conscious states haven’t changed their qualia.
So with self-experimentation, I could know that the relevant system level is neuronal-or-above. Could I know even with self-experimentation that the relevant system level is subneuronal? That’s a tough one. At first sight, one might consider this: Replace neurons with prostheses gradually and have me observe whether my conscious experiences start to change. Maybe at some point I stop having smell qualia, because the neurons involved in smell have been replaced with subsystemically functionally non-isomorphic systems. Oddly, though, given the lack of swervy top-down causation, I would still report having smell qualia, and act as if I had them, and maybe even think, albeit mistakenly, that I have them. I am not sure what to make of this possibility. It’s weird indeed.
Moreover, a version of the above argument shows that there is no experiment we could do that would let anyone other than at most the subject know whether systemic or subsystemic functionalism is true, assuming there is no swervy top-down causation.
Things become simpler in a way if we adopt systemic functionalism. It becomes easier to know when we have strong AI, when aliens are conscious, whether neural prostheses work or destroy thought, etc. The downside is that systemic functionalism is just behaviorism.
On the other hand, if there is swervy top-down causation, and this causation meshes in the right way with mental functioning, then we are once again in the experimental philosophy of mind business. For then neurons might function differently when in a living brain than what the microphysical laws predict. And we could put in prostheses that function outside the body just like neurons, and see if those also function in vivo just like neurons. If so, then the relevant functional level is probably neuronal-or-above; if not, it's probably subneuronal.
The more I think about the foundations of mathematics, the more wisdom I see in Kronecker’s famous saying: “God made the natural numbers; all else is the work of man.” There is something foundationally deep about the natural numbers. We see this in the way the theory of the natural numbers is equivalent (e.g., via Goedel encoding) to the theories of strings of symbols that are central to logic, and in the way that when we fix our model of the natural numbers, we fix the foundational notion of provability.
One problem for functionalism is the problem of defect. David Lewis, for instance, talks of a madman for whom pain is triggered by something other than damage and whose pain triggers something other than avoidance. Lewis’s functionalist solution is to define the function of a mental state in terms of the role it normally plays in the species.
Here is a problem with this. Suppose that in mammals pain is realized by C-fiber firing. But now take the C-fibers inside a living mammalian skull, disconnect their outputs and connect external electrodes to their inputs. Make the C-fibers fire. Since the C-fiber outputs are disconnected, causing them to fire does not cause any of the usual pain behaviors, the formation of memories of pain, etc. In fact, it seems very plausible that there is no pain at all. Yet according to Lewisian functionalism, there is pain, because it is the normal connections of the C-fibers that define their functional role.
This thought experiment shows that the physical realizers of mental states need to occur in their proper context. But this bumps up against Lewis’s madman, in whom the pain states, and presumably their physical realizers, do not occur in their proper context.
It seems that what the functionalist needs to say is that in order to realize a mental state, a physical state must occur in a sufficient approximation to its proper context. If it’s too far, as in the case of the C-fibers with severed outputs, there is no mental state. If it’s close enough, as in a moderate version of the madman case (I don’t know what to say about Lewis’s more extreme one), the mental state occurs.
But how is the line to be drawn?
Perhaps there is no problem. Pain is not in fact C-fiber firing. Perhaps enough of the brain needs to be involved in conscious states that one cannot plausibly remove the states from their normal functional context? Still, this is worth thinking about.
Let me tell a story. Some neuroscientists detected a physically novel form of radiation, M-rays, that is emitted by brains of subjects who are thinking consciously. They discovered this much like X-rays were discovered, namely by finding something impinging on equipment in the lab in a way that could not be explained by conventional physics. Further experiment showed that M-rays are not emitted by anything that clearly isn’t conscious. Moreover, the line between animals that emitted M-rays and animals that didn’t seemed to correspond to a noticeable difference in cognitive sophistication. Finally, in humans the M-rays turned out to be modulated in a way that has a natural one-to-one correspondence with the phenomenal states reported by the conscious subjects, so that the scientists eventually learned to discern from the M-rays what the subject’s conscious state is. (The CIA was very interested.)
In this case, it would be eminently reasonable for a physicalist to conclude that consciousness is the emission of M-rays.
This thought experiment shows that mysterians like Colin McGinn are mistaken in holding that no discovery we could make would solve the hard problem of consciousness.
But of course few physicalists actually expect to find a physically novel phenomenon in the brain.
A sentence ϕ of a dialect of First Order Logic is FOL-valid if and only if ϕ is true in every non-empty model under every interpretation. By the Goedel Completeness Theorem, ϕ is valid if and only if ϕ is a theorem of FOL (i.e., has a proof from no axioms beyond any axioms of FOL). (Note: This does not use the Axiom of Choice since we are dealing with a single sentence.)
Here is a meta-logic fact that I think is not as widely known as it should be.
Proposition: Let T be any consistent recursive theory extending Zermelo-Fraenkel set theory. Then there is a sentence ϕ of a dialect of First Order Logic such that according to some models of T, ϕ is FOL-valid (and hence a theorem of FOL) and according to other models of T, ϕ is not FOL-valid (and hence not a theorem of FOL).
Note: The claim that ϕ is FOL-valid according to a model M is shorthand for the claim that a certain complex arithmetical claim involving the Goedel encoding of ϕ is true according to M.
The Proposition is yet another nail in the coffins of formalism and positivism. It tells us that the mere notion of FOL-theoremhood has Platonic commitments, in that it is only relative to a fixed family of universes of sets (or at least a fixed model of the natural numbers or a fixed non-recursive axiomatization) that it makes unambiguous sense to predicate FOL-theoremhood or its lack. Likewise, the very notion of valid consequence, even from a finite axiom set, carries such Platonic commitments.
Proof of Proposition: Let G be a Rosser-tweaked Goedel sentence for T with G being Σ1 (cf. remarks in Section 51.3 here). Then G is independent of T. In ZF, and hence in T, we can prove that there is a Turing machine Q that halts if and only if G holds. (Just make Q iterate over all natural numbers, halting if the number witnesses the existential quantifier at the front of the Σ1 sentence G.) But one can construct an FOL-sentence ϕ such that one can prove in ZF that ϕ is FOL-valid if and only if Q halts (one can do this for any Turing machine Q, not just the one above). Hence, one can prove in T that ϕ is FOL-valid if and only if G holds.
Thus, in T it is provable that ϕ is FOL-valid if and only if G holds. But G is independent of the consistent theory T, so the arithmetical claim that ϕ is FOL-valid is independent of T as well: it holds in some models of T and fails in others.
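The machine Q in the proof is just a witness search for the Σ1 sentence G. Here is a hedged Python sketch (my own; the decidable predicate `P` standing in for the matrix of G is a hypothetical example, not anything from the actual construction):

```python
# Toy sketch: a machine that halts iff a Sigma_1 sentence "exists n, P(n)"
# is true, by iterating over candidate witnesses.
from itertools import count

def make_Q(P):
    """Return a 'machine' that halts (returning a witness) iff some n satisfies P."""
    def Q(bound=None):
        for n in count():
            if P(n):
                return n          # halting run: witness found
            if bound is not None and n >= bound:
                return None       # demo cutoff; the real Q never gives up
    return Q

# Example Sigma_1 claim: "there is an even perfect number above 6" (true: 28).
def is_even_perfect_above_6(n):
    return n > 6 and n % 2 == 0 and sum(d for d in range(1, n) if n % d == 0) == n

Q = make_Q(is_even_perfect_above_6)
print(Q(bound=10_000))  # 28
```

The point of the proof is that the truth of G is thereby turned into a halting fact, and halting facts can in turn be encoded as FOL-validity facts.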
Theodicies according to which sufferings make possible greater moral goods are often subjected to this objection: If so, why should we prevent sufferings?
I am not near to having a full answer to the question. But I think this is related to a question everyone, and not just the theist, needs to face up to. For everyone should accept Socrates’ great insight that moral excellence is much more important than avoiding suffering, and yet we should often prevent suffering that we think is apt to lead to the more important goods. I don’t know why. That’s right now one of the mysteries of the moral life for me. But it is as it is.
Famously, persons with disabilities tend to report higher life satisfaction than persons without disabilities. But we all know that accepting this data should not keep us from working to prevent disability-causing car accidents. While higher life satisfaction is not the same as moral excellence, the example is still instructive. Our reasons to prevent disability-causing car accidents do not require us to refute the empirical data suggesting that persons with disabilities lead more satisfying lives. I do not know why exactly we still have on balance reason to prevent such accidents, but it is clear to me that we do.
Mother Teresa thought that the West is suffering from a deep poverty of relationships, with both God and neighbor. Plausibly she was right. We probably are not in a position to know that affluence is a significant cause of this deep poverty, but we can be open to the real epistemic possibility that it is, and we can acknowledge the deep truth that the riches of relationship are far more important than physical goods, without this sapping our efforts to improve the material lot of the needy.
Or suppose you are witnessing Alice torturing Bob, and an oracle informs you that in ten years they will be reconciled, with Bob beautifully forgiving Alice and Alice deeply repenting, and with the goods of the reconciliation being greater than the bads of the torture. I think you should still stop Alice.
A quick corollary of the above cases is that consequentialism is false. But there is a deep paradox here that cuts more deeply than consequentialism. I do not know how to resolve it.
Here are some stories, none of which are fully satisfying to me in their present state of development.
Perhaps it is better if humans have a special focus on the relief of suffering and the improvement of the material well-being of the patient. A focus instead on the patient’s moral improvement might lead to an unhealthy condescension.
Perhaps it has something to do with our embodied natures that a special focus on the bodily good of the other is a particularly fitting way for humans to express love for one another. While letting another suffer in the hope of greater on-balance happiness might be better for the patient, it could well be worse for the agent and the relationship. Maybe we should think of what Catholics call the “corporal works of mercy” as a kind of kiss, or maybe even something like a sacrament.
Perhaps there is something about respect for the autonomy of the other. Maybe others’ physical good is also our business while moral development is more their own business.
I think there is more. But the point I want to make is just that this is not a special question for theism and theodicy. It is a paradox that all morally sensitive people should see both sides of.
Coming back to theodicy, note that the above speculative considerations may not apply to God as the agent. (God cannot but condescend, being infinitely above us. God is not embodied, except in respect of the Incarnation. And we have no autonomy rights against God, as God is closer to us than we are to ourselves.)
I’ve had a grad student, Nathan Mueller, do an independent study in social epistemology in the hope of learning from him about the area (and indeed, I have learned much from him), so I’ve been thinking about group stuff once a week (at least). Here’s something that hit me today during our meeting. There is an interesting disanalogy between individuals and groups. Each group is partly but centrally defined by a role, with different groups often having different defining roles. The American Philosophical Association has a role defined by joint philosophical engagement, while the Huaco Bowmen have a role defined by joint archery. But this is not the case for individuals. While individuals have roles, the only roles that it is very plausible to say that they are partly and centrally defined by are general roles that all human beings have, roles like human being or child of God.
This means that if we try to draw analogies between group and individual concepts such as belief or intention, we should be careful to draw the analogy between the group concept and the concept as it applies not just to an individual but to an individual-in-a-role. Thus, the analogy is not between, say, the APA believing some proposition and my believing some proposition, but between the APA believing some proposition and my believing that proposition qua father (or qua philosopher or qua mathematician).
If this is right, then it suggests an interesting research program: Study the attribution of mental properties to individuals-in-roles as a way of making progress on the attribution of analogous properties to groups. For instance, there are well-founded worries in the social epistemology literature about simple ways of moving from the belief of the members of the group to the belief of the group (e.g., attributing to the group any belief held by the majority of the members). These might be seen to parallel the obvious fact that one cannot move from my believing p to my believing p qua father (or qua mathematician). And perhaps if we better understand what one needs to add to my believing p to get that I believe p qua father, this addition will help us understand the group case.
(I should say, for completeness, that my claim that the only roles that human beings are partly and centrally defined by are general roles like human being is controversial. Our recent graduate Mengyao Yan in her very interesting dissertation argues that we are centrally defined by token roles like child of x. She may even be right about the specific case of descent-based roles like child of x, given essentiality of origins, but I do not think it is helpful to analyze the attribution of mental properties to us in general in terms of us having these roles.)
This morning I find myself feeling the force of presentism. I am finding it hard to see my four-dimensional worm theory as adequately explaining why my experience only includes what I am experiencing now, instead of the whole richness of my four-dimensional life. I am also finding it difficult to satisfactorily explain the sequentiality of my experiences: that I will have different experiences from those that I have now, some of which I dread and some of which I anticipate eagerly.
When I try to write down the thoughts that make me feel the force of presentism, the force of the thoughts is largely drained. After all, to be fair, when I wrote that I am having trouble “explaining why my experience only includes what I am experiencing now”, shouldn’t I have written: “explaining why my present experience only includes what I am experiencing now”, a triviality? And that mysterious sequentiality, is that anything beyond the fact that some of my experiences are in the future of my present experience?
The first part of the mystery is due to the chopped up nature of my consciousness on a four-dimensional view. Instead of seeing my life as a whole, as God sees it, I see it in very short (but probably not instantaneous) pieces. It is puzzling how my consciousness can be so chopped up, and yet be all mine. But we have good reason to think that this phenomenon occurs even apart from temporality. Split brain patients seem to have such chopped up consciousnesses. And if consciousness is an operation of the mind, then on orthodox Christology, the incarnate Christ, while one person, had (and still has) two consciousnesses.
Unfortunately, both the split brains and the Incarnation are mysterious phenomena, so they don’t do much to take away the feeling of mystery about the temporal chopping up of the consciousness of my four-dimensional life. But they do make me feel that there is no good argument for presentism here.
The second part of the mystery is due to the sequentiality of the experiences. As the split brain and Incarnation cases show, the sequentiality of experiences in different spheres of consciousness is not universal. The split brain patient has two non-sequential, simultaneous spheres of consciousness. Christ has his temporal sphere (or spheres, if we take the four-dimensional view) of consciousness and his divine atemporal sphere of consciousness. But seeing the contingency of the sequentiality does not remove the mystery in the sequentiality.
It makes me feel a little better when I recall that the presentist story about the sequentiality has its own problems. If my future experiences aren’t real—on presentism they are nothing but stuff in the scope of a modal “will” operator that doesn’t satisfy the T axiom—then what am I anticipating or dreading? It seems I am just here in the present, and when I think about this, it feels just as mysterious as on four-dimensionalism what makes the future impend. Of course, the presentist can give a reductive or non-reductive account of the asymmetry between past and future, but so can the four-dimensionalist.
So what remains of this morning’s presentist feelings? Mostly this worry: Time is mysterious and our theories of time—whether eternalist or presentist—do not do justice to its mysteriousness. This is like the thought that qualia are mysterious, but when we give particular theories of them—whether materialist or dualist—it feels like something is left out.
But what if I forget about standard four-dimensionalism and presentism, and just try to see what theory of time fits with my experiences? I then find myself pulled towards a view of time I had when I was around ten years old. Reality is four-dimensional, but we travel through it. Future sufferings I dread are there, ahead of me. But I am not just a temporal part among many: there is no future self suffering future pains and enjoying future pleasures. The past and future have physical reality but it’s all zombies. As for me, I am wholly here and now. And you are wholly here and now. We travel together through the four-dimensional reality.
But these future pains and pleasures, how can they be if they are not had by me or anyone else? They are like the persisting smile of the Cheshire cat. (I wasn’t worried about this when I was ten, because I was mainly imagining myself as traveling through events, and not philosophically thinking about my changing mental states. It wasn’t a theory, but a way of thinking.) Put that way, maybe it’s not so crazy. After all, the standard Catholic view of the Eucharist is that the accidents of bread and wine exist without anything having them. So perhaps my future and past pains and pleasures exist without anyone having them—but one day I will have them.
Even this strange theory, though, does not do justice to sequentiality. What makes it be the case that I am traveling towards the future rather than towards the past?
And what about Relativity Theory? Why don’t we get out of sync with one another if we travel fast enough relative to one another? Perhaps the twin who travels at near light speed comes back to earth and meets only zombies, not real selves? That seems absurd. Maybe though the internal flow of time doesn’t work like that.
I do think this is an attractive theory. It is the theory that best fits most of my experience of temporality, and that is a real consideration in favor of it. But it doesn’t solve the puzzle of sequentiality. I think I will stick with four-dimensionalism. For now. (!)
There are at least two reasons to think we are simple:
1. It is difficult to explain how a non-simple thing can have a unity of consciousness.
2. There is David Barnett’s “pairs” argument.
But we are clearly extended.
So, we are extended simples.
So, there are extended simples.
(That said, while I am happy with the idea that we are extended simples, I am suspicious of both 1 and 2.)
We can own dogs, trees, forests, cars, chairs, computers and cupcakes, but of these examples, only dogs and trees really exist. Many of the things we own do not really exist. This makes me sceptical of the idea that there are strong property rights independent of positive law.
You might stop me by saying that my ontology is simply too restrictive. Maybe forests, cars, chairs, computers and cupcakes all really exist. I doubt it, but the examples of non-existent things we can in principle own can be multiplied. It is just as reasonable to talk of owning the vacuum inside a flask as it is to talk of owning the cocoa inside a cup. In both cases, labor was needed to generate the “thing” owned, and there is a reasonable moral expectation of non-interference with respect to it. (I would be destroying your property if I beamed a gas into your vacuum flask.)
What does this have to do with scepticism of strong property rights independent of positive law? First, it becomes very difficult to draw a principled line between ownables and non-ownables. Second, once we recognize that we can own things that don’t exist, such as vacua, it becomes difficult to distinguish “things” we have created and own from other kinds of outcomes of our activity. It then becomes plausible that the relevant right is one that should apply to outcomes of activity without much regard for whether that outcome is a thing that exists, a “thing” that doesn’t exist, or some other kind of outcome, such as a mountain’s being enchanted. There seems to be some kind of a right not to have the intended outcome of one’s virtuous activity destroyed without good reason. But how good the reason has to be will vary widely from case to case, so it is unlikely that this kind of a right will ground a strong view of property rights independent of positive law.
But the difficult is not the impossible. For it may be that although it would be difficult to make the needed distinctions, these distinctions could be grounded in highly detailed facts encoded in our natures.
Here’s a familiar kind of argument:
1. A spatial arrangement of ingredients of a mental life would not yield a unity of consciousness.
2. We have a unity of consciousness.
3. So, our mental life is not constituted by a spatial arrangement of ingredients.
4. So, our minds are not spatially extended entities (and in particular they are not brains).
(The last step requires some additional premises about extension and mereology.)
But our unity of consciousness also includes ingredients that take time. We are aware of motion, and motion takes time. We consciously think temporally extended thoughts. If we take the argument (1)-(4) seriously, it looks like we should similarly conclude that our souls are not temporally extended entities.
This might be a reductio ad absurdum of the line of argument (1)-(4). For it seems that even the dualist will recognize the essentially temporally extended nature of many of our conscious states.
Or maybe it’s an argument for a Kantian view on which we have a noumenal self that is beyond space and time as the physicists conceive of them.
Let S(t) be the state our universe actually has t units of time after the Big Bang.
Let’s suppose that our universe begins at time 0, i.e., at the Big Bang. Then we can ask this question:
1. Why did the universe come into existence in state S(0) rather than in state S(t) for some t > 0?
If naturalism is true, the universe’s beginning as it does is probably a brute (unexplained) contingent fact. And while the state S(0) is rather different in physical arrangement from the states S(t) for t > 0, it does not seem different with respect to the likelihood of brutely coming into existence ex nihilo. So question (1) is not easily dismissed, in the way that one might dismiss the question “Why is the third digit of the gravitational constant in SI units a 7 instead of some other digit?” by saying “Well, it had to be something, and there is nothing special about 7.” For there is something special about S(0), namely that it is a singularity state that blocks retrodiction, but this something special does not seem relevant to the likelihood of brutely coming into existence—if only because brute coming into existence cannot be probabilistically quantified.
Theism, on the other hand, provides a satisfying answer to (1). First, as a warmup, we might have a good theistic story why the initial state isn’t S(t) for a very large t: perhaps the universe will no longer be habitable after t, and God has good reason to create a habitable universe. Second, we have a good reason why the initial state should be S(0): a singularity is a natural barrier to retrodiction, and so creating a universe whose past goes back to a singularity is creating a universe where the retrodictive knowledge of beings like us is maximized (cf. Robin Collins’ ideas on our world being optimized for science). And knowledge is good. So God has reason to create a world in initial state S(0).
Technical note: Perhaps there is no singularity state S(0), but only states S(t) for t > 0. If so, then replace S(0) in the above argument with facts about limits from above of S(t). Abstractly, we can form such limit states as disjunctions of shrinking conjunctions: let U(t) = ⋁_{δ>0} ⋀_{0<ϵ<δ} S(t+ϵ), and say that the universe “begins to exist in state U(t)” provided that it doesn’t exist at time t or earlier, but U(t) correctly describes an opening interval of the universe’s existence. Then replace (1) with the question of why the universe begins to exist in state U(0) rather than in state U(t) for some t > 0.
I’ve long been puzzled by materiality.
Here’s a thought: What if materiality isn’t characterized by anything deeply metaphysical, but by a physical quality? Perhaps to be material just is to have something like inertia, or mass, or energy?
(I think that to have zero of some quality like mass is still to have mass. A mass of x is a determinate of the determinable mass even if x = 0. Photons have mass, while numbers don’t.)
Suppose first a countably infinite line of blindfolded people standing on tiles numbered 0,1,2,…, with the ones on a tile whose number is divisible by 10 having a red hat, and the others having blue hats. Suppose you’re in the line, with no idea where, but apprised of the above. It seems you should reasonably think: “Probably my hat is blue.”
But then the blindfolded people are shuffled, without any changes of hats, so that now it is the tiles with numbers divisible by 10 that have the blue hatters and the others have the red hatters. Such mere shuffling shouldn’t change what you think. So after being informed of the shuffle, it seems you should still think: “Probably my hat is blue.” It is already puzzling, though, why the first arrangement defined the probabilities and not the second. (What does temporal order have to do with these probabilities?)
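A mere shuffle of this sort is possible because both groups are countably infinite, even though the blue hatters started out nine times as frequent. Here is a sketch of one explicit bijection (the helper names and the finite checking window are my own choices):

```python
# Blue hatters start on tiles NOT divisible by 10; red hatters on
# multiples of 10. The shuffle sends the n-th blue hatter to tile 10n
# and the n-th red hatter to the n-th non-multiple of 10.

def nth_non_multiple_of_10(n):
    # 0-indexed enumeration: 1, 2, ..., 9, 11, 12, ..., 19, 21, ...
    return (n // 9) * 10 + (n % 9) + 1

def new_tile(old_tile):
    if old_tile % 10 != 0:                     # blue hatter
        index = old_tile - old_tile // 10 - 1  # rank among non-multiples
        return 10 * index
    else:                                      # red hatter
        return nth_non_multiple_of_10(old_tile // 10)

# Check a finite window: after the shuffle, exactly the tiles
# divisible by 10 hold blue hatters.
placement = {new_tile(t): ('red' if t % 10 == 0 else 'blue')
             for t in range(10000)}
print(all((placement[t] == 'blue') == (t % 10 == 0)
          for t in range(1000)))  # prints True
```

No hats change hands, yet the tile-by-tile frequencies invert: that is exactly what makes it puzzling which arrangement should fix the probabilities.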
Now suppose you gather the nine people after you (in the tile order—even though you are blindfolded, I suppose you can tell which direction the tile numbers increase) along with yourself into a group of ten. In any group of ten successive people on the line, there is exactly one blue hat and nine red hats. Yet each of the ten of you thinks: “Probably my hat is blue.” And by a reasonable closure, you each also think: “Probably the other nine all have red hats.” You talk about it. You argue about it. “No, I am probably the one with the blue hat!” “No, my hat is probably the blue one.” “No, you’re probably both wrong: It’s probably mine.” I submit there is no rational room for any resolution to the disagreement, and indeed no budging of probabilities, no matter how much you pool your data, no matter how completely you recognize your epistemic peerhood, no matter how you apply exactly the same reasonable principles of reasoning. For nothing you learn from the other people is evidentially relevant. This is paradoxical.
We have two tenure-track jobs at Baylor. Both are open, but we have different preferred (but not required) specializations for them:
Consider forty rational people each individually keeping track of the ethnicities and virtue/vice of the people they interact with and hear about (admittedly, one wonders why a rational person would do that!). Even if there is no statistical connection—positive or negative—between being Polish and being morally vicious, random variation in samples means that we would expect two of the forty people to gain evidence of a statistically significant connection—positive or negative—between being Polish and being morally vicious at the p = 0.05 level. We would, further, intuitively expect one of the forty to conclude on the basis of their individual data that there is a statistically significant negative connection between Polishness and vice, and one to conclude that there is a statistically significant positive connection.
It seems to follow that for any particular ethnic or racial or other group, at the fairly standard p = 0.05 significance level, we would expect about one in forty rational people to have a rational racist-type view about any particular group’s virtue or vice (or any other qualities).
If this line of reasoning is correct, it seems that it is uncharitable to assume that a particular racist’s views are irrational. For there is a not insignificant chance that they are just one of the unlucky rational people who got spurious p = 0.05 level confirmation.
Of course, the prevalence of racism in the US appears to be far above the 1/40 number above. However, there is a multiplicity of groups one can be a racist about, and the 1/40 number is for any one particular group. With five groups, we would expect approximately 5/40 = 1/8 (more precisely, 1 − (39/40)⁵) of rational people to get p = 0.05 confirmation of a racist-type hypothesis about at least one of the groups. That’s still presumably significantly below the actual prevalence of racism.
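The expected counts can be checked by simulation. Below is a sketch (the sample size, base rates, and the choice of a two-proportion z-test are my own assumptions, not part of the original argument): forty observers each gather data in which group membership and vice are statistically independent, and we count how many nonetheless reach significance.

```python
import math
import random

random.seed(0)

def z_statistic(n, p_group=0.1, p_vice=0.2):
    """One observer's sample of n people; group membership and vice
    are generated independently, so any 'connection' is spurious."""
    data = [(random.random() < p_group, random.random() < p_vice)
            for _ in range(n)]
    in_group = [v for g, v in data if g]
    out_group = [v for g, v in data if not g]
    if not in_group or not out_group:
        return 0.0
    p1 = sum(in_group) / len(in_group)
    p2 = sum(out_group) / len(out_group)
    pooled = (sum(in_group) + sum(out_group)) / n
    se = math.sqrt(pooled * (1 - pooled)
                   * (1 / len(in_group) + 1 / len(out_group)))
    return 0.0 if se == 0 else (p1 - p2) / se

zs = [z_statistic(500) for _ in range(40)]
either = sum(abs(z) > 1.96 for z in zs)  # significant either way: ~2 of 40
positive = sum(z > 1.96 for z in zs)     # "racist-type" direction: ~1 of 40
print(either, positive)
print(1 - (39/40) ** 5)  # chance of a spurious hit on one of five groups
```

The last printed value, 1 − (39/40)⁵ ≈ 0.119, is indeed a bit under 1/8.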
But in any case this line of reasoning is not correct. For we are not individual data gatherers. We have access to other people’s data. The widespread agreement about the falsity of racist-type claims is also evidence, evidence that would not be undercut by a mere p = 0.05 level result of one’s individual study.
So, we need social epistemology to combat racism.
I think it is possible for one mind to have multiple spheres of consciousness. One kind of case is diachronic: there need be no unity of consciousness between my awareness at t1 and my awareness at t2. Split brain patients provide a synchronic example. (I suppose in both cases one can question whether there is really only one mind, but I’ll assume so.)
What if, then, it turned out that we do not actually have any unconscious mental states? Perhaps what I call “unconscious mental states” are actually conscious states that exist in a sphere of consciousness other than the one connected to my linguistic productions. Maybe it is the sphere of consciousness connected to my linguistic productions that I identify as the “conscious I”, but both spheres are equally mine.
An advantage of such a view would be that we could then accept the following simple reductive account of consciousness: to be a conscious state just is to be a mental state.
Of course, this is only a partial reduction: the conscious is reduced to the mental. I am happy with that, as I doubt that the mental can be reduced to the non-mental. But it would be really cool if the mystery of the conscious could be reduced.
However, the above story still doesn’t fully solve the problem of consciousness. For it replaces the puzzle as to what makes some of my mental states conscious and others unconscious with the puzzle of what makes a plurality of mental states co-conscious, i.e., a part of the same sphere of consciousness. Perhaps this problem is more tractable than the problem of what makes a state conscious was, though?
I rarely take myself to know that someone is culpable for some particular wrongdoing. There are three main groups of exceptions:
1. my own wrongdoings, so many of which I know by introspection to be culpable
2. cases where others give me insight into their culpability through their testimony, their expressions of repentance, etc.
3. cases where divine revelation affirms or implies culpability (e.g., Adam and David).
In type 2 cases, I am also not all that confident, because unless I know a lot about the person, I will worry that they are being unfair to themselves.
I am amazed that a number of people have great confidence that various infamous malefactors are culpable for their grave injustices. Maybe they are, but it seems easier to believe in culpability in the case of more minor offenses than greater ones. For the greater the offense, the further the departure from rationality, and hence the more reason there is to worry about something like temporary or permanent insanity or just crazy beliefs.
I don’t doubt that most people culpably do many bad things, and even that most people on some occasion culpably do something really bad. But I am sceptical of my ability to know which of the really bad things people do they are culpable for.
The difficulty with all this is how it intersects with the penal system. Is there maybe a shallower kind of culpability that is easier to determine and that is sufficient for punishment? I don’t know.
Consider:
1. I do not believe (1).
Add that I am opinionated on what I believe:
2. For every proposition p, either I believe that I believe p or I believe that I do not believe p.
Finally, add:
3. My beliefs are closed under entailment.
Now I either believe (1) or not. If I do not believe (1), then I don’t believe that I don’t believe (1), by closure. But thus, by (2), I do believe that I do believe (1). Hence in this case:
4. I have a false belief about what I believe.
Now suppose I do believe (1). Then I believe that I don’t believe (1), by closure and by what (1) says. So, (4) is still true.
Thus, we have an argument that if I am opinionated on what I believe and my beliefs are closed under entailment, then I am mistaken as to what I believe.
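The two cases can be checked mechanically. Here is a toy model (my own schematization, not a worked-out doxastic logic), where B1 records whether I believe (1), and believing (1) is identified with believing that I don’t believe (1):

```python
def mistaken_about_beliefs(B1):
    # Believing (1) just is believing "I do not believe (1)".
    believe_I_dont_believe_1 = B1
    # Opinionation: lacking the belief that I don't believe (1),
    # I believe that I believe (1).
    believe_I_believe_1 = not believe_I_dont_believe_1
    # Mistaken iff some second-order belief misreports B1.
    return ((believe_I_believe_1 and not B1)
            or (believe_I_dont_believe_1 and B1))

# Whether or not I believe (1), I am mistaken about what I believe:
print([mistaken_about_beliefs(B1) for B1 in (False, True)])  # [True, True]
```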
(Again, we need some way of getting God out of this paradox. Maybe the fact that God’s knowledge is non-discursive helps.)
Socrates thought it was important that if you didn't know something, you knew you didn't know it. And he thought that it was important to know what followed from what. Say that an agent is Socratically perfect provided that (a) for every proposition p that she doesn't know, she knows that she doesn't know p, and (b) her knowledge is closed under entailment.
Suppose Sally is Socratically perfect and consider:
1. Sally does not know the proposition expressed by (1).
If Sally knows the proposition expressed by (1), then (1) is true, and so Sally doesn’t know the proposition expressed by (1). Contradiction!
If Sally doesn’t know the proposition expressed by (1), then she knows that she doesn’t know it. But that she doesn’t know the proposition expressed by (1) just is the proposition expressed by (1). So Sally knows the proposition expressed by (1). Contradiction!
So it seems it is impossible to have a Socratically perfect agent.
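The impossibility claim can likewise be checked by brute force in a toy model (again my own schematization): neither truth value for “Sally knows (1)” is consistent with factivity plus the Socratic condition.

```python
def consistent(K1):
    """K1: whether Sally knows (1), where (1) says 'Sally does not
    know the proposition expressed by (1)'."""
    truth_of_1 = not K1
    if K1 and not truth_of_1:
        return False  # factivity: knowledge implies truth
    if not K1:
        # Socratic condition: she then knows that she doesn't know (1);
        # but that content just is (1), so she knows (1) after all.
        return False
    return True

# No assignment is consistent:
print([K1 for K1 in (False, True) if consistent(K1)])  # []
```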
(Technical note: A careful reader will notice that I never used closure of Sally’s knowledge. That’s because (1) involves dubious self-reference, and to handle that rigorously, one needs to use Goedel’s diagonal lemma, and once one does that, the modified argument will use closure.)
But what about God? After all, God is Socratically perfect, since he knows all truths. Well, in the case of God, knowledge is equivalent to truth, so (1)-type sentences just are liar sentences, and so the problem above just is the liar paradox. Alternately, maybe the above argument works for discursive knowledge, while God’s knowledge is non-discursive.
Scoring rules measure the distance between a credence and the truth value, where true=1 and false=0. You want this distance to be as low as possible.
Here’s a fun paradox. Consider this sentence:
1. At t1, my credence in (1) is less than 0.1.
(If you want more rigor, use Goedel’s diagonalization lemma to remove the self-reference.) It’s now a moment before t1, and I am trying to figure out what credence I should assign to (1) at t1. If I assign a credence less than 0.1, then (1) will be true, and the epistemic distance between 0.1 and 1 will be large on any reasonable scoring rule. So, I should assign a credence greater than or equal to 0.1. In that case, (1) will be false, and I want to minimize the epistemic distance between the credence and 0. I do that by letting the credence be exactly 0.1.
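Since the sentence’s truth value is itself a function of the credence assigned to it, the optimum can be confirmed with a grid search. A minimal sketch, assuming the Brier (squared-distance) score as the scoring rule—the choice of Brier is my assumption:

```python
# Truth value of the self-referential sentence: it is true exactly
# when the credence assigned to it is less than 0.1.
def truth(credence):
    return 1.0 if credence < 0.1 else 0.0

# Brier score: squared distance from the truth value (lower is better).
def brier(credence):
    return (credence - truth(credence)) ** 2

# Scan a fine grid of candidate credences.
grid = [i / 1000 for i in range(1001)]
best = min(grid, key=brier)
print(best)  # 0.1: any lower credence makes the sentence true, at a
             # squared distance of at least 0.81 from truth value 1
```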
So, I should set my credence to be exactly 0.1 to optimize epistemic score. Suppose, however, that at t1 I will remember with near-certainty that I was setting my credence to 0.1. Thus, at t1 I will be in a position to know with near-certainty that my credence for (1) is not less than 0.1, and hence I will have evidence showing with near-certainty that (1) is false. And yet my credence for (1) will be 0.1. Thus, my credal state at t1 will be probabilistically inconsistent.
Hence, there are times when optimizing epistemic score leads to inconsistency.
There are, of course, theorems on the books that optimizing epistemic score requires consistency. But the theorems do not apply to cases where the truth of the matter depends on your credence, as in (1).
Consider this modified version of William James’ mountaineer case: The mountaineer’s survival depends on his jumping over a crevasse, and the mountaineer knows that he will succeed in jumping over the crevasse if he believes he will succeed, but doesn’t know that he will succeed as he doesn’t know whether he will come to believe that he will succeed.
James used his version of the case to argue that pragmatic reasons can legitimately override lack of epistemic reasons.
But what is interesting to me in my variant is the way it provides a counterexample to evidentialism. Evidentialists say that you epistemically should form your beliefs only on the basis of evidence. But notice that although the belief that he will succeed at the jump needs to be formed in the absence of evidence for its truth, as soon as it is formed, the belief itself becomes its own evidence to the point that it turns into knowledge. The belief is self-guaranteeing. So there seems to be nothing to criticize epistemically about the formation of the belief, even though the formation is independent of evidence. In fact, it seems, there is a good epistemic reason to believe, since by believing the mountaineer increases the stock of his knowledge.
Moreover, we can even make the case be one where the evidence on balance points against the proposition. Perhaps the mountaineer has attempted, in safer circumstances, to get himself to believe that he can make such a jump, and seven times out of ten he has failed at both self-induction of belief, and also at the jump. But in the remaining three times out of ten, he succeeded at both. So, then, the mountaineer has non-conclusive evidence that he won’t manage to believe that he will succeed (and that he won’t succeed). If he comes to believe that he will succeed, he comes to believe this against the evidence—but, still, in so doing, he increases his stock of knowledge, since the belief, once believed, is self-guaranteeing.
(This phenomenon of self-guaranteeing belief reminds me of things that Kierkegaard says about faith, where faith itself is a miracle that hence is evidence for its truth.)
Interestingly, we might also be able to construct cases of well-evidenced but self-defeating beliefs. Consider a jeweler who has noticed that she is successful at cutting a diamond if and only if she believes she will be unsuccessful. Her theory is that belief in her success makes her insufficiently careful. Over time, she has learned to suspend judgment in her success, and hence to be successful. But now she reflects on her history, and she finds herself with evidence that she will be successful in cutting the next diamond. Yet if she believes on this evidence, this will render her overconfident, and hence render the belief false!
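Put schematically (a toy model of my own devising), the mountaineer’s belief has fixed points where belief and outcome agree, while the jeweler’s belief has none:

```python
# Outcome as a function of belief in success.
def mountaineer_succeeds(believes):
    return believes      # self-guaranteeing: succeeds iff he believes

def jeweler_succeeds(believes):
    return not believes  # self-defeating: succeeds iff she disbelieves

# A belief is "stable" when it matches the outcome it produces.
stable_m = [b for b in (False, True) if mountaineer_succeeds(b) == b]
stable_j = [b for b in (False, True) if jeweler_succeeds(b) == b]
print(stable_m, stable_j)  # [False, True] []
```

The empty list for the jeweler is the self-defeat: no belief state about her success can be accurate.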
This is related to the examples in this paper on lying.
So perhaps what the evidentialist needs to say is that you epistemically may believe p if and only if the evidence says that if you believe p, p is true?