Wednesday, February 29, 2012
Philosophy of cosmology blog
Sketches towards a theory of quantifiers and quantifier variance
Quantifier variance theorists think that there can be multiple kinds of quantifiers. Thus, there could be quantifiers that range over only fundamental entities, but there could also be quantifiers that range over arbitrary mereological sums. I will call all the quantifiers with a particular range a "quantifier family". A given quantifier family will include its own "for all x (∀x)" and "for some x (∃x)", but may also include "for most x", "for at least two x", and similar quantifiers. I will, however, not include any negative quantifiers like "for no x" or "for at most seven x", or partially negative ones like "for exactly one x". I will also include "singular quantifiers", which we express in English with "for x=Jones". In fact, I will be working with a language that has no names in it as names are normally thought of in logic. Instead of names, there will be a full complement of singular quantifiers, one for each name-as-ordinarily-thought-of; I am happy to identify names with singular quantifiers for the purposes of logic.
Say that quantifier-candidates are operators that take a variable and a formula and return a formula in which that variable is not open. Consider a set F of quantifier-candidates with a partial ordering ≤, where I will read "Q≤R" as "R is at least as strong as Q", and with a symmetric "duality" relation D on F. There is also a subset N of elements of F which will be called "singular". Then F will be a quantifier family provided that
- There is a unique maximally strong operator ∀
- There is an operator ∃ dual to ∀
- If Q is dual to R then it can be proved that QxP iff ~Rx~P
- ∃ is minimally strong
- If R in F is at least as strong as Q in F, then from RxP one can prove QxP
- From P one can prove QxP for any Q
- From ∀xP one can prove P (note: open formulae can stand in provability relations)
- If Q is singular, then Q is self-dual and Qx(A&B) can be proved from QxA and QxB
One can set up a model-theory as well. A domain-model for a quantifier family will include a set O of "objects", and a set S of sets of subsets of O, such that if E is a member of S, then E is an upper set, i.e., if a subset A of O is in E, then so is any superset of A as long as it is still a subset of O. A member of S will be called an "evaluator". To get a model from a domain-model, we add the usual set of relations. An interpretation I in a given model for a language with a quantifier family will then involve an ordinary interpretation of the language's predicates, plus an assignment of quantifiers to members of S subject to the constraints that (a) if Q≤R, then I(R) is a subset of I(Q), (b) I(∀) is the evaluator {O}, (c) if Q is dual to R, then A is a member of I(Q) if and only if the complement O−A is not a member of I(R), (d) if Q is singular, then I(Q) is a filter-base. We can then define truth under I using the basic idea that QxP is true if and only if the set of objects o such that o satisfies P when put in for x is a member of I(Q) (this should be all done more carefully, of course).
(The interpretation of a name is always an ultrafilter. If we wanted to, we could restrict names to being interpreted as principal ultrafilters, in which case names would correspond to objects, but I think things are more interesting as they are.)
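Here is a minimal sketch of the model theory in Python, over a hypothetical three-object domain. The particular quantifiers chosen ("for most x" and a singular quantifier for one of the objects) are just illustrations; the code checks that each evaluator is an upper set, that ∃ is dual to ∀, that a singular quantifier is self-dual, and evaluates a sample quantified claim.

```python
from itertools import combinations

O = {"alice", "bob", "carol"}  # hypothetical three-object domain

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def is_upper_set(E, O):
    """An evaluator must be upward closed among subsets of O."""
    return all(B in E
               for A in E
               for B in powerset(O) if A <= B)

# Evaluators for a few quantifiers over O.
forall_E = {frozenset(O)}                                  # I(∀) = {O}
exists_E = {A for A in powerset(O) if A}                   # all nonempty subsets
most_E   = {A for A in powerset(O) if len(A) > len(O) / 2} # strict majority
jones_E  = {A for A in powerset(O) if "alice" in A}        # singular, x=alice

for E in (forall_E, exists_E, most_E, jones_E):
    assert is_upper_set(E, O)

def true_under(E, pred, O):
    """QxP is true iff the extension of P is a member of Q's evaluator."""
    return frozenset(o for o in O if pred(o)) in E

def dual_ok(EQ, ER, O):
    """Constraint (c): A is in I(Q) iff the complement O−A is not in I(R)."""
    return all((A in EQ) == (frozenset(O - A) not in ER)
               for A in powerset(O))

assert dual_ok(exists_E, forall_E, O)   # ∃ is dual to ∀
assert dual_ok(jones_E, jones_E, O)     # a singular quantifier is self-dual

print(true_under(most_E, lambda o: o != "carol", O))  # True: most objects aren't carol
```

Note that the singular evaluator here is a principal ultrafilter, as in the parenthetical remark above; non-principal ultrafilters would require an infinite domain.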
Ideally, we'd want to make sure we have soundness and completeness at this point. I'm basically just making this up as I go along, so there may be a literature on this, and if there is, there will presumably be results about soundness and completeness in it. And maybe we need more rules of inference and maybe I screwed up above. This is just a blog post. Moreover, we might want some further restrictions about how particular quantifiers, like "for most", are interpreted (the above just constrains it to having an evaluator between the evaluators for ∀ and ∃). The point of the above is more to give an example of what a formal characterization of a quantifier family might look like than to give the correct one.
But now it is time for some metaphysics. The notion of a quantifier family is a purely formal one. Moreover, the model-theoretic notion of interpretation that I used above won't be helpful for quantifier variance discussions because it talks of "sets of objects", whereas what "metaphysically counts as an object" varies between quantifier families.
It is easy to come up with quantifier families and perverse interpretations such that under such an interpretation, we would not want to count the members of the quantifier family "quantifiers". Nor would it be a quantifier variance thesis to say that there are many such families and interpretations, since that there are such is not controversial.
I think a Thomist can give an answer: a quantifier family in the formal sense is a bona fide quantifier family provided that the family is analogous to some privileged family of quantifiers, say quantifiers over substances. In other words, the different kinds of existence are defined by analogy to existence proper. This won't satisfy typical quantifier variance folk, as I think they don't want a privileged family of quantifiers. But that's the best I can do right now.
Tuesday, February 28, 2012
Conscience, authority and moral intuition
A former student of mine wrote to me with a query about how institutional Church authority could co-exist with the authority of individual conscience. She argued that ultimately my conscience will decide whether the authority is to be trusted, and quoted Anscombe as saying that one cannot help but be one's own pilot.
This made me think a bit more about conscience and authority. I had recently been reading about the Charles Bonnet and Musical Ear syndromes. In these, visual or hearing loss, respectively, apparently causes the brain to confabulate visual or auditory data, respectively, to fill in the sensorily deprived blanks. In Charles Bonnet Syndrome, the sufferers see things like colored patterns, faces, cartoons, etc. In Musical Ear Syndrome, they are apt to hear music. The significant thing about both syndromes is that the sufferers are quite sane and fully realize that the incorrect sensory data they are receiving is mere hallucination (that the hallucinations are limited to a single faculty must help there). They may, however, be distressed due to worries that they are insane, particularly if they are misdiagnosed by a psychiatrist, as in a case I recall hearing of.
A reasonable sufferer from one of these two syndromes will accept the testimony of reliable others that what she visually or auditorily perceives isn't there. In so doing, she is genuinely being her own pilot. Indeed, if she were to uncritically accept the visual or auditory data, she wouldn't be being her own responsible pilot: she would be replacing considered judgment with the flow of experience. Likewise, my colorblind son defers to the color judgments of others; an object may look light green to him, but when others testify that it is light pink, he accepts their judgment, and in so doing exercises his epistemic autonomy.
I think something similar can and does happen in moral matters. We have moral intuitions. These moral intuitions can be more or less reliable. But of course raw moral intuitions do not have a final say. Even apart from authority, moral intuitions need to be harmonized. And it may turn out that the best moral theory fitting the bulk of one's moral intuitions can go against some of one's moral intuitions, and then a judgment must be made.
Moreover, there is nothing contrary to being one's own pilot in making a reasonable judgment that a family of one's moral intuitions, or even all of one's moral intuitions, are less reliable than the testimony of an individual or institution one has reason to trust. That is just as much an exercise of one's epistemic autonomy as it would be to accept the moral intuitions over that testimony.
I think that sometimes we confuse conscience with moral intuitions. The deliverance of conscience is an all-things-considered judgment of what is morally to be done. It may take moral intuitions into account, but it may also take other relevant data into account as well. The deliverance of moral intuition is not, as such, the deliverance of conscience, though of course in the absence of evidence against the moral intuition, conscience is apt to reasonably accept the content of the moral intuition as true.
It is quite possible for one to reasonably come to the conclusion that one's moral intuitions are less reliable than the teaching of an authority. In such a case, when there is a conflict between one's moral intuition and a teaching of the authority, one's considered moral judgment will at least typically go with the teaching. (I say "at least typically" to leave open the possibility that, say, a particularly strong moral intuition might be judged more likely to be accurate than a teaching that the authority gives quite low weight to.) In so doing, one may very well be a responsible pilot of one's self, if the reasons for accepting the authority as reliable were very good ones.
And one is not going against conscience then. On the contrary, in such a case, it would go against conscience to follow the moral intuitions, because one's considered judgment is that the authority is more reliable than the intuitions.
Our moral intuitions, while being a genuine source of moral knowledge, are often distorted by the desire to find excuses for our own faults or, more excusably, those of friends. Moral intuitions should not be glorified with the name "conscience". Like a Charles Bonnet Syndrome patient, one can be reasonable in judging that one ought to submit to the judgment of another, and then the other's judgment is the deliverance of one's conscience.
At the same time, I should note that normally our moral intuitions will play a significant role in figuring out that a putative authority should be listened to. When the putative authority's teachings harmonize particularly well with those moral intuitions that we take to be more reliable, that will count in favor of the claim to authority, and when they disagree, that will count against the claim to authority. Here I think there is a useful rule of thumb: moral intuitions that something is permissible are less to be trusted than moral intuitions that something is impermissible. An action is impermissible provided there is a conclusive moral reason not to do it. An action is permissible provided that there is no conclusive moral reason not to do it. Generally, perceptions of absence are less to be trusted than perceptions of presence. Moreover, the space of reasons is large, and to judge that none of the infinitely many considerations in that space gives conclusive reason not to do A is fraught with difficulty. (Of course, judgments about permissibility are very often right, but perhaps only because of the base rate: most actions people perform are right.)
Monday, February 27, 2012
Do riches lead to vice?
Consolidating evidence
Here's something that surprised me when I noticed it, though it probably shouldn't have surprised me. The following can happen: My evidence favors p. Your evidence disfavors p. I know you are rational and competent. After talking with you, and consolidating evidence, I rationally increase my credence in p.
Here's a case. Suppose we have a coin which is either biased 3:1 in favor of heads or biased 3:1 in favor of tails. We don't know which. I have observed a few coin tosses, and they included four tails and seven heads. My evidence supports the hypothesis that the coin is biased in favor of heads. You have observed a few coin tosses, and they were four tails and two heads. Your evidence supports the hypothesis that the coin is biased in favor of tails. Intuitively, I should lower my credence in the heads-bias hypothesis when I learn of your evidence.
But imagine further that the four tail tosses you observed are the same four tail tosses that I observed, but the two heads tosses you observed were not among the seven heads tosses I observed. Then consolidating our evidence, we get four tails and nine heads, which supports the heads-bias hypothesis.
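The arithmetic here can be checked with a few lines of Python. This is a sketch in which "biased 3:1 in favor of heads" is taken to mean that heads has chance 3/4, and the prior for each bias hypothesis is 1/2:

```python
from fractions import Fraction

# The coin is either heads-biased (heads with chance 3/4) or
# tails-biased (heads with chance 1/4); prior 1/2 for each hypothesis.
p_H = Fraction(3, 4)
p_T = Fraction(1, 4)

def posterior_heads_bias(heads, tails, prior=Fraction(1, 2)):
    """Posterior probability of the heads-bias hypothesis, by Bayes' theorem."""
    like_H = p_H**heads * (1 - p_H)**tails
    like_T = p_T**heads * (1 - p_T)**tails
    return prior * like_H / (prior * like_H + (1 - prior) * like_T)

mine   = posterior_heads_bias(7, 4)          # my 7 heads and 4 tails
yours  = posterior_heads_bias(2, 4)          # your 2 heads and 4 tails
naive  = posterior_heads_bias(7 + 2, 4 + 4)  # double-counts the shared 4 tails
pooled = posterior_heads_bias(7 + 2, 4)      # the 4 tails were the same tosses

print(float(mine))    # 27/28: my evidence favors heads-bias
print(float(yours))   # 1/10: your evidence favors tails-bias
print(float(naive))   # 3/4: naive pooling lowers my credence...
print(float(pooled))  # 243/244: ...but proper consolidation raises it
```

Naive pooling of the two bodies of evidence would move my posterior down from 27/28 to 3/4, while eliminating the double-counted tails moves it up to 243/244.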
This is humdrum: When we consolidate evidence, we need to watch out for double counting in either direction. The above case makes this striking, because when we eliminate double counting, we get confirmation in the opposite direction to what we would initially have expected.
There is a very practical moral of the above story. It is important not only to remember one's credence in the propositions one believes and cares about, but also the evidence that gave rise to this credence. For if one does not remember this evidence, it will be difficult to avoid double counting (or subtler failures of independence).
By the way, I think it is helpful to think of the disagreement literature as well as discussions about the nature of arguments and other social epistemology stuff as interpersonal data consolidation problems. Getting clear on what we are aiming at should help. You have data, I have data, we all have data. What we are aiming for are methods (algorithms, virtues, etc.) that help us consolidate data across persons to get a better picture of reality than we are likely to have individually.
Moreover, I think that morally speaking it is very important when engaging in argumentation to remember what we are doing: the telos of arguing is to consolidate data across persons in order to get to truth and understanding. This telos is social, as befits social animals. It is not the telos of an argument that I convince you of the argument's conclusion. Rather it is that I convince you of the truth or show you how truth hangs together. If instead of convincing you of the argument's conclusion I convince you by modus tollens of the falsity of one of the premises, and in fact the conclusion is false and so is that premise, then the point of arguing has perhaps been fulfilled. And if in a case where the conclusion is false my argument convinces you of that conclusion, then the argument is a failure qua argument.
Saturday, February 25, 2012
Reasons of trust
Suppose you promise me to do something and suppose I should trust you. Then I have a moral reason not to check whether you did what you promised. Of course, if I have a special responsibility for it, I may also have a moral reason to check. But generally speaking, I think we have an imperfect duty not to check up on people when we should trust them. Moreover, we should trust people unless we have good reason to the contrary. I would be wronging a colleague if, out of the blue, I were to start running his papers through TurnItIn.com to look for plagiarism. Such an action would be a failure to show required trust. It would thus be contrary to collegial love.
Natural love, thus, requires natural faith of us. But our supernatural love for Christ requires supernatural faith of us.
Thursday, February 23, 2012
Infinite lotteries and infinitesimal probabilities
The argument in this post is based on a construction by Dubins (see Example 2.1 here) that I've switched into an infinitesimal case.
Suppose you can have an infinite lottery with ticket numbers 1,2,3,... and each ticket has infinitesimal probability (perhaps the same one for each). Then really weird stuff can happen. Say I toss a fair coin, but don't show you the result. Instead, you know for sure that I will do this:
- If the coin was tails, I run an infinite lottery with ticket numbers 1,2,3,... and with each ticket having infinitesimal probability
- If the coin was heads, I run an infinite lottery with the same ticket numbers, but now the probability of ticket n is 2^{−n}.
Here's the oddity. No matter what my announcement, you will end up all but certain—i.e., assigning a probability infinitesimally short of 1—that the coin was heads. Here's why. Suppose I announce ticket n. Now, P(n|heads)=2^{−n} but P(n|tails) is infinitesimal. Plugging these facts into Bayes' theorem, and assuming that your prior probability for heads was 1/2 (actually, all that's needed is that it be neither zero nor infinitesimal), your posterior probability P(heads|n) ends up equal to 1−a where a is infinitesimal.
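Spelling out the Bayes computation, write b for the infinitesimal P(n|tails), so as not to clash with the a above. Then:

- P(heads|n) = P(n|heads)P(heads) / [P(n|heads)P(heads) + P(n|tails)P(tails)] = (2^{−n}·(1/2)) / (2^{−n}·(1/2) + b·(1/2)) = 2^{−n}/(2^{−n}+b) = 1 − b/(2^{−n}+b),

and since 2^{−n} is a positive non-infinitesimal real while b is infinitesimal, the term b/(2^{−n}+b) is infinitesimal, so the posterior is indeed of the form 1−a with a infinitesimal.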
So I can rationally force you to be all but certain that it was heads, simply by telling you the result of my lottery experiment. And by reversing the arrangement, I could force you to be all but certain that it was tails. Thus there is something pathological about the infinite lottery with infinitesimal probabilities.
This is, to me, yet another of the somewhat unhappy results that show that probability theory has a quite limited sphere of epistemological application.
Tuesday, February 21, 2012
How likely am I to be misled?
Suppose a hypothesis H is true, and I assign a positive prior probability P(H)>0 to H. I now perform some experiment such that I have a correct estimate of how likely the various possible outcomes of the experiment are given H as well as how likely they are given not-H. (For simplicity, I will suppose all the possible outcomes of the experiment have non-zero probability.) It could turn out that the experiment supports not-H, even though H is true. That's the case of being misled by the outcome of the experiment.
How likely am I to be misled? Perhaps very likely. Suppose that I have a machine that picks a number from 1 to 100, and suppose that the only two relevant hypotheses are H, the hypothesis that the machine is biased in favor of the number 1, which it picks out with probability 2/100, while each of the other numbers is picked out with probability 98/9900, and not-H, which says that all the numbers are equally likely. My experiment is that I will run the machine once. Suppose that H is in fact true. Then, by Bayes' theorem, if I get the number 1, my credence for H will go up, while if I get anything else, my credence for H will go down. Since my probability of getting something other than 1 is 98/100, very likely I will be misled. But only by a little—the amount of confirmation that a result other than 1 gives to the fairness hypothesis not-H is very small.
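The numbers in this example can be checked directly. This sketch takes the prior for H to be 1/2 and computes both the probability (given H) of being misled and how small the resulting downgrade is:

```python
from fractions import Fraction

# H: the machine favors the number 1; not-H: all 100 numbers equally likely.
def p_given_H(n):
    return Fraction(2, 100) if n == 1 else Fraction(98, 9900)

def p_given_notH(n):
    return Fraction(1, 100)

def posterior_H(n, prior=Fraction(1, 2)):
    """Posterior for H after the machine outputs n, by Bayes' theorem."""
    num = prior * p_given_H(n)
    return num / (num + (1 - prior) * p_given_notH(n))

# Probability, given that H is true, that the outcome lowers my credence in H:
p_misled = sum(p_given_H(n) for n in range(1, 101)
               if posterior_H(n) < Fraction(1, 2))

print(float(p_misled))        # 0.98: being misled is very likely...
print(float(posterior_H(7)))  # 98/197, just under 1/2: ...but only slightly
print(float(posterior_H(1)))  # 2/3: the outcome 1 confirms H
```

So any outcome other than 1 drags the posterior from 1/2 down only to 98/197, a tiny change, even though such an outcome occurs 98% of the time under H.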
Could one cook up cases where I am likely to be misled to a significant degree?
It turns out that one can prove that one cannot. For we can measure the degree to which I am misled by the Bayes' factor B=P(E|H)/P(E|~H), where E is the outcome of the experiment. When B<1, then I am misled by the result of the experiment into downgrading my confidence in the truth of H, and the smaller B is, the more I am misled. But it turns out that there is a very elegant inequality saying how likely I am to be misled to any particular degree:
1. P(B≤y)≤y.
We can also state the result in terms of probabilities. Suppose I start with assigning probability 1/2 to H. I then perform an experiment and perform a Bayesian update. How likely is it that I end up with a posterior probability at or below p? It turns out that the probability that I will end up with a posterior probability at or below p is no more than p/(1−p). So, for instance, the probability that an experiment will lead me to assign probability 1/10 or less is at most 1/9, i.e., about 11%, no matter how carefully the experiment is set up to be unfavorable to H, as long as I have correct estimates of the likelihoods of the experiment's outcomes on H and on not-H.
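The inequality has a short proof: summing over the outcomes E with B(E)≤y, and computing probabilities under the true hypothesis H, we get P(B≤y) = Σ P(E|H) = Σ B(E)·P(E|~H) ≤ y·Σ P(E|~H) ≤ y. As a sanity check, here is a toy six-outcome experiment (the likelihoods are made up) on which both the Bayes' factor bound and the posterior bound can be verified:

```python
from fractions import Fraction

# A toy experiment with six outcomes; the two likelihood rows are
# arbitrary, but each sums to 1.
P_H = [Fraction(w, 24) for w in (1, 2, 3, 4, 5, 9)]   # P(E_i | H)
P_N = [Fraction(w, 24) for w in (6, 5, 4, 3, 2, 4)]   # P(E_i | ~H)

B = [ph / pn for ph, pn in zip(P_H, P_N)]  # Bayes factor of each outcome

def p_B_at_most(y):
    """P(B <= y), the outcome being distributed according to H (the truth)."""
    return sum(ph for ph, b in zip(P_H, B) if b <= y)

def p_posterior_at_most(p):
    """P(posterior <= p), with prior 1/2, so that posterior = B/(1+B)."""
    return sum(ph for ph, b in zip(P_H, B) if b / (1 + b) <= p)

# The bound P(B <= y) <= y:
for y in (Fraction(1, 10), Fraction(1, 3), Fraction(1, 2), Fraction(1)):
    assert p_B_at_most(y) <= y

# The posterior bound P(posterior <= p) <= p/(1-p):
for p in (Fraction(1, 10), Fraction(1, 4), Fraction(2, 5)):
    assert p_posterior_at_most(p) <= p / (1 - p)

print(float(p_B_at_most(Fraction(1, 2))))  # 0.125, comfortably below 0.5
```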
This has an interesting corollary. We know that it need not be the case that a series of experiments will yield convergence of posterior probabilities to truth, even if we have correct likelihood estimates. For the experiments might get less and less discriminating, and if they do so fast enough, the posteriors for the true hypothesis H will not converge to 1. But could things be so bad that the posteriors would be sure to converge to 0 (i.e., would do so with probability one)? No, it cannot be the case that with probability one the posteriors will converge to 0, because we could aggregate a sequence of experiments into a single large experiment (whose outcome is an n-tuple of outcomes) and use inequality (1).
So even though Bayesian convergence doesn't always happen, even if one has correct likelihoods, nonetheless we can be confident that Bayesian mis-convergence won't happen if we have correct likelihoods.
Monday, February 20, 2012
Gentler structuralisms about mathematics
According to some standard structuralist accounts, a mathematical claim, like the claim that there are infinitely many primes, is equivalent to a claim like:
1. Necessarily, for any physical structure that satisfies the axioms A_{1},...,A_{n}, the structure satisfies the claim that there are infinitely many primes.
The difficulty with this sort of structuralism is that while it may be fine for a good deal of "ordinary mathematics", such as real analysis, finite-dimensional geometry, dealing with prime numbers, etc., it is not clear that there are enough possible physical structures to model the axioms of such systems as transfinite arithmetic. And if there aren't, then antecedents in claims like (1) will be false, and hence the necessary conditional will hold trivially. One could bring in counterpossibles but that would be explaining the obscure with the obscurer.
I want to drop the requirement that the structures we're talking about are physical structures. Thus, instead of (1), we should say:
2. Necessarily, for any structure that satisfies the axioms A_{1},...,A_{n}, the structure satisfies the claim that there are infinitely many primes.
Next, restrict the theory to being about what modern mathematics typically means by its mathematical claims. If we do this, the claim becomes logically compatible with Platonism about numbers. Let us suppose that there really are numbers, and our ordinary language gets at them. Nonetheless, I submit, when a modern number theorist is saying that there are infinitely many primes, she is likely not making a claim specifically about them. Rather, she is making a claim about every system that satisfies the said axioms. If the natural numbers satisfy the axioms, then her claims have a bearing on the natural numbers, too.
Here is one reason to think that she's saying that. Mathematical practice is centered on getting what generality you can. What mathematician would want to limit a claim to being about the natural numbers, when she could, at no additional cost, be making a claim about every system that satisfies the Peano axioms?
Now, if we go for this gentler structuralism, and allow abstract entities, we can easily generate structures that satisfy all sorts of axioms. For instance, consider plural existential propositions. These are propositions of the form of the proposition that the Fs exist, where "the Fs" directly plurally refers to a particular plurality. We can define a membership relation: x is a member of p if and only if x is said by p to exist. Add an "empty proposition", which can be any other proposition (say, that cats hate dogs) and say that nothing is its member. Then plural existential propositions, plus the empty proposition, with this membership relation, should satisfy the axioms of a plausible set theory with ur-elements. If all one wants is Peano axioms, we can take them to be satisfied by the sequence of propositions that there are no cats, that there is a unique cat, that there are distinct cats x and y and every cat is x or is y, that there are distinct cats x and y and z and every cat is x or is y or is z, and so on.
I am not completely convinced that this sociological thesis about modern mathematics is correct. Maybe I can retreat to the claim that this is what modern mathematics ought to claim.
Saturday, February 18, 2012
How little we know
At times I am struck by just how little we know (and I don't even put much emphasis on the "know"). I work, inter alia, in philosophy of time and I can't answer my six-year-old's questions about the nature of time. We humans really aren't very smart at all, except at asking questions.
It is not surprising that our ability to ask questions would outpace our ability to find answers. But it is, I think, surprising just how far it outpaces it.
And yet we can know the important thing: that we are made to know and love God.
Thursday, February 16, 2012
Gutting on Church authority
Gary Gutting has an interesting opinion piece where he argues that the Bishops don't have the right to define the teachings of the Catholic Church for the purposes of American political discussion, because most American Catholics disagree with them on matters like contraception.
Imagine the Tall Persons' Club, where by well-established and generations-old tradition, the executive council is made up of the three tallest members, and the president is the tallest member. I voluntarily join the Tall Persons' Club, because I love many of its traditionally established activities, such as the annual cleaning of the giraffe enclosure in the local zoo, the discounted tickets to basketball games and the spectacular fireworks on Robert Pershing Wadlow's birthday.
However, I believe that the governing structure is an unfortunate one, because I think (a) height does not correlate with intelligence, (b) a focus on absolute rather than group-relative height is unfair to some ethnic groups, and (c) we should also do more for ostrich conservation than the present leadership does. Moreover, many members are with me on this. But nonetheless, by voluntarily joining the club, I have given its three tallest members a certain right to speak on my behalf on club-related matters. This is particularly true if there are other clubs that engage in similar activities but have a governing structure closer to what I like.
There are a number of important disanalogies, of course. For instance, one might believe that membership in the Catholic Church is necessary for eternal salvation. If one believes that, then one will have a very serious reason to be a member of the Church no matter how much one disagrees with the Magisterium, and the voluntariness that was essential to my story about the Tall Persons' Club is decreased. However, I don't know of any Catholics who disagree with the Magisterium on contraception who think that membership in the Catholic Church is necessary for salvation.
Another disanalogy is that many people become members of the Catholic Church not by their own choice, but by infant baptism (which, as I think Augustine notes, emphasizes that salvation is not by works). However, given a pluralistic society like ours, they are at least typically remaining in the Church voluntarily.
What counts as "the opinion of a group" is a really tough question. But it certainly isn't determined by looking at what the majority believe. For instance, it is false to say that it is the opinion of the Music Department that the earth goes around the sun, though no doubt that is the opinion of the majority of the members of the Music Department. It is not the opinion of the Music Department because the Music Department has not come to this view by the established methods for forming a corporate view of a matter proper to the Music Department. So majority opinion is not a sufficient condition for group opinion. Nor is it a necessary condition for something to be the opinion of a group that the majority believe it, even in the case of an institution whose traditional governance is by simple majority vote. A group can come up with a joint compromise proposition, approved by a majority vote, where in fact no one individual in the group endorses the proposition in its entirety (whether it is ever morally licit to vote in favor of a group resolution to endorse a proposition one takes to be false is a different question).
(Also, the following rather interesting thing can happen in a group. There may be two groups with the same or almost the same membership but with different governance structures, and opinions, preferences and decisions will then be differently attributable to the two groups. For instance, there may be the Music Department as an academic department and the Music Department as a social group. Perhaps the Music Department as a social group likes a particular brand of beer, but that preference is not of the Music Department as an academic group unless they vote for it in a Department meeting. It could be that there is the Tall Persons' Club as such and the Tall Persons' Club as a majority-governed group of individuals. We should then say that ostrich protection is a goal of the second group but not of the first.)
Furthermore, those of us who at least in principle like the idea of constitutional democracies (or monarchies, for that matter--I am Canadian, after all) should not say that the authority of a group derives from synchronic endorsement by the members. For it is a crucial feature (and very important for protecting minorities) of a constitutional system that it persists in authority even when at a particular time the majority fail to respect that authority (in this way, it is like marriage; one also thinks of Ulysses tied to the mast). The military oath in the United States is, importantly, an oath to protect the Constitution, not the present preferences and choices of the American people.
But I am out of my depth in the social/political philosophy stuff.
Wednesday, February 15, 2012
Some bad arguments from authority
Argument 1:

Only a minority of scientists believe that the speed of light is 299792458 m/s.
So, probably, the speed of light is not 299792458 m/s.
Argument 2:
If you asked most mathematicians about their credence that the 20th digit of pi is 4, they'd say it's 1/10.
So, probably, the 20th digit of pi is not 4.
Argument 3:
Only a minority of mathematicians believe Bayes' Theorem.
So, probably, Bayes' Theorem is false.
:-)
Tuesday, February 14, 2012
Augustine's problem
Augustine raises this problem: What was God doing for the infinite amount of time prior to creation? Why didn't God create the world earlier?[note 1] Augustine reports the joke answer that God was busy creating a hell for people who ask such questions, and then goes on to give his famous answer that God created the universe and time simultaneously.
Augustine's answer is a good one. The start of time is a non-arbitrary answer to the question of when to create the universe. However, Augustine's answer can only be adopted if God has an atemporal existence. So if Augustine's answer is the only one or the best one, we have an argument that God has an atemporal existence.
But could someone who takes God to be only a temporal being—say, an open theist—give an answer to Augustine's problem? If God is a temporal being, then time has infinite age, as God then does (the suggestion that God is a temporal being that has finite age is incompatible with divine eternity, even if it is compatible with the claim that God exists at all times). Hence no answer that depends on time's having a beginning will do.
One way to see a problem is to imagine God deliberating annually, say a million years before creation. God has good reason to create that year—after all, it is good that there be a created world. Maybe he has good reason not to create as well (maybe creation entails that there is imperfection; at least, creation makes reality less simple). But in any case, there intuitively should be some moderate probability, say 1/2, that he would create that year. But he doesn't. That's fine: he also had a probability of 1/2 that he wouldn't. But likewise he doesn't next year. He had probability 1/4 of creating in neither year. That's fine: events of that probability aren't very surprising. But keep on running this. He doesn't create in any of a thousand years. That's much less likely. The probability of that is 2^{−1000}. So, it seems, on the assumption that God is in time and there is an infinite past, we have very good reason to expect that God would have created the world earlier than he did—no matter when he created it!
There are two difficulties with this line of thought. The first is that numerical probabilities can't be assigned to divine deliberation. That's fine: they are still heuristic and highlight the extreme unlikeliness of the scenario of God waiting for an infinite amount of time before creating. The second is that it presupposes a particular model of how God deliberates whether to create, namely that he continually deliberates whether to create there and then.
Can we solve Augustine's problem if, instead, we accept a model on which God from eternity (i.e., at every past time) decides on a particular time t at which to create? Well, if God is changeable, that still leaves open the question of why God didn't change his mind—why God kept on waiting, even though he had reason to change his mind (namely, the reason that creation could come earlier if he changed his mind). If he had probability one in a million of changing his mind in any given year, we'd expect that over a million years he'd have changed his mind, and again an intuitive argument like before can be run. Maybe, though, a changing God can unchangeably determine his will (by making a promise to himself, maybe?), and at every time he always already had done so, given that he knew there would never be any new information becoming available? Or maybe God, while in time, is unchangeable, and hence his decisions cannot change? Or maybe God, from eternity, efficaciously willed that creation should occur at t_{0}—God's efficacious willing need not be contemporaneous with what is willed.
So there are some things that can be said about the changeability subproblem. Our best model right now for a God-in-time story is that God from eternity has unchangeably decided that creation would come at t_{0}. It is tempting to say "And then God waited until t_{0}." But waiting is what you do when you have little else worth doing, and God is ever infinitely active in his intratrinitarian communion. So we shouldn't say that. Let's focus, however, on a different subproblem. Why did God eternally choose t_{0}, rather than say t_{0}−1 year, for the date of creation? Absent distinctions coming from notable events within creation, all times are, presumably, exactly alike. I suppose two answers are available. First, that it is a reasonless divine choice. Second, that there is a reason, in that there is a particular incommensurable value associated with each possible time for creation. I think these are tenable answers, but it must be noted that neither is uncontroversial. The subproblem of why God chose this time rather than another is a hard one.
There is another subproblem, however, related to the changeability subproblem. Take an answer to the changeability subproblem. Consider the proposition p_{n} which says that in year n B.C., God was decided that he would create at t_{0}. What explains p_{n}? Presumably it's p_{n+1}. I.e., God was decided in year n B.C. because he was decided in year n+1 B.C. But this generates a vicious regress.
All of this suggests that Augustine's answer is the best answer to Augustine's problem. And we have reason to reject views on which God is a temporal being.
This leaves two kinds of views. The first are ones on which God is solely atemporal (except in virtue of an Incarnation) and the second are ones on which God exists atemporally, but with creation comes to be omnitemporally present as well, as an aspect of his omnipresence. I do not know if either view is compatible with the A-theory. Certainly, I don't think presentists can make sense of atemporal divine existence. So theists (at least ones who, like Christians and most relevant scientists, think that the universe has finite age) shouldn't be presentists.
Friday, February 10, 2012
Rawls and rationally intractable disagreement
Let me preface by saying I am not a political philosopher, and this may be off-base. Start with the following claims: (1)-(3) are granted for the sake of the argument, while (4) and (5) are rival hypotheses about how they are related:
1. The disagreement between comprehensive views is very long-standing and there is no progress to agreement, except when non-rational, coercive methods are applied to generate agreement or for other merely sociological reasons there happens to be cultural homogeneity.
2. People's idiosyncratic or culturally-based preferences, as well as their presently-held comprehensive views, often significantly bias them in their disagreements between comprehensive views.
3. It is not possible to resolve the disagreement between comprehensive views by reason alone.
4. Both (2) and (3) explain (1).
5. (2) by itself explains (1).
But now the question whether (1) is explained by (2) and (3), or simply by (2), is to a significant degree an empirical question.
And there is an obvious experiment to test between these options. Take a bunch of intelligent and rational people without idiosyncratic and culturally-based preferences who do not adhere to any comprehensive views, and see if they come to agree on a comprehensive view or against all of them. If they do, then (3) is not a part of an explanation of (1), and if they don't, then (3) is a part of an explanation of (1). And we cannot at present rule out the possibility that such an experiment would rule in favor of the hypothesis that (2) by itself explains (1).
But now note that this experiment is precisely the original situation of deliberation under the veil of ignorance. And we can say this directly: if it is an empirically open possibility that agreement on a comprehensive view or against all of them would arise in the original situation, then it seems to be an open possibility that the delegates would legislate in accordance with a comprehensive view or in ways that significantly impugn the freedom to follow comprehensive views. And that's unacceptable to Rawls.
Sound-bite version: Please don't infer that a debate would be unsettled in an idealized situation from the fact that it's unsettled in the real world.
But I probably don't know what I'm talking about.
Thursday, February 9, 2012
A method for testing definitions
I have a new method for testing definitions. Read a definiens to someone, out of context, and ask her what she thinks the definiendum is. If she doesn't come up with something pretty close to the definiendum, you've got reason to think the definition is bad.
One can also do this as a thought experiment, though it's probably less effective that way. What does "justified true belief with no false lemmas" define? Answer: nothing other than justified true belief with no false lemmas. (Maybe you were trying to define knowledge?) What does "Sex between two people at least one of whom is married and who are not married to each other" define? Answer: adultery. (Right!)
Probabilities, scoring functions, and an argument that it is infinitely worse to be certain that a truth is false than it is good to be certain that that truth is true
It turns out there is a nice solution to this, apparently due to Alan Turing, which I had fun rediscovering yesterday. Define
- φ(H) = −log(1/P(H) − 1) = log(P(H)/P(~H)), and
- φ(H|E) = −log(1/P(H|E) − 1) = log(P(H|E)/P(~H|E)).
But here is something else that's neat about φ. It lets you rewrite Bayes' theorem so it becomes:
- φ(H|E) = φ(H) + C(E,H),
where C(E,H) = log(P(E|H)/P(E|~H)) is the log of the likelihood ratio.
And it gets better. Suppose E_{1},...,E_{n} are pieces of evidence that are conditionally independent given H and conditionally independent given ~H. (One can think of these pieces of evidence as independent tests for H versus ~H. For instance, if our two hypotheses are that our coin is fair or that it is biased 9:1 in favor of heads, then E_{1},...,E_{n} can be the outcomes of successive tosses.) Then:
- φ(H|E_{1}&...&E_{n}) = φ(H)+C(E_{1},H)+...+C(E_{n},H).
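The additivity can be checked numerically. Here is a small Python sketch using the post's coin example; as an assumption of the sketch, I let H be the 9:1-biased hypothesis, ~H the fair-coin hypothesis, and C the log likelihood ratio of a single toss:

```python
import math

def phi(p):
    """Turing's transform: phi = -log(1/p - 1) = log(p / (1 - p))."""
    return math.log(p / (1 - p))

# The post's coin example: the coin is either fair or biased 9:1 for heads.
# Here (an assumption of this sketch) H is the biased hypothesis.
p_heads_H, p_heads_notH = 0.9, 0.5
prior = 0.5  # P(H)

def C(outcome):
    """Weight of evidence of one toss: the log likelihood ratio."""
    if outcome == "H":
        return math.log(p_heads_H / p_heads_notH)
    return math.log((1 - p_heads_H) / (1 - p_heads_notH))

tosses = ["H", "H", "T", "H", "H"]

# Route 1: ordinary Bayes' theorem applied to the whole sequence.
like_H = math.prod(p_heads_H if t == "H" else 1 - p_heads_H for t in tosses)
like_notH = math.prod(p_heads_notH if t == "H" else 1 - p_heads_notH for t in tosses)
post = prior * like_H / (prior * like_H + (1 - prior) * like_notH)

# Route 2: Turing's additive update in phi-space.
phi_post = phi(prior) + sum(C(t) for t in tosses)

print(phi(post), phi_post)  # the two routes agree
```

Each toss just adds its fixed weight of evidence to the running φ-value, which is what makes the representation so convenient for sequential tests.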
Jim Hawthorne tells me that L. J. Savage used φ to prove a Bayesian convergence theorem, and it's not that hard to see from the above formulae how one might go about doing that.
Moreover, there is a rather interesting utility-related fact about φ. Suppose we're performing exactly similar independent tests for H versus ~H that provide only a very small incremental change in probabilities. Suppose each test has a fixed cost to perform. Suppose that in fact the hypothesis H is true, and we start with a φ-value of 0 (corresponding to a probability of 1/2). Then, assuming that the conditional probabilities are such that one can confirm H by these tests, the expected cost of getting to a φ-value of y by using such independent tests turns out to be, roughly speaking, proportional to y. Suppose, on the other hand, that you have a negative φ-value y and you want to know just how unfortunate that is, in light of the fact that H is actually true. You can quantify the badness of the negative φ-value by looking at how much you should expect it to cost to perform the experiments needed to get to the neutral φ-value of zero. It turns out that the cost is, again roughly speaking, proportional to |y|. In other words, φ quantifies experimental costs.
This in turn leads to the following intuition. If H is true, the epistemic utility of having a negative φ-value of y is going to be proportional to y, since the cost of moving from y to 0 is proportional to |y|. Then, assuming our epistemic utilities are proper, I have a theorem that shows that this forces (at least under some mild assumptions on the epistemic utility) a particular value for the epistemic utility for positive y.
Putting this in terms of credences rather than φ-values, it turns out that our measure of the epistemic utility of assigning credence r to a truth is proportional to:
- −log(1/r − 1) for r≤1/2
- 2 − 1/r for r≥1/2.
The plot to the right shows the above two-part function. (It may be of interest to note that the graph is concave; concavity is a property discussed in the scoring-rule literature.) Notice how very close to linear it is in the region between around 0.25 and 0.6.
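The two-part function is easy to evaluate directly. A minimal sketch in Python (assuming the natural log; the post does not fix the base, which only rescales the left branch):

```python
import math

def epistemic_utility(r):
    """The post's two-part score for assigning credence r to a truth.
    (Natural log assumed; the base of the log is not fixed by the post.)"""
    if r <= 0.5:
        return -math.log(1 / r - 1)  # log-odds branch: unbounded below
    return 2 - 1 / r                 # bounded branch: tends to 1 as r -> 1

# Being certain of the truth is only finitely good:
print(epistemic_utility(1.0))    # 1.0
# But confidence in the truth's negation is penalized without bound as r -> 0:
print(epistemic_utility(1e-3))   # about -6.9
print(epistemic_utility(1e-9))   # about -20.7
```

The two branches meet at r = 1/2, where both give 0, and the asymmetry in the title is visible at the endpoints: utility is capped at 1 as r approaches 1, but diverges to minus infinity as r approaches 0.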
Wednesday, February 8, 2012
Virtues and skills, optional and not
Being a coward is an unhappy fate, even if you know you will never need to face danger. Courage is worth having whether or not you ever use it. On the other hand, the ability to get to Waterloo Station seems to be a useless skill if you're never going to be in London.[note 1] Of course there may be some incidental value in being able to get to Waterloo Station (an eccentric employer whose formative experiences have been around Waterloo Station may require the ability of all her employees) but there could also be similar incidental value in being unable to get to Waterloo Station (maybe an eccentric employer who hates Waterloo Station uses a polygraph to rule out all employees who know how to get there). And it may also be that in gaining the skill of getting to Waterloo Station, one might gain some other useful skill, but that's incidental, too.
Now, maybe, there is some non-instrumental value in being able to get to Waterloo Station. I have a certain pull to say there is. But the following seems clear: there is nothing unfortunate about not being able to get to Waterloo Station, unless you need to get to Waterloo Station or something odd (like an eccentric employer story) is the case.
Are there any virtues that are like being able to get to Waterloo Station, so that it need not be unfortunate that one lacks them? Or is it a mark of a virtue that lacking it is unfortunate, no matter whether one needs to exercise the virtue or not? Let's call any virtues that it is not unfortunate to lack "optional virtues". Thus, virtues can be divided into the optional and non-optional. Plausibly, central general virtues like prudence, courage, patience, generosity and appropriate trust are non-optional. But there may be some optional virtues.
I don't know if there are any optional virtues. Maybe, though, there are some virtues that are tied to particular vocations that it is not unfortunate to lack if you don't have that vocation? I am not sure.
Interestingly, I am inclined to think there are also non-optional skills, skills which it is unfortunate to lack, whether or not you need to exercise them. For instance, it is unfortunate to lack interpersonal skills even if you are going to live on a desert island, for then you are lacking something centrally human. (It is, I think, unfortunate to lack legs even if you're going to spend the rest of your life in a coma. That's part of why it's wrong to steal a permanently comatose patient's legs.)
When I started writing this post, I thought that the question of what state is unfortunate to have might neatly delineate between virtues and skills. But I think it doesn't. It may be an orthogonal distinction.
Tuesday, February 7, 2012
Inferring an "is" from an "ought"
You tell me that you saw a beautiful sunset last night. I conclude that you saw a beautiful sunset last night. You are talking about Mother Teresa. I conclude that you won't say that she was a sneaky politician. You promise to bake a pie for the party tomorrow. I conclude that you will bake a pie for the party tomorrow or you will have a good reason for not doing so. I tell a graduate student to read a page of Kant for next class. I conclude that she will read a page of Kant for next class or will have a good reason for not doing so.
All of these are inferences of an "is" from an "ought". You ought to refrain from telling me you saw a beautiful sunset last night, unless of course you did see one. You ought not say that Mother Teresa was a sneaky politician, as she was not. You ought not fail to bake the promised pie, unless you have good reason. The student ought not fail to read the Kant, unless she has good reason.
All of these are of a piece. We have prima facie reason to conclude from the fact that something ought to be so that it is so. In particular, belief on testimony is a special case of the is-from-ought inference.
In a fallen world, all of these inferences are highly defeasible. But defeasible or not, they carry weight. And there is a virtue—both moral and intellectual—that is exercised in giving these inferences their due weight. We might call this virtue (natural) faith or appropriate trust. We also use the term "charity" to cover many of the cases of the exercise of this virtue: To interpret others' actions in such ways as make them not be counterinstances to the is-from-ought inference is to charitably interpret them, and we have defeasible reason to do so.
The inference may generalize outside the sphere of human behavior. A sheep ought to have four legs. Sally is a sheep. So (defeasibly) Sally has four legs.
I used to think that testimony was epistemically irreducible. I am now inclined to think it is reducible to the is-from-ought inference. Seeing it as of a piece with other is-from-ought inferences is helpful in handling testimonial-like evidence that is not quite testimony. For instance, hints are not testimony strictly speaking, but an inference from a hint is relevantly like an inference from testimony. We can say that an inference from a hint is a case of an is-from-ought inference, but a weaker one because the "ought" in the case of a hint is ceteris paribus weaker than the "ought" in the case of assertion. Likewise, inference from an endorsement of a person to the person's worthiness of the endorsement is like inference from testimony, but endorsement of a person is not the same as testimony (I can testify that a person is wonderful without endorsing the person, and I can endorse a person without any further testimony). Again, inference from endorsement is a special case of is-from-ought: one ought not endorse those who are not worthy of endorsement.
If is-from-ought is a good form of inference, the contraposition may-from-is will also be a good form of inference. If someone is doing something, we have reason to think she is permitted to do it. Of course, there are many, many defeaters.
It is an interesting question whether the is-from-ought inference is at all plausible apart from a view like theism or Plato's Platonism on which the world is ultimately explanatorily governed by values. There may be an argument for theism (or Plato's Platonism!) here.
An interesting question about evolutionary theory
Current evolutionary theory is normally taken to assume that there is no correlation between mutations and fitness. Now, take some appropriate measure of correlation (if there is no measure of correlation at all, it is hard to see what scientific meaning there is in saying that there is no correlation between mutation and fitness), and let E(c) be a theory just like evolutionary theory, but where the no-correlation assumption is replaced by the assumption that the correlation has degree c. Thus, orthodox evolutionary theory is E(0), while optimistically-skewed evolutionary theories (such as those we'd expect if Molinism is true and God exists, for instance) will be E(c) for c>0, and pessimistically-skewed ones will be E(c) for c<0.
It is clear that for c sufficiently close to 0, E(c) will fit the same empirical data as E(0) fits. Simplicity suggests that c=0, but the resurrection of the cosmological constant is a reminder that a constant can be very close to zero and yet positing a non-zero value may eventually be justified.
It is an interesting question as to what upper and lower bounds can be found for c, given a particular measure of correlation. It is also an interesting question what value of c gives the best fit to our observations. If the best-fit value of c is significantly positive or negative, that would lend credence to Intelligent Design (of an optimistic or pessimistic sort, respectively).
In toy situations, this is the sort of thing that is amenable to computer studies—maybe people have even done this? My intuition is that even small departures of c from 0 would produce very noticeable results. But of course it could be that c is very, very tiny, in the way the cosmological constant is.
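Here is one such toy computer study, sketched in Python. Everything in it is an illustrative assumption rather than the post's proposed measure of correlation: c simply skews the mean log-fitness effect of mutations, with c = 0 symmetric, c > 0 optimistically skewed, and c < 0 pessimistically skewed.

```python
import math
import random

def simulate(c, generations=200, pop=100, seed=0):
    """Crude toy model, NOT a serious measure of mutation-fitness
    correlation: c skews the mean log-fitness effect of mutations.
    Returns the mean population fitness after selection."""
    rng = random.Random(seed)
    fitness = [1.0] * pop
    for _ in range(generations):
        # Mutation: multiply each fitness by exp of a draw centred at c.
        fitness = [f * math.exp(rng.gauss(c, 0.05)) for f in fitness]
        # Selection: resample the population proportionally to fitness.
        fitness = rng.choices(fitness, weights=fitness, k=pop)
    return sum(fitness) / pop

for c in (-0.01, 0.0, 0.01):
    print(c, round(simulate(c), 3))
```

Even in this crude model, a departure of c from 0 as small as 0.01 per generation compounds into a large difference in mean fitness over 200 generations, which is at least consistent with the intuition that small departures would produce very noticeable results.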
Monday, February 6, 2012
Can presentists say someone will have infinitely many descendants?
In an earlier post, I showed that presentists can count infinities—i.e., that presentists can give a paraphrase for sentences like "There have ever been aleph-0 horses." I did this by an ersatzist construction. I then left it open whether some such construction could work in general to give presentist paraphrase.
The problem is basically the problem of transtemporal quantification. If haecceitism is true, then it's easy. The presentist just replaces transtemporal talk of individuals with talk of haecceities. Likewise, if the presentist accepts the impossibility of exact intrinsic duplicates—for then one can replace talk of individuals with talk of individual-types. The interesting question is whether this can be done if you're a presentist who is not a haecceitist and who thinks there can be exact intrinsic duplicates.
I have a sentence that a non-haecceitist presentist who accepts intrinsic duplicates may have difficulty giving finite truth conditions for:
1. Somebody will have infinitely many descendants.
I don't know if presentist truth conditions for (1) are possible.
If we allow infinite sentences, it can be done. But that's cheating. :-) Or is it?
Friday, February 3, 2012
How likely are the laws of nature?
This is one of those annoying loosey-goosey big-picture posts.
Consider the Newtonian law of gravitation: F=Gmm'/r^{2}. What should be the prior probability of that law?
Humean line of thought: Zero. After all, consider the continuum of laws of nature of the form F=Gmm'/r^{p}, where p is some real number. The case where p=2 is just one case out of a continuum. And of course the schema F=Gmm'/r^{p} is just one schema out of a continuum of schemata (consider, for instance, replacing the multiplication operations on the right hand side with a continuum of other operations). So the prior probability of F=Gmm'/r^{2} is simply zero.
Complexity line of thought: Moderate. After all, the elegant formula "F=Gmm'/r^{2}" is by far simpler than the vast majority of its alternatives (most laws of the form F=Gmm'/r^{p} have no finite expression, since in most cases the number p will be a real number with no finite mathematical description).
If the Humean line of thought is right, Bayesianism has no hope as a model of how scientific reasoning works. The Complexity line of thought allows for a Bayesian picture of scientific reasoning.
The Humean and Complexity lines of thought come with different pictures of how probabilities are to be assigned to situations. The Humean picture is based on the idea that you've got a bunch of fundamental physical entities, say particles, and then you generate situations by assigning them random fundamental physical properties. From that point of view, clearly the Newtonian law of gravitation has probability zero.
The Complexity line of thought presupposes a different picture. The picture is that probabilities are tied to linguistic expressions. That to generate the probability of a situation, you generate a random complete linguistic description of a world, and identify the probability of a situation with the probability that such a random description entails the situation.
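A minimal sketch of that picture in Python. The candidate laws and the measure of description length (just the symbol count of the formula, ignoring prefix-free coding and the like) are illustrative assumptions of mine, not anything from the post: each hypothesis gets unnormalized prior weight 2^(−length), and we then normalize.

```python
# Illustrative assumption: description length = symbol count of the
# formula; unnormalized prior weight 2**(-length), then normalize.
hypotheses = [
    "F=Gmm'/r^2",             # the elegant law
    "F=Gmm'/r^2.000013",      # a hypothetical near-miss
    "F=Gmm'/r^1.9417380044",  # a longer hypothetical alternative
]

weights = {h: 2.0 ** (-len(h)) for h in hypotheses}
total = sum(weights.values())
priors = {h: w / total for h, w in weights.items()}

for h, p in priors.items():
    print(h, round(p, 4))
# The shortest formula soaks up almost all of the prior mass.
```

The point of the sketch is just that on a description-based measure, the elegant law gets a substantial (indeed dominant) prior, instead of the Humean's zero.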
But what a strange thing the Complexity line of thought is! It is as if our picture of the world was that the really central thing about the world wasn't the physical stuff and its fundamental physical properties, but the descriptions. It is as if reality were fundamentally linguistic, or at least explained by something linguistic, as if the cosmos came from a being who said: "Let it be the case that s", and we then assigned probabilities to different values of "s".
In other words, the Complexity line of thought is at heart not naturalistic. But of the two lines of thought, it is the one that is needed for a Bayesian picture of scientific reasoning.
The Complexity line of thought has technical problems, too. Suppose I perform some experiment and the result can be any real number between 0 and 1. The Complexity line of thought will, I think, assign probability one to the hypothesis that the result of the experiment is a finitely describable real number. But surely other real numbers are possible. So what is to be done? It is, I think, to move from the Complexity line of thought to a theistic line of thought focused on value (a certain autonomy in nature can then be a value, and that could allow for randomness and hence for indescribable real number outcomes).
Thursday, February 2, 2012
Anti-Humean intuitions
I asked my six-year-old for an example of a bad reason for an action. Answer: "Someone wants to do it." A good reason: It's right and you know it (or something like that—I don't have the wording quite right).
I asked him if the fact that something is fun and harmless was a good reason to do it. He thought it wasn't, because it could still be a bad action. I asked him if the fact that something is fun and not bad was a good reason. He thought it could still be harmful to you. So I asked if the fact that something is fun, not bad and harmless was a good reason. He said it's neither good nor bad.
Leibnizian explanations
Say that p is a Leibnizian explanation of why q rather than r provided that p explains why q and not r and it is not possible for p to explain r.
I am inclined to think that a Leibnizian explanation of why q rather than r is a contrastive explanation of why q rather than r. But does the converse hold? Are contrastive explanations always Leibnizian?
The answer may depend on what we do about background assumptions in explanations—whether we count them as part of the explanation. I ask why you are wearing a watch on your right wrist rather than your left. You say:
1. I didn't want to be like everyone else.
But perhaps we should take the background assumptions to be tacitly a part of the explanans in (1). Thus, maybe the real explanans is:
2. I didn't want to be like everyone else, and everyone else was wearing watches on their left wrist.
3. I didn't want to be like everyone else, and everyone else was wearing watches on their left wrist, and I had no reason to frustrate my minor preferences.
So it looks like it's hard to defend the claim that contrastive explanations are Leibnizian. But perhaps we can defend the claim that contrastive explanations are weakly Leibnizian, where p is a weakly Leibnizian explanation of why q rather than r provided p explains q and not r, but p does not explain r in close worlds where it is true that r. I like the context-sensitivity of the "in close worlds". But if one doesn't like it, one could instead go for:
- p is a weakly Leibnizian explanation of why q rather than r if and only if p explains why q and not r, and were r to hold, it would be false that p explains r.
It is now fairly plausible that contrastive explanations are weakly Leibnizian. Is it plausible that weakly Leibnizian explanations are contrastive? I think so.
Wednesday, February 1, 2012
The Leibniz Principle for explanation
Wes Salmon thinks the following "Leibniz Principle" is incompatible with the explanation of indeterministic phenomena:
if, on one occasion, the fact that circumstances of type C obtained is taken as a correct explanation of the fact that an event of type E occurred, then on another occasion, the fact that circumstances of type C obtained cannot correctly explain the fact that an event of type E' (incompatible with E) occurred.

Salmon thinks that the Leibniz Principle is incompatible with explanations in indeterministic cases and hence false.
I don't know if the Leibniz Principle is false. But I do have an argument that it is compatible with explanations in indeterministic cases.
Consider an electron in the superposition (3/5)|up>+(4/5)|down>. The electron then undergoes a process whereby it is measured whether it is in an up or down state, thereby requiring collapse. It has probability 9/25 of collapsing into |up> and 16/25 of collapsing into |down>. In fact it collapses into |up>. This is pretty much the hardest kind of real-life case for explanations of indeterministic cases, since it is the less likely outcome that happens. But it is also one where an explanation can be given that satisfies the Leibniz Principle.
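The probabilities here are just the Born-rule arithmetic, which a couple of lines of Python make explicit:

```python
# Born rule: collapse probabilities are the squared moduli of the
# amplitudes 3/5 and 4/5.
amp_up, amp_down = 3 / 5, 4 / 5

p_up = abs(amp_up) ** 2      # 9/25
p_down = abs(amp_down) ** 2  # 16/25

assert abs(p_up + p_down - 1) < 1e-12  # the state is normalized
print(p_up, p_down)
```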
Consider now the following circumstances:
1. The electron is in a state such that the squared modulus of its probability amplitude for |up> is at least 9/25, and it was collapsed into |up> or |down>.
2. The electron is in a state such that the squared modulus of its probability amplitude for |up> is 9/25 and the squared modulus of its probability amplitude for |down> is 16/25, and it was collapsed into |up> or |down>.
Now, if we accepted (2) as an explanation of the electron's collapsing to |up>, we would also have to accept it in another case as an explanation of the electron's collapsing to |down> (an even better one, since that is a likelier result), contrary to the Leibniz Principle. This is the sort of reason for which Salmon rejects the Leibniz Principle.
But (1) has no such unfortunate result. For while (1) does explain why the electron collapsed into |up>, it cannot explain why an electron collapsed into |down>.
One could also weaken the Leibniz Principle and take it to be a constraint on contrastive explanation (cf. this paper). If so, then the above would show that we can satisfy at least one desideratum for contrastive explanation in indeterministic cases (for a different approach, see this paper).