Friday, June 28, 2013
Our backyard is full of life
We have a pair of Carolina anoles, in addition to geckos, squirrels and cardinals. Quite delightful!
Even more on infinitesimals and uniform probabilities
Let [0,1) be the set of numbers x such that 0≤x<1. Suppose X is a number uniformly chosen on [0,1) such that every number has equal probability. Let f be a function from [0,1) to [0,1) defined as follows: f(0.a₁a₂a₃...)=0.a₂a₃a₄..., where 0.a₁a₂a₃... is a decimal expansion. In other words, f(x) removes the first digit after the decimal point. Observe that f is a 10-to-1 function. E.g., f(0.01)=f(0.11)=f(0.21)=...=f(0.91)=0.1.
Now, let Y=f(X). Observe that the probability that Y is equal to any particular value is ten times the probability that X is equal to any particular value: P(Y=x)=10P(X=x). For suppose x=0.a₂a₃.... Then: P(Y=0.a₂a₃...)=P(X=0.0a₂a₃...)+P(X=0.1a₂a₃...)+...+P(X=0.9a₂a₃...)=10P(X=x), since all values of X have equal probability.
If the probability of every particular value of X is zero, as classical probability says, there is nothing odd here. We quite properly get P(Y=x)=10P(X=x) as both sides are zero.
But if the probabilities are non-zero (say, because they are infinitesimal), then we have something quite odd. We have two uniformly randomly chosen numbers, X and Y, in [0,1) such that for any x in [0,1) we have P(Y=x)>P(X=x). (The construction is basically that of Williamson.)
Thus, if infinitesimal probabilities of individual outcomes in continuous lotteries are allowed, then it is possible to have two single-winner lotteries such that for every ticket, that ticket is more likely to be the winner on the second lottery. That seems absurd.
Of course, the point is also true for discrete lotteries. Suppose W is chosen from among 0,1,2,3,... with every point having equal probability. Let g(x) be the integer part of x/10 (i.e., we divide x by ten and drop everything after the decimal point). Then g is a 10-to-1 function. Let Z=g(W). Then for every nonnegative integer n, the probability that Z equals n is 10 times the probability that W equals any particular integer.
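Both 10-to-1 maps are easy to play with in code. Here is a small sketch (the function names are mine), using exact rationals so the digit manipulation isn't muddied by floating point:

```python
from fractions import Fraction

def f(x):
    # Drop the first digit after the decimal point:
    # f(0.a1a2a3...) = 0.a2a3a4..., i.e. multiply by 10 and take mod 1.
    return (10 * x) % 1

def g(n):
    # Integer part of n/10: the discrete 10-to-1 analogue.
    return n // 10

# f is 10-to-1: 0.01, 0.11, ..., 0.91 all map to 0.1.
preimages = [Fraction(d, 10) + Fraction(1, 100) for d in range(10)]
print({f(x) for x in preimages})  # {Fraction(1, 10)}

# g is likewise 10-to-1: 70, 71, ..., 79 all map to 7.
print({g(70 + r) for r in range(10)})  # {7}
```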
Wednesday, June 26, 2013
Indifference is dead, again
I've noted that given the Axiom of Choice, the Hausdorff Paradox kills the principle of indifference. But we don't need the Axiom of Choice to kill off Indifference in this way! Hausdorff's proof of his paradox[note 1] also showed, without at this point in the proof using the Axiom of Choice, that:
- There are disjoint countable subsets A, B and C of the (surface of the) sphere and a subgroup G of rotations about the center such that: (a) U=A∪B∪C is invariant under G, and (b) A, B, C and B∪C are all equivalent under rotations from G.
Tuesday, June 25, 2013
Commands and requests
One way to help out a sceptic about something is by arguing that what she is sceptical about is way more widespread than she thinks. Of course, that's a risky strategy, since instead of her scepticism disappearing, she might just widen its scope. Here's one example of this risky strategy.
Authority sceptics are sceptical that another person's commands can generate obligations for us, except in certain unproblematic ways (a command from someone trustworthy might provide epistemic reason to think that the commanded action is independently obligatory; when one has reason to believe that others will follow the command, there might be reasons to coordinate one's activity with theirs; etc.). There is something seemingly magical about generating obligations for another just by commanding.
But isn't it equally magical that we can generate non-obligating reasons for another just by requesting, or obligating reasons for self just by promising? Yet it seems quite absurd to be a request or promise sceptic.
Scoring rules and outcomes of betting behavior
The literature has two ways of measuring the fit between one's credence in a proposition and reality. The "epistemic way" uses a scoring rule to measure the distance between one's credence and the truth (if the proposition is true, then a credence of 0.8 is closer to truth than a credence of 0.6). The "pragmatic way" looks at how well one is going to do if one bets in accordance with one's credence.
A standard condition imposed on scoring rules is propriety. A proper scoring rule is one where you don't expect to improve your score by shifting your credence without evidence.
I think the two ways come to the same thing, at least in the case of a single proposition. Any appropriate (yeah, some more precision is needed) betting scenario gives rise to a proper scoring rule, where your score for a credence is minus the expected utility on the assumption that you bet according to your credence in the scenario. And, conversely, any proper scoring rule can be generated in this way from an appropriate betting scenario (or at least a limit of them—this is where the details get a bit sketchy).
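The propriety condition is easy to illustrate numerically with the Brier score (squared distance from the truth), a standard proper rule: if your credence is p, then by your own lights the expected score is best when you report p itself. A quick sketch (the grid and the value p=0.7 are my illustrative choices):

```python
def expected_brier(p, r):
    # Expected Brier score when the proposition is true with probability p
    # and the reported credence is r: p*(1-r)^2 + (1-p)*r^2.
    return p * (1 - r) ** 2 + (1 - p) * r ** 2

p = 0.7  # your actual credence (an arbitrary illustrative value)
grid = [i / 1000 for i in range(1001)]
best = min(grid, key=lambda r: expected_brier(p, r))
print(best)  # 0.7 -- no shift away from your credence improves the expectation
```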
Sunday, June 23, 2013
Cannonball and regress
From my first year of graduate school, I've been pushing a cannonball argument against Hume's idea that a regress of causes provides a complete explanation. The argument isn't very complex, but it has some complexity (it talks of complete states and all that), and it has occurred to me that it can be simplified.
At noon the cannonball is at rest and precisely then a cannon is fired. At every time after noon, the cannonball is moving. (Maybe the whole thing takes place in space.) We now have this dialogue:
- You: Why is the cannonball moving at one minute after noon?
- Me: Because it's moving at half a minute after noon and there is inertia.
- You: But why is the cannonball moving at half a minute after noon?
- Me: Because it's moving at a quarter of a minute after noon and there is inertia.
- You: But why is the cannonball moving at a quarter of a minute after noon?
- Me: Because it's moving at an eighth of a minute after noon and there is inertia. Don't you see the pattern?
- You: I do see the pattern, but why is it moving at any of these times: a minute after noon, half a minute after noon, a quarter of a minute after noon, an eighth of a minute after noon and so on? Why is the cannonball moving at all at any time after noon?
- Me: Don't you see, I've explained each item in the chain, and so I've explained the chain!
Now suppose that there was no such time as noon and no cannon. Would the infinite chain then explain itself? No! For the chain is no more explanatory if there is no such time as noon and no cannon. Taking away the real explanation does not turn the chain into an explanation.
Thursday, June 20, 2013
Deflation of truth and doxastic evolutionary teleology
Our beliefs are states of a faculty that aims to represent the truth by means of them. This is a part of what makes these mental states be beliefs (as opposed to, say, desires). Now on evolutionary theories of teleology, this will have to be explained in something like this way: There is a faculty that we have that (a) in fact tends to hold true beliefs, and (b) that it tended to hold true beliefs conferred a survival advantage on our ancestors that led to us having it. This is what makes the faculty count as aimed at truth.
But if one has a deflationary view of truth, one doesn't get to say this. On a deflationary theory of truth, truth is wholly characterized by something like instances of the Tarski schema, such as:
- "Snow is white" is true if and only if snow is white.
So the deflationist cannot state clause (b) by appeal to a general property of truth; the best she can do is replace it with an open-ended disjunction:
- (b') a survival advantage was conferred on our ancestors by the fact that the faculty tended to hold a belief provided that: (i) it was the belief that snow is white, and snow is white, or (ii) it was the belief that snow is not white, and snow is not white, or (iii) it was the belief that tigers are dangerous, and tigers are dangerous, or (iv) it was the belief that tigers are not dangerous, and tigers are not dangerous, or (v) it was the belief that there is life on other planets, and there is life on other planets, or (vi) it was the belief that there is no life on other planets, and there is no life on other planets, or ....
The anti-deflationist does better. For she sees truth as a genuine property and can say (b) as is, if she takes truth to be a fundamental property, or can replace (b) by some natural analysis. But the deflationist does not see truth as a genuine property and hence cannot use (b), and thus cannot give an evolutionary account of the truth-directedness of our faculties.
So what? Well, I think it is undeniable that our doxastic faculties are truth-directed. It is, I think, particularly crucial for naturalists to be able to say this, since it will be needed for an account of intentionality. But a naturalist (in the contemporary, non-Aristotelian sense) can only give an evolutionary account of teleology. Thus, a naturalist cannot be a deflationist about truth.
But neither can a naturalist be an anti-deflationist about truth. For if a naturalist were an anti-deflationist about truth, she would have to hold that truth is to be analyzed in terms of the sorts of properties science speaks of. But science speaks only of first-order properties, and truth is not a first-order property (here's one quick argument based on an idea by Leon Porter: if it were, then the liar paradox could be formulated in entirely first-order scientific terms, yielding a contradiction in the language of science).
So naturalism is in trouble.
Wednesday, June 19, 2013
More details on Banach-Tarski and Axiom of Choice
Earlier I claimed that even without the Axiom of Choice, the proof of the Banach-Tarski Paradox yields paradoxical results, so that the Axiom of Choice isn't the source of paradoxicality.
Here is something more precise. Without the Axiom of Choice, one can prove the following.
Almost-BT. Suppose B is a punctured ball, i.e., a solid ball without a center point, and v is a translation sufficiently large that vB does not intersect B. Then there are rigid motions (combinations of rotations and translations) α and β such that B can be partitioned into an infinite collection U of countable subsets with the property that each member A of the partition U can be further partitioned into four subsets A₁, A₂, A₃ and A₄, such that (a) A₁ and αA₂ are a partition of A and (b) vA₃ and βA₄ are a partition of vA. (This follows from the method of proof of Theorem 4.5 in the Wagon book on Banach-Tarski.)
In other words, one can partition B into a bunch of sets, each of which can then be divided into four pieces and reassembled into two copies of itself oriented the same way. Moreover, all the reassembly can be done using the same rigid motions. When we add the Axiom of Choice to the mix, we can basically take Bᵢ to be the union of "the" sets Aᵢ ("the" is in quotation marks because the sets Aᵢ aren't unique given A; if they were, we wouldn't need Choice), and then B=B₁∪B₂∪B₃∪B₄=B₁∪αB₂ and vB=vB₃∪βB₄, and we have Banach-Tarski for punctured balls. (It's a bit of extra work to get Banach-Tarski for non-punctured balls, but that extra work doesn't need Choice.) But even without the Axiom of Choice, Almost-BT seems pretty counterintuitive.
Monday, June 17, 2013
Blowguns and their darts
Saturday, June 15, 2013
Spoof arguments against the Axiom of Choice
As a first year graduate student, I wrote this pseudonymous spoof, inspired by the Sokal affair. It's rather immature in places, but enjoy!
Friday, June 14, 2013
The Banach-Tarski Paradox and the Axiom of Choice
The Banach-Tarski theorem (BTT) says that, given the Axiom of Choice, a solid ball can be decomposed into a finite number of pieces that can be rearranged to form two balls of equal size. That's weird, and is taken by some to be an argument against the Axiom of Choice.
I don't think we should take it as such an argument. Sure, BTT is paradoxical. But when one looks at the proof, one notes that the proof makes use of paradoxical results that do not depend on the Axiom of Choice. For instance, a lemma in standard proofs of BTT is the surprising fact that you can take any circle that's missing a countable number of points, decompose that circle into two disjoint (messy!) pieces, and reassemble the pieces, without overlap, to get a complete circle. That lemma is about as weird as BTT, but it doesn't use the Axiom of Choice at all.[note 1] Moreover, the proof uses paradoxical decompositions of various countable sets, and these are, well, paradoxical, but do not involve the Axiom of Choice. Using the Axiom of Choice lets you put all the paradoxicality together into a neat package, but when I think about the proof of the result, I just don't see Choice as the source of paradoxicality. In fact, once one sees all the other ingredients of the proof of BTT, the Axiom of Choice step seems quite intuitive.
Another way to put the point is this: Once one reflects enough on all the pieces of the proof of BTT that do not use Choice and accepts them, BTT no longer seems very surprising. When you cut things up into strangely scattered pieces, it's not that surprising that you can put them back together in various ways.
Thursday, June 13, 2013
Popper functions and null sets
Let's go back to the problem that I keep on thinking about: How to distinguish possibilities that are classically of null probability. For instance, given a uniform choice of a point on some nice set (say, a ball) in Euclidean space, we want to say something like P({x,y})>P({x}), when x and y are distinct: it's more likely that one would hit one of two points than that one would hit a particular point. A series of my blog posts (and at least one article) showed that infinitesimals aren't the way. What about conditional probabilities?
For various reasons, instead of taking unconditional probabilities to be fundamental and defining conditional probabilities in terms of them, one may want to take conditional probabilities as fundamental. The standard method is to use Popper functions (I'll assume the linked axioms below). One might hope to use Popper functions to do things like make sense of the difference between the probability of two points in a continuous case (say, where a point is uniformly chosen in some nice subset of a Euclidean space) and the probability of a single point. For instance, one might hope that P({x,y}|{x,y})>P({x}|{x,y}) whenever x and y are distinct.
This won't work, however. Instead of working with propositions, I will work with sets—the definitions of Popper functions neatly adapt. Let Ω be a solid three-dimensional ball. Assume that P(A|B) is defined for all A and B in some algebra F of subsets of Ω.
Say that a set A in F is P-trivial provided that P(C|A)=1 for all C (including for the empty set). The empty set is trivial, of course. In order for us to have any hope of saying things like P({x,y}|{x,y})>P({x}|{x,y}), we better have sets with two points be non-trivial. Now, it's not hard to show that a finite union of trivial sets is trivial and that the subsets of a trivial set are trivial, so in order for all the finite sets to be non-trivial, it's necessary and sufficient that singletons be non-trivial.
Moreover, we want rotational symmetry. Say that F and P are rotationally symmetric provided that for any rotation r around the origin and A and B in F, rA and rB are in F, and P(rA|rB)=P(A|B).
Theorem. If P is rotationally symmetric and F includes all countable subsets of Ω, then for any sphere of positive radius around the origin lying in Ω there is at least one P-trivial countably infinite set on the surface of that sphere. Thus, all finite sets not containing the origin are trivial.
If we are to be extending something like classical probabilities, we do want countable subsets of Ω to be in F. The triviality of finite sets follows: some singleton on every sphere of non-zero radius about the origin is trivial (since any subset of a trivial set is trivial); by rotational invariance, if one singleton on such a sphere is trivial, they all are; and finite unions of trivial sets are trivial. So all we need to prove is the existence of that trivial countably infinite set.
The proof is easily based on standard ideas from the proof of the Banach-Tarski Paradox.
Proof. Let SO(3) be the rotation group around the origin. Famously, there is a subgroup G that is isomorphic to the free group F₂ on two generators. Choose a point ω in Ω such that ρ(ω)=ω only if ρ is the identity rotation. (This is a counting argument: G is a countable group, and each non-identity member of G has two fixed points on any given sphere around the center, so for any fixed sphere there will be only countably many fixed points of non-identity members of G on it, and hence there will be a point on the sphere that isn't a fixed point of any non-identity member.) Let H={ρ(ω):ρ∈G}. Then H is a countable subset of Ω.
For a reductio ad absurdum, suppose H is non-trivial. Then P(−|H) is a finitely additive probability measure on H. Moreover, H=ρH for any ρ∈G, so by rotational invariance P(ρK|H)=P(ρK|ρH)=P(K|H) for any ρ∈G, and so P(−|H) is a finitely additive G-invariant measure on H. Using the bijection between G and H given by f(ρ)=ρ(ω) (we use the choice of ω to see that f is one-to-one), we can then get a finitely additive G-left-invariant measure on G. But G is isomorphic to F₂ and hence is not amenable and hence has no such invariant measure. (One could also neatly demonstrate a paradoxical decomposition here.) That's a contradiction, so H must be trivial. QED
Even though this argument uses ideas from the proof of Banach-Tarski, and famously the latter uses the Axiom of Choice, this argument does not use the Axiom of Choice.
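The non-amenability at work here comes from the paradoxical decomposition of the free group: writing W(x) for the reduced words beginning with x, we have F₂ = W(a) ⊔ a·W(a⁻¹). One can verify this combinatorially on a finite truncation of the group; here is a sketch (encoding a⁻¹ as "A" and b⁻¹ as "B", with my choice of window size):

```python
from itertools import product

LETTERS = "aAbB"  # A = a^{-1}, B = b^{-1}
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

def reduce_word(word):
    # Cancel adjacent inverse pairs until the word is freely reduced.
    out = []
    for c in word:
        if out and out[-1] == INV[c]:
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def words_up_to(n):
    # All reduced words of length <= n; "" is the identity.
    ws = {""}
    for k in range(1, n + 1):
        for tup in product(LETTERS, repeat=k):
            w = "".join(tup)
            if reduce_word(w) == w:
                ws.add(w)
    return ws

N = 6
window = words_up_to(N)      # a finite window into F2
inner = words_up_to(N - 1)   # words safely away from the window's edge

def W(x):
    return {w for w in window if w.startswith(x)}

# Paradoxical partition F2 = W(a) ⊔ a·W(a^{-1}), checked inside the window.
shifted = {reduce_word("a" + w) for w in W("A")}
assert inner <= W("a") | shifted   # the two pieces cover everything
assert not (W("a") & shifted)      # and they are disjoint
print("verified for all reduced words of length <=", N - 1)
```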
One can perhaps get out of this by having a more onerous requirement on the sets A and B that are on either side of the bar in "P(A|B)" than that they fit in a single algebra F. We want any countable set to be an acceptable A, but perhaps we don't want to allow every countable set as an acceptable B. I don't know what natural requirement could be put here.
Wednesday, June 12, 2013
Yet another account of omnipotence
The following account of omnipotence runs into the McEar objection:
1. x is omnipotent iff x can do anything whose doing is consistent with the nature of x.
The Pearce-Pruss account of omnipotence escapes this. But so does this minor twist on (1):
2. x is omnipotent iff x can do anything whose doing is consistent with the nature of a perfect being.
Perhaps, though, there is a circularity problem. For a perfect being has all perfections. And one of the perfections is omnipotence. However, I do not know that this is fatal. Compare:
3. a fully self-knowledgeable person is one who knows all her mental attributes.
Tuesday, June 11, 2013
A funny uniform distribution?
Let X and Y be independent random variables uniformly distributed over [0,1]. Let Z=max(X²,Y²). Then it's easy to check that Z is also uniformly distributed over [0,1].
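The check: Z≤z just in case both X²≤z and Y²≤z, so by independence P(Z≤z)=P(X≤√z)²=z, which is the uniform CDF. A quick Monte Carlo sketch (sample size and seed are my choices):

```python
import random

random.seed(0)
n = 200_000
# Z <= z iff both X^2 <= z and Y^2 <= z, so P(Z <= z) = sqrt(z)**2 = z:
# Z should look uniform on [0,1].
samples = [max(random.random() ** 2, random.random() ** 2) for _ in range(n)]
for z in (0.1, 0.5, 0.9):
    empirical = sum(s <= z for s in samples) / n
    print(z, round(empirical, 3))  # empirical CDF tracks z itself
```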
But now suppose we think that uniform random variables have equal infinitesimal probabilities of hitting every point. Thus, P(X=a)=P(Y=a)=α for every a, where α is some infinitesimal. What, then, is P(Z=a)? Well, Z=a if and only if one of three mutually exclusive possibilities occurs:
- X<√a and Y=√a
- X=√a and Y<√a
- X=Y=√a.
Adding up the three probabilities gives P(Z=a)=√a·α+√a·α+α², which is approximately 2√a·α. In other words, Z is a uniformly distributed random variable by standard probabilistic criteria, but the probability of Z hitting different points is different: P(Z=a) is basically an infinitesimal multiple of √a.
What is happening here is that if one attempts to attach infinitesimal probabilities to the individual outcomes of bona fide classical probabilities, the infinitesimal individual outcome probabilities float free from the distribution. You can have the same individual outcome probabilities and different distributions or, as in this post, different (nonuniform) individual outcome probabilities and the same (uniform!) distribution.
Thursday, June 6, 2013
Uniform distributions
Consider a random variable X with the flat density p(x)=1 on [0,1], and a random variable Y with the rising density p(y)=2y, graphed as the red line. Looking at the graph, it is tempting to say things like this: X is uniformly distributed and has equal probability of taking any value between 0 and 1, while for Y values close to 0 are much less likely than values close to 1. We might even look at the graph and say things like: P(X=0.1)=P(X=0.2) while P(Y=0.2)>P(Y=0.1).
Of course, with these continuous distributions, classical probability theory assigns probability zero to every value: P(X=a)=P(Y=a)=0 for all a. But this seems wrong, and so we may want to bring in infinitesimals to remedy this, assigning to P(Y=0.2) an infinitesimal twice as big as the one we assign to P(Y=0.1), while P(X=0.2)=P(X=0.1).
Or we might attempt to express the pointwise non-uniformity of Y by using conditional probability P(Y=0.2|Y=0.1 or Y=0.2)=2/3 and P(Y=0.1|Y=0.1 or Y=0.2)=1/3, while P(X=0.2|X=0.1 or X=0.2)=1/2=P(X=0.1|X=0.1 or X=0.2).
In other words, it is tempting to say: X is pointwise uniform while Y is not.
Such pointwise thinking is problematic, however. For I could have generated Y by taking our uniformly distributed random variable X and setting Y=X^(1/2). (It's an easy exercise to see that if X is uniform then the probability density of X^(1/2) is given by p(x)=2x.) Suppose that I am right in what I said about the uniformity of pointwise and conditional probabilities for X. Then P(Y=0.1)=P(X=0.01)=P(X=0.04)=P(Y=0.2). And P(Y=0.2|Y=0.1 or Y=0.2)=P(X=0.04|X=0.01 or X=0.04)=1/2=P(X=0.01|X=0.01 or X=0.04)=P(Y=0.1|Y=0.1 or Y=0.2), since Y=0.1 if and only if X=0.01 and Y=0.2 if and only if X=0.04.
So in fact, Y could have the nonuniform distribution of the red line in the graph and yet be just as pointwise uniform as X.
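The easy exercise above can be checked by simulation: if Y=X^(1/2) with X uniform, then P(Y≤y)=P(X≤y²)=y², which is exactly the CDF of the density p(y)=2y. A sketch (seed and sample size are my choices):

```python
import random

random.seed(1)
n = 200_000
ys = [random.random() ** 0.5 for _ in range(n)]  # Y = X^(1/2), X uniform
# P(Y <= y) = P(X <= y^2) = y^2, the CDF of the density p(y) = 2y.
for y in (0.25, 0.5, 0.75):
    empirical = sum(v <= y for v in ys) / n
    print(y, round(empirical, 3), y ** 2)  # empirical CDF vs y^2
```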
Lesson 1: It is a mistake to describe a uniform distribution on a continuous set as one "where every outcome is equally likely". For even if one finds a way of making nontrivial sense of this, by infinitesimals or conditional probabilities say (and I think similar arguments will work for any other plausible characterization), a nonuniform distribution can satisfy this constraint just as happily.
Lesson 2: One cannot characterize continuous distributions by facts about pointwise probabilities. It is tempting to characterize the uniform distribution by P(X=a)=P(X=b) (infinitesimal version, but similarly for conditional probabilities) and the nonuniform one by P(Y=a)=(a/b)P(Y=b). But in fact both could have the same pointwise properties. I find this lesson deeply puzzling. Intuitively, it seems that chances of aggregate outcomes (like the chance that X is between 0.1 and 0.2) should come out of pointwise chances. But no.
The converse characterization would also be problematic: pointwise facts can't be derived from the distribution facts. For imagine a random variable Z which is such that Z=X unless X=1/2, and Z=1/4 if X=1/2 (cf. this paper). This variable has the same distribution as X, but it has obviously different pointwise probability facts.
Wednesday, June 5, 2013
Simple and full induction
A followup on the previous post.
Simple induction: F₁ is G, F₂ is G, ..., Fₙ is G, so probably: Fₙ₊₁ is G.
Full induction: F₁ is G, F₂ is G, ..., Fₙ is G, so probably: Fₖ is G for all k.
Intuitively, simple induction always seems to be a better inference than full induction. Indeed, in cases where there are rare exceptions that didn't occur among the Fₖ for k≤n, simple induction typically gives the right answer while full induction gives the wrong one. Moreover, the conclusion of the full induction is logically stronger (modulo the existence of Fₙ₊₁), so it seems clear that simple induction is the better inference.
But no! Let's say that I, Jon and Trent (and a number of others!) entered a raffle held for a charity where there is only one prize. That Jon and Trent didn't win is some weak evidence that nobody won the raffle, say because the charity raffle was crooked. So we do have some evidence for the full inductive conclusion. But that Jon and Trent didn't win is also some evidence that I won. This is true even if we admit the possibility that nobody won, as long as we insist that it is certain that there is only one prize, and hence at most one person won. For P(Jon and Trent didn't win | I won) = 1, but P(Jon and Trent didn't win | I didn't win) < 1, and so the fact that they didn't win supports the hypothesis that I won.
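The numbers can be made exact. Suppose, purely for illustration (these figures are my assumptions), ten entrants and a 1% prior that the raffle is crooked and nobody wins; otherwise the winner is uniform among the ten. Then learning that two particular others lost raises both the probability that nobody won and the probability that I won:

```python
from fractions import Fraction

N = 10                 # assumed number of entrants
c = Fraction(1, 100)   # assumed prior that the raffle is crooked (nobody wins)

p_win_prior = (1 - c) / N                # P(I won)
p_E = c + (1 - c) * Fraction(N - 2, N)   # P(Jon and Trent both lost)
p_win_post = p_win_prior / p_E           # if I won, they certainly lost
p_crooked_post = c / p_E                 # if it's crooked, they certainly lost

print(p_win_post > p_win_prior)   # True: their losing confirms "I won"
print(p_crooked_post > c)         # True: and also confirms "nobody won"
```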
On Bayesian grounds, if the existence of all the Fₖ is in the background, the evidence that F₁ is G, F₂ is G, ..., Fₙ is G will never be evidence against the claim that all the Fₖ are G, and in contingent regular cases will be evidence for the universal claim. But it could well be evidence against the claim that Fₙ₊₁ is G.
Lightbulbs and induction
My colleague Trent Dougherty brought to me the very interesting question of how we inductively confirm that the sun will rise tomorrow given background knowledge that the sun one day won't rise.
This makes me think of an oddity. If I know that a lightbulb worked yesterday, that gives me reason to think it will work today. But if I know that it worked for the hundred preceding days, that gives me less reason to think it will work today, because it also gives me evidence that a burnout is due.
So given appropriate background knowledge—in this case, that lightbulbs burn out—more inductive cases do not necessarily raise the probability of the outcome, but can even lower it.
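Here is a toy model of the burnout effect, under the assumption (mine, purely for illustration) that the bulb's lifetime in days is uniform on {1,...,365}. The conditional probability of surviving one more day falls as the track record grows:

```python
from fractions import Fraction

M = 365  # assumed maximum lifetime; lifetime D uniform on {1, ..., M} days

def p_next(t):
    # The bulb works on day k iff its lifetime D >= k, so
    # P(works on day t+1 | worked on days 1..t) = P(D >= t+1 | D >= t).
    return Fraction(M - t, M - t + 1)

# Each extra day of success lowers the probability of surviving the next day.
print(p_next(1))    # 364/365
print(p_next(100))  # 265/266
print(p_next(1) > p_next(100))  # True: a long track record makes burnout due
```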
Burnout cases aren't the only ones like this. If I bought a lottery ticket, the more people I learn did not win, the more likely it is that I won.
Nothing greatly exciting here, except that we need to be careful to avoid flatfooted statements of how induction works.
A probabilistic argument against panexperientialism
Let panexperientialism be the view that all fundamental particles have fundamental experiential or protoexperiential properties.
There is good reason to doubt this. Fundamental particles differ as to whether they have fundamental properties like mass, charge and spin. Thus, we should expect them to differ as to whether they have experiential or protoexperiential properties, and hence we should not expect all fundamental particles to have such properties.
A variant argument. For any subset S of types of fundamental particles, there is S-experientialism, which holds that all and only the fundamental particles from S have the fundamental experiential or protoexperiential properties. Panexperientialism then is S-experientialism where S contains all fundamental particle types. But there are many values of S for which S-experientialism explains our consciousness as well (or as badly) as panexperientialism—for instance, S might be all fermions, or all leptons. So what reason do we have to think that of all these, panexperientialism is true? Well, we might think it's the simplest version. Yes, but the simplicity argument is defeated by the inductive considerations of the previous paragraph.
Tuesday, June 4, 2013
Two thoughts about the surprise exam paradox
The standard version of the surprise exam paradox is that the teacher announces that next week there will be a surprise exam: an exam whose occurrence will surprise the students. The students are smart and reason: it can't be on Friday, since if Monday through Thursday pass without an exam, they'd know that it would be on Friday and wouldn't be surprised. But likewise it can't be on Thursday, since they know it can't be on Friday, and so if Monday through Wednesday pass without an exam, they'll know by Thursday that it's on Thursday, and won't be surprised. Repeating this reasoning, the exam has to be on Monday, but then that won't be a surprise either. So a surprise exam is impossible, which is paradoxical. But then, despite all the reasoning, the exam happens on, say, Tuesday and the students really are surprised.
The above version is over a span of 5 days. Generalize by supposing a span of n days for the surprise exam. I really don't know the surprise exam literature, so there may be nothing new here.
First thought: The n=1 case is already a bit paradoxical. The teacher announces: "We will have a surprise exam on Monday." Students are puzzled. Since they know that it will be on Monday, how can it be a surprise to them when it happens on Monday? Should they conclude that the teacher has just told them something obviously false? But if so, then they don't know that there will be an exam on Monday. And then when Monday rolls around, they are quite open to being surprised by the occurrence of the exam. So maybe what the teacher told them isn't obviously false. So charity suggests that they believe the teacher—there really will be a surprise exam on Monday. But if so, then once again they won't be surprised, and so the teacher is telling them something false, and so they should dismiss it. And so on: They keep on thinking this through, and Monday rolls around, and they fail the exam because instead of studying for it, they were thinking about whether there would be an exam! So the n=1 case is paradoxical. It is interesting to ask: Does the n=1 case contain all of the paradoxicality of the n=5 case?
Second thought. Suppose we have some probability cutoffs that define assertibility and surprise: something is assertible provided it has probability at least α, and it's surprising provided it had probability less than β. Maybe the values are α=0.9 and β=0.1. I'll do the examples with those. Suppose now that the teacher genuinely will set up an exam at a random date, with some probability distribution on the days 1,2,...,n. First, suppose the distribution is known to the students to be uniform. If the exam is on day n, there is no surprise. But that isn't enough to undercut the assertibility of the teacher's statement. For the probability that the exam would end up on day n is only 1/n, and as long as 1/n≤1−α, the teacher might not be taking an undue risk. But we do get a constraint here: n≥1/(1−α). With our sample numbers, this means n≥1/(1−0.9)=10. So we don't have assertibility in the original version where n=5.
Let's keep ploughing through and see what other constraints there are. On the 1/β (rounded down) last days, the probability each day that there would be an exam on that day, if we get to that day examless, is greater than or equal to β, and so there is no surprise if the exam is then. So for assertibility, we better have approximately (I am ignoring rounding) (1/β)/n≤1−α, or n≥1/(β(1−α)). And if we have that approximately, then the teacher has assertibility in saying "There will be a surprise exam over the next n class days." With our sample numbers, the constraint is n≥100. So on our probabilistic understanding of surprise and assertibility (and rounding issues don't come up since in our case 1/β=10 exactly), you can honestly announce a surprise exam if there are at least 100 days that the exam might be on, and then just choose a day uniformly randomly, and even tell the students that's how you're choosing.
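The uniform-case computation can be done exactly. Reaching day k without an exam, the conditional probability of an exam that day is 1/(n−k+1), so the exam is unsurprising exactly on the last ⌊1/β⌋ days, and assertibility requires the chance of landing on one of those days to be at most 1−α. With α=0.9 and β=0.1 the smallest workable n is indeed 100:

```python
from fractions import Fraction

alpha = Fraction(9, 10)  # assertibility threshold
beta = Fraction(1, 10)   # surprise threshold

def assertible(n):
    # Exam day uniform on {1,...,n}. If the students reach day k examless,
    # the conditional probability of an exam that day is 1/(n-k+1);
    # the exam is unsurprising iff that probability is >= beta.
    unsurprising = sum(1 for k in range(1, n + 1)
                       if Fraction(1, n - k + 1) >= beta)
    # "There will be a surprise exam" is assertible iff P(surprise) >= alpha,
    # i.e. P(exam on an unsurprising day) <= 1 - alpha.
    return Fraction(unsurprising, n) <= 1 - alpha

smallest = next(n for n in range(1, 500) if assertible(n))
print(smallest)  # 100
```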
It would be fun to see if other distributions than the uniform one might not allow one to bring down the n≥100 constraint.
Charity filter
- Do not attribute to malice, selfishness or incompetence what you can attribute to a reasonable but mistaken judgment.
- Do not attribute to malice or selfishness what you can attribute to incompetence.
- Do not attribute to malice what you can attribute to selfishness.
Sunday, June 2, 2013
Salmon's argument against S4
Start with:
1. If x originates from chunk α of matter and β is a non-overlapping chunk of matter, then x couldn't have originated from β.
2. If x originates from chunk α of matter and α' is a chunk of matter that almost completely overlaps α, then x could have originated from α'.
Salmon combines (1) and (2) with S4 to generate a contradiction: iterating (2) across a long enough chain of slightly differing chunks, S4 lets the nested possibilities collapse, so that x could have originated from a chunk disjoint from α, contradicting (1); so, Salmon concludes, S4 must go. But this is a mistaken line of thought. For (2) is not significantly more plausible than:
3. If x could have originated from chunk α of matter and α' is a chunk of matter that almost completely overlaps α, then x could have originated from α'.
But given (3), Salmon's argument can be run without S4—all we need is T (what is actually true is possible). Iterating uses of (3) and modus ponens, we conclude that (1) is false. In other words, we cannot hold both (1) and (3). And since (2) has little plausibility apart from (3), we shouldn't hold both (1) and (2). Thus, Salmon's argument is not an argument against S4, but an argument against the conjunction of (1) and (2). And I say we should reject (2).
Saturday, June 1, 2013
Material beings
What is a material being? Suggestions:
- x is material if and only if x is in space.
- x is material if and only if x occupies a proper part of space.
- x is material if and only if possibly x occupies a proper part of space.
- x is material if and only if necessarily it is abnormal for x to occupy only a proper part of space.