
Monday, September 30, 2024

Four philosophy / adjacent jobs at Baylor

We have four jobs in philosophy or closely adjacent areas at Baylor, with most of the deadlines coming in mid-October:

Friday, September 10, 2021

Comparing the epistemic relevance of measurements

Suppose P is a regular probability on (the powerset of) a finite space Ω representing my credences. A measurement M is a partition of Ω into disjoint events E1, ..., En, with the result of the experiment being one of these events. In a given context, my primary interest is some subalgebra F of the powerset of Ω.

Note that a measurement can be epistemically relevant to my primary interest without any of the events in the measurement being something I have a primary interest in. If I am interested in figuring out whether taller people smile more, my primary interest will be some algebra F generated by a number of hypotheses about the degree to which height and smiliness are correlated in the population. Then the measurement of Alice’s height and smiliness will not be a part of my primary interest, but it will be epistemically relevant to my primary interest.

Now, some measurements will be more relevant with respect to my primary interest than others. Measuring Alice’s height and smiliness will intuitively be more relevant to my primary interest in the height/smile correlation than measuring Alice’s mass and eye color.

The point of this post is to provide a relevance-based partial ordering on possible measurements. In fact, I will offer three, but I believe they are equivalent.

First, we have a pragmatic ordering. A measurement M1 is at least as pragmatically relevant to F as a measurement M2, relative to our current (prior) credence assignment P, just in case for every possible F-based wager W, the P-expected utility of wagering on W after a Bayesian update on the result of M1 is at least as big as that of wagering on W after updating on the result of M2; and M1 is more relevant if for some wager W the expected utility of wagering after updating on the result of M1 is strictly greater.

Second, we have an accuracy ordering. A measurement M1 is at least as accuracy relevant to F as a measurement M2 just in case for every proper scoring rule s on F, the expected score of updating on the result of M1 is better than or equal to the expected score of updating on the result of M2, and M1 is more relevant when for some scoring rule the expected score is better in the case of M1.

Third, we have a geometric ordering. Let HP,F(M) be the horizon of a measurement M, namely the set of all possible posterior credence assignments on F obtained by starting with P, conditionalizing on one of the events that M partitions Ω into, and restricting to F. Then we say that M1 is at least as (more) geometrically relevant to F as M2 just in case the convex hull of the horizon of M1 contains (strictly contains) the convex hull of the horizon of M2.
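To make the geometric ordering concrete, here is a minimal Python sketch with a made-up toy space. Representing F by a list of its atoms, identifying a credence on F with the vector of probabilities of those atoms, and using a feasibility linear program (scipy.optimize.linprog) for the hull test are just one way to set this up; all names below are illustrative, and the prior is assumed regular so every event has positive probability.

```python
import numpy as np
from scipy.optimize import linprog

def horizon(prior, measurement, atoms_of_F):
    """Posterior credences on the atoms of F after conditionalizing on each
    possible result E of the measurement (a partition of the finite space)."""
    points = []
    for E in measurement:
        pE = sum(prior[w] for w in E)          # positive, since the prior is regular
        points.append([sum(prior[w] for w in A & E) / pE for A in atoms_of_F])
    return np.array(points)

def in_hull(point, points):
    """Is `point` in the convex hull of the rows of `points`?  Feasibility LP:
    find lam >= 0 with sum(lam) = 1 and points.T @ lam = point."""
    k = len(points)
    A_eq = np.vstack([points.T, np.ones(k)])
    b_eq = np.append(point, 1.0)
    res = linprog(np.zeros(k), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * k)
    return res.success

def geometrically_at_least_as_relevant(prior, M1, M2, atoms_of_F):
    """hull(horizon(M1)) contains hull(horizon(M2)) iff every point of
    horizon(M2) lies in the convex hull of horizon(M1)."""
    H1 = horizon(prior, M1, atoms_of_F)
    return all(in_hull(q, H1) for q in horizon(prior, M2, atoms_of_F))

# Toy space: worlds encode a hypothesis (h1 or h2) and an observation (o1 or o2).
Omega = {"h1o1", "h1o2", "h2o1", "h2o2"}
prior = {"h1o1": 0.4, "h1o2": 0.1, "h2o1": 0.1, "h2o2": 0.4}
atoms_of_F = [{"h1o1", "h1o2"}, {"h2o1", "h2o2"}]   # primary interest: which hypothesis holds
M1 = [{"h1o1", "h2o1"}, {"h1o2", "h2o2"}]           # measure the observation
M2 = [Omega]                                        # the trivial, uninformative measurement

print(geometrically_at_least_as_relevant(prior, M1, M2, atoms_of_F))   # True
print(geometrically_at_least_as_relevant(prior, M2, M1, atoms_of_F))   # False
```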

I have not written out the details, but I am pretty sure that all three orderings are equivalent, which suggests that I am on to something with these concepts.

An interesting special case is when one’s interest is binary, an algebra generated by a single hypothesis H, and the measurements are binary, i.e., partitions into two sets. In that case, I think, a measurement M1 is at least as (more) relevant as a measurement M2 if and only if the interval whose endpoints are the Bayes factors of the events in M1 contains (strictly contains) the interval whose endpoints are the Bayes factors of the events in M2.
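The binary special case can be put in the same Python setting. The sketch below reuses Omega, prior and M1 from the previous block, and assumes a regular prior and that each result of a binary measurement overlaps both H and its complement, so the Bayes factors are defined and finite.

```python
def bayes_factor(prior, event, H):
    """Bayes factor P(E|H)/P(E|~H) of result E for H versus its negation."""
    pH = sum(prior[w] for w in H)
    p_E_given_H = sum(prior[w] for w in event & H) / pH
    p_E_given_notH = sum(prior[w] for w in event - H) / (1 - pH)
    return p_E_given_H / p_E_given_notH

def bf_interval(prior, measurement, H):
    """Interval whose endpoints are the Bayes factors of the two possible results."""
    bfs = sorted(bayes_factor(prior, E, H) for E in measurement)
    return bfs[0], bfs[-1]

def at_least_as_relevant_binary(prior, M1, M2, H):
    lo1, hi1 = bf_interval(prior, M1, H)
    lo2, hi2 = bf_interval(prior, M2, H)
    return lo1 <= lo2 and hi2 <= hi1   # M1's interval contains M2's

H = {"h1o1", "h1o2"}                         # the single hypothesis of interest
M3 = [{"h1o1", "h2o2"}, {"h1o2", "h2o1"}]    # a binary measurement that cannot discriminate H
print(bf_interval(prior, M1, H), bf_interval(prior, M3, H))   # (0.25, 4.0) and (1.0, 1.0)
print(at_least_as_relevant_binary(prior, M1, M3, H))          # True
```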

Saturday, November 18, 2017

Bayesianism and anomaly

One part of the problem of anomaly is this. If a well-established scientific theory seems to predict something contrary to what we observe, we tend to stick to the theory, with barely a change in credence, while being dubious of the auxiliary hypotheses. What, if anything, justifies this procedure?

Here’s my setup. We have a well-established scientific theory T and (conjoined) auxiliary hypotheses A, and T together with A uncontroversially entails the denial of some piece of observational evidence E which we uncontroversially have (“the anomaly”). The auxiliary hypotheses will typically include claims about the experimental setup, the calibration of equipment, the lack of further causal influences, mathematical claims about the derivation of not-E from T and the above, and maybe some final catch-all thesis like the material conditional that if T and all the other auxiliary hypotheses obtain, then E does not obtain.

For simplicity I will suppose that A and T are independent, though of course that simplifying assumption is rarely true.

I suspect that often this happens: T is much better confirmed than A. For T tends to be a unified theoretical body that has been confirmed as a whole by a multitude of different kinds of observations, while A is a conjunction of a large number of claims that have been individually confirmed. Suppose, say, that P(T)=0.999 while P(A)=0.9, where all my probabilities are implicitly conditional on some background K. Given the observation E, and the fact that T and A entail its negation, we now know that the conjunction of T and A is false. But we don’t know where the falsehood lies. Here’s a quick and intuitive thought. There is a region of probability space where the conjunction of T and A is false. That area is divided into three sub-regions:

  1. T is true and A is false

  2. T is false and A is true

  3. both are false.

The initial probabilities of the three regions are, respectively, 0.0999, 0.0009 and 0.0001. We know we are in one of these three regions, and that’s all we now know. Most likely we are in the first one, and the probability that we are in that one, given that we are in one of the three, is around 0.99. So our credence in T has gone down from three nines (0.999) to two nines (0.99), but it’s still high, so we get to hold on to T.
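A quick sanity check of that informal calculation, as a small Python sketch (the numbers are just the illustrative values above):

```python
P_T, P_A = 0.999, 0.9            # illustrative priors, with T and A independent
r1 = P_T * (1 - P_A)             # region 1: T true, A false
r2 = (1 - P_T) * P_A             # region 2: T false, A true
r3 = (1 - P_T) * (1 - P_A)       # region 3: both false
print(r1, r2, r3)                # ~0.0999, 0.0009, 0.0001
print(r1 / (r1 + r2 + r3))       # ~0.99: credence in T after learning that T & A is false
```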

Still, this answer isn’t optimistic. A move from 0.999 to 0.99 is actually an enormous decrease in confidence.

But there is a much more optimistic thought. Note that the above wasn’t a real Bayesian calculation, just a rough informal intuition. The tip-off is that I said nothing about the conditional probabilities of E on the relevant hypotheses, i.e., the “likelihoods”.

Now the setup ensures:

  1. P(E|A ∧ T)=0.

What can we say about the other relevant likelihoods? Well, if some auxiliary hypothesis is false, then E is up for grabs. So, conservatively:

  2. P(E|∼A ∧ T)=0.5
  3. P(E|∼A ∧ ∼T)=0.5

But here is something that I think is really, really interesting. I think that in typical cases where T is a well-established scientific theory and A ∧ T entails the negation of E, the probability P(E|A ∧ ∼T) is still low.

The reason is that all the evidence that we have gathered for T even better confirms the hypothesis that T holds to a high degree of approximation in most cases. Thus, even if T is false, the typical predictions of T, assuming they have conservative error bounds, are likely to still be true. Newtonian physics is false, but even conditionally on its being false we take individual predictions of Newtonian physics to have a high probability. Thus, conservatively:

  4. P(E|A ∧ ∼T)=0.1

Very well, let’s put all our assumptions together, including the ones about A and T being independent and the values of P(A) and P(T). Here’s what we get:

  5. P(E|T)=P(E|A ∧ T)P(A|T)+P(E|∼A ∧ T)P(∼A|T)=0.05
  6. P(E|∼T)=P(E|A ∧ ∼T)P(A|∼T)+P(E|∼A ∧ ∼T)P(∼A|∼T)=0.14.

Plugging this into Bayes’ theorem, we get P(T|E)=0.997. So our credence has crept down, but only a little: from 0.999 to 0.997. This is much more optimistic (and conservative) than the big move from 0.999 to 0.99 that the intuitive calculation predicted.
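The same numbers can be checked with a short Python sketch, using the illustrative priors and likelihoods assumed above:

```python
P_T, P_A = 0.999, 0.9                    # priors, with T and A assumed independent
P_E_given = {                            # the likelihoods assumed above
    ("A", "T"): 0.0, ("notA", "T"): 0.5,
    ("A", "notT"): 0.1, ("notA", "notT"): 0.5,
}
P_E_T = P_E_given[("A", "T")] * P_A + P_E_given[("notA", "T")] * (1 - P_A)            # 0.05
P_E_notT = P_E_given[("A", "notT")] * P_A + P_E_given[("notA", "notT")] * (1 - P_A)   # 0.14
posterior_T = P_E_T * P_T / (P_E_T * P_T + P_E_notT * (1 - P_T))
print(round(posterior_T, 3))             # 0.997
```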

So, if I am right, at least one of the reasons why anomalies don’t do much damage to scientific theories is that when the scientific theory T is well-confirmed, the anomaly is not only surprising on the theory, but it is surprising on the denial of the theory—because the background includes the data that makes T “well-confirmed” and would make E surprising even if we knew that T was false.

Note that this argument works less well if the anomalous case is significantly different from the cases that went into the confirmation of T. In such a case, there might be much less reason to think E won’t occur if T is false. And that means that anomalies are more powerful as evidence against a theory the more distant they are from the situations we explored before when we were confirming T. This, I think, matches our intuitions: we would put almost no weight on someone finding an anomaly in the course of an undergraduate physics lab—not just because an undergraduate student is likely the one doing it (it could be the professor testing the equipment, though), but because this is ground that has been well gone over, where we expect the theory’s predictions to hold even if the theory is false. But if new observations of the center of our galaxy don’t fit our theory, that is much more compelling—in a regime so different from many of our previous observations, we might well expect that things would be different if our theory were false.

And this helps with the second half of the problem of anomaly: How do we keep from holding on to T too long in the light of contrary evidence? How do we allow anomalies their rightful place in undermining theories? The answer is: To undermine a theory effectively, we need anomalies that occur in situations significantly different from those that have already been explored.

Note that this post weakens, but does not destroy, the central arguments of this paper.

Saturday, November 4, 2017

Neo-Aristotelian Perspectives on Contemporary Science

The collection Neo-Aristotelian Perspectives on Contemporary Science (eds: Simpson, Koons and Teh) is now available. It's divided into a physical sciences and a life sciences part.

My piece on the Traveling Forms interpretation is in the physical sciences part (interestingly, though, that interpretation is more about us than about physics).

Thursday, February 23, 2017

Flatness of priors

I. J. Good is said to have said that we can know someone’s priors by their posteriors. Suppose that Alice has the following disposition with respect to the measurement of an unknown quantity X: For some finite bound ϵ and finite interval [a, b], whenever Alice would learn that:

  1. the value of X + F is x, where x is in [a, b],
  2. F is a symmetric error, independent of the actual value of X and certain (according to her priors) to be no greater than ϵ in absolute value, and
  3. the interval [x − ϵ, x + ϵ] is a subset of [a, b],

then Alice’s posterior epistemically expected value for X would be x.

Call this The Disposition. Many people seem to have The Disposition for some values of ϵ, a and b. For instance, suppose that you’re like Cavendish and you’re measuring the gravitational constant G. Then, within some reasonable range of values, if your measurement gives you G plus some independent symmetric error F, your epistemically expected value for G will probably be equal to the number you measure.

Fact. If Alice is a Bayesian agent who has The Disposition and X is measurable with respect to her priors, then Alice’s priors for X conditional on X being in [a, b] are uniform over [a, b].

So, by Good’s maxim about priors, such a Cavendish-like figure has a uniform distribution for the gravitational constant within some reasonable interval (there is a lower bound of zero for G, and an upper bound provided by the fact that even before the experiment we know that we don’t experience strong gravitational attraction to other people).
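Here is a minimal numerical illustration of the Fact; the discrete grid, the particular error distribution and the variable names are just one toy setup. With a uniform prior on the grid, a symmetric bounded error and a measured value far enough from the edges, the posterior expected value of X equals the measured value; with a non-uniform prior it generally does not, so an agent with The Disposition must have had the uniform prior.

```python
import numpy as np

def posterior_mean(prior, x_values, f_support, f_probs, observed):
    """E[X | X + F = observed] for X with the given prior over x_values and an
    independent error F supported on f_support with probabilities f_probs."""
    weights = np.zeros(len(x_values))
    for i, x in enumerate(x_values):
        err = observed - x
        if err in f_support:
            weights[i] = prior[i] * f_probs[f_support.index(err)]
    weights /= weights.sum()
    return float(np.dot(weights, x_values))

x_values = list(range(0, 101))           # X lives on the integers 0..100 (the interval [a, b])
f_support = [-2, -1, 0, 1, 2]            # |F| <= epsilon = 2
f_probs = [0.1, 0.2, 0.4, 0.2, 0.1]      # symmetric error distribution

uniform = np.ones(len(x_values)) / len(x_values)
skewed = np.linspace(1.0, 3.0, len(x_values))
skewed /= skewed.sum()

print(posterior_mean(uniform, x_values, f_support, f_probs, observed=50))  # 50.0 (up to rounding)
print(posterior_mean(skewed, x_values, f_support, f_probs, observed=50))   # slightly above 50
```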

Tuesday, January 17, 2017

Vertical uniformity of nature

One often talks of the “uniformity of nature” in the context of the problem of induction: the striking and prima facie puzzling fact that the laws of nature that hold in our local contexts also hold in non-local contexts.

That’s a “horizontal” uniformity of nature. But there is also a very interesting “vertical” uniformity of nature. This is a uniformity between the types of arrangements that occur at different levels like the microphysical, the chemical, the biological, the social, the geophysical and the astronomical. The uniformity is different from the horizontal one in that, as far as we know, there are no precisely formulable laws of nature that hold uniformly between levels. But there is still a less well-defined uniformity whose sign is that the same human methods of empirical investigation (“the scientific method”) work in all of them. Of course, these methods are modified: elegance plays a greater role in fundamental physics than in sociology, say. But they have something in common, if only that they are mere refinements of ordinary human common sense.

How much commonality is there? Maybe it’s like the commonality between novels. Novels come in different languages, cultural contexts and genres. They differ widely. But nonetheless to varying degrees we all have a capacity to get something out of all of them. And we can explain this vague commonality quite simply: all novels (that we know of) are produced by animals of the same species, participating to a significant degree in an interconnected culture.

Monotheism can provide an even more tightly-knit unity of cause that explains the vertical uniformity of nature—one entity caused all the levels. Polytheism can provide a looser unity of cause, much more like in the case of novels—perhaps different gods had different levels in nature delegated to them. Monotheism can do something similar, if need be, by positing angels to whom tasks are delegated, but I don’t know if there is a need. We know that one artist or author can produce a vast range of types of productions (think of a Michelangelo or an Asimov).

In any case, the kind of vague uniformity we get in the vertical dimension seems to fit well with agential explanations. It seems to me that a design argument for a metaphysical hypothesis like monotheism, polytheism or optimalism based on the vertical uniformity might well have some advantages over the more standard argument from the uniformity of the laws of nature. Or perhaps the two combined will provide the best argument.

Friday, September 9, 2016

Are the laws of nature first order?

I think it's pretty common to think that the laws of nature should be formulated in a first-order language. But I think there is some reason to think this might not be true. We want to formulate the laws of nature briefly and elegantly. In a previous post, I suggested that this might require a sequence of stipulations. For instance, we might define momentum as the product of mass and velocity, and then use the concept of momentum over and over in our laws. If each time we referred to the momentum of an object a we had to put something like "m(a)⋅dx(a)/dt", our formulation of the laws wouldn't have the brevity and elegance we want. It is much better to stipulate the momentum p(a) of a as "m(a)⋅dx(a)/dt" once, and then just use p(a) each time.

But our best-developed logical formalism for capturing such stipulations is the λ-calculus. So our fundamental laws might be something like:

  • ∀p(p = λa(m(a)⋅dx(a)/dt) → (L1(p) & ... & Ln(p)))
instead of being a rather longer expression which contains a conjunction of n things in each of which "m(a)⋅dx(a)/dt" occurs at least once. But the λ-calculus is a second-order language. In fact, it seems very plausible that encoding stipulation is always going to use a second-order tool, since stipulation basically specifies a rewrite rule for a subsequent sentence.
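For what it's worth, here is a small Python sketch of the stipulation idea; it is only a loose analogy, since Python's lambda is not the second-order logical machinery at issue, and the names m, v, L1 and L2 are illustrative placeholders. The momentum function is bound once and used in each law, and β-reducing the application is exactly the rewrite that inlines the long expression wherever p occurs.

```python
# Toy placeholders for mass and velocity; only the shape of the expression matters.
def m(a): return a["mass"]
def v(a): return a["velocity"]           # stands in for dx(a)/dt

# Two toy "laws", each stated in terms of a momentum function p.
def L1(p): return p({"mass": 2.0, "velocity": 3.0}) == 6.0
def L2(p): return p({"mass": 1.0, "velocity": 0.0}) == 0.0

# With the stipulation: bind p once, use it in every conjunct.
stipulated = (lambda p: L1(p) and L2(p))(lambda a: m(a) * v(a))

# Without it: the long expression is repeated in each conjunct.
unstipulated = L1(lambda a: m(a) * v(a)) and L2(lambda a: m(a) * v(a))

print(stipulated, unstipulated)          # True True
```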

So what if the language of science is second order? Well, two things happen. First, Leon Porter's argument against naturalism fails, since it assumes the language of science to be first-order. Second, I have the intuition that this line of thought supports theism to some degree, though I can't quite justify it. I think the idea is that second-order stuff is akin to metalinguistic stuff, and we would expect the origins of this sort of stuff to be an agent.

Friday, November 13, 2015

Bayesian divergence

Suppose I am considering two different hypotheses, and I am sure exactly one of them is true. On H, the coin I toss is chancy, with different tosses being independent, and has a chance 1/2 of landing heads and a chance 1/2 of landing tails. On N, the way the coin falls is completely brute and unexplained--it's "fundamental chaos", in the sense of my ACPA talk. Now suppose you observe n instances of the coin being tossed, about half of which are heads and half of which are tails. Intuitively, that should support H. But if N is an option, that is, if the prior probability of N is non-zero, we actually get Bayesian divergence as n increases: we get further and further from confirmation of H.

Here's why. Let E be my total evidence--the full sequence of n observed tosses. By Bayes' Theorem we should have:

P(H|E) = P(E|H)P(H)/[P(E|H)P(H) + P(E|N)P(N)].
But there is a problem: P(E|N) is undefined. What shall we do about this? Well, since it is completely undefined, we should take it to be an interval of probabilities, the full interval [0,1]. The posterior probability P(H|E) will then also be an interval, ranging between:
P(E|H)P(H)/[P(E|H)P(H) + (0)·P(N)] = 1
and
P(E|H)P(H)/[P(E|H)P(H) + (1)·P(N)] ≤ P(E|H)/P(N) = 2⁻ⁿ/P(N).
(Remember that E is a sequence of n fair and independent tosses if H is true.) Thus, as the number of observations increases, the posterior probability for the "sensible" hypothesis H gets to be an interval [a,1], where a is very small. But something whose probability is almost the whole interval [0,1] is not rationally confirmed. So the more data we have, the further we are from confirmation.
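A short Python sketch of the divergence, which just evaluates the two endpoint formulas above as n grows, for an assumed prior P(N):

```python
def posterior_interval(n, P_N):
    """Endpoints of P(H|E) as P(E|N) ranges over [0, 1], for n observed tosses."""
    P_H = 1 - P_N
    lik_H = 0.5 ** n                                   # P(E|H) for n fair, independent tosses
    lower = lik_H * P_H / (lik_H * P_H + 1.0 * P_N)    # taking P(E|N) = 1
    upper = 1.0                                        # taking P(E|N) = 0
    return lower, upper

for n in (10, 50, 100):
    print(n, posterior_interval(n, P_N=0.01))
# The lower endpoint collapses toward 0 as n grows, so the interval for P(H|E)
# comes to cover almost all of [0, 1].
```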

This means that no-explanation hypotheses like N are pernicious to Bayesians: if they are not ruled out as having zero or infinitesimal probability from the outset, they undercut science in a way that is worse and worse the more data we get.

Fortunately, we have the Principle of Sufficient Reason which can rule out hypotheses like N.

Friday, March 13, 2015

Two motivations for Bohmian quantum mechanics

There are two different motivations for the Bohm interpretation of quantum mechanics. One comes from a philosophical affinity for determinism. The other comes from the desire to have the Schroedinger equation, with all its mathematical elegance, hold without the exceptions that collapse leads to, while avoiding the multiverse excesses of Everettian quantum mechanics.

Now, deterministic hidden variable theories like Bohm's match up with the stochastic predictions of indeterministic quantum mechanics by supposing that the initial state is chosen according to a "special" probability distribution. But there are serious philosophical problems with justifying the assumption of that special probability distribution.

Interestingly, if all one is after is avoiding the Scylla of collapse and the Charybdis of an Everettian multiverse, one can find indeterministic hidden variable theories that avoid the initial distribution problem that deterministic hidden variable theories suffer from. A dualist example is what I call the "Traveling Minds" interpretation. But one should also be able to cook up physicalist hidden-variable theories that mimic something like the dynamics of the Traveling Minds interpretation.

It may seem silly to have indeterministic hidden variable theories, given the history of positing hidden variables in order to regain determinism. But I see no good reason to try to regain determinism, while I do see good reason to try to keep unitarity, i.e., to avoid collapse. And there is good reason to avoid the Everett multiverse, because of the serious probabilistic problems facing it. And so there is actually good reason to consider indeterministic theories. (I understand that there already is a Bohmian field theory with stochastic particle creation/destruction.)

Wednesday, November 12, 2014

A Metaphysicality Index

A grad student was thinking that Platonism isn't dominant in philosophy, so I looked at the PhilPapers survey and found that a plurality of the target faculty (39%) accepts or leans towards Platonism. Then I got to looking at how this works across various specializations: General Philosophy of Science, Philosophy of Mind, Normative Ethics, Metaethics, Philosophy of Religion, Epistemology, Metaphysics, Logic / Philosophy of Logic and Philosophy of Mathematics. And I looked at four other views: libertarianism (about free will), theism, non-physicalism about mind, and the A-theory of time.

Loosely, the five views I looked at are "metaphysical" in nature and their denials tend to be deflationary of metaphysics. I will say that someone is "metaphysical" to the extent that she answers the five questions in the positive (either outright or leaning). We can then compute a Metaphysicality Index for an individual, as the percentage of "metaphysical" answers, and then an average Metaphysicality Index per discipline.

Here's what I found. (The spreadsheet is here.) I sorted my selected M&E specialities from least to most metaphysical in the graph.


On each of the five questions, the Philosophers of Science were the least metaphysical. This is a remarkably un-metaphysical approach.

With the exception of Platonism, the Philosophers of Religion were the most metaphysical. (A lot of Philosophers of Religion are theists and may worry about the fit between theism and Platonism, and may think that God's ideas can do the work that Platonism is meant to do.)

Unsurprisingly, the Metaphysicians came out pretty metaphysical, though not as metaphysical as the Philosophers of Religion. (And this isn't just because the Philosophers of Religion believe in God by a large majority: even if one drops theism from the Metaphysicality Index, the Philosophers of Religion are at the top.)

Interestingly, the Philosophers of Mathematics were almost as metaphysical as the Metaphysicians (average Metaphysicality Index 29.2 vs. 29.8). They were far more Platonic than anybody else. I wonder if Platonism is to Philosophy of Mathematics like Theism is to Philosophy of Religion. The Philosophers of Mathematics were also more theistic and more non-physicalistic than any group other than the Philosophers of Religion.

It's looking to me like the two fields where Platonism is most prevalent are Logic (and Philosophy of Logic) and Philosophy of Mathematics. This is interesting and significant. It suggests that on the whole people do not think one can do mathematics and logic in a nominalist setting.

For the record, here's where I stand: Platonism: no; Libertarianism: yes; God: yes; Non-physicalism: yes; A-theory: no. So my Metaphysicality Index is 60%.

Saturday, September 15, 2012

Deflation of the foundations of probability

I don't really want to commit to the following, but it has some attraction.

Question 1: What is probability?

Answer: Any assignment of values that satisfies the Kolmogorov axioms or an appropriate analogue of them (say, a propositional one).

Question 2: Are probabilities to be interpreted along frequentist, propensity or epistemic/logical lines?

Answer: Frequency-based, propensity-based and epistemically-based assignments of weights are all probabilities when the assignments satisfy the axioms or an appropriate analogue of them. In particular, improved frequentist probabilities are genuine probabilities when they can be defined, but so are propensity-based objective probabilities if they satisfy the axioms, and likewise logical probabilities. Each of these may have a place in the world.

Question 3: But what about the big metaphysical and epistemological questions, say about the grounds of objective tendencies and epistemic probabilities?

Answer: Those questions are intact. But they are not questions about the interpretation of probability as such. They are questions about the grounds of objective propensity or about the grounds of epistemic assignments. Thus, the former question belongs to the philosophy of science and the metaphysics of causation and the latter to epistemology.

Question 4: But surely one of the interpretations of probability is fundamental.

Answer: Maybe, but do we need to think so? Take the axioms of group theory. There are many kinds of structures that satisfy these axioms. Why think one kind of structure satisfying the axioms of group theory is fundamental?

Question 5: Still, couldn't there be connections, such as that logical probabilities ultimately derive from propensities via some version of the Principal Principle, or the other way around?

Answer: Maybe. But even if so, that doesn't affect the deflationary theory. There are plenty more structures that satisfy the probability calculus that do not derive from propensities.

Question 6: But shouldn't we think there is a focal Aristotelian sense of probability from which the others derive?

Answer: Maybe, but unlikely given the wide variety of things that instantiate the axioms. Maybe instead of an Aristotelian pros hen analogy, all we have is structural resemblance.