Showing posts with label teleological argument. Show all posts

Wednesday, November 10, 2021

Online talk: A Norm-Based Design Argument

Thursday November 11, 2021, at 4 pm Eastern (3 pm Central), the Rutgers Center for Philosophy of Religion and the Princeton Project in Philosophy of Religion present a joint colloquium: Alex Pruss (Baylor), "A Norm-Based Design Argument".

The location will be https://rutgers.zoom.us/s/95159158918

Friday, October 4, 2019

A tension in some theistic Aristotelian thinkers

Here is a tension in the views of some theistic Aristotelian philosophers. On the one hand, we argue:

  1. The mathematical elegance and discoverability of the laws of physics is evidence for the existence of God

but we also think:

  2. There are higher-level (e.g., biological and psychological) laws that do not reduce to the laws of physics.

These higher-level laws, among other things, govern the emergence of higher-level structures from lower-level ones and the control that the higher-level structures exert over the lower-level ones.

The higher-level laws are largely unknown except in the broadest outline. They are thus not discoverable in the way the laws of physics are claimed to be, and since no serious proposals are yet available as to their exact formulation, we have no evidence as to their elegance. But as evidence for the existence of God, the elegance and discoverability of a mere proper subset of the laws is much less impressive. In other words, (1) is really impressive if all the laws reduce to the laws of physics, but otherwise rather less so. I’ve never seen this criticism made.

I think, however, there is a way for the Aristotelian to still run a design argument.

Either all the laws reduce to the laws of physics or not.

If they all reduce to the laws of physics, pace Aristotelianism, we have a great elegance and discoverability design argument.

Suppose now that they don’t. Then there is, presumably, a great deal of complex connection between structural levels that is logically contingent. It would be logically possible for minds to arise out of the kinds of arrangements of physical materials we have in stones, but then the minds wouldn’t be able to operate very effectively in the world, at least without massively overriding the physics. Instead, minds arise in brains. The higher-level laws rarely if ever override the lower-level ones. Having higher-level laws that fit so harmoniously with the lower-level laws is very surprising a priori. Indeed, the harmony is so great as to be epistemically suspicious, suspicious enough that the need for such a harmony makes one worry that the higher-level laws are a mere fiction. But if they are a mere fiction, then we are back at the first option, reduction, whereas here we are assuming that the higher-level laws are irreducible. So, on the assumption of irreducibility, we have a great design argument from the harmony of the higher-level laws with the lower-level ones.

Tuesday, January 17, 2017

Vertical uniformity of nature

One often talks of the “uniformity of nature” in the context of the problem of induction: the striking and prima facie puzzling fact that the laws of nature that hold in our local contexts also hold in non-local contexts.

That’s a “horizontal” uniformity of nature. But there is also a very interesting “vertical” uniformity of nature. This is a uniformity between the types of arrangements that occur at different levels like the microphysical, the chemical, the biological, the social, the geophysical and the astronomical. The uniformity is different from the horizontal one in that, as far as we know, there are no precisely formulable laws of nature that hold uniformly between levels. But there is still a less well defined uniformity whose sign is that the same human methods of empirical investigation (“the scientific method”) work in all of them. Of course, these methods are modified: elegance plays a greater role in fundamental physics than in sociology, say. But they have something in common, if only that they are mere refinements of ordinary human common sense.

How much commonality is there? Maybe it’s like the commonality between novels. Novels come in different languages, cultural contexts and genres. They differ widely. But nonetheless to varying degrees we all have a capacity to get something out of all of them. And we can explain this vague commonality quite simply: all novels (that we know of) are produced by animals of the same species, participating to a significant degree in an interconnected culture.

Monotheism can provide an even more tightly-knit unity of cause that explains the vertical uniformity of nature—one entity caused all the levels. Polytheism can provide a looser unity of cause, much more like in the case of novels—perhaps different gods had different levels in nature delegated to them. Monotheism can do something similar, if need be, by positing angels to whom tasks are delegated, but I don’t know if there is a need. We know that one artist or author can produce a vast range of types of productions (think of a Michelangelo or an Asimov).

In any case, the kind of vague uniformity we get in the vertical dimension seems to fit well with agential explanations. It seems to me that a design argument for a metaphysical hypothesis like monotheism, polytheism or optimalism based on the vertical uniformity might have some advantages over the more standard argument from the uniformity of the laws of nature. Or perhaps the two combined will provide the best argument.

Wednesday, February 11, 2015

The argument from partial theodicy

The following would be a superb teleological argument for the existence of God if only we had good reason to accept (1) without relying on theism:

  1. Every evil has a theodicy.
  2. If every evil has a theodicy, then probably God exists.
  3. So, probably God exists.
I can think of two (perhaps not ultimately different) ways of making (2) plausible. First, the best explanation of (1) would be that God exists. Second, that an evil has a theodicy means that it's the sort of thing that God would have a reason to permit if God existed. But it would be very odd if all evils had this hypothetical God-involving property without God existing. It would be a cosmic coincidence.

But as I said, (1) is the rub. However, what about this version:

  4. Most evils happening to humans have a theodicy.
  5. If most evils happening to humans have a theodicy, then probably God exists.
  6. So, probably, God exists.
And while we're at it, let's add:
  7. If God exists, all evils have a theodicy.
  8. So, probably, all evils have a theodicy.
Premise (5) is harder to justify than (2), but I think the reasoning behind (2) still contributes to the plausibility of (5). The best alternative to theism is a form of naturalism, and we just wouldn't expect most evils, or even most evils happening to people, to have a theodicy on naturalism, so our best explanation for why most such evils have a theodicy is that God exists.
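The probabilistic shape of this reasoning can be made vivid with a toy Bayes calculation. All the numbers below are purely illustrative assumptions of mine, not anything argued for in the post:

```python
# Toy Bayesian reading of the argument (all probabilities are made-up
# illustrations). Let E = "most evils happening to humans have a theodicy".
p_theism = 0.5               # illustrative prior for theism
p_E_given_theism = 0.95      # theism strongly predicts E
p_E_given_naturalism = 0.05  # on naturalism, E would be a cosmic coincidence

# Bayes' theorem: P(theism | E)
posterior = (p_theism * p_E_given_theism) / (
    p_theism * p_E_given_theism + (1 - p_theism) * p_E_given_naturalism
)
print(posterior)  # 0.95: on these assumptions, E strongly confirms theism
```

The point survives weaker numbers: as long as E is much more expected on theism than on naturalism, observing E raises the probability of theism.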

I want to say something about why I am restricting (4) and the antecedent of (5) to evils happening to humans. The reason is that we have much better epistemic access to evils happening to humans, and so we are better able to judge of both the magnitude of the evils and the theodicies and lack thereof.

And (4) is much easier to justify than (1). All we need is enough partial theodicies. Plausibly, for instance, many evils—perhaps it's already most evils—are moral evils that are sufficiently non-horrendous that a free will theodicy directly applies to them. Many evils have a good theodicy in terms of the exercise of virtue they enable. And when I reflect on the evils that have befallen me in my life, it's easy to see that by my sins I deserved punishment for them all, and would have deserved a lot more than I got. Granted, I've lived a charmed life, so the applicability of this theodicy will be limited. But between freedom, virtue and punishment, it is plausible that the majority of evils happening to people have been covered.

A somewhat different argumentative route is:

  9. Most evils happening to humans have a theodicy.
  10. The best explanation of (9) is that all evils have a theodicy.
  11. So, probably, all evils have a theodicy.
  12. If all evils have a theodicy, then probably God exists.
  13. So, at least somewhat probably, God exists.

Finally, there will be first-person versions that make use of a premise like:

  14. Every evil (or: most evils) that happened to me has a theodicy.

Sunday, March 25, 2012

How large a boost should the priors of simpler laws receive?

Simpler laws should have higher prior probabilities. Otherwise, the curve-fitting problem will kill scientific theorizing, since any set of data can be fitted with infinitely many curves. If, however, we give higher probabilities to simpler laws, then the simpler curves have a hope of winning out, as they should.
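Here is a minimal sketch of the curve-fitting point. The data below (a hypothetical example of mine) are fitted exactly by a whole family of quartics, so the data alone cannot choose among them; a toy prior that penalizes extra parameters does the choosing:

```python
# Data from a simple law y = 2x + 1 (hypothetical example).
xs = [0, 1, 2, 3]
ys = [2 * x + 1 for x in xs]

# For ANY coefficient c, this quartic passes through all four data points,
# since the added term vanishes at x = 0, 1, 2, 3. So infinitely many
# curves fit the data perfectly.
def curve(c, x):
    return 2 * x + 1 + c * x * (x - 1) * (x - 2) * (x - 3)

assert all(curve(c, x) == y for c in (0, 1, -3, 100) for x, y in zip(xs, ys))

# A simplicity prior breaks the tie. With a toy prior proportional to
# 2 ** -(number of free parameters), the 2-parameter line beats any
# 5-parameter quartic, even though both fit the data equally well.
prior_line = 2.0 ** -2
prior_quartic = 2.0 ** -5
assert prior_line > prior_quartic
```

The particular penalty is an arbitrary illustration; the structural point is just that with equal fit, posteriors are proportional to priors, so the prior must do the selecting.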

So simpler formulae need to get a boost? How much of a boost? Sometimes a really big one.

Here's a case. Let's grant that Newton was justified in accepting that the force of gravitation was F = Gmm'/r^2. But now consider the uncountably many force laws of the form F = Gmm'/r^a, where a is a real number. Now in order for Newton to come to be justified on Bayesian grounds in thinking that the right value is a = 2, he would have to have a non-zero prior probability for a = 2. For the only way you're going to get out of a zero prior probability for a hypothesis would be if you had evidence with zero prior probability. And it doesn't seem that Newton did.
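The Bayesian mechanics can be sketched with a toy update, using made-up priors concentrated on a few "neat" exponents and noiseless data generated by the true value a = 2 (everything here is my own illustration, not Newton's actual evidence):

```python
import math

# Hypothetical priors: only a few "neat" exponents get nonzero prior,
# with a simplicity boost for a = 2. (They sum to 1.)
priors = {1.0: 0.1, 1.5: 0.1, 2.0: 0.5, 2.5: 0.1, 3.0: 0.1, math.pi: 0.1}

# Simulated force measurements (in units with G m m' = 1) at several
# radii, generated by the true inverse-square law.
radii = [1.0, 2.0, 4.0, 8.0]
data = [r ** -2.0 for r in radii]

def likelihood(a, sigma=0.05):
    # Gaussian measurement error around the prediction r ** -a.
    return math.prod(
        math.exp(-((r ** -a - f) ** 2) / (2 * sigma ** 2))
        for r, f in zip(radii, data)
    )

unnorm = {a: p * likelihood(a) for a, p in priors.items()}
total = sum(unnorm.values())
posterior = {a: v / total for a, v in unnorm.items()}
print(max(posterior, key=posterior.get))  # a = 2 wins the update
```

Any exponent left out of the prior's support would get posterior zero no matter what the data said, which is the point of the next paragraphs.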

So Newton needed a positive prior for the hypothesis that a=2. But a finitely additive probability function can only assign a positive probability to countably many incompatible hypotheses. Thus, if Newton obeyed the probability axioms, he could only have positive priors for countably many values of a. Thus for the vast majority of the uncountably many possible positive values of a, Newton had to assign a zero probability.

Thus, Newton's prior for a=2 had to be infinitely greater than his prior for most of the other values of a. So the simplicity boost can be quite large.

Presumably, the way this is going to work is that Newton will have to have non-zero priors for all the "neat" values of a, like 2, 1, π, 1/13, etc. Maybe even for all the ones that can be described in finite terms. And then zero for all the "messy" values.

Moreover, Newton needs to assign a significantly larger prior to a = 2 than to the disjunction of all the uncountably many other values of a in the narrow interval between 2 − 10^−100 and 2 + 10^−100. For every value in that interval generates exactly the same experimental predictions within the measurement precision available to Newton. So, all the other "neat" values in that narrow interval will need to receive much smaller priors, so much smaller that when they're all summed up, the sum will still be significantly smaller than the prior for a = 2.
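A toy model shows that such an assignment can be coherent. Suppose, purely as an illustration of mine, that a value whose shortest decimal description has L characters gets prior mass 20^−L. Since there are at most 10^L strings of length L, the total mass converges; and anything within 10^−6 of 2, other than 2 itself, needs at least 8 characters to write down, so even all of it together gets less prior than a = 2 alone. (With the post's 10^−100 interval the gap is astronomically larger; 10^−6 just keeps the arithmetic visible.)

```python
# Toy prior over decimally describable values of a: a value whose shortest
# decimal string has length L gets prior mass 20 ** -L (hypothetical
# choice; any base > 10 makes the total mass converge, since there are at
# most 10 ** L strings of length L).
prior_a2 = 20.0 ** -1  # the string "2" has length 1

# Any value within 1e-6 of 2, other than 2 itself, needs at least 8
# characters ("2.000001" is the shortest such string). Upper-bound the
# TOTAL prior mass of all such rivals: each length-L class contributes
# at most 10**L * 20**-L = 0.5**L, for L = 8, 9, 10, ...
rival_mass_bound = sum(0.5 ** L for L in range(8, 400))

print(prior_a2, rival_mass_bound)  # 0.05 vs. about 0.0078
```

So the single "neat" hypothesis a = 2 outweighs the whole crowd of describable rivals in the interval, which is just what the post says Newton's priors must do.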

One interesting question here is what justifies such an assignment of priors. The theist at least can cite God's love of order, which makes "neater" laws much more likely.

Friday, July 9, 2010

Collimating a collimator

Collimating a Newtonian telescope basically means aligning the optical axis of the primary mirror with the optical axis of the eyepiece.  An easy way to do this is to use a collimator, e.g., a laser collimator.  The collimator is basically a tube that contains a laser that you put in place of the eyepiece.  You adjust the angle of the secondary mirror in the telescope so the beam hits the center of the primary mirror, and then you adjust the angle of the primary mirror so that the beam comes back on itself.  But it is crucial for this procedure with a laser collimator that the collimator be itself collimated, i.e., that the laser's axis be aligned with the tubing that the laser is in.

Now, if we were writing a philosophy paper, at this point it would be very tempting to say: "And a vicious infinite regress ensues."  But that would be too quick.  For a laser-collimator collimator is very simple: a block of wood with two pairs of nails, where each pair makes an approximate vee shape.  You then lay the laser collimator on the two vees, aim it at a fairly distant wall, and spin it.  Then you adjust the adjustment screws on the laser collimator until the beam doesn't move as you spin the collimator on its axis, at which point the laser is collimated to its housing.  Moreover, because of how the geometry works, the vees don't need to be very exactly parallel--all the work is done by spinning.  So, it seems, the regress is arrested: the laser-collimator collimator does not itself need collimation.
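The geometry of the spin test can be sketched numerically. In this idealized model (my own illustration, ignoring the laser's lateral offset within the tube), a laser tilted by angle theta from the housing axis traces a circle of radius D·tan(theta) on a wall at distance D as the housing spins; zeroing theta makes the spot stand still:

```python
import math

def spot(theta, phi, wall_distance):
    """Beam position on the wall when the housing is rotated by angle phi.

    theta is the laser's angular misalignment from the housing's axis
    (idealized: the lateral offset of the laser in the tube is ignored).
    """
    r = wall_distance * math.tan(theta)
    return (r * math.cos(phi), r * math.sin(phi))

D = 5.0                    # wall distance in meters (hypothetical)
theta = math.radians(0.1)  # small misalignment

# Spinning the housing sweeps the spot around a circle of radius D*tan(theta)...
spots = [spot(theta, phi, D) for phi in (0.0, 1.0, 2.0, 3.0, 4.0, 5.0)]
radius = max(math.hypot(x, y) for x, y in spots)

# ...and once the adjustment screws bring theta to 0, the spot stays put.
aligned = [spot(0.0, phi, D) for phi in (0.0, 1.0, 2.0)]
print(radius, aligned)
```

This is also why the vees need not be exactly parallel: the circle's center may drift, but a stationary spot under spinning still certifies theta = 0.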

Potential lesson: Perhaps sometimes we philosophers are too quick after one or two steps in a regress to say that the regress is vicious and infinite.  For sometimes after two steps, the regress may be stopped with a bit of cleverness.

Well, actually, that's not quite right.  For the double-vee collimator depends on the laser collimator's housing being a cylinder.  And one might argue that manufacturing an exact cylinder requires a procedure like collimation.  For suppose that we manufacture the cylinder by taking a block of aluminum, spinning it in a lathe and applying a lathe tool.  But to get an exact cylinder, the lathe tool needs to remain at a constant distance from the lathe's rotational axis.  So that's another alignment procedure that's needed.  I don't know how that's done, being foggy on the subject of lathes, but I bet it involves aligning some sort of a guide parallel to the lathe's rotational axis or moving the workpiece parallel to that axis.  So another collimation step will then be needed when manufacturing the lathe.

And so the regress does continue.  But still only finitely.  At some point, parallelism can be achieved, within desired tolerance, by comparing distances, e.g., with calipers.  There is still a collimation issue for the calipers, but while previous collimations involved the spatial dimensions, the collimation of calipers uses spatial and temporal dimensions: in other words, the calipers must keep their geometrical properties over time.  For instance, if one sets the calipers to one distance, and then compares another, the caliper spacing had better not change over the amount of time it takes the calipers to move from one place to another.  So calipers allow one to transfer uniformity over time into uniformity over space.

But how do we ensure uniformity over time?  By using a rigid material, like hardened steel.  And how do we ensure the rigidity of a material?  This line of questioning pretty quickly leads to something that we don't ensure: laws of nature, uniform over space and time, that make the existence of fairly rigid materials possible.  And if we then ask about the source of these laws and their uniformity, the only plausible answer is God.  So, we may add to God's list of attributes: ultimate collimator.

There are, of course, other ways of manufacturing cylinders than by using a lathe.  One might cast a cylinder in a cylindrical mould--but that just adds an extra step in the regress, since the mould has to be manufactured.  Or one might extrude a cylinder by pushing or pulling the material for it through a circular die.  In the latter case, one still has to make a circular die, perhaps with a spinning cutter at right angles to a flat piece, and one has to ensure that the material is moved at right angles to the die.  So one has changed the problem of parallelism into the very similar problem of aligning at right angles.  And I suspect we eventually get back to something like rigid materials anyway.

So the lesson that sometimes regresses stop after one or two steps is not aptly illustrated with the case of the collimator.  That regress is still, perhaps, finite--but it goes further back, and eventually to God.

Monday, September 14, 2009

Aquinas's design argument and evolution

St. Thomas's Fifth Way is:

We see that things which lack intelligence, such as natural bodies, act for an end, and this is evident from their acting always, or nearly always, in the same way, so as to obtain the best result. Hence it is plain that not fortuitously, but designedly, do they achieve their end. Now whatever lacks intelligence cannot move towards an end, unless it be directed by some being endowed with knowledge and intelligence; as the arrow is shot to its mark by the archer. Therefore some intelligent being exists by whom all natural things are directed to their end; and this being we call God.

A standard question about design arguments is whether they aren't undercut by the availability of evolutionary explanations. Paley's argument is often thought to be. But Aquinas's argument resists this. The reason is that Aquinas's argument sets itself the task of explaining a phenomenon which evolutionary theory does not attempt to, and indeed which modern science cannot attempt to, explain. In this way, Aquinas's argument differs from Intelligent Design arguments, which offer as their explananda features of nature (such as bacterial flagella) which are in principle within the purview of science.

Aquinas's explanandum is: that non-intelligent beings uniformly act so as to achieve the best result. There are three parts to this explanandum: (a) uniformity (whether of the exceptionless or for-the-most-part variety), (b) purpose ("so as to achieve"), and (c) value ("the best result"). All of these go beyond the competency of science.

The question of why nature is uniform—why things obey regular laws—is clearly one beyond science. (Science posits laws which imply regularity. However, to answer the question of why there is regularity at all, one would need to explain the nature of the laws, a task for philosophy of science, not for science.)

Post-Aristotelian science does not consider purpose and value. In particular, it cannot explain either purpose or value. Evolutionary theory can explain how our ancestors developed eyes, and can explain this in terms of the contribution to fitness from the availability of visual information inputs. But in so doing, it does not explain why eyes are for seeing—that question of purpose goes beyond the science, though biologists in practice incautiously do talk of evolutionary "purposes". But these "purposes" are not purposes, as the failure of evolutionary reductions of teleological concepts shows (and anyway the reductions themselves are not science, but philosophy of science). And even more clearly, evolutionary science may explain why we have detailed visual information inputs, but it does not explain why we have valuable visual information inputs.