Monday, May 4, 2015

The cause of a living thing is alive

  1. Every known cause of a living thing includes a living thing.
  2. Therefore, probably, every cause of a living thing includes a living thing. (Induction)
  3. Therefore, either there is (a) an infinite regress of living things, (b) circular causation among living things, or (c) an uncaused living thing.
  4. But (a) and (b) are false.
  5. So there is an uncaused living thing.
(Et hoc dicimus deum - and this we call God.)

30 comments:

Gorod said...

Hello,

very interesting post. For me it raises some questions...

1. I was wondering how Aquinas could have missed this in his Five Ways, but then I wondered: did he miss it, or is it included in one of his ways? If so, which one? I'm not an expert, but I hear his Five Ways are structured around the various types of causes (formal, material, efficient, final), so could we think of life as part of one of these?

2. This seems to be a special case where we end up saying "this is what we call God", but since the causal link is through begetting, something that exists also within God, how does this really fit together with the doctrine of the Trinity? Does it "stop" only in the Father? But then the Son wouldn't be God. But if it stops in God (whichever Person), then we would have to say that the Son isn't begotten, or make some distinction between the kind of begetting we're speaking about...

3. If the other Ways give us names of God like Uncaused Cause, Immobile Motor, etc., what could we call the origin in this argument? The Unborn/Unbegotten Life?

Thanks in advance if you care to address any or all of these questions.

Alexander R Pruss said...

The causal link doesn't have to be through begetting. We might one day create synthetic life.

In any case I doubt we should talk of the relations in the Trinity as causal.

Cale said...

The "inductive" leap from 1 to 2 is a logical error.

Cale said...

After all:

1.) Every known mind is embodied.
2.) Therefore, probably, every mind is embodied.
3.) If God exists, an unembodied mind exists.
4.) Therefore, probably, God doesn't exist.

Theists in particular have a very compelling reason to forgo this sort of naive induction--God is unlike anything that exists within our experience. Every single one of God's alleged properties can be used to prove that he probably doesn't exist, if we accept this sort of "logic."

Alexander R Pruss said...

But your 1 is false since we have very good arguments for the existence of God. :-) If we didn't, I think your argument wouldn't be half bad. Of course like all inductive arguments, it's defeasible.

Cale said...

Calling 1 false is actually just begging the question. My argument starts from an entirely appropriate position of assumed agnosticism about the subject under debate.

Your "rebuttal" is just assuming that the conclusion is false.

Honestly, your response is an even more serious error than your initial argument.

Cale said...

And no. We really don't. ;)

Basically, when we actually bother to look at these alleged "arguments" for God, what we find is exactly this sort of trivial error, over and over again. The specific error varies from case to case, but there isn't even one single approach that doesn't suffer from a fatal logical or epistemic flaw.

So 1 is certainly not undermined by one's claims to "know" that God exists. Not only is such a claim question begging, and thus not really a response at all, it is false, to boot.

Miloš said...

I am puzzled by the assertion that all theistic arguments commit the same logical error (which one?), and I am not quite sure any atheist has actually made such an incredible discovery.

I believe that knowledge of God's existence is not a necessary condition for undercutting 1. in your argument - the mere epistemic possibility of God's existence is enough. And then the argument leaves us with agnosticism about God's existence - the place where we started.

Cale said...

Epistemic possibility is certainly not enough to undercut my premise 1. Only an affirmative assertion that God is known to exist would be sufficient--and that would trivially beg the question (and, as such, not constitute a response at all--just hot air).

If you want to know what logical or epistemic error a particular argument suffers from, feel free to offer it, and I will tell you.

Alexander R Pruss said...

Milos:

Mere epistemic possibility isn't enough, indeed, to undercut the induction. I think one does need something stronger than epistemic possibility, though probably weaker than knowledge.

Cale:

If one has an independent argument for p, one isn't begging the question. You're not convinced by the independent arguments. But that doesn't make the use of their conclusion question-begging.

That said, these inductive arguments--yours and mine--may not be very strong since they move to a case that is significantly different from the ones in the evidence base. But even though they aren't very strong, they still have some evidential force.

Mark Rogers said...

Hey Cale!
You say:
'Epistemic possibility is certainly not enough to undercut my premise 1. Only an affirmative assertion that God is known to exist would be sufficient--and that would trivially beg the question (and, as such, not constitute a response at all--just hot air).'
This does not seem quite right to me. You would need I think:
1. An affirmative assertion that God is known to exist.
2. An affirmative assertion that God is known to be a disembodied mind.
Not all theists agree with William Lane Craig that God is a disembodied mind.

Cale said...

If one's only approach to objecting to an argument is to assert the opposite of the conclusion, that actually is question begging.

Of course, it's an inductive argument, and if we were doing induction *properly* instead of *improperly* we would have an easy solution: proper induction via Bayesian updating can accommodate as much evidence as you've got, whereas this sort of argument pretends that some partial set of evidence is sufficient to warrant a probabilistic conclusion--which is why this sort of argument is a logical error.

I actually don't think it carries any rational epistemic weight at all. It's simply too ham-fisted to count as a worthwhile argumentative approach. Demanding a Bayesian approach, rather than the non-approach this argument utilizes, seems quite reasonable.
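
To make the contrast concrete, here is a minimal sketch of a Bayesian update over two rival hypotheses; the hypothesis labels and all the numbers below are made up purely for illustration, not a model of the actual dispute.

    # Minimal Bayesian update over two rival hypotheses (illustrative numbers only).
    # h1: "every cause of a living thing includes a living thing"
    # h2: "some living things have wholly non-living causes"

    prior = {"h1": 0.5, "h2": 0.5}  # start agnostic

    # Likelihood of the evidence (every *known* living thing had a living cause)
    # under each hypothesis -- the 0.7 is an arbitrary placeholder.
    likelihood = {"h1": 1.0, "h2": 0.7}

    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    posterior = {h: p / total for h, p in unnormalized.items()}

    print(posterior)  # {'h1': 0.588..., 'h2': 0.411...}

The point is just that every piece of evidence enters through the likelihoods, and the posterior can be recomputed as more evidence comes in.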

Mark:
<<
This does not seem quite right to me. You would need I think:
1. An affirmative assertion that God is known to exist.
2. An affirmative assertion that God is known to be a disembodied mind.
>>

I agree. Both would be needed. Good point.

Cale said...

But, if you don't like my particular parody argument, we can easily prove your pseudo-inductive heuristic faulty using a hypothetical example:

Given your proposed rule of induction (call it A) and some subject S:

If all X known to S are Y, then S is warranted in believing that probably all X are Y.

Given that S knows of precisely one rat, and that rat is hairless.

1.) If A is true, S is warranted in believing that, probably, all rats are hairless.

2.) S is not warranted in believing that, probably, all rats are hairless.

3.) Therefore, A is false.

Simply put, a statement like "all X are Y" is too strong a claim to be warranted in this way, especially since this approach allows us to warrant such a universal rule with only a single sample. However, no rational person would take a single sample as sufficient evidence for a universal rule.

Indeed, this points to two problems in your proposed rule:

First, that we shouldn't be using this sort of induction to try to warrant universal rules. That is, we shouldn't say "if all known X are Y, then probably all X are Y."

We should, rather, say "If all known X are Y, then any given unknown X is probably also Y."

These statements are actually significantly different in terms of their implications, and the pseudo-inductive heuristic you're trying to use actually applies to the latter, rather than the former. That's mistake one.

Mistake two is that your heuristic doesn't take into account the amount or strength of evidence available. It doesn't discriminate between samples of one individual and samples of thousands, even though no rational person would accept a sample of one as indicative while samples of thousands quickly become powerfully indicative (if processed correctly, which, of course, this heuristic is incapable of doing).

If you want to use induction, actually use induction properly--use statistics and probability theory.

This sort of naive pseudo-induction really has no place in sincere intellectual endeavors. To take it as providing any epistemic warrant at all, really, is a straightforward mistake.
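
For what it's worth, here is a rough sketch of what "use statistics and probability theory" looks like for the rat case, assuming nothing fancier than a uniform prior on the proportion of hairless rats (Laplace-style); the numbers are purely illustrative.

    from fractions import Fraction

    def prob_next_m_hairless(n: int, m: int) -> Fraction:
        """If all n observed rats were hairless, then under a uniform prior on
        the hairless proportion, the chance that the next m rats are also all
        hairless works out to (n + 1) / (n + m + 1)."""
        return Fraction(n + 1, n + m + 1)

    # "The next rat is probably hairless" and "(nearly) all rats are hairless"
    # come apart, and the sample size matters enormously:
    print(prob_next_m_hairless(1, 1))       # 2/3       -- one rat seen, next single rat
    print(prob_next_m_hairless(1, 100))     # 1/51      -- one rat seen, next hundred rats
    print(prob_next_m_hairless(1000, 100))  # 1001/1101 -- a thousand seen, next hundred

A sample of one rat shifts the single-case prediction a little but does next to nothing for the near-universal claim, which is exactly the distinction between the two readings above.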

Mark Rogers said...

Hey Cale!
Have you given much thought to evolution and natural selection? It is taught in school like it is the undeniable truth, but I see scant evidence of it in the fossil record. I was just wondering what you thought caused life and its diversity. Possibly God?

Mark Rogers said...

Hey Cale!

1.) Every known mind is embodied.
2.) Therefore, probably, every mind is embodied.
3.) If God exists, an unembodied mind exists.
4.) Therefore, probably, God doesn't exist.

I realize you are trying to show that God does not exist. Realistically, though, would not a conclusion like this:
4.) Therefore, probably, God doesn't exist as an unembodied mind.
flow better with your argument?

Alexander R Pruss said...

Cale:

It's a good point that inductive inferences tend to be enthymematic, i.e., have unstated background assumptions. Thus, the classic argument "All known ravens are black; so, probably, all ravens are black" has the unstated background assumption that quite a number of ravens are known. If no ravens are known, then the first premise is trivially true. If only one or two ravens are known, as in your rat example, again the induction isn't very good.

One can try to quantify this sort of thing probabilistically, but I don't think we have a way of doing it without a lot of arbitrariness as yet. Kolmogorov priors are promising, but suffer from language-dependence. Carnap priors were a brave attempt, but I don't think they worked. Methodologically, preserving the correctness of simple inductive inferences like the raven one constrains the prior probabilities rather than the other way around.
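
For the curious, the flavor of Carnap's attempt can be put in one formula (stated roughly, and only as background): if k of the n observed ravens are black, and the language has m admissible color-predicates, the recommended probability that the next raven is black is

    (k + λ/m) / (n + λ),

where the parameter λ >= 0 fixes how much weight the prior gets relative to the sample: λ = 0 gives the "straight rule" k/n, and very large λ barely moves off the prior. The choice of λ and of the predicate family is exactly the sort of residual arbitrariness in question.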

Mika's Piano Blog said...

Dr. Pruss,

Since you mentioned Kolmogorov and Carnap priors, I'm curious - what do you think of Solomonoff priors?

Alexander R Pruss said...

I actually meant to talk about Solomonoff priors, which are based on Kolmogorov complexity. :-)

There is a language-dependence in them. Granted, there is a theorem that Kolmogorov complexity differs only by a constant term across languages. But that's only reassuring in limiting cases.

For relatively simple cases, that constant term can dwarf the differences we want to capture.
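
Roughly stated, the theorem being leaned on here says that for universal machines U and V there is a constant c, about the length of an interpreter for V written for U, such that

    K_U(x) <= K_V(x) + c   for every string x.

Since a Solomonoff-style prior weights a hypothesis on the order of 2^(-K), a constant of even a few thousand bits amounts to a multiplicative factor of 2 raised to a few thousand, which can easily swamp the modest complexity differences between simple rival hypotheses.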

Mika's Piano Blog said...

Thank you for the clarification! :)

Cale said...

Mark: I have actually studied evolution and natural selection a fair bit, yes. There seems to be quite a bit of evidence, both from fossils and from genetics.

As for the argument, I'm actually not trying to argue against God. I'm presenting a parody argument aimed at the type of pseudo-induction Pruss is utilizing in the OP, which is why I crafted my form to mirror his. You can construct an essentially undefeatable inductive argument against the existence of God, but it looks quite a bit different.

Cale said...

The fact that the language dependency of Solomonoff priors can be done away with via that constant "translation" code means that, though any given prior cannot be calculated (this is nothing new for Solomonoff priors), they can be used to non-arbitrarily compare limited sets of hypotheses and return those hypotheses ordered in terms of probability, even without attaching any specific value to them--which, frankly, is sufficient for all practical purposes and still better than non-rigorous, provably flawed, and essentially arbitrary heuristics like IBE or this form of pseudo-induction.

Alexander R Pruss said...

Actually, no. The constant factor is an upper bound on the difference in complexity. But the actual difference isn't going to be the same for every hypothesis, because some hypotheses will fit more nicely with one Turing machine than with another.

Indeed, for any two hypotheses H1 and H2, there will be a Turing machine according to which H1 is much simpler than H2 and another Turing machine according to which H2 is much simpler than H1.

Alexander R Pruss said...

An easy way to see that the factor will be different for different hypotheses is this. Consider the hypotheses H and ~H. Then if we rescale the probability of H, we cannot rescale the probability of ~H by the same amount, or else the probabilities won't add up to one.

Cale said...

That last part is incorrect. If we add the translation code to each hypothesis, their *complexities* will change and their probabilities will change, but the distribution will remain valid.

The probabilities of those two hypotheses will never add up to one, because the Solomonoff prior is from a distribution over an infinite set of hypotheses. That isn't the issue.

If we add the translation code, we will end up with a non-arbitrary relationship between the Solomonoff priors for the two hypotheses. That is the issue.

Cale said...
This comment has been removed by the author.
Cale said...

Gah, I'm really sorry about those double posts.

Alexander R Pruss said...

But we don't just add the translation code to each one. Rather, the complexity is defined by the minimal length of the code generating a given pattern. Adding translation code is one way of generating code. But it's not always the minimal one. In fact, if we gerrymander a Turing machine enough, there will be an arbitrarily complex pattern that you can generate with a single bit of code.
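
A toy sketch of the gerrymandering worry, under the simplifying assumptions that programs are strings and a "machine" is just a string-to-string function (illustrative only; nothing here is a real universal machine):

    # A "machine" that reserves the one-bit program "1" for a favored pattern,
    # so the favored pattern gets description length 1 under this machine,
    # while every other output's shortest program grows by one bit.

    FAVORED = "some arbitrarily complicated pattern " * 1000  # stand-in string

    def reference_machine(program: str) -> str:
        # Stand-in for a fixed background machine; the details don't matter here.
        return program

    def gerrymandered_machine(program: str) -> str:
        if program == "1":
            return FAVORED                     # one bit buys the favored pattern
        return reference_machine(program[1:])  # everything else costs one extra bit

    # A prior of the form 2**(-description_length) built on gerrymandered_machine
    # therefore gives the favored pattern weight about 1/2, however complex it
    # looks from the point of view of the reference machine.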

Cale said...

My understanding is that Kolmogorov complexity is the minimal length after being universalized to all possible Turing machines via the translation code.

Interestingly, Solomonoff himself didn't use Kolmogorov complexity specifically, but rather something of his own conceptualization that was substantially similar. Kolmogorov's big contribution to that was the theorem regarding the translation code, and I'm pretty sure it works out in the manner I've described.

So, we could arbitrarily pick a Turing machine and calculate complexity based on that choice--this would be arbitrary (though, again, no more arbitrary than IBE or the pseudo-induction in the OP).

Alternatively, we could evaluate their Kolmogorov complexities including the translation code (or, since this is not feasible at present, some arbitrarily close approximation), and the only arbitrary feature of this route would be the degree of precision we choose to use.

Alexander R Pruss said...

As far as I can tell there is no such thing as "after being universalized to all possible Turing machines via the translation code". There is no such thing as "the translation code", either. Rather, for any pair of machines, there will be some optimal translation code from one to the other (and another optimal translation code from the other to the first).

You can try to come up with a precise specification of what you mean, and then it can be evaluated. As it is, the "universalized" doesn't make sense.

The usual procedure is to arbitrarily choose a Turing machine, and then to use the translation code stuff to be confident that *in the limit* things are going to be basically the same. But here we're not interested in limiting behavior.

The problem with an arbitrary Turing machine is that you can always rule in favor of a hypothesis by gerrymandering the Turing machine to make the code to generate the favored hypothesis be a single bit and the code to generate all the competitors super long.

Cale said...

I'll look into it further, but even if it did work out that way, it would still be no *worse* method than IBE or the pseudo-induction presented in the OP.