Sometime in the fall, Ted Poston asked me how I thought one should model the force of multiple arguments for the existence of God in a Bayesian setting. There are difficulties. For instance, when we discover a valid argument, what we are discovering is the necessary truth that if the premises are true, the conclusion is as well. But necessary truths should have probability one. And it's hard to model learning things that have probability one. Moreover, the premises of the arguments are typically not something we are sure of. At the time, I suggested that we conditionalize on the contingent propositions reporting that the premises seem true. Poston ended up going with an urn model instead.
I want to try out another model for counting up the force of multiple arguments, one where we don't worry about what is and what isn't necessary. I will develop the story with a toy model whose prior probabilities make calculation easy, leaving it for future investigation to weaken my assumptions of prior independence and equiprobability.
So, suppose we're looking at decent (say: valid, non-question-begging) arguments for and against a conclusion q, and we find that there are m arguments for and n arguments against. How likely is q given this? Start the model by identifying in each argument the controversial premise. (If there is more than one, conjoin them.) Thus, we now have m+n controversial premises. Let's say that p1,...,pm are the controversial premises of the arguments for q and pm+1,...,pm+n are the controversial premises of the arguments for ~q.
Prior to the discovery of the arguments, my model takes the propositions p1,...,pm,pm+1,...,pm+n,q to be mutually independent and, further, to each have probability 1/2.
I now model the discovery of the arguments as a discovery of material conditionals. Thus, we discover the m material conditionals p1→q,...,pm→q that favor q and the n material conditionals pm+1→~q,...,pm+n→~q that favor ~q. How do we model this discovery? We ignore the messy detail that these discoveries were at least in part discoveries of logical connections (though perhaps only in part; some of the premises besides the controversial one might have been empirical), and we simply conditionalize on the conjunction D of the m+n discovered material conditionals.
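To make the conditionalization concrete, here is a minimal brute-force sketch in Python (the function name posterior_q and the example call are mine, purely for illustration): it enumerates all 2^(m+n+1) truth assignments to the premises and q, discards the ones that violate the discovered conditionals, and reads the posterior of q off the survivors.

```python
from itertools import product

def posterior_q(m, n):
    """Toy model: m+n controversial premises and q, all independent with
    prior probability 1/2, so the prior is uniform over truth assignments.
    Conditionalizing on the discovered material conditionals just means
    discarding assignments that violate them and renormalizing."""
    kept = 0     # assignments consistent with the conditionals
    kept_q = 0   # of those, assignments where q is true
    for assignment in product([False, True], repeat=m + n + 1):
        ps, q = assignment[:-1], assignment[-1]
        # p_i -> q for the m controversial premises of the arguments for q
        if any(ps[i] and not q for i in range(m)):
            continue
        # p_j -> ~q for the n controversial premises of the arguments for ~q
        if any(ps[j] and q for j in range(m, m + n)):
            continue
        kept += 1
        if q:
            kept_q += 1
    return kept_q / kept

print(posterior_q(3, 2))  # 0.666..., i.e. three arguments for and two against
```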
What's the result? Well, we could use Bayes' Theorem, but that's just a tool for computing conditional probabilities, and sometimes other methods work better. We have m+n+1 "propositions of interest" (i.e., q and the pi). Our prior probabilities assign equal chances to each of the 2^(m+n+1) possible ways of assigning True or False to the propositions of interest. When we conditionalize on the material conditionals, we rule out some combinations. For instance, if we assign True to p1, we had better assign True to q as well, and we had better assign False to pm+1,...,pm+n, all on pain of contradiction.
We can count how many truth assignments remain after conditionalizing on D:
- Assign False to all the pi and False to q: one combination
- Assign False to all the pi and True to q: one combination
- Assign False to p1,...,pm and True to at least one of pm+1,...,pm+n and False to q: 2^n − 1 combinations
- Assign True to at least one of p1,...,pm and False to all of pm+1,...,pm+n and True to q: 2^m − 1 combinations.
Adding these up, there are 2^m + 2^n surviving assignments, of which 2^m make q true. So P(q|D) = 2^m/(2^n + 2^m).
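As a sanity check under the toy assumptions, the brute-force sketch above agrees with this closed form:

```python
# Compare the enumeration with the closed form 2^m / (2^n + 2^m).
for m, n in [(1, 1), (3, 2), (2, 5), (4, 4)]:
    assert abs(posterior_q(m, n) - 2**m / (2**n + 2**m)) < 1e-12
```

So, for instance, three decent arguments for q and two against push q from its prior of 1/2 up to 8/12 = 2/3.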
For a more realistic model, we will need to change our priors for the controversial premises so that they aren't all 1/2. Some of the controversial premises will be fairly plausible and will have priors higher than 1/2. Some may not be all that plausible and will have priors lower than 1/2. And maybe the conclusion q will have a prior other than 1/2. Furthermore, there may be mutual dependencies among the controversial premises over and above the dependencies induced by the fact that some of them imply q and others imply ~q (the latter dependencies are handled by our conditioning). All of this would require fiddling with the priors, and the simple "counting combinations" method of calculating the posterior P(q|D) will need to be replaced by a more careful calculation. Nonetheless, the principle will be the same.
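Here is one way the more careful calculation might go, as a sketch only: the premises are still assumed independent prior to the arguments being discovered, but each gets its own prior, and each surviving truth assignment is weighted by its prior probability rather than counted. The function name posterior_q_weighted and the priors in the example call are made up for illustration; genuine dependencies among the premises would require a full joint prior rather than the product used here.

```python
from itertools import product

def posterior_q_weighted(pro_priors, con_priors, q_prior=0.5):
    """Like the uniform toy model, but each controversial premise (and q)
    has its own prior.  Premises are assumed independent before the
    arguments are discovered, so an assignment's weight is a product of
    priors; conditioning again just drops assignments that violate the
    discovered conditionals."""
    m, n = len(pro_priors), len(con_priors)
    priors = list(pro_priors) + list(con_priors) + [q_prior]
    num = 0.0   # prior weight of surviving assignments with q true
    den = 0.0   # prior weight of all surviving assignments
    for assignment in product([False, True], repeat=m + n + 1):
        ps, q = assignment[:-1], assignment[-1]
        if any(ps[i] and not q for i in range(m)):     # violates p_i -> q
            continue
        if any(ps[j] and q for j in range(m, m + n)):  # violates p_j -> ~q
            continue
        weight = 1.0
        for value, prior in zip(assignment, priors):
            weight *= prior if value else 1.0 - prior
        den += weight
        if q:
            num += weight
    return num / den

# With every prior at 1/2 this reproduces 2^m / (2^n + 2^m);
# the priors below are purely illustrative.
print(posterior_q_weighted([0.7, 0.6, 0.5], [0.4, 0.3]))
```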