Maybe a Bayesian should be a hybrid of an internalist and an externalist about justification. The internalist aspect would come from correct updating of credences on evidence, internalistically conceived. The externalist aspect would come from the priors, which need to be well adapted to one's epistemic environment in such a way as to lead reasonably quickly to truth in our world, and maybe also in a range of nearby worlds.
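For reference, the updating in question is presumably ordinary conditionalization: on learning evidence $E$, the new credence in a hypothesis $H$ is

$$P_{\text{new}}(H) = P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}.$$

The prior $P$ is what the externalist condition constrains; the move from $P$ to $P_{\text{new}}$ is the internalist part.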
This suggests a natural way to think about the internalist/externalist question by means of an analogy: designing a Bayesian artificial intelligence system. The programmer puts in the priors. The system is not "responsible" for them in any way (scare quotes, since I don't think computers are responsible, justified, etc.--but something analogous to these properties will be there)--it is the programmer who is responsible. Nonetheless, if the priors are bad, the outputs will not be "justified". The system then computes--that is what it is "responsible" for. It seems natural to think of the part the programmer is responsible for as the externalist moment in "justification" and the part the system is "responsible" for as the internalist moment. And if we are Bayesian reasoners, then we should be able to say the same thing, minus the scare quotes, and with the programmer replaced by God and/or natural selection and/or human nature.
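To make the division of labor concrete, here is a minimal sketch (the class and names are hypothetical, purely for illustration) of a Bayesian system whose prior is injected from outside and whose only job is to conditionalize:

```python
from dataclasses import dataclass

@dataclass
class BayesianAgent:
    # hypothesis -> credence; the prior is injected from outside ("the programmer")
    credences: dict[str, float]

    def update(self, likelihood: dict[str, float]) -> None:
        """Conditionalize on evidence E, given P(E|H) for each hypothesis H.
        This computation is all the system itself does -- the internalist moment."""
        # Total probability: P(E) = sum over H of P(E|H) * P(H)
        p_e = sum(likelihood[h] * p for h, p in self.credences.items())
        # Bayes: P(H|E) = P(E|H) * P(H) / P(E)
        self.credences = {h: likelihood[h] * p / p_e
                          for h, p in self.credences.items()}

# Externalist moment: the "programmer" supplies the prior.
agent = BayesianAgent(credences={"H1": 0.5, "H2": 0.5})
# Internalist moment: the system merely conditionalizes on the evidence.
agent.update(likelihood={"H1": 0.9, "H2": 0.3})
print(agent.credences)  # {'H1': 0.75, 'H2': 0.25}
```

On this picture, whether the output is "justified" depends both on the injected prior (the programmer's contribution) and on the correctness of the update (the system's contribution).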
1 comment:
Bayesian updating seems like a generalized view of inference. It would be an odd view on which inference was not justified in internalist fashion. Is such a view even conceivable?