I wonder if virtue epistemology isn't particularly well-poised to solve the problem of prior probabilities. To a first approximation, you should adopt those prior probabilities that a virtuous agent would adopt in a situation with no information. This is perhaps untenable, because maybe it's impossible to have a virtuous agent in a situation with no information (maybe one needs information to develop virtue). If so, then, to a second approximation, you should adopt those prior probabilities that are implicit in a virtuous agent's epistemic practices. Obviously a lot of work would be needed to work out the details. And I suspect that the result will end up being kind-relative, like natural law epistemology (of which this might be a species).
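To spell out the problem being addressed: Bayesian conditionalization tells you how to revise your credences once evidence comes in, via

\[ P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}, \]

but it says nothing about where the prior \(P(H)\) should come from before any evidence is in. The thought above is that the virtuous agent, or her epistemic practices, supplies that missing input.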
2 comments:
While I agree with this in some sense, one problem is that a virtuous agent is virtuous in multiple ways, and some parts of virtue lead to epistemic practices that are not the most likely to attain the truth, because truth is not the only good. For example, a virtuous agent will tend to trust people unless there are clear signs that they should not be trusted, but there are various matters where people are likely to be wrong even though there are no clear signs that they should not be trusted. So virtuous epistemic behavior is not the same as the behavior that is most likely to attain the truth.
I was thinking of an intellectually virtuous agent, and I suppose your comment suggests that the intellectually virtuous agent might not be fully virtuous simpliciter.
That said, I suspect that the fully virtuous agent's epistemic practices always align with epistemic rationality, for deontic reasons. Just as a fully virtuous agent wouldn't lie to save billions, she also wouldn't believe against the evidence to save billions. But this is, admittedly, controversial.