I am actually kind of suspicious that there is a subtle problem with my conditional probabilities in the following argument. It's a rather complex argument. Start with this observation. Suppose I know that there are N people in the universe, and ten minutes ago, a random process independently occurred to each of the N people, bestowing upon each the unobservable property Q with probability p. How likely is it, given this information, that I have Q? The answer is obvious: p. But now suppose that I learn some additional information: I learn exactly how many people now have Q and how many don't. Presumably, approximately pN have Q and (1-p)N don't, but now I have better than an approximation: I have an exact number. Let's say K people have Q. So now my probability that I have Q is K/N. Observe that p has now dropped out, because this information supersedes the information that involves p. For instance, maybe N=10, and in one world where p=1/4, three people have Q, while in another world where p=1/3, also three people have Q. If I am in the first world, my probability for having Q should be 3/10, and likewise in the second.
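The finite case can be checked numerically. Here is a minimal sketch (the function name and trial counts are my own choices, not from the argument) that estimates, by rejection sampling, the probability that a given person has Q conditional on exactly K of the N people having Q, for the two worlds in the example:

```python
import random

def simulate(p, n=10, k=3, trials=200_000, seed=0):
    """Estimate P(person 0 has Q | exactly k of n people have Q)."""
    rng = random.Random(seed)
    hits = kept = 0
    for _ in range(trials):
        world = [rng.random() < p for _ in range(n)]
        if sum(world) != k:
            continue  # reject worlds that don't match the known count
        kept += 1
        hits += world[0]
    return hits / kept

# The example's numbers: N=10, K=3, with p=1/4 in one world and p=1/3 in the other.
for p in (1/4, 1/3):
    print(round(simulate(p), 3))  # ≈ 0.3 in both worlds
```

Conditioning on the exact count washes out p, just as the argument says: both worlds give an answer near K/N = 3/10.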
The above illustrates a general principle: If I know the actual distribution at t1 of the property Q in the population, my best estimate of the probability of my having Q depends on that actual distribution, and not on the history of how the members of the population got to have Q. Different historical processes could produce the same distribution.
Now, suppose an actual infinity is possible, and so there are countably many people in the population. If this is possible, it is also possible that some process at t0 independently bestowed Q on some of the people, with probability p strictly between 0 and 1. Suppose this is all I know. Then I ought to assign probability p to the claim that I have Q. But by the above principle, if I were to learn what the distribution of Q in the population is, I should use that distribution to estimate the probability of my having Q, instead of using information about p. But I do know what the distribution of Q in the population is, even without actually observing anything. For I know that countably infinitely many members of the population have Q and countably infinitely many lack Q. (This works best if the persons are otherwise indiscernible, or differ only in respect of properties that have no ordering or topology or significance to them.) At least, the probability that this is so is 1, and that's surely good enough for knowledge. But now here is the funny thing: this fact about the distribution is independent of p. Whatever the value of p, the distribution would almost surely (i.e., with probability one) be infinitely many Qs and infinitely many non-Qs. So, by the above trumping principle, regardless of the value of p, I ought to assign the same probability (1/2? undefined?) to my having Q. But it is obvious that I ought to assign p. So we have a contradiction.
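One standard way to see that the probability is indeed 1, given the independence assumption, is the second Borel-Cantelli lemma: if events A_1, A_2, ... are independent and their probabilities sum to infinity, then almost surely infinitely many of them occur. Here, taking A_n to be the event that person n has Q,

\[ \sum_{n=1}^{\infty} P(A_n) = \sum_{n=1}^{\infty} p = \infty, \]

so almost surely infinitely many people have Q; and running the same argument on the events "person n lacks Q", each of probability 1-p > 0, gives almost surely infinitely many non-Qs as well.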
Here's a perhaps clearer way to run the argument. Two processes independently bestowed the properties Q and R on all the members of an infinite population, Q with probability 1/10 and R with probability 9/10. I now know that in the population there are now infinitely many Qs and infinitely many non-Qs. This is exactly the same distribution as the distribution of Rs. If my probabilities should depend on the distributions of Q and R, as the trumping principle says they should, it follows that I should assign the same probability to my having Q as to my having R. But plainly this is false: it is nine times as likely that I have R as that I have Q. Hence we have a contradiction.
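A finite approximation makes the point vivid. In the sketch below (function name and population sizes are mine, chosen for illustration), both the Q-count and the R-count grow without bound as the population grows, for either probability; only the frequencies, not the counts, distinguish Q from R:

```python
import random

def counts(p, n, seed=1):
    """Return (bearers, non-bearers) when a property is independently bestowed with probability p on n people."""
    rng = random.Random(seed)
    have = sum(rng.random() < p for _ in range(n))
    return have, n - have

# The example's probabilities: Q with 1/10, R with 9/10.
for n in (100, 10_000, 1_000_000):
    q, not_q = counts(0.1, n)
    r, not_r = counts(0.9, n)
    print(n, (q, not_q), (r, not_r))
```

All four counts diverge as n grows, so "infinitely many and infinitely many" is the common limiting distribution; yet the frequency q/n stays near 1/10 while r/n stays near 9/10, which is exactly the information the trumping principle would have us discard.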