Suppose all you know about n is that it is a positive integer. What probabilities should you assign to the values of n? Intuitively, you should assign equal probability to each value. But that probability will have to be zero if the probabilities are to add up to one (infinitesimals won't help by Tim McGrew's argument). So now the probability of everything will be zero by countable additivity. We can drop countable additivity, but then there will no longer be a unique canonical measure—there are many finitely additive measures.
So, here's a suggestion. The details are not yet worked out and may well overlap with the literature. The following is for an ideal agent with evidence that is certain. I don't know how to generalize it from there.
Step 1: Drop the normalization to one. Instead, talk of an epistemic possibility measure (epm) m, and say that m(p) is the degree of epistemic possibility of p (I am not calling it probability, though probability measures will be a special case; I am following Trent Dougherty's idea that, given classical probabilities, the degree of epistemic possibility of p is equal to its degree of epistemic probability). An epm takes values from zero to infinity (both endpoints may be included) and is countably additive. Depending on context, I'll go back and forth between treating it as assigning values to propositions and as assigning values to sets (in the latter case, it'll just be a measure in the sense of measure theory). The case where the total measure (i.e., the measure of a tautology or of the whole set) is one shall be referred to as classical. I will say that p is epistemically possible if and only if the epm of p is greater than zero.
Step 2: Instead of modeling the degree of belief in p with a single number, P(p), as in the classical theory, we model it with a pair of numbers, <m(p),m(~p)>, which I will call the degree of belief in p. The agent is certain of p provided that ~p is epistemically impossible, i.e., provided the degree of belief in p is of the form <x,0>. This means that there is a distinction between maximal epistemic possibility and certainty: maximal epistemic possibility is when the degree of epistemic possibility of p is equal to that of a tautology, while certainty is when the degree of epistemic possibility of the negation of p is zero. The axioms (see Step 4) will ensure that when the total measure is finite, certainty and maximal epistemic possibility come together. (Here is the example which leads me to this. If N is the set of positive integers, m is counting measure and E is the set of even integers, then m(E)=m(N)=infinity, but obviously if all one knows about a number is that it is in N, one isn't certain that it is in E. Here, m(~E)=infinity as well, so both E and ~E have maximal epistemic possibility, and hence there is no certainty.) We say that the agent has a greater degree of belief in q than in p provided that either m(q)>m(p) or m(~q)<m(~p).
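The pair representation and the comparison rule can be sketched in a few lines of Python. This is my own illustration, not part of the proposal: the counting-measure cardinalities are supplied by hand (as `math.inf` for infinite sets), since they cannot be computed from an infinite set directly.

```python
import math

def certain(belief):
    """Certainty of p: the negation ~p is epistemically impossible."""
    m_p, m_not_p = belief
    return m_not_p == 0

def greater_belief(b_q, b_p):
    """Greater degree of belief in q than in p:
    either m(q) > m(p) or m(~q) < m(~p)."""
    return b_q[0] > b_p[0] or b_q[1] < b_p[1]

# Degrees of belief <m(p), m(~p)> under counting measure on N:
evens    = (math.inf, math.inf)  # m(E) = m(~E) = infinity
over_100 = (math.inf, 100)       # {n > 100}: the complement has 100 elements
whole_N  = (math.inf, 0)         # the tautology over N

print(certain(whole_N))                 # True: we are certain n is in N
print(certain(evens))                   # False: maximal possibility without certainty
print(greater_belief(over_100, evens))  # True: m(~{n>100}) = 100 < infinity
```

Note how the pair `(inf, inf)` captures exactly the counting-measure example: the evens have maximal epistemic possibility, yet the agent is not certain of them.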
Step 3: The agent's doxastic state is modeled not just by the degrees of epistemic possibility assigned to all the (relevant) propositions, but by all the conditional degrees of epistemic possibility assigned to all the (relevant) propositions on all the (relevant) conditions. More precisely, for each proposition q whose negation isn't a tautology, there is a "conditional" epm m(−|q). The unconditional epm, which measures the degree of epistemic possibility, is m(p)=m(p|T), where T is a tautology. These assignments are dynamic, which I will sometimes indicate by a time subscript, and are updated by the very simple update rule that when evidence E comes in, and t is a time just before the evidence and t' a time just after, then m_t'(p|q)=m_t(p|q&E).
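The update rule can be sketched concretely with the counting-measure epm, restricted to a finite window of integers so the cardinalities are computable. The function names and window size are my own illustrative choices.

```python
# Conditional epm m(p|q) as the cardinality of the intersection,
# with propositions represented as Python sets over a finite window.

def m(p, q):
    """Conditional epm: |p & q| under counting measure."""
    return len(p & q)

def update(m, E):
    """Update rule: on learning E, m_t'(p|q) = m_t(p | q & E)."""
    return lambda p, q: m(p, q & E)

window = set(range(1, 21))
evens = {n for n in window if n % 2 == 0}
E = {1, 2, 3, 4, 5, 6}       # evidence: the number is in this finite set

m2 = update(m, E)
print(m2(evens, window))     # 3: the evens among {1,...,6}
print(m2(window, window))    # 6: total measure after the update
```

Dividing by the new total measure, m2(window, window), recovers the classical uniform probability 3/6 on the evidence set, as in the example of Step 5's sequel below.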
Step 4: Consistency is forced by an appropriate set of axioms, over and beyond the condition that m(−|q) is an epm for every q whose negation isn't a tautology. For instance, it will follow from the axioms that m(p&q|q)=m(p|q), and that m(p|q&r)m(q|r)=m(p&q|r)m(T|q&r) whenever both sides are defined (stipulation: xy is defined if and only if it is not the case that one of x and y is zero and the other is infinity) and T is a tautology. Perhaps these two are the only axioms needed; perhaps the second alone suffices; or perhaps a little more is required.
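As a sanity check (my own, not from the proposal), both candidate axioms can be verified numerically for the counting-measure epm m(a|b) = |a & b| on a small finite window:

```python
# Checking the two candidate axioms for the counting-measure epm
# on a finite window. The particular sets p, q, r are arbitrary choices.

def m(a, b):
    """Conditional epm: |a & b| under counting measure."""
    return len(a & b)

T = set(range(1, 13))              # the "tautology" over this window
p = {n for n in T if n % 2 == 0}
q = {n for n in T if n % 3 == 0}
r = {n for n in T if n <= 10}

# Axiom 1: m(p&q|q) = m(p|q)
print(m(p & q, q) == m(p, q))      # True

# Axiom 2: m(p|q&r) m(q|r) = m(p&q|r) m(T|q&r)
lhs = m(p, q & r) * m(q, r)
rhs = m(p & q, r) * m(T, q & r)
print(lhs == rhs)                  # True
```

For counting measure the second axiom holds because both sides equal |p & q & r| times |q & r|; the definedness stipulation only matters once infinite values enter.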
Step 5: To a first approximation, it is more decision-theoretically rational to do A than B iff the Lebesgue integral of (1_A(x)−1_B(x))p(x) is greater than zero, where p is the payoff function on our sample space, 1_S is the indicator function equal to 1 on S and 0 elsewhere, and the integral is taken with respect to m(−|do(A) or do(B)). Various qualifications are needed, and something needs to be said about cases where the integrals are undefined, and maybe about the case where either A or B has zero epm conditionally on (do(A) or do(B)). This is going to be hard.
Example: Suppose we're working with the positive integers N (i.e., with a positive integer about which we know nothing). Let m(F|G) be the cardinality of the intersection of F and G. Then we're certain of N, but of no proper subset of N. We have the same degree of belief in the evens, in the odds, in the primes, etc., since they all have the same cardinality. However, we have a greater degree of belief in the number being greater than 100 than we do in the evens, and that is how it should be. Suppose we get as evidence some finite set (i.e., the proposition that the number is in some finite set). Then, quite correctly, we get a classical uniform probability measure out of the update rule. Moreover, in the infinite case, we still get correct conclusions, like that it is more decision-theoretically rational to bet on the numbers divisible by two than on the numbers divisible by four, even though the degree of belief is the same for both.
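The betting comparison at the end can be sketched numerically: on a finite window the Step 5 integral becomes a sum, and its sign favors betting on the multiples of two, whatever the window size. The window size and unit payoff below are my own illustrative choices.

```python
# Step 5 comparison on a finite window: the integral of
# (1_A(x) - 1_B(x)) * payoff(x) against counting measure is a sum.

def advantage(A, B, payoff, window):
    """Sum of (1_A(n) - 1_B(n)) * payoff(n) over the window."""
    return sum(((n in A) - (n in B)) * payoff(n) for n in window)

N = 1000
window = range(1, N + 1)
div2 = {n for n in window if n % 2 == 0}
div4 = {n for n in window if n % 4 == 0}

adv = advantage(div2, div4, lambda n: 1, window)
print(adv)       # 250: the multiples of 2 that are not multiples of 4
print(adv > 0)   # True: betting on divisibility by two is more rational
```

The sum is positive on every window (it counts the numbers divisible by two but not four), even though div2 and div4 receive the same degree of belief, <infinity, infinity>, under the unrestricted counting measure.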