Take regularity as the thesis that the rational agent assigns a probability of 0 only to impossible propositions and a probability of 1 only to necessary propositions. Bayesians like regularity in large part because regularity allows them to prove convergence theorems. These convergence theorems say that if you start with a regular probability assignment, and keep on gathering evidence, your probability assignments will converge to the truth. Here, a probability assignment for p "converges to the truth" provided that if p is true, then one's credences converge to 1, and if p is false, then one's credences converge to 0.
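This kind of convergence is easy to see in a toy case. Here is a minimal sketch (my own illustration, not any particular theorem): an agent with a regular prior over two coin hypotheses updates by Bayes' theorem on flips of the (in fact biased) coin, and her credence in the true hypothesis heads toward 1.

```python
import random

random.seed(0)

# Two hypotheses about a coin: fair (chance of heads 0.5) or biased (0.8).
# The coin is in fact biased; watch the credence in "biased" approach 1.
p_fair, p_biased = 0.5, 0.8
credence_biased = 0.5  # a regular prior: neither hypothesis gets 0 or 1

for _ in range(1000):
    heads = random.random() < p_biased  # evidence from the true, biased coin
    like_biased = p_biased if heads else 1 - p_biased
    like_fair = p_fair if heads else 1 - p_fair
    # Bayes' theorem: new credence is proportional to prior times likelihood
    numer = credence_biased * like_biased
    credence_biased = numer / (numer + (1 - credence_biased) * like_fair)

print(credence_biased)  # very close to 1 after 1000 flips
```

The convergence is only almost sure: a sufficiently deviant run of flips could keep the credence away from 1, which is exactly the loophole the argument below exploits.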
But they cannot use this argument for regularity. For consider the proposition Cp: "If you keep on gathering evidence in manner M, your probability assignment for p will converge to the truth" (take that as a material conditional). The kinds of convergence theorems that the Bayesians like in fact show that P(Cp)=1.[note 1] And that's why the Bayesians like these theorems. They give us confidence of convergence. But now notice that these very convergence theorems are incompatible with regularity. For it is clear that Cp is not a necessary truth. Just as it is possible to get an infinite run of heads when tossing a coin (it's no less likely than any other particular infinite sequence), it's possible to have an infinite run of misleading evidence.
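The coin case makes the possible-but-probability-zero structure vivid. Each finite run of heads is possible, yet its probability shrinks toward 0 as the run lengthens, so the infinite run gets probability 0 without being impossible. A quick numerical check:

```python
# Probability of n heads in a row on a fair coin: 0.5 ** n.
# Every finite run is possible, but the probabilities shrink toward 0,
# so the infinite run has probability 0 despite being possible --
# the same structure as an infinite run of misleading evidence.
for n in (10, 100, 1000):
    print(n, 0.5 ** n)
```

So a regularist cannot assign Cp probability 1 while denying that its negation is impossible: the theorem forces P(Cp)=1, and possibility forces P(not-Cp)>0, and those cannot both hold.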
In summary, one of the main reasons Bayesians like regularity is that it yields convergence theorems. But the convergence theorems are not compatible with regularity. Oops. Not only do the convergence theorems refute regularity, but they are supposed to be the main motivation for regularity.
In email discussion, a colleague from another institution suggested that the regularist Bayesian might instead try to assign probability 1−e to Cp, where e is an infinitesimal. I don't have a proof that that can't work for the particular convergence theorems they're using, but I can show that it won't work for the strong Law of Large Numbers, and since the convergence theorems they're using are akin to the strong Law of Large Numbers, I don't hold out much hope for this move.
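For concreteness, the strong Law of Large Numbers at issue says that, with probability 1, the running mean of independent fair coin flips converges to 1/2. A minimal numerical illustration (my own sketch, not part of the argument above):

```python
import random

random.seed(1)

# Strong Law of Large Numbers: the running mean of fair coin flips
# converges to 0.5 on almost every sequence of flips. But a sequence
# of all heads, on which the mean stays at 1, remains possible, so the
# convergence event has probability 1 without being necessary.
flips = [random.random() < 0.5 for _ in range(100_000)]
mean = sum(flips) / len(flips)
print(mean)  # close to 0.5
```

The regularist's problem is that the theorem assigns this convergence event exactly probability 1, not 1 minus an infinitesimal, while the all-heads sequence shows the event is not necessary.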