Mellor's argument against circular causation in Real Time II seems to me to basically come down to the following observation. If we have probabilistic causation of B by A and of A by B, then the four conditional chances involved in the causation, namely P(B|A), P(B|~A), P(A|B) and P(A|~B), fully determine all the unconditional probabilities of A, B and their negations. Namely, only one assignment to P(A) and P(B) does not generate a violation of the laws of probability. (For instance, if P(B|A)=P(A|B)=1/2 and P(B|~A)=P(A|~B)=1/4, then we have to set P(A)=P(B)=1/3, or we will violate the laws of probability.) But Mellor seems to think that in a causal system, we should be able to keep fixed the conditional chances and yet set some unconditional probability howsoever we wish.
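The arithmetic here can be checked directly. A minimal sketch (the function name and the choice of constraints are mine, not Mellor's): treating the four conditional chances as conditional probabilities of a single joint distribution, the law of total probability and Bayes' theorem pin down P(A) and P(B), and the remaining constraint is over-determined, so it gets checked at the end.

```python
from fractions import Fraction as F

def fixed_point(p_B_given_A, p_B_given_notA, p_A_given_B, p_A_given_notB):
    """Solve for the unique P(A), P(B) consistent with the four
    conditional chances of an A-B causal loop.

    Constraints used:
      total probability: P(B) = P(B|A)P(A) + P(B|~A)(1 - P(A))
      Bayes:             P(A|B)P(B) = P(B|A)P(A)
    The leftover constraint P(A|~B)(1 - P(B)) = P(~B|A)P(A) is then
    verified; the four chances over-determine the two unknowns.
    """
    a, b, c, d = p_B_given_A, p_B_given_notA, p_A_given_B, p_A_given_notB
    # Bayes gives P(B) = a*P(A)/c; substituting into total probability:
    #   a*p/c = (a - b)*p + b  =>  p = b / (a/c - a + b)
    p = b / (a / c - a + b)
    q = a * p / c
    assert d * (1 - q) == (1 - a) * p, "conditional chances are inconsistent"
    return p, q

# The example from the text: P(B|A)=P(A|B)=1/2, P(B|~A)=P(A|~B)=1/4.
print(fixed_point(F(1, 2), F(1, 4), F(1, 2), F(1, 4)))
# -> (Fraction(1, 3), Fraction(1, 3)), i.e. P(A) = P(B) = 1/3
```

Exact rationals are used so that the uniqueness claim is checked without floating-point slack.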
Certainly, in ordinary non-looping causation, we can do that. If we have events A1,A2,... occurring or not occurring in a sequence with no memory (i.e., a Markov chain with a binary state space), then whatever the transition probabilities P(An|An−1) and P(An|~An−1) are, we can arrive at a coherent probability assignment for the system as a whole no matter what we let P(A1) be.
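This is easy to see by propagating forward. A sketch (function name mine): each step is just the law of total probability, so any starting value P(A1) in [0,1] yields a coherent assignment for the whole chain.

```python
from fractions import Fraction as F

def propagate(p1, a, b, n):
    """Forward chances a = P(A_k | A_{k-1}), b = P(A_k | ~A_{k-1}).
    Returns P(A_1), ..., P(A_n) for an arbitrary initial P(A_1); each
    step applies total probability, so every value stays in [0, 1]."""
    p = p1
    out = [p]
    for _ in range(n - 1):
        p = a * p + b * (1 - p)
        out.append(p)
    return out

# Same transition chances as the loop example: a = 1/2, b = 1/4.
print(propagate(F(1, 3), F(1, 2), F(1, 4), 4))  # 1/3 is a fixed point
print(propagate(F(1), F(1, 2), F(1, 4), 4))     # but P(A1) = 1 works too
```

The contrast with the looping case is that here nothing forces a unique P(A1): the conditional chances constrain only the forward direction.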
It is a quite interesting feature of circular causation that the conditional chances fix all the probabilities. But where is the absurdity?
Well, to be fair to him, Mellor does bring in frequencies. Suppose we have a large number of independent causal loops governed by the same chances happening side-by-side. Suppose the chances are all strictly between 0 and 1. Then any distribution of features between the loops should be metaphysically possible. Thus, we could have five million As and ten million non-As, or equal numbers, or ten million As and five million non-As. But now suppose that the frequency of As and non-As in the system does not match the frequency determined by the unconditional probability P(A) that can be derived from the conditional chances. Then we can calculate the expected number of As and non-As that there should be after going around the A-B-A loop, and we will find that that expected number doesn't match the number we have. (Mellor does this explicitly.)

And so what? Well, one problem here is that it is very unlikely that the expected number should fail to match the observed number. Correct! But it should not surprise us that we have an unlikely scenario when we have started with an unlikely assumption. For the expected distribution of As and non-As is the one given by the unconditional probabilities P(A) and P(~A), and ex hypothesi our distribution departed from that. In unlikely circumstances, an unlikely result. What's strange about that? This is basically the same point that Nicholas Smith made here in the case of a somewhat different argument.
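Mellor's expected-number calculation can be sketched as follows (function name mine; the chances are the ones from the example above, under which P(A) = 1/3): a frequency matching P(A) is a fixed point of one trip around the loop, while a mismatched frequency is not.

```python
from fractions import Fraction as F

def around_the_loop(n_A, n_notA, a, b, c, d):
    """Expected numbers of As and non-As after one trip around the
    A-B-A loop, with a = P(B|A), b = P(B|~A), c = P(A|B), d = P(A|~B)."""
    n = n_A + n_notA
    exp_B = a * n_A + b * n_notA          # A -> B step
    exp_A = c * exp_B + d * (n - exp_B)   # B -> A step
    return exp_A, n - exp_A

chances = (F(1, 2), F(1, 4), F(1, 2), F(1, 4))  # derived P(A) = 1/3

# Frequency 1/3 matches P(A): going around the loop preserves it.
print(around_the_loop(5_000_000, 10_000_000, *chances))
# -> (Fraction(5000000, 1), Fraction(10000000, 1))

# Frequency 2/3 departs from P(A): the expected number after the loop
# (5,312,500 As) does not match the 10,000,000 we started with.
print(around_the_loop(10_000_000, 5_000_000, *chances))
```

The mismatch in the second case is exactly Mellor's observation; the reply in the text is that a distribution that departs from P(A) was already an improbable starting point, so an improbable-looking consequence is no surprise.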