Imagine scientists find that people with brain cancer have a subconscious desire to drink orange juice, so that on any given occasion of choosing what to drink, they are about ten percentage points more likely to choose orange juice than people without brain cancer. They also find that drinking orange juice has no causal effect on the development or progress of the cancer. Suppose background information, such as the frequency of my headaches and my family history, gives me a 0.001 probability of having brain cancer.
So, now I am choosing between orange and grapefruit juice. I like the taste of orange juice a little more, but sometimes I drink grapefruit juice. On a non-causal, purely Bayesian, decision theory, it seems I have reason to drink grapefruit juice. For suppose P(drink orange juice|cancer)=0.75 and P(drink orange juice|no cancer)=0.65, so that, since these are my only two options, P(drink grapefruit juice|cancer)=0.25 and P(drink grapefruit juice|no cancer)=0.35. Then, plugging into Bayes' Theorem, we get P(cancer|drink orange juice)≈0.00115 and P(cancer|drink grapefruit juice)≈0.00071. Now, the utility of the taste of orange juice is maybe 1.0 and that of the taste of grapefruit juice is 0.8. But the disutility of brain cancer is at least as bad as -30,000. (A quick calculation to make sure this is in the right ballpark: I would very willingly give up a daily pleasure equal to drinking one glass of orange juice for life to prevent brain cancer with certainty, and if my favorite fruit juice has utility 1.0 and I live for 75 years, that's about 27,394 units of utility I would be willing to forego to avoid brain cancer.) Now, Utility(drink orange juice)=1.0+(0.00115)(-30,000)≈-33.5 and Utility(drink grapefruit juice)=0.8+(0.00071)(-30,000)≈-20.6. So I should drink grapefruit juice, even though I don't like it as much.
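The arithmetic can be checked with a short script. This is only a sketch: the probabilities and utilities are the ones stipulated above, the variable names are my own, and I assume the choice is between exactly two juices, so P(drink grapefruit juice|·) = 1 − P(drink orange juice|·). (The posterior for grapefruit juice works out to about 0.00071 on these assumptions.)

```python
def posterior(p_cancer, p_drink_given_cancer, p_drink_given_no_cancer):
    """Bayes' theorem: P(cancer | drink), over the two hypotheses cancer / no cancer."""
    num = p_drink_given_cancer * p_cancer
    den = num + p_drink_given_no_cancer * (1 - p_cancer)
    return num / den

P_CANCER = 0.001  # prior from background information

# Binary choice: P(grapefruit | .) = 1 - P(orange | .)
p_cancer_oj = posterior(P_CANCER, 0.75, 0.65)  # approx. 0.00115
p_cancer_gj = posterior(P_CANCER, 0.25, 0.35)  # approx. 0.00071

U_CANCER = -30_000  # stipulated disutility of brain cancer
u_oj = 1.0 + p_cancer_oj * U_CANCER  # approx. -33.6
u_gj = 0.8 + p_cancer_gj * U_CANCER  # approx. -20.6
```

On these numbers the evidential expected utility of grapefruit juice exceeds that of orange juice, which is the point of the example.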
But it seems to be an established fact about health policy and health advice that non-causal correlations are not to be acted on. Hence, we need causal decision theory.
[Fixed a crucial typo. -ARP]