Write down a decimal point. Then choose a digit at random, with each of the ten possible digits equally likely (probability 1/10). Repeat ad infinitum, with all the digits chosen independently. Let X be the number whose infinite decimal expansion you have just written down.
Suppose you find out that X is going to be either 1/4 or 1/3. Which of the two is more likely? Answer: 1/4. For there are two ways of getting 1/4: 0.250000... and 0.249999.... But there is only one way of getting 1/3: 0.333333..., and each infinite sequence is equally likely. Thus, intuitively P(X=1/4 | X=1/3 or X=1/4)=2/3. Surprised?
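One way to make the intuition concrete (a sketch of my own, not part of the original argument, with illustrative names): truncate to the first n digits. Exactly two n-digit prefixes remain consistent with 1/4 (2500...0 and 2499...9), exactly one remains consistent with 1/3 (333...3), and each particular n-digit prefix has probability 10^-n, so at every finite stage the conditional probability of 1/4 comes out as 2/3:

```python
from fractions import Fraction

def p_quarter_given_quarter_or_third(n):
    """Truncate X to its first n digits (n >= 2) and condition on the prefix
    still being consistent with X = 1/4 or X = 1/3.

    Prefixes consistent with 1/4: '25' + '0' * (n - 2) and '24' + '9' * (n - 2).
    Prefix consistent with 1/3:   '3' * n.
    Each particular n-digit prefix has probability (1/10) ** n.
    """
    p_prefix = Fraction(1, 10) ** n
    p_quarter = 2 * p_prefix  # two surviving prefixes for 1/4
    p_third = 1 * p_prefix    # one surviving prefix for 1/3
    return p_quarter / (p_quarter + p_third)

for n in (2, 5, 20):
    print(n, p_quarter_given_quarter_or_third(n))  # 2/3 at every finite stage
```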
Here is another interesting fact. In the technical probability-theory sense, X is uniformly distributed on the interval [0,1]. But in the intuitive sense it's not: we just saw that the value 1/4 is intuitively twice as likely as the value 1/3. So the technical probability-theory sense does not capture the intuitive notion of uniform distribution.
Similarly, the technical probability-theory sense of independence does not capture the intuitive notion of independence. Suppose that a random process uniformly picks out a number Y in the interval [0,1], and suppose you get a dollar if and only if Y is 1/2. Let A be the event that Y is 1/2 and let B be the event that you get a dollar. Then P(A&B)=P(A)=0=P(A)P(B), and hence in the probability-theoretic sense A and B are independent. But intuitively they are far from independent: B is entirely determined by A.
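A quick sanity check of the technical claim (my own sketch; the simulation and its names are illustrative, not from the text): in a simulation the empirical frequencies of A, B, and A&B are all 0, so the factorization P(A&B)=P(A)P(B) is satisfied, even though B is defined to occur exactly when A does.

```python
import random

def trial():
    y = random.random()   # (approximately) uniform draw of Y from [0, 1)
    a = (y == 0.5)        # event A: Y is exactly 1/2
    b = a                 # event B (you get a dollar) occurs exactly when A does
    return a, b

n = 10**6
count_a = count_b = count_ab = 0
for _ in range(n):
    a, b = trial()
    count_a += a
    count_b += b
    count_ab += a and b

# All three frequencies are 0 (hitting exactly 1/2 has probability 0), so
# P(A & B) = P(A) * P(B) holds despite B being wholly determined by A.
print(count_ab / n, (count_a / n) * (count_b / n))
```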
Maybe a better definition of independence for philosophical (though maybe not mathematical) purposes is that both P(A|B)=P(A) and P(B|A)=P(B). And then conditional probabilities should not be defined as ratios of unconditional probabilities: the ratio definition leaves P(B|A) undefined when P(A)=0, whereas intuitively P(B|A)=1 and P(B)=0 in the example above, so that A and B rightly come out dependent on the revised definition.