Comments on Alexander Pruss's Blog: "Conciliationism and another toy model"

Alexander R Pruss (2017-02-11 20:29):

Thanks, Ian, for your help.
On reflection, a serious problem with all these models is that it's not clear that Alice and Bob can be said to have the same evidence. For 2nd-level-Alice has available to her the data from 1st-level-Alice, while 2nd-level-Bob has available to him the data from 1st-level-Bob. And this is different data, hence different evidence.

I am not sure, though, that the conciliationist can make this complaint. For it seems that something like *this* difference of evidence will be present whenever two peers evaluate things differently.

IanS (2017-02-11 20:05):

You can model people that way, but then they are not strictly Bayesian. As you say above and in your newer post, A* and B* implicitly use priors on the evidential force. In a strictly Bayesian approach, these priors would be derived from a probability model.

Here is a sketch of a strictly Bayesian model that works much like the one in the post, but requires no numerical integration.

One of two hypotheses about a random variable X is true. Under hypothesis A, X is distributed as N(μ, V); under hypothesis B, as N(−μ, V). Each hypothesis has prior probability 1/2.

You observe X. The log-odds for A, calculated from the normal density, is 2μX/V. Suppose there is an additive error, distributed as N(0, W), in reporting the log-odds to the 2nd level. Then under A, the reported log-odds is distributed as N(2μ²/V, 4μ²/V + W); under B, it is distributed as N(−2μ²/V, 4μ²/V + W). So the 2nd-level log-odds for A, taking the reporting error into account, is (2μX/V) / (1 + VW/(4μ²)). Call this R.
The 2nd-level posterior probability can be calculated from R as exp(R) / (1 + exp(R)). This is an exact Bayesian result.

If W = 0 (i.e. no reporting error), R is the same as the first-order result, as it should be. As W increases, R shrinks towards 0, again as it should. If two people share their 2nd-level info as in the post, W should be replaced by W/2. This expands R away from zero. So, as in the post, if Alice and Bob agree on a posterior probability greater than 0.5, their merged probability will be higher.

Now for the point. Note that the adjustment factor 1/(1 + VW/(4μ²)) depends not only on W (the variance of the reporting error) but also on all the other parameters through μ²/V. So no generic 2nd-level prior on evidential force can match the strict Bayesian result.

IanS
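IanS's shrinkage formula is easy to sanity-check numerically. The following minimal Python sketch (the function names are illustrative, not from the comment) implements the first-order log-odds, the 2nd-level correction, and the W → W/2 sharing effect:

```python
import math

def first_order_log_odds(x, mu, v):
    # Log-odds for A after observing X = x, where X ~ N(mu, v) under A
    # and X ~ N(-mu, v) under B: log f_A(x) - log f_B(x) = 2*mu*x / v.
    return 2 * mu * x / v

def second_order_log_odds(reported, mu, v, w):
    # Shrink the (noisily reported) log-odds by 1 / (1 + V*W / (4*mu^2)),
    # the Bayesian correction for an additive N(0, w) reporting error.
    return reported / (1 + v * w / (4 * mu * mu))

def posterior_from_log_odds(r):
    # Posterior probability of A from log-odds R: exp(R) / (1 + exp(R)).
    return math.exp(r) / (1 + math.exp(r))

mu, v, w = 1.0, 2.0, 3.0
r0 = first_order_log_odds(1.5, mu, v)               # first-order log-odds
assert second_order_log_odds(r0, mu, v, 0.0) == r0  # W = 0: no shrinkage
r_solo = second_order_log_odds(r0, mu, v, w)        # one noisy report
r_shared = second_order_log_odds(r0, mu, v, w / 2)  # two shared reports: W -> W/2
assert 0 < r_solo < r_shared < r0                   # sharing expands R back toward r0
```

As the assertions show, pooling the two reports (replacing W by W/2) moves R away from zero, matching the comment's point that agreement strengthens the merged posterior.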
Alexander R Pruss (2017-02-10 15:42):

William:

That's right, but only regarding my other model.

William (2017-02-10 12:45):

"But the standard deviation in the *average* of the two agents' log-odds estimates is smaller, by a factor of the square root of two"

Convergence of the standard error of the mean assumes a consistent estimator. But if half the estimates are biased in a uniform random distribution way, I am dubious that the estimate is consistent enough for it to converge.

This may mean that with larger sample sizes a cynical estimator that sets all estimates (that are not the same for Alice and Bob) to 0.5 may do just about as well as any other formula.

Alexander R Pruss (2017-02-10 11:17):

Regarding my worries about priors: as long as the priors on the evidential force (measured as additive in log-odds space) are flat within several sigmas of Alice's and Bob's first-order estimates, what I said works out.

Ian:

That's interesting. I guess you could modify the story by introducing two new characters, Alice* and Bob*. Alice* and Bob* don't have access to Alice and Bob's first-order evidence. All they have available are Alice and Bob's respective first-order evaluations, which they simply take to be evidence in a straightforward conditionalization.
And then when you bring Alice* and Bob* together, you don't have peerhood, since Alice* and Bob* have different evidence: Alice* knows only about Alice's evaluation and Bob* knows only about Bob's evaluation. Alice* and Bob* aggregate their data in a standard Bayesian way, and get the results I mention.

And then my original story merges people with their starred versions.

IanS (2017-02-09 19:31):

Is higher-order Bayesianism (i.e. ordinary Bayesianism plus a cognitive error model) really Bayesian? (I take it that this was the point of Heath White's comment on your earlier post.)

Here is a simple, strictly Bayesian model. There are two similar-looking but biased coins. Coin A has Pr(Heads) = 0.9; coin B has Pr(Heads) = 0.1. A third party picks one coin randomly with probability 0.5, flips it once, and briefly shows you the outcome. You see Heads. But there is a catch: your vision is unreliable. The probability that you see Heads or Tails correctly is 0.5 + Δ (i.e. Pr(See Heads | Heads) = 0.5 + Δ, etc.). What is your posterior credence that the coin is A? You apply Bayes and get Pr(A | See Heads) = 0.5 + 0.8Δ. Now suppose that a second person with similarly but independently unreliable vision also sees the outcome. If she also sees Heads (which you can infer from her stated posterior credence), you can again apply Bayes to get Pr(A | both see Heads) = 0.5 + 0.8Δ/(0.5 + 2Δ²). For 0 < Δ < 0.5, this is clearly greater than your original credence. If she sees Tails, your new credence would be 1/2. These results are entirely intuitive. The second person is in effect just a second independent pair of eyes. If you agree, you have more confidence in what you saw, so your credence is more extreme. If you disagree, less.

But note, this is not second-order Bayesianism. Your cognitive faculties are taken to be perfect: it's only your vision that is faulty. You are treating what you see as ordinary evidence that can be modelled probabilistically. It seems to me that any truly Bayesian approach must work like this, i.e. it must 'top out' in perfect cognitive faculties, with everything else taken as evidence that can be probabilistically modelled.

For what it's worth, I doubt that real people work like this. That's one reason I have trouble with Bayesian approaches, except in special, textbook-style circumstances.

IanS
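The arithmetic in IanS's coin model can be verified by brute-force enumeration. This short Python sketch (my own illustration; the function name is hypothetical) sums the joint probability over coin, flip outcome, and each observer's noisy report, and reproduces the closed forms 0.5 + 0.8Δ and 0.5 + 0.8Δ/(0.5 + 2Δ²):

```python
from itertools import product

def posterior_A(delta, n_see_heads, n_observers):
    # P(coin = A | exactly n_see_heads of n_observers report Heads),
    # by enumerating coin, flip outcome, and all report combinations.
    p_correct = 0.5 + delta  # P(report matches the true outcome)
    num = den = 0.0
    for coin in ("A", "B"):
        p_heads = 0.9 if coin == "A" else 0.1
        for outcome in ("H", "T"):
            p_out = p_heads if outcome == "H" else 1 - p_heads
            for reports in product("HT", repeat=n_observers):
                if sum(r == "H" for r in reports) != n_see_heads:
                    continue
                p_rep = 1.0
                for r in reports:
                    p_rep *= p_correct if r == outcome else 1 - p_correct
                joint = 0.5 * p_out * p_rep  # 0.5 = prior on the coin
                den += joint
                if coin == "A":
                    num += joint
    return num / den

d = 0.3
one = posterior_A(d, 1, 1)    # you alone see Heads
both = posterior_A(d, 2, 2)   # both observers see Heads
assert abs(one - (0.5 + 0.8 * d)) < 1e-12
assert abs(both - (0.5 + 0.8 * d / (0.5 + 2 * d * d))) < 1e-12
assert abs(posterior_A(d, 1, 2) - 0.5) < 1e-12  # one Heads, one Tails: back to 1/2
```

The final assertion also confirms the comment's disagreement case: one Heads and one Tails report cancel out, returning the credence to 1/2.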
Alexander R Pruss (2017-02-09 15:01):

The calculations in this post unacceptably neglect the priors on the evidential force.

Alexander R Pruss (2017-02-09 14:08):

The standard deviation sigma is the one that affects the log-odds of each individual agent. But the standard deviation in the *average* of the two agents' log-odds estimates is smaller, by a factor of the square root of two (precisely because variances add).
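The √2 claim is the standard fact that for independent errors, Var((X + Y)/2) = (Var X + Var Y)/4 = σ²/2, so the standard deviation of the average is σ/√2. A quick simulation (purely illustrative, not from the discussion) confirms it:

```python
import math
import random
import statistics

random.seed(0)  # fixed seed so the check is deterministic
sigma = 1.3
n = 200_000

# Independent N(0, sigma^2) errors in Alice's and Bob's log-odds estimates.
alice = [random.gauss(0, sigma) for _ in range(n)]
bob = [random.gauss(0, sigma) for _ in range(n)]
avg = [(a + b) / 2 for a, b in zip(alice, bob)]

sd_single = statistics.stdev(alice)
sd_avg = statistics.stdev(avg)
# The averaged estimate's sd should be smaller by a factor of sqrt(2).
assert abs(sd_single / sd_avg - math.sqrt(2)) < 0.02
```

This bears only on the case of independent, unbiased errors; William's worry below about biased (inconsistent) estimates is precisely about when this convergence assumption fails.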
William (2017-02-09 13:40):

I wonder if an increased standard deviation (from summing variances when combining two measures of data) would change anything here? You appear to use the same sigma for all the standard deviations.