Showing posts with label expertise. Show all posts

Friday, April 12, 2019

Voting and expertise

Here is something that worries me. In a democratic system, voters need to decide questions where not only is the first-order evidence regarding the questions far beyond the typical voter’s area of expertise, but it is also beyond their expertise to know who the reliable experts are.

Economic questions seem particularly glaring cases of this. One politician proposes to raise the minimum wage on the grounds that this will improve the earnings of the neediest members of society, and thereby on balance raise up the most vulnerable. Another proposes to keep the minimum wage fixed on the grounds that raising it will lead to greater automation or close some businesses or reduce employment hours, and thereby on balance bring down the most vulnerable. Who is right is largely an empirical question. There is no way to address it without hard data, and the analysis of the data is really difficult.

If I were voting on such an issue (as an expat Canadian, I don’t get to vote either in the US or Canada), I could talk to colleagues in the Economics Department and try to get their expert opinion. But, frankly, even that probably wouldn’t be very reliable. These issues are ones that economists are going to be divided on, and while I know about the intellectual integrity of my colleagues in the Economics Department, it’s hard to know about their standing in the field and their knowledge of a particular question. And the vast majority of people don’t even know any economists personally.

This is really pessimistic. And I don’t see a solution. More education is good, of course, but the level of education that would be needed is far beyond what most people have either the time or the talent for. Maybe the one happy thought is this. When we have controverted empirical questions like that, and we need to make a decision, tossing a coin isn’t a bad way to do it. And voting is no worse than tossing a coin.

Tuesday, November 1, 2011

When should you adopt an expert's opinion over your own?

Consider two different methods for what to do with the opinion of someone more expert than yourself, on a matter where both you and the expert have an opinion.

Adopt: When the expert's opinion differs from yours, adopt the expert's opinion.

Caution: When the expert's opinion differs from yours, suspend judgment.

To model the situation, we need to assign some epistemic utilities.  The following are reasonable given that the disvalue of a false opinion is significantly worse than the value of a true opinion: at confidence level 0.95, the factor should be at least about 2.346, according to the hate-love ratio inequality.
  • Utility of having a true opinion: +1
  • Utility of having a false opinion: approximately -2.346
  • Utility of suspending judgment: 0
Given these epistemic utilities, we can do some quick calculations.  Suppose for simplicity that you're perfect at identifying the expert as an expert (surprisingly, replacing this by a 0.95 confidence level makes almost no difference).  Suppose the expert's level of expertise is 0.95, i.e., the expert has probability 0.95 of getting the right answer.  Then it turns out that Adopt is the better method when your level of expertise is below 0.89, while Caution is the better method when your level of expertise is above 0.89.  Approximately speaking, Adopt is the better method when you're more than about twice as likely to be wrong as the expert; otherwise, Caution is the better method.

In general, Adopt is the better method when your level of expertise is less than e/(D-e(D-1)), where e is the expert's level of expertise and D is the disutility of having a false opinion (which should be at least 2.346 for opinions at confidence level 0.95).  If your level of expertise is higher than that, Caution is the better method.
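For readers who want to check the numbers, here is a minimal sketch (mine, not from the Derive worksheet below) of the calculation.  It assumes a binary question on which exactly one of the two parties is right when they disagree, with the utilities above: +1 for a true opinion, -D for a false one, 0 for suspending judgment.

```python
def adopt_threshold(e, D):
    """Your level of expertise below which Adopt beats Caution,
    given the expert's expertise e and falsity disutility D."""
    return e / (D - e * (D - 1))

def eu_adopt_given_disagreement(r, e, D):
    """Expected utility of Adopt, conditional on disagreeing with the expert.
    On a binary question, a disagreement means exactly one party is right:
    P(expert right, you wrong) = e*(1-r); P(you right, expert wrong) = r*(1-e)."""
    z = e * (1 - r) + r * (1 - e)  # probability of disagreement
    return (e * (1 - r) * 1 + r * (1 - e) * (-D)) / z

e, D = 0.95, 2.346
print(round(adopt_threshold(e, D), 4))  # 0.8901, matching the 0.89 above

# Caution scores 0 on disagreements, so Adopt wins exactly below the threshold:
print(eu_adopt_given_disagreement(0.80, e, D) > 0)  # True: Adopt is better
print(eu_adopt_given_disagreement(0.95, e, D) > 0)  # False: Caution is better
```

Setting the conditional expected utility of Adopt to zero and solving for r recovers the threshold e/(D-e(D-1)).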

Here is a graph (from Wolfram Alpha) of the level of expertise you need to have (y-axis), versus the expert's level of expertise (x-axis), in order for adopting Caution rather than Adopt to be epistemic-decision-theory rational, where D=2.346.

Here is a further interesting result. If you set the utility of a false opinion to -1, which makes things more symmetric but leads to an improper scoring rule (with undesirable results like here), then it turns out that Adopt is better than Caution whenever your level of expertise is lower than the expert's. But for any utility of false opinion that's smaller than -1, it will be better to adopt Caution when the gap in level of expertise is sufficiently small.
If you want to play with this stuff, I have a Derive worksheet with this. But I suspect that there aren't many Derive users any more.

Tuesday, April 26, 2011

Epistemically otiose appeals to authority

Suppose I am an art graduate student.  After careful study, a certain well-known painting of uncertain provenance looks very much to me like it is by Rembrandt.  Kowalska is the world expert on Rembrandt.  I have never heard what Kowalska thinks about this painting.  But I reason thus: "This painting is almost certainly by Rembrandt.  Kowalska is very reliable at identifying Rembrandt paintings and has no doubt thought about this one.  Therefore, very likely, Kowalska thinks that the painting is by Rembrandt."  I then tell people: "I have evidence that Kowalska thinks this painting is by Rembrandt."

What I say is true--the evidence for thinking that the painting is by Rembrandt combined with the evidence of Kowalska's reliability is evidence that Kowalska thinks the painting is by Rembrandt.  But there is a perversity in what I say.  (Interestingly, this perversity is a reversal of this one.)  By implicature, I am offering Kowalska's Rembrandt authority as significant evidence for the attribution of the painting, while in fact all the evidence rests on my own authority.  Kowalska's authority on matters of Rembrandt is epistemically otiose.

This kind of rhetorical move occurs in religious and moral discourse to various degrees.  In its most egregious form, one reasons, consciously or not: "It is true that p.  Jesus knows the truth at least about matters of this sort.  Therefore, if the subject came up, Jesus would say that p."  And so one says: "Jesus would say that p."  (I am grateful to my wife for mentioning this phenomenon to me.)  Here it seems one is implicating that Jesus' theological or moral authority supports one's own view, but in fact all the evidential support for the view comes from one's initial reasons for believing that p.  One's reason for thinking that Jesus would say that p is that one thinks that it is true that p and one therefore thinks that Jesus would say it.

At the same time, there are contexts where this rhetorical move is legitimate, namely when the question is not primarily epistemic but motivational--when the point is not to convince someone that it is true that p, but to motivate her to act accordingly.  In this case, the imaginative exercise of visualizing Jesus saying that p may be helpful.  But when the question is primarily epistemic, there is a danger that one is cloaking one's own epistemic authority with that of Jesus.

Still, sometimes it is epistemically legitimate to appeal to what Jesus would say.  This is when one has grounds for believing that Jesus would say that p that go over and beyond one's other reasons for believing that it is true that p.  We can know about Jesus' character from Scripture and cautiously extrapolate what he would say about an issue.  (Likewise, we might know that Kowalska judged paintings relevantly like this one to be by Rembrandt, and this gives us additional confidence that she thinks this one is Rembrandt's.)  But we need to be very cautious with such counterfactual authority.  For one of the things that we learn from the New Testament is that what Jesus would say on an issue is likely to surprise people on both sides of the issue.  In particular, even if it is true that p and Jesus knows that p, Jesus might very well not answer in the affirmative if asked whether it is true that p.  He might, instead, question the motivations of the questioner or point to a deeper issue.

Here is a particularly unfortunate form of this epistemically otiose appeal to authority.  One accepts sola scriptura and one thinks that it is an important Christian doctrine that p.  So one concludes that Scripture somewhere says that p.  With time one might even forget that one's main reason for thinking that Scripture says that p was that one oneself thought that p, and then one can sincerely but vaguely (or perhaps precisely if eisegesis has occurred) cite Scripture as an authority that p.  This is, I think, a danger for adherents of sola scriptura.  (Whether this danger is much of a reason not to accept sola scriptura, I don't know.)

But religious authority is not the only area for this.  This also happens with science.  One accepts the proposition that p for some reason, good or bad.  That proposition is within the purview of science, or so one thinks.  So, one concludes that one day science will show that p or that science will make disagreement with the claim that p ridiculous, and one says this.  Here, the appeal to a future scientific authority is epistemically otiose and has only rhetorical force, though one may well be implicating that it has more than rhetorical force.

Here is another interesting issue in the neighborhood.  Suppose I know some philosophical, theological or scientific theory T to be true, and I know that God believes all truths.  Then I should be able to know that God believes T (barring some special circumstances that make for a counterexample to closure).  But it sounds presumptuous to say: "I know that God himself believes T."  I think the above considerations suggest why such a statement is inappropriate.  It is inappropriate because in standard contexts to say that one knows what an expert believes implicates that one believes it in part because of the expert's opinion--one is covering oneself with the expert's mantle of authority.  Still, inappropriateness is not the same as presumptuousness, and so the above still isn't a very good explanation of why "I know that God himself believes T" sounds bad.  Maybe a part of the explanation of the apparent presumptuousness is that by saying that one knows what God believes one is suggesting that one is one of God's intimates?  (Still, surely no theist would balk at: "God believes that 2+2=4.")