Wednesday, May 5, 2010

The metaextrapolation fallacy

Let's suppose that on Friday I am running a public star night. I am looking forward to it, so starting the preceding Saturday I check the NOAA cloud cover forecast for Waco for that Friday night. On Saturday, the forecast for that Friday is 75% cloud cover, let's suppose (I made up the data). On Sunday, the forecast is 70%. On Monday, the forecast for Friday is 62%. On Tuesday, Friday's forecasted cloud cover is 60%. Today, Wednesday, the forecast for Friday says 51%. What should I estimate Friday's cloud cover at? Well, a reasonable line of thought is that I can extrapolate from the data, and suppose that on Friday the forecast for Friday will say something like 40%, and since Friday's forecast for Friday will be the most accurate, my current estimate for Friday's cloud cover should be around 40%. Or I can give the appearance of being scientific and run a linear regression, and get the same answer from a pretty good fit with R² = 0.95.
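For the curious, the extrapolation can be done by hand with ordinary least squares on the post's made-up numbers (75, 70, 62, 60, 51), with Saturday as day 0 and Friday as day 6. With these exact figures the fit comes out around R² ≈ 0.97, in the same ballpark as the rough 0.95 above:

```python
# Made-up forecast data from the post: NOAA cloud-cover forecasts for
# Friday night, checked each day from Saturday (day 0) to Wednesday (day 4).
days = [0, 1, 2, 3, 4]            # Sat, Sun, Mon, Tue, Wed
forecasts = [75, 70, 62, 60, 51]  # % cloud cover forecast for Friday

n = len(days)
x_mean = sum(days) / n
y_mean = sum(forecasts) / n

# Ordinary least-squares slope and intercept.
sxy = sum((x - x_mean) * (y - y_mean) for x, y in zip(days, forecasts))
sxx = sum((x - x_mean) ** 2 for x in days)
slope = sxy / sxx
intercept = y_mean - slope * x_mean

# Coefficient of determination for the fit.
ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(days, forecasts))
ss_tot = sum((y - y_mean) ** 2 for y in forecasts)
r_squared = 1 - ss_res / ss_tot

# Extrapolate to Friday (day 6).
friday_estimate = intercept + slope * 6

print(f"slope = {slope:.1f} points/day")          # -5.8
print(f"R^2 = {r_squared:.2f}")                   # 0.97
print(f"extrapolated Friday forecast = {friday_estimate:.0f}%")  # 40%
```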

But this could be called the metaextrapolation fallacy. The weather data has already been pored over by experts—or by algorithms designed by experts. On Wednesday, these experts have access to the raw data behind the Sunday, Monday, Tuesday and Wednesday forecasts for Friday, and based on all that raw data, they extrapolated to 51% cloud cover. If they've done their job well—and of course that is always a question—my second-guessing extrapolation from their extrapolations should be trumped by their last extrapolation. In other words, I should suppose the cloud cover will be 51%, and that the trend I observed is just due to chance.

If, however, I find that in successive weeks there are similar trends in forecasts, I will then have reason to think that I've identified something in the data that they haven't. For if their estimates are the best possible, we would expect that the distribution of forecasts for a particular date, as one gets closer to the date, typically has no statistically significant trends. If there tend to be trends, then we might start forming hypotheses, such as that longer-term forecasts overestimate cloud cover. But to do that, we need more data than just these five points. (This post builds on, and qualifies, some of the remarks here.)
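That multi-week check can be sketched as follows. Only the first week's forecasts below come from the post; the other rows are hypothetical numbers invented purely for illustration. The idea is to fit a slope to each week's run of forecasts and then test whether the slopes are, on average, significantly different from zero:

```python
import statistics

def slope(forecasts):
    """Least-squares slope of a run of daily forecasts (days 0, 1, 2, ...)."""
    n = len(forecasts)
    x_mean = (n - 1) / 2
    y_mean = sum(forecasts) / n
    sxy = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(forecasts))
    sxx = sum((x - x_mean) ** 2 for x in range(n))
    return sxy / sxx

# One row per week: forecasts for the same target night, checked on five
# successive days. Only the first row is from the post; the rest are
# made-up numbers for illustration.
weeks = [
    [75, 70, 62, 60, 51],
    [60, 58, 52, 50, 45],
    [80, 73, 70, 66, 62],
    [55, 50, 48, 41, 39],
]

slopes = [slope(w) for w in weeks]
mean_slope = statistics.mean(slopes)
se = statistics.stdev(slopes) / len(slopes) ** 0.5

# One-sample t statistic for the null hypothesis that the mean slope is zero.
t = mean_slope / se
print(f"slopes = {slopes}")
print(f"t = {t:.2f}")  # compare |t| to 3.18, the two-sided 5% critical value for df = 3
```

With this invented data every week trends downward, so |t| far exceeds the critical value and we would have grounds for a hypothesis like the one above; with real forecasts the slopes would have to be collected over many weeks.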
