Monday, May 09, 2011

 

Predicting Elections is like the NFL... We have parity (and may not have a major contest until 2012)

As some of us make our first attempts at predicting the 2012 elections, we are all wondering which prediction will prove most accurate at the end of the day. Can we skip most of the "expert" predictions and concentrate on only one website? The answer, it would seem, is yes... but only because we're likely to get the same answer wherever we go. Why?

Pretty much all expert projections are privy to, and rely on, the same polling data, and most pollsters are about as accurate as one another. Nate Silver's pollster rankings suggest otherwise, but there is considerable debate within the academic/polling community over whether his rankings are statistically significant in most cases or hold any predictive value for future campaigns.

Without reliving old arguments over statistical significance (see this Mark Blumenthal piece arguing against significance and Silver's response), I think the question of predictive value is far more important.

Following the 2010 midterm elections, Nate released a preliminary 2010 general election pollster scorecard* for eight pollsters. Their errors ranged from 3.3% to 5.8%, with Rasmussen registering as the least accurate. How did their relative rankings compare with past performance?


There was, at best, a weak and statistically insignificant relationship. The highest-ranking pollster (Quinnipiac) had actually been ranked lowest in the previous incarnation of the pollster rankings. YouGov, which ranked 7th, had come in at 3rd. Disturbingly, the prior rankings had punished YouGov because it conducts polls over the Internet. In other words, an effort to use prior pollster performance to create a more accurate forecast of future accuracy did not help.

Based on this evidence, as well as the fact that a different rating system by American Research Group's Dick Bennett (one that included data from the primaries) actually found Quinnipiac towards the bottom of the pack due to a poor primary performance, I do not believe that, for the most part, past pollster accuracy foretells future performance.

Not surprisingly, then, poll aggregation techniques are about as accurate as one another. Chris Bowers (a pioneer of simple polling averages) found that the difference in accuracy between the final predictions of Pollster.com, FiveThirtyEight.com, and a simple 25-day polling average for the 52 closest Presidential, Senatorial, and Gubernatorial contests in 2008 and early 2010 (before the general) was only 0.27% (with the simple average coming out statistically insignificantly ahead).

In the 2010 general election, Bowers calculated that in the 45 closest campaigns the difference in mean error between Pollster.com, FiveThirtyEight.com, Real Clear Politics, and a simple 25-day polling average was only 0.31% (with FiveThirtyEight coming out statistically insignificantly ahead). Dick Bennett's review of the aggregation methods found similarly insignificant differences in error.
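For concreteness, here is a minimal sketch of what a "simple 25-day polling average" looks like in code: average the two-way margin of every poll that finished within 25 days of the election. The poll data and field names below are hypothetical, not anyone's actual dataset.

```python
from datetime import date, timedelta

def simple_polling_average(polls, election_day, window_days=25):
    """Mean two-way margin of all polls ending within the window."""
    cutoff = election_day - timedelta(days=window_days)
    recent = [p["margin"] for p in polls if p["end_date"] >= cutoff]
    return sum(recent) / len(recent) if recent else None

# Hypothetical example: three polls of a single statewide race.
polls = [
    {"end_date": date(2010, 10, 20), "margin": 4.0},  # R +4
    {"end_date": date(2010, 10, 28), "margin": 2.5},
    {"end_date": date(2010, 9, 1), "margin": 8.0},    # too old, excluded
]
print(simple_polling_average(polls, election_day=date(2010, 11, 2)))  # 3.25
```

That it is this simple is exactly the point: the fancier methods have to beat a one-line average, and in statewide races they largely don't.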

What about in the House of Representatives, where neither Bowers nor Bennett has roamed?

Few websites that I know of actually attempt to predict the results (not just the winner) in each House race. To do so, you need not only polling data, but also past district voting history (at both the Congressional and Presidential levels), among other variables; a rough sketch of how such a blend might work follows.
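As an illustration only (this is not FiveThirtyEight's or Stochastic Democracy's actual model), one simple way to combine the two ingredients is to weight a district's polling average against a "fundamentals" prior built from its voting history. Every weight and input below is hypothetical.

```python
def house_forecast(poll_margin, n_polls, district_lean, national_swing,
                   weight_per_poll=0.25):
    """Two-way margin forecast: mix polls with a history-based prior."""
    prior = district_lean + national_swing        # fundamentals estimate
    if n_polls == 0 or poll_margin is None:
        return prior                              # unpolled district
    w = min(1.0, n_polls * weight_per_poll)       # more polls, more weight
    return w * poll_margin + (1 - w) * prior

# Hypothetical district: D+6 historically, R+7 national environment,
# two polls averaging D+1 -> the forecast lands at an even race.
print(house_forecast(poll_margin=1.0, n_polls=2,
                     district_lean=6.0, national_swing=-7.0))  # 0.0
```

The history-based prior matters because most House districts see few or no public polls, so a polls-only method simply cannot cover all 406 races.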

The only two that did so in 2010 were FiveThirtyEight.com and Stochastic Democracy (to which I have previously contributed). Looking at all the House races that had both a Republican and a Democratic candidate (406 in total), I found that the average error (on the two-way vote) was 6.27% for FiveThirtyEight.com and 6.72% for Stochastic Democracy. This difference is not statistically significant at any conventionally accepted level.
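To unpack that significance claim: since both sites forecast the same 406 races, the natural test is a paired one on their race-by-race errors. A minimal sketch is below; the error arrays are random placeholders centered on the reported averages, not the real 2010 data.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
# Placeholder per-race absolute errors; the real test would use the
# actual 406 two-way errors for each site.
err_538 = np.abs(rng.normal(6.27, 5.0, size=406))
err_sd = np.abs(rng.normal(6.72, 5.0, size=406))

t_stat, p_value = ttest_rel(err_538, err_sd)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # a large p means no real gap
```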


What happens when we just look at who better predicted the winner? FiveThirtyEight.com had 19 missed calls, while Stochastic Democracy had 18. Again, not statistically significant. Another prominent site, Larry Sabato's Crystal Ball (to which I have recently contributed), also had 18 missed calls. Most prominent websites had similar track records.
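As a rough check on that claim (and it is rough: both sites were calling the same races, so a paired test like McNemar's would be the stricter approach), a simple two-proportion z-test shows just how far 19 versus 18 misses out of 406 is from significance:

```python
from math import sqrt

def two_prop_ztest(x1, x2, n):
    """z statistic for two proportions from samples of equal size n."""
    p1, p2 = x1 / n, x2 / n
    pooled = (x1 + x2) / (2 * n)
    se = sqrt(pooled * (1 - pooled) * (2 / n))
    return (p1 - p2) / se

z = two_prop_ztest(19, 18, 406)
print(f"z = {z:.2f}")  # about 0.17, far below the ~1.96 needed for p < .05
```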


Where does this leave us? Any prognosticator or pollster who claims to be more accurate than the other guy probably is not (at least for more than one cycle). The fact that a simple 25-day average and a simple Real Clear Politics average do as well as some of the more complicated methods in statewide contests indicates that those at home can try their own hand at beating the pros (and on any given day have a decent chance of doing so). Most importantly, I feel secure knowing that readers are getting good information no matter where they go.

*I have not seen an updated scorecard, but would be more than happy to update the post based on a new one.

Note: If you are interested in any part(s) of the 2010 House dataset, feel free to email.
