Sunday, February 07, 2010

 

Another Argument Against IVR Polling Falls Flat

The fight over the accuracy of interactive voice response (IVR) polls has gone on long enough to fill a book. IVR polls, which use a computerized or recorded voice to ask respondents questions that they answer on a touchtone keypad, should, according to critics, suffer from a whole host of problems that make them less accurate than the traditional live interviewer mode of polling. Yet the final IVR polls taken before elections have, on the whole, performed as well as if not better than live interviewer polls in predicting the outcome. This "shocking" accuracy has drawn a range of responses, from acceptance of IVR polls to continued arguments against IVR methodology. Now we have a new claim against IVR polls: they get the final outcome right, but they tell a "less realistic story" of how the election evolved.

Democratic pollster Mark Mellman (who has more polling experience than I could dream of having) points to three elections in which he believes IVR polls were incorrect until the final round of polls. Of course, as Mellman himself notes, there is no way of knowing whether a poll is right two months before an election. Instead, we must play a sort of guessing game about which story "seems" more realistic. While I think this is a game that leaves far too much open to interpretation, I'll play along.

Mellman's first case is the recent Massachusetts Special Senate Election, in which now-Senator Scott Brown came from behind to defeat Attorney General Martha Coakley. In that election, a University of New Hampshire poll taken January 2nd through the 6th and Mellman's own poll taken the 8th through the 10th showed Coakley leading by 17 and 14 points, respectively. These two live interviewer polls differed significantly from three IVR polls taken during the same period: a Rasmussen poll taken January 4th, a Public Policy Polling poll taken the 7th through the 9th, and a Rasmussen poll taken the 11th showed a Coakley lead of 9, a Brown lead of 1, and a Coakley lead of 2, respectively. The IVR polls clearly show movement toward Brown during the week of the 3rd, while the live interviewer polls showed no such trend. The live interviewer polls did not show a trend until the following week, when Mellman's own final poll before the January 19th Special Election predicted, as did the final Public Policy Polling poll, a 5-point Brown victory.

Obviously, these two sets of polls cannot both be right. Mellman believes that "given the timing of ads and the feel on the ground, our story strikes me as more plausible". Is this story really more "plausible"? Well, we know that an internal Republican poll from all the way back in mid-December showed Brown down only 13. Brown, along with special interest groups, had aired television ads (I saw them on TV in New Hampshire) and crisscrossed the state of Massachusetts during this period and into early January, while Coakley was on vacation. I would think this campaigning would have cut the lead. Furthermore, an internal Coakley poll conducted by Celinda Lake showed Brown cutting Coakley's lead from 15 points on January 2nd-4th to 5 points by the 9th-11th.

You get that? Coakley's own internal (live interviewer) poll differed by 9 points from Mellman's poll taken over pretty much the same period. A Suffolk University poll, also live interviewer, taken in the three days immediately after Mellman's poll showed a 4-point Brown lead. In other words, if we believe that only live interviewer polls show the true story, Brown saw an 18 (yes, 18) point bounce in a matter of 3 days. So despite all the advertising by Brown in the month prior, something (of unknown origin) broke in that three-day period to give Brown a 4-point lead? Would it not make more sense that Brown slowly but surely chipped away at Coakley's lead (supported somewhat by Coakley's own live interviewer internal polling) as his message took hold? Could it be that Mellman's own polling was simply wrong? I think that explanation makes a lot more sense. Thus, it was not a matter of IVR vs. live interviewer polls so much as a battle between polls that were right and polls that were wrong, which in this instance happened to be mostly live interviewer polls. A look at the graph to the right shows that, if nothing else, many IVR polls differed little from live interviewer polls.

As for the other two races Mellman points to in support of his argument, I see selective use of data. In the 2006 Washington Senate race, Rasmussen polling showed Democratic candidate Maria Cantwell's one-time mid-teens lead dropping to the mid-single digits over the summer, while Mellman's own polling showed a relatively consistent high-teens to low-twenties lead for Cantwell. (Mellman's poll is not shown in the table because he did not release the exact dates on which it was conducted.) Mellman asks whether we should believe a "big initial lead that narrowed somewhat as the campaign engaged, or the bottom suddenly falls out for Cantwell for no discernible reason, but she recovers her advantage after both sides hit the airwaves?". I'm inclined to agree with Mellman that his own polling was right, but this was not a matter of IVR vs. live interviewer.

The live interviewer firm Elway Research showed Cantwell's lead dropping from 29 points in April to 14 in July, which is nearly equal to Rasmussen's 11-point lead in July. While Rasmussen's polling showed that margin shrinking to 6 in August, SurveyUSA (an IVR firm) showed Cantwell with a comfortable 17-point lead. Cantwell's SurveyUSA lead shrank to around 10 points as the race entered September and October, but her lead also dropped to that level in live interviewer polls conducted by Mason-Dixon. In other words, the polls may have been inconsistent in Washington, but the inconsistency cut across both IVR and live interviewer polls.

In the 2006 Connecticut Senate race, we see more of this selective use of data. Mellman points out that Rasmussen's polls showed Lamont and Lieberman were "tied in July... and were still neck-and-neck in September... [and that when] Rasmussen called the race even, Quinnipiac gave the incumbent a 24-point edge. September found the challenger [Lamont] closing the gap to 10 points, a lead Lieberman held through Election Day".

The problem with claiming that live interviewer polls showed a more realistic picture is that, like Rasmussen, the live interviewer firm American Research Group also found Lieberman (the independent) and Lamont (the Democrat) statistically tied in August and September. At the same time, IVR pollster SurveyUSA gave Lieberman a 13-point lead in early September, which equaled the Quinnipiac poll's margin. It was not that the IVR polls were in one camp and the live interviewer polls in another. You can find (see table above from Pollster.com) both live interviewer and IVR polls that tell the "more realistic" and the "less realistic" story.

In all three of these races, "unrealistic" polls were unrealistic because of something other than being IVR or live interviewer polls. Those who continue to try to find faults with IVR polls will have to look to reasons other than their painting an "unrealistic" picture.

And truthfully, I think those reasons are getting harder and harder to find every day.
