Many polls, but were they accurate?

We know who won the hearts and minds of Minnesota voters for president, and the U.S. Senate votes are being recounted.  But who won pollster bragging rights for the next four years in the state?

One pollster showed an 11-point lead for Obama, within one point of the actual winning margin.  In the Senate race, one pollster showed only a one-point difference, but didn’t do quite so well in the presidential polling.

And there were a double handful of other polls, too, all of varying accuracy in the two races.

But we can figure it out.

First, we’ve got to decide what constitutes “accuracy.”  Next we’ve got to decide which polls to look at, knowing that those taken closer to Election Day are more likely to reflect the actual outcome of the election rather than polls taken in, say, July or August before party convention bumps and the whirlwind of attack ads.

So here are the ground rules: We’ll only look at the two statewide races — not the referendum, since not all the polls asked about the referendum.  We’ll only consider those polls where interviewing was finished after Oct. 22.  (If you’re going to be polling in late October, you’ve got to realize you’ll be held up to this sort of scrutiny.)  And we’ll pull together as complete a list as possible of all those polls.

For the accuracy measure we’ll use the “difference on the difference” — stay with me here — the percentage-point difference between the winner and second-place finisher in the tabulated vote compared with the same difference in the polls.  Then for each poll, we’ll average the errors for the two races to come up with a single measure of accuracy.
Wonk warning: Non-wonks can skip the next paragraph.

(There are nearly a dozen measures of accuracy we could use, most of them developed after the Dewey-beats-Truman polling fiasco in 1948.  This one’s called “Mosteller 2” after statistician Frederick Mosteller. If you’re a wonk, check out the article by Warren Mitofsky in Public Opinion Quarterly after the 1996 elections that will explain all the measures and why this one’s the best.  Mosteller 2 is one of the more commonly used measures of accuracy among pollsters and journalists when they describe candidate support in the race.)
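For fellow wonks, the “difference on the difference” can be sketched in a few lines of Python.  The function names and the numbers below are illustrative only, not the tabulated 2008 results:

```python
# "Difference on the difference": compare the winner-minus-runner-up
# margin in a poll with the same margin in the certified vote, then
# average the absolute errors across the races the poll covered.
# All figures below are hypothetical, for illustration.

def margin_error(poll_winner_pct, poll_second_pct,
                 actual_winner_pct, actual_second_pct):
    """Absolute difference between the poll's margin and the actual margin."""
    poll_margin = poll_winner_pct - poll_second_pct
    actual_margin = actual_winner_pct - actual_second_pct
    return abs(poll_margin - actual_margin)

# Hypothetical poll: an 11-point presidential lead vs. a 10.2-point
# actual margin, and a 42-42 Senate poll vs. a 42-42 dead heat.
pres_error = margin_error(53, 42, 54.1, 43.9)   # |11 - 10.2| = 0.8
senate_error = margin_error(42, 42, 42, 42)     # |0 - 0| = 0.0

# A single accuracy score for the poll: average the two errors.
accuracy = (pres_error + senate_error) / 2
print(round(accuracy, 2))
```

A smaller score means a more accurate poll; a poll that nailed both margins exactly would score zero.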
OK, you can start reading again.

Of course we’ll factor in whether the poll got the winner right, but not necessarily dock it if the race was very close — within the poll’s stated margin of sampling error.  In the 2006 gubernatorial election, for example, only SurveyUSA showed a tie, when many other polls showed Mike Hatch with more support than Gov. Tim Pawlenty.

Ready?  Here we go.

‘Poll of polls’
In the presidential race, the final “poll of polls” in Minnesota was quite accurate — on average:  Obama won by 10.2 points, and the polls’ average lead for Obama was 10.4 points.  The polls ranged from the Big Ten/Battleground Poll’s and MPR/Humphrey Institute’s 19-point spreads (outside the margin of sampling error) to SurveyUSA’s three-point spread (well within the margin).  Princeton Survey Research Associates’ live-interviewer telephone poll for the Star Tribune showed an 11-point lead.  The MPR/Humphrey Institute’s telephone poll, also conducted with live interviewers, showed a nine-point lead; each was within a point of the actual 10-point spread.  They were the most accurate presidential polls in the state.

In the volatile U.S. Senate race, full of wild and woolly accusations during the final week of the campaign, the average “poll of polls” also was pretty accurate.  It showed a lead of less than a percentage point for Coleman. When this was written, Sen. Norm Coleman led Al Franken by only a few hundred votes out of nearly 2.9 million.  So we’ll call it 42 percent to 42 percent.
On one end was St. Cloud State University’s poll, in which researchers showed results only for the voting-age population rather than likely voters. It found Coleman with nine points more support than Franken.  On the other end, the Big Ten/Battleground Poll showed Franken with six points more support than Coleman.  The YouGov/Polimetrix Poll, an Internet poll that matched its respondent panel to a registered-voter list, was the most accurate, with only a one-point difference between Coleman and Franken.

How do we sort that all out? Check out the graphics at the end of this story. The bottom graphic tells the tale: Two polls tied for Best of 2008 with an average error of only 2.5 percentage points. Congratulations to Princeton Survey Research Associates and YouGov/Polimetrix.

So how’d the 2008 medalists do it?

PSRA’s Larry Hugick says the key is paying attention to detail, which includes adapting the firm’s likely-voter model to the times.  Earlier in the year, Hugick said, they used a basic two-question screen.  But in the last poll before the election they switched to a model developed in the 1950s by Gallup’s Paul Perry, modified to give extra points to younger people who said they would vote.

Larry Jacobs at the MPR/Humphrey Institute Poll agreed that discerning likely voters was tough, but not necessarily because of the youth vote, which he says looked proportionally similar to past elections. The tougher likely voter nut to crack was handling the disparity of interest between Democrats and Republicans in likely voter models, and the last-minute shifts. 

Like Eeyore
“It’s the Eeyore factor,” he said.  Like the Winnie-the-Pooh character who took a ho-hum attitude, Republicans were less enthusiastic about the race, but turned out anyway.

If one includes SurveyUSA’s polls in the 3rd and 6th Congressional districts, its body of work was quite accurate, according to Jay Leve, the SurveyUSA pollster.  Plus, Leve points out that his IVR (automated telephone) polls never showed Franken ahead in any statewide likely-voter poll.
So what’s the final verdict?  Overall, the polls did pretty well this year, especially taken collectively.  There were a few outliers, especially the Big Ten/Battleground poll in the presidential race.  If you took the average of all polls close to Election Day, you got a pretty accurate picture of the race, despite multiple modes of interviewing and different likely-voter models.

It’s good to have more polls, Jacobs said, because they provide a check on campaign and candidate polls, and give voters the strategic voting information they need to make decisions.
And that’s something to think about in 2010 — an off-year election where there are likely to be fewer polls than in this 2008 sea-change election.

Rob Daves is former director of the Minnesota Poll.  He is principal at Daves & Associates Research, a Minneapolis public opinion research company, and teaches survey research at the University of Minnesota’s Humphrey Institute.

Polling Accuracy in Minnesota’s 2008 General Election

This table shows the outcome of the election, the polls’ final numbers in Minnesota, and the amount of error compared with the final outcome. Who gets “Best of 2008” bragging rights? It’s a three-way tie.

Sources: Poll data comes from and includes only those polls that interviewed after Oct. 22, 2008. Election data came from the Minnesota Secretary of State’s website.


Comments (1)

  1. Submitted by Michael Ernst on 11/06/2008 - 11:33 am.

    Your ground rules were: “We’ll only consider those polls where interviewing was finished after Oct. 22.”

    Did you mean “finished on or after Oct. 22?” The Big Ten and SCSU polls were finished on Oct. 22.
