MPR/Humphrey Institute poll review: Too many 612s?

Minnesota Public Radio and the University of Minnesota's Humphrey Institute released critiques of their much-criticized 2010 election polls late Friday afternoon.

The two entities commissioned the review after their final poll put DFL gubernatorial candidate Mark Dayton up 12 points; he beat Republican Tom Emmer by 0.4 percentage points. Republicans especially have howled about the result.

The U profs who direct the poll, Larry Jacobs and Joanne Miller, did an internal critique, which was reviewed by Gallup Editor-in-Chief Frank Newport.

Safe to say, Jacobs and Miller don’t find a lot of fault with themselves; Newport, president of the American Association for Public Opinion Research, was a tougher grader.

Too many 612s and black interviewers?
Newport says the “issue which appears most relevant” is a potential oversample of the 612 area code. The Minneapolis-anchored area favors Democrats.

According to the U profs, 81 percent of “612” voters participated when asked. Statewide, the figure was 67 percent.

This is one of the areas where Jacobs and Miller make the numbers dance a bit; Newport notes they should have compared “612s” to non-612s. The gap would’ve been even more stark.

Jacobs and Miller suggest weighting future polls by region. Newport agrees, somewhat witheringly: “This is commonly done in state polls, particularly in states where voting outcomes can vary significantly by region, as apparently is the case in Minnesota.”
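
For readers curious what weighting by region actually involves, here is a minimal sketch in Python. The region labels and every percentage are invented for illustration; none of them come from the MPR/HHH data.

```python
# Minimal sketch of post-stratification weighting by region.
# All region labels and numbers are hypothetical, not taken from the MPR/HHH poll.

# Assumed share of the likely-voter population in each region.
population_share = {"612 metro": 0.20, "other metro": 0.35, "outstate": 0.45}

# Assumed share of completed interviews from each region (612 oversampled).
sample_share = {"612 metro": 0.30, "other metro": 0.33, "outstate": 0.37}

# Weight = population share / sample share, so an overrepresented region counts less.
weights = {r: population_share[r] / sample_share[r] for r in population_share}

# Hypothetical unweighted support for Candidate A by region.
support_a = {"612 metro": 0.60, "other metro": 0.50, "outstate": 0.45}

unweighted = sum(sample_share[r] * support_a[r] for r in sample_share)
weighted = sum(sample_share[r] * weights[r] * support_a[r] for r in sample_share)

print(f"Unweighted support for A: {unweighted:.1%}")      # inflated by the 612 oversample
print(f"Region-weighted support for A: {weighted:.1%}")   # closer to the true regional mix
```

In practice pollsters use finer geographic breakdowns and combine regional weights with demographic ones, but the mechanics are the same: an oversampled, Democratic-leaning region gets counted at less than face value.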

At one point, there’s a jolting suggestion that the U employed too many African-American interviewers.

Jacobs and Miller wonder whether the proportion of African-American interviewers — 44 percent — might've freaked out (presumably white) voters in a state that's only 5 percent black. However, they conclude, "This investigation failed to detect statistically significant differences" in candidate support.

However, Newport wonders if minority interviewers achieved higher cooperation rates among minority voters — perhaps a factor in the high “612” cooperation rate, since that D-favoring area code also has a higher percentage of minorities. He suggests going back over the data.

Too light on weighting
During the campaign, SurveyUSA president Jay Leve criticized how the MPR/HHH poll weighted voters — particularly how the poll simulated the preferences of so-called “cell-phone-only voters” (CPOs) who lack landlines. MPR/HHH did not call CPOs.

The Jacobs/Miller review says only that their methodology was sound, with a Columbia University expert confirming that evaluation.

However, Newport faults the U profs for not discussing the “particulars of the weighting other than to say that it was reviewed and approved. … This is an important area of focus.”

Taken alone, excluding CPOs wouldn’t explain a too-big Dayton margin — in fact, it would likely narrow it. Pollsters have generally shown that if CPOs have a partisan lean, it’s toward Democrats. 

However, the weighting issue here is broader. The important point is that Newport didn’t have enough information to critique the formula.

(Jacobs, Miller and Newport agree that CPOs should be included in future polls, a more expensive process.)
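
For the curious, the general idea behind compensating for voters a poll can't reach is to weight the respondents it does reach toward known population targets. Here is a minimal sketch; the age groups, target shares, and sample shares are entirely hypothetical, and the actual MPR/HHH weighting formula was not spelled out in the review.

```python
# Minimal sketch of weighting a landline sample toward demographic targets,
# one common way pollsters try to stand in for cell-phone-only voters they never call.
# Groups, targets, and sample shares are hypothetical; the real MPR/HHH formula is unknown.

targets = {"18-34": 0.25, "35-54": 0.40, "55+": 0.35}   # assumed electorate shares
sample = {"18-34": 0.12, "35-54": 0.40, "55+": 0.48}    # assumed landline-sample shares

weights = {group: targets[group] / sample[group] for group in targets}
for group, w in weights.items():
    # Younger (often cell-only) respondents get weighted up; older ones get weighted down.
    print(f"{group}: weight {w:.2f}")
```

Whether a scheme like this pushes the topline toward or away from a candidate depends entirely on the targets chosen, which is why Newport wanted to see the particulars.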

The other guys did it
Even though the MPR/HHH poll, conducted eight to 12 days before Election Day, gave Dayton his biggest margin of the campaign, Jacobs and Miller contend it wasn’t an outlier.

They compare Dayton's and Emmer's support with four other polls that were in the field during at least part of the time the MPR poll was.

Dayton’s support fell within the margin of sampling error of all four (St. Cloud State, Rasmussen, the Star Tribune and SurveyUSA). Emmer’s support fell within St. Cloud’s and the Strib’s margin.

However, it’s important to remember that St. Cloud (which showed Dayton up 10) has been criticized, too — particularly for an ultra-long 12-day interviewing process that can carry three-week-old voter attitudes into a final-days result.

The Dayton number really wasn’t the problem; Emmer’s was the one that varied widely. The MPR poll didn’t even catch all of the back end of the Strib’s range, while missing SurveyUSA’s and Rasmussen’s entirely.
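
A quick refresher on what "margin of sampling error" means in numbers: for a candidate's share p from n interviews, the 95 percent margin is roughly 1.96 times the square root of p(1−p)/n. Below is a minimal sketch using a made-up sample size and candidate share, not figures from any of the polls above.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of sampling error for a proportion p from n interviews."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical example: a candidate at 41 percent in a 750-person likely-voter sample.
p, n = 0.41, 750
moe = margin_of_error(p, n)
print(f"{p:.0%} plus or minus {moe:.1%}")  # roughly +/- 3.5 points
```

So when the review says one poll's number "fell within the margin of sampling error" of another, it means the number landed inside a band a few points wide on either side of that poll's figure, which is a fairly forgiving standard of agreement.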

Jacobs has repeatedly noted polls aren’t predictions, but a “snapshot in time” that may legitimately miss shifting voter attitudes.

However, Newport archly notes that “the MPR/HHH poll was second closest to Election Day” of the five “and reported the highest Democratic margin.”

Other recommendations
If everyone is going to treat the final poll like a prediction, Jacobs and Miller recommend polling even closer to Election Day. That way, they have a better shot at catching voters’ ultimate mood. Newport agrees.

(Interestingly, the Strib decided not to do a traditional final-week poll this year because it might influence voter, volunteer and/or donor behavior too much.)

The U profs also suggest reporting poll results differently. They favor including other polls with their own, and reporting candidates' support not as a single point but as a range within the error margin.

On the latter point, Newport is wary: “It does … again, raise the question of the purpose of, and value of, pre-election polls if they are used only to estimate broad ranges of where the population stands.”

As for overcoming any interviewer "lack of rapport," Jacobs and Miller suggest asking innocuous questions before getting to the horse-race question — something other polls do. Newport suggests testing that theory before implementing it.

A MinnPost poke?
As regular readers know, I’ve loudly questioned the MPR poll, and there might be a bit of payback in the U’s write-up. They repeatedly refer to the St. Cloud State survey as the “MinnPost/St. Cloud State” poll.

We did pay for three questions involving ranked-choice voting, not the topline result that’s the focus here. Unlike the MPR/HHH poll, which was a joint production and labeled thusly, St. Cloud State’s name stood alone on its survey.

At the very most, it should be “St. Cloud State/MinnPost,” but unlike MPR/HHH, we didn’t jointly determine the content of most of the lengthy questionnaire.


Comments (4)

  1. Submitted by Hiram Foster on 12/18/2010 - 06:46 am.

    I have heard it said that in Israeli elections, the voters tell the truth to pollsters, but lie when they vote. Maybe that’s a problem with Minnesota voters as well.

    In general, as the electorate becomes increasingly fragmented, ideologically, technologically, and in other ways as well, polling becomes increasingly difficult. Mistakes, when they are made, now have a greater impact. But I also do think that people make up their minds later and are less decisive. I have never done polling, but I have done quite a lot of canvassing, and I know that success in getting results depends a lot on who the canvasser is, how aggressive he or she is, and how well that person interacts with the voter.

  2. Submitted by Paul Brandon on 12/18/2010 - 02:02 pm.

    A line that might be added to the graph is total election spending on each candidate over time. Be interesting to see how well it is correlated with the polls and the actual vote.

  3. Submitted by Matthew Steele on 12/18/2010 - 09:22 pm.

    This is 2010, and area codes don't refer to areas. I bet half of the people I know, work with, etc., have cell phone numbers without MN area codes. How do pollsters find these people?

  4. Submitted by Dick Novack on 01/02/2011 - 10:58 pm.

    Pardon my late response almost no one will see.

    Matt Steele has it on the head. Area Codes meant nothing in the wireless world even before “porting” and people started moving around.

    When cell phones were first issued in the Twin Cities, ALL western were 612 and all eastern were 651, EVEN if you lived elsewhere like 952 or 763. That is because they were issued from federal "number banks" assigned to the carrier offices, not the phone owner. My early numbers were all 612s from any carrier because there were NO 952s. We all still have the same numbers. One family plan member going to college as late as 2004 in DC had to change to 202 by carrier rules.

    Poll subjects need to be selected by residence like we did 40 years ago, and then their phone number tracked down – it can be done because we do it for other purposes. Polling randomly chosen phone numbers is just plain stupid in today’s society as there is no way to make the sampling representative.
