
A geek’s guide to political polling

On any given day during election season, polls suggest one candidate is up or down, pulling ahead or falling behind. Mitch McConnell is leading or trailing in Kentucky, Kay Hagan in North Carolina; Mark Pryor is in a tight race in Arkansas; Greg Orman in Kansas is tied with Pat Roberts or pulling away. Here in Minnesota, polls suggest Sen. Al Franken has either a nine-point or a nearly 18-point lead, and that Gov. Mark Dayton leads challenger Jeff Johnson by either nine or 12 points, with some pundits contending that the races will surely tighten. One only has to look back four years, when polls suggested a Dayton blowout over Tom Emmer, only for the race to end in a squeaker of a victory.


The media is obsessed with polls. Donors and political parties fret over them, or use them tactically to create impressions about how well their candidates are doing. Conversely, polls are criticized as biased, inaccurate, or simply wrong. So what’s in a poll? How do we know when one is good?

Snapshot of a moment

From a geek point of view (of which I may be one since I teach research methods and polling), polls need to be put into perspective. They are snapshots that reflect public opinion at a specific point in time. There are many reasons why they do not always predict well.

When I see a poll, here is what I look for:

First, I look to see what is called its confidence level. This is a statistical measure of how confident the pollster is that the poll’s sample accurately represents the entire population. The industry standard for pollsters is a 95 percent confidence level: the pollster is statistically 95 percent confident that the sample drawn for the survey accurately represents those in the entire population, such as a state. It also means there is a 1-out-of-20 chance, even with the best polls, that the sample is simply bad: it surveyed the wrong people or caught a bunch of outliers (too many liberals or conservatives, for example). Remember this, and be especially wary of surveys with confidence levels of 90 percent, which are often used; one out of 10 of them will be wrong.
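To make that 1-out-of-20 figure concrete, here is a minimal simulation sketch in Python (all numbers are hypothetical): it draws thousands of polls from a population whose true support is known, builds a 95 percent confidence interval for each, and counts how often the interval misses the truth.

```python
import random

# Hypothetical illustration: a population in which 52% support Candidate A.
TRUE_SUPPORT = 0.52
SAMPLE_SIZE = 800
NUM_POLLS = 10_000
Z_95 = 1.96  # z-score corresponding to a 95 percent confidence level

misses = 0
for _ in range(NUM_POLLS):
    # Simulate one poll: each respondent independently backs A or not.
    hits = sum(random.random() < TRUE_SUPPORT for _ in range(SAMPLE_SIZE))
    p = hits / SAMPLE_SIZE
    moe = Z_95 * (p * (1 - p) / SAMPLE_SIZE) ** 0.5
    if not (p - moe <= TRUE_SUPPORT <= p + moe):
        misses += 1  # this poll's interval missed the true value

print(f"Intervals that missed the truth: {misses / NUM_POLLS:.1%}")
# Expect roughly 5 percent: the 1-out-of-20 chance described above.
```

Run it and roughly 5 percent of the simulated polls come out "bad" in exactly the sense described: not because the pollster erred, but because sampling alone occasionally produces an unrepresentative draw.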

Second, polls need to decide whom to sample. Do you sample all adults, all voters, or likely voters? Good polls survey likely voters, but how do you identify them? Are you likely if you voted two years ago? What if you are just turning 18 or just moved into the state? Defining "likely" is difficult.

The all-important survey method

Third, the survey method or technique is critical. Does the survey reach voters only on land lines, or does it include cell phones too? Fewer and fewer people answer their phones at home, and more and more rely exclusively or primarily on cell phones; nationally, more than 90 percent of adults now have cell phones, with about 50 percent using only cell phones. A good survey mirrors the mixture of land-line and cell-phone users in its population, because there are demographic differences in who uses cell phones exclusively, and ignoring those differences can bias a survey.

Fourth, sample size is important for several reasons: the more individuals surveyed, the better the poll. Sample size determines what is called the margin of error. All surveys have margins of error indicating that the poll is accurate to plus or minus a certain percentage, and larger samples have smaller margins of error. Oftentimes, conflicting poll results simply reflect margins of error. If one poll shows a 10-point candidate lead with a margin of error of four points, its results may be no different from a poll two weeks later showing an eight-point lead and a similar margin of error. Be wary of one poll claiming a narrowing or widening of a lead if the results fall within the margins of error.

Moreover, some polls may have samples large enough to tell us something in general (how a candidate is viewed statewide) but not big enough to tell us about subpopulations, such as women.
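Both points follow from the standard margin-of-error formula, roughly z * sqrt(p(1-p)/n). A short sketch with hypothetical sample sizes shows how a margin that is respectable for the full sample balloons for a subgroup:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate margin of error for a simple random sample of size n.

    Standard formula z * sqrt(p * (1 - p) / n); p = 0.5 is the worst
    case (largest margin), which is what pollsters typically report.
    """
    return z * math.sqrt(p * (1 - p) / n)

full_sample = 625  # hypothetical statewide poll
print(f"Full sample of {full_sample}: +/- {margin_of_error(full_sample):.1%}")  # ~3.9%

subgroup = 300     # e.g., the women within that same sample
print(f"Subgroup of {subgroup}:    +/- {margin_of_error(subgroup):.1%}")        # ~5.7%
```

By this arithmetic, the 10-point and eight-point leads in the example above sit comfortably inside each other's error bands, which is why neither poll can honestly be said to show movement.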

Clustering within samples

Even if the sample size is adequate, pollsters often do some clustering in selecting whom to survey. They may set quotas for people who live in cities or rural areas, because geography can matter to accuracy. I generally look for polls whose samples approximate the Democratic and Republican breakdown in the population, based on the most recent election exit polls. In Minnesota, about 38 percent identify as DFL and 32 percent as Republican; a good poll should reflect that split.
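In practice that adjustment is often done by weighting rather than strict quotas. Here is a hedged sketch of the idea, with hypothetical sample counts (only the 38/32 benchmark comes from the column above):

```python
# Post-stratification sketch: reweight respondents so the sample's
# party-ID mix matches the population benchmarks cited above.
population_share = {"DFL": 0.38, "GOP": 0.32, "Other": 0.30}

# Hypothetical raw sample of 500 that came back skewed toward the DFL:
sample_counts = {"DFL": 220, "GOP": 130, "Other": 150}
total = sum(sample_counts.values())

weights = {
    party: population_share[party] / (count / total)
    for party, count in sample_counts.items()
}
for party, w in weights.items():
    print(f"{party}: each respondent counts as {w:.2f} people")
# DFL respondents are down-weighted (~0.86) and GOP respondents
# up-weighted (~1.23), restoring the 38/32 split in the weighted sample.
```

The same idea applies to matching the land-line/cell-phone mix discussed earlier.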

There may be other sources of problems in surveys, such as question bias that might affect answers. But for those of us who teach survey research, knowing how a poll was done illuminates the problems of misinterpreting polls during elections.

David Schultz is a Hamline University professor of political science and the author of “Election Law and Democratic Theory” (Ashgate, 2014) and “American Politics in the Age of Ignorance” (Macmillan, 2013). He blogs at Schultz’s Take.


Comments (9)

  1. Submitted by Frank Phelan on 10/13/2014 - 11:22 am.

    Fact Check

    My recollection is that in 2010, most polls through the fall showed Dayton and Emmer very close. There was a KSTP/SurveyUSA poll about a week or so before the election that showed Dayton with a 12-point lead. Even at the time, it was viewed as an outlier. Annette Meeks also referenced that poll on Almanac this past weekend.

    I’m not sure how this false narrative has become accepted, but I expect better of David Schultz.

    The actual vote totals showed that the majority of polls were correct, and that if there is an outlier among polls, that poll should be ignored in most cases.

  2. Submitted by Greg Kapphahn on 10/13/2014 - 01:21 pm.

    I Would Add a Couple More Caveats

    1) Beware of any in-house polls from ANY political party or interest group: who’s paying for a poll often has far more to do with its results than those who trumpet such polls (when they’re in their favor) or the polling organizations might like us to believe.

    2) I ALWAYS throw out, without consideration, any poll where the polling organization refuses to reveal its methodology, the actual questions asked, and the order of those questions. Of course, that information is not always made available when the results of a poll are released.

    When that’s the case I take it to mean this particular polling organization was seeking to produce a particular outcome, rather than seeking to discover, in as accurate a way as possible, what opinions were actually out there,…

    and perfectly willing to massage their questions and their sample group until they get the outcome they wanted.

    I was particularly fascinated by a call from a polling organization in the LAST election cycle which started with several very negative statements about Collin Peterson (all of which I disagreed with the caller over),…

    then finished with a final question along the lines of,…

    “Knowing what you now know [the stuff the survey caller had just tried to tell me, all of which was factually false], would you vote for Collin Peterson or his opponent?” (I’ve forgotten who was running against Peterson two years ago.)

    I can’t help but wonder how common such “Push Polls” are, and how often their numbers are reported, as that one was (as I remember), as a legitimate survey rather than an advocacy call for one candidate or another.

    Having once worked in a call center, I can’t help but feel sorry for those who make survey calls, especially when the contract under which they’re calling requires them to spew B.S. that they, themselves, know to be exactly that,…

    or lose their jobs.

  3. Submitted by E Gamauf on 10/13/2014 - 01:36 pm.

    Polls only matter if people vote.
    And if the people polled, told the truth.

    Said what they WILL do, not what they MIGHT do in their mind’s eye vision of themselves.

    I think we need to accept that a lot of polls today, are manipulation medium:

    Making people believe a certain thing is inevitable, and that they can slough off without personal inconvenience, is a boon to the leader’s opponent.

  4. Submitted by Rachel Kahler on 10/13/2014 - 02:54 pm.

    I wonder

    I actually wonder what the value of a political poll is, other than to influence voters.

  5. Submitted by jason myron on 10/13/2014 - 02:56 pm.

    Polls

    have become nothing more than click bait for 24/7 political wonks. They literally change by the day according to how partisan the group doing the polling is.

  6. Submitted by Hiram Foster on 10/13/2014 - 03:21 pm.

    Polls

    I am known in my little circle, at least, as a poll skeptic. My most fundamental problem with most polls is that they can’t be checked against anything, and therefore it can’t be determined whether they are accurate. How do you know a July poll is accurate when there won’t be an election until November? The professor makes an issue of “confidence level,” but the fact is, a bad result or an unknowable result isn’t made better because you have confidence in it. How do we even know what the correct level of confidence is? In an upset election, one where the usual assumptions are disrupted (and here I am thinking of Ventura in 1998), perhaps the better polls were the ones where the pollsters were less rather than more confident in their polling.

  7. Submitted by David Schultz on 10/13/2014 - 07:33 pm.

    2010 MN Gubernatorial Polls

    Frank Phelan’s recollection is wrong. There were no fewer than six polls done during the general election that had Dayton with a high single-digit or double-digit lead over Emmer. I stand by my comments and I am not perpetuating any myths. Sorry to disappoint you, Frank, but for reasons different from those you assert.

    • Submitted by Hiram Foster on 10/14/2014 - 05:39 am.

      2010 polls

      Given that the polls were inaccurate in 2010, why are we paying attention to them in 2014, no matter how much self-interested confidence the pollsters had in them?
