Community Voices features opinion pieces from a wide variety of authors and perspectives.

A geek’s guide to political polling

So what’s in a poll? How do we know when one is good?

On any given day during election season, polls suggest one candidate is up or down, pulling ahead or falling behind. Mitch McConnell is leading or trailing in Kentucky, Kay Hagan in North Carolina; Mark Pryor is in a tight race in Arkansas; Greg Orman in Kansas is tied with Pat Roberts or pulling away. Here in Minnesota, polls suggest Sen. Al Franken has either a nine-point or a nearly 18-point lead, and that Gov. Mark Dayton leads challenger Jeff Johnson by either nine or 12 points, with some pundits contending that the races will surely tighten. One only has to look back four years, when polls suggested a Dayton blowout over Tom Emmer, only for it to be a squeaker of a victory.

David Schultz

The media is obsessed with polls. Donors and political parties fret over them, or use them tactically to create impressions about how well their candidates are doing. Conversely, polls are criticized as biased, inaccurate, or simply wrong. So what's in a poll? How do we know when one is good?

Snapshot of a moment

From a geek's point of view (and I may be one, since I teach research methods and polling), polls need to be put into perspective. They are snapshots of public opinion at a specific point in time, and there are many reasons why they do not always predict well.

When I see a poll, here is what I look for:

First, I look for what is called the confidence level. This is a statistical measure of how confident the pollster is that the sample is an accurate representation of the entire population, such as a state. The industry standard is a 95 percent confidence level: the pollster is statistically 95 percent confident that the sample drawn for the survey represents the population. That also means there is a 1-in-20 chance, even with the best polls, that the sample is simply bad: it surveyed the wrong people or caught a bunch of outliers (too many liberals or conservatives, for example). Remember this, and be especially wary of surveys run at a 90 percent confidence level, which are often used; one out of 10 of them will be wrong.
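For readers who want to see that 1-in-20 figure in action, here is a minimal simulation sketch in Python (my illustration, with made-up numbers, not any polling firm's method): it runs many hypothetical polls of a population whose true support is known, builds a 95 percent confidence interval for each, and counts how often the interval misses the truth.

```python
import math
import random

TRUE_SUPPORT = 0.52   # assumed true share backing a candidate
SAMPLE_SIZE = 800     # respondents per simulated poll
NUM_POLLS = 10_000    # how many polls to simulate
Z_95 = 1.96           # z-score for a 95 percent confidence level

misses = 0
for _ in range(NUM_POLLS):
    # One simulated poll: each respondent independently supports
    # the candidate with probability TRUE_SUPPORT.
    hits = sum(random.random() < TRUE_SUPPORT for _ in range(SAMPLE_SIZE))
    p_hat = hits / SAMPLE_SIZE
    moe = Z_95 * math.sqrt(p_hat * (1 - p_hat) / SAMPLE_SIZE)
    # The poll "misses" when the truth falls outside p_hat +/- moe.
    if not (p_hat - moe <= TRUE_SUPPORT <= p_hat + moe):
        misses += 1

print(f"Polls whose interval missed the truth: {misses / NUM_POLLS:.1%}")
# Expect roughly 5 percent, i.e., about 1 poll in 20, even with
# flawless methodology.
```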


Second, pollsters need to decide whom to sample. Do you survey all adults, all registered voters, or likely voters? Good polls survey likely voters, but how do you identify them? Are you likely to vote if you voted two years ago? What if you are just turning 18 or just moved into the state? Defining "likely" is difficult.

The all-important survey method

Third, the survey method or technique is critical. Does the survey reach voters only on landlines, or does it include cellphones too? Fewer and fewer people answer their phones at home, and more and more rely exclusively or primarily on cellphones; nationally, more than 90 percent of adults now have cellphones, and about 50 percent use only cellphones. A good survey mirrors the mix of landline and cellphone users in its population, because there are demographic differences in who uses cellphones exclusively, and ignoring them can bias a survey.

Fourth, sample size matters: in general, the more individuals surveyed, the better the poll. Sample size determines what is called the margin of error. All surveys have a margin of error indicating that the poll is accurate to plus or minus a certain number of percentage points, and larger samples yield smaller margins of error. Oftentimes, conflicting poll results simply reflect those margins. If one poll shows a 10-point lead with a margin of error of four points, its results may be no different from a poll two weeks later showing an eight-point lead with a similar margin of error. Be wary of a single poll claiming a narrowing or widening lead when the results fall within the margin of error.

Moreover, a poll may have a sample large enough to tell us something in general, such as how a candidate is viewed statewide, but not large enough to say anything reliable about subpopulations, such as women; the sketch below shows why.
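To make the arithmetic concrete: the standard margin of error for a sample proportion is z * sqrt(p(1 - p) / n), where z is about 1.96 at the 95 percent confidence level. A short sketch with hypothetical sample sizes shows how the margin widens for a subgroup:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error (95 percent confidence by default) for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical statewide poll: 800 respondents, candidate at 50 percent.
statewide = margin_of_error(0.50, 800)
print(f"Statewide (n=800): +/- {statewide:.1%}")   # about +/- 3.5 points

# A subgroup, say the roughly 400 women in that sample, has a wider margin.
subgroup = margin_of_error(0.50, 400)
print(f"Subgroup (n=400):  +/- {subgroup:.1%}")    # about +/- 4.9 points
```

Roughly speaking, halving the sample inflates the margin by a factor of sqrt(2), which is why statewide toplines are more trustworthy than subgroup breakdowns.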

Clustering within samples

Even if the sample size is adequate, pollsters often do some clustering in selecting whom to survey. They may set quotas for people who live in cities or rural areas, because geography can be important to accuracy. I generally look for polls whose samples approximate the Democratic and Republican breakdown of the population, based on the most recent election's exit polls. In Minnesota, about 38 percent identify as DFL and 32 percent as Republican; a good poll should reflect that split, and an off-balance sample can be corrected by weighting, sketched below.
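A common correction is post-stratification weighting: each respondent is weighted by their group's target share divided by its share of the raw sample. Here is a deliberately simplified sketch with hypothetical raw counts (real pollsters weight on many variables at once, not party identification alone):

```python
# Target party breakdown, per the exit-poll figures for Minnesota above.
target_share = {"DFL": 0.38, "GOP": 0.32, "Other": 0.30}

# Hypothetical raw sample that over-represents DFL respondents.
raw_counts = {"DFL": 450, "GOP": 280, "Other": 270}
total = sum(raw_counts.values())

# Weight each group so the weighted sample matches the target breakdown.
weights = {
    group: target_share[group] / (count / total)
    for group, count in raw_counts.items()
}
for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")
# DFL respondents get weight < 1 (down-weighted); GOP and Other get > 1.
```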

There may be other sources of problems in surveys, such as biased question wording, that can affect answers. But for those of us who teach survey research, knowing how a poll was done illuminates how easily polls are misinterpreted during elections.

David Schultz is a Hamline University professor of political science and the author of "Election Law and Democratic Theory" (Ashgate, 2014) and "American Politics in the Age of Ignorance" (Macmillan, 2013). He blogs at Schultz's Take.
