Polling season ahead: What you need to know

Can Americans still trust political polls to help them understand the mood and preferences of the country? Yes, but they have to be smart consumers of polls, says Craig Helmstetter.

by Craig Helmstetter, Managing Partner

This past week I had the opportunity to participate in the premier conference for the nation’s survey researchers, the annual meeting of the American Association for Public Opinion Research (AAPOR). Now that we are several states into the primaries, with the midterm elections (and accompanying punditry and prognostications) nearly upon us, there couldn’t be a better time to review some fundamentals about political polling.

Here are my top three conference takeaways that will help me survive the season. I hope you find them useful too:

  1. High-quality polls still have value. They did not fail in 2016.

    Although many of us were surprised by President Trump’s electoral victory in 2016, many national polls conducted in the last days of the campaign accurately reflected a modest margin for Clinton, who won the popular vote by about 2 percentage points.

    More relevant to the Electoral College: State-level polls narrowly missed Trump’s wins in Pennsylvania, Michigan, and Wisconsin. The two main reasons for this, according to AAPOR’s thorough post-mortem, were (a) difficult-to-measure late voting decisions, and (b) over-representation of college-educated voters in the polls. Based on the conversations I heard last week, pollsters are now obsessed with accurately representing education and are very unlikely to repeat that mistake.

    Part of the perceived inaccuracy of polling has to do with margins of error. Like weather forecasts, poll results come with some measure of uncertainty. While uncertainties are difficult for humans to process and are often under-emphasized in news reports (and typically ignored by pundits), we would do well to think of poll results in terms of ranges of likely preferences at a point in time, rather than precise predictions of some future outcome.
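
    For the statistically curious, here is a minimal sketch of where those ranges come from, using the standard textbook margin-of-error formula. It assumes a simple random sample; real polls use more complicated designs and weighting, so their published margins of error can be somewhat larger.

    ```python
    import math

    def margin_of_error(p, n, z=1.96):
        """Approximate margin of error for a poll proportion.

        p: reported proportion (e.g., 0.48 for 48 percent)
        n: number of respondents
        z: z-score for the confidence level (1.96 is roughly 95 percent)
        """
        return z * math.sqrt(p * (1 - p) / n)

    # A hypothetical poll of 1,000 respondents showing a candidate at 48 percent
    moe = margin_of_error(0.48, 1000)
    print(f"48% +/- {moe * 100:.1f} points")  # about +/- 3.1 points
    ```

    In other words, that “48 percent” is better read as “somewhere in the neighborhood of 45 to 51 percent,” before even accounting for late deciders or other sources of error.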

  2. Not all polls have value.

    Major media outlets and many survey research firms have a strong reputational incentive for investing in solid surveys. Still, some pollsters and media outlets have either partisan motivations, more interest in attention than credibility, or both.

    How do you distinguish the good from the bad? While at AAPOR I attended a panel of polling directors from ABC, Associated Press, CBS, and the Washington Post that offered some tips.

    They all indicated that, before publishing results from a poll they did not conduct themselves, they ask these questions:

    • Who funded it? Advocacy groups and those affiliated with campaigns obviously have a stake in the outcome, so their poll results are sometimes ignored, or published only after receiving a high level of scrutiny.

    • What is the target population? Political polls can focus on the voting-age population, likely voters, or registered voters. All can be valid but can yield different results.

    • What are the sampling methods? The basic idea here is for everyone in the target population to have an equal chance of being included in the survey. Polls that rely only on landlines are no longer valid, and neither are internet-based polls that rely exclusively on people choosing to opt in (more on that below).

    • How are the results weighted? Survey researchers universally recognize that it is impossible to get a sample that perfectly reflects the target population, no matter how good the sampling methods. So mathematical adjustments are made after the fact (see the simple example after this list). Polling directors indicated that before they publish survey results sent to them, they need to know (a) that the results are weighted, and (b) that the weighting methods look reasonable. For example, was educational attainment included in the weighting?

    • Is the question wording available, along with at least the basic “topline” results?
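
    To make the weighting idea concrete, here is a deliberately simplified sketch. The education shares below are made up for illustration, and real pollsters weight on several characteristics at once (age, sex, race, region, education, and so on) using more sophisticated methods than this single adjustment.

    ```python
    # Hypothetical, illustrative shares -- not real data.
    # Suppose college graduates are 35% of the target population
    # but 50% of the people who answered the poll.
    population_share = {"college_grad": 0.35, "non_college": 0.65}
    sample_share     = {"college_grad": 0.50, "non_college": 0.50}

    # Each respondent's weight is the population share of their group
    # divided by that group's share of the sample.
    weights = {group: population_share[group] / sample_share[group]
               for group in population_share}

    print(weights)  # {'college_grad': 0.7, 'non_college': 1.3}
    ```

    In this toy example, each college-educated respondent counts a bit less than one person and each non-college respondent counts a bit more, pulling the sample back in line with the population on that one characteristic.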

    These guidelines largely reflect the tenets of AAPOR’s Transparency Initiative, which the panel universally endorsed. (We have been following those tenets here at the Lab and plan to formally join the initiative in the near future.)

  3. Some web surveys have merit, but many do not.

    Internet-based surveys continue to grow in popularity. Once a novelty, then valuable only for list-based samples of people known to access the internet (professional associations, online shoppers, etc.), web surveys are now being used for broader purposes, including political polling. Perhaps even more than other surveys, however, the quality of web surveys varies widely.

    Some people still do not have access to the internet, and many of us who do have internet access routinely ignore requests to fill out web surveys. But the problems do not end there. At the AAPOR conference some presenters noted a proliferation of respondents who appear to treat web surveys as a joke, attempt to “game” survey results for their strategic advantage, or even fill out online surveys as a way of trolling!

    Even more notably, hackers have caught on to the fact that many online surveys pay small fees for each completed survey, and so have developed bots that answer surveys by algorithm.

    Despite trolls and bots, some firms do a very careful job of recruiting and maintaining representative online panels. I talked to one firm that provides tablets and internet access for people who don’t already have them, and another that supplements its “online” panel with telephone contacts. When combined with expert weighting and analysis, some online surveys are now more accurate than telephone (cell + landline) surveys.

The upcoming election cycle should be an exciting and interesting one. As usual, the outcomes will hold big implications for our daily lives. As we enter this election season, and the accompanying season of political polling, we should all beware of polling results that seem too good (or too bad) to be true, especially those that may circulate on social media with little information concerning who did them or how they were done.

Still, we shouldn’t throw the baby out with the bathwater. Polling, when done well, can provide much-needed insight into the mood and preferences of the country, and can serve as an important voice of the people.

-Craig (@c_helmstetter)


Reactions? Please email us your thoughts or respond on Twitter or Facebook.

