Pollsters keep chasing elusive random sample

Fewer landlines, caller ID are problems

A group of top-tier pollsters gathered last Friday at a meeting of the New England chapter of the American Association for Public Opinion Research to discuss the fractured state of the survey industry and the upcoming midterm elections. The pollsters, including Mike Mokrzycki (NBC Election Polling), Douglas Schwartz (Quinnipiac University Poll), Andrew Smith (UNH Survey Center), and Brian Schaffner (UMass Poll), said the demand for public opinion polling has never been greater. But they said it is simultaneously becoming harder and harder to conduct high-quality polls using the traditional and mathematically beautiful survey technique of random sampling.

“Using statistical inference to generalize from a small random sample is sort of magical, with a sample of several hundred to a thousand people projecting to the behavior of millions,” said Mokrzycki.
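That "magic" is the standard margin-of-error calculation for a simple random sample; a minimal sketch (assuming a 95 percent confidence level and the worst-case proportion of 0.5, not any specific poll):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval for a proportion
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 1,000 projects to millions of people with roughly
# +/- 3 points of sampling error.
print(round(100 * margin_of_error(1000), 1))  # → 3.1
```

Notably, the error depends on the sample size, not the population size, which is why a thousand respondents can stand in for millions.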

Random sampling relies on the fact that every person in the target population of the survey has a known, nonzero probability of being selected for the sample. But changing technologies and societal norms have made it harder to reach some members of target populations, and harder to assess the probability of reaching each person.

Survey response rates have plummeted as household landlines become a thing of the past and caller ID screening lets people answer only calls from numbers they recognize. Low response rates invite non-response bias: polling results suffer when non-responders differ from the general population in meaningful ways.

For example, if young people are less likely to respond to a poll and also hold different opinions than older respondents who might have higher response rates, the results of the survey may be substantially incorrect. If the people who do respond are substantially similar to those who don’t, non-response can often be corrected by weighting. But with survey response rates plummeting to the single digits, it is not guaranteed that weighting will continue to be an effective remedy.
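The weighting remedy can be sketched with invented numbers (the shares and support figures below are hypothetical, not from any actual poll): respondents in under-represented groups are weighted up so the sample's demographics match the population's.

```python
# Hypothetical population and respondent shares by age group, with
# young people under-represented among respondents.
population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}
sample_share     = {"18-34": 0.10, "35-64": 0.55, "65+": 0.35}

# Post-stratification weight: population share / sample share.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical support for a candidate within each group.
support = {"18-34": 0.60, "35-64": 0.45, "65+": 0.40}

raw      = sum(sample_share[g] * support[g] for g in support)
weighted = sum(sample_share[g] * weights[g] * support[g] for g in support)
print(f"unweighted {raw:.3f}, weighted {weighted:.3f}")
```

Because the under-counted young group holds different opinions, the weighted estimate shifts noticeably from the raw one. The correction works only if the young people who do respond resemble the young people who don't.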

Mokrzycki noted that most political polling results have remained remarkably accurate in the face of these changes, with some exceptions.

There was clear disdain by some of the panelists for Interactive Voice Response (IVR) or robo-polling. Andrew Smith, director of the UNH Survey Center, said he hands IVR polling calls to his 14-year-old son, given the lack of care for good results shown by IVR firms. “If their concern about quality is that good, I can do the same,” said Smith.

A key limitation of IVR polling is the federal law that prohibits automated calls to mobile phones. Studies have shown that 40 percent of households are cell-phone-only, which puts a large segment of the population out of reach of IVR pollsters. IVR pollsters try to ensure accurate results by making sure the respondents they do reach match the demographics of their target population, through careful selection or weighting of the results. Some large IVR pollsters, such as Rasmussen and Public Policy Polling, have been supplementing their calls with panels of Internet users to reach cell-only respondents.

Tom Jensen of Public Policy Polling was invited to the panel to discuss IVR polling, but had to cancel when the date was changed.

The new wave of Internet-based panel polling was represented by Schaffner, director of the UMass Poll. Schaffner works with online polling company YouGov to conduct his surveys, and he provided an overview of Internet polling techniques. YouGov polling has been adopted by the Huffington Post and The New York Times, but there is concern in the polling community about the methodological changes required by the new medium.

There is no directory of Internet users that allows for true random sampling. Instead, Internet polling firms recruit very large panels (well over 1 million panelists for YouGov in the United States), identify the demographics of the target population, and then match those demographics to actual people from their panel. Schaffner’s results have been promising.
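The matching step can be sketched as a simple quota fill over a simulated panel (the demographics, panel, and group shares below are all invented for illustration; YouGov's actual matching is more sophisticated):

```python
import random

random.seed(0)

# Invented target demographics and a simulated opt-in panel.
target = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}
panel = [{"id": i, "age": random.choice(list(target))} for i in range(10_000)]

def matched_sample(panel, target, n):
    """Fill a quota per demographic group so the chosen sample
    mirrors the target population's shares."""
    quotas = {group: round(n * share) for group, share in target.items()}
    chosen = []
    for person in panel:
        if quotas.get(person["age"], 0) > 0:
            chosen.append(person)
            quotas[person["age"]] -= 1
    return chosen

sample = matched_sample(panel, target, 1000)
print(len(sample))  # → 1000
```

Unlike random sampling, each panelist's probability of selection here is unknown, which is the methodological concern the panelists raised about the new medium.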

“When Nate Silver did an analysis of the 2012 election polls he found an average Internet poll error of 2.1,” said Schaffner. The average error of live-interviewer phone polls was 3.5 points, and IVR pollsters had an average error of 5.0 points.

Internet polls and live-caller polls of the 2014 Massachusetts race for Governor have been similar to each other, showing the same average result: Democrat Martha Coakley up 3 points. IVR polls, on the other hand, have been much more favorable to the GOP, showing Republican Charlie Baker up 3 points, possibly due to the different populations reached by the different polling methods.

While political campaigns have always called into question polls that show their candidate down, Schwartz, director of the Quinnipiac University Poll, has found that campaigns have stepped up their attacks in recent years.

“In the past if we put out numbers that a campaign didn’t like, they would call me,” said Schwartz. “Their campaign pollster would say ‘What are you guys doing? What’s your likely voter model? What’s your screen? What is your weighting?’ and we would come to understand why we had different numbers.”

“Now it’s a totally different game,” said Schwartz of the testy Governor’s race in Connecticut. “They are going public. Even the candidate himself attacked the poll. I thought ‘You couldn’t even get a flak to do that?'”

However, Schwartz maintained he does not have a problem taking heat from campaigns. “When you are being criticized by both Democrats and Republicans you have to be doing something right.”

Another topic on the agenda was the politics of the 2014 midterm elections.

Professor Smith of the UNH Survey Center said that political parties are like sports teams and people don’t often change allegiance, so it is more a matter of who is motivated to vote.

“Turnout, for midterm elections in particular, is key,” said Smith. “Who is going to show up?”

Smith is spending much of his time trying to figure out the makeup of the midterm electorate, rather than focusing on voters’ or candidates’ stances on the issues. Given that many voters don’t know the candidates, he is a proponent of generic ballot questions, which ask whether a respondent supports a generic Republican or Democrat.

Smith is also watching what is going on at the national level to motivate or discourage voters. Quoting Charlie Cook’s line that midterm elections “come in three varieties for the White House party: bad, really bad, and horrific,” Smith said “we’re probably someplace between really bad and horrific.”

During the question and answer portion of the program panelists were asked about a recent dust-up between Massachusetts politicos in which political science professor Jerold Duquette claimed that the Massachusetts gubernatorial primary election was influenced by highly publicized weekly tracking polls in media outlets such as the Boston Globe.

“There is some very good political science research that shows how viable you think a candidate is affects your vote in a primary,” said Schaffner, “although it is probably not the most important factor.”

Michael Link, president of the American Association for Public Opinion Research, said some might say pollsters are in a very trying time. “But we have never lived in a time when people have more opportunities to express themselves, and there are more ways for us to measure public opinion than ever before,” Link said.

Link said the association will continue its mission of providing guidance on methods and transparency, while also expanding the organization’s scope beyond surveys to any technique for gauging public opinion, including social media analysis and data mining.

“Whether you are producing or consuming public opinion information, how do you know it is right?” asked Link. “Standards and transparency are the key.”

Brent Benson analyzes politics and public policy in the state of Massachusetts using a quantitative approach on the Mass. Numbers blog — follow him on Twitter @bwbensonjr.