Here is a case of journalistic fraud. Some may consider it a minor case. It involves highly questionable practices in the coverage of election-year politics, but it was hardly remarked upon at the time it occurred, which was the middle of August, a full month before the Sept. 15 state primary election.

The story in question ran at the top of the front page of the Boston Herald, just above the scally cap of the knickerbockered newsboy who (until the Herald’s recent full-color makeover) used to leap from the banner. The headline: “8th District pack closing in on front-runner Flynn.” On this particular Friday morning, there were still 10 Democrats seeking the party nomination in the 8th Congressional District and the chance to replace Rep. Joe Kennedy in Washington. The Herald, in partnership with WCVB-TV in Boston, had commissioned a poll to find out where the candidates stood with the district’s voters. The result was not surprising: Former Boston Mayor Raymond Flynn “holds a clear lead,” the newspaper reported, “but a tightly-bunched group of Democratic rivals is breathing down his neck.”

It was just the kind of story newspapers and television stations have come to love–and to pay good money for. A poll story is straightforward, easy to understand, and it captures the drama of unfolding events. Plus, it can be reported with the ring of scientific objectivity. The Herald presented the numbers: Flynn was the choice of 18 percent of likely voters, followed by Somerville Mayor Michael Capuano with 12 percent and former legislators Marjorie Clapprood and George Bachrach with 10 percent. Two candidates followed with 7 percent, another had 6 percent, two others had 2 percent, and a final one had 1 percent. In fine print were the usual disclosures: The poll was conducted by RKM Research and Communications earlier that week by interviewing 402 “likely Democratic primary voters.” The margin of error was plus or minus 5 percent.

As always in poll stories, the “margin of error” line is a throwaway–it gives the sound of authoritativeness, as long as the reader doesn’t think about what it means. But suppose a reader did stop for a moment to examine the information being presented. If Ray Flynn’s showing of 18 percent could have been 5 points higher or lower, that means he might have had as much as 23 percent support or as little as 13 percent. The next closest candidate, Capuano, might have had as much as 17 percent or as little as 7 percent. So already we can deduce that Capuano might have been the one in the lead, or that he and Flynn were tied–each with, say, 16 percent. Yet the first paragraph of the story stated flatly that Flynn “holds a clear lead.”
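The arithmetic is easy enough to check. Here is a minimal sketch, in Python, that applies the textbook 95 percent confidence interval for a sampled proportion to the Herald’s published numbers; it is an illustration of the standard formula, done on the assumption of a simple random sample, not a reconstruction of RKM’s actual procedure.

    # A sketch of the arithmetic behind a "margin of error" claim.
    # Assumes a simple random sample and the conventional 95 percent
    # confidence level (z = 1.96); the pollster's detailed methodology
    # is not something this sketch attempts to reproduce.
    import math

    def margin_of_error(p, n, z=1.96):
        """Half-width of the 95 percent confidence interval for a
        proportion p measured in a random sample of size n."""
        return z * math.sqrt(p * (1 - p) / n)

    n = 402  # "likely Democratic primary voters" interviewed
    for name, share in [("Flynn", 0.18), ("Capuano", 0.12)]:
        moe = margin_of_error(share, n)
        print(f"{name}: {share:.0%} +/- {moe:.1%} "
              f"(plausible range {share - moe:.1%} to {share + moe:.1%})")

    # The paper's blanket "plus or minus 5 percent" corresponds to the
    # worst case, p = 0.5:
    print(f"Worst-case margin for n = {n}: +/- {margin_of_error(0.5, n):.1%}")

Run this and the two candidates’ plausible ranges overlap, which is the point: the six-point gap the Herald reported is not much larger than the uncertainty surrounding it.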

Other polls had shown Flynn out front, too, so the “finding” was believable enough. The eyebrow-raiser in the story was Capuano’s strong second-place position. But was he really in second place? Factoring in the margin of error, we know that a different poll could have shown Clapprood and Bachrach with as much as 15 percent, or at least 12 percent, instead of the 10 percent the poll found. Capuano could have had, say, 8 or 9 percent, in which case he would have looked more like an underdog than a contender. Obviously, the entire 10-candidate ranking the Herald presented could have been–and probably would have been–reshuffled if a different group of 402 “likely voters” had been interviewed. The headline of the story, the numbers, and the conclusions all rested on nothing more than the laws of chance.

As it played out, Capuano captured the Democratic nomination with 23 percent of the vote. (Flynn followed with 18 percent, Bachrach was third with 14 percent, and environmentalist John O’Connor edged Clapprood 13 percent to 12 percent.) The eventual outcome, of course, says little about the accuracy of a poll taken a month earlier–people change their minds, previously undecided voters make a choice, and a lot depends on which candidates’ organizations can turn out their voters. Whether the poll in August gave an approximation of voter sentiment at the time is an open question. What is more to the point is this: The Herald’s report on its in-house poll–like most election-season reporting of polling data in the mass media–made for shoddy and misleading journalism.

In fact, the Herald’s presentation of the candidates’ standing wasn’t the worst of it. The Aug. 16 story went on to dabble in even more dubious science, reporting the views of key sub-groups of the 402-member sample. Among Catholics, Flynn was said to be the favorite of 27 percent of the voters. Among male voters, Flynn had 23 percent. Among women voters, “there is a surprise,” the Herald reported: Capuano led the way with 15 percent, Flynn followed with 14 percent, and Clapprood had only 10 percent. Nowhere did the story report the size of these sub-samples. How many Catholics were interviewed? How many men? How many women? (Herald pollster R. Kelly Myers did not return a call from CommonWealth.) Nor was there a mention that among smaller samples the statistical margin of error is always greater–for a sample of 200 it is said to be about 6 percent. Guessing that about half the respondents were female, and therefore expecting a margin of error for that sub-group of plus or minus 6 percent, we might wonder about the validity of the finding that one candidate had 15 percent, another one had 14 percent, and another had 10 percent of the women’s vote. The truth, of course, was that these numbers were not statistically meaningful. Indeed, almost nothing about the story from top to bottom involved anything more than guesswork and speculation. Yet it was presented as hard news, on the front page, at a time when most of the 10 candidates were facing make-or-break weeks ahead with funders, campaign workers, and voters.
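How fast the margin of error swells as the sample shrinks is, again, simple arithmetic. The sketch below uses the same textbook formula; since the Herald published no subsample sizes, the figures are illustrative, and the exact margin depends on the confidence level assumed (at the conventional 95 percent level, a sample of 200 comes out closer to 7 points than 6).

    # How the worst-case margin of error grows as the sample shrinks.
    # The Herald did not publish its subsample sizes, so the values of n
    # below are purely illustrative.
    import math

    def worst_case_moe(n, z=1.96):
        """Worst-case (p = 0.5) margin of error at the 95 percent level."""
        return z * math.sqrt(0.25 / n)

    for n in (402, 200, 100):
        print(f"n = {n:3d}: +/- {worst_case_moe(n):.1%}")

    # Output: roughly 4.9, 6.9, and 9.8 points. With errors that size, a
    # 15-14-10 split among women voters says next to nothing.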

It could be argued that nobody takes a poll such as the Herald’s seriously. It could be said that this is a misdemeanor case of journalistic malpractice, and hardly worth exposing–except that it has to do with the fabrication of “news,” the reporting of unverifiable information as “fact,” and the media’s role in influencing a congressional election. You know, small potatoes stuff.

To poll or not to poll…

On the morning the Herald’s poll on the 8th District race was published, I happened to be interviewing John Della Volpe, a Concord-based political strategist who has been working in polling and survey research for about seven years, the last three of them with his own company, Della Volpe & Associates. (Right now, the “associate” is mainly his wife, Linda, who has a background in market research.) Having an experienced pollster on hand, I took the opportunity to ask what he thought of the poll in that morning’s newspaper.

Della Volpe was not entirely dismissive–he trusted the finding that Flynn was the favorite. He took more interest in the poll’s report on each candidate’s “favorable” and “unfavorable” ratings than in the actual rankings. I read him the paragraph that reported where the candidates stood with women: “Among female voters, Flynn has 14 percent support, followed by Clapprood with 10 percent, Bachrach with 9 percent and [Susan] Tracy with 8 percent.” Given what you know about sampling error, does that paragraph have any meaning at all? I asked. “Not at all,” he responded. In Della Volpe’s view, though, this said little about the value of polling. Like every other pollster I spoke with, he puts the garden variety newspaper poll in a separate category from the kind of work he does. “You have to distinguish between what we all do and what they do,” he said, meaning by “they” newspapers and TV stations.

I had heard the same point in an earlier conversation with Lou DiNatale, the widely quoted senior fellow at the McCormack Institute who runs the quarterly University of Massachusetts poll. DiNatale had taken issue with a Boston Globe poll on the 8th Congressional District race that had been published in May. In a press release that month detailing the results of his own spring survey on the governor’s race, he added an addendum blasting the Globe poll as “a classic example of the inappropriate use of media-sponsored polls. It was too early, it will affect the field, and the poll sponsor, the Globe, is in a high visibility dispute with a candidate.” The dispute he referred to was with Flynn, who had been the subject of critical reports in the Globe, particularly with regard to the former mayor’s drinking habits. That the Globe poll showed Flynn ahead with 21 percent (to Clapprood’s 13 percent and Capuano’s 8 percent, with a 5 percent margin of error) did not mute DiNatale’s criticism. “They did not let this field form,” he told me in August, charging that the early poll “fixed the field permanently.”

Too early? It struck me as an odd criticism from a poll-taker–even more so when I came across a news clip from May of 1997 reporting the results of a poll commissioned by a group of liberal Democrats who wanted to assess the strength of potential candidates in the 8th District. This was before Joe Kennedy had even announced he would not run for re-election (though he was considering a run for governor). And this was a full year before the Globe’s poll that DiNatale said “fixed the field.” Working as a “consultant” on that poll was Lou DiNatale himself.

Is it ever too early to take a poll? Indeed, DiNatale and others were conducting polls on this year’s governor’s race all through last year. (William Weld would beat Joe Kennedy 53 percent to 27 percent, DiNatale found in April of ’97.) As Thomas E. Mann and Gary R. Orren wrote a few years ago in a Brookings Institution report, “To poll or not to poll is almost never the question.” Opinion polls have become ubiquitous, “a hallmark of contemporary politics,” Mann and Orren noted. When an election year is approaching, there are polls to find out who is a potential contender, and how candidates would fare in all manner of hypothetical match-ups. In the election year itself there are polls that tell us who’s ahead, who’s gaining, who’s fading, who has high “negatives,” whose advertising seems to be working, whose message is “resonating,” who did well and who did poorly in debates. And in the weeks before the election there are “tracking polls” that purport to show day-by-day shifts in voter opinion. Sometimes the late-season polls even do a pretty good job of telling us ahead of time who is going to win.

“To poll or not to poll is almost never the question.”

But how much of what polls tell us do we really need to know? Why is it necessary to know how our fellow citizens might or might not vote? Doesn’t that skew the outcomes in elections? How much of the polling information we receive is reliable? How much of it is downright fraudulent? And how are we to know the difference? More to the point, how much of what the polling industry churns out simply takes that amorphous body of confusion, misperception, prejudice, right and wrong ideas, knowledge, ideology, and ambivalence that is present in each of us, magnifies it many times over, and quantifies it into exact but meaningless numbers? In other words, how much do we really understand public opinion–what it is and what it’s good for–in this poll-crazy age?

Naturally, these questions could be put to the public at large–and they have been. Pollsters from the Gallup and Roper organizations have found that most people think polls are usually accurate, and one Gallup survey reported that 76 percent said polls are a good thing for the country while 12 percent said they are a bad thing. But the discussion gets more interesting when one samples the opinion of the poll-takers themselves, as I set out to do in this year’s election season.

Who’s who in polling

The major league firms in the polling business are in Washington, D.C. (including the nearby Virginia suburbs) and New York City. But since the early 1970s, Boston has had a respectable assortment of homegrown opinion research companies. One of the pioneers on the Boston scene was Patrick Caddell, who became interested in political polling as a Harvard undergraduate. With his classmates Danny Porter and John Gorman, he set up a firm called Cambridge Survey Research in 1971 that operated out of an office above the Casablanca Restaurant near Harvard Square. In a few years Caddell opened a Washington, D.C., office, and his and Gorman’s profiles grew as they worked as Jimmy Carter’s pollsters.

At about the same time Gorman and Caddell were setting up, an anti-war activist named Tom Kiley arrived from Detroit. He got involved in the congressional campaign of Father Robert Drinan, who in 1970 won an upset victory against conservative Democrat Philip J. Philbin. Kiley got to know Caddell and Gorman by working on some of the same campaigns. By the late 1970s, Kiley was beginning to specialize in public opinion research for political clients.

Also drawn to the developing opportunities in opinion research in the late 1960s was a New York lawyer named Irwin Harrison, who was known to almost everyone as “Tubby.” (“I was fat when I was young,” he explains.) Harrison moved to Boston and started work for Becker Research, a firm run by John Becker. After a few years, he left to join Temple Barker & Sloane, a management consulting firm, before setting out on his own in the early 1980s.

Today, Gorman, Kiley, and Harrison each have their own companies and are seen as the deans of political polling in Massachusetts. Gorman set up Opinion Dynamics in 1988. Though most of his work now is market research for business clients, politicians continue to seek him out. (This year he was enlisted by Scott Harshbarger’s gubernatorial campaign and by Capuano’s congressional bid.) Kiley runs Kiley & Company, a seven-employee firm based in downtown Boston. Among Kiley’s clients are Senators John Kerry and Edward Kennedy, and Reps. Joe Kennedy, Barney Frank, Joe Moakley, and Ed Markey. Tubby Harrison, principal pollster for Harrison & Goldberg, conducted polling in the 1980s for The Boston Globe and the (now defunct) Hartford Times and worked for Michael Dukakis’s state campaigns and then for his presidential campaign. Later, he polled for Paul Tsongas’s campaign for president. Also active in political and commercial polling are John Della Volpe, who runs his own company in Concord, and Jack Maguire, a former physicist who owns Maguire Associates, Inc. in Bedford.

Somewhat removed from the political hurly-burly are the survey researchers and poll-takers in the academic world. Notable among these is Robert Blendon, who holds positions with both the Harvard School of Medicine and the John F. Kennedy School of Government. Blendon works with the Harvard Opinion Research Program, a university center that specializes in health and social policy research. Blendon has also co-directed a number of high-profile polls for the Washington Post, with backing from the Henry J. Kaiser Foundation.

And of course there is Lou DiNatale, whose background is in politics. After graduating from Brandeis in 1972, he ran several unsuccessful campaigns for Democrats and worked for former Colorado Sen. Gary Hart in 1984. Later that year, he landed at the McCormack Institute at UMass-Boston. As a senior fellow, he began arguing that the university should have a regular polling operation. Two years ago, the university agreed to establish quarterly polls, and DiNatale has been working mostly out of UMass President William Bulger’s office in Boston since then.

DiNatale’s hunch was that there was an opening for a “public poll”–one that takes up questions of politics and policy and is meant for public consumption, as opposed to being done for a private client. That niche had mostly been filled by the two competing media polls: the Boston Herald/WCVB-TV poll, which is conducted by R. Kelly Myers of RKM Research and Communications, and The Boston Globe/WBZ-TV poll, conducted by Gerry Chervinsky of KRC Communications in Newton.

Noticeably absent from this Who’s Who in Massachusetts Polling are women and Republicans. That’s because the field here is still predominantly male and Democratic (or non-partisan in the case of the UMass poll and the media polls). Republican candidates tend to look out of state when they want polling. This year, for example, Acting Gov. Paul Cellucci turned to Neil Newhouse of Virginia-based Public Opinion Strategies. Joe Malone’s polling was done by Arthur Finkelstein, who lives in Massachusetts but bases his company, Diversified Research, in Irvington, N.Y.

Getting in the game

Pollsters and media outlets have a “co-dependent relationship,” to put it in pop psychology terms. They don’t necessarily respect each other, but they need each other. The serious poll-takers chafe when they see what the media do with opinion surveys, but at the same time, they love to see themselves quoted as informed analysts, as the ones who hear and understand the voice of the people, as interpreters of the public mind.

The public may think of the poll-taker as an objective, “scientific” compiler of facts and data, but pollsters who work in politics are in fact partisans. (See box, page 57.) They are players in the electoral wars that determine which politicians rise and which fall. And they do not always appreciate the way newspapers and TV stations that conduct election-season polls can become players, too.

Tubby Harrison gives an example from the 1996 race for U.S. Senate between incumbent Sen. John Kerry and former Gov. William Weld. Harrison was not working for either candidate but he remembers being appalled by the actions of The Boston Globe in the final weeks of the campaign. Through September and into October, polls had found Kerry to be holding a narrow lead. But shortly before the election the Globe reported that Weld had caught up, and then a few days later that Kerry had regained his lead. Harrison believes Weld had never really pulled even with Kerry. “It was bad polling,” Harrison said, adding that to outsiders wondering what the Globe was up to “it does not leave a good impression.”

A review of the stories in question shows that the Globe did indeed report with nine days remaining before the election that Weld was “edging past Kerry,” as the headline put it. “Weld now runs ahead of Kerry, with 42 percent of respondents supporting Weld and 40 percent for Kerry,” said the story’s second paragraph, although the next paragraph noted that considering the three-point margin of error “the race remains a statistical deadlock.” Three days later, the newspaper reported that Kerry “appears to have regained a narrow advantage,” as a new Globe/WBZ-TV poll showed him ahead, 44 percent to 38 percent.

The Globe’s pollster, Gerry Chervinsky, reported finding extraordinary volatility in the final weeks. So perhaps the electorate, weighing the decision from day to day, had in fact leaned Weld’s way at one point. But perhaps the apparent closeness was a result of sampling error. It cannot be said that Chervinsky’s polls were entirely off base: On the Sunday before Tuesday’s election, the Globe reported that recent tracking polls showed Kerry with a 7-point advantage. In the end, Kerry won by just that margin–51 percent to 44 percent. But had the story about Weld “edging past Kerry” been a solid one, or had it been erroneous? And what are the implications of publishing an erroneous front-page story nine days before an election?

Chervinsky defends the poll, but not the wording in the headline and story. “I don’t think there was any problem with the poll,” he said. But he said that within the Globe there was “deep regret” about the headline. “It wasn’t right,” Chervinsky said. “It taught people a lesson the news media continues to forget all the time,” namely that a lead within the margin of error is not a lead. On the other hand, Chervinsky argued that the effect of such polls on an election is “really marginal.” “Do we think they influence the electorate?” he asked. “What influence? Weld didn’t win.”
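Chervinsky’s lesson, that a lead within the margin of error is not a lead, can also be put in numbers. The sketch below applies the textbook formula for the gap between two candidates measured in the same poll; the sample size is inferred here from the reported three-point margin (a worst-case three-point margin implies roughly 1,070 respondents), not taken from the story.

    # Why "a lead within the margin of error is not a lead."
    # The Globe story reported Weld 42, Kerry 40 with a three-point margin
    # of error. A worst-case three-point margin implies a sample of roughly
    # 1,070; that sample size is inferred, not published.
    import math

    p_weld, p_kerry, n = 0.42, 0.40, 1067

    # Standard error of the gap between two shares measured in the same
    # poll (the shares are negatively correlated, so the terms add).
    var_gap = (p_weld * (1 - p_weld) + p_kerry * (1 - p_kerry)
               + 2 * p_weld * p_kerry) / n
    moe_gap = 1.96 * math.sqrt(var_gap)

    print(f"Reported gap: {p_weld - p_kerry:+.0%}")
    print(f"Margin of error on the gap: +/- {moe_gap:.1%}")
    # A two-point "lead" carrying an uncertainty of about five points.

Which is to say that the “statistical deadlock” noted a paragraph later was the accurate description.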

“It taught people a lesson the news media continues to forget all the time.”

UMass pollster Lou DiNatale said part of the problem is that the media has become infatuated with covering the tactical and strategic side of politics. “They choose to cover the technology of elections instead of the substance,” he said. But he also charged that media outlets “want to get into the game.” He said the Globe has become more willing to use “horse-race” stories through the election season due to competitive pressure from the Herald–which raises the question, he said, “Is it your job to be in the race, or observing the race?” DiNatale said he decided not to conduct a late-summer poll on the governor’s race. “We want to stay out of voter-decision windows,” he said, adding that the media should back off, as well. “The role of the media can’t be to influence the race,” he said.

Other pollsters, such as John Gorman and Tom Kiley, have broader complaints about the media’s use of polls. It’s not just a matter of accuracy, in their view, but also one of shallowness. “I’m not very impressed with the media’s discernment on these issues,” Kiley said. While he sees some reporters who “decide what information is reliable and what isn’t” and leave the unreliable information out, others know that when it comes to making news “any poll results can have some value, even if they’re not all that reliable,” Kiley said. “The other thing is, most reporters are stuck with their own newspaper or TV station’s polling, and most of that polling is subpar by definition.” Why? “Because they don’t spend enough money on it,” Kiley said, “and because the media’s not really interested in the things we’re interested in, which is making sure you’re right.”

“You know,” Kiley continued, “what’s at stake for us is very different. We’re out of business if we don’t do a good job – if we don’t come up with good strategies and if we’re not right. You can only be off so many times before it’s going to affect your business very quickly. That’s not true of the media. The media have a story to sell today–it’s gone in a week.”

Gorman notes that as polling has become cheaper due to a drop in phone costs, there has been a “proliferation” of sampling by media and by campaigns, often on a daily basis. “You can have quite a lot of night-to-night fluctuation” in such polls, he noted, causing political leaders and the media to “overreact to meaningless information.”

To DiNatale and Harrison the primary blame for shallow or misleading polls lies with the media. There is “an enormous quantity of bad polls” being conducted, Harrison said. “I think the media has a lot of it.” DiNatale isn’t so sure there’s anything wrong with the polling. In cases such as the Herald’s report on Flynn’s “clear lead” or the Globe’s report that Weld was “edging past Kerry,” he said, “The problem here is not that the poll is bad, the problem is that the reporter doesn’t know how to interpret the poll.”

Gorman and Kiley extend the critique somewhat. Both confess to some discomfort with the kind of polling they see being done by competitors in the field. “There’s way too much polling being done and not enough thought given to what it all means,” Gorman said. He specifically faulted DiNatale’s methods at UMass, charging that DiNatale has a habit of releasing polls on the governor’s race and including results based on smaller subsamples that may not be statistically significant. “He shouldn’t put those out,” Gorman said.

Gorman also conceded that he finds it “a little annoying” that DiNatale sometimes releases his quarterly survey to coincide with a quarterly poll Opinion Dynamics conducts for its client Mass Insight, a public policy research group. Asked if there is a rivalry between the two quarterly polling operations, Gorman smiled slightly and said, “We’re a lot better than Lou.”

Kiley also took issue with the quality of DiNatale’s work. Asked if he believed the UMass poll is “done on the cheap,” he responded, “I think it probably is, yeah. I think it probably is. And I… well, whatever.” When told that DiNatale had challenged the view that good quality polls tend to cost more, Kiley responded, “I’m tempted to say I rest my case.” He then added, “Lou’s an old friend who’s found himself in this business, so I guess I have to start thinking of him as a pollster.” He conceded that the UMass poll “has evolved into a more serious effort… I think people tend to look at it more seriously than they did a couple of years ago.”

DiNatale insists he spends about the same to develop and conduct a poll as Kiley does, but that Kiley charges more because his company–unlike UMass–exists to make a profit. DiNatale said Kiley might charge from $12,000 to $15,000 for a poll UMass could do for $8,500. “Theirs are expensive because Tom’s expensive,” he said.

In a later interview, Kiley said he sometimes charges clients “as low as $8,000, depending on the circumstances.” He amended his remark about the UMass poll being done “on the cheap,” and said the gap between his costs and DiNatale’s may be less than he thought.

Polling’s pitfalls

Competitive jibes aside, the pollsters’ disagreements raise the key question: What makes for “high quality” polling? And how are those of us outside the profession to tell the difference between good polls and lousy ones?

I found Boston pollsters to be uninterested in wide-ranging critiques of their craft. When asked about the major problems facing pollsters these days, most of them spoke about “non-response” rates – telemarketing is making it harder for pollsters to get willing respondents on the phone. (See box, page 53.) But in their own circles, polling professionals are constantly discussing what can go wrong–what does go wrong–in their business. Very little of this discussion finds its way into mainstream newspapers–which do not want to debunk the polling they’ve become addicted to–with the notable exception of Richard Morin’s “What Americans Think” column in the Washington Post.

Morin recently quoted Kathy Frankovic, director of polling at CBS News, discussing the “arrogance” that she believes undermines pollsters’ credibility. “Even when we know our methods cannot produce precision, we allow those who read or use our results to think they do,” Frankovic said. Indeed, pollsters know better than anyone else how many factors there are that can produce unreliable results–and most concede that the much-discussed “sampling error” is the least of their worries.

Most pollsters concede that “sampling error” is the least of their worries.

Every poll-taker has intimate knowledge of how answers vary depending on the way questions are asked. Questions that use the word “welfare,” for example, may prompt a different response than questions about “assistance to the poor.” The best pollsters make Herculean attempts to present questions in a neutral way but, of course, it’s not possible to eliminate all subjectivity from an opinion survey. There are often inquiries, as well, that may not be understood in the same way by all respondents–especially if the question is badly worded. A famous example is the Roper poll’s 1992 question, “Does it seem possible or does it seem impossible to you that the Nazi extermination of the Jews never happened?” The nation was shocked when Roper reported that 34 percent of Americans seemed open to the possibility that the Holocaust never happened. Polling professionals immediately criticized the double negative in the question as confusing. Two years later, Roper did the poll over, attempting to improve the question wording. In the new result, only 1 percent said it was “possible the Holocaust never happened.” (Some 91 percent said they were “certain” the Holocaust happened and 8 percent weren’t sure.)

Experts understand also that the order of the questions, and the context in which they are presented, affect responses. If early questions ask about allegations of wasteful military spending–perhaps mentioning the Pentagon’s purchase of toilet seats costing hundreds of dollars–later questions may well find strong sentiment to reduce the military budget. Careful pollsters have developed ways of designing questionnaires to balance such concerns, to come at issues from different angles, and to understand how context might affect answers. But polls that are done more quickly tend to simplify rather than to probe for nuances and complexity. And when the media get hold of poll results, very little about the wording and the context makes it into the story.

Another little-discussed factor has to do with the skill of the interviewers. Telephone polls based on hundreds or thousands of respondents are conducted by battalions of interviewers who have varying levels of training, insight, or patience. If they skip questions or hurry responses or inaccurately record viewpoints, they are adding the kind of errors that are unquantifiable but that weaken the results. There is also the matter of whether respondents are giving honest answers–many researchers believe people tend to fudge their answers on questions about race, for example. And in election polls, much depends on how “likely voters” are screened. Different ways of guessing who is likely to vote, especially in a party primary, can lead to quite different poll findings.

But perhaps the major problem facing poll-takers has to do with “non-attitudes.” Veteran pollster Harry W. O’Neill put it this way in a speech last year to the American Association for Public Opinion Research (as reported by Morin in the Post), “We ask people about every conceivable issue and dutifully record their responses–whether or not they have ever heard about the issue or given it any thought prior to our interview bringing it to their attention.” A now-classic demonstration of the presence of “pseudo-opinions” was conducted in 1978 by researchers who asked people in the Cincinnati area whether they favored or opposed the repeal of the 1975 Public Affairs Act. A third of the respondents ventured an opinion–even though there was no such thing as a 1975 Public Affairs Act. A few years ago, the Washington Post reconducted the experiment with a national sample. The Post found 24 percent favored repeal of the non-existent act, while 19 percent did not–a total of 43 percent offering a meaningless opinion.

And beyond “non-attitudes” there is ambivalence. People simply may not be sure what they think about questions of public policy and politics. Yet when a pollster calls, many will feel compelled to choose an answer rather than to admit they have no opinion. “It’s a big argument among people in opinion research: How do you deal with the issue of ambivalence?” Everett Carll Ladd, director of the Roper Center for Public Opinion Research at the University of Connecticut, has noted. “I argue that ambivalence in many cases is a mature response.” Still, a Through-the-Looking-Glass remark such as “Sometimes I’ve believed as many as six impossible things before breakfast” is not the response most pollsters want to hear.

The public’s judgment

Poll-takers begin every day wondering, “What is the public’s opinion?” but they spend less time wondering, “What is public opinion?”

Long before George Gallup appeared on the scene, Thomas Paine wrote, “It might be said that until men think for themselves the whole is prejudice, and not opinion; for that only is opinion which is the result of reason and reflection.” By 1922, when Walter Lippmann wrote Public Opinion, the modern view was emerging. Lippmann believed that people tended to act based on “the picture in their head”–that is, their perceptions of the world beyond their direct experience. “The pictures of themselves, of others, of their needs, purposes and relationship, are their public opinions. Those pictures which are acted upon by groups of people, or by individuals acting in the name of groups, are Public Opinion,” Lippmann wrote.

The renowned political scientist V.O. Key defined public opinion narrowly as “those opinions held by private persons which governments find it prudent to heed.” Nowadays, public opinion is generally thought to be synonymous with the results of polls. Clearly, where Key’s definition ties public opinion too narrowly to government action, the common understanding today gives too much credence to polls. Might there be a better way to understand public opinion? In our era, a worthy successor to Lippmann and Key has stepped forward in the person of Daniel Yankelovich. One of the nation’s leading pollsters, Yankelovich is also a co-founder of the group Public Agenda, a non-partisan research organization based in New York that seeks to bridge the gap between leaders and the public. In his 1991 book, Coming to Public Judgment, Yankelovich advanced a compelling perspective that offers new ways of looking at polls and public opinion.

Yankelovich’s concern is broader than what makes a “good” poll and what makes a “bad” one. He asks what distinguishes good public opinion from bad. The “missing concept in American democracy,” he wrote, is the quality of opinion. Yankelovich makes a distinction between what he calls “mass opinion” on the one hand and “public judgment” on the other. The first he considers of poor quality: It is inconsistent, ever-shifting, and not well thought out. He defines public judgment as “the state of highly developed public opinion that exists once people have engaged an issue, considered it from all sides, understood the choices it leads to, and accepted the full consequences of the choices they make.” He cites public views on capital punishment as an example of “judgment” — most people have weighed the issue, discussed it, and come to a firm point of view. That’s why polls on the death penalty tend to be consistent. The matters of what to do about anti-trust problems in the computer industry or how to combat international terrorism are more probably the province of “mass opinion.”

Yankelovich argues that as the polling industry has grown it has cavalierly lumped mass opinion with public judgment, and that media polls especially have little interest in the matter of “quality.” “For subtle and complex human responses, one needs subtle and complex opinion surveys,” he writes. “These demand time, money and skill. Today, the trend is toward oversimplified, cheap, crude public opinion polls….the bad opinion polls are driving out the good ones.”

Elites and “mass opinion”

My discussions with Boston pollsters did not find much support for Yankelovich’s perspective. (It should be noted that none confessed to having read Coming to Public Judgment.) Lou DiNatale dismissed the notion that good polling has any connection with cost, or that any special expertise is required. “It’s not difficult to do,” he said. “Just ask the question so that Joe Bag-of-Donuts understands it. It’s as simple as that.” Large national firms such as Yankelovich’s tend to talk about quality, DiNatale said, when they feel threatened by upstart, cheaper pollsters. Tubby Harrison agreed: “You can do decent enough polls that are short and that don’t cost a fortune. And you don’t have to be a sociologist, and you don’t have to smoke a pipe.”

DiNatale also detected “elitism” in Yankelovich’s arguments about “mass opinion.” “Why isn’t that useful information?” he asked. “Why is that bad to know?” In DiNatale’s view, polling has brought about a shift in the political system that many professionals and “experts” object to: Instead of ideas filtering down from think tanks and professors to politicians and then to the public, pollsters now check with the public first and the process is upended. “The elites hate having to listen to the masses,” he said. (Yankelovich, in fact, makes the same critique in his book, arguing that the relationship between experts and the public is “badly skewed toward experts at the expense of the public.”)

Harvard’s Robert Blendon agreed with at least one of Yankelovich’s key points: that to properly interpret results a researcher needs to know whether people have thought about the consequences of their stances. “Interpreting early enthusiasms for change, you just need a bit of caution, because these things will change over time,” he said. “Privatization of Social Security is a perfect example. People are leaning toward individual accounts,” according to surveys Blendon has conducted. “But no one has explained to people that individual accounts require a front-end investment,” which could result in higher taxes. As well, more than half of his respondents said they could change their mind on privatization. “As people learn more about the consequences and the costs of doing it, they’ll become more cautious,” Blendon predicted. DiNatale, too, makes the point that studying the public’s view over time produces the most worthwhile results–which is why he pushed for a regular quarterly poll at UMass.

But it is Yankelovich’s argument that something could and should be done to improve the quality of public opinion that does not sit well with the polling community. “I’m not a lover of mankind,” said Harrison. “But I don’t see what point there is to look down your nose at them and say they’re not making up their minds the way they should.” Gorman agreed: “The public doesn’t do a bad job on the whole.” He associated Yankelovich with a school of thought “that makes knowing a lot into a civic virtue.” The idealism involved in promoting more thoughtful judgment among the public has “almost nothing to do with polling,” he said, nor did it strike him as realistic. “Is it really a rational or virtuous thing for people to spend a lot of time understanding complex issues when, in a representative democracy, people only need to know enough to elect their representatives?” Gorman asked.

Blendon said most people in polling do not expect government decisions to be made in sync with the public’s “coming to public judgment.” The prevailing view is closer to V.O. Key’s: “What counts for public opinion is not how well it’s informed, it’s where it stands at the time the legislative process makes a decision,” Blendon said.

What good are pollsters?

Public opinion polling is a weird science. One thing I found among all the pollsters I interviewed was a bedrock faith in the statistical theory of polling — that a small sampling of opinion can accurately represent the entire population, as long as it is randomly selected. Gorman’s faith in the science of polling was evident when he spoke about focus groups, which he said are “a plague on this country.” Focus groups, by their very nature, tend to attract middle-class, extroverted people with an interest in public issues. “You’re getting a biased sample right off the bat,” Gorman said, and then instead of giving them “a controlled set of words” as in a poll, there is a “free-wheeling discussion.” The same objection has been made about “deliberative polling,” an experiment launched by professor James Fishkin at the University of Texas. Fishkin’s idea was to study the way opinion changes when people are given a chance to participate in seminars and discussions. In bringing a randomly selected group of 600 citizens to Austin, Texas in 1996, Fishkin was interested in people’s considered opinions — in their “judgments,” as Yankelovich would put it.
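That bedrock faith is not misplaced, at least as far as the mathematics goes. A toy simulation makes the point; every number in it (a population in which exactly 18 percent hold some view, samples of 400, a thousand repetitions) is invented for illustration.

    # A toy demonstration of the statistical theory pollsters rely on: a
    # few hundred randomly selected respondents track a whole population
    # remarkably well. All the numbers here are invented for illustration.
    import random

    random.seed(1)
    TRUE_SHARE = 0.18    # fraction of the population holding some view
    SAMPLE_SIZE = 400    # roughly the size of the Herald's sample
    TRIALS = 1000        # number of independent "polls" to simulate

    estimates = []
    for _ in range(TRIALS):
        hits = sum(random.random() < TRUE_SHARE for _ in range(SAMPLE_SIZE))
        estimates.append(hits / SAMPLE_SIZE)

    close = sum(abs(e - TRUE_SHARE) <= 0.05 for e in estimates) / TRIALS
    print(f"True share: {TRUE_SHARE:.0%}")
    print(f"Average of {TRIALS} simulated polls: {sum(estimates) / TRIALS:.1%}")
    print(f"Polls landing within 5 points of the truth: {close:.0%}")

It is that mathematical reliability, in Gorman’s view, that focus groups give up from the start.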

The respected pollster Everett Carll Ladd dismissed the science behind deliberative polling as “absolutely atrocious. There’s no possibility of introducing meaningful controls.” And from a “scientific” perspective, he has a point: There’s no way to prove that a deliberative poll demonstrates what people would think “if they were thinking about an issue,” as Fishkin once put it. Much would depend on the combination of personalities assembled and on how the discussions went.

But no serious pollster can be unaware of the opposite problem: That they are often carefully and objectively measuring people’s snap judgments, their guesses, their ill-informed opinions. Does that have more value because it can be quantified by supposedly scientific methods? Pollsters such as Ladd put their hopes in bringing better standards to the profession. As Ladd wrote earlier this year in The Public Perspective, a bi-monthly journal for polling professionals, “The public has no way to consistently evaluate the survey research it sees. The public is not protected by peer review and most often is not protected by journalistic fact-finding.”

Certainly the promotion of “poll literacy” would be a step forward. The ability to spot the tell-tale signs of a useless poll ought to become one of the fundamental skills of modern citizenship. It would make sense for reporters and editors to lead the way. And if it’s true that politicians are making more decisions than ever based on polls, they ought to take the time to become well-versed in what kind of polls can be trusted.

Beyond that, there is a deeper question: Can the quality of public opinion itself be improved? It’s odd that the question is seen by some pollsters as “elitist.” Pollsters are more intimately aware than any of us how little their respondents know about the ins and outs of public policy issues. Yet they do not like the implication they are gathering and analyzing meaningless data. Reflexively, they defend the views of the people as inherently worth reporting. Pollsters see themselves as putting public opinion front and center. “Joe Bag-of-Donuts” will have his say. The elites had better get used to it.

But what kind of populism are pollsters peddling? It is a populism that expects almost nothing of the people. If the citizen can manage to pick up the phone and answer 10 minutes of the pollster’s questions, the pollster is happy. If Joe B. Donuts can manage to trudge off to vote once in a while, all the better. The average pollster expects nothing more out of politics than the status quo: Complicated decisions are best left to the experts, who will all the while be inundated with polls reporting the ever-changing views of a distracted and bored public.

In the end, how we view the role of polls in American politics has everything to do with how we think about democratic government. Is it “elitist” to say, as Yankelovich does, that average citizens “are ill-prepared to exercise their responsibilities for self-governance”? Or that “in present-day America, few institutions are devoted to helping the public to form considered judgments”? Or does it show a higher faith in the potential of the people? John Gorman is wide of the mark when he says Yankelovich makes “knowing a lot into a civic virtue.” Yankelovich doesn’t maintain that getting more information to the citizenry is the foremost goal– we cannot all be experts in government and social policy. But we can bring our judgment to public affairs. Yankelovich is talking about ways to recognize and cultivate “the thoughtful side of the public’s outlook, the side that belongs with the world of values, ethics, politics, and life philosophies rather than with the world of information and technical expertise.” If pollsters — and the poll-obsessed media — aren’t interested in finding that “thoughtful side,” in studying it, in understanding its importance to the relationship between the general public and its political leaders, what good are they?