
Handle opinion polls the way you would any story: Do a credibility check

ASK THIS | March 23, 2004


Q. Is the pollster's technique transparent?

Q. How can you tell if a poll is flawed, or if you are being manipulated?

Q. What are 'confidence levels' and 'margins of sampling error'?

 

By Leo Bogart
[Leo Bogart died in October 2005 at the age of 84.]

 

As the apparently endless election campaign of 2004 continues to preoccupy America’s news media, the question of who’s ahead will take precedence over the substantive issues the candidates debate. That displeases those who worry about the state of our democracy, but it is the inevitable lesson of every presidential election year since the dawn of the television age. The “horse race” reports on the candidates’ competitive fortunes will largely be drawn from the polls that dominate the lead stories of newscasts and newspapers alike. Can news organizations do any better in reporting polls than they have in the past?

 

The record until now has been flawed by a widespread failure to distinguish among polls that carry varying levels of credibility. No one can expect editors and reporters to be polling experts, but they should have the good sense to consult those who are. News people are always and understandably hungry for the latest piece of information, but the latest poll out isn’t always the most reliable.

 

Entrance polls, exit polls, push polls

Polls are done for different purposes and with different degrees of conscientiousness and skill. Polls intended for publication are done by different means than those done privately for candidates or party organizations and leaked to the press when the results look good. Pre-election surveys use different methods than the exit polls conducted on Election Day or the “entrance polls” taken before the Iowa caucuses. And in recent years we have seen the emergence of “push polls,” which use mass telemarketing techniques to convey politically charged messages in the guise of asking survey questions.

 

The emergence of political consultants has introduced further complications. These advisors, a new breed, are adept at manipulating the interpretation of polls done by others. They also, with varying claims to being qualified, conduct their own studies along the lines of commercial advertising research – testing the appeal of different themes and arguments and assessing candidates’ comparative personality strengths and vulnerabilities. Such studies can be done on a shoestring compared with the expense of running large cross-sectional samplings of public opinion.

 

Forecasting election outcomes from poll results is a tricky business. The famous Literary Digest fiasco of 1936, which predicted a victory for Alf Landon, was based on postcards returned by the magazine’s largely Republican readers – a biased sample if there ever was one. There were two reasons for the equally famous failure of the 1948 pollsters (memorialized in the photo of a triumphant Harry Truman holding up the front page of the Chicago Tribune with its “Dewey Defeats Truman” headline). The leading polling organizations (Gallup, Roper and Crossley) relied on interviewers’ sometimes biased selection of respondents under arbitrary quotas for women, blacks and low-income people. And the pollsters cut off their interviewing well before Election Day, an interval in which many undecided voters changed their minds.

 

Polls try to focus on actual voters

In 2000, the polls all agreed in predicting a close election, though no one could have foreseen the failure of Florida’s “butterfly ballot” and the “hanging chads” – let alone the Supreme Court’s decision to install George W. Bush in the White House.

 

The fundamental difficulty in election polling is determining who is actually going to vote. Research organizations try to solve the problem by asking people whether they are registered, whether they have voted in past elections, and how they assess their own intention to vote. This screening produces somewhat different numbers from those that emerge by simply asking everyone’s preferences. But turnout can be strongly affected by the weather, and the inclination to choose a particular candidate can be swayed by unforeseen news just before the election.
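
To make that screening concrete, here is a minimal sketch in Python of how a likely-voter filter might work. The scoring scheme, point values and cutoff are invented for illustration and do not reflect any particular pollster’s model:

    # A hypothetical likely-voter screen: respondents earn one point each for
    # being registered, having voted before, and voicing a strong intention
    # to vote; those at or above the cutoff count as likely voters.
    def is_likely_voter(registered: bool, voted_before: bool,
                        intent_0_to_10: int, cutoff: int = 2) -> bool:
        score = int(registered) + int(voted_before) + int(intent_0_to_10 >= 8)
        return score >= cutoff

    respondents = [
        {"registered": True,  "voted_before": True,  "intent": 9},
        {"registered": True,  "voted_before": False, "intent": 5},
        {"registered": False, "voted_before": False, "intent": 10},
    ]
    likely = [r for r in respondents
              if is_likely_voter(r["registered"], r["voted_before"], r["intent"])]
    print(f"{len(likely)} of {len(respondents)} respondents pass the screen")

However the cutoff is drawn, the choice shifts the published numbers, which is one reason different polls of the same race disagree.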

 

Political choices are volatile, like all expressions of opinion. This is especially true for that large portion of the electorate who do not have an intense commitment to a party or office-seeker. 

 

In the face of such obstacles, it is a marvel that the pollsters’ track record is, overall, as good as it is. That is a tribute to the seriousness, know-how and integrity of the principal polling organizations. But the efforts of even the best of these are circumscribed by economic constraints. The projectability of all survey findings depends in good measure on sample size, and sample size is determined by the budget. The polls run by leading news organizations are generally done well, but none of them can afford as many interviews as researchers would like in order to make valid generalizations about sub-groups of the population. It may sound counterintuitive, but it takes just as many interviews with blacks, Hispanics or young people aged 18-24 as it takes with the whole population to produce findings at the same level of statistical confidence.
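
The arithmetic behind that point is worth seeing. Under simple random sampling, the margin of error depends on the number of interviews in the group being analyzed, not on that group’s share of the population. A minimal sketch in Python, assuming the conventional 95% normal approximation:

    import math

    def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
        """95% margin of sampling error for a proportion under simple
        random sampling (worst case at p = 0.5)."""
        return z * math.sqrt(p * (1 - p) / n)

    # A 1,000-person national sample and a 1,000-person sample of, say,
    # 18-24-year-olds are equally precise; the 120 young people who happen
    # to fall inside a national sample are not.
    for label, n in [("full sample",            1000),
                     ("subgroup within sample",  120),
                     ("dedicated subgroup poll", 1000)]:
        print(f"{label:24s} n={n:5d}  MOE = +/-{100 * margin_of_error(n):.1f} pts")

Run it and the 120-person subgroup carries roughly a 9-point margin against the full sample’s 3 points, which is why sub-group findings in ordinary polls deserve extra skepticism.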

 

Polls may have many sources of error

References to those confidence levels, or statistical tolerances, are now customary in press releases of poll results. But an allusion to an error margin of plus or minus 2% or 5% does not guarantee that the true numbers fall within that range. That would presume that the only source of error in a survey is the pure probability of sampling. Polling is not the same as tossing a penny 100 times, 500 times and 2,000 times and observing how close the results come to 50-50 for heads and tails. Polling is a human enterprise, in which errors can creep in from faulty questionnaires, poor interviewer training, and mistakes in coding and keying answers.
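
The penny-tossing comparison is easy to make concrete. The short Python simulation below shows how much pure sampling error remains at each of those sample sizes; everything beyond it – bad questions, bad interviewing, bad coding – comes on top and is not captured by the published margin:

    import random

    random.seed(2004)  # fixed seed so the illustration is reproducible

    # Pure sampling error in the penny-tossing experiment: the deviation
    # from 50% tends to shrink as the number of tosses grows. Errors from
    # questionnaires, interviewing and coding are NOT included here.
    for n in (100, 500, 2000):
        heads = sum(random.random() < 0.5 for _ in range(n))
        pct = 100 * heads / n
        print(f"n={n:5d}: {pct:.1f}% heads (off by {abs(pct - 50):.1f} points)")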

 

One source of error with which pollsters have had to reckon is the rate at which interviews are actually completed in conformance with the initial sampling plan, in which the choice of respondents is determined by pure chance through a complicated series of steps.

 

Interviewers used to go door to door and look people in the eye. Today most surveys of all kinds – the bulk of them market research, done by the same companies that conduct opinion polling – are done on the telephone, with numbers dialed automatically and responses recorded with the aid of a computer. In an era of two-wage-earner families pressed for time, of telephone marketing and telephone answering machines, the rate of response to pollsters has been steadily slipping. Good research organizations call back repeatedly, but they can’t reach everyone originally designated in the sampling plan, and the people they don’t reach are never exactly like the ones they do. The difference becomes even more marked when polls are done on the Internet, as a growing proportion are.
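
Response rates themselves have a standard accounting. Here is a simplified sketch, in Python, in the spirit of the AAPOR RR1 formula (completed interviews divided by all potentially eligible cases); the category counts are invented for illustration:

    # Simplified response-rate calculation: completes over all potentially
    # eligible cases, counting every unknown-eligibility case as eligible.
    def response_rate(completes: int, refusals: int, noncontacts: int,
                      unknown_eligibility: int) -> float:
        eligible = completes + refusals + noncontacts + unknown_eligibility
        return completes / eligible

    rate = response_rate(completes=620, refusals=540,
                         noncontacts=480, unknown_eligibility=360)
    print(f"Response rate = {rate:.1%}")  # 31.0% with these invented counts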

 

Reputable research organizations are acutely aware of these matters and use elaborate computerized weighting models to bring their samples into balance with the characteristics of the whole population. (Those benchmark statistics come from the Census Bureau, which has had its own problems in getting 100% public cooperation and has been sabotaged by a politically imposed Congressional restriction on its own efforts to adjust data for greater accuracy.)
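
In its simplest form such weighting is post-stratification: each respondent in a demographic cell is weighted by the cell’s Census share divided by its share of the sample. A minimal sketch in Python; the population shares and sample counts are invented, and real models balance many variables at once, often by iterative raking:

    # Post-stratification in miniature: weight each age cell so the
    # weighted sample matches the population distribution.
    population_share = {"18-29": 0.21, "30-49": 0.37, "50-64": 0.24, "65+": 0.18}
    sample_count     = {"18-29": 120,  "30-49": 340,  "50-64": 290,  "65+": 250}

    n = sum(sample_count.values())
    weights = {cell: population_share[cell] / (sample_count[cell] / n)
               for cell in population_share}

    for cell, w in weights.items():
        # underrepresented young respondents are weighted up (1.75);
        # overrepresented older respondents are weighted down (0.72)
        print(f"{cell:6s} weight = {w:.2f}")

The trade-off is that heavily weighted respondents add noise: a handful of hard-to-reach young people can end up standing in for a large slice of the electorate.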

 

Pay attention to the pollster's credentials, affiliations

Respondent cooperation has also cropped up as a problem in exit polling on Election Day, as some voters rush away rather than fill out an anonymous questionnaire. Still, throughout their history the exit polls have had an excellent record. In 2002, the Voter News Service (VNS), a consortium of leading news companies, ran into software problems that choked the tabulations, and the networks made some erroneous judgments in their election-night calls. The result was much hand-wringing and the demise of VNS itself. In 2004, the same companies turned to an outside firm headed by Warren Mitofsky, formerly CBS’s pollster and a founder of VNS. But the leap from what people say they did or will do to what they actually do will inevitably continue to beset those who run any kind of election polling.

 

The key to good research is transparency – a willingness to set forth exactly what was done and to leave the books open for anyone to check. Professional pollsters who belong to the American Association for Public Opinion Research (AAPOR) commit themselves to do just that. But of the many hundreds of firms that will be doing some form of election polling – national, state or local – in 2004, only a small fraction are led by or employ AAPOR members. Some of the pollsters most often quoted by the news media are not in this category. Their explanations of method are sometimes deliberately opaque or not given at all. Some do private polls for one political party while also doing media-published polls on the side. News organizations should be wary of the credentials of those whose polls they publish or quote. Who knows? There may even be some good stories there!

 


