By CP Staff
When Franklin Roosevelt ran against Alf Landon in the 1936 presidential election, a prominent magazine called Literary Digest mailed surveys to about 20 million citizens to find out how they would vote. After tallying the responses, the Digest predicted a big Landon win. At the time, there seemed to be little reason for skepticism: The Digest had successfully forecast the outcome of the five previous elections.
That same year a young journalist-turned-advertising man, George Gallup, conducted a much smaller poll, sending out researchers to interview just 3,000 people. Gallup, however, predicted a Roosevelt victory. Why the disparity? In determining whom they would contact, Digest pollsters had relied chiefly on automobile registration lists and telephone directories. Gallup, on the other hand, sought out a more representative cross section of the population, including the sorts of folks who didn't own a car or a telephone.
Of course, during the Depression, lots of people didn't have cars or telephones. The rest is history. Roosevelt defeated Landon in a record landslide, and the "scientific" public opinion poll was established as a ubiquitous fixture of the American political scene. In the decades since, pollsters have tweaked and refined their techniques, but Gallup's emphasis on identifying a representative cross section of likely voters has remained the first principle of the business.
In the wake of this month's elections, some are questioning whether the pollsters are losing some of their ability to accurately identify likely voters and, hence, to forecast the outcomes of elections. That's understandable. As the commentator Arianna Huffington observed acidly: "I'm still trying to figure out who had a more wretched Election Night 2002, the Democratic Party or America's pollsters."
Nationwide, prominent pollsters flat-out blew a number of the year's biggest races. In the Georgia gubernatorial race, Mason-Dixon Polling & Research, Inc., had incumbent Roy Barnes leading by nine points. He lost by five. New York-based Zogby International reported that Colorado Sen. Wayne Allard was trailing by nine points; he won handily. In Illinois, Zogby declared the gubernatorial race too close to call. It was decided by seven points, a result that company president John Zogby frankly declared "embarrassing."
And, of course, in Minnesota's Senate race, the pollsters were all over the place. On the Sunday before the election, Mason-Dixon, which polled for the St. Paul Pioneer Press and Minnesota Public Radio, found a six-point Norm Coleman advantage. Meanwhile, the Star Tribune's Minnesota Poll trumpeted a five-point Walter Mondale lead.
Rob Daves, director of the Minnesota Poll, maintains that his poll was accurate--at least within its 3.2-point margin of error. "I'm convinced that the polls done late in the election were pretty good. What they showed, taken in total, was an incredibly volatile electorate. And if you've got a volatile environment, then a poll is just a snapshot in time," says Daves.
In fact, the Strib's tracking polls, taken in the two days before the election, did seem to reveal an astounding amount of volatility. A poll on Sunday showed a 15-point Mondale lead. The following day, the tracking poll showed an 11-point Coleman lead. Did the voters really swing that much--or did the pollsters fail to properly model the electorate?
Mark Schulman, president of the American Association for Public Opinion Research, says he hasn't studied the Minnesota race, so he can't comment specifically on what happened with the Minnesota Poll. (He does, however, hasten to point out that Daves is "considered one of the best media pollsters in the country.") But Schulman notes that a host of new technologies have made the pollsters' job tougher. Caller ID and cell phones, for instance, make it harder for pollsters to reach people they wish to interview--thus potentially skewing results. In addition, Schulman says, an increasing number of people contacted by pollsters simply refuse to participate.
"I don't think there is anybody in the business who is not a little bit concerned about the trends," says Scott Keeter, associate director of the Pew Research Center for the People and the Press. "But so far, the evidence does not support the more extreme pronouncements of some critics who say polls are now worthless because there is so much refusal to cooperate." In a recent study, Keeter points out, he found that the much-fretted-over hard-to-reach folk typically hold virtually the same opinions as people who are easy to reach.
Shawn Towle, the editor of the St. Paul-based Checks and Balances, a political newsletter, points out that the difficulty in identifying who will vote is more pronounced in Minnesota than elsewhere because of the state's unusual same-day voter registration system. Add to that the increasing number of voters no longer bound to the political parties, Towle contends, and you have a recipe for inaccurate polls.
Daves takes exception to the inaccuracy charge. Remember, he says, the poll is a snapshot in time, not a prediction. But Daves says voter "volatility" has steadily increased in Minnesota's statewide elections since 1992, meaning that more voters are waiting longer to make up their minds. As a practical matter, that means polls have less value for forecasting the outcome of a close election.