The Opiate of the Electorate
If your anti-Bush sentiments have turned into electoral passion, then you probably restrained your exhilaration after last Thursday's debate until you got a sense of how it played to the American electorate -- which means, how it played in the polls that began to pour out only moments after the event ended. The first "instant" polls seemed to indicate a Kerry victory, and by Sunday the Newsweek poll (considered notoriously unreliable by the pros) had appeared with the news that Kerry had pulled even or might be ahead in the presidential sweepstakes. If it was then that the real rush of excitement hit you, face it: like a host of other Americans, you're a poll addict.
Opinion polls are the narcotic of choice for the politically active part of the American electorate. Like all narcotics, polls have their uses: they sometimes allow us to function better as political practitioners or even as dreamers, and don't forget that fabulous rush of exhilaration when our candidate shows dramatic gains. But polls are also an addiction, one that distorts our political feelings and actions even as it trivializes political campaigns -- and allows our political and media suppliers to manipulate us ruthlessly. The negatives, as pollsters might say, outweigh the positives.
But let's start with the good things, the stuff that makes people monitor polls in the first place, relying on them to determine their moods, their attitudes, and their activities. The centerpiece of all that's good in the polls lies in the volatility of public opinion, a trait that polling itself discovered. The scientific consensus before World War II had it that political attitudes were bedrock, unchanging values.
Take, for example, Bush's "job rating," as measured by that tried and true polling question: "How would you rate the overall job President George W. Bush is doing as president?" The Zogby Poll's results are typical; until
What happened next is harder to explain. Despite the fact that wartime presidents almost always have huge support for the duration of the conflict, Bush's approval rating began a sustained decline, losing 20 points in the next 12 months (leading up to the first anniversary of 9/11) and another 12 points the following year. By September 2003, his approval rating had hit the 50% level again.
Virtually every group of political activists quickly grasped the significance of this decline: Something surprising was happening to our "war president." In this case, the polls helped to inspire peace activists to rebuild a quiescent anti-war (or at least anti-Bush) movement, because they knew (from the polls) that the decline in his approval rating was largely due to the war. The same figures convinced a whole host of important Democratic politicians to declare for the presidency, bringing well-heeled financial backers with them. And they triggered a campaign by Karl Rove and his posse of Bush partisans to discredit Bush's attackers.
Poll results can be a boon to informed and effective politics; they alert activists and others to the receptiveness of the public on important issues. But the key fact that makes polls valuable -- that public opinion is a volatile thing -- also turns polls into an addictive drug that distorts and misleads. Once the addiction forms, we all want to know (immediately, if not sooner) the "impact" of every event, large or small, on the public's attitudes, so that we can frame our further actions in light of this evidence. And this responsiveness means that instead of sustained organizing around important issues that can have long-lasting impact on political discourse, we increasingly go for the "quick fix," especially attention-getting gimmicks that can create short-term shifts in the public-opinion polls which then, of course, feed more of the same.
The use of polls to determine the immediate impact of less-than-monumental events is a fruitless -- and often dangerous -- enterprise. There are two interconnected reasons why this is true. First, polls are at best blunt instruments. They can measure huge changes over time, like the enduring shifts of 30%, 20% and 12% in Bush's ratings, but they are no good at measuring more subtle changes of opinion in, say, the 3-5% range. As the famous (and much ignored) "margin of error" warning that goes with all polls indicates, this incapacity is built into the technology of polling and cannot be eliminated by any means currently available. One sign of it is the often-used phrase in news reports that a 3% difference between candidates is a "statistical tie" (which everyone promptly ignores and which in any case might actually indicate a 6% difference between the candidates). And that 3% "margin of error" is only one of five or six possible inaccuracies. The sad fact is that even a 15% difference between two candidates might not exist, unless it is replicated over time and/or across several different polls.
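Where that 3% figure comes from is simple sampling arithmetic. Here is a minimal sketch of the standard margin-of-error formula, assuming a simple random sample (which real polls only approximate, one reason the true error is larger):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from n respondents,
    assuming a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll samples about 1,000 people:
moe = margin_of_error(1000)   # roughly 0.031, i.e. about 3 points
# Each candidate's share can be off by ~3 points in either direction,
# which is why a reported 3-point lead may be no lead at all --
# or twice as large as reported.
print(f"{moe:.1%}")
```

Note that the formula shrinks only with the square root of the sample size: quadrupling the sample merely halves the margin, which is why no affordable poll can resolve differences in the 3-5% range.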
Let's take an example that, for most people, no longer carries the emotional weight it once did -- the 2000 election. If you had consulted the Gallup poll on most days late in that campaign, you would not have known that the vote would prove to be a virtual dead heat. On October 21, with a little more than two weeks to go,
We now know that this surge was a blunder by
Consider, for instance, the fact that many young adults party on Thursday, Friday, and Saturday. Since the trends recently have been for young singles to be Democratic, you can expect fewer Democrats and more Republicans to be home during polling hours on those days. And that's but a single example of changes in polling audiences. Daily polls, in other words, often record large fluctuations in attitudes because questions are being asked of very different audiences. Even time of day can make a big difference. (Think of who is at home on Sunday afternoons during football season.) This, in turn, forces pollsters to make all sorts of adjustments (with fancy scientific names like "stratified sampling" and "weighted analysis"). And these adjustments are problematic; in the context of daily electoral polls they often add to that margin of error instead of reducing it.
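The "weighted analysis" adjustment the paragraph mentions can be sketched in a few lines. All of the numbers below -- the party breakdown of who answered the phone, the population targets, and the candidate-support rates -- are hypothetical, invented purely to illustrate the mechanics:

```python
# Hypothetical raw sample: who happened to answer the phone, by party ID.
sample = {"Dem": 300, "Rep": 420, "Ind": 280}   # 1,000 respondents
# Hypothetical population shares the pollster believes are correct.
targets = {"Dem": 0.38, "Rep": 0.34, "Ind": 0.28}

n = sum(sample.values())
# Each respondent in a group is weighted by (target share / observed share),
# so under-sampled groups count for more and over-sampled groups for less.
weights = {g: targets[g] / (sample[g] / n) for g in sample}

# Suppose (again hypothetically) 90% of Dems, 10% of Reps, and 45% of
# independents back Kerry:
support = {"Dem": 0.90, "Rep": 0.10, "Ind": 0.45}
raw = sum(sample[g] * support[g] for g in sample) / n
weighted = sum(sample[g] * weights[g] * support[g] for g in sample) / n
# raw ~ 0.438 vs. weighted ~ 0.502: the correction moves the headline
# number by more than six points. If the targets themselves are wrong,
# the "correction" manufactures error instead of removing it.
```

The design choice buried in this arithmetic is the whole problem: the pollster must decide what the electorate "really" looks like before the adjustment can be made, and different polling houses make different decisions.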
No One Knows Who is Going to Vote
There are lots of other problems, but the big kahuna, when it comes to an election, is that we only want to interview people who are actually going to vote (a little over 50% of all eligible voters in a typical presidential election -- and possibly closer to 60% in this atypical year). One way to eliminate the non-voters is to look only at registered voters, but that is just a partial solution, since in most elections fewer than 80% of registered voters actually vote. What pollsters need to find out is: which of those registered voters are actually going to vote? This is particularly crucial because, while there are a great many more registered Democrats than Republicans, the Republicans usually narrow that gap by being more diligent in getting to the polls.
But there is no way to figure out accurately who is going to vote. Going to the polls on Election Day is a very complicated phenomenon, made even more so this year by the huge number of new registrations in swing states. It is almost impossible for pollsters to know who among these new voters will actually vote. While many potential voters have a consistent track record -- always voting or rarely voting -- many others are capricious. For these "episodic voters," factors like weather conditions and distance to the polls mix with levels of enthusiasm for a favorite candidate in an unstable brew that will determine whether or not they get to the polling station. In fact, who is "likely to vote" actually varies from day to day and week to week, and there's just about no way of measuring (ahead of time) what will happen on the only day of the only week that matters, November 2.
Pollsters, in fact, are really in a pickle. If they rely on previous voting behavior (as many polls do), they're likely to exclude virtually all first-time voters. Since the preponderance of newly registered voters are young singles (who, we remember, tend to be Democrats), they will be underestimating the Democratic turnout. So many polls (including
But this creates new distortions. For example, a big news story, including a polling-influenced one like the recent Bush "surge," can suddenly (but usually briefly) energize potential new Bush voters, turning them into "likely voters"; at the same time, it may demoralize Kerry backers, removing some of them from the ranks of "likely voters." Two days or two weeks later another event (the first Presidential debate, any sort of October surprise, or you name it) may create an entirely different mixture. And come election time, none of this may be relevant. On that day the weather may intervene, or any of a multitude of other influences may arise. So "likely voter" polls are always extremely volatile, even though the underlying proportion of people who support each candidate may change very little.
What this means is that a large proportion of all dramatic polling fluctuations -- this year and every year -- are simply not real in any meaningful sense. But this does not stop election campaign managers and local activists from developing or altering their activities based on them, which only contributes to a failure to mount sustained campaigns based on important issues, while focusing on superficial attention-getting devices.
You Can't Tell Which Poll is Right
This leads us to the second huge problem with polls: Different polls taken at the same time often produce remarkably different results. Fifteen percent discrepancies between polls are not all that rare. If several polls use just slightly different samples (all of them reasonably accurate), slightly different questions (all reasonable in themselves), and slightly different analytic procedures (all also reasonable), the range of results can be substantial indeed. If, in addition, they call at different times of the day or on different days of the week, the differences can grow even larger. And if they use different definitions of "likely voters," as they almost surely will, the discrepancies can be enormous.
To see how such a cascade of decisions really screws up our ability to rely on polls, consider the now famous "bounce" that Bush got from the Republican Convention. The media, using selected opinion polls, conveyed the impression that Bush surged from a "statistical tie" to a double-digit lead. Many of my friends -- Kerry supporters all -- felt the election was lost. (Some of them would certainly have fallen from the ranks of
This is a prime example of the polls having a profoundly detrimental effect on public behavior, because the bounce for Bush was moderate at best. In fact, the most reasonable interpretation of the polls as a group suggests that there may have been a shift in public opinion from slightly pro-Kerry (he may have had as much as a 3% advantage) to slightly pro-Bush (perhaps as much as 4%). A plausible alternative view, supported by a minority of the reliable polls, would be that the race was a "statistical dead heat" before the convention and remained so afterward, interrupted only by an inconsequential temporary bounce.
To see why a moderate interpretation is a reasonable one, you need to consider all the polls, not just the ones that grabbed the headlines. I looked at the first 20 national polls (Sept 1 to Sept 22) after the end of the Republican convention, as recorded by PollingReport.com, the best source for up-to-date polling data. Only three gave Bush a double-digit lead. Two others gave him a lead above 5%, and the remaining 15 showed his lead to be 4% or less -- including two that scored the race a dead heat. In other words, taking all the polls together, Bush, who was probably slightly behind before the convention, was probably slightly ahead afterward. Certainly the media are to blame for our misimpression, but before we get to the media, let's consider how various polls could disagree so drastically.
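The arithmetic of reading "all the polls together" is nothing more than comparing the outliers to the full distribution. A toy illustration -- the leads below are invented to match the counts described above (three double-digit, two above 5%, fifteen at 4% or less), not the actual published numbers:

```python
# Hypothetical Bush-minus-Kerry leads, in points, from 20 post-convention
# polls, constructed to match the breakdown described in the text.
leads = [11, 10, 10, 7, 6, 4, 4, 4, 3, 3, 3, 2, 2, 2, 2, 1, 1, 1, 0, 0]

mean = sum(leads) / len(leads)
double_digit = sum(1 for x in leads if x >= 10)
print(f"mean lead: {mean:.1f} points; "
      f"{double_digit} of {len(leads)} polls in double digits")
# A headline built on the three outliers reads "double-digit surge";
# the full set of twenty averages to a small single-digit lead.
```

The same data supports two very different stories, depending on whether you report the extremes or the center of the distribution -- which is exactly the choice the media made.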
Fortunately there are some energetic experts, especially Steve Soto and Ruy Teixeira, who have sorted this discrepancy out. The bottom line is simple: the double-digit polls far overestimated the relative number of Republican voters.
How, then, could
But in some ways, those exaggerated
To see how pervasive this problem is, consider this sobering fact: The media have been reporting that the first debate pulled Kerry back into a "statistical dead heat." This is a source of exhilaration in the Kerry camp and (if we can believe media reports) significant re-evaluation in the Bush camp. It has certainly affected the moods of their supporters. But there is a good chance that this Kerry bounce was inconsequential. According to Zogby and Rasmussen -- two of the most reliable and respected polling agencies -- the Bush lead had already devolved into a "statistical dead heat" and the debate had no significant impact on the overall race.
Granted, these two polls are a minority, but in polling, unfortunately, the minority is often right. For a vivid example, consider the polls taken the last weekend before the 2000 presidential election. Since the election itself was a virtual dead heat, well-conducted polls should have called it within that 3% margin of error -- with some going for Gore and some going for Bush. But that is not what happened. PollingReport.com lists the scientifically valid polls taken in that final weekend: fully 17 gave Bush a lead, ranging from 1% to 9%, while only two predicted that Gore would win (by 2% and 1%); one called it a tie. Even if you remove the absurd 9% Bush advantage, the average of the polls would have predicted a Bush win by 3% -- which in our Electoral College system would have translated into something like a 100-vote electoral majority. In other words, even in a collection of the best polls doing their very best to predict an election, the majority was wrong and only a small minority was right.
Consider then that there are three extant interpretations of what has happened since just before the Republican Convention. In one rendering, promulgated almost unanimously by the media, Bush experienced a double-digit convention surge and held onto most of this lead until Kerry brought the race back to even with his sterling debate performance. This widely held interpretation is almost certainly wrong, but two plausible interpretations remain. The first, supported by the preponderance of polls, tracks a modest post-convention bounce for Bush and an offsetting modest bounce for Kerry after the initial debate. The second, supported by at least two respected polling agencies, finds no real bounce after either media event. We don't know which of these is correct, but it would certainly be refreshing if the American electorate were making up its mind on the basis of real issues and not staged media circuses that center on essentially unreadable polling results.
Kicking the Habit
Three things are worth remembering, if you can't kick the poll-watching habit:
(1) Any individual poll can be off by 15%.
(2) Any collection of honestly conducted polls, looked at together, will show a very wide range of results and you won't be able to tell which of them is right.
(3) Even the collective results of a large number of polls probably will not give you an accurate read on a close election.
From these three points comes the most important conclusion of all -- don't let the polls determine what you think or what you do.
Watch Out for the Pushers
Finally, let's look briefly at the way the mass media -- the pushers of this statistical drug -- use the polls to build their ratings or sales and advance their political agendas.
So why not the same in reverse? Based on subsequent polls, the media could easily have claimed that Kerry was on his way to a remarkable comeback -- a number of polls seemed to indicate this within days -- which would have triggered the same pattern in reverse. They didn't do it, however, and as a result created an ongoing pattern of demoralization among Kerry supporters and confident enthusiasm among Bush supporters for the better part of a month.
This political favoritism was, in fact, part of a larger pattern in which even the "liberal media" give the administration a "pass" on certain issues. (The New York Times and the Washington Post have even admitted that they did this on the run-up to the war.) Such favoritism is by no means inevitable, as the exposure stories on Abu Ghraib demonstrate and as the present post-first-debate Kerry "bounce" makes clear enough. Driven by poll-addicted reporters, that "bounce," based on no less reliable polling procedures than the original "Bush Convention Bounce," is getting a full measure of media attention, belatedly but effectively reversing the exhilaration-demoralization equation.
The emotional roller coaster that results from misleading fluctuations in poll results, managed by manipulative media outlets, is the most dramatic symptom of the larger problem. Together they keep us riveted on the minutiae of the debates (in this case, "presentation and demeanor" are the major foci of the analyses of why Kerry won), while distracting the electorate from the underlying issues that animated people's discontent with the Bush administration in the first place. Lost in the excitement over the Kerry first-debate victory are his promises of more troops and a more aggressive foreign policy. The rise in the polls makes this belligerent posture acceptable, and even dedicated anti-war activists end up suspending their politics in the excitement over the return of the Presidential race to a "statistical dead heat."
Our reliance on polls for political validation combines with unscrupulous press coverage of these polls to create a lethal threat to our political sanity and our political effectiveness. Our addiction to polls has done more than enhance the already unacceptable power of the media; it has also redirected our attention and efforts away from policy and toward trivial personality contests at a time when much is at stake.
Isn't it about time we began to think about how to kick the habit?
Michael Schwartz, Professor of Sociology at the State University of New York at Stony Brook, has worked for 30 years measuring and analyzing public opinion. Once upon a time, he was also a founding partner of MarketCast, where he pioneered the use of multivariate analysis in measuring attitudes toward movies while designing and executing over 1000 attitude surveys for major movie studios. He writes regularly for Tomdispatch.com. His email address is firstname.lastname@example.org.
Copyright © 2004 Michael Schwartz
[This article first appeared on Tomdispatch.com, a weblog of the Nation Institute, which offers a steady flow of alternate sources, news, and opinion from Tom Engelhardt, long time editor in publishing and author of The End of Victory Culture and The Last Days of Publishing.]