A presidential poll
To introduce students to polling and its complexities, work with students to prepare for an overnight poll on the presidential election:
In professional presidential election polls, one frequently used format is the following: If the election for president of the United States were being held today, would you vote for the Republican George W. Bush, Democrat John Kerry, or independent candidate Ralph Nader?
But even this seemingly neutral question presents problems. Might the first-named candidate have an advantage? (Pollsters tend to rotate the names.) Should the party designation be stated? (Some argue it can unduly influence the respondent.) Should other candidates be named? Should Nader be omitted? Should the question omit all names and instead ask, If the election for president of the United States were being held today, whom would you vote for? This question, like the original one, assumes of course that the respondent will vote for someone, but in fact he or she may not vote at all.
Consider with students: What question or questions should be on their poll? Students should understand that how a question is worded may influence how it is answered.
Write proposed questions on the chalkboard, then consider them closely with students. Should a question include the names of all candidates or should the respondent be asked to name the one of his/her choice? Should a question ask if the respondent is registered to vote? is likely to vote? Decide on the exact wording to be used by all student pollsters.
The poll might be addressed only to students in your school who are eligible to vote. Such a poll could provide insights into the likelihood of their voting as well as into the preferences of potential new voters. Or the poll might be addressed to both potential student and adult voters. How many people should each student be expected to poll? Should polling be done face-to-face? By telephone? By e-mail? Some combination? Does the method make any difference? In exactly what form will students report results?
All of these matters should begin to make students aware of issues and problems in polling.
In class the next day, ask students to report the results of their polling efforts, including any problems. Consider with them the degree of accuracy class poll results might have. Among the questions to address:
- How many people did we poll?
- Do these people constitute a random sample of the U.S. adult population? (Do students understand what is meant by "random sample"?)
- How many of these people might change their minds? are undecided? are likely to vote?
- What evidence do we have for an answer to any one of the above questions?
- To what degree do you think our poll produced significant results? Why?
After discussing these questions—and especially the concept of "a random sample"—ask students to consider the reading below.
Issues and Problems in Polling
In 1936 the Literary Digest, a well-known magazine at the time, conducted a poll that predicted Alf Landon, Republican, as the landslide winner of the presidential election that year. There was a landslide winner, but it was Franklin Delano Roosevelt, who captured 523 of 531 electoral votes and 62.5 percent of the two-party popular vote. The Literary Digest went out of business.
What went wrong? Modern polling was relatively new at the time. The Literary Digest got its information by sending postcards to telephone and automobile owners as well as magazine subscribers and asking for their voting choice. But this was 1936. The country was in a severe economic depression. Millions of Americans were jobless and struggling to put food on the table. Many didn't have telephones and couldn't afford magazine subscriptions, much less a car. About 23 percent of those who received postcards answered. A majority were probably relatively prosperous and more likely to be Republicans. A significant majority picked Landon.
The Literary Digest's biggest mistake was that it did not poll a random sample of the population, a key element in today's polling. A sample is a representative portion of a larger group and, if selected truly at random and in sufficient numbers, will reflect with some accuracy the views of that larger group. It's comparable to a blood test for anemia or diabetes, in which the doctor needs only a small tube of blood to reveal a pattern that holds for the rest of the blood in a person's body.
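The difference between a random sample and a biased one can be made concrete with a short simulation. This is an illustrative sketch only: the population size, the phone-ownership rate, and the candidate preferences below are invented for the demonstration, not taken from 1936 data.

```python
import random

random.seed(1)

# Invented electorate: about 61 percent favor candidate A overall,
# but the 30 percent who own telephones (the more prosperous group)
# lean the other way, toward candidate B.
population = []
for _ in range(100_000):
    has_phone = random.random() < 0.30
    # Phone owners split 40/60 for A/B; everyone else splits 70/30.
    favors_a = random.random() < (0.40 if has_phone else 0.70)
    population.append((has_phone, favors_a))

def support_for_a(sample):
    """Fraction of a sample that favors candidate A."""
    return sum(favors_a for _, favors_a in sample) / len(sample)

# A truly random sample of 1,000 tracks the whole population...
random_sample = random.sample(population, 1000)
# ...but a phone-owners-only sample, like the Literary Digest's
# postcard list, badly misses it.
phone_sample = random.sample([p for p in population if p[0]], 1000)

print(f"True support for A:         {support_for_a(population):.1%}")
print(f"Random-sample estimate:     {support_for_a(random_sample):.1%}")
print(f"Phone-only sample estimate: {support_for_a(phone_sample):.1%}")
```

However large the phone-only sample is made, its error does not shrink; a bigger biased sample is just a more confident wrong answer, which is the Literary Digest's lesson in miniature.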
Another Literary Digest error was to rely on responses well in advance of the election. There are always people who are undecided until the last minute and others who change their minds in the final days of a campaign.
Dr. George Gallup became well known as a pollster at the time of the Roosevelt election because he correctly predicted the winner. But he, too, blundered in the 1948 campaign when he predicted a victory by Thomas Dewey over Harry Truman by 5 to 15 points. Truman won by more than four percentage points. Once again the problem was inaccurate sampling, undecided voters, and late voter shifts.
Since those days the Gallup organization and other polling groups have had more than a half century to refine their techniques and produce more accurate results. Political polling is now usually done through computer-generated, randomly selected telephone calls to some 750 to 1,500 potential voters. And today almost everyone has a telephone. Polling organizations also adjust for such factors as geographic region, sex, race, marital status, and age. This adjustment, while necessary, is also a possible source of error. Pollsters continue their efforts right up to an election to get results from the undecided and to catch those who have changed their minds. But no matter how scientific they claim to be, polls that predict election results often still get them wrong—though usually not as wrong as the Literary Digest was in 1936.
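The kind of adjustment described above is essentially weighting: groups that are over- or under-represented in the sample are counted less or more so that the sample matches the population. A minimal sketch with invented numbers and a single demographic variable (real pollsters weight on many variables at once, which is where the extra error can creep in):

```python
# Invented example: women are 51% of the population but, say, 60% of
# our 1,000 respondents, and the two groups differ in candidate support.
sample = {
    # group: (share of sample, share of population, support for candidate)
    "women": (0.60, 0.51, 0.55),
    "men":   (0.40, 0.49, 0.45),
}

# The unweighted estimate averages by each group's share of the sample,
# so the over-represented group counts too heavily.
unweighted = sum(s_share * support for s_share, _, support in sample.values())

# The weighted estimate rescales each group to its population share.
weighted = sum(p_share * support for _, p_share, support in sample.values())

print(f"Unweighted estimate: {unweighted:.1%}")
print(f"Weighted estimate:   {weighted:.1%}")
```

The weights themselves depend on assumptions about what the electorate will look like on Election Day, which is why, as the reading goes on to note, the adjustment is both necessary and a possible source of error.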
Consider the results of polls released just before Election Day, November 7, 2000. Eleven national poll results were released in the two days before the election. Only two (CBS and Reuters/MSNBC/Zogby) got the actual vote right, showing Al Gore leading George W. Bush by 45 percent to 44 percent and by 48 percent to 46 percent, respectively. One (Harris) showed the candidates tied. But eight showed Gore losing by as many as five percentage points, which would have meant a Bush landslide. The eight polls were wrong by as much as eight points, though some others were within stated "margins of error."
Most pollsters will state a 95 percent confidence that their results will be within a 3 or 4 percent margin of error. This means that if every adult in the nation could be polled with the same questions, in the same way, and at the same time as the actual poll was taken, the poll's results would be within three or four percentage points of that full count 95 percent of the time. Which means, of course, the pollster is admitting that 5 percent of the time results may be off by more than three or four percentage points.
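The familiar "3 or 4 percent" figure follows from the standard margin-of-error formula for a sampled proportion, about 1.96 × √(p(1−p)/n) at 95 percent confidence. Here is a minimal sketch, assuming a simple random sample and the worst case p = 0.5 (real polls carry extra error from weighting and non-response, so published margins can be larger):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample
    of size n, using the worst-case proportion p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

# The sample sizes mentioned above bracket the familiar 3-4 point range.
for n in (750, 1000, 1500):
    print(f"n = {n}: +/- {margin_of_error(n):.1%}")
# n = 750:  +/- 3.6%
# n = 1000: +/- 3.1%
# n = 1500: +/- 2.5%
```

Note that the margin shrinks only with the square root of the sample size: quadrupling the number of respondents merely halves the margin, which is one reason pollsters settle for samples in the 750-1,500 range.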
Now consider a poll released by the Washington Post and ABC News on November 14, one week after Election Day when the final results were still uncertain, that showed 45 percent of the public wanted Bush to become president, 44 percent preferred Gore, 6 percent favored neither, 4 percent had no opinion, and 1 percent wanted "other."
But when the actual vote count from Election Day was completed, it showed:
Gore.....50,996,116 (48 percent) - 266 electoral votes
Bush.....50,456,169 (48 percent) - 271 electoral votes
Other.... (4 percent)
Gore won the popular vote by 539,947 votes. In short, a major poll was wrong about the results of an election that had already taken place.
Another vital factor that may undermine the accuracy of political polls is uncertainty about who will actually vote on Election Day. Almost anything can turn a likely voter into a non-voter—a rainy day, an appointment, limited interest, unfinished work. So readers of polls should take into serious consideration the word "likely" when pollsters refer to their respondents as "likely voters."
During the 2000 presidential campaign the "expert" predictions were for a low turnout. This usually means that the most motivated voters, the best educated, and the richest will provide most of the votes. In this case, argues Anna Greenberg in The Nation, it also meant that the turnout would favor Republicans "since high socioeconomic status is associated with political participation and conservative political preferences. To accommodate these predictions, the polls screened tightly for those most likely to vote...and in some cases 'weighted up' the GOP share of the sample. All those adjustments meant that most of the national polls going into Election Day showed a 2- to 5-point Bush lead." (Anna Greenberg, "Why the Polls Were Wrong," The Nation, 12/14/2000)
During the 2000 campaign, pollsters were also very much aware that they had overstated the size of Bill Clinton's lead over Bob Dole in the 1996 presidential election. One result was that they tended to ignore the big get-out-the-vote efforts by labor unions and civil rights organizations, both traditional supporters of the Democrats and thus of Gore.
Another reason why so many pollsters were wrong in 2000 is that "media outlets are competing for market share in an ever-expanding universe of polling data," writes Greenberg. In short, there is lots of competition among polling organizations, and they tend to develop polling models (including the size and nature of the electorate) that allow them to claim that they are using the most scientific method for predicting the result of an election. It is difficult for them to change their model in the middle of a campaign based on new information about the electorate. "There is no insidious conspiracy to rig the polls in favor of one candidate or another. The national pollsters who partner with media outlets are respected survey researchers." But, for instance, in 2000 more people turned out to vote than pollsters expected, and this skewed their results.
In 2004, as in 2000, polling is playing a major role in the presidential campaign. The media report and discuss the latest poll results endlessly—and not just those on who's ahead. What do voters think about Kerry's personality? Bush's plan for dealing with Iraq or taxes? Which candidate has the better healthcare proposals? Which is more honest? Reporters and pundits analyze the slightest changes in what polls report about voter perceptions. These polls are as subject to error as those that claim to tell us who's ahead.
Polls become tools for political manipulation. Political consultants for Bush and Kerry rely on the polls to propose shifts in what the candidates should say and where, hoping to influence not just voters but also the next set of polls. For poll results themselves can influence how and even whether people will vote. They can also play a big role in how successful fundraisers are, since positive poll results can help raise money for a candidate.
In the meantime, poll results fluctuate daily. Bush's numbers go up after the Republican convention, then down after his first debate with Kerry. Polls say the candidates are deadlocked. They say Kerry is picking up steam. They say Bush is inching ahead. For the media, competition and conflict sell "news." They report the presidential campaign as if it were a horse race: Who's ahead in the polls? Why is so-and-so slipping behind? Can he catch up? What must he do and why? Who will make it to the finish line first? In the process, the issues and problems that the campaign is supposedly about can be lost.
Despite their known limitations and history of inaccuracy, polls help drive the election process. At their best, they can highlight the issues that most concern people and illuminate how they are thinking and feeling. They can also be fodder for media trivialization.
Michael Schwartz, an expert on polling, says there are three things worth remembering about polls:
1) Any individual poll can be off 15 percent.
2) Any collection of honestly conducted polls looked at together will show a very wide range of results and you won't be able to tell which of them is right.
3) Even the collective results of a large number of polls probably will not give you an accurate read on a close election.
"From these three points comes the most important conclusion of all: Don't let the polls determine what you think and do." (Michael Schwartz, Professor of Sociology, State University of New York at Stony Brook. Schwartz has worked for 30 years measuring and analyzing public opinion. See www.tomdispatch.org for 10/4/04.)
Meanwhile, with Election Day 2004 looming, a New York Times headline announced:
AS DEADLINES HIT, ROLLS OF VOTERS SHOW BIG SURGE
The report declares, "A recent surge of potential new voters has swamped boards of election from Pennsylvania to Oregon, as the biggest of the crucial swing states reach registration deadlines today. Elections officials have had to add staff and equipment, push well beyond budgets and work around the clock to process the registrations.
"'Everything we're seeing is that there has been a tremendous increase in voter registration,' said Kay Maxwell, president of the League of Women Voters. 'In the past, we've been enthused about what appeared to be a large number of new voters, but this does seem to be an entirely different level.'
"Registration numbers are impossible to tally nationwide, and how many of the newly registered will vote is a matter of some debate...."
A Pennsylvania director of voter services said, "The vote was so close four years ago, people are now thinking, hey, maybe my vote does count."
Who will show up at the only polls that count? And how will they vote? No matter what the pollsters predict, we cannot know until November 2.
For discussion

1. Why was the Literary Digest poll of 1936 so wrong?
2. What do pollsters regard as key factors in "scientific" polling?
3. Why, despite their best efforts, may the pollsters' results be inaccurate?
4. Why were the Gore-Bush poll results inaccurate?
5. How can polls be "tools for political manipulation"? By whom and why? "Fodder for media trivialization"? By whom and why?
6. What does Michael Schwartz regard as his most important conclusion? Why?
7. What is the potential significance of the great increase in voter registration this election year? Why is it "potential"?
A mind-boggling amount of information about polls and polling is available on the web. A sampling:
Project Vote Smart: justfacts.votesmart.org/. Click on this non-partisan organization's site map, then click on "polling," and you will have links to a number of the best-known polling organizations.
Public Agenda: www.publicagenda.org. See this site for a list of important questions/answers on polling.
For links to a great many mainstream polls as well as to liberal commentaries, go to www.betterworldlinks.org.
This lesson was written for TeachableMoment.Org, a project of Morningside Center for Teaching Social Responsibility. We welcome your comments. Please email them to: firstname.lastname@example.org