Vanderbilt Political Review
Vanderbilt's First and Only Nonpartisan Political Journal

The Problem with Polls

In the wake of the 2015 general election in the United Kingdom, statistician Nate Silver claimed, “the world may have a polling problem.” Polls leading up to the election incorrectly predicted a majority of the races, just as they had misjudged the victory margin in the 2014 Scottish independence referendum. Similarly, polls have failed to accurately predict several elections in the United States. For example, before the recent Democratic primary in Michigan, most polls had Hillary Clinton beating Bernie Sanders by over twenty percentage points; after the votes were counted, however, Sanders ended up beating Clinton 50 percent to 48 percent.

Silver and many other journalists believe that these problems are indicative of larger issues within the polling community. In 2015, Cliff Zukin, writing in The New York Times, blamed the growth of cellphones and the decline of willing survey participants for the “polling crisis.” Under the 1991 Telephone Consumer Protection Act, it is illegal to call cellphones using automatic dialers, which limits pollsters’ access to a truly random sample of respondents and drives up the cost of conducting telephone polls, since more time must be spent manually dialing random numbers. Additionally, the response rate for telephone surveys has fallen from 80 percent in the late 1970s to 8 percent in 2014. This makes it harder both to obtain a random sample of the population and to collect enough responses to draw valid conclusions.
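To put the sample-size worry in perspective, here is a minimal Python sketch of the standard margin-of-error formula for a simple random sample. The 10,000-call figure is an illustrative assumption, not a number from any of the polls discussed:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95 percent margin of error for a simple random sample of size n.

    Uses the worst case p = 0.5; z = 1.96 is the 95 percent critical value.
    """
    return z * math.sqrt(p * (1 - p) / n)

# 10,000 dialed numbers at an 80 percent response rate vs. an 8 percent rate.
for rate in (0.80, 0.08):
    n = int(10_000 * rate)
    print(f"response rate {rate:.0%}: n = {n:,}, "
          f"margin of error = ±{margin_of_error(n):.1%}")
```

The formula shows that falling response rates widen the margin of error only modestly; the more serious danger, which sampling error alone does not capture, is that the 8 percent who still answer may differ systematically from the 92 percent who do not.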

The problems with polls are not a recent development; their fallibility dates back to the early twentieth century, when newspapers began trying to predict election outcomes. In 1936, Literary Digest mailed millions of “ballots” to its mailing list in order to predict the winner of the Landon-Roosevelt election. Based on the results of their poll, the Digest declared that Alf Landon would beat FDR in a landslide, but as history has shown, this was not the case. George Gallup understood that the Digest’s mailing list, drawn from telephone directories and automobile registration lists, was not a representative sample of the United States’ population, and that Republicans would be overrepresented and Democrats underrepresented. Indeed, the results of the election proved Gallup right, and future polls modeled after Gallup’s work were sure to use a quota of respondents proportional to the population of the United States (if the U.S. population is 50 percent male and 50 percent female, then a poll’s sample should likewise be 50 percent male and 50 percent female).
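As an illustration of the quota idea, here is a minimal Python sketch of proportional quota allocation. The population shares below are assumed for illustration, not actual census figures:

```python
def quota_allocation(shares: dict[str, float], n: int) -> dict[str, int]:
    """Allocate interview quotas proportional to assumed population shares.

    Largest-remainder rounding keeps the quotas summing exactly to n.
    """
    raw = {group: share * n for group, share in shares.items()}
    quotas = {group: int(r) for group, r in raw.items()}
    # Hand leftover interviews to the groups with the largest remainders.
    leftover = n - sum(quotas.values())
    for group in sorted(raw, key=lambda g: raw[g] - quotas[g], reverse=True)[:leftover]:
        quotas[group] += 1
    return quotas

# Assumed, illustrative shares matching the example in the text.
print(quota_allocation({"male": 0.50, "female": 0.50}, 1000))
# {'male': 500, 'female': 500}
```

Gallup-style quota sampling was a clear improvement over the Digest’s self-selected mailing list, though modern pollsters generally prefer probability sampling, since interviewers filling quotas can still introduce selection bias of their own.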

A poll’s sample size and makeup are not the only opportunities for error; the types of questions pollsters ask also have a huge impact on a poll’s results. For example, in 2008 the Pew Research Center found a striking difference between asking respondents “What one issue mattered most to you in deciding how you voted for president?” as an open-ended question and asking it alongside a list of issues to choose from. When asked the open-ended version of the question, only 35 percent of respondents answered that the economy was the largest factor in determining their vote, while 58 percent of those who were asked the closed-ended version gave the same answer.

Similarly, the wording of a question can impact how people respond. In 2003, Pew asked Americans if they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule,” and 68 percent of respondents said they favored military action. However, when people were asked if they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule even if it meant that U.S. forces might suffer thousands of casualties,” only 43 percent favored military action. This shows how important context is when asking a question, as well as how easily poll results can appear biased.

The Vanderbilt Political Review recently witnessed firsthand how difficult it is to write and distribute an unbiased and representative poll. We sent out a Google Forms survey to different departments on campus in order to measure the political interest and affiliation of faculty members. Not only did we receive a wide array of responses, but, as is to be expected from a poll of faculty at a top research institution, we also got plenty of feedback about the survey itself.

One professor told us, “If this were a professional poll, I would have stopped, but [because] I really like what you are trying to do, I completed it,” while another said, “I think most of the questions are very badly thought through.” Yet another professor sent us a note saying, “Some questions were difficult to answer because the question is to[o] complex for a simple answer. For example, asking if I can trust ‘the government’ – what’s ‘the government’? The executive branch? Congress?”

All respondents who sent us feedback gave helpful advice for creating another poll in the future. The most interesting part of this exercise, however, is not that Vanderbilt faculty had problems with the wording of our survey, but that the questions they criticized came from past Pew, Gallup, and other influential polls. For example, one question posed to faculty was “Do you feel things in this country are generally going in the right direction, or do you feel things have pretty seriously gotten off on the wrong track?” This question offered two responses, “right direction” and “wrong track,” and came from a 2015 Washington Post survey, yet a professor critiqued its lack of nuance. Another question asked professors, “In general, do you think affirmative action programs designed to increase the number of black and minority students on college campuses are a good thing or a bad thing?” and offered two responses: “Good thing” and “Bad thing.” The exact wording for this question, as well as its possible answers, came from the Pew Research Center, yet one Vanderbilt professor told us “the specific mention of ‘Black and other minorities’ in the…question…[deployed] long-standing cultural myths regarding the targets, functions, and outcomes of affirmative action. The specific citation of ‘Blacks’ in the question speaks to the controlling image of African-Americans, in particular, as the recipients of unmerited favor in higher education.”

While the faculty response to this poll may be a product of its context (an undergraduate organization created it, after all), I believe that the professors’ opinions regarding our questions speak to a deeper flaw in the ways polls are conducted and used in this country. If Vanderbilt professors think that the wording of questions from Pew and other professional polling organizations is problematic, then those organizations should probably reexamine how they write their questions, especially as the poll itself is becoming one of the most important tools in American politics.

[Image: “Dewey Defeats Truman” photograph (Chicago Tribune): http://www.trbimg.com/img-50222f13/turbine/chi-histdewey_truman20080104104817/500/500×281]
