Why is polling inaccurate?

Across a set of 48 opinion questions and answer categories, most answer categories changed by well under 1 percentage point. The average change associated with the adjustment was less than 1 percentage point, and approximately twice that for the margin between alternative answers. The maximum change observed across the 48 questions was 3 points for a particular answer and 5 points for the margin between alternative answers.
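To make that arithmetic concrete, here is a minimal sketch, in Python with entirely invented data, of how tilting a sample's partisan mix by a few points moves an opinion estimate. The party shares, weights and party-opinion link below are illustrative assumptions, not the study's actual figures.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Hypothetical sample: party affiliation and an opinion correlated with it.
party = rng.choice(["R", "D"], size=n, p=[0.48, 0.52])
approve = rng.random(n) < np.where(party == "R", 0.15, 0.85)  # invented link

def weighted_pct(mask, w):
    """Weighted percentage of the sample for which `mask` is True."""
    return 100 * w[mask].sum() / w.sum()

# 'Balanced' weights vs. weights tilted a few points toward one party.
w_balanced = np.ones(n)
w_tilted = np.where(party == "D", 1.06, 0.94)  # shifts the party mix ~3 pts

for label, w in [("balanced", w_balanced), ("tilted", w_tilted)]:
    print(f"{label}: approve = {weighted_pct(approve, w):.1f}%")
```

Even with an opinion this strongly linked to party, a roughly 3-point tilt in the partisan mix moves the estimate by only about 2 points, in line with the small differences reported above.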

One 3-point difference was on presidential job approval, a measure very strongly associated with the vote. Two other items also showed a 3-point difference on one of the response options; all other questions tested showed smaller differences. Opinion questions on issues that have been at the core of partisan divisions in U.S. politics likewise differed only modestly between the versions. Preference for smaller versus bigger government, a fundamental dividing line between the parties, differed by 2 points between the versions.

Perceptions of the impact of immigration on the country, a core issue for Donald Trump, also varied by 2 points between the two versions. The share of Americans saying that government should do more to help the needy was 2 points higher in the tilted version than the balanced version.

Although news audiences are quite polarized politically, there were typically only small differences between the two versions in how many people reported relying on particular sources for news in the aftermath of the presidential election.

The share of people who said CNN had been a major source of news about the presidential election in the period after Election Day was 2 points higher in the tilted version than in the balanced version, while the share who cited Fox News as a major source was 1 point higher in the balanced version than in the tilted version. The complete set of comparisons among the 48 survey questions is shown in the topline at the end of this report.

Opinions on issues and government policies are strongly, but not perfectly, correlated with partisanship and candidate preference. A minority of people who support each candidate do not hold views that are consistent with what their candidate or party favors. This fact lessens the impact of changing the balance of candidate support and party affiliation in a poll. Three examples from a summer survey illustrate the point.

Shifting the focus to party affiliation among nonvoters, we see even less fidelity of partisans to issue positions typically associated with those parties. Adding more Trump voters and Republicans does add more skeptics about immigration, but nearly a third of the additional Trump voters say immigrants strengthen American society, a view shared by about half of Republican nonvoters.

This means that our survey question on immigration does not change in lockstep with changes in how many Trump supporters or Republicans are included in the poll. Similarly, the Biden voter group includes plenty of skeptics about a larger government.
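A back-of-the-envelope version of that point, under stated assumptions: the "nearly a third" figure comes from the text above, while the baseline mix and the other shares are invented for illustration.

```python
# All shares are illustrative stand-ins except the roughly-one-third figure,
# which the text attributes to additional Trump voters.

def pro_immigrant_topline(share_trump, pro_among_trump=0.32, pro_among_rest=0.75):
    """Share saying immigrants strengthen society, for a given voter mix."""
    return share_trump * pro_among_trump + (1 - share_trump) * pro_among_rest

base = pro_immigrant_topline(0.46)       # baseline mix (assumed)
shifted = pro_immigrant_topline(0.50)    # add 4 points of Trump voters
print(f"topline falls {100 * (base - shifted):.1f} pts for a 4-pt mix change")
# ~1.7 pts: well under half the mix change, because roughly a third of the
# added group holds the 'pro' view as well.
```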

Pump up Biden's support and you get more supporters of bigger government, but, on balance, not as many as you might expect. Not all applications of polling serve the same purpose. We expect and need more precision from election polls because the circumstances demand it.

In a closely divided electorate, a few percentage points matter a great deal. In a poll that gauges opinions on an issue, an error of a few percentage points typically will not matter for the conclusions we draw from the survey. Those who follow election polls are rightly concerned about whether those polls are still able to produce estimates precise enough to describe the balance of support for the candidates.

Election polls in highly competitive elections must provide a level of accuracy that is difficult to achieve in a world of very low response rates. Only a small share of the survey sample must change to produce what we perceive as a dramatic shift in the vote margin and potentially an incorrect forecast. In the context of the presidential election, a change of that small size could have turned a spot-on estimate of a 4-point Biden lead into a clear miss.
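A quick worked example of why a small compositional change produces a doubled swing in the margin (the candidate shares are illustrative):

```python
biden, trump = 0.52, 0.48                # illustrative two-way shares
print(f"margin before: {100 * (biden - trump):+.0f} pts")

shift = 0.02                             # 2% of the sample switches sides
biden, trump = biden - shift, trump + shift
print(f"margin after:  {100 * (biden - trump):+.0f} pts")
# Moving just 2% of respondents changes the margin by 4 points, because each
# switch subtracts from one candidate's share and adds to the other's.
```

Here a 2-point change in the sample turns a 4-point lead into a tie, which is exactly the kind of shift that separates a spot-on poll from an apparent miss.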

Differences of a magnitude that could make an election forecast inaccurate are less consequential when looking at issue polling.

Compared with the measurement of an intended vote choice in a close election, the measurement of opinions is more subjective and more likely to be affected by how questions are framed and interpreted. Moreover, a full understanding of public opinion about a political issue rarely depends on a single question like the vote choice. Often, multiple questions probe different aspects of an issue, including its importance to the public. Astute consumers of polls on issues usually understand this greater complexity and subjectivity and factor it into their expectations for what an issue poll can tell them.

The goal in issue polling is often not to get a precise percentage of the public that chooses a position but rather to obtain a sense of where public opinion stands.

For example, differences of 3 or 4 percentage points in the share of the public saying they would prefer a larger government providing more services matter less than whether that is a viewpoint endorsed by a large majority of the public or by a small minority, whether it is something that is increasing or decreasing over time, or whether it divides older and younger Americans. But good pollsters take many steps to improve the accuracy of their polls.

Good survey samples are usually weighted to accurately reflect the demographic composition of the U.S. population. The samples are adjusted to match parameters measured in high-quality, high-response-rate government surveys that can be used as benchmarks. Many opinions on issues are associated with demographic variables such as race, education, gender and age, just as they are with partisanship.

At Pew Research Center, we also adjust our surveys to match the population on several other characteristics, including region, religious affiliation, frequency of internet usage, and participation in volunteer activities. And although the analysis presented here explicitly manipulated party affiliation among nonvoters as part of the experiment, our regular approach to weighting also includes a target for party affiliation that helps minimize the possibility that sample-to-sample fluctuations in who participates could introduce errors.

Collectively, the methods used to align survey samples with the demographic, social and political profile of the public help ensure that opinions correlated with those characteristics are more accurate.
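For readers curious about the mechanics, below is a minimal sketch of raking (iterative proportional fitting), one standard technique for this kind of alignment. The sample, the two dimensions and the benchmark targets are all invented; a production pipeline would rake on many more dimensions and apply weight-trimming rules.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
# Invented sample that overrepresents college graduates.
educ = rng.choice(["college", "no college"], size=n, p=[0.45, 0.55])
age = rng.choice(["18-49", "50+"], size=n, p=[0.50, 0.50])

# Invented population benchmarks the weights should reproduce.
targets = {
    "educ": (educ, {"college": 0.36, "no college": 0.64}),
    "age": (age, {"18-49": 0.52, "50+": 0.48}),
}

w = np.ones(n)
for _ in range(25):                      # a few passes usually converge
    for col, shares in targets.values():
        total = w.sum()
        for level, share in shares.items():
            group = col == level
            # Scale this group's weights so its weighted share hits the target.
            w[group] *= share * total / w[group].sum()

for name, (col, shares) in targets.items():
    for level, share in shares.items():
        got = w[col == level].sum() / w.sum()
        print(f"{name}={level}: weighted {got:.3f} vs target {share:.3f}")
```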

As a result of these efforts, several studies have shown that properly conducted public opinion polls produce estimates very similar to benchmarks obtained from federal surveys or administrative records. While such studies do not provide direct evidence of the accuracy of measures of opinion on issues, they suggest that polls can accurately capture a range of phenomena, including lifestyle and health behaviors, that may be related to public opinion.

A lack of trust in other people or in institutions such as governments, universities, churches or science might be an example of a phenomenon that leads both to nonparticipation in surveys and to errors in measures of questions related to trust.

There was one poll that got Florida right — again. Rasmussen was also the major poll that came closest to predicting the outcome of the election. Its secret? Be anti-social. Instead of using human poll-takers like the other major polls, Rasmussen uses a pre-recorded voice. Human beings are social animals, and that's just as true when they're answering a telephone survey as when they're arguing on Twitter. In an America where Trump supporters are routinely called "racist" or worse, it's no surprise that many of them prefer to keep their political leanings to themselves.

Often ridiculed as the "shy Trump voter" hypothesis, this phenomenon goes by the technical name of social desirability bias. It's the most obvious explanation for the fact that while other pollsters predicted a Biden blowout, the Rasmussen poll showed a narrow lead for Trump. No social relationship, no social desirability bias.

Or at least, that's the theory. But there's another reason why all the polls are more error-prone than ever, including Rasmussen's. It's the dirty little secret the pollsters don't want you to know: virtually no one is answering their questions. In the age of the mobile phone, very few people answer calls from unlisted numbers, and even fewer want to talk to a pollster — who, for all they know, may be a fraudster in disguise. The Pew Research Centre reports that its response rates have plummeted from 36 per cent two decades ago to just 6 per cent now.

And Pew is a not-for-profit outfit that doggedly attempts to contact every sampled phone number at least seven times.
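The arithmetic behind those numbers is sobering. A rough, illustrative calculation of the recruiting effort required for a 1,000-interview poll at each response rate:

```python
target_completes = 1000
for rate in (0.36, 0.06):                # the then-and-now response rates
    dials = target_completes / rate
    print(f"at {rate:.0%} response: ~{dials:,.0f} numbers must be sampled")
# at 36%: ~2,778 numbers; at 6%: ~16,667 — six times the effort for the
# same completed sample, before counting any repeated callbacks.
```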

Commercial polling firms don't have that luxury. No major commercial polling company is brave enough to reveal its response rate.

Statistical theory puts limits on the normal error to be expected from a properly drawn sample of a given size, and the risk of a sample exceeding those limits is small. It is considered a reasonably safe gamble. It follows that if two or more polls get answers to the same questions which differ by no more than these normal errors, they have, for all practical purposes, achieved the same results.

It would be mere chance, however, if they came out with exactly the same percentages. If the polls announce results which differ by a larger margin than the normal error permits, then the difference is significant, and is caused by some other reason besides chance. The samples may not have been equally true cross sections of the population, or the questions may have been worded differently in each poll.
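The "normal error" invoked here is ordinary sampling error, and the check is straightforward to compute. A short sketch under assumed inputs (the poll shares and sample sizes are invented):

```python
from math import sqrt

def moe(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p from n interviews."""
    return z * sqrt(p * (1 - p) / n)

p1, n1 = 0.52, 1000                      # poll A (illustrative)
p2, n2 = 0.47, 1000                      # poll B (illustrative)

# Standard error of the difference between two independent poll estimates.
se_diff = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z = (p1 - p2) / se_diff
print(f"each poll's MoE is about ±{100 * moe(p1, n1):.1f} pts")
print(f"gap of {100 * (p1 - p2):.0f} pts gives z = {z:.2f}")
# |z| > 1.96: a gap this large exceeds what normal sampling error alone
# would produce, so some other cause is likely at work.
```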

The champions of polling methods say that the evidence is strong that the major polls get the same results for all practical purposes.

Moreover, their figures tend to vary less and less from true figures—such as election returns—as the years go by. In 1936 the Gallup Poll was 6 percent wide of the actual percentage division of the votes cast for Landon and Roosevelt; in 1940 it was 4 percent off. However, in 1944 the Gallup Poll was much more off the mark in guessing the outcome of the electoral votes. It indicated a fairly close race—so far as electoral votes are concerned—between Roosevelt and Dewey, but actually Roosevelt obtained 432 electoral votes and Dewey only 99. Thus, its state sampling was much less accurate than its national sampling.

The Fortune poll, conducted by Elmo Roper, has varied only 1 percent from the results in each of these elections. The pollers claim that these results are better than those of the more haphazard methods used in the past.

They far surpass, for example, the Literary Digest poll of 1936, which was not based on a carefully selected cross section of the population and which resulted in an error of 19 percent. That error helped to end the existence of the Literary Digest. To summarize the means by which one can try to tell the difference between reliable and unreliable polls: ask whether the sample is a true cross section of the population, whether differences fall within the normal sampling error, and how the questions are worded.
