Margin of error (MOE) is a statistical term intended to convey accuracy, typically around a percentage observed in a study. In political polling it is often used to describe agreement with an attitude (e.g., 25% of Americans agree that we should ban Muslim immigrants from entering the country; ABC News poll, Nov. 15, 2015) or the likelihood of a behavior (e.g., “Poll shows a five-point lead for Obama in North Carolina”; National Journal, Oct. 23–27, 2008).
If you poll 400 people representative of the population you’re trying to understand, and 75% of them say they do not agree with the proposal to ban Muslims from immigrating to the US, then you can conclude with 95% confidence that the true percentage of people who disagree with that proposal falls within +/- 4.9% of your 75% polling figure (4.9% being the maximum MOE for a sample of 400 at the 95% confidence level).
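For the curious, here is a minimal sketch of the standard normal-approximation formula behind those numbers (the function name and structure are illustrative, not from any polling library):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a proportion p from a simple random sample
    of size n, at the confidence level implied by z (1.96 ~ 95%)."""
    return z * math.sqrt(p * (1 - p) / n)

# Exact MOE for the observed 75% share with n = 400:
moe_exact = margin_of_error(0.75, 400)   # ~0.042, i.e. +/- 4.2 points

# Pollsters typically report the worst case (p = 0.5), which yields
# the +/- 4.9% figure for a sample of 400:
moe_max = margin_of_error(0.50, 400)     # ~0.049, i.e. +/- 4.9 points
```

Note that the commonly quoted MOE is the conservative worst case at p = 0.5; the MOE around the observed 75% is actually a bit tighter.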
Sounds pretty good, right? Yes, the central limit theorem and sampling theory are wonderful things.
The Can’t Say/Won’t Say Issue
But what if the data you were capturing in that expressed attitude or intended behavior did not actually reflect the behavior you were trying to predict? What if people were either unwilling to explicitly report their true attitude or unable to predict their future ballot booth behavior? What use, then, is your MOE around those explicit answers in your survey?
On the eve of the first presidential debate between Hillary Clinton and Donald Trump, the headlines are reporting a virtual dead heat in the polls. The explicit polling shows Clinton with a 49%-to-47% lead over Trump, which is characterized as a statistical tie because of the MOE: “Clinton’s two-point edge among likely voters, in both the four-way and two-way ballot tests, is within the survey’s 4.5 percentage-point margin of sampling error.”
The Futility of Explicit Polling
Last fall, ahead of the Republican debates, we showed the futility of explicitly polling people on questions related to this election. Trump’s initial proposal to place a temporary ban on Muslim immigrants entering the country received only modest explicitly stated approval (25%, with a MOE of +/- 3.1%). Yet a Sentient Prime test of implicit attitudes toward Trump’s proposal revealed that 53% held an unexpressed positive view, and Trump went on to vanquish 16 competitors for the Republican nomination.
In the historic 2008 election, explicit polling in North Carolina showed Obama with a sizable lead among independents (+18 points) and an advantage in the state overall. However, Sentient Prime implicit data revealed significant unexpressed favorability toward McCain (+10 points). On voting day, independents’ ballot-booth behavior matched the implicit data, not the explicit: actual independent votes favored McCain, making the outcome razor thin in North Carolina.
Most recently, in this past primary season, we found that combining true System 1 implicit measures with explicit System 2 measures provides more accurate forecasts of actual voting behavior: the combined explicit-and-implicit model was nearly 70% more accurate in predicting the outcomes of 14 state primary races.
What Should We Make of the MOE?
So how should we interpret the explicit polling numbers this fall as the November election approaches?
The answer is that putting a MOE around numbers that do not accurately reflect true attitudes gives you a false sense of certainty about the outcome you’re trying to predict. So you have a few options:
- use those explicit polling numbers as your estimate of the winner
- just go with your gut feeling, or
- conduct your own implicit test to quantify the nation’s gut feeling
To find out how you really feel about Clinton versus Trump, you can take this 2-minute implicit test here; the margin of error is small, and the results are accurate.