
Don't Be Too Tough on the Polls. It's Actually Not Their Fault

Scott Morgan
/
South Carolina Public Radio

The 2016 presidential election was, by any account, notable. How it turned out was also, to most observers, a surprise. Regardless of ideology, most people assumed a Hillary Clinton victory, a perspective informed by poll after poll that showed her cruising to a comfortable win.

Post-election, a lot of people questioned the validity of polls that said one thing while actual results seemingly showed something entirely different. And a lot of people still question polls, wondering how valid they are heading into a 2020 presidential election that promises to be, by any account, lively.

Under all this is the key question: Did election polls in 2016 actually get it all wrong?

No, actually, they didn’t. Not the national polls, at least. And certainly not the academic political science models that pretty much all accurately projected the eventual Donald Trump win, says Dr. Scott Huffmon, political science professor at Winthrop University in Rock Hill and director of the Winthrop Poll – a “snapshot” (not a predictor!) of South Carolina and the South in general since 2006.

“From the mid-2000s to 2016, national polls were actually nailing it,” Huffmon says. “The national polls said Hillary would win the popular vote by 2 to 4 percent; she won the popular vote by 2 to 4 percent.”

The key words in there are “national polls” and “popular vote.” National polls, after all, measure national sentiment – a grand-total kind of measurement. And when all the opinions, as a country, were tallied up by pollsters, more people in 2016 were saying they would vote for Clinton than for Trump – and that is what happened once the votes were totaled across all 50 states.

But with the Electoral College, the aggregate number of votes for one candidate over the other is not the deciding factor.

Think of it like the World Series. You could outscore the other team by 100 runs over seven games. But if you only win three games, you don’t win the series, no matter how big your run total.

The national polls in 2016 did not measure the Electoral College. So what they showed – a Clinton popular vote victory – was actually spot on, Huffmon says.

Meanwhile 2016 political science models – academic calculations that count opinion polls along with other factors, like the economy and Electoral College patterns – consistently predicted that Trump would take the White House. So those were right too.

People just didn’t believe it at the time.

“You had all these poli-sci models saying Trump’s going to win,” Huffmon says, “and individual political scientists going, ‘That can’t possibly be right, that’s not what the polls are looking like.’”

The doubt was at least partly due to the wildcard kind of candidate Trump was – an outsider who did not at all follow traditional, even sacrosanct conventions of how to run a presidential campaign. And, Huffmon says, that doubt was also partly due to the fact that the polls and poli-sci models were, uncharacteristically, tracking two differing sets of outcomes.

“Usually, the outcome tracks pretty darn well with the polling,” he says. It was just that the national polls were so favorable to Clinton that a lot of poli-sci model curators couldn’t square the numbers up.

In fact, consider this tweet from one of the most noteworthy and respected election gurus in the country, Alan Abramowitz of Emory University. On May 3, 2016, a full six months before Election Day, he tweeted: “Trump’s nomination means Hillary Clinton now almost certain to win in November and by a comfortable margin.”

If hindsight makes that statement worthy of a chuckle or two, consider that Abramowitz wrote it in opposition to what his own, highly respected poli-sci model was telling him – that Donald Trump would win, with just over 51 percent of the vote. So even Abramowitz, with a pile of data in front of him, turned toward what the polls (not the political science models) were saying, likely in part because of how utterly Clinton-friendly the poll numbers looked.

So when it comes to what went wrong with polls in 2016, the answer nationally is: nothing. It wasn’t the numbers in the polls that were wrong; it was the assumptions about what they were predicting. And Huffmon would be the first to tell you that a poll should not be thought of as a predictor, merely as a snapshot of sentiments at the time they were collected.

All that said, there’s a lot of nuance to consider, because not every poll was right. While national polls were getting it right, statewide polls weren’t always so accurate. Winthrop’s polls of South Carolina and the South ended up being accurate on the overall answers, if not always exact about the margins. In other words, Winthrop correctly showed who would win certain states; by how much tended to fluctuate.

Meanwhile, other states’ polls were all over the place. And much of the reason had to do with how state-level polling was conducted, particularly in states not so used to being polled as, say, Ohio or Florida.

“In a lot of the states that weren’t regularly doing polling, not all of them had started moving as much to doing cell phones,” Huffmon says.

Pollsters would call landlines, he says, and then apply statistical weights – compensating factors that account for various kinds of people not fully represented in a survey – in order to craft a more accurate picture of the real-world population they were sampling.

“The problem is, those weights had to get more and more outlandish,” he says.

There’s a lot to unpack here. First, calls to landlines were always a polling organization’s bread and butter. An autodialer could generate several calls at once and still yield a broad, representative sample of the voting population with a manageable margin of error, even with a high failure rate (i.e., people not answering or refusing to take the survey).
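To put a rough number on “manageable margin of error,” the standard back-of-the-envelope formula for a simple random sample is sketched below. This is for illustration only – the sample size is hypothetical, and real pollsters layer weighting and design effects on top of it.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n.

    p=0.5 is the worst case (maximum variance); z=1.96 is the
    critical value for 95% confidence.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A typical statewide sample of about 1,000 respondents (illustrative):
print(f"{margin_of_error(1000):.1%}")  # 3.1% -- the familiar "plus or minus 3 points"
```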

But on the cusp of the third decade of the 21st century, fewer and fewer people have landlines, much less answer them (Huffmon admits to being in the latter category). In South Carolina, he says, effectively 75 percent of people are really only reachable by cell. That would be no problem for polling agencies targeting smaller areas like a state or metro if they were allowed to plug cell numbers into the autodialer. But they can’t: in the United States, it is illegal to have a machine autodial a cell phone for a polling call. A live person has to dial the number.

That, of course, makes getting large numbers of people cumbersome for polling organizations, though it should be noted that the Winthrop Poll (not yet wrong about a major election it’s tracked) does have live people calling cell numbers to get a more accurate representative sample of the voting population across the South.

Organizations that don’t have the resources (or possibly the inclination) to call large numbers of cell phones by hand tend to rely on the old-fashioned, and increasingly obsolete, method of autodialing landlines. The issue, though, is that as landlines become less common, the people who still have them tend to be older and more conservative. So when Huffmon talks about statistical weights getting more outlandish as time moves along, he means this kind of dynamic, where the type of voter an agency reaches most often does not accurately reflect an area’s overall demographics.

For example, about 51.5 percent of adults in South Carolina are female. So a poll that got exactly 50 percent women and 50 percent men in the Palmetto State would need a minor statistical weight factored in that would count women slightly more than men.
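In arithmetic terms, that weight is just the population share divided by the sample share. Here is a minimal sketch using the numbers above (the 50/50 sample is the article’s hypothetical):

```python
# Post-stratification weight = population share / sample share.
population = {"female": 0.515, "male": 0.485}  # SC adults, per the article
sample     = {"female": 0.500, "male": 0.500}  # what the poll actually got

weights = {group: population[group] / sample[group] for group in population}
print(weights)  # {'female': 1.03, 'male': 0.97} -- each woman counts 1.03x
```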

But, in a population of several million adults, in a state where the median age, according to the U.S. Census, is just under 40 years old, and where about 40 to 45 percent of voters identify as some level of progressive, getting predominantly residents over 50 who lean conservative does not accurately represent the panoply of ages and political leanings in South Carolina. So pollsters relying on landlines need to apply weightier and weightier statistical compensation to try to figure out how the overall population of the state feels.
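One way to see why “weightier and weightier” compensation is costly: uneven weights shrink a poll’s effective sample size. The formula below is Kish’s standard approximation from survey research; the skewed sample is hypothetical, invented to illustrate the dynamic Huffmon describes.

```python
def effective_sample_size(weights):
    # Kish's approximation: a heavily weighted sample of n people
    # carries only as much information as a smaller unweighted one.
    return sum(weights) ** 2 / sum(w * w for w in weights)

# Mild weighting (the 51.5/48.5 gender fix): essentially no cost.
mild = [1.03] * 500 + [0.97] * 500
# Outlandish weighting: a landline sample so tilted toward older,
# conservative respondents that scarce younger voices must count 5x.
skewed = [5.0] * 100 + [0.56] * 900

print(round(effective_sample_size(mild)))    # ~999 of 1,000 respondents
print(round(effective_sample_size(skewed)))  # ~362 -- a third of the poll's power
```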

And as these weights get more and more “outlandish,” as Huffmon puts it, the effect is increasingly like trying to fill bigger and bigger holes in the wall with caulk. At some point, someone is going to have to just hang a new piece of drywall.

Exactly how to adjust for a population that’s getting further and further out of reach is something Huffmon says is keeping political scientists and poll curators everywhere up at night. One solution could be online polling, but apart from the U.K.-based YouGov, online polling is wildly unpredictable – and nobody, Huffmon says, really knows exactly why YouGov seems to be able to get such good results online yet.

So replicating it will have to wait. And probably until after 2020.