A conversation with Doug Rivers, Chief Scientist of YouGov

Doug Rivers is a Senior Fellow at the Hoover Institution and a Professor of Political Science at Stanford. In addition to consulting for CBS News, he is the former CEO of YouGov/Polimetrix and the current Chief Scientist and a Director of YouGov PLC, an international online market research and polling firm. YouGov polls extensively on elections, including the 2016 presidential election. Below is a transcript of our conversation, lightly edited for clarity.

***

Maddie McConkey: Can you explain how YouGov’s polling works, and how similar it is to the rest of the industry?

Doug Rivers: There are a lot of different ways to do polling these days. There’s telephone polling, which uses either voter registration lists or random digit dialing; those are the two main methods of doing telephone polling. Most internet surveys are done by running advertisements to get people to join an internet panel, where they then take surveys. Some companies instead recruit their internet panels by telephone or even by mail. But all internet polls use panels of pre-recruited people who then take surveys.

MM: Which type of polling does YouGov do?

DR: We have what’s called an opt-in panel: we advertise to people, and as they join we build up a panel of hundreds of thousands of people who are representative of the population.

MM: So there’s no “shy voter” situation?

DR: There’s a related problem, and it’s even worse for telephone polls: people’s willingness to take a survey goes up and down over time. We found, for example, that during this election, when the Access Hollywood videotape was released, Trump supporters were about 4% less likely to take a survey than Clinton supporters. When the Comey statement came out, Clinton voters were 4% less likely to take a survey than Trump voters. We know that because we’ve asked the same people over time which candidate they support, and there was almost no switching.
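
*As a rough illustration of the measurement Rivers describes, here is a minimal Python sketch that computes a survey wave’s response rate by previously stated preference. The panel records, field names, and rates are hypothetical, not YouGov’s data or code.*

```python
from collections import defaultdict

# Hypothetical panel records: each panelist's previously stated
# preference, plus whether they responded to the current wave.
panel = (
    [{"prior_pref": "trump", "responded": True}] * 48
    + [{"prior_pref": "trump", "responded": False}] * 52
    + [{"prior_pref": "clinton", "responded": True}] * 52
    + [{"prior_pref": "clinton", "responded": False}] * 48
)

def response_rates_by_pref(records):
    """Response rate for each prior-preference group in this wave."""
    invited = defaultdict(int)
    responded = defaultdict(int)
    for r in records:
        invited[r["prior_pref"]] += 1
        responded[r["prior_pref"]] += r["responded"]
    return {group: responded[group] / invited[group] for group in invited}

print(response_rates_by_pref(panel))
# {'trump': 0.48, 'clinton': 0.52} -- a gap like this signals
# differential nonresponse, not an actual shift in support.
```

Because the same panelists are tracked across waves, a drop in one group’s response rate can be distinguished from a genuine change in preferences.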

MM: So that’s how the “shy voters” manifest themselves.

DR: Yeah, I call them “shy respondents”. It’s not that Trump voters would lie to you about voting for Trump; they just didn’t want to take a survey, because there was nothing good for them to talk about. The ABC tracking poll had Clinton +12 points ten days before the election. It then swung to Trump +2, and then back to Clinton +4. There’s no way that there was 14 points of movement over that time.

MM: What is it about your polling that prevents those swings?

DR: Because we re-contact the same people. If one group responds at a lower rate, we keep the sample composition the same based on their past preferences. Because we measured four years ago who was voting for Obama and who was voting for Romney, our samples always had the same numbers of Obama and Romney voters in them.
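
*The adjustment Rivers describes is, in essence, post-stratification on past vote. Below is a minimal sketch of that idea, assuming hypothetical respondent records and illustrative benchmark shares; it is not YouGov’s actual pipeline.*

```python
from collections import Counter

# Illustrative benchmark: share of the sample that should have each
# past vote (invented numbers, not actual 2012 figures).
TARGET_SHARES = {"obama": 0.51, "romney": 0.47, "other": 0.02}

def past_vote_weights(respondents):
    """Weight respondents so the weighted sample matches the
    benchmark past-vote composition (simple post-stratification)."""
    counts = Counter(r["past_vote"] for r in respondents)
    n = len(respondents)
    return [
        TARGET_SHARES[r["past_vote"]] / (counts[r["past_vote"]] / n)
        for r in respondents
    ]

# Toy wave in which past Obama voters responded at a lower rate.
sample = (
    [{"past_vote": "obama"}] * 40
    + [{"past_vote": "romney"}] * 55
    + [{"past_vote": "other"}] * 5
)
w = past_vote_weights(sample)
print(round(sum(w[:40]) / sum(w), 3))  # weighted Obama share: 0.51
```

Because past Obama voters are underrepresented in this toy wave (40% observed against a 51% benchmark), each one gets a weight above 1, which is how the sample composition stays fixed even when one side’s response rate dips.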

MM: Your 2014 polling tended to lean Democratic by a couple of points. Did the same thing happen this year?

DR: Our final polls had Clinton ahead by 3.6–3.8% on the national vote. It’s looking like the actual national margin will be Clinton by 1.7–1.8%. That’s two points too high on the lead, but only about one point too high on Clinton’s share, since a one-point overstatement of her share and a one-point understatement of Trump’s add up to two points on the margin. If it had just been a one-point error uniformly, we would have done just fine. In the battleground states the errors were larger, and they were all systematically in the Democratic direction.

MM: Is there something about the Midwest, like a ‘Midwestern-values’ thing, that makes the polling systematically biased just in the Midwest?

DR: We don’t know entirely at this point, but I have some surmises. It appears that the turnout models were off generally, particularly in the Midwest. The places that voted for Obama in 2012 had flat to declining turnout, while the places that voted for Romney in 2012 had significantly higher turnout rates, about five points higher. We think that our turnout models in those places were systematically off. It’s certainly not the case that the state-level polls were always off.

I did a short blog post on this. I took nine states that we had polled in the last four weeks before the election and compared all the polls. I would say that in six of the nine states the polls were right on target on average, with individual polls too high and too low. In the other three, the polls were just systematically off, and everybody was off in the same direction, so there has to be some bias in those states. The bias could be in who responds; it could be a late shift between when the polls were conducted and the election, which I don’t think really happened, but it could be; or the turnout models could have been off. All of those are possible explanations.

MM: Where does polling go from here?

DR: The first thing is, it’s getting harder and harder to poll. Thirty years ago, you would call people up and two thirds of them would actually talk to you on the phone. That’s now down under 10%. The phone polls this year were quite bad, so certainly within campaigns it has led people to question whether they can continue using telephone polling. The problem with internet polling is that you don’t have a mechanism that guarantees you a representative sample, so it requires statistical models to correct for sample skews. We’re getting better at that over time, but there’s plenty of room for improvement.
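
*As a rough sketch of the kind of statistical correction Rivers mentions, the snippet below implements simple raking (iterative proportional fitting), which adjusts respondent weights until the sample matches known population margins. The margins, categories, and data are invented for the example; production panel weighting is far more elaborate.*

```python
# Raking (iterative proportional fitting): repeatedly scale weights so
# the weighted sample matches each population margin in turn.
# Margins and toy data below are invented for illustration.

POP_MARGINS = {
    "gender": {"female": 0.52, "male": 0.48},
    "age": {"18-44": 0.45, "45+": 0.55},
}

def rake(respondents, margins, iterations=50):
    weights = [1.0] * len(respondents)
    for _ in range(iterations):
        for var, targets in margins.items():
            # Current weighted total for each category of this variable.
            totals = {cat: 0.0 for cat in targets}
            for r, w in zip(respondents, weights):
                totals[r[var]] += w
            grand = sum(totals.values())
            # Scale weights so this variable's shares hit the targets.
            for i, r in enumerate(respondents):
                weights[i] *= targets[r[var]] / (totals[r[var]] / grand)
    return weights

sample = (
    [{"gender": "male", "age": "18-44"}] * 30
    + [{"gender": "male", "age": "45+"}] * 30
    + [{"gender": "female", "age": "18-44"}] * 15
    + [{"gender": "female", "age": "45+"}] * 25
)
w = rake(sample, POP_MARGINS)
female = sum(wi for r, wi in zip(sample, w) if r["gender"] == "female")
print(round(female / sum(w), 3))  # ~0.52 after raking
```

Adjusting one margin perturbs the others slightly, which is why raking cycles through the variables repeatedly until the weights settle.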


Maddie McConkey is an undeclared freshman.