I am excited to continue helping analyze polling data for MSN on the important topics of the day. MSN's opt-in polling is cost-effective and fast to run, and this column explains how it is also accurate.

MSN runs polls on its front page (and some interior pages), and the responses are collected by MSN. MSN first reports the raw counts for everyone to see as they come in. But we know the poll respondents do not exactly match the US population, so we take the raw data and run it through a quick three-step process. Step 1: model the breakdown of the US adult population by age, gender, education, party identification, and location (e.g., what percentage of US adults are 18-29-year-old women with some college education who identify as independents and live in Kansas). Step 2: model how each of those groups would answer each question (e.g., what percent of that same group would answer a, b, c, or d to a given question). Step 3: project the model of how the groups answer onto the model of who is in the population.
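The three steps above can be sketched in a few lines of Python. All of the demographic cells and numbers below are hypothetical, chosen just to show the mechanics; the real model uses many more cells (the full cross of age, gender, education, party ID, and location) and a proper statistical model for each step.

```python
# Step 1: model the population -- the share of US adults in each
# demographic cell. (Illustrative numbers only; real cells cross age,
# gender, education, party ID, and location, and the shares sum to 1.)
population_share = {
    ("18-29", "woman", "some college"): 0.04,
    ("18-29", "woman", "college grad"): 0.03,
    ("65+",   "man",   "some college"): 0.05,
}

# Step 2: model how each cell answers the question -- here, estimated
# support for "option A" within each cell, fit from the raw opt-in responses.
cell_support = {
    ("18-29", "woman", "some college"): 0.61,
    ("18-29", "woman", "college grad"): 0.58,
    ("65+",   "man",   "some college"): 0.44,
}

# Step 3: project the response model onto the population model --
# a population-share-weighted average of the cell-level estimates
# (normalized over the cells shown in this toy example).
covered = sum(population_share[c] for c in cell_support)
estimate = sum(population_share[c] * cell_support[c] for c in cell_support) / covered
print(round(estimate, 3))  # → 0.532
```

The point of the weighting is visible even in this toy version: the topline is not the raw average of responses, but the average of what each demographic cell says, weighted by how big that cell is in the population rather than in the opt-in sample.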

This method works with any type of question, from political outcomes to sentiment about hamburgers versus hot dogs. We just need the answers to the poll question and a few key demographics about each person who answered it. We do not report a margin of error, because we do not believe it can be accurately estimated for either this method or traditional public polling; but we believe the error to be comparable.

We know this method works because we tested it on MSN during the 2016 presidential election. The 2016 MSN polls had binary accuracy similar to top public polling (46 of 51 states "correct") and consistently pointed to the trouble Clinton faced in the Rust Belt that ultimately cost her the election to Trump, with both a more Trump-leaning voter population and more support for Trump from key demographics. We just could not figure out why Wisconsin, Michigan, and Pennsylvania looked so different from the public polling; it turns out it was because we were right.


The target population that my colleagues and I created for the MSN polling in 2016 more closely resembled the true voting population than the estimates behind the public polling. This is best demonstrated by Nate Cohn's interesting experiment of giving four different pollsters (and himself) the exact same polling data to see what topline numbers they would generate. I was happy to take part in this experiment, using our model of the voting population for Florida. State by state, our make-up of the electorate was more Republican/Trump-leaning: older, whiter, and less educated than what the polls estimated. That is why, given the same polling data, our projections were consistently more favorable to Trump (who won Florida by 1 point).
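The experiment above can be illustrated with a deliberately simplified sketch: hold the cell-level support fixed (the "same polling data") and swap only the model of who turns out to vote. All numbers here are hypothetical, and the real comparison involved far more than two education cells, but the mechanism is the same.

```python
# Same respondent-level support in each cell, from the shared polling data.
# (Hypothetical numbers for two broad education cells.)
support = {"college grad": 0.38, "non-college": 0.58}

# Two models of the electorate: a public-poll-style make-up versus a
# less-educated one like the target population described above.
poll_electorate = {"college grad": 0.50, "non-college": 0.50}
our_electorate  = {"college grad": 0.40, "non-college": 0.60}

def topline(electorate):
    """Topline support: cell support weighted by the electorate model."""
    return sum(electorate[c] * support[c] for c in support)

print(round(topline(poll_electorate), 3))  # → 0.48
print(round(topline(our_electorate), 3))   # → 0.5
```

With identical cell-level data, shifting the assumed electorate ten points toward non-college voters moves the topline two points toward Trump, which is exactly why pollsters handed the same data produced different numbers.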


Going forward, we will focus mainly on the US adult population, reverting to the voting population when relevant. This is a unique opportunity to learn about the American population and how people feel about the important (and fun) issues of the day.