The Atlantic's The Geography of Partisan Prejudice: Post Scriptum

 

The Big Question and our Larger Vision


When The Atlantic approached us in early 2018, we developed a bold idea together: build a geospatial map of what we as researchers call affective polarization, or (its inverse) political tolerance. The method: collect a fairly large survey of about 2,000 respondents, isolate demographic and neighborhood-compositional characteristics predictive of a uni-dimensional measure of tolerance toward out-partisans, and assess how these traits vary county by county. I will not review the methods again; they are spelled out in detail here. Some caveats we have made clear from the beginning: to assess true (as opposed to modeled) variation in tolerance by county, one would need access to a hundred-million-N survey, which we do not have; and even with such a survey, there would be a trade-off in cost and time. Do we gain anything by interviewing Americans in every county? The answer, I think, depends on the outcome of interest. Additionally, political tolerance is a latent phenomenon that is difficult to measure: simply asking "are you politically tolerant?" would lead to severe measurement error, which is why we rely on a multi-item scale capturing this latent trait, derived from years of research on this topic (for some of my work on antecedents of, and potential cures for, affective polarization, see here, here).
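To make the multi-item idea concrete, here is a minimal sketch of one common way to build such a scale: z-score each item so all items contribute on a common footing, then average across items per respondent. The item names and data are hypothetical, and this is an illustration of the general technique, not PredictWise's actual scoring model:

```python
import statistics

def tolerance_scale(item_responses):
    """Combine several survey items into one latent-trait score per respondent.

    item_responses: dict mapping item name -> list of numeric responses,
                    one entry per respondent, same respondent order in every list.
    Returns a list of per-respondent scores (mean of the z-scored items).
    """
    names = list(item_responses)
    n_respondents = len(item_responses[names[0]])

    # Standardize each item: subtract its mean, divide by its std. deviation.
    z = {}
    for name, values in item_responses.items():
        mu = statistics.fmean(values)
        sd = statistics.pstdev(values)
        z[name] = [(v - mu) / sd for v in values]

    # Average the standardized items for each respondent.
    return [statistics.fmean(z[name][i] for name in names)
            for i in range(n_respondents)]
```

In practice a scale like this would be validated (e.g. via reliability checks or a measurement model) before use; the point here is only that several noisy items are pooled into one latent score rather than relying on a single question.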

In some ways, this effort represents the larger vision of PredictWise: using the sharply declining cost of data collection (structured survey data and unstructured ambient data), access to large-scale voter-file and ambient data, and advances in computation and statistics to answer questions about important phenomena beyond the horse race, with the ultimate goal of giving progressive campaigns ammunition for data-driven message selection, targeting, and ROI tracking. Data collection around our core effort has been ongoing since 2017, putting the raw database of PredictWise at 300,000 respondents, with millions of ambient data points (read: application usage behavior, etc.) per respondent. We have deployed this technology at the aggregate level, identifying congressional districts we believe to be amenable to long-running progressive campaigns despite dismal-looking horse-race estimates (here), and at the individual level, providing superior and faster ways to cut lists of persuadable targets for digital ad buys (here). For The Atlantic story, we collected custom data in April 2018 and scored counties by estimated political tolerance (full methods: here). The goal of this undertaking: combining quantitative methods with qualitative reporting, or what I would call data-driven ethnography. Ultimately, we tried to find both quantitative and qualitative evidence in support of positions toward the top or bottom of our tolerance scale. Here is a preview:

Suffolk County, MA:

  • Population: 797,939 (2015)
  • Density: 13,758/sq mi (5,312/km²)
  • PredictWise Polarization Score: 65.39
  • Mean Partisan Identification Homogeneity Score at Census Block level (-100 = 100% Democrats, 0 = an even split, 100 = 100% Republicans): 80.50
  • Identified couples: 55,854
  • Agreement rate among couples on party identification: 89.99%

In contrast:

Jefferson County, NY:

  • Population: 117,635 (2015)
  • Density: 92/sq mi (36/km²)
  • PredictWise Polarization Score: 46.13
  • Mean Partisan Identification Homogeneity Score at Census Block level (-100 = 100% Democrats, 0 = an even split, 100 = 100% Republicans): -26.97
  • Identified couples: 38,708
  • Agreement rate among couples on party identification: 75.68%
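As a concrete reading of the two statistics in the lists above, here is a minimal sketch (my own illustration of the scale as described, not PredictWise's actual code) of the -100..100 homogeneity score and the couples' agreement rate:

```python
def homogeneity_score(n_dem, n_rep):
    """Partisan homogeneity on a -100..100 scale:
    -100 = 100% Democrats, 0 = an even split, 100 = 100% Republicans."""
    total = n_dem + n_rep
    if total == 0:
        return 0.0
    return 100.0 * (n_rep - n_dem) / total

def couple_agreement_rate(couples):
    """Percentage of couples in which both partners share a party ID.
    couples: list of (party_a, party_b) tuples, e.g. ("D", "R")."""
    if not couples:
        return 0.0
    agree = sum(1 for a, b in couples if a == b)
    return 100.0 * agree / len(couples)
```

Under this reading, a census block with 10 Democrats and no Republicans scores -100, an even block scores 0, and the county-level figure is the mean over its blocks.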

For more qualitative evidence that Jefferson County stands out, please also read the fantastic piece by Amanda Ripley: here.


Critique: Fair and unfair


This story has received a ton of attention and, not surprisingly, a ton of critique. One stream of criticism obsesses over perceived precision in all 3,000 counties ("hey, I live in McCormick County, you guys cannot be serious!"). I address this critique at some length in our Methods addendum. The concerning part of this critique: as far as I know, there is no ground-truth map of political tolerance, yet too often we quickly discard non-intuitive results based on a set of preconceived ideas (in fact, PredictWise is in the business of changing this). One thing this critique gets especially wrong is the counterfactual to our work, which is not a perfect US map of political tolerance, but no data at all. Another stream of criticism attempts to reverse-engineer our algorithm on the spot (which is public: here) and, not surprisingly, gets it wrong every single time. This is a concerning form of engagement (mostly because it reveals a very low ability-to-arrogance ratio), but nothing we need to seriously concern ourselves with.

Of course, there has also been fair critique. Why do we see sharp discontinuities at some state borders? We have pointed out the trade-off between including party in our models, which necessitates counting partisans at the county level (a notoriously difficult task), and not including it, which would omit an important driver of tolerance. Yes, our counting rules can differ from state to state (which is the only way artifactual discontinuities at state borders can arise), but again, the ground truth is not known. If you live in Florida, you might share a media market with your peers across the state border, but you still receive a different dosage of digitally targeted political content, and your neighborhood conversations are likely different as well.
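To illustrate why partisan-counting rules can legitimately differ across states, here is a hypothetical sketch: in states with party registration, a voter-file record carries party directly, while elsewhere one might fall back on the last partisan primary a voter participated in. The state list, field names, and fallback rule are all invented for illustration; they are not PredictWise's actual rules:

```python
# States assumed (for this illustration only) to record party at registration;
# real voter-file schemas and rules vary by state and vendor.
REGISTRATION_STATES = {"NY", "FL", "PA"}

def infer_party(voter, state):
    """Return 'D', 'R', or None for a single voter-file record (a dict)."""
    if state in REGISTRATION_STATES:
        return voter.get("registered_party")   # party recorded at registration
    # Fallback: the last partisan primary this voter pulled a ballot in.
    return voter.get("last_primary_party")

def county_partisan_counts(voters, state):
    """Tally inferred Democrats and Republicans among one county's records."""
    counts = {"D": 0, "R": 0}
    for voter in voters:
        party = infer_party(voter, state)
        if party in counts:
            counts[party] += 1
    return counts
```

The point of the sketch: two demographically similar counties on either side of a state line can be counted under different rules, which is exactly how an artifactual discontinuity at the border can arise.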


Going forward


Like it or not, we have opened a debate about what drives political tolerance and how those drivers vary across the US. By no means do we argue that we have presented a definitive answer on the subject. Instead, we hope to kick off the virtuous cycle of academic-style research: replication, testing, improving. For what it's worth, looking into how we count partisans off of voter-file data would be a great place to start. Of course, I would be honored to be a part of these efforts going forward, and I want to thank all of you who have provided substantive critique so far. If you are interested in the raw data, click here, and please do get in touch for the password: [email protected]