Polling: Online Panels (Part 2 of 4)

This is the second piece of our series, Evolution of Polling Samples from RDD to RDE.

In part 1, we describe a polling industry that is ripe for transformation as Random Digit Dialing (RDD) becomes increasingly tenuous.
In part 3, we discuss Assisted Crowdsourcing, a brand-new form of polling.
In part 4, we introduce Random Device Engagement (that is what we do!).

Part 2: Online Panels: Non-Organic and Non-Random

Online panels collect responses either via a fully opt-in structure, such as a sign-up page, or via recruitment that starts with an RDD telephone sample (sometimes supplemented with cell phones) or mail. Panelists are then invited to participate in specific surveys, for example via an email invitation linking to the panel provider's page. The mode is a mix of desktop, tablet, and smartphone, depending on the device on which the invitation is opened. Coverage is very low (very few people opt in to panels), although RDD-based panels, which begin with random methods of recruitment, fare better. Response rates among existing panelists are generally decent, but the cumulative rate is low once the small fraction of people who opt in to the panel in the first place is factored in, and that multi-stage structure makes response rates hard to compute accurately.
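To see why, consider that a panel's cumulative response rate multiplies together every stage of attrition, in the spirit of AAPOR-style cumulative rates. Here is a back-of-the-envelope sketch in Python; the rates and variable names are invented for illustration, not real panel figures:

```python
# Back-of-the-envelope cumulative response rate for an opt-in panel.
# Each stage loses people, and the stages multiply.
# All rates below are illustrative assumptions.

recruitment_rate = 0.05   # share of invited people who join the panel
profile_rate     = 0.60   # share of joiners who complete the profile survey
retention_rate   = 0.70   # share of profiled panelists still active
completion_rate  = 0.50   # share of invited panelists who finish this survey

cumulative = recruitment_rate * profile_rate * retention_rate * completion_rate
print(f"Cumulative response rate: {cumulative:.3%}")  # ~1.05%
```

Even with a healthy 50% completion rate among active panelists, the cumulative rate lands around one percent, and any stage whose rate is unknown makes the whole product uncertain.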

As with all discussions around polling, it is critical to separate two distinct things: data collection and data analytics. Data collection is how respondents are gathered; data analytics is how the collected data is turned into market intelligence. Nothing prevents the most advanced analytics from being used on online panels, so we will stick to data collection in this post. Online panels – curating panelists via some online interface – are an increasingly popular method of gathering respondents.

The approach has a number of advantages:

  1. Panels provide repeated and connected users: over-time trends can be analyzed, and any custom polling built on top of baseline tracking can be guided by priors derived from those data (a serious innovation; see the sketch after this list).
  2. Relatively cheap and fast: marginal polling is inexpensive and can be done faster than traditional random digit dialing.
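To make the first advantage concrete, here is a minimal Beta-Binomial sketch of how baseline tracking can act as a prior that shrinks a noisy custom-poll estimate toward the long-run trend. All numbers are invented for illustration:

```python
# Minimal Beta-Binomial sketch: baseline tracking data as a prior
# for a small custom poll. All figures are invented.

# Long-run tracking suggests ~52% approval; encode it as a Beta prior
# carrying roughly the information of 200 baseline interviews.
prior_alpha, prior_beta = 104, 96          # mean 0.52, prior "n" of 200

# A small custom poll: 60 of 100 respondents approve (noisy 60%).
approvals, n = 60, 100

post_alpha = prior_alpha + approvals
post_beta  = prior_beta + (n - approvals)
posterior_mean = post_alpha / (post_alpha + post_beta)
print(f"Raw poll: {approvals / n:.1%}, posterior: {posterior_mean:.1%}")
# The posterior (~54.7%) is pulled toward the baseline, damping noise
# that a one-off poll with n=100 cannot distinguish from real movement.
```

This is one simple way to formalize "guided by priors"; a real tracking operation would tune the prior's strength to how fast opinion actually drifts.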

Curating panels comes with a number of serious disadvantages:

  1. Locked into one mode of data collection: Polling firms that are locked into a specific mode of data collection will be hit with tremendous costs as technology shifts over time, because the old infrastructure will have to be dismantled. And no one can predict how long web panels will remain a viable mode of data collection as web usage shifts to mobile and beyond (yes, you are reading this right: we want you to think virtual reality here). Moreover, many companies that build their polling around this form of panel are locked into non-transferable unique identifiers for each respondent. This has some short-term benefits, but it will make shifting data collection very costly as technology evolves.
  2. Panel fatigue: A large body of research has documented that panelists' repeated participation in polls can lead to panel fatigue, resulting in non-response error or measurement error. The applied scenario: respondents might initially be eager to fill out surveys correctly and with care, but this willingness declines the more often they are invited to participate, especially if they are at risk of losing “panel status”. Instead of providing meaningful answers, respondents then click random answer options or gravitate toward “Don’t know”.
  3. Panel effects/panel conditioning: Slightly different from panel fatigue are panel effects, or panel conditioning. Even if a panel recruits a sample that looks like a perfect cross-section of the desired target population at the time of recruitment, the demand to answer political surveys turns these initially representative panelists into a group of very politically aware citizens. Panel conditioning has plagued a number of panels and panel-like setups. In the worst-case scenario, all panelists acquire a base degree of political sophistication as a consequence of being professional political survey takers. At that point, even the most advanced bias correction algorithms will fail because of sharp separation: among the panelists, no one (read: zero respondents) is left who mimics the stratum with low political sophistication (see the toy example after this list).
  4. Mix of web and mobile is not clean: web panels tend to engage respondents either on desktop or on mobile devices, and the infrastructure may or may not be adaptive. Either way, users have different experiences conditional on the device of engagement, and those differences are hard to control for.
  5. Non-organic: In panels, respondents are not engaged in their natural (read: organic) environment. Instead, an alternative digital environment is created, with the potential to introduce measurement error. As respondents are taken out of their normal routine, thought processes can deviate from those in more natural environments, leading to artificial considerations that can unduly influence item response.
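To illustrate the separation problem from point 3, here is a toy post-stratification example (all shares invented): once a stratum is empty in the panel, its weight is undefined, and no reweighting scheme can conjure respondents who are not there.

```python
# Toy post-stratification weights: population share / panel share.
# Illustrates point 3 above; the shares are invented.

population_share = {"low_sophistication": 0.40, "high_sophistication": 0.60}
panel_share      = {"low_sophistication": 0.00, "high_sophistication": 1.00}

for stratum, pop in population_share.items():
    obs = panel_share[stratum]
    weight = pop / obs if obs > 0 else float("inf")
    print(f"{stratum}: weight = {weight}")

# low_sophistication gets an infinite weight: the cell is empty,
# so there is literally no one to upweight. Model-based corrections
# hit the same wall, since the model has no data in that stratum.
```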

Bottom Line: Online panels can track public sentiment over time more easily than RDD, and can leverage the longitudinal structure of the data to parse true swings from artificial movements. In addition, clients of custom polls can draw on a plethora of prior baseline data when writing a poll. But lock-in to a single mode of data collection, along with the dangers of panel fatigue and panel conditioning, means that insights can be seriously biased, especially the longer a panel exists (and panels, as a class, do exist for long periods) and the harder it gets to recruit a fresh replacement sample.
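As a rough illustration of that longitudinal advantage, the sketch below (with invented data) separates within-panelist opinion change, which only a panel can observe, from topline movement driven by who happens to be in the sample:

```python
# Sketch: a panel lets you compare the same respondents across waves,
# separating real opinion change from composition change. Data invented.

wave1 = {"a": 1, "b": 0, "c": 1, "d": 0}   # 1 = approves
wave2 = {"a": 1, "b": 1, "c": 1, "e": 1}   # "d" dropped out, "e" joined

overlap = wave1.keys() & wave2.keys()       # panelists present in both waves
within_change = sum(wave2[r] - wave1[r] for r in overlap) / len(overlap)
raw_change = sum(wave2.values()) / len(wave2) - sum(wave1.values()) / len(wave1)

print(f"Raw topline change:      {raw_change:+.2f}")     # +0.50
print(f"Within-panelist change:  {within_change:+.2f}")  # +0.33

# The topline moved more than individual opinions did: part of the
# "swing" is just the sample's composition changing between waves.
```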