I use simple and transparent methods to create forecasts for the gubernatorial and senatorial elections. Everything I do is outlined in this forthcoming paper. The method is unchanged from 2012, but the coefficients are updated with 2012 data.

I consider three types of data: fundamental, polling, and prediction markets. Fundamental data includes incumbency, past election results, changes in economic indicators, presidential approval, state ideology, and biographical data. Polling data includes traditional polls aggregated by Huffington Post’s Pollster and Real Clear Politics. Prediction market data includes prices on contracts from Betfair.
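To make the inputs concrete, here is a minimal sketch of how the three data streams for a single race might be bundled. The structure and field names are hypothetical, for illustration only; they are not the actual schema behind the forecasts.

```python
from dataclasses import dataclass

@dataclass
class RaceInputs:
    """Hypothetical container for one race's raw data (illustrative only)."""
    # Fundamental data
    incumbent_running: bool    # incumbency
    past_vote_share: float     # party's share in past election results
    econ_change: float         # change in economic indicators
    pres_approval: float       # presidential approval
    state_ideology: float      # state ideology score
    # Polling data: aggregated two-party poll share for the candidate
    poll_share: float
    # Prediction market data: Betfair contract price, read as an implied probability
    market_price: float
```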

All of the raw data must be transformed into predictions. For fundamental data I take advantage of historical correlations, tested for out-of-sample robustness, to map current variables to likely outcomes. For polling I correct for several biases, including the anti-incumbency bias (incumbents poll lower early in the cycle than they do on Election Day) and reversion to the mean (big leads tend to contract). For prediction markets I focus on the favorite-longshot bias, where prices tend to be under-confident.
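As a rough sketch of two of these corrections, assuming a probit-scale stretch for the market debiasing and a simple linear shrinkage for poll leads; all coefficients below are placeholders for illustration, not the fitted values:

```python
from scipy.stats import norm

def debias_market_price(price: float, alpha: float = 1.64) -> float:
    """Correct the favorite-longshot bias by stretching an under-confident
    market price away from 0.5 on the probit scale (alpha is illustrative).
    Example: a 0.60 price maps to roughly 0.66."""
    return norm.cdf(alpha * norm.ppf(price))

def adjust_poll_lead(lead: float, days_to_election: int,
                     shrink_per_day: float = 0.002,
                     incumbent_bonus: float = 0.0) -> float:
    """Shrink a big early lead toward zero (reversion to the mean) and,
    for incumbents, add back expected late movement (anti-incumbency bias).
    Example: a 10-point lead 100 days out shrinks to 8 points."""
    shrink = max(0.0, 1.0 - shrink_per_day * days_to_election)
    return lead * shrink + incumbent_bonus
```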

I transform the raw data into three separate probabilities of victory and then combine them into a single probability of victory. The combined probability of victory is accurate, updates regularly, answers the key question of most stakeholders, and scales easily from the Electoral College to senatorial to gubernatorial races.
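A minimal sketch of the combination step, assuming a weighted average of the three probabilities on the log-odds scale; the weights are illustrative, not the fitted values:

```python
import math

def logit(p: float) -> float:
    return math.log(p / (1.0 - p))

def inv_logit(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def combine_probabilities(p_fundamental: float, p_polls: float, p_market: float,
                          weights: tuple = (0.2, 0.4, 0.4)) -> float:
    """Average the three probabilities of victory on the log-odds scale.
    Example: combine_probabilities(0.60, 0.70, 0.75) is roughly 0.70."""
    z = sum(w * logit(p) for w, p in zip(weights, (p_fundamental, p_polls, p_market)))
    return inv_logit(z)
```

One reason to average on the log-odds scale rather than on raw probabilities is that a near-certain input is not dragged toward 0.5 by more cautious inputs; and because the same function applies unchanged to any race, it scales across election types.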

There is no question that more complex forecasts exist, but they are no more accurate than mine. Why? Because they lack the identification to verify their “improvements”. And, because of their complexity, their forecasts do not scale easily to gubernatorial or House elections.

Updating Predictions: senatorial, senatorial balance of power, and gubernatorial.