## Oscar Night 2015 – Updated Outcomes

The most updated outcome with Oscar eve predictions (which were posted at 1 PM ET on Oscar Day):

**12:06 AM ET:** 20 for 24 with Birdman's win!!! Another huge victory for prediction markets. Perfectly calibrated: an average of 82% for the most likely candidate across the 24 categories translates into an expected outcome of about 20 wins. And the prediction markets did a great job raising Birdman's probability of victory to 90% as Oscar night unfolded …
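For anyone who wants the arithmetic behind that calibration claim, here is a minimal sketch; the 82% average and the 24 categories are the figures quoted above, and the rest is just illustration:

```python
# Expected number of correct binary calls when you always pick the
# favorite: the sum of the favorites' win probabilities. With an
# average favorite probability of 82% across 24 categories, that is
# simply 0.82 * 24.

avg_favorite_prob = 0.82  # average probability of the most likely candidate
n_categories = 24

expected_wins = avg_favorite_prob * n_categories
print(round(expected_wins, 1))  # prints 19.7, i.e. about 20 of 24
```

Hitting exactly 20 of 24 is therefore right on the expected outcome, which is what "perfectly calibrated" means here.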

**11:56 PM ET:** With Actor Eddie Redmayne (The Theory of Everything) and Actress Julianne Moore (Still Alice) both winning, we are 19 for 23 with just one to go!

**11:43 PM ET:** The markets were increasingly convinced as the night progressed and it happened … Birdman's Alejandro G. Iñárritu takes the Director award! That brings us to 17 of 21 with just 3 to go …

**11:35 PM ET:** OK – now 16 of 20 with The Imitation Game's win for Adapted Screenplay, but another tight loss for Original Screenplay, where we had The Grand Budapest Hotel first and Birdman second …

As promised, we got 8 of 10 major movie categories correct, in binary terms, at the Golden Globes. Boyhood took home its three statues, but Birdman came up short with 2 of 3 expected wins. The surprise winners, The Grand Budapest Hotel for Best Picture (comedy or musical) and Amy Adams in Big Eyes for Best Actress (comedy or musical), were both our second most likely choices.

The Golden Globes are Sunday, January 11 and, as usual, we have some bold, market-based predictions on PredictWise. The methodology is exactly the same as for the Oscars; it has not previously been tested out-of-sample on the Golden Globes, so we are excited to see how it does. We have predictions for 10 categories, and here are our pre-event predictions:

Microsoft Prediction Lab tested the wisdom of the crowd in 507 elections this fall and did pretty well (here are the posted final predictions from election eve): 33 of 35 (so far) in the U.S. Senate, 30 of 36 in the gubernatorial elections, and 419 of 435 in the U.S. House. This is in terms of binary outcomes (i.e., who won and who lost), but I will get into the probabilities below.

In the Senate, there were two reasonable and well-calibrated “misses.” The final prediction was 61% that Greg Orman would knock off incumbent Republican Pat Roberts in Kansas, and 62% that incumbent Democratic senator Kay Hagan would hold off Thom Tillis in North Carolina.

There are three main effects of the 2014 election on the 2016 election. First, the Republicans are slightly more likely than before the election to capture the presidency, but the Democrats are still favored. Second, Scott Walker is much more likely to get the Republican nomination, while Jeb Bush is slightly more likely. Third, Mitt Romney is much less likely to get the Republican nomination. There is not really any effect on the Democratic nomination.

The Democratic nominee is 58% likely to win the 2016 presidential election; this is down ever so slightly from before the 2014 Election Day. Presidential elections have a much larger voting pool, which is more Democratic, than midterm elections. And I will let other people debate the motivation of the voters on Election Day 2014, but Obama will not be on the ballot in 2016.

Scott Walker shot up as the major solid, right-wing Republican during the 2014 elections. He won reelection convincingly in a Democratic state, Wisconsin. But the key thing is that, unlike Mitt Romney or other blue-state Republicans, he ran as a solid right-wing Republican.

There are 35 Senate elections (excluding Louisiana) and 36 gubernatorial elections. We had expected vote shares for all of them, in terms of two-party vote share, generated primarily from traditional polling.

In 54 of 71 (76%) elections the Republican candidate over-performed (28 of 35, or 80%, in the Senate and 26 of 36, or 72%, for governor). The average error (the bias in one direction) was 2 percentage points (the average absolute error was just 2.8 percentage points). The bias was a little more extreme in the gubernatorial elections (2.3 percentage points) than in the senatorial (1.8 percentage points). Since almost all errors were in the same direction, the absolute error is not much larger than the signed error.
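The distinction between the signed error (bias) and the absolute error can be made concrete with a small sketch; the numbers below are toy values for illustration, not the actual 71-race errors:

```python
# Signed error (bias) vs. absolute error, in percentage points.
# When nearly every error has the same sign, the two are close;
# if errors were split evenly around zero, the bias would shrink
# toward zero while the absolute error stayed large.

toy_errors = [2.5, 1.8, 3.0, -0.5, 2.2, 2.8, -1.0, 3.2]  # illustrative only

bias = sum(toy_errors) / len(toy_errors)                       # average signed error
abs_error = sum(abs(e) for e in toy_errors) / len(toy_errors)  # average absolute error

print(f"bias={bias:.2f}, absolute error={abs_error:.2f}")
```

With six of eight toy errors positive, the two measures land close together, just as in the real 2014 data.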

In a preliminary look at Election Eve predictions for the 507 elections, we did pretty well.

**Senate (34 of 35):** I am going to hold out Louisiana for now and assume there were 35 elections. In those 35 elections we had the binary winner in 34 of them, with North Carolina going Republican despite an 85% probability for the Democratic incumbent. The average probability for the leading candidate was 95%, which translates into an expected outcome of 33, so we overshot by 1.

**Governor (32 of 36):** I am going to assume that the results in Alaska, Colorado, Connecticut, and Vermont all hold. In all of those elections we were leaning towards the current leader. Florida, Illinois, Kansas, and Maryland all leaned Democratic prior to the election and were captured by Republicans. We had an average probability of 92% for the leading candidate going into the election, which translates into an expected outcome of 33, so we undershot by 1.
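The expected-outcome bookkeeping in the two paragraphs above can be sketched in a few lines; the counts and average probabilities are the ones quoted, and the data structure is just illustration:

```python
# Compare expected vs. realized correct binary calls. The expected
# count is the average favorite probability times the number of races.

races = {
    "Senate":   {"n": 35, "avg_prob": 0.95, "correct": 34},
    "Governor": {"n": 36, "avg_prob": 0.92, "correct": 32},
}

for name, r in races.items():
    expected = round(r["avg_prob"] * r["n"])  # 33 in both cases
    diff = r["correct"] - expected            # +1 = overshot, -1 = undershot
    print(f"{name}: expected {expected}, got {r['correct']} ({diff:+d})")
```

Overshooting by 1 in the Senate and undershooting by 1 for governor is exactly the kind of small, offsetting deviation well-calibrated forecasts should produce.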

**1:30 AM ET:** This is what under-performance versus the polls looks like:

Five weeks ago I launched a new website, with a few friends including Miro Dudik and David Pennock, called Microsoft Prediction Lab. The website consolidates research into both non-representative polling and prediction games. I have spent years understanding how various raw data sources (polling, prediction markets, and social media and other online data) can be transformed into indicators of present interest and sentiment, as well as predictions, for varying populations, and how decision makers can allocate resources with the low-latency, quantifiable market intelligence that we produce. Microsoft Prediction Lab allows us to continuously innovate not only on the path from raw data to analytics to consumption, but on the collection of the data itself.