In the final stages of the UK election campaign, before the vote on May 7, 2015, the polls showed Labour in the lead for seats; it is reasonable to say it was a statistical dead heat. Despite the polling, prediction markets had the Conservative Party at over 65% to win more seats than Labour every day from March 10, 2015 onward.
Author: David M. Rothschild
Kentucky is 55% to win the tournament, the same as they were before the Elite Eight. They did not look dominant, so advancing a round was not as helpful as it should have been!
1) Kentucky is 55% to win the tournament, up just 11 percentage points from the start of the tournament.
2) The other three games are all pretty tight. Michigan State is a 58% favorite, Duke is a 57% favorite, and Arizona is a 52% favorite.
1) Of the three top seeds left, Duke is the most at risk.
2) Wichita State (7) v. Notre Dame (3) is the tightest pick, with the market-based forecasts at 54% for Wichita State and FiveThirtyEight's model at 53% for Notre Dame.
3) Two (7) seeds are favored over (3) seeds!
Day 3 was another day of top seeds or expected favorites winning … with the exception of top-seeded Villanova. This is a huge blow to the FiveThirtyEight predictions, which put a big weight on Nova: a 16% likelihood of winning the tourney going into the round of 32 (12% going into the round of 64). Prediction market-based forecasts were half of that.
In the game-by-game predictions for Sunday, March 22, the market-based PredictWise forecasts are again similar to FiveThirtyEight today … But, keep an eye on Maryland v. West Virginia, where PredictWise's live predictions are on the other side of the outcome.
There is only one lower seed favored to win on Day 3 of the tournament – Utah (5) is 64% over Georgetown (4) in the South. With about 75% probability on each of the favored teams, we expect about 2 of them to go down, with Notre Dame as the most likely. Number 1 seeds Villanova and Kentucky, as well as number 2 seed Arizona, are all heavily favored in their games. The market-based predictions are eerily similar to FiveThirtyEight today … Watch the predictions move live on PredictWise here.
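The "expect 2 of them to go down" claim is just the sum of the upset probabilities across the day's games. A minimal sketch (the probabilities here are illustrative placeholders, not the actual day's forecasts):

```python
# Hypothetical favorite win probabilities for the day's 8 games
# (illustrative only; the text says roughly 75% on each favorite).
favorite_probs = [0.75] * 8

# Expected number of upsets = sum of the favorites' losing probabilities.
expected_upsets = sum(1 - p for p in favorite_probs)
print(expected_upsets)  # 2.0
```

With eight favorites each around 75%, roughly two upsets are expected even though every individual favorite is likely to win.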
Below I compare the prediction market-based predictions on PredictWise with FiveThirtyEight’s statistical model and the New York Times’ Upshot innovative pari-mutuel betting game. First and foremost, the three methods are extremely similar. In just 10 of the 32 first-round games do any of the three models differ by more than 10 percentage points.
1) PredictWise and Upshot both have 8 seed Oregon winning, slightly, but FiveThirtyEight has them at just 41%. PredictWise and FiveThirtyEight have 11 seed Texas winning, slightly, but Upshot has them at 42%.
2) All three methods have 10 seed Ohio State favored over VCU. Besides Texas, those are the only two double-digit seeds favored. 11 seed Ole Miss is 40% to win and 10 seed Davidson is 43% to win; both have strong upset potential.
Here are the pre-tournament market-based predictions for the 2015 tournament.
This data is driven by a mix of Betfair (prediction market) and bookie data. Step 1: construct prices from the back, lay, and last-transaction odds in the Betfair order book or, when not available, the lowest odds to buy from a major bookie. For Betfair data we take the average of the cheapest cost to buy a marginal share and the highest price to sell a marginal share, unless the differential is too large or does not exist. Step 2: correct for historical bias and increased uncertainty in constructed prices near $0 or $1; we raise all of the constructed prices to a pre-set power that depends on the domain. Step 3: normalize so that any mutually exclusive set of outcomes sums to 100%.
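The three steps can be sketched as follows. This is a minimal illustration, not the production pipeline: the spread cutoff and the debiasing exponent below are assumed stand-in values, not PredictWise's actual pre-set parameters.

```python
def mid_price(back, lay, max_spread=0.10):
    """Step 1 (sketch): average the cheapest price to buy and the
    highest price to sell a marginal share, unless the spread is too
    wide or one side is missing (the caller would then fall back to
    the last transaction or a bookie's odds)."""
    if back is None or lay is None or abs(lay - back) > max_spread:
        return None
    return (back + lay) / 2.0

def debias_and_normalize(prices, alpha=1.5):
    """Steps 2-3 (sketch): push constructed prices away from the
    extremes with a power transform (alpha = 1.5 is illustrative;
    the actual exponent is pre-set per domain), then normalize a
    mutually exclusive set of outcomes to sum to 1."""
    raw = [p ** alpha for p in prices]
    total = sum(raw)
    return [r / total for r in raw]

# Example: a three-outcome market with constructed prices 0.60/0.30/0.10.
probs = debias_and_normalize([0.60, 0.30, 0.10])
```

Note that with an exponent above 1, longshot prices shrink relative to the favorite's, so after normalization the favorite's probability rises above its raw price, which is the direction the historical favorite-longshot bias correction runs.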
Our Oscar predictions have been 19 for 24, 21 for 24, and 20 for 24 over the last three years, in the binary outcome space (i.e., the most likely candidate won the Oscar). Of the 12 “misses,” 11 have been the second most likely and one has been the third most likely. But our predictions are probabilities for a reason: if we only cared about which candidate was the most likely, and not how likely, we would not bother calibrating the difference!
What we are most proud of is the calibration of the Oscar predictions. In the 72 categories (24 per year) we have forecasted over the last three years, the average forecast for the leading candidate was 82%. Thus, on average, we expected to “win” a category 82% of the time and “lose” a category 18% of the time: roughly 0.82 × 72 ≈ 59 “wins” and 0.18 × 72 ≈ 13 “losses” in expectation. Our 60 “wins” is pretty well calibrated!
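The expectation arithmetic above is straightforward to verify:

```python
n_categories = 72      # 24 categories per year over three years
avg_forecast = 0.82    # average forecast on the leading candidate

# Expected "wins" and "losses" if the forecasts are well calibrated.
expected_wins = avg_forecast * n_categories          # 59.04
expected_losses = (1 - avg_forecast) * n_categories  # 12.96

print(round(expected_wins), round(expected_losses))  # 59 13
```

The realized 60 wins sits within one category of the ~59 expected, which is what good calibration looks like at this sample size.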