Predicting the Oscars, for me, is not about the Oscars per se, but about the science of prediction. The challenge was to make predictions in all 24 categories, when most forecasts cover only 6. The challenge was to make predictions that moved in real time during the period between the nominations and the Oscars, when most predictions are static. The challenge was to make predictions that were accurate not just in binary correctness, but in calibrated probabilities. The challenge was to make these predictions cost-effective, so that they could not only scale to 24 categories but also be useful in varying domains. Prediction market data, including Betfair, the Hollywood Stock Exchange, and Intrade, combined with some user-generated data from WiseQ, allowed me to meet all of these challenges.
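One small step in turning market data into calibrated probabilities is normalization: raw contract prices across the nominees in a category rarely sum to exactly 1, so they need to be rescaled before being read as probabilities. This is a minimal sketch of that idea, not the actual aggregation model used here; the nominee names and prices are hypothetical illustration values, not real Betfair, HSX, or Intrade quotes.

```python
def normalize_prices(prices):
    """Rescale raw contract prices (0-1 scale) for one category so
    the implied probabilities across all nominees sum to 1."""
    total = sum(prices.values())
    return {nominee: price / total for nominee, price in prices.items()}

# Hypothetical best-picture prices; note they sum to 1.06, not 1.0.
best_picture = {"Argo": 0.78, "Lincoln": 0.12, "Life of Pi": 0.06, "Other": 0.10}
probs = normalize_prices(best_picture)
```

After normalization the favorite's probability shrinks slightly (0.78 becomes roughly 0.74), which is exactly the correction needed when the market's prices overround.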

I was able to produce predictions for all 24 categories, expanding down the list through film editing, sound mixing, and beyond. I showed how these predictions moved in real time during the period between the Oscar nominations and the ceremony. For example, Argo zoomed upward in the best picture and adapted screenplay categories as Zero Dark Thirty plunged in best actress and original screenplay. The predictions were very accurate, with 19 of 24 categories correct and the winners in the other 5 categories showing reasonably high probabilities. Prediction market data and experimental prediction games harnessed the wisdom of the crowd, allowing me to scale easily to all 24 categories. These same data and models will allow me to expand easily to all sorts of domains in the near future.
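Judging forecasts like these on calibration, rather than just the 19-of-24 hit rate, means using a proper scoring rule. One standard choice (an illustration here, not necessarily the metric used in this project) is the Brier score, which rewards forecasts that assign high probability to eventual winners and penalizes overconfident misses. The category probabilities below are made up for demonstration.

```python
def brier_score(forecasts):
    """Mean Brier score over categories.

    forecasts: list of (probs, winner_index) pairs, where probs is a
    list of probabilities over the nominees in one category and
    winner_index marks the eventual winner. Lower is better; 0 is perfect.
    """
    total = 0.0
    for probs, winner in forecasts:
        total += sum((p - (1.0 if i == winner else 0.0)) ** 2
                     for i, p in enumerate(probs))
    return total / len(forecasts)

# Hypothetical two-category example: one confident correct call, and one
# miss where the winner still carried a reasonably high probability.
example = [
    ([0.90, 0.05, 0.05], 0),
    ([0.40, 0.35, 0.25], 1),
]
score = brier_score(example)
```

A forecast that misses but still gave the winner 35% scores far better than one that had written the winner off entirely, which is the sense in which "reasonably high probabilities" on the 5 missed categories still counts as good forecasting.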