jesse @ February 8, 2013


This season, I made weekly predictions of Super Bowl, conference, and division odds for all 32 teams. Now that the season is over, I can evaluate how these predictions performed.
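
The basic check is simple: bucket every weekly prediction by its stated probability, then compare the average prediction in each bucket to how often those events actually happened. Here's a rough sketch of that tally in Python; the data format, bin width, and sample numbers are just placeholders for illustration, not literally how my spreadsheet is organized.

    from collections import defaultdict

    def calibration_table(predictions, bin_width=0.05):
        """Bucket (predicted probability, outcome) pairs and compare the
        average prediction in each bucket to the actual frequency."""
        n_bins = round(1 / bin_width)
        buckets = defaultdict(list)
        for prob, occurred in predictions:
            # e.g. a 0.27 prediction lands in the 0.25-0.30 bucket
            buckets[min(int(prob / bin_width), n_bins - 1)].append((prob, occurred))
        rows = []
        for key in sorted(buckets):
            pairs = buckets[key]
            avg_pred = sum(p for p, _ in pairs) / len(pairs)
            actual = sum(o for _, o in pairs) / len(pairs)
            rows.append((avg_pred, actual, len(pairs)))
        return rows

    # (predicted probability, 1 if the event happened, 0 if not)
    sample = [(0.022, 1), (0.048, 1), (0.92, 1), (0.62, 0), (0.03, 0)]
    for avg_pred, actual, n in calibration_table(sample):
        print(f"predicted {avg_pred:.0%}  actual {actual:.0%}  (n={n})")

Run over every weekly prediction for all 32 teams, that kind of table is what the chart below summarizes.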

[Figure: predicted probability vs. actual occurrence]

The predictions did reasonably well, but in general I was overconfident: events I gave high probabilities happened less often than predicted, and events I gave low probabilities happened more often. There is no single bad-luck event I can point to. Baltimore winning the Super Bowl (and the AFC, for that matter), Washington winning the NFC East, and Denver winning the AFC West were all events I viewed as highly unlikely early on (and other than Denver, I kept viewing them skeptically right up until they happened). A further analysis points to exactly where the pain was.

[Figure: predicted probability vs. error]

Almost all of my error was concentrated in events I predicted would occur between 0% and 5% of the time. Some of these I was quick to correct (preseason, I predicted Atlanta would win the NFC South only 4.8% of the time, but that was up to 92% by Week 4). Others took me a while to correct (my preseason odds for Washington winning the NFC East were 2.2%, and they were still as low as 3.8% after Week 12). Others I never corrected (in my final simulation after Wild Card weekend, out of 500 runs, Baltimore won the Super Bowl exactly zero times; my computer really hated Baltimore). Despite this late-season error, I did get more accurate as the year wore on.

[Figure: error by quarter of the season]
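
A side note on that Baltimore zero: with only 500 simulated runs, the smallest non-zero probability the simulation can report is 0.2%, so anything the model considers rarer than that rounds down to never happening. One cheap guard, besides running more simulations, is to add a small pseudo-count to every team's tally so nothing ever comes out at exactly 0%. A sketch of the idea, with illustrative numbers rather than my actual simulator:

    def smoothed_probability(wins, runs, pseudo=1, outcomes=32):
        """Laplace-smoothed Monte Carlo estimate: credit every possible
        outcome with a small pseudo-count so rare events never report
        exactly 0% just because they never showed up in the sample."""
        return (wins + pseudo) / (runs + pseudo * outcomes)

    # Baltimore: 0 Super Bowl wins in 500 simulated postseasons
    print(f"{smoothed_probability(0, 500):.2%}")    # ~0.19% instead of 0.00%
    # A common outcome barely moves: 250 wins in 500 runs -> ~47.2% vs 50.0%
    print(f"{smoothed_probability(250, 500):.1%}")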

This data will be useful for calibrating my model. It also means that, next season, I should be more accurate. Or maybe next year a mediocre 3-6 team won't suddenly finish the season 7-0 and take the NFC East. (Sorry, still bitter.) (Go Giants.)
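
For what it's worth, the crudest version of that calibration is just to shrink every raw probability toward a naive base rate, so the model can never be quite as sure of itself as it was about Baltimore and Washington. A toy sketch; the weight and base rate below are arbitrary, not fitted to anything:

    def shrink_toward_base(prob, weight=0.2, base=1 / 32):
        """Blend a raw model probability with a naive base rate
        (1-in-32 for 'this team wins the Super Bowl') to pull
        overconfident predictions back toward the middle."""
        return (1 - weight) * prob + weight * base

    print(f"{shrink_toward_base(0.0):.1%}")    # 0% becomes ~0.6%
    print(f"{shrink_toward_base(0.92):.1%}")   # 92% becomes ~74.2%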

