Thursday, October 1, 2015


Below are the 2015 week four NFL predictions based on the model we built last year. The model uses a learning algorithm fed by data from only the last three weeks, so there is no expectation of great success just yet. Last week the model was 62.5% correct; week two came in at 50%. Let's see if this week can surpass both.

So here goes (the percentage after each game is the crowd-sourced prediction as of this afternoon):
  • PIT over BAL - 57% agreement
  • NYJ over MIA - 68% agreement
  • IND over JAX - 94% agreement
  • BUF over NYG - 83% agreement
  • CAR over TB - 92% agreement
  • 2014 Model Split
    • Based on normal averaging
      • WAS over PHI - 20% agreement
    • Based on moving averaging
      • PHI over WAS - 80% agreement
  • OAK over CHI - 81% agreement
  • ATL over HOU - 92% agreement
  • CIN over KC - 80% agreement
  • CLE over SD - 13% agreement
  • GB over SF - 95% agreement
  • ARI over STL - 95% agreement
  • DEN over MIN - 87% agreement
  • DAL over NO - 68% agreement
  • SEA over DET - 94% agreement


Games to watch: JAX @ IND, PHI @ WAS, and CLE @ SD. My intuitions are mixed on the KC @ CIN game; I wouldn't be surprised to see KC win that one. As for OAK @ CHI, the model clearly favors OAK, but I could see this game going either way. (The shutout by SEA last week really hurt the numbers for CHI this week.)

The 2014 Model Split above occurred this week because this is the first week that our normal averages and moving averages diverged. Even so, the PHI @ WAS game was the only prediction affected. As the season progresses, we should see more splits. The experiment with moving averages is really to see whether we can capture the dynamics that shape the intuitions of the crowd without looking at individualities. (See Predicting the Winners vs. Predicting the Crowd.) The new model, which we will roll out in alpha test soon (maybe next week), is not a revised version of the 2014 model but an approach based on our earlier experiments with dynamic associative networks and feature detection. Again, we do not expect the new model to come out of the starting gate working. It will need several adjustments to weights and threshold functions, and maybe the consideration of additional variables. This may occupy us for the next few years. We shall see.
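The split between the two averaging schemes can be illustrated with a toy example. This is a minimal sketch using made-up weekly point differentials, not the model's actual inputs or features:

```python
# Sketch of how a "model split" can happen: once more weeks of data
# exist than the moving-average window covers, the normal (cumulative)
# average and the moving average can favor different teams.
# The weekly point differentials below are hypothetical.

def cumulative_avg(values):
    """Normal averaging: mean over every week seen so far."""
    return sum(values) / len(values)

def moving_avg(values, window=3):
    """Moving averaging: mean over only the most recent `window` weeks."""
    recent = values[-window:]
    return sum(recent) / len(recent)

# Hypothetical weekly point differentials through four weeks
phi = [-18, 2, 8, 10]   # started poorly, trending up
was = [16, 4, 0, -6]    # started strong, trending down

norm_pick = "WAS" if cumulative_avg(was) > cumulative_avg(phi) else "PHI"
move_pick = "WAS" if moving_avg(was) > moving_avg(phi) else "PHI"

print(norm_pick)  # WAS -- the season-long average still favors WAS
print(move_pick)  # PHI -- the last three weeks favor PHI
```

With only three weeks of data the two averages coincide, since the window spans the whole season; a divergence can only appear once the season is longer than the moving window, which is why this is the first week a split showed up.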
