Tuesday, September 29, 2015

WEEK THREE NFL RESULTS

Here are the prediction results for week three of the regular season NFL games using the method we employed last year:
  • WAS over NYG - (Actual Score: 21 to 32)
    • Model: Incorrect - Crowd: Correct
  • DAL over ATL - (Actual Score: 28 to 39)
    • Model: Incorrect - Crowd: Correct
  • TEN over IND - (Actual Score: 33 to 35)
    • Model: Incorrect - Crowd: Correct
  • CLE over OAK - (Actual Score: 20 to 27)
    • Model: Incorrect - Crowd: Incorrect
  • CIN over BAL - (Actual Score: 28 to 24)
    • Model: Correct - Crowd: Correct
  • NE over JAX - (Actual Score: 51 to 17)
    • Model: Correct - Crowd: Correct
  • CAR over NO - (Actual Score: 27 to 22)
    • Model: Correct - Crowd: Correct
  • NYJ over PHI - (Actual Score: 17 to 24)
    • Model: Incorrect - Crowd: Incorrect
  • HOU over TB - (Actual Score: 19 to 9)
    • Model: Correct - Crowd: Correct
  • SD over MIN - (Actual Score: 14 to 31)
    • Model: Incorrect - Crowd: Correct
  • PIT over STL - (Actual Score: 24 to 17)
    • Model: Correct - Crowd: Correct
  • ARI over SF - (Actual Score: 47 to 7)
    • Model: Correct - Crowd: Correct
  • BUF over MIA - (Actual Score: 41 to 14)
    • Model: Correct - Crowd: Correct
  • SEA over CHI - (Actual Score: 26 to 0)
    • Model: Correct - Crowd: Correct
  • DEN over DET - (Actual Score: 24 to 12)
    • Model: Correct - Crowd: Correct
  • GB over KC - (Actual Score: 38 to 28) 
    • Model: Correct - Crowd: Correct

The model was correct on 62.5% of the games, up from 50% in week two, which is about where I would expect it to be. Next week's accuracy should be a little higher, unless the model collapses, as it did in two weeks last year. So, we'll have to see.

The crowd did very well this week at 87.5%, compared to 44% last week. A possible explanation is that the crowd has access to game-specific information (such as injuries) that the model does not. Also, last year's model (which we are using here) is not sensitive to whether a team is playing at home or away; the model currently under development is.

During week three, the model agreed with the crowd 75% of the time, up from 69% last week. In 62.5% of the games, the model and the crowd were both correct (up from 31% last week), and in only 12.5% were they both incorrect, compared to 38% last week.
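For anyone who wants to check these tallies, here is a minimal Python sketch. It is my own transcription of the per-game outcomes listed above (not part of the prediction model itself), and it simply recomputes the model accuracy, crowd accuracy, and agreement figures for week three.

# Each tuple records (model correct?, crowd correct?) for one game,
# in the same order as the list above.
results = [
    (False, True),   # WAS over NYG
    (False, True),   # DAL over ATL
    (False, True),   # TEN over IND
    (False, False),  # CLE over OAK
    (True,  True),   # CIN over BAL
    (True,  True),   # NE over JAX
    (True,  True),   # CAR over NO
    (False, False),  # NYJ over PHI
    (True,  True),   # HOU over TB
    (False, True),   # SD over MIN
    (True,  True),   # PIT over STL
    (True,  True),   # ARI over SF
    (True,  True),   # BUF over MIA
    (True,  True),   # SEA over CHI
    (True,  True),   # DEN over DET
    (True,  True),   # GB over KC
]

n = len(results)
model_acc  = sum(m for m, _ in results) / n                 # 10/16 = 62.5%
crowd_acc  = sum(c for _, c in results) / n                 # 14/16 = 87.5%
agreement  = sum(m == c for m, c in results) / n            # 12/16 = 75.0%
both_right = sum(m and c for m, c in results) / n           # 10/16 = 62.5%
both_wrong = sum(not m and not c for m, c in results) / n   #  2/16 = 12.5%

print(f"Model:      {model_acc:.1%}")
print(f"Crowd:      {crowd_acc:.1%}")
print(f"Agreement:  {agreement:.1%}")
print(f"Both right: {both_right:.1%}")
print(f"Both wrong: {both_wrong:.1%}")

Running it reproduces the figures quoted above: 62.5% for the model, 87.5% for the crowd, 75% agreement, 62.5% both correct, and 12.5% both incorrect.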
