Wednesday, October 7, 2015

WEEK FOUR NFL RESULTS

Here are the prediction results for the week-four regular-season NFL games, using the method we employed last year:
    • PIT over BAL - (Actual score: 20 to 23)
      • Model: Incorrect - Crowd: Incorrect
    • NYJ over MIA - (Actual score: 27 to 14)
      • Model: Correct - Crowd: Correct
    • IND over JAX - (Actual score: 16 to 13)
      • Model: Correct - Crowd: Correct
    • BUF over NYG - (Actual score: 10 to 24)
      • Model: Incorrect - Crowd: Incorrect
    • CAR over TB - (Actual score: 37 to 23)
      • Model: Correct - Crowd: Correct
    • 2014 Model Split
      • Based on normal averaging
        • WAS over PHI - (Actual score: 23 to 20) 
          • Model: Correct - Crowd: Incorrect
      • Based on moving averaging
        • PHI over WAS - (Actual score: 20 to 23)
          • Model: Incorrect - Crowd: Incorrect
    • OAK over CHI - (Actual score: 20 to 22)
      • Model: Incorrect - Crowd: Incorrect
    • ATL over HOU - (Actual score: 48 to 21)
      • Model: Correct - Crowd: Correct
    • CIN over KC - (Actual score: 36 to 21)
      • Model: Correct - Crowd: Correct
    • CLE over SD - (Actual score: 27 to 30)
      • Model: Incorrect - Crowd: Correct
    • GB over SF - (Actual score: 17 to 3)
      • Model: Correct - Crowd: Correct
    • ARI over STL - (Actual score: 22 to 24)
      • Model: Incorrect - Crowd: Incorrect
    • DEN over MIN - (Actual score: 23 to 20)
      • Model: Correct - Crowd: Correct
    • DAL over NO - (Actual score: 20 to 26)
      • Model: Incorrect - Crowd: Incorrect
    • SEA over DET - (Actual score: 13 to 10)
      • Model: Correct - Crowd: Correct

    The standard 2014 model was correct on 60% of this week's games, down from 62.5% last week. Its accuracy so far is Week 2: 50%, Week 3: 62.5%, Week 4: 60%. Using the same method last year, we also saw a slight dip in Week 4: last year's accuracy was Week 2: 31%, Week 3: 56%, Week 4: 54%.
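    The 60% figures are just correct-call counts over the fifteen games listed above (taking the normal-averaging call for the WAS-PHI split). A minimal tally, transcribing the per-game results from the list:

```python
# Tally of the week-four results listed above.
# Each tuple is (model_correct, crowd_correct), in list order,
# using the standard (normal-averaging) call for WAS-PHI.
results = [
    (False, False),  # PIT over BAL
    (True,  True),   # NYJ over MIA
    (True,  True),   # IND over JAX
    (False, False),  # BUF over NYG
    (True,  True),   # CAR over TB
    (True,  False),  # WAS over PHI (normal averaging)
    (False, False),  # OAK over CHI
    (True,  True),   # ATL over HOU
    (True,  True),   # CIN over KC
    (False, True),   # CLE over SD
    (True,  True),   # GB over SF
    (False, False),  # ARI over STL
    (True,  True),   # DEN over MIN
    (False, False),  # DAL over NO
    (True,  True),   # SEA over DET
]

model_acc = sum(m for m, _ in results) / len(results)
crowd_acc = sum(c for _, c in results) / len(results)
print(f"model: {model_acc:.0%}, crowd: {crowd_acc:.0%}")  # model: 60%, crowd: 60%
```

Both tallies come out to 9 of 15, i.e. 60%, matching the figures quoted in the text.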

    The crowd fell significantly, from 87.5% correct last week to 60% correct this week, which is not surprising given that the model predicted the crowd's picks correctly 87% of the time. The standard model rightly predicted WAS over PHI against the crowd, and it wrongly predicted CLE over SD against the crowd. Note, however, how close both games were.

    Note also that six of the seven games above that the models called incorrectly were close and could have gone either way. We're making progress. The trick will be whether we can work our way around exigencies. These methods, combined with our emerging feature detection drive profiler, may do that trick.
