The model was correct on 62.5% of the games, up from 50% in week two, which is about where I would expect it to be. Next week's accuracy should be a little higher, unless the model collapses, as it did in two weeks last year. So, we'll have to see.
The crowd did very well this week at 87.5%, compared to 44% last week. A possible explanation is that the crowd has access to particular information (such as injuries) that the model does not. Also, last year's model (the one used here) is not sensitive to whether a team is home or away; the model currently under development is.
During week three, the model agreed with the crowd 75% of the time, up from 69% last week. In 62.5% of the cases the model and the crowd were both correct (up from 31% last week), and in only 12.5% of the cases were they both incorrect; last week, they were both incorrect 38% of the time.
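The bookkeeping above can be sketched in a few lines. The per-game picks are hypothetical (the post only reports the aggregate percentages), but they are chosen to reproduce the week-three numbers: 5 of 8 for the model, 7 of 8 for the crowd. Since each game has only two outcomes, the model and the crowd agree on a game exactly when they are both right or both wrong, so agreement is the sum of those two rates.

```python
# Hypothetical per-game results for 8 games: True means the pick was correct.
# These are illustrative, not the actual week-three picks.
model_correct = [True, True, True, True, True, False, False, False]  # 5 of 8
crowd_correct = [True, True, True, True, True, True, True, False]    # 7 of 8

n = len(model_correct)

model_acc = sum(model_correct) / n   # 0.625
crowd_acc = sum(crowd_correct) / n   # 0.875

# With two possible outcomes per game, "both right" and "both wrong"
# each imply the model and the crowd made the same pick.
both_right = sum(m and c for m, c in zip(model_correct, crowd_correct)) / n
both_wrong = sum(not m and not c for m, c in zip(model_correct, crowd_correct)) / n
agreement = both_right + both_wrong  # 0.625 + 0.125 = 0.75

print(f"model {model_acc:.1%}, crowd {crowd_acc:.1%}, "
      f"both right {both_right:.1%}, both wrong {both_wrong:.1%}, "
      f"agreement {agreement:.1%}")
```

Running this prints 62.5% for the model, 87.5% for the crowd, and 75.0% agreement, matching the figures reported above.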