When we compare our football predictions with the crowd predictions on the NFL website, we are not always trying to beat the crowd's predictions. With some models, we would like to match them. The reason has a long history in artificial intelligence and cognitive science. It concerns the question of whether artificial intelligences should be as smart as, or smarter than, human beings, but it also concerns the question of how human intelligence works.
From a practical, engineering standpoint, creating AIs that merely equal humans in intelligence is hardly a scientific gain. Who would want a self-driving car that was as prone to crash as a human driver? The gain comes when they are better, and the same goes for AI doctors and surgeons, autopilots, and a host of other automated systems, where we want automation to surpass our human abilities.
Recently in our lab, for instance, we were manually tracking drive statistics for NFL games. One intern, Andrey Biryuchinskiy, automated the process by writing a program to record the information directly from the web without human intervention. To verify that his program worked, we compared the results of his process with those of our human process. We did find several errors, not so much in the program's records as in the records compiled by us humans in the lab. Though we tried to be careful, even simple counting, especially across four or more games a week, showed how error-prone human cognition can be. Andrey's work not only saved us a great deal of time; more importantly, it improved the accuracy of our data collection and, in turn, the viability of the model we are constructing.
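The verification step above can be sketched in a few lines. This is a minimal illustration, not the lab's actual tooling: the record format, field names, and game identifier are hypothetical, and a real comparison would read both sources from files rather than literals.

```python
# Hypothetical sketch: cross-checking automatically scraped drive
# statistics against manually recorded ones. The schema here
# (game ids, field names) is illustrative only.

def find_discrepancies(auto_records, manual_records):
    """Return (game_id, field, auto_value, manual_value) tuples
    wherever the two sources disagree."""
    diffs = []
    for game_id, auto in auto_records.items():
        manual = manual_records.get(game_id)
        if manual is None:
            # Game present in the scraped data but missing from the manual log.
            diffs.append((game_id, "<missing game>", auto, None))
            continue
        for field, auto_value in auto.items():
            manual_value = manual.get(field)
            if manual_value != auto_value:
                diffs.append((game_id, field, auto_value, manual_value))
    return diffs

# Example: a manual tally of drives that disagrees with the scraped count.
auto = {"week1-game1": {"drives": 11, "punts": 3}}
manual = {"week1-game1": {"drives": 12, "punts": 3}}
print(find_discrepancies(auto, manual))
# → [('week1-game1', 'drives', 11, 12)]
```

Any disagreement the comparison flags then has to be adjudicated by hand; in our case, as noted above, the human log usually turned out to be the one in error.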
But why try to build an AI that can predict what humans will predict? The answer speaks to the analysis of human cognition: how human beings make decisions and predictions. Sometimes, as almost every week last year revealed, the humans were right and the machine wrong. What do human beings look at, examine, and analyze when they arrive at their predictions? What accounts for human 'expertise'? These are very difficult questions to answer. But if we can build a model that predicts what humans will predict, we will have moved a little closer to being able to address them.
There is no doubt that if we do succeed in predicting the crowd, the way our mechanisms do so will differ greatly from the way human beings do it. But the effort will nonetheless give us a heuristic that points toward further study. If we can build a successful model here, we will be able to hand others a function and say that, somewhere in the deep recesses of the human brain, this same function is being realized. To be clear, this does not mean that the brain is using the function, but merely that whatever the brain is doing can be described by it. How the brain does so is a question for the neuroscientists. Hopefully, our efforts will prove fruitful in this regard.