Welcome to the Cognitive Science Modeling Lab (CSML) at the
University of Evansville. CSML is a small lab devoted to agent-based modeling
and, primarily, to developing virtual circuits that can perform rudimentary
cognitive tasks. We also call our virtual circuits "Dynamic
Associative Networks" (DANs) to distinguish them from the more traditional
Artificial Neural Networks (ANNs). DANs differ from ANNs in that they use no
predetermined network structure; rather, we promiscuously add nodes to the
network wherever needed to improve cognitive function. They also differ in
that there are no fixed weights. Instead, they use dynamic weights that are
determined by information-theoretic methods. Training with DANs is done by way
of case-based reasoning. Additionally, new information can be added to DANs
without re-training the network, a genuine advantage over more traditional
models.
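To make the architecture concrete, here is a minimal sketch of a DAN-style store in Python. The class name, counting scheme, and the choice of pointwise mutual information as the information-theoretic measure are all illustrative assumptions, not CSML's actual implementation; the sketch only shows how nodes can appear on demand and how weights can be recomputed from counts rather than trained, so that new cases are absorbed without retraining.

```python
import math
from collections import defaultdict

class DynamicAssociativeNetwork:
    """Illustrative sketch (not CSML's code): nodes are created on demand
    simply by being observed, and association 'weights' are computed from
    co-occurrence counts at query time -- here via pointwise mutual
    information -- so adding new cases never requires retraining."""

    def __init__(self):
        self.node_count = defaultdict(int)   # times each node was observed
        self.pair_count = defaultdict(int)   # times two nodes co-occurred
        self.total_cases = 0

    def add_case(self, features):
        # New nodes appear just by being counted -- no fixed topology.
        self.total_cases += 1
        for f in features:
            self.node_count[f] += 1
        for a in features:
            for b in features:
                if a < b:
                    self.pair_count[(a, b)] += 1

    def weight(self, a, b):
        # Dynamic weight: recomputed from current counts on every query,
        # rather than stored as a trained parameter.
        key = (a, b) if a < b else (b, a)
        joint = self.pair_count[key]
        if joint == 0:
            return 0.0
        p_ab = joint / self.total_cases
        p_a = self.node_count[a] / self.total_cases
        p_b = self.node_count[b] / self.total_cases
        return math.log2(p_ab / (p_a * p_b))
```

For example, after adding the cases `["red", "round", "apple"]`, `["red", "round", "ball"]`, and `["green", "square", "box"]`, the weight between "red" and "round" is positive while the weight between "red" and "square" is zero, with no training pass in between.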
The CSML has a long history of different kinds of
projects. It began in the mid-1990s as the "Internet Applications
Laboratory" developing search engines for academic use. As the
"Digital Humanities" grew as an area of study, the lab's name was
changed to the "Digital Humanities Lab," and then, recently, the
"Cognitive Science Modeling Lab," to reflect more closely what we do
here. Over the years, the lab has been staffed by more than fifty students
working on a variety of projects, including internet search engine design,
agent-based explorations of traffic light patterns in Evansville, Indiana, and
of classroom simulations, and, regarding DANs in particular, a range of models
that include 1) object identification based on properties and
context-sensitivity, 2) comparison of similarities and differences among
properties and objects, 3) shape recognition of simple shapes regardless of
where they might appear in an artificial visual field, 4) association across
simulated sense modalities, 5) primary sequential memory of any seven-digit
number (inspired by Allen and Lange 1995), 6) network branching from one
subnet to another based on the presence
of a single stimulus, 7) eight-bit register control that could perform
standard machine-level operations, as in conventional Turing-style computational devices,
and 8) rudimentary natural language processing based on a stimulus/response
(i.e. anti-Chomskian) conception of language.
After a decade of exploration in toy environments,
we at the CSML stepped out into a genuinely complex adaptive system in a real-world
stochastic environment, namely, the National Football League, where we are
attempting to predict winners and losers of games and, in time, also the point
spread. The NFL was chosen because, while it is massively complex, it is
relatively constrained by a regular schedule, a fixed number of teams, a set of
articulated rules, and regular stop points (unlike free-flow games such as
hockey, basketball and soccer) where data can
be discretely retrieved.
During the 2014-2015 season, we employed our first
model, based on weighted averages determined by the percentage of points earned
by each team during the 2014-2015 season alone. As expected, the network learned
as it went: it started around 30% correct in week two, reached about 50% in
week three, and went on to end the season averaging around 65% correct when
predicting winners. In two weeks the model climbed above 80% correct, but
it also fell to around 30% in two other weeks.
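The weighted-average idea can be illustrated with a small sketch. The exact formula is not spelled out above, so the `points_share` scheme, the team names, and the score tallies below are hypothetical; the sketch only shows the general shape of a predictor that ranks teams by the fraction of points they have earned in season play so far.

```python
def points_share(team_stats):
    """Fraction of points a team has 'earned' in its games so far,
    here taken (as an assumption) to be points_for / total points
    scored in its games."""
    points_for, points_against = team_stats
    total = points_for + points_against
    return points_for / total if total > 0 else 0.5  # no games yet: neutral

def predict_winner(home, away, stats):
    """Pick whichever team has the larger season-to-date points share."""
    if points_share(stats[home]) >= points_share(stats[away]):
        return home
    return away

# Hypothetical season-to-date (points_for, points_against) tallies:
stats = {"Colts": (62, 41), "Bears": (37, 55)}
```

With these made-up tallies, `predict_winner("Colts", "Bears", stats)` would pick the Colts, since 62/103 exceeds 37/92. A model like this naturally improves over the early weeks, as each week's scores refine every team's share.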
This season, 2015-2016, we are moving into genuine
network models based on our regular DAN methods described above. Pattern
matching data will be based on drive profiles ranging over all the NFL games
played over the last ten years. The method, however, will not permit robust
predictions of the current season until each team has played at least one home
and away game. In the meantime, we will roll out
predictions based on last year's method. Our predictions and results will be
regularly posted on this blog throughout the season.
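Since DANs are trained by case-based reasoning, one plausible reading of "pattern matching on drive profiles" is a nearest-neighbor scheme over historical team-games. The sketch below is an assumed illustration only: the profile features (e.g., counts of drives ending in touchdowns, field goals, and punts), the distance measure, and the majority-vote rule are all invented for exposition.

```python
def match_drive_profiles(current, history, k=3):
    """Case-based sketch (assumed scheme, not CSML's actual method):
    find the k historical team-games whose drive profiles are closest
    to the current team's profile, then predict a win if the majority
    of those matched cases were wins."""
    def distance(a, b):
        # Squared Euclidean distance between two drive-profile vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    nearest = sorted(history, key=lambda case: distance(current, case["profile"]))[:k]
    wins = sum(1 for case in nearest if case["won"])
    return wins > k / 2

# Hypothetical profiles: [touchdown drives, field-goal drives, punts]
history = [
    {"profile": [5, 3, 2], "won": True},
    {"profile": [5, 3, 1], "won": True},
    {"profile": [1, 1, 8], "won": False},
    {"profile": [0, 2, 9], "won": False},
]
```

A scheme like this also makes clear why at least one home and one away game is needed before robust predictions: without both, a team's current-season profile has nothing representative to match against.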