Tuesday, September 29, 2015


Here are the prediction results for week three of the regular season NFL games using the method we employed last year:
  • WAS over NYG - (Actual Score: 21 to 32)
    • Model: Incorrect - Crowd: Correct
  • DAL over ATL - (Actual Score: 28 to 39)
    • Model: Incorrect - Crowd: Correct
  • TEN over IN - (Actual Score: 33 to 35)
    • Model: Incorrect - Crowd: Correct
  • CLE over OAK - (Actual Score: 20 to 27)
    • Model: Incorrect - Crowd: Incorrect
  • CIN over BAL - (Actual Score: 28 to 24)
    • Model: Correct - Crowd: Correct
  • NE over JAX - (Actual Score: 51 to 17)
    • Model: Correct - Crowd: Correct
  • CAR over NO - (Actual Score: 27 to 22)
    • Model: Correct - Crowd: Correct
  • NYJ over PHI - (Actual Score: 17 to 24)
    • Model: Incorrect - Crowd: Incorrect
  • HOU over TB - (Actual Score: 19 to 9)
    • Model: Correct - Crowd: Correct
  • SD over MIN - (Actual Score: 14 to 31)
    • Model: Incorrect - Crowd: Correct
  • PIT over STL - (Actual Score: 24 to 17)
    • Model: Correct - Crowd: Correct
  • ARI over SF - (Actual Score: 47 to 7)
    • Model: Correct - Crowd: Correct
  • BUF over MIA - (Actual Score: 41 to 14)
    • Model: Correct - Crowd: Correct
  • SEA over CHI - (Actual Score: 26 to 0)
    • Model: Correct - Crowd: Correct
  • DEN over DET - (Actual Score: 24 to 12)
    • Model: Correct - Crowd: Correct
  • GB over KC - (Actual Score: 38 to 28) 
    • Model: Correct - Crowd: Correct

The model was correct on 62.5% of the games, up from 50% in week two, which is about where I would expect it to be at this point. Next week's predictions should be a little better, unless the model collapses, as it did during two weeks last year. So, we'll have to see.

The crowd did very well this week at 87.5%, compared to 44% last week. A possible explanation is that the crowd has access to particulars (such as injuries) that the model does not. Also, last year's model (which we are using here) is not sensitive to whether a team is home or away. The model currently under development is.

During week three, the model agreed with the crowd 75% of the time, up from 69% last week. In 62.5% of the cases, the model and the crowd were both correct, up from 31% last week, and in only 12.5% of the cases were they both incorrect. Last week, they were both incorrect 38% of the time.
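For readers who want to check the arithmetic, the percentages above fall directly out of the game-by-game results. The tally below is a small Python sketch (not the lab's actual scoring code) that encodes each game as a pair of model/crowd outcomes in the order listed:

```python
# Week-three results from the list above, as (model_correct, crowd_correct).
results = [
    (False, True),   # WAS over NYG
    (False, True),   # DAL over ATL
    (False, True),   # TEN over IN
    (False, False),  # CLE over OAK
    (True, True),    # CIN over BAL
    (True, True),    # NE over JAX
    (True, True),    # CAR over NO
    (False, False),  # NYJ over PHI
    (True, True),    # HOU over TB
    (False, True),   # SD over MIN
    (True, True),    # PIT over STL
    (True, True),    # ARI over SF
    (True, True),    # BUF over MIA
    (True, True),    # SEA over CHI
    (True, True),    # DEN over DET
    (True, True),    # GB over KC
]

n = len(results)
model_pct = 100 * sum(m for m, _ in results) / n        # model accuracy
crowd_pct = 100 * sum(c for _, c in results) / n        # crowd accuracy
agree_pct = 100 * sum(m == c for m, c in results) / n   # model/crowd agreement
both_right = 100 * sum(m and c for m, c in results) / n
both_wrong = 100 * sum(not m and not c for m, c in results) / n

print(model_pct, crowd_pct, agree_pct, both_right, both_wrong)
```

Running this reproduces the figures quoted in the post: 62.5% for the model, 87.5% for the crowd, 75% agreement, 62.5% both correct, and 12.5% both incorrect.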

Friday, September 25, 2015


There are added advantages of being an intern in the CSML. These are couched in the rules (or constraints) under which the lab operates. They are worth posting here to help others understand the "climate" of the lab.

  • Each of you will have night and weekend access to Olmsted Hall and to my office/lab. To get into the building after hours, you will need to call security and wait for them to unlock the building. The keycode for my office door is ____________. The room is OH 301. 

  • If you wish to use one of the iMacs for your project, we will set up accounts for you on each of the machines. Please only use your account or the “guest” account. You may use the machines for other purposes, though lab work takes priority. THERE IS NO PRINTER ACCESS.

  • You are welcome to use the lab for your other work as well, understanding that priority goes to lab work first.

  • Please be considerate of others in the lab. If you wish to listen to music, for instance, and are not alone, wear headphones.

  • You are welcome to bring one, and only one, friend with you (other than a lab worker) to study in the lab, provided that space is available. Remember that you are responsible for the actions of the person you bring.

  • You are welcome to use my desk, though I have priority over it.

  • Clutter really bothers me. So, PLEASE pick up after yourself. If you are the last to leave, please make sure that all chairs are pushed in, garbage is thrown out, windows are closed, etc. The lights will take care of themselves.

  • Do not erase anything that you did not put on the white boards. Additionally, please erase anything that you add that you will not need later. You can section off space on the white boards and add your initials to that section, if you plan to return later to work on stuff. (Remember that you can take a photo of the board to save any information you wish.)

  • Anything in the top drawer of the filing cabinet beneath the white board is free for your consumption. However, anything you leave in there may be consumed by anyone else. (The filing cabinet on my desk is OFF LIMITS.)


  • Jacob Green is the lab manager. What he says goes, unless Doc says something different, in which case what Doc says goes.

Thursday, September 24, 2015


Below please find the 2015 week three NFL predictions based on the model we built last year. This model uses a learning algorithm fed by data from just the last two weeks, so there is no expectation of great success just yet. Last week, the model was 50% correct. Let's see if this week can surpass that.

So here goes - (The % indicated after each game represents the crowd-sourced prediction from nfl.com as of this afternoon):
  • WAS over NYG - 26% agreement
  • DAL over ATL - 27% agreement
  • TEN over IN - 26% agreement
  • CLE over OAK - 57% agreement
  • CIN over BAL - 62% agreement
  • NE over JAX - 98% agreement
  • CAR over NO - 84% agreement
  • NYJ over PHI - 71% agreement
  • HOU over TB (but barely) - 68% agreement
  • SD over MIN - 49% agreement
  • PIT over STL - 82% agreement
  • ARI over SF - 89% agreement
  • BUF over MIA (but barely) - 68% agreement
  • SEA over CHI - 96% agreement
  • DEN over DET - 87% agreement
  • GB over KC - 90% agreement
Games to watch: TB @ HOU and BUF @ MIA.

My own intuitions disagree with the model on the IN @ TEN game, and not merely because I am a Colts fan. Even though the numbers don't suggest it, I would personally expect to see a good game in KC @ GB.

Wednesday, September 23, 2015


Below is our student intern staff for the Fall 2015 semester and football season along with a few words about each.

Senior Jacob Green
Jacob Green (lab manager) is a senior cognitive science major with minors in philosophy, psychology and mathematics who comes to the university from Westport, Indiana. As lab manager, he is responsible for maintaining order in the lab, ensuring that interns are putting in the required time, answering questions, and reporting to me directly on the state of our projects. He also assists me with the end of semester evaluation of the interns and maintains our facility.

Jacob has worked in the lab on several projects, most of which have involved agent-based modeling. After completing his degree at UE, he plans to get his Master's Degree and then work building computer models and network systems. He says that his favorite thing about working in the CSML "is the openness of the environment and the encouragement to think outside the box."

Sophomore Jacob Ball
Jacob Ball is a sophomore computer science major who is also pursuing a minor in mathematics. He comes to the university from Owensville, Indiana. He is currently working in the lab on mechanisms that move data from our data store into the network for transduction onto the feature detection grid in our latest model. (See A Glimpse of Things to Come posted earlier to this blog.)

Jacob was a member of our internship staff last semester (along with the other Jacob pictured above). After graduation, Mr. Ball hopes to become an expert programmer for either Google or Apple. He says that his favorite thing about working in the CSML is that "the lab has all the elements of getting work done including silence, white boards, and computing power."

Senior Andrey Biryuchinskiy
Not all of our interns are named Jacob (though we have had several), nor are they all from Indiana. Andrey Biryuchinskiy is a senior finance and economics major who comes to the university from Moscow, Russia. (He sounds like it too!) In the lab, he is currently serving as the software architect for our new model and as coordinator of the programming work for Mr. Ball and Ms. Olson (below).

After graduating this December, he hopes to work in the banking industry financing renewable energy projects or attend graduate school in a Master's of Finance program. He says that "the complexity and variety of tasks assigned to students is what I really enjoy about the CSML. We receive guidance from Doc [that's me], but we are free to come up with the solutions on our own."

Junior Alycia Olson
Alycia Olson is a junior cognitive science and philosophy major pursuing a minor in computer science. She comes to the university from Omaha, Nebraska. Currently, she is working in the lab on scanning the patterns on the second transduction layer of our network and preparing output to be passed to the first decision layer.

After graduating from UE, she hopes either to go to graduate school or join the Air Force as an officer. Her favorite thing about the CSML is her excitement about "working with predictive modeling ... and the free tea." (Someone keeps eating the free cookies too, but I don't think it's her.)

Both Jacob and Alycia are learning from the seniors pictured above. One of our hopes in the CSML is that past interns will return in future semesters to continue to work on our projects and to help train the next generation of interns.

Senior Evan Snider
Senior Evan Snider is a marketing and creative writing double major from the neighboring town of Chandler, Indiana. He was also a member of last semester's NFL internship team. He serves the team with his knowledge of football. While we all know something about the game, Evan is the real football aficionado. (Andrey knows that the Dallas Cowboys wear blue and silver, and Alycia knows that the Seahawks are in Seattle, but this doesn't get us far when approaching the target we are trying to model.) Evan works in the lab because of his insight into what we should be tracking when things don't work and his ability to help us understand what it is that we are tracking.

After graduation, Evan hopes to work in a marketing environment that allows him to use his skills as a creative writer. He says that his favorite thing about working in the CSML is that "it's great to be around other students with interests in cognitive science and learning in general."

Director Doc Beavers
And this is me, Tony Beavers (AKA "Doc"), with my "what in the hell are you doing" face. I selected this picture out of sympathy for the pictures above. None of us are as goofy as these pictures suggest. In any case, my job is to direct the CSML, and my favorite thing about doing so is getting to work with self-motivated undergraduates who never cease to amaze me by what they teach me. I'm very proud of our team this year. We are making progress every day, and I'm excited to see where we end up as this NFL season carries on.

Oh, and I should add that I also enjoy working with networks and predictive modeling from the perspective of complex systems. All of this fascinates me to no end. Cookies, tea, models and students: things don't get much better than this. Life is good! Yeah. Cookies. Yeah.


When we compare our football predictions with the crowd predictions on the NFL website, we are not always trying to beat the crowd's predictions. With some models, we would like to match them. The reason has a long history in artificial intelligence and cognitive science. It concerns the question about whether artificial intelligences should be as smart as or smarter than human beings, but it also concerns the question about how human intelligence works.

From the practical, engineering standpoint, creating AIs that are merely equal to humans in intelligence is hardly a scientific gain. Who would want a self-driving car as prone to crash as a human driver? The gain comes when they are better, and the same goes for AI doctors and surgeons, autopilots, and a host of other automated systems where we want automation to surpass our human abilities.

Recently in our lab, for instance, we were manually tracking drive statistics for NFL games. One intern, Andrey Biryuchinskiy, automated the process by writing a program to record the information directly from the web without human intervention. To verify that his program worked, we decided to compare the results of his process with our human process. We did find several errors, not (so much) in the records of the program, but in the records compiled by us humans in the lab. Though we tried to be careful, mere counting, especially while analyzing four or more games a week, revealed how error prone human cognition can be. Andrey's work not only saved us a lot of time; more importantly, it improved the accuracy of our data collection and, in turn, the viability of the model we are constructing.

But why try to build an AI that can predict what humans will predict? The answer to this question speaks to the analysis of human cognition, how human beings make decisions and make predictions. Sometimes, as was revealed in almost every week last year, the humans were right and the machine wrong. What do human beings look at, examine, analyze, etc., when they arrive at their predictions? What accounts for human 'expertise'? These are very difficult questions to answer. However, if we can build a model that can predict what humans will predict, we will have moved a little closer to being able to address them.

There is no doubt that if we do succeed in predicting the crowd, the way our mechanisms do so will differ greatly from the way human beings do it. But the effort will, nonetheless, give us a heuristic to point in a direction for further study. If we can build a successful model here, we will be able to pass on to others a function and say that somehow in the deep recesses of the human brain this same function is being realized. To be clear, this does not mean that the brain is using the function, but merely that whatever the brain is doing can be described by this function. How it does so is a question for the neuroscientists. Hopefully, our efforts will prove fruitful in this regard.

Tuesday, September 22, 2015


Here are the prediction results for week two of the regular season NFL games using the method we employed last year:
  • DEN over KC - 
    • Model: Correct - Crowd: Incorrect
  • CAR over HOU - 
    • Model: Correct - Crowd: Correct
  • SF over PIT - 
    • Model: Incorrect - Crowd: Correct
  • NO over TB - 
    • Model: Incorrect - Crowd: Incorrect
  • DET over MN - 
    • Model: Incorrect - Crowd: Incorrect
  • ARI over CHI - 
    • Model: Correct - Crowd: Correct
  • BUF over NE - 
    • Model: Incorrect - Crowd: Correct
  • CIN over SD - 
    • Model: Correct - Crowd: Correct
  • TEN over CLE - 
    • Model: Incorrect - Crowd: Incorrect
  • ATL over NYG - 
    • Model: Correct - Crowd: Correct
  • STL over WAS - 
    • Model: Incorrect - Crowd: Incorrect
  • MIA over JAX - 
    • Model: Incorrect - Crowd: Incorrect
  • BAL over OAK - 
    • Model: Incorrect - Crowd: Incorrect
  • DAL over PHI - 
    • Model: Correct - Crowd: Incorrect
  • GB over SEA - 
    • Model: Correct - Crowd: Correct
  • NYJ over IN - 
    • Model: Correct - Crowd: Incorrect

The model was correct on 50% of the games. Since the model uses a learning algorithm, I was expecting it to land somewhere in the 30% range. So, it outperformed my expectations, but at the cost of a Colts loss, which is never good. (Data for these predictions was based on one week only.)

The crowd (based on the NFL Weekly Pick'em as of last Thursday afternoon) was correct on 44% of the games. So, the model beat the crowd, but this is only an incidental goal. Ideally, at least one of our models will accord with the predictions of the crowd 100% of the time. I will explain why this matters in my upcoming post, Predicting the Winners vs. Predicting the Crowd. During this particular week, the model accorded with the crowd 69% of the time. In only 31% of the cases were the model and crowd both correct. In 38% of the cases, they were both incorrect. Let's see how these numbers change over the next several weeks. If last year is any indication, they should improve.

Sunday, September 20, 2015


The CSML is staffed with student interns under my supervision and the supervision of a student lab manager as a complement to the university's cognitive science program. This year's lab manager is Jacob Green, a senior cognitive science major. Last year, the position was held by Cody Baker, a double major in cognitive science and applied mathematics who is currently pursuing his Ph.D. in mathematics at the University of Notre Dame.

Even though the CSML maintains a tight affiliation with our cognitive science program, interns have come from all of the colleges at the university. Past majors have included athletic training, exercise science, engineering, computer science, mathematics, marketing, finance, economics, and even philosophy and creative writing. Interns have also ranged over all four college years. Freshmen and Sophomores register for the internship experience under COGS 292, while Juniors and Seniors register under COGS 492. The experience may be repeated, and selection for open spots is by way of application. Registration and tuition costs are handled by the University of Evansville.

The Spring 2015 CSML Internship Staff
Interns generally work on a common project as a single team or a set of teams, though self-motivated students who have their own projects in agent-based modeling or network modeling have also worked in the lab. Several single-student computer science senior projects have been undertaken in this capacity, most of which have won first place as the outstanding senior project in computer science for the year they were undertaken.

Internships may be for 1, 2 or 3 credit hours, with the following expectations: that a 1-hour internship will involve four hours of work in the lab per week, a 2-hour internship will involve seven hours of work, and a 3-hour internship ten.

The goal of the lab is to explore the intersection of computing, computer modeling, artificial intelligence, and cognition in an open, creative, team-oriented environment. From the start, we have never been averse to "reinventing the wheel," if reinventing it helps us to understand "the wheel" better, gain insight, and apply that insight to other problems.

As the CSML director, I agree completely with a statement that I heard cognitive scientist Paul Thagard make in a keynote address for an annual meeting of the International Association for Computing and Philosophy: "One thing I like about working with undergrads is that they have yet to learn what is impossible." This is a sentiment worthy of being posted on the door to our lab.

Saturday, September 19, 2015


Projects for our internship teams are undertaken in the Cognitive Science Modeling Lab (CSML), which is equipped with three 27" quad-core iMacs, two of which run at 3.2 GHz with 16 GB of RAM and plenty of storage space. Two of the computers are operated with a Bluetooth trackpad and one with a mouse.

Overhead hangs a 40" Samsung smart television wired to an XBOX One with a Kinect unit (visible above the TV). The primary use of the XBOX at this time is to study the game of football by running simulations using Madden 16.

The TV/XBOX combination also offers video conferencing capabilities. The Kinect unit was purchased for later cognitive science experiments involving perception and action.

The lab also has two white boards, which are used mostly for planning and working out math problems. So that several students have the opportunity to use the boards, students photograph the state of the boards when they finish so they can pick up later where they left off.

(You might also notice the hot pot on the small filing cabinet. This is available for all lab workers to make coffee and tea. Treats are available in the top drawer of the cabinet while they last. We have an open drawer policy - anything in the cabinet can be consumed by anyone, and the occasional generous lab worker will drop off treats for the others.)

Many students work around the conference table on their own laptops. The table is useful for laying out data printouts to inspect them for patterns, and of course the table is used for meetings.

Students have 24/7 lab access, and many will use the lab for a quiet place to do homework from other classes when they are not working on the football project.
The general hope with the lab is to create a friendly environment for educational play with an eye toward self-motivated discovery and problem solving.

Friday, September 18, 2015


In 2009, UE Cognitive Science and Computer Science double major Michael Zlatkovsky did his senior project on our Dynamic Associative Networks (DANs). His project is described on his companion website. Since 2009, we have made several innovations in our networks, including the addition of information-theoretic dynamic weighting and the use of threshold functions. Even so, the goal of the project has largely been the same: To create a system that can transform ordered input organically and recurrently into a structure that possesses cognitive abilities. The current choice of using the NFL as the domain for application is explained in the welcome message for this blog.

Michael's project was the first-place winner for "Outstanding Electrical and Computer Engineering Paper/Presentation" at the University of Evansville's Undergraduate Research Conference in Mathematics, Engineering, and Sciences (MESCON). At the time of this writing, Michael is working for Microsoft in Seattle. Go Seahawks!

In 2010, I published Typicality Effects and Resilience in Evolving Dynamic Associative Networks in the November Symposium Proceedings for the Association for the Advancement of Artificial Intelligence. The topic of our symposium concerned Complex Adaptive Systems.

Two years later, UE Philosophy and Cognitive Science double major Christopher Harrison and I wrote Information-Theoretic Teleodynamics in Natural and Artificial Systems for A Computable Universe: Understanding Computation and Exploring Nature as Computation, edited by Hector Zenil with a foreword by Sir Roger Penrose.

Mr. Harrison and I also presented our research at the Institute for Advanced Topics in the Digital Humanities, Network and Network Analysis for the Humanities, sponsored by the National Endowment for the Humanities and the Institute for Pure and Applied Mathematics (IPAM) at the University of California, Los Angeles. The title of our presentation was Hybrid Networks: Transforming Networks for Social and Textual Analysis into Teleodynamic and Predictive Mechanisms. Slides are available by clicking on the previous link.

In 2013, UE students Drew Reisinger, Taylor Martin, Mason Blankenship, Christopher Harrison, Jesse Squires and I wrote a related paper, Exploring Wolfram's Notion of Computational Irreducibility with a Two-Dimensional Cellular Automaton, for Irreducibility and Computational Equivalence: Wolfram Science 10 Years after the Publication of A New Kind of Science (Emergence, Complexity and Computation), also edited by Hector Zenil.

Currently, Mr. Reisinger is pursuing a Ph.D. in cognitive science at Johns Hopkins University, Mr. Squires is working for Facebook, and  Mr. Blankenship is working for Ciholas Technologies in Newburgh, Indiana.


Below is an image of part of the pattern matching system that will be used in our forthcoming model. In this instance, the network is attempting to detect the style of play from the Patriots/Steelers game in the Bills/Colts game from last weekend and finding only a 3% match.

Attempted Pattern Detection of One Style of Play

When the network is complete, the pattern matching layer will attempt to match the current pattern, represented by the blue horizontal bars, against all the previous patterns encountered by the network simultaneously. The results will then be fed to another layer of the network that will correlate similarities and differences in the styles of play of all games represented in its case-based reasoning set. That layer will, in turn, yield a weighted set of matches with the game winner and the point spread, assuming that all works as we anticipate. If not, we have some other weighting techniques to try before resorting to tracking additional variables.
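The matching-and-weighting step just described can be sketched in miniature. In the Python sketch below, the drive-profile vectors, the similarity measure (simple positional agreement, consistent with the "3% match" figure above being a percentage), and the case-base entries are all hypothetical stand-ins for the network's actual layers:

```python
# Toy sketch of the pattern-matching layer: score the current game's
# drive profile against every profile in a case base, then rank the
# associated outcomes by match strength. All data here is made up.

def match_score(current, past):
    """Fraction of positions where two drive-profile vectors agree."""
    agree = sum(1 for a, b in zip(current, past) if a == b)
    return agree / len(current)

# Each case: (drive-profile pattern, (winner, point spread)).
case_base = [
    ([1, 0, 1, 1, 0, 1], ("NE", 7)),
    ([0, 0, 1, 0, 1, 1], ("PIT", 3)),
    ([1, 1, 0, 1, 0, 0], ("BUF", 10)),
]

current_game = [1, 0, 1, 0, 0, 1]

# Weight each past outcome by how closely its pattern matches.
weighted = [(match_score(current_game, p), outcome) for p, outcome in case_base]
weighted.sort(reverse=True)
best_score, (winner, spread) = weighted[0]
print(best_score, winner, spread)
```

The real network compares all stored patterns simultaneously rather than in a loop, and its weighting is more sophisticated than a raw agreement fraction, but the input/output shape is the same: a current pattern in, a weighted set of (winner, spread) matches out.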

Thursday, September 17, 2015


It is of course premature to draw any conclusions about the viability of a model from one instance, but in a rather remarkable play within the last 36 seconds of the game, the Broncos scored a touchdown against the Chiefs to end the game with a 31 to 24 win. The prediction was:
  • DEN over KC (but just barely) - 42% agreement
In this case, the model was correct, and the NFL crowd-sourced prediction at 42% agreement was incorrect.

Again, this doesn't mean much, but the game was indeed well worth watching. Click here to watch the final scoring play.


While our lab team is preparing a more sophisticated network model for NFL predictions than last year's model, I thought I would go ahead and make Week Two predictions using last year's method. 
Since this is based on a learning algorithm and we only have one week of data for the current season, there is no expectation that the predictions below will be correct. Last year's Week 2 predictions were only about 32% correct. Let's see if this year does any better using the same method.
So here goes - (The % indicated after each game represents the crowd-sourced prediction from nfl.com as of this afternoon):
  • DEN over KC (but just barely) - 42% agreement
  • CAR over HOU - 68% agreement
  • SF over PIT - 26% agreement
  • NO over TB - 95% agreement
  • DET over MN - 69% agreement 
  • ARI over CHI - 82% agreement 
  • BUF over NE (but counterintuitive) - 38% agreement 
  • CIN over SD - 60% agreement
  • TEN over CLE - 83% agreement
  • ATL over NYG - 54% agreement
  • STL over WAS - 92% agreement
  • MIA over JAX - 95% agreement
  • BAL over OAK - 92% agreement
  • DAL over PHI (but just barely) - 42% agreement
  • GB over SEA (but close) - 77% agreement
  • NYJ over IN - 17% agreement
Games to Watch: DEN @ KC, DAL @ PHI and SEA @ GB.


Welcome to the Cognitive Science Modeling Lab (CSML) at the University of Evansville. The CSML is a small lab devoted to agent-based modeling and, primarily, to developing virtual circuits that can perform rudimentary cognitive tasks. We call our virtual circuits "Dynamic Associative Networks" (DANs) to distinguish them from the more traditional Artificial Neural Networks (ANNs). DANs differ from ANNs in that they use no predetermined network structure; rather, we promiscuously add nodes wherever needed to improve cognitive function. They also differ in that there are no fixed weights. Instead, they use dynamic weights determined by information-theoretic methods. Training with DANs is done by way of case-based reasoning. Additionally, new information can be added to DANs without retraining the network, a genuine advantage over more traditional models.
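A toy illustration of that "dynamic weights, no retraining" property is sketched below. The choice of pointwise mutual information as the weighting measure is an assumption for the sketch (the lab's actual information-theoretic measure may differ), and the feature sets are made up; the point is only that edge weights are recomputed from running counts, so adding a new case never requires retraining existing weights:

```python
import math
from collections import defaultdict

# Toy DAN-style store: nodes are created on first sight, and edge
# weights are derived on demand from co-occurrence counts (here via
# pointwise mutual information), so new cases never force retraining.

class ToyDAN:
    def __init__(self):
        self.count = defaultdict(int)   # node -> occurrences
        self.co = defaultdict(int)      # (a, b) -> co-occurrences
        self.cases = 0

    def add_case(self, features):
        """Add one case; unseen features become new nodes automatically."""
        self.cases += 1
        for f in features:
            self.count[f] += 1
        for a in features:
            for b in features:
                if a != b:
                    self.co[(a, b)] += 1

    def weight(self, a, b):
        """Dynamic edge weight: PMI computed from the current counts."""
        if self.co[(a, b)] == 0:
            return 0.0
        p_ab = self.co[(a, b)] / self.cases
        return math.log2(p_ab / ((self.count[a] / self.cases)
                                 * (self.count[b] / self.cases)))

dan = ToyDAN()
dan.add_case({"win", "home", "strong_run_game"})
dan.add_case({"loss", "away"})
dan.add_case({"win", "home"})
print(dan.weight("win", "home"))   # positive: "win" and "home" co-occur
```

Note that no weight is stored anywhere: each call to `weight` reads the current counts, which is one way a network can absorb new information without a retraining pass.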

The CSML has a long history of different kinds of projects. It began in the mid-1990s as the "Internet Applications Laboratory," developing search engines for academic use. As the "Digital Humanities" grew as an area of study, the lab's name was changed to the "Digital Humanities Lab," and then, recently, to the "Cognitive Science Modeling Lab," to reflect more closely what we do here. Over the years, the lab has been staffed by more than fifty students working on a variety of projects, including internet search engine design, agent-based exploration of traffic light patterns in Evansville, Indiana, classroom simulations, and, regarding DANs in particular, a range of models that include 1) object identification based on properties and context-sensitivity, 2) comparison of similarities and differences among properties and objects, 3) shape recognition of simple shapes regardless of where they might appear in an artificial visual field, 4) association across simulated sense modalities, 5) primary sequential memory of any seven-digit number, 6) network branching from one subnet to another based on the presence of a single stimulus, 7) eight-bit register control that could perform standard, machine-level operations as with standard Turing-style computational devices, and 8) rudimentary natural language processing based on a stimulus/response (i.e., anti-Chomskian) conception of language.

After a decade of exploration in toy environments, we at the CSML stepped out into a genuinely complex adaptive system in a real-world stochastic environment, namely, the National Football League, where we are attempting to predict winners and losers of games and, in time, also the point spread. The NFL was chosen because, while it is massively complex, it is relatively constrained by a regular schedule, a fixed number of teams, a set of articulated rules, and regular stop points (unlike free-flow games such as hockey, basketball and soccer) where data can be discretely retrieved. 

During the 2014-2015 season, we employed our first model, based on weighted averages determined by the percentage of points earned by a team during 2014-2015 season play only. As expected, the network learned as it went, starting in the 30% range for week two, reaching the 50% range for week three, and going on to end the season averaging around 65% correct when predicting winners. During two weeks, the model climbed into the 80% range, but it also fell into the 30% range for two weeks as well.
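One reading of that weighted-average method can be sketched as follows. The scores and the exact weighting rule (each team's share of the total points in its games so far) are illustrative assumptions, not the lab's actual formula:

```python
# Hypothetical sketch of a points-share predictor: the predicted winner
# is the team that has earned the larger fraction of the points scored
# in its games so far this season. All scores below are made up.

season_scores = {  # team -> list of (points_for, points_against)
    "GB": [(31, 23), (27, 17)],
    "KC": [(27, 20), (24, 31)],
}

def point_share(team):
    """Fraction of the points in the team's games that the team scored."""
    pf = sum(f for f, a in season_scores[team])
    pa = sum(a for f, a in season_scores[team])
    return pf / (pf + pa)

def predict(team_a, team_b):
    return team_a if point_share(team_a) >= point_share(team_b) else team_b

print(predict("GB", "KC"))
```

A predictor like this "learns" only in the weak sense that its point shares sharpen as more games accumulate, which is consistent with the week-to-week improvement described above.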

This season, 2015-2016, we are moving into genuine network models based on our regular DAN methods described above. Pattern matching data will be based on drive profiles ranging over all the NFL games played over the last ten years. The method, however, will not permit robust predictions of the current season until each team has played at least one home and away game. In the meantime, we will roll out predictions based on last year's method. Our predictions and results will be regularly posted on this blog throughout the season.