Building Smarter Machines by Getting Smarter About the Brain


Author: Brian Colder

Artificial intelligence (AI) is getting better all the time. You can see it all around you, from Alexa and Siri keeping your appointments and shopping lists, to news articles about self-driving cars, to a program called AlphaZero (Silver et al., 2018) that will probably never lose to any human in chess or Go. But even the smartest AI still couldn’t navigate its way through the social interactions on a playground full of second graders, even though second graders do it all the time. That’s because today’s most successful AI requires massive amounts of training data from the particular area where it will work in order to know how to react in any situation, and no one has tried to collect second-grade playground data. So the incredible power behind the brain of a self-driving car is useless outside of those cars – Teslas can’t play chess, Alexa can’t drive, and AlphaZero has no idea how to hold on to a shopping list.

Yet a single human can eventually figure out how to drive, find music to play, and play chess at least reasonably well, although the process takes many years. People are constantly learning and improving their performance on a wide variety of tasks, and this accomplishment points to a core difference between human brains and AI research and development: people are general-purpose learners and performers, whereas AI is most effective when it’s targeted to specific tasks.

The reason for this difference is clear when you consider how and why people and AI are designed. Humans deal with a wide variety of ecological constraints, such as the need to be successful food finders, shelter providers, mate finders, and caregivers to children. The process of evolution worked under these pressures to give people general learning abilities, so we can learn a concept in one situation that applies to many different kinds of situations. For example, a child might learn that by working hard at basketball and putting in a lot of time and practice over the summer, she can come back in the winter and get a spot on the basketball team. When that child grows up, she can apply that lesson about the value of preparation and hard work over and over to get a spot in the college she prefers, build a comfortable home, and maybe even keep a marriage strong.

Current successful AI, on the other hand, is designed and built to work very well on just one task or a very limited set of tasks. This isn’t necessarily a bad thing – engineers build technology to solve problems, and individual AI applications can solve a lot of problems with this focused approach. The performance of self-driving cars has improved dramatically in recent years, but there are still situations where AI that could adapt to unexpected circumstances would have a big impact. Some clear examples are control systems for mobile robots that operate without any human guidance, such as robots exploring a dangerous area without reliable communications back to their human users.

If our brains can do it, so can we

Building AI that can handle novel situational input turns out to be a hard challenge, one that currently receives less attention and research funding than the approaches producing highly successful but more narrowly focused applications. One avenue of research with the potential to speed up progress toward more adaptable and generalizable AI turns to an old, reliable source of inspiration: the human brain itself. Brains have long served as models for building artificial intelligence. Networks of interconnected computing elements designed to work like neurons (neural networks) are at the core of today’s successful deep learning strategies. But neural networks are notoriously bad at producing reasonable results when they’re tested with unexpected input. Brains also compute with neurons, yet unlike artificial neural networks they handle unexpected changes in their environment well. So we have to look harder at how the human nervous system processes information to understand how to build adaptable AI.
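A toy illustration of that brittleness (this is not a real neural network, and every number below is invented for the example): a simple model fit only on a narrow range of training data can be confidently wrong the moment its input falls outside what it has seen.

```python
# Fit a straight line to points sampled from y = x^2 on a narrow range,
# then query it far outside that range. The fit is reasonable near the
# training data and badly wrong away from it.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [x * x for x in xs]  # the true relationship is quadratic

# Ordinary least-squares line fit (our stand-in for a narrowly trained model)
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

err_inside = abs((slope * 1.0 + intercept) - 1.0 ** 2)     # near the training data
err_outside = abs((slope * 10.0 + intercept) - 10.0 ** 2)  # far outside it
print(err_inside, err_outside)  # the outside error is far larger
```

The same asymmetry, scaled up by millions of parameters, is roughly what happens when a narrowly trained network meets input unlike anything in its training set.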

Predictions are hard, especially about the future (Yogi Berra)

The Emerging Technologies research team at MITRE is reading about cognitive information processing theory (e.g., Pezzulo et al., 2018) and reviewing published neuroscience and cognitive science experiments (e.g., Klein-Flügge & Bestmann, 2012) to piece together a high-level picture of how brains use input from the environment to guide actions. Starting with other scientists’ notions that our brains are constantly predicting the future, and that neural representations of action are always tied to what will happen in the world because of that action, we’ve developed a framework (Colder, 2017) describing how cognitive information processing is really about creating possible futures, selecting one of those futures, and then attempting to make that future happen.
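As a rough sketch of that create-select-enact loop (everything below is an illustrative toy, not the framework’s actual machinery), picture an agent that imagines the outcome of each candidate action, picks the imagined future it values most, and then acts to bring it about:

```python
# Toy create/select/act loop. All names and dynamics here are invented
# for illustration; they are not part of the Colder (2017) framework.

def create_futures(state, actions, model):
    """Imagine the outcome of each candidate action."""
    return {a: model(state, a) for a in actions}

def select_future(futures, value):
    """Pick the action whose imagined future is most valuable."""
    return max(futures, key=lambda a: value(futures[a]))

def act(state, action, world):
    """Attempt to make the chosen future happen."""
    return world(state, action)

# Toy world: the state is a number, and the agent wants to reach 10.
model = lambda s, a: s + a      # internal forward model (here: perfect)
world = lambda s, a: s + a      # actual environment dynamics
value = lambda s: -abs(10 - s)  # prefer imagined futures close to the goal

state = 0
for _ in range(5):
    futures = create_futures(state, [-1, 0, 2], model)
    best = select_future(futures, value)
    state = act(state, best, world)

print(state)  # the loop drives the state to the goal of 10
```

In a real agent the forward model would be imperfect and learned, which is exactly where the different types of learning discussed below come in.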

The framework is just a guess right now, but it’s a guess that agrees with many previous theories (Grush, 2004; Clark, 2013), and with the results of many neuroscience and cognitive science experiments (Giovannucci et al., 2017). We know human brains use different types of learning at different times, and the framework includes ideas for how those different learning types help future creation, future selection, and attempting to make that future happen.

We think the way that different types of learning interact with representations of futures is key to how people can apply lessons they learn on one task to their work on other tasks, and how they can respond reasonably well in unexpected situations. As an example, we know that every moment we’re awake, our brains are constantly learning about how different parts of the world interact. After years of this constant learning, people build up a vast knowledge base of so-called common sense they can draw upon in every situation.

Making it work

Our team’s goal now is to use this new framework as design inspiration for AI that can also easily transfer learning among tasks and adapt to changes in the environment. We’re working towards giving our AI agents common sense to guide them when they face unexpected challenges. Right now we’re investigating how the framework can use reward information to create goals from the environment, and use its continual unsupervised learning to prioritize those goals. The ability to create and prioritize their own goals should allow agents to operate in multiple domains, such as traversing difficult terrain and also communicating information about novel stimuli.
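One way to picture that goal creation and prioritization (a hypothetical sketch; the class and names below are our invention for this article, not MITRE’s actual agent code): track the reward associated with each kind of outcome the agent encounters, keep the outcomes whose average reward is positive as goals, and rank them by that average.

```python
# Hypothetical sketch of reward-driven goal creation and prioritization.
from collections import Counter, defaultdict

class GoalLearner:
    def __init__(self):
        self.reward = defaultdict(float)  # total reward seen per outcome
        self.seen = Counter()             # how often each outcome occurred

    def observe(self, outcome, reward):
        """Record one experienced outcome and the reward it produced."""
        self.reward[outcome] += reward
        self.seen[outcome] += 1

    def goals(self):
        """Outcomes with positive average reward, highest priority first."""
        avg = lambda o: self.reward[o] / self.seen[o]
        return sorted((o for o in self.reward if avg(o) > 0), key=avg, reverse=True)

learner = GoalLearner()
for outcome, r in [("reach_waypoint", 1.0), ("reach_waypoint", 1.0),
                   ("report_stimulus", 2.0), ("collide", -1.0)]:
    learner.observe(outcome, r)

print(learner.goals())  # ['report_stimulus', 'reach_waypoint']
```

A real agent would build these outcome categories from continual unsupervised learning over raw sensory input rather than receiving them as labeled strings, but the ranking idea is the same.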

However, allowing an agent to learn goals and priorities from scratch will take much longer than building goals into the agent and training it with hand-picked data. We’ll have to figure out how to extend and optimize that learning process. The ultimate goal is to design a machine that can adapt to unexpected input in multiple domains. Maybe someday, the same AI application that drives the car can also stop at the store to pick up some groceries, then order a pizza if the grocery store is out of the pizza dough that was planned for dinner. If so, we think it will be brain-like learning that powers that adaptable AI.

Brian Colder is a neuroscientist and software engineer who has modeled brain structure and function, developed an automated stock portfolio risk optimizer, recorded evidence of unconscious processing, coached his daughters’ basketball, softball and soccer teams, and really likes snorkeling. He is part of a group that applies neurotechnology and biomechanics to modeling the brain and body, improving sponsor team training programs, and analyzing injuries with the goal of preventing them.


Clark, A. (2013). Whatever next? predictive brains, situated agents, and the future of cognitive science. Behav. Brain Sci. 36, 181–204. doi: 10.1017/S0140525X12000477

Colder, B. (2017). Expanding a Standard Theory of Action Selection to Produce a More Complete Model of Cognition. In AAAI Fall Symposium Series.

Giovannucci, A., Badura, A., Deverett, B., Najafi, F., Pereira, T. D., Gao, Z., et al. (2017). Cerebellar granule cells acquire a widespread predictive feedback signal during motor learning. Nat. Neurosci. 20, 727–734. doi: 10.1038/nn.4531

Grush, R. (2004). The emulation theory of representation: motor control, imagery and perception. Behav. Brain Sci. 27, 377–442. doi: 10.1017/s0140525x04000093

Klein-Flügge, M. C., and Bestmann, S. (2012). Time-dependent changes in human corticospinal excitability reveal value-based competition for action during decision processing. J. Neurosci. 32, 8373–8382. doi: 10.1523/jneurosci.0270-12.2012

Pezzulo, G., Rigoli, F., and Friston, K. J. (2018). Hierarchical active inference: a theory of motivated control. Trends Cogn. Sci. 22, 294–306. doi: 10.1016/j.tics.2018.01.009

Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., et al. (2018). A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 362, 1140–1144. doi: 10.1126/science.aar6404

© 2019 The MITRE Corporation. All rights reserved. Approved for public release.  Distribution unlimited. Case number 19-0049



