Upgrading Machine Learning. Install Brain (Y/N)?
It’s both less scary and more thrilling than you might think—and we’ve been living with nascent versions of machine learning for some time in the form of cognitive assistance tools. Spellcheck, for example, and the suggestions for replies that Gmail now displays are trained, learning systems. But let’s skip the preamble and go straight to the good stuff that our human Hal has waiting for us.—Editor
Author: Hal Greenwald
Machine learning describes software development techniques that allow computer software to learn, so to speak, from examples and observations. Most software does not learn; software engineers specify exactly what the software should do, and it always responds the same way to the same input. Machine learning software adjusts its own behavior based on the input it receives so that it makes fewer mistakes and adapts to novel conditions. Software engineers have applied machine learning techniques to a wide range of domains requiring adaptive behavior, including fraud detection, spam filtering, self-driving cars, and healthcare recommendations. MITRE’s engineers and scientists use machine learning to enable our government sponsors to enhance national security, detect criminal activity such as fraud, protect against threats to our national infrastructure, and develop more autonomous technologies.
How does machine learning work?
There are multiple approaches to machine learning, but the basic idea is that the software makes a decision or prediction based on available input and then figures out how well it did so it can incorporate the outcome into future decisions or predictions. How does it know how well it has done? Sometimes there is an explicit answer or reward signal that reveals whether the decision or prediction was correct; in other cases, the algorithm attempts to optimize the result of a mathematical function describing the quality of the result, cluster the data into related groups, or recognize statistical patterns.
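To make that loop concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the toy task (learning the coefficient relating an input to an output), the four training examples, and the learning rate. Real systems use far richer models, but the predict-evaluate-adjust cycle is the same.

```python
# A minimal sketch of the predict-evaluate-adjust loop described above.
# The task, data, and learning rate are illustrative assumptions: the model
# learns a single coefficient w so that w * x approximates a hypothetical
# true relationship y = 3 * x.

examples = [(1.0, 3.0), (2.0, 6.0), (4.0, 12.0), (5.0, 15.0)]  # (input, correct answer)
w = 0.0              # the model's single adjustable parameter
learning_rate = 0.02

for epoch in range(50):
    for x, y in examples:
        prediction = w * x              # make a prediction from available input
        error = prediction - y          # explicit answer signal: how wrong were we?
        w -= learning_rate * error * x  # adjust so future predictions improve

print(f"learned w = {w:.3f} (the hidden rule was 3.0)")
```

The final line should print a value close to 3.0: the algorithm recovers the hidden rule purely from examples and its own errors, without anyone programming the rule in directly.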
Often, machine learning algorithms incorporate a “neural network” data structure that performs computations using values, or weights, associated with the connections within the network (Mitchell, 1997). An algorithm adjusts the weights according to its performance, strengthening connections that lead to correct results and weakening connections that yield errors, and these changes influence the algorithm’s future behavior. More complex networks can perform more complicated tasks, but increased complexity requires greater processing power and memory. Fortunately, computing resources have become more powerful, more available, and less expensive; along with the greater availability of large, labeled training data sets, this has enabled what scientists call deep learning techniques, which leverage larger learning networks to perform more complex text, speech, image, and video processing tasks than ever before (LeCun, Bengio, & Hinton, 2015). Consequently, machine learning algorithms have become significantly more capable and commonplace.
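The sketch below shows this weight-adjustment idea on a tiny example: a two-layer network learning the XOR function via backpropagation. The architecture, random seed, and learning rate are arbitrary choices for illustration, not a recipe for a production system.

```python
# A toy neural network in the spirit of the description above: connection
# weights are nudged up or down based on the errors the network makes.
# The XOR task, layer sizes, and hyperparameters are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

W1 = rng.normal(size=(2, 4))   # connection weights, input -> hidden
W2 = rng.normal(size=(4, 1))   # connection weights, hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    hidden = sigmoid(X @ W1)            # forward pass through the network
    output = sigmoid(hidden @ W2)
    error = output - y                  # how far off were the predictions?

    # Backpropagation: weaken the connections that contributed to the error.
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    W1 -= 0.5 * X.T @ grad_hid

# After training, the outputs should approximate [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2).ravel())
```

Nothing in the network “knows” what XOR is; the pattern of weights that computes it emerges entirely from repeated error-driven adjustment.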
Do machine learning algorithms learn like humans?
Machine learning and the brain sciences (psychology, cognitive science, and neuroscience) have certainly influenced each other (Hassabis, Kumaran, Summerfield, & Botvinick, 2017). Basic ideas about neural function influenced how computer scientists designed neural networks, although their mechanisms are very simple compared to our current understanding of biological neural networks. Also, machine learning techniques have enabled more sophisticated analyses of neural and physiological signals. However, humans and computers learn very differently. While computers are better at identifying statistical regularities in complex data sets, humans learn from fewer examples, transfer knowledge between tasks more easily, and adapt more readily to changing contexts and environments. If machine learning algorithms were more humanlike in these ways, they would be even more capable (Greenwald & Oertel, 2017). MITRE is currently developing a cognitive-neuroscience-inspired framework for novel machine learning approaches.
Transfer learning and one-shot learning
Burning your hand once on a stove is typically enough for you to avoid repeating the behavior and to avoid touching other hot objects like ovens or grills. However, machine learning algorithms typically require large numbers of task-specific training examples, and what they learn often does not transfer to related situations. Humans’ lifelong knowledge acquisition helps us identify features that define and differentiate categories. Building a common body of knowledge is not enough; it is also essential to recognize similarities between tasks and to identify which information is relevant. Multitask learning models learn multiple tasks simultaneously and outperform equivalent models that learn each task in isolation, but transferring knowledge to novel tasks remains a challenge.
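A hedged sketch of the transfer idea follows: features learned on a data-rich source task are frozen and reused on a related task that has only a few labeled examples. All of the names, shapes, and data here are hypothetical; in practice the shared weights would come from real training on the source task rather than from a random number generator standing in for it.

```python
# An illustrative sketch of transfer learning: reuse a frozen feature
# extractor from one task and train only a small head on a new task.
import numpy as np

rng = np.random.default_rng(1)

# Pretend W_shared was learned on a data-rich source task (e.g., "stoves").
# Here a random matrix stands in for those previously learned weights.
W_shared = rng.normal(size=(16, 8))

def features(x):
    return np.tanh(x @ W_shared)   # transferable internal representation

# A related target task (e.g., "grills") with only five labeled examples.
X_new = rng.normal(size=(5, 16))
y_new = np.array([1.0, 0.0, 1.0, 1.0, 0.0])

F = features(X_new)                # frozen features, computed once
w_head = np.zeros(8)               # small task-specific layer to train
for _ in range(200):
    error = F @ w_head - y_new     # squared-error signal on the new task
    w_head -= 0.1 * F.T @ error / len(y_new)
```

Because only the small head is trained, a handful of examples can suffice, which is the hope behind transfer learning; the hard open problem the paragraph above describes is knowing when the frozen features actually fit the new task.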
Continuous learning
Machine learning algorithms often have separate training and testing phases in which they learn a task and then evaluate their performance, whereas human learning appears to be a continuous, asynchronous process that continues throughout an individual’s lifetime. Stopping learning has two common justifications: it avoids errors that result from overfitting the training data, which occurs when an algorithm learns idiosyncrasies specific to the training examples, and it minimizes the risk of learning incorrect information (for example, Twitter users taught Microsoft’s Tay chatbot to make politically insensitive statements; Price, 2016). However, stopping learning also prevents machine learning algorithms from continually refining what they have learned and from adapting when circumstances change. DARPA’s Lifelong Learning Machines program, which will begin later this year, will fund research on continuous learning and transfer learning to create new algorithms that adapt to changing environments and apply learned information to new tasks (DARPA, 2017).
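The contrast is easy to demonstrate. In the sketch below, which uses an invented data stream whose underlying rule changes partway through, a model that stops learning early stays stuck on the old rule, while a continuous learner tracks the change. The drift point, learning rate, and cutoff are all arbitrary illustrative choices.

```python
# An illustrative contrast between a frozen learner and a continuous one.
# The drifting data stream and all hyperparameters are invented.
import numpy as np

rng = np.random.default_rng(2)

def stream(t):
    """Hypothetical environment whose true rule changes midway."""
    x = rng.normal()
    true_w = 2.0 if t < 500 else -1.0   # circumstances change at t = 500
    return x, true_w * x

w_frozen, w_online = 0.0, 0.0
for t in range(1000):
    x, y = stream(t)
    if t < 300:                                  # frozen model stops learning early
        w_frozen -= 0.05 * (w_frozen * x - y) * x
    w_online -= 0.05 * (w_online * x - y) * x    # continuous learner never stops

print(f"frozen: w = {w_frozen:+.2f}")   # stuck near the old rule (+2.0)
print(f"online: w = {w_online:+.2f}")   # adapted to the new rule (-1.0)
```

The trade-off the paragraph above describes is visible even here: the continuous learner adapts, but it will also absorb whatever the stream feeds it, correct or not.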
Explanation and interpretability
Two related criticisms of many machine learning algorithms are that their internal representations (e.g., the pattern of weights in a neural network) are not easy for humans to interpret and that they offer no explanations or justifications for the predictions and decisions they produce. Explanations and rationales are the basis for how humans evaluate judgments and decisions, and they are necessary for people to develop confidence and trust in machine learning algorithms.
Machine learning algorithms’ internal representations need not be straightforward for humans to interpret, but associated algorithms responsible for logical reasoning and inference should be able to use learned information and causal relationships without also requiring access to the original training data. Also, it would be helpful if these internal representations offered insights into how learning algorithms reach their conclusions. DARPA’s Explainable Artificial Intelligence program is investigating such topics (DARPA, 2016).
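As a small illustration of what an interpretable representation can buy, here is a sketch using a plain linear model, where each learned weight doubles as an explanation of how much each input contributed to a decision. The fraud-flavored feature names, weights, and transaction are all invented; the point is only that some representations support explanation directly, while the weight patterns of deep networks generally do not.

```python
# A minimal sketch of one interpretability idea: in a linear model, the
# learned weights themselves provide a readable per-feature explanation.
# Feature names, weights, and the example transaction are hypothetical.
import numpy as np

feature_names = ["amount", "hour_of_day", "num_prior_flags"]
w = np.array([0.8, -0.1, 1.5])   # weights from some prior training run
x = np.array([2.0, 0.5, 1.0])    # one transaction to score

contributions = w * x             # per-feature share of the decision
score = contributions.sum()

# Report the features that drove the score, largest influence first.
for name, c in sorted(zip(feature_names, contributions), key=lambda p: -abs(p[1])):
    print(f"{name:>16}: {c:+.2f}")
print(f"{'total score':>16}: {score:+.2f}")
```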
What’s next?
Machine learning has advanced significantly in recent years, especially with the complexity that deep learning algorithms offer. We expect to see more influence from the brain sciences; as scientists learn more about biological mechanisms for learning and other aspects of cognition, artificial intelligence techniques, including machine learning, will incorporate key principles. We also expect machine learning algorithms to grow in sophistication as they handle ever-larger volumes of data and enable smarter vehicles and homes.
Sources
DARPA. (2016). Explainable Artificial Intelligence (XAI) Broad Agency Announcement. Retrieved from https://www.darpa.mil/attachments/DARPA-BAA-16-53.pdf
DARPA. (2017). Lifelong Learning Machines (L2M) Broad Agency Announcement. Retrieved from https://www.fbo.gov/spg/ODA/DARPA/CMO/HR001117S0016/listing.html
Greenwald, H. S., & Oertel, C. K. (2017). Future Directions in Machine Learning. Frontiers in Robotics and AI, 3(79), 1-7.
Hassabis, D., Kumaran, D., Summerfield, C., & Botvinick, M. (2017). Neuroscience-Inspired Artificial Intelligence. Neuron, 95, 245-258.
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521, 436-444.
Mitchell, T. M. (1997). Machine Learning. New York: WCB McGraw-Hill.
Price, R. (2016, March 24). Microsoft is deleting its AI chatbot’s incredibly racist tweets. Retrieved from http://www.businessinsider.com/microsoft-deletes-racist-genocidal-tweets-from-ai-chatbot-tay-2016-3
Hal Greenwald is a lead neuroscientist at The MITRE Corporation who works with government agencies on topics in perception and cognition and at the intersection of neuroscience and cognitive science with artificial intelligence. He holds a Ph.D. in Brain & Cognitive Sciences from the University of Rochester.
© 2017 The MITRE Corporation. All rights reserved. Approved for public release. Distribution unlimited. Case number 17-3665.
The MITRE Corporation is a not-for-profit organization that operates research and development centers sponsored by the federal government.