Where the Rubber Meets the Road: Engineering and Testing of Autonomy
Author: Amanda Andrei
Congratulations! You’ve built your self-driving car! Now what? Take it out for a spin on that cross-country trip, watching movies and the landscape as you go from sea to shining sea?
Hardly. Assuming you’ve been testing your car throughout development, it’s now time to demonstrate that your car passes the tests for approval to operate on the highway.
Engineering and testing of autonomy refers to the design, development, and assessment of new autonomy technologies and autonomy-enabled systems, from self-driving cars to unmanned aerial vehicles to robot vacuum cleaners.
MITRE’s role in this area is primarily prototyping and risk reduction, applying advances from multiple sectors and integrating a variety of cutting-edge autonomy resources to meet specific sponsor needs. When a system is ready for system-level or approval-to-operate testing, MITRE sets out to test it, assessing and evaluating how it performs in the environment it needs to work in.
“There’s a risk involved,” explains Jessica Rajkowski, leader of the Autonomous and Intelligent Systems Group at MITRE. “We could hold these systems to too high a standard, saying we need to understand exhaustively how they behave in every situation—so then we’d never get these fielded—or, if we hold them to too low a standard, we’re not going to actually stress the autonomous system and decision-making process well enough to uncover potential pitfalls or behaviors we don’t expect.”
The solution, Rajkowski says, is to find the happy medium between those extremes: “Part of the work we do here is to better understand where those right places are and to focus on them.”
Opportunities and Challenges
This is a very exciting time in autonomy and artificial intelligence. Rapidly improving capabilities are demonstrated regularly, increasingly powerful platforms and tools remain widely available, and investment is considerable. There are efforts to track all of these advances, but the advances in autonomy and AI are real, significant, and accelerating. The hype, overgeneralization of advances, and groundless fears related to autonomy and AI are equally significant. Discerning the difference will be required of all leaders: distinguishing between low-hanging fruit, credible stretch goals, and “unobtanium.”
“It’s great how ubiquitous autonomous systems are and what technology is available to employ, plus there’s a lot more people in the field contributing bright ideas,” shares Rajkowski. “The advancements are just astronomical—if you took someone from fifteen years ago and said, ‘Here’s what you have to stand up your own system to do some learning,’ they’d just be blown away.”
And of course, whenever there’s great enthusiasm, rapid advances, and significant investment in a technology, buyer beware.
“In the AI community right now, there is a growing recognition that there are problems with the technical maturity of many of the results of autonomy and AI advances,” says Chuck Howell, Chief Scientist for Dependable AI, “related to an inability to reproduce research results and a lack of underlying scientific models for much of what is developed. We can build systems we do not understand; this has been described as being comparable to alchemy in the research community. This makes thoughtful, effective, and efficient testing of consequential autonomous and AI systems even more important.”
While corporations might be able to invest their money in projects and research they deem valuable, government agencies need to justify why a particular technology or system should be funded. Furthermore, quantifying that justification can be difficult. “We look at autonomy and AI as something that’s good and will help us do things better, faster, and cheaper,” notes Rajkowski, “but actually putting a number on that to be able to say whether the investment is worth the return – that’s really hard to do.”
The Future
With a focus on solving problems for a safer world, MITRE is supporting government agencies as they assess the AI/autonomy landscape. We support advances in testing and integrating systems to serve specific mission needs, and we are helping to distinguish between real transformational advances and hype.
“I think we’re making revolutionary changes, and it’s necessary for staying ahead and being competitive,” says Rajkowski. “A lot of our challenges need well-thought-out applications of AI, and we’re really making leaps and bounds towards better addressing these challenges. But we need to not get ahead of ourselves, trying to solve all our problems and then abandoning the technology abruptly.”
All the more need for thorough engineering and testing of autonomy and AI before taking that driverless road trip.
Amanda Andrei is a computational social scientist in the Department of Cognitive Sciences and Artificial Intelligence. She specializes in social media analysis, designing innovative spaces, and writing articles on cool subjects.
© 2019 The MITRE Corporation. All rights reserved. Approved for public release. Distribution unlimited. Case 19-0915
MITRE’s mission-driven team is dedicated to solving problems for a safer world. Learn more about MITRE.
See also:
Symphony™: Recipes for Success
Interview with Dr. Michael Balazs on Generation AI Nexus
Consequences and Tradeoffs—Dependability in Artificial Intelligence and Autonomy
The World as It Will Be: Workforce Development Within and Beyond MITRE
Catch You Later: Recap of the Generation AI Cyber Challenge
Phish, Flags, and Lesson Plans: Upcoming Hackathon for Generation AI Nexus
Technical Challenges in Data Science
Defining, Applying, and Coordinating Data Science at MITRE
Rising to the Challenge: Countering Unauthorized Unmanned Aircraft Systems
Mistakes and Transcendent Paradoxes: Dr. Peter Senge Talks on Cultivating Learning Organizations
MITRE Hackathon Examines Impact of Emerging Mobile Technologies