Is this a wolf? Understanding bias in machine learning

Photo of a wolf in the snow, courtesy of Pixabay

Author: Cameron Boozarjomehri

When you look at the image above, what you see is just a wolf in the snow. How do you know it’s a wolf in the snow? Well, I could go off on a long tangent about how your eyes receive light that your brain interprets into regions and shapes, the sum of which your mind has strongly associated with how you expect a wolf to look.

Simply put, you know it’s a wolf because you know what a wolf looks like. This ability to process an image—or the real thing—doesn’t just apply to wolves, of course. The process of neural interactions and visual interpretation happens every time your brain wants to identify literally anything you look at. And the only reason your brain knows how to make those connections is because you have consciously, or subconsciously, trained it to know that pointy ears, a long snout, shaggy fur, four legs, and a tail typically means “wolf”.

Photo of a Husky, courtesy of Pixabay

Now look at the dog above. You’ll again notice pointy ears, a long snout, shaggy fur, four legs: all the telltale signs of a wolf. But it isn’t a wolf. It’s a dog (specifically a Husky). As a human, how do you draw this distinction? How have you learned the difference between a wolf and a Husky? Is it the silhouette? The color of the fur? The collar? One could argue it is all of these things being just different enough that your brain wouldn’t think wolf, but dog.

When it comes to machine learning, it is these subtle differences that algorithms try to tease out to determine which information in an image is significant. But as machine learning matures, we are coming to appreciate how little we understand about how these tools reach their conclusions.

MITRE in the mix!

This is where MITRE comes in. Through MITRE research, we are taking steps to assess new ways for algorithms to give us feedback. Machine learning is a valuable tool, but its black box nature can make it difficult to rely on in the real world.

Photo of a corgi, courtesy of Pixabay

For instance, imagine I took our algorithm and showed it the new image of a corgi above. Some kinds of machine learning tools break an image down into shapes, colors, and patterns to discern information about those elements. Each element is used to calculate weights or scores that are combined to produce a result, in this case wolf or dog. If the algorithm puts more weight on the color of the fur, it might determine that this is a wolf. If it focuses on the silhouette of the head, or the contrast of the collar, it might determine it’s a dog. The problem is that once I have trained the algorithm to the point where I am satisfied with its performance, I am left taking its answers on faith. In the most limited case, it might just tell me dog or wolf. If we made it more verbose, it might tell us how it weighted different elements in its decision.
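The weighting described above can be sketched as a toy linear classifier. Everything here is invented for illustration: a real model learns thousands of weights over features no human named, but the arithmetic is the same idea.

```python
# Toy sketch of the scoring described above: each named feature gets a weight,
# and the sign of the weighted sum decides "wolf" vs. "dog".
# All feature names and numbers are invented for illustration.

def classify(features, weights, bias=0.0):
    """Return (label, score): "wolf" if the weighted sum is positive, else "dog"."""
    score = bias + sum(weights[name] * value for name, value in features.items())
    return ("wolf" if score > 0 else "dog"), score

weights = {
    "gray_fur": 1.5,            # pushes toward "wolf"
    "long_snout": 0.8,          # pushes toward "wolf"
    "collar_contrast": -2.0,    # pushes toward "dog"
    "rounded_silhouette": -1.2, # pushes toward "dog"
}

# A corgi-like input: warm fur, a visible collar, a rounded head.
corgi = {"gray_fur": 0.2, "long_snout": 0.4,
         "collar_contrast": 1.0, "rounded_silhouette": 1.0}

label, score = classify(corgi, weights)

# The "more verbose" version: report each element's contribution to the score.
contributions = {name: weights[name] * corgi[name] for name in corgi}
```

The `contributions` dictionary is the verbose mode the paragraph mentions: here the collar’s large negative contribution is what tips the result to “dog.”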

Now imagine we were to redesign our algorithm. Wouldn’t it be better if we could get it to highlight exactly what in the image was impacting the outcome? We might find that our algorithm says it’s a dog, not because of the corgi itself, but because of the snow and lack of trees in the background of the image. This example of a machine learning classifier focusing on unintended features of an image comes from a research effort to better understand how an opaque, black box model makes its classifications. The resulting tool, LIME, is one product of an active research community focused on “explainable AI.” DARPA is making significant investments in this area, as are many major AI-focused companies.
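LIME itself works by fitting a simple surrogate model around many randomly perturbed copies of the input; a much-simplified cousin of that idea is occlusion: turn each feature off, re-query the black box, and see how far the score moves. The model and feature names below are invented stand-ins, not LIME’s actual API.

```python
def black_box(features):
    """Stand-in for an opaque model we can query but not inspect.
    Its (invented) internal weights lean heavily on background cues."""
    w = {"snow_background": 2.5, "trees": -1.0, "gray_fur": 0.5, "collar": -1.5}
    return sum(w.get(name, 0.0) * value for name, value in features.items())

def occlusion_importance(model, features):
    """Zero out each feature in turn and record how much the score moves.
    A large change means the model leaned on that feature."""
    base = model(features)
    return {name: base - model({**features, name: 0.0})
            for name in features}

# A corgi photographed in the snow, with no trees in frame.
corgi_photo = {"snow_background": 1.0, "trees": 0.0, "gray_fur": 0.3, "collar": 1.0}
scores = occlusion_importance(black_box, corgi_photo)
top_feature = max(scores, key=lambda name: abs(scores[name]))
```

In this contrived setup the single most influential feature turns out to be the snowy background, exactly the kind of unintended cue the paragraph above describes.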

These are the kinds of questions MITRE staff are exploring with the intent of creating human-usable tools that emphasize which information is most valuable to any classification. The real importance of this work comes back to bias. In the case above, the stakes are fairly low, but machine learning has significant applications beyond categorizing wildlife.

What happens in the real world?

Photo courtesy of Pixabay

Training machines to learn is conceptually similar to the way humans learn and make decisions. Humans learn to make reasonable decisions based on their exposure to experiences in their environment throughout their lifetime. If a person learns from poor data or poor experiences, there is the risk that they will make undesirable decisions, or decisions that misinterpret the information presented to them. For instance, they may see a wolf in the wild and try to pet it!

Similarly, machines learn based on the data to which they are exposed. Like humans, machines can produce undesirable outcomes, potentially even life-threatening or illegal ones, if they are trained on data that is not representative across a multitude of scenarios or populations, that unintentionally favors some outcomes over others, or that has been intentionally manipulated for malicious purposes by an insider or external entity. Here is a very real-world scenario that clarifies that what you train on really matters.

Recently, California’s state court ruled that the state’s cash bail system was an unconstitutional denial of due process. The court came to this decision because it found bias in how those setting bail marginalized the already most disenfranchised members of society, specifically poor people and people of color [1]. To remedy this, the legislature introduced SB10, a bill mandating a new process for “pretrial risk assessment.” As a result, a pretrial risk assessment will now be conducted by Pretrial Assessment Services using a validated risk assessment tool. The goal of this tool is to reduce bias by providing information about the risk that a person will fail to appear in court [2].

The use of a “risk assessment tool” is understood in the law to mean an algorithm that will take in “arrest and conviction history and other data” [1], the goal being that an algorithm lends a sense of objectivity to the risk assessment process. Unfortunately, much like the algorithm example we have been exploring above, this means there is an opportunity for bias to play a big role in “determining risk” for defendants.

Consider the fundamental nature of this bill. It was created to replace a process considered unconstitutional because it disproportionately affected the poor and people of color. But where will we get the training data for the new SB10-mandated algorithm? Likely by observing past cases to inform the likelihood that someone is a flight risk—past cases that are characterized by bias, thus leading to the need for this bill.

Furthermore, assume that we continue to tweak our algorithm over time. Ideally, these adjustments would be corrections based on observed outcomes that differ from the algorithm’s own predictions. But if the observed outcomes are themselves shaped by the algorithm’s decisions, adjustment creates a feedback loop of sorts, strengthening any bias that may have existed during the algorithm’s creation and training.
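That feedback loop can be made concrete with a toy simulation; all the numbers here are invented. Each round, the “observed outcomes” used for retraining partly reflect the previous round’s skewed predictions, so the skew compounds instead of washing out.

```python
def feedback_loop(initial_bias, rounds, reinforcement=0.5):
    """Toy model of retraining on outcomes the algorithm itself influenced.

    `bias` is the skew in the risk scores; each round the recorded outcomes
    inherit that skew plus a reinforcement factor (e.g. high scores lead to
    detention, which is then recorded as evidence of high risk)."""
    bias = initial_bias
    history = [bias]
    for _ in range(rounds):
        observed = bias * (1 + reinforcement)  # outcomes shaped by the predictions
        bias = observed                        # retraining absorbs the skew
        history.append(bias)
    return history

history = feedback_loop(initial_bias=0.1, rounds=5)
# The skew grows geometrically: a small initial bias is multiplied every round.
```

Under these invented numbers a 0.1 initial skew grows by half with every retraining round; the point is not the specific rate but that the loop amplifies rather than corrects.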

So where do we go from here?

The point is not that we will be able to completely eliminate bias, but that we must understand it if we hope to lean on these tools. Often unintentional, bias can come from almost anywhere: limitations in the available data, the way data is presented, or the way new data affects an algorithm over time. We need tools that can help us explore these factors and identify the source of bias, so we can understand whether it needs to be accounted for. Even if we could create a perfect algorithm, it should not be the sole authority on why a decision is made, especially if that decision is made in a vacuum. We need tools that don’t just calculate outcomes, but also explain the logical steps taken to reach them.

In the SB10 case above, it would be culturally acceptable to classify someone as high risk based on their behavior during prior arrests. However, we need to guarantee that their record was the primary justification for the calculated result, and not incidental information about the defendant’s life that should have no logical bearing on the case [3].

These are the kinds of questions and considerations MITRE staff explore every day. MITRE’s subject matter experts are already working to understand the bias inherent in any collection of data so that we can recognize it before training begins. Through this and other ongoing work, MITRE hopes to continue contributing to better machine learning tools, so that we can see exactly which information from an input explains an outcome and build models we can confidently apply to the real world. This is just one of many ways MITRE is solving problems for a safer world.

Cameron Boozarjomehri is a Software Engineer and a member of MITRE’s Privacy Capability. His passion is exploring the applications and implications of emerging technologies and finding new ways to make those technologies accessible to the public.

[1] Westervelt, E. (2018, October 2). California’s bail overhaul may do more harm than good, reformers say. NPR. Accessed: https://www.npr.org/2018/10/02/651959950/californias-bail-overhaul-may-do-more-harm-than-good-reformers-say
[2] California Legislative Information. (n.d.). Senate Bill No. 10. Accessed: https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=201720180SB10
[3] Levin, S. (2018, September 7). Imprisoned by algorithms: The dark side of California ending cash bail. The Guardian. Accessed: https://www.theguardian.com/us-news/2018/sep/07/imprisoned-by-algorithms-the-dark-side-of-california-ending-cash-bail

© 2018 The MITRE Corporation. All rights reserved. Approved for public release. Distribution unlimited. Case number 18-3872

