The person at the other end of the data
Author: Jonathan Rotner
I am sure my personal data has been stolen, given the number of breaches reported in the news. But what made me think differently about privacy, and about protecting my data and my identity, was a small article that received limited attention: airlines had installed cameras in seat-backs without telling passengers why the cameras were there. It was as simple as it could get: a camera was pointing at my face, I couldn’t hide, and I didn’t know where the feed was going, what it would be used for, or how secure the connection would be.
After that imaginary confrontation, I wanted to get smarter on the subject: to find out what industry and policy-makers had established as best practices, how I could discuss the subject with my sponsors, what I could do to better protect and educate myself, and whether it was all too late anyway. So I reached out to the privacy community at my company and was directed to Julie Snyder.
Privacy can protect us in unforeseen ways
Julie Snyder starts our conversation with a story of her own, this one about the unintended consequences of an individual losing her privacy. A woman who had always identified as Ashkenazi Jewish received a DNA testing kit from one of the ancestry services and participated on a whim. Surprise! Turns out her father was actually a non-Jewish sperm donor. It’s one of many fascinating recent cases of renegotiating identity, along with stories about adopted children finding their birth families, or individuals tracing their ancestry back to someone who practiced witchcraft. But what struck me was Julie’s takeaway from the story: she wondered whether the once-anonymous man who donated sperm would still have donated had he realized that he could later be identified.
Julie’s experience has taught her that data is frequently used in ways individuals did not anticipate or permit: “We’re promised anonymity in one context, and then things change.” In the genealogy example, Julie connects the dots for me: if sperm donors can no longer donate anonymously, if visits to an infertility clinic can be revealed through your location or purchase history, if a parent knows that a child can learn they are adopted before the parent decides if and how to reveal that information, will such a loss of privacy prevent an adult from acting? Or, the question that really “freaks her out”: what other societal implications might there be that we are collectively failing to consider?
I learned from her hypotheticals that Julie tends to think about the second-order consequences of losing privacy. Julie Snyder is a Privacy Domain Capability Area Lead in MITRE’s National Cybersecurity FFRDC. Along with fellow privacy engineers in MITRE’s Cyber Solutions Technical Center, she has helped create MITRE’s Privacy Engineering Framework, a structured engineering process that helps organizations embed privacy into their systems engineering life cycles in a way that proactively addresses privacy risks. She works with our sponsors and with technical industry members, helping them integrate privacy solutions into their products from the outset rather than as an afterthought.
Successful integration means weighing the needs of the organization and the privacy risks to the individual, while adhering to established law and policy. Part of the challenge is that there is no widely accepted definition of privacy; it is specific to the applied field (if defined at all) and is influenced by the public’s expectations. Another obstacle? Even good solutions may fail to address all privacy considerations. Last, we don’t know what we don’t know. As Julie says, “Privacy is driven primarily by law and regulation. There are many privacy experts who bring that viewpoint but aren’t in the weeds of technologies and systems and don’t have the expertise to understand what those technologies are capable of. Engineers bring that understanding of technologies and capabilities but aren’t wired to think about how those capabilities may interact in ways that impact privacy.”
Along the way, a privacy engineer learns to break the habit of staying in her lane. Julie continues, “To be a privacy person, you also have to understand law, regulations, the mission of the organization, the goals of a particular system, how to collaborate, and how to get buy-in from stakeholders. Oftentimes it’s the privacy specialist who gets everyone together who should have been talking all along. Privacy is a convener.”
Building a case for privacy engineers
Julie grew up in a small town where neighbors tend to “know your business.” She appreciates how her upbringing provided a sense of community, but acknowledges it also provoked feelings of lost privacy. At first, she did not know the best way to reconcile those two sides. Then, some help came from a colleague who asked her what she liked about her work. Julie remembers answering, “I like protecting information about people.” Her colleague suggested a path: “Have you considered privacy? I think you’d like it.” Julie laughs now at the simplicity of the conversation and the power of the suggestion: “I didn’t realize it was a field of its own at the time. Back then, the privacy community was much smaller, and it was primarily focused on legal compliance.”
Julie was always driven by remembering “that there is a person at the other end of the data.” But figuring out how to apply her passion had a learning curve. Julie laughs again at her initial naivete; she remembers thinking, “Why don’t people just build privacy stuff in?” The answer, she has learned, has a lot to do with education and exposure. Julie joined MITRE in 2009 to work with others who shared her passion for doing a better job of addressing privacy risks by building privacy into systems.
It took time to find a sponsor willing to ask for help with privacy engineering, partly because the field was so new. “MITRE was ahead of the curve by about 10 years,” Julie explains. “In 2009, federal agencies were trying to stay compliant and above water, to triage all the questions they had.” And industry partners initially saw privacy as a risk to a company’s bottom line: most companies want to get their tech out the door first, to have the best chance of capturing the market. Privacy was an additional hurdle that slowed teams down, if it was considered at all. As a result, Julie observed an “air gap between law and policy, and what technology and systems will do.”
MITRE privacy engineers set to work building their case. Julie learned a lot from early failures. “Unsuccess stories are really important. I was working on a program that applied an agile approach, developing and writing user stories in order to draw out important privacy implications. We added contextual information beyond what is typically found in a user story, with the intent to educate systems owners and engineers regarding why the privacy components were necessary, yet it still went nowhere. The engineers said ‘OK, we get it now. We don’t need any more help.’ And then they still didn’t build it in.” Luckily, Julie is not easily dissuaded. “I’m a partially recovering perfectionist. I don’t like to fail, but I’ve come to appreciate those failures as learning experiences.”
In analyzing what went wrong, a colleague helped her recognize that a privacy engineering fix was being applied in an environment that hadn’t prioritized privacy. Just because the group built good privacy documents did not mean anyone would use them. Julie and her team needed to adapt their approach to help an organization appreciate why addressing privacy risks results in a more valued product, and how to actually do so.
First, they helped both colleagues and sponsors expand their understanding of privacy’s scope. Privacy is not just security or confidentiality, although it overlaps with those elements; privacy is not the responsibility of only one person or department. “Privacy is complex, nuanced, and often hard.”
Even though privacy is complex, nuanced, and often hard, engineers can translate it into specific system requirements that are actionable and verifiable. MITRE’s privacy engineers folded those lessons into MITRE’s Privacy Engineering Framework and developed complementary techniques adapted from other engineering domains, such as safety. These careful approaches result in more privacy-sensitive products, legally compliant outcomes, and offerings more in line with public demands. “MITRE’s privacy engineering framework makes clear that privacy isn’t a bolt-on thing, that privacy needs can be articulated in engineering terms, and you can design a system that is supportive of an organization’s objectives while managing privacy risk.”
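To make that idea concrete, here is a minimal, hypothetical sketch, written in Python and not taken from MITRE’s Privacy Engineering Framework, of how a privacy need such as “retain precise location data for no more than 30 days, and minimize what leaves the system” might be expressed as requirements that code can enforce and tests can verify. The record fields, the 30-day limit, and the rounding precision are illustrative assumptions, not established practice.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical privacy requirement, stated in engineering terms:
# keep precise location records no longer than 30 days, and strip
# direct identifiers before data is shared for analytics.
MAX_LOCATION_RETENTION = timedelta(days=30)  # assumed value for illustration

@dataclass
class LocationRecord:
    user_id: str
    latitude: float
    longitude: float
    collected_at: datetime  # timezone-aware timestamp

def is_retention_compliant(record: LocationRecord, now: datetime) -> bool:
    """Verifiable check: the record must be younger than the retention limit."""
    return now - record.collected_at <= MAX_LOCATION_RETENTION

def minimize_for_analytics(record: LocationRecord) -> dict:
    """Data minimization: coarsen coordinates and drop the direct identifier
    before the record crosses the system boundary."""
    return {
        "latitude": round(record.latitude, 1),    # roughly 10 km precision
        "longitude": round(record.longitude, 1),
        "collected_date": record.collected_at.date().isoformat(),
    }

if __name__ == "__main__":
    record = LocationRecord("user-123", 38.9226, -77.2231,
                            datetime.now(timezone.utc))
    assert is_retention_compliant(record, datetime.now(timezone.utc))
    print(minimize_for_analytics(record))  # no user_id, coarse coordinates only
```

Because the retention limit and the minimization rule live in code, a unit test or compliance scan can check them on every build, which is what makes a privacy requirement “actionable and verifiable” rather than a policy statement.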
Today, Julie and other privacy engineers get to work on some exciting applications. MITRE’s sponsors have regulatory purview over Unmanned Aerial Vehicles (a.k.a. drones), semi-automated cars that communicate with each other and with devices along their paths, and the protection of personal healthcare information. The technology is cool enough, but sponsors are working on unfamiliar ground. From a regulatory perspective, do they offer guidance or create more binding oversight? From a privacy perspective, how do they navigate the tension between the demand for more data, which can increase the accuracy of machine learning algorithms, and infringements on personal privacy? From a public safety perspective, what is the right balance between security and privacy?
Privacy is everyone’s responsibility
I ended my conversation with Julie by posing a set of questions she hears all the time: Is privacy dead? And what can we do about it? She acknowledges the reality but remains motivated. She and other privacy engineers work to show that privacy is everyone’s responsibility. Our decisions about how to engage with technology, industry practices and the regulatory oversight that establish new norms, and educational campaigns that teach lessons from the past are all key elements of a comprehensive privacy approach.
Julie believes that keeping up with new technology and how it is integrated into our lives is crucial for updating her understanding of privacy. She listens to podcasts to hear how others are thinking about and processing privacy, like learning about teenage behavioral norms on different social media apps, or how people choose to manage their online reputations. Julie has also learned not to worry that the next generation has given up caring about privacy: “They have different boundaries. For example, on social media, younger folks don’t mind friending everyone. But they keep other profiles for a tighter circle of friends.” Julie actively shares such stories with those around her, both to educate and to help herself answer the question, “What else are we missing?”
Julie knows we cannot expect everyone to be tech and privacy savvy. Instead, she focuses on where privacy and societal issues converge. “Erosion of privacy doesn’t just come in the forms of Fahrenheit 451 and 1984. I would like to think there are too many people who care to let it erode that far. Privacy isn’t dead, and we need to continue working hard to keep it that way.”
What we still have left to preserve
After Julie and I talk, I decide to do a little research. I’ll confess, I initially found it hard to remain positive in the face of the many frustrating examples of data being used and appropriated in ways that individuals had not anticipated, were not made aware of, or had not agreed to. I cringed when reading that Target’s personalized ads outed a young woman’s pregnancy to her father, before she had decided whether or how to share the news. I got angry learning that insurance providers cull social media posts to decide how much to charge for coverage, without notifying anyone that they employ this practice. I felt deflated after hearing how hackers stole photos of thousands of travelers’ faces and license plates from a Customs and Border Protection contractor, and divulged the government’s current operating procedures, thereby exposing individuals and our defense strategies to targeting and exploitation. And I started feeling hopeless after realizing that the solving of the Golden State Killer case showed that not only do private corporations provide access to our most personal and inalterable biological signatures without our knowledge or consent, but that we don’t even have to provide that information ourselves for companies to figure it out anyway.
My initial reaction to all this was that it certainly didn’t look like things would change. Maybe that’s because of the minimal consequences and fines (if any) for those that misuse or lose our data; maybe because we don’t really know what we’re giving away when signing up for new products (ever met anyone who reads those end-user license agreements?); maybe because individuals believe that using these services constitutes an acceptable risk given the benefits (“I don’t mind telling the grocery store what I buy if I can get a discount”); and maybe because there’s a feeling that all our information is already out there, that it will be harder and harder to hide our preferences and identities, so what’s the point of trying?
But after a period to digest, I changed my mind, for two reasons. First, I learned from Julie and her team that our data will be misused in ways we cannot predict. To me, this means we have to identify and mitigate the negative effects we read about today, especially for issues that affect large populations. The longer nothing changes, the more these scary patterns will become encoded into our lives, and the more desensitized to them we will become.
As companies multiply their attempts to monitor and mine our choices, their objectives become increasingly at odds with ours: radicalizing what we watch in order to keep our attention, or increasing surveillance over us in order to target us for products with greater precision. As our national adversaries collect more personally identifying information on global citizens, they can solidify political control and undermine challenges to their power, whether by identifying and blackmailing U.S. nationals who work for government intelligence agencies, or by enforcing strict political fealty by tying the privileges of domestic citizens to their public discourse and political allegiances. Finally, as we allow companies and governments to expand their power without more oversight or checks and balances, we give up more of our fundamental rights to both privacy and security (it is not a tradeoff between the two). We end up with companies that secretly partner with local police departments, giving police access to personal information shared or collected on private platforms, without a warrant or notification, sometimes in exchange for advertising and promotion of those products. We see how the spread of ubiquitous, persistent surveillance clashes with the fundamental American legal norm of presumption of innocence. And we watch the Chinese state mandate that an ethnic minority submit DNA and biometric samples and be subject to facial recognition and QR code identifiers, so that the state can do with them as it likes.
The second reason I changed my mind is that we all have things we want to keep private; privacy is not about having “nothing to hide,” it’s about choosing how and when to share certain information about ourselves. As an individual concerned about privacy, I don’t want to share everything, like my bank account password or pictures of my daughter, with everyone. As a member of many digital communities, I know that if your email is hacked, then my personal messages, written in confidence, are hacked too. And as I read through more articles, it was striking how infringements on privacy can have significantly more impact on minority groups (gender, sexuality, race, ethnicity, religion, etc.). I do not have the personal experience to speak to this topic, so instead I point you to some of the many brilliant and passionate writers who confront it.
My reasons for becoming more proactive are as simple as this: knowing our patterns and habits leads to predicting our patterns and habits, which allows unaccountable individuals or algorithms to control our patterns and habits, weaponize our patterns and habits, and use our patterns and habits to deny our fundamental rights.
In the end, these examples of overreach and personal loss only emphasize the importance of what we still have to fight to preserve: our autonomy, individuality, anonymity, and the right to control our own narrative. I feel fortunate that I work at a company that addresses privacy issues for government and industry. We have privacy experts available here, and I hope that everyone at MITRE will make use of their skills and approaches. I also think we have an opportunity at MITRE to not only contribute, but to lead in this area, by acknowledging the multi-faceted tradeoffs, by thinking through potential unknowns, and by funding the research to develop evidence-based incentives that lead to technical, policy, legal, financial, and organizational solutions.
I’m not ready for the end of privacy, and I hope you’re not ready either.
Postscript: if you want to learn more
Protecting your digital communications and browsing is only one part of privacy protection, but it is the part we can do the most about at an individual level. Here are some excellent, non-intimidating resources, especially from the Electronic Frontier Foundation (EFF). I list them from the quickest to the most involved action on your part.
- The EFF has a browser add-on that stops advertisers and other third-party trackers on the web, and one that forces encrypted connections with many major websites (when available), making browsing more secure.
- The New York Times has 9 easy steps for protecting your digital life.
- The EFF has well-written guides on creating strong passwords, protecting yourself on social networks, enabling two-factor authentication, and many more specific topics.
Jonathan Rotner is a human-centered technologist who helps program managers, algorithm developers, and operators appreciate technology’s impact on human behavior. He works to increase communication and trust in automated processes.
© 2019 The MITRE Corporation. All rights reserved. Approved for public release. Distribution unlimited. Case number 19-2903.
See also:
A social scientist examines the role of technology in our lives
Interview with Tammy Freeman on Redefining Innovation
Interview with Dr. Michael Balazs on Generation AI Nexus
Is This a Wolf? Understanding Bias in Machine Learning
Designing a Bridge Between Theory and Practice
Consequences and Tradeoffs—Dependability in Artificial Intelligence and Autonomy
The World as It Will Be: Workforce Development Within and Beyond MITRE
Technical Challenges in Data Science
Defining, Applying, and Coordinating Data Science at MITRE
Rising to the Challenge: Countering Unauthorized Unmanned Aircraft Systems
Mistakes and Transcendent Paradoxes: Dr. Peter Senge Talks on Cultivating Learning Organizations