The Balancing Act of Usability and Security
Justin Richer’s post explores the connection between security and usability as they relate to knowledge management. Richer suggests that collaborating with everyone involved in setting up systems, physical or virtual, is key to their adoption and success.—Editor
Author: Justin Richer
Collaboration systems are traditionally all about enabling users to share information. Usability is paramount: the easier it is for users to share within a tool, the more powerful and successful the tool is considered to be. Somewhat the opposite applies to security systems, as they are often considered most successful when parties are prevented from sharing something. Yet these two considerations, seemingly at odds with each other, are both essential to functioning systems in the real world.
Harmonizing usability and security is not a simple task, but it turns out that in many cases increasing usability ends up increasing security overall. Why is that? The answer may surprise many security practitioners: no matter how well designed and well intentioned security systems may be, the truth is that when security gets in the way of usability, people find ways to work around the security. Unfortunately, the way they do this usually turns out to pose more risks than if the security measures had not been there in the first place.
As an example from the physical world, I recently gave a talk at a facility where the main conference room (a shared collaborative resource) was secured with an electronic password lock. The rationale seemed to be that the room contained a large amount of expensive audiovisual equipment and was not necessarily available to everyone at the shared facility. However, those at the facility not only knew who had the right to use the room, but also respected those rules as a community. A secret password on the door was therefore unnecessary for the day-to-day operations of the space. Consequently, right next to the door’s expensive and very secure lock hung a humble sticky note with the password for the door. In spite of what the designers of the lock system and the facility owners would probably say, this is not a locked door; it is an unlocked door that is very cumbersome to use. In fact, during my talk the door itself was propped open to avoid use of the lock entirely. In this case, both security and usability were sacrificed in the name of practicality. If the lock had been removed entirely, we would at least have been able to close the door during the presentation, achieving a higher level of both usability and security.
This becomes even more important in the digital world, where the security mechanisms and their workarounds exist in different spaces. For example, two people may share the password to a single digital account in order to share information or data access rights. While this appears to be a breach of good security practice, these users shared a password to access system functionality, not to deliberately break the security architecture. In fact, the security architecture is itself fundamentally broken if users cannot perform legitimate operations without circumventing the security systems, much like the electronic lock on an otherwise public door. The appropriate response to this very common occurrence is not to make it impossible to share passwords, but to build the system’s functionality so that users never need to share passwords in the first place, for instance by offering an option for users to delegate rights and make informed security decisions at runtime.
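The shape of that delegation idea can be sketched in a few lines. The toy Python below is purely illustrative (all names are hypothetical, and it is nothing like a production authorization system): the account owner issues a scoped, expiring token to a delegate, and the resource checks the token rather than a shared password, so no credential ever changes hands.

```python
import secrets
import time

# Toy token store: maps opaque token strings to a grant record.
# Hypothetical sketch only -- not a real OAuth implementation.
TOKENS = {}

def delegate_access(owner, delegate, scopes, ttl_seconds=3600):
    """The owner grants the delegate a scoped, expiring token; no password changes hands."""
    token = secrets.token_urlsafe(32)
    TOKENS[token] = {
        "owner": owner,
        "delegate": delegate,
        "scopes": set(scopes),
        "expires": time.time() + ttl_seconds,
    }
    return token

def check_access(token, user, scope):
    """The resource checks the delegated token instead of a shared password."""
    grant = TOKENS.get(token)
    if grant is None or time.time() > grant["expires"]:
        return False
    return user == grant["delegate"] and scope in grant["scopes"]

# Alice lets Bob read her report without ever revealing her password.
t = delegate_access("alice", "bob", ["read:report"])
print(check_access(t, "bob", "read:report"))    # True
print(check_access(t, "bob", "write:report"))   # False: scope was never granted
print(check_access(t, "carol", "read:report"))  # False: token is bound to Bob
```

Because the grant is scoped and expiring, Alice shares exactly what she intends to share, and revoking or letting the token lapse does not require her to change her own password.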
System architects, designers, and implementers must make it easy for users to do the right thing in terms of security without compromising the desired functionality. When the users are themselves software developers, this means making tools and services available which allow developers to build new systems that let people work seamlessly and securely. As a concrete example of how to make such functionality available in an enterprise environment, let’s look at something that MITRE has done.
In 2012, MITRE deployed an OpenID Connect and OAuth 2.0 authentication and authorization service that serves all MITRE employees. Developed for the open web with a focus on enabling cross-domain information sharing, these open standard protocols are built around enabling end-user control of security decisions across traditional security boundaries in a way that’s both usable and secure. MITRE’s deployed service offers advanced security capabilities to various applications, whether these applications are controlled by MITRE or not. This represents a radical departure from traditional enterprise security models, which emphasize centralized control, into a world where end users and developers are enabled to accomplish things that the central authority never deemed worthwhile or possible. Since MITRE’s deployed service is built as an extension to an already robust corporate security infrastructure, the two coexist to serve different but complementary use cases, providing security mechanisms that are easy enough (and powerful enough) for developers and end-users to employ without complex and dangerous workarounds.
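To make the pattern concrete, here is a minimal sketch of the first step of the OAuth 2.0 authorization code flow that OpenID Connect builds on. The endpoint URL and client ID are placeholders, not MITRE's actual deployment: the point is that the application redirects the user to the authorization server to make the security decision there, and never collects the user's password itself.

```python
import secrets
from urllib.parse import urlencode

# Placeholder authorization endpoint -- a real client would discover this
# from the provider's configuration.
AUTHORIZATION_ENDPOINT = "https://idp.example.com/authorize"

def build_authorization_url(client_id, redirect_uri, scopes):
    """Build the redirect URL that sends the user to the authorization server."""
    state = secrets.token_urlsafe(16)  # anti-CSRF value, verified on return
    params = {
        "response_type": "code",    # authorization code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),  # the "openid" scope marks an OIDC request
        "state": state,
    }
    return f"{AUTHORIZATION_ENDPOINT}?{urlencode(params)}", state

url, state = build_authorization_url(
    "demo-client", "https://app.example.com/callback", ["openid", "profile"])
print(url)
```

After the user approves the request, the server redirects back with a short-lived code that the application exchanges for tokens, so the delegation decision is made by the end user at runtime rather than hard-wired by a central administrator.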
I presented this work at CIS 2014, and I have learned that other enterprises are starting to follow this pattern; MITRE expects the trend to continue. While this divestment of control can be frightening at first, remember the alternative: people will always find a way to work around onerous security systems. It is far better to involve everyone in the security decision space and to empower those who would otherwise subvert the system in pursuit of functionality. In conclusion, security and usability must work hand in hand for a system to be practical and functional. Well-designed systems that empower decision makers instead of hindering them can encourage good security practices without loss of functionality. System designers should always seek this careful balance, knowing well that pushing too far in either direction can end in disaster.