
Communications of the ACM

Inside Risks

Privacy By Design: Moving From Art to Practice


Figure: Members of staff demonstrate a new whole-body security scanner at Manchester Airport, Manchester, England, in January 2010.

Credit: Jon Super / Associated Press

Most people involved with system development are well aware of the adage that you are better off designing in security and privacy (and pretty much any other "nonfunctional" requirements) from the start, rather than trying to add them later. Yet, if this is the conventional wisdom, why is the conventional outcome so frequently systems with major flaws in these areas?

Part of the problem is that while people know how to talk about functionality, they are typically a lot less fluent in security and privacy. They may sincerely want security and privacy, but they seldom know how to specify what they seek. Specifying functionality, on the other hand, is a little more straightforward, and thus the system that previously could make only regular coffee in addition to doing word processing will now make espresso too. (Whether this functionality actually meets user needs is another matter.)


Security and Privacy

The fact that it is often not apparent what security and privacy should look like is indicative of some deeper issues. Security and privacy tend to be articulated at a level of abstraction that often makes their specific manifestations less than obvious, to either customers or system developers.

This is not to say the emperor has no clothes; far from it. There are substantial bodies of knowledge for some nonfunctional areas, including security, but figuring out how to translate the abstract principles, models, and mechanisms into comprehensive specific requirements for specific systems operating within specific contexts is seldom straightforward. That translation process is crucial to designing these properties into systems, but it also tends to be the most problematic activity and the activity for which the least guidance is provided. The sheer complexity of most modern systems compounds the problem.

Security, though, is better positioned than privacy. Privacy—or informational privacy at least—certainly has commonly understood and accepted principles in the form of Fair Information Practices. It presently doesn't have much else. Models and mechanisms that support privacy are scarce, not generally known, and rarely understood by either customers or developers.

As more things become digitized, informational privacy increasingly covers areas for which Fair Information Practices were never envisioned. Biometrics, physical surveillance, genetics, and behavioral profiling are just a few of the areas that are straining Fair Information Practices to the breaking point. More sophisticated models are emerging for thinking about privacy risk, as represented by the work of scholars such as Helen Nissenbaum and Daniel Solove. However, unless such models are associated with privacy protection mechanisms and supported by translation guidance, they are likely to have far less impact than they deserve.

A recent example is the development and deployment of whole-body imaging (WBI) machines at airports for physical screening of passengers. In their original incarnation, these machines perform what has been dubbed a "virtual strip search" due to the body image that is presented. These machines are currently being deployed at U.S. airports in a way that is arguably compliant with Fair Information Practices. Yet they typically operate in a way that many people find offensive.

The intended purpose certainly is not to collect, use, disclose, and retain naked images of people; it is to detect potentially dangerous items they may be carrying on their persons when screened. Fair Information Practices include minimization of personal information collected, used, disclosed, and retained, consistent with the intended purpose.

This has profound implications for how image data is processed, presented, and stored. It should be processed so that at no point does there exist an exposed body image that can be viewed or stored. It should be presented in a nonexposed form (for example, a chalk outline or a fully clothed person) with indicators where items have been detected. None of it should be retained beyond the immediate encounter. That almost none of these design elements were originally specified illustrates what too often happens when applicable models and mechanisms are missing and principles are never translated into effective requirements.
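
To make these design elements concrete, here is a minimal sketch in Python; the names Detection, detect_items, and screen_passenger are purely illustrative assumptions, not any real scanner's interface. It shows a processing path that satisfies all three requirements: detection runs on raw sensor data in memory, the operator receives only indicator coordinates to overlay on a generic outline, and nothing persists beyond the encounter.

from dataclasses import dataclass

@dataclass(frozen=True)
class Detection:
    """Location of a flagged item on a generic outline; carries no imagery."""
    x: float      # normalized horizontal position on the outline
    y: float      # normalized vertical position on the outline
    label: str    # for example, "dense object"

def detect_items(raw_scan: bytes) -> list[Detection]:
    """Stand-in for the scanner's analysis step. A real implementation
    would run detection directly on sensor data, in memory, without ever
    assembling a viewable body image."""
    return [Detection(x=0.4, y=0.6, label="dense object")]  # placeholder result

def screen_passenger(raw_scan: bytes) -> list[Detection]:
    detections = detect_items(raw_scan)
    # Sensor data is discarded as soon as analysis completes: nothing is
    # written to storage, and no exposed image object is ever constructed.
    del raw_scan
    return detections

# The operator's display receives only indicator coordinates, which it
# overlays on a stock outline figure rather than the passenger's image.
for d in screen_passenger(b"\x00" * 1024):
    print(f"{d.label} at ({d.x:.2f}, {d.y:.2f}) on generic outline")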

In this instance, Solove's concept of exposure provides the necessary (partial) model. Exposure is a privacy violation that induces feelings of vulnerability and distress in the individual by revealing things that are customarily concealed. The potential harm from exposure is not restricted to modesty or dignity. A friend is convinced that her pubescent daughter, who is currently extremely self-conscious about her body, would be quite literally traumatized if forced to undergo WBI. If physical strip searches would raise concern, why not WBI? Real damage, physical as well as psychological, can occur in the context of body image neuroses.

If one recognizes from the outset the range of privacy risks represented by exposure, and the relevance of exposure for WBI, one then stands a chance of effectively moving from principles to requirements. Even then, though, the translation process is not necessarily obvious.

Supposedly, the WBI machines being used by the U.S. Transportation Security Administration are not capable of retaining images when in normal operating mode. (They have this capability when in testing mode, though, so significant residual risk may exist.) Other necessary mechanisms were not originally specified. Some models of WBI are being retrofitted to present a nonexposed image, but the issue of intermediate processing remains. Some models developed after the initial wave apparently implement all the necessary control mechanisms; privacy really was designed in. Why wasn't it designed in from the beginning and across the board? The poor state of practice of privacy by design offers a partial explanation. The state of the art, though, is advancing.

The importance of meaningfully designing privacy into systems at the beginning of the development process, rather than bolting it on at the end (or overlooking it entirely), is being increasingly recognized in some quarters. A number of initiatives and activities are using the rubric of privacy by design. In Canada, the Ontario Information and Privacy Commissioner's Office has published a number of studies and statements on how privacy can be designed into specific kinds of systems. One example is electronic (RFID-enabled) driver's licenses, for which the inclusion of a built-in on/off switch is advocated, giving individuals direct, immediate, and dynamic control over whether the personal information embedded in the license can be remotely read. Such a mechanism would support several Fair Information Practices, most notably collecting personal information only with the knowledge and consent of the individual. This approach is clearly applicable to other kinds of RFID-enabled cards and documents carrying personal information as well.
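
The consent-gating logic such a switch implements can be sketched briefly; the following Python is a hypothetical illustration (the RfidLicense class and its fields are assumptions, not drawn from any actual license design). The tag answers a reader's query only while the holder has enabled it, and otherwise behaves as if absent.

from typing import Optional

class RfidLicense:
    """Hypothetical RFID-enabled license with a physical on/off switch.
    Personal data is released only while the holder permits reading."""

    def __init__(self, holder_record: dict):
        self._holder_record = holder_record
        self._readable = False   # switch defaults to off: no silent collection

    def set_switch(self, enabled: bool) -> None:
        """Direct, immediate, dynamic control by the individual."""
        self._readable = enabled

    def respond_to_reader(self) -> Optional[dict]:
        """A remote query succeeds only with the holder's consent;
        otherwise the license behaves as if no tag were present."""
        return dict(self._holder_record) if self._readable else None

license = RfidLicense({"name": "A. Holder", "license_no": "X1234567"})
assert license.respond_to_reader() is None   # default: remote reads fail
license.set_switch(True)                     # holder knowingly opts in
print(license.respond_to_reader())           # data released with consent
license.set_switch(False)                    # and withdrawn immediately after

The default-off state is the essential design choice here: it turns remote reading from something that happens to the individual into something the individual knowingly permits.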

Similar efforts have been sponsored by the U.K. Information Commissioner's Office. This work has taken a somewhat more systemic perspective, looking less at the application of privacy by design to specific types of technology and more at how to effectively integrate privacy into the system development life cycle through measures such as privacy impact assessments and 'practical' privacy standards. It also emphasizes the potential role of privacy-enhancing technologies (PETs) that can be integrated with or into other systems. While some of these are oriented toward empowering individuals, others—which might more appropriately be labeled Enterprise PETs—are oriented toward supporting organizational stewardship of personal information.

However, the state of the art is just that: the state of the art, not the state of practice. Supporting the translation of abstract principles, models, and mechanisms into implementable requirements, turning this into a repeatable process, and embedding that process in the system development life cycle is no small matter. Security has been at it a lot longer than privacy, and it is still running into problems. But at least security has a significant repertoire of principles, models, and mechanisms; privacy has not yet reached that stage.


Conclusion

So, if privacy by design is still a ways off, and security by design still leaves something to be desired, how do we get there from here? There's little doubt that appropriately trained engineers (including security engineers) are key to supporting the effective translation of principles, models, and mechanisms into system requirements. There doesn't yet appear to be such a thing as a privacy engineer; given the relative paucity of models and mechanisms, that's not too surprising. Until we build up the latter, we won't have a sufficient basis for the former. For privacy by design to extend beyond a small circle of advocates and experts and become the state of practice, we'll need both.




This will require recognition that there is a distinct and necessary technical discipline of privacy, just as there is a distinct and necessary technical discipline of security—even if neither is fully formed. If that can be accomplished, it will create a home and an incentive for the models and mechanisms privacy by design so badly needs.

This is not to minimize the difficulty of more effectively and consistently translating security's body of knowledge (which is still incomplete) into implementable and robust requirements. Both security and privacy need to receive more explicit and directed attention than they often do as areas of research and education.

Security by design and privacy by design can be achieved only by design. We need a firmer grasp of the obvious.


Author

Stuart S. Shapiro (s_shapiro@acm.org) is Principal Information Privacy and Security Engineer at The MITRE Corporation, Bedford, MA.


Footnotes

DOI: http://doi.acm.org/10.1145/1743546.1743559


Figures

Figure UF1. Members of staff are seen demonstrating a new whole-body security scanner at Manchester Airport, Manchester, England, in January 2010. Airline passengers bound for the United States faced a hodgepodge of security measures across Europe, and airports did not appear to be following a U.S. request for increased screening of passengers from 14 countries.



Copyright held by author.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2010 ACM, Inc.


 
