Contextual Integrity for Computer Systems - July 2022

PI(s), Co-PI(s), Researchers: Michael Tschantz (ICSI), Helen Nissenbaum (Cornell Tech)

HARD PROBLEM(S) ADDRESSED
Scalability and Composability, Policy-Governed Secure Collaboration

PUBLICATIONS

KEY HIGHLIGHTS

First, we began applying CI to systematically analyze NIST's Privacy Risk Assessment Methodology ("PRAM"). We conducted a step-by-step analysis of PRAM to illustrate how the limitations of a risk-assessment approach to privacy play out in practice, identifying the various points of discretion that an analyst may exploit to downplay certain privacy risks or to shape the assessment of privacy risks around the organization's own interests, to the detriment of societal interests and values.

We have examined which elements of PRAM may be repurposed as part of a contextual-integrity-driven analysis; that is, how these two approaches to privacy modeling and design may complement each other. Our results indicate a two-tiered approach. First, contextual integrity is tasked with the normative analysis that establishes (or rebuts) the legitimacy of a given system or functionality. Second, once legitimacy is established, a risk assessment may become useful for determining the attack vectors to which a system is vulnerable. Risk assessment is thus confined to the realm of engineering, where empirical determinations about technical vulnerabilities are applicable.
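The division of labor can be made concrete in a short sketch. Everything below is a hypothetical illustration of the two tiers, not an implementation of PRAM or of any published CI formalism:

```python
# Hypothetical sketch of the two-tiered analysis described above.

def ci_legitimate(flows, entrenched_norms):
    # Tier 1 (normative): the system is legitimate only if every
    # information flow it generates matches an entrenched contextual norm.
    return all(flow in entrenched_norms for flow in flows)

def risk_assessment(flows):
    # Tier 2 (engineering): invoked only once legitimacy is established;
    # confined to empirical questions about technical attack vectors.
    return [f"enumerate attack vectors for: {flow}" for flow in flows]

def assess(flows, entrenched_norms):
    if not ci_legitimate(flows, entrenched_norms):
        return "rejected at tier 1: flow violates entrenched norms"
    return risk_assessment(flows)

norms = {"patient shares diagnosis with doctor, in confidence"}
print(assess({"patient shares diagnosis with doctor, in confidence"}, norms))
print(assess({"vendor sells diagnosis to advertiser"}, norms))
```

The ordering is the point of the design: the engineering question of how to harden a system is never reached for a flow that fails the normative test.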

Second, our research makes it increasingly clear that, due to its inherent complexity, differential privacy is often misunderstood and misapplied. Furthermore, certain actors (whether intentionally or unintentionally) have invoked differential privacy to legitimize otherwise privacy-invasive systems and services. Our more recent work has focused on this last point. We have identified three common misconceptions about differential privacy and examined how they may help legitimize privacy-invasive systems or applications. We are currently subjecting these misconceptions to a contextual integrity analysis to elucidate how differential privacy is misunderstood and how its properties and guarantees are misconstrued. This is leading us to propose more precise modeling and communication methods that separate the narrow notion of privacy that differential privacy represents from the wider, more capacious notion that contextual integrity encompasses, a task for which contextual integrity itself holds promise.
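To make the narrowness of that guarantee concrete, the sketch below implements the standard Laplace mechanism for a counting query; the data and function names are illustrative assumptions, not artifacts of the work described above:

```python
import numpy as np

def dp_count(data, predicate, epsilon, rng=None):
    # Laplace mechanism for a counting query: adding or removing one
    # record changes the true count by at most 1 (L1-sensitivity 1),
    # so Laplace noise with scale 1/epsilon yields epsilon-DP.
    rng = rng or np.random.default_rng()
    true_count = sum(1 for row in data if predicate(row))
    return true_count + rng.laplace(scale=1.0 / epsilon)

# What the guarantee says: for datasets D, D' differing in one record and
# any output set S, Pr[dp_count(D) in S] <= exp(epsilon) * Pr[dp_count(D') in S].
# What it does not say: whether collecting the data or releasing the count
# is an appropriate information flow at all. That is the CI question.
ages = [23, 35, 41, 29, 52]
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))
```

The bound applies only to the mechanism's output; it is silent on the legitimacy of the surrounding collection and release, which is precisely the gap the misconceptions exploit.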

Third, we extended the work to applications of Contextual Integrity (CI) to computer systems in two areas -- educational systems and social network systems. Although CI informational norms are structured around a standard 5-tuple (sender, recipient, subject, information type, and transmission principle), the values of these parameters vary with the particular context of application. Education was an area of particular interest to the Ph.D. student on the project. We were also fortunate to have the opportunity to develop normative systems for groups operating on social networks (e.g., Meta, Reddit). Specifically, the aim of this project is to demonstrate that technical mechanisms enhance privacy only if they allow for the expression of sufficiently complex constraints on information flows.
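As a concrete illustration of the 5-tuple, the sketch below encodes flows and entrenched norms as simple records and flags any flow that matches no norm. The roles, information types, and example norms are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    sender: str      # role sending the information
    recipient: str   # role receiving it
    subject: str     # role the information is about
    info_type: str   # type/attribute of the information
    principle: str   # transmission principle constraining the flow

def preserves_ci(flow, entrenched_norms):
    # A flow preserves contextual integrity only if it matches a norm
    # entrenched in the context; an unmatched flow is a prima facie
    # violation that calls for normative analysis.
    return flow in entrenched_norms

# Hypothetical norm for an educational context.
edu_norms = {
    Flow("student", "instructor", "student", "exam answers", "with notice"),
}
leak = Flow("proctoring vendor", "advertiser", "student", "webcam video", "sale")
print(preserves_ci(leak, edu_norms))  # False -> prima facie violation
```

Even this toy encoding shows why expressiveness matters: a mechanism that cannot represent, say, the transmission principle cannot distinguish a permitted flow from a violation.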

Submitted "Governing Assessment Integrity: Consumer and Student Privacy in Ed-Tech." Madiha Zahrah Choksi, Min Cheong Kim, Yan Shvarzshanider, Madelyn Rose Sanfilippo. Conference on Human Factors in Computing Systems, Hamburg, Germany, April 2023. (Under Review)

COMMUNITY ENGAGEMENTS

On June 2-3, 2022, we held discussions about privacy risk assessments with participants at the Privacy Law Scholars Conference (PLSC).

"...contextual Integrity is the worst definition of privacy, except for all the others that have been tried...", Invited Tutorial, ACM FAccT 2022 Conference, June 23 2022


EDUCATIONAL ADVANCES