Numerous surveys find that Internet users want to limit the personal data collected about them and to control how that data is used. Existing and proposed regulation in the U.S. accords users such rights in the form of a "transparency and control" obligation on personal data collectors: users must be informed about the rationale of requests for personal data so that they can make an informed decision about whether or not to disclose their data.
In the past few years, it has become apparent that the transparency and control paradigm "does not work." It overburdens users with the sheer number of decisions that must be made in a very short period of time. Recent research moreover shows that users are cognitively unable to make rational decisions when they must trade off the immediate benefits of data disclosure against uncertain and unspecific privacy threats that may materialize only at some point in the future.
Based on empirical studies, this research helps predict which privacy decisions would be consistent with users' preferences, and generates personalized default settings for privacy choices as well as rationales for disclosure that best suit users' predicted decision-making. Users can override any of the predictions, and such corrections refine the prediction algorithm over time. The proposed approach thus affords "realistic empowerment": it allows Internet users to make their own privacy decisions if they wish, while helping them overcome the limits of their bounded rationality by first making personalized default decisions on their behalf. The multiple methods in the research design build upon one another, representing a rigorous and systematic approach to developing an in-depth understanding of how to design and build smart default privacy settings. This project can have a transformative impact on the privacy literature by providing a systematic understanding of how to predict individuals' contextual privacy preferences.
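The abstract does not specify a prediction model, but the override-and-refine loop it describes maps naturally onto online learning. The Python sketch below is a minimal illustration under assumed details: a linear classifier (scikit-learn's SGDClassifier) over three hypothetical contextual features, with user overrides fed back as incremental training examples. The class name, feature encoding, and model choice are invented for illustration and are not the project's actual design.

```python
# Sketch of a smart-default privacy engine (illustrative assumptions only):
# a linear classifier predicts disclose/withhold from contextual features,
# and user overrides are fed back as labeled examples that update the
# model online, so defaults drift toward the user's revealed preferences.
import numpy as np
from sklearn.linear_model import SGDClassifier

CLASSES = np.array([0, 1])  # 0 = withhold, 1 = disclose

class SmartDefaultEngine:
    def __init__(self):
        # Logistic-regression-style linear model, trained incrementally.
        self.model = SGDClassifier(loss="log_loss", random_state=0)
        self.initialized = False

    def predict_default(self, context: np.ndarray) -> int:
        """Return a personalized default (disclose/withhold) for one request."""
        if not self.initialized:
            return 0  # conservative, privacy-protective default before any feedback
        return int(self.model.predict(context.reshape(1, -1))[0])

    def record_feedback(self, context: np.ndarray, user_choice: int) -> None:
        """Fold the user's actual decision (including overrides) back into the model."""
        self.model.partial_fit(context.reshape(1, -1), [user_choice], classes=CLASSES)
        self.initialized = True

# Hypothetical contextual features: [data sensitivity, recipient trust,
# perceived benefit], each scaled to [0, 1].
engine = SmartDefaultEngine()
request = np.array([0.9, 0.2, 0.3])  # sensitive data, low-trust recipient
default = engine.predict_default(request)
print("suggested default:", "disclose" if default else "withhold")

# The user overrides the suggestion and discloses anyway; the override
# becomes a training signal so future defaults better match their preferences.
engine.record_feedback(request, user_choice=1)
```

In this reading, "realistic empowerment" corresponds to the feedback path: the system supplies a default so the user is not forced to decide every request, yet every override is honored immediately and used to correct subsequent predictions.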