Phishing has become a serious threat over the past several years, and combating it is increasingly important. Why do some people get phished while others do not? In this project, we aim to identify the factors that make people susceptible or resistant to phishing attacks, and to use those findings to deploy adaptive anti-phishing measures.
The objective of this project is to design empirical privacy metrics that are independent of existing privacy models and that naturally reflect the privacy offered by anonymization. We propose to model privacy attacks as an inference process and to develop an inference framework over anonymized data (independent of specific privacy objectives and anonymization techniques) into which machine-learning techniques can be integrated to implement various attacks. The privacy metric is then defined as the accuracy with which an attacker can infer individuals' sensitive attributes. Data utility is modeled as a data aggregation process and can thus be measured as the accuracy of aggregate query answering. Our hypothesis is that, given these empirical privacy and utility metrics, anonymization techniques based on differential privacy offer a better privacy/utility tradeoff when appropriate parameters are set. In particular, it is possible to improve utility greatly while imposing only a limited impact on privacy.
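As a rough sketch of how these two metrics could be computed in practice, the Python code below estimates empirical privacy by training an attacker classifier (here scikit-learn's random forest, an illustrative choice) to predict a sensitive attribute from an anonymized release, and estimates empirical utility for a single differentially private count query as the relative error of Laplace-noised answers. The column names, function signatures, and parameter values are hypothetical; the project's actual framework is not specified here.

```python
# Minimal sketch of the proposed empirical metrics, assuming a tabular
# dataset with quasi-identifier columns and one sensitive attribute.
# The classifier, column handling, and Laplace-noise example are
# illustrative assumptions, not the project's actual implementation.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def empirical_privacy(anonymized: pd.DataFrame,
                      quasi_identifiers: list[str],
                      sensitive: str) -> float:
    """Empirical privacy as inference accuracy: train an attacker model
    on the anonymized release and measure how well it predicts the
    sensitive attribute. Higher accuracy means weaker privacy."""
    X = pd.get_dummies(anonymized[quasi_identifiers])
    y = anonymized[sensitive]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)
    attacker = RandomForestClassifier(n_estimators=100, random_state=0)
    attacker.fit(X_train, y_train)
    return accuracy_score(y_test, attacker.predict(X_test))

def empirical_utility(true_count: float, epsilon: float,
                      trials: int = 1000) -> float:
    """Empirical utility for one aggregate (count) query under
    epsilon-differential privacy: mean relative error of the
    Laplace-noised answer (sensitivity 1, so scale = 1/epsilon).
    Lower error means higher utility."""
    noisy = true_count + np.random.laplace(0.0, 1.0 / epsilon, size=trials)
    return float(np.mean(np.abs(noisy - true_count) / max(true_count, 1)))
```

Sweeping the privacy parameter (epsilon here) and plotting both metrics against each other would trace the privacy/utility tradeoff curve that the hypothesis concerns.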
TEAM
PIs: Chris Mayhorn & Emerson Murphy-Hill
Students: Kyung Wha Hong & Chris Kelly