Biblio

Filters: Keyword is information disclosure
2018-09-28
Li-Xin, L., Yong-Shan, D., Jia-Yan, W.  2017.  Differential Privacy Data Protection Method Based on Clustering. 2017 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC). :11–16.

To enhance privacy protection and improve data availability, a differential privacy data protection method, ICMD-DP, is proposed. Building on an insensitive clustering algorithm, ICMD-DP applies differential privacy to the results of ICMD (insensitive clustering method for mixed data). Combining clustering with differential privacy shifts query sensitivity from single records to group records, and at the same time reduces the risk of information loss and information disclosure. In addition, to maintain differential privacy over mixed data, ICMD-DP uses different methods to calculate the distance and centroid of categorical and numerical attributes. Finally, experiments are presented to illustrate the availability of the method.
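The core idea in this abstract — releasing cluster-level statistics rather than single records, so that the sensitivity of a query drops from the full value range to range/|group| — can be sketched with the standard Laplace mechanism. This is a minimal illustration, not the paper's ICMD-DP algorithm; the function and parameter names are hypothetical:

```python
import math
import random

def laplace_noise(scale, rng):
    # Sample Laplace(0, scale) via the inverse-CDF transform.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_centroid(group, epsilon, value_range=1.0, rng=None):
    """Release the centroid of a numerical cluster under epsilon-differential
    privacy. Averaging over a group of n records lowers the sensitivity of
    the query from value_range (one record) to value_range / n."""
    rng = rng or random.Random()
    n, dim = len(group), len(group[0])
    sensitivity = value_range / n          # sensitivity of the mean query
    scale = sensitivity / epsilon          # Laplace mechanism scale b
    centroid = [sum(rec[d] for rec in group) / n for d in range(dim)]
    return [c + laplace_noise(scale, rng) for c in centroid]
```

For a fixed group size, a smaller epsilon means more noise on the released centroid — the privacy-versus-availability trade-off the abstract refers to.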

2018-03-19
Ge, H., Yue, D., Xie, X., Deng, S., Zhang, Y.  2017.  Analysis of Cyber Physical Systems Security via Networked Attacks. 2017 36th Chinese Control Conference (CCC). :4266–4272.

In this paper, cyber-physical systems are analyzed from a security perspective. A double closed-loop security control structure and algorithm with defense functions is proposed. Within this structure, the features of several cyber attacks are considered, and models of information disclosure, denial-of-service (DoS), and man-in-the-middle (MITM) attacks are proposed. For each kind of attack, a model is obtained and analyzed, then reduced to a unified model. On this basis, system security conditions are derived, and a defense scenario with a detailed algorithm is designed to illustrate the implementation of the scheme.
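To illustrate why a networked attack such as DoS threatens closed-loop stability, consider a minimal scalar sketch under assumed dynamics — this is a toy, not the paper's double closed-loop structure, and all names and parameter values are hypothetical:

```python
def simulate(dos, steps=50, a=1.05, b=1.0, k=0.5, x0=1.0):
    """Scalar networked control loop x[t+1] = a*x[t] + b*u[t] with state
    feedback u = -k*x sent over a network. dos(t) == True means the
    control packet at step t is dropped and the actuator applies no input."""
    x = x0
    for t in range(steps):
        u = 0.0 if dos(t) else -k * x   # DoS drops the control update
        x = a * x + b * u
    return x

# Without attack the closed loop (a - b*k = 0.55) is stable; under a
# sustained DoS the open-loop dynamics (a = 1.05) diverge.
```

The same skeleton extends to the other attacks in the abstract: MITM replaces the transmitted `u` with a forged value, and information disclosure leaves the dynamics intact but leaks the state sequence to the adversary.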

2017-11-03
Gambino, Andrew, Kim, Jinyoung, Sundar, S. Shyam, Ge, Jun, Rosson, Mary Beth.  2016.  User Disbelief in Privacy Paradox: Heuristics That Determine Disclosure. Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. :2837–2843.
We conducted a series of in-depth focus groups wherein users provided rationales for their own online privacy behaviors. Our data suggest that individuals often take action with little thought or evaluation, even showing surprise when confronted with their own behaviors. Our analysis yielded a battery of cognitive heuristics, i.e., mental shortcuts / rules of thumb, that users seem to employ when they disclose or withhold information at the spur of the moment. A total of 4 positive heuristics (promoting disclosure) and 4 negative heuristics (inhibiting disclosure) were discovered. An understanding of these heuristics can be valuable for designing interfaces that promote secure and trustworthy computing.

2017-03-07
Huang, Dejun, Gairola, Dhruv, Huang, Yu, Zheng, Zheng, Chiang, Fei.  2016.  PARC: Privacy-Aware Data Cleaning. Proceedings of the 25th ACM International on Conference on Information and Knowledge Management. :2433–2436.

Poor data quality has become a persistent challenge for organizations as data continues to grow in complexity and size. Existing data cleaning solutions focus on identifying repairs to the data to minimize either a cost function or the number of updates. These techniques, however, fail to consider underlying data privacy requirements that exist in many real data sets containing sensitive and personal information. In this demonstration, we present PARC, a Privacy-AwaRe data Cleaning system that corrects data inconsistencies w.r.t. a set of FDs, and limits the disclosure of sensitive values during the cleaning process. The system core contains modules that evaluate three key metrics during the repair search, and solves a multi-objective optimization problem to identify repairs that balance the privacy vs. utility tradeoff. This demonstration will enable users to understand: (1) the characteristics of a privacy-preserving data repair; (2) how to customize data cleaning and data privacy requirements using two real datasets; and (3) the distinctions among the repair recommendations via visualization summaries.
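The privacy-versus-utility balance during the repair search can be caricatured as a weighted scoring of candidate repair values. This is a toy sketch, not PARC's actual modules or metrics; `choose_repair` and its parameters are hypothetical:

```python
def choose_repair(dirty_value, candidates, disclosure_risk, alpha=0.5):
    """Pick a repair value for a cell that violates a functional dependency.
    Each candidate is scored by a weighted sum of
      - a crude utility loss (0 if it equals the dirty value, else 1), and
      - a caller-supplied disclosure risk for revealing that value;
    the candidate with the minimal combined score wins."""
    def score(v):
        utility_loss = 0.0 if v == dirty_value else 1.0
        return (1 - alpha) * utility_loss + alpha * disclosure_risk.get(v, 0.0)
    return min(candidates, key=score)
```

With `alpha` near 1 the search avoids repairs that disclose sensitive values even at a higher update cost; with `alpha` near 0 it behaves like a conventional cost-only cleaner — the trade-off the demonstration lets users explore.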