Biblio

Filters: Keyword is semantic attacks
2020-08-13
Yang, Xudong, Gao, Ling, Wang, Hai, Zheng, Jie, Guo, Hongbo.  2019.  A Semantic k-Anonymity Privacy Protection Method for Publishing Sparse Location Data. 2019 Seventh International Conference on Advanced Cloud and Big Data (CBD). :216—222.

With the development of location technology, location-based services greatly facilitate people's lives. However, because location information contains a large amount of sensitive user information, location data published by a location-based service provider is also subject to the risk of privacy disclosure. In particular, publishing sparse location data without considering an attacker's semantic background knowledge can easily lead to privacy leaks. To address this problem, we propose a semantic k-anonymity privacy protection method. In this method, we first propose a multi-user compressive sensing method to reconstruct the missing location data. To balance the availability and privacy requirements of the anonymity set, we use semantic translation and multi-view fusion to select non-sensitive data to join the anonymity set. Experimental results on two real-world datasets demonstrate that our solution improves the quality of privacy protection against semantic attacks.
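The abstract's core idea can be illustrated with a minimal sketch of semantically filtered location k-anonymity. This is not the paper's algorithm; the sensitive-category set, the grid-cell representation, and the suppression rule below are all invented for illustration: a record is publishable only if its semantic category is non-sensitive and at least k distinct users share its cell.

```python
# Hypothetical sketch of location k-anonymity with a semantic filter
# (illustrative only; the paper's actual method also uses compressive
# sensing reconstruction and multi-view fusion, which are omitted here).
from collections import defaultdict

SENSITIVE = {"hospital", "clinic"}  # assumed sensitive semantic categories

def k_anonymize(records, k):
    """records: list of (user_id, cell, category); returns publishable records."""
    by_cell = defaultdict(list)
    for user, cell, category in records:
        if category not in SENSITIVE:  # only non-sensitive data may join a set
            by_cell[cell].append((user, cell, category))
    published = []
    for cell, group in by_cell.items():
        # an anonymity set must cover at least k distinct users
        if len({u for u, _, _ in group}) >= k:
            published.extend(group)
    return published

records = [
    ("u1", "c1", "park"), ("u2", "c1", "park"), ("u3", "c1", "cafe"),
    ("u4", "c2", "hospital"),  # sensitive: never joins an anonymity set
    ("u5", "c3", "park"),      # cell with a single user: suppressed
]
print(len(k_anonymize(records, k=2)))  # only the 3-user c1 group survives
```

The semantic filter is what distinguishes this from plain k-anonymity: even a well-populated cell is suppressed if its category would leak sensitive meaning to an attacker with background knowledge.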

2019-01-31
Mohammady, Meisam, Wang, Lingyu, Hong, Yuan, Louafi, Habib, Pourzandi, Makan, Debbabi, Mourad.  2018.  Preserving Both Privacy and Utility in Network Trace Anonymization. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :459–474.

As network security monitoring grows more sophisticated, there is an increasing need for outsourcing such tasks to third-party analysts. However, organizations are usually reluctant to share their network traces due to privacy concerns over sensitive information, e.g., network and system configuration, which may potentially be exploited for attacks. In cases where data owners are convinced to share their network traces, the data are typically subjected to certain anonymization techniques, e.g., CryptoPAn, which replaces real IP addresses with prefix-preserving pseudonyms. However, most such techniques either are vulnerable to adversaries with prior knowledge about some network flows in the traces, or require heavy data sanitization or perturbation, both of which may result in a significant loss of data utility. In this paper, we aim to preserve both privacy and utility by shifting the trade-off from one between privacy and utility to one between privacy and computational cost. The key idea is for the analysts to generate and analyze multiple anonymized views of the original network traces; those views are designed to be sufficiently indistinguishable even to adversaries armed with prior knowledge, which preserves the privacy, whereas one of the views will yield true analysis results privately retrieved by the data owner, which preserves the utility. We formally analyze the privacy of our solution and experimentally evaluate it using real network traces provided by a major ISP. The results show that our approach can significantly reduce the level of information leakage (e.g., less than 1% of the information leaked by CryptoPAn) with comparable utility.
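For readers unfamiliar with the prefix-preserving pseudonymization the abstract takes as its baseline, the following toy sketch shows the property in question: two addresses sharing an n-bit prefix map to pseudonyms sharing an n-bit prefix. This is only in the spirit of CryptoPAn; the real scheme uses AES as its pseudorandom function, whereas the key and HMAC-based PRF here are illustrative assumptions.

```python
# Toy prefix-preserving IPv4 pseudonymization in the spirit of CryptoPAn
# (illustrative only; real CryptoPAn derives each flip bit from AES).
import hmac, hashlib, ipaddress

KEY = b"demo-key"  # assumed secret key, for illustration only

def prf_bit(prefix_bits: str) -> int:
    # Pseudorandom bit derived from the prefix seen so far.
    digest = hmac.new(KEY, prefix_bits.encode(), hashlib.sha256).digest()
    return digest[0] & 1

def pseudonymize(ip: str) -> str:
    bits = format(int(ipaddress.IPv4Address(ip)), "032b")
    # Each output bit XORs the input bit with a PRF of the preceding prefix,
    # so equal prefixes in the input yield equal prefixes in the output.
    out = "".join(str(int(b) ^ prf_bit(bits[:i])) for i, b in enumerate(bits))
    return str(ipaddress.IPv4Address(int(out, 2)))

a = pseudonymize("10.1.2.3")
b = pseudonymize("10.1.2.77")
print(a, b)  # distinct pseudonyms that share the same /24 prefix
```

The deterministic, prefix-preserving structure is exactly what keeps subnet-level analysis possible, and also what the paper identifies as the leverage point for adversaries with prior knowledge of some flows, motivating its multi-view defense.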

2017-12-20
Heartfield, R., Loukas, G., Gan, D..  2017.  An eye for deception: A case study in utilizing the human-as-a-security-sensor paradigm to detect zero-day semantic social engineering attacks. 2017 IEEE 15th International Conference on Software Engineering Research, Management and Applications (SERA). :371–378.

In a number of information security scenarios, human beings can be better than technical security measures at detecting threats. This is particularly the case when a threat is based on deception of the user rather than exploitation of a specific technical flaw, as is the case of spear-phishing, application spoofing, multimedia masquerading and other semantic social engineering attacks. Here, we put the concept of the human-as-a-security-sensor to the test with a first case study on a small number of participants subjected to different attacks in a controlled laboratory environment and provided with a mechanism to report these attacks if they spot them. A key challenge is to estimate the reliability of each report, which we address with a machine learning approach. For comparison, we evaluate the ability of known technical security countermeasures in detecting the same threats. This initial proof of concept study shows that the concept is viable.
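The report-reliability step the abstract mentions can be sketched as a small supervised classifier. Everything concrete below is an assumption made for illustration: the paper does not specify these features, and the training data here is invented; the sketch only shows the shape of the approach, i.e., learning to map features of a user's report to a probability that the report is trustworthy.

```python
# Hypothetical sketch of estimating the reliability of human-as-a-sensor
# reports with a simple logistic-regression classifier (features, data,
# and model choice are invented for illustration; not the paper's model).
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Plain stochastic gradient descent on the logistic loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))
            g = p - yi  # gradient of the logistic loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def reliability(w, b, x):
    """Estimated probability that a report with features x is correct."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

# Toy features per report: (reporter's past accuracy, self-reported confidence).
X = [(0.9, 0.8), (0.2, 0.9), (0.8, 0.6), (0.3, 0.4)]
y = [1, 0, 1, 0]  # label: was the reported attack real?
w, b = train_logistic(X, y)
print(round(reliability(w, b, (0.9, 0.8)), 2))  # high score for a strong reporter
```

In a deployment along these lines, reports scoring above a threshold would be escalated, which is what would let human sensors complement the technical countermeasures the study compares against.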