Bibliography
Security and privacy in computer systems have always been an important aspect of computer engineering and will only grow in importance as computer systems are entrusted with an ever-increasing amount of sensitive information. Classical exploitation techniques such as memory corruption or shell command injection have been well researched, and thus there exist known design patterns to avoid them and penetration-testing tools for assessing the robustness of programs against these types of attacks. When program security requirements are violated through indirect means, referred to as side-channels, however, no testing frameworks of quality comparable to the popular memory-safety or command-injection tools are available. Recent computer security research has shown that private information may be indirectly leaked through side-channels such as patterns of encrypted network traffic, CPU and motherboard noise, and monitor ambient light. This paper presents the design and evaluation of a side-channel detection and exploitation framework that follows a machine-learning-based, plugin-oriented architecture, thus allowing side-channel research to be conducted on a wide variety of side-channel sources.
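To make the plugin-oriented design concrete, here is a minimal sketch of what such an architecture could look like. The class names, the `NetworkTimingPlugin` example, and the choice of a random-forest classifier are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch of a plugin-oriented side-channel framework.
# All class and method names are invented for illustration.
from abc import ABC, abstractmethod
from sklearn.ensemble import RandomForestClassifier

class SideChannelPlugin(ABC):
    """A plugin wraps one side-channel source (e.g., traffic timing)."""

    @abstractmethod
    def collect(self) -> list[list[float]]:
        """Return raw trace samples as feature vectors."""

    @abstractmethod
    def labels(self) -> list[int]:
        """Return ground-truth labels for the collected traces."""

class NetworkTimingPlugin(SideChannelPlugin):
    def collect(self):
        # A real plugin would capture packet inter-arrival times.
        return [[0.12, 0.30, 0.05], [0.90, 0.88, 0.91]]

    def labels(self):
        return [0, 1]  # e.g., which page the victim visited

def evaluate_plugin(plugin: SideChannelPlugin) -> float:
    """Fit a classifier on the plugin's traces; a high score hints at a leak."""
    X, y = plugin.collect(), plugin.labels()
    model = RandomForestClassifier(n_estimators=50).fit(X, y)
    return model.score(X, y)  # a real evaluation would use held-out traces

print(evaluate_plugin(NetworkTimingPlugin()))
```

The framework core stays source-agnostic: every new side-channel (acoustic, optical, network) only has to implement the same small plugin interface.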
Artificial intelligence systems have brought significant benefits to users and society, but while the data that feed them keep growing, they also open the door to privacy and security leaks. Severe threats to the right to privacy have obliged governments to enact specific regulations to ensure privacy preservation in any kind of transaction involving sensitive information. In the case of digital and/or physical documents containing sensitive information, the right to privacy can be preserved by data obfuscation procedures. The task of recognizing sensitive information for obfuscation is typically entrusted to the experience of human experts, who are overwhelmed by the ever-increasing amount of documents to process. Artificial intelligence could substantially reduce the effort of the human officers and speed up these processes. However, until enough knowledge is available in a machine-readable format, effective automatic systems cannot be developed. In this work we propose a methodology for transferring and leveraging general knowledge across domain-specific tasks. We built, from scratch, domain-specific knowledge data sets for training artificial intelligence models that support human experts in privacy-preserving tasks. We applied a mixture of natural language processing techniques to unlabeled domain-specific document corpora to automatically obtain labeled documents in which sensitive information is recognized and tagged. We performed preliminary tests on just over 10,000 documents from the healthcare and justice domains, with human experts supporting the validation. The results, measured in terms of precision, recall, and F1-score across the two domains, were promising and encourage further investigation.
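The evaluation metrics mentioned above can be stated precisely. The following sketch computes precision, recall, and F1-score for a set of automatically tagged sensitive entities against expert annotations; the entity sets are invented for illustration:

```python
# Precision/recall/F1 for automatic tagging vs. expert annotations.
def precision_recall_f1(predicted: set, gold: set):
    tp = len(predicted & gold)                       # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {"John Doe", "1985-04-12", "Rome"}       # expert-tagged entities
predicted = {"John Doe", "Rome", "Dr. Smith"}   # model-tagged entities
print(precision_recall_f1(predicted, gold))     # ~ (0.67, 0.67, 0.67)
```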
Web technology has evolved to offer 360-degree immersive browsing experiences. This new technology, called WebVR, enables virtual reality by rendering a three-dimensional world on an HTML canvas. Unfortunately, there exists no browser-supported way of sharing this canvas between different parties. As a result, third-party library providers with ill intent (e.g., stealing sensitive information from end-users) can easily distort the entire WebVR site. To mitigate the new threats posed in WebVR, we propose CanvasMirror, which allows publishers to specify the behaviors of third-party libraries and enforces this specification. We show that CanvasMirror effectively separates the third-party context from the host origin by leveraging the privilege separation technique and safely integrates VR content on a shared canvas.
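CanvasMirror itself is a browser-level mechanism; the following Python sketch only illustrates the reference-monitor idea behind its privilege separation, where third-party draw calls are validated against a publisher-supplied specification before reaching the shared canvas. The policy format and all names are invented:

```python
# Publisher-supplied specification: which operations a third-party
# library may perform, and in which region of the shared canvas.
POLICY = {
    "ad_lib": {"allowed_ops": {"draw_image"}, "region": (0, 0, 100, 100)},
}

canvas_ops = []  # the trusted composition log

def submit(library: str, op: str, x: int, y: int, w: int, h: int):
    spec = POLICY.get(library)
    if spec is None or op not in spec["allowed_ops"]:
        raise PermissionError(f"{library} may not perform {op}")
    rx, ry, rw, rh = spec["region"]
    if not (rx <= x and ry <= y and x + w <= rx + rw and y + h <= ry + rh):
        raise PermissionError(f"{library} drew outside its region")
    canvas_ops.append((library, op, x, y, w, h))  # accepted draw call

submit("ad_lib", "draw_image", 10, 10, 50, 50)       # accepted
try:
    submit("ad_lib", "draw_image", 10, 10, 200, 50)  # escapes its region
except PermissionError as e:
    print(e)
```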
Air-gapped networks achieve security through physical isolation, keeping computers and networks disconnected from the Internet. However, magnetic covert channels based on CPU utilization have been proposed to help secret data escape the Faraday cage and the air gap. Despite the success of such covert channels, they suffer from a high risk of detection on the transmitter computer and from the challenge of installing malware onto such a computer. In this paper, we propose MagView, a distributed magnetic covert channel in which sensitive information is embedded in other data, such as video, and can be transmitted over the air-gapped internal network. When any computer uses the data, e.g., plays the video, the sensitive information leaks through the magnetic covert channel. The "separation" of information embedding and leaking, combined with the fact that the covert channel can be created on any computer, overcomes these limitations. We demonstrate that CPU utilization for video decoding can be effectively controlled by changing the video frame type and reducing the quantization parameter without degrading video quality. We prototype MagView and achieve up to 8.9 bps throughput with a BER as low as 0.0057. Experiments under different environments demonstrate the robustness of MagView. Limitations and possible countermeasures are also discussed.
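MagView embeds bits in the video itself so that merely decoding it modulates CPU load. The simplified sketch below instead modulates CPU load directly, to show the underlying on-off keying principle of CPU-based magnetic covert channels; the bit duration and symbol mapping are illustrative assumptions:

```python
# On-off keying over CPU load: high utilization alters the magnetic
# field emitted by the CPU/motherboard, which a nearby magnetometer
# can sample. Parameters are illustrative, not from the paper.
import time

BIT_DURATION = 0.5  # seconds per bit; real systems tune this for BER

def transmit(bits: str):
    for bit in bits:
        end = time.monotonic() + BIT_DURATION
        if bit == "1":
            while time.monotonic() < end:
                pass                      # busy loop -> high CPU load
        else:
            time.sleep(BIT_DURATION)      # idle -> low CPU load

transmit("1011")  # a receiver decodes the field samples back into bits
```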
Mobile wearable health devices have seen widespread adoption and become very popular because of their valuable health-monitoring capabilities. These devices provide general health tips, monitor human health parameters, and generally help users take better care of themselves. However, these devices raise security and privacy risks for consumers because they handle sensitive information such as users' sleeping patterns, dietary constraints, pulse rate, and so on. In this paper, we analyze the significant security and privacy features of three very popular health tracker devices: Fitbit, Jawbone, and Google Glass. We carefully analyze each device's strengths, how it communicates, and its Bluetooth pairing process with mobile devices. We explore possible malicious attacks carried out by hackers over Bluetooth networking. The outcomes of this analysis show how these devices can allow third parties to obtain sensitive information, such as the device's exact location, causing potential privacy breaches for users. We analyze the reasons why unauthorized parties gain access to user data on wearable devices and the challenges of securing such data, and we compare the security vulnerabilities and attack types of the three wearable devices (Fitbit, Jawbone, and Google Glass).
IoT devices introduce unprecedented threats into home and professional networks. Because they fail to adhere to security best practices, they are broadly exploited by malicious actors to build botnets or steal sensitive information. Their adoption challenges established security standards, as classic security measures are often inappropriate for securing them. This is even more problematic in sensitive environments, where the presence of insecure IoT devices can be exploited to bypass strict security policies. In this paper, we demonstrate an attack against a highly secured network using a Bluetooth smart bulb. This attack allows a malicious actor to take advantage of the smart bulb to exfiltrate data from an air-gapped network.
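The paper demonstrates the attack on a specific bulb; the sketch below only illustrates the general exfiltration idea of mapping secret bits onto small brightness variations that a remote camera or light sensor can record. The constants are invented and a stub stands in for the vendor-specific BLE commands:

```python
# Covert optical exfiltration via a smart bulb (illustrative sketch).
BASE = 80  # nominal brightness (%), chosen so the flicker stays subtle

def send(brightness: int):
    print(f"set bulb brightness to {brightness}%")  # stub for a BLE write

def exfiltrate(data: bytes):
    for byte in data:
        for i in range(8):
            bit = (byte >> (7 - i)) & 1
            send(BASE + bit)  # 80% encodes 0, 81% encodes 1

exfiltrate(b"key")
```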
In order to protect individuals' privacy, data have to be "well-sanitized" before being shared, i.e., any personal information has to be removed. However, it is not always clear when data should be deemed well-sanitized. In this paper, we argue that the evaluation of sanitized data should be based on whether the data allow the inference of sensitive information specific to an individual, instead of being centered around the concept of re-identification. We propose a framework to evaluate the effectiveness of different sanitization techniques on a given dataset by measuring how much an individual's record in the sanitized dataset influences the inference of his/her own sensitive attribute. Our intent is not to accurately predict any sensitive attribute but rather to measure the impact of a single record on the inference of sensitive information. We demonstrate our approach by sanitizing two real datasets under different privacy models and evaluating and comparing each sanitized dataset within our framework.
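A minimal sketch of this influence measure, under the assumption of a logistic-regression attacker and synthetic data (neither is prescribed by the paper): train the inference model with and without the target record and compare the predicted probability of that individual's own sensitive attribute.

```python
# How much does record i's presence in the sanitized data help an
# attacker infer record i's sensitive attribute? (Illustrative data.)
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                          # sanitized attributes
y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)   # sensitive attribute

def influence(i: int) -> float:
    """Shift in predicted probability of record i's sensitive attribute
    when record i is removed from the training data."""
    with_i = LogisticRegression().fit(X, y)
    mask = np.arange(len(X)) != i
    without_i = LogisticRegression().fit(X[mask], y[mask])
    p_with = with_i.predict_proba(X[i : i + 1])[0, y[i]]
    p_without = without_i.predict_proba(X[i : i + 1])[0, y[i]]
    return p_with - p_without  # a large gap means the record leaks about itself

print(influence(0))
```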
Event logs originating from information systems enable comprehensive analysis of business processes, e.g., through process model discovery. However, logs potentially contain sensitive information about the individual employees involved in process execution that is only partially hidden by obfuscation of the event data. In this paper, we therefore address the risk of privacy-disclosure attacks on event logs with pseudonymized employee information. To this end, we introduce PRETSA, a novel algorithm for event log sanitization that provides privacy guarantees in terms of k-anonymity and t-closeness. It thereby avoids disclosure of employee identities, their membership in the event log, and their characterization based on sensitive attributes, such as performance information. Through step-wise transformations of a prefix-tree representation of an event log, we maintain its high utility for discovery of a performance-annotated process model. Experiments with real-world data demonstrate that sanitization with PRETSA yields event logs of higher utility than methods based on frequency-based filtering, while providing the same privacy guarantees.
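A minimal sketch of the prefix-tree intuition, assuming a plain k-anonymity criterion (PRETSA itself additionally enforces t-closeness and applies utility-preserving transformations rather than simple pruning): traces are merged into a prefix tree, and any prefix followed by fewer than k cases is withheld.

```python
# Keep only trace prefixes shared by at least k cases (illustrative log).
from collections import defaultdict

def k_anonymous_prefixes(traces: list[tuple], k: int) -> set[tuple]:
    support = defaultdict(int)
    for trace in traces:
        for end in range(1, len(trace) + 1):
            support[trace[:end]] += 1   # count cases per prefix
    return {prefix for prefix, n in support.items() if n >= k}

log = [("register", "triage", "treat"),
       ("register", "triage", "discharge"),
       ("register", "lab")]
print(sorted(k_anonymous_prefixes(log, k=2)))
# only ('register',) and ('register', 'triage') survive with k=2
```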
Because cloud storage services are broadly used in enterprises for online sharing and collaboration, sensitive information in images or documents may easily leak outside the trusted enterprise premises through such services. Existing solutions to this problem have not fully explored the tradeoffs among application performance, service scalability, and user data privacy. Therefore, we propose CloudDLP, a generic approach for enterprises to automatically sanitize sensitive data in images and documents in browser-based cloud storage. To the best of our knowledge, CloudDLP is the first system that automatically and transparently detects and sanitizes both sensitive images and textual documents without compromising user experience or application functionality in browser-based cloud storage. To prevent sensitive information from escaping the on-premises environment, CloudDLP utilizes deep learning methods to detect sensitive information in both images and textual documents. We have evaluated the proposed method on a number of typical cloud applications. Our experimental results show that it achieves transparent and automatic data sanitization on cloud storage services with relatively low overhead, while preserving most application functionality.
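CloudDLP's detectors are deep learning models; the sketch below substitutes a simple regex detector to illustrate the detect-and-sanitize flow applied to a document before it reaches the cloud. The patterns and masking policy are illustrative only:

```python
# Detect-and-redact pass over a textual document (illustrative detector).
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def sanitize(text: str) -> str:
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[{name.upper()} REDACTED]", text)
    return text

doc = "Contact john@corp.com, SSN 123-45-6789."
print(sanitize(doc))  # the redacted copy is what gets synced to the cloud
```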