Biblio
The disclosure of an important yet sensitive link may cause a serious privacy crisis between two users of a social graph. Merely deleting the sensitive link, referred to as the target link and often the target of adversarial attacks, is not enough, because adversarial link prediction can still infer the existence of the missing target link. Thus, to defend against a specific adversarial link prediction, a budget-limited number of other non-target links should be optimally removed. We first propose a path-based dissimilarity function as the optimization objective and prove that greedy link deletion to preserve target-link privacy, referred to as GLD2Privacy, has monotonicity and submodularity properties and can therefore achieve a near-optimal solution. However, enumerating all length-limited paths between a pair of nodes, as the GLD2Privacy mechanism requires, is infeasible in large-scale social graphs. We therefore propose a Walk2Privacy mechanism that uses self-avoiding random walks, which run efficiently in large-scale graphs, to sample paths of given lengths between the two ends of any missing target link; based on the sampled paths, we select the alternative non-target links to delete for privacy purposes. Finally, we conduct experiments to demonstrate that the Walk2Privacy algorithm remarkably reduces time consumption while achieving a solution very close to that of GLD2Privacy.
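To make the sampling step described above concrete, the following is a minimal illustrative sketch of a self-avoiding random walk between the two endpoints of a removed target link. The adjacency representation, function names, and parameters are assumptions for illustration only, not the authors' implementation.

    import random
    from collections import defaultdict

    def sample_paths(adj, u, v, max_len, num_walks=1000, seed=0):
        """Sample simple (self-avoiding) paths of at most max_len edges between u and v.

        adj      : dict mapping each node to a set of its neighbours
        u, v     : the two endpoints of the removed target link
        num_walks: number of random-walk attempts
        """
        rng = random.Random(seed)
        paths = []
        for _ in range(num_walks):
            path, visited = [u], {u}
            while len(path) - 1 < max_len:
                # candidate next hops that keep the walk self-avoiding
                candidates = [w for w in adj[path[-1]] if w not in visited]
                if not candidates:
                    break                      # dead end: abandon this walk
                nxt = rng.choice(candidates)
                path.append(nxt)
                visited.add(nxt)
                if nxt == v:                   # reached the other endpoint
                    paths.append(tuple(path))
                    break
        return paths

    def edge_frequencies(paths):
        """Count how often each non-target edge appears on the sampled u-v paths,
        a natural signal for choosing which alternative links to delete."""
        freq = defaultdict(int)
        for p in paths:
            for a, b in zip(p, p[1:]):
                freq[frozenset((a, b))] += 1
        return freq

    # toy usage on a graph where the target link (1, 4) has already been removed
    adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
    print(edge_frequencies(sample_paths(adj, 1, 4, max_len=3)))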
Today's virtual switches not only support legacy network protocols and standard network management interfaces, but have also adopted OpenFlow as a prevailing communication protocol. This makes them a core networking component of today's virtualized infrastructures, able to handle sophisticated networking scenarios in a flexible and software-defined manner. At the same time, these virtual SDN data planes become high-value targets, because a compromised switch is hard to detect while it affects all components of a virtualized/SDN-based environment. Most of the well-known programmable virtual switches on the market are open source, which makes them cost-effective and yet highly configurable options in any network infrastructure deployment. However, this comes at a cost which needs to be addressed. Accordingly, this paper raises an alarm on how attackers may leverage white-box analysis of software switch functionalities to launch effective low-profile attacks against them. In particular, we show in practice how attackers can systematically take advantage of static and dynamic code analysis techniques to launch a low-rate saturation attack on the virtual SDN data plane in a cloud data center.
We present a scalable dynamic analysis framework that allows for the automatic evaluation of the privacy behaviors of Android apps. We use our system to analyze mobile apps’ compliance with the Children’s Online Privacy Protection Act (COPPA), one of the few stringent privacy laws in the U.S. Based on our automated analysis of 5,855 of the most popular free children’s apps, we found that a majority are potentially in violation of COPPA, mainly due to their use of third-party SDKs. While many of these SDKs offer configuration options to respect COPPA by disabling tracking and behavioral advertising, our data suggest that a majority of apps either do not make use of these options or incorrectly propagate them across mediation SDKs. Worse, we observed that 19% of children’s apps collect identifiers or other personally identifiable information (PII) via SDKs whose terms of service outright prohibit their use in child-directed apps. Finally, we show that efforts by Google to limit tracking through the use of a resettable advertising ID have had little success: of the 3,454 apps that share the resettable ID with advertisers, 66% transmit other, non-resettable, persistent identifiers as well, negating any intended privacy-preserving properties of the advertising ID.
The rapid growth and globalization of the integrated circuit (IC) industry put the threat of hardware Trojans (HTs) front and center among all security concerns in the IC supply chain. Current Trojan detection approaches always assume that HTs are composed of digital circuits. However, recent demonstrations of analog attacks, such as A2 and Rowhammer, invalidate the digital assumption in previous HT detection or testing methods. At the system level, attackers can utilize the analog properties of the underlying circuits, such as charge-sharing and capacitive coupling effects, to create information leakage paths. These new capacitor-based vulnerabilities are rarely covered by digital testing. To address these stealthy yet harmful threats, we identify a large class of such capacitor-enabled attacks and define them as charge-domain Trojans. We abstract detailed charge-domain models for these Trojans and expose the circuit-level properties that critically contribute to their information leakage paths. Aided by the abstract models, an information flow tracking (IFT) based solution is developed to detect charge-domain leakage paths and then identify the charge-domain Trojans/vulnerabilities. Our proposed method is validated on an experimental RISC microcontroller design injected with different variants of charge-domain Trojans. We demonstrate that successful detection can be accomplished with an automatic tool which realizes the IFT-based solution.
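To give a flavor of the information-flow-tracking idea, here is a deliberately simplified taint-propagation sketch over a gate-level netlist. The netlist encoding, signal names, and propagation rule are assumptions for illustration and do not reflect the authors' tool, which operates on real charge-domain models.

    # Minimal gate-level IFT sketch: a signal is tainted if any of its driving
    # signals is tainted. Assumed netlist format: {output_signal: [input_signals]},
    # listed in topological order from primary inputs to outputs.

    def propagate_taint(netlist, tainted_inputs):
        """Return the set of signals that can carry information from tainted_inputs."""
        tainted = set(tainted_inputs)
        for out, ins in netlist.items():
            if any(i in tainted for i in ins):
                tainted.add(out)
        return tainted

    # toy example: a secret storage node leaking toward an observable output
    netlist = {
        "n1": ["secret", "clk"],
        "n2": ["n1", "enable"],
        "out": ["n2"],
    }
    leaks = propagate_taint(netlist, {"secret"})
    print("out" in leaks)   # True: a potential leakage path from the secret exists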
Atomic multicast is a communication primitive that delivers messages to multiple groups of processes according to some total order, with each group receiving the projection of the total order onto messages addressed to it. To be scalable, atomic multicast needs to be genuine, meaning that only the destination processes of a message should participate in ordering it. In this paper we propose a novel genuine atomic multicast protocol that, in the absence of failures, takes as few as 3 message delays to deliver a message when no other messages are multicast concurrently to its destination groups, and 5 message delays in the presence of concurrency. This improves the latencies of both the fault-tolerant version of Skeen's classical multicast protocol (6 or 12 message delays, depending on concurrency) and its recent improvement by Coelho et al. (4 or 8 message delays). To achieve such low latencies, we depart from the typical way of guaranteeing fault-tolerance by replicating each group with Paxos. Instead, we weave Paxos and Skeen's protocol together into a single coherent protocol, exploiting opportunities for white-box optimisations. We experimentally demonstrate that the superior theoretical characteristics of our protocol are reflected in practical performance pay-offs.
Network virtualization is a fundamental technology for datacenters and upcoming wireless communications (e.g., 5G). It takes advantage of software-defined networking (SDN), which provides efficient network management by converting networking fabrics into SDN-capable devices. Moreover, white-box switches, which provide flexible and fast packet processing, are broadly deployed in commercial datacenters. A white-box switch requires a specific and restricted packet processing pipeline; however, to date, there has been no SDN-based network hypervisor that can support the pipeline of white-box switches. Therefore, in this paper, we propose WhiteVisor: a network hypervisor which can support a physical network composed of white-box switches. WhiteVisor converts a flow rule from the virtual network into a packet processing pipeline compatible with the white-box switch. We implement a prototype and show its feasibility and effectiveness in terms of pipeline conversion and overhead.
The economic progress of the Internet of Things (IoT) is phenomenal. Applications range from checking the alignment of components during a manufacturing process and monitoring transportation and pedestrian levels to improve driving and walking routes, to remotely observing terminally ill patients through medical devices such as implants and infusion pumps. To provide security, encrypting the data becomes an indispensable requirement, and symmetric encryption algorithms are becoming a crucial building block in resource-constrained environments. Typical symmetric encryption algorithms such as the Advanced Encryption Standard (AES) assume that the endpoints of communication are secure and that the encryption key is securely stored. However, devices might be physically unprotected, and attackers may have access to the memory while the data is still encrypted. It is therefore essential to store the key in such a way that an attacker finds it hard to extract. At present, techniques like white-box cryptography have been utilized in these circumstances, but it has been reported that applying white-box cryptography in IoT devices results in other security issues, such as the adversary gaining access to intermediate values, and practical implementations being vulnerable to code-lifting and differential attacks. In this paper, a solution is presented to overcome these problems by demonstrating how white-box cryptography can enhance security when combined with the cipher block chaining (CBC) mode.
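For readers unfamiliar with the CBC mode that the proposal builds on, the sketch below shows plain AES-CBC using the Python cryptography package. It is not a white-box implementation; the in-memory key handling here is exactly what white-box techniques aim to avoid, and all names and sizes are illustrative assumptions.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.primitives import padding

    def cbc_encrypt(key, plaintext):
        """AES-CBC: each plaintext block is XORed with the previous ciphertext
        block before encryption, so identical blocks encrypt differently."""
        iv = os.urandom(16)                                   # fresh random IV per message
        padder = padding.PKCS7(128).padder()
        padded = padder.update(plaintext) + padder.finalize()
        enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        return iv + enc.update(padded) + enc.finalize()

    def cbc_decrypt(key, blob):
        iv, ct = blob[:16], blob[16:]
        dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
        padded = dec.update(ct) + dec.finalize()
        unpadder = padding.PKCS7(128).unpadder()
        return unpadder.update(padded) + unpadder.finalize()

    key = os.urandom(32)                                  # a white-box design would instead
    blob = cbc_encrypt(key, b"sensor reading 42")         # hide this key inside lookup tables
    print(cbc_decrypt(key, blob))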
Email privacy is of crucial importance. Existing email encryption approaches are comprehensive but seldom used due to their complexity and inconvenience. We take a new approach to simplify email encryption and improve its usability by implementing receiver-controlled encryption: newly received messages are transparently downloaded and encrypted to a locally-generated key; the original message is then replaced. To avoid the problem of moving a single private key between devices, we implement per-device key pairs: only public keys need be synchronized via a simple verification step. Compromising an email account or server only provides access to encrypted emails. We implemented this scheme on several platforms, showing it works with PGP and S/MIME, is compatible with widely used mail clients and email services including Gmail, has acceptable overhead, and that users consider it intuitive and easy to use.
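A rough sketch of the receiver-controlled, per-device-key idea under simplifying assumptions: a downloaded message is encrypted with a fresh symmetric key, and that key is wrapped for every enrolled device's public key. Function names, key sizes, and the hybrid scheme are illustrative assumptions, not the authors' implementation.

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    # Each device generates its own key pair locally; only public keys are synchronized.
    device_keys = [rsa.generate_private_key(public_exponent=65537, key_size=2048)
                   for _ in range(2)]
    public_keys = [k.public_key() for k in device_keys]

    OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    def lock_message(raw_email, public_keys):
        """Encrypt a downloaded message so that any enrolled device can read it."""
        sym_key = Fernet.generate_key()
        body = Fernet(sym_key).encrypt(raw_email)
        wrapped = [pk.encrypt(sym_key, OAEP) for pk in public_keys]
        return body, wrapped          # this pair replaces the plaintext copy

    def unlock_message(body, wrapped_key, private_key):
        sym_key = private_key.decrypt(wrapped_key, OAEP)
        return Fernet(sym_key).decrypt(body)

    body, wrapped = lock_message(b"Subject: hello\n\nmeet at noon", public_keys)
    print(unlock_message(body, wrapped[0], device_keys[0]))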
In AI Matters Volume 4, Issues 2 and 4, we raised the possibility of an AI Cosmology, in part in response to the "AI Hype Cycle" we are currently experiencing. We posited that our current machine learning and big data era represents but one peak among several in AI research, each with its own accompanying "Hype Cycle". We associated each peak with an epoch in a possible AI Cosmology and briefly explored the logic machines, cybernetics, and expert system epochs. One objective of identifying these epochs was to help establish that we have been here before: in territory where some application of AI research finds substantial commercial success, which is then closely followed by AI fever and hype. The public's expectations are heightened, only to end in disillusionment when the applications fall short. Whereas it is sometimes a challenge even for AI researchers, educators, and practitioners to know where the reality ends and the hype begins, the layperson is often in an impossible position, at the mercy of pop culture and marketing and advertising campaigns. We suggested that an AI Cosmology might help us identify a single standard model for AI that could be the foundation for a common shared understanding of what AI is and what it is not: a tool to help the layperson understand where AI has been, where it is going, and where it cannot go, and a basic road map to help the general public navigate the pitfalls of AI hype.
Network attacks continue to pose threats to missions in cyberspace. To prevent critical missions from being impacted, or to minimize the possibility of mission impact, active cyber defense is very important. A mission impact graph is a graphical model that enables mission impact assessment and shows how missions can possibly be impacted by cyber attacks. Although the mission impact graph provides valuable information, it is still very difficult for human analysts to comprehend due to its size and complexity. Especially when given limited resources, human analysts cannot easily decide which security measures to take first with respect to mission assurance. Therefore, this paper proposes to apply a ranking algorithm to the mission impact graph so that the large amount of information can be prioritized. The actionable conditions that can be managed by security admins are ranked with numeric values. The ranking enables efficient utilization of limited resources and provides guidance for taking security countermeasures.
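As a hedged illustration of ranking conditions on a dependency-style graph (the paper's exact ranking algorithm and graph schema are not reproduced here), a PageRank-based sketch using networkx: importance is propagated backwards from mission nodes toward the actionable conditions that feed them. The node names and edges are hypothetical.

    import networkx as nx

    # Toy mission impact graph (assumed schema): an edge A -> B means that
    # exploiting condition A can contribute to impacting B.
    G = nx.DiGraph()
    G.add_edges_from([
        ("weak_ssh_config",    "host_compromised"),
        ("unpatched_cve",      "host_compromised"),
        ("host_compromised",   "db_service_down"),
        ("db_service_down",    "mission_logistics_degraded"),
        ("phishing_exposure",  "credentials_stolen"),
        ("credentials_stolen", "mission_logistics_degraded"),
    ])

    # Run PageRank on the reversed graph so that scores flow back from
    # mission-impact nodes toward the actionable conditions.
    scores = nx.pagerank(G.reverse(copy=True), alpha=0.85)

    actionable = {"weak_ssh_config", "unpatched_cve", "phishing_exposure"}
    for node, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        if node in actionable:
            print(f"{node:20s} {score:.3f}")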
The increasing adoption of 3D printing in many safety- and mission-critical applications exposes 3D printers to a variety of cyber attacks that may result in catastrophic consequences if the printing process is compromised. For example, the mechanical properties (e.g., physical strength, thermal resistance, dimensional stability) of 3D-printed objects could be significantly degraded if a simple printing setting is maliciously changed. To address this challenge, this study proposes a model-free, real-time, online process monitoring approach that is capable of detecting and defending against cyber-physical attacks on the firmware of 3D printers. Specifically, we explore the potential attacks on, and consequences for, four key printing attributes (infill path, printing speed, layer thickness, and fan speed) and then formulate the attack models. Based on the intrinsic relation between the printing attributes and the physical observations, our defense model is established by systematically analyzing multi-faceted, real-time measurements collected from an accelerometer, a magnetometer and a camera. Kalman and Canny filters are used to map and estimate the first three of these attributes, which directly affect printing quality, while Mel-frequency cepstral coefficients (MFCCs) are used to extract features for fan speed estimation. Experimental results show that, for a complex 3D-printed design, our method achieves a Hausdorff distance of 4% of the model dimension for infill path estimation, 6.07% Mean Absolute Percentage Error (MAPE) for speed estimation, 9.57% MAPE for layer thickness estimation, and 96.8% accuracy for fan speed identification. Our study demonstrates that this new approach can effectively defend against cyber-physical attacks on 3D printers and the 3D printing process.
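For the fan-speed step, a minimal MFCC feature-extraction sketch is shown below. The library calls (librosa, scikit-learn) are standard, but the sensor source, file names, labels, and classifier choice are assumptions for illustration, not the paper's setup.

    import numpy as np
    import librosa
    from sklearn.neighbors import KNeighborsClassifier

    def mfcc_features(wav_path, n_mfcc=13):
        """Summarize a fan-noise/vibration recording as the mean of its MFCC frames."""
        y, sr = librosa.load(wav_path, sr=None)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape: (n_mfcc, frames)
        return mfcc.mean(axis=1)

    # hypothetical training data: recordings labelled with the fan speed setting (%)
    train_files  = ["fan_25.wav", "fan_50.wav", "fan_75.wav", "fan_100.wav"]
    train_labels = [25, 50, 75, 100]

    X = np.stack([mfcc_features(f) for f in train_files])
    clf = KNeighborsClassifier(n_neighbors=1).fit(X, train_labels)

    # classify a new recording captured during printing
    print(clf.predict(mfcc_features("unknown_fan.wav").reshape(1, -1)))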
As a frequent participant in eSociety, Willeke is often preoccupied with paperwork because there is no easy-to-use, affordable way to act as a qualified person in the digital world. Confidential interactions take place over insecure channels such as e-mail and post. This situation poses risks and costs for service providers, civilians and governments, while goals regarding confidentiality and privacy are not always met. The objective of this paper is to demonstrate an alternative architecture in which identifying persons, exchanging information, authorizing external parties and signing documents become more user-friendly and secure. As a starting point, each person has a personal data space, provided by a qualified trust service provider that also issues an electronic ID with a high level of assurance. Three main building blocks are required: (1) secure exchange between the personal data spaces of the persons involved, (2) coordination functionalities provided by a token-based infrastructure, and (3) governance over this infrastructure. Following the design science research approach, we developed prototypes of the building blocks that we will pilot in practice. Policy makers and practitioners who want to enable Willeke to get rid of her paperwork can find guidance throughout this paper and are welcome to join the pilots in the Netherlands.
Security experts often recommend using password-management tools that both store passwords and generate random passwords. However, research indicates that only a small fraction of users use password managers with password generators. Past studies have explored factors in the adoption of password managers using surveys and online store reviews. Here we describe a semi-structured interview study with 30 participants that allows us to provide a more comprehensive picture of the mindsets underlying adoption and effective use of password managers and password-generation features. Our participants include users who use no password-specific tools at all, those who use password managers built into browsers or operating systems, and those who use separately installed password managers. Furthermore, past field data has indicated that users of built-in, browser-based password managers more often use weak and reused passwords than users of separate password managers that have password generation available by default. Our interviews suggest that users of built-in password managers may be driven more by convenience, while users of separately installed tools appear more driven by security. We advocate tailored designs for these two mentalities and provide actionable suggestions to induce effective password manager usage.