Biblio
We address the known problem of detecting prior compression in JPEG images, focusing on the challenging case of high and very high quality factors (≥ 90) as well as repeated compression with identical or nearly identical quality factors. We first revisit the approaches based on Benford–Fourier analysis in the DCT domain and block convergence analysis in the spatial domain, both originally conceived for specific scenarios. Leveraging decision tree theory, we design a combined approach that complements their discriminatory capabilities, obtaining a set of novel detectors targeted at high-quality grayscale JPEG images.
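To make the combination step concrete, here is a rough sketch of one way such detectors can be fused (our construction, not the paper's exact pipeline): a Benford-style first-digit statistic is computed over the image's 8×8 block-DCT coefficients and paired with a block-convergence score, assumed precomputed here, as input to a decision tree.

```python
# Illustrative sketch: a Benford first-digit statistic over 8x8 block-DCT
# coefficients, combined with a (hypothetical, precomputed) block-convergence
# score via a decision tree. Not the paper's exact features or thresholds.
import numpy as np
from scipy.fft import dctn
from sklearn.tree import DecisionTreeClassifier

BENFORD = np.log10(1 + 1 / np.arange(1, 10))   # P(first digit = d), d = 1..9

def benford_stat(gray):
    """Chi-square distance between the first-digit frequencies of the
    block-DCT AC coefficients and Benford's law."""
    h, w = gray.shape[0] // 8 * 8, gray.shape[1] // 8 * 8
    coeffs = []
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            c = dctn(gray[i:i + 8, j:j + 8].astype(float), norm="ortho")
            coeffs.extend(np.abs(c).ravel()[1:])        # skip the DC term
    digits = np.array([int(str(v)[0]) for v in np.floor(coeffs) if v >= 1],
                      dtype=int)
    freq = np.bincount(digits, minlength=10)[1:10] / max(len(digits), 1)
    return float(((freq - BENFORD) ** 2 / BENFORD).sum())

# X: one row per image = [benford_stat(img), convergence_score]; y: 1 = recompressed
# detector = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
```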
With the advent of globalization in the semiconductor industry, it is necessary to prevent unauthorized usage of third-party IPs (3PIPs), cloning and unwanted modification of 3PIPs, and unauthorized production of ICs. Due to the increasing complexity of ICs, system-on-chip (SoC) designers use various 3PIPs in their design to reduce time-to-market and development costs, which creates a trust issue between the SoC designer and the IP owners. In addition, as ICs are fabricated around the globe, SoC designers give fabrication contracts to offshore foundries to manufacture ICs and have little control over the fabrication process, including the total number of chips fabricated. Similarly, the 3PIP owners lack control over the number of fabricated chips and/or the usage of their IPs in an SoC. Existing research only partially addresses the problems of IP piracy and IC overproduction, and to the best of our knowledge, there is no work that considers IP overuse. In this article, we present a comprehensive solution for preventing IP piracy and IC overproduction by assuring forward trust between all entities involved in the SoC design and fabrication process. We propose a novel design flow to prevent IC overproduction and IP overuse. We use an existing logic encryption technique to obfuscate the netlist of an SoC or a 3PIP and propose a modification to enable manufacturing tests before the activation of chips, which is absolutely necessary to prevent overproduction. We have used asymmetric and symmetric key encryption, in a fashion similar to Pretty Good Privacy (PGP), to transfer keys from the SoC designer or 3PIP owners to the chips. In addition, we propose to attach an IP digest (a cryptographic hash of the entire IP) to the header of an IP to prevent modification of the IP by the SoC designers. We have shown that our approach is resistant to various attacks at the cost of minimal area overhead.
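A minimal sketch of the two cryptographic ingredients, assuming Python's `cryptography` package; the variable names and payloads are illustrative, and real chip activation involves on-chip key storage and test-mode logic not shown here.

```python
# Sketch of (1) PGP-style hybrid transfer of a chip-unlock key and
# (2) an IP digest stored in the IP header to detect tampering.
import hashlib
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# (1) The IP owner encrypts the unlock key with a fresh symmetric key (fast),
# then wraps that symmetric key with the chip's RSA public key (PGP pattern).
chip_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
chip_pub = chip_priv.public_key()

unlock_key = b"netlist-unlock-key-bits"            # illustrative payload
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(unlock_key)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = chip_pub.encrypt(session_key, oaep)

# Chip side: unwrap the session key, then recover the unlock key.
recovered = Fernet(chip_priv.decrypt(wrapped_key, oaep)).decrypt(ciphertext)
assert recovered == unlock_key

# (2) IP digest: a hash of the full IP body; any edit by the SoC designer
# changes the digest and is detected on verification.
ip_body = b"...gate-level netlist..."
header = {"ip_digest": hashlib.sha256(ip_body).hexdigest()}
```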
A new paradigm in wireless network access is presented and analyzed. In this concept, certain classes of wireless terminals can be turned temporarily into an access point (AP) anytime while connected to the Internet. This creates a dynamic network architecture (DNA), since the number and location of these APs vary in time. In this paper, we present a framework to optimize different aspects of this architecture. First, the dynamic AP association problem is addressed with the aim of optimizing the network by choosing the most convenient APs to provide the quality-of-service (QoS) levels demanded by the users at minimum cost. Then, an economic model is developed to compensate the users for serving as APs and, thus, augmenting the network resources. The users' security investment is also taken into account in the AP selection. A preclustering process of the DNA is proposed to keep the optimization process feasible in a highly dense network. To dynamically reconfigure the optimum topology and adjust it to traffic variations, a new specific encoding of a genetic algorithm (GA) is presented. Numerical results show that the GA can provide the optimum topology up to two orders of magnitude faster than exhaustive search for network clusters, and the improvement significantly increases with the cluster size.
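A minimal GA sketch for the AP-association step: each chromosome assigns users to candidate APs, and fitness trades off AP cost against a QoS (capacity) penalty. The chromosome layout, weights, and operators below are illustrative assumptions, not the paper's specific encoding.

```python
# Toy GA for user-to-AP assignment; minimizes cost + QoS penalty.
import random

N_USERS = 12
APS = [{"cost": c, "capacity": 4} for c in (1.0, 1.5, 2.0)]  # hypothetical APs

def fitness(chrom):
    load = [chrom.count(a) for a in range(len(APS))]
    cost = sum(APS[a]["cost"] for a in set(chrom))            # pay per active AP
    qos_penalty = sum(max(0, l - APS[a]["capacity"])          # over-subscription
                      for a, l in enumerate(load))
    return cost + 10.0 * qos_penalty                          # weight is illustrative

def evolve(pop_size=30, gens=100, mut=0.1):
    pop = [[random.randrange(len(APS)) for _ in range(N_USERS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                                 # elitist selection
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, N_USERS)                # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < mut:                         # point mutation
                child[random.randrange(N_USERS)] = random.randrange(len(APS))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best_assignment = evolve()
```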
Cloud computing has gained wide acceptance across the globe. Despite this wide acceptance and adoption, certain apprehensions and diffidence related to the safety and security of data still exist. The service provider needs to convince and demonstrate to the client the confidentiality of data on the cloud. This can be broadly translated to issues related to the process of identifying, developing, maintaining and optimizing trust with clients regarding the services provided. Continuous demonstration, maintenance and optimization of trust in the agreed-upon services affects the relationship with a client. The paper proposes a framework for integrating trust at the IaaS level in the cloud. It proposes a novel method of generating a trust index factor, considering the performance and the agility of the feedback received, using fuzzy logic.
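The trust-index computation could look roughly like the Mamdani-style sketch below: two inputs in [0, 1] (measured performance and feedback agility) are fuzzified, combined by min-rules, and defuzzified into a single index. The membership functions, rule base, and output levels are our illustrative assumptions, not the paper's.

```python
# Toy fuzzy trust index from performance and feedback agility.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trust_index(performance, agility):
    low  = lambda v: tri(v, -0.5, 0.0, 0.5)
    high = lambda v: tri(v, 0.5, 1.0, 1.5)
    # Rule strengths (Mamdani min); each rule points at an output trust level.
    rules = [
        (min(high(performance), high(agility)), 0.9),   # both good -> high trust
        (min(high(performance), low(agility)),  0.6),
        (min(low(performance),  high(agility)), 0.4),
        (min(low(performance),  low(agility)),  0.1),   # both poor -> low trust
    ]
    num = sum(w * level for w, level in rules)           # weighted-centroid
    den = sum(w for w, _ in rules)                       # defuzzification
    return num / den if den else 0.0

print(trust_index(0.8, 0.6))   # e.g. a fairly trusted IaaS provider -> 0.9
```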
We propose a modular framework which deploys state-of-the-art techniques in dynamic pattern matching as well as machine learning algorithms for Big Data predictive and behavioural analytics to detect threats and attacks in Managed File Transfer and collaboration platforms. We leverage the kill chain model by looking for indicators of compromise for long-term attacks such as Advanced Persistent Threats, zero-day attacks, or DDoS attacks. The proposed engine can act as a complement to existing security services such as SIEMs, IDSs, IPSs, and firewalls.
Lattice-based cryptography offers some of the most attractive primitives believed to be resistant to quantum computers. Following increasing interest from both companies and government agencies in building quantum computers, a number of works have proposed instantiations of practical post-quantum key exchange protocols based on hard problems in ideal lattices, mainly the Ring Learning With Errors (R-LWE) problem. While ideal lattices facilitate major efficiency and storage benefits over their non-ideal counterparts, the additional ring structure that enables these advantages also raises concerns about the assumed difficulty of the underlying problems. Thus, a question of significant interest to cryptographers, and especially to those currently placing bets on primitives that will withstand quantum adversaries, is how much of an advantage the additional ring structure actually gives in practice. Despite conventional wisdom that generic lattices might be too slow and unwieldy, we demonstrate that LWE-based key exchange is quite practical: our constant-time implementation requires around 1.3 ms of computation time for each party; compared to the recent NewHope R-LWE scheme, communication sizes increase by a factor of 4.7, but remain under 12 KiB in each direction. Our protocol is competitive when used for serving web pages over TLS; when partnered with ECDSA signatures, latencies increase by a factor of less than 1.6, and (even under heavy load) server throughput only decreases by factors of 1.5 and 1.2 when serving typical 1 KiB and 100 KiB pages, respectively. To achieve these practical results, our protocol takes advantage of several innovations. These include techniques to optimize communication bandwidth, dynamic generation of public parameters (which also offers additional security against backdoors), carefully chosen error distributions, and tight security parameters.
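For intuition, here is a toy, insecure sketch of unauthenticated LWE key agreement in the spirit described above: both parties compute approximations of S′AS and round each coefficient to one bit. Parameters are tiny and reconciliation is omitted, so a small fraction of boundary coefficients may disagree; real protocols, including the one above, correct for this.

```python
# Toy LWE key exchange (NOT secure parameters; no reconciliation).
import numpy as np

rng = np.random.default_rng(1)
n, q = 64, 2**15
A = rng.integers(0, q, size=(n, n))               # public, freshly generated

def small(shape):                                  # narrow error distribution
    return rng.integers(-1, 2, size=shape)         # entries in {-1, 0, 1}

# Alice publishes B = A.S + E ; Bob publishes B' = S'.A + E'.
S, E = small((n, 8)), small((n, 8))
B = (A @ S + E) % q
Sp, Ep = small((8, n)), small((8, n))
Bp = (Sp @ A + Ep) % q

# Both sides approximate S'.A.S, then round each entry to one bit.
V_bob   = (Sp @ B) % q                             # = S'AS + S'E
V_alice = (Bp @ S) % q                             # = S'AS + E'S
to_bits = lambda M: ((M + q // 4) % q) < (q // 2)
print((to_bits(V_bob) == to_bits(V_alice)).mean()) # ~1.0: keys mostly agree
```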
This paper presents the holistic approach to cyber resilience as a means of preparing for the "unknown unknowns". Principles of augmented cyber risks management and resilience management model at national level are presented, with elaboration on multi-stakeholder engagement and partnership for the implementation of national cyber resilience collaborative framework. The complementarity of governance, law, and business/industry initiatives is outlined, with examples of the collaborative resilience model for the Bulgarian national strategy and its multi-national engagements.
The overarching vision of social machines is to facilitate social processes by having computers provide administrative support. We conceive of a social machine as a sociotechnical system (STS): a software-supported system in which autonomous principals such as humans and organizations interact to exchange information and services. Existing approaches for social machines emphasize the technical aspects and inadequately support the meanings of social processes, leaving them informally realized in human interactions. We posit that a fundamental rethinking is needed to incorporate accountability, essential for addressing the openness of the Web and the autonomy of its principals. We introduce Interaction-Oriented Software Engineering (IOSE) as a paradigm expressly suited to capturing the social basis of STSs. To promote openness and autonomy, IOSE focuses not on implementation but on social protocols, which specify how social relationships, characterizing the accountability of the parties concerned, progress as those parties interact. To provide computational support, IOSE adopts the accountability representation to capture the meaning of a social machine's states and transitions.
We demonstrate IOSE via examples drawn from healthcare. We reinterpret the classical software engineering (SE) principles for the STS setting and show how IOSE is better suited than traditional software engineering for supporting social processes. The contribution of this paper is a new paradigm for STSs, evaluated via conceptual analysis.
Wireless sensor networks (WSNs) are useful in many practical applications, including agriculture, military, and health care systems. However, the nodes in a sensor network are constrained by energy, and hence the lifespan of such sensor nodes is limited. Temporal logics provide a facility to predict the lifetime of sensor nodes in a WSN using past and present traffic and environmental conditions. Moreover, fuzzy logic helps to perform inference under uncertainty. When fuzzy logic is combined with temporal constraints, it increases the accuracy of decision making with qualitative information. Hence, a new data collection and cluster-based energy-efficient routing algorithm is proposed in this paper by extending the existing LEACH protocol. Extensions are provided by including fuzzy temporal rules for making data collection and routing decisions. Moreover, the proposed work uses fuzzy temporal logic for forming clusters and performing cluster-based routing. The main difference between other cluster-based routing protocols and the proposed protocol is that two types of cluster heads are used here, one for data collection and the other for routing. In our experiments, we observed that the proposed fuzzy cluster-based routing algorithm with temporal constraints extends the network lifetime, reduces energy consumption, and enhances the quality of service by increasing the packet delivery ratio and reducing the delay.
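A sketch of the two-role election idea: each node receives fuzzy scores built from residual energy and a time-windowed (temporal) traffic level, and the best node per role is selected. Membership shapes, rule weights, and the window length are illustrative assumptions, not the paper's rule base.

```python
# Toy fuzzy-temporal election of two cluster heads: one for data collection,
# one for routing. Scores and memberships are illustrative.
def high(x):                        # simple ramp membership on [0, 1]
    return max(0.0, min(1.0, (x - 0.3) / 0.6))

def elect_heads(nodes, window=5):
    for n in nodes:
        recent = n["traffic"][-window:]            # temporal constraint:
        n["load"] = sum(recent) / len(recent)      # only the last few rounds
        # Fuzzy rules: a collection head should see high traffic AND have
        # energy; a routing head is mainly energy-rich and lightly loaded.
        n["collect_score"] = min(high(n["energy"]), high(n["load"]))
        n["route_score"] = 0.7 * high(n["energy"]) + 0.3 * (1 - high(n["load"]))
    collect_ch = max(nodes, key=lambda n: n["collect_score"])
    route_ch = max((n for n in nodes if n is not collect_ch),
                   key=lambda n: n["route_score"])
    return collect_ch["id"], route_ch["id"]

nodes = [{"id": i, "energy": e, "traffic": t} for i, (e, t) in enumerate([
    (0.9, [0.2, 0.3, 0.2, 0.1, 0.2]),
    (0.7, [0.8, 0.9, 0.7, 0.8, 0.9]),
    (0.5, [0.4, 0.5, 0.6, 0.5, 0.4])])]
print(elect_heads(nodes))           # e.g. node 1 collects, node 0 routes
```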
Cyber-Physical Embedded Systems (CPESs) are distributed embedded systems integrated with various actuators and sensors. When it comes to CPES security, the most significant problem is the security of Embedded Sensor Networks (ESNs). With the continuous growth of ESNs, the security of transferring data from sensors to their destinations has become an important research area. Due to limitations in power, storage, and processing capabilities, existing security mechanisms for wired or wireless networks cannot be applied directly to ESNs. Meanwhile, ESNs are exposed to many kinds of attacks in industrial scenarios. Therefore, there is a need to develop new techniques or modify current security mechanisms to overcome these problems. In this article, we focus on Intrusion Detection (ID) techniques and propose a new attack-defense game model to detect malicious nodes using a repeated game approach. As a direct consequence of the game model, attackers and defenders adopt different strategies to achieve optimal payoffs. Importantly, erroneous detections and missed detections are taken into consideration in Intrusion Detection Systems (IDSs), and a game tree model is introduced to solve this problem. In addition, we analyze and prove the existence of both pure and mixed Nash equilibria. Simulations show that the proposed model can reduce energy consumption by up to 50% compared with the existing All Monitor (AM) model and improve the detection rate by 10% to 15% compared with the existing Cluster Head (CH) monitor model.
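To make the equilibrium analysis concrete, the sketch below solves a 2×2 attack-defense stage game in closed form via the indifference conditions; the payoff numbers (detection gain, monitoring cost) are illustrative, not taken from the paper.

```python
# Mixed Nash equilibrium of a 2x2 attack-defense stage game.
# Rows = attacker (attack / wait), columns = defender (monitor / idle).
A = [[-5.0, 4.0],      # attacker payoff: caught vs. successful attack
     [ 0.0, 0.0]]      # waiting yields nothing either way
D = [[ 4.0, -6.0],     # defender payoff: detection vs. missed attack
     [-1.0,  0.0]]     # monitoring an idle attacker still costs energy

# Defender mixes so the attacker is indifferent between its two rows;
# attacker mixes so the defender is indifferent between its two columns.
q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])  # P(monitor)
p = (D[1][1] - D[1][0]) / (D[0][0] - D[0][1] - D[1][0] + D[1][1])  # P(attack)
print(f"attacker attacks w.p. {p:.2f}, defender monitors w.p. {q:.2f}")
# -> attacker attacks w.p. 0.09, defender monitors w.p. 0.44
```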
Authenticating a user based on her unique behavioral biometric traits has been extensively researched over the past few years. The most researched behavioral biometric techniques are based on keystroke and mouse dynamics. These schemes, however, have been shown to be vulnerable to human-based and robotic attacks that mimic the user's behavioral pattern to impersonate the user. In this paper, we aim to verify the user's identity through active, cognition-based user interaction in the authentication process. Such interaction promises two key advantages. First, it may enhance the security of the authentication process, as multiple rounds of active interaction serve as a mechanism to protect against several types of attacks, including zero-effort attacks, trained expert attackers, and automated attacks. Second, it may enhance the usability of the authentication process by actively engaging the user. We explore the cognitive authentication paradigm through very simple interactive challenges, called Dynamic Cognitive Games, which involve objects floating around within the images; the user's task is to match the objects with their respective target(s) and drag/drop them to the target location(s). Specifically, we introduce, build, and study Gametrics ("Game-based biometrics"), an authentication mechanism based on the unique way the user solves such simple challenges, captured by multiple features related to her cognitive abilities and mouse dynamics. Based on a comprehensive dataset collected in both online and lab settings, we show that Gametrics can identify users with high accuracy (false negative rates, FNR, as low as 0.02) while rejecting zero-effort attackers (false positive rates, FPR, as low as 0.02). Moreover, Gametrics shows promising results in defending against expert attackers who try to learn and later mimic the user's pattern of solving the challenges (FPR for expert human attackers as low as 0.03). Furthermore, we argue that the proposed biometric is hard to replay or spoof by automated means, such as robots or malware.
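One plausible realization of the verification step: reduce each drag-and-drop round to a small feature vector capturing cognitive and mouse-dynamics cues, then score it with a per-user classifier. The feature set and the random-forest choice are our assumptions, not necessarily Gametrics' exact design.

```python
# Sketch: features from one drag-and-drop round, scored per user.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def round_features(events):
    """events: list of (t, x, y) samples for one object drag."""
    e = np.asarray(events, dtype=float)
    dt = e[-1, 0] - e[0, 0]                         # time to complete the match
    steps = np.diff(e[:, 1:], axis=0)
    path = np.hypot(steps[:, 0], steps[:, 1]).sum()
    direct = np.hypot(*(e[-1, 1:] - e[0, 1:]))
    speeds = (np.hypot(steps[:, 0], steps[:, 1])
              / np.maximum(np.diff(e[:, 0]), 1e-9))
    return [dt, path, path / max(direct, 1e-9),     # curvature of the drag
            speeds.mean(), speeds.std(),            # motor signature
            int((speeds < 1e-3).sum())]             # pauses while "thinking"

# X = [round_features(r) for r in legit_rounds + other_users_rounds]
# y = [1] * len(legit_rounds) + [0] * len(other_users_rounds)
# clf = RandomForestClassifier(n_estimators=100).fit(X, y)
# accept = clf.predict_proba([round_features(new_round)])[0, 1] > 0.5
```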
When people use social applications and services, their privacy is exposed to a potentially serious threat. In this article, we present a novel, robust, and effective de-anonymization attack on mobility trace data and social data. First, we design a Unified Similarity (US) measurement, which takes into account local and global structural characteristics of the data, information obtained from auxiliary data, and knowledge inherited from ongoing de-anonymization results. By analyzing this measurement on real datasets, we find that some data can potentially be de-anonymized accurately, while the rest can be de-anonymized only at a coarse granularity. Utilizing this property, we present a US-based De-Anonymization (DA) framework, which iteratively de-anonymizes data with an accuracy guarantee. Then, to de-anonymize large-scale data without knowledge of the overlap size between the anonymized data and the auxiliary data, we generalize DA to an Adaptive De-Anonymization (ADA) framework. By smartly working on two core matching subgraphs, ADA achieves high de-anonymization accuracy and reduces computational overhead. Finally, we examine the presented de-anonymization attack on three well-known mobility traces, St Andrews, Infocom06, and Smallblue, and three social datasets, ArnetMiner, Google+, and Facebook. The experimental results demonstrate that the presented de-anonymization framework is very effective and robust to noise. The source code and employed datasets are publicly available at SecGraph [2015].
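A rough sketch of a Unified-Similarity-style node score: a weighted blend of a local (degree) term, a global (closeness-centrality) term, and consistency with mappings fixed by earlier de-anonymization rounds. The specific terms and weights are illustrative assumptions, not the paper's exact measurement.

```python
# US-style score between node u in the anonymized graph Ga and
# node v in the auxiliary graph Gb, given a partial mapping.
import networkx as nx

def us_score(Ga, u, Gb, v, mapped, w=(0.4, 0.3, 0.3)):
    # Local structure: relative degree similarity.
    du, dv = Ga.degree(u), Gb.degree(v)
    local = 1 - abs(du - dv) / max(du, dv, 1)
    # Global structure: closeness-centrality similarity.
    cu = nx.closeness_centrality(Ga, u)
    cv = nx.closeness_centrality(Gb, v)
    glob = 1 - abs(cu - cv) / max(cu, cv, 1e-9)
    # Inherited knowledge: fraction of u's already-mapped neighbours that
    # land on neighbours of v under the current partial mapping.
    hits = sum(1 for n in Ga.neighbors(u)
               if n in mapped and Gb.has_edge(mapped[n], v))
    known = sum(1 for n in Ga.neighbors(u) if n in mapped)
    inherit = hits / known if known else 0.0
    return w[0] * local + w[1] * glob + w[2] * inherit
```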
Proxy Re-Encryption (PRE) is a favorable primitive for realizing a cryptographic cloud with a secure and flexible data sharing mechanism. A number of PRE schemes with versatile capabilities have been proposed for different applications. Secure data sharing can be achieved internally within each PRE scheme, but no previous work can guarantee secure data sharing among different PRE schemes in a general manner. Moreover, this problem is challenging to solve due to the huge differences among the existing PRE schemes in their algebraic systems and public-key types. To solve this problem more generally, this paper unifies the definitions of the existing PRE and Public Key Encryption (PKE) schemes, and further unifies their security definitions. Then, taking any uniformly defined PRE scheme and any uniformly defined PKE scheme as two building blocks, this paper constructs a Generally Hybrid Proxy Re-Encryption (GHPRE) scheme, using the idea of temporary public and private keys, to achieve secure data sharing between the two underlying schemes. Since PKE is a more general definition than PRE, the proposed GHPRE scheme is also workable between any two PRE schemes. Moreover, the proposed GHPRE scheme can be deployed transparently even if the underlying PRE schemes are already in operation.
Growing numbers of ubiquitous electronic devices and services motivate the need for effortless user authentication and identification. While biometrics are a natural means of achieving these goals, their use poses privacy risks, due mainly to the difficulty of preventing theft and abuse of biometric data. One way to minimize information leakage is to derive biometric keys from users' raw biometric measurements. Such keys can be used in subsequent security protocols and ensure that no sensitive biometric data needs to be transmitted or permanently stored. This paper is the first attempt to explore the use of human body impedance as a biometric trait for deriving secret keys. Building upon Randomized Biometric Templates as a key generation scheme, we devise a mechanism that supports consistent regeneration of unique keys from users' impedance measurements. The underlying set of biometric features is found using a feature learning technique based on Siamese networks. Compared to prior feature extraction methods, the proposed technique offers significantly improved recognition rates in the context of key generation. Besides computing experimental error rates, we tailor a known key guessing approach specifically to the key generation scheme used and assess the security provided by the resulting keys. We give a very conservative estimate of the number of guesses an adversary must make to find a correct key. Results show that the proposed key generation approach produces keys comparable to those obtained by similar methods based on other biometrics.
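As a simplified illustration of biometric key derivation (a deliberate simplification, not the Randomized Biometric Templates scheme itself), the sketch below keeps, as public helper data, the indices of features that lie reliably far from the population median, and re-thresholds those features into key bits at login.

```python
# Toy biometric key derivation by reliable-feature quantization.
import hashlib
import numpy as np

def enrol(user_samples, population, margin=0.5):
    """user_samples, population: 2-D arrays of impedance feature vectors."""
    med = np.median(population, axis=0)
    spread = population.std(axis=0) + 1e-9
    z = (np.mean(user_samples, axis=0) - med) / spread
    reliable = np.where(np.abs(z) > margin)[0]     # stable, user-specific bits
    return {"median": med, "indices": reliable}    # helper data (non-secret)

def derive_key(sample, helper):
    idx = helper["indices"]
    bits = sample[idx] > helper["median"][idx]     # re-threshold into bits
    return hashlib.sha256(np.packbits(bits).tobytes()).digest()

# The same user re-measured should reproduce the same 256-bit key, while
# the helper data alone reveals only which features are used, not their sign.
```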
A new dataset is presented, composed of music identification matches from Gracenote, a leading global music metadata company. Matches from January 1, 2014 to December 31, 2014 have been curated and made available as a public dataset called Gracenote Music Identification 2014, or GNMID14, at the following address: https://developer.gracenote.com/mid2014. This collection is the first significant music identification dataset and one of the largest music-related datasets available, containing more than 110M matches in 224 countries for 3M unique tracks and 509K unique artists. It features geotemporal information (i.e., country and match date) as well as genre and mood metadata. In this paper, we characterize the dataset and demonstrate its utility for Information Retrieval (IR) research.
Infrastructure-as-a-Service (IaaS) clouds such as OpenStack consist of two kinds of nodes in their infrastructure: control nodes and compute nodes. While control nodes run all critical services, compute nodes host virtual machines of customers. Given the large number of compute nodes, and the fact that they are hosting VMs of (possibly malicious) customers, it is possible that some of the compute nodes may be compromised. This paper examines the impact of such a compromise. We focus on OpenStack, a popular open-source cloud platform that is widely adopted. We show that attackers compromising a single compute node can extend their control over the entire cloud infrastructure. They can then gain free access to resources that they have not paid for, or even bring down the whole cloud to affect all customers. This startling result stems from the cloud platform's misplaced trust, which does not match today's threats. To overcome the weakness, we propose a new system, called SOS, for hardening OpenStack. SOS limits the trust placed in compute nodes. SOS consists of a framework that can enforce a wide range of security policies. Specifically, we applied mandatory access control and capabilities to confine interactions among different components. Effective confinement policies are generated automatically. Furthermore, SOS requires no modifications to OpenStack. This has allowed us to deploy SOS on multiple versions of OpenStack. Our experimental results demonstrate that SOS is scalable, incurs negligible overheads, and offers strong protection.
The orthodox paradigm for defending against automated social-engineering attacks in large-scale socio-technical systems is reactive and victim-agnostic. Defenses generally focus on identifying the attacks/attackers (e.g., phishing emails, social-bot infiltrations, malware offered for download). To change the status quo, we propose to identify, even if imperfectly, the vulnerable user population, that is, the users who are likely to fall victim to such attacks. Once identified, information about the vulnerable population can be used in two ways. First, the vulnerable population can be influenced by the defender through several means, including education, specialized user experience, extra protection layers, and watchdogs. In the same vein, information about the vulnerable population can be used to fine-tune and reprioritize defense mechanisms to offer differentiated protection, possibly at the cost of additional friction generated by the defense mechanism. Second, information about the user population can be used to identify an attack (or compromised users) based on differences between the general and the vulnerable population. This paper considers the implications of the proposed paradigm for existing defenses in three areas (phishing of user credentials, malware distribution, and socialbot infiltration) and discusses how knowledge of the vulnerable population can enable more robust defenses.
Network protocols such as Ethernet and TCP/IP were not designed to ensure the security and privacy of users. To protect users' privacy, anonymity networks such as Tor have been proposed to hide both identities and communication contents for Internet traffic. However, such solutions cannot protect enterprise network traffic that does not transit the Internet. In this paper, we present the design, implementation, and evaluation of a moving target technique called Packet Header Randomization (PHEAR), a privacy-enhancing system for enterprise networks that leverages emerging Software-Defined Networking hardware and protocols to eliminate identifiers found at the MAC, Network, and higher layers of the network stack. PHEAR also encrypts all packet data beyond the Network layer. We evaluate the security of PHEAR against a variety of known and novel attacks and conduct whole-network experiments that show the prototype deployment provides sufficient performance for common applications such as web browsing and file sharing.
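The core identifier-randomization idea can be sketched with a keyed PRF: a real address plus the current epoch maps to a short-lived pseudonym that only the key holder (e.g., the SDN controller) can associate back to the host. The field size, epoch length, and names below are illustrative assumptions, not PHEAR's actual wire format.

```python
# Epoch-rotating pseudonyms for link-layer identifiers via a keyed PRF.
import hmac, hashlib, time

SECRET = b"controller-only-key"        # held by the SDN controller

def pseudonym(real_mac: str, epoch_len: int = 30) -> str:
    epoch = int(time.time()) // epoch_len          # rotate every 30 s
    tag = hmac.new(SECRET, f"{real_mac}|{epoch}".encode(),
                   hashlib.sha256).digest()[:6]    # 48-bit MAC-sized label
    return ":".join(f"{b:02x}" for b in tag)

# The controller installs edge flow rules rewriting real <-> pseudonymous
# headers, so on-path observers see only the rotating labels.
print(pseudonym("aa:bb:cc:dd:ee:ff"))
```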
Advanced cyber attacks consist of multiple stages aimed at being stealthy and elusive. Such attack patterns leave their footprints spatio-temporally dispersed across many different logs in victim machines. However, existing log-mining intrusion analysis systems typically target only a single type of log to discover evidence of an attack and therefore fail to exploit fundamental inter-log connections. The output of such single-log analysis can hardly reveal the complete attack story for complex, multi-stage attacks. Additionally, some existing approaches require heavyweight system instrumentation, which makes them impractical to deploy in real production environments. To address these problems, we present HERCULE, an automated multi-stage log-based intrusion analysis system. Inspired by graph analytics research in social network analysis, we model multi-stage intrusion analysis as a community discovery problem. HERCULE builds multi-dimensional weighted graphs by correlating log entries across multiple lightweight logs that are readily available on commodity systems. From these, HERCULE discovers any "attack communities" embedded within the graphs. Our evaluation with 15 well-known APT attack families demonstrates that HERCULE can reconstruct attack behaviors from a spectrum of cyber attacks that involve multiple stages with high accuracy and low false positive rates.
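The community-discovery step might look like the sketch below: log entries become graph nodes, edges carry correlation weights (shared process ID, shared remote IP, temporal proximity), and modularity-based community detection surfaces candidate attack communities. The correlation rules and weights are illustrative assumptions, not HERCULE's actual feature set.

```python
# Correlate log entries into a weighted graph, then find communities.
import networkx as nx
from networkx.algorithms import community

def build_graph(entries):
    """entries: list of dicts like {'id', 'pid', 'ip', 'ts'}."""
    G = nx.Graph()
    for i, a in enumerate(entries):
        G.add_node(a["id"])
        for b in entries[i + 1:]:
            w = 0.0
            if a.get("pid") is not None and a.get("pid") == b.get("pid"):
                w += 1.0                            # same process
            if a.get("ip") is not None and a.get("ip") == b.get("ip"):
                w += 1.0                            # same remote endpoint
            if abs(a["ts"] - b["ts"]) < 5:
                w += 0.5                            # temporally close events
            if w:
                G.add_edge(a["id"], b["id"], weight=w)
    return G

# candidates = community.greedy_modularity_communities(G, weight="weight")
```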
Recommender systems must find items that match the heterogeneous preferences of their users. Customizable recommenders allow users to directly manipulate the system's algorithm in order to help it match those preferences. However, customizing may demand a certain degree of skill, and new users in particular may struggle to effectively customize the system. In user studies of two different systems, I show that there is considerable heterogeneity in the way that new users try to customize a recommender, even within groups of users with similar underlying preferences. Furthermore, I show that this heterogeneity persists beyond the first few interactions with the recommender. System designs should account for this heterogeneity so that new users can both receive good recommendations in their early interactions and learn how to effectively customize the system for their preferences.
Polynomial masking is a glitch-resistant, higher-order masking scheme based upon Shamir's secret sharing scheme and multi-party computation protocols. Polynomial masking was first introduced at CHES 2011, while a 1st-order implementation of the AES S-box on FPGA was presented at CHES 2013. In the latter work, the authors showed a 2nd-order univariate leakage by side-channel collision analysis on a tuned measurement setup. This negative result motivates the need to evaluate the performance, area costs, and security margins of combined shuffled and higher-order polynomial masking schemes to counteract trivial univariate leakages. In this work, we provide the following contributions: first, we introduce additional principles for the selection of efficient addition chains, which allow for more compact and faster implementations of cryptographic S-boxes. Our 1st-order AES S-box implementation requires approximately 27% fewer registers, 20% fewer clock cycles, and 5% fewer random bits than the CHES 2013 implementation. Then, we propose a lightweight shuffling countermeasure, which inherently applies to polynomial masking schemes and effectively enhances their univariate security at negligible area expense. Finally, we present the design of a combined shuffled and higher-order polynomially masked AES S-box in hardware, while providing ASIC synthesis and side-channel analysis results in the electromagnetic (EM) domain.
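For context on what an addition chain buys here, the snippet below computes the GF(2^8) inverse x^254 (the core of the AES S-box) with the standard 4-multiplication chain; squarings are linear, hence cheap, in GF(2^m). This is the textbook chain shown for illustration, not the paper's optimized selection.

```python
# GF(2^8) inversion via an addition chain on the exponent 254.
def gf_mul(a, b):
    """Multiplication in GF(2^8) modulo the AES polynomial x^8+x^4+x^3+x+1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1B
        b >>= 1
    return r

def gf_sq(a):
    return gf_mul(a, a)             # squaring is linear (cheap) in hardware

def gf_inv(x):                      # 4 multiplications + 7 squarings
    x2 = gf_sq(x)                                   # x^2
    x3 = gf_mul(x2, x)                              # x^3
    x12 = gf_sq(gf_sq(x3))                          # x^12
    x15 = gf_mul(x12, x3)                           # x^15
    x240 = gf_sq(gf_sq(gf_sq(gf_sq(x15))))          # x^240
    x252 = gf_mul(x240, x12)                        # x^252
    return gf_mul(x252, x2)                         # x^254 = x^-1

assert gf_inv(0x53) == 0xCA         # the FIPS-197 S-box inversion example
```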
With rapidly increasing IPv6 network traffic, some network processing systems, such as DPI engines and firewalls, cannot meet the demand of high network bandwidth. The hash-based flow table is one of the bottlenecks. In this paper, we measure the characteristics of IPv6 addresses and propose an entropy-based revised hash algorithm, which produces a better distribution within acceptable time. Moreover, we use a hierarchical hash strategy to further reduce hash table lookups, even in extreme cases.
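The entropy-driven selection could be sketched as follows: estimate per-byte entropy over observed IPv6 addresses, then fold only the informative (high-entropy) bytes into the hash. The threshold and the folding function are illustrative assumptions, not the paper's algorithm.

```python
# Entropy-guided hashing of IPv6 addresses: skip low-entropy prefix bytes.
import math
from collections import Counter

def byte_entropies(addresses):
    """addresses: list of 16-byte IPv6 addresses (bytes objects)."""
    ents = []
    for pos in range(16):
        counts = Counter(a[pos] for a in addresses)
        total = sum(counts.values())
        ents.append(-sum(c / total * math.log2(c / total)
                         for c in counts.values()))
    return ents

def make_hash(addresses, table_bits=16, min_entropy=2.0):
    hot = [i for i, e in enumerate(byte_entropies(addresses))
           if e >= min_entropy]                    # informative byte positions
    def h(addr):
        v = 0
        for i in hot:                              # XOR-fold only those bytes
            v = ((v << 5) ^ (v >> 11) ^ addr[i]) & 0xFFFF
        return v % (1 << table_bits)
    return h
```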
Computed Tomography (CT) image reconstruction is an important technique used in a wide range of applications, from explosive detection and medical imaging to scientific imaging. Among available reconstruction methods, Model-Based Iterative Reconstruction (MBIR) produces higher-quality images and allows for the use of more general CT scanner geometries than is possible with more commonly used methods. The high computational cost of MBIR, however, often makes it impractical in applications for which it would otherwise be ideal. This paper describes a new MBIR implementation that significantly reduces the computational cost of MBIR while retaining its benefits. It describes a novel organization of the scanner data into super-voxels (SV) that, combined with a super-voxel buffer (SVB), dramatically increases locality and prefetching, enables parallelism across SVs, and leads to an average speedup of 187× on 20 cores.
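Purely to illustrate the locality idea (not MBIR itself): processing voxels in contiguous super-voxel blocks, with each block's slice of measurement data staged in a small buffer, keeps the inner loop's working set cache-resident. The update rule below is a placeholder.

```python
# Locality sketch only: iterate the image in contiguous SV blocks and stage
# the measurements each block touches in a small buffer (the "SVB"), so the
# inner loop works on a small, cache-resident set.
import numpy as np

def blocked_pass(image, measurements, sv=16):
    h, w = image.shape
    for bi in range(0, h, sv):
        for bj in range(0, w, sv):
            block = image[bi:bi + sv, bj:bj + sv]
            svb = measurements[bi:bi + sv, bj:bj + sv].copy()  # staged copy
            block += 0.1 * (svb - block)         # placeholder per-SV update
    return image

img = blocked_pass(np.zeros((128, 128)), np.ones((128, 128)))
```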