Bibliography
A class of cyber-attacks called false data injection attacks, which target measurement data used for state estimation in the power grid, is currently under study by the research community. These attacks modify sensor readings obtained from meters with the aim of misleading the control center into taking ill-advised response actions. It has been shown that an attacker with knowledge of the network topology can craft an attack that bypasses existing bad data detection schemes (largely based on residual generation) employed in the power grid. We propose a multi-agent system for detecting false data injection attacks against state estimation. The multi-agent system is composed of software agents created for each substation. The agents facilitate the exchange of information, including measurement data and state variables, among substations. We demonstrate that the information exchanged among substations, even when untrusted, enables agents to cooperatively detect disparities between local state variables at the substation and global state variables computed by the state estimator. We show that a false data injection attack that passes bad data detection for the entire system does not pass bad data detection for each agent.
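The residual-based bad data detection that such attacks bypass can be sketched as follows; the measurement Jacobian, measurements, and threshold below are invented toy values (the paper's multi-agent detection itself is not shown), but the construction a = Hc is the standard stealthy attack that leaves the residual unchanged:

```python
# Toy sketch: residual-based bad data detection for DC state estimation,
# and a stealthy attack a = H @ c that is provably residual-invariant.
# All matrices and values here are illustrative, not from the paper.
import numpy as np

H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0],
              [2.0, 1.0]])         # measurement Jacobian (4 meters, 2 states)
x_true = np.array([1.0, 0.5])
z = H @ x_true                     # clean measurements (noise omitted)

def residual_norm(meas):
    # least-squares state estimate, then residual r = z - H x_hat
    x_hat, *_ = np.linalg.lstsq(H, meas, rcond=None)
    return np.linalg.norm(meas - H @ x_hat)

tau = 1e-6                         # toy detection threshold
naive = z.copy(); naive[0] += 3.0  # arbitrary tampering of one meter
c = np.array([0.2, -0.1])          # attacker's chosen state perturbation
stealthy = z + H @ c               # attack vector in the column space of H

print(residual_norm(z) < tau)        # clean data passes
print(residual_norm(naive) < tau)    # naive tampering is detected
print(residual_norm(stealthy) < tau) # stealthy attack bypasses detection
```

Because z + Hc lies in the column space of H, the estimator simply converges to x_true + c and the residual stays at zero, which is exactly why topology knowledge defeats residual tests.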
Cognitive radio (CR) has emerged as a promising technology to increase the utilization of spectrum resources. A pivotal challenge in CR lies in secondary users (SUs) finding each other on the frequency band, i.e., spectrum locating. In this demo, we implement two kinds of multi-channel rendezvous technology to solve the spectrum locating problem: (i) the common control channel (CCC) based rendezvous scheme, which is simple and effective when a control channel is always available; and (ii) the channel-hopping (CH) based blind rendezvous, which can guarantee rendezvous on all commonly available channels of pairwise SUs within a short time without a CCC. Furthermore, the cognitive nodes in the demonstration can adjust their communication channels autonomously according to the dynamic spectrum environment for continuous data transmission.
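The flavor of CH-based blind rendezvous can be illustrated with a generic modular-hopping sketch (our own toy construction, not necessarily the scheme used in the demo): each SU hops with a period equal to a distinct prime at least the number of channels, and a Chinese-Remainder-Theorem argument guarantees the two users meet on every common channel within p*q slots.

```python
# Toy sketch of channel-hopping blind rendezvous without a CCC.
# Each SU hops with a period equal to a prime >= N; two SUs with
# different primes meet on every channel within p*q slots (CRT).
N = 5                                # number of licensed channels

def ch_sequence(prime, t):
    """Channel visited at slot t by a user hopping with the given prime."""
    return (t % prime) % N

def first_rendezvous(p, q, channel):
    """First slot (if any) where both users sit on the given channel."""
    for t in range(p * q):
        if ch_sequence(p, t) == channel == ch_sequence(q, t):
            return t
    return None

# Two SUs hopping with coprime periods 5 and 7 find channel 3 blindly:
print(first_rendezvous(5, 7, 3))   # → 3
```

Real CH schemes (e.g., jump-stay variants) refine this idea to bound the time-to-rendezvous even when the users' clocks are not synchronized.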
With the increased popularity of ubiquitous computing and connectivity, the Internet of Things (IoT) also introduces new vulnerabilities and attack vectors. While secure data collection (i.e., the upward link) has been well studied in the literature, secure data dissemination (i.e., the downward link) remains an open problem. Attribute-based encryption (ABE) and outsourced ABE have been used for secure message distribution in IoT; however, existing mechanisms suffer from extensive computation and/or privacy issues. In this paper, we explore the problem of privacy-preserving targeted broadcast in IoT. We propose two multi-cloud-based outsourced-ABE schemes, namely the parallel-cloud ABE and the chain-cloud ABE, which enable the receivers to partially outsource the computationally expensive decryption operations to the clouds while preventing user attributes from being disclosed. In particular, the proposed solution protects three types of privacy (i.e., data, attribute, and access policy privacy) by enforcing collaboration among multiple clouds. Our schemes also provide delegation verifiability, which allows the receivers to verify whether the clouds have faithfully performed the outsourced operations. We extensively analyze the security guarantees of the proposed mechanisms and demonstrate the effectiveness and efficiency of our schemes with simulated resource-constrained IoT devices, which outsource operations to Amazon EC2 and Microsoft Azure.
In the past three years, the Emotion Recognition in the Wild (EmotiW) Grand Challenge has drawn more and more attention due to its huge potential applications. In the fourth challenge, aimed at the task of video-based emotion recognition, we propose a multi-clue emotion fusion (MCEF) framework that models human emotion from three mutually complementary sources: facial appearance texture, facial action, and audio. To extract high-level emotion features from sequential face images, we employ a CNN-RNN architecture, where the face image from each frame is first fed into the fine-tuned VGG-Face network to extract face features, and the features of all frames are then traversed sequentially by a bidirectional RNN so as to capture dynamic changes of facial textures. To attain more accurate facial actions, a facial landmark trajectory model is proposed to explicitly learn emotion variations of facial components. Further, audio signals are also modeled in a CNN framework by extracting low-level energy features from segmented audio clips and then stacking them as an image-like map. Finally, we fuse the results generated from the three clues to boost the performance of emotion recognition. Our proposed MCEF achieves an overall accuracy of 56.66%, a large improvement of 16.19% over the baseline.
As India is being digitized through the Digital India initiative, the most basic unique identity for each individual is biometrics. Since India is the second most populous nation, the database that has to be maintained is enormous. Shielding this information using present techniques has been called into question. This problem can be overcome by using cryptographic algorithms in combination with biometrics. Hence, the proposed system combines multimodal biometrics (fingerprint, retina, finger vein) with a cryptographic algorithm, achieving a Genuine Acceptance Rate of 94%, a False Acceptance Rate of 1.46%, and a False Rejection Rate of 1.07%.
Multipath TCP (MP-TCP) has the potential to greatly improve application performance by using multiple paths transparently. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties on the behavior of existing algorithms and motivate our algorithm Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel. We use our prototype to compare the new algorithm to existing MP-TCP algorithms.
Microfluidics is an interdisciplinary science focusing on the development of devices and systems that process low volumes of fluid for applications such as high-throughput DNA sequencing, immunoassays, and entire lab-on-chip platforms. Microfluidic diagnostic technology enables these advances by facilitating the miniaturization and integration of complex biochemical processing through a microfluidic biochip [1]. This approach tightly couples the biochemical operations, sensing system, control algorithm, and droplet-based biochip. During the process, the status of a droplet is monitored in real time to detect operational errors. If an error has occurred, the control algorithm dynamically reconfigures to allow recovery and rescheduling of on-chip operations. During this recovery procedure, the droplet that is the source of the error is discarded to prevent propagation of the error, and the operation is repeated. Threats to the operation of the microfluidic biochip include (1) integrity: an attack can modify control electrodes to corrupt the diagnosis, and (2) privacy: what can a user/operator deduce about the diagnosis? It is challenging to describe both these aspects using existing models. As Figure 1 depicts, there are multiple security domains: unidirectional information flows shown in black indicate undesirable flows, the bidirectional black arrows indicate desirable, but possibly corrupted, information flows, and the unidirectional red arrows indicate undesirable information flows. As with Stuxnet, a bidirectional, deducible information flow is needed between the monitoring security domain and the internal security domain (biochip) [2]. Simultaneously, the attacker and the operators should receive a nondeducible information flow. Likewise, the red attack arrows should be deducible to the internal domain.
Our current security research direction uses the novel approach of Multiple Security Domain Nondeducibility [2] to explore the vulnerabilities of exploiting this error recovery process through information flow leakages, leading to protection of the system through desirable information flows.
Multiple-input multiple-output (MIMO) techniques have been the subject of increased attention for underwater acoustic communication due to their ability to significantly improve channel capacity. Recently, an under-ice MIMO acoustic communication experiment was conducted in shallow water, which differs from previous works in that the water column was covered by sea ice about 40 centimeters thick. In this experiment, high-frequency MIMO signals centered at 10 kHz were transmitted from a two-element source array to a four-element vertical receive array at 1 km range. The unique under-ice acoustic propagation environment in shallow water seems to naturally separate data streams from different transducers, but co-channel interference remains. Time reversal followed by a single-channel decision feedback equalizer is used in this paper to compensate for the inter-symbol interference and co-channel interference. It is demonstrated that this simple receiver scheme is good enough to achieve robust performance using fewer hydrophones (i.e., two) without the explicit use of complex co-channel interference cancelation algorithms such as parallel or serial interference cancelation. Two channel estimation algorithms, based on least squares and least mean squares, are also studied for MIMO communications in this paper, and their performance is compared using experimental data.
Data mining is the method of extracting knowledge and interesting patterns from huge amounts of data. With the rapid increase of data storage, cloud, and service-based computing, the risk of misuse of data has become a major concern. Protecting sensitive information present in the data is crucial and critical. Data perturbation plays an important role in privacy-preserving data mining. The major challenge of privacy preservation is to balance the factors that achieve privacy guarantees and data utility. We propose a data perturbation method that perturbs the data using fuzzy logic and random rotation. We also describe aspects of the comparable level of quality between perturbed data and original data. The comparisons are illustrated on different multivariate datasets. An experimental study shows that the model is better at achieving privacy guarantees for the data as well as data utility.
Additive manufacturing, also known as 3D printing, has been increasingly applied to fabricate highly intellectual property (IP) sensitive products. However, the related IP protection issues in 3D printers are still largely underexplored. On the other hand, smartphones are equipped with rich onboard sensors and have been applied to pervasive mobile surveillance in many applications. These facts raise one critical question: is it possible that smartphones access the side-channel signals of a 3D printer and then hack the IP information? To answer this, we perform an end-to-end study on exploring smartphone-based side-channel attacks against 3D printers. Specifically, we formulate the problem of the IP side-channel attack in 3D printing. Then, we investigate the possible acoustic and magnetic side-channel attacks using the smartphone's built-in sensors. Moreover, we explore a magnetic-enhanced side-channel attack model to accurately deduce the vital directional operations of the 3D printer. Experimental results show that by exploiting the side-channel signals collected by smartphones, we can successfully reconstruct the physical prints and their G-code with a Mean Tendency Error of 5.87% on regular designs and 9.67% on complex designs, respectively. Our study demonstrates this new and practical smartphone-based side-channel attack for compromising IP information during 3D printing.
The security game is a basic model for resource allocation in adversarial environments. Here there are two players, a defender and an attacker. The defender wants to allocate her limited resources to defend critical targets, and the attacker seeks his most favorable target to attack. In the past decade, there has been a surge of research interest in analyzing and solving security games motivated by applications from various domains. Remarkably, these models and their game-theoretic solutions have led to real-world deployments in use by major security agencies like the LAX airport, the US Coast Guard, and the Federal Air Marshal Service, as well as non-governmental organizations. Among all this research and these applications, equilibrium computation serves as a foundation. This paper examines security games from a theoretical perspective and provides a unified view of various security game models. In particular, each security game can be characterized by a set system E which consists of the defender's pure strategies; the defender's best response problem can be viewed as a combinatorial optimization problem over E. Our framework captures most of the basic security game models in the literature, including all the deployed systems; the set systems E arising from various domains encode standard combinatorial problems like bipartite matching, maximum coverage, min-cost flow, packing problems, etc. Our main result shows that equilibrium computation in security games is essentially a combinatorial problem. In particular, we prove that, for any set system E, the following problems can be reduced to each other in polynomial time: (0) combinatorial optimization over E; (1) computing the minimax equilibrium for zero-sum security games over E; (2) computing the strong Stackelberg equilibrium for security games over E; (3) computing the best or worst (for the defender) Nash equilibrium for security games over E.
Therefore, the hardness [polynomial solvability] of any of these problems implies the hardness [polynomial solvability] of all the others. Here, by "games over E" we mean the class of security games with arbitrary payoff structures, but a fixed set E of defender pure strategies. This shows that the complexity of a security game is essentially determined by the set system E. We view drawing these connections as an important conceptual contribution of this paper.
Tree structures such as breadth-first search (BFS) trees and minimum spanning trees (MST) are among the most fundamental graph structures in distributed network algorithms. However, by definition, these structures are not robust against failures, and even a single edge's removal can disrupt their functionality. A well-studied concept which attempts to circumvent this issue is fault-tolerant tree structures, where the tree is augmented with additional edges from the network so that the functionality of the structure is maintained even when an edge fails. These structures, or other equivalent formulations, have been studied extensively from a centralized viewpoint. However, despite the fact that the main motivations come from distributed networks, their distributed construction has not been addressed before. In this paper, we present distributed algorithms for constructing fault-tolerant BFS and MST structures. The time complexity of our algorithms is nearly optimal in the following strong sense: it almost matches even the lower bounds for constructing (basic) BFS and MST trees.
In this article, we describe a neighbour disjoint multipath (NDM) scheme that is shown to be more resilient amidst node or link failures compared to the two well-known node disjoint and edge disjoint multipath techniques. A centralised NDM was first conceptualised in our initial published work utilising the spatial diversity among multiple paths to ensure robustness against localised poor channel quality or node failures. Here, we further introduce a distributed version of our NDM algorithm adapting to the low-power and lossy network (LLN) characteristics. We implement our distributed NDM algorithm in Contiki OS on top of LOADng—a lightweight On-demand Ad hoc Distance Vector Routing protocol. We compare this implementation's performance with a standard IPv6 Routing Protocol for Low power and Lossy Networks (RPL), and also with basic LOADng, running in the Cooja simulator. Standard performance metrics such as packet delivery ratio, end-to-end latency, overhead and average routing table size are identified for the comparison. The results and observations are provided considering a few different application traffic patterns, which serve to quantify the improvements in robustness arising from NDM. The results are confirmed by experiments using a public sensor network testbed with over 100 nodes.
In view of the high demand for the security of visiting data in power systems, a network data security analysis method based on deep packet inspection (DPI) technology is put forward in this paper to solve the problem of the security gateway judging the legality of network data. Since the legitimacy of the data involves both the data protocol and the data contents, this article filters the data through protocol matching and content detection: DPI technology is used to screen the protocol, and protocol analysis is used to inspect the data contents. This paper implements the function of allowing secure data through the gateway while blocking threatening data. An example proves that the method more effectively guarantees the safety of visiting data.
Content-centric networking (CCN) is a networking paradigm that emphasizes request-response-based data transfer. A consumer issues a request explicitly referencing desired data by name. A producer assigns a name to each data item it publishes. Names are used both to identify data and to route traffic between consumers and producers. The type, format, and representation of names are fundamental to CCN. Currently, names are represented as human-readable application-layer URIs. This has several important security and performance implications for the network. In this paper, we propose to transparently decouple application-layer names from their network-layer counterparts. We demonstrate a mapping between the two namespaces that can be deterministically computed by consumers and producers, using application names formatted according to the standard CCN URI scheme. Meanwhile, consumers and producers can continue to use application-layer names. We detail the computation and mapping function requirements and discuss their impact on consumers, producers, and routers. Finally, we comprehensively analyze several mapping functions to show their functional equivalence to standard application names and argue that they address several issues that stem from propagating application names into the network.
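One way such a deterministic mapping could work is sketched below; this hash-per-segment construction and the example URI are our own illustration, not necessarily one of the mapping functions the paper analyzes:

```python
# Toy sketch: map a human-readable CCN application-layer URI to an opaque
# network-layer name by hashing each segment. Consumers and producers can
# compute it independently, so no extra coordination is needed.
import hashlib

def to_network_name(app_uri):
    """Replace each URI segment with a truncated SHA-256 digest."""
    segments = [s for s in app_uri.split("/") if s]
    return "/" + "/".join(
        hashlib.sha256(s.encode()).hexdigest()[:16] for s in segments)

# Both endpoints derive the same network name deterministically, while the
# human-readable segments never propagate into the network:
a = to_network_name("/parc/videos/demo.mp4")
b = to_network_name("/parc/videos/demo.mp4")
print(a == b)            # → True: the mapping is deterministic
print("parc" in a)       # → False: application names stay out of the network
```

Routers then match and forward on the opaque names only, which is the decoupling the paper argues for.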
This paper establishes a new framework for electrical cyber-physical systems (ECPSs). The communication network is designed by the characteristics of a power grid. The interdependent relationship of communication networks and power grids is described by data-uploading channels and commands-downloading channels. Control strategies (such as load shedding and relay protection) are extended to this new framework for analyzing the performance of ECPSs under several attack scenarios. The fragility of ECPSs under cyber attacks (DoS attack and false data injection attack) and the effectiveness of relay protection policies are verified by experimental results.
In this paper, we propose a new risk analysis framework that enables supervision of risks in complex and distributed systems. Our contribution is twofold. First, we provide the Risk Assessment Graphs (RAGs) as a model of risk analysis. This graph-based model is adaptable to changes in the system over time. We also introduce the potentiality and the accessibility functions which, during each time slot, evaluate, respectively, the chance of exploiting the RAG's nodes and the connection time between these nodes. In addition, we provide a worst-case risk evaluation approach, based on the assumption that intruders usually aim at maximising their benefits by inflicting the maximum damage on the target system (i.e., choosing the most likely paths in the RAG). We then introduce three security metrics: the propagated risk, the node risk, and the global risk. We illustrate the use of our framework through the simple example of an enterprise email service. Our framework achieves both flexibility and generality requirements: it can be used to assess external threats as well as insider ones, and it applies to a wide set of applications.
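The worst-case idea behind such a graph model can be sketched as follows; the node names, potentiality values, and the simple product rule below are our own invented illustration, not the paper's exact metric definitions:

```python
# Toy sketch of worst-case risk over an attack graph: the intruder is
# assumed to follow the path that maximises the exploitation likelihood.
# Nodes, values, and the product aggregation are illustrative only.
potentiality = {"web": 0.9, "mail": 0.6, "db": 0.4}   # chance of exploiting each node
edges = {"web": ["mail", "db"], "mail": ["db"], "db": []}

def worst_case_risk(start, target, acc=1.0):
    """Max product of node potentialities over all start -> target paths."""
    acc *= potentiality[start]
    if start == target:
        return acc
    branches = [worst_case_risk(nxt, target, acc) for nxt in edges[start]]
    return max(branches, default=0.0)

# The attacker's most likely route from the web front end to the database
# is the direct one (0.9 * 0.4), not via the mail server (0.9 * 0.6 * 0.4):
print(round(worst_case_risk("web", "db"), 3))   # → 0.36
```

A time-sliced version would additionally weight each hop by an accessibility term, which is the role the accessibility function plays in the framework.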
The performance of clustering is a crucial challenge, especially for pattern recognition. Model aggregation has a positive impact on the efficiency of data clustering; this technique is used to obtain better decision boundaries by aggregating the resulting clustering models. In this paper, we study an aggregation scheme to improve the stability and accuracy of clustering, which allows finding a reliable and robust clustering model. We demonstrate the advantages of our aggregation method by running Fuzzy C-Means (FCM) clustering on the Reuters-21578 corpus. Experimental studies showed that our scheme optimized the bias-variance tradeoff of the selected model and achieved enhanced clustering for unstructured textual resources.
As the amount of spatial data gets bigger, organizations have realized that it is cheaper and more flexible to keep their data in the Cloud rather than to establish and maintain huge in-house data centers. Though this saves a lot in IT costs, organizations are still concerned about the privacy and security of their data. Encrypting the whole database before uploading it to the Cloud solves the security issue, but querying the database then requires downloading and decrypting the data set, which is impractical. In this paper, we propose a new scheme for protecting the privacy and integrity of spatial data stored in the Cloud while being able to execute range queries efficiently. The proposed technique suggests a new index structure to support answering range queries over encrypted data sets. The proposed indexing scheme is based on the Z-curve. The paper describes a distributed algorithm for answering range queries over spatial data stored in the Cloud. We carried out many simulation experiments to measure the performance of the proposed scheme. The experimental results show that the proposed scheme outperforms the most recent schemes by Kim et al. in terms of data redundancy.
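The Z-curve underlying the index can be sketched in a few lines (a generic Morton-order encoding, not the paper's full encrypted index): interleaving the bits of x and y yields a 1-D key that largely preserves spatial locality, so 2-D range queries can be mapped to 1-D key ranges.

```python
# Minimal sketch of Z-order (Morton) encoding for 2-D points.
def morton_key(x, y, bits=16):
    """Interleave the bits of x and y into a single Z-order key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)        # x bits in even positions
        key |= ((y >> i) & 1) << (2 * i + 1)    # y bits in odd positions
    return key

# Points that are close in 2-D tend to receive nearby keys, which is what
# makes a 1-D index over the keys useful for spatial range queries:
pts = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 2)]
print([morton_key(x, y) for x, y in pts])   # → [0, 1, 2, 3, 12]
```

In an encrypted setting, only the keys (or transformations of them) need to be exposed to the Cloud, not the raw coordinates.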
With the increasing popularity of cloud storage services, many individuals and enterprises have started to move their local data to the clouds. To ensure their privacy and data security, some cloud service users may want to encrypt their data before outsourcing it. However, this impedes efficient data utilization based on plain-text search. In this paper, we study how to construct a secure index that supports both efficient index updating and similarity search. Using the secure index, users are able to efficiently perform similarity searches that tolerate input mistakes and update the index when new data are available. We formally prove the security of our proposal and also perform experiments on real-world data to show its efficiency.
SMS (Short Messaging Service) is a text messaging service for mobile users to exchange short text messages. It is also widely used to provide SMS-powered services (e.g., mobile banking). With the rapid deployment of all-IP 4G mobile networks, the underlying technology of SMS evolves from the legacy circuit-switched network to the IMS (IP Multimedia Subsystem) system over packet-switched network. In this work, we study the insecurity of the IMS-based SMS. We uncover its security vulnerabilities and exploit them to devise four SMS attacks: silent SMS abuse, SMS spoofing, SMS client DoS, and SMS spamming. We further discover that those SMS threats can propagate towards SMS-powered services, thereby leading to three malicious attacks: social network account hijacking, unauthorized donation, and unauthorized subscription. Our analysis reveals that the problems stem from the loose security regulations among mobile phones, carrier networks, and SMS-powered services. We finally propose remedies to the identified security issues.
The threat that malicious insiders pose towards organisations is a significant problem. In this paper, we investigate the task of detecting such insiders through a novel method of modelling a user's normal behaviour in order to detect anomalies in that behaviour which may be indicative of an attack. Specifically, we make use of Hidden Markov Models to learn what constitutes normal behaviour, and then use them to detect significant deviations from that behaviour. Our results show that this approach is indeed successful at detecting insider threats, and in particular is able to accurately learn a user's behaviour. These initial tests improve on existing research and may provide a useful approach in addressing this part of the insider-threat challenge.
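The detection idea can be sketched with a tiny discrete HMM; the states, transition/emission probabilities, and threshold below are invented for illustration (a real system would learn them from user activity logs, and this is not the paper's trained model):

```python
# Toy sketch of HMM-based insider-threat detection: score an activity
# sequence by its log-likelihood under the user's "normal" HMM and flag
# large deviations. All parameters here are invented, not learned.
import math

# 2 hidden states, 3 observable actions (0=login, 1=email, 2=copy-files)
start = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit  = [[0.5, 0.45, 0.05], [0.3, 0.6, 0.1]]   # copying files is rare

def log_likelihood(obs):
    """Forward algorithm: log P(obs) under the normal-behaviour HMM."""
    alpha = [start[s] * emit[s][obs[0]] for s in range(2)]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * trans[p][s] for p in range(2)) * emit[s][o]
                 for s in range(2)]
    return math.log(sum(alpha))

def is_anomalous(obs, threshold=-1.5):
    # per-symbol log-likelihood, so sequences of different lengths compare
    return log_likelihood(obs) / len(obs) < threshold

print(is_anomalous([0, 1, 1, 0, 1]))   # → False: a typical daily routine
print(is_anomalous([2, 2, 2, 2, 2]))   # → True: sustained bulk copying
```

The threshold trades off false alarms against missed attacks, which mirrors the evaluation question the paper's initial tests address.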
We introduce the Nondeterministic Strong Exponential Time Hypothesis (NSETH) as a natural extension of the Strong Exponential Time Hypothesis (SETH). We show that both refuting and proving NSETH would have interesting consequences. In particular we show that disproving NSETH would give new nontrivial circuit lower bounds. On the other hand, NSETH implies non-reducibility results, i.e. the absence of (deterministic) fine-grained reductions from SAT to a number of problems. As a consequence we conclude that unless this hypothesis fails, problems such as 3-SUM, APSP and model checking of a large class of first-order graph properties cannot be shown to be SETH-hard using deterministic or zero-error probabilistic reductions.