Bibliography
Multiple string matching plays a fundamental role in network intrusion detection systems. Automata-based multiple string matching algorithms such as AC, SBDM, and SBOM are widely used in practice, but the huge memory usage of their automata prevents them from being applied to large-scale pattern sets, and the poor cache locality of huge automata degrades matching speed. Here we propose a space-efficient multiple string matching algorithm, BVM, which uses bit-vectors and a succinct hash table to replace the automata used in factor-searching-based algorithms. The space complexity of the proposed algorithm is O(rm^2 + Σ_{p∈P} |p|), making it more space-efficient than the classic automata-based algorithms. Experiments on datasets including Snort, ClamAV, a URL blacklist, and synthetic rules show that the proposed algorithm significantly reduces memory usage while still running at a fast matching speed. In particular, BVM costs less than 0.75% of the memory usage of AC and is capable of matching millions of patterns efficiently.
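Since BVM itself is not specified in the abstract, the sketch below only illustrates the bit-vector (bit-parallel) matching idea it builds on, in the style of the classic Shift-And algorithm, using Python integers as arbitrary-length bit-vectors; the function names and single-mask layout are assumptions, not the paper's data structures.

```python
# Illustrative Shift-And style bit-parallel matching over a small pattern set.
# Not the authors' BVM algorithm; a sketch of the underlying bit-vector idea.

def build_masks(patterns):
    """Concatenate all patterns into one bit space; one bit per pattern position."""
    masks = {}          # char -> bit mask over all pattern positions
    starts = 0          # bits marking the first position of each pattern
    ends = 0            # bits marking the last position of each pattern
    offset = 0
    for p in patterns:
        for i, ch in enumerate(p):
            masks[ch] = masks.get(ch, 0) | (1 << (offset + i))
        starts |= 1 << offset
        ends |= 1 << (offset + len(p) - 1)
        offset += len(p)
    return masks, starts, ends

def shift_and_search(text, patterns):
    masks, starts, ends = build_masks(patterns)
    state = 0
    hits = []
    for pos, ch in enumerate(text):
        # advance every partial match by one position and restart at pattern starts
        state = ((state << 1) | starts) & masks.get(ch, 0)
        if state & ends:
            hits.append(pos)   # some pattern ends at this text position
    return hits

print(shift_and_search("intrusion detection system", ["usion", "tect", "stem"]))
```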
The Reduction of Quality (RoQ) attack is a stealthy denial-of-service attack. It can degrade or inhibit normal TCP flows in the network. Victims can hardly perceive it, since network throughput decreases rather than surges during the attack; the attack is therefore well hidden and difficult for existing detection systems to detect. Based on the principles of time-frequency analysis, we propose a two-stage detection algorithm that combines anomaly detection with misuse detection. In the first stage, we detect potential anomalies by analyzing network traffic with wavelet multiresolution analysis and, according to their time-domain characteristics, locate the abrupt change points. In the second stage, we further analyze the local traffic around each abrupt change point and extract potential attack characteristics by autocorrelation analysis. Through this two-stage detection, we can ultimately confirm whether the network is under attack. Results of simulations and real network experiments demonstrate that our algorithm can detect RoQ attacks with high accuracy and high efficiency.
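A minimal two-stage sketch of the detection idea described above, under stated assumptions: one-level Haar detail coefficients stand in for the paper's wavelet multiresolution analysis, and a normalized autocorrelation of the local window checks for the periodic pulsing typical of RoQ attacks; all thresholds and the toy traffic trace are illustrative only.

```python
import numpy as np

def haar_detail(x):
    """One-level Haar wavelet detail coefficients (sensitive to abrupt changes)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - len(x) % 2
    return (x[0:n:2] - x[1:n:2]) / np.sqrt(2.0)

def locate_change_point(traffic, k=4.0):
    """Stage 1: first detail coefficient whose magnitude exceeds k standard deviations."""
    d = haar_detail(traffic)
    big = np.where(np.abs(d) > k * np.std(d))[0]
    return 2 * big[0] if big.size else None   # map back to an approximate sample index

def looks_periodic(local_traffic, min_peak=0.5):
    """Stage 2: a strong secondary autocorrelation peak suggests periodic on/off pulses."""
    x = np.asarray(local_traffic, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / ac[0]
    return np.max(ac[2:]) > min_peak          # skip lags 0 and 1

# toy example: steady background traffic plus periodic low-rate pulses after t=500
rng = np.random.default_rng(0)
traffic = rng.normal(100, 5, 1000)
traffic[500::50] -= 60                        # a pulse every 50 samples
cp = locate_change_point(traffic)
if cp is not None and looks_periodic(traffic[cp:cp + 400]):
    print("potential RoQ attack near sample", cp)
```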
We consider the problem of communicating information over a network secretly and reliably in the presence of a hidden adversary who can eavesdrop and inject malicious errors. We provide polynomial-time distributed network codes that are information-theoretically rate-optimal for this scenario, improving on the rates achievable in prior work by Ngai et al. Our main contribution shows that as long as the sum of the number of links the adversary can jam (denoted Z_O) and the number of links he can eavesdrop on (denoted Z_I) is less than the network capacity (denoted C), i.e., Z_O + Z_I < C, our codes can communicate (with vanishingly small error probability) a single bit correctly and without leaking any information to the adversary. We then use this scheme as a module to design codes that allow communication at the source rate C - Z_O when there are no security requirements, and codes that allow communication at the source rate C - Z_O - Z_I while keeping the communicated message provably secret from the adversary. Interior nodes are oblivious to the presence of adversaries and perform random linear network coding; only the source and destination need to be modified. We also prove that the rate region obtained is information-theoretically optimal. In proving our results, we correct an error in prior work by a subset of the authors of this paper.
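For concreteness, the claimed rate region can be restated as a tiny helper with a worked example; this is pure arithmetic on the quantities named above, with hypothetical numbers, and contains no protocol logic.

```python
# Restates the rate region C - Z_O (reliable) and C - Z_O - Z_I (reliable and secret),
# valid under the condition Z_O + Z_I < C stated in the abstract.
def achievable_rates(C, Z_O, Z_I):
    assert Z_O + Z_I < C, "the scheme requires Z_O + Z_I < C"
    return {"reliable_only": C - Z_O, "reliable_and_secret": C - Z_O - Z_I}

# hypothetical example: capacity 5, one jammed link, two eavesdropped links
print(achievable_rates(C=5, Z_O=1, Z_I=2))   # reliable rate 4, secret rate 2
```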
Detection of high-risk network flows and high-risk hosts is becoming ever more important and more challenging. In order to selectively apply deep packet inspection (DPI), one has to isolate, in real time, high-risk network activities within a huge number of monitored network flows. To help address this problem, we propose an iterative methodology for the simultaneous assessment of risk scores for both hosts and network flows. The proposed approach measures the risk scores of hosts and flows in an interdependent manner: the risk score of a flow influences the risk scores of its source and destination hosts, and the risk score of a host is in turn evaluated by taking into account the risk scores of the flows initiated by or terminated at that host. Our experimental results show that such an approach is not only effective in detecting high-risk hosts and flows but, when deployed in high-throughput networks, is also more efficient than PageRank-based algorithms.
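A hedged sketch of the mutual-reinforcement idea (not the authors' exact update rules): flow risk feeds into the risk of its endpoint hosts, and host risk feeds back into flow risk, iterated to a fixed point. The blending weight, iteration count, and initial per-flow scores are assumptions; in practice the base scores would come from lightweight flow features.

```python
from collections import defaultdict

def score_hosts_and_flows(flows, base_risk, iters=20, alpha=0.5):
    """flows: list of (flow_id, src_host, dst_host); base_risk: flow_id -> score in [0, 1]."""
    flow_risk = dict(base_risk)
    host_risk = defaultdict(float)
    by_host = defaultdict(list)
    for fid, src, dst in flows:
        by_host[src].append(fid)
        by_host[dst].append(fid)
    for _ in range(iters):
        # host risk = average risk of the flows it initiates or terminates
        for h, fids in by_host.items():
            host_risk[h] = sum(flow_risk[f] for f in fids) / len(fids)
        # flow risk = its own base risk blended with its endpoints' risk
        for fid, src, dst in flows:
            endpoint = 0.5 * (host_risk[src] + host_risk[dst])
            flow_risk[fid] = (1 - alpha) * base_risk[fid] + alpha * endpoint
    return host_risk, flow_risk

flows = [("f1", "A", "B"), ("f2", "A", "C"), ("f3", "D", "C")]
hosts, per_flow = score_hosts_and_flows(flows, {"f1": 0.9, "f2": 0.2, "f3": 0.1})
print(sorted(hosts.items(), key=lambda kv: -kv[1]))   # host A inherits risk from f1
```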
It has gradually been realized in industry that the increasing complexity of cloud computing, arising from the interaction of technology, business, society, and the like, cannot be resolved simply by research on information technology; it should instead be explained and studied from a systematic, scientific perspective, on the basis of the theory and methods of complex adaptive systems (CAS). Addressing basic problems in the CAS theoretical framework, this article studies the definition of the active, adaptive agents that constitute a cloud computing system and proposes a service-agent concept and basic model by abstracting commonalities from two basic levels, cloud computing technology and business, thus laying a foundation for further research on cloud computing complexity as well as for multi-agent-based simulation of cloud computing environments.
In this paper, we present an approach for the transformation of workflow applications based on institution theory. The workflow application is modeled with a UML Activity Diagram (UML AD). Then, for formal verification purposes, the graphical model is translated into an Event-B specification. Institution theory is used at two levels. First, we define local semantics for UML AD and for Event-B specifications through a categorical description of each. Second, we define an institution comorphism to link the two institutions. Because institution theory is used throughout, the theoretical foundations of our approach are studied within a single mathematical framework. The Event-B specification resulting from the transformation is used for the formal verification of functional properties and of the absence of problems such as deadlock. Additionally, through the institution comorphism, we define the semantic correctness and coherence of the model transformation.
Smart grids, where cyber infrastructure is used to make power distribution more dependable and efficient, are prime examples of modern infrastructure systems. The cyber infrastructure provides monitoring and decision support intended to increase the dependability and efficiency of the system. This comes at the cost of vulnerability to accidental failures and malicious attacks, due to the greater extent of virtual and physical interconnection. Any failure can propagate more quickly and extensively, and as such, the net result could be lowered reliability. In this paper, we describe metrics for assessment of two phases of smart grid operation: the duration before a failure occurs, and the recovery phase after an inevitable failure. The former is characterized by reliability, which we determine based on information about cascading failures. The latter is quantified using resilience, which can in turn facilitate comparison of recovery strategies. We illustrate the application of these metrics to a smart grid based on the IEEE 9-bus test system.
Demand response management (DRM) is one of the main features of the smart grid and is realized via communication between power providers and consumers. Due to the vulnerabilities of communication channels, communication is not perfect in practice and may be threatened by jamming attacks. In this paper, we consider jamming attacks on the wireless communication used for the smart grid. First, the DRM performance degradation introduced by unreliable communication is studied in detail. Second, a regret-matching-based anti-jamming algorithm is proposed to enhance the performance of both communication and DRM. Finally, numerical results are presented to illustrate the impact of unreliable communication on DRM and the performance of the proposed anti-jamming algorithm.
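As an illustration of the regret-matching ingredient only, the sketch below has a transmitter choose among channels by standard regret matching (in the Hart and Mas-Colell style) against a jammer with a fixed channel preference; the game setup, payoffs, and jammer behavior are assumptions, not the paper's model.

```python
import random

def regret_matching_transmitter(n_channels=4, rounds=5000, seed=1):
    rng = random.Random(seed)
    regret = [0.0] * n_channels
    wins = 0.0

    def utility(ch, jammed):
        return 0.0 if ch == jammed else 1.0      # transmission succeeds iff not jammed

    for _ in range(rounds):
        # play each channel with probability proportional to its positive regret
        pos = [max(r, 0.0) for r in regret]
        total = sum(pos)
        probs = [p / total for p in pos] if total > 0 else [1.0 / n_channels] * n_channels
        ch = rng.choices(range(n_channels), weights=probs)[0]
        jammed = rng.choice([0, 0, 1, 2])        # assumed jammer: favors channel 0
        wins += utility(ch, jammed)
        # update regret for not having played each alternative this round
        for a in range(n_channels):
            regret[a] += utility(a, jammed) - utility(ch, jammed)
    return wins / rounds

print("success rate:", regret_matching_transmitter())
```

Over many rounds the transmitter's play concentrates on rarely jammed channels, which is the kind of adaptive channel avoidance the anti-jamming scheme relies on.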
More and more intelligent functions are being proposed, designed, and implemented in meters to make the power supply smart. However, these complex functions also bring risks: smart meters become susceptible to vulnerabilities and attacks. In this paper we present the rat-group attack, which exploits vulnerabilities of smart meters in the cyber world but spreads in the physical world because of its direct economic benefits. To the best of our knowledge, no systematic work has been conducted on this attack. Game theory is applied to analyze the attack, and two game models are proposed and compared under different assumptions. The analysis results suggest that the power company should follow an open defense policy: disclosing the defense parameters to all users (i.e., the potential attackers) results in less loss under attack.
Modern society increasingly relies on electrical service, which also brings the risk of catastrophic consequences, e.g., large-scale blackouts. The current literature reveals the vulnerability of power grids under the assumption that substations/transmission lines are removed or attacked synchronously. In reality, however, it is highly possible that such removals are conducted sequentially. Motivated by this observation, we introduce a new attack scenario, called the sequential attack, which assumes that substations/transmission lines are removed sequentially rather than synchronously. In particular, we find that the sequential attack can discover many combinations of substations whose failures cause large blackouts; these combinations are overlooked by the synchronous attack. In addition, we propose a new metric, called the sequential attack graph (SAG), and a practical attack strategy based on SAG. In simulations, we adopt three test benchmarks and five comparison schemes. The simulation results and complexity analysis show that the proposed scheme achieves strong performance with low complexity.
This paper proposes a methodology to assess cyber-related risks and to identify critical assets at both the power grid and substation levels. The methodology is based on a two-pass engine model. The first-pass engine identifies the most critical substation(s) in a power grid; a mixture of the analytic hierarchy process (AHP) and (N-1) contingency analysis is used to calculate risks. The second-pass engine identifies risky assets within a substation and reduces the vulnerability of the substation to intrusion and malicious acts by cyber hackers. The risk methodology uniquely combines asset reliability, vulnerability, and cost of attack into a risk index. A methodology is also presented to improve the overall security of a substation by optimally placing security agent(s) on the automation system.
Vulnerability analysis is vital for safely running power grids. The simultaneous attack, which applies multiple failures at the same time, does not consider the time domain in applying failures and is therefore limited in finding unknown vulnerabilities of power grid networks. In this paper, we introduce a new attack scenario, called the sequential attack, in which the failures of multiple network components (i.e., links/nodes) occur at different times. The sequence of such failures can be carefully arranged by attackers to maximize attack performance. This attack scenario opens a new angle for analyzing and discovering vulnerabilities of grid networks. The IEEE 39-bus system is adopted as the test benchmark to compare the proposed attack scenario with the existing simultaneous attack scenario, and new vulnerabilities are found. For example, the sequential failure of two links, e.g., links 26 and 39 in the test benchmark, can cause 80% power loss, whereas their simultaneous failure causes less than 10% power loss. In addition, the sequential attack is demonstrated to be statistically stronger than the simultaneous attack. Finally, several metrics are compared and discussed in terms of whether they can sharply reduce the search space for identifying strong sequential attacks.
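A toy comparison of simultaneous versus sequential removal, hedged heavily: the paper measures blackout size via power-flow cascades on IEEE test systems, whereas this sketch uses a crude betweenness-overload cascade and largest-component survival on an abstract graph (networkx's karate club graph), purely to illustrate why the ordering of failures can matter; the targets, slack factor, and capacity model are assumptions.

```python
import networkx as nx

def surviving_fraction(H, n_total):
    if H.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(H)) / n_total

def cascade(H, capacity, slack=0.2):
    """Crude overload cascade: drop any node whose current betweenness load
    exceeds its capacity (initial load times (1 + slack)); repeat until stable."""
    H = H.copy()
    while True:
        load = nx.betweenness_centrality(H)
        over = [n for n, l in load.items() if l > (1 + slack) * capacity[n] + 1e-9]
        if not over:
            return H
        H.remove_nodes_from(over)

G = nx.karate_club_graph()
capacity = nx.betweenness_centrality(G)      # capacities fixed by the intact network
targets = [0, 33]                            # hypothetical attack targets

# simultaneous attack: remove both targets, then settle the cascade once
H_sim = G.copy(); H_sim.remove_nodes_from(targets)
H_sim = cascade(H_sim, capacity)

# sequential attack: remove a target, settle the cascade, then remove the next
H_seq = G.copy()
for t in targets:
    if H_seq.has_node(t):
        H_seq.remove_node(t)
        H_seq = cascade(H_seq, capacity)

n = G.number_of_nodes()
print("surviving fraction, simultaneous:", round(surviving_fraction(H_sim, n), 2))
print("surviving fraction, sequential:  ", round(surviving_fraction(H_seq, n), 2))
```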
The security of complex networks has drawn significant concern recently. While purely topological analyses from a network security perspective provide some effective techniques, their inability to capture physical principles calls for a more comprehensive model to approximate the failure behavior of a complex network in reality. In this paper, based on an extended topological metric, we propose an approach to examine the vulnerability of a specific type of complex network, the power system, against cascading failure threats. The approach adopts a model called extended betweenness, which combines network structure with electrical characteristics to define the load on power grid components. Using this power transfer distribution factor-based model, we simulated attacks on different components (buses and branches) in the grid and evaluated the vulnerability of the system components with an extended topological cascading failure simulator. The influence of different loading and overloading situations on cascading failures was also evaluated by testing different tolerance factors. Simulation results on the standard IEEE 118-bus test system revealed the vulnerability of network components, which was then validated on a DC power flow simulator with comparisons to other topological measurements. Finally, potential extensions of the approach are discussed to illustrate both its utility and its challenges in more complex scenarios and applications.
This paper proposes a method for analyzing power grid vulnerability based on complex network theory. The method combines the degree and the betweenness of nodes or lines into a new index, which helps to analyze the vulnerability of power grids. Attacking the lines ranked highest by the new index yields a smaller largest-cluster size and lower global efficiency than attacks based on the pure degree index or betweenness index. Finally, fault simulation results on the IEEE 118-bus system show that the new index reveals the vulnerability of power grids more effectively.
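A minimal sketch of a combined index in the spirit described above: a weighted blend of normalized node degree and betweenness used to rank targets. The weights, normalization, and example graph are assumptions, not the paper's definition, which also covers line (edge) indices.

```python
import networkx as nx

def combined_index(G, w_degree=0.5, w_betw=0.5):
    """Blend normalized degree and betweenness into a single criticality score."""
    deg = dict(G.degree())
    max_deg = max(deg.values()) or 1
    betw = nx.betweenness_centrality(G)          # already normalized to [0, 1]
    return {n: w_degree * deg[n] / max_deg + w_betw * betw[n] for n in G}

G = nx.karate_club_graph()                       # stand-in topology, not the IEEE 118-bus system
ranking = sorted(combined_index(G).items(), key=lambda kv: -kv[1])
print("most critical nodes by combined index:", [n for n, _ in ranking[:5]])
```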
In military operations or emergency response situations, a commander very frequently needs to assemble and dynamically manage Community of Interest (COI) mobile groups to achieve a critical mission despite failure, disconnection, or compromise of COI members. We combine the design of COI hierarchical management, for scalability and reconfigurability, with COI dynamic trust management, for survivability and intrusion tolerance, to compose a scalable, reconfigurable, and survivable COI management protocol for mission-oriented COI mobile groups in heterogeneous mobile environments. A COI mobile group in this environment consists of heterogeneous mobile entities, such as communication-device-carrying personnel/robots and aerial or ground vehicles operated by humans, exhibiting not only quality-of-service (QoS) characteristics, e.g., competence and cooperativeness, but also social behaviors, e.g., connectivity, intimacy, and honesty. A COI commander or a subtask leader must measure trust with both social and QoS cognition, depending on mission task characteristics and/or trustee properties, to ensure successful mission execution. In this paper, we present a dynamic hierarchical trust management protocol that can learn from past experience and adapt to changing environment conditions, e.g., an increasing misbehaving-node population, evolving hostility, and changing node density, to enhance agility and maximize application performance. Taking trust-based misbehaving-node detection as an application, we demonstrate how our proposed COI trust management protocol is resilient to node failure, disconnection, and capture events, and can help maximize application performance in terms of minimizing false negatives and false positives in the presence of mobile nodes exhibiting vastly distinct QoS and social behaviors.
The term cloud computing is not something that appeared overnight; it arguably dates back to the time when computer systems remotely accessed applications and services. Cloud computing is a ubiquitous technology that is receiving huge attention in the scientific and industrial communities. It is a ubiquitous, next-generation information technology architecture that offers on-demand access over the network; it is a dynamic, virtualized, scalable, pay-per-use model delivered over the Internet. In a cloud computing environment, a cloud service provider offers a "house of resources," including applications, data, runtime, middleware, operating systems, virtualization, servers, data storage and sharing, and networking, and tries to take on most of the client's overhead. Cloud computing offers many benefits, but the journey to the cloud is not easy; it has several pitfalls along the road, because most services are outsourced to third parties, which adds a considerable level of risk. Cloud computing suffers from several issues, among the most significant being security, privacy, service availability, confidentiality, integrity, authentication, and compliance. Security is a shared responsibility of both the client and the service provider, and we believe security must be information-centric, adaptive, proactive, and built in. Cloud computing and its security are an emerging study area nowadays. In this paper, we discuss data security in the cloud at the service provider end and propose a network storage architecture for data that ensures availability, reliability, scalability, and security.
Recently, cloud computing has been spotlighted as a new paradigm for database management systems. In this environment, databases are outsourced and deployed on a service provider in order to reduce the cost of data storage and maintenance. However, the service provider might be untrusted, so two data security issues, data confidentiality and query result integrity, become major concerns for users. Existing bucket-based data authentication methods have the problem that the original spatial data distribution can be disclosed from the data authentication index because of their unsophisticated data grouping strategies; in addition, the transmission overhead of the verification object is high. In this paper, we propose a privacy-aware query authentication scheme that guarantees data confidentiality and query result integrity for users. A periodic function-based data grouping scheme is designed to privately partition a spatial database into small groups and generate a signature for each group. The group signatures are used to check the correctness and completeness of the outsourced data when answering a range query. Performance evaluation shows that the proposed method outperforms the existing method in range query processing time by up to a factor of three.
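A hedged sketch of group-digest-style query authentication: records are partitioned into small groups, the data owner keeps a keyed digest per group, and the client re-derives digests over the returned results. The modular grouping rule and HMAC-as-signature used here are simplifying stand-ins for the paper's periodic-function grouping and group signatures.

```python
import hashlib, hmac
from collections import defaultdict

OWNER_KEY = b"owner-signing-key"               # placeholder for a real signing key

def group_id(point, period=4):
    x, y = point
    return (x % period, y % period)            # toy "periodic" partition of 2-D space

def digest(records):
    h = hmac.new(OWNER_KEY, digestmod=hashlib.sha256)
    for r in sorted(records):                  # canonical order so digests are reproducible
        h.update(repr(r).encode())
    return h.hexdigest()

def build_digests(points):
    groups = defaultdict(list)
    for p in points:
        groups[group_id(p)].append(p)
    return {g: digest(rs) for g, rs in groups.items()}, groups

points = [(1, 2), (5, 6), (3, 3), (7, 7), (2, 9)]
digests, groups = build_digests(points)

# the client re-derives the digest of each returned group and compares it
returned_group = groups[(1, 2)]
assert digest(returned_group) == digests[(1, 2)]   # correctness and completeness of that group
```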
After the occurrence of numerous worldwide financial scandals, the importance of related issues such as internal control and information security has greatly increased. This study develops an internal control framework that can be applied within an enterprise resource planning (ERP) system. A literature review is first conducted to examine the necessary forms of internal control in information technology (IT) systems. The control criteria for the establishment of the internal control framework are then constructed. A case study is conducted to verify the feasibility of the established framework. This study proposes a 12-dimensional framework with 37 control items aimed at helping auditors perform effective audits by inspecting essential internal control points in ERP systems. The proposed framework allows companies to enhance IT audit efficiency and mitigates control risk. Moreover, companies that refer to this framework and consider the limitations of their own IT management can establish a more robust IT management mechanism.
As wireless networks become more pervasive, the amount of wireless data is rapidly increasing. One of the biggest challenges to the wide adoption of distributed data storage is how to store these data securely. In this work, we study the frequency-based attack, a type of attack different from those previously well studied, which exploits additional adversary knowledge of domain values and/or their exact or approximate frequencies to crack the encrypted data. To cope with frequency-based attacks, straightforward 1-to-1 substitution encryption functions are not sufficient. We propose a data encryption strategy based on 1-to-n substitution via dividing and emulating techniques to defend against the frequency-based attack while enabling efficient query evaluation over encrypted data. We further develop two frameworks, incremental collection and clustered collection, to defend against the global frequency-based attack when knowledge of the global frequencies in the network is not available. Built upon our basic encryption schemes, we derive two mechanisms, direct emulating and dual encryption, to handle updates on the data storage for energy-constrained sensor nodes and wireless devices. Our preliminary experiments with sensor nodes and extensive simulation results show that our data encryption strategy can achieve a high security guarantee with low overhead.
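A hedged sketch of the 1-to-n substitution idea: a frequent plaintext value is mapped to several ciphertext tokens so that no single token is conspicuously frequent, which blunts a frequency-based attack. The keyed-token construction, bucket size, and toy data below are illustrative assumptions, not the paper's dividing-and-emulating scheme.

```python
import hashlib, hmac, math, random
from collections import Counter

SECRET_KEY = b"demo-key"                       # placeholder key for the sketch

def token(value, i):
    """Deterministic keyed token for (value, i); stands in for a real cipher."""
    return hmac.new(SECRET_KEY, f"{value}|{i}".encode(), hashlib.sha256).hexdigest()[:16]

def build_one_to_n_table(value_counts, bucket_size):
    """Assign each plaintext value ceil(count / bucket_size) ciphertext tokens,
    so every token ends up with roughly the same frequency."""
    enc, dec = {}, {}
    for value, count in value_counts.items():
        n = max(1, math.ceil(count / bucket_size))
        enc[value] = [token(value, i) for i in range(n)]
        for t in enc[value]:
            dec[t] = value
    return enc, dec

data = ["flu"] * 60 + ["cold"] * 25 + ["rare"] * 5
enc_table, dec_table = build_one_to_n_table(Counter(data), bucket_size=10)
ciphertexts = [random.choice(enc_table[v]) for v in data]
print(Counter(ciphertexts).most_common(3))     # token frequencies are roughly flattened
assert all(dec_table[c] == v for c, v in zip(ciphertexts, data))
```

An equality query on the server side can then be rewritten as a membership test over the (small) token set of the queried value, which is how such schemes keep query evaluation efficient over encrypted data.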
Cloud computing is emerging as a new computing paradigm that aims to provide reliable, customized, quality-of-service-guaranteed computation environments for cloud users. Applications and databases are moved to large centralized data centers, called the cloud. Due to resource virtualization, global replication and migration, and the physical absence of data and machines in the cloud, the stored data and the computation results may not be well managed or fully trusted by cloud users. Most previous work on cloud security focuses on storage security rather than considering computation security as well. In this paper, we propose a privacy cheating discouragement and secure computation auditing protocol, SecCloud, the first protocol to bridge secure storage and secure computation auditing in the cloud, achieving privacy cheating discouragement via designated verifier signatures, batch verification, and probabilistic sampling techniques. A detailed analysis is given to obtain the optimal sampling size that minimizes the cost. Another major contribution of this paper is that we build a practical security-aware cloud computing experimental environment, SecHDFS, as a test bed on which to implement SecCloud. Experimental results demonstrate the effectiveness and efficiency of the proposed SecCloud.
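The probabilistic-sampling component lends itself to a small worked example. Using the standard detection-probability formula for sampling s of n stored blocks when t are corrupted (this formula is generic to sampling-based auditing, not SecCloud's specific cost model), one can compute how many blocks an auditor must challenge; the block counts and target probability below are hypothetical.

```python
from math import comb

def detection_probability(n, t, s):
    """P(at least one corrupted block among s sampled) = 1 - C(n - t, s) / C(n, s)."""
    if s > n - t:
        return 1.0
    return 1 - comb(n - t, s) / comb(n, s)

def min_sample_size(n, t, target=0.99):
    """Smallest sample size that reaches the target detection probability."""
    for s in range(1, n + 1):
        if detection_probability(n, t, s) >= target:
            return s
    return n

# e.g. 10,000 stored blocks, 1% corrupted: how many must the auditor challenge?
print(min_sample_size(10_000, 100, target=0.99))   # roughly 460 blocks
```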
Online social networks attract billions of users nowadays, both on a global scale and within social enterprise networks. Using distributed hash tables and peer-to-peer technology allows online social networks to be operated securely and efficiently using only the resources of the users' devices, thus alleviating censorship or data misuse by a single network operator. In this paper, we address the challenges that arise in implementing reliable and convenient-to-use distributed data structures, such as lists or sets, in such a distributed-hash-table-based online social network. We present a secure, distributed list data structure that manages the list entries in several buckets in the distributed hash table. List entries are authenticated, their integrity is maintained, and access control for single users as well as groups is integrated. The approach for secure distributed lists is also applied to prefix trees and sets, and implemented and evaluated in a peer-to-peer framework for social networks. The evaluation shows that the distributed data structure is convenient and efficient to use and that the security requirements hold.
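A hedged sketch of a bucketed distributed list: entries are spread over several DHT keys (buckets) and each entry carries an HMAC so readers can verify authenticity and integrity. The in-memory dict standing in for the DHT, the bucket-addressing rule, and the shared write key are assumptions made for the sketch, not the paper's design, which additionally handles per-user and group access control.

```python
import hashlib, hmac

DHT = {}                                       # key -> bucket (index -> entry); fake DHT
LIST_KEY = b"group-write-key"                  # shared by authorized writers (assumption)
BUCKET_SIZE = 4

def bucket_key(list_id, index):
    return f"{list_id}:bucket:{index // BUCKET_SIZE}"

def mac(value):
    return hmac.new(LIST_KEY, value.encode(), hashlib.sha256).hexdigest()

def append(list_id, length, value):
    """Append to the logical list; entries land in per-bucket DHT values."""
    bucket = DHT.setdefault(bucket_key(list_id, length), {})
    bucket[length] = {"value": value, "mac": mac(value)}
    return length + 1

def read_all(list_id, length):
    out = []
    for i in range(length):
        entry = DHT[bucket_key(list_id, i)][i]
        if hmac.compare_digest(mac(entry["value"]), entry["mac"]):   # integrity check
            out.append(entry["value"])
    return out

n = 0
for post in ["hello", "status update", "photo link"]:
    n = append("alice-feed", n, post)
print(read_all("alice-feed", n))
```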
Mobile cloud computing is a combination of mobile computing and cloud computing that provides a platform for mobile users to offload heavy tasks and data to the cloud, thus helping them overcome the limitations of their mobile devices. However, while using mobile cloud computing, users lose physical control of their data; this ultimately calls for a data security protocol. Although numerous such protocols have been proposed, none of them considers a cloudlet-based architecture. A cloudlet is a reliable, resource-rich computer or cluster that is well connected to the Internet and available to nearby mobile devices. In this paper, we propose a data security protocol for a distributed cloud architecture in which a cloudlet is integrated with the base station, using the property of perfect forward secrecy. Our protocol not only protects data from any unauthorized user, but also prevents exposure of the data to the cloud owner.
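A minimal sketch of the perfect-forward-secrecy property under assumptions: both sides generate fresh ephemeral X25519 keys per session and derive an AES-GCM session key via HKDF, so later compromise of long-term secrets cannot decrypt past sessions. This only illustrates the property; it is not the paper's cloudlet protocol and, among other things, omits authentication of the ephemeral keys.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_session_key(own_private, peer_public):
    """Ephemeral Diffie-Hellman shared secret, stretched into a 256-bit AES key."""
    shared = own_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"mobile-cloudlet-session").derive(shared)

# ephemeral keys: generated per session and discarded afterwards
device_eph = X25519PrivateKey.generate()
cloudlet_eph = X25519PrivateKey.generate()

k_device = derive_session_key(device_eph, cloudlet_eph.public_key())
k_cloudlet = derive_session_key(cloudlet_eph, device_eph.public_key())
assert k_device == k_cloudlet                  # both ends hold the same session key

nonce = os.urandom(12)
ciphertext = AESGCM(k_device).encrypt(nonce, b"offloaded user data", b"")
print(AESGCM(k_cloudlet).decrypt(nonce, ciphertext, b""))
```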
By identifying memory pages that external I/O operations have modified, a proposed scheme blocks malicious injected code activation, accurately distinguishing an attack from legitimate code injection with negligible performance impact and no changes to the user application.
The aim of this study is to examine the utility of physiological compliance (PC) to understand shared experience in a multiuser technological environment involving active and passive users. Common ground is critical for effective collaboration and important for multiuser technological systems that include passive users since this kind of user typically does not have control over the technology being used. An experiment was conducted with 48 participants who worked in two-person groups in a multitask environment under varied task and technology conditions. Indicators of PC were measured from participants' cardiovascular and electrodermal activities. The relationship between these PC indicators and collaboration outcomes, such as performance and subjective perception of the system, was explored. Results indicate that PC is related to group performance after controlling for task/technology conditions. PC is also correlated with shared perceptions of trust in technology among group members. PC is a useful tool for monitoring group processes and, thus, can be valuable for the design of collaborative systems. This study has implications for understanding effective collaboration.