Biblio
Reduction of Quality (RoQ) attack is a stealthy denial-of-service attack that can throttle or suppress normal TCP flows in a network. Victims can hardly perceive it, because the attack manifests as a gradual decrease in network throughput rather than a surge in traffic. The attack is therefore well concealed and difficult for existing detection systems to detect. Based on the principle of time-frequency analysis, we propose a two-stage detection algorithm that combines anomaly detection with misuse detection. In the first stage, we detect potential anomalies by analyzing network traffic with wavelet multiresolution analysis and locate abrupt change points according to their time-domain characteristics. In the second stage, we further analyze the local traffic around each abrupt change point and extract potential attack characteristics by autocorrelation analysis. The two-stage detection ultimately confirms whether the network is under attack. Results of simulations and real-network experiments demonstrate that our algorithm detects RoQ attacks with high accuracy and efficiency.
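A minimal sketch of the two-stage idea described above, assuming the traffic is available as a one-dimensional packet-count time series; the wavelet, decomposition level and thresholds are illustrative choices, not the authors' parameters.

```python
# Stage 1: wavelet multiresolution analysis to locate abrupt change points.
# Stage 2: autocorrelation of the local traffic around a change point.
import numpy as np
import pywt

def locate_change_points(traffic, wavelet="db4", level=4, z_thresh=3.0):
    """Flag abrupt changes from the finest-scale wavelet detail coefficients."""
    coeffs = pywt.wavedec(traffic, wavelet, level=level)
    d1 = coeffs[-1]                                    # finest detail coefficients
    score = np.abs(d1 - d1.mean()) / (d1.std() + 1e-12)
    return np.where(score > z_thresh)[0] * 2           # approximate sample positions

def periodic_attack_evidence(traffic, point, window=512, min_lag=5):
    """Look for the periodic pulses typical of RoQ attacks near a change point."""
    lo, hi = max(0, point - window // 2), min(len(traffic), point + window // 2)
    x = traffic[lo:hi] - np.mean(traffic[lo:hi])
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / (ac[0] + 1e-12)                          # normalized autocorrelation
    lag = min_lag + int(np.argmax(ac[min_lag:]))
    return lag, ac[lag]                                # candidate attack period and its strength
```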
A slow-paced persistent attack, such as a slow-paced worm or bot, can bewilder a detection system by slowing down its activity. Detecting such attacks with traditional anomaly detection techniques may yield high false alarm rates. In this paper, we frame the problem as detecting slow-paced persistent attacks from a time series obtained from a network trace. We focus on spectrum analysis of the time series to identify peculiar spectral patterns that may indicate the occurrence of a persistent activity in the time domain. We propose a method to adaptively detect slow-paced persistent attacks in a time series and evaluate the proposed method through experiments on both synthesized traffic and real-world traffic. The results show that the proposed method is capable of detecting slow-paced persistent attacks even in a noisy environment mixed with legitimate traffic.
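An illustrative sketch of detecting a slow, periodic activity from the spectrum of a traffic time series, used here as a simplified stand-in for the adaptive method described above; the per-second sampling, detrending and peak factor are assumptions.

```python
import numpy as np
from scipy.signal import periodogram

def detect_persistent_periodicity(counts, fs=1.0, peak_factor=10.0):
    """Flag a suspiciously strong spectral line relative to the median spectral power."""
    freqs, pxx = periodogram(counts, fs=fs, detrend="linear")
    freqs, pxx = freqs[1:], pxx[1:]            # drop the DC bin
    k = int(np.argmax(pxx))
    is_suspicious = pxx[k] > peak_factor * np.median(pxx)
    return is_suspicious, freqs[k]             # (alarm, dominant frequency in Hz)
```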
As a new computing mode, cloud computing can provide users with virtualized and scalable web services, but it faces serious security challenges. Access control is one of the most important measures for ensuring the security of cloud computing. However, directly applying traditional access control models to the Cloud cannot address the uncertainty and vulnerability caused by its open environment. In a cloud computing environment, data security during interactions between users and the Cloud can be effectively guaranteed only when the security and reliability of both parties are ensured. Building a mutual trust relationship between users and the cloud platform is therefore the key to implementing new access control methods in the cloud. Combining this with Trust Management (TM), a mutual trust based access control (MTBAC) model is proposed in this paper. The MTBAC model takes both the user's behavioral trust and the cloud service node's credibility into consideration. Trust relationships between users and cloud service nodes are established by a mutual trust mechanism, and the access control security problems are addressed by implementing the MTBAC model in a cloud computing environment. Simulation experiments show that the MTBAC model can guarantee secure interaction between users and cloud service nodes.
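A hypothetical illustration of a mutual-trust access decision in the spirit of MTBAC; the trust components, weights and thresholds below are assumptions made for the example, not the paper's model.

```python
from dataclasses import dataclass

@dataclass
class Entity:
    direct_trust: float       # trust derived from the entity's own observed behavior, in [0, 1]
    recommended_trust: float  # trust reported by third parties, in [0, 1]

def trust_value(e: Entity, alpha: float = 0.7) -> float:
    """Weighted combination of direct and recommended trust (weights are assumptions)."""
    return alpha * e.direct_trust + (1 - alpha) * e.recommended_trust

def grant_access(user: Entity, node: Entity,
                 user_threshold: float = 0.6, node_threshold: float = 0.6) -> bool:
    """Access is granted only if *both* sides meet their trust thresholds (mutual trust)."""
    return trust_value(user) >= user_threshold and trust_value(node) >= node_threshold
```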
Wireless channel reciprocity can be successfully exploited as a common source of randomness for the generation of a secret key by two legitimate users wishing to achieve confidential communication over a public channel. This paper presents an analytical framework to investigate the theoretical limits of secret-key generation when wireless multi-dimensional Gaussian channels are used as the source of randomness. The intrinsic secrecy content of wide-sense stationary wireless channels in the frequency, time and spatial domains is derived through asymptotic analysis as the number of observations in a given domain tends to infinity. Several significant case studies are presented in which single- and multiple-antenna eavesdroppers are considered. In the numerical results, the roles of signal-to-noise ratio, spatial correlation, and frequency and time selectivity are investigated.
We consider the problem of communicating information over a network secretly and reliably in the presence of a hidden adversary who can eavesdrop and inject malicious errors. We provide polynomial-time distributed network codes that are information-theoretically rate-optimal for this scenario, improving on the rates achievable in prior work by Ngai et al. Our main contribution shows that as long as the sum of the number of links the adversary can jam (denoted by Z_O) and the number of links he can eavesdrop on (denoted by Z_I) is less than the network capacity (denoted by C), i.e., Z_O + Z_I < C, our codes can communicate (with vanishingly small error probability) a single bit correctly and without leaking any information to the adversary. We then use this scheme as a module to design codes that allow communication at the source rate of C - Z_O when there are no security requirements, and codes that allow communication at the source rate of C - Z_O - Z_I while keeping the communicated message provably secret from the adversary. Interior nodes are oblivious to the presence of adversaries and perform random linear network coding; only the source and destination need to be modified. We also prove that the obtained rate region is information-theoretically optimal. In proving our results, we correct an error in prior work by a subset of the authors of this paper.
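For reference, the rates stated in the abstract can be summarized as follows (notation as above).

```latex
% C = network capacity, Z_O = links the adversary can jam, Z_I = links it can eavesdrop on
\begin{align*}
  &\text{feasibility condition:}               && Z_O + Z_I < C \\
  &\text{reliable communication (no secrecy):} && R \le C - Z_O \\
  &\text{reliable and secret communication:}   && R \le C - Z_O - Z_I
\end{align*}
```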
We investigate large wireless networks subject to security constraints. In contrast to the point-to-point, interference-limited communications considered in prior works, we propose schemes based on active cooperative relaying. We consider a network with n_l legitimate nodes, n_e eavesdroppers, and path loss exponent α ≥ 2. As long as n_e^2 (log n_e)^γ = o(n_l) holds for some positive γ, we show that one can obtain an unbounded secure aggregate rate. This means zero-cost secure communication, given a fixed total power constraint for the entire network. We achieve this result with (i) the source using a Wyner randomized encoder and a serial (multi-stage) block Markov scheme to cooperate with the relays, and (ii) the relays acting as a virtual multi-antenna to apply beamforming against the eavesdroppers. Our simpler parallel (two-stage) relaying scheme can achieve the same unbounded secure aggregate rate when n_e^{α/2 + 1} (log n_e)^{γ + δ(α/2 + 1)} = o(n_l) holds for some positive γ, δ.
In this paper, we consider the security of exact-repair regenerating codes operating at the minimum-storage-regenerating (MSR) point. The security requirement (introduced by Shah et al.) is that no information about the stored data file may be leaked in the presence of an eavesdropper who has access to the contents of ℓ1 nodes as well as all the repair traffic entering a second disjoint set of ℓ2 nodes. We derive an upper bound on the size of a data file that can be securely stored, which holds whenever ℓ2 ≤ d - k + 1. This upper bound proves the optimality of the product-matrix-based construction of secure MSR regenerating codes by Shah et al.
In this article, researcher collaboration patterns and research topics in Intelligence and Security Informatics (ISI) are investigated using social network analysis approaches. The collaboration networks exhibit the scale-free property and the small-world effect. From these networks, the authors identify the key researchers, institutions, and three important research topics.
Certificateless public key cryptography (CL-PKC) combines the advantages of both traditional PKC and identity-based cryptography (IBC), as it eliminates the certificate management problem of traditional PKC and resolves the key escrow problem of IBC. Recently, Choi et al. and Tso et al. proposed two different efficient certificateless short signature schemes and claimed that the schemes are secure against super adversaries and satisfy the strongest security notion. In this study, the authors show that both Choi et al.'s scheme and Tso et al.'s scheme are insecure against strong adversaries who can replace users' public keys and have access to the signing oracle under the replaced public keys.
NoSQL databases are easy to scale out because of their flexible schema and support for BASE (Basically Available, Soft State, Eventually Consistent) properties. Scaling out in most of these databases is supported by sharding, which is considered the key feature in providing faster reads and writes. However, securing data sharded over various servers is a challenging problem, because the data are processed and transmitted in a distributed fashion over an unsecured network. Although extensive research has been performed on NoSQL sharding mechanisms, no specific criterion has been defined to analyze the security of sharded architectures. This paper proposes an assessment criterion comprising various security features for the analysis of sharded NoSQL databases. It presents a detailed view of the security features offered by NoSQL databases and analyzes them with respect to the proposed assessment criteria. The presented analysis helps organizations select an appropriate and reliable database in accordance with their preferences and security requirements.
Detection of high-risk network flows and high-risk hosts is becoming ever more important and more challenging. In order to selectively apply deep packet inspection (DPI), one has to isolate, in real time, high-risk network activities within a huge number of monitored network flows. To help address this problem, we propose an iterative methodology for the simultaneous assessment of risk scores for both hosts and network flows. The proposed approach measures the risk scores of hosts and flows in an interdependent manner: the risk score of a flow influences the risk scores of its source and destination hosts, and the risk score of a host is in turn evaluated by taking into account the risk scores of flows initiated by or terminated at the host. Our experimental results show that such an approach is not only effective in detecting high-risk hosts and flows but, when deployed in high-throughput networks, is also more efficient than PageRank-based algorithms.
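A minimal sketch of interdependent host/flow risk scoring; the specific update rules, damping factor and initialization are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def score_hosts_and_flows(flows, hosts, iterations=20, damping=0.5):
    """flows: list of (src_host, dst_host, base_risk) tuples with base_risk in [0, 1]."""
    host_idx = {h: i for i, h in enumerate(hosts)}
    base = np.array([r for _, _, r in flows], dtype=float)
    host_risk, flow_risk = np.full(len(hosts), 0.5), base.copy()
    for _ in range(iterations):
        # a flow's risk is pulled toward the risk of its endpoint hosts
        endpoint = np.array([(host_risk[host_idx[s]] + host_risk[host_idx[d]]) / 2
                             for s, d, _ in flows])
        flow_risk = (1 - damping) * base + damping * endpoint
        # a host's risk is the mean risk of the flows it initiates or terminates
        sums, counts = np.zeros(len(hosts)), np.zeros(len(hosts))
        for (s, d, _), fr in zip(flows, flow_risk):
            for h in (s, d):
                sums[host_idx[h]] += fr
                counts[host_idx[h]] += 1
        host_risk = np.where(counts > 0, sums / np.maximum(counts, 1), host_risk)
    return dict(zip(hosts, host_risk)), flow_risk
```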
In order to conserve wireless sensor network (WSN) lifetime, data aggregation is applied. Some researchers consider the importance of security and propose secure data aggregation protocols. The essence of these secure approaches is to ensure that the aggregators aggregate the data in an appropriate and secure way. In this paper we describe the ESPDA (Energy-efficient and Secure Pattern-based Data Aggregation) and SRDA (Secure Reference-Based Data Aggregation) protocols, which operate on cluster-based WSNs, and provide a deeper security analysis that differs from those presented previously.
Denial-of-Service (DoS) and probe attacks are growing more modern and sophisticated in order to evade detection by Intrusion Detection Systems (IDSs) and to increase the threat to the availability of network services. Detecting these attacks is difficult for network operators using misuse-based IDSs, because they must anticipate attackers and upgrade their IDSs with new, accurate attack signatures. In this paper, we propose a novel signal- and image-processing-based method for detecting network probe and DoS attacks that requires no prior knowledge of the attacks. The method uses a time-frequency representation technique called the S-transform, an extension of the wavelet transform, to reveal abnormal frequency components caused by attacks in a traffic signal (e.g., a time series of packet counts). First, the S-transform converts the traffic signal into a two-dimensional image that describes the time-frequency behavior of the signal; frequencies that behave abnormally appear as abnormal regions in the image. Second, Otsu's method is used to detect the abnormal regions and identify the times at which attacks occur. We evaluated the effectiveness of the proposed method on several network probe and DoS attacks, such as port scans, packet flooding attacks, and a low-intensity DoS attack. The results clearly indicate that the method is effective for detecting probe and DoS attack streams injected into real-world Internet traffic.
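A minimal sketch of the time-frequency-image-plus-thresholding idea; note that an ordinary short-time Fourier spectrogram is used here as a stand-in for the S-transform, and the window length and dB scaling are assumptions.

```python
import numpy as np
from scipy.signal import spectrogram
from skimage.filters import threshold_otsu

def detect_attack_times(traffic, fs=1.0, nperseg=128):
    """Return time bins whose time-frequency energy exceeds an Otsu-selected threshold."""
    freqs, times, sxx = spectrogram(traffic, fs=fs, nperseg=nperseg)
    image = 10 * np.log10(sxx + 1e-12)          # dB-scaled time-frequency image
    mask = image > threshold_otsu(image)        # abnormal (high-energy) regions
    return times[mask.any(axis=0)]              # times with any abnormal frequency content
```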
In this paper, an edit detection method for forensic audio analysis is proposed. It develops and improves upon a previous method through changes in the signal-processing chain and a novel detection criterion. As in the original method, electrical network frequency (ENF) analysis is central to the new edit detector, since it allows monitoring anomalous variations of the ENF related to audio edit events. Working in an unsupervised manner, the edit detector compares the extent of ENF variations, centered at the nominal frequency, against a variable threshold that defines the upper limit for normal variations observed in unedited signals. ENF variations caused by edits are likely to exceed this threshold, providing a mechanism for their detection. The proposed method is evaluated in both qualitative and quantitative terms on two distinct annotated databases. Results are reported for the originally noisy database signals as well as for versions further degraded under controlled conditions. A comparative performance evaluation, in terms of equal error rate (EER), reveals that, for one of the tested databases, an improvement from 7% to 4% EER is achieved from the original to the new edit detection method. When the signals are amplitude-clipped or corrupted by broadband background noise, the performance figures of the new method follow the same profile as those of the original method.
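A minimal sketch of flagging anomalous ENF variations against a threshold; the STFT window length, the 50 Hz nominal frequency and the robust threshold rule are assumptions, not the detector proposed in the paper.

```python
import numpy as np
from scipy.signal import stft

def enf_edit_alarm(audio, fs, nominal=50.0, band=1.0, k=4.0):
    """Track the ENF near the nominal frequency and flag frames whose deviation
    exceeds k times the robust spread of the track (possible edit points)."""
    freqs, times, z = stft(audio, fs=fs, nperseg=int(8 * fs))   # ~8 s windows for finer resolution
    sel = (freqs > nominal - band) & (freqs < nominal + band)
    enf = freqs[sel][np.argmax(np.abs(z[sel, :]), axis=0)]      # peak frequency per frame
    deviation = np.abs(enf - np.median(enf))
    threshold = k * (np.median(deviation) + 1e-9)               # robust (MAD-style) limit
    return times[deviation > threshold]
```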
We characterize the secrecy level of communication under Uncoordinated Frequency Hopping, a spread-spectrum scheme in which a transmitter and a receiver randomly hop through a set of frequencies with the goal of deceiving an adversary. In our setting, the goal of the legitimate parties is to land on a given frequency without the adversary's eavesdroppers doing so, thereby being able to communicate securely in that period, which may be used for secret-key exchange. We also consider the effect on secrecy of the availability of friendly jammers, which can be used to obstruct eavesdroppers by causing them interference. Our results show that tuning the number of frequencies and adding friendly jammers are effective countermeasures against eavesdroppers.
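A small Monte Carlo sketch of the probability of a "secure slot" under a deliberately simplified uncoordinated-hopping model (uniform, independent channel choices per slot); this toy model is an assumption for illustration, not the paper's analysis.

```python
import random

def secure_slot_probability(n_freq=10, n_eves=3, trials=100_000, seed=1):
    """Fraction of slots in which Tx and Rx meet on a channel that no eavesdropper observes."""
    rng = random.Random(seed)
    secure = 0
    for _ in range(trials):
        tx, rx = rng.randrange(n_freq), rng.randrange(n_freq)
        eves = {rng.randrange(n_freq) for _ in range(n_eves)}
        if tx == rx and tx not in eves:
            secure += 1
    return secure / trials

# Under this toy model the closed form is P = (1/n_freq) * (1 - 1/n_freq) ** n_eves.
```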
The electric network frequency (ENF) criterion is a recently developed technique for audio timestamp identification, which involves matching an extracted ENF signal against reference data. For nearly a decade, the conventional matching criterion has been the minimum mean squared error (MMSE) or the maximum correlation coefficient. However, the corresponding performance is highly limited by low signal-to-noise ratio, short recording durations, frequency-resolution problems, and so on. This paper presents a threshold-based dynamic matching algorithm (DMA) that is capable of auto-correcting noise-affected frequency estimates. The threshold is chosen according to the frequency resolution determined by the short-time Fourier transform (STFT) window size. A penalty coefficient is introduced to monitor the auto-correction process and finally determine the estimated timestamp. It is then shown that the DMA generalizes the conventional MMSE method. By considering the mainlobe width in the STFT caused by limited frequency resolution, the DMA achieves improved identification accuracy and robustness against higher levels of noise and the offset problem. Synthetic performance analysis and practical experimental results are provided to illustrate the advantages of the DMA.
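For context, a minimal sketch of the conventional MMSE matching that the DMA generalizes (not the DMA itself); it assumes the query and reference ENF sequences are sampled at the same frame rate.

```python
import numpy as np

def mmse_timestamp(enf_query, enf_reference):
    """Slide the query over the reference and return the offset with minimum mean squared error."""
    n, m = len(enf_query), len(enf_reference)
    errors = [np.mean((enf_reference[k:k + n] - enf_query) ** 2) for k in range(m - n + 1)]
    best = int(np.argmin(errors))
    return best, errors[best]      # best frame offset into the reference, and its MSE
```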
Biometrics is attracting increasing attention in privacy- and security-sensitive applications, such as access control and remote financial transactions. However, advanced forgery and spoofing techniques are threatening the reliability of conventional biometric modalities. This has motivated our investigation of a novel yet promising modality, the transient evoked otoacoustic emission (TEOAE), an acoustic response generated by the cochlea after a click stimulus. Unlike conventional modalities that are easily accessible or captured, TEOAE, as a physiological outcome of the human auditory system, is naturally immune to replay and falsification attacks. In this paper, we resort to wavelet analysis to derive the time-frequency representation of this nonstationary signal, which reveals individual uniqueness and long-term reproducibility. Linear discriminant analysis, a machine learning technique, is subsequently utilized to reduce intra-subject variability and further capture inter-subject differentiating features. Considering practical application, we also introduce a complete framework of the biometric system in both verification and identification modes. Comparative experiments on a TEOAE data set collected in a biometric setting show the merits of the proposed method. Performance is further improved by fusing information from both ears.
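A minimal sketch of the wavelet-feature-plus-LDA pipeline described above; the chosen wavelet, decomposition depth and classifier settings are illustrative assumptions.

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def wavelet_features(teoae_signal, wavelet="db8", level=5):
    """Concatenate multilevel DWT coefficients into a time-frequency feature vector."""
    return np.concatenate(pywt.wavedec(teoae_signal, wavelet, level=level))

def train_identifier(signals, labels):
    """Fit an LDA classifier on wavelet features; identification = predicting the subject label."""
    X = np.vstack([wavelet_features(s) for s in signals])
    clf = LinearDiscriminantAnalysis()
    clf.fit(X, labels)
    return clf
```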
In this paper we investigate a secret sharing scheme based on a shortened systematic Reed-Solomon code. In the scheme, L secrets S1, S2, ..., SL and n shares X1, X2, ..., Xn satisfy a certain set of n - k + L linear equations. The security of this ramp secret sharing scheme is analyzed in detail. We prove that the scheme realizes a (k, n)-threshold scheme for L = 1 and a ramp (k, L, n)-threshold scheme for 2 ≤ L ≤ k - 1, under a certain assumption on S1, S2, ..., SL.
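A minimal sketch of a ramp (k, L, n) secret-sharing scheme over a prime field, written from the general ramp-scheme idea rather than the shortened systematic Reed-Solomon construction analyzed in the paper; the prime, the evaluation points and the secret embedding are assumptions.

```python
import random

P = 2_147_483_647  # a Mersenne prime; all arithmetic is modulo P

def _lagrange_eval(points, x):
    """Evaluate, at x, the unique polynomial through `points` = [(xi, yi), ...] (mod P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * ((x - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def share(secrets, k, n):
    """Split L secrets into n shares; any k shares recover all L secrets (ramp scheme)."""
    L = len(secrets)
    assert 1 <= L <= k - 1 and k <= n
    # anchor a degree-(k-1) polynomial at L secret positions and k - L random positions
    anchors = [(P - 1 - i, s % P) for i, s in enumerate(secrets)]
    anchors += [(P - 1 - L - i, random.randrange(P)) for i in range(k - L)]
    return [(x, _lagrange_eval(anchors, x)) for x in range(1, n + 1)]

def reconstruct(shares, k, L):
    """Recover the L secrets from any k shares by re-evaluating at the secret positions."""
    return [_lagrange_eval(shares[:k], P - 1 - i) for i in range(L)]
```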
It has gradually been realized in industry that the increasing complexity of cloud computing, arising from the interaction of technology, business, society and the like, cannot be addressed simply by research on information technology; it should instead be explained and studied from a systematic, scientific perspective on the basis of the theory and methods of complex adaptive systems (CAS). Addressing basic problems in the CAS theoretical framework, this article studies the definition of the active, adaptive agents that constitute the cloud computing system and proposes a service-agent concept and basic model through commonality abstraction at two basic levels, cloud computing technology and business, thus laying a foundation for further research on cloud computing complexity as well as for multi-agent based simulation of cloud computing environments.
Sampling and reconstruction (S&R) are used in virtually all areas of science and technology. The classical sampling theorem is a theoretical foundation of S&R. However, for a long time, only sampling rates and ways of representing the sampled signals were derived from it. The fact that the design of S&R circuits (SCs and RCs) is based on a certain interpretation of the sampling theorem was mostly forgotten. The traditional interpretation of this theorem was selected at the time of the theorem's introduction because it offered the only feasible way of realizing S&R then. At that time, its drawbacks did not manifest themselves. By now, this interpretation has largely exhausted its potential and inhibits further progress in the field. This tutorial expands the theoretical foundation of S&R. It shows that the traditional, indirect interpretation can be replaced by a direct one, or by various combinations of the direct and indirect interpretations, enabling the development of novel SCs and RCs (NSCs and NRCs) with advanced properties. The tutorial explains the basic principles of NSC and NRC design, their advantages, and the theoretical problems and practical challenges of their realization. The influence of NSCs and NRCs on the architectures of software-defined radios (SDRs) and cognitive radios (CRs) is also discussed.
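For reference, the classical sampling theorem behind the traditional interpretation discussed above, in its standard textbook form (not a formula taken from the tutorial).

```latex
% A signal x(t) bandlimited to B Hz, sampled at f_s = 1/T > 2B, is perfectly
% reconstructed from its samples x(nT) by sinc interpolation:
\[
  x(t) = \sum_{n=-\infty}^{\infty} x(nT)\,
         \operatorname{sinc}\!\left(\frac{t - nT}{T}\right),
  \qquad \operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u}.
\]
```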
The paradigm shift from traditional BPM to subject-oriented BPM (S-BPM) is attributed to the identification of independently acting subjects, which can perform arbitrary actions on arbitrary objects. Abstract State Machines (ASMs) work on a similar basis. Exploring their capabilities with respect to representing and executing S-BPM models strengthens the theoretical foundations of S-BPM and, thus, the validity of S-BPM tools. Moreover, it enables coherent intertwining of business process modeling with the execution of S-BPM representations. In this contribution we introduce the framework and roadmap for exploring the ASM approach in the context of S-BPM. We also report the major result, namely the implementation of an executable workflow engine with an Abstract State Machine interpreter based on an existing abstract interpreter model for S-BPM (applying the ASM refinement concept). This workflow engine, developed within the Open-S-BPM initiative, serves as a baseline and reference implementation for further language and processing developments, such as simulation tools.
In this paper, we present our approach for the transformation of workflow applications based on institution theory. The workflow application is modeled with a UML Activity Diagram (UML AD). Then, for formal verification purposes, the graphical model is translated into an Event-B specification. Institution theory is used at two levels. First, we define local semantics for UML AD and the Event-B specification using a categorical description of each. Second, we define an institution comorphism to link the two institutions. Thanks to the use of institution theory, the theoretical foundations of our approach are studied within a single mathematical framework. The resulting Event-B specification, obtained by applying the transformation approach, is used for the formal verification of functional properties and of the absence of problems such as deadlock. Additionally, with the institution comorphism, we define the semantic correctness and coherence of the model transformation.
Mutation analysis generates tests that distinguish variations, or mutants, of an artifact from the original. Mutation analysis is widely considered to be a powerful approach to testing, and hence is often used to evaluate other test criteria in terms of mutation score, the fraction of mutants that are killed by a test set. But mutation analysis is also known to produce large numbers of redundant mutants, and these mutants can inflate the mutation score. While mutation approaches broadly characterized as reduced mutation try to eliminate redundant mutants, the literature lacks a theoretical result that articulates just how many mutants are needed in any given situation. Hence, there is, at present, no way to characterize the contribution of, for example, a particular approach to reduced mutation with respect to any theoretical minimal set of mutants. This paper's contribution is to provide such a theoretical foundation for mutant set minimization. The central theoretical result of the paper shows how to efficiently minimize mutant sets with respect to a set of test cases. We evaluate our method on a widely used benchmark.
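A minimal sketch of minimizing a mutant set with respect to a test set, using dynamic subsumption over a kill matrix as one concrete notion of redundancy (chosen for illustration; not necessarily the paper's exact procedure).

```python
def minimal_mutant_set(kill):
    """kill: dict mapping each mutant to the set of tests that kill it.
    Keeps one representative per subset-minimal kill-set; any test set that kills
    the retained mutants also kills every other (redundant) mutant."""
    killed = {m: frozenset(ts) for m, ts in kill.items() if ts}   # drop equivalent mutants
    distinct = {}                                                 # dedupe identical kill-sets
    for m, ts in killed.items():
        distinct.setdefault(ts, m)
    minimal = set()
    for ts, m in distinct.items():
        # m is redundant if some other kill-set is a strict subset of its own
        if not any(other < ts for other in distinct if other != ts):
            minimal.add(m)
    return minimal

# Example with tests t1..t3: m3 duplicates m1, and m2 is subsumed by m1.
print(minimal_mutant_set({"m1": {"t1"}, "m2": {"t1", "t2"}, "m3": {"t1"}, "m4": {"t3"}}))
# -> {'m1', 'm4'}
```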
Principles of agile information systems development (ISD) have attracted the interest of practitioners as well as researchers. The goal of this literature review is to validate, update and extend previous reviews of the general state of research on agile ISD. Besides covering categories such as the employed research methods and data collection techniques, the importance of theory is highlighted by evaluating the theoretical foundations and contributions of former studies. Since agile ISD is rooted in the IS as well as the software engineering discipline, important outlets of both disciplines are included in the search process, resulting in 482 investigated papers. The findings show that quantitative studies and theoretical underpinnings of agile ISD are lacking. Extreme Programming is still the most researched agile ISD method, and more effort on Scrum is needed. In consequence, multiple research gaps that require further attention are identified.
Recent attention to aviation cyber-physical systems (ACPS) is driven by the need for seamless integration of the design disciplines that dominate physical-world and cyber-world convergence. System convergence is a major obstacle to good ACPS design, owing to the lack of an adequate scientific and theoretical foundation for the subject. The absence of a good understanding of the science of aviation system convergence is not due to neglect, but to its difficulty: most builders of complex aviation systems have abandoned any science or engineering discipline for system convergence and simply treat it as a management problem, and system convergence is almost totally absent from software engineering and engineering curricula. Hence, system convergence is particularly challenging in ACPS, where fundamentally different physical and computational design concerns intersect. In this paper, we propose an integrated approach to handling system convergence of aviation cyber-physical systems based on multiple dimensions, views, paradigms and tools. This model-integrated development approach addresses the development needs of cyber-physical systems through the pervasive use of models: the physical world and the cyber world can be specified and modeled together, converged entirely, and their models integrated seamlessly. The effectiveness of the approach is illustrated by means of a practical case study on specifying and modeling aircraft systems. We specify and model aviation cyber-physical systems by integrating Modelica, ModelicaML and the Architecture Analysis & Design Language (AADL): the physical part is modeled with Modelica and ModelicaML, and the cyber part with AADL and ModelicaML.