Bibliography
This paper addresses the potential danger of using integrated circuits that contain malicious hardware modifications hidden in the silicon structure. Such a hardware Trojan may be added at several stages of the chip development process. This work concentrates on formal hardware Trojan detection during the design phase and highlights applied verification techniques. Selected methods are discussed, and their combination is used to increase an introduced "Trojan Assurance Level".
Due to design and fabrication outsourcing to foundries, the problem of malicious modifications to integrated circuits, known as hardware Trojans, has attracted attention in academia as well as industry. To reduce the risks associated with Trojans, researchers have proposed different approaches to detect them. Among these approaches, test-time detection has drawn the greatest attention, and most approaches assume the existence of a "golden model". Prior works suggest using reverse-engineering to identify such Trojan-free ICs for the golden model, but they do not state how to do so efficiently. In this paper, we propose an innovative and robust reverse-engineering approach to identify Trojan-free ICs. We adapt a well-studied machine learning method, the one-class support vector machine, to solve our problem. Simulation results using state-of-the-art tools on several publicly available circuits show that our approach can detect hardware Trojans with high accuracy across different modeling and algorithm parameters.
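As a rough illustration of the core classification step, the sketch below fits a one-class SVM on feature vectors from chips presumed Trojan-free and flags outliers among test chips. The side-channel features and all data here are hypothetical placeholders; the paper's actual reverse-engineering-derived features and parameter choices are not reproduced.

```python
# Minimal one-class SVM sketch for flagging suspect ICs, assuming each chip
# is summarized by a feature vector of side-channel measurements (features
# and data below are synthetic placeholders).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Hypothetical training set: chips presumed Trojan-free.
clean_chips = rng.normal(loc=0.0, scale=1.0, size=(200, 8))

# Test set: mostly clean chips plus a few with shifted characteristics,
# standing in for Trojan-induced side-channel perturbations.
suspect_chips = np.vstack([
    rng.normal(0.0, 1.0, size=(45, 8)),
    rng.normal(2.5, 1.0, size=(5, 8)),   # simulated Trojan signatures
])

scaler = StandardScaler().fit(clean_chips)
model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
model.fit(scaler.transform(clean_chips))

# predict() returns +1 for "Trojan-free" and -1 for anomalous chips.
labels = model.predict(scaler.transform(suspect_chips))
print(f"flagged {np.sum(labels == -1)} of {len(labels)} chips as suspect")
```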
Critical Infrastructure represents the basic facilities, services, and installations necessary for the functioning of a community, such as water, power lines, transportation, or communication systems. Any act or practice that impairs the normal function and performance of a real-time Critical Infrastructure System will have a debilitating impact on security and the economy, with direct implications for society. SCADA (Supervisory Control and Data Acquisition) is a control system widely used in Critical Infrastructure Systems to monitor and control industrial processes autonomously. As SCADA architecture relies on computers, networks, applications, and programmable controllers, it is highly vulnerable to security threats and attacks. Traditional SCADA communication protocols such as IEC 60870, DNP3, IEC 61850, and Modbus do not provide any security services. Newer standards such as IEC 62351 and AGA-12 offer security features to handle attacks on SCADA systems. However, there are performance issues with the cryptographic solutions of these specifications when applied to SCADA systems. This research aims at improving the performance of SCADA security standards by employing NTRU, a fast and lightweight public-key algorithm, for providing end-to-end security.
As many modern encryption algorithms are fully or partially broken, the world of information security looks in new directions to protect the data it transmits. The concept of using DNA computing in the field of cryptography has been identified as a possible technology that may bring forward a new hope for hybrid and unbreakable algorithms. Several DNA computing algorithms have been proposed for cryptography, cryptanalysis, and steganography problems, and they have proven to be very powerful in these areas. This paper gives an architectural framework for encryption and digital signature generation using DNA cryptography. To analyze performance, the original plaintext size, the key size, and the encryption and decryption times are examined; experiments on plaintexts with different contents are also performed to test the robustness of the program.
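To make the encoding primitive concrete, the toy sketch below maps bytes to A/C/G/T strands using one conventional two-bits-per-base assignment. The mapping is illustrative only and is not necessarily the one used in the paper's framework.

```python
# Toy binary-to-DNA encoding: each pair of bits becomes one nucleotide.
# The 2-bit mapping below is one common convention, chosen for illustration.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {v: k for k, v in BASE_FOR_BITS.items()}

def to_dna(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def from_dna(strand: str) -> bytes:
    bits = "".join(BITS_FOR_BASE[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = to_dna(b"hello")
assert from_dna(strand) == b"hello"
print(strand)  # CGGACGCCCGTACGTACGTT
```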
Nowhere is the problem of a lack of human capital more keenly felt than in the field of cybersecurity, where the number and quality of well-trained graduates are woefully lacking [10]. In 2005, the National Academy of Sciences indicted the US education system as the culprit contributing to deficiencies in our technical workforce, sounding the alarm that we are at risk of losing our competitive edge [14]. While the government has made cybersecurity education a national priority, seeking to stimulate university and community college production of information assurance (IA) expertise, it still has thousands of IA jobs going unfilled. The big question for the last decade [17] has been 'where will we find the talent we need?' In this article, we describe one university's approach to begin addressing this problem and discuss an innovative curricular model that holistically develops future cybersecurity professionals.
Up-to-date studies and surveys regarding IT security show that companies of every size and sector nowadays face a growing risk of cyber crime. Many tools, standards, and best practices are in place to support enterprise IT security experts in dealing with these risks, whereas small and medium-sized enterprises (SMEs) in particular feel helpless struggling with the growing threats. This article describes an approach by which SMEs can attain high-quality assurance as to whether they are a victim of cyber crime, what kind of damage resulted from a given attack, and how remediation can be carried out. Throughout all steps of the analysis, the focus lies on economic feasibility and the typical environment of SMEs.
Botnets are the most common vehicle of cyber-criminal activity. They are used for spamming, phishing, denial-of-service attacks, brute-force cracking, stealing private information, and cyber warfare. Botnets carry out network scans for several reasons, including searching for vulnerable machines to infect and recruit into the botnet, probing networks for enumeration or penetration, etc. We present the measurement and analysis of a horizontal scan of the entire IPv4 address space conducted by the Sality botnet in February 2011. This 12-day scan originated from approximately 3 million distinct IP addresses and used a heavily coordinated and unusually covert scanning strategy to try to discover and compromise VoIP-related (SIP server) infrastructure. We observed this event through the UCSD Network Telescope, a /8 darknet continuously receiving large amounts of unsolicited traffic, and we correlate this traffic data with other public sources of data to validate our inferences. Sality is one of the largest botnets ever identified by researchers. Its behavior represents ominous advances in the evolution of modern malware: the use of more sophisticated stealth scanning strategies by millions of coordinated bots, targeting critical voice communications infrastructure. This paper offers a detailed dissection of the botnet's scanning behavior, including general methods to correlate, visualize, and extrapolate botnet behavior across the global Internet.
The proliferation of digital devices in a networked industrial ecosystem, along with an exponential growth in complexity and scope, has resulted in elevated security concerns and management complexity issues. This paper describes a novel architecture utilizing concepts of autonomic computing and a simple object access protocol (SOAP)-based interface to metadata access points (IF-MAP) external communication layer to create a network security sensor. This approach simplifies integration of legacy software and supports a secure, scalable, and self-managed framework. The contribution of this paper is twofold: 1) a flexible two-level communication layer based on autonomic computing and service-oriented architecture is detailed, and 2) three complementary modules that dynamically reconfigure in response to a changing environment are presented. One module utilizes clustering and fuzzy logic to monitor traffic for abnormal behavior. Another module passively monitors network traffic and deploys deceptive virtual network hosts. These components of the sensor system were implemented in C++ and PERL and utilize a common internal D-Bus communication mechanism. A proof-of-concept prototype was deployed on a mixed-use test network, demonstrating possible real-world applicability. In testing, 45 of the 46 network-attached devices were recognized, and 10 of the 12 emulated devices were created with specific operating system and port configurations. In addition, the anomaly detection algorithm achieved a 99.9% recognition rate. All output from the modules was correctly distributed using the common communication structure.
A novel short-time Fourier transform (STFT) domain adaptive filtering scheme is proposed that can easily be combined with nonlinear post-filters such as residual echo or noise reduction in acoustic echo cancellation. Unlike normal STFT subband adaptive filters, which suffer from aliasing artifacts due to their poor prototype filter, our scheme achieves good accuracy by exploiting the relationship between the linear convolution and the poor prototype filter, i.e., the STFT window function. The effectiveness of our scheme was confirmed through simulations comparing it with conventional methods.
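For context, the sketch below implements the conventional baseline such schemes improve upon: an independent per-bin NLMS adaptive filter in the STFT domain. Window normalization and any convolution-aware correction are omitted, and all parameters are illustrative.

```python
# Per-bin NLMS echo canceller in the STFT domain (one complex tap per bin).
# This is the standard crossband-free approximation, not the proposed scheme.
import numpy as np

def stft_nlms(far_end, mic, fft_size=256, hop=128, mu=0.5, eps=1e-8):
    win = np.hanning(fft_size)
    n_frames = (len(mic) - fft_size) // hop
    w = np.zeros(fft_size // 2 + 1, dtype=complex)       # adaptive taps per bin
    out = np.zeros(len(mic))
    for m in range(n_frames):
        s = m * hop
        X = np.fft.rfft(win * far_end[s:s + fft_size])   # far-end spectrum
        D = np.fft.rfft(win * mic[s:s + fft_size])       # microphone spectrum
        E = D - w * X                                    # per-bin error signal
        w += mu * np.conj(X) * E / (np.abs(X) ** 2 + eps)  # NLMS update
        out[s:s + fft_size] += win * np.fft.irfft(E)     # overlap-add (unnormalized)
    return out
```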
Estimation of the misalignment angles of a strapdown inertial navigation system (INS) using global positioning system (GPS) data is highly affected by measurement noise, especially noise displaying time-varying statistical properties. Hence, an adaptive filtering approach is recommended to improve the accuracy of in-motion alignment. In this paper, a simplified form of Celso's adaptive stochastic filtering is derived and applied to estimate both the INS error states and the measurement noise statistics. To detect and bound the influence of outliers in the INS/GPS integration, an outlier detection scheme based on a jerk tracking model is also proposed. The accuracy and validity of the proposed algorithm are tested through ground-based navigation experiments.
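The hedged sketch below illustrates the general idea of innovation-based adaptive estimation of the measurement-noise covariance in a linear Kalman filter. It stands in for the paper's simplified Celso filter only conceptually; the INS error-state model and the jerk-based outlier test are not reproduced.

```python
# One step of a Kalman filter that re-estimates the measurement-noise
# covariance R from a sliding window of innovations (a common adaptive
# filtering heuristic; window length and floor value are illustrative).
import numpy as np

def adaptive_kf_step(x, P, z, F, H, Q, innovations, window=50):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Innovation and its empirical covariance over the window.
    y = z - H @ x
    innovations.append(np.outer(y, y))
    if len(innovations) > window:
        innovations.pop(0)
    C = sum(innovations) / len(innovations)
    # Adapt R: innovation covariance minus the predicted contribution,
    # floored to keep it positive.
    R = np.maximum(C - H @ P @ H.T, 1e-9 * np.eye(len(z)))
    # Update.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P, R
```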
This letter presents an adaptive filtering approach for synthetic aperture radar (SAR) image time series based on the analysis of their temporal evolution. First, change detection matrices (CDMs) containing information on changed and unchanged pixels are constructed for each spatial position over the time series by implementing coefficient of variation (CV) cross tests. The CDM then provides, for each pixel in each image, an adaptive spatiotemporal neighborhood, which is used to derive the filtered value. The proposed approach is illustrated on a time series of 25 ascending TerraSAR-X images acquired from November 6, 2009 to September 25, 2011 over the Chamonix-Mont-Blanc test site, which includes different kinds of change, such as parking occupation and glacier surface evolution.
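A simplified sketch of the filtering step follows, assuming the CV cross tests have already produced a boolean change-detection structure `cdm[t, t2, i, j]` that is True where pixel (i, j) is unchanged between images t and t2: each pixel is then averaged only over the dates marked unchanged relative to the current image.

```python
# Adaptive temporal averaging driven by a precomputed change-detection
# matrix; the CDM encoding used here is an assumption for this sketch.
import numpy as np

def adaptive_temporal_filter(stack, cdm):
    """stack: (T, H, W) SAR intensities; cdm: (T, T, H, W) bool."""
    T = stack.shape[0]
    filtered = np.empty_like(stack, dtype=float)
    for t in range(T):
        mask = cdm[t].copy()          # (T, H, W): dates similar to date t
        mask[t] = True                # a date is always similar to itself
        weights = mask.astype(float)
        filtered[t] = (stack * weights).sum(axis=0) / weights.sum(axis=0)
    return filtered
```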
The gradient-descent total least-squares (GD-TLS) algorithm is a stochastic-gradient adaptive filtering algorithm that compensates for error in both input and output data. We study the local convergence of the GD-TLS algorithm and find bounds on its step-size that ensure its stability. We also analyze the steady-state performance of the GD-TLS algorithm and calculate its steady-state mean-square deviation. Our steady-state analysis is inspired by the energy-conservation-based approach to the performance analysis of adaptive filters. The results predicted by the analysis show good agreement with the simulation experiments.
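For illustration, the sketch below performs stochastic gradient descent on the instantaneous TLS cost J(w) = e^2 / (1 + ||w||^2), with e = d - w^T x, the usual starting point for GD-TLS; the step-size bounds and steady-state expressions from the analysis are not reproduced here.

```python
# GD-TLS sketch: gradient descent on the instantaneous total-least-squares
# cost, which compensates for noise in both the input and the output.
import numpy as np

def gd_tls(x_seq, d_seq, order=4, mu=0.01):
    w = np.zeros(order)
    for x, d in zip(x_seq, d_seq):
        e = d - w @ x
        v = 1.0 + w @ w
        # Gradient of e^2 / (1 + ||w||^2) with respect to w.
        grad = -2.0 * (e * x * v + (e ** 2) * w) / v ** 2
        w -= mu * grad
    return w

# Noisy-input, noisy-output system identification demo.
rng = np.random.default_rng(1)
w_true = np.array([0.5, -0.3, 0.2, 0.1])
X = rng.normal(size=(5000, 4))
d = X @ w_true + 0.05 * rng.normal(size=5000)    # noisy outputs
X_noisy = X + 0.05 * rng.normal(size=X.shape)    # noisy inputs (TLS setting)
print(gd_tls(X_noisy, d))                        # approaches w_true
```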
A new class of affine-projection-like (APL) adaptive-filtering algorithms is proposed. The new algorithms are obtained by eliminating the constraint of forcing the a posteriori error vector to zero in the affine-projection algorithm proposed by Ozeki and Umeda. In this way, direct or indirect inversion of the input signal matrix is not required and, consequently, the amount of computation required per iteration can be reduced. In addition, as demonstrated by extensive simulation results, the proposed algorithms offer reduced steady-state misalignment in system-identification, channel-equalization, and acoustic-echo-cancellation applications. A mean-square-error analysis of the proposed APL algorithms is also carried out and its accuracy is verified by using simulation results in a system-identification application.
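For reference, here is a minimal sketch of the classical affine-projection algorithm of Ozeki and Umeda, whose a posteriori error constraint (and the matrix inversion it entails) the proposed APL algorithms eliminate; all signal names and parameters are illustrative.

```python
# Classical affine-projection algorithm (APA) with projection order P;
# note the P x P matrix solve per iteration that APL variants avoid.
import numpy as np

def apa(x, d, order=8, P=4, mu=0.5, delta=1e-4):
    """x: input signal, d: desired signal; returns the filter estimate."""
    w = np.zeros(order)
    for n in range(order + P - 1, len(x)):
        # Data matrix whose columns are the P most recent input regressors.
        X = np.column_stack(
            [x[n - p - order + 1:n - p + 1][::-1] for p in range(P)])
        e = d[n - P + 1:n + 1][::-1] - X.T @ w   # a priori error vector
        w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(P), e)
    return w
```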
Route selection is a very sensitive activity for mobile ad-hoc networks (MANETs), and ranking of multiple routes from source node to destination node can result in effective route selection and provide many other benefits for better performance and security of the MANET. This paper proposes an evaluation model based on the analytical hierarchy process (AHP), fuzzy sets, and the technique for order preference by similarity to ideal solution (TOPSIS) to provide a useful solution for ranking routes. The proposed model utilizes AHP to acquire criteria weights, fuzzy sets to describe vagueness with linguistic values and triangular fuzzy numbers, and TOPSIS to obtain the final ranking of routes. The final ranking facilitates selection of the best and most reliable route and provides alternative options for building a robust mobile ad-hoc network.
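A minimal sketch of the TOPSIS ranking step follows, assuming the criteria weights (e.g., obtained from AHP) and a crisp decision matrix are already available; the fuzzy-set modeling with triangular fuzzy numbers is omitted, and the route criteria shown are hypothetical.

```python
# TOPSIS: rank alternatives by relative closeness to the ideal solution.
import numpy as np

def topsis(decision, weights, benefit):
    """decision: (routes x criteria); benefit[j] True if larger is better."""
    norm = decision / np.linalg.norm(decision, axis=0)   # vector normalization
    v = norm * weights                                   # weighted matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)
    return np.argsort(closeness)[::-1]                   # best route first

# Hypothetical routes scored on hop count, residual energy, trust, delay.
scores = np.array([[4, 0.8, 0.9, 120],
                   [6, 0.9, 0.7, 100],
                   [3, 0.6, 0.8, 150]], dtype=float)
weights = np.array([0.3, 0.3, 0.2, 0.2])         # e.g., from AHP
benefit = np.array([False, True, True, False])   # hops and delay are costs
print(topsis(scores, weights, benefit))
```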
This paper presents a novel visual analytics technique developed to support exploratory search tasks for event data document collections. The technique supports discovery and exploration by clustering results and overlaying cluster summaries onto coordinated timeline and map views. Users can also explore and interact with search results by selecting clusters to filter and re-cluster the data, with animation used to smooth the transition between views. The technique demonstrates a number of advantages over alternative methods for displaying and exploring geo-referenced search results and spatio-temporal data. First, cluster summaries can be presented in a manner that makes them easy to read and scan. Second, listing representative events from each cluster helps the process of discovery by preserving the diversity of results. Third, clicking on visual representations of geo-temporal clusters provides a quick and intuitive way to navigate across space and time simultaneously, which removes the need to overload users with too many event labels at any one time. The technique was evaluated with a group of nineteen users and compared with an equivalent text-based exploratory search engine.
In this paper, an algorithm is proposed to automatically produce hierarchical graph-based representations of maritime shipping lanes extrapolated from historical vessel positioning data. Each shipping lane is generated based on the detection of vessel behavioural changes and is represented as a compact synthetic route composed of network nodes and route segments. The outcome of the knowledge discovery process is a geographical maritime network that can be used in Maritime Situational Awareness (MSA) applications such as track reconstruction from missing information, situation/destination prediction, and detection of anomalous behaviour. Experimental results are presented, testing the algorithm in a specific scenario of interest, the Dover Strait.
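A much-simplified sketch of the waypoint-extraction idea, assuming a single vessel track given as heading and position samples: points where the heading changes beyond a threshold become candidate network nodes, linked by route segments. The paper's cross-track clustering and hierarchical network construction are omitted, and the turn threshold is an illustrative parameter.

```python
# Turn-point detection on one track; nodes and segments form a small graph.
import numpy as np
import networkx as nx

def track_to_segments(headings, positions, turn_thresh=15.0):
    """headings: (N,) degrees; positions: (N, 2) lat/lon pairs."""
    # Absolute heading change between samples, wrapped to [0, 180] degrees.
    dh = np.abs((np.diff(headings) + 180.0) % 360.0 - 180.0)
    idx = sorted({0, len(positions) - 1,
                  *(i + 1 for i in np.flatnonzero(dh > turn_thresh))})
    g = nx.Graph()
    for a, b in zip(idx, idx[1:]):
        g.add_edge(tuple(positions[a]), tuple(positions[b]))
    return g
```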
Quantum cryptography and the quantum search algorithm are considered two important research topics in quantum information science. An asymmetric quantum encryption protocol based on the properties of quantum one-way functions and the quantum search algorithm is proposed. Owing to the no-cloning theorem and the trapdoor one-way nature of the public keys, the eavesdropper cannot extract any private information from the public keys and the ciphertext. A randomized key-generation algorithm is introduced to improve the security of the proposed protocol, i.e., one private key corresponds to an exponential number of public keys. Using unitary operations and single-photon measurements, secret messages can be sent directly from the sender to the receiver. The proposed protocol is proved to be information-theoretically secure. Furthermore, compared with symmetric quantum key distribution, the proposed protocol not only reduces additional communication but is also easier to carry out in practice, because no entangled photons or complex operations are required.
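To make the quantum search ingredient concrete, the sketch below simulates Grover's algorithm with plain matrix arithmetic; it shows only the textbook search primitive, not the paper's encryption construction.

```python
# Grover's search simulated classically: amplitude amplification of the
# marked index via alternating oracle and diffusion operators.
import numpy as np

def grover(n_qubits, marked):
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))             # uniform superposition
    oracle = np.eye(N)
    oracle[marked, marked] = -1                    # phase-flip the marked item
    diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)
    for _ in range(int(np.pi / 4 * np.sqrt(N))):   # ~O(sqrt(N)) iterations
        state = diffusion @ (oracle @ state)
    return np.argmax(state ** 2)                   # most probable outcome

print(grover(4, marked=11))   # recovers index 11 with high probability
```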
Cloud computing is emerging as a new paradigm that aims at delivering computing as a utility. For the cloud computing paradigm to be fully adopted and effectively used, it is critical that the security mechanisms are robust and resilient to faults and attacks. Securing cloud systems is extremely complex due to the many interdependent tasks such as application layer firewalls, alert monitoring and analysis, source code analysis, and user identity management. It is strongly believed that we cannot build cloud services that are immune to attacks. Resiliency to attacks is therefore becoming an important approach to addressing cyber-attacks and mitigating their impacts, and mission-critical systems demand even higher resiliency. In this paper, we present a methodology to develop an Autonomic Resilient Cloud Management (ARCM) framework based on moving target defense, cloud service Behavior Obfuscation (BO), and autonomic computing. By continuously and randomly changing the cloud execution environments and platform types, it becomes difficult, especially for insider attackers, to figure out the current execution environment and its existing vulnerabilities, thus allowing the system to evade attacks. We show how to apply ARCM to one class of applications, Map/Reduce, and evaluate its performance and overhead.
Address shuffling is a type of moving target defense that prevents an attacker from reliably contacting a system by periodically remapping network addresses. Although limited testing has demonstrated it to be effective, little research has been conducted to examine the theoretical limits of address shuffling. As a result, it is difficult to understand how effective shuffling is and under what circumstances it is a viable moving target defense. This paper introduces probabilistic models that can provide insight into the performance of address shuffling. These models quantify the probability of attacker success in terms of network size, quantity of addresses scanned, quantity of vulnerable systems, and the frequency of shuffling. Theoretical analysis shows that shuffling is an acceptable defense if there is a small population of vulnerable systems within a large network address space, although shuffling has a cost for legitimate users. These results are also shown empirically using simulation and actual traffic traces.
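As a hedged sketch of the two scanning regimes such models distinguish, the code below compares an attacker scanning a static network (sampling addresses without replacement) against one facing frequent remapping, which effectively makes each probe an independent draw. The paper's full models, which also account for shuffle frequency and repeated probes, are richer than this simplification.

```python
# Probability that a scan of k addresses hits at least one of v vulnerable
# hosts in an address space of size N, with and without shuffling.

def p_hit_static(N, v, k):
    """No shuffling: sampling addresses without replacement."""
    p_miss = 1.0
    for i in range(k):
        p_miss *= (N - v - i) / (N - i)
    return 1.0 - p_miss

def p_hit_shuffled(N, v, k):
    """Frequent remapping: each probe is an independent draw."""
    return 1.0 - (1.0 - v / N) ** k

N, v, k = 2 ** 16, 10, 5000
print(f"static:   {p_hit_static(N, v, k):.3f}")
print(f"shuffled: {p_hit_shuffled(N, v, k):.3f}")
```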
App vetting is the process of approving or rejecting an app prior to deployment on a mobile device.
• The decision to approve or reject an app is based on the organization's security requirements and the type and severity of security vulnerabilities found in the app.
• Security vulnerabilities, including Cross Site Scripting (XSS), information leakage, authentication and authorization flaws, session management flaws, and SQL injection, can be exploited to steal information or control a device.
Web applications need to validate and sanitize user inputs in order to avoid attacks such as Cross Site Scripting (XSS) and SQL Injection. Writing string manipulation code for input validation and sanitization is an error-prone process, leading to many vulnerabilities in real-world web applications. Automata-based static string analysis techniques can be used to automatically compute vulnerability signatures (represented as automata) that characterize all the inputs that can exploit a vulnerability. However, several factors limit the applicability of static string analysis techniques in general: 1) the undecidability of static string analysis requires the use of approximations, leading to false positives; 2) static string analysis tools do not handle all string operations; and 3) the dynamic nature of scripting languages makes static analysis difficult. In this paper, we show that vulnerability signatures computed for deliberately insecure web applications (developed for demonstrating different types of vulnerabilities) can be used to generate test cases for other applications. Given a vulnerability signature represented as an automaton, we present algorithms for test case generation based on state, transition, and path coverage. These automatically generated test cases can be used to test applications that are not statically analyzable and to discover attack strings that demonstrate how the vulnerabilities can be exploited.
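The sketch below illustrates transition-coverage test generation from a vulnerability signature given as a DFA, combining shortest accepted prefixes and suffixes found by breadth-first search. The dictionary encoding and the toy signature are illustrative assumptions, not the paper's automaton representation.

```python
# Generate one accepted string exercising each DFA transition (transition
# coverage); state and path coverage follow the same prefix/suffix idea.
from collections import deque

def transition_cover(dfa, start, accepting):
    prefix = {start: ""}                     # shortest string reaching a state
    queue = deque([start])
    while queue:
        s = queue.popleft()
        for ch, t in dfa.get(s, {}).items():
            if t not in prefix:
                prefix[t] = prefix[s] + ch
                queue.append(t)
    rev = {}                                 # reversed edges for suffix search
    for s, edges in dfa.items():
        for ch, t in edges.items():
            rev.setdefault(t, []).append((ch, s))
    suffix = {a: "" for a in accepting}      # shortest accepted suffix
    queue = deque(accepting)
    while queue:
        t = queue.popleft()
        for ch, s in rev.get(t, []):
            if s not in suffix:
                suffix[s] = ch + suffix[t]
                queue.append(s)
    return {prefix[s] + ch + suffix[t]
            for s, edges in dfa.items() if s in prefix
            for ch, t in edges.items() if t in suffix}

# Toy signature: inputs containing "<s" (a stand-in for "<script...").
dfa = {0: {"<": 1, "s": 0, "a": 0},
       1: {"<": 1, "s": 2, "a": 0},
       2: {"<": 2, "s": 2, "a": 2}}
print(transition_cover(dfa, start=0, accepting=[2]))
```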
In many client-facing applications, a vulnerability in any part can compromise the entire application. This paper describes the design and implementation of Passe, a system that protects a data store from unintended data leaks and unauthorized writes even in the face of application compromise. Passe automatically splits (previously shared-memory-space) applications into sandboxed processes. Passe limits communication between those components and the types of accesses each component can make to shared storage, such as a backend database. In order to limit components to their least privilege, Passe uses dynamic analysis on developer-supplied end-to-end test cases to learn data and control-flow relationships between database queries and previous query results, and it then strongly enforces those relationships. Our prototype of Passe acts as a drop-in replacement for the Django web framework. By running eleven unmodified, off-the-shelf applications in Passe, we demonstrate its ability to provide strong security guarantees (Passe correctly enforced 96% of the applications' policies) with little additional overhead. Additionally, in the web-specific setting of the prototype, we also mitigate the cross-component effects of cross-site scripting (XSS) attacks by combining browser HTML5 sandboxing techniques with our automatic component separation.
Recently, there has been great interest in physical layer security techniques that exploit artificial noise (AN) to enlarge the channel advantage of the legitimate receiver over the eavesdropper. However, in certain communication scenarios, this strategy may be vulnerable to attacks from a signal processing perspective. In this paper, we consider speech signals and a scenario in which the eavesdropper's channel performs similarly to that of the legitimate receiver. We design the optimal artificial noise to resist an eavesdropper who uses blind source separation (BSS) technology to reconstruct the secret information. The optimal AN is obtained by making a tradeoff between the results of direct eavesdropping and reconstruction. Simulation results show that the proposed AN resists BSS attacks more effectively than white Gaussian AN.
In view of the difficulty of selecting the wavelet base and decomposition level for wavelet-based de-noising methods, this paper proposes an adaptive de-noising method based on Ensemble Empirical Mode Decomposition (EEMD). An autocorrelation and cross-correlation method is used to adaptively find the signal-to-noise boundary layer of the EEMD. The noise-dominant layers are then filtered out directly, while the signal-dominant layers are threshold de-noised. Finally, the de-noised signal is reconstructed from the de-noised layer components. This method solves the mode-mixing problem of Empirical Mode Decomposition (EMD) by using EEMD and combines the advantages of wavelet thresholding. In this paper, we focus on analyzing and verifying the correctness of the adaptive determination of the noise-dominant layers. Simulation results show that this de-noising method is efficient and has good adaptability.
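A hedged sketch of the overall de-noising flow is shown below, using the PyEMD package (installable as EMD-signal) for the decomposition. The boundary between noise-dominant and signal-dominant layers is passed in explicitly here, whereas the paper determines it adaptively through autocorrelation and cross-correlation analysis; the thresholding rule is one common wavelet-style choice, not necessarily the paper's.

```python
# EEMD-based de-noising sketch: drop the noise-dominant IMFs and
# soft-threshold the signal-dominant ones before reconstruction.
import numpy as np
from PyEMD import EEMD   # pip install EMD-signal

def eemd_denoise(signal, boundary, thresh_scale=1.0):
    signal = np.asarray(signal, dtype=float)
    imfs = EEMD().eemd(signal)
    out = np.zeros_like(signal)
    for k, imf in enumerate(imfs):
        if k < boundary:
            continue                      # noise-dominant layer: filtered out
        # Universal soft threshold with a robust noise-level estimate.
        sigma = np.median(np.abs(imf)) / 0.6745
        t = thresh_scale * sigma * np.sqrt(2 * np.log(len(imf)))
        out += np.sign(imf) * np.maximum(np.abs(imf) - t, 0.0)
    return out
```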
Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, particularly for the control and management planes, thus requiring a new protocol built from scratch for every new need. This led to an unwieldy, ossified Internet architecture resistant to any attempts at formal verification, and to an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean-slate Internet design, in particular the software defined networking (SDN) paradigm, offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence in interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods and present a survey of its applications to networking.

