Biblio
Maritime transportation plays a critical role in the U.S. and global economies and has evolved into a complex system involving a plethora of supply chain stakeholders spread around the globe. This inherent complexity brings major security challenges, including cargo loss and the heavy burden of inspecting cargo for illicit activities and potential terrorist attacks. Emerging blockchain technology provides a promising tool for building the unified maritime cargo tracking system that is critical for cargo security. However, most existing efforts focus on the transportation data itself while ignoring how to consistently bind the physical cargo movements to the information managed by the system, which can severely undermine the effectiveness of securing cargo transportation. To fill this gap, we propose a binding scheme leveraging a novel digital identity management mechanism. The mechanism maps best practice in the physical world to the cyber world and can be seamlessly integrated with a blockchain-based cargo management system.
The dynamically changing landscape of DDoS threats increases the demand for advanced security solutions. The rise of massive IoT botnets enables attackers to mount high-intensity, short-duration "volatile ephemeral" attack waves in quick succession. The standard human-in-the-loop security center paradigm is therefore becoming obsolete. To battle this new breed of volatile DDoS threats, intrusion detection systems (IDS) must improve markedly, at least in reaction time and in automated response (mitigation). Designing such an IDS is a daunting task, as network operators are traditionally reluctant to act, at any speed, on potentially false alarms. The primary challenge for a low-reaction-time detection system is therefore maintaining a consistently low false alarm rate. This paper shows how a practical FPGA-based DDoS detection and mitigation system can successfully address this challenge. Besides verifying the model and algorithms with real traffic "in the wild," we validate the low false alarm ratio. Accordingly, we describe a methodology for determining the false alarm ratio for each threat type involved, categorize the causes of false detection, and provide our measurement results. As shown here, our methods can effectively mitigate volatile ephemeral DDoS attacks and are therefore usable in both human out-of-the-loop and on-the-loop next-generation security solutions.
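To make the detection problem concrete, the following is a minimal sketch of a sliding-window volumetric detector of the kind such systems build on; the window length and packets-per-second threshold are illustrative assumptions, not values from the paper, and the paper's FPGA pipeline and false-alarm methodology are not modeled here.

    # Illustrative sliding-window rate detector (software sketch, not the
    # paper's FPGA design). Raises an alarm when a destination's packet rate
    # inside the window exceeds a fixed threshold.
    from collections import defaultdict, deque

    WINDOW_S = 1.0          # detection window length in seconds (assumed)
    THRESHOLD_PPS = 50_000  # packets/s per destination before alarming (assumed)

    class RateDetector:
        def __init__(self):
            self.events = defaultdict(deque)  # dst -> timestamps of recent packets

        def observe(self, dst: str, ts: float) -> bool:
            q = self.events[dst]
            q.append(ts)
            # Drop observations older than the window.
            while q and ts - q[0] > WINDOW_S:
                q.popleft()
            # Alarm when the per-window rate exceeds the threshold.
            return len(q) / WINDOW_S > THRESHOLD_PPS

A real deployment would tune the threshold per threat type, which is exactly where the paper's per-threat false alarm measurements come in.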
In this work, a measurement system based on acoustic resonance is developed for the classification of materials. Acoustic inspection methods, used for screening containers in the field and identifying defective pills, are highly significant in the fields of health, security, and protection. However, such techniques are constrained by costly instrumentation, offline analysis, and complexities associated with the physical coupling of transducer holders. A simple, non-destructive, and extremely cost-effective technique based on acoustic resonance has therefore been formulated here for quick data acquisition and analysis of the acoustic signature of liquids, enabling constituent identification and classification. In this system, two ceramic-coated piezoelectric transducers are attached to the ends of a V-shaped glass rod; one acts as a transmitter and the other as a receiver. The transmitter generates sound with the help of a white noise generator, and the pickup transducer at the other end of the rod detects the transmitted signal. Recording is done with an Arduino interfaced to a computer. The FFTs of the recorded signals are analyzed; the resulting resonant frequencies observed for water, water+salt, and water+sugar are 4.8 kHz, 6.8 kHz, and 3.2 kHz, respectively. A distinct resonant frequency is observed for each sample, showing that the developed prototype effectively classifies the materials.
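A minimal sketch of the FFT-and-peak classification step follows, assuming the Arduino samples have already been captured into a NumPy array; the sampling rate is an assumption, and the reference peaks are the values reported in the abstract.

    # Classify a liquid sample by its dominant resonant frequency.
    import numpy as np

    FS = 20_000  # sampling rate in Hz (assumed; must exceed twice the ~7 kHz peaks)

    # Reference resonant peaks reported in the abstract (Hz).
    REFERENCE = {"water": 4800, "water+salt": 6800, "water+sugar": 3200}

    def classify(samples: np.ndarray) -> str:
        spectrum = np.abs(np.fft.rfft(samples))
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / FS)
        peak = freqs[np.argmax(spectrum)]   # dominant resonant frequency
        # Nearest reference peak wins.
        return min(REFERENCE, key=lambda k: abs(REFERENCE[k] - peak))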
To solve the problems associated with real-time processing of large data volumes, heterogeneous systems using various computing devices are increasingly used. Work on this class of problems proceeds along two directions for improving real-time data analysis: the development of algorithms and analysis approaches, and the development of hardware and software. This article reviews the main approaches to the architecture of a hardware-software solution for traffic capture and deep packet inspection (DPI) in data transmission networks with a bandwidth of 80 Gbit/s and higher. Currently, software and hardware tools allow the architecture of a capture and deep packet inspection system to be designed in three ways: 1) using only the central processing unit (CPU); 2) using only the graphics processing unit (GPU); 3) using the CPU and GPU simultaneously (CPU + GPU). In this paper, we consider these key approaches. Attention is also paid to both the hardware and software requirements of the solution architectures, and pain points and remedies are described.
This paper describes the work done to design an SoC platform for real-time, on-line pattern search in TCP packets for Deep Packet Inspection (DPI) applications. The platform is based on a Xilinx Zynq programmable SoC and includes an accelerator implementing a pattern search engine that extends the original Boyer-Moore algorithm with timing and logical rules, supporting very complex rule sets. The platform also implements different modes of operation, including SIMD and MISD parallelism, which can be configured on-line, and scales up to 8 Gbps depending on the analysis requirements. High-level synthesis and platform-based design methodologies have been used to reduce the time to market of the completed system.
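For readers unfamiliar with the base algorithm, here is a software sketch of the Boyer-Moore-Horspool variant (the bad-character simplification of Boyer-Moore) searching a packet payload; the paper's engine extends full Boyer-Moore with timing and logical rules, which are not modeled here.

    # Boyer-Moore-Horspool payload search: slide the pattern right-to-left and
    # skip ahead using a bad-character shift table on mismatch.
    def horspool_search(payload: bytes, pattern: bytes) -> int:
        m = len(pattern)
        if m == 0 or m > len(payload):
            return -1
        # Shift table: distance to slide when byte b aligns with the window end.
        shift = {pattern[i]: m - 1 - i for i in range(m - 1)}
        i = 0
        while i <= len(payload) - m:
            if payload[i:i + m] == pattern:
                return i                      # match offset in the payload
            i += shift.get(payload[i + m - 1], m)
        return -1

The skip table is what makes the algorithm attractive for a hardware engine: most windows are rejected after a single byte comparison.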
Nowadays, Internet Service Providers (ISPs) depend on Deep Packet Inspection (DPI) approaches, which are the most precise techniques for traffic identification and classification. However, constructing high-performance DPI approaches demands a vigilant and in-depth computing system design because of the demands on memory and processing power. Membership query data structures, specifically the Bloom filter (BF), have been employed as matching check tools in DPI approaches: signature fingerprints are stored in the filter in order to examine whether these signatures are present in the incoming network flow. The main issue that arises when employing a Bloom filter in DPI is the need for k hash functions, which imposes additional computation overhead that degrades performance. Consequently, this paper proposes a new design and implementation for a DPI approach that uses a membership query data structure called the Cuckoo filter (CF) as its matching check tool. The CF has many advantages over the BF: lower memory consumption, a lower false positive rate, higher insert performance, higher lookup throughput, and support for deletion. Experiments show that the proposed approach offers better performance than approaches that utilize the Bloom filter.
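The following is a minimal cuckoo filter sketch (partial-key cuckoo hashing with one-byte fingerprints) to show why lookups need at most two bucket probes instead of k hash computations; sizes and hash choices are illustrative assumptions, not the paper's implementation.

    # Minimal cuckoo filter: each item is reduced to a short fingerprint stored
    # in one of two candidate buckets; lookup probes only those two buckets.
    import random, hashlib

    class CuckooFilter:
        def __init__(self, num_buckets=1024, bucket_size=4, max_kicks=500):
            # num_buckets must be a power of two for the XOR trick below.
            self.buckets = [[] for _ in range(num_buckets)]
            self.n, self.b, self.max_kicks = num_buckets, bucket_size, max_kicks

        def _hash(self, data: bytes) -> int:
            return int.from_bytes(hashlib.sha1(data).digest()[:4], "big")

        def _indexes(self, item: bytes):
            fp = hashlib.sha1(item).digest()[4] or 1   # non-zero 1-byte fingerprint
            i1 = self._hash(item) % self.n
            i2 = (i1 ^ self._hash(bytes([fp]))) % self.n  # partial-key alternate
            return fp, i1, i2

        def insert(self, item: bytes) -> bool:
            fp, i1, i2 = self._indexes(item)
            for i in (i1, i2):
                if len(self.buckets[i]) < self.b:
                    self.buckets[i].append(fp)
                    return True
            i = random.choice((i1, i2))                # both full: evict and relocate
            for _ in range(self.max_kicks):
                j = random.randrange(len(self.buckets[i]))
                fp, self.buckets[i][j] = self.buckets[i][j], fp
                i = (i ^ self._hash(bytes([fp]))) % self.n
                if len(self.buckets[i]) < self.b:
                    self.buckets[i].append(fp)
                    return True
            return False  # filter too full

        def contains(self, item: bytes) -> bool:
            fp, i1, i2 = self._indexes(item)
            return fp in self.buckets[i1] or fp in self.buckets[i2]

Because the alternate bucket is derived from the fingerprint alone, deletion is also possible by removing the fingerprint from either bucket, which plain Bloom filters cannot do.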
The DPI Management application, which resides on the northbound side of the SDN architecture, analyzes application signature data from the network. The data being read and analyzed are in JSON format for effective data representation, and the flows provisioned by the northbound application are also in JSON format. The data analytics engine analyzes the data stored in a non-relational database and provides information about the real-time applications used by network users. It allows the operator to provision flows dynamically, based on data from the network, to allow or block flows and to boost bandwidth. The DPI Management application decouples the application from the controller, so it can run in any hypervisor within the network. It can publish SNMP trap notifications to network operators reporting application thresholds and flow provisioning behavior, and it purges obsolete analyzed data from the non-relational database at frequent intervals.
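As a purely hypothetical sketch of what such JSON flow provisioning might look like, the snippet below posts a flow rule to a northbound REST endpoint; the URL, field names, and actions are all assumptions for illustration, not the application's actual schema.

    # Hypothetical northbound flow provisioning call in JSON.
    import json, urllib.request

    flow = {
        "app-signature": "bittorrent",   # application identified by the DPI engine
        "action": "block",               # e.g. allow | block | boost-bandwidth
        "priority": 100,
    }
    req = urllib.request.Request(
        "http://controller.example:8181/flows",   # placeholder controller URL
        data=json.dumps(flow).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # provision the flow on the controller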
Modern vehicles are opening up, with wireless interfaces such as Bluetooth integrated in order to enable comfort and safety features. Furthermore, a plethora of aftermarket devices introduce additional connectivity that contributes to the driving experience. This connectivity opens the vehicle to potentially malicious attacks, which could have negative consequences for safety. In this paper, we survey vehicles with Bluetooth connectivity from a threat intelligence perspective to gain insight into conditions during real-world driving. We do this in two ways: firstly, by examining the Bluetooth implementation in vehicles and gathering information from inside the cabin, and secondly, by war-nibbling (general monitoring and scanning for nearby devices). We find that as vehicle age decreases, the relative security of the Bluetooth implementation increases, but that there is still some technological lag in Bluetooth implementations in vehicles. We also find that a large proportion of vehicles and aftermarket devices still use legacy pairing (and are therefore more insecure), and that these vehicles remain visible long enough to mount an attack (assuming some premeditation and preparation); we demonstrate a real-world threat scenario as an example of the latter. Finally, we provide some recommendations on how the security risks we discovered could be mitigated.
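War-nibbling at its simplest is an inquiry scan for discoverable devices. A minimal sketch using the PyBluez library is shown below, assuming a local Bluetooth adapter and the pybluez package; this only records what nearby devices advertise, in the spirit of the paper's passive monitoring.

    # Minimal war-nibbling sketch: enumerate discoverable Bluetooth devices.
    import bluetooth

    nearby = bluetooth.discover_devices(duration=8, lookup_names=True)
    for addr, name in nearby:
        # The upper three bytes of the address (OUI) identify the manufacturer,
        # which helps separate vehicles and aftermarket devices from phones.
        print(addr, addr[:8], name)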
Modbus over TCP/IP is one of the most popular industrial network protocols and is widely used in critical infrastructures. However, the vulnerabilities of the Modbus TCP protocol have attracted wide public concern. Traditional intrusion detection methods can identify some intrusion behaviors, but problems remain. In this paper, we present an innovative approach, SD-IDS (Stereo Depth IDS), designed to perform real-time deep inspection of Modbus TCP traffic. The SD-IDS algorithm is composed of two parts: rule extraction and deep inspection. The rule extraction module not only analyzes the characteristics of industrial traffic, but also explores the semantic relationships among the key fields of the Modbus TCP protocol. The deep inspection module performs rule-based anomaly intrusion detection. Furthermore, we use online tests to evaluate the performance of our SD-IDS system; our approach achieves low rates of false positives and false negatives.
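To illustrate the kind of field-level inspection involved, here is a sketch that parses the Modbus TCP (MBAP) header and applies simple checks; the allowed function codes and register bound are illustrative placeholders, not SD-IDS's extracted rules.

    # Parse the MBAP header and apply illustrative rule checks.
    import struct

    ALLOWED_FUNCTIONS = {1, 2, 3, 4}   # e.g. read-only function codes (assumed)
    MAX_REGISTER = 0x1000              # highest register a client may touch (assumed)

    def inspect(payload: bytes) -> bool:
        if len(payload) < 8:
            return False
        # MBAP: transaction id, protocol id, length (big-endian shorts),
        # unit id, then the PDU function code.
        tid, proto, length, unit, func = struct.unpack(">HHHBB", payload[:8])
        if proto != 0:                 # protocol id must be 0 for Modbus
            return False
        if func not in ALLOWED_FUNCTIONS:
            return False
        if len(payload) >= 12:         # semantic check for read requests
            addr, count = struct.unpack(">HH", payload[8:12])
            if addr + count > MAX_REGISTER:
                return False
        return True

Checks like the register-range test are where semantic relationships among fields (address plus count versus the device's real register map) matter, which is the point of the rule extraction module.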
The theory of robust control models the controller-disturbance interaction as a game in which the disturbance is nonstrategic. To increase the robustness of infrastructure systems, the possibility of a deliberately malicious (strategic) attacker should also be considered. This has become especially important since many IT systems supporting critical functionalities are vulnerable to exploits by attackers. While the usefulness of game-theoretic methods for modeling cyber-security is well established in the literature, new game-theoretic models of cyber-physical security are needed to derive useful insights on "optimal" attack plans and defender responses, both in terms of the players' allocation of resources and their operational strategies. This whitepaper presents some progress and challenges in using game-theoretic models for the security of infrastructure networks. Main insights from the following models are presented: (i) a network security game on flow networks under strategic edge disruptions; (ii) an interdiction problem on distribution networks under node disruptions; (iii) an inspection game to monitor commercial non-technical losses (e.g., energy diversion); and (iv) an interdependent security game of networked control systems under communication failures. These models can be used to analyze attacker-defender interactions in a class of cyber-physical security scenarios.
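As a minimal illustration of the structure behind a model like (i), a zero-sum flow interdiction game can be written as a max-min problem; this bare formulation is a sketch, and the whitepaper's models add budgets, node disruptions, monitoring, and interdependencies on top of it.

    % Defender routes flow x; attacker disrupts edges y within budget B.
    \max_{x \in \mathcal{X}} \; \min_{y \in \mathcal{Y}(B)} \;
      \sum_{e \in E} (1 - y_e)\, x_e,
    \qquad
    \mathcal{Y}(B) = \Big\{ y \in \{0,1\}^{|E|} :
      \sum_{e \in E} c_e\, y_e \le B \Big\}

Here x_e is the defender's flow on edge e (feasible set X encodes conservation and capacities), y_e = 1 if the attacker disrupts edge e at cost c_e, and B is the attack budget; equilibria of such games indicate which edges are worth hardening.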
The correct prediction of faulty modules or classes has a number of advantages, such as improving the quality of software and assigning capable development resources to fix such faults. Different kinds of fault/defect prediction models have been proposed in the literature, but a great majority of them make use of static code metrics as the independent variables for making predictions. Recently, process metrics have gained considerable attention as alternative metrics for making trustworthy predictions. The objective of this paper is to investigate different combinations of static code and process metrics for evaluating fault prediction performance. We have used publicly available data sets, along with a frequently used classifier, Naive Bayes, to run our experiments, and have analyzed the results both statistically and visually. The statistical analysis showed evidence against any significant difference in fault prediction performance across the different combinations of metrics, reinforcing earlier research results that process metrics are as good predictors of fault proneness as static code metrics. Furthermore, visual inspection of box plots revealed that the best set of metrics for fault prediction is a mix of both static code and process metrics. We also presented evidence that some process metrics are more discriminating than others, making them particularly good predictors to use.
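A sketch of this experimental setup is shown below: Naive Bayes evaluated on different metric subsets via cross-validation. The file name and column names are placeholders for whatever static code and process metrics a given public data set provides.

    # Compare Naive Bayes fault prediction across metric subsets.
    import pandas as pd
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import cross_val_score

    df = pd.read_csv("defects.csv")                    # hypothetical data set
    static_cols = ["loc", "cyclomatic", "halstead_v"]  # example static code metrics
    process_cols = ["commits", "authors", "churn"]     # example process metrics
    y = df["defective"]

    for name, cols in [("static", static_cols),
                       ("process", process_cols),
                       ("combined", static_cols + process_cols)]:
        scores = cross_val_score(GaussianNB(), df[cols], y, cv=10,
                                 scoring="roc_auc")
        print(f"{name:9s} AUC = {scores.mean():.3f}")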
In view of the high demand for security of visiting data in power systems, this paper puts forward a network data security analysis method based on DPI technology to solve the problem of how a security gateway judges the legality of network data. Since the legitimacy of the data involves both the data protocol and the data contents, this article filters the data through protocol matching and content detection, using deep packet inspection (DPI) technology to screen the protocol and protocol analysis to inspect the data contents. This paper implements the function of allowing secure data through the gateway while blocking threatening data. An example proves that the method effectively guarantees the safety of visiting data.
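A two-stage check in the spirit of this method can be sketched as follows; the protocol whitelist and content signatures here are placeholders, not the paper's rules.

    # Stage 1: protocol matching; stage 2: content detection.
    ALLOWED_PROTOCOLS = {"modbus", "iec104"}           # assumed power-system protocols
    BLOCKED_PATTERNS = [b"<script>", b"DROP TABLE"]    # example threat signatures

    def gateway_allows(protocol: str, payload: bytes) -> bool:
        if protocol not in ALLOWED_PROTOCOLS:          # protocol screening
            return False
        # Content detection: reject payloads carrying known threat patterns.
        return not any(p in payload for p in BLOCKED_PATTERNS)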
An Industrial Control System (ICS) consists of a large number of electronic devices connected to field devices in order to execute physical processes. The communication network of an ICS supports a wide range of packet-based applications. Growing issues with network security and their impact on ICS have highlighted some fundamental risks to critical infrastructure. Addressing network security issues for ICS requires a clear understanding of security-specific defensive countermeasures. Reconnaissance of an ICS network by deep packet inspection (DPI) consists of analyzing the contents of captured packets in order to obtain accurate measures of the process and to create an aggregated security posture. In this paper we present a novel technique that works on captured network traffic. The technique can identify the protocols in use and extract different features for classifying traffic based on network protocol, header information, and payload, in order to understand the whole architecture of a complex system. We also categorize the possible types of attacks on ICS.
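The feature extraction step might look like the sketch below, which pulls protocol, header, and payload features from a capture file; it assumes the scapy package and a pcap of ICS traffic, and the feature set is an illustrative subset.

    # Extract per-packet features from a capture for traffic classification.
    from scapy.all import rdpcap, IP, TCP, Raw

    def extract_features(pcap_path: str):
        rows = []
        for pkt in rdpcap(pcap_path):
            if not pkt.haslayer(IP):
                continue
            rows.append({
                "proto": pkt[IP].proto,                      # header information
                "length": len(pkt),
                "dport": pkt[TCP].dport if pkt.haslayer(TCP) else None,
                "payload_len": len(pkt[Raw].load) if pkt.haslayer(Raw) else 0,
            })
        return rows   # feed these rows to a classifier

Destination ports alone already separate much ICS traffic (e.g. 502 for Modbus TCP), with payload features refining the classification.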
In this paper we present an approach to implementing security as a Virtualized Network Function (VNF) within a Software-Defined Infrastructure (SDI). We present a scalable, flexible, and seamless design for a Deep Packet Inspection (DPI) system for network intrusion detection and prevention, and discuss how our design introduces significant reductions in both capital and operational expenses (CAPEX and OPEX). As a proof of concept, we describe an implementation of a modular security solution that uses the SAVI SDI testbed to first detect an attack and then either block it or redirect it to a honeypot for further analysis. We discuss our testing methodology and provide measurement results for test cases in which an application faces various security attacks.
In this paper, we study security and system congestion in a risk-based checkpoint screening system with two kinds of inspection queues, called Selectee Lanes and Normal Lanes. Based on the assessed threat value, each arrival crossing the security checkpoint is classified as either a selectee or a non-selectee; the Selectee Lanes, with enhanced scrutiny, are used to check selectees, while the Normal Lanes are used to check non-selectees. The goal of the proposed modelling framework is to minimize system congestion under the constraints of total security and a limited budget. The congestion of the checkpoint screening system is determined through a steady-state analysis of multi-server queueing models. By solving an optimization model, we can determine both the optimal threshold for differentiating arrivals and the optimal number of security devices for each type of inspection queue. The analysis conducted in this study contributes managerial insights for understanding the operation and performance of such risk-based checkpoint screening systems.
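The steady-state congestion computation can be illustrated by modeling each lane type as an M/M/c queue; the sketch below uses the standard Erlang C formula, and the arrival rates, service rates, and server counts are illustrative assumptions, not the paper's data.

    # Expected queueing delay per lane type under an M/M/c model.
    from math import factorial

    def erlang_c(lam: float, mu: float, c: int) -> float:
        """Probability an arrival waits in an M/M/c queue (requires lam < c*mu)."""
        a = lam / mu
        rho = a / c
        s = sum(a**k / factorial(k) for k in range(c))
        top = a**c / (factorial(c) * (1 - rho))
        return top / (s + top)

    def mean_wait(lam: float, mu: float, c: int) -> float:
        return erlang_c(lam, mu, c) / (c * mu - lam)   # expected delay Wq

    lam_total, p_selectee = 100.0, 0.1   # arrivals/hour; fraction above threshold
    print(mean_wait(lam_total * p_selectee, mu=4.0, c=4))         # Selectee Lanes
    print(mean_wait(lam_total * (1 - p_selectee), mu=30.0, c=4))  # Normal Lanes

Sweeping the threshold fraction p_selectee and the server counts against a budget constraint is the essence of the optimization the paper solves.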
In order to enhance supply chain security at airports, the German Federal Ministry of Education and Research has initiated the project ESECLOG (enhanced security in the air cargo chain), which has the goal of improving threat detection accuracy using one-sided access methods. In this paper, we present a new X-ray backscatter technology for non-intrusive imaging of suspicious objects (mainly low-Z explosives) in luggage and parcels with only single-sided access. A key element in this technology is the X-ray backscatter camera embedded with a special twisted-slit collimator. The developed technology efficiently resolves the problem of imaging the complex interior of an object by fixing the source and object positions and changing only the scanning direction of the X-ray backscatter camera. Experiments were carried out on luggage and parcels packed with mock-up dangerous materials, including liquid and solid explosive simulants. In addition, the quality of the X-ray backscatter images was enhanced by employing high-resolution digital detector arrays. Experimental results are discussed, and the efficiency of the present technique in detecting suspicious objects in luggage and parcels is demonstrated. Finally, important applications of the proposed backscatter imaging technology to aviation security are presented.
The safety, security, and resilience of international postal, shipping, and transportation critical infrastructure are vital to the global supply chain that enables worldwide commerce and communications. But security on an international scale continues to fail in the face of new threats, such as the discovery by Panamanian authorities of suspected components of a surface-to-air missile system aboard a North Korean-flagged ship in July 2013 [1]. This reality calls for new and innovative approaches to critical infrastructure security. Owners and operators of critical postal, shipping, and transportation operations need new methods to identify, assess, and mitigate security risks and gaps in the most effective manner possible.
There are relatively few studies that use queueing models to analyze security-check waiting lines for screening cargo containers. In this paper, we address two important measures of a security-check system: screening effectiveness and efficiency. The goal of this paper is to provide a modelling framework for understanding the economic trade-offs embedded in container-inspection decisions. To analyze the policy initiatives, we develop a stylized queueing model with novel features pertaining to security checkpoints.
In manufacturing processes for systems that use screens, for example TVs, computer monitors, or notebooks, image inspection is one of the most important quality tests. Due to the increasing complexity of these systems, manual inspection has become complex and slow, making automatic inspection an attractive alternative. In this paper, we present an automatic image inspection system using edge and line detection algorithms, rectangle recognition, and image comparison metrics. Experiments performed on 504 images (of TVs, computer monitors, and notebooks) demonstrate that the system performs well.
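A pipeline of that shape can be sketched with OpenCV as below; the thresholds and the pass criterion are illustrative assumptions, not the paper's tuned values.

    # Edge/line detection, rectangle (screen) recognition, and image comparison.
    import cv2
    import numpy as np

    def inspect(image_path: str, reference_path: str, min_similarity=0.9) -> bool:
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
        edges = cv2.Canny(img, 50, 150)                        # edge detection
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 80,
                                minLineLength=100, maxLineGap=10)  # line detection
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # Rectangle recognition: 4-vertex polygonal approximations of contours.
        rects = [c for c in contours
                 if len(cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True),
                                         True)) == 4]
        # Image comparison against a known-good reference (normalized correlation).
        ref = cv2.resize(ref, (img.shape[1], img.shape[0]))
        score = cv2.matchTemplate(img, ref, cv2.TM_CCOEFF_NORMED)[0][0]
        return bool(rects) and lines is not None and score >= min_similarity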
Application domains in which early performance evaluation is needed are becoming more complex. In addition to traditional sources of complexity, due, for example, to the number of components, their interactions, and complicated control and coordination schemes, emerging applications may require adaptive response and reconfiguration driven by externally observable (security) parameters. In this paper we introduce an approach for effective modeling and analysis of performance and security tradeoffs. The approach identifies a suitable allocation of resources that meets performance requirements while maximizing measurable security effects. We demonstrate this approach through an analysis of the performance sensitivity of a Border Inspection Management System (BIMS) under changing security mechanisms (e.g., biometric system parameters for passenger identification). The final result is a model-based approach that allows decisions about BIMS performance and security mechanisms to be made on the basis of traveler arrival rates and traveler identification security guarantees. We describe the experience gained when applying this approach to the daily flight arrival schedule of a real airport.
This paper presents a methodology for developing a statistically sound, robust prognostic condition index and encapsulating this index as a series of highly accurate, transparent, human-readable rules. These rules can be used to further understand degradation phenomena and also provide transparency and trust for the underlying prognostic technique. A case study is presented on a wind turbine gearbox, utilising historical supervisory control and data acquisition (SCADA) data in conjunction with a physics-of-failure model. Training is performed without failure data, and the technique accurately identifies gearbox degradation, providing prognostic signatures up to 5 months before catastrophic failure occurred. A robust derivation of the Mahalanobis distance is employed to perform outlier analysis in the bivariate domain, enabling the rapid labelling of historical SCADA data on independent wind turbines. Following this, the RIPPER rule learner is utilised to extract transparent, human-readable rules from the labelled data. A mean classification accuracy of 95.98% against the autonomously derived condition was achieved on three independent test sets, with a mean kappa statistic of 93.96%. In total, 12 rules were extracted; with an independent domain expert providing critical analysis, two thirds of the rules were deemed intuitive in modelling the fundamental degradation behaviour of the wind turbine gearbox.
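The labelling step can be sketched as follows: a robust Mahalanobis distance over a bivariate SCADA feature pair flags outliers, and the resulting labels feed a rule learner such as RIPPER. The robust estimator here is scikit-learn's minimum covariance determinant, used as one standard choice of robust derivation; the feature pair and cutoff quantile are assumptions.

    # Robust Mahalanobis outlier labelling for bivariate SCADA data.
    import numpy as np
    from scipy.stats import chi2
    from sklearn.covariance import MinCovDet

    def label_outliers(X: np.ndarray, alpha: float = 0.975) -> np.ndarray:
        """X: (n_samples, 2) features, e.g. power vs. bearing temperature."""
        mcd = MinCovDet().fit(X)          # robust location/covariance estimate
        d2 = mcd.mahalanobis(X)           # squared robust Mahalanobis distances
        # Flag points beyond the chi-square cutoff as degradation outliers.
        return d2 > chi2.ppf(alpha, df=X.shape[1])

Training the rule learner on these boolean labels, rather than on scarce failure records, is what lets the approach work without failure data.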