Bibliography
Cyber-Physical Systems (CPS) operating in modern critical infrastructures (CIs) are increasingly being targeted by highly sophisticated cyber attacks. Threat actors have quickly learned of the value and potential impact of targeting CPS, and numerous tailored multi-stage cyber-physical attack campaigns, such as Advanced Persistent Threats (APTs), have been perpetrated in recent years. They aim at stealthily compromising systems' operations and causing severe impact on daily business, such as shutdowns, equipment damage, reputation damage, financial loss, intellectual property theft, and health and safety risks. Protecting CIs against such threats has become as crucial as it is complicated. Novel distributed detection and reaction methodologies are necessary to effectively uncover these attacks and mitigate their effects in a timely manner. Correlating large amounts of data, collected from a multitude of relevant sources, is fundamental for Security Operation Centers (SOCs) to establish cyber situational awareness and to promptly adopt suitable countermeasures in case of attacks. In our previous work we introduced three methods for security information correlation. In this paper we define metrics and benchmarks to evaluate these correlation methods, assess their accuracy, and compare their performance. We finally demonstrate how the presented techniques, implemented within our cyber threat intelligence analysis engine CAESAIR, can be applied to support incident handling tasks performed by SOCs.
The wireless boundaries of networks are becoming increasingly important from a security standpoint as 802.11 WiFi technology proliferates. Concurrently, the complexity of 802.11 access point implementation is rapidly outpacing the standardization process. The result is that nascent wireless functionality management is left up to each provider's implementation, which creates new vulnerabilities in wireless networks. One such functional improvement to 802.11 is the virtual access point (VAP), a method of broadcasting logically separate networks from the same physical equipment. Network reconnaissance benefits from VAP identification, not only because network topology is a primary aim of such reconnaissance, but also because the knowledge that a secure network and an insecure network are both being broadcast from the same physical equipment is tactically relevant information. In this work, we present a novel graph-theoretic approach to VAP identification which leverages a body of research concerned with establishing community structure. We apply our approach to both synthetic data and a large corpus of real-world data to demonstrate its efficacy. In most real-world cases, near-perfect blind identification is possible, highlighting the effectiveness of our proposed VAP identification algorithm.
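As an illustration of the community-structure idea (not the paper's exact algorithm), the sketch below groups BSSIDs into candidate physical devices by running modularity-based community detection over a similarity graph; the pairwise similarity scores (e.g., derived from beacon-timing or clock-skew agreement) and the threshold are assumptions.

    # Illustrative sketch only: cluster BSSIDs into candidate physical devices
    # via community detection over a similarity graph.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    def identify_vap_groups(similarity, threshold=0.8):
        """similarity: dict mapping (bssid_a, bssid_b) -> score in [0, 1],
        assumed here to come from beacon-timing or clock-skew agreement."""
        g = nx.Graph()
        for (a, b), score in similarity.items():
            if score >= threshold:
                g.add_edge(a, b, weight=score)
        # Each community is a set of BSSIDs hypothesised to share hardware.
        return [set(c) for c in greedy_modularity_communities(g, weight="weight")]

    groups = identify_vap_groups({("aa:aa", "aa:ab"): 0.95, ("aa:aa", "bb:bb"): 0.1})
    print(groups)  # e.g. [{'aa:aa', 'aa:ab'}]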
Location privacy has become a significant challenge in big data. In particular, given the availability of big data handling tools, huge volumes of location data can easily be managed and processed by an adversary to extract private user information from Location-Based Services (LBS). So far, many methods have been proposed to preserve user location privacy for these services. Among them, dummy-based methods have various advantages in terms of implementation and low computation costs. However, they suffer from the spatiotemporal correlation issue when users submit consecutive requests. To solve this problem, a practical hybrid location privacy protection scheme is presented in this paper. The proposed method filters out the correlated fake location data (dummies) before submission, so that the adversary cannot identify the user's real location. Evaluations and experiments show that our proposed filtering technique significantly improves the performance of existing dummy-based methods and enables them to effectively protect the user's location privacy in a big data environment.
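A minimal sketch of a spatiotemporal plausibility filter in the spirit described above, assuming a maximum realistic travel speed as the filtering criterion; the paper's actual filter is not reproduced here.

    # Hedged sketch: drop candidate dummies that could not plausibly be reached
    # from any previously submitted position at a realistic speed.
    import math

    def haversine_km(p, q):
        lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371.0 * math.asin(math.sqrt(a))

    def filter_dummies(prev_dummies, candidates, dt_hours, max_kmh=120.0):
        """Keep only candidate dummies consistent with movement since the
        previous request (prev_dummies, candidates: lists of (lat, lon))."""
        kept = []
        for c in candidates:
            if any(haversine_km(p, c) / dt_hours <= max_kmh for p in prev_dummies):
                kept.append(c)
        return kept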
Customers need to know how reliable a new release is, and whether or not the new release has substantially different, either better or worse, reliability than the one currently in production. Customers are demanding quantitative evidence, based on pre-release metrics, to help them decide whether or not to upgrade (and thereby offer new features and capabilities to their customers). Finding ways to estimate future reliability performance is not easy - we have evaluated many pre-release development and test metrics in search of reliability predictors that are sufficiently accurate and also apply to a broad range of software products. This paper describes a successful model that has resulted from these efforts, and also presents both a functional extension and a further conceptual simplification of the extended model that enables us to better communicate key release information to internal stakeholders and customers, without sacrificing predictive accuracy or generalizability. Work remains to be done, but the results of the original model, the extended model, and the simplified version are encouraging and are currently being applied across a range of products and releases. To evaluate whether or not these early predictions are accurate, and also to compare releases that are available to customers, we use a field software reliability assessment mechanism that incorporates two types of customer experience metrics: field bug encounters normalized by usage, and field bug counts, also normalized by usage. Our 'release-over-release' strategy combines the 'maturity assessment' component (i.e., estimating reliability prior to release to the field) and the 'reliability assessment' component (i.e., gauging actual reliability after release to the field). This overall approach enables us to both predict reliability and compare reliability results for recent releases of a product.
Applying security to transmitted images is a very important issue, because the transmission channel is open and can be compromised by attackers. To secure this channel against eavesdropping, man-in-the-middle attacks, and similar threats, a new hybrid image encryption mechanism that utilizes triangular scrambling, DNA encoding, and a chaotic map is implemented. The scheme takes a master key with a length of 320 bits and produces a group of sub-keys of two lengths (32 and 128 bits) to encrypt the blocks of images; a new triangular scrambling method is then used to increase the security of the image. Many experiments are carried out using several different images. The analysis results for these experiments show that the security obtained by using the proposed method is very suitable for securing transmitted images. The current work has been compared with other works, and the comparison shows that the current work is very strong against attacks.
Barrier coverage has been widely adopted to prevent unauthorized invasion of important areas in sensor networks. As sensors are typically placed outdoors, they are susceptible to faults. Previous works assumed that faulty sensors are easy to recognize, e.g., they may stop functioning or output apparently deviant sensory data. In practice, however, it is extremely difficult to recognize faulty sensors as well as their invalid output. In this paper, we propose a novel fault-tolerant intrusion detection algorithm (TrusDet) based on trust management to address this challenging issue. TrusDet comprises three steps: i) sensor-level detection, ii) sink-level decision by collective voting, and iii) trust management and fault determination. In Steps i) and ii), TrusDet divides the surveillance area into a set of fine-grained subareas and exploits the temporal and spatial correlation of sensory output among sensors in different subareas to yield more accurate and robust barrier coverage. In Step iii), TrusDet builds a trust management based framework to determine the confidence level of sensors being faulty. We implement TrusDet on HC-SR501 infrared sensors and demonstrate that it achieves the desired performance.
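One possible form of the trust-management step, sketched under the assumption that a sensor's trust rises or falls with its agreement against the sink-level collective vote; the actual TrusDet update rules are not given in the abstract.

    # Illustrative trust-update sketch (assumed form, not TrusDet's exact rules):
    # sensors that disagree with the collective vote lose trust, and sensors
    # whose trust falls below a threshold are flagged as likely faulty.
    def update_trust(trust, reports, alpha=0.1, fault_threshold=0.3):
        """trust: dict sensor_id -> [0, 1]; reports: dict sensor_id -> 0/1 vote."""
        votes = list(reports.values())
        majority = 1 if sum(votes) * 2 >= len(votes) else 0
        for sid, vote in reports.items():
            agree = 1.0 if vote == majority else 0.0
            trust[sid] = (1 - alpha) * trust[sid] + alpha * agree
        faulty = {sid for sid, t in trust.items() if t < fault_threshold}
        return majority, faulty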
Lithium-ion batteries usually degrade to an unacceptable capacity level after hundreds or even thousands of cycles. The continuously observed capacity fade data over time and their internal structure can be informative for constructing capacity fade models. This paper applies a mean-covariance decomposition modeling method to analyze the capacity fade data. The proposed approach directly examines the variances and correlations in the data of interest and expresses the correlation matrix in hyper-spherical coordinates using angles and trigonometric functions. The proposed method is applied to model and predict key battery performance metrics using testing data under various testing conditions.
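For reference, a minimal sketch of the hyper-spherical (angle-based) parameterization of a correlation matrix mentioned above: angles map to a valid correlation matrix with unit diagonal. The full mean-covariance decomposition model is not reproduced here.

    # Angles theta[i, j] in (0, pi) define a lower-triangular factor B with
    # unit-norm rows, so R = B B^T is a valid correlation matrix.
    import numpy as np

    def correlation_from_angles(theta):
        """theta: (d, d) array; only entries theta[i, j] with j < i are used."""
        d = theta.shape[0]
        B = np.zeros((d, d))
        B[0, 0] = 1.0
        for i in range(1, d):
            prod_sin = 1.0
            for j in range(i):
                B[i, j] = np.cos(theta[i, j]) * prod_sin
                prod_sin *= np.sin(theta[i, j])
            B[i, i] = prod_sin
        return B @ B.T  # unit diagonal by construction

    R = correlation_from_angles(np.full((3, 3), np.pi / 3))
    print(np.diag(R))  # [1. 1. 1.]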
In this paper, we introduce a fast, secure and robust scheme for digital image encryption using the Lorenz chaotic system, a 4D hyper-chaotic system and the Secure Hash Algorithm SHA-1. The encryption process consists of three layers: sub-vector confusion and a two-stage diffusion process. In the first layer we divide the plain image into sub-vectors; the position of each one is then changed using the chaotic index sequence generated with the Lorenz chaotic attractor, while the diffusion layers use the hyper-chaotic system to modify the values of pixels using an XOR operation. The results of the security analysis, such as statistical tests, differential attacks, key space, key sensitivity, information entropy and running time, are illustrated and compared to recent encryption schemes, showing improvements in both security level and speed.
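An illustrative confusion/diffusion sketch in the spirit of the scheme above: one chaotic sequence orders the data (confusion) and a second is XORed with the pixels (diffusion). The logistic map stands in for the Lorenz and hyper-chaotic systems, and all keys and parameters are assumptions.

    import numpy as np

    def logistic_sequence(x0, n, r=3.99):
        """Chaotic keystream source (logistic map used here for simplicity)."""
        xs = np.empty(n)
        for i in range(n):
            x0 = r * x0 * (1.0 - x0)
            xs[i] = x0
        return xs

    def encrypt(img_bytes, key1=0.3141, key2=0.2718):
        data = np.frombuffer(bytes(img_bytes), dtype=np.uint8).copy()
        perm = np.argsort(logistic_sequence(key1, data.size))           # confusion
        ks = (logistic_sequence(key2, data.size) * 256).astype(np.uint8)
        return data[perm] ^ ks                                           # diffusion

    def decrypt(cipher, key1=0.3141, key2=0.2718):
        perm = np.argsort(logistic_sequence(key1, cipher.size))
        ks = (logistic_sequence(key2, cipher.size) * 256).astype(np.uint8)
        plain = np.empty_like(cipher)
        plain[perm] = cipher ^ ks                                        # undo both layers
        return plain

    assert bytes(decrypt(encrypt(b"example pixels"))) == b"example pixels"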
Emerging communication technologies in distributed network systems require the transfer of biometric digital images with high security. Network security is characterized by changes in system behavior, which is either dynamic or deterministic. Performance computation is complex in dynamic systems, where cryptographic techniques are not highly suitable. Chaos theory solves complex problems of nonlinear deterministic systems. Several chaotic methods are combined to obtain a hyper-chaotic system for greater security. Chaos theory along with DNA sequences enhances the security of biometric image encryption. The implementation shows that the encrypted image is highly chaotic and resistant to various attacks.
In this paper a random number generation method based on piecewise linear one-dimensional (PL1D) discrete-time chaotic maps is proposed for applications in cryptography and steganography. Appropriate parameters are determined by examining the distribution of the underlying chaotic signal, and the random number generator (RNG) is numerically verified by the four fundamental statistical tests of FIPS 140-2. The proposed design is practically realized on field programmable analog and digital arrays (FPAA-FPGA). Finally, it is experimentally verified that the presented RNG fulfills the NIST 800-22 randomness tests without post-processing.
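A toy sketch of a PL1D-map bit generator, using a skew tent map with an assumed parameter and thresholding rule; no FIPS 140-2 or NIST 800-22 compliance is implied for this simplified version.

    # Bit generator driven by a piecewise linear 1-D (skew tent) map.
    def skew_tent(x, p=0.49):
        return x / p if x < p else (1.0 - x) / (1.0 - p)

    def pl1d_bits(seed, n_bits, p=0.49, burn_in=1000):
        x = seed
        for _ in range(burn_in):          # discard the transient
            x = skew_tent(x, p)
        bits = []
        for _ in range(n_bits):
            x = skew_tent(x, p)
            bits.append(1 if x >= 0.5 else 0)
        return bits

    print(sum(pl1d_bits(0.123456, 10000)) / 10000)  # should be near 0.5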
The paper considers the general structure of a pseudo-random binary sequence generator based on the numerical solution of chaotic differential equations. The proposed generator architecture divides the generation process into two stages: numerical simulation of the chaotic system and conversion of the resulting sequence into binary form. A new method for calculating the normalization factor is applied to the conversion of state variable values into the binary sequence. The numerical solution of the chaotic ODEs is implemented using a semi-implicit symmetric composition D-method. The experimental study considers the Thomas and Rössler attractors as test chaotic systems. Verification of the output sequences' properties is carried out using correlation analysis methods and the NIST statistical test suite. It is shown that the output sequences of the investigated generators have statistical and correlation characteristics specific to random sequences. The obtained results can be used in cryptography applications as well as in secure communication system design.
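A minimal sketch of the two-stage structure (ODE simulation, then binarization) using the Thomas attractor; a plain RK4 step replaces the paper's semi-implicit symmetric composition D-method, and the median-threshold binarization rule is an assumption standing in for the normalization factor.

    import numpy as np

    def thomas_rhs(s, b=0.208186):
        x, y, z = s
        return np.array([np.sin(y) - b * x, np.sin(z) - b * y, np.sin(x) - b * z])

    def rk4_step(s, h=0.05):
        k1 = thomas_rhs(s)
        k2 = thomas_rhs(s + h / 2 * k1)
        k3 = thomas_rhs(s + h / 2 * k2)
        k4 = thomas_rhs(s + h * k3)
        return s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    def prbs(n_bits, s0=(0.1, 0.0, 0.0)):
        s, xs = np.array(s0, dtype=float), []
        for _ in range(n_bits):
            s = rk4_step(s)
            xs.append(s[0])
        xs = np.array(xs)
        return (xs > np.median(xs)).astype(int)  # normalize, then threshold

    print(prbs(32))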
Information security is crucial to data storage and transmission, which is necessary to protect information under various hostile environments. Cryptography serves as a major element to ensure confidentiality in both communication and information technology, where encryption and decryption schemes are implemented to scramble the pure plaintext and descramble the secret ciphertext using security keys. There are two dominating types of encryption schemes: deterministic encryption and chaotic encryption. Encryption and decryption can be conducted in either the spatial domain or the frequency domain. To ensure secure transmission of digital information, comparisons of the merits and drawbacks of two practical encryption schemes are conducted, with case studies on true color digital image encryption. Both deterministic encryption in the spatial domain and chaotic encryption in the frequency domain are analyzed in context, as well as the information integrity after decryption.
Microcavity polaritons are a hybrid photonic system that arises from the strong coupling of confined photons to quantum-well excitons. Due to their light-matter nature, polaritons possess a Kerr-like nonlinearity while being easily accessible by standard optical means. The ability to engineer confinement potentials in microcavities makes polaritons a very convenient system to study spatially localized bosonic populations, which might have great potential for the creation of novel photonic devices. Careful engineering of this system is predicted to induce Gaussian squeezing, a phenomenon that lies at the heart of the so-called unconventional photon blockade associated with single photon emission. This paper reveals a manifestation of the predicted squeezing by measuring the ultrafast time-dependent second-order correlation function g(2)(0) by means of a streak camera acting as a single photon detector. The light emitted by the microcavity oscillates between Poissonian and super-Poissonian in phase with the Josephson dynamics. This behavior is remarkably well explained by quantum simulations, which predict such a dynamical evolution of the squeezing parameters. The paper shows that a crucial prerequisite for squeezing is the presence of a weak but non-zero nonlinearity. The results open the way towards the generation of nonclassical light in solid-state systems possessing a single particle nonlinearity, like microwave Josephson junctions or silicon-on-chip resonators.
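For reference, the standard (textbook) definition of the zero-delay second-order correlation function measured in this experiment; this is not taken from the paper's methods.

    % g2(0) < 1 indicates sub-Poissonian light, g2(0) > 1 super-Poissonian.
    g^{(2)}(0) = \frac{\langle \hat{a}^{\dagger}\hat{a}^{\dagger}\hat{a}\hat{a} \rangle}{\langle \hat{a}^{\dagger}\hat{a} \rangle^{2}}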
Supercomputers are widely applied in various domains thanks to their high processing capability and mass storage. With a growing number of supercomputing users, system security is receiving comprehensive attention and becoming more and more important. In this paper, according to the characteristics of the supercomputing environment, we perform an in-depth analysis of existing security problems in the process of using resources. To solve these problems, we propose a security analysis method and a prototype system for supercomputing users' behavior. The basic idea is to restore complete user behavior paths and operation records based on the supercomputing business process and to track the use of resources. Finally, the method is evaluated and the results show that the security analysis of users' behavior can help administrators detect security incidents in time and respond quickly. The ultimate purpose is to optimize and improve the security level of the whole system.
Protecting Critical Infrastructures (CIs) against contemporary cyber attacks has become a crucial as well as complex task. Modern attack campaigns, such as Advanced Persistent Threats (APTs), leverage weaknesses in an organization's business processes and exploit vulnerabilities of several systems to hit their target. Although their life-cycle can last for months, these campaigns typically go undetected until they achieve their goal. They usually aim at data exfiltration, cause service disruptions, and can also undermine human safety. Novel detection techniques and incident handling approaches are therefore required to effectively protect CIs' networks and react to this type of threat in a timely manner. Correlating large amounts of data, collected from a multitude of relevant sources, is necessary and sometimes required by national authorities to establish cyber situational awareness and to promptly adopt suitable countermeasures in case of an attack. In this paper we propose three novel methods for security information correlation designed to discover relevant insights and support the establishment of cyber situational awareness.
Legacy work on correcting firewall anomalies operates with the premise of creating totally disjunctive rules. Unfortunately, such solutions are impractical from an implementation point of view as they lead to an explosion of the number of firewall rules. In a related previous work, we proposed a new approach for performing assisted corrective actions which, in contrast to the state-of-the-art family of radically disjunctive approaches, does not lead to a prohibitive increase of the configuration size. In this sense, we allow relaxation in the correction process by clearly distinguishing between constructive anomalies that can be tolerated and destructive anomalies that should be systematically fixed. However, a main disadvantage of the latter approach was its dependency on guided input from the administrator, which in turn introduces a new risk of human error. In order to circumvent this disadvantage, we present in this paper a Firewall Policy Query Engine (FPQE) that renders the whole process of anomaly resolution fully automated and requires no human intervention. Instead of prompting the administrator to insert the proper corrective actions in order, FPQE executes queries against a high-level firewall policy. We have implemented FPQE, and the first results of integrating it with our legacy anomaly resolver are promising.
Many technologies have been developed to provide effective opportunities to enhance road safety and improve the transportation system. In this context, the concept of Vehicular Ad-Hoc Networks (VANET) was introduced to provide intelligent transportation systems. In this work, we propose the use of an OBD Bluetooth adapter and a smartphone to gather data from two cars, and we analyze the relationship between RPM and speed data to identify whether it reflects the vehicle's current gear. As a result, we found a coefficient that indicates the behavior of each gear over time in a trace. We conclude that this analysis, although preliminary, suggests a way to determine the gear state. Therefore, many services can be developed using this information, such as gear shift recommendations, eco-driving support, security patterns, and entertainment applications.
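A minimal sketch of the speed/RPM heuristic: within a fixed gear the ratio of speed to RPM is roughly constant, so comparing the observed ratio to per-gear coefficients suggests the current gear. The coefficients below are made-up examples, not values from the paper.

    def estimate_gear(speed_kmh, rpm, ratios=(0.009, 0.016, 0.024, 0.033, 0.042)):
        """ratios: assumed per-gear speed/RPM coefficients for a specific car."""
        if rpm <= 0 or speed_kmh <= 0:
            return 0  # idle / clutch pressed
        r = speed_kmh / rpm
        # pick the gear whose nominal coefficient is closest to the observed ratio
        return min(range(len(ratios)), key=lambda g: abs(ratios[g] - r)) + 1

    print(estimate_gear(50, 2100))  # ratio ~0.024 -> 3rd gear in this example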
The main emphasis of this paper is to develop an approach able to blindly detect and assess perceptual blur degradation in images. The idea relies on a statistical modelling of perceptual blur degradation in the frequency domain using the discrete cosine transform (DCT) and the Just Noticeable Blur (JNB) concept. A machine learning system is then trained using the considered statistical features to detect the perceptual blur effect in the acquired image and eventually produces a quality score denoted BBQM, for Blind Blur Quality Metric. The efficiency of the proposed BBQM is tested objectively by evaluating its performance against some existing metrics in terms of correlation with subjective scores.
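An illustrative DCT-based blur feature in the spirit of the approach above: the share of block-DCT energy in high frequencies drops as blur increases. The feature definition is an assumption for illustration, not the BBQM formulation.

    import numpy as np
    from scipy.fftpack import dct

    def block_dct2(block):
        return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

    def high_freq_energy_ratio(gray, bs=8):
        """gray: 2-D grayscale image array; returns a crude sharpness feature."""
        h, w = (gray.shape[0] // bs) * bs, (gray.shape[1] // bs) * bs
        total, high = 1e-12, 0.0
        for i in range(0, h, bs):
            for j in range(0, w, bs):
                c = block_dct2(gray[i:i + bs, j:j + bs].astype(float))
                e = c ** 2
                total += e.sum()
                high += e[bs // 2:, :].sum() + e[:bs // 2, bs // 2:].sum()
        return high / total  # lower ratio -> blurrier image (roughly)

Features such as this one could then be fed to a classifier or regressor trained against subjective scores.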
A cyber security operations centre (CSOC) is an essential business control aimed at protecting ICT systems and supporting an organisation's Cyber Defense Strategy. Its overarching purpose is to ensure that incidents are identified and managed to resolution swiftly, and to maintain safe and secure business operations and services for the organisation. A CSOC framework is proposed comprising Log Collection, Analysis, Incident Response, Reporting, Personnel and Continuous Monitoring. Further, a Cyber Defense Strategy, supported by the CSOC framework, is discussed. Overlaid atop the strategy are the well-known Her Majesty's Government (HMG) Protective Monitoring Controls (PMCs). Finally, the difficulties and benefits of operating a CSOC are explained.
The strength of the security and privacy of any cryptographic mechanism that uses random numbers requires that the generated random numbers have two important properties: uniform distribution and independence. With the growth of the Internet, many devices connected to the Internet host sensors. One proposed idea is to use sensor data as the seed for a Random Number Generator (RNG), since sensors measure physical phenomena that exhibit randomness over time. The random numbers generated from sensor data can be used for cryptographic algorithms in Internet activities. However, sensor data also pose weaknesses: sensors may be under adversarial control, which may lead to a predictable random sequence that breaks security and privacy. This paper proposes a wash-rinse-spin approach to process the raw sensor data that increases the randomness of the seed value. The sequences generated from two sensors are combined by a decimation method to improve unpredictability. This makes the sensor data more secure for generating random numbers, preventing attackers from knowing the random sequence through adversarial control.
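A toy sketch of the decimation step only: bits derived from one sensor stream select which bits of the other stream are kept (shrinking-generator style). This is an assumed illustration; the wash-rinse-spin processing of the raw sensor data is not reproduced.

    def decimate_combine(bits_a, bits_b):
        """Keep a bit of stream B only when the paired bit of stream A is 1."""
        return [b for a, b in zip(bits_a, bits_b) if a == 1]

    a = [1, 0, 1, 1, 0, 1, 0, 1]   # bits derived from sensor A
    b = [0, 1, 1, 0, 1, 1, 0, 0]   # bits derived from sensor B
    print(decimate_combine(a, b))  # -> [0, 1, 0, 1, 0]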
Online Social Networks exploit a lightweight process to identify their users so as to facilitate their fast adoption. However, such convenience comes at the price of making legitimate users subject to different threats created by fake accounts. Therefore, there is a crucial need to empower users with tools helping them assign a level of trust to whomever they interact with. To cope with this issue, in this paper we introduce a novel model, DIVa, that leverages mining techniques to find correlations among user profile attributes. These correlations are discovered not from the user population as a whole, but from individual communities, where the correlations are more pronounced. DIVa exploits a decentralized learning approach and ensures privacy preservation, as each node in the OSN independently processes its local data and is required to know only its direct neighbors. Extensive experiments using real-world OSN datasets show that DIVa is able to extract fine-grained community-aware correlations among profile attributes with average improvements of up to 50% over the global approach.
Spatial-multiplexing cameras have emerged as a promising alternative to classical imaging devices, often enabling acquisition of `more for less'. One popular architecture for spatial multiplexing is the single-pixel camera (SPC), which acquires coded measurements of the scene with pseudo-random spatial masks. Significant theoretical developments over the past few years provide a means for reconstruction of the original imagery from coded measurements at sub-Nyquist sampling rates. Yet, accurate reconstruction generally requires high measurement rates and high signal-to-noise ratios. In this paper, we enquire whether one can perform high-level visual inference problems (e.g., face recognition or action recognition) from compressive cameras without the need for image reconstruction. This is an interesting question since in many practical scenarios our goals extend beyond image reconstruction. However, most inference tasks require non-linear features and it is not clear how to extract such features directly from compressed measurements. In this paper, we show that one can extract nontrivial correlational features directly, without reconstruction of the imagery. As a specific example, we consider the problem of face recognition beyond the visible spectrum, e.g., in the short-wave infrared (SWIR) region, where pixels are expensive. We base our framework on smashed filters, which suggests that inner products between high-dimensional signals can be computed in the compressive domain to a high degree of accuracy. We collect a new face image dataset of 30 subjects, obtained using an SPC. Using face recognition as an example, we show that one can indeed perform reconstruction-free inference with a very small loss of accuracy at very high compression ratios of 100 and more.
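A toy demonstration of the compressive inner-product idea behind smashed filters: random projections approximately preserve inner products, so correlation filters can be applied to the measurements directly. The dimensions and the ±1 mask ensemble below are illustrative assumptions, not the paper's setup.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 4096, 256                                  # ambient dim, measurements
    x = rng.standard_normal(n)
    y = x + 0.1 * rng.standard_normal(n)              # a signal correlated with x
    phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)   # random +/-1 masks

    print(np.dot(x, y))              # true inner product
    print(np.dot(phi @ x, phi @ y))  # compressive estimate, relative error ~ O(1/sqrt(m))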
In this paper, we present the design, architecture, and implementation of a novel analysis engine, called Feature Collection and Correlation Engine (FCCE), that finds correlations across a diverse set of data types spanning over large time windows with very small latency and with minimal access to raw data. FCCE scales well to collecting, extracting, and querying features from geographically distributed large data sets. FCCE has been deployed in a large production network with over 450,000 workstations for 3 years, ingesting more than 2 billion events per day and providing low latency query responses for various analytics. We explore two security analytics use cases to demonstrate how we utilize the deployment of FCCE on large diverse data sets in the cyber security domain: 1) detecting fluxing domain names of potential botnet activity and identifying all the devices in the production network querying these names, and 2) detecting advanced persistent threat infection. Both evaluation results and our experience with real-world applications show that FCCE yields superior performance over existing approaches, and excels in the challenging cyber security domain by correlating multiple features and deriving security intelligence.
Side Channel Attacks (SCA) using power measurements are a known method of breaking cryptographic algorithms such as AES. Published research into attacks on AES frequently targets only AES-128, and often targets only the core Electronic Code-Book (ECB) algorithm, without discussing surrounding issues such as triggering or recovery of the initialization vector. This paper demonstrates a complete attack on a secure bootloader, where the firmware files have been encrypted with AES-256-CBC. A classic Correlation Power Analysis (CPA) attack is performed on AES-256 to recover the complete 32-byte key, and a CPA attack is also used to attempt recovery of the initialization vector (IV).
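A minimal CPA sketch against a first-round AES S-box output, using a Hamming-weight leakage model and Pearson correlation; the paper's full bootloader attack (AES-256-CBC, 32-byte key, IV recovery, triggering) involves considerably more than this single key-byte recovery.

    import numpy as np

    # Precomputed Hamming weights of all byte values (leakage model).
    HW = np.array([bin(v).count("1") for v in range(256)])

    def cpa_key_byte(traces, plaintext_byte, sbox):
        """traces: (n, n_samples) power traces; plaintext_byte: (n,) uint8 values
        of one plaintext byte; sbox: the standard 256-entry AES S-box as a
        numpy uint8 array (supply the published table)."""
        t = traces - traces.mean(axis=0)
        t_norm = np.linalg.norm(t, axis=0) + 1e-12
        best = (-1.0, None)
        for guess in range(256):
            # Hypothetical leakage: HW of the S-box output for this key guess.
            model = HW[sbox[plaintext_byte ^ guess]].astype(float)
            m = model - model.mean()
            corr = np.abs(m @ t) / (np.linalg.norm(m) * t_norm + 1e-12)
            best = max(best, (corr.max(), guess))
        return best[1]  # key-byte guess with the highest peak correlation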