Biblio
This paper studies the problem of designing an optimal privacy mechanism with low energy cost. An eavesdropper and a defender, each with limited resources, must choose which channel to eavesdrop on and which to defend, respectively. A zero-sum stochastic game framework is used to model the interaction between the two players, and the game is solved through a Nash Q-learning approach. A numerical example is given to verify the proposed method.
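As a rough illustration of the solution approach, the sketch below implements a Nash Q-learning update for a two-player zero-sum stochastic game. The state/action sizes, the reward signal, and the pure-strategy maximin shortcut for the stage-game value are assumptions for brevity, not the paper's exact formulation (the true Nash value of each stage game would be computed by a linear program over mixed strategies).

```python
# Minimal sketch of Nash Q-learning for a zero-sum stochastic game.
# All sizes and parameters are hypothetical; the paper's channel model,
# rewards, and energy costs are not reproduced here.
import numpy as np

n_states, n_def, n_eav = 4, 3, 3     # hypothetical state/action space sizes
gamma, alpha = 0.9, 0.1              # discount factor and learning rate
Q = np.zeros((n_states, n_def, n_eav))  # defender payoff; eavesdropper gets -Q

def game_value(M):
    """Value of the zero-sum stage game M (defender = row maximizer).
    Simplified to a pure-strategy maximin; the full Nash value requires
    a linear program over mixed strategies."""
    return M.min(axis=1).max()

def nash_q_update(s, a_def, a_eav, reward, s_next):
    """One Nash Q-learning step: bootstrap on the value of the stage
    game at the next state instead of a single-agent max."""
    target = reward + gamma * game_value(Q[s_next])
    Q[s, a_def, a_eav] += alpha * (target - Q[s, a_def, a_eav])
```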
Many organizations process and store classified data within their computer networks. Owing to the value of the data they hold, such organizations are attractive targets for adversaries. Accordingly, sensitive organizations resort to an 'air-gap' approach on their networks to ensure better protection. However, despite the physical and logical isolation, attackers have successfully compromised such networks, as the examples of Stuxnet and Agent.btz show. Such attacks were possible through the successful manipulation of human beings. To mount such attacks, persistent reconnaissance of the employees and collection of their data often forms the first step. With the rapid integration of social media into our daily lives, the prospects for data-seekers on that platform are greater than ever. The inherent risks and vulnerabilities of social networking sites and apps have cultivated a rich environment for foreign adversaries to cherry-pick personal information and successfully profile employees holding sensitive appointments. With further targeted social engineering techniques against the identified employees and their families, attackers extract more and more relevant data to build an intelligence picture. Finally, all the information is fused to design further sophisticated attacks against the air-gapped facility for data pilferage. In this regard, the adversaries' success in harvesting the personal information of the victims largely depends upon the common errors committed by legitimate users while on duty, in transit, and after their retreat. Such errors will keep recurring unless they are traced back to the underlying human behaviors and weaknesses, and the requisite mitigation framework is worked out.
Security is an essential requirement of Industrial Control System (ICS) environments and their underlying communication infrastructure. Especially the lowest communication level within Supervisory Control and Data Acquisition (SCADA) systems, the field level, commonly lacks security measures. Since emerging wireless technologies at the field level expose the lowest communication infrastructure to potential attackers, additional security measures beyond the prevalent concept of air-gapped communication must be considered. Therefore, this work analyzes security aspects of the wireless communication protocol IO-Link Wireless (IOLW), which is commonly used for sensor and actuator communication at the field level. A possible architecture for an IOLW safety layer has already been presented recently [1]. In this paper, the overall attack surface of IOLW within its typical environment is analyzed and attack preconditions are investigated to assess the effectiveness of different security measures. Additionally, enhanced security measures are evaluated for the communication systems and the results are summarized. The interference of security measures with functional safety principles within the communication is also investigated; the two do not necessarily complement one another and may even have contradictory requirements. This work is intended to discuss and propose enhancements of the IOLW standard with additional security considerations for future implementations.
Today's companies increasingly rely on the Internet of Everything (IoE) to modernize their operations. The very complex characteristics of such systems expose their applications and exchanged data to multiple risks and security breaches that make them targets for cyber attacks. The aim of our work in this paper is to provide a cybersecurity strategy whose objective is to prevent and anticipate threats related to the IoE. An economic approach is used to support decisions that reduce the risks arising when appropriate security levels are not defined. The considered problem has been solved by exploiting a combinatorial optimization approach built on a practical knapsack formulation. We opted for a bi-objective model under uncertainty, with a cardinality constraint and a given budget to be respected. To guarantee the robustness of our strategy, we have also considered the criterion of uncertainty by taking into account all the possible threats that can be generated by cyber attacks on the IoE. Our strategy has been implemented and simulated in the MATLAB environment, and its performance has been compared to that obtained with the NSGA-II metaheuristic. Our proposed cybersecurity strategy recorded a clear improvement in efficiency with respect to the optimization of the security-level and cost parameters.
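To make the optimization setting concrete, here is a minimal sketch of the kind of budgeted, cardinality-constrained knapsack selection the abstract describes. The countermeasure costs, risk reductions, and the single-objective exhaustive search are illustrative assumptions; the paper's model is bi-objective, accounts for uncertainty, and is implemented in MATLAB.

```python
# Toy knapsack: choose security countermeasures to maximize risk reduction
# under a budget and a cardinality limit. All numbers are hypothetical.
from itertools import combinations

costs     = [4, 3, 6, 5]      # cost of each countermeasure (hypothetical)
risk_cut  = [7, 4, 9, 6]      # risk reduction of each countermeasure
budget, max_picks = 10, 2     # budget and cardinality constraint

best, best_set = -1, ()
for k in range(1, max_picks + 1):
    for subset in combinations(range(len(costs)), k):
        cost = sum(costs[i] for i in subset)
        if cost <= budget:
            value = sum(risk_cut[i] for i in subset)
            if value > best:
                best, best_set = value, subset

# Exhaustive search for illustration only; metaheuristics such as NSGA-II
# or exact solvers are needed at realistic problem sizes.
print(best_set, best)
```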
Reliable and secure grid operation becomes more and more challenging in the context of increasing IT/OT convergence and decreasing dynamic margins in today's power systems. To ensure the correct operation of monitoring and control functions in control centres, an intelligent assessment of the different information sources is necessary to provide a robust data source in the case of critical physical events as well as cyber-attacks. Within this paper, a holistic data stream assessment methodology is proposed, using expert-knowledge-based cyber-physical situational awareness for different steady and transient system states. This approach goes beyond existing techniques by combining high-resolution PMU data with SCADA information as well as Digital Twin and AI-based anomaly detection functionalities.
Security has become a crucial consideration and is one of the most important design goals for an embedded system. This paper examines types of boot sequences, and more specifically trusted boot, which utilizes a chain of trust. After defining these terms, the paper examines the limitations of the existing safe boot, and finally proposes a trusted-boot method based on a hypothesis-testing benchmark, together with the cost of performing this method.
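As a toy illustration of the chain-of-trust idea only (not the paper's hypothesis-testing method), each boot stage below verifies a pinned digest of the next stage before handing over control. The stage images and digests are hypothetical; real trusted boot uses signed images anchored in a hardware root of trust.

```python
# Toy chain of trust: stage i-1 holds the expected digest of stage i's
# image and refuses to transfer control on a mismatch.
import hashlib

def digest(image: bytes) -> str:
    return hashlib.sha256(image).hexdigest()

def verify_chain(stages, expected_digests):
    """stages: list of stage images; expected_digests[i] is pinned in stage i-1."""
    for image, expected in zip(stages, expected_digests):
        if digest(image) != expected:
            return False          # halt boot: chain of trust broken
    return True

stages = [b"bootloader", b"kernel"]       # hypothetical stage images
pinned = [digest(s) for s in stages]      # provisioned at build time
print(verify_chain(stages, pinned))       # True; tampering flips this to False
```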
Cyber reconnaissance is the process of gathering information about a target network for the purpose of compromising systems within that network. Network-based deception has emerged as a promising approach to disrupt attackers' reconnaissance efforts. However, limited work has been done so far on measuring the effectiveness of network-based deception. Furthermore, given that Software-Defined Networking (SDN) facilitates cyber deception by allowing network traffic to be modified and injected on-the-fly, understanding the effectiveness of different cyber deception strategies is critical. In this paper, we present a model of the reconnaissance surface of a network that casts the attackers' information gathering as interactions with a cyber defensive system that may use deception. To capture the evolution of the attackers' knowledge during reconnaissance, we design a belief system that is updated using a Bayesian inference method. For the proposed model, we present two metrics based on KL-divergence to quantify the effectiveness of network deception. We tested the model and the two metrics by conducting experiments with a simulated attacker in an SDN-based deception system. The results of the experiments match our expectations, providing support for the model and the proposed metrics.
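A minimal sketch of the two ingredients the abstract names: a Bayesian belief update over discrete hypotheses, and a KL-divergence score between beliefs. The two-hypothesis space (real vs. decoy host) and the likelihood values are hypothetical, not taken from the paper.

```python
# Bayesian belief update plus KL-divergence scoring, in the spirit of
# measuring how much a deceptive observation moves an attacker's belief.
import numpy as np

def bayes_update(prior, likelihood):
    """posterior ∝ likelihood * prior, over a discrete hypothesis space."""
    post = likelihood * prior
    return post / post.sum()

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) for discrete distributions; eps guards against log(0)."""
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return float(np.sum(p * np.log(p / q)))

belief = np.array([0.5, 0.5])      # hypothetical: host is {real, decoy}
scan   = np.array([0.2, 0.8])      # likelihood of the observed scan result
belief_new = bayes_update(belief, scan)
print(kl_divergence(belief_new, belief))  # how far the observation moved the belief
```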
Compressed sensing (CS) can represent a sparse signal with a small number of measurements compared to Nyquist-rate sampling. Considering the high complexity of reconstruction algorithms in CS, compressive detection has recently been proposed, which performs detection directly in the compressive domain without reconstruction. Different from existing work, which generally considers measurements corrupted by dense noise, this paper studies the compressive detection problem when the measurements are corrupted by both dense noise and sparse errors. Sparse errors exist in many practical systems, such as those affected by impulse noise or narrowband interference. We derive the theoretical performance of compressive detection when the sparse error is either deterministic or random. The theoretical results are further verified by simulations.
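One plausible way to write down the detection problem the abstract describes, offered purely for illustration and not necessarily the paper's exact model: with measurement matrix Φ, sparse signal x, dense noise n, and sparse error e, the detector must decide between

```latex
% Hypothesized compressive detection test with dense noise n and a
% sparse error e (illustrative formulation, not the paper's exact model)
\begin{aligned}
\mathcal{H}_0 &:\quad y = n + e, \\
\mathcal{H}_1 &:\quad y = \Phi x + n + e, \qquad \|e\|_0 \le k \ll m,\; y \in \mathbb{R}^m.
\end{aligned}
```

Under such a model, the detector must distinguish the signal component Φx from a few large entries of e in addition to the dense noise, which is what makes the sparse-error setting harder than the classical one.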
Feedback loss can severely degrade overall system performance; in addition, it can affect the control and computation of Cyber-Physical Systems (CPS). CPS hold enormous potential for a wide range of emerging applications, including those with stochastic and time-critical traffic patterns. Stochastic data are random by nature, which makes it a great challenge to maintain real-time control whenever data are lost. In this paper, we propose a data recovery scheme, called the Efficient Temporal and Spatial Data Recovery (ETSDR) scheme, for stochastic incomplete feedback in CPS. In this scheme, we identify the temporal model based on the traffic patterns and consider the spatial effect of the nearest neighbor. Numerical results reveal that the proposed ETSDR outperforms both weighted prediction (WP) and the exponentially weighted moving average (EWMA) algorithm, regardless of the percentage of missing data, in terms of the root mean square error, the mean absolute error, and the integral of absolute error.
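For context, here is a minimal sketch of the EWMA baseline that ETSDR is compared against: a lost feedback sample is replaced with the current exponentially weighted estimate of past samples. The smoothing factor and the series are hypothetical; ETSDR itself goes further by fitting a temporal model to the traffic pattern and exploiting the nearest spatial neighbor.

```python
# EWMA-based recovery of lost feedback samples (None marks a lost sample).
def ewma_recover(samples, lam=0.3):
    """Return the series with lost entries filled by the running EWMA."""
    est, out = None, []
    for s in samples:
        if s is None:              # feedback lost: substitute current estimate
            s = est if est is not None else 0.0
        # update the running estimate with the (observed or imputed) sample
        est = s if est is None else lam * s + (1 - lam) * est
        out.append(s)
    return out

print(ewma_recover([1.0, 1.2, None, 1.4, None, None]))
```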
To deliver sample estimates provided with the necessary probability foundation to permit generalization from the sample data subset to the whole target population being sampled, probability sampling strategies are required to satisfy three necessary, but not sufficient, conditions: 1) all inclusion probabilities must be greater than zero in the target population to be sampled; if some sampling units have an inclusion probability of zero, then a map accuracy assessment does not represent the entire target region depicted in the map to be assessed; 2) the inclusion probabilities must be a) knowable for nonsampled units and b) known for the units selected in the sample, since the inclusion probability determines the weight attached to each sampling unit in the accuracy estimation formulas; if the inclusion probabilities are unknown, so are the estimation weights. This original work presents a novel (to the best of the authors' knowledge, the first) probability sampling protocol for quality assessment and comparison of thematic maps generated from spaceborne/airborne very high resolution images, where: 1) an original Categorical Variable Pair Similarity Index (proposed in two different formulations) is estimated as a fuzzy degree of match between a reference and a test semantic vocabulary, which may not coincide, and 2) both symbolic pixel-based thematic quality indicators (TQIs) and sub-symbolic object-based spatial quality indicators (SQIs) are estimated with a degree of uncertainty in measurement, in compliance with the well-known Quality Assurance Framework for Earth Observation (QA4EO) guidelines. Like a decision tree, any protocol (guidelines for best practice) comprises a set of rules, equivalent to structural knowledge, and an order of presentation of the rule set, known as procedural knowledge. The combination of these two levels of knowledge makes an original protocol worth more than the sum of its parts. The several degrees of novelty of the proposed probability sampling protocol are highlighted in this paper, at the levels of understanding of both structural and procedural knowledge, in comparison with related multi-disciplinary works selected from the existing literature. In the experimental session, the proposed protocol is tested for accuracy validation of preliminary classification maps automatically generated by the Satellite Image Automatic Mapper (SIAM™) software product from two WorldView-2 images and one QuickBird-2 image provided by DigitalGlobe for testing purposes. In these experiments, the collected TQIs and SQIs are statistically valid, statistically significant, consistent across maps, and in agreement with theoretical expectations, visual (qualitative) evidence, and the quantitative quality indexes of operativeness (OQIs) claimed for SIAM™ in related papers. As a subsidiary conclusion, the statistically consistent and statistically significant accuracy validation of the SIAM™ pre-classification maps proposed in this contribution, together with the OQIs claimed for SIAM™ in related works, makes the operational (automatic, accurate, near real-time, robust, scalable) SIAM™ software product eligible for opening up new inter-disciplinary research and market opportunities, in accordance with the visionary goal of the Global Earth Observation System of Systems initiative and the QA4EO international guidelines.
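To illustrate why inclusion probabilities drive the estimation weights, here is a minimal Horvitz-Thompson style sketch in which each sampled unit is weighted by the inverse of its inclusion probability. The counts, probabilities, and population size are hypothetical, and the paper's protocol involves far more than this single formula; the sketch only shows the mechanism by which an unknown inclusion probability would make the estimate incomputable.

```python
# Horvitz-Thompson style accuracy estimate: weight each sampled unit
# by 1/pi_i, its inverse inclusion probability. Numbers are hypothetical.
correct = [1, 1, 0, 1]           # 1 = map label matched the reference label
pi      = [0.2, 0.5, 0.5, 0.1]   # inclusion probability of each sampled unit
N       = 20                     # number of units in the target population

# Estimated overall accuracy: (1/N) * sum of weighted correctness scores.
accuracy_hat = sum(c / p for c, p in zip(correct, pi)) / N
print(accuracy_hat)              # 0.85 for these hypothetical values
```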