Biblio
The Case Studies in Cyber Supply Chain Risk Management series engaged with several companies that are mature in managing cyber supply chain risk. These case studies build on the Best Practices in Cyber Supply Chain Risk Management case studies originally published in 2015 with the goals of covering new organizations in new industries and bringing to light any changes in cyber supply chain risk management practices.
The Zero Trust model ensures that each node is responsible for approving a transaction before it is committed. Data owners can track their data as it is shared among the various data custodians, ensuring data security. The consensus algorithm lets users trust the network because malicious nodes fail to obtain approval from all nodes, causing the transaction to be aborted. The use case chosen to demonstrate the proposed consensus algorithm is a college placement system. The algorithm has been extended to implement a diversified, decentralized, automated placement system in which the data owner, i.e., the student, maintains an immutable certificate vault and the student's data is validated by a verifier network, i.e., the academic and placement departments. Data transfers from students to companies are recorded as transactions in the distributed ledger, or blockchain, allowing the data to be tracked by the student.
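The all-node approval rule described above can be sketched as follows. This is a minimal model, not the paper's implementation; the `Transaction` fields, node names, and validation rules are illustrative assumptions.

```python
# Minimal sketch of commit-only-if-all-nodes-approve consensus.
# Field names and validator logic are illustrative, not from the paper.
from dataclasses import dataclass, field

@dataclass
class Transaction:
    owner: str             # data owner, e.g. the student
    recipient: str         # data custodian, e.g. a company
    payload: str
    approvals: dict = field(default_factory=dict)

def run_consensus(tx, nodes):
    """Commit only if every verifier node approves; otherwise abort."""
    for node_id, validate in nodes.items():
        tx.approvals[node_id] = validate(tx)
    return all(tx.approvals.values())

# Illustrative verifier network: academic and placement departments.
nodes = {
    "academic_dept": lambda tx: tx.payload.startswith("certificate:"),
    "placement_dept": lambda tx: tx.recipient != "",
}

tx = Transaction("student42", "acme_corp", "certificate:BTech-2023")
committed = run_consensus(tx, nodes)   # True: every node approved
```

A transaction with a forged payload or missing recipient fails at least one validator, so `run_consensus` returns `False` and the transaction is aborted rather than committed.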
Automation industries that use Supervisory Control and Data Acquisition (SCADA) systems are highly vulnerable to network threats. Even systems that are air-gapped and isolated from the internet are strongly affected by insider attacks such as spoofing, DoS, and malware, which compromise the confidentiality, integrity, and availability of Operational Technology (OT) system elements and degrade their performance despite the security measures taken. In this paper, a behavior-based intrusion prevention system (IPS) is designed for OT networks. The proposed system is implemented on a SCADA test bed with two systems that replicate industrial automation scenarios. The paper describes four main classes of cyber-attacks against SCADA systems, with their subclasses, as well as the methodology and design of the IPS components, database creation, baselines, and deployment of the system in the environment. The IPS identifies not only IT protocols but also the Industrial Control System (ICS) protocols Modbus and DNP3, including their inner communication fields, using deep packet inspection (DPI). The analytical results show 99.89% accuracy on binary classification and 97.95% accuracy on multiclass classification of different attack vectors performed on the network, with a low false positive rate. These results are also validated by actual deployment of the IPS in SCADA systems, with the prevention of a DoS attack.
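The kind of ICS-protocol deep packet inspection mentioned above can be sketched for Modbus/TCP. This is a toy check, not the paper's IPS; the read-only function-code baseline is an illustrative assumption.

```python
# Toy DPI check for Modbus/TCP frames: parse the 7-byte MBAP header plus the
# function code and compare against a baseline. The allowed-function set is an
# illustrative read-only baseline, not the paper's actual policy.
import struct

ALLOWED_FUNCTION_CODES = {1, 2, 3, 4}   # read coils/inputs/registers only

def inspect_modbus(frame: bytes):
    """Return (ok, reason) for one Modbus/TCP frame."""
    if len(frame) < 8:
        return False, "truncated frame"
    tid, proto, length, unit = struct.unpack(">HHHB", frame[:7])
    func = frame[7]
    if proto != 0:
        return False, "bad protocol id"         # MBAP protocol id must be 0
    if length != len(frame) - 6:
        return False, "length field mismatch"   # malformed-packet indicator
    if func not in ALLOWED_FUNCTION_CODES:
        return False, f"function code {func} not in baseline"
    return True, "ok"

# Legitimate Read Holding Registers request (function 3, start 0, count 2).
good = struct.pack(">HHHBBHH", 1, 0, 6, 17, 3, 0, 2)
# Write Single Register request (function 6) falls outside the baseline.
bad = struct.pack(">HHHBBHH", 2, 0, 6, 17, 6, 0, 99)
```

An IPS would drop or alert on frames for which `inspect_modbus` returns `False`, which is how write commands from an unexpected source can be blocked even on an air-gapped network.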
Learning-enabled components (LECs) are widely used in cyber-physical systems (CPS) since they can handle the uncertainty and variability of the environment and increase the level of autonomy. However, it has been shown that LECs such as deep neural networks (DNNs) are not robust, and adversarial examples can cause a model to make false predictions. This paper considers the problem of efficiently detecting adversarial examples in LECs used for regression in CPS. The proposed approach is based on inductive conformal prediction and uses a regression model based on a variational autoencoder. The architecture takes into consideration both the input and the neural network prediction when detecting adversarial and, more generally, out-of-distribution examples. We demonstrate the method using an advanced emergency braking system implemented in an open-source simulator for self-driving cars, where a DNN is used to estimate the distance to an obstacle. The simulation results show that the method can effectively detect adversarial examples with a short detection delay.
In recent years, the damage caused by unauthorized access using bots has increased. Compared with attacks on conventional login screens, the success rate is higher and detection is more difficult. CAPTCHA is commonly used to block attacks by bots, but the user experience declines as CAPTCHA difficulty is raised to keep pace with bot advances. As a solution, adaptive CAPTCHA difficulty combined with bot detection technologies has been considered. In this research, we focus on the Capy puzzle CAPTCHA, which is widely used in commercial services, and apply a supervised machine learning approach to detect bots. As training data, we use access logs of several web services, flagging attacks by bots detected in the past. We extract feature vectors such as the HTTP User-Agent and information derived from the IP address (e.g., geographical information) from the access logs, and the dataset is investigated using supervised learning. Using XGBoost and LightGBM, we achieve a ROC-AUC score of more than 0.90 and further detect suspicious accesses from some ISPs that carry no bot discrimination flag.
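The ROC-AUC metric used above can be computed directly from classifier scores via its Mann-Whitney formulation. The labels and scores below are illustrative; the paper obtains scores from XGBoost/LightGBM over User-Agent and IP-derived features.

```python
# Hand-rolled ROC-AUC: the probability that a randomly chosen bot outscores a
# randomly chosen human, with ties counted as 0.5. Labels/scores are toy data.
def roc_auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]   # bot accesses
    neg = [s for l, s in zip(labels, scores) if l == 0]   # human accesses
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.2, 0.4, 0.3, 0.6, 0.5]
auc = roc_auc(labels, scores)   # 1.0 here: every bot outscores every human
```

Because AUC depends only on the ranking of scores, it is a natural metric when the bot-score threshold (and hence the CAPTCHA difficulty) is to be tuned adaptively afterwards.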
New research fields and applications in human-computer interaction will emerge based on the recognition of emotions from faces. With this aim, our study evaluates the features extracted from faces to recognize emotions. To increase the success rate of these features, we ran several tests to demonstrate how age and gender affect the results. Artificial neural networks were trained on the salient regions of the face, such as the eyes, eyebrows, nose, mouth, and jawline, and the networks were then tested with different age and gender groups. According to the results, the faces of older people yield a lower emotion recognition rate. Age- and gender-based groups were then created manually, and we show that facial emotion recognition rates increase for networks trained on these particular groups.
Code randomization is considered the basis of mitigations against code-reuse attacks, fundamentally supporting recent proposals such as execute-only memory (XOM) that target dynamic return-oriented programming (ROP) attacks. However, existing code randomization methods struggle to achieve a good balance between high randomization entropy and semantic consistency. In particular, they often ignore code semantic consistency, incurring performance loss and incompatibility with current security schemes, e.g., control-flow integrity (CFI). In this paper, we present an enhanced code randomization method, termed HCRESC, which significantly improves randomization entropy while ensuring semantic consistency between variants and the original code. HCRESC reschedules instructions within the scope of functions rather than basic blocks, thus producing more variants of the original code while preserving its semantics. We implement HCRESC on the Linux x86-64 platform and demonstrate that it increases the randomization entropy of x86-64 code by more than 120% compared with existing methods while leaving the control flow and the size of the code unaltered.
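The entropy gain from function-level rescheduling can be illustrated on a toy dependency graph: the number of dependency-preserving instruction orders grows with the scheduling window, and entropy is log2 of that count. The DAG below is an illustrative assumption, not taken from HCRESC.

```python
# Toy model: count dependency-preserving instruction schedules by brute force.
# Entropy = log2(number of valid schedules). The 6-instruction DAG is made up.
import math
from itertools import permutations

def valid_schedules(n_instrs, deps):
    """Count orderings of instructions 0..n-1 respecting (before, after) deps."""
    count = 0
    for order in permutations(range(n_instrs)):
        pos = {instr: i for i, instr in enumerate(order)}
        if all(pos[a] < pos[b] for a, b in deps):
            count += 1
    return count

# Basic-block scheduling: two blocks of 3 instructions, each with one data dep.
block_a = valid_schedules(3, [(0, 1)])   # 3 valid orders
block_b = valid_schedules(3, [(0, 1)])   # 3 valid orders
block_level_entropy = math.log2(block_a * block_b)

# Function-level scheduling: the same 6 instructions with only true data deps
# (0 before 1, 1 before 3, 3 before 4); instructions 2 and 5 float freely.
func_level = valid_schedules(6, [(0, 1), (1, 3), (3, 4)])
func_level_entropy = math.log2(func_level)
```

Even in this tiny example the function-wide window admits 30 schedules versus 9 for per-block scheduling, so the entropy rises while every data dependency, and hence the code's semantics, is preserved.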
The rapid growth of artificial intelligence has contributed greatly to the technology world. As traditional algorithms fail to meet real-time human needs, machine learning and deep learning algorithms have achieved great success in applications such as classification systems, recommendation systems, and pattern recognition. Emotion plays a vital role in determining a person's thoughts, behaviour, and feelings. An emotion recognition system can be built by exploiting the benefits of deep learning, and applications such as feedback analysis and face unlocking can be implemented with good accuracy. The main focus of this work is to create a deep convolutional neural network (DCNN) model that classifies five different human facial emotions. The model is trained, tested, and validated on a manually collected image dataset.
Existing cyber security solutions have largely been developed using knowledge-based models that often cannot detect new families of cyber-attacks. With the rise of Artificial Intelligence (AI), especially Deep Learning (DL) algorithms, these security solutions have been augmented with AI models to discover, trace, mitigate, or respond to incidents involving new security events. Such algorithms demand large amounts of heterogeneous data to train and validate new security systems. This paper describes new datasets, called ToN_IoT, which comprise federated data sources collected from telemetry datasets of IoT services, operating system datasets of Windows and Linux, and network traffic datasets. The paper introduces the testbed and describes the ToN_IoT datasets for Windows operating systems. The testbed was implemented in three layers: edge, fog, and cloud. The edge layer involves IoT and network devices, the fog layer contains virtual machines and gateways, and the cloud layer involves cloud services, such as data analytics, linked to the other two layers. These layers were dynamically managed using Software-Defined Networking (SDN) and Network Function Virtualization (NFV) via the VMware NSX and vCloud NFV platforms. The Windows datasets were collected from audit traces of memory, processors, networks, processes, and hard disks. The datasets can be used to evaluate various AI-based cyber security solutions, including intrusion detection, threat intelligence and hunting, privacy preservation, and digital forensics, because they include a wide range of recent normal and attack features and observations as well as authentic ground-truth events. The datasets can be publicly accessed from the link in [1].
Video presentation from Carnegie Mellon University, "Implementing Cyber Security in DoD Supply Chains," 2020.
Two-phase I/O is a well-known strategy for implementing collective MPI-IO functions. It redistributes I/O requests among the calling processes into a form that minimizes file access costs. As modern parallel computers continue to grow into the exascale era, the communication cost of this request redistribution can quickly overwhelm collective I/O performance. This effect has been observed in parallel jobs that run on multiple compute nodes with a high count of MPI processes on each node. To reduce the communication cost, we present a new design for collective I/O that adds an extra communication layer to perform request aggregation among processes within the same compute node. This approach can significantly reduce inter-node communication contention when redistributing the I/O requests. We evaluate the performance and compare it with the original two-phase I/O on Cray XC40 parallel computers (Theta and Cori) with Intel KNL and Haswell processors. Using I/O patterns from two large-scale production applications and an I/O benchmark, we show that the proposed method effectively reduces the communication cost and hence maintains scalability for a large number of processes.
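The intra-node aggregation idea above can be sketched outside of MPI: before the inter-node redistribution phase, byte-range requests from processes on the same compute node are coalesced, shrinking the number of cross-node messages. The node mapping and request format below are illustrative assumptions.

```python
# Sketch of intra-node request aggregation for collective I/O. Requests are
# (node_id, offset, length) triples; contiguous or overlapping ranges on the
# same node are merged into one. The request layout is illustrative.
def aggregate_per_node(requests):
    per_node = {}
    for node, off, length in requests:
        per_node.setdefault(node, []).append((off, off + length))
    merged = {}
    for node, ranges in per_node.items():
        ranges.sort()
        out = [list(ranges[0])]
        for lo, hi in ranges[1:]:
            if lo <= out[-1][1]:               # contiguous/overlapping: coalesce
                out[-1][1] = max(out[-1][1], hi)
            else:
                out.append([lo, hi])
        merged[node] = [tuple(r) for r in out]
    return merged

# Four ranks on node 0 writing adjacent 1 MiB blocks collapse to one request;
# two disjoint ranges on node 1 stay separate.
reqs = [(0, i * 2**20, 2**20) for i in range(4)] \
     + [(1, 0, 2**20), (1, 3 * 2**20, 2**20)]
merged = aggregate_per_node(reqs)
```

In the actual design this merging happens among MPI processes sharing a node, so the subsequent all-to-many redistribution sends one merged request per node instead of one per process, which is where the contention savings come from.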
Information security of logistics services is understood as a complex activity aimed at using information and the means of processing it to raise the level of protection and ensure the normal functioning of an object's information environment. The main recommendations for ensuring the information security of logistics processes include logistics support for the processes that secure the enterprise's information flows, together with assessment of the quality and reliability of their elements and of the reliability and efficiency of obtaining information about the state of logistics processes. The level of information security within the organization's controlled part of the supply chain can be assessed through levels and indicators; in this case, four levels and elements of supply chain information security are distinguished.
Advancements in the AI field unfold tremendous opportunities for society. Simultaneously, it becomes increasingly important to address emerging ramifications. The focus is often set on ethical and safe design to forestall unintentional failures. However, cybersecurity-oriented approaches to AI safety additionally consider instances of intentional malice, including unethical malevolent AI design. Recently, an analogous emphasis on malicious actors has been expressed regarding security and safety for virtual reality (VR). In this vein, while the intersection of AI and VR (AIVR) offers a wide array of beneficial cross-fertilization possibilities, it is prudent to anticipate future malicious AIVR design from the outset, given the potential socio-psycho-technological impacts. As a simplified illustration, this paper analyzes the conceivable use case of generative AI (here, deepfake techniques) utilized for disinformation in immersive journalism. In our view, defenses against such future AIVR safety risks related to falsehood in immersive settings should be conceived transdisciplinarily from an immersive co-creation stance. As a first step, we motivate a cybersecurity-oriented procedure for generating defenses via immersive design fictions. Overall, there may be no panacea, but updatable transdisciplinary tools, including AIVR itself, could be used to incrementally defend against malicious actors in AIVR.
Supply chain security threats pose new challenges to security risk modeling techniques for complex ICT systems such as the IoT. With established techniques drawn from attack trees and reliability analysis providing needed points of reference, graph-based analysis can provide a framework for considering the role of suppliers in such systems. We present such a framework here while highlighting the need for a component-centered model. Given resource limitations when applying this model to existing systems, we study various classes of uncertainties in model development, including structural uncertainties and uncertainties in the magnitude of estimated event probabilities. Using case studies, we find that structural uncertainties constitute a greater challenge to model utility and as such should receive particular attention. Best practices in the face of these uncertainties are proposed.
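Graph-based risk propagation in the spirit of the framework above can be sketched with attack-tree-style AND/OR gates over component compromise probabilities. The component names, probabilities, and independence assumption below are illustrative.

```python
# Sketch of attack-tree-style probability propagation for supply chain risk.
# Events are assumed independent; all names and numbers are illustrative.
import math

def or_gate(probs):
    """Compromise occurs if any input event occurs."""
    return 1 - math.prod(1 - p for p in probs)

def and_gate(probs):
    """Compromise occurs only if all input events occur."""
    return math.prod(probs)

# A device is compromised via its firmware supplier OR via (its component
# supplier AND a weak update channel).
p_firmware_supplier = 0.05
p_component_supplier = 0.10
p_weak_update = 0.50
p_device = or_gate([p_firmware_supplier,
                    and_gate([p_component_supplier, p_weak_update])])
```

Structural uncertainty in the sense of the case studies corresponds to not knowing the gate wiring itself (e.g., whether the update channel is really a required conjunct), which perturbs the result far more than moderate errors in the leaf probabilities.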
In this paper, we present the concept of boosting the resiliency of optimization-based observers for cyber-physical systems (CPS) using auxiliary sources of information. Due to the tight coupling of physics, communication, and computation, a malicious agent can exploit multiple inherent vulnerabilities to inject stealthy signals into the measurement process. The problem setting considers the scenario in which an attacker strategically corrupts portions of the data in order to force wrong state estimates, which could have catastrophic consequences. The goal of the proposed observer is to compute the true states in spite of the adversarial corruption. In the formulation, we use a measurement prior distribution generated by the auxiliary model to refine the feasible region of a traditional compressive-sensing-based regression problem. A constrained optimization-based observer is developed using an l1-minimization scheme. Numerical experiments show that the solution of the resulting problem recovers the true states of the system. The developed algorithm is evaluated through a numerical simulation of the IEEE 14-bus system.
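The intuition behind the l1 scheme above can be shown in a scalar reduction: with redundant measurements y_i = x + e_i, minimizing the sum of |y_i - x| gives the sample median, which ignores a minority of adversarially corrupted sensors. The sensor values are illustrative; the paper solves the full constrained l1 program on the IEEE 14-bus system.

```python
# Scalar sketch of l1-based resilient state estimation: the l1 minimizer of
# sum |y_i - x| is the median, which resists a minority of corrupted sensors.
# Readings and the attack value are illustrative.
import statistics

def l1_state_estimate(measurements):
    """argmin_x sum_i |y_i - x| equals the sample median."""
    return statistics.median(measurements)

def mean_estimate(measurements):
    """Least-squares (l2) estimate for comparison: the sample mean."""
    return statistics.fmean(measurements)

true_state = 1.0
readings = [1.01, 0.99, 1.02, 0.98, 1.00]   # honest, lightly noisy sensors
attacked = readings[:4] + [50.0]            # one strategically corrupted sensor

robust = l1_state_estimate(attacked)   # stays near the true state
naive = mean_estimate(attacked)        # dragged far off by the attack
```

The same contrast between l1 and l2 residual penalties is what lets the full observer recover the true states while the attacked entries are absorbed into a sparse error term.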