Bibliography
Learning-enabled components (LECs) are widely used in cyber-physical systems (CPS) because they can handle the uncertainty and variability of the environment and increase the level of autonomy. However, it has been shown that LECs such as deep neural networks (DNNs) are not robust, and adversarial examples can cause the model to make false predictions. The paper considers the problem of efficiently detecting adversarial examples in LECs used for regression in CPS. The proposed approach is based on inductive conformal prediction and uses a regression model based on a variational autoencoder. The architecture takes into consideration both the input and the neural network prediction in order to detect adversarial and, more generally, out-of-distribution examples. We demonstrate the method using an advanced emergency braking system implemented in an open-source simulator for self-driving cars, where a DNN is used to estimate the distance to an obstacle. The simulation results show that the method can effectively detect adversarial examples with a short detection delay.
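To make the detection step concrete, below is a minimal sketch of inductive conformal anomaly detection. The nonconformity scores and the sliding-window decision rule are illustrative stand-ins (the paper derives its score from the VAE-based regression model); all data here is synthetic.

```python
# Sketch of inductive conformal anomaly detection with hypothetical scores.
# The nonconformity score stands in for the VAE-based regression score used
# in the paper; any score where "larger = stranger" fits this scheme.
import numpy as np

def icp_p_value(calibration_scores, test_score):
    """ICP p-value: fraction of calibration scores at least as strange."""
    n = len(calibration_scores)
    return (np.sum(calibration_scores >= test_score) + 1) / (n + 1)

rng = np.random.default_rng(0)
calibration_scores = rng.exponential(1.0, size=500)  # in-distribution scores

# Averaging p-values over a short window smooths single-frame noise; the
# paper reports a short detection delay, consistent with windowed decisions.
def detect(p_values, epsilon=0.05):
    return np.mean(p_values) < epsilon  # flag consistently low evidence

test_scores = [0.8, 6.0, 7.5, 9.1]  # rising scores, e.g. under attack
p_vals = [icp_p_value(calibration_scores, s) for s in test_scores]
print(p_vals, detect(p_vals))
```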
The time-varying properties of the wireless channel are a powerful source of information that can complement and enhance traditional security mechanisms. We therefore propose a cross-layer authentication mechanism that combines physical-layer channel information with the traditional authentication mechanism in LTE. To verify the feasibility of the proposed mechanism, we build a cross-layer authentication system on OAI LTE that extracts the phase-shift information of a typical UE and uses an ensemble learning method to train the fingerprint map. Experimental results show that our cross-layer authentication mechanism can effectively improve the security of the LTE system.
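As an illustration of the fingerprint-training step, the sketch below trains an ensemble classifier on synthetic phase-shift features. The feature layout, the phase offsets, and the choice of a random forest as the ensemble method are all assumptions made for the example; the paper extracts real phase-shift information from OAI LTE traces.

```python
# Illustrative sketch: training an ensemble fingerprint classifier on
# physical-layer features. Data and feature dimensions are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
# Hypothetical per-frame phase-shift features for two transmitters: the
# legitimate UE clusters around one offset, an impersonator around another.
legit = rng.normal(0.2, 0.05, size=(n, 8))
spoof = rng.normal(0.5, 0.05, size=(n, 8))
X = np.vstack([legit, spoof])
y = np.array([0] * n + [1] * n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("fingerprint accuracy:", clf.score(X_te, y_te))
```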
In recent times, an increasing number of intelligent electronic devices (IEDs) are being deployed to make power systems more reliable and economical. While these technologies are necessary for realizing a cyber-physical infrastructure for future smart power grids, they also introduce new vulnerabilities to different cyber-attacks. Traditional methods such as state vector estimation (SVE) are not capable of identifying cyber-attacks when the attack vector is crafted using the grid's geometric (topology) information. In this paper, a machine learning based smart grid attack identification method is proposed. The method first collects smart grid power flow data for training and then uses the trained model to classify attacks. The performance of both the proposed SVM method and the traditional SVE method is validated on the IEEE 14-, 30-, 39-, 57- and 118-bus systems, and the performance with respect to the scale of the power system is evaluated. The results show that the SVM-based method performs better than the SVE-based method in attack identification over a much wider range of power system scales.
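A minimal sketch of the SVM classification stage is shown below, using synthetic power-flow measurements in which an attack adds a structured bias to a few sensor readings. The measurement dimension, the attack model, and the RBF kernel are assumptions for illustration, not the paper's exact configuration.

```python
# Sketch of an SVM attack classifier on synthetic power-flow measurements;
# the paper trains on data generated from IEEE test bus systems.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n, d = 2000, 54  # hypothetical measurement vector size for a small system
normal = rng.normal(0.0, 1.0, size=(n, d))
# False-data-injection attacks add a structured bias to a few measurements.
attack = normal[: n // 2].copy()
attack[:, :5] += rng.normal(3.0, 0.5, size=(n // 2, 5))

X = np.vstack([normal, attack])
y = np.array([0] * n + [1] * (n // 2))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
svm.fit(X_tr, y_tr)
print("attack identification accuracy:", svm.score(X_te, y_te))
```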
Enforcing security and resilience on a cloud platform is an essential but challenging problem due to the large number of heterogeneous applications running on shared resources. A security analysis system that can detect threats or malware must therefore exist inside the cloud infrastructure. Much research has been done on machine-learning-driven malware analysis, but existing approaches are limited by their computational complexity and detection accuracy. To overcome these drawbacks, we propose a new malware detection system based on clustering and Trend Micro Locality Sensitive Hashing (TLSH). We use the Cuckoo sandbox, which produces dynamic analysis reports by executing files in an isolated environment, and a novel feature extraction algorithm to extract essential features from the resulting malware reports. The most important features are then selected using principal component analysis (PCA), random forest, and chi-square feature selection methods. Experimental results are obtained for clustering and non-clustering approaches with three classifiers: Decision Tree, Random Forest, and Logistic Regression. The model achieves better classification accuracy and false positive rate (FPR) than state-of-the-art works and the non-clustering approach, at a significantly lower computational cost.
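The TLSH clustering idea can be sketched as follows, assuming the py-tlsh package (`pip install py-tlsh`). The random report bytes and the greedy distance-threshold clustering are stand-ins for the paper's Cuckoo reports and clustering procedure.

```python
# Sketch of TLSH-based grouping of samples before per-cluster
# classification. The report bytes are random stand-ins for
# Cuckoo-sandbox reports; the threshold is illustrative.
import os
import tlsh

reports = [os.urandom(512) for _ in range(20)]
digests = [tlsh.hash(r) for r in reports]

# Greedy threshold clustering on TLSH distance: a sample joins the first
# cluster whose representative is within DIST; otherwise it starts one.
DIST = 100
clusters = []  # list of (representative_digest, member_indices)
for i, d in enumerate(digests):
    for rep, members in clusters:
        if tlsh.diff(rep, d) <= DIST:
            members.append(i)
            break
    else:
        clusters.append((d, [i]))

print(f"{len(clusters)} clusters from {len(reports)} reports")
```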
Whenever an internet user visits a website, a scripting language known as JavaScript runs in the background. The embedding of malicious activities within such scripts poses a great threat to the cyber world. Attackers take advantage of the dynamic nature of JavaScript and embed malicious code within a website to download malware and damage the host, obfuscating their scripts to shield them from malware detectors. In this paper, we propose a novel technique for analysing and detecting malicious JavaScript using a sandbox-assisted ensemble model. We extract the payload using the malware-jail sandbox to recover the real script, then analyse it to define the features needed for creating the dataset. For feature selection, we compute Pearson's r between every pair of features. An ensemble model consisting of Sequential Minimal Optimization (SMO), Voted Perceptron, and AdaBoost is combined through voting to detect malicious JavaScript. Experimental results show that the proposed model detects obfuscated and de-obfuscated malicious JavaScript with 99.6% accuracy and a 0.03 s detection time, outperforming other state-of-the-art models in accuracy as well as training and detection time.
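A rough sketch of the correlation-pruned voting ensemble appears below, on synthetic data. scikit-learn's SVC (whose libsvm solver is SMO-based) and plain Perceptron stand in for SMO and Voted Perceptron, which scikit-learn does not provide; hard voting is used because Perceptron exposes no probability estimates.

```python
# Sketch of the correlation-pruned voting ensemble. SVC and Perceptron are
# stand-ins for Weka-style SMO and Voted Perceptron; data is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, VotingClassifier
from sklearn.linear_model import Perceptron
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=30, random_state=0)

# Drop one feature from any pair whose Pearson's r exceeds 0.9.
corr = np.corrcoef(X, rowvar=False)
drop = {j for i in range(corr.shape[0])
          for j in range(i + 1, corr.shape[1]) if abs(corr[i, j]) > 0.9}
X = np.delete(X, sorted(drop), axis=1)

ensemble = VotingClassifier(
    estimators=[("smo", SVC()),
                ("perc", Perceptron()),
                ("ada", AdaBoostClassifier())],
    voting="hard",  # hard voting: Perceptron has no predict_proba
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ensemble.fit(X_tr, y_tr)
print("ensemble accuracy:", ensemble.score(X_te, y_te))
```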
Despite huge real-world demand, large-scale confidential computing still cannot be supported by today's Trusted Execution Environment (TEE), due to the lack of scalable and effective protection for high-throughput accelerators such as GPUs, FPGAs, and TPUs. Although attempts have been made recently to extend the CPU-style enclave to GPUs, these solutions require changes to the CPU or GPU chips, may introduce new security risks due to side-channel leaks in CPU-GPU communication, and remain under the resource constraints of today's CPU TEE. To address these problems, we present the first heterogeneous TEE design that can truly support large-scale compute- or data-intensive (CDI) computing, without any chip-level change. Our approach, called HETEE, is a device for centralized management of all computing units (e.g., GPUs and other accelerators) in a server rack. It is uniquely designed to work with today's data centres and clouds, leveraging modern resource pooling technologies to dynamically compartmentalize computing tasks, enforce strong isolation, and reduce the TCB through hardware support. More specifically, HETEE utilizes the PCIe ExpressFabric to allocate its accelerators to a server node on the same rack for non-sensitive CDI tasks, and to move them back into a secure enclave in response to demand for confidential computing. Our design runs a thin TCB stack for security management on a security controller (SC), while leaving a large set of software (e.g., AI runtime, GPU driver) to the integrated microservers that operate enclaves. An enclave is physically isolated from others through hardware and verified by the SC at its inception; its microserver and computing units are restored to a secure state upon termination. We implemented HETEE on a real hardware system and evaluated it with popular neural network inference and training tasks. Our evaluations show that HETEE can easily support CDI tasks at real-world scale, incurring a maximum throughput overhead of 2.17% for inference and 0.95% for training on ResNet152.
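Since HETEE is a hardware design, the abstract's allocation policy can only be modelled, not reimplemented, in software. The sketch below is purely illustrative: the class names and the two-mode lifecycle (shared pool vs. enclave, with a state scrub on release) are assumptions that mirror the description above.

```python
# Toy model of the allocation policy described in the abstract: the
# security controller (SC) moves pooled accelerators between the
# non-sensitive shared pool and isolated enclaves, restoring them to a
# clean state on release. All names here are illustrative.
from dataclasses import dataclass, field

@dataclass
class Accelerator:
    dev_id: str
    mode: str = "shared"  # "shared" (non-sensitive CDI) or "enclave"

@dataclass
class SecurityController:
    pool: list = field(default_factory=list)

    def allocate_enclave(self, n):
        """Detach n shared accelerators from the fabric into an enclave."""
        free = [a for a in self.pool if a.mode == "shared"][:n]
        for a in free:
            a.mode = "enclave"  # models fabric re-routing plus attestation
        return free

    def release(self, accs):
        """Scrub device state before returning accelerators to the pool."""
        for a in accs:
            a.mode = "shared"  # models restore-to-secure-state on termination

sc = SecurityController([Accelerator(f"gpu{i}") for i in range(4)])
enclave_gpus = sc.allocate_enclave(2)
print([a.mode for a in sc.pool])
sc.release(enclave_gpus)
```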