Biblio

Filters: Keyword is anomaly detector
2021-01-15
Khodabakhsh, A., Busch, C.  2020.  A Generalizable Deepfake Detector based on Neural Conditional Distribution Modelling. 2020 International Conference of the Biometrics Special Interest Group (BIOSIG). :1–5.
Photo- and video-realistic generation techniques have become a reality following the advent of deep neural networks. Consequently, there are immense concerns regarding the difficulty of differentiating real content from synthetic content. An example of video-realistic generation techniques is the infamous Deepfakes, which exploit the main modality by which humans identify each other. Deepfakes are a category of synthetic face generation methods and are commonly based on generative adversarial networks. In this article, we propose a novel two-step synthetic face image detection method in which general-purpose anomaly features are extracted in a first step, trivializing the task of detecting synthetic images in the second step. The anomaly detector predicts the conditional probability of observing each individual pixel in the image and is trained on pristine data only. The extracted anomaly features demonstrate true generalization capacity across widely different unknown synthesis methods while showing minimal loss in performance with regard to the detection of known synthetic samples.
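
As a rough illustration of the two-step idea (and not the authors' neural model), the sketch below fits a simple per-pixel conditional distribution on pristine images and uses per-pixel negative log-likelihoods as anomaly features; the Gaussian left-neighbour model, the synthetic data, and the mean-NLL score are all stand-in assumptions.

```python
# Toy sketch of the two-step idea (not the authors' implementation): a per-pixel
# conditional model fitted on pristine images yields anomaly features (negative
# log-likelihoods), which a second stage turns into a detection score. The paper
# uses a neural conditional distribution model; a linear-Gaussian model of each
# pixel given its left neighbour stands in here, purely for illustration.
import numpy as np

def fit_conditional_model(pristine: np.ndarray):
    """Fit p(x[i,j] | x[i,j-1]) as a linear-Gaussian model on pristine images.
    pristine: array of shape (N, H, W) with values in [0, 1]."""
    left = pristine[:, :, :-1].ravel()
    curr = pristine[:, :, 1:].ravel()
    a, b = np.polyfit(left, curr, deg=1)          # mean of the conditional
    sigma = np.std(curr - (a * left + b)) + 1e-6  # shared conditional std
    return a, b, sigma

def anomaly_features(image: np.ndarray, model) -> np.ndarray:
    """Per-pixel negative log-likelihood under the conditional model."""
    a, b, sigma = model
    pred = a * image[:, :-1] + b
    resid = image[:, 1:] - pred
    return 0.5 * (resid / sigma) ** 2 + np.log(sigma * np.sqrt(2 * np.pi))

# Usage: a high mean NLL suggests the image deviates from the pristine distribution.
rng = np.random.default_rng(0)
pristine = rng.random((50, 32, 32))               # hypothetical pristine images
model = fit_conditional_model(pristine)
score = anomaly_features(rng.random((32, 32)), model).mean()
print(f"anomaly score: {score:.3f}")
```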
2020-07-20
Boumiza, Safa, Braham, Rafik.  2019.  An Anomaly Detector for CAN Bus Networks in Autonomous Cars based on Neural Networks. 2019 International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob). :1–6.
The domain of securing in-vehicle networks has attracted both academic and industrial researchers due to the high danger that attacks pose to drivers and passengers. While securing wired and wireless interfaces is important to defend against these threats, detecting attacks remains the critical phase in constructing a robust, secure system. There are only a few results on securing communication inside vehicles using anomaly-detection techniques, despite their efficiency in systems that need real-time detection. Therefore, we propose an intrusion detection system (IDS) based on a Multi-Layer Perceptron (MLP) neural network for the Controller Area Network (CAN) bus. This IDS divides data according to the ID field of CAN packets using the K-means clustering algorithm, then extracts suitable features and uses them to train and construct the neural network. The proposed IDS works for each ID separately and finally combines the individual decisions into a final score, generating an alert in the presence of an attack. The strength of our intrusion detection method is that it works simultaneously for two types of attacks, which eliminates the need for several separate IDSs and thus reduces the complexity and cost of implementation.
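
A minimal sketch of the per-ID scheme described in the abstract is given below; it is not the authors' implementation. Frames are grouped directly by CAN ID rather than via K-means, and the feature vector (eight payload bytes plus inter-arrival time), the synthetic frames, and the voting rule are assumptions made only for illustration.

```python
# Sketch of a per-ID CAN IDS in the spirit of the paper (not the authors' code):
# one MLP is trained per CAN ID, and the per-ID decisions are combined into a
# final alert. Features and data below are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_frames(n, attack=False):
    """Hypothetical feature vectors: 8 payload bytes + inter-arrival time."""
    payload = rng.integers(0, 256, size=(n, 8)) / 255.0
    dt = rng.normal(0.001 if attack else 0.01, 0.001, size=(n, 1))
    return np.hstack([payload, dt])

can_ids = [0x100, 0x200, 0x300]
models = {}
for cid in can_ids:                       # one detector per CAN ID
    X = np.vstack([make_frames(200), make_frames(200, attack=True)])
    y = np.array([0] * 200 + [1] * 200)   # 0 = normal, 1 = attack
    models[cid] = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                                random_state=0).fit(X, y)

def alert(frames_by_id):
    """Combine per-ID decisions: alert if any ID's MLP flags mostly attacks."""
    votes = {cid: models[cid].predict(X).mean() for cid, X in frames_by_id.items()}
    return any(v > 0.5 for v in votes.values()), votes

flag, votes = alert({0x100: make_frames(20), 0x200: make_frames(20, attack=True)})
print(flag, votes)
```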
2018-07-06
Lampesberger, H.  2016.  An Incremental Learner for Language-Based Anomaly Detection in XML. 2016 IEEE Security and Privacy Workshops (SPW). :156–170.

The Extensible Markup Language (XML) is a complex language, and consequently, XML-based protocols are susceptible to entire classes of implicit and explicit security problems. Message formats in XML-based protocols are usually specified in XML Schema, and as a first-line defense, schema validation should reject malformed input. However, extension points in most protocol specifications break validation. Extension points are wildcards and considered best practice for loose composition, but they also enable an attacker to add unchecked content in a document, e.g., for a signature wrapping attack. This paper introduces datatyped XML visibly pushdown automata (dXVPAs) as a language representation for mixed-content XML and presents an incremental learner that infers a dXVPA from example documents. The learner generalizes XML types and datatypes in terms of automaton states and transitions, and an inferred dXVPA converges to a good-enough approximation of the true language. The automaton is free from extension points and capable of stream validation, e.g., as an anomaly detector for XML-based protocols. For dealing with adversarial training data, two scenarios of poisoning are considered: a poisoning attack is either uncovered at a later time or remains hidden. Unlearning can therefore remove an identified poisoning attack from a dXVPA, and sanitization trims low-frequency states and transitions to remove hidden attacks. All algorithms have been evaluated in four scenarios, including a web service implemented in Apache Axis2 and Apache Rampart, where attacks have been simulated. In all scenarios, the learned automaton had zero false positives and outperformed traditional schema validation.
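The toy sketch below conveys only the flavour of learning-based stream validation: it learns, from example documents, which child elements may appear under each parent and rejects unseen transitions, using the start/end-tag stack discipline of a visibly pushdown automaton. Datatype inference, convergence guarantees, unlearning, and sanitization from the paper are omitted, and the example documents are invented.

```python
# Toy content-model learner (not the paper's dXVPA algorithms): start tags push,
# end tags pop, and transitions never seen during learning are flagged as anomalies.
import io
import xml.etree.ElementTree as ET
from collections import defaultdict

class XmlLearner:
    def __init__(self):
        # parent element tag -> set of child tags seen under it in training data
        self.allowed = defaultdict(set)

    def _events(self, document: str):
        return ET.iterparse(io.BytesIO(document.encode("utf-8")),
                            events=("start", "end"))

    def learn(self, document: str):
        stack = ["<doc>"]                      # sentinel for the document root
        for event, elem in self._events(document):
            if event == "start":
                self.allowed[stack[-1]].add(elem.tag)
                stack.append(elem.tag)         # start tag: push (VPA call)
            else:
                stack.pop()                    # end tag: pop (VPA return)

    def validate(self, document: str) -> bool:
        stack = ["<doc>"]
        for event, elem in self._events(document):
            if event == "start":
                if elem.tag not in self.allowed[stack[-1]]:
                    return False               # unseen transition: anomaly
                stack.append(elem.tag)
            else:
                stack.pop()
        return True

learner = XmlLearner()
learner.learn("<order><item>book</item><qty>1</qty></order>")
print(learner.validate("<order><item>pen</item></order>"))    # True
print(learner.validate("<order><extra>x</extra></order>"))    # False: injected element
```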

2018-03-19
Jeon, H., Eun, Y.  2017.  Sensor Security Index for Control Systems. 2017 17th International Conference on Control, Automation and Systems (ICCAS). :145–148.

Security of control systems has become a new and important field of research, since malicious attacks on control systems have indeed occurred, including Stuxnet in 2011 and the northeastern electrical grid blackout in 2003. Attacks on the sensors and/or actuators of control systems cause malfunction, instability, and even system destruction. The impact of an attack may differ depending on which instrumentation (sensors and/or actuators) is being attacked. In particular, for control systems with multiple sensors, an attack on each sensor may have a different impact, i.e., an attack on some sensors leads to greater damage to the system than attacks on other sensors. To investigate this, we consider sensor bias injection attacks in linear control systems equipped with an anomaly detector, and quantify the maximum impact of an attack on each sensor while the attack remains undetected. We then introduce a notion of sensor security index for linear dynamic systems to quantify their vulnerability under sensor attacks. A method of reducing system vulnerability using the sensor security index is also discussed.
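
As a hedged illustration of the stated problem (not the paper's exact formulation), the sketch below injects a constant bias on one sensor of a linear system with an observer-based residual detector, scales the bias to the largest value whose steady-state residual stays under the alarm threshold, and reports the resulting estimation-error norm as that sensor's impact; all matrices, gains, and the threshold are invented.

```python
# Simplified stealthy sensor-bias impact per sensor; comparing the impacts
# across sensors plays the role of a (toy) sensor security index.
import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.8]])     # assumed plant dynamics
C = np.eye(2)                               # two sensors, one per state
L = np.array([[0.5, 0.0], [0.0, 0.4]])      # assumed observer gain
tau = 1.0                                   # detector threshold on |residual|

def stealthy_impact(sensor: int) -> float:
    """Max steady-state estimation-error norm from a constant bias on one sensor
    that keeps every residual component within the detector threshold."""
    a = np.zeros(2); a[sensor] = 1.0                     # unit bias direction
    # steady state of the error dynamics e_{k+1} = (A - L C) e_k - L a
    e_ss = np.linalg.solve(np.eye(2) - (A - L @ C), -L @ a)
    r_ss = C @ e_ss + a                                  # steady-state residual
    scale = tau / np.max(np.abs(r_ss))                   # largest undetected bias
    return float(np.linalg.norm(scale * e_ss))

impacts = [stealthy_impact(i) for i in range(2)]
print(impacts)   # a larger value marks a more vulnerable (less secure) sensor
```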