Desta, Araya Kibrom; Ohira, Shuji; Arai, Ismail; Fujikawa, Kazutoshi.
2022.
U-CAN: A Convolutional Neural Network Based Intrusion Detection for Controller Area Networks. 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC). :1481–1488.
The Controller Area Network (CAN) is the most extensively used in-vehicle network. It enables communication among the many electronic control units (ECUs) found in most modern vehicles. CAN is the de facto in-vehicle network standard owing to its error-avoidance techniques and similar features, but it is vulnerable to various attacks. In this research, we propose a CAN bus intrusion detection system (IDS) based on convolutional neural networks (CNNs). U-CAN is a segmentation model trained on CAN traffic data that are preprocessed using Hamming distance and a saliency detection algorithm. The model is trained and tested on publicly available datasets of raw and reverse-engineered CAN frames. With an F1 score of 0.997, U-CAN detects DoS, fuzzy, gear-spoofing, and RPM-spoofing attacks in the publicly available raw CAN frames. The model trained on reverse-engineered CAN signals containing plateau attacks also achieves a true-positive rate and false-positive rate of 0.971 and 0.998, respectively.
ISSN: 0730-3157
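The U-CAN abstract above mentions preprocessing CAN traffic with Hamming distance before it is fed to the CNN. The following is a minimal, hypothetical sketch of that idea only; the paper's exact frame format, grouping, and saliency-detection step are not given in the abstract, so the field layout and function names below are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical Hamming-distance preprocessing for CAN frames.
# Assumption: frames are (can_id, 8-byte payload) pairs and we compare each
# payload against the previous payload seen on the same CAN ID.

from collections import defaultdict
from typing import Iterable, List, Tuple


def hamming_distance(a: bytes, b: bytes) -> int:
    """Count differing bits between two equal-length payloads."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))


def hamming_features(frames: Iterable[Tuple[int, bytes]]) -> List[Tuple[int, int]]:
    """For each frame, emit (can_id, bit-flip count vs. the previous frame
    with the same CAN ID). The first frame per ID gets a distance of 0."""
    last = defaultdict(lambda: None)
    features = []
    for can_id, payload in frames:
        prev = last[can_id]
        dist = hamming_distance(prev, payload) if prev is not None else 0
        features.append((can_id, dist))
        last[can_id] = payload
    return features


if __name__ == "__main__":
    traffic = [
        (0x316, bytes.fromhex("05218109212130c8")),
        (0x316, bytes.fromhex("05218109212130c9")),  # small drift: 1 bit flips
        (0x316, bytes.fromhex("ffffffffffffffff")),  # abrupt change, e.g. fuzzing
    ]
    print(hamming_features(traffic))
```

The intuition is that injected frames (fuzzing, spoofed gear/RPM values) tend to flip far more bits relative to the preceding frame on the same ID than normal signal drift does, which is the kind of structure such preprocessing can expose to the downstream detector.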
Chechik, Marsha.
2019.
Uncertain Requirements, Assurance and Machine Learning. 2019 IEEE 27th International Requirements Engineering Conference (RE). :2–3.
From financial services platforms to social networks to vehicle control, software has come to mediate many activities of daily life. Governing bodies and standards organizations have responded to this trend by creating regulations and standards to address issues such as safety, security, and privacy. In this environment, the compliance of software development with standards and regulations has emerged as a key requirement. Compliance claims and arguments are often captured in assurance cases, with linked evidence of compliance. Evidence can come from test cases, verification proofs, human judgement, or a combination of these. That is, we try to build (safety-critical) systems carefully according to well-justified methods and articulate these justifications in an assurance case that is ultimately judged by a human. Yet software is deeply rooted in uncertainty, making pragmatic assurance more inductive than deductive: most complex open-world functionality is either not completely specifiable (due to uncertainty) or not cost-effective to specify, and deductive verification cannot happen without a specification. Inductive assurance, achieved by sampling or testing, is easier, but generalization from a finite set of examples cannot be formally justified. And of course the recent popularity of constructing software via machine learning only worsens the problem: rather than being specified by predefined requirements, machine-learned components learn existing patterns from the available training data and make predictions for unseen data when deployed. On the surface, this ability is extremely useful for hard-to-specify concepts, e.g., the definition of a pedestrian in a pedestrian detection component of a vehicle. On the other hand, safety assessment and assurance of such components become very challenging. In this talk, I focus on two specific approaches to arguing about the safety and security of software under uncertainty. The first is a framework for managing uncertainty in assurance cases (for "conventional" and "machine-learned" systems) by systematically identifying, assessing, and addressing it. The second is recent work on supporting the development of requirements for machine-learned components in safety-critical domains.