Biblio

Filters: Keyword is autonomous systems
2023-08-03
Chen, Wenlong, Wang, Xiaolin, Wang, Xiaoliang, Xu, Ke, Guo, Sushu.  2022.  LRVP: Lightweight Real-Time Verification of Intradomain Forwarding Paths. IEEE Systems Journal. 16:6309–6320.
The correctness of user traffic forwarding paths is an important goal of trusted transmission. Many network security issues, e.g., denial-of-service attacks and route hijacking, are related to it. Current path-aware network architectures can effectively address this issue through path verification. At present, the main problems of path verification are high communication and computation overhead. To this end, this article proposes a lightweight real-time verification mechanism for intradomain forwarding paths in autonomous systems, achieving a path verification architecture with no communication overhead and low computation overhead. The problem arises when a packet reaches its destination but its forwarding path is inconsistent with the expected path, i.e., the packet forwarding path determined by the interior gateway protocols. If the actual forwarding path differs from the expected one, it is regarded as an incorrect forwarding path. This article focuses on the most typical intradomain routing environment: a few routers are designated as verification routers to block traffic with incorrect forwarding paths and raise alerts. Experiments show that the proposed mechanism effectively solves the path verification problem while avoiding high communication and computation overhead.
Journal: IEEE Systems Journal
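The expected-path comparison at the heart of such a mechanism can be sketched as follows. This is an illustrative reconstruction, not the paper's LRVP algorithm; all names (`igp_paths`, `verify_packet`) are invented:

```python
# Hypothetical sketch of intradomain path verification: a verification
# router compares the path a packet actually traversed (e.g. recorded
# hop IDs) against the path determined by the interior gateway protocol.

def expected_path(igp_paths, src, dst):
    """Look up the IGP-determined forwarding path for a (src, dst) pair."""
    return igp_paths[(src, dst)]

def verify_packet(igp_paths, src, dst, recorded_hops):
    """Block traffic whose actual forwarding path deviates from the
    expected one, and raise an alert."""
    expected = expected_path(igp_paths, src, dst)
    if recorded_hops != expected:
        return ("BLOCK", f"alert: path {recorded_hops} != expected {expected}")
    return ("FORWARD", None)

igp = {("A", "D"): ["A", "B", "C", "D"]}
print(verify_packet(igp, "A", "D", ["A", "B", "C", "D"])[0])  # → FORWARD
print(verify_packet(igp, "A", "D", ["A", "E", "C", "D"])[0])  # → BLOCK
```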
2023-02-17
Anderegg, Alfred H. Andy, Ferrell, Uma D..  2022.  Assurance Case Along a Safety Continuum. 2022 IEEE/AIAA 41st Digital Avionics Systems Conference (DASC). :1–10.
The FAA proposes a Safety Continuum that recognizes that public expectations for safety outcomes vary across aviation sectors with different missions, aircraft, and environments. The purpose is to align the rigor of oversight with public expectations. An aircraft, its variants, or derivatives may be used in operations with different expectations. Differences in mission might bring immutable risks for some applications that reuse or revise the original aircraft type design. The continuum enables a more agile design approval process for innovations in the context of a dynamic ecosystem, addressing the creation of variants for different sectors and needs. Since an aircraft type design can be reused in various operations under part 91 or 135 with different mission risks, the assurance case will have many branches reflecting the variants and derivatives. This paper proposes a model for a holistic, performance-based, through-life safety assurance case that focuses applicant and oversight alike on achieving safety outcomes. It describes the application of the goal-based, technology-neutral features of performance-based assurance cases, extending the philosophy of UL 4600, to the Safety Continuum. It specifically addresses component reuse, including third-party vehicle modifications and changes to the operational concept or ecosystem. The performance-based assurance argument offers a way to combine design approval more seamlessly with oversight functions by focusing all aspects of the argument and practice together on managing safety outcomes. The model provides the context to assure that mitigated risks are consistent with an operation's place on the safety continuum, while allowing the applicant to reuse parts of the assurance argument to innovate variants or derivatives. The focus on monitoring performance to constantly verify the safety argument complements compliance checking as a way to assure products are "fit-for-use".
The paper explains how continued operational safety becomes a natural part of monitoring the assurance case for a growing variety in a product line by accounting for ecosystem changes. Such a model could be used with the Safety Continuum to promote applicant and operator accountability in delivering the expected safety outcomes.
ISSN: 2155-7209
Ferrell, Uma D., Anderegg, Alfred H. Andy.  2022.  Holistic Assurance Case for System-of-Systems. 2022 IEEE/AIAA 41st Digital Avionics Systems Conference (DASC). :1–9.
Aviation is a highly sophisticated and complex System-of-Systems (SoS) with equally complex safety oversight. As novel products with autonomous functions and interactions between component systems are adopted, the number of interdependencies within and among SoSs grows. These interactions may not always be obvious. Understanding how proposed products (component systems) fit into the context of a larger SoS is essential to promote the safe use of new as well as conventional technology. UL 4600, the Standard for Safety for the Evaluation of Autonomous Products, was written specifically for fully autonomous road vehicles. The goal-based, technology-neutral features of this standard make it adaptable to other industries and applications. This paper, using the philosophy of UL 4600, gives guidance for creating an assurance case for products in an SoS context. An assurance argument is a cogent, structured argument concluding that an autonomous aircraft system possesses all applicable through-life performance and safety properties. The assurance case process can be repeated at each level in the SoS: aircraft, aircraft system, unmodified components, and modified components. The Original Equipment Manufacturer (OEM) develops the assurance case for the whole aircraft envisioned in the type certification process. Assurance cases are continuously validated by collecting and analyzing Safety Performance Indicators (SPIs). SPIs provide predictive safety information, thus offering an opportunity to improve safety by preventing incidents and accidents. Continuous validation is essential for risk-based approval of autonomously evolving (dynamic) systems, learning systems, and new technology. System variants, derivatives, and components are captured in a subordinate assurance case by their developer. These variants of the assurance case inherently reflect the evolution of the vehicle-level derivatives and options in the context of their specific target ecosystem.
These subordinate assurance cases are nested under the argument put forward by the OEM of components and aircraft, for certification credit. It has become common practice in aviation to address design hazards through operational mitigations. It is also common for hazards noted in one aircraft component system to be mitigated within another component system. Where a component system depends on risk mitigation in another component of the SoS, organizational responsibilities must be stated explicitly in the assurance case. However, current practices do not formalize accounting for these dependencies by the parties responsible for design; consequently, subsequent modifications are made without the benefit of critical safety-related information from the OEMs. The resulting assurance cases, including third-party vehicle modifications, must be scrutinized as part of the holistic validation process. When changes are made to a product represented within the assurance case, their impact must be analyzed and reflected in an updated assurance case. An OEM can facilitate this by integrating affected assurance cases across their customers' supply chains to ensure their validity. The OEM is expected to exercise its sphere-of-control over its product even if it includes outsourced components. Any organization that modifies a product (with or without assurance argumentation information from other suppliers) is accountable for validating the conditions for any dependent mitigations. For example, the OEM may manage the assurance argumentation by identifying requirements and supporting SPIs that must be applied in all component assurance cases. For their part, component assurance cases must accommodate all spheres-of-control that mitigate the risks they present in their respective contexts. The assurance case must express how interdependent mitigations will collectively assure the outcome.
These considerations are much more than interface requirements and include explicit hazard mitigation dependencies between SoS components. A properly integrated SoS assurance case reflects a set of interdependent systems that could be independently developed. Even in this extremely interconnected environment, stakeholders must make accommodations for the independent evolution of products in a manner that protects proprietary information, domain knowledge, and safety data. The collective safety outcome for the SoS is based on the interdependence of mitigations by each constituent component and could not be accomplished by any single component. This dependency must be explicit in the assurance case and should include operational mitigations predicated on people and processes. Assurance cases could be used to gain regulatory approval of conventional and new technology. They can also serve to demonstrate consistency with a desired level of safety, especially in SoSs for which existing standards may not be adequate. This paper also provides guidelines for preserving alignment between component assurance cases along a product supply chain and the respective SoSs that they support. It shows how assurance is a continuous process that spans product evolution through the monitoring of interdependent requirements and SPIs. The interdependency necessary for a successful assurance case encourages stakeholders to identify and formally accept critical interconnections between related organizations. The resulting coordination promotes accountability for safety through increased awareness and the cultivation of a positive safety culture.
ISSN: 2155-7209
2022-06-09
Dizaji, Lida Ghaemi, Hu, Yaoping.  2021.  Building And Measuring Trust In Human-Machine Systems. 2021 IEEE International Conference on Autonomous Systems (ICAS). :1–5.
In human-machine systems (HMS), the trust that humans place in machines is a complex concept and attracts increasing research effort. Herein, we reviewed recent studies on building and measuring trust in HMS. The review was based on one comprehensive model of trust, IMPACTS, which has seven features: intention, measurability, performance, adaptivity, communication, transparency, and security. The review found that, in the past five years, HMS have fulfilled the features of intention, measurability, communication, and transparency, and most HMS consider the feature of performance. However, HMS rarely address the feature of adaptivity and neglect the feature of security, owing to their use of stand-alone simulations. These findings indicate that future work on the features of adaptivity and/or security is imperative to foster human trust in HMS.
Hou, Ming.  2021.  Enabling Trust in Autonomous Human-Machine Teaming. 2021 IEEE International Conference on Autonomous Systems (ICAS). :1–1.
The advancement of AI enables the evolution of machines from relatively simple automation to completely autonomous systems that augment human capabilities with improved quality and productivity in work and life. The singularity is near! However, humans are still vulnerable. The COVID-19 pandemic reminds us of our limited knowledge about nature. The recent accidents involving Boeing 737 Max passengers ring the alarm again about the potential risks of human-autonomy symbiosis technologies. A key challenge of safe and effective human-autonomy teaming is enabling "trust" within the human-machine team. It is even more challenging when we face insufficient data, incomplete information, indeterministic conditions, and inexhaustive solutions for uncertain actions. This calls for appropriate design guidance and scientific methodologies for developing safety-critical autonomous systems and AI functions. The question is how to build and maintain a safe, effective, and trusted partnership between humans and autonomous systems. This talk discusses a context-based and interaction-centred design (ICD) approach for developing a safe and collaborative partnership between humans and technology by optimizing the interaction between human intelligence and AI. An associated trust model, IMPACTS (Intention, Measurability, Performance, Adaptivity, Communications, Transparency, and Security), will also be introduced to enable practitioners to foster an assured and calibrated trust relationship between humans and their partner autonomous systems. A real-world example of human-autonomy teaming in a military context will be explained to illustrate the utility and effectiveness of these trust enablers.
2022-02-24
Klenze, Tobias, Sprenger, Christoph, Basin, David.  2021.  Formal Verification of Secure Forwarding Protocols. 2021 IEEE 34th Computer Security Foundations Symposium (CSF). :1–16.
Today's Internet is built on decades-old networking protocols that lack scalability, reliability, and security. In response, the networking community has developed path-aware Internet architectures that solve these issues while simultaneously empowering end hosts. In these architectures, autonomous systems construct authenticated forwarding paths based on their routing policies. Each end host then selects one of these authorized paths and includes it in the packet header, thus allowing routers to efficiently determine how to forward the packet. A central security property of these architectures is path authorization, requiring that packets can only travel along authorized paths. This property protects the routing policies of autonomous systems from malicious senders. The fundamental role of packet forwarding in the Internet and the complexity of the authentication mechanisms employed call for a formal analysis. In this vein, we develop in Isabelle/HOL a parameterized verification framework for path-aware data plane protocols. We first formulate an abstract model without an attacker for which we prove path authorization. We then refine this model by introducing an attacker and by protecting authorized paths using (generic) cryptographic validation fields. This model is parameterized by the protocol's authentication mechanism and assumes five simple verification conditions that are sufficient to prove the refinement of the abstract model. We validate our framework by instantiating it with several concrete protocols from the literature and proving that they each satisfy the verification conditions and hence path authorization. No invariants need to be proven for the instantiation. Our framework thus supports low-effort security proofs for data plane protocols. The results hold for arbitrary network topologies and sets of authorized paths, a guarantee that state-of-the-art automated security protocol verifiers cannot currently provide.
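The idea of protecting authorized paths with cryptographic validation fields can be illustrated with a minimal sketch, assuming per-AS symmetric keys and HMAC-based fields. This is not the paper's formal model, and all function names are hypothetical:

```python
import hmac, hashlib

# Illustrative sketch of path authorization: the control plane attaches
# one MAC ("validation field") per on-path AS, computed over the whole
# authorized path; each router recomputes its field before forwarding,
# so a packet cannot travel an unauthorized detour undetected.

def validation_field(key: bytes, path: list) -> bytes:
    msg = "/".join(path).encode()
    return hmac.new(key, msg, hashlib.sha256).digest()

def authorize_path(keys: dict, path: list) -> dict:
    """Control plane: attach one validation field per on-path AS."""
    return {asn: validation_field(keys[asn], path) for asn in path}

def check_at_router(keys, asn, path, fields) -> bool:
    """Data plane: the router of `asn` verifies its field before forwarding."""
    return hmac.compare_digest(fields[asn], validation_field(keys[asn], path))

keys = {"AS1": b"k1", "AS2": b"k2"}
path = ["AS1", "AS2"]
fields = authorize_path(keys, path)
assert check_at_router(keys, "AS1", path, fields)                       # authorized path passes
assert not check_at_router(keys, "AS1", ["AS1", "AS3", "AS2"], fields)  # detour is rejected
```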
2022-02-03
Doroftei, Daniela, De Vleeschauwer, Tom, Bue, Salvatore Lo, Dewyn, Michaël, Vanderstraeten, Frik, De Cubber, Geert.  2021.  Human-Agent Trust Evaluation in a Digital Twin Context. 2021 30th IEEE International Conference on Robot Human Interactive Communication (RO-MAN). :203—207.
Autonomous systems have the potential to accomplish missions more quickly and effectively, while reducing risks to human operators and costs. However, since the use of autonomous systems is still relatively new, there are still a lot of challenges associated with trusting these systems. Without operators in direct control of all actions, there are significant concerns associated with endangering human lives or damaging equipment. For this reason, NATO has issued a challenge seeking to identify ways to improve decision-maker and operator trust when deploying autonomous systems, and de-risk their adoption. This paper presents the proposal of the winning solution to this NATO challenge. It approaches trust as a multi-dimensional concept, by incorporating the four dimensions of human-agent trust establishment in a digital twin context.
2022-01-25
Rouff, Christopher, Watkins, Lanier, Sterritt, Roy, Hariri, Salim.  2021.  SoK: Autonomic Cybersecurity - Securing Future Disruptive Technologies. 2021 IEEE International Conference on Cyber Security and Resilience (CSR). :66—72.
This paper is a systemization of knowledge of autonomic cybersecurity. Disruptive technologies, such as IoT, AI, and autonomous systems, are becoming more prevalent and often have little or no cybersecurity protection. This lack of security is contributing to the expanding cybersecurity attack surface. The autonomic computing initiative was started to address the complexity of administering complex computing systems by making them self-managing. Autonomic systems have attributes that address cyberattacks, such as self-protection and self-healing, which can secure new technologies. There have been a number of research projects on autonomic cybersecurity, with different approaches and target technologies, many of them disruptive. This paper reviews autonomic computing, analyzes research on autonomic cybersecurity, and provides a systemization of knowledge of the research. The paper concludes by identifying gaps in autonomic cybersecurity for future research.
2021-11-08
He, Hongmei, Gray, John, Cangelosi, Angelo, Meng, Qinggang, McGinnity, T. M., Mehnen, Jörn.  2020.  The Challenges and Opportunities of Artificial Intelligence for Trustworthy Robots and Autonomous Systems. 2020 3rd International Conference on Intelligent Robotic and Control Engineering (IRCE). :68–74.
Trust is essential in designing autonomous and semi-autonomous Robots and Autonomous Systems (RAS), because of the ``No trust, no use'' concept. RAS should provide high-quality services, with four key properties that make them trustworthy: they must be (i) robust with regard to any system-health-related issues, (ii) safe for any matters in their surrounding environments, (iii) secure against any threats from cyberspace, and (iv) trusted for human-machine interaction. This article thoroughly analyses the challenges in implementing trustworthy RAS with respect to the four properties, and addresses the power of AI in improving the trustworthiness of RAS. While we focus on the benefits that AI brings to humans, we should recognize the potential risks that could be caused by AI. This article introduces for the first time a set of key aspects of human-centered AI for RAS, which can serve as a cornerstone for implementing trustworthy RAS by design in the future.
2021-09-17
Christie V, Samuel H., Smirnova, Daria, Chopra, Amit K., Singh, Munindar P..  2020.  Protocols Over Things: A Decentralized Programming Model for the Internet of Things. 53:60–68.
Current programming models for developing Internet of Things (IoT) applications are logically centralized and ill-suited for most IoT applications. We contribute Protocols over Things, a decentralized programming model that represents an IoT application via a protocol between the parties involved and provides improved performance over network-level delivery guarantees.
2021-03-29
Agirre, I..  2020.  Safe and secure software updates on high-performance embedded systems. 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :68—69.

The next generation of dependable embedded systems features autonomy and higher levels of interconnection. Autonomy is commonly achieved with the support of artificial intelligence algorithms that pose high computing demands on the hardware platform, reaching a high-performance scale. This involves a dramatic increase in software and hardware complexity, a fact that, together with the novelty of the technology, raises serious concerns regarding system dependability. Traditional approaches to certification require demonstrating that the system will be acceptably safe to operate before it is deployed into service. The nature of autonomous systems, with potentially infinite scenarios, configurations, and unanticipated interactions, makes it increasingly difficult to support such a claim at design time. In this context, extended networking technologies can be exploited to collect post-deployment evidence that serves to oversee whether safety assumptions are preserved during operation and to continuously improve the system through regular software updates. These software updates are not only convenient for critical bug fixing but also necessary for keeping the interconnected system resilient against security threats. However, such an approach requires a reconsideration of traditional certification practices.

2021-02-01
Papadopoulos, A. V., Esterle, L..  2020.  Situational Trust in Self-aware Collaborating Systems. 2020 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C). :91–94.
Trust among humans affects the way we interact with each other. In autonomous systems, this trust is often predefined and hard-coded before the systems are deployed. However, when systems encounter unfolding situations that require them to interact with others, a notion of trust becomes inevitable. In this paper, we discuss trust as a fundamental measure enabling an autonomous system to decide whether or not to interact with another system, whether biological or artificial. These decisions become increasingly important as systems continuously integrate with others at runtime.
2020-12-17
Sandoval, S., Thulasiraman, P..  2019.  Cyber Security Assessment of the Robot Operating System 2 for Aerial Networks. 2019 IEEE International Systems Conference (SysCon). :1—8.

The Robot Operating System (ROS) is a widely adopted standard robotic middleware. However, its preliminary design is devoid of any network security features. Military-grade unmanned systems must be guarded against network threats. ROS 2 is built upon the Data Distribution Service (DDS) standard and is designed to provide solutions to identified ROS 1 security vulnerabilities by incorporating authentication, encryption, and process profile features, which rely on public key infrastructure. The Department of Defense is looking to use ROS 2 for its military-centric robotics platform. This paper seeks to demonstrate that ROS 2 and its DDS security architecture can serve as a functional platform for use in military-grade unmanned systems, particularly in unmanned Naval aerial swarms. In this paper, we focus on the viability of ROS 2 to safeguard communications between swarms and a ground control station (GCS). We test ROS 2's ability to mitigate and withstand certain cyber threats, specifically that of rogue nodes injecting unauthorized data and accessing services that will disable parts of the UAV swarm. We use the Gazebo robotics simulator to target individual UAVs to ascertain the effectiveness of our attack vectors under specific conditions. We demonstrate the effectiveness of ROS 2 in mitigating the chosen attack vectors but observe a measurable operational delay within our simulations.

2020-12-11
Ghose, N., Lazos, L., Rozenblit, J., Breiger, R..  2019.  Multimodal Graph Analysis of Cyber Attacks. 2019 Spring Simulation Conference (SpringSim). :1—12.

The limited information on cyberattacks available in the unclassified regime makes it hard to standardize their analysis. We address the problem of modeling and analyzing cyberattacks using a multimodal graph approach. We formulate the stages, actors, and outcomes of cyberattacks as a multimodal graph, whose nodes include cyberattack victims, adversaries, autonomous systems, and observed cyber events. In multimodal graphs, single-modality graphs are interconnected according to their interactions. We apply community and centrality analysis to the graph to obtain in-depth insights into an attack. In community analysis, we cluster nodes that exhibit “strong” inter-modal ties. We further use centrality to rank nodes according to their importance; classifying nodes by centrality reveals the progression of the attack from the attacker to the targeted nodes. We apply our methods to two popular case studies, namely GhostNet and Putter Panda, and demonstrate a clear distinction in the attack stages.
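A toy version of the multimodal-graph idea can be sketched as follows; the node names and edges are invented for illustration (they are not data from the case studies), and degree centrality stands in for the paper's centrality analysis:

```python
from collections import defaultdict

# Nodes of different modalities (adversary, event, AS, victim) are joined
# by observed interactions; centrality then ranks node importance, giving
# a rough progression from attacker to targeted nodes.

edges = [
    ("adv:attacker", "event:phish"),
    ("event:phish", "victim:contractor"),
    ("adv:attacker", "as:AS64500"),
    ("as:AS64500", "event:c2-beacon"),
    ("event:c2-beacon", "victim:contractor"),
    ("adv:attacker", "event:scan"),
    ("event:scan", "victim:contractor"),
]

def degree_centrality(edges):
    """Degree of each node, normalized by the maximum possible degree."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    n = len(deg)
    return {node: d / (n - 1) for node, d in deg.items()}

c = degree_centrality(edges)
# The adversary and the victim sit at the ends of every attack chain,
# so they dominate the ranking; events and ASes form the middle stages.
print(sorted(c, key=c.get, reverse=True)[:2])
```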

2020-12-01
Attia, M., Hossny, M., Nahavandi, S., Dalvand, M., Asadi, H..  2018.  Towards Trusted Autonomous Surgical Robots. 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :4083—4088.

Throughout the last few decades, a breakthrough has taken place in the field of autonomous robotics. Robots have been introduced to perform dangerous, dirty, difficult, and dull tasks in service of the community. They have also been used for health-care-related tasks, such as enhancing the surgical skills of surgeons and enabling surgeries in remote areas. This may help to perform operations in remote areas efficiently and in a timely manner, with or without human intervention. One of the main advantages is that robots are not affected by human-related problems such as fatigue or momentary lapses of attention; thus, they can perform repeated and tedious operations. In this paper, we propose a framework to establish trust in autonomous medical robots based on mutual understanding and transparency in decision making.

2020-11-23
Wang, M., Hussein, A., Rojas, R. F., Shafi, K., Abbass, H. A..  2018.  EEG-Based Neural Correlates of Trust in Human-Autonomy Interaction. 2018 IEEE Symposium Series on Computational Intelligence (SSCI). :350–357.
This paper aims to identify the neural correlates of human trust in autonomous systems using electroencephalography (EEG) signals. Quantifying the relationship between trust and brain activity allows for real-time assessment of human trust in automation. This line of effort contributes to the design of trusted autonomous systems and, more generally, to modeling human-autonomy interaction. To study the correlates of trust, we use an investment game in which artificial agents with different levels of trustworthiness are employed. We collected EEG signals from 10 human subjects while they played the game, then computed three types of features from these signals, capturing the signal's time dependency, complexity, and power spectrum using an autoregressive (AR) model, sample entropy, and Fourier analysis, respectively. Results of a mixed-model analysis showed a significant correlation between human trust and EEG features from certain electrodes. The frontal and occipital areas are identified as the predominant brain areas correlated with trust.
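One of the three feature types, spectral power from Fourier analysis, can be sketched with a naive DFT; the sampling rate, band edges, and test signal here are illustrative, not the paper's experimental settings:

```python
import math

# Spectral power of an EEG channel in a frequency band, via a naive DFT.
# (AR coefficients and sample entropy are the other two feature types.)

def band_power(signal, fs, f_lo, f_hi):
    """Sum of squared DFT magnitudes over [f_lo, f_hi] Hz, normalized by n."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
    return power

fs = 128  # Hz, one second of samples
t = [i / fs for i in range(fs)]
alpha = [math.sin(2 * math.pi * 10 * ti) for ti in t]  # pure 10 Hz "alpha" tone
print(band_power(alpha, fs, 8, 13) > band_power(alpha, fs, 14, 30))  # → True
```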
2020-10-05
Kanellopoulos, Aris, Vamvoudakis, Kyriakos G., Gupta, Vijay.  2019.  Decentralized Verification for Dissipativity of Cascade Interconnected Systems. 2019 IEEE 58th Conference on Decision and Control (CDC). :3629—3634.

In this paper, we consider the problem of decentralized verification for large-scale cascade interconnections of linear subsystems such that dissipativity properties of the overall system are guaranteed with minimum knowledge of the dynamics. In order to achieve compositionality, we distribute the verification process among the individual subsystems, which utilize limited information received locally from their immediate neighbors. Furthermore, to obviate the need for full knowledge of the subsystem parameters, each decentralized verification rule employs a model-free learning structure; a reinforcement learning algorithm that allows for online evaluation of the appropriate storage function that can be used to verify dissipativity of the system up to that point. Finally, we show how the interconnection can be extended by adding learning-enabled subsystems while ensuring dissipativity.
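The dissipativity property verified by each local rule can be stated in its standard form (generic notation, not the paper's):

```latex
% A system with input u, output y, and supply rate s(u, y) is dissipative
% if there exists a storage function V(x) >= 0 such that, along all
% trajectories,
V\bigl(x(t_1)\bigr) - V\bigl(x(t_0)\bigr) \;\le\; \int_{t_0}^{t_1} s\bigl(u(t), y(t)\bigr)\, dt .
```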

2020-09-28
Gawanmeh, Amjad, Alomari, Ahmad.  2018.  Taxonomy Analysis of Security Aspects in Cyber Physical Systems Applications. 2018 IEEE International Conference on Communications Workshops (ICC Workshops). :1–6.
The notion of Cyber Physical Systems (CPS) is based on using recent computing, communication, and control methods to design and operate intelligent and autonomous systems that can provide innovative services. The existence of several critical applications within the scope of cyber physical systems results in many security and privacy concerns. On the other hand, the distributed nature of CPS increases security risks. In addition, certain CPS, such as medical ones, generate and process sensitive data regularly; hence, this data must be protected at all stages of generation, processing, and transmission. In this paper, we present a taxonomy-based analysis of the state-of-the-art work on security issues in CPS. We identify four types of analysis for security issues in CPS: Modeling, Detection, Prevention, and Response. In addition, we identify six applications of CPS where security is relevant: eHealth and medical, smart grid and power related, vehicular technologies, industrial control and manufacturing, autonomous systems and UAVs, and finally IoT-related issues. We then map existing works in the literature into these categories.
2020-07-16
Xiao, Jiaping, Jiang, Jianchun.  2018.  Real-time Security Evaluation for Unmanned Aircraft Systems under Data-driven Attacks*. 2018 13th World Congress on Intelligent Control and Automation (WCICA). :842—847.

With rapid advances in the fields of the Internet of Things and autonomous systems, the network security of cyber-physical systems (CPS) becomes more and more important. This paper focuses on real-time security evaluation for unmanned aircraft systems, which are cyber-physical systems relying on information communication and control systems to achieve autonomous decision making. Our problem formulation is motivated by scenarios involving autonomous unmanned aerial vehicles (UAVs) working continuously under data-driven attacks in an open, uncertain, and even hostile environment. We first investigate the state estimation method in CPS integrated with a data-driven attack model, and then propose a real-time security scoring algorithm to evaluate the security condition of unmanned aircraft systems under different threat patterns, considering the vulnerability of the systems and the consequences brought by data attacks. Our simulation on a UAV illustrates the efficiency and reliability of the algorithm.
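A residual-based flavor of such scoring can be sketched as follows; this is a hypothetical illustration of the general idea (sensor measurements compared against state estimates), not the paper's actual algorithm:

```python
# Under a data-driven attack, injected measurements drift away from the
# state estimate; the fraction of residuals within a tolerance serves as
# a crude real-time security score in [0, 1].

def security_score(measurements, estimates, threshold):
    """Fraction of measurement residuals within `threshold`."""
    residuals = [abs(m - e) for m, e in zip(measurements, estimates)]
    ok = sum(r <= threshold for r in residuals)
    return ok / len(residuals)

clean = security_score([1.0, 2.1, 2.9], [1.0, 2.0, 3.0], threshold=0.2)
attacked = security_score([1.0, 5.0, 9.0], [1.0, 2.0, 3.0], threshold=0.2)
print(clean, attacked)  # attacked traffic scores far lower than clean traffic
```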

2020-06-29
Giri, Nupur, Jaisinghani, Rahul, Kriplani, Rohit, Ramrakhyani, Tarun, Bhatia, Vinay.  2019.  Distributed Denial Of Service(DDoS) Mitigation in Software Defined Network using Blockchain. 2019 Third International conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC). :673–678.
A DDoS attack is a spiteful attempt to disrupt legitimate traffic to a server by overwhelming the target with a flood of requests from geographically dispersed systems. Today, attackers prefer DDoS attack methods to disrupt target services, as they generate GBs to TBs of random data to flood the target. Existing mitigation strategies are not considered very effective because they lack resources and the flexibility to cope with attacks on their own. Effective DDoS mitigation can instead be provided using emerging technologies such as blockchain and SDN (Software-Defined Networking). We propose an architecture in which a smart contract is deployed on a private blockchain, facilitating a collaborative DDoS mitigation architecture across multiple network domains. The blockchain application is used as an additional security service: shared protection is enabled among all hosts, and with the help of smart contracts, rules are distributed to all hosts. In addition, SDN can effectively enable services and security policies dynamically. This mechanism gives ASes (Autonomous Systems) the possibility to deploy their own DPS (DDoS Prevention Service) without transferring control of the network to a third party. This paper focuses on the challenges of protecting a hybridized enterprise from the ravages of rapidly evolving Distributed Denial of Service (DDoS) attacks.
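The rule-sharing idea can be sketched with an in-memory stand-in for the smart contract; the class and method names are hypothetical, not the paper's contract API:

```python
# A shared, append-only registry stands in for the on-chain smart
# contract: each participating domain publishes attacker addresses, and
# every host filters traffic against the union of the distributed rules.

class MitigationContract:
    """Append-only registry of block rules, as a smart contract would hold."""
    def __init__(self):
        self.rules = []  # (reporting_as, attacker_ip) pairs

    def publish(self, reporting_as, attacker_ip):
        self.rules.append((reporting_as, attacker_ip))

    def blocklist(self):
        return {ip for _, ip in self.rules}

def allow(contract, src_ip):
    """A host admits traffic only if no domain has reported the source."""
    return src_ip not in contract.blocklist()

contract = MitigationContract()
contract.publish("AS64500", "203.0.113.7")   # reported by one domain...
contract.publish("AS64501", "198.51.100.9")
assert not allow(contract, "203.0.113.7")    # ...blocked by every host
assert allow(contract, "192.0.2.10")         # unreported sources pass
```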
2020-02-26
Saad, Muhammad, Anwar, Afsah, Ahmad, Ashar, Alasmary, Hisham, Yuksel, Murat, Mohaisen, Aziz.  2019.  RouteChain: Towards Blockchain-Based Secure and Efficient BGP Routing. 2019 IEEE International Conference on Blockchain and Cryptocurrency (ICBC). :210–218.

Routing on the Internet is defined among autonomous systems (ASes) under a weak trust model that assumes ASes are honest. While this trust model strengthens connectivity among ASes, it creates an attack surface that malicious entities exploit to hijack routing paths. One such attack is BGP prefix hijacking, in which a malicious AS broadcasts IP prefixes that belong to a target AS, thereby hijacking its traffic. In this paper, we propose RouteChain: a blockchain-based secure BGP routing system that counters BGP hijacking and maintains a consistent view of Internet routing paths. To that end, we leverage the provenance-assurance and tamper-proof properties of blockchains to augment trust among ASes. We group ASes by their geographical (network) proximity and construct a bihierarchical blockchain model that detects false prefixes before they spread over the Internet. We validate the strengths of our design through simulations and demonstrate its effectiveness with a case study of the YouTube hijacking of 2008. Our proposed scheme is a standalone service that can be deployed incrementally without the need for a central authority.
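The detection idea can be sketched as a ledger of agreed prefix-to-origin bindings against which new announcements are checked. This is a simplification under stated assumptions: the real system distributes this state across a bihierarchical blockchain rather than a single dictionary, and the class names here are invented for illustration.

```python
# Hypothetical sketch of hijack detection: keep an agreed record of
# prefix-to-origin bindings; any BGP announcement whose origin AS
# conflicts with the recorded owner is flagged before it propagates.

class PrefixLedger:
    def __init__(self):
        self.owners = {}  # prefix -> legitimate origin AS

    def register(self, prefix, origin_as):
        self.owners[prefix] = origin_as

    def validate(self, prefix, origin_as):
        """Return True only if the announcement matches the recorded owner."""
        return self.owners.get(prefix) == origin_as

ledger = PrefixLedger()
ledger.register("208.65.153.0/24", "AS36561")         # YouTube's origin AS
print(ledger.validate("208.65.153.0/24", "AS36561"))  # legitimate: True
print(ledger.validate("208.65.153.0/24", "AS17557"))  # hijack-style conflict: False
```

The example mirrors the 2008 incident cited in the abstract, where a more-specific YouTube prefix was announced from a different origin AS; a conflict check like this, agreed among geographically grouped peers, is what blocks the false prefix from spreading.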

2019-12-18
Essaid, Meryam, Kim, DaeYong, Maeng, Soo Hoon, Park, Sejin, Ju, Hong Taek.  2019.  A Collaborative DDoS Mitigation Solution Based on Ethereum Smart Contract and RNN-LSTM. 2019 20th Asia-Pacific Network Operations and Management Symposium (APNOMS). :1–6.

Recently, Distributed Denial-of-Service (DDoS) attacks have become more and more sophisticated, leaving existing defence systems unable to withstand wide-ranging attacks on their own. Collaborative mitigation has therefore become a necessary way to extend defence mechanisms. However, existing coordinated DDoS mitigation approaches either require complex configuration or are costly. Blockchain technology offers a solution that reduces the complexity of signalling between DDoS defence systems, as well as a platform on which many autonomous systems (ASes) can share hardware resources and defence capabilities for effective DDoS defence. In this work, we also use a deep-learning DDoS detection system: we identify the individual DDoS attack class and determine whether the incoming traffic is legitimate or an attack. By classifying attack traffic flows separately, our proposed mitigation technique can deny only the specific traffic causing the attack, instead of blocking all traffic directed toward the victim(s).
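The class-aware filtering described above can be sketched as follows. The classifier here is a trivial stand-in (the paper uses an RNN-LSTM), and the class names and flow fields are assumptions made for illustration only.

```python
# Hypothetical sketch of class-aware mitigation: a classifier labels each
# flow, and only flows falling into an attack class are dropped, so
# legitimate traffic toward the victim keeps flowing.

ATTACK_CLASSES = {"syn_flood", "udp_flood", "http_flood"}

def classify(flow):
    """Stand-in for the trained RNN-LSTM detector; reads a precomputed label."""
    return flow.get("label", "benign")

def filter_traffic(flows):
    """Forward only flows whose predicted class is not an attack class."""
    return [f for f in flows if classify(f) not in ATTACK_CLASSES]

flows = [
    {"src": "198.51.100.9", "label": "benign"},
    {"src": "203.0.113.7", "label": "syn_flood"},
]
kept = filter_traffic(flows)
print([f["src"] for f in kept])  # only the legitimate flow survives
```

The design point is the contrast with blanket blackholing: because the detector assigns a per-flow class, the mitigation rule can target the attack class alone rather than all traffic destined for the victim.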

2019-12-16
Lopes, José, Robb, David A., Ahmad, Muneeb, Liu, Xingkun, Lohan, Katrin, Hastie, Helen.  2019.  Towards a Conversational Agent for Remote Robot-Human Teaming. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :548–549.

There are many challenges in deploying robots remotely, including lack of operator situation awareness and decreased trust. Here, we present a conversational agent embodied in a Furhat robot that can aid the deployment of such remote robots by facilitating teaming with varying levels of operator control.

2019-03-06
Aniculaesei, Adina, Grieser, Jörg, Rausch, Andreas, Rehfeldt, Karina, Warnecke, Tim.  2018.  Towards a Holistic Software Systems Engineering Approach for Dependable Autonomous Systems. Proceedings of the 1st International Workshop on Software Engineering for AI in Autonomous Systems. :23-30.

Autonomous systems are gaining momentum in various application domains, such as autonomous vehicles, autonomous transport robotics, and self-adaptation in smart homes. Product liability regulations impose high standards on manufacturers of such systems with respect to dependability (safety, security, and privacy). Today's conventional engineering methods are not adequate for providing guarantees on dependability requirements in a cost-efficient manner; e.g., road tests in the automotive industry accumulate millions of miles before a system can be considered sufficiently safe. System engineers will no longer be able to test, or formally verify, autonomous systems during development time in order to guarantee the dependability requirements in advance. In this vision paper, we introduce a new holistic software systems engineering approach for autonomous systems, which integrates development-time methods as well as operation-time techniques. With this approach, we aim to give users a transparent view of the confidence level of the autonomous system in use with respect to the dependability requirements. We present results already obtained and point out research goals to be addressed in the future.

Peruma, Anthony, Krutz, Daniel E..  2018.  Security: A Critical Quality Attribute in Self-Adaptive Systems. Proceedings of the 13th International Conference on Software Engineering for Adaptive and Self-Managing Systems. :188-189.

Self-Adaptive Systems (SAS) are revolutionizing many aspects of our society. From server clusters to autonomous vehicles, SAS are becoming more ubiquitous and essential to our world. Security is frequently a priority for these systems, as many SAS conduct mission-critical operations or work with sensitive information. Fortunately, security is increasingly recognized as an indispensable aspect of virtually all computing systems and of every phase of software development. Yet despite this growing prominence, from computing education to vulnerability detection systems, security is often treated as just another concern in creating good software: however critical it is, it remains a quality attribute alongside aspects such as reliability, stability, or adaptability in a SAS.