Biblio

Filters: Keyword is human operators
2019-12-02
Elfar, Mahmoud, Zhu, Haibei, Cummings, M. L., Pajic, Miroslav.  2019.  Security-Aware Synthesis of Human-UAV Protocols. 2019 International Conference on Robotics and Automation (ICRA). :8011–8017.
In this work, we synthesize collaboration protocols for human-unmanned aerial vehicle (H-UAV) command and control systems, where the human operator aids in securing the UAV by intermittently performing geolocation tasks to confirm its reported location. We first present a stochastic game-based model for the system that accounts for both the operator and an adversary capable of launching stealthy false-data injection attacks, causing the UAV to deviate from its path. We also describe the synthesis challenge arising from the UAV's hidden-information constraint. Next, we perform human experiments using the RESCHU-SA testbed we developed to identify the geolocation strategies that operators adopt. Furthermore, we apply machine learning techniques to the collected experimental data to predict the correctness of a geolocation task at a given location based on its geographical features. By representing the model as a delayed-action game and formalizing the system objectives, we use off-the-shelf model checkers to synthesize protocols for the human-UAV coalition that satisfy these objectives. Finally, we demonstrate the usefulness of the H-UAV protocol synthesis through a case study where the protocols are experimentally analyzed and further evaluated by human operators.
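
The abstract describes training machine-learning models to predict whether an operator will complete a geolocation task correctly at a given location, based on that location's geographical features. The sketch below is a minimal, hypothetical illustration of such a predictor: the feature set, the random-forest choice, and the synthetic data are assumptions for illustration and are not the authors' actual pipeline.

```python
# Illustrative sketch: predict geolocation-task correctness from geographical
# features, in the spirit of the ML step described in the abstract.
# Feature names and data below are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical per-location features (e.g., landmark density, road density,
# terrain uniformity); label = 1 if the operator geolocated correctly.
X = rng.random((200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2]
     + 0.2 * rng.standard_normal(200) > 0.4).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# The predicted probability of a correct geolocation at each location could
# feed the protocol synthesizer as an estimate of operator accuracy.
p_correct = clf.predict_proba(X_test)[:, 1]
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```
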
2015-05-01
Mohagheghi, S.  2014.  Integrity Assessment Scheme for Situational Awareness in Utility Automation Systems. IEEE Transactions on Smart Grid. 5:592-601.

Today's more reliable communication technology, together with the availability of higher computational power, has paved the way for the introduction of more advanced automation systems based on distributed intelligence and multi-agent technology. However, the abundance of data, while making these systems more powerful, can at the same time be their biggest vulnerability. In a web of interconnected devices and components functioning within an automation framework, a malfunction in a single device, whether through internal failure or external damage/intrusion, may lead to detrimental side effects that spread across the whole underlying system. The potentially large number of devices, along with their inherent interrelations and interdependencies, may hinder the ability of human operators to interpret events, identify their scope of impact, and take remedial actions when necessary. Using the concepts of graph-theoretic fuzzy cognitive maps (FCM) and expert systems, this paper puts forth a solution that can reveal the weak links and vulnerabilities of an automation system should it be exposed to partial internal failure or external damage. A case study on the IEEE 34-bus test distribution system demonstrates the efficiency of the proposed scheme.
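
As a rough illustration of the fuzzy-cognitive-map idea behind the proposed scheme, the sketch below iterates the standard FCM update A_i(t+1) = f(sum_j w_ji A_j(t)) over a tiny device graph and clamps one concept to simulate a device failure, then compares steady states to see which parts of the system are most affected. The concepts, weights, and squashing parameter are hypothetical and are not taken from the paper.

```python
# Minimal fuzzy cognitive map (FCM) sketch: propagate the effect of a single
# device failure through a small, hypothetical automation-system graph.
import numpy as np

def sigmoid(x, lam=2.0):
    """Standard FCM squashing function; maps activations into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-lam * x))

# Hypothetical concepts: 0 = field device, 1 = local controller,
# 2 = communication link, 3 = substation automation function.
# W[j, i] is the causal weight from concept j to concept i.
W = np.array([
    [0.0, 0.7, 0.0, 0.0],
    [0.0, 0.0, 0.6, 0.8],
    [0.0, 0.0, 0.0, 0.5],
    [0.0, 0.0, 0.0, 0.0],
])

def run_fcm(a0, clamp=None, steps=30):
    """Iterate A(t+1) = f(A(t) W); optionally clamp one concept's activation
    to model a sustained failure scenario."""
    a = a0.copy()
    for _ in range(steps):
        a = sigmoid(a @ W)
        if clamp is not None:
            idx, val = clamp
            a[idx] = val
    return a

baseline = run_fcm(np.full(4, 0.1))
failed   = run_fcm(np.full(4, 0.1), clamp=(0, 1.0))  # field device malfunctions

# Concepts whose steady state shifts the most are the "weak links" exposed
# by the failure.
print("impact per concept:", np.round(failed - baseline, 3))
```
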

2015-04-30
Howser, G., McMillin, B.  2014.  A Modal Model of Stuxnet Attacks on Cyber-physical Systems: A Matter of Trust. 2014 Eighth International Conference on Software Security and Reliability (SERE). :225-234.

Multiple Security Domains Nondeducibility (MSDND) yields results even when the attack hides important information from electronic monitors and human operators. Because MSDND is based upon modal frames, it can analyze the event system as it progresses rather than relying on traces of the system. Not only does it provide results as the system evolves, but it can also point out attacks designed to be missed by other security models. This work examines information flow disruption attacks such as Stuxnet and formally explains the role that implicit trust in the cyber security of a cyber-physical system (CPS) plays in the success of the attack. The fact that the attack hides behind MSDND can be used to help secure the system: modifications that break MSDND leave the attack nowhere to hide. Modal operators are defined to allow the manipulation of belief and trust states within the model. We show how the attack hides and exploits the operator's trust to remain undetected. In fact, trust in the CPS is key to the success of the attack.
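
To give a concrete feel for the nondeducibility idea (this is a toy illustration, not the paper's formal MSDND definition), the sketch below builds two possible worlds of a Stuxnet-like scenario that agree on everything the operator can observe but disagree on whether an attack is underway; since the operator's view cannot separate the worlds, the attack proposition is nondeducible from that view. The world variables and observation function are hypothetical.

```python
# Toy nondeducibility check, loosely inspired by MSDND: a proposition is
# nondeducible from an observer's view if two possible worlds agree on every
# observable yet disagree on the proposition. Variable names are hypothetical.

# Two possible worlds of a Stuxnet-like CPS: the PLC reports a nominal speed
# in both, but only one world has the actual centrifuge speed driven off-nominal.
worlds = [
    {"reported_speed": 1000, "actual_speed": 1000, "under_attack": False},
    {"reported_speed": 1000, "actual_speed": 1400, "under_attack": True},
]

# The human operator (and the electronic monitor) sees only what is reported.
OPERATOR_VIEW = ("reported_speed",)

def view(world, observables):
    return tuple(world[v] for v in observables)

def nondeducible(worlds, observables, proposition):
    """True if some pair of worlds is indistinguishable under the view
    yet differs on the proposition, so the observer cannot deduce it."""
    for w1 in worlds:
        for w2 in worlds:
            if (view(w1, observables) == view(w2, observables)
                    and proposition(w1) != proposition(w2)):
                return True
    return False

print(nondeducible(worlds, OPERATOR_VIEW, lambda w: w["under_attack"]))      # True
print(nondeducible(worlds, ("actual_speed",), lambda w: w["under_attack"]))  # False
# Adding an independent measurement of the physical state breaks the
# indistinguishability and leaves the attack nowhere to hide.
```
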