Biblio

Filters: Keyword is supervisory control
2023-05-12
Yao, Jingshi, Yin, Xiang, Li, Shaoyuan.  2022.  Sensor Deception Attacks Against Initial-State Privacy in Supervisory Control Systems. 2022 IEEE 61st Conference on Decision and Control (CDC). :4839–4845.
This paper investigates the problem of synthesizing sensor deception attackers against privacy in the context of supervisory control of discrete-event systems (DES). We consider a plant controlled by a supervisor, which is subject to sensor deception attacks. Specifically, we consider an active attacker that can tamper with the observations received by the supervisor. The privacy requirement of the supervisory control system is to maintain initial-state opacity, i.e., it does not want to reveal the fact that it was initiated from a secret state during its operation. On the other hand, the attacker aims to deceive the supervisor, by tampering with its observations, such that initial-state opacity is violated due to incorrect control actions. We take the attacker’s point of view and present an effective approach for synthesizing sensor attack strategies that threaten the privacy of the system. To this end, we propose the All Attack Structure (AAS), which records state estimates for both the supervisor and the attacker. This structure serves as a basis for synthesizing a sensor attack strategy. We also discuss how to reduce the synthesis complexity by leveraging structural properties. A running academic example is provided to illustrate the synthesis procedure.
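The initial-state estimation at the heart of this kind of attack can be illustrated with a small sketch (a hypothetical toy plant, not the paper's AAS construction): a sensor deception attacker rewrites the observation string, so the initial-state estimate computed from the tampered observations no longer contains the true, secret initial state.

```python
# Toy illustration (hypothetical states/events, not the paper's construction):
# a sensor deception attacker rewrites the observation string so that the
# initial-state estimate no longer contains the secret initial state.

PLANT = {  # (state, observed event) -> next state
    ("s0", "a"): "s1",   # s0 is the secret initial state
    ("s1", "b"): "s2",
    ("t0", "c"): "t1",   # t0 is a non-secret initial state
    ("t1", "b"): "t2",
}
INITIAL = {"s0", "t0"}

def initial_state_estimate(observations):
    """Return the set of initial states consistent with an observation string."""
    consistent = set()
    for init in INITIAL:
        state, ok = init, True
        for ev in observations:
            if (state, ev) in PLANT:
                state = PLANT[(state, ev)]
            else:
                ok = False
                break
        if ok:
            consistent.add(init)
    return consistent

true_run = ["a", "b"]       # the plant actually started in secret state s0
honest = initial_state_estimate(true_run)      # only s0 is consistent
tampered = initial_state_estimate(["c", "b"])  # attacker replaced "a" with "c"
# honest == {"s0"}, tampered == {"t0"}: the estimate has been steered away
# from the true initial state.
```

The real synthesis problem is harder because the attacker must also keep the supervisor's control actions consistent with what it believes it observes; the sketch only shows the estimation step.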
ISSN: 2576-2370
Ogawa, Kanta, Sawada, Kenji, Sakata, Kosei.  2022.  Vulnerability Modeling and Protection Strategies via Supervisory Control Theory. 2022 IEEE 11th Global Conference on Consumer Electronics (GCCE). :559–560.
The paper aims to discover vulnerabilities through the application of supervisory control theory and to design a defensive supervisor against vulnerability attacks. Supervisory control restricts the system behavior to satisfy the control specifications. The existence condition of the supervisor sometimes results in undesirable plant behavior, which can be regarded as a vulnerability of the control specifications. We aim to design a more robust supervisor against this vulnerability.
ISSN: 2378-8143
2022-03-22
Molina-Barros, Lucas, Romero-Rodriguez, Miguel, Pietrac, Laurent, Dumitrescu, Emil.  2021.  Supervisory control of post-fault restoration schemes in reconfigurable HVDC grids. 2021 23rd European Conference on Power Electronics and Applications (EPE'21 ECCE Europe). :1–10.
This paper studies the use of Supervisory Control Theory to design and implement post-fault restoration schemes in a HVDC grid. Our study focuses on the synthesis of discrete controllers and on the management of variable control rules during the execution of the protection strategy. The resulting supervisory control system can be proven "free of deadlocks" in the sense that designated tasks are always completed.
2021-12-20
Zheng, Shengbao, Shu, Shaolong, Lin, Feng.  2021.  Modeling and Control of Discrete Event Systems under Joint Sensor-Actuator Cyber Attacks. 2021 6th International Conference on Automation, Control and Robotics Engineering (CACRE). :216–220.
In this paper, we investigate joint sensor-actuator cyber attacks in discrete event systems. We assume that attackers can attack some sensors and actuators at the same time by altering observations and control commands. Because of the nondeterminism in observation and control caused by cyber attacks, the behavior of the supervised systems becomes nondeterministic and deviates from the target. We define two bounds on languages, an upper-bound and a lower-bound, to describe the nondeterministic behavior. We then use the upper-bound language to investigate the safety supervisory control problem under cyber attacks. After introducing CA-controllability and CA-observability, we successfully solve the supervisory control problem under cyber attacks.
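The bounding idea can be sketched in miniature (hypothetical events, not the authors' construction; one-step enabled-event sets stand in for the upper-bound and lower-bound languages): the upper bound collects behavior possible under some attack choice, while the lower bound keeps behavior that survives every attack choice.

```python
# Toy sketch of the upper/lower-bound idea under an actuator-side attack.
# All event names are hypothetical; one-step enabled-event sets stand in
# for the upper-bound and lower-bound languages of the paper.
from itertools import product

ATTACKABLE = {"b"}          # the attacker may override decisions on "b"

def attacked_enabled(supervisor_enables, attack_choice):
    """Enabled-event set after the attacker overrides attackable events."""
    enabled = set(supervisor_enables)
    for ev, force_on in attack_choice.items():
        if force_on:
            enabled.add(ev)
        else:
            enabled.discard(ev)
    return enabled

supervisor_enables = {"a"}  # the supervisor enables only "a"
attackable = sorted(ATTACKABLE)
choices = [dict(zip(attackable, bits))
           for bits in product([True, False], repeat=len(attackable))]

# Upper bound: behavior possible under SOME attack choice (union).
upper = set().union(*(attacked_enabled(supervisor_enables, c) for c in choices))
# Lower bound: behavior guaranteed under EVERY attack choice (intersection).
lower = set.intersection(*(attacked_enabled(supervisor_enables, c) for c in choices))
# upper == {"a", "b"}: "b" may occur under some attack.
# lower == {"a"}: only "a" is guaranteed regardless of the attack.
```

Safety analysis, as in the paper, then works against the upper bound, since it over-approximates everything the attacked closed loop might do.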
2021-02-01
Lee, J., Abe, G., Sato, K., Itoh, M..  2020.  Impacts of System Transparency and System Failure on Driver Trust During Partially Automated Driving. 2020 IEEE International Conference on Human-Machine Systems (ICHMS). :1–3.
The objective of this study is to explore changes in trust in a situation where drivers need to intervene. Trust in automation is a key determinant of appropriate interaction between drivers and the system. System transparency and the type of system failure shape trust in supervisory control. Subjective ratings of trust were collected to examine the impact of two factors: system transparency (Detailed vs. Less) and system failure (by Limits vs. Malfunction) in a driving simulator study in which drivers experienced a partially automated vehicle. We examined trust ratings at three points: before and after driver intervention in the automated vehicle, and after subsequent experience of flawless automated driving. Our results show that system transparency did not have a significant impact on trust change from before to after the intervention. System malfunction reduced trust relative to before the intervention, whilst system limits did not influence trust. The subsequent flawless experience restored the decreased trust; in addition, when the system limit occurred to drivers who had detailed information about the system, trust increased in spite of the intervention. The present findings have implications for automation design to achieve an appropriate level of trust.
2020-12-01
Nam, C., Li, H., Li, S., Lewis, M., Sycara, K..  2018.  Trust of Humans in Supervisory Control of Swarm Robots with Varied Levels of Autonomy. 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :825–830.

In this paper, we study trust-related human factors in supervisory control of swarm robots with varied levels of autonomy (LOA) in a target foraging task. We compare three LOAs: manual, mixed-initiative (MI), and fully autonomous LOA. In the manual LOA, the human operator chooses headings for a flocking swarm, issuing new headings as needed. In the fully autonomous LOA, the swarm is redirected automatically by changing headings using a search algorithm. In the mixed-initiative LOA, if performance declines, control is switched from human to swarm or swarm to human. The results of this work extend the current knowledge on human factors in swarm supervisory control. Specifically, the finding that the relationship between trust and performance improved for passively monitoring operators (i.e., improved situation awareness in higher LOAs) is particularly novel in its contradiction of earlier work. We also discover that operators switch the degree of autonomy when their trust in the swarm system is low. Last, our analysis shows that operators' preference for a lower LOA is confirmed in a new domain of swarm control.

2020-07-27
Babay, Amy, Tantillo, Thomas, Aron, Trevor, Platania, Marco, Amir, Yair.  2018.  Network-Attack-Resilient Intrusion-Tolerant SCADA for the Power Grid. 2018 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). :255–266.
As key components of the power grid infrastructure, Supervisory Control and Data Acquisition (SCADA) systems are likely to be targeted by nation-state-level attackers willing to invest considerable resources to disrupt the power grid. We present Spire, the first intrusion-tolerant SCADA system that is resilient to both system-level compromises and sophisticated network-level attacks and compromises. We develop a novel architecture that distributes the SCADA system management across three or more active sites to ensure continuous availability in the presence of simultaneous intrusions and network attacks. A wide-area deployment of Spire, using two control centers and two data centers spanning 250 miles, delivered nearly 99.999% of all SCADA updates initiated over a 30-hour period within 100ms. This demonstrates that Spire can meet the latency requirements of SCADA for the power grid.
2020-01-13
Zhu, Yuting, Lin, Liyong, Su, Rong.  2019.  Supervisor Obfuscation Against Actuator Enablement Attack. 2019 18th European Control Conference (ECC). :1760–1765.
In this paper, we propose and address the problem of supervisor obfuscation against actuator enablement attack, in a common setting where the actuator attacker can eavesdrop on the control commands issued by the supervisor. We propose a method to obfuscate an (insecure) supervisor to make it resilient against actuator enablement attack in such a way that the behavior of the original closed-loop system is preserved. An additional feature of the obfuscated supervisor, if it exists, is that it has exactly the minimum number of states among the set of all the resilient and behavior-preserving supervisors. Our approach involves a simple combination of two basic ideas: 1) a formulation of the problem of computing behavior-preserving supervisors as the problem of computing separating finite state automata under controllability and observability constraints, which can be tackled by using SAT solvers, and 2) the use of a recently proposed technique for the verification of attackability in our setting, with a normality assumption imposed on both the actuator attackers and supervisors.
Lin, Liyong, Thuijsman, Sander, Zhu, Yuting, Ware, Simon, Su, Rong, Reniers, Michel.  2019.  Synthesis of Supremal Successful Normal Actuator Attackers on Normal Supervisors. 2019 American Control Conference (ACC). :5614–5619.
In this paper, we propose and develop an actuator attack model for discrete-event systems. We assume the actuator attacker partially observes the execution of the closed-loop system and eavesdrops on the control commands issued by the supervisor. The attacker can modify each control command on a specified subset of attackable events. The goal of the actuator attacker is to remain covert until it can establish a successful attack and lead the attacked closed-loop system into generating certain damaging strings. We then present a characterization for the existence of a successful attacker and prove the existence of the supremal successful attacker when both the supervisor and the attacker are normal. Finally, we present an algorithm to synthesize the supremal successful normal attackers.
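The damage-reachability core of such an attacker can be sketched as a graph search (a hypothetical toy plant; covertness and partial observation, which the paper handles, are ignored here): breadth-first search for an event string the attacker can force through, even though the supervisor disables the damaging event.

```python
# Hypothetical toy, not the paper's algorithm: breadth-first search for an
# actuator attack that drives the plant into a damaging state. The attacker
# may enable events in ATTACKABLE that the supervisor disables. Covertness
# and partial observation are deliberately left out of this sketch.
from collections import deque

PLANT = {("q0", "u"): "q1", ("q1", "v"): "q2", ("q1", "d"): "BAD"}
SUPERVISOR_ENABLES = {"u", "v"}   # the supervisor never enables damaging "d"
ATTACKABLE = {"d"}                # the attacker can force "d" through

def find_attack(start="q0", damage="BAD"):
    """Return a shortest event string reaching the damaging state, or None."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == damage:
            return path
        for (src, ev), nxt in PLANT.items():
            if src != state:
                continue
            # The event fires if the supervisor enables it, or if the
            # attacker can override the supervisor's decision on it.
            if (ev in SUPERVISOR_ENABLES or ev in ATTACKABLE) and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [ev]))
    return None

# find_attack() returns ["u", "d"]: the attacker waits for "u", then forces
# the disabled event "d" to reach the damaging state.
```

The paper's synthesis additionally requires the attack to stay covert until the damage is inevitable, which this reachability sketch does not model.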
2019-08-26
Gonzalez, D., Alhenaki, F., Mirakhorli, M..  2019.  Architectural Security Weaknesses in Industrial Control Systems (ICS): An Empirical Study Based on Disclosed Software Vulnerabilities. 2019 IEEE International Conference on Software Architecture (ICSA). :31–40.

Industrial control systems (ICS) are systems used in critical infrastructures for supervisory control, data acquisition, and industrial automation. ICS have complex, component-based architectures with many different hardware, software, and human factors interacting in real time. Despite the importance of security concerns in industrial control systems, there has not been a comprehensive study that examined common security architectural weaknesses in this domain. Therefore, this paper presents the first in-depth analysis of 988 vulnerability advisory reports for Industrial Control Systems developed by 277 vendors. We performed a detailed analysis of the vulnerability reports to measure which components of ICS have been affected the most by known vulnerabilities, which security tactics were affected most often in ICS, and what the common architectural security weaknesses in these systems are. Our key findings were: (1) Human-Machine Interfaces, SCADA configurations, and PLCs were the most affected components, (2) 62.86% of vulnerability disclosures in ICS had an architectural root cause, (3) the most common architectural weaknesses were “Improper Input Validation”, followed by “Improper Neutralization of Input During Web Page Generation” and “Improper Authentication”, and (4) most tactic-related vulnerabilities were related to the tactics “Validate Inputs”, “Authenticate Actors” and “Authorize Actors”.

2018-02-14
Gutzwiller, R. S., Reeder, J..  2017.  Human interactive machine learning for trust in teams of autonomous robots. 2017 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA). :1–3.

Unmanned systems are increasing in number, while their manning requirements remain the same. To decrease manpower demands, machine learning techniques and autonomy are gaining traction and visibility. One barrier is human perception and understanding of autonomy. Machine learning techniques can result in “black box” algorithms that may yield high fitness, but poor comprehension by operators. However, Interactive Machine Learning (IML), a method to incorporate human input over the course of algorithm development by using neuro-evolutionary machine-learning techniques, may offer a solution. IML is evaluated here for its impact on developing autonomous team behaviors in an area search task. Initial findings show that IML-generated search plans were chosen over plans generated using a non-interactive ML technique, even though the participants trusted them slightly less. Further, participants discriminated each of the two types of plans from each other with a high degree of accuracy, suggesting the IML approach imparts behavioral characteristics into algorithms, making them more recognizable. Together the results lay the foundation for exploring how to team humans successfully with ML behavior.