Biblio
Today's more reliable communication technology, together with the availability of greater computational power, has paved the way for the introduction of more advanced automation systems based on distributed intelligence and multi-agent technology. However, the abundance of data, while making these systems more powerful, can at the same time be their biggest vulnerability. In a web of interconnected devices and components operating within an automation framework, a malfunction in a single device, whether through internal failure or external damage or intrusion, may produce detrimental side effects that spread across the entire underlying system. The potentially large number of devices, along with their inherent interrelations and interdependencies, may hinder the ability of human operators to interpret events, identify their scope of impact, and take remedial action when necessary. Using the concepts of graph-theoretic fuzzy cognitive maps (FCMs) and expert systems, this paper puts forth a solution that can reveal the weak links and vulnerabilities of an automation system exposed to partial internal failure or external damage. A case study on the IEEE 34-bus test distribution system demonstrates the effectiveness of the proposed scheme.
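For intuition about how an FCM can propagate the effect of a single-device failure through interdependent components, the following is a minimal sketch of a generic sigmoid-threshold FCM update. The weight matrix, concept names, and parameters are hypothetical illustrations, not values from the paper's IEEE 34-bus case study.

```python
# Sketch of fuzzy cognitive map (FCM) state propagation (generic Kosko-style
# update with self-feedback); all concepts and weights below are hypothetical.
import numpy as np

def sigmoid(x, lam=1.0):
    """Squashing function keeping concept activations in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-lam * x))

def fcm_step(state, W, lam=1.0):
    """One FCM update: A_i(t+1) = f(A_i(t) + sum_j w_ji * A_j(t))."""
    return sigmoid(state + W.T @ state, lam)

def propagate(state, W, steps=20, tol=1e-4):
    """Iterate the map until activations settle or the step budget runs out."""
    for _ in range(steps):
        new_state = fcm_step(state, W)
        if np.max(np.abs(new_state - state)) < tol:
            break
        state = new_state
    return state

# Hypothetical 3-concept chain: device failure -> feeder overload -> load loss.
W = np.array([[0.0, 0.8, 0.0],   # failure drives overload
              [0.0, 0.0, 0.6],   # overload drives load loss
              [0.0, 0.0, 0.0]])
initial = np.array([1.0, 0.0, 0.0])  # inject a single-device failure
print(propagate(initial, W))
```

Concepts whose activations end up high after convergence would flag the components most exposed to the injected failure, which is the kind of weak-link information the proposed scheme aims to surface.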
Multiple Security Domains Nondeducibility (MSDND) yields results even when an attack hides important information from electronic monitors and human operators. Because MSDND is based upon modal frames, it can analyze an event system as it progresses rather than relying on traces of the system. Not only does it provide results as the system evolves, but it can also point out attacks designed to be missed by other security models. This work examines information-flow disruption attacks such as Stuxnet and formally explains the role that implicit trust in the cyber security of a cyber-physical system (CPS) plays in the success of the attack. The fact that the attack hides behind MSDND can itself be used to help secure the system: modifications that break MSDND leave the attack nowhere to hide. Modal operators are defined to allow the manipulation of belief and trust states within the model. We show how the attack hides and exploits the operator's trust to remain undetected; indeed, trust in the CPS is key to the attack's success.
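For orientation, a hedged sketch of the nondeducibility condition in the belief/trust modal style; the symbols and frame details are a loose reconstruction for illustration and should be checked against the cited work:

$$
\mathrm{MSDND}(ES) \;\equiv\; \exists\, w \in W :\ \big[\, s_x \veebar s_y \,\big] \;\wedge\; \big[\, w \models \big( \nexists\, V^{i}_{x}(w) \;\wedge\; \nexists\, V^{i}_{y}(w) \big) \big],
$$

i.e., in some reachable world exactly one of two mutually exclusive statements $s_x$, $s_y$ holds, yet security domain $i$ has no valuation function for either, so an observer confined to that domain cannot deduce which is true. In the Stuxnet setting, $s_x$/$s_y$ would roughly be "the centrifuge speed is nominal"/"the centrifuge speed is not nominal": the compromised controller reports nominal values, so the operator's only effective valuation is attacker-controlled and the true plant state is nondeducible, which is why breaking MSDND (e.g., by adding an independent, trusted measurement path) removes the attack's hiding place.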