Biblio


Filters: Keyword is Resiliency
2018-09-28
Li, Z., Li, S.  2017.  Random forest algorithm under differential privacy. 2017 IEEE 17th International Conference on Communication Technology (ICCT). :1901–1905.

To address the risk of data privacy disclosure during classification, this paper proposes a Random Forest algorithm under differential privacy named DPRF-gini. When building each decision tree, the algorithm first perturbs feature selection and attribute partitioning using the exponential mechanism, and then satisfies the differential privacy requirement by adding Laplace noise to the leaf nodes. Empirical results show that, compared with the original algorithm, data privacy protection is further enhanced while accuracy is only slightly reduced.
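The leaf-node step described above, adding Laplace noise to class counts so that one record changes any count by at most 1, can be sketched as follows (an illustrative fragment under our own naming, not the authors' DPRF-gini code):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Sample Laplace(0, scale) by the inverse-CDF transform.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_leaf_counts(class_counts: dict, epsilon: float) -> dict:
    # Counting queries have sensitivity 1 (one record moves one count by 1),
    # so Laplace(1/epsilon) noise per count gives epsilon-differential privacy.
    return {c: n + laplace_noise(1.0 / epsilon) for c, n in class_counts.items()}

def noisy_majority_label(class_counts: dict, epsilon: float):
    # A private leaf predicts the class with the largest *noisy* count.
    noisy = noisy_leaf_counts(class_counts, epsilon)
    return max(noisy, key=noisy.get)
```

With a wide margin between classes the noisy majority almost always matches the true one; as epsilon shrinks, mispredictions grow, which is the accuracy loss the abstract mentions.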

Lu, Z., Shen, H.  2017.  A New Lower Bound of Privacy Budget for Distributed Differential Privacy. 2017 18th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT). :25–32.

Distributed data aggregation via summation (counting) helps us learn the insights behind raw data. However, such computation suffers a high privacy risk from malicious collusion attacks: colluding adversaries infer a victim's privacy from the gaps between the aggregation outputs and their own source data. Among the solutions against such collusion attacks, Distributed Differential Privacy (DDP) is notably effective at privacy preservation. Specifically, a DDP scheme guarantees global differential privacy (the presence or absence of any data curator barely impacts the aggregation outputs) by ensuring local differential privacy at each data curator. To guarantee the overall privacy performance of a distributed data aggregation system against malicious collusion attacks, part of the existing work on such DDP schemes aims to provide an estimated lower bound on the privacy budget for global differential privacy. However, two main problems remain: low data utility from using a large global function sensitivity, and an unknown privacy guarantee when the aggregation sensitivity of the whole system is less than the sum of the data curators' aggregation sensitivities. To address these problems while ensuring distributed differential privacy, we provide a new lower bound on the privacy budget, which works with an unconditional aggregation sensitivity of the whole distributed system. Moreover, we study the performance of our privacy bound in different data-update scenarios. Both theoretical and experimental evaluations show that our privacy bound offers better global privacy performance than the existing work.
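The DDP setting described above, where each curator perturbs its local count before the aggregator sums the releases, might be sketched like this (a simplified illustration with our own naming; it does not reproduce the paper's bound):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Sample Laplace(0, scale) by the inverse-CDF transform.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def curator_release(local_count: float, epsilon: float, sensitivity: float = 1.0) -> float:
    # Each curator noises its own count, ensuring local differential
    # privacy at the curator's end before anything leaves it.
    return local_count + laplace_noise(sensitivity / epsilon)

def global_aggregate(local_counts, epsilon: float) -> float:
    # The aggregator only ever sees independently noised releases, so a
    # coalition of colluding curators cannot subtract away a victim's noise.
    return sum(curator_release(c, epsilon) for c in local_counts)
```

The total noise grows with the number of curators, which is exactly the utility cost that motivates tightening the lower bound on the privacy budget.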

2018-09-12
Boureanu, Ioana, Gérault, David, Lafourcade, Pascal, Onete, Cristina.  2017.  Breaking and Fixing the HB+DB Protocol. Proceedings of the 10th ACM Conference on Security and Privacy in Wireless and Mobile Networks. :241–246.

HB+ is a lightweight authentication scheme, which is secure against passive attacks if the Learning Parity with Noise (LPN) problem is hard. However, HB+ is vulnerable to a key-recovery man-in-the-middle (MiM) attack dubbed GRS. The HB+DB protocol added a distance-bounding dimension to HB+ and was experimentally shown to resist the GRS attack. We exhibit several security flaws in HB+DB. First, we refine the GRS strategy to induce a different key-recovery MiM attack, not deterred by HB+DB's distance-bounding. Second, we prove HB+DB impractical as a secure distance-bounding (DB) protocol, as its DB security levels scale poorly compared to other DB protocols. Third, we refute the claim that HB+DB's security against passive attackers relies on the hardness of LPN; moreover, (erroneously) requiring such hardness lowers HB+DB's efficiency and security. We also propose a new distance-bounding protocol called BLOG. It retains parts of HB+DB, yet BLOG is provably secure and enjoys better (asymptotic) security.
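The LPN core that HB+ builds on can be illustrated with the basic HB round (a toy sketch of plain HB only; HB+ adds a second secret and blinding, and HB+DB adds distance bounding, neither of which is shown here):

```python
import random

def dot(a, x):
    # Inner product over GF(2).
    return sum(ai & xi for ai, xi in zip(a, x)) % 2

def authenticate(tag_secret, reader_secret, eta=0.125, rounds=200, threshold=0.25):
    # The reader sends random challenges a; the tag answers <a, x> XOR a
    # Bernoulli(eta) noise bit. The reader accepts if the observed error
    # rate stays below the threshold. Recovering x from many (a, z) pairs
    # is exactly the Learning Parity with Noise (LPN) problem.
    errors = 0
    for _ in range(rounds):
        a = [random.randint(0, 1) for _ in reader_secret]
        noise = 1 if random.random() < eta else 0
        z = dot(a, tag_secret) ^ noise
        errors += dot(a, reader_secret) != z
    return errors / rounds <= threshold
```

A tag holding the right secret is accepted with overwhelming probability, while a wrong secret produces a roughly 50% error rate and is rejected.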

Mohan, Manisha, Sra, Misha, Schmandt, Chris.  2017.  Technological Interventions to Detect, Communicate and Deter Sexual Assault. Proceedings of the 2017 ACM International Symposium on Wearable Computers. :126–129.
Every 98 seconds an American is sexually assaulted. Our work explores the use of on-body sensors to detect, communicate and prevent sexual assault. We present a stick-on clothing sensor that responds to initial signs of sexual assault, such as disrobing, in order to deter abuse. The smart clothing operates in two modes: an active mode for instances when the victim is unconscious, and a passive mode in which the victim can self-actuate the safety mechanism. Both modes alert the victim's friends and family, actuate an auditory alarm, activate odor-emitting capsules to create an immediate repulsion effect, and call emergency services. Our design is based on input from sexual assault survivors and college students who evaluated the clothing for aesthetic appeal, functionality, cultural sensitivity and their sense of personal safety. We show the practicality of our unobtrusive design with two user studies, demonstrating that our techno-social approach can help improve user safety and prevent sexual assault.
Rubio-Medrano, Carlos E., Lamp, Josephine, Doupé, Adam, Zhao, Ziming, Ahn, Gail-Joon.  2017.  Mutated Policies: Towards Proactive Attribute-based Defenses for Access Control. Proceedings of the 2017 Workshop on Moving Target Defense. :39–49.
Recently, both academia and industry have recognized the need to leverage real-time information for the purposes of specifying, enforcing and maintaining rich and flexible authorization policies. In this context, security-related properties, a.k.a. attributes, have been recognized as a convenient abstraction for providing a well-defined representation of such information, allowing them to be created and exchanged by different independently-run organizational domains for authorization purposes. However, attackers may attempt to compromise the way attributes are generated and communicated by resorting to hacking techniques, e.g., forgery, in an effort to bypass authorization policies and their corresponding enforcement mechanisms and, as a result, gain unintended access to sensitive resources. In this paper, we propose a novel technique that allows enterprises to proactively collect attributes from the different entities involved in the access request process, e.g., users, subjects, protected resources, and running environments. After the collection, we carefully select the attributes that uniquely identify the aforementioned entities, and randomly mutate the original access policies over time by adding policy rules constructed from the newly-identified attributes. This way, even when attackers are able to compromise the original attributes, our mutated policies may offer an additional layer of protection to deter ongoing and future attacks. We present the rationale and experimental results supporting our proposal, which provide evidence of its suitability for deployment in practice.
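The mutation step, appending randomly chosen attribute checks to the original rules, could look roughly like this (a hypothetical sketch; the attribute names and helper functions are ours, not the paper's system):

```python
import random

def mutate_policy(base_rules, candidate_attributes, k=2, rng=None):
    # Append k extra (attribute, expected_value) checks drawn from freshly
    # collected entity attributes. Even if an attacker forges the original
    # attributes, the mutated rules demand values they likely cannot forge.
    rng = rng or random.Random()
    extra = rng.sample(sorted(candidate_attributes.items()), k)
    return base_rules + [(attr, value) for attr, value in extra]

def is_authorized(request_attrs, rules):
    # Grant access only if the request satisfies every rule,
    # original and mutated alike.
    return all(request_attrs.get(attr) == value for attr, value in rules)
```

Re-running `mutate_policy` periodically with a fresh attribute pool is the moving-target element: the extra checks change over time, so a forged attribute set goes stale.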
Veloudis, Simeon, Paraskakis, Iraklis, Petsos, Christos.  2017.  An Ontological Framework for Determining the Repercussions of Retirement Actions Targeted at Complex Access Control Policies in Cloud Environments. Companion Proceedings of the 10th International Conference on Utility and Cloud Computing. :21–28.
By migrating their data and operations to the cloud, enterprises are able to gain significant benefits in terms of cost savings, increased availability, agility and productivity. Yet, the shared and on-demand nature of the cloud paradigm introduces a new breed of security threats that generally deter stakeholders from relinquishing control of their critical assets to third-party cloud providers. One way to thwart these threats is to instill suitable access control policies into cloud services that protect these assets. Nevertheless, the dynamic nature of cloud environments calls for policies that are able to incorporate a potentially complex body of contextual knowledge. This complexity is further amplified by the interplay that inevitably occurs between the different policies, as well as by the dynamically-evolving nature of an organisation's business and security needs. We argue that one way to tame this complexity is to devise a generic framework that facilitates the governance of policies. This paper presents a particular aspect of such a framework, namely an approach to determining the repercussions that policy retirement actions have on the overall protection of critical assets in the cloud.
Khazankin, G. R., Komarov, S., Kovalev, D., Barsegyan, A., Likhachev, A.  2017.  System architecture for deep packet inspection in high-speed networks. 2017 Siberian Symposium on Data Science and Engineering (SSDSE). :27–32.

To solve problems associated with real-time processing of large data volumes, heterogeneous systems using various computing devices are increasingly used. Solving this class of problems proceeds along two directions: the first is the development of algorithms and approaches to analysis, and the second is the development of hardware and software. This article reviews the main approaches to the architecture of a hardware-software solution for traffic capture and deep packet inspection (DPI) in data transmission networks with a bandwidth of 80 Gbit/s and higher. At the moment, software and hardware tools allow designing the architecture of a capture and deep packet inspection system in three ways: 1) using only the central processing unit (CPU); 2) using only the graphics processing unit (GPU); 3) using the central processing unit and graphics processing unit simultaneously (CPU + GPU). In this paper, we consider these key approaches. Attention is also paid to both hardware and software requirements for the solution architectures. Pain points and remedies are described.

Domínguez, A., Carballo, P. P., Núñez, A.  2017.  Programmable SoC platform for deep packet inspection using enhanced Boyer-Moore algorithm. 2017 12th International Symposium on Reconfigurable Communication-centric Systems-on-Chip (ReCoSoC). :1–8.

This paper describes the work done to design an SoC platform for real-time on-line pattern search in TCP packets for Deep Packet Inspection (DPI) applications. The platform is based on a Xilinx Zynq programmable SoC and includes an accelerator implementing a pattern search engine that extends the original Boyer-Moore algorithm with timing and logical rules, supporting a very complex set of rules. The platform also implements different modes of operation, including SIMD and MISD parallelism, which can be configured on-line. The platform scales with the analysis requirements up to 8 Gbps. High-level synthesis and platform-based design methodologies were used to reduce the time to market of the completed system.
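The original Boyer-Moore algorithm that the engine extends can be sketched in software with just the bad-character rule (an illustrative software baseline only; the paper's hardware accelerator adds timing and logical rules on top):

```python
def boyer_moore_search(text: str, pattern: str) -> int:
    # Bad-character-rule Boyer-Moore: return index of first match, or -1.
    if not pattern:
        return 0
    # Rightmost position of each character in the pattern.
    last = {ch: i for i, ch in enumerate(pattern)}
    m, n = len(pattern), len(text)
    s = 0
    while s <= n - m:
        j = m - 1
        # Compare right-to-left from the end of the pattern.
        while j >= 0 and pattern[j] == text[s + j]:
            j -= 1
        if j < 0:
            return s  # full match at shift s
        # Shift so the mismatched text char aligns with its rightmost
        # occurrence in the pattern (or skip past it entirely).
        s += max(1, j - last.get(text[s + j], -1))
    return -1
```

The bad-character shift lets the search skip up to the full pattern length on a mismatch, which is what makes Boyer-Moore attractive as a basis for multi-gigabit payload scanning.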

Özer, E., İskefiyeli, M.  2017.  Detection of DDoS attack via deep packet analysis in real time systems. 2017 International Conference on Computer Science and Engineering (UBMK). :1137–1140.

One of the biggest problems of today's Internet technologies is cyber attacks. In this paper, DDoS attacks are detected via deep packet inspection. Initially, packets are captured by listening to network traffic. Packet filtering is then applied for the desired number and type of packets. These packets are recorded in a database for analysis; daily values and average values are compared against known attack patterns to determine whether a DDoS attack is being attempted in real-time systems.
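The compare-against-baseline step described above can be sketched as a simple per-source rate check (our own minimal illustration; the tuple format, threshold factor, and function names are assumptions, not the paper's method):

```python
from collections import Counter

def flag_ddos_sources(packets, baseline_rate, window_seconds, factor=10):
    # Flag sources whose packet count in the capture window exceeds
    # `factor` times the historical baseline rate.
    # `packets` is an iterable of (src_ip, dst_ip) tuples.
    counts = Counter(src for src, _dst in packets)
    limit = baseline_rate * window_seconds * factor
    return {src for src, n in counts.items() if n > limit}
```

Real detectors compare richer daily profiles than a single rate, but the core decision, observed volume versus learned baseline, has this shape.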

Al-hisnawi, M., Ahmadi, M.  2017.  Deep packet inspection using Cuckoo filter. 2017 Annual Conference on New Trends in Information Communications Technology Applications (NTICT). :197–202.

Nowadays, Internet Service Providers (ISPs) depend on Deep Packet Inspection (DPI) approaches, which are the most precise techniques for traffic identification and classification. However, constructing high-performance DPI approaches demands a vigilant and in-depth computing system design because of the demands on memory and processing power. Membership query data structures, specifically the Bloom filter (BF), have been employed as a matching check tool in DPI approaches: signature fingerprints are stored in the filter in order to examine the presence of those signatures in the incoming network flow. The main issue that arises when employing a Bloom filter in DPI approaches is the need to compute k hash functions, which imposes extra calculation overhead that degrades performance. Consequently, this paper proposes a new design and implementation for a DPI approach that utilizes a membership query data structure called the Cuckoo filter (CF) as its matching check tool. CF has many advantages over BF, such as lower memory consumption, a lower false-positive rate, higher insert performance, higher lookup throughput, and support for the delete operation. The experiments conducted show that the proposed approach offers better performance than approaches that utilize a Bloom filter.
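A cuckoo filter of the kind the paper employs can be sketched in a few dozen lines: each item stores only a short fingerprint in one of two buckets, where the second bucket index is derived from the first by XOR with the fingerprint's hash, so lookup and delete touch at most two buckets (a toy illustration, not the paper's implementation):

```python
import hashlib
import random

def _h(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

class CuckooFilter:
    # Toy cuckoo filter: one-byte fingerprints, two candidate buckets per item.
    def __init__(self, num_buckets=1024, bucket_size=4, max_kicks=500):
        self.buckets = [[] for _ in range(num_buckets)]
        self.n = num_buckets  # power of two keeps xor-derived indexes consistent
        self.bucket_size = bucket_size
        self.max_kicks = max_kicks

    def _fingerprint(self, item: bytes) -> int:
        return (_h(b"fp" + item) % 255) + 1  # non-zero one-byte fingerprint

    def _indexes(self, item: bytes):
        fp = self._fingerprint(item)
        i1 = _h(item) % self.n
        i2 = (i1 ^ _h(bytes([fp]))) % self.n  # partial-key cuckoo hashing
        return fp, i1, i2

    def insert(self, item: bytes) -> bool:
        fp, i1, i2 = self._indexes(item)
        for i in (i1, i2):
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        i = random.choice((i1, i2))  # both full: evict a resident and relocate it
        for _ in range(self.max_kicks):
            j = random.randrange(len(self.buckets[i]))
            fp, self.buckets[i][j] = self.buckets[i][j], fp
            i = (i ^ _h(bytes([fp]))) % self.n  # evicted fingerprint's other bucket
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        return False  # filter is considered full

    def contains(self, item: bytes) -> bool:
        fp, i1, i2 = self._indexes(item)
        return fp in self.buckets[i1] or fp in self.buckets[i2]

    def delete(self, item: bytes) -> bool:
        fp, i1, i2 = self._indexes(item)
        for i in (i1, i2):
            if fp in self.buckets[i]:
                self.buckets[i].remove(fp)
                return True
        return False
```

Unlike a Bloom filter's k hash probes, a lookup here is two bucket reads, and deletion works because the stored fingerprint itself can be removed.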

Renukadevi, B., Raja, S. D. M.  2017.  Deep packet inspection Management application in SDN. 2017 2nd International Conference on Computing and Communications Technologies (ICCCT). :256–259.

The DPI Management application, which resides on the northbound side of the SDN architecture, analyzes application signature data from the network. The data being read and analyzed is in JSON format for effective data representation, and the flows provisioned from the northbound application are also in JSON format. The data analytics engine analyzes the data stored in a non-relational database and provides information about the real-time applications used by network users. It allows the operator to provision flows dynamically, based on data from the network, to allow/block flows and to boost bandwidth. The DPI Management application decouples the application from the controller, providing the facility to run it in any hypervisor within the network. It can publish SNMP trap notifications to network operators with application threshold and flow-provisioning behavior, and it purges obsolete analyzed data from the non-relational database at frequent intervals.

Nagendra, Vasudevan, Yegneswaran, Vinod, Porras, Phillip.  2017.  Securing Ultra-High-Bandwidth Science DMZ Networks with Coordinated Situational Awareness. Proceedings of the 16th ACM Workshop on Hot Topics in Networks. :22–28.

The Science DMZ (SDMZ) is a special purpose network infrastructure that is engineered to cater to the ultra-high bandwidth needs of the scientific and high performance computing (HPC) communities. These networks are isolated from stateful security devices such as firewalls and deep packet inspection (DPI) engines to allow HPC data transfer nodes (DTNs) to efficiently transfer petabytes of data without associated bandwidth and performance bottlenecks. This paper presents our ongoing effort toward the development of more fine-grained data flow access control policies to manage SDMZ networks that service large-scale experiments with varying data sensitivity levels and privacy constraints. We present a novel system, called CoordiNetZ (CNZ), that provides coordinated security monitoring and policy enforcement for sites participating in SDMZ projects by using an intent-based policy framework for effectively capturing the high-level policy intents of non-admin SDMZ project users (e.g., scientists, researchers, students). Central to our solution is the notion of coordinated situational awareness that is extracted from the synthesis of context derived from SDMZ host DTN applications and the network substrate. To realize this vision, we present a specialized process-monitoring system and flow-monitoring tool that facilitate context-aware data-flow intervention and policy enforcement in ultra-high-speed data transfer environments. We evaluate our prototype implementation using case studies that highlight the utility of our framework and demonstrate how security policy could be effectively specified and implemented within and across SDMZ networks.

Luinaud, Thomas, Savaria, Yvon, Langlois, J.M. Pierre.  2017.  An FPGA Coarse Grained Intermediate Fabric for Regular Expression Search. Proceedings of the Great Lakes Symposium on VLSI 2017. :423–426.

Deep Packet Inspection systems such as Snort and Bro express complex rules with regular expressions. In Snort, the search for a regular expression is performed with a Non-deterministic Finite Automaton (NFA). Traversing an NFA sequentially with a CPU is not deterministic in time and can be very time consuming. A fully parallel NFA implemented in hardware can search all rules, but most of the time only a small part is active. Furthermore, a string filter determines the traversal of an NFA. This paper proposes an FPGA Intermediate Fabric that can efficiently search regular expressions. The architecture is configured for a specific NFA based on a partial match of a rule found by the string filter. It can thus support all rules from a set such as Snort's, while significantly reducing compute resources and power consumption compared to a fully parallel implementation. Multiple parameters can be selected to find the best tradeoff between resource consumption and the number and types of supported expressions. This architecture was implemented on a Xilinx XC7VX1140 Virtex-7. The reported implementation can sustain up to 512 regular expressions, while requiring 2% of the slices and 16% of the BRAM resources, for a throughput of 200 million characters per second.
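The parallel state-set traversal that a hardware NFA performs in one clock per character can be mimicked in software (a didactic sketch; the transition table below encodes a substring search for "ab+c" over the alphabet {a, b, c} and is our own example, not a Snort rule):

```python
def nfa_accepts(transitions, start, accepting, text):
    # Track the set of active states and advance all of them on each
    # character, which is what fully parallel hardware does per clock cycle.
    states = {start}
    for ch in text:
        states = {t for s in states for t in transitions.get((s, ch), ())}
        if not states:
            return False
    return bool(states & accepting)

# Example NFA: substring search for a b+ c; state 3 is absorbing/accepting.
T = {
    (0, "a"): {0, 1}, (0, "b"): {0}, (0, "c"): {0},
    (1, "b"): {2},
    (2, "b"): {2}, (2, "c"): {3},
    (3, "a"): {3}, (3, "b"): {3}, (3, "c"): {3},
}
```

Sequential CPU traversal must visit each active state per character, so runtime varies with the input, while hardware evaluates every state at once at the cost of area even when few states are active; this is the tradeoff the intermediate fabric targets.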

Canard, Sébastien, Diop, Aïda, Kheir, Nizar, Paindavoine, Marie, Sabt, Mohamed.  2017.  BlindIDS: Market-Compliant and Privacy-Friendly Intrusion Detection System over Encrypted Traffic. Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. :561–574.

The goal of network intrusion detection is to inspect network traffic in order to identify threats and known attack patterns. One of its key features is Deep Packet Inspection (DPI), which extracts the content of network packets and compares it against a set of detection signatures. While DPI is commonly used to protect networks and information systems, it requires direct access to the traffic content, which leaves it blind to encrypted network protocols such as HTTPS. So far, a difficult choice had to be made between the privacy of network users and security through the inspection of their traffic content to detect attacks or malicious activities. This paper presents a novel approach that bridges the gap between network security and privacy. It makes it possible to perform DPI directly on encrypted traffic, without knowing either the traffic content or the patterns of the detection signatures. The relevance of our work is that it preserves the delicate balance in the security market ecosystem. Indeed, security editors will be able to protect their distinctive detection signatures and supply service providers only with encrypted attack patterns. In addition, service providers will be able to integrate the encrypted signatures in their architectures and perform DPI without compromising the privacy of network communications. Finally, users will be able to preserve their privacy through traffic encryption, while also benefiting from network security services. The extensive experiments conducted in this paper show that, compared to existing encryption schemes, our solution reduces the connection setup time for new users by 3 orders of magnitude, and the memory consumed on the DPI appliance by 6 orders of magnitude.

Han, Juhyeng, Kim, Seongmin, Ha, Jaehyeong, Han, Dongsu.  2017.  SGX-Box: Enabling Visibility on Encrypted Traffic Using a Secure Middlebox Module. Proceedings of the First Asia-Pacific Workshop on Networking. :99–105.

A network middlebox benefits both users and network operators by offering a wide range of security-related in-network functions, such as web firewalls and intrusion detection systems (IDS). However, the wide use of encryption protocols restricts the functionality of network middleboxes, forcing network operators and users to choose between end-to-end privacy and security. This paper presents SGX-Box, a secure middlebox system that enables visibility into encrypted traffic by leveraging Intel SGX technology. The entire SGX-Box process ensures that sensitive information, such as decrypted payloads and session keys, is securely protected within the SGX enclave. SGX-Box provides an easy-to-use abstraction and a high-level programming language, called SB lang, for handling encrypted traffic in middleboxes. It greatly enhances programmability by hiding the details of the cryptographic operations and of the SGX enclave processing. We implement a proof-of-concept IDS using SB lang. Our preliminary evaluation shows that SGX-Box incurs acceptable performance overhead while dramatically reducing middlebox developers' effort.

Miura, Ryosuke, Takano, Yuuki, Miwa, Shinsuke, Inoue, Tomoya.  2017.  GINTATE: Scalable and Extensible Deep Packet Inspection System for Encrypted Network Traffic: Session Resumption in Transport Layer Security Communication Considered Harmful to DPI. Proceedings of the Eighth International Symposium on Information and Communication Technology. :234–241.
Deep packet inspection (DPI) is a basic monitoring technology that realizes network traffic control based on the application payload. The technology is used in threat prevention (e.g., intrusion detection systems, firewalls) and information extraction (e.g., content filtering systems). Moreover, transport layer security (TLS) monitoring is required because of the increasing use of the TLS protocol, particularly by hypertext transfer protocol secure (HTTPS). TLS monitoring differs from TCP monitoring in two respects. First, monitoring systems cannot inspect the content of TLS communication, which is encrypted. Second, TLS communication is a session unit composed of one or more TCP connections. In enterprise networks, dedicated TLS proxies are deployed to perform TLS monitoring. However, the proxies cannot be used when monitored devices are unable to use a custom certificate. Additionally, these networks pose problems of scale and complexity that affect the monitoring. Therefore, DPI processing by another method requires both high-speed processing and various protocol analyses across TCP connections in TLS monitoring, and it is difficult to realize both simultaneously. We propose GINTATE, which decrypts TLS communication using shared keys and monitors the results. GINTATE is a scalable architecture that uses distributed computing and considers relational sessions across multiple TCP connections in TLS communication. Additionally, GINTATE achieves DPI processing by adding an extensible analysis module. By comparing GINTATE against other systems, we show that it can perform DPI processing by managing relational sessions via distributed computing and that it is scalable.
Armknecht, Frederik, Boyd, Colin, Davies, Gareth T., Gjøsteen, Kristian, Toorani, Mohsen.  2017.  Side Channels in Deduplication: Trade-offs Between Leakage and Efficiency. Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. :266–274.
Deduplication removes redundant copies of files or data blocks stored on the cloud. Client-side deduplication, where the client only uploads the file upon the request of the server, provides major storage and bandwidth savings, but introduces a number of security concerns. Harnik et al. (2010) showed how cross-user client-side deduplication inherently gives the adversary access to a (noisy) side-channel that may divulge whether or not a particular file is stored on the server, leading to leakage of user information. We provide formal definitions for deduplication strategies and their security in terms of adversarial advantage. Using these definitions, we provide a criterion for designing good strategies and then prove a bound characterizing the necessary trade-off between security and efficiency.
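The side channel and its trade-off can be sketched with a randomized-threshold deduplication strategy in the spirit of the countermeasure Harnik et al. discuss (an illustrative toy model with our own naming, not the paper's formal definitions):

```python
import random

class DedupServer:
    # Client-side dedup with a per-file random upload threshold: the first
    # t uploads of a file still transfer its bytes, so observing "no upload
    # needed" no longer proves that someone else already stored the file.
    def __init__(self, rng=None):
        self.rng = rng or random.Random()
        self.copies = {}      # file_hash -> uploads seen so far
        self.thresholds = {}  # file_hash -> random threshold in [1, 4]

    def upload_needed(self, file_hash) -> bool:
        # True means the client must actually send the file bytes this time.
        t = self.thresholds.setdefault(file_hash, self.rng.randint(1, 4))
        count = self.copies.get(file_hash, 0)
        self.copies[file_hash] = count + 1
        return count < t
```

A deterministic strategy (t = 1) maximizes bandwidth savings but leaks file existence on the very first probe; raising or randomizing t trades efficiency for leakage, which is the trade-off the paper bounds formally.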
Doan, Khue, Quang, Minh Nguyen, Le, Bac.  2017.  Applied Cuckoo Algorithm for Association Rule Hiding Problem. Proceedings of the Eighth International Symposium on Information and Communication Technology. :26–33.
Nowadays, the database security problem is becoming significantly interesting in the data mining field: how can one exploit legitimate data while avoiding the disclosure of sensitive information? There have been many approaches, among which the outstanding one is privacy preservation in association rule mining to hide sensitive rules. In recent years, a meta-heuristic algorithm has become effective for this goal, namely the cuckoo optimization algorithm for association rule hiding (COA4ARH). This paper introduces an improvement of COA4ARH that minimizes the side effect of missing non-sensitive rules. The main contribution of this study is a new pre-processing stage that determines the minimum number of transactions necessary for initializing the initial habitat, thus restricting modification operations on the original data. To evaluate the effectiveness of the proposed method, we conducted several experiments on real datasets. The experimental results show that the improved approach has higher performance compared to the original algorithm.
Nagaratna, M., Sowmya, Y.  2017.  M-sanit: Computing misusability score and effective sanitization of big data using Amazon elastic MapReduce. 2017 International Conference on Computation of Power, Energy, Information and Communication (ICCPEIC). :29–35.
The advent of distributed programming frameworks like Hadoop paved the way for processing voluminous data known as big data. Due to the exponential growth of data, enterprises have started to exploit the availability of cloud infrastructure for storing and processing big data. Insider attacks on outsourced data cause leakage of sensitive data. Therefore, it is essential to sanitize data so as to preserve privacy, i.e., non-disclosure of sensitive data. Privacy Preserving Data Publishing (PPDP) and Privacy Preserving Data Mining (PPDM) are areas in which data sanitization plays a vital role in preserving privacy. The existing anonymization techniques for MapReduce programming can be improved with a misusability measure for determining the level of sanitization to be applied to big data. To overcome this limitation, we propose a framework known as M-Sanit, which has mechanisms to exploit the misusability score of big data prior to performing sanitization using the MapReduce programming paradigm. Our empirical study using a real-world cloud ecosystem, Amazon Elastic Compute Cloud (EC2) and Amazon Elastic MapReduce (EMR), reveals the effectiveness of misusability-score-based sanitization of big data prior to publishing or mining it.
Yousef, K. M. A., AlMajali, A., Hasan, R., Dweik, W., Mohd, B.  2017.  Security risk assessment of the PeopleBot mobile robot research platform. 2017 International Conference on Electrical and Computing Technologies and Applications (ICECTA). :1–5.

Nowadays, robots are ubiquitous and an integral part of our daily lives; they can be seen almost everywhere in industry, hospitals, the military, etc. To provide remote access and control, robots are usually connected to a local network or to the Internet through WiFi or Ethernet. As such, it is critically important to maintain the safety and security of access to such robots. Security threats may result in completely preventing access to and control of the robot. The consequences may be catastrophic and may cause immediate physical damage to the robot. This paper presents a security risk assessment of the well-known PeopleBot, a mobile robot platform from Adept MobileRobots. Initially, we thoroughly examined security threats related to remote access of the PeopleBot robot. We conducted an impact-oriented analysis of the wireless communication medium, the main method for remotely accessing the PeopleBot robot. Numerous experiments using SSH and server-client applications were conducted; they demonstrated that certain attacks result in denying remote access service to the PeopleBot robot. Consequently, and dangerously, the robot becomes unavailable. Finally, we suggest one possible mitigation and provide useful conclusions to raise awareness of possible security threats to robotic systems, especially when robots are involved in critical missions or applications.

Houchouas, V., Esteves, J. L., Cottais, E., Kasmi, C., Armstrong, K.  2017.  Immunity assessment of a servomotor exposed to an intentional train of RF pulses. 2017 International Symposium on Electromagnetic Compatibility - EMC EUROPE. :1–5.

Conducted emission of motors is a domain of interest for EMC, as it may introduce disturbances into the systems in which the motors are integrated. Nevertheless, few publications deal with the susceptibility of motors, and especially servomotors, even though these devices are increasingly used in automated production lines as well as in robotics. Recent papers have addressed the possibility of compromising such systems through cyber-attacks. One could imagine the use of smart intentional electromagnetic interference to modify their behavior or damage them, leading to modification of the industrial process. This paper aims to identify the disturbances that may affect the behavior of a Commercial Off-The-Shelf servomotor exposed to an electromagnetic field, and the criticality of the effects with regard to its application. Experiments have shown that a train of radio frequency pulses may induce an erroneous reading of the servomotor's position value and modify the movement of the motor's axis in an unpredictable way.

Weintraub, E.  2017.  Estimating Target Distribution in security assessment models. 2017 IEEE 2nd International Verification and Security Workshop (IVSW). :82–87.

Organizations are exposed to various cyber-attacks. When a component is exploited, the overall computed damage is impacted by the number of components the network includes. This work focuses on estimating the Target Distribution characteristic of an attacked network. According to existing security assessment models, Target Distribution is assessed using ordinal values based on users' intuitive knowledge. This work aims at defining a formula that enables measuring the attacked components' distribution quantitatively. The proposed formula is based on the real-time configuration of the system. Using the proposed measure, firms can quantify damages, allocate appropriate budgets to actual real risks, and build their configurations while taking into consideration the risks impacted by the components' distribution. The formula is demonstrated as part of a security continuous monitoring system.
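The abstract does not give Weintraub's formula, so we do not reproduce it; purely to illustrate what a quantitative distribution measure could look like, here is a hypothetical normalized-entropy score over network segments (our invention, clearly not the paper's formula):

```python
import math

def target_distribution(attacked_per_segment: dict) -> float:
    # Hypothetical stand-in for a quantitative Target Distribution score:
    # normalized Shannon entropy of how attacked components spread across
    # network segments. 0.0 = all attacks hit one segment,
    # 1.0 = attacks spread evenly across the affected segments.
    counts = [c for c in attacked_per_segment.values() if c > 0]
    if len(counts) <= 1:
        return 0.0
    total = sum(counts)
    entropy = -sum((c / total) * math.log2(c / total) for c in counts)
    return entropy / math.log2(len(counts))
```

Any real-time configuration feed could supply `attacked_per_segment`, which is the spirit of basing the measure on the system's actual configuration rather than on ordinal intuition.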

Jillepalli, A. A., Sheldon, F. T., Leon, D. C. de, Haney, M., Abercrombie, R. K.  2017.  Security management of cyber physical control systems using NIST SP 800-82r2. 2017 13th International Wireless Communications and Mobile Computing Conference (IWCMC). :1864–1870.

Cyber-attacks and intrusions in cyber-physical control systems are, currently, difficult to reliably prevent. Knowing a system's vulnerabilities and implementing static mitigations is not enough, since threats are advancing faster than the pace at which static cyber solutions can counteract. Accordingly, the practice of cybersecurity needs to ensure that intrusion and compromise do not result in system or environment damage or loss. In a previous paper [2], we described the Cyberspace Security Econometrics System (CSES), which is a stakeholder-aware and economics-based risk assessment method for cybersecurity. CSES allows an analyst to assess a system in terms of estimated loss resulting from security breakdowns. In this paper, we describe two new related contributions: 1) We map the Cyberspace Security Econometrics System (CSES) method to the evaluation and mitigation steps described by the NIST Guide to Industrial Control Systems (ICS) Security, Special Publication 800-82r2. Hence, presenting an economics-based and stakeholder-aware risk evaluation method for the implementation of the NIST-SP-800-82 guide; and 2) We describe the application of this tailored method through the use of a fictitious example of a critical infrastructure system of an electric and gas utility.

Damodaran, Suresh K., Mittal, Saurabh.  2017.  Controlled Environments for Cyber Risk Assessment of Cyber-physical Systems. Proceedings of the Summer Simulation Multi-Conference. :3:1–3:12.

Cyber risk assessment of a Cyber-Physical System (CPS) without damaging it and without contaminating it with malware is an important and hard problem. Previous work developed a solution to this problem using a control component for simulating cyber effects in a CPS model to mimic a cyber attack. This paper extends the previous work by presenting an algorithm for semi-automated insertion of control components into a CPS model based on Discrete Event Systems (DEVS) formalism. We also describe how to use this algorithm to insert a control component into Live, Virtual, Constructive (LVC) environments that may have non-DEVS models, thereby extending our solution to other systems in general.

Lakhdhar, Yosra, Rekhis, Slim, Boudriga, Noureddine.  2017.  Proactive Damage Assessment of Cyber Attacks Using Mobile Observer Agents. Proceedings of the 15th International Conference on Advances in Mobile Computing & Multimedia. :29–38.

One of the most critical challenges facing cyber defense nowadays is the complexity of recently released cyber-attacks, which are capable of disrupting critical industries and jeopardizing national economies. In this context, moving beyond common security approaches to make it possible to neutralize and react to security attacks at their early stages becomes a requirement. In this paper we develop a formal model for the proactive assessment of security damages. We define a network of observer agents capable of observing incomplete information about attacks and affected cyber systems, and of generating security observations useful for identifying ongoing attack scenarios and their future evolution. A set of analytics is developed for the generation and management of scenario contexts as a set of measures useful for the proactive assessment of future damages and the launching of countermeasures. A case study is provided to exemplify the proposal.