Biblio

Found 12044 results

Filters: Keyword is Resiliency
2018-02-06
Iqbal, H., Ma, J., Mu, Q., Ramaswamy, V., Raymond, G., Vivanco, D., Zuena, J..  2017.  Augmenting Security of Internet-of-Things Using Programmable Network-Centric Approaches: A Position Paper. 2017 26th International Conference on Computer Communication and Networks (ICCCN). :1–6.

Advances in nanotechnology, large scale computing and communications infrastructure, coupled with recent progress in big data analytics, have enabled linking several billion devices to the Internet. These devices provide unprecedented automation, cognitive capabilities, and situational awareness. This new ecosystem, termed the Internet-of-Things (IoT), also provides many entry points into the network through the gadgets that connect to the Internet, making security of IoT systems a complex problem. In this position paper, we argue that in order to build a safer IoT system, we need a radically new approach to security. We propose a new security framework that draws ideas from software defined networks (SDN) and data analytics techniques; this framework provides dynamic policy enforcement on every layer of the protocol stack and can adapt quickly to a diverse set of industry use-cases that IoT deployments cater to. Our proposal does not make any assumptions on the capabilities of the devices - it can work with already deployed as well as new types of devices, while also conforming to a service-centric architecture. Even though our focus is on industrial IoT systems, the ideas presented here are applicable to IoT used in a wide array of applications. The goal of this position paper is to initiate a dialogue among standardization bodies and security experts to help raise awareness about network-centric approaches to IoT security.

Shepherd, L. A., Archibald, J..  2017.  Security Awareness and Affective Feedback: Categorical Behaviour vs. Reported Behaviour. 2017 International Conference On Cyber Situational Awareness, Data Analytics And Assessment (Cyber SA). :1–6.

A lack of awareness surrounding secure online behaviour can leave end-users and their personal details vulnerable to compromise. This paper describes an ongoing research project in the field of usable security, examining the relationship between end-user security behaviour and the use of affective feedback to educate end-users. Part of the aforementioned research project considers the link between the categorical information users reveal about themselves online and the information users believe, or report, that they have revealed online. The experimental results confirm a disparity between the information revealed and what users think they have revealed, highlighting a deficit in security awareness. Results gained in relation to the affective feedback delivered are mixed, indicating limited short-term impact. Future work seeks to perform a long-term study, with the view that positive behavioural changes may be reflected in the results as end-users become more knowledgeable about security awareness.

Nojoumian, M., Golchubian, A., Saputro, N., Akkaya, K..  2017.  Preventing Collusion between SDN Defenders and Attackers Using a Game Theoretical Approach. 2017 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :802–807.

In this paper, a game-theoretical solution concept is utilized to tackle the collusion attack in an SDN-based framework. In our proposed setting, the defenders (i.e., switches) are incentivized not to collude with the attackers in a repeated-game setting that utilizes a reputation system. We first illustrate our model and its components. We then use a socio-rational approach to provide a new anti-collusion solution that shows cooperation with the SDN controller is always a Nash equilibrium due to the existence of a long-term utility function in our model.

Andrea, K., Gumusalan, A., Simon, R., Harney, H..  2017.  The Design and Implementation of a Multicast Address Moving Target Defensive System for Internet-of-Things Applications. MILCOM 2017 - 2017 IEEE Military Communications Conference (MILCOM). :531–538.

Distributed Denial of Service (DDoS) attacks serve to diminish the ability of the network to perform its intended function over time. The paper presents the design, implementation and analysis of a protocol based upon a technique for address agility called DDoS Resistant Multicast (DRM). After describing our architecture and implementation, we present an analysis that quantifies the overhead on network performance. We then present the Simple Agile RPL multiCAST (SARCAST), an Internet-of-Things routing protocol for DDoS protection. We have implemented and evaluated SARCAST in a working IoT operating system and testbed. Our results show that SARCAST provides very high levels of protection against DDoS attacks with virtually no impact on overall performance.

Lin, P. C., Li, P. C., Nguyen, V. L..  2017.  Inferring OpenFlow Rules by Active Probing in Software-Defined Networks. 2017 19th International Conference on Advanced Communication Technology (ICACT). :415–420.

Software-defined networking (SDN) separates the control plane from underlying devices, and allows it to control the data plane from a global view. While SDN brings conveniences to management, it also introduces new security threats. Knowing the reactive rules, attackers can launch denial-of-service (DoS) attacks by sending numerous rule-matched packets which trigger packet-in packets to overburden the controller. In this work, we present a novel method, "INferring SDN by Probing and Rule Extraction" (INSPIRE), to discover the flow rules in SDN from probing packets. We evaluate the delay time from probing packets, classify them into defined classes, and infer the rules. This method involves three relevant steps: probing, clustering and rule inference. First, forged packets with various header fields are sent to measure processing and propagation time along the path. Second, the packets are classified into multiple classes using k-means clustering based on packet delay time. Finally, the Apriori algorithm finds common header fields in the classes to infer the rules. We show how INSPIRE is able to infer flow rules via simulation, and the accuracy of inference can be up to 98.41% with very low false-positive rates.
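
The three-step pipeline described above (probe, cluster by delay, mine common header fields) can be illustrated with a minimal sketch. The probe records, field names, and two-cluster assumption below are hypothetical stand-ins for real measurements; this is not the INSPIRE implementation.

# Minimal sketch of the probe / cluster / infer pipeline; probe records are invented.
import numpy as np
from sklearn.cluster import KMeans

probes = [
    {"dst_port": 80,  "proto": "tcp", "dst_ip": "10.0.0.5", "delay_ms": 0.9},
    {"dst_port": 80,  "proto": "tcp", "dst_ip": "10.0.0.6", "delay_ms": 1.1},
    {"dst_port": 22,  "proto": "tcp", "dst_ip": "10.0.0.5", "delay_ms": 4.8},
    {"dst_port": 443, "proto": "tcp", "dst_ip": "10.0.0.7", "delay_ms": 5.2},
]

# Step 2: cluster probes by delay (fast path = rule already installed, slow path = packet-in).
delays = np.array([[p["delay_ms"]] for p in probes])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(delays)

# Step 3: for each delay class, keep header fields shared by every probe in it
# (a simple stand-in for Apriori-style frequent-field mining).
for cluster in sorted(set(labels)):
    members = [p for p, lab in zip(probes, labels) if lab == cluster]
    common = {k: v for k, v in members[0].items()
              if k != "delay_ms" and all(m[k] == v for m in members)}
    print(f"delay class {cluster}: inferred rule fields {common}")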

Scheitle, Q., Gasser, O., Rouhi, M., Carle, G..  2017.  Large-Scale Classification of IPv6-IPv4 Siblings with Variable Clock Skew. 2017 Network Traffic Measurement and Analysis Conference (TMA). :1–9.

Linking the growing IPv6 deployment to existing IPv4 addresses is an interesting field of research, be it for network forensics, structural analysis, or reconnaissance. In this work, we focus on classifying pairs of server IPv6 and IPv4 addresses as siblings, i.e., running on the same machine. Our methodology leverages active measurements of TCP timestamps and other network characteristics, which we measure against a diverse ground truth of 682 hosts. We define and extract a set of features, including estimation of variable (opposed to constant) remote clock skew. On these features, we train a manually crafted algorithm as well as a machine-learned decision tree. By conducting several measurement runs and training in cross-validation rounds, we aim to create models that generalize well and do not overfit our training data. We find both models to exceed 99% precision in train and test performance. We validate scalability by classifying 149k siblings in a large-scale measurement of 371k sibling candidates. We argue that this methodology, thoroughly cross-validated and likely to generalize well, can aid comparative studies of IPv6 and IPv4 behavior in the Internet. Striving for applicability and replicability, we release ready-to-use source code and raw data from our study.
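
For intuition on the clock-skew feature, a remote host's skew can be estimated as the slope of a linear fit of its TCP timestamp offsets against local time, and the skews measured over IPv4 and IPv6 can then feed a decision tree. The sketch below uses synthetic timestamps and an invented three-feature vector; it is not the paper's feature set or training data.

# Sketch: estimate clock skew as the slope of remote-timestamp offset over local time,
# then classify a candidate pair with a decision tree. Values are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def clock_skew(local_t, remote_ts, hz=1000.0):
    """Drift of the remote clock: slope of (remote time - local time) vs. local time."""
    offset = np.asarray(remote_ts) / hz - np.asarray(local_t)
    slope, _ = np.polyfit(local_t, offset, 1)
    return slope

t = np.linspace(0, 600, 50)                      # local probe times (s)
skew4 = clock_skew(t, (t + 1.2e-4 * t) * 1000)   # measured via the IPv4 address
skew6 = clock_skew(t, (t + 1.3e-4 * t) * 1000)   # measured via the IPv6 address

# Toy training set: similar skews -> sibling (1), dissimilar skews -> non-sibling (0).
X_train = [[1.2e-4, 1.3e-4, 1.0e-5], [1.2e-4, 9.0e-4, 7.8e-4]]
y_train = [1, 0]
clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print(clf.predict([[skew4, skew6, abs(skew4 - skew6)]]))  # expected: sibling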

Li, X., Smith, J. D., Thai, M. T..  2017.  Adaptive Reconnaissance Attacks with Near-Optimal Parallel Batching. 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS). :699–709.

In assessing privacy on online social networks, it is important to investigate their vulnerability to reconnaissance strategies, in which attackers lure targets into being their friends by exploiting the social graph in order to extract victims' sensitive information. As the network topology is only partially revealed after each successful friend request, attackers need to employ an adaptive strategy. Existing work only considered a simple strategy in which attackers sequentially acquire one friend at a time, which causes tremendous delay in waiting for responses before sending the next request, and which lacks the ability to retry failed requests after the network has changed. In contrast, we investigate an adaptive and parallel strategy, in which attackers can simultaneously send multiple friend requests in batch and recover from failed requests by retrying after topology changes, thereby significantly reducing the time to reach the targets and greatly improving robustness. We cast this approach as an optimization problem, Max-Crawling, and show it inapproximable within (1 - 1/e + ε). We first design our core algorithm PM-AReST, which has an approximation ratio of (1 - e^(-(1-1/e))) using adaptive monotonic submodular properties. We next tighten our algorithm to provide a near-optimal solution, i.e. having a ratio of (1 - 1/e), via a two-stage stochastic programming approach. We further establish the gap bound of (1 - e^(-(1-1/e)^2)) between batch strategies versus the optimal sequential one. We experimentally validate our theoretical results, finding that our algorithm performs near-optimally in practice and that this is robust under a variety of problem settings.
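
For a sense of scale, the bounds quoted above evaluate numerically as follows (plain arithmetic on the stated expressions; the grouping of the exponent in the gap bound is an assumption).

# Numeric values of the approximation bounds quoted in the abstract.
import math

e = math.e
print(1 - 1 / e)                        # ~0.632  near-optimal ratio of the tightened algorithm
print(1 - math.exp(-(1 - 1 / e)))       # ~0.469  approximation ratio of PM-AReST
print(1 - math.exp(-(1 - 1 / e) ** 2))  # ~0.329  batch-vs-sequential gap bound (assumed grouping)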

Sun, J., Sun, K., Li, Q..  2017.  CyberMoat: Camouflaging Critical Server Infrastructures with Large Scale Decoy Farms. 2017 IEEE Conference on Communications and Network Security (CNS). :1–9.

Traditional deception-based cyber defenses often undertake reactive strategies that utilize decoy systems or services for attack detection and information gathering. Unfortunately, the effectiveness of these defense mechanisms has been largely constrained by the low decoy fidelity, the poor scalability of the decoy platform, and the static decoy configurations, which allow the attackers to identify and bypass the deployed decoys. In this paper, we develop a decoy-enhanced defense framework that can proactively protect critical servers against targeted remote attacks through deception. To achieve both high fidelity and good scalability, our system follows a hybrid architecture that separates lightweight yet versatile front-end proxies from back-end high-fidelity decoy servers. Moreover, our system can further invalidate the attackers' reconnaissance through dynamic proxy address shuffling. To guarantee service availability, we develop a transparent connection translation strategy to maintain existing connections during shuffling. Our evaluation on a prototype implementation demonstrates the effectiveness of our approach in defeating attacker reconnaissance and shows that it only introduces small performance overhead.

Brust, M. R., Zurad, M., Hentges, L., Gomes, L., Danoy, G., Bouvry, P..  2017.  Target Tracking Optimization of UAV Swarms Based on Dual-Pheromone Clustering. 2017 3rd IEEE International Conference on Cybernetics (CYBCONF). :1–8.

Unmanned Aerial Vehicles (UAVs) are autonomous aircraft that, when equipped with wireless communication interfaces, can share data among themselves when in communication range. Compared to single UAVs, using multiple UAVs as a collaborative swarm is considerably more effective for target tracking, reconnaissance, and surveillance missions because of their capacity to tackle complex problems synergistically. Success rates in target detection and tracking depend on map coverage performance, which in turn relies on network connectivity between UAVs to propagate surveillance results to avoid revisiting already observed areas. In this paper, we consider the problem of optimizing three objectives for a swarm of UAVs: (a) target detection and tracking, (b) map coverage, and (c) network connectivity. Our approach, Dual-Pheromone Clustering Hybrid Approach (DPCHA), incorporates a multi-hop clustering and a dual-pheromone ant-colony model to optimize these three objectives. Clustering keeps stable overlay networks, while attractive and repulsive pheromones mark areas of detected targets and visited areas. Additionally, DPCHA introduces a disappearing target model for dealing with temporarily invisible targets. Extensive simulations show that DPCHA produces significant improvements in the assessment of coverage fairness, cluster stability, and connection volatility. We compared our approach with a pure dual-pheromone approach and a no-base model, which removes the base station from the model. Results show an approximately 50% improvement in map coverage compared to the pure dual-pheromone approach.

Egi, Y., Otero, C., Ridley, M., Eyceyurt, E..  2017.  An Efficient Architecture for Modeling Path Loss on Forest Canopy Using LiDAR and Wireless Sensor Networks Fusion. European Wireless 2017; 23rd European Wireless Conference. :1–6.

Wireless Sensor Networks (WSNs) provide the means for efficient intelligence, surveillance, and reconnaissance (ISR) applications. However, deploying such networks in irregular terrains can be time-consuming, error-prone, and in most cases, result in unpredictable performance. For example, when WSNs are deployed in forests or terrains with vegetation, measuring campaigns (using trial/error) are required to determine the path loss driving node positioning to ensure network connectivity. This paper proposes an architecture for planning optimal deployments of WSNs. Specifically, it proposes the use of airborne Light Detection and Ranging (LiDAR), an Inertial Measurement Unit (IMU), a Global Positioning System (GPS), and a Stereo Camera (SC) to detect forest characteristics with real-time mapping, which reduces the need for trial campaigns; thus, minimizing costs, time, and complexity. The proposed approach expands the state-of-the-art to optimize the performance of WSNs upon deployment.

Komulainen, A., Nilsson, J., Sterner, U..  2017.  Effects of Topology Information on Routing in Contention-Based Underwater Acoustic Networks. OCEANS 2017 - Aberdeen. :1–7.

Underwater acoustic networks are an enabling technology for a range of applications such as mine countermeasures, intelligence and reconnaissance. Common to these applications is a need for robust information distribution while minimizing energy consumption. In terrestrial wireless networks, topology information is often used to enhance the efficiency of routing, in terms of higher capacity and less overhead. In this paper we assess the effects of topology information on routing in underwater acoustic networks. More specifically, the interplay between long propagation delays, contention-based channel access and dissemination of varying degrees of topology information is investigated. The study is based on network simulations of a number of network protocols that make use of varying amounts of topology information. The results indicate that, in the considered scenario, relying on local topology information to reduce retransmissions may have adverse effects on reliability. The difficult channel conditions and the contention-based channel access methods create a need for an increased amount of diversity, i.e., more retransmissions. In the scenario considered, an opportunistic flooding approach is better, both in terms of robustness and energy consumption.

Xiong, X., Yang, L..  2017.  Multi End-Hopping Modeling and Optimization Using Cooperative Game. 2017 4th International Conference on Information Science and Control Engineering (ICISCE). :470–474.

End-hopping, which randomly hops a host's network configuration, is an effective component of Moving Target Defense (MTD); it is a game-changing technique against cyber-attacks and can interrupt the cyber kill chain at an early stage. In this paper, a novel end-hopping model, Multi End-hopping (MEH), is proposed to exploit the full potential of MTD techniques by having hosts cooperate with others to share their possible configurable space (PCS). An optimization method based on cooperative game theory is presented to make hosts form optimal alliances against reconnaissance, scanning and blind probing DoS attacks. The model and method confuse adversaries by establishing alliances of hosts that enlarge their PCS, which thwarts various malicious scans and mitigates the intensity of probing DoS attacks. Through simulations, we validate the correctness of the MEH model and the effectiveness of the optimization method. Experimental results show that the proposed model and method increase the probability of stable system operation while introducing low overhead in optimization.
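
A back-of-the-envelope illustration of why pooling configurable space helps: a blind scanner probing k random addresses hits a hopping host with a probability that shrinks as allied hosts enlarge the shared PCS. The space sizes and probe count below are invented for illustration and are not taken from the paper.

# Toy calculation: hit probability of a blind scan before and after an alliance
# enlarges the possible configurable space (PCS). Sizes are illustrative only.
def hit_probability(pcs_size, probes):
    # Each probe independently misses with probability (1 - 1/pcs_size).
    return 1 - (1 - 1 / pcs_size) ** probes

single_pcs = 2 ** 10            # one host hopping within its own space
alliance_pcs = 4 * single_pcs   # four hosts pooling their spaces
probes = 200

print(round(hit_probability(single_pcs, probes), 3))    # ~0.178
print(round(hit_probability(alliance_pcs, probes), 3))  # ~0.048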

Ming, Z., Zheng-jiang, W., Liu, H..  2017.  Random Projection Data Perturbation Based Privacy Protection in WSNs. 2017 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :493–498.

Wireless sensor networks are responsible for sensing, gathering and processing the information of the objects in the network coverage area. Basic data fusion technology generally does not provide a data privacy protection mechanism, yet privacy protection is usually indispensable in application areas such as health care, military reconnaissance and smart homes. In this paper, we consider privacy, confidentiality, and the accuracy of fusion results, and propose a data fusion algorithm for privacy preserving. This algorithm relies on the characteristics of data fusion, and uses pre-distributed random numbers in the nodes to meet the privacy protection requirements for the original data. Theoretical analysis shows the difficulty a malicious attacker faces in attempting to steal node privacy under the PPND algorithm. TOSSIM simulation results also show that, compared with the TAG and SMART algorithms, the PPND algorithm performs well in terms of data traffic and convergence accuracy.
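
One common way to realize pre-distributed random perturbation in aggregation, and a plausible reading of the scheme above, is for pairs of nodes to share seeds whose derived offsets cancel at the sink. The sketch below illustrates that general idea only; it is not the authors' PPND algorithm, and the readings are invented.

# Pairwise-seed perturbation sketch: offsets cancel in the aggregate while each
# individual reading stays masked. Illustrative only, not the PPND algorithm.
import itertools
import random

readings = {"n1": 21.4, "n2": 19.8, "n3": 22.1, "n4": 20.5}

# Pre-distributed pairwise seeds (installed before deployment in a real network).
seeds = {pair: idx for idx, pair in enumerate(itertools.combinations(sorted(readings), 2))}

perturbed = dict(readings)
for (a, b), seed in seeds.items():
    offset = random.Random(seed).uniform(-50, 50)  # both endpoints derive the same offset
    perturbed[a] += offset                         # one node adds ...
    perturbed[b] -= offset                         # ... its partner subtracts

print(sum(readings.values()))   # true aggregate
print(sum(perturbed.values()))  # same value up to float rounding, yet every reading is masked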

Müller, W., Kuwertz, A., Mühlenberg, D., Sander, J..  2017.  Semantic Information Fusion to Enhance Situational Awareness in Surveillance Scenarios. 2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI). :397–402.

In recent years, the usage of unmanned aircraft systems (UAS) for security-related purposes has increased, ranging from military applications to different areas of civil protection. The deployment of UAS can support security forces in achieving an enhanced situational awareness. However, in order to provide useful input to a situational picture, sensor data provided by UAS has to be integrated with information about the area and objects of interest from other sources. The aim of this study is to design a high-level data fusion component combining probabilistic information processing with logical and probabilistic reasoning, to support human operators in their situational awareness and to improve their capabilities for making efficient and effective decisions. To this end, a fusion component based on the ISR (Intelligence, Surveillance and Reconnaissance) Analytics Architecture (ISR-AA) [1] is presented, incorporating an object-oriented world model (OOWM) for information integration, an expressive knowledge model and a reasoning component for detection of critical events. Approaches for translating the information contained in the OOWM into either an ontology for logical reasoning or a Markov logic network for probabilistic reasoning are presented.

Roth, J. D., Martin, J., Mayberry, T..  2017.  A Graph-Theoretic Approach to Virtual Access Point Correlation. 2017 IEEE Conference on Communications and Network Security (CNS). :1–9.

The wireless boundaries of networks are becoming increasingly important from a security standpoint as the proliferation of 802.11 WiFi technology increases. Concurrently, the complexity of 802.11 access point implementation is rapidly outpacing the standardization process. The result is that nascent wireless functionality management is left up to the individual provider's implementation, which creates new vulnerabilities in wireless networks. One such functional improvement to 802.11 is the virtual access point (VAP), a method of broadcasting logically separate networks from the same physical equipment. Network reconnaissance benefits from VAP identification, not only because network topology is a primary aim of such reconnaissance, but because the knowledge that a secure network and an insecure network are both being broadcast from the same physical equipment is tactically relevant information. In this work, we present a novel graph-theoretic approach to VAP identification which leverages a body of research concerned with establishing community structure. We apply our approach to both synthetic data and a large corpus of real-world data to demonstrate its efficacy. In most real-world cases, near-perfect blind identification is possible, highlighting the effectiveness of our proposed VAP identification algorithm.
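
As a rough illustration of the community-structure idea, one can build a graph whose nodes are observed BSSIDs, connect those that look similar enough to share hardware, and extract communities as candidate physical devices. The similarity rule (same OUI prefix and channel) and the data below are invented for illustration; the paper's features and algorithm differ.

# Group BSSIDs into candidate physical access points via community detection.
# Edge rule and observations are invented; not the paper's method.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

observations = {
    "aa:bb:cc:00:00:01": {"channel": 6, "oui": "aa:bb:cc"},
    "aa:bb:cc:00:00:02": {"channel": 6, "oui": "aa:bb:cc"},
    "aa:bb:cc:00:00:03": {"channel": 6, "oui": "aa:bb:cc"},
    "dd:ee:ff:00:00:09": {"channel": 11, "oui": "dd:ee:ff"},
    "dd:ee:ff:00:00:0a": {"channel": 11, "oui": "dd:ee:ff"},
}

G = nx.Graph()
G.add_nodes_from(observations)
for a in observations:
    for b in observations:
        if a < b and observations[a] == observations[b]:
            G.add_edge(a, b)  # similar enough to plausibly be VAPs on one device

for community in greedy_modularity_communities(G):
    print("candidate physical AP:", sorted(community))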

Allodi, Luca, Massacci, Fabio.  2017.  Attack Potential in Impact and Complexity. Proceedings of the 12th International Conference on Availability, Reliability and Security. :32:1–32:6.

Vulnerability exploitation is reportedly one of the main attack vectors against computer systems. Yet, most vulnerabilities remain unexploited by attackers. It is therefore of central importance to identify vulnerabilities that carry a high 'potential for attack'. In this paper we rely on Symantec data on real attacks detected in the wild to identify a trade-off between the Impact and Complexity of a vulnerability in terms of the attacks it generates; exploiting this effect, we devise a readily computable estimator of a vulnerability's Attack Potential that reliably estimates the expected volume of attacks against the vulnerability. We evaluate our estimator's performance against standard patching policies by measuring foiled attacks and the workload demanded, expressed as the number of vulnerabilities that need to be patched. We show that our estimator significantly improves over standard patching policies by ruling out low-risk vulnerabilities, while maintaining invariant levels of coverage against attacks in the wild. Our estimator can be used as a first aid for vulnerability prioritisation to focus assessment efforts on high-potential vulnerabilities.
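
The abstract does not give the estimator's formula. Purely as an illustration of ranking vulnerabilities by trading high Impact against low Complexity, one might score them as below; the weighting function and CVE entries are hypothetical and are not the Attack Potential estimator from the paper.

# Hypothetical Impact/Complexity prioritisation; illustrative only.
vulns = [
    {"cve": "CVE-XXXX-0001", "impact": 10.0, "complexity": "low"},
    {"cve": "CVE-XXXX-0002", "impact": 6.4,  "complexity": "high"},
    {"cve": "CVE-XXXX-0003", "impact": 9.3,  "complexity": "medium"},
]
complexity_weight = {"low": 1.0, "medium": 0.6, "high": 0.3}

def score(v):
    return v["impact"] * complexity_weight[v["complexity"]]

for v in sorted(vulns, key=score, reverse=True):
    print(v["cve"], round(score(v), 2))  # patch the highest-scoring entries first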

Pan, Liuxuan, Tomlinson, Allan, Koloydenko, Alexey A..  2017.  Time Pattern Analysis of Malware by Circular Statistics. Proceedings of the Symposium on Architectures for Networking and Communications Systems. :119–130.

Circular statistics present a new technique to analyse the time patterns of events in the field of cyber security. We apply this technique to analyse incidents of malware infections detected by network monitoring. In particular we are interested in the daily and weekly variations of these events. Based on "live" data provided by Spamhaus, we examine the hypothesis that attacks on four countries are distributed uniformly over 24 hours. Specifically, we use Rayleigh and Watson tests. While our results are mainly exploratory, we are able to demonstrate that the attacks are not uniformly distributed, nor do they follow a Poisson distribution as reported in other research. Our objective is to identify a distribution that can be used to establish risk metrics. Moreover, our approach provides a visual overview of the time patterns' variation, indicating when attacks are most likely. This will assist decision makers in cyber security to allocate resources or estimate the cost of system monitoring during high risk periods. Our results also reveal that the time patterns are influenced by the total number of attacks. Networks subject to a large volume of attacks exhibit bimodality, while one case, where attacks occurred at a relatively lower rate, showed a multi-modal daily variation.
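
The Rayleigh test mentioned above asks whether event times, mapped to angles on a 24-hour circle, are uniformly distributed. A minimal version, using synthetic hour-of-day values in place of the Spamhaus data, might look like this.

# Minimal Rayleigh uniformity test on hour-of-day data mapped to the circle.
import numpy as np

hours = np.array([1, 2, 2, 3, 3, 3, 4, 4, 14, 22, 23])  # synthetic event hours
theta = 2 * np.pi * hours / 24.0                         # 24-hour clock -> angles

n = len(theta)
C, S = np.cos(theta).sum(), np.sin(theta).sum()
R_bar = np.sqrt(C ** 2 + S ** 2) / n   # mean resultant length (0 = uniform, 1 = one spike)
Z = n * R_bar ** 2                     # Rayleigh statistic
p = np.exp(-Z)                         # first-order approximation of the p-value

print(f"R_bar={R_bar:.3f}  Z={Z:.3f}  p~{p:.4f}")  # small p -> reject uniform timing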

Jonker, Mattijs, King, Alistair, Krupp, Johannes, Rossow, Christian, Sperotto, Anna, Dainotti, Alberto.  2017.  Millions of Targets Under Attack: A Macroscopic Characterization of the DoS Ecosystem. Proceedings of the 2017 Internet Measurement Conference. :100–113.

Denial-of-Service attacks have rapidly increased in terms of frequency and intensity, steadily becoming one of the biggest threats to Internet stability and reliability. However, a rigorous comprehensive characterization of this phenomenon, and of countermeasures to mitigate the associated risks, faces many infrastructure and analytic challenges. We make progress toward this goal, by introducing and applying a new framework to enable a macroscopic characterization of attacks, attack targets, and DDoS Protection Services (DPSs). Our analysis leverages data from four independent global Internet measurement infrastructures over the last two years: backscatter traffic to a large network telescope; logs from amplification honeypots; a DNS measurement platform covering 60% of the current namespace; and a DNS-based data set focusing on DPS adoption. Our results reveal the massive scale of the DoS problem, including an eye-opening statistic that one-third of all /24 networks recently estimated to be active on the Internet have suffered at least one DoS attack over the last two years. We also discovered that often targets are simultaneously hit by different types of attacks. In our data, Web servers were the most prominent attack target; an average of 3% of the Web sites in .com, .net, and .org were involved with attacks, daily. Finally, we shed light on factors influencing migration to a DPS.

Jain, Bhushan, Tsai, Chia-Che, Porter, Donald E..  2017.  A Clairvoyant Approach to Evaluating Software (In)Security. Proceedings of the 16th Workshop on Hot Topics in Operating Systems. :62–68.

Nearly all modern software has security flaws, either known or unknown to the users. However, metrics for evaluating software security (or lack thereof) are noisy at best. Common evaluation methods include counting the past vulnerabilities of the program, or comparing the size of the Trusted Computing Base (TCB), measured in lines of code (LoC) or binary size. Other than deleting large swaths of code from a project, it is difficult to assess whether a code change decreased the likelihood of a future security vulnerability. Developers need a practical, constructive way of evaluating security. This position paper argues that we actually have all the tools needed to design a better, empirical method of security evaluation. We discuss related work that estimates the severity and vulnerability of certain attack vectors based on code properties that can be determined via static analysis. This paper proposes a grand, unified model that can predict the risk and severity of vulnerabilities in a program. Our prediction model uses machine learning to correlate code features of open-source applications with the history of vulnerabilities reported in the CVE (Common Vulnerabilities and Exposures) database. Based on this model, one can incorporate an analysis into the standard development cycle that predicts whether the code is becoming more or less prone to vulnerabilities.

Bullough, Benjamin L, Yanchenko, Anna K, Smith, Christopher L, Zipkin, Joseph R.  2017.  Predicting Exploitation of Disclosed Software Vulnerabilities Using Open-Source Data. Proceedings of the 3rd ACM International Workshop on Security and Privacy Analytics (IWSPA '17).

Each year, thousands of software vulnerabilities are discovered and reported to the public. Unpatched known vulnerabilities are a significant security risk. It is imperative that software vendors quickly provide patches once vulnerabilities are known and users quickly install those patches as soon as they are available. However, most vulnerabilities are never actually exploited. Since writing, testing, and installing software patches can involve considerable resources, it would be desirable to prioritize the remediation of vulnerabilities that are likely to be exploited. Several published research studies have reported moderate success in applying machine learning techniques to the task of predicting whether a vulnerability will be exploited. These approaches typically use features derived from vulnerability databases (such as the summary text describing the vulnerability) or social media posts that mention the vulnerability by name. However, these prior studies share multiple methodological shortcomings that inflate the predictive power of these approaches. We replicate key portions of the prior work, compare their approaches, and show how the selection of training and test data critically affects the estimated performance of predictive models. The results of this study point to important methodological considerations that should be taken into account so that results reflect real-world utility.
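
One of the methodological points raised above is how training and test data are selected. A temporally ordered split (train on earlier disclosures, test on later ones) avoids the look-ahead leakage a purely random split can introduce; the sketch below contrasts the two with placeholder features and labels, which are not the study's data.

# Contrast a random split with a temporal split for exploit prediction.
# Features, labels, and dates are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
disclosure_day = np.sort(rng.integers(0, 730, size=n))  # day each vulnerability was published
X = rng.normal(size=(n, 5))                             # stand-in feature vectors
y = rng.integers(0, 2, size=n)                          # stand-in exploited / not-exploited labels

# Random split: test items may predate training items, leaking future information.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rand_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("random-split accuracy:  ", rand_model.score(X_te, y_te))

# Temporal split: only vulnerabilities disclosed before the cutoff are used for training.
cutoff = 550
train_mask = disclosure_day < cutoff
time_model = RandomForestClassifier(n_estimators=100, random_state=0)
time_model.fit(X[train_mask], y[train_mask])
print("temporal-split accuracy:", time_model.score(X[~train_mask], y[~train_mask]))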

Boukoros, Spyros, Katzenbeisser, Stefan.  2017.  Measuring Privacy in High Dimensional Microdata Collections. Proceedings of the 12th International Conference on Availability, Reliability and Security. :15:1–15:8.

Microdata is collected by companies in order to enhance their quality of service as well as the accuracy of their recommendation systems. These data often become publicly available after they have been sanitized. Recent reidentification attacks on publicly available, sanitized datasets illustrate the privacy risks involved in microdata collections. Currently, users have to trust the provider that their data will be safe in case data is published or if a privacy breach occurs. In this work, we empower users by developing a novel, user-centric tool for privacy measurement and a new lightweight privacy metric. The goal of our tool is to estimate users' privacy level prior to sharing their data with a provider. Hence, users can consciously decide whether to contribute their data. Our tool estimates an individual's privacy level based on published popularity statistics regarding the items in the provider's database, and the user's microdata. In this work, we describe the architecture of our tool as well as a novel privacy metric, which is necessary for our setting where we do not have access to the provider's database. Our tool is user friendly, relying on smart visual results that raise privacy awareness. We evaluate our tool using three real world datasets, collected from major providers. We demonstrate strong correlations between the average anonymity set per user and the privacy score obtained by our metric. Our results illustrate that our tool, which uses minimal information from the provider, estimates users' privacy levels almost as well as if it had access to the actual database.
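
The abstract does not spell the metric out. As one illustration of estimating a privacy level from published popularity statistics alone, a user's item combination can be treated as roughly independent and the expected anonymity set approximated from item frequencies; the sketch below is a hypothetical stand-in, not the metric proposed in the paper.

# Hypothetical popularity-based privacy estimate under an independence assumption.
import math

total_users = 1_000_000
item_popularity = {            # published fraction of users holding each item
    "item_a": 0.30,
    "item_b": 0.02,
    "item_c": 0.005,
}
user_items = ["item_a", "item_b", "item_c"]

combo_prob = math.prod(item_popularity[i] for i in user_items)
expected_anonymity_set = total_users * combo_prob
surprisal_bits = -sum(math.log2(item_popularity[i]) for i in user_items)

print(f"expected anonymity set ~ {expected_anonymity_set:.0f} users")
print(f"identifying information ~ {surprisal_bits:.1f} bits")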

Muñoz-González, Luis, Sgandurra, Daniele, Paudice, Andrea, Lupu, Emil C..  2017.  Efficient Attack Graph Analysis Through Approximate Inference. ACM Trans. Priv. Secur.. 20:10:1–10:30.

Attack graphs provide compact representations of the attack paths an attacker can follow to compromise network resources from the analysis of network vulnerabilities and topology. These representations are a powerful tool for security risk assessment. Bayesian inference on attack graphs enables the estimation of the risk of compromise to the system's components given their vulnerabilities and interconnections and accounts for multi-step attacks spreading through the system. While static analysis considers the risk posture at rest, dynamic analysis also accounts for evidence of compromise, for example, from Security Information and Event Management software or forensic investigation. However, in this context, exact Bayesian inference techniques do not scale well. In this article, we show how Loopy Belief Propagation—an approximate inference technique—can be applied to attack graphs and that it scales linearly in the number of nodes for both static and dynamic analysis, making such analyses viable for larger networks. We experiment with different topologies and network clustering on synthetic Bayesian attack graphs with thousands of nodes to show that the algorithm's accuracy is acceptable and that it converges to a stable solution. We compare sequential and parallel versions of Loopy Belief Propagation with exact inference techniques for both static and dynamic analysis, showing the advantages and gains of approximate inference techniques when scaling to larger attack graphs.
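
To ground the idea, the toy example below performs exact inference on a three-node Bayesian attack graph by brute-force enumeration; Loopy Belief Propagation approximates exactly this kind of posterior, but scales linearly in the number of nodes where enumeration blows up. The graph structure and probabilities are invented.

# Toy Bayesian attack graph A -> B -> C (entry point -> lateral movement -> database).
# Exact posterior by enumeration; LBP approximates this at scale. Values are invented.
from itertools import product

p_A = 0.3                               # prior probability that A is compromised
p_B_given_A = {True: 0.8, False: 0.05}  # chance B falls, given the state of A
p_C_given_B = {True: 0.7, False: 0.01}  # chance C falls, given the state of B

def joint(a, b, c):
    pa = p_A if a else 1 - p_A
    pb = p_B_given_A[a] if b else 1 - p_B_given_A[a]
    pc = p_C_given_B[b] if c else 1 - p_C_given_B[b]
    return pa * pb * pc

# Dynamic analysis example: evidence that C (the database) is compromised;
# infer the posterior that the entry point A was breached.
num = sum(joint(True, b, True) for b in (True, False))
den = sum(joint(a, b, True) for a, b in product((True, False), repeat=2))
print("P(A compromised | C compromised) =", round(num / den, 3))  # ~0.84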

Milošević, Jezdimir, Tanaka, Takashi, Sandberg, Henrik, Johansson, Karl Henrik.  2017.  Exploiting Submodularity in Security Measure Allocation for Industrial Control Systems. Proceedings of the 1st ACM Workshop on the Internet of Safe Things. :64–69.

Industrial control systems are cyber-physical systems that are used to operate critical infrastructures such as smart grids, traffic systems, industrial facilities, and water distribution networks. The digitalization of these systems increases their efficiency and decreases their cost of operation, but also makes them more vulnerable to cyber-attacks. In order to protect industrial control systems from cyber-attacks, the installation of multiple layers of security measures is necessary. In this paper, we study how to allocate a large number of security measures under a limited budget, such as to minimize the total risk of cyber-attacks. The security measure allocation problem formulated in this way is a combinatorial optimization problem subject to a knapsack (budget) constraint. The formulated problem is NP-hard, therefore we propose a method to exploit submodularity of the objective function so that polynomial time algorithms can be applied to obtain solutions with guaranteed approximation bounds. The problem formulation requires a preprocessing step in which attack scenarios are selected, and impacts and likelihoods of these scenarios are estimated. We discuss how the proposed method can be applied in practice.
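
A standard way to exploit submodularity under a budget is cost-benefit greedy selection, which is the kind of polynomial-time algorithm with guaranteed approximation bounds referred to above. The measures, costs, and coverage-style risk model in the sketch are invented stand-ins for the paper's formulation.

# Cost-benefit greedy selection of security measures under a knapsack (budget) constraint.
# Measures, costs, and the coverage-style risk-reduction objective are invented.
measures = {
    "network_segmentation": {"cost": 5, "covers": {"s1", "s2", "s3"}},
    "patching_plcs":        {"cost": 3, "covers": {"s2", "s4"}},
    "ids_sensors":          {"cost": 2, "covers": {"s3"}},
    "operator_training":    {"cost": 1, "covers": {"s5"}},
}
scenario_risk = {"s1": 10, "s2": 7, "s3": 4, "s4": 6, "s5": 2}  # estimated scenario impacts

def risk_reduced(selected):
    covered = set().union(*(measures[m]["covers"] for m in selected)) if selected else set()
    return sum(scenario_risk[s] for s in covered)

budget, selected = 7, []
while True:
    best, best_ratio = None, 0.0
    for m in measures:
        if m in selected or measures[m]["cost"] > budget:
            continue
        gain = risk_reduced(selected + [m]) - risk_reduced(selected)
        ratio = gain / measures[m]["cost"]
        if ratio > best_ratio:
            best, best_ratio = m, ratio
    if best is None:
        break
    selected.append(best)
    budget -= measures[best]["cost"]

print("selected measures:", selected, "| residual budget:", budget)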

Bahri, Leila.  2017.  Identity Related Threats, Vulnerabilities and Risk Mitigation in Online Social Networks: A Tutorial. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :2603–2605.

This tutorial provides a thorough review of the main research directions in the field of identity management and identity related security threats in Online Social Networks (OSNs). The continuous increase in the numbers and sophistication levels of fake accounts constitutes a big threat to the privacy and to the security of honest OSN users. Uninformed OSN users could be easily fooled into accepting friendship links with fake accounts, thereby giving them access to personal information they intend to exclusively share with their real friends. Moreover, these fake accounts subvert the security of the system by spreading malware, connecting with honest users for nefarious goals such as sexual harassment or child abuse, and making the social computing environment mostly untrustworthy. The tutorial introduces the main research results available in this area, and presents our work on collaborative identity validation techniques to estimate the trustworthiness of OSN profiles.

Ishikawa, Tomohisa, Sakurai, Kouichi.  2017.  A Proposal of Event Study Methodology with Twitter Sentimental Analysis for Risk Management. Proceedings of the 11th International Conference on Ubiquitous Information Management and Communication (IMCOM '17). Article No. 14.

Once organizations suffer security incidents and breaches, they have to pay tremendous costs. Although visible costs, such as the incident response cost, customer follow-up care, and legal costs, are predictable and calculable, it is tough to evaluate and estimate the invisible damage, such as lost customer loyalty, reputational impact, and damage to the brand. This paper proposes a new method, called "Event Study Methodology with Twitter Sentimental Analysis", to evaluate the invisible cost. This method helps to assess the impact of a security breach and its effect on corporate valuation.
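
The event study methodology referenced above compares observed stock returns around the incident date with the returns a market model predicts, and sums the difference as a cumulative abnormal return (CAR). The sketch below uses synthetic return series in place of real market data, and omits the Twitter sentiment component the paper adds.

# Classic event-study calculation: fit a market model on an estimation window,
# then accumulate abnormal returns over the event window. Returns are synthetic.
import numpy as np

rng = np.random.default_rng(1)
market = rng.normal(0.0005, 0.01, 140)                     # daily market returns
firm = 0.0002 + 1.1 * market + rng.normal(0, 0.005, 140)   # firm returns (beta ~ 1.1)
firm[120:126] -= 0.012                                     # hypothetical breach-driven drop

est, event = slice(0, 120), slice(120, 126)                # estimation / event windows
beta, alpha = np.polyfit(market[est], firm[est], 1)        # market-model fit
abnormal = firm[event] - (alpha + beta * market[event])
print("CAR over the event window:", round(abnormal.sum(), 4))  # negative -> value destroyed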