Biblio
A two-server password-based authentication (2PA) protocol is a special kind of authentication primitive that provides additional protection for the user's password. Through a 2PA protocol, a user can distribute a low-entropy password between two authentication servers in the initialization phase and authenticate merely by supplying the matching password in the login phase. No single server can learn any information about the user's password, nor impersonate the legitimate user to the honest server. In this paper, we first formulate and realize the security definition of two-server password-based authentication in the well-known universal composability (UC) framework, which provides desirable properties such as composable security. We show that our construction is suitable for the asymmetric communication model in which one server acts as the front-end server interacting directly with the user while the other stays backstage. We then show that our protocol can be easily extended to more complicated password-based cryptographic protocols, such as two-server password-authenticated key exchange (2PAKE) and two-server password-authenticated secret sharing (2PASS), which enjoy stronger security guarantees and better efficiency than existing schemes.
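The abstract does not describe the construction itself, but the core property it names, that neither server alone learns anything about the password, can be illustrated with a minimal sketch based on XOR secret sharing of a password-derived secret (the key-derivation choices below are arbitrary assumptions, not the paper's protocol):

```python
import hashlib
import os

def split_password(password: str, salt: bytes) -> tuple[bytes, bytes]:
    """Derive a secret from the password and split it into two XOR shares."""
    secret = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    share1 = os.urandom(len(secret))                       # uniformly random
    share2 = bytes(a ^ b for a, b in zip(secret, share1))  # secret XOR share1
    return share1, share2                                  # one per server

def recombine(share1: bytes, share2: bytes) -> bytes:
    """Only the two servers together can reconstruct the secret."""
    return bytes(a ^ b for a, b in zip(share1, share2))

salt = os.urandom(16)
s1, s2 = split_password("correct horse battery staple", salt)
assert recombine(s1, s2) == hashlib.pbkdf2_hmac(
    "sha256", b"correct horse battery staple", salt, 100_000)
```

Because share1 is uniformly random and share2 is the XOR of the secret with it, each share in isolation is statistically independent of the password.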
Network flow classification is fundamental to network management and network security. However, it is challenging to classify network flows at very high line rates while simultaneously preserving user privacy. Machine learning based classification techniques utilize only the meta-information of a flow and have been shown to be effective in identifying network flows. We analyze a group of widely used machine learning classifiers and observe that the effectiveness of different classification models depends highly upon the protocol types as well as the flow features collected from network data. We propose vTC, a design of virtual network functions that flexibly selects and applies the most suitable machine learning classifier at run time. The experimental results show that the proposed NFV for flow classification can improve the accuracy of classification by up to 13%.
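As a rough illustration of the selection idea (not vTC's actual implementation; the features, labels, and candidate models below are hypothetical), one can cross-validate several classifiers on flow metadata and deploy the most accurate one for a given protocol type:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 4))     # e.g., packet sizes, inter-arrival times
y = rng.integers(0, 3, 1000)  # e.g., video / web / bulk-transfer labels

candidates = {
    "random_forest": RandomForestClassifier(n_estimators=50, random_state=0),
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(n_neighbors=5),
}
scores = {name: cross_val_score(clf, X, y, cv=5).mean()
          for name, clf in candidates.items()}
best = max(scores, key=scores.get)  # classifier the virtual NF would deploy
print(best, scores[best])
```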
Virtual machine live migration technology, as an important support for cloud computing, has become a central issue in recent years. Live migration moves a virtual machine's runtime environment from one physical server to another while keeping the virtual machine running, which enables load balancing among servers and helps ensure quality of service. However, the security of virtual machine migration cannot be ignored, given the technology's immaturity. In this paper, we analyze the security threats to virtual machine migration and compare the protection measures proposed to date; these methods either rely on hardware or lack adequate security and extensibility. We then propose a security model for live virtual machine migration based on security-policy transfer and encryption, named SPLM (Security Protection of Live Migration), and analyze its security and reliability, showing that SPLM outperforms existing approaches. The security study of live virtual machine migration in this paper provides a reference for research on virtualization security.
The deterministic nature of existing routing protocols has resulted in an ossified Internet with static and predictable network routes. This gives persistent attackers (e.g., eavesdroppers and DDoS attackers) plenty of time to study the network, identify the vulnerable (critical) links, and plan devastating and stealthy attacks. Recently, Moving Target Defense (MTD) based approaches have been proposed to defend against DoS attacks. However, MTD-based approaches for route mutation are oriented towards re-configuring parameters in Local Area Networks (LANs) and do not provide any protection against infrastructure-level attacks, which inherently limits their use for mission-critical services over the Internet infrastructure. To cope with these issues, we extend the current routing architecture to consider end-hosts as routing elements and present a formal-methods-based agile defense mechanism to embed resiliency in the existing cyber infrastructure. The major contributions of this paper are: (1) formalization of the efficient and resilient End-to-End (E2E) reachability problem as a constraint satisfaction problem, which identifies the potential end-hosts that can reach a destination while satisfying resilience and QoS constraints; (2) design and implementation of a novel decentralized End Point Route Mutation (EPRM) protocol; and (3) design and implementation of a planning algorithm that minimizes the overlap between multiple flows, for the sake of maximizing agility in the system. Our PlanetLab-based implementation and evaluation validates the correctness, effectiveness, and scalability of the proposed approach.
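A toy sketch of the constraint-satisfaction view (all names, latency values, and the disjointness criterion are hypothetical, not the paper's formalization) might enumerate candidate end-host relays that satisfy a QoS bound while remaining link-disjoint for resilience:

```python
from itertools import combinations

relays = {"r1": {"latency": 30, "links": {"A", "B"}},
          "r2": {"latency": 45, "links": {"C"}},
          "r3": {"latency": 80, "links": {"A", "D"}}}
MAX_LATENCY = 60   # QoS constraint
K = 2              # resilience: need K link-disjoint relay options

def solutions():
    # Keep only relays meeting the QoS bound, then enumerate candidate sets.
    ok = [r for r, v in relays.items() if v["latency"] <= MAX_LATENCY]
    for combo in combinations(ok, K):
        links = [relays[r]["links"] for r in combo]
        if not set.intersection(*links):   # link-disjoint => resilient
            yield combo

print(list(solutions()))   # [('r1', 'r2')]
```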
The Security Behavior Intentions Scale (SeBIS) measures the computer security attitudes of end-users. Because intentions are a prerequisite for planned behavior, the scale could be useful for predicting users' computer security behaviors. We performed three experiments to identify correlations between each of SeBIS's four sub-scales and relevant computer security behaviors. We found that testing high on the awareness sub-scale correlated with correctly identifying a phishing website; testing high on the passwords sub-scale correlated with creating passwords that could not be quickly cracked; testing high on the updating sub-scale correlated with applying software updates; and testing high on the securement sub-scale correlated with smartphone lock screen usage (e.g., PINs). Our results indicate that SeBIS predicts certain computer security behaviors and that it is a reliable and valid tool that should be used in future research.
The orthodox paradigm for defending against automated social-engineering attacks in large-scale socio-technical systems is reactive and victim-agnostic. Defenses generally focus on identifying the attacks/attackers (e.g., phishing emails, social-bot infiltrations, malware offered for download). To change the status quo, we propose to identify, even if imperfectly, the vulnerable user population, that is, the users who are likely to fall victim to such attacks. Once identified, information about the vulnerable population can be used in two ways. First, the vulnerable population can be influenced by the defender through several means, including education, specialized user experience, extra protection layers, and watchdogs. In the same vein, information about the vulnerable population can ultimately be used to fine-tune and reprioritize defense mechanisms to offer differentiated protection, possibly at the cost of additional friction generated by the defense mechanism. Second, information about the user population can be used to identify an attack (or compromised users) based on differences between the general and the vulnerable populations. This paper considers the implications of the proposed paradigm for existing defenses in three areas (phishing of user credentials, malware distribution, and social-bot infiltration) and discusses how knowledge of the vulnerable population can enable more robust defenses.
Data-intensive computing research and technology developments offer the potential of significant improvements for several security log management challenges. Approaches to address the complexity, timeliness, expense, diversity, and noise issues have been identified. These improvements are motivated by the increasingly important role of analytics. Machine learning and expert systems that incorporate attack patterns are providing greater detection insights. Finding actionable indicators requires combining security event log data with other network data such as access control lists, making the big-data problem even bigger. Automation of threat intelligence is recognized as incomplete, with limited adoption of standards. With limited progress in anomaly signature detection, movement towards expert systems has been identified as the path forward. Techniques focus on matching attacker behaviors to patterns of abnormal activity in the network. The need to stream, parse, and analyze large volumes of small, semi-structured data files can feasibly be addressed through a variety of techniques identified by researchers. This report highlights research in key areas, including protection of the data, performance of the systems, and network bandwidth utilization.
Privacy protection in the Internet of Things (IoT) has been the topic of extensive research over the last decade. The perceptual layer of the IoT suffers the most significant privacy disclosure because of the limitations of hardware resources. Data encryption and anonymization are the most common methods for protecting private information at the perceptual layer. However, these efforts are ineffective at avoiding privacy disclosure if unknown wireless nodes, which could be malicious devices, exist in the communication environment. Therefore, in this paper we propose a passive method called Horizontal Hierarchy Slicing (HHS) to detect the existence of unknown wireless devices that could pose a threat to privacy. The PAM (Partitioning Around Medoids) algorithm is used to cluster the HHS curves and determine whether unknown wireless devices exist in the communicating environment. Link Quality Indicator (LQI) data are utilized as the network parameter. The simulation results show the method's effectiveness for privacy protection.
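Since PAM (k-medoids) is a standard algorithm, the clustering step can be sketched generically; the synthetic curves below stand in for the paper's HHS/LQI measurements:

```python
import numpy as np

def pam(curves: np.ndarray, k: int, iters: int = 20) -> np.ndarray:
    """Return indices of k medoid curves under Euclidean distance."""
    dist = np.linalg.norm(curves[:, None] - curves[None, :], axis=-1)
    medoids = np.random.default_rng(0).choice(len(curves), k, replace=False)
    for _ in range(iters):
        labels = dist[:, medoids].argmin(axis=1)  # assign to nearest medoid
        new = medoids.copy()
        for c in range(k):
            # New medoid: cluster member minimizing total in-cluster distance.
            members = np.flatnonzero(labels == c)
            new[c] = members[dist[np.ix_(members, members)].sum(axis=1).argmin()]
        if np.array_equal(new, medoids):          # converged
            break
        medoids = new
    return medoids

curves = np.random.default_rng(1).normal(size=(60, 32))  # 60 curves, 32 samples
print(pam(curves, k=3))
```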
Popular anonymity mechanisms such as Tor provide low communication latency but are vulnerable to traffic analysis attacks that can de-anonymize users. Moreover, known traffic-analysis-resistant techniques such as Dissent are impractical for use in latency-sensitive settings such as wireless networks. In this paper, we propose PriFi, a low-latency protocol for anonymous communication in local area networks that is provably secure against traffic analysis attacks. This allows members of an organization to access the Internet anonymously while they are on-site, via privacy-preserving WiFi networking, or off-site, via privacy-preserving virtual private networking (VPN). PriFi reduces communication latency using a client/relay/server architecture in which a set of servers computes cryptographic material in parallel with the clients to minimize unnecessary communication latency. We also propose a technique for protecting against equivocation attacks, with which a malicious relay might de-anonymize clients. This is achieved without adding extra latency by encrypting client messages based on the history of all messages they have received so far. As a result, any equivocation attempt makes the communication unintelligible, preserving clients' anonymity while holding the servers accountable.
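The equivocation protection can be sketched in simplified form (this is an illustration of the history-binding idea, not PriFi's exact cryptography): deriving each keystream from a hash of the full received transcript means any divergence in what clients were sent renders subsequent traffic unintelligible:

```python
import hashlib

def keystream(shared_secret: bytes, history: list[bytes], n: int) -> bytes:
    """Derive n keystream bytes bound to the transcript received so far."""
    h = hashlib.sha256(shared_secret)
    for msg in history:            # bind the key to the received transcript
        h.update(msg)
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(h.digest() + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

secret, history = b"client-server secret", [b"round-1 broadcast"]
ct = xor(b"hello", keystream(secret, history, 5))
# Decrypts only if both sides saw the same history:
assert xor(ct, keystream(secret, history, 5)) == b"hello"
# An equivocating relay desynchronizes the keystream:
assert xor(ct, keystream(secret, [b"equivocated!"], 5)) != b"hello"
```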
The specification and implementation of network protocols are difficult tasks. This is particularly the case for wireless sensor network (WSN) nodes, which are typically implemented on low-power microcontrollers with limited processing capabilities. These platforms do not have the resources to run a full-fledged operating system; instead they are programmed using a Real-Time Operating System (RTOS) specialized in low-power wireless communications, the most popular being Contiki and TinyOS. These RTOSs support a fully-compliant IPv6 stack including the 6LoWPAN adaptation layer [1], several radio duty-cycling MAC protocols [2], [3], and multiple routing protocols [4].
In order to integrate equipment from different vendors, wireless sensor networks need to become more standardized. Using IP as the basis of low power radio networks, together with application layer standards designed for this purpose is one way forward. This research focuses on implementing and deploying a system using Contiki, 6LoWPAN over an 868 MHz radio network, together with CoAP as a standard application layer protocol. A system was deployed in the Cairngorm mountains in Scotland as an environmental sensor network, measuring streams, temperature profiles in peat and periglacial features. It was found that RPL provided an effective routing algorithm, and that the use of UDP packets with CoAP proved to be an energy efficient application layer. This combination of technologies can be very effective in large area sensor networks.
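A minimal client-side sketch of such a CoAP read over UDP, using the aiocoap Python library (the host name and resource path are hypothetical), looks as follows:

```python
import asyncio
from aiocoap import Context, Message, GET

async def read_sensor():
    ctx = await Context.create_client_context()
    # One GET request rides in a single small UDP datagram, which is what
    # makes CoAP an energy-efficient application layer for 6LoWPAN nodes.
    request = Message(code=GET, uri="coap://sensor.example/peat/temperature")
    response = await ctx.request(request).response
    print(response.code, response.payload.decode())

asyncio.run(read_sensor())
```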
State estimation is a fundamental problem for monitoring and controlling systems. Engineering systems interconnect sensing and computing devices over shared bandwidth-limited channels, and therefore estimation algorithms should strive to use bandwidth optimally. We present a notion of entropy for state estimation of switched nonlinear dynamical systems, an upper bound for it, and a state estimation algorithm for the case when the switching signal is unobservable. Our approach relies on the notion of topological entropy and uses techniques from the theory of control under limited information. We show that the average bit rate used is optimal in the sense that the efficiency gap of the algorithm is within an additive constant of the gap between the estimation entropy of the system and its known upper bound. We apply the algorithm to two system models and discuss the performance implications of the number of tracked modes.
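For context, a classical result from control under limited information (stated here as background, not as the paper's switched-system bound) ties the minimal average bit rate for state estimation of a linear flow to its unstable eigenvalues:

```latex
% Topological entropy of a linear flow (Bowen's formula): only the unstable
% eigenvalues force information to be transmitted.
\dot{x} = Ax, \qquad
h(A) \;=\; \frac{1}{\ln 2}\,\sum_{i \,:\, \operatorname{Re}\lambda_i > 0}
\operatorname{Re}\lambda_i \quad \text{bits per unit time.}
```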
With limited battery supply, power is a scarce commodity in wireless sensor networks. Thus, to prolong the lifetime of the network, it is imperative that sensor resources are managed effectively. This task is particularly challenging in heterogeneous sensor networks, in which decisions and compromises regarding sensing strategies must be made under time and resource constraints. In such networks, a sensor has to reason about its current state to take actions that are appropriate with respect to its mission, its energy reserve, and the survivability of the overall network. Sensor management controls and coordinates the use of the sensory suites in a manner that maximizes the success rate of the system in achieving its missions. This article focuses on formulating and developing an autonomous energy-aware sensor management system that strives to achieve network objectives while maximizing the network lifetime. A team-theoretic formulation based on the Belief-Desire-Intention (BDI) model and the Joint Intention theory is proposed as a mechanism for effective and energy-aware collaborative decision-making. The proposed system models the collective behavior of the sensor nodes using the Joint Intention theory to enhance the sensors' collaboration and success rate. Moreover, the BDI modeling of sensor operation and reasoning allows a sensor node to adapt to environment dynamics, the situation-criticality level, and the availability of its own resources. The simulation scenario selected in this work is the surveillance of the Waterloo International Airport. Various experiments are conducted to investigate the effects of varying the network size, number of threats, threat agility, and environment dynamism on the tracking quality and energy consumption of the proposed system. The experimental results demonstrate the merits of the proposed approach compared to the state-of-the-art centralized approach adapted from Atia et al. [2011] and the localized approach in Hilal and Basir [2015] in terms of energy consumption, adaptability, and network lifetime. The results show that the proposed approach yields 12× lower energy consumption than the popular centralized approach.
A number of small Open Source projects let independent providers measure different aspects of their quality that would otherwise be hard to see. This paper describes this observation as the pattern Quality Attestation. Quality Attestation belongs to a family of Open Source patterns written by various authors.
Attack graphs used in network security analysis are analyzed to determine sequences of exploits that lead to successful acquisition of privileges or data at critical assets. An attack graph edge corresponds to a vulnerability, tacitly assuming that a connection exists and that the vulnerability is known to exist. In this paper we explore the use of uncertain graphs to extend the paradigm to include a lack of certainty in the connection and/or the existence of a vulnerability. However, we extend the standard notion of an uncertain graph (in which the existence of each edge is probabilistically independent), since significant correlations among edge existence probabilities exist in practice, owing to common underlying causes for disconnectivity and/or the presence of vulnerabilities. Our extension describes each edge probability as a Boolean expression of independent indicator random variables. This paper (i) shows that this formalism is maximally descriptive, in the sense that it can describe any joint probability distribution function of edge existence; (ii) shows that when these Boolean expressions are monotone, we can easily perform uncertainty analysis of edge probabilities; and (iii) uses these results to model a partial attack graph of the Stuxnet worm and a small enterprise network, and to answer important security-related questions in a probabilistic manner.
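A toy example of the formalism (indicator names and probabilities are made up) shows why correlated edges matter: two edges that share an indicator variable are far more likely to co-occur than an independence assumption would suggest:

```python
import itertools

indicators = {"vuln_A": 0.3, "link_up": 0.9}   # independent Bernoulli causes
edges = {                                       # Boolean expressions of indicators
    ("h1", "h2"): lambda v: v["vuln_A"] and v["link_up"],
    ("h2", "h3"): lambda v: v["vuln_A"],        # correlated with the first edge
}

def joint_edge_prob(target: dict) -> float:
    """Exact P(each edge matches its target truth value) by enumeration."""
    total = 0.0
    for values in itertools.product([True, False], repeat=len(indicators)):
        v = dict(zip(indicators, values))
        p = 1.0
        for name, present in v.items():
            p *= indicators[name] if present else 1 - indicators[name]
        if all(expr(v) == target[e] for e, expr in edges.items()):
            total += p
    return total

# Both edges exist with probability 0.3 * 0.9 = 0.27, much higher than the
# 0.27 * 0.3 = 0.081 that an edge-independence assumption would predict.
print(joint_edge_prob({("h1", "h2"): True, ("h2", "h3"): True}))
```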
Poster presented at the Symposium and Bootcamp in the Science of Security in Hanover, MD, April 4-5, 2017.
OpenFlow, as the prevailing technique for Software-Defined Networks (SDNs), introduces significant programmability, granularity, and flexibility for many network applications to effectively manage and process network flows. However, because OpenFlow attempts to keep the SDN data plane simple and efficient, it focuses solely on L2/L3 network transport and consequently lacks the fundamental ability of stateful forwarding in the data plane. OpenFlow also provides only very limited access to connection-level information in the SDN controller. In particular, for network access management applications on SDNs that require comprehensive network state information, these inherent limitations of OpenFlow pose significant challenges in supporting network services. To address these challenges, we propose an innovative connection tracking framework called STATEMON that introduces global state-awareness to provide better access control in SDNs. STATEMON is based on a lightweight extension of OpenFlow for programming the stateful SDN data plane, while keeping the underlying network devices as simple as possible. To demonstrate the practicality and feasibility of STATEMON, we implement and evaluate stateful network firewall and port-knocking applications for SDNs, using the APIs provided by STATEMON. Our evaluations show that STATEMON introduces minimal message exchanges for monitoring active connections in SDNs, with manageable overhead (3.27% throughput degradation).
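The port-knocking application illustrates why per-connection state matters; a minimal sketch of the state machine (the knock sequence and decision API are illustrative, not STATEMON's actual interface) could look like this:

```python
KNOCK_SEQUENCE = (7000, 8000, 9000)   # secret knock, hypothetical values
PROTECTED_PORT = 22

progress: dict[str, int] = {}         # per-source progress through the sequence

def handle_packet(src_ip: str, dst_port: int) -> str:
    step = progress.get(src_ip, 0)
    if step == len(KNOCK_SEQUENCE):   # source already completed the knock
        return "allow" if dst_port == PROTECTED_PORT else "drop"
    if dst_port == KNOCK_SEQUENCE[step]:
        progress[src_ip] = step + 1   # correct next knock: advance the state
    else:
        progress[src_ip] = 0          # wrong knock: start over
    return "drop"

for port in (7000, 8000, 9000, 22):
    decision = handle_packet("10.0.0.5", port)
print(decision)   # "allow" once the full sequence has been observed
```

A stateless L2/L3 match-action pipeline cannot express this policy, since the verdict for the final packet depends on the history of earlier packets, which is precisely the state the framework tracks.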
In 2012, two academic groups reported having computed the RSA private keys for 0.5% of HTTPS hosts on the internet, and traced the underlying issue to widespread random number generation failures on networked devices. The vulnerability was reported to dozens of vendors, several of whom responded with security advisories, and the Linux kernel was patched to fix a boot-time entropy hole that contributed to the failures. In this paper, we measure the actions taken by vendors and end users over time in response to the original disclosure. We analyzed public internet-wide TLS scans performed between July 2010 and May 2016 and extracted 81 million distinct RSA keys. We then computed the pairwise common divisors for the entire set in order to factor over 313,000 keys vulnerable to the flaw, and fingerprinted implementations to study patching behavior over time across vendors. We find that many vendors appear to have never produced a patch, and observed little to no patching behavior by end users of affected devices. The number of vulnerable hosts increased in the years after notification and public disclosure, and several newly vulnerable implementations have appeared since 2012. Vendor notification, positive vendor responses, and even vendor-produced public security advisories appear to have little correlation with end-user security.
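The core computation is well known; in naive pairwise form (at 81 million keys the paper necessarily uses a more efficient batch-GCD product tree) it amounts to the following: two RSA moduli that share a prime are both factored by a single gcd.

```python
import math
from itertools import combinations

def factor_shared_primes(moduli: list[int]) -> dict[int, tuple[int, int]]:
    """Factor any moduli that share a prime factor with another modulus."""
    factored = {}
    for n1, n2 in combinations(moduli, 2):
        g = math.gcd(n1, n2)
        if 1 < g < n1:                       # shared prime found
            factored[n1] = (g, n1 // g)
            factored[n2] = (g, n2 // g)
    return factored

# Toy moduli built from small primes; 221 = 13*17 and 247 = 13*19 share the
# prime 13 (from a "bad RNG"), while 667 = 23*29 shares nothing and survives.
print(factor_shared_primes([221, 247, 667]))
```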
Securing visible light communication (VLC) systems at the physical layer promises to protect against a variety of attacks. Recent work shows that adapting existing legacy radio-wave physical layer security (PLS) mechanisms is possible with minor changes. Yet, many adaptations open new vulnerabilities due to the distinct propagation characteristics of visible light. A common understanding of the threats arising from various attacker capabilities is missing. We specify a new attacker model for visible light physical layer attacks and evaluate the applicability of existing PLS approaches. Our results show that many attacks are not considered in current solutions.
Physical layer security for wireless communication is broadly considered a promising approach to protect data confidentiality against eavesdroppers. However, despite its ample theoretical foundation, the transition to practical implementations of physical-layer security still lacks success. A close inspection of proven-vulnerable physical-layer security designs reveals that the flaws are usually overlooked when a scheme is evaluated only against an inferior, single-antenna eavesdropper. Meanwhile, the attacks exposing vulnerabilities often lack theoretical justification. To reduce the gap between theory and practice, we posit that a physical-layer security scheme must be studied under multiple adversarial models to fully grasp its security strength. In this regard, we evaluate a specific physical-layer security scheme, i.e., orthogonal blinding, under multiple eavesdropper settings. We further propose a practical "ciphertext-only attack" that allows eavesdroppers to recover the original message by exploiting the low-entropy fields in wireless packets. By means of simulation, we are able to reduce the symbol error rate at an eavesdropper to below 1% using only the eavesdropper's received data and general knowledge of the format of the wireless packets.
Logistic regression is a powerful machine learning tool for classifying data. When dealing with sensitive data such as private or medical information, care is necessary. In this paper, we propose a secure system for protecting the training data in logistic regression via homomorphic encryption. Perhaps surprisingly, despite the non-polynomial operations involved in training a logistic regression model, we show that only additively homomorphic encryption is needed to build our system. Our system is secure and scalable with the dataset size.
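The enabling property can be sketched with the python-paillier (`phe`) package (the aggregation workflow below is illustrative, not the paper's exact system): an untrusted party can sum additively homomorphic ciphertexts, so per-record training statistics are aggregated without decryption.

```python
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Data owners encrypt their per-record gradient contributions.
contributions = [0.25, -0.10, 0.40]
ciphertexts = [public_key.encrypt(x) for x in contributions]

# The training server adds ciphertexts without learning any plaintext.
encrypted_sum = sum(ciphertexts[1:], ciphertexts[0])

# Only the key holder can decrypt the aggregated gradient.
assert abs(private_key.decrypt(encrypted_sum) - sum(contributions)) < 1e-6
```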
Embedded devices with constrained computational resources, such as wireless sensor network nodes, electronic tag readers, roadside units in vehicular networks, and smart watches and wristbands, are widely used in the Internet of Things. Many such devices are deployed in untrusted environments, and others are easy to lose, leading to possible capture by adversaries. Accordingly, in the context of security research, these devices run in the white-box attack context, where the adversary may have total visibility of the implementation of the built-in cryptosystem and full control over its execution. It is undoubtedly a significant challenge to deal with attacks from a powerful adversary in white-box attack contexts. Existing encryption algorithms for white-box attack contexts typically require large amounts of memory, varying from one to dozens of megabytes, and thus are not suitable for resource-constrained devices. As a countermeasure in such circumstances, we propose an ultra-lightweight encryption scheme for protecting the confidentiality of data in white-box attack contexts. The encryption is executed with secret components specialized for resource-constrained devices against white-box attacks, and the encryption algorithm requires a relatively small amount of static data, ranging from 48 to 92 KB. The security and efficiency of the proposed scheme have been theoretically analyzed with positive results, and experimental evaluations have indicated that the scheme satisfies the resource constraints in terms of limited memory use and low computational cost.
Emergency message delivery in packet networks is promising in terms of resiliency to failures and service delivery to handicapped persons. In this paper, we propose an NDN (Named Data Networking)-based emergency message delivery mechanism that leverages multicasting and ABE (Attribute-Based Encryption) functions.