Biblio
Internet-based systems increasingly need to change their configuration dynamically. Traditional networks have very limited ability to cope with such frequent changes, and their management and configuration procedures hinder innovation. To address this issue, Software Defined Networking (SDN) has emerged as a new network architecture that allows for more flexibility through software-enabled network control. However, the dynamism of programmable networks also raises new security challenges that demand innovative solutions. Among the mechanisms widely used in SDN security applications, anomaly-based IDS is an effective technique for detecting both known and unknown (new) attack types. In this paper, we propose an anomaly-based Intrusion Detection architecture integrated into an OpenFlow switch. The proposed system can detect attacks of many types, especially new ones, and protect the network against them using anomaly detection. We implement the proposed system in FPGA technology using a Xilinx Virtex-5 xc5vtx240t device. In this FPGA-based prototype, we integrate an anomaly-based intrusion detection technique able to defend against many attack types and anomalies in the network traffic. The experimental results show that our system achieves a detection rate exceeding 91.81% with a false alarm rate of at most 0.55%.
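As a rough illustration of the anomaly-detection principle behind the system above (not the authors' FPGA architecture), the sketch below flags traffic windows whose feature vector deviates from a profile learned on normal traffic; the features, profile data, and 3-sigma threshold are all hypothetical.

```python
import numpy as np

# Hypothetical per-window traffic features: [packets/s, bytes/s, distinct dst ports]
normal = np.array([[120, 9.5e4, 14], [135, 1.1e5, 12], [110, 8.8e4, 15]], dtype=float)
mu, sigma = normal.mean(axis=0), normal.std(axis=0) + 1e-9

def is_anomalous(window, k=3.0):
    """Flag a window whose z-score exceeds k on any feature (illustrative rule)."""
    z = np.abs((window - mu) / sigma)
    return bool((z > k).any())

print(is_anomalous(np.array([125, 1.0e5, 13])))    # normal-looking window -> False
print(is_anomalous(np.array([9000, 7.0e6, 900])))  # flood-like window -> True
```

A detector of this kind needs no attack signatures, which is why anomaly-based IDS can catch previously unseen attack types.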
A hardware Trojan (HT) detection method is presented that is based on measuring and detecting small systematic changes in path delays introduced by capacitive loading effects or series inserted gates of HTs. The path delays are measured using a high resolution on-chip embedded test structure called a time-to-digital converter (TDC) that provides approx. 25 ps of timing resolution. A calibration method for the TDC as well as a chip-averaging technique are demonstrated to nearly eliminate chip-to-chip and within-die process variation effects on the measured path delays across chips. This approach significantly improves the correlation between Trojan-free chips and a simulation-based golden model. Path delay tests are applied to multiple copies of a 90nm custom ASIC chip having two copies of an AES macro. The AES macros are exact replicas except for the insertion of several additional gates in the second hardware copy, which are designed to model HTs. Simple statistical detection methods are used to isolate and detect systematic changes introduced by these additional gates. We present hardware results which demonstrate that our proposed chip-averaging and calibration techniques in combination with a single nominal simulation model can be used to detect small delay anomalies introduced by the inserted gates of hardware Trojans.
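A minimal sketch of the chip-averaging idea with made-up numbers (not the paper's TDC measurements or statistics): averaging each path's delay across many chips suppresses random chip-to-chip and within-die variation, so a small systematic Trojan-induced shift relative to the golden simulation model stands out.

```python
import numpy as np

rng = np.random.default_rng(0)
golden = np.array([1.00, 1.20, 0.95, 1.40])      # simulated Trojan-free path delays (ns), illustrative
trojan_shift = np.array([0.0, 0.025, 0.0, 0.0])  # hypothetical 25 ps extra load on path 1

# Measured delays on 200 chips: golden + shift + random variation/TDC noise
chips = golden + trojan_shift + rng.normal(0, 0.05, size=(200, 4))

avg = chips.mean(axis=0)             # chip-averaging cancels the random component
residual = avg - golden              # systematic shift vs. the golden model
threshold = 3 * 0.05 / np.sqrt(200)  # ~3 sigma of the averaged noise

print(np.where(residual > threshold)[0])  # expected: [1], isolating the loaded path
```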
As the number of small, battery-operated, wireless-enabled devices deployed in various applications of the Internet of Things (IoT), Wireless Sensor Networks (WSN), and Cyber-physical Systems (CPS) rapidly increases, so does the number of data streams that must be processed. In cases where data do not need to be archived, centrally processed, or federated, in-network data processing is becoming more common. For this purpose, various platforms like DRAGON, Innet, and CJF were proposed. However, these platforms assume that all nodes in the network are the same, i.e. that the network is homogeneous. As Moore's law still applies, nodes are becoming smaller, more powerful, and more energy efficient each year, a trend that will continue for the foreseeable future. Therefore, we can expect that as sensor networks are extended and updated, hardware heterogeneity will soon be common in networks - the same trend as can be seen in cloud computing infrastructures. This heterogeneity introduces new challenges when choosing an in-network data processing node, as not only its location but also its capabilities must be considered. This paper introduces a new methodology to tackle this challenge, comprising three new algorithms - Request, Traverse, and Mixed - for efficiently locating an in-network data processing node while taking into account not only its position within the network but also its hardware capabilities. The proposed algorithms are evaluated against a naïve approach and achieve up to a 90% reduction in network traffic during long-term data processing, while spending a similar amount of time in the discovery phase.
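The core tradeoff the Request/Traverse/Mixed algorithms navigate can be illustrated with a naive scorer that weighs a node's network distance against its hardware capability; the node data, capability metric, and weights below are hypothetical, not the paper's algorithms.

```python
nodes = {
    # name: (hop_distance_from_data_sources, cpu_score, free_memory_mb)
    "n1": (2, 1.0, 64),
    "n2": (5, 4.0, 512),   # most capable, but farthest away
    "n3": (3, 2.5, 256),
}

def score(node, w_dist=1.0, w_cap=0.5):
    hops, cpu, mem = node
    capability = cpu + mem / 256               # hypothetical capability metric
    return w_dist * hops - w_cap * capability  # lower is better

best = min(nodes, key=lambda n: score(nodes[n]))
print(best)  # "n3" under these weights: neither the closest nor the most capable node
```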
Secure hardware design is a challenging task that goes far beyond ensuring functional correctness. Important design properties such as non-interference cannot be verified on functional circuit models due to the lack of essential information (e.g., sensitivity levels) for reasoning about security. Hardware information flow tracking (IFT) techniques associate data objects in the hardware design with sensitivity labels for modeling security-related behaviors. They allow the designer to test and verify security properties related to confidentiality, integrity, and logical side channels. However, precisely accounting for each bit of information flow at the hardware level can be expensive. In this work, we focus on the precision of the IFT logic. The key idea is to selectively introduce only one-sided errors (false positives); these provide a conservative and safe information flow response while reducing the complexity of the security logic. We investigate the effect of logic synthesis on the quality and complexity of hardware IFT and reveal how different logic synthesis optimizations affect the number of false positives and the design overheads of IFT logic. We propose novel techniques to further simplify the IFT logic while adding no, or only a minimal number of, false positives. Additionally, we provide a solution for quantitatively introducing false positives in order to accelerate information flow security verification. Experimental results using IWLS benchmarks show that our method can reduce the complexity of GLIFT by 14.47% while adding 0.20% false positives on average. By quantitatively introducing false positives, we can achieve up to a 55.72% speedup in verification time.
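To make the precision/complexity tradeoff concrete, here is the standard gate-level IFT (GLIFT) tracking logic for a 2-input AND gate next to a conservative simplification; the simplified version errs only by reporting flows that do not exist, which is exactly the one-sided (false positive) error discussed above.

```python
def glift_and_precise(a, b, a_t, b_t):
    """Exact GLIFT label for out = a & b: the output is tainted only if a
    tainted input can actually affect the output value."""
    return bool((a and b_t) or (b and a_t) or (a_t and b_t))

def glift_and_conservative(a, b, a_t, b_t):
    """Simplified tracking: taint propagates whenever any input is tainted.
    Cheaper logic, but introduces false positives (never false negatives)."""
    return bool(a_t or b_t)

# With a = 0 the output is 0 regardless of b, so no information flows from b;
# the precise logic reports no taint, while the conservative one flags a spurious flow.
print(glift_and_precise(0, 1, a_t=0, b_t=1))       # False
print(glift_and_conservative(0, 1, a_t=0, b_t=1))  # True
```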
Tracking and maintaining satisfactory QoE for video streaming services is becoming a greater challenge for mobile network operators than ever before. Downloading and watching video content on mobile devices is a growing trend among users, causing a demand for higher bandwidth and better provisioning throughout the network infrastructure. At the same time, popular demand for privacy has led many online streaming services to adopt end-to-end encryption, leaving providers with only a handful of indicators for identifying QoE issues. In order to address these challenges, we propose a novel methodology for detecting video streaming QoE issues from encrypted traffic. We develop predictive models for detecting different levels of QoE degradation caused by three key influence factors: stalling, the average video quality, and the quality variations. The models are then evaluated on the production network of a large-scale mobile operator, where we show that, despite encryption, our methodology is able to detect QoE problems with 72%-92% accuracy, while even higher performance is achieved when dealing with cleartext traffic.
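A toy version of the prediction pipeline (synthetic traces and hypothetical features, not the operator's production models): derive coarse features from the per-second downlink byte counts of an encrypted session and train a classifier to separate degraded sessions from smooth ones.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def session_features(trace):
    """Coarse features of a per-second byte trace: level, variability, starvation ratio."""
    t = np.asarray(trace, dtype=float)
    return [t.mean(), t.std(), float((t < 0.1 * t.mean()).mean())]

# Synthetic stand-ins for encrypted video sessions: smooth vs. stall-prone.
smooth = [rng.normal(5e5, 5e4, 120).clip(0) for _ in range(50)]
degraded = [rng.normal(2e5, 1.5e5, 120).clip(0) for _ in range(50)]

X = [session_features(s) for s in smooth + degraded]
y = [0] * 50 + [1] * 50  # 1 = degraded QoE (stalling / low quality)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([session_features(rng.normal(2e5, 1.5e5, 120).clip(0))]))  # likely [1]
```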
Physical unclonable functions (PUFs) utilize manufacturing variations of circuit elements to produce an unpredictable response to any challenge vector. An attack on a PUF aims to predict its response to all challenge vectors while only a small number of challenge-response pairs (CRPs) are known. The target PUFs in this paper include the Arbiter PUF (ArbPUF) and the Memristor Crossbar PUF (MXbarPUF). The manufacturing variations of the circuit elements in the targeted PUF can be characterized by a weight vector. We propose an optimization-theoretic attack on the target PUFs. The feasible space for a PUF's weight vector is described by a convex polytope confined by the known CRPs. The centroid of the polytope is chosen as the estimate of the actual weight vector, while new CRPs are adaptively added into the original set of known CRPs. The linear behavior of both ArbPUF and MXbarPUF is proven, which ensures that the feasible space for their weight vectors is convex. Simulations show that our approach needs 71.4% fewer known CRPs and 86.5% less time than the state-of-the-art machine-learning-based approach.
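The linearity claim can be made concrete with the standard additive delay model of the ArbPUF: each CRP constrains the weight vector to a half-space, so the known CRPs carve out a convex polytope. The sketch below uses the Chebyshev center of that polytope (computed by an LP, with the weights boxed to keep the region bounded) as a simple stand-in for the paper's centroid estimate; the PUF size and CRP count are arbitrary.

```python
import numpy as np
from scipy.optimize import linprog

def phi(c):
    """Arbiter-PUF parity features of a 0/1 challenge, with a bias term."""
    s = 1 - 2 * np.asarray(c, dtype=float)
    return np.append(np.cumprod(s[::-1])[::-1], 1.0)

rng = np.random.default_rng(3)
n = 8
w_true = rng.normal(size=n + 1)  # hidden delay-difference weights
crps = [(c, int(np.sign(w_true @ phi(c)))) for c in rng.integers(0, 2, (100, n))]

# Each CRP (c, r) gives the half-space r * (w . phi(c)) > 0. With w boxed in
# [-1, 1]^d, the Chebyshev center solves: maximize R subject to
#   -r * phi(c) . w + ||phi(c)|| * R <= 0   for every known CRP.
d = n + 1
A = np.array([np.append(-r * phi(c), np.linalg.norm(phi(c))) for c, r in crps])
res = linprog(c=np.append(np.zeros(d), -1.0), A_ub=A, b_ub=np.zeros(len(A)),
              bounds=[(-1, 1)] * d + [(0, None)])
w_est = res.x[:d]

agree = np.mean([int(np.sign(w_est @ phi(c))) == r for c, r in crps])
print(f"agreement with known CRPs: {agree:.2f}")  # 1.00 by construction
```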
This paper presents a comprehensive review of state-of-the-art research on knowledge-based user authentication, covering the security and usability aspects of the most prominent user authentication schemes: text-, PIN-, and graphical-based. From the security perspective, we analyze current threats from both a user and a service provider perspective. Furthermore, based on current practices in authentication policies, we summarize and discuss their security strengths according to widely applied security metrics. From the usability point of view, we present and discuss the usability of each authentication scheme with regard to task performance and user experience. The analysis reveals that although a plethora of alternative user authentication schemes have been proposed in the literature, and users interact differently with the various alternatives, online service providers have yet to adopt alternatives to text-based solutions. We further discuss and identify areas for further research and improved methodology with the aim of driving this research towards the design of sustainable, secure, and usable authentication approaches.
The recent growth of anonymous social network services – such as 4chan, Whisper, and Yik Yak – has brought online anonymity into the spotlight. For these services to function properly, the integrity of user anonymity must be preserved. If an attacker can determine the physical location from which an anonymous message was sent, then the attacker can potentially use side information (for example, knowledge of who lives at the location) to de-anonymize the sender of the message. In this paper, we investigate whether the popular anonymous social media application Yik Yak is susceptible to localization attacks, thereby putting user anonymity at risk. The problem is challenging because the Yik Yak application does not provide information about distances between the user and message origins, or any other message location information. We provide a comprehensive data collection and supervised machine learning methodology that does not require any reverse engineering of the Yik Yak protocol, is fully automated, and can be run remotely from anywhere. We show that we can accurately predict the locations of messages down to a small average error of 106 meters. We also devise an experiment in which each message emanates from one of nine dorm colleges on the University of California Santa Cruz campus. We are able to determine the correct dorm college that generated each message 100% of the time.
Legacy work on correcting firewall anomalies operates on the premise of creating totally disjunctive rules. Unfortunately, such solutions are impractical from an implementation point of view, as they lead to an explosion in the number of firewall rules. In related previous work, we proposed a new approach for performing assisted corrective actions which, in contrast to the state-of-the-art family of radically disjunctive approaches, does not lead to a prohibitive increase in configuration size. In this sense, we allow relaxation in the correction process by clearly distinguishing between constructive anomalies that can be tolerated and destructive anomalies that should be systematically fixed. However, a main disadvantage of that approach was its dependency on guided input from the administrator, which paradoxically introduces a new risk of human error. In order to circumvent this disadvantage, we present in this paper a Firewall Policy Query Engine (FPQE) that renders the whole process of anomaly resolution fully automated, requiring no human intervention. Instead of prompting the administrator to insert corrective actions in the proper order, FPQE executes those queries against a high-level firewall policy. We have implemented FPQE, and the first results of integrating it with our legacy anomaly resolver are promising.
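For intuition, the sketch below detects one classic destructive anomaly, shadowing, in a toy first-match rule list; it is illustrative only and far simpler than the FPQE resolution process.

```python
import ipaddress

# Toy rules over (action, src network, dst port range); first match wins.
rules = [
    ("deny",  ipaddress.ip_network("10.0.0.0/8"),     range(0, 65536)),
    ("allow", ipaddress.ip_network("10.1.0.0/16"),    range(80, 81)),    # shadowed by rule 0
    ("allow", ipaddress.ip_network("192.168.0.0/16"), range(443, 444)),
]

def shadowed(i):
    """Rule i is shadowed if an earlier rule with a different action covers it entirely."""
    act_i, net_i, ports_i = rules[i]
    for act_j, net_j, ports_j in rules[:i]:
        if (act_j != act_i and net_i.subnet_of(net_j)
                and ports_i.start >= ports_j.start and ports_i.stop <= ports_j.stop):
            return True
    return False

print([i for i in range(len(rules)) if shadowed(i)])  # [1]
```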
Embedded systems are becoming increasingly complex as designers integrate different functionalities into a single application for execution on heterogeneous hardware platforms. In this work we propose a system-level security approach in order to provide isolation of tasks without the need to trust a central authority at run-time. We discuss security requirements that can be found in complex embedded systems that use heterogeneous execution platforms, and by regulating memory access we create mechanisms that allow safe use of shared IP with direct memory access, as well as shared libraries. We also present a prototype Isolation Unit that checks memory transactions and allows for dynamic configuration of permissions.
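A minimal software model of the Isolation Unit's memory-transaction check (addresses, master IDs, and permissions are hypothetical): each master may only touch its configured windows, and the permission table itself can be reconfigured at run time.

```python
# master_id -> list of (base, size, allowed operations)
PERMS = {
    "task_a": [(0x1000_0000, 0x1000, {"r", "w"})],
    "dma_ip": [(0x2000_0000, 0x4000, {"r"})],  # shared IP granted a read-only window
}

def check(master, addr, op):
    """Allow a transaction only if it falls inside one of the master's windows."""
    return any(base <= addr < base + size and op in ops
               for base, size, ops in PERMS.get(master, []))

print(check("dma_ip", 0x2000_0010, "r"))  # True
print(check("dma_ip", 0x2000_0010, "w"))  # False: write not permitted
print(check("task_a", 0x2000_0010, "r"))  # False: outside task_a's window

PERMS["dma_ip"].append((0x3000_0000, 0x1000, {"r", "w"}))  # dynamic reconfiguration
```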
Searchable Symmetric Encryption (SSE) aims to make it possible to search over an encrypted database stored on an untrusted server while keeping both the queries and the data private, by allowing some small, controlled leakage to the server. Recent work shows that dynamic schemes – in which the data is efficiently updatable – that leak some information on updated keywords are subject to devastating adaptive attacks breaking the privacy of the queries. The only way to thwart such attacks is to design forward private schemes whose update procedure does not leak whether a newly inserted element matches previous search queries. This work proposes Sophos, a forward private SSE scheme with performance similar to existing less secure schemes, which is conceptually simpler (and also more efficient) than previous forward private constructions. In particular, it relies only on trapdoor permutations and does not use an ORAM-like construction. We also explain why Sophos is an optimal point in the security/performance tradeoff for SSE. Finally, an implementation and evaluation results demonstrate its practical efficiency.
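The trapdoor-permutation mechanism behind forward privacy can be sketched as follows (toy RSA parameters, wholly insecure, and a simplification of the actual Sophos construction): the client derives each new search-token value by inverting the permutation, which the server cannot do, while at search time the server walks the chain forward using only the public key. Future updates therefore stay unlinkable to past tokens.

```python
import hashlib

p, q = 1009, 1013                  # toy primes; real deployments use 2048-bit moduli
N, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))  # trapdoor, held by the client only

def pi(x):     return pow(x, e, N)  # public direction (server can compute)
def pi_inv(x): return pow(x, d, N)  # requires the trapdoor (client only)

st = 123456 % N                     # ST_0 for some keyword
for _ in range(3):                  # three updates: the client steps *backwards*
    st = pi_inv(st)
    loc = hashlib.sha256(str(st).encode()).hexdigest()[:16]
    print("insert at", loc)         # server learns only an unlinkable location

# At search time the client reveals the latest st; the server recovers every
# earlier location by applying pi repeatedly, yet cannot compute the next st.
assert pi(pi_inv(st)) == st
```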
As the number of devices that gain connectivity and join the category of smart objects increases every year, reaching unprecedented numbers, new challenges are imposed on our networks. While specialized solutions for certain use cases have been proposed, more flexible and scalable new approaches to networking will be required to deal with billions or trillions of smart objects connected to the Internet. With this paper, we take a step back and look at the set of basic problems posed by this group of devices. In order to develop an analysis of how these issues could be approached, we define which fundamental abstractions might help solve, or at least reduce, their impact on the network by offering support for fundamental matters such as mobility, group-based delivery, and support for distributed computing resources. Based on the concept of named objects, we propose a set of network-level solutions and show how this approach can address both scalability and functional requirements. Finally, we describe a comprehensive clean-slate network architecture (MobilityFirst) which attempts to realize the proposed capabilities.
To adapt to the rapidly evolving landscape of cyber threats, security professionals are actively exchanging Indicators of Compromise (IOC) (e.g., malware signatures, botnet IPs) through public sources (e.g., blogs, forums, tweets). Such information, often presented in articles, posts, white papers, etc., can be converted into a machine-readable OpenIOC format for automatic analysis and quick deployment to various security mechanisms like an intrusion detection system. With hundreds of thousands of sources in the wild, IOC data are produced at a high volume and velocity today, which is becoming increasingly hard for humans to manage. Efforts to automatically gather such information from unstructured text, however, are impeded by the limitations of today's Natural Language Processing (NLP) techniques, which cannot meet the high standard (in terms of accuracy and coverage) expected of IOCs that serve as direct input to a defense system. In this paper, we present iACE, an innovative solution for fully automated IOC extraction. Our approach is based upon the observation that the IOCs in technical articles are often described in a predictable way: being connected to a set of context terms (e.g., "download") through stable grammatical relations. Leveraging this observation, iACE is designed to automatically locate a putative IOC token (e.g., a zip file) and its context (e.g., "malware", "download") within the sentences of a technical article, and to further analyze their relations through a novel application of graph mining techniques. Once the grammatical connection between the tokens is found to be in line with the way an IOC is commonly presented, these tokens are extracted to generate an OpenIOC item that describes not only the indicator (e.g., a malicious zip file) but also its context (e.g., downloaded from an external source). Running on 71,000 articles collected from 45 leading technical blogs, this new approach demonstrates remarkable performance: it generated 900K OpenIOC items with a precision of 95% and a coverage of over 90%, far beyond what state-of-the-art NLP techniques and industry IOC tools can achieve, at a speed of thousands of articles per hour. Further, by correlating the IOCs mined from articles published over a 13-year span, our study sheds new light on the links across hundreds of seemingly unrelated attack instances, particularly their shared infrastructure resources, as well as the impact of such open-source threat intelligence on security protection and the evolution of attack strategies.
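A crude approximation of context-aware IOC extraction (keyword co-occurrence plus regexes), far simpler than iACE's grammatical-relation graph mining, but it shows why context matters: the same token pattern is extracted only when attack-related terms appear nearby. The context vocabulary and patterns are hypothetical.

```python
import re

CONTEXT = {"download", "downloads", "drops", "payload", "malware", "c2"}
IOC_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b|\b\S+\.(?:zip|exe)\b")

def extract(sentence):
    words = {w.strip(".,!").lower() for w in sentence.split()}
    if words & CONTEXT:              # extract only when a context term co-occurs
        return IOC_RE.findall(sentence)
    return []

print(extract("The malware downloads update.zip from 203.0.113.7."))
# ['update.zip', '203.0.113.7']
print(extract("Visit example.zip, our new domain hack!"))  # no attack context -> []
```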
Content Security Policy (CSP) is an emerging W3C standard introduced to mitigate the impact of content injection vulnerabilities on websites. We perform a systematic, large-scale analysis of four key aspects that impact the effectiveness of CSP: browser support, website adoption, correct configuration, and constant maintenance. While browser support is largely satisfactory, with the exception of a few notable issues, our analysis unveils several shortcomings relative to the other three aspects. CSP appears to have rather limited deployment as yet and, more crucially, existing policies exhibit a number of weaknesses and misconfiguration errors. Moreover, content security policies are not regularly updated to ban insecure practices and remove unintended security violations. We argue that many of these problems can be fixed by better exploiting the monitoring facilities of CSP, while other issues deserve additional research, being more deeply rooted in the CSP design.
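A small linter for a few of the well-known policy weaknesses mentioned above (the checks and messages are illustrative, not the paper's measurement methodology):

```python
def audit_csp(policy):
    """Flag common CSP misconfigurations in a policy header string."""
    directives = {}
    for part in policy.split(";"):
        tokens = part.split()
        if tokens:
            directives[tokens[0]] = tokens[1:]
    findings = []
    scripts = directives.get("script-src", directives.get("default-src", []))
    if "'unsafe-inline'" in scripts:
        findings.append("script-src allows 'unsafe-inline': inline-XSS mitigation is void")
    if "*" in scripts:
        findings.append("script-src allows any origin")
    if "default-src" not in directives:
        findings.append("no default-src fallback defined")
    return findings

print(audit_csp("default-src 'self'; script-src 'self' 'unsafe-inline'"))
# ["script-src allows 'unsafe-inline': inline-XSS mitigation is void"]
```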
In the last couple of years, organizations have demonstrated an increased willingness to participate in threat intelligence sharing platforms. The open exchange of information and knowledge regarding threats, vulnerabilities, incidents and mitigation strategies results from the organizations' growing need to protect against today's sophisticated cyber attacks. To investigate data quality challenges that might arise in threat intelligence sharing, we conducted focus group discussions with ten expert stakeholders from security operations centers of various globally operating organizations. The study addresses several factors affecting shared threat intelligence data quality at multiple levels, including collecting, processing, sharing and storing data. As expected, the study finds that the main factors affecting shared threat intelligence data quality stem from the limitations and complexities associated with integrating and consolidating shared threat intelligence from different sources while ensuring the data's usefulness for an inhomogeneous group of participants. Data quality is extremely important for shared threat intelligence. As our study has shown, there are no fundamentally new data quality issues in threat intelligence sharing. However, as threat intelligence sharing is an emerging domain and a large number of threat intelligence sharing tools are currently being rushed to market, several data quality issues – particularly related to scalability and data source integration – deserve particular attention.
Browser fingerprinting is a widely used technique to uniquely identify web users and to track their online behavior. To date, various tools have been proposed to protect users against browser fingerprinting. However, these tools have usability restrictions, as they deactivate browser features and plug-ins (like Flash) or the HTML5 canvas element. In addition, all of them provide only limited protection, as they randomize browser settings with unrealistic parameters or have methodical flaws, making them detectable by trackers. In this work we demonstrate the first anti-fingerprinting strategy that protects against Flash fingerprinting without deactivating it, provides robust and undetectable anti-canvas fingerprinting, and uses a large set of real-world data to hide the actual system and browser properties without losing usability. We discuss the methods and weaknesses of existing anti-fingerprinting tools in detail and compare them to our enhanced strategies. Our evaluation against real-world fingerprinting tools shows successful fingerprinting protection in over 99% of 70,000 browser sessions.
In this paper we describe a system that allows the real-time creation of firewall rules in response to geographic and political changes in the control plane. This allows an organization to mitigate data exfiltration threats by analyzing Border Gateway Protocol (BGP) updates and blocking packets from being routed through problematic jurisdictions. By inspecting the autonomous system paths and referencing external data sources about the autonomous systems, a BGP participant can infer the countries that traffic to a particular destination address will traverse. Based on this information, an organization can then define constraints on its egress traffic to prevent sensitive data from being sent via an untrusted region. In light of the many route leaks and BGP hijacks that occur today, this offers a new option to organizations willing to accept reduced availability in exchange for reduced risk to confidentiality. Similar to firewalls that allow organizations to block traffic originating from specific countries, our approach allows blocking outbound traffic from transiting specific jurisdictions. To illustrate the efficacy of this approach, we provide an analysis of paths to various financial services IP addresses over the course of a month from a single BGP vantage point, quantifying the frequency of path alterations that result in the traversal of new countries. We conclude with an argument for the utility of country-based egress policies that do not require the cooperation of upstream providers.
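The core decision logic reduces to mapping each ASN on the advertised AS path to a jurisdiction and rejecting routes that transit a banned one; the ASN-to-country table below is hypothetical (real deployments would derive it from registry and geolocation feeds), and the ASNs are from the documentation range.

```python
ASN_COUNTRY = {64496: "US", 64497: "DE", 64498: "XX", 64499: "NL"}  # hypothetical feed
BANNED = {"XX"}  # jurisdictions the organization does not trust

def route_allowed(as_path):
    """Accept a route only if no hop maps to a banned jurisdiction."""
    countries = {ASN_COUNTRY.get(asn, "??") for asn in as_path}
    return not (countries & BANNED)

print(route_allowed([64496, 64497, 64499]))  # True
print(route_allowed([64496, 64498, 64499]))  # False: path transits a banned country
```

Unknown ASNs map to "??" here, which is permissive; a confidentiality-first policy might instead reject any path containing an unmapped hop.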
Function Secret Sharing (FSS), introduced by Boyle et al. (Eurocrypt 2015), provides a way of additively secret-sharing a function from a given function family F. More concretely, an m-party FSS scheme splits a function f : {0,1}^n -> G, for some abelian group G, into functions f_1, ..., f_m, described by keys k_1, ..., k_m, such that f = f_1 + ... + f_m and every strict subset of the keys hides f. A Distributed Point Function (DPF) is a special case where F is the family of point functions, namely functions f_{a,b} that evaluate to b on the input a and to 0 on all other inputs. FSS schemes are useful for applications that involve privately reading from or writing to distributed databases while minimizing the amount of communication. These include different flavors of private information retrieval (PIR), as well as a recent application of DPF to large-scale anonymous messaging. We improve and extend previous results in several ways:
* Simplified FSS constructions. We introduce a tensoring operation for FSS which is used to obtain a conceptually simpler derivation of previous constructions and to present our new constructions.
* Improved 2-party DPF. We reduce the key size of the PRG-based DPF scheme of Boyle et al. roughly by a factor of 4 and optimize its computational cost. The optimized DPF significantly improves the concrete costs of 2-server PIR and related primitives.
* FSS for new function families. We present an efficient PRG-based 2-party FSS scheme for the family of decision trees, leaking only the topology of the tree and the internal node labels. We apply this towards FSS for multi-dimensional intervals. We also present a general technique for extending FSS schemes by increasing the number of parties.
* Verifiable FSS. We present efficient protocols for verifying that keys (k*_1, ..., k*_m), obtained from a potentially malicious user, are consistent with some f in F. Such verification may be critical for applications that involve private writing or voting by many users.
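For reference, the two defining conditions restated from the abstract in standard notation (nothing beyond what the text already says):

```latex
% Additive correctness of an m-party FSS scheme for f : \{0,1\}^n \to G,
% with every strict subset of the keys k_1, \dots, k_m hiding f:
\forall x \in \{0,1\}^n: \quad f(x) = \sum_{i=1}^{m} f_i(x) \quad (\text{sum taken in } G).

% The point functions shared by a DPF:
f_{a,b}(x) =
\begin{cases}
  b & \text{if } x = a,\\
  0 & \text{otherwise.}
\end{cases}
```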
Amplification DDoS attacks have gained popularity and become a serious threat to Internet participants. However, little is known about where these attacks originate, and revealing the attack sources is a non-trivial problem due to the spoofed nature of the traffic. In this paper, we present novel techniques to uncover the infrastructures behind amplification DDoS attacks. We follow a two-step approach to tackle this challenge: First, we develop a methodology to impose a fingerprint on scanners that perform the reconnaissance for amplification attacks, which allows us to link subsequent attacks back to the scanner. Our methodology attributes over 58% of attacks to a scanner with a confidence of over 99.9%. Second, we use Time-to-Live-based trilateration techniques to map scanners to the actual infrastructures launching the attacks. Using this technique, we identify 34 networks as being the source of amplification attacks at 98% certainty.
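The hop-count recovery underlying TTL-based trilateration can be sketched in a few lines: since operating systems use a handful of well-known initial TTL values, the distance in hops is the gap to the next such value (a standard heuristic; the paper combines such estimates from many vantage points, which is not shown here).

```python
COMMON_INITIAL_TTLS = (32, 64, 128, 255)  # typical OS defaults

def hop_distance(observed_ttl):
    """Estimate hops traveled, assuming the nearest initial TTL above the observation."""
    initial = min(t for t in COMMON_INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

print(hop_distance(52))   # 12 hops, assuming an initial TTL of 64
print(hop_distance(119))  # 9 hops, assuming an initial TTL of 128
```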
TLS and SSH are two of the most commonly used protocols for securing Internet traffic. Many implementations of these protocols rely on the cryptographic primitives provided by the OpenSSL library. In this work we disclose a vulnerability in OpenSSL, affecting all versions and forks (e.g., LibreSSL and BoringSSL) since roughly October 2005, which renders the implementation of the DSA signature scheme vulnerable to cache-based side-channel attacks. Exploiting the software defect, we demonstrate the first published cache-based key-recovery attacks on these protocols: 260 SSH-2 handshakes to extract a 1024/160-bit DSA host key from an OpenSSH server, and 580 TLS 1.2 handshakes to extract a 2048/256-bit DSA key from a stunnel server.
Hardware Trojan detection has emerged as a critical challenge for ensuring the security and trustworthiness of integrated circuits. The vast majority of research efforts in this area have utilized side-channel analysis for Trojan detection. Functional test generation for logic testing is a promising alternative, but it may not be helpful if a Trojan cannot be fully activated or its effect cannot be propagated to observable outputs. Side-channel analysis, on the other hand, can achieve significantly higher detection coverage for Trojans of all types and sizes, since it does not require activation/propagation of an unknown Trojan. However, it often has limited effectiveness due to poor detection sensitivity under large process variations and the small footprint of a Trojan in the side-channel signature. In this paper, we address this critical problem through a novel side-channel-aware test generation approach, based on the concept of Multiple Excitation of Rare Switching (MERS), that can significantly increase Trojan detection sensitivity. The paper makes several important contributions: i) it presents in detail the statistical test generation method, which can generate high-quality test sets for creating high relative activity in arbitrary Trojan instances; ii) it analyzes the effectiveness of the generated test sets in terms of Trojan coverage; and iii) it describes two judicious reordering methods that can further tune the test sets and greatly improve side-channel sensitivity. Simulation results demonstrate that the tests generated by MERS can significantly increase Trojan detection sensitivity, thereby making Trojan detection using side-channel analysis effective.
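As a loose illustration of the reordering idea (a toy stand-in, not the MERS statistical test generation: the "circuit" here is a deterministic stub rather than a netlist), the greedy pass below orders test vectors so that consecutive pairs toggle designated rare nodes as often as possible.

```python
import random

RARE_NODES = 8

def rare_values(vector):
    """Hypothetical rare-node responses of the stub circuit to a 16-bit vector."""
    r = random.Random(vector)  # deterministic pseudo-circuit
    return [r.random() < 0.1 for _ in range(RARE_NODES)]

def switching(v1, v2):
    """Number of rare nodes that toggle between two consecutive tests."""
    return sum(a != b for a, b in zip(rare_values(v1), rare_values(v2)))

rng = random.Random(4)
tests = rng.sample(range(1 << 16), 100)

# Greedy reordering: repeatedly append the test that toggles the most rare nodes.
ordered = [tests.pop()]
while tests:
    nxt = max(tests, key=lambda t: switching(ordered[-1], t))
    tests.remove(nxt)
    ordered.append(nxt)

print("total rare-node switching:", sum(switching(a, b) for a, b in zip(ordered, ordered[1:])))
```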