Biblio
The popularity of cloud hosting services also brings new security challenges: it has been reported that these services are increasingly utilized by miscreants for their malicious online activities. Mitigating this emerging threat, posed by such "bad repositories" (Bars for short), is challenging due to hosting strategies that differ from traditional hosting services, the lack of direct observations of the repositories by those outside the cloud, the reluctance of cloud providers to scan their customers' repositories without consent, and the unique evasion strategies employed by the adversary. In this paper, we take the first step toward understanding and detecting this emerging threat. Using a small set of "seeds" (i.e., confirmed Bars), we identified a set of collective features of the websites they serve (e.g., attempts to hide Bars) that uniquely characterize Bars. These features were used to build a scanner that detected over 600 Bars on leading cloud platforms such as Amazon and Google, as well as 150K sites, including popular ones like groupon.com, that use them. Highlights of our study include the pivotal roles played by these repositories in malicious infrastructures; other important discoveries, never reported before, include how the adversary exploits legitimate cloud repositories and why the adversary uses Bars in the first place. These findings bring such malicious services into the spotlight and contribute to a better understanding of, and ultimately to eliminating, this new threat.
In this paper, we consider side-channel mechanisms, specifically using smart device ambient light sensors, to capture information about user computing activity. We distinguish keyboard keystrokes using only the ambient light sensor readings from a smart watch worn on the user's non-dominant hand. Additionally, we investigate the feasibility of capturing screen emanations for determining user browser usage patterns. The experimental results expose privacy and security risks, as well as the potential for new mobile user interfaces and applications.
In the last decade, the number of available cores has increased and heterogeneity has grown. In this work, we ask whether the design of current operating systems (OSes) is still appropriate if these trends continue and lead to abundantly available but heterogeneous cores, or whether they force a fundamental rethinking of how systems are designed. We argue that: (1) hiding heterogeneity behind a common hardware interface unifies, to a large extent, the control and coordination of cores and accelerators in the OS; (2) isolating at the network-on-chip, rather than with processor features (like privileged mode, memory management unit, ...), allows running untrusted code on arbitrary cores; and (3) providing OS services via protocols over the network-on-chip, instead of via system calls, makes them accessible to arbitrary types of cores as well. In summary, this turns accelerators into first-class citizens and enables a single and convenient programming environment for all cores without the need to trust any application. In this paper, we introduce network-on-chip-level isolation, present the design of our microkernel-based OS, M3, and the common hardware interface, and evaluate the performance of our prototype in comparison to Linux. Somewhat surprisingly, even without using accelerators, M3 outperforms Linux in some application-level benchmarks by more than a factor of five.
With its growth, the Internet has transformed into a global market where monetary and business activities are carried out online. Being the most important resource of this developing landscape, it is a vulnerable target and hence needs to be secured against users with malicious intent. Since the Internet has no central surveillance component, attackers occasionally find a path to bypass a system's security by using varied and evolving hacking techniques; one such class of attacks is intrusion. An intrusion is the act of breaking into a system by compromising the security arrangements it has in place. The technique of examining system data for possible intrusions is known as intrusion detection. For the last two decades, automatic intrusion detection has been an important research topic. Researchers have developed Intrusion Detection Systems (IDS) capable of detecting attacks in several available environments; the latest on the scene are machine learning approaches. Machine learning techniques are a set of evolving algorithms that learn with experience, improve their performance in situations they have already encountered, and enjoy a broad range of applications in speech recognition, pattern detection, outlier analysis, etc. Many machine learning techniques have been developed for different applications, and no universal technique works equally well on all datasets. In this work, we evaluate all the machine learning algorithms provided by Weka against the standard dataset for intrusion detection, i.e., KDDCup99. The metrics considered are false positive rate, precision, ROC area, and true positive rate.
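The study itself uses Weka's implementations; as a rough analogue, the sketch below benchmarks two classifiers on KDDCup99 with scikit-learn standing in for Weka and reports the four metrics the abstract names. The binary attack/normal framing and the train/test split are our simplifications, not the paper's protocol.

```python
import numpy as np
from sklearn.datasets import fetch_kddcup99
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, precision_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.preprocessing import OrdinalEncoder

raw = fetch_kddcup99(percent10=True)                        # 10% KDDCup99 subset
X, y = raw.data, (raw.target != b'normal.').astype(int)     # 1 = attack
cat = [1, 2, 3]                                             # protocol_type, service, flag
X[:, cat] = OrdinalEncoder().fit_transform(X[:, cat])       # encode categorical columns
X = X.astype(float)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for clf in (GaussianNB(), RandomForestClassifier(n_estimators=50, random_state=0)):
    scores = clf.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    pred = (scores >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    print(f"{type(clf).__name__}: TPR={tp / (tp + fn):.3f} "
          f"FPR={fp / (fp + tn):.3f} "
          f"precision={precision_score(y_te, pred):.3f} "
          f"AUC={roc_auc_score(y_te, scores):.3f}")
```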
Cryptocurrencies record transactions in a decentralized data structure called a blockchain. Two of the most popular cryptocurrencies, Bitcoin and Ethereum, support the feature to encode rules or scripts for processing transactions. This feature has evolved to give practical shape to the idea of smart contracts, i.e., full-fledged programs that are run on blockchains. Recently, Ethereum's smart contract system has seen steady adoption, supporting tens of thousands of contracts holding millions of dollars' worth of virtual coins. In this paper, we investigate the security of running Ethereum smart contracts in an open distributed network like those of cryptocurrencies. We introduce several new security problems in which an adversary can manipulate smart contract execution to gain profit. These bugs suggest subtle gaps in the understanding of the distributed semantics of the underlying platform. As a refinement, we propose ways to enhance the operational semantics of Ethereum to make contracts less vulnerable. For developers writing contracts for the existing Ethereum system, we build a symbolic execution tool called Oyente to find potential security bugs. Among 19,336 existing Ethereum contracts, Oyente flags 8,833 of them as vulnerable, including the TheDAO bug which led to a 60 million US dollar loss in June 2016. We also discuss the severity of other attacks for several case studies which have source code available, and confirm the attacks (which target only our own accounts) on the main Ethereum network.
IP tracking and cloaking are practices for identifying users which are used legitimately by websites to provide services and content tailored to particular users. However, it is believed that these practices are also used by malicious websites to avoid detection by anti-virus companies crawling the web to find malware. In addition, malicious websites are also believed to use IP tracking to deliver targeted malware based on a history of previous visits by users. In this paper we empirically investigate these beliefs: we collect a large dataset of suspicious URLs and identify at what level IP tracking takes place, that is, at the level of an individual address or at the level of the user's network provider or organisation (network tracking). Our results illustrate that IP tracking is used in a small subset of domains within our dataset, while no strong indication of network tracking was observed.
Defense-in-depth is an important security architecture principle that has significant application to industrial control systems (ICS), cloud services, storehouses of sensitive data, and many other areas. We claim that an ideal defense-in-depth posture is 'deep', containing many layers of security, and 'narrow', meaning that the number of node-independent attack paths is minimized. Unfortunately, accurately calculating both depth and width is difficult using standard graph algorithms because of a lack of independence between multiple vulnerability instances (i.e., if an attacker can penetrate a particular vulnerability on one host, then they can likely penetrate the same vulnerability on another host). To address this, we represent known weaknesses and vulnerabilities as a type of colored attack graph. We measure depth and width by solving the shortest color path and minimum color cut problems. We prove both of these to be NP-hard, and thus provide a suite of greedy heuristics as our solution. We then empirically apply our approach to large randomly generated networks as well as to ICS networks generated from a published ICS attack template. Lastly, we discuss how to use these results to help guide improvements to defense-in-depth postures.
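To make the shortest-color-path idea concrete, here is a small greedy heuristic of our own construction (the paper provides its own suite of heuristics): find an attack path crossing as few distinct vulnerability classes ("colors") as possible, relaxing Dijkstra so that re-entering an already-paid-for color is free. The graph, colors, and node names are illustrative.

```python
import heapq, itertools

def greedy_shortest_color_path(adj, color, src, dst):
    """adj: node -> neighbors; color: node -> vulnerability class."""
    tie = itertools.count()                          # heap tiebreaker
    heap = [(1, next(tie), src, frozenset([color[src]]), (src,))]
    best = {}                                        # node -> fewest colors seen
    while heap:
        cost, _, node, used, path = heapq.heappop(heap)
        if node == dst:
            return used, list(path)
        if best.get(node, float("inf")) <= cost:
            continue                                 # greedy pruning, not exact
        best[node] = cost
        for nxt in adj[node]:
            step = 0 if color[nxt] in used else 1    # a repeated color is free
            heapq.heappush(heap, (cost + step, next(tie), nxt,
                                  used | {color[nxt]}, path + (nxt,)))
    return None

# Toy network: both databases share one unpatched vulnerability class.
adj = {"internet": ["web"], "web": ["db1", "db2"],
       "db1": ["crown"], "db2": ["crown"], "crown": []}
color = {"internet": "none", "web": "CVE-A",
         "db1": "CVE-B", "db2": "CVE-B", "crown": "CVE-C"}
print(greedy_shortest_color_path(adj, color, "internet", "crown"))
```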
TLS has the potential to provide strong protection against network-based attackers and mass surveillance, but many implementations take security shortcuts in order to reduce the costs of cryptographic computations and network round trips. We report the results of a nine-week study that measures the use and security impact of these shortcuts for HTTPS sites among Alexa Top Million domains. We find widespread deployment of DHE and ECDHE private value reuse, TLS session resumption, and TLS session tickets. These practices greatly reduce the protection afforded by forward secrecy: connections to 38% of Top Million HTTPS sites are vulnerable to decryption if the server is compromised up to 24 hours later, and 10% up to 30 days later, regardless of the selected cipher suite. We also investigate the practice of TLS secrets and session state being shared across domains, finding that in some cases, the theft of a single secret value can compromise connections to tens of thousands of sites. These results suggest that site operators need to better understand the tradeoffs between optimizing TLS performance and providing strong security, particularly when faced with nation-state attackers with a history of aggressive, large-scale surveillance.
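One of the measured behaviors, TLS session resumption, can be probed with nothing more than the standard library. The sketch below (our own probe, not the paper's measurement pipeline) completes a handshake, reconnects with the cached session, and checks whether the server resumed it; it pins TLS 1.2 because CPython only supports client-side resumption via the session argument for TLS 1.2 and below.

```python
import socket, ssl

def resumes_session(host, port=443):
    ctx = ssl.create_default_context()
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2       # resumption API limit
    with socket.create_connection((host, port)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            session = tls.session                     # cached ticket / session ID
    with socket.create_connection((host, port)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host,
                             session=session) as tls:
            return tls.session_reused                  # True => abbreviated handshake

print(resumes_session("example.com"))
```

Measuring the 24-hour and 30-day decryption windows the paper reports would additionally require re-probing each site over time to see how long cached session material remains accepted.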
A key requirement for most security solutions is to provide secure cryptographic key storage in a way that will easily scale in the age of the Internet of Things. In this paper, we focus on providing such a solution based on Physical Unclonable Functions (PUFs). To this end, we focus on microelectromechanical systems (MEMS)-based gyroscopes and show, via wafer-level measurements and simulations, that it is feasible to use the physical and electrical properties of these sensors for cryptographic key generation. After identifying the most promising features, we propose a novel quantization scheme to extract bit strings from the MEMS analog measurements. We provide upper and lower bounds for the minimum entropy of the derived bit strings and fully analyze the intra- and inter-class distributions across the operation range of the MEMS device. We complement these measurements via Monte-Carlo simulations based on the distributions of the parameters measured on actual devices. We also propose and evaluate a complete cryptographic key generation chain based on fuzzy extractors. We derive a full-entropy 128-bit key using the obtained min-entropy estimates, requiring 1219 bits of helper data with an (authentication) failure probability of 4 × 10⁻⁷. In addition, we propose a dedicated MEMS-PUF design that is superior to our measured sensor in terms of chip area and the quality and quantity of key seed features.
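As an illustration of how analog MEMS features can become key bits, here is one plausible quantizer (our own toy, not the paper's exact scheme): model the population distribution of a feature as Gaussian, cut it into 2^b equiprobable bins, and Gray-code the bin index so that a small drift across a bin edge flips only one bit, leaving the error for the fuzzy extractor to absorb. All numbers are made up.

```python
from statistics import NormalDist

def quantize(value, mu, sigma, bits=2):
    """Map one analog feature reading to `bits` key bits."""
    n_bins = 1 << bits
    bin_idx = min(int(NormalDist(mu, sigma).cdf(value) * n_bins), n_bins - 1)
    gray = bin_idx ^ (bin_idx >> 1)     # Gray code: adjacent bins differ in one bit
    return format(gray, f"0{bits}b")

# (reading, population mean, population sigma) per feature; in practice the
# stats would come from wafer-level enrollment measurements.
readings = [(1.02, 1.00, 0.05), (0.88, 1.00, 0.05), (1.31, 1.00, 0.05)]
key_seed = "".join(quantize(v, mu, s) for v, mu, s in readings)
print(key_seed)   # fed to the fuzzy extractor, which derives the final key
```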
Debugging large networks is always a challenging task. Software Defined Networking (SDN) offers a centralized control platform where operators can statically verify network policies, instead of checking configuration files device by device. While such static verification is useful, it is not enough: due to data plane faults, packets may not be forwarded according to control plane policies, resulting in network faults at runtime. To address this issue, we present VeriDP, a tool that continuously monitors what we call control-data plane consistency, defined as the consistency between control plane policies and data plane forwarding behaviors. We prototype VeriDP with small modifications of both hardware and software SDN switches, and show that it can achieve a verification speed of 3 μs per packet, with a false negative rate as low as 0.1%, for the Stanford backbone and Internet2 topologies. In addition, when verification fails, VeriDP can localize faulty switches with a probability as high as 96% for fat-tree topologies.
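The mechanism can be pictured with a toy verifier (our abstraction; VeriDP's actual tagging happens inside the switch pipeline): switches fold their identities into a per-packet digest, and the verifier compares it against the digest of the path the control plane intended for that header.

```python
import hashlib

def path_digest(path):
    """Fold switch IDs into a compact digest, as a tag carried with the packet."""
    h = hashlib.sha256()
    for switch_id in path:
        h.update(switch_id.encode())
    return h.hexdigest()[:16]

# Expected paths derived from control plane policies (illustrative header/IDs).
control_plane = {"10.0.0.1->10.0.0.9": ["s1", "s4", "s7"]}

def verify(header, observed_digest):
    # Mismatch => the data plane deviated from the control plane policy.
    return observed_digest == path_digest(control_plane[header])

print(verify("10.0.0.1->10.0.0.9", path_digest(["s1", "s4", "s7"])))  # True
print(verify("10.0.0.1->10.0.0.9", path_digest(["s1", "s5", "s7"])))  # False: fault
```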
Knowing which part of a program processes which parts of an input can reveal the structure of the input as well as the structure of the program. In a URL http://www.example.com/path/, for instance, the protocol http, the host www.example.com, and the path path would be handled by different functions and stored in different variables. Given a set of sample inputs, we use dynamic tainting to trace the data flow of each input character, and aggregate those input fragments that would be handled by the same function into lexical and syntactical entities. The result is a context-free grammar that reflects valid input structure. In its evaluation, our AUTOGRAM prototype automatically produced readable and structurally accurate grammars for inputs like URLs, spreadsheets, or configuration files. The resulting grammars not only allow simple reverse engineering of input formats, but can also directly serve as input for test generators.
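The idea can be miniaturized as follows (a toy of our own; AUTOGRAM actually instruments Java programs and tracks taint per input character): record which function consumed which slice of the input, then turn each function into a nonterminal covering the slices it handled.

```python
def parse_url(url, trace):
    """A tiny hand-instrumented parser; `trace` records (handler, fragment)."""
    scheme, rest = url.split("://", 1)
    trace.append(("scheme", scheme))
    host, _, path = rest.partition("/")
    trace.append(("host", host))
    trace.append(("path", "/" + path))

samples = ["http://www.example.com/path/", "https://foo.org/a/b"]
rules = {}
for url in samples:
    trace = []
    parse_url(url, trace)
    for func, fragment in trace:                  # fragment handled by `func`
        rules.setdefault(func, set()).add(fragment)

print("<url> ::= <scheme> '://' <host> <path>")   # structure from call order
for func, fragments in rules.items():
    print(f"<{func}> ::= " + " | ".join(repr(f) for f in sorted(fragments)))
```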
Among the various types of role-based access control (RBAC) policies proposed in the literature, contextual policies take into account the user's location and the time at which she requests an access. The precise characterization of the context in such policies and the definition of an access decision procedure for them are non-trivial tasks, since they have to take into account the various facets of the temporal and spatial expressions occurring in these policies. Existing approaches for modeling contextual policies do not support all the various spatio-temporal concepts and often do not provide an access decision procedure. In this paper, we propose a model-driven approach to representing and checking RBAC contextual policies. We introduce GemRBAC+CTX, an extension of a generalized conceptual model for RBAC, which contains all the concepts required to model contextual policies. We formalize these policies as constraints, using the Object Constraint Language (OCL), on the GemRBAC+CTX model, as a way to operationalize the access decision for users' requests using model-driven technologies. We show the application of GemRBAC+CTX to model the RBAC contextual policies of an application developed by HITEC Luxembourg, a provider of situation-aware information management systems for emergency scenarios. The use of GemRBAC+CTX has allowed the engineers of HITEC to define several new types of contextual policies, with a fine-grained, precise description of contexts. The preliminary experimental results show the feasibility of applying our model-driven approach for making access decisions in real systems.
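In spirit (though the paper formalizes this as OCL constraints over the GemRBAC+CTX model, not as code), a contextual access decision enables a role only inside its temporal and spatial context. The following sketch uses invented role and zone names:

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class ContextualRole:
    name: str
    permissions: frozenset
    hours: tuple         # (start, end): temporal context enabling the role
    regions: frozenset   # spatial contexts enabling the role

def decide(user_roles, perm, now: time, where: str) -> bool:
    """Grant iff some role holds the permission *and* its context is active."""
    return any(perm in r.permissions
               and r.hours[0] <= now <= r.hours[1]
               and where in r.regions
               for r in user_roles)

field_op = ContextualRole("field_operator",
                          frozenset({"read_map", "send_report"}),
                          (time(8), time(20)), frozenset({"zone_A"}))
print(decide([field_op], "send_report", time(14, 30), "zone_A"))  # True
print(decide([field_op], "send_report", time(22, 0), "zone_A"))   # False: off-hours
```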
MPI includes all processes in MPI\_COMM\_WORLD; this is untenable for reasons of scale, resiliency, and overhead. This paper offers a new approach, extending MPI with a new concept called Sessions, which makes two key contributions: a tighter integration with the underlying runtime system; and a scalable route to communication groups. This is a fundamental change in how we organise and address MPI processes that removes well-known scalability barriers by no longer requiring the global communicator MPI\_COMM\_WORLD.
With the increased popularity of ubiquitous computing and connectivity, the Internet of Things (IoT) also introduces new vulnerabilities and attack vectors. While secure data collection (i.e., the upward link) has been well studied in the literature, secure data dissemination (i.e., the downward link) remains an open problem. Attribute-based encryption (ABE) and outsourced ABE have been used for secure message distribution in IoT; however, existing mechanisms suffer from extensive computation and/or privacy issues. In this paper, we explore the problem of privacy-preserving targeted broadcast in IoT. We propose two multi-cloud-based outsourced-ABE schemes, namely the parallel-cloud ABE and the chain-cloud ABE, which enable receivers to partially outsource the computationally expensive decryption operations to the clouds, while preventing user attributes from being disclosed. In particular, the proposed solution protects three types of privacy (i.e., data, attribute, and access policy privacy) by enforcing collaborations among multiple clouds. Our schemes also provide delegation verifiability, which allows receivers to verify whether the clouds have faithfully performed the outsourced operations. We extensively analyze the security guarantees of the proposed mechanisms and demonstrate the effectiveness and efficiency of our schemes with simulated resource-constrained IoT devices, which outsource operations to Amazon EC2 and Microsoft Azure.
Online conversations, such as blogs, provide a rich amount of information and opinions about popular queries. Given a query, traditional blog sites return a set of conversations often consisting of thousands of comments with complex thread structure. Since the interfaces of these blog sites do not provide any overview of the data, it becomes very difficult for the user to explore and analyze such a large amount of conversational data. In this paper, we present MultiConVis, a visual text analytics system designed to support the exploration of a collection of online conversations. Our system tightly integrates NLP techniques for topic modeling and sentiment analysis with information visualizations, taking into account the unique characteristics of online conversations. The resulting interface supports user exploration, starting from a possibly large set of conversations, then narrowing down to a subset of conversations, and eventually drilling down to the comments of a single conversation. Our evaluations, through case studies with domain experts and a formal user study with regular blog readers, illustrate the potential benefits of our approach when compared to a traditional blog reading interface.
Multipath TCP (MP-TCP) has the potential to greatly improve application performance by using multiple paths transparently. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties on the behavior of existing algorithms and motivate our algorithm Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel. We use our prototype to compare the new algorithm to existing MP-TCP algorithms.
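Schematically, the class of fluid models in question has the following shape (notation ours; the paper's precise increase and decrease rules are what define each MP-TCP variant):

```latex
% Each path r of a source has window w_r, round-trip time \tau_r, sending rate
% x_r = w_r / \tau_r, and packet-loss probability q_r. The window evolves as
\dot{w}_r \;=\; \underbrace{x_r\,(1 - q_r)\, I_r(x)}_{\text{increase per ACK}}
         \;-\; \underbrace{x_r\, q_r\, D_r(x)}_{\text{decrease per loss}} .
% A concrete pair (I_r, D_r) fixes the algorithm; the paper's design criteria
% on (I_r, D_r) are what guarantee existence, uniqueness, and stability of the
% equilibrium, and Balia corresponds to one such choice.
```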
Multiple-input multiple-output (MIMO) techniques have been the subject of increased attention in underwater acoustic communication for their ability to significantly improve channel capabilities. Recently, an under-ice MIMO acoustic communication experiment was conducted in shallow water, differing from previous works in that the water column was covered by sea ice about 40 centimeters thick. In this experiment, high-frequency MIMO signals centered at 10 kHz were transmitted from a two-element source array to a four-element vertical receive array at a 1 km range. The unique under-ice acoustic propagation environment in shallow water seems to naturally separate the data streams from different transducers, but co-channel interference remains. Time reversal followed by a single-channel decision feedback equalizer is used in this paper to compensate for the inter-symbol interference and co-channel interference. It is demonstrated that this simple receiver scheme is good enough to achieve robust performance using fewer hydrophones (i.e., two) without the explicit use of complex co-channel interference cancelation algorithms such as parallel or serial interference cancelation. Two channel estimation algorithms, based on least squares and least mean squares, are also studied for MIMO communications in this paper, and their performance is compared using the experimental data.
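Of the two channel estimators studied, least mean squares (LMS) is the simplest to sketch. Below is the standard textbook complex LMS tap update (not the exact receiver from the experiment), tracking a toy channel from known training symbols:

```python
import numpy as np

def lms_channel_estimate(x, d, n_taps=4, mu=0.01):
    """x: known transmitted symbols; d: received samples; mu: step size."""
    h = np.zeros(n_taps, dtype=complex)
    for n in range(n_taps - 1, len(x)):
        x_vec = x[n - n_taps + 1:n + 1][::-1]   # [x[n], x[n-1], ...]
        err = d[n] - h @ x_vec                  # a-priori estimation error
        h += mu * err * np.conj(x_vec)          # stochastic-gradient step
    return h

rng = np.random.default_rng(0)
true_h = np.array([0.9, 0.4 + 0.2j, 0.1, 0.0])  # toy channel impulse response
x = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], 4000) / np.sqrt(2)  # QPSK
d = np.convolve(x, true_h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
print(np.round(lms_channel_estimate(x, d), 2))  # converges toward true_h
```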
In this article, we describe a neighbour disjoint multipath (NDM) scheme that is shown to be more resilient amidst node or link failures compared to the two well-known node disjoint and edge disjoint multipath techniques. A centralised NDM was first conceptualised in our initial published work utilising the spatial diversity among multiple paths to ensure robustness against localised poor channel quality or node failures. Here, we further introduce a distributed version of our NDM algorithm adapting to the low-power and lossy network (LLN) characteristics. We implement our distributed NDM algorithm in Contiki OS on top of LOADng—a lightweight On-demand Ad hoc Distance Vector Routing protocol. We compare this implementation's performance with a standard IPv6 Routing Protocol for Low power and Lossy Networks (RPL), and also with basic LOADng, running in the Cooja simulator. Standard performance metrics such as packet delivery ratio, end-to-end latency, overhead and average routing table size are identified for the comparison. The results and observations are provided considering a few different application traffic patterns, which serve to quantify the improvements in robustness arising from NDM. The results are confirmed by experiments using a public sensor network testbed with over 100 nodes.
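A greedy reading of the neighbour-disjoint idea (simplified from NDM; our own construction, not the paper's distributed algorithm) is: after accepting a path, delete its intermediate nodes and their one-hop neighbours before searching for the next one, so accepted paths cannot share a failure or interference neighbourhood.

```python
import networkx as nx

def neighbour_disjoint_paths(G, src, dst, k=2):
    H, paths = G.copy(), []
    while len(paths) < k and nx.has_path(H, src, dst):
        path = nx.shortest_path(H, src, dst)
        paths.append(path)
        forbidden = set()
        for v in path[1:-1]:                  # intermediate nodes only
            forbidden.add(v)
            forbidden.update(G.neighbors(v))  # and their one-hop neighbourhood
        H.remove_nodes_from(forbidden - {src, dst})
    return paths

G = nx.Graph()                                # two well-separated chains
G.add_edges_from([("s", "a1"), ("a1", "a2"), ("a2", "t"),
                  ("s", "b1"), ("b1", "b2"), ("b2", "t")])
print(neighbour_disjoint_paths(G, "s", "t"))  # both chains survive the pruning
```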
Large-scale applications in data centers are composed of computers connected by a network. Traditional network switches cannot be flexibly controlled, so application developers cannot optimize the behavior of network elements to improve application performance. On the other hand, Deeply Programmable Network (DPN) switches can fully analyze packet payloads and are deeply programmable. In this paper, we focus on processing parts of application functionality in network elements to improve application performance, based on Deep Packet Inspection (DPI), i.e., analyzing packet payloads, using DPN switches. We take several applications as targets and implement some of their functions in network switches. We then compare performance with and without our method, and show that our method can significantly increase application performance.
In view of the high demand for securing access data in power systems, this paper puts forward a network data security analysis method based on DPI technology, to solve the problem of a security gateway judging the legality of network data. Since the legality of the data involves both its protocol and its contents, the method filters data through protocol matching and content detection: deep packet inspection (DPI) is used to screen the protocol, and protocol analysis is used to inspect the data contents. We implement a gateway function that allows secure data through and blocks threatening data. An example demonstrates that the method effectively guarantees the safety of access data.
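A toy version of the two-stage filter (illustrative only; the signatures and patterns below are placeholders, not the paper's rule set) makes the pipeline concrete: a payload must first match a whitelisted protocol signature, then pass a content scan.

```python
import re

PROTOCOL_SIGNATURES = {                    # DPI-style first-bytes signatures
    "http":   re.compile(rb"^(GET|POST|HEAD) "),
    "modbus": re.compile(rb"^.{2}\x00\x00", re.S),   # assumed Modbus/TCP header
}
THREAT_PATTERNS = [re.compile(rb"DROP TABLE", re.I),  # placeholder blacklist
                   re.compile(rb"<script>", re.I)]

def gateway_allows(payload: bytes) -> bool:
    # Stage 1, protocol matching: only whitelisted protocols may pass.
    if not any(sig.match(payload) for sig in PROTOCOL_SIGNATURES.values()):
        return False
    # Stage 2, content detection: block payloads carrying threat patterns.
    return not any(pat.search(payload) for pat in THREAT_PATTERNS)

print(gateway_allows(b"GET /meter/42 HTTP/1.1\r\n"))              # True
print(gateway_allows(b"GET /q?id=1;DROP TABLE users HTTP/1.1"))   # False
```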
Finding security vulnerabilities in source code as early as possible is becoming more and more essential. In this respect, vulnerability prediction models have the potential to help security assurance activities by identifying the code locations that deserve the most attention. In this paper, we investigate whether prediction models behave like milk (i.e., they turn with time) or wine (i.e., they improve with time) when used to predict future vulnerabilities. Our findings indicate that recall values are largely in favor of predictors based on older versions. However, the better recall comes at the price of much higher file inspection ratios.
With the development of the Internet, network intrusion has become more and more common. Quickly identifying and preventing network attacks is increasingly important and difficult. Machine learning techniques have already proven to be robust methods for detecting malicious activities and network threats. Ensemble-based and semi-supervised learning methods are among the areas that receive the most attention in machine learning today; however, relatively little attention has been given to combining them. To overcome this limitation, this paper proposes a novel network anomaly detection method that combines a tri-training approach with Adaboost algorithms. The bootstrap samples of tri-training are replaced by three different Adaboost algorithms to create diversity. We run 30 iterations for every simulation to obtain average results. Simulations indicate that our proposed semi-supervised Adaboost algorithm is reproducible and consistent over a different number of runs. It outperforms other state-of-the-art learning algorithms, even with only a small portion of labeled data in the training phase. Specifically, it has a very short execution time and a good balance between the detection rate and the false-alarm rate.
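The core combination can be condensed as follows (our sketch; the paper's sampling and stopping rules differ): three differently configured AdaBoost learners tri-train on mostly unlabeled flows, each pair pseudo-labeling samples for the third whenever they agree, with a majority vote at prediction time. The synthetic data stands in for network flows.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def tri_train(X_lab, y_lab, X_unlab, rounds=5):
    learners = [AdaBoostClassifier(DecisionTreeClassifier(max_depth=d),
                                   n_estimators=50, random_state=d)
                for d in (1, 2, 3)]             # diversity via base learners
    Xs, ys = [X_lab] * 3, [y_lab] * 3
    for _ in range(rounds):
        for clf, X, y in zip(learners, Xs, ys):
            clf.fit(X, y)
        for i in range(3):
            j, k = (i + 1) % 3, (i + 2) % 3
            pj = learners[j].predict(X_unlab)
            pk = learners[k].predict(X_unlab)
            agree = pj == pk                    # peers agree => pseudo-label
            Xs[i] = np.vstack([X_lab, X_unlab[agree]])
            ys[i] = np.concatenate([y_lab, pj[agree]])
    return learners

def predict(learners, X):                       # majority vote of the trio
    votes = np.stack([clf.predict(X) for clf in learners])
    return (votes.sum(axis=0) >= 2).astype(int)

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)         # stand-in for labeled flows
learners = tri_train(X[:100], y[:100], X[100:])
print((predict(learners, X[100:]) == y[100:]).mean())   # held-out accuracy
```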
We explore the use of a new way to log into a web service, such as email or social media. Using on-demand biometrics, users sign in from a browser on a computer using just their name, which sends a request to their phone for approval. Users approve this request by authenticating on their phone using their fingerprint, which completes the login in the browser. On-demand biometrics thus replace passwords or temporary access codes found in two-step verification with the ease of use of biometrics. We present the results of an interview study on the use of on-demand biometrics with a live login backend. Participants perceived our system as convenient and fast to use and also expressed their trust in fingerprint authentication to keep their accounts safe. We motivate the design of on-demand biometrics, present an analysis of participants' use and responses around general account security and authentication, and conclude with implications for designing fast and easy cross-device authentication.
While compilers offer a fair trade-off between productivity and executable performance in single-threaded execution, their optimizations remain fragile when addressing compute-intensive code for parallel architectures with deep memory hierarchies. Moreover, these optimizations operate as black boxes, impenetrable for the user, leaving them with no alternative to time-consuming and error-prone manual optimization in cases where an imprecise cost model or a weak analysis resulted in a bad optimization decision. To address this issue, we propose a technique for automatically translating an arbitrary polyhedral optimization, as used internally by the loop-level optimization frameworks of several modern compilers, into a sequence of comprehensible syntactic transformations, as long as the optimization focuses on scheduling loop iterations. This approach opens the black box of polyhedral frameworks, enabling users to examine, refine, replay, and even design complex optimizations semi-automatically in partnership with the compiler.