Biblio
Several technologies, such as WiFi, Ethernet and power-line communications (PLC), can be used to build residential and enterprise networks. These technologies often co-exist; most networks use WiFi, and buildings are readily equipped with electrical wires that can offer a capacity of up to 1 Gbps with PLC. Yet, current networks do not exploit this rich diversity and often operate far below the available capacity. We design, implement, and evaluate EMPoWER, a system that simultaneously exploits several potentially-interfering mediums. It operates at layer 2.5, between the MAC and IP layers, and combines routing (to find multiple concurrent routes) and congestion control (to efficiently balance traffic across the routes). To optimize resource utilization and robustness, both components exploit the heterogeneous nature of the network. They are fair and efficient, and they operate only within the local area network, without affecting remote Internet hosts. We demonstrate the performance gains of EMPoWER by simulations and experiments on a 22-node testbed. We show that PLC/WiFi, benefiting from the diversity offered by wireless and electrical mediums, provides significant throughput gains (up to 10x) and improves coverage, compared to multi-channel WiFi.
We consider wireless networks in which the effects of interference are determined by the SINR model. We address the question of structuring distributed communication when stations have very limited individual capabilities. In particular, nodes do not know their geographic coordinates, neighborhoods or even the size n of the network, nor can they sense collisions. Each node is equipped only with its unique name from the range {1, ..., N}. We study the following three settings and distributed algorithms for communication problems in each of them. In the uncoordinated-start case, when one node starts an execution and other nodes are awoken by receiving messages from already awoken nodes, we present a randomized broadcast algorithm which wakes up all the nodes in O(n log² N) rounds with high probability. In the synchronized-start case, when all the nodes simultaneously start an execution, we give a randomized algorithm that computes a backbone of the network in O(Δ log⁷ N) rounds with high probability. Finally, in the partly-coordinated-start case, when a number of nodes start an execution together and other nodes are awoken by receiving messages from the already awoken nodes, we develop an algorithm that creates a backbone network in time O(n log² N + Δ log⁷ N) with high probability.
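For reference, the SINR constraint governing successful reception in this model is usually stated as below; this is the standard textbook formulation, not notation taken from the paper itself. A transmission from station v is received by station u when

$$ \mathrm{SINR}(v,u,\mathcal{T}) \;=\; \frac{P_v \, d(v,u)^{-\alpha}}{\mathcal{N} + \sum_{w \in \mathcal{T} \setminus \{v\}} P_w \, d(w,u)^{-\alpha}} \;\geq\; \beta, $$

where P_v is the transmission power, d(·,·) the distance between stations, α the path-loss exponent, 𝒩 the ambient noise, 𝒯 the set of stations transmitting in the round, and β ≥ 1 the reception threshold.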
Motivated by the impossibility of achieving fairness in secure computation [Cleve, STOC 1986], recent works study a model of fairness in which an adversarial party that aborts on receiving output is forced to pay a mutually predefined monetary penalty to every other party that did not receive the output. These works show how to design protocols for secure computation with penalties that tolerate an arbitrary number of corruptions. In this work, we improve the efficiency of protocols for secure computation with penalties in a hybrid model where parties have access to the "claim-or-refund" transaction functionality. Our first improvement is for the ladder protocol of Bentov and Kumaresan (Crypto 2014) where we improve the dependence of the script complexity of the protocol (which corresponds to miner verification load and also space on the blockchain) on the number of parties from quadratic to linear (and in particular, is completely independent of the underlying function). Our second improvement is for the see-saw protocol of Kumaresan et al. (CCS 2015) where we reduce the total number of claim-or-refund transactions and also the script complexity from quadratic to linear in the number of parties.
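As background, the "claim-or-refund" transaction functionality at the heart of these protocols can be summarised by a toy state machine: a sender deposits coins that the receiver may claim before a deadline by revealing a witness satisfying a predicate, and that are refunded otherwise. The sketch below is purely illustrative (party names, predicate and timing are made up); real instantiations are realised with Bitcoin scripts, not Python.

    import hashlib
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class ClaimOrRefund:
        """Toy model of claim-or-refund: `sender` deposits `coins`; `receiver` may
        claim them before `deadline` by revealing a witness w with phi(w) == True,
        otherwise the deposit is refunded to the sender after the deadline."""
        sender: str
        receiver: str
        coins: int
        phi: Callable[[bytes], bool]
        deadline: int

        def claim(self, witness: bytes, now: int) -> str:
            if now <= self.deadline and self.phi(witness):
                return f"{self.coins} coins paid to {self.receiver}; witness made public"
            raise ValueError("claim rejected")

        def refund(self, now: int) -> str:
            if now > self.deadline:
                return f"{self.coins} coins refunded to {self.sender}"
            raise ValueError("refund not yet available")

    # Typical predicate: the witness is a preimage of a published hash.
    h = hashlib.sha256(b"secret").digest()
    tx = ClaimOrRefund("P1", "P2", coins=5,
                       phi=lambda w: hashlib.sha256(w).digest() == h, deadline=10)
    print(tx.claim(b"secret", now=7))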
Botnets are increasingly being used for exfiltrating sensitive data from mission-critical systems. Research has shown that botnets have become extremely sophisticated and can operate in stealth mode by minimizing their host and network footprint. In order to defeat exfiltration by modern botnets, we propose a moving target defense approach for dynamically deploying detectors across a network. Specifically, we propose several strategies based on centrality measures to periodically change the placement of detectors. Our objective is to increase the attacker's effort and likelihood of detection by creating uncertainty about the location of detectors and forcing botmasters to perform additional actions in an attempt to create detector-free paths through the network. We present metrics to evaluate the proposed strategies and an algorithm to compute a lower bound on the detection probability. We validate our approach through simulations, and results confirm that the proposed solution effectively reduces the likelihood of successful exfiltration campaigns.
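To make the idea of centrality-driven, periodically changing detector placement concrete, the following sketch samples detector locations with probability proportional to betweenness centrality at each monitoring period. It is a minimal illustration using networkx, not one of the strategies or metrics evaluated in the paper; the graph, parameters and function names are assumptions.

    import random
    import networkx as nx

    def place_detectors(graph: nx.Graph, k: int, rng: random.Random) -> set:
        """Pick k detector nodes, sampling with probability proportional to
        betweenness centrality, so placements change every period while still
        favouring nodes that many exfiltration paths must cross."""
        centrality = nx.betweenness_centrality(graph)
        nodes = list(centrality)
        weights = [centrality[n] + 1e-9 for n in nodes]  # avoid all-zero weights
        chosen = set()
        while len(chosen) < min(k, len(nodes)):
            chosen.update(rng.choices(nodes, weights=weights, k=k - len(chosen)))
        return chosen

    # Example: re-randomize the detector placement over three periods.
    g = nx.barabasi_albert_graph(50, 2, seed=1)
    rng = random.Random(7)
    for period in range(3):
        print(period, sorted(place_detectors(g, k=5, rng=rng)))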
Administration of access control policies is a difficult task, especially in large organizations. We consider the problem of detecting whether administrative actions can lead to policies in which some security goals are compromised. In particular, we are interested in problems generated by modifications of policies specified as term-rewrite systems, such as adding elements to, or deleting elements from, the set of possible users or permissions. We propose to use rewriting techniques to compare the behaviors of the modified version and the original version of the policy. More precisely, we use narrowing to compute counter-examples to the equivalence of rewrite-based policies. We prove that our technique provides a sound and complete way to recursively enumerate the set of counter-examples, even when this set is not finite, or when a mistake of the administrator makes one or both systems non-terminating.
Network architectures and applications are becoming increasingly complex. Several approaches to automatically enforce configurations on devices, applications and services have been proposed, such as Policy-Based Network Management (PBNM). However, the management of enforced configurations in production environments (e.g., data centers) is a crucial and complex task; for example, a firewall configuration may need to be updated to change a set of rules. Although this task is fundamental for complex systems, few effective solutions have been proposed for monitoring and managing enforced configurations. This work proposes a novel approach to monitor and manage enforced configurations in production environments. The main contributions of this paper are a formal model to identify/generate traffic flows and to verify the enforced configurations, and a slim and transparent framework to perform the policy assessment. We have implemented and validated our approach in a virtual environment in order to evaluate different scenarios. The results demonstrate that the prototype is effective and has good performance, therefore our model can be effectively used to analyse several types of IT infrastructures. A further interesting result is that our approach is complementary to PBNM.
New Internet of Things (IoT) technologies such as Long Range (LoRa) are emerging that enable power-efficient wireless communication over very long distances. Devices typically communicate directly with a sink node, which removes the need to construct and maintain a complex multi-hop network. Given that a wide area is covered and that all devices communicate directly with a few sink nodes, a large number of nodes have to share the communication medium. For this reason, LoRa provides a range of communication options (centre frequency, spreading factor, bandwidth, coding rate) from which a transmitter can choose. Many combinations of these settings are orthogonal and provide simultaneous collision-free communications. Nevertheless, there is a limit to the number of transmitters a LoRa system can support. In this paper we investigate the capacity limits of LoRa networks. Using experiments, we develop models describing LoRa communication behaviour. We use these models to parameterise a LoRa simulation to study scalability. Our experiments show that a typical smart city deployment can support 120 nodes per 3.8 ha, which is not sufficient for future IoT deployments. LoRa networks can scale quite well, however, if they use dynamic communication parameter selection and/or multiple sinks.
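The capacity trade-offs mentioned above come from how the communication parameters set the time on air of each packet. The sketch below implements the standard Semtech SX127x time-on-air formula that LoRa simulations commonly use to model channel occupancy; it is offered as background rather than as the exact model fitted in the paper, and the default parameter values are assumptions.

    from math import ceil

    def lora_time_on_air(payload_bytes: int, sf: int, bw_hz: float = 125e3,
                         cr: int = 1, preamble_syms: int = 8,
                         explicit_header: bool = True, crc: bool = True,
                         low_dr_opt: bool = False) -> float:
        """Packet time on air in seconds, following the Semtech SX127x datasheet."""
        t_sym = (2 ** sf) / bw_hz                    # symbol duration
        t_preamble = (preamble_syms + 4.25) * t_sym  # preamble duration
        ih = 0 if explicit_header else 1
        de = 1 if low_dr_opt else 0
        num = 8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih
        payload_syms = 8 + max(ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
        return t_preamble + payload_syms * t_sym

    # Higher spreading factors extend range but occupy the channel far longer,
    # which directly limits how many nodes a single sink can serve.
    for sf in range(7, 13):
        print(f"SF{sf}: {lora_time_on_air(20, sf) * 1000:.1f} ms")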
In traditional programming courses, students have usually been at least partly graded using pen-and-paper exams. One of the problems related to such exams is that they only partially connect to the practice conducted within such courses. Testing students in a more practical environment has been constrained by the limited resources that are needed, for example, for authentication. In this work, we study whether students in a programming course can be identified in an exam setting based solely on their typing patterns. We replicate an earlier study that indicated that keystroke analysis can be used for identifying programmers. Then, we examine how a controlled machine examination setting affects the identification accuracy, i.e., whether students can be identified reliably in a machine exam based on typing profiles built with data from students' programming assignments from a course. Finally, we investigate the identification accuracy in an uncontrolled machine exam, where students can complete the exam at any time using any computer they want. Our results indicate that even though the identification accuracy deteriorates when identifying students in an exam, the accuracy is high enough to reliably identify students if the identification is not required to be exact, but the top k closest matches are regarded as correct.
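As a rough illustration of how such typing profiles can drive top-k identification, the sketch below averages digraph (key-pair) latencies into a per-student profile and ranks candidates by profile distance. The feature set, distance measure and data format are illustrative assumptions, not the replicated study's exact method.

    import math
    from collections import defaultdict

    def digraph_profile(keystrokes):
        """Average latency (ms) between consecutive key pairs.
        `keystrokes` is a list of (key, timestamp_ms) press events."""
        sums, counts = defaultdict(float), defaultdict(int)
        for (k1, t1), (k2, t2) in zip(keystrokes, keystrokes[1:]):
            sums[(k1, k2)] += t2 - t1
            counts[(k1, k2)] += 1
        return {pair: sums[pair] / counts[pair] for pair in sums}

    def profile_distance(p, q):
        """Mean absolute latency difference over the digraphs both profiles share."""
        shared = set(p) & set(q)
        if not shared:
            return math.inf
        return sum(abs(p[d] - q[d]) for d in shared) / len(shared)

    def top_k_matches(exam_profile, course_profiles, k=3):
        """Identification counts as correct if the true student is among the
        k course profiles closest to the exam profile."""
        return sorted(course_profiles,
                      key=lambda s: profile_distance(exam_profile, course_profiles[s]))[:k]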
Face recognition has attained greater importance in biometric authentication due to its non-intrusive property of identifying individuals at varying stand-off distances. Face recognition based on multi-spectral imaging has recently gained prime importance due to its ability to capture spatial and spectral information across the spectrum. Our first contribution in this paper is to use extended multi-spectral face recognition in two different age groups. The second contribution is to show empirically the performance of face recognition for the two age groups. Thus, in this paper, we developed a multi-spectral imaging sensor to capture a facial database for two different age groups (≤ 15 years and ≥ 20 years) at nine different spectral bands covering the 530 nm to 1000 nm range. We then collected a new facial image database corresponding to the two age groups, comprising 168 individuals. Extensive experimental evaluation is performed independently on the two age-group databases using four different state-of-the-art face recognition algorithms. We evaluate the verification and identification rates across individual spectral bands and the fused spectral band for the two age groups. The obtained evaluation results show a higher recognition rate for the ≥ 20 years age group than for the ≤ 15 years group, which indicates the variation in face recognition performance across age groups.
Cloud computing and its transactions have emerged as a major challenge. This paper aims to come up with an efficient and best possible way to transfer data in a cloud computing environment. This goal is achieved with the help of soft computing techniques. Of the various techniques available, such as fuzzy logic, genetic algorithms and neural networks, the paper proposes an effective method of intrusion detection using a genetic algorithm. The selection of an optimized path for data transmission proves to be an effective method in a cloud computing environment. Network path optimization increases data transmission speed, making network intrusion nearly impossible. Intruders are forced to act quickly to break into the network, which is quite a tough task for them in such a high-speed data transmission network.
In this paper, we compare the effectiveness of Hidden Markov Models (HMMs) with that of Profile Hidden Markov Models (PHMMs), where both are trained on sequences of API calls. We compare our results to static analysis using HMMs trained on sequences of opcodes, and show that dynamic analysis achieves significantly stronger results in many cases. Furthermore, in comparing our two dynamic analysis approaches, we find that using PHMMs consistently outperforms our technique based on HMMs.
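For readers unfamiliar with how an HMM scores a call sequence, the sketch below shows the scaled forward algorithm used to compute a per-trace log-likelihood; a trace can then be classified by thresholding the score under a model trained on a malware family. The parameters here are placeholders, not models trained as in the paper.

    import numpy as np

    def log_likelihood(obs, pi, A, B):
        """Scaled forward algorithm for a discrete HMM.
        obs: sequence of observation indices (e.g., API-call IDs),
        pi: initial state distribution (N,), A: transitions (N, N), B: emissions (N, M)."""
        alpha = pi * B[:, obs[0]]
        log_p = 0.0
        for t, o in enumerate(obs):
            if t > 0:
                alpha = (alpha @ A) * B[:, o]
            scale = alpha.sum()
            log_p += np.log(scale)
            alpha = alpha / scale
        return log_p

    # Toy 2-state model over 3 observation symbols; score one short call sequence.
    pi = np.array([0.6, 0.4])
    A = np.array([[0.7, 0.3], [0.2, 0.8]])
    B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
    print(log_likelihood([0, 1, 2, 2, 1], pi, A, B))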
In an attempt to coerce useful information about the behavior of new malware families, threat analysts commonly force newly collected malicious software samples to run within a sandboxed environment. The main goal is to gather intelligence that can later be leveraged to detect and enumerate new malware infections within a network. Currently, most analysis environments "blindly" execute each newly collected malware sample for a predetermined amount of time (e.g., four to five minutes). However, a large majority of malware samples that are forced through sandbox execution are simply repackaged versions of previously seen (and already analyzed) malware. Consequently, a significant amount of time may be wasted in analyzing samples that do not generate new intelligence. In this paper, we propose MAXS, a novel probabilistic multi-hypothesis testing framework for scaling execution in malware analysis environments, including bare-metal execution environments. Our main goal is to automatically recognize whether a malware sample that is undergoing dynamic analysis has likely been seen before (e.g., in a "differently packed" form), and determine if we could therefore stop its execution early while avoiding loss of valuable malware intelligence (e.g., without missing DNS queries to never-before-seen malware command-and-control domains). We have tested our prototype implementation of MAXS over two large collections of malware execution traces obtained from two distinct production-level analysis environments. Our experimental results show that using MAXS we are able to reduce malware execution time by up to 50% on average, with less than 0.3% information loss. This roughly translates into the ability to double the capacity of malware sandbox environments, thus significantly optimizing the resources dedicated to malware execution and analysis. Our results are particularly important for bare-metal execution environments, in which it is not easy to leverage the economies of scale that characterize virtual-machine or emulation based malware sandboxes. For example, MAXS could be used to significantly cut the cost of bare-metal analysis environments by reducing the hardware resources needed to analyze a predetermined daily number of new malware samples.
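The early-termination decision can be pictured as a sequential hypothesis test over the stream of behavioural events observed during execution. The sketch below uses a Wald-style sequential probability ratio test to decide between "repack of known malware" and "new sample"; it illustrates the general idea of probabilistic sequential testing and is not MAXS's actual multi-hypothesis procedure, and the match probabilities are invented.

    import math

    def sequential_decision(events, p_match_seen=0.95, p_match_new=0.40,
                            alpha=0.01, beta=0.01):
        """Wald SPRT: `events` is a stream of booleans, True when the next observed
        behaviour matches an already-analysed trace. H0 = sample already seen,
        H1 = sample is new. Returns a decision and the number of events consumed."""
        accept_new = math.log((1 - beta) / alpha)
        accept_seen = math.log(beta / (1 - alpha))
        llr, n = 0.0, 0
        for matches in events:
            n += 1
            p1 = p_match_new if matches else 1 - p_match_new
            p0 = p_match_seen if matches else 1 - p_match_seen
            llr += math.log(p1 / p0)
            if llr >= accept_new:
                return "keep running (likely new)", n
            if llr <= accept_seen:
                return "stop early (likely repack)", n
        return "undecided", n

    print(sequential_decision([True] * 12))  # a long run of matches: stop early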
Emerging and future healthcare policies are fueling an application-driven shift toward long-term monitoring of biosignals by means of embedded ultra-low-power Wireless Body Sensor Networks (WBSNs). In order to take off, these applications need new technologies that allow the development of extremely power-efficient bio-sensing nodes. The PHIDIAS project aims at unlocking the development of ultra-low-power bio-sensing WBSNs by pursuing multiple interlocking technological breakthroughs: (i) the development of new signal processing models and methods based on the recently proposed Compressive Sampling paradigm, which allows the design of energy-minimal computational architectures and analog front-ends, (ii) the efficient hardware implementation of components, both analog and digital, building upon an innovative ultra-low-power signal processing front-end, (iii) the evaluation of the global power reduction using a system-wide integration of hardware and software components focused on compressive-sensing-based bio-signal analysis. PHIDIAS brought together a mixed consortium of academic and industrial research partners representing pan-European excellence in different fields impacting the energy-aware optimization of WBSNs, including experts in signal processing and digital/analog IC design. In this way, PHIDIAS pioneered a unique holistic approach, ensuring that key breakthroughs were worked out cooperatively toward the global objective of the project.
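For context, the Compressive Sampling paradigm mentioned in item (i) acquires a signal through far fewer random linear measurements than Nyquist sampling would require and recovers it by sparse optimization; in its standard textbook form (not PHIDIAS-specific notation):

$$ y = \Phi x, \qquad \Phi \in \mathbb{R}^{M \times N},\; M \ll N, \qquad \hat{s} = \arg\min_{s} \|s\|_{1} \ \text{subject to}\ \|y - \Phi \Psi s\|_{2} \le \epsilon, \qquad \hat{x} = \Psi \hat{s}, $$

where x is the biosignal, Ψ a basis in which x is sparse, Φ the measurement matrix implemented by the analog front-end, and ŝ the recovered sparse representation.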
Cross-site scripting (XSS) attacks keep plaguing the Web. Supported by most modern browsers, Content Security Policy (CSP) instructs the browser to restrict the features and communication capabilities of code on a web page, mitigating the effects of XSS. This paper puts a spotlight on the problem of data exfiltration in the face of CSP. We bring attention to the unsettling discord in the security community about the very goals of CSP when it comes to preventing data leaks. As a consequence of this discord, we report on insecurities in known protection mechanisms that are based on assumptions about CSP which turn out not to hold in practice. To illustrate the practical impact of the discord, we perform a systematic case study of data exfiltration via DNS prefetching and resource prefetching in the face of CSP. Our study of popular browsers demonstrates that it is often possible to exfiltrate data by both resource prefetching and DNS prefetching in the face of CSP. Further, we perform a crawl of the top 10,000 Alexa domains to report on the coexistence of CSP and prefetching in practice. Finally, we discuss directions to control data exfiltration and, for the case study, propose measures ranging from immediate fixes for clients to prefetching-aware extensions of CSP.
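To make the DNS-prefetching channel concrete: the core trick is to encode the data to be exfiltrated into subdomain labels of an attacker-controlled domain, so that merely resolving the names (e.g., via injected dns-prefetch hints) leaks the data to the attacker's authoritative name server even when CSP blocks ordinary outbound requests. The helper below only shows the label encoding; the domain name is made up and this is an illustration of the channel, not the paper's experimental setup.

    import base64

    def exfil_hostnames(secret: bytes, attacker_domain: str, label_len: int = 50):
        """Split a secret into DNS-safe labels; each resulting hostname, once
        resolved by the victim's browser, delivers a chunk to the attacker's
        authoritative DNS server."""
        encoded = base64.b32encode(secret).decode().rstrip("=").lower()
        chunks = [encoded[i:i + label_len] for i in range(0, len(encoded), label_len)]
        return [f"{i}-{chunk}.{attacker_domain}" for i, chunk in enumerate(chunks)]

    # Hypothetical example with an attacker-controlled domain.
    print(exfil_hostnames(b"session=deadbeef", "attacker.example"))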
Recently, Internet-based systems have needed to change their configuration dynamically. Traditional networks have very limited ability to cope with such frequent changes, and their management and configuration procedures hinder innovation. To address this issue, Software Defined Networking (SDN) has been emerging as a new network architecture that allows for more flexibility through software-enabled network control. However, the dynamism of programmable networks also raises new security challenges that demand innovative solutions. Among the widespread mechanisms of SDN security control applications, anomaly-based IDS is an extremely effective technique for detecting both known and unknown (new) attack types. In this paper, we propose an anomaly-based intrusion detection architecture integrated on an OpenFlow switch. The proposed system can detect attacks and protect a network from many attack types, especially new attack types, using anomaly detection. We implement the proposed system in FPGA technology using a Xilinx Virtex-5 xc5vtx240t device. In this FPGA-based prototype, we integrate an anomaly-based intrusion detection technique able to defend against many attack types and anomalies in the network traffic. The experimental results show that our system achieves a detection rate exceeding 91.81% with a false-alarm rate of at most 0.55%.
Fault diagnosis in IT environments is complicated because (i) most monitors have shared specificity (a high amount of memory utilization can result from a large number of causes), (ii) it is hard to deploy and maintain enough sensors to ensure adequate coverage, and (iii) some functionality may be provided as-a-service by external parties, with limited visibility and simultaneous availability of alert data. To systematically incorporate uncertainty and to be able to fuse information from multiple sources, we propose the use of the Dempster-Shafer Theory (DST) of evidential reasoning for fault diagnosis and show its efficacy in the context of a distributed application.
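At the core of such evidence fusion is Dempster's rule of combination, which merges mass functions from independent sources while renormalizing away conflicting mass. The sketch below shows the rule itself together with a toy "shared specificity" example; the component names and masses are invented for illustration, not taken from the paper.

    from itertools import product

    def combine(m1, m2):
        """Dempster's rule: combine two mass functions over the same frame of
        discernment. Masses map frozenset hypotheses (candidate faulty components)
        to belief mass; conflicting mass is discarded and the rest renormalized."""
        combined, conflict = {}, 0.0
        for (a, ma), (b, mb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
        if conflict >= 1.0:
            raise ValueError("totally conflicting evidence")
        return {h: m / (1.0 - conflict) for h, m in combined.items()}

    # Shared specificity: a memory alert only narrows the fault down to two
    # components, while a log-based monitor points more strongly at the app tier.
    m_memory = {frozenset({"app", "batch"}): 0.7, frozenset({"app", "batch", "db"}): 0.3}
    m_logs   = {frozenset({"app"}): 0.6, frozenset({"app", "batch", "db"}): 0.4}
    print(combine(m_memory, m_logs))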
With the increasingly pervasive role of software in society, security is becoming an important quality concern that emphasizes security by design, but addressing it requires intensive specialization. Security in families of systems is even harder, as diverse variants of security solutions must be considered, possibly with different security goals per product. Furthermore, security is not a static object but a moving target, adding variability. For this, an approach to systematically address security concerns in software product lines is needed. It should consider security separately from other variability dimensions. The main challenges to realize this are: (i) expressing security and its variability, (ii) selecting the right solution, (iii) properly instantiating a solution, and (iv) verifying and validating it. In this paper, we present our research agenda towards addressing the aforementioned challenges.
Metaheuristics include a wide range of optimization algorithms. Some of them are very well known and of proven value, as they successfully solve many instances of combinatorial NP-hard problems. Examples of metaheuristics are Genetic Algorithms (GA), Simulated Annealing (SA) and Ant Colony Optimization (ACO). Our company is devoted to making steel and is the biggest steelmaker in the world. Combining several industrial processes to produce 84.6 million tonnes (public official data of 2015) involves huge effort. Metaheuristics are applied to different scenarios inside our operations to optimize different areas: logistics, production scheduling or resource assignment, saving costs and helping to reach operational excellence, which is critical for our survival in a globalized world. Rather than obtaining the globally optimal solution, the main interest of an industrial company is to have "good solutions", close to the optimum, within a very short response time, and this latter requirement is the main difference with respect to the traditional research approach of the academic world. Production is continuous and cannot be stopped or made to wait for calculations; in addition, reducing production speed implies decreasing productivity and making the facilities less competitive. Disruptions are common events, making rescheduling imperative while foremen wait for new instructions to operate. This position paper explains the problem of response time in our industrial environment, the solutions we have investigated and some results already achieved.
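As an illustration of the "good solution within a fixed response time" requirement, the sketch below runs simulated annealing over job permutations for a toy single-machine scheduling problem with due dates, stopping at a wall-clock budget and returning the best schedule found so far. The problem, cost function and parameters are invented for illustration and are not the production scheduling model discussed in the paper.

    import math
    import random
    import time

    def total_tardiness(order, jobs):
        """jobs maps job name -> (processing_time, due_date); cost is summed tardiness."""
        clock, cost = 0.0, 0.0
        for name in order:
            processing, due = jobs[name]
            clock += processing
            cost += max(0.0, clock - due)
        return cost

    def anneal_schedule(jobs, budget_s=0.5, temp=50.0, cooling=0.995, seed=0):
        """Anytime simulated annealing: swap two jobs, accept worse schedules with a
        temperature-dependent probability, and return the best order found in time."""
        rng = random.Random(seed)
        order = list(jobs)
        rng.shuffle(order)
        cost = total_tardiness(order, jobs)
        best, best_cost = order[:], cost
        deadline = time.monotonic() + budget_s
        while time.monotonic() < deadline:
            i, j = rng.sample(range(len(order)), 2)
            order[i], order[j] = order[j], order[i]
            new_cost = total_tardiness(order, jobs)
            if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / temp):
                cost = new_cost
                if cost < best_cost:
                    best, best_cost = order[:], cost
            else:
                order[i], order[j] = order[j], order[i]  # undo the rejected swap
            temp *= cooling
        return best, best_cost

    jobs = {"coil_A": (4, 6), "coil_B": (2, 3), "coil_C": (6, 15), "coil_D": (3, 7)}
    print(anneal_schedule(jobs, budget_s=0.1))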
BGP is known to have many security vulnerabilities due to the very nature of its underlying assumptions of trust among independently operated networks. Most prior efforts have focused on attacks that can be addressed using traditional cryptographic techniques to ensure authentication or integrity, e.g., BGPSec and related works. Although augmenting BGP with authentication and integrity mechanisms is critical, they are, by design, far from sufficient to prevent attacks based on manipulating the complex BGP protocol itself. In this paper, we identify two serious attacks on two of the most fundamental goals of BGP—to ensure reachability and to enable ASes to pick routes available to them according to their routing policies—even in the presence of BGPSec-like mechanisms. Our key contributions are to (1) formalize a series of critical security properties, (2) experimentally validate using commodity router implementations that BGP fails to achieve those properties, (3) quantify the extent of these vulnerabilities in the Internet's AS topology, and (4) propose simple modifications to provably ensure that those properties are satisfied. Our experiments show that, using our attacks, a single malicious AS can cause thousands of other ASes to become disconnected from thousands of other ASes for arbitrarily long, while our suggested modifications almost completely eliminate such attacks.
Software Defined Internet Exchange Points (SDXes) increase the flexibility of interdomain traffic delivery on the Internet. Yet, an SDX inherently requires multiple participants to have access to a single, shared physical switch, which creates the need for an authorization mechanism to mediate this access. In this paper, we introduce a logic and mechanism called FLANC (A Formal Logic for Authorizing Network Control), which authorizes each participant to control forwarding actions on a shared switch and also allows participants to delegate forwarding actions to other participants at the switch (e.g., a trusted third party). FLANC extends the "says" and "speaks for" logics that were previously designed for operating system objects to handle expressions involving network traffic flows. We describe FLANC, explain how participants can use it to express authorization policies for realistic interdomain routing settings, and demonstrate that it is efficient enough to operate in operational settings.
While in business and private settings the disruptive impact of advanced information and communication technology (ICT) has already been felt, the legal sector is now starting to face great disruptions due to such ICTs. Bits and pieces of innovation in the legal sector have been emerging for some time, affecting the performance of core functions and the legitimacy of public institutions. In this paper, we present our framework for enabling the smart government vision, particularly for the case of criminal justice systems, by unifying different isolated ICT-based solutions. Our framework, coined Legal Logistics, supports the well-functioning of a legal system in order to streamline the innovations in these legal systems. The framework targets the exploitation of all relevant data generated by the ICT-based solutions. As will be illustrated for the Dutch criminal justice system, the framework may be used to integrate different ICT-based innovations and to gain insights into the well-functioning of the system. Furthermore, Legal Logistics can be regarded as a roadmap towards smart and open justice.