Biblio
The IETF has developed protocols that promote healthy IPv4 and IPv6 co-existence. The Happy Eyeballs (HE) algorithm, for instance, prevents bad user experience in situations where IPv6 connectivity is broken. Using an active test (happy) that measures TCP connection establishment times, we evaluate the effects of the HE algorithm. The happy test measures against the Alexa top 10K websites from 80 SamKnows probes connected to dual-stacked networks representing 58 different ASes. Using a three-year-long (2013 - 2016) dataset, we show that TCP connect times to popular websites over IPv6 have considerably improved over time. As of May 2016, 18% of these websites are faster over IPv6, with 91% of the rest at most 1 ms slower. The historical trend shows that only around 1% of the TCP connect times over IPv6 were ever above the HE timer value (300 ms), which leaves around a 2% chance for IPv4 to win a HE race towards these websites. As such, 99% of these websites prefer IPv6 connections more than 98% of the time. We show that although absolute TCP connect times (in ms) are not that far apart in the two address families, HE with a 300 ms timer value tends to prefer slower IPv6 connections in around 90% of the cases. We show that lowering the HE timer value to 150 ms gives a marginal benefit of 10% while retaining the same preference levels over IPv6.
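The race described above can be sketched as a minimal model (not the paper's measurement code): IPv6 is attempted first, and IPv4 only joins the race once the HE timer expires, so a slower IPv6 connection still wins whenever it finishes within the timer.

```python
# Simplified model of the Happy Eyeballs connection race: IPv6 is
# attempted at t=0; IPv4 is started only if IPv6 has not completed
# by the timer. Times are in milliseconds; timer defaults to 300 ms.

def happy_eyeballs_winner(v6_connect_ms, v4_connect_ms, timer_ms=300):
    """Return which address family a HE client ends up using."""
    if v6_connect_ms <= timer_ms:
        # IPv6 finishes before the fallback timer fires, so it wins
        # even when IPv4 would have been faster in absolute terms.
        return "IPv6"
    # Timer expired: IPv4 starts with a head start of timer_ms.
    return "IPv4" if timer_ms + v4_connect_ms < v6_connect_ms else "IPv6"

# A 40 ms IPv6 connection beats a 25 ms IPv4 one under a 300 ms timer:
assert happy_eyeballs_winner(40, 25) == "IPv6"
# Only when IPv6 exceeds the timer does IPv4 get a chance to race:
assert happy_eyeballs_winner(500, 30) == "IPv4"
```

This is why the abstract can bound IPv4's winning chance by the fraction of IPv6 connect times above the 300 ms timer, and why lowering the timer to 150 ms tightens that margin.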
TLS has the potential to provide strong protection against network-based attackers and mass surveillance, but many implementations take security shortcuts in order to reduce the costs of cryptographic computations and network round trips. We report the results of a nine-week study that measures the use and security impact of these shortcuts for HTTPS sites among Alexa Top Million domains. We find widespread deployment of DHE and ECDHE private value reuse, TLS session resumption, and TLS session tickets. These practices greatly reduce the protection afforded by forward secrecy: connections to 38% of Top Million HTTPS sites are vulnerable to decryption if the server is compromised up to 24 hours later, and 10% up to 30 days later, regardless of the selected cipher suite. We also investigate the practice of TLS secrets and session state being shared across domains, finding that in some cases, the theft of a single secret value can compromise connections to tens of thousands of sites. These results suggest that site operators need to better understand the tradeoffs between optimizing TLS performance and providing strong security, particularly when faced with nation-state attackers with a history of aggressive, large-scale surveillance.
The increasing complexity of cyber-attacks necessitates the design of more efficient hardware architectures for real-time Intrusion Detection Systems (IDSs). String matching is the main performance-demanding component of an IDS. An effective technique to design high-performance string matching engines is to partition the target set of strings into multiple subgroups and to use a parallel string matching hardware unit for each subgroup. This paper introduces a novel pattern grouping algorithm for heterogeneous bit-split string matching architectures. The proposed algorithm presents a reliable method to estimate the correlation between strings. The correlation factors are then used to find a preferred group for each string in a seed growing approach. Experimental results demonstrate that the proposed algorithm achieves an average of 41% reduction in memory consumption compared to the best existing approach found in the literature, while offering orders of magnitude faster execution time compared to an exhaustive search.
A key requirement for most security solutions is to provide secure cryptographic key storage in a way that will easily scale in the age of the Internet of Things. In this paper, we focus on providing such a solution based on Physical Unclonable Functions (PUFs). To this end, we focus on microelectromechanical systems (MEMS)-based gyroscopes and show, via wafer-level measurements and simulations, that it is feasible to use the physical and electrical properties of these sensors for cryptographic key generation. After identifying the most promising features, we propose a novel quantization scheme to extract bit strings from the MEMS analog measurements. We provide upper and lower bounds for the minimum entropy of the derived bit strings and fully analyze the intra- and inter-class distributions across the operation range of the MEMS device. We complement these measurements via Monte Carlo simulations based on the distributions of the parameters measured on actual devices. We also propose and evaluate a complete cryptographic key generation chain based on fuzzy extractors. We derive a full-entropy 128-bit key using the obtained min-entropy estimates, requiring 1219 bits of helper data with an (authentication) failure probability of 4 × 10⁻⁷. In addition, we propose a dedicated MEMS-PUF design that is superior to our measured sensor in terms of chip area and the quality and quantity of key seed features.
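To illustrate the general idea of extracting bits from analog PUF measurements (this is a generic sketch, not the paper's quantization scheme), one common approach maps each feature to bits by its position among quantile boundaries learned at enrollment, with Gray coding so that small analog noise flips at most one bit:

```python
# Illustrative quantizer: each analog feature is mapped to 2 bits by
# its position among per-feature quantile boundaries derived from
# enrollment data. Gray coding keeps neighboring intervals one bit
# apart, limiting the bit errors caused by measurement noise.

GRAY_2BIT = ["00", "01", "11", "10"]  # Gray-coded interval labels

def quantize_feature(value, boundaries):
    """Map one analog value to a 2-bit Gray-coded string.

    boundaries -- three thresholds splitting the feature range into
    four intervals (e.g. the 25/50/75% quantiles from enrollment).
    """
    interval = sum(value > b for b in boundaries)  # interval index 0..3
    return GRAY_2BIT[interval]

def extract_bitstring(features, boundaries_per_feature):
    return "".join(quantize_feature(v, b)
                   for v, b in zip(features, boundaries_per_feature))

# Example: two hypothetical features with their enrollment quantiles.
bits = extract_bitstring([0.42, -1.3],
                         [[0.1, 0.4, 0.7], [-2.0, -1.0, 0.0]])
assert bits == "1101"
```

The residual noise in such a bit string is what the fuzzy extractor and its helper data must then correct to reach a full-entropy key.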
Obfuscation is a mechanism used to hinder reverse engineering of programs. To cope with the large number of obfuscated programs, especially malware, reverse engineers automate the process of deobfuscation, i.e., extracting information from obfuscated programs. Deobfuscation techniques target specific obfuscation transformations, which requires reverse engineers to manually identify the transformations used by a program, in what is known as a metadata recovery attack. In this paper, we present Oedipus, a Python framework that uses machine learning classifiers, viz. decision trees and naive Bayes, to automate metadata recovery attacks against obfuscated programs. We evaluated Oedipus' performance using two datasets totaling 1960 unobfuscated C programs, which were used to generate 11,075 programs obfuscated using 30 configurations of 6 different obfuscation transformations. Our results empirically show the feasibility of using machine learning to implement metadata recovery attacks, with classification accuracies of 100% in some cases.
The Internet of Things (IoT) paradigm, in conjunction with that of smart cities, is driving toward the concept of smart buildings, i.e., "intelligent" buildings able to receive data from a network of sensors and thus adapt the environment. IoT sensors can monitor a wide range of environmental features, such as the energy consumption inside a building at a fine-grained level (e.g., for a specific wall socket). Some smart buildings already deploy energy monitoring in order to optimize energy use for good purposes (e.g., to save money, to reduce pollution). Unfortunately, such measurements raise a significant number of privacy concerns. In this paper, we investigate the feasibility of recognizing the pair laptop-user (i.e., a user using her own laptop) from the energy traces produced by her laptop. We design MTPlug, a framework that achieves this goal relying on supervised machine learning techniques for pattern recognition in multivariate time series. We present a comprehensive implementation of this system and run a thorough set of experiments. In particular, we collected data by monitoring the energy consumption of two groups of laptop users, some office employees and some intruders, for a total of 27 people. We show that our system is able to build an energy profile for a laptop user with accuracy above 80% in less than 3.5 hours of laptop usage. To the best of our knowledge, this is the first research that assesses the feasibility of profiling laptop users relying uniquely on fine-grained energy traces collected using wall-socket smart meters.
Wearable tracking devices have gained widespread usage and popularity because of the valuable services they offer, monitoring a person's health parameters and, in general, helping people take better care of themselves. Nevertheless, the security risks associated with such devices can represent a concern among consumers because of the sensitive information these devices deal with, such as sleeping patterns, eating habits, heart rate, and so on. In this paper, we analyse the key security and privacy features of two entry-level health trackers from leading vendors (Jawbone and Fitbit), exploring possible attack vectors and vulnerabilities at several system levels. The results of the analysis show how these devices are vulnerable to several attacks (perpetrated with consumer-level devices equipped with just Bluetooth and Wi-Fi) that can compromise users' data privacy and security, and they eventually call on the tracker vendors to raise the stakes against such attacks.
The security and typical attack behaviors of the Modbus/TCP industrial network communication protocol are analyzed. Data features of the traffic flow are extracted through in-depth analysis of the protocol's operation modes and abnormal behaviors, and an intrusion detection method based on the support vector machine (SVM) is designed. The method analyzes the data characteristics of abnormal communication behavior and constructs the feature input structure and detection system for the SVM algorithm by combining direct behavior feature selection with abnormal behavior pattern feature construction. The experimental results show that the method can effectively improve the detection rate of abnormal behavior and enhance the safety protection of industrial networks.
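As a hedged sketch of the kind of per-window feature vector such an SVM-based Modbus/TCP detector could consume (the field names and feature choices here are illustrative assumptions, not the paper's exact construction): a traffic window is summarized by its function-code histogram, request rate, and register-address spread, yielding a fixed-length classifier input.

```python
from collections import Counter

# Illustrative feature extraction for Modbus/TCP traffic windows.
# Each message is a dict with a 'function_code' and a register
# 'address'; a window of messages becomes one fixed-length vector.

MODBUS_FUNCTION_CODES = [1, 2, 3, 4, 5, 6, 15, 16]  # common codes

def features_from_window(messages, window_seconds):
    """Summarize one traffic window as a fixed-length feature vector."""
    counts = Counter(m["function_code"] for m in messages)
    addresses = [m["address"] for m in messages]
    vector = [counts.get(fc, 0) / max(len(messages), 1)
              for fc in MODBUS_FUNCTION_CODES]           # code histogram
    vector.append(len(messages) / window_seconds)         # request rate
    vector.append(max(addresses) - min(addresses) if addresses else 0)
    return vector  # feed to any SVM implementation as one sample

window = [{"function_code": 3, "address": 100},
          {"function_code": 3, "address": 104},
          {"function_code": 6, "address": 102}]
vec = features_from_window(window, window_seconds=10)
assert len(vec) == len(MODBUS_FUNCTION_CODES) + 2
```

Because Modbus traffic in industrial settings is highly regular, even simple features like these shift noticeably under scanning or unauthorized write attempts, which is what makes an SVM boundary effective.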
Evolution in software systems is a necessary activity that occurs due to fixing bugs, adding functionality, or improving system quality. Systems often need to be shown to comply with regulatory standards. Along with demonstrating compliance, an artifact called an assurance case is often produced to show that the system indeed satisfies the property imposed by the standard (e.g., safety, privacy, security, etc.). Since the system, the standard, and the assurance case can each be represented as a model, we propose the extension and use of traditional model management operators to aid in the reuse of parts of the assurance case when the system undergoes an evolution. Specifically, we present a model management approach that produces a partial evolved assurance case together with guidelines to help the assurance engineer in completing it. We demonstrate how our approach works on an automotive subsystem regulated by the ISO 26262 standard.
We present essential concepts of a model-based testing framework for probabilistic systems with continuous time. Markov automata are used as the underlying model. A key result of the work is the solid core of a probabilistic test theory that incorporates real-time stochastic behaviour. We connect ioco theory and hypothesis testing to infer trace probabilities. We show that our conformance relation conservatively extends ioco and discuss the meaning of quiescence in the presence of exponentially distributed time delays.
Eddy is a privacy requirements specification language that privacy analysts can use to express requirements over data practices: to collect, use, transfer, and retain personal and technical information. The language uses a simple SQL-like syntax to express whether an action is permitted or prohibited, and to restrict those statements to particular data subjects and purposes. Eddy also supports the ability to express modifications on data, including perturbation, data append, and redaction. Eddy specifications are compiled into Description Logic to automatically detect conflicting requirements and to trace data flows within and across specifications. Conflicts are highlighted, showing which rules are in conflict (expressing prohibitions and rights to perform the same action on equivalent interpretations of the same data, data subjects, or purposes) and which definitions caused the rules to conflict. Each specification can describe an organization's data practices, or the data practices of specific components in a software architecture.
Presented at NSA Science of Security Quarterly Meeting, July 2016.
Presented at the NSA Science of Security Quarterly Meeting, November 2016.
MPI includes all processes in MPI_COMM_WORLD; this is untenable for reasons of scale, resiliency, and overhead. This paper offers a new approach, extending MPI with a new concept called Sessions, which makes two key contributions: a tighter integration with the underlying runtime system; and a scalable route to communication groups. This is a fundamental change in how we organise and address MPI processes that removes well-known scalability barriers by no longer requiring the global communicator MPI_COMM_WORLD.
In the past three years, the Emotion Recognition in the Wild (EmotiW) Grand Challenge has drawn more and more attention due to its huge potential for applications. In the fourth challenge, aimed at the task of video-based emotion recognition, we propose a multi-clue emotion fusion (MCEF) framework that models human emotion from three mutually complementary sources: facial appearance texture, facial action, and audio. To extract high-level emotion features from sequential face images, we employ a CNN-RNN architecture, where the face image from each frame is first fed into the fine-tuned VGG-Face network to extract face features, and then the features of all frames are sequentially traversed by a bidirectional RNN so as to capture the dynamic changes of facial textures. To attain more accurate facial actions, a facial landmark trajectory model is proposed to explicitly learn the emotion variations of facial components. Further, audio signals are also modeled in a CNN framework by extracting low-level energy features from segmented audio clips and then stacking them as an image-like map. Finally, we fuse the results generated from the three clues to boost the performance of emotion recognition. Our proposed MCEF achieves an overall accuracy of 56.66%, a large improvement of 16.19% with respect to the baseline.
As India is digitized through the Digital India initiative, the most basic unique identity for each individual is biometrics. Since India is the second most populous nation, the database that has to be maintained is enormous. Shielding that information using present techniques has been called into question. This problem can be overcome by using cryptographic algorithms in combination with biometrics. The proposed system is therefore developed by combining multimodal biometrics (fingerprint, retina, finger vein) with a cryptographic algorithm, achieving a Genuine Acceptance Rate of 94%, a False Acceptance Rate of 1.46%, and a False Rejection Rate of 1.07%.
Additive manufacturing, also known as 3D printing, has been increasingly applied to fabricate highly intellectual property (IP)-sensitive products. However, the related IP protection issues in 3D printers are still largely underexplored. On the other hand, smartphones are equipped with rich onboard sensors and have been applied to pervasive mobile surveillance in many applications. These facts raise one critical question: is it possible for smartphones to access the side-channel signals of a 3D printer and then hack the IP information? To answer this, we perform an end-to-end study on exploring smartphone-based side-channel attacks against 3D printers. Specifically, we formulate the problem of the IP side-channel attack in 3D printing. Then, we investigate the possible acoustic and magnetic side-channel attacks using the smartphone's built-in sensors. Moreover, we explore a magnetic-enhanced side-channel attack model to accurately deduce the vital directional operations of the 3D printer. Experimental results show that by exploiting the side-channel signals collected by smartphones, we can successfully reconstruct the physical prints and their G-code with a Mean Tendency Error of 5.87% on regular designs and 9.67% on complex designs, respectively. Our study demonstrates this new and practical smartphone-based side-channel attack on compromising IP information during 3D printing.
Today's applications increasingly require finding the spatial objects nearest to a specified location that simultaneously satisfy keyword constraints. The best existing solution to such queries relies on the IR2-tree, which has some deficiencies that seriously impact system efficiency. To overcome those deficiencies, a new access method called the Spatial Inverted Index (SI) is developed, which extends the conventional inverted index to cope with multidimensional data and comes with algorithms that can answer nearest-neighbor queries with keywords in real time. The new technique thus broadens the capabilities of the conventional inverted index to multidimensional data, and range queries can likewise be answered using SI in real time.
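The core idea can be illustrated with a toy sketch (class and method names assumed, and a linear scan rather than the paper's real-time index structure): each keyword maps to the set of points carrying it, so a nearest-neighbor-with-keywords query only examines points matching every query keyword.

```python
import math
from collections import defaultdict

# Toy spatial inverted index: keyword -> ids of points carrying it.
# A nearest-neighbor query intersects the posting sets of all query
# keywords, then scans only those candidates by distance.

class SpatialInvertedIndex:
    def __init__(self):
        self.index = defaultdict(set)   # keyword -> set of point ids
        self.points = {}                # id -> (x, y)

    def add(self, pid, x, y, keywords):
        self.points[pid] = (x, y)
        for kw in keywords:
            self.index[kw].add(pid)

    def nearest(self, qx, qy, keywords):
        """Nearest point containing all query keywords, or None."""
        candidates = set.intersection(*(self.index[kw] for kw in keywords))
        return min(candidates, default=None,
                   key=lambda pid: math.dist((qx, qy), self.points[pid]))

si = SpatialInvertedIndex()
si.add("a", 0, 0, ["cafe", "wifi"])
si.add("b", 5, 5, ["cafe"])
si.add("c", 1, 1, ["cafe", "wifi"])
assert si.nearest(2, 2, ["cafe", "wifi"]) == "c"
```

A production index would organize each posting list spatially (e.g., in an R-tree) so the distance scan itself is sublinear; the sketch only shows the keyword-intersection half of the idea.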
In this article, we describe a neighbour disjoint multipath (NDM) scheme that is shown to be more resilient amidst node or link failures compared to the two well-known node disjoint and edge disjoint multipath techniques. A centralised NDM was first conceptualised in our initial published work utilising the spatial diversity among multiple paths to ensure robustness against localised poor channel quality or node failures. Here, we further introduce a distributed version of our NDM algorithm adapting to the low-power and lossy network (LLN) characteristics. We implement our distributed NDM algorithm in Contiki OS on top of LOADng—a lightweight On-demand Ad hoc Distance Vector Routing protocol. We compare this implementation's performance with a standard IPv6 Routing Protocol for Low power and Lossy Networks (RPL), and also with basic LOADng, running in the Cooja simulator. Standard performance metrics such as packet delivery ratio, end-to-end latency, overhead and average routing table size are identified for the comparison. The results and observations are provided considering a few different application traffic patterns, which serve to quantify the improvements in robustness arising from NDM. The results are confirmed by experiments using a public sensor network testbed with over 100 nodes.