Biblio
Existing security mechanisms for managing Internet infrastructure resources such as IP addresses, AS numbers, BGP advertisements, and DNS mappings rely on a Public Key Infrastructure (PKI) that can potentially be compromised by state actors and Advanced Persistent Threats (APTs). Ideally, the Internet infrastructure needs a distributed and tamper-resistant resource management framework that cannot be subverted by any single entity. A secure, distributed ledger enables such a mechanism, and the blockchain is the best-known example of a distributed ledger. In this paper, we propose the use of a blockchain-based mechanism to secure the Internet's BGP and DNS infrastructure. While the blockchain has scaling issues to overcome, the key advantages of such an approach include the elimination of any PKI-like root of trust, a verifiable and distributed transaction history log, multi-signature-based authorization for enhanced security, easy extensibility and scriptable programmability to secure new types of Internet resources, and the potential for a built-in cryptocurrency. A tamper-resistant DNS infrastructure also ensures that the application-level PKI cannot be used to spoof HTTPS traffic.
This paper describes an approach for detecting the presence of domain name system (DNS) tunnels in network traffic. DNS tunneling is a common technique hackers use to establish command-and-control nodes and to exfiltrate data from networks. To generate training data sufficient to build detection models, a penetration-testing effort was employed. We extracted features from this data and trained random forest classifiers to distinguish normal DNS activity from tunneling activity. The classifiers successfully detected the presence of the tunnels we trained on, as well as four other types of tunnels that were not part of the training set.
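As a hedged illustration of the kind of pipeline this abstract describes, the sketch below trains a random forest on simple per-query features using scikit-learn. The feature choices (string entropy, query length, label counts) and the toy labeled examples are assumptions for illustration, not the paper's actual feature set or data:

```python
# Minimal sketch: classifying DNS query names as normal vs. tunneling.
# Features here (length, entropy, label structure) are illustrative only.
import math
from collections import Counter

from sklearn.ensemble import RandomForestClassifier

def entropy(s: str) -> float:
    """Shannon entropy of a string; encoded tunnel payloads tend to score high."""
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def features(qname: str) -> list:
    labels = qname.rstrip(".").split(".")
    return [len(qname), entropy(qname), len(labels), max(map(len, labels))]

# X: feature vectors, y: 0 = normal, 1 = tunnel (labels from pen-testing traffic)
X = [features(q) for q in ["www.example.com", "aGVsbG8gd29ybGQ.evil.io"]]
y = [0, 1]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
print(clf.predict([features("dGVzdCBkYXRh.evil.io")]))
```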
Consensus algorithms provide strategies to solve problems in a distributed system with the added constraint that data can only be shared between adjacent computing nodes. We find these algorithms in applications for wireless and sensor networks, spectrum sensing for cognitive radio, and even some IoT services. However, consensus-based applications are not resilient to compromised nodes sending falsified data to their neighbors, i.e., they can be the target of Byzantine attacks. Several solutions inspired by reputation-based systems, outlier detection, and model-based fault detection techniques from process control have been proposed in the literature. We review some of these solutions and propose two mitigation techniques to protect the consensus-based Network Intrusion Detection System in [1]. We analyze several implementation issues, such as computational overhead, fine-tuning of the solution parameters, impact on the convergence of the consensus phase, and the accuracy of the intrusion detection system.
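To make the attack surface concrete, here is a minimal sketch of one round of average consensus with a simple trimming filter on neighbor reports. Trimming extremes is one generic Byzantine mitigation in this literature; it is not necessarily the scheme the paper proposes, and the step size and trim count are illustrative assumptions:

```python
# One round of average consensus with a crude outlier filter: drop the
# `trim` highest and lowest neighbor reports to bound the influence of a
# Byzantine neighbor, then move toward the filtered average.
def consensus_step(own: float, neighbor_values: list, eps: float = 0.2,
                   trim: int = 1) -> float:
    vals = sorted(neighbor_values)
    kept = vals[trim:len(vals) - trim] if len(vals) > 2 * trim else vals
    # Standard consensus update over the surviving neighbor values.
    return own + eps * sum(v - own for v in kept)

x = consensus_step(1.0, [0.9, 1.1, 1.0, 42.0])  # 42.0 is a falsified report
print(x)  # stays near 1.0 despite the attack
```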
This work presents a systematic analysis of symmetric encryption modes for SSH that are in use on the Internet, providing deployment statistics, new attacks, and security proofs for widely used modes. We report deployment statistics based on two Internet-wide scans of SSH servers conducted in late 2015 and early 2016. Dropbear and OpenSSH implementations dominate in our scans. From our first scan, we found 130,980 OpenSSH servers that are still vulnerable to the CBC-mode-specific attack of Albrecht et al. (IEEE S&P 2009), while we found a further 20,000 OpenSSH servers that are vulnerable to a new attack on CBC mode that bypasses the countermeasures introduced in OpenSSH 5.2 to defeat the attack of Albrecht et al. At the same time, 886,449 Dropbear servers in our first scan are vulnerable to a variant of the original CBC-mode attack. On the positive side, we provide formal security analyses for other popular SSH encryption modes, namely ChaCha20-Poly1305, generic Encrypt-then-MAC, and AES-GCM. Our proofs hold for detailed pseudo-code descriptions of these algorithms as implemented in OpenSSH. Our proofs use a corrected and extended version of the "fragmented decryption" security model that was specifically developed for the SSH setting by Boldyreva et al. (Eurocrypt 2012). These proofs provide strong confidentiality and integrity guarantees for these alternatives to CBC-mode encryption in SSH. However, we also show that these alternatives do not meet additional, desirable notions of security (boundary-hiding under passive and active attacks, and denial-of-service resistance) that were formalised by Boldyreva et al.
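For readers unfamiliar with the generic Encrypt-then-MAC composition analyzed here, the sketch below shows its shape: encrypt the packet first, then authenticate the ciphertext together with a sequence number, as OpenSSH's `*-etm@openssh.com` modes do. This is a toy single-packet sketch with illustrative key/nonce handling, not OpenSSH's actual packet format:

```python
# Encrypt-then-MAC sketch: MAC is computed over (sequence number || ciphertext)
# and verified before any decryption takes place.
import hashlib
import hmac
import os
import struct

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

enc_key, mac_key = os.urandom(32), os.urandom(32)
iv = os.urandom(16)
seqnr = 0  # SSH binds each packet to a sequence number

def seal(plaintext: bytes) -> bytes:
    enc = Cipher(algorithms.AES(enc_key), modes.CTR(iv)).encryptor()
    ct = enc.update(plaintext) + enc.finalize()
    tag = hmac.new(mac_key, struct.pack(">I", seqnr) + ct, hashlib.sha256).digest()
    return ct + tag

def open_(packet: bytes) -> bytes:
    ct, tag = packet[:-32], packet[-32:]
    want = hmac.new(mac_key, struct.pack(">I", seqnr) + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, want):  # verify BEFORE decrypting
        raise ValueError("bad MAC")
    dec = Cipher(algorithms.AES(enc_key), modes.CTR(iv)).decryptor()
    return dec.update(ct) + dec.finalize()

print(open_(seal(b"ssh packet payload")))
```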
Two recent proposals by Bernstein and Pornin emphasize the use of deterministic signatures in DSA and its elliptic-curve-based variants. Deterministic signatures derive the required ephemeral key value in a deterministic manner from the message to be signed and the secret key, instead of using random number generators. The goal is to prevent severe security issues such as straightforward secret-key recovery from low-quality random numbers. Recent developments have raised skepticism about whether, for example, embedded or pervasive devices are able to generate randomness of sufficient quality. The main concerns stem from individual implementations lacking a sufficient entropy source and from standardized methods for random number generation with suspected backdoors. While we support the goal of deterministic signatures, we are concerned that this has a significant influence on the side-channel security of implementations. Specifically, attackers will be able to mount differential side-channel attacks on the additional use of the secret key in a cryptographic hash function to derive the deterministic ephemeral key. Previously, only a simple integer arithmetic function used to generate the second signature parameter had to be protected, which is rather straightforward; hash functions are significantly more difficult to protect. In this contribution, we systematically explain how deterministic signatures introduce this new side-channel vulnerability.
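The step the abstract identifies as the new side-channel target can be sketched compactly. The snippet below is a heavily abbreviated, illustrative version of deterministic nonce derivation in the spirit of RFC 6979 (the real scheme uses a full HMAC-DRBG loop), with a toy group order; its point is only to show where the long-term secret enters a keyed hash on every signing operation:

```python
# Simplified deterministic ephemeral-key ("nonce") derivation: the secret
# key is fed into an HMAC together with the message hash. This keyed-hash
# computation is the new differential side-channel target the paper
# describes, since it reuses the long-term secret on every signature.
import hashlib
import hmac

q = 0xFFFFFFFFFFFFFFFFFFFFFFFF99DEF836146BC9B1B4D22831  # example group order

def deterministic_k(secret_key: int, message: bytes) -> int:
    h1 = hashlib.sha256(message).digest()
    key_bytes = secret_key.to_bytes(32, "big")
    # Secret-dependent hashing: many signatures give a DPA attacker many
    # traces over the same key material.
    k = int.from_bytes(hmac.new(key_bytes, h1, hashlib.sha256).digest(), "big")
    return k % q

print(hex(deterministic_k(0x1234, b"message to sign")))
```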
Smart environments and security systems require automatic detection of human behaviors, including approaching or departing from an object. Existing human motion detection systems usually require human beings to carry special devices, which limits their applications. In this paper, we present a system called APID that detects arm reaching by analyzing backscatter communication signals from a passive RFID tag on the object. APID does not require human beings to carry any device. The idea is based on the influence of human movements on the fluctuation of the backscattered tag signals. APID is compatible with commodity off-the-shelf devices and the EPCglobal Class-1 Generation-2 protocol. In APID, a commercial RFID reader continuously queries tags by emitting RF signals and tags simply respond with their IDs. A USRP monitor passively analyzes the communication signals and reports approach and departure behaviors. We have implemented the APID system for both single-object and multi-object scenarios in both horizontal and vertical deployment modes. The experimental results show that APID can achieve high detection accuracy.
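A hedged sketch of the underlying signal-analysis idea: arm movement near the tag perturbs the backscattered signal, so a rising sliding-window variance of the received samples can flag motion events. The window size, threshold, and toy trace below are illustrative assumptions, not APID's actual parameters or classifier:

```python
# Flag windows of a received backscatter trace whose variance exceeds a
# threshold, as a stand-in for APID's richer approach/departure analysis.
import numpy as np

def motion_events(samples: np.ndarray, win: int = 64, thresh: float = 0.01):
    """Yield start indices of windows where signal variance exceeds thresh."""
    mag = np.abs(samples)  # magnitude of complex USRP baseband samples
    for i in range(0, len(mag) - win, win):
        if np.var(mag[i:i + win]) > thresh:
            yield i

# Toy trace: quiet channel, then a burst of fluctuation (arm reaching).
trace = np.concatenate([np.ones(256), 1 + 0.5 * np.random.randn(256)])
print(list(motion_events(trace.astype(complex))))
```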
Sego is a hypervisor-based system that gives strong privacy and integrity guarantees to trusted applications, even when the guest operating system is compromised or hostile. Sego verifies operating system services, like the file system, instead of replacing them. By associating trusted metadata with user data across all system devices, Sego verifies system services more efficiently than previous systems, especially services that depend on data contents. We extensively evaluate Sego's performance on real workloads and implement a kernel fault injector to validate Sego's file system-agnostic crash consistency and recovery protocol.
Mobile devices offer access to our digital lives and thus need to be protected against the risk of unauthorized physical access by applying strong authentication, which in turn adversely affects usability. The actual risk, however, depends on dynamic factors such as day and time. In this paper, we discuss the idea of using location-based risk assessment in combination with multi-modal biometrics to adjust the required level of authentication to the situational risk of unauthorized access.
The last decade has witnessed wide adoption of connected mobile devices able to capture the context of their owners from embedded sensors (GPS, Wi-Fi, Bluetooth, accelerometers). The advent of mobile and pervasive computing has enabled rich social and contextual applications, but the use of such technologies raises severe privacy issues and challenges. The privacy threats come from diverse adversaries, ranging from curious service providers and other users of the same service to eavesdroppers and curious applications running on the device. The information that can be collected from mobile device owners includes their locations, their social relationships, and their current activity. All of this, once analyzed and combined through inference, can be very telling about the users' private lives. In this talk, we will describe privacy threats in mobile and pervasive networks. We will also show how to quantify the privacy of the users of such networks and explain how information on co-location can be taken into account. We will describe the role that privacy-enhancing technologies (PETs) can play and describe some of them. We will also explain how to prevent apps from collecting too much personal data under Android. We will conclude by mentioning the privacy and security challenges raised by the quantified self and digital medicine.
Self-disclosure is rewarding and provides significant benefits for individuals, but it also involves risks, especially in social media settings. We conducted an online experiment to study the relationship between content intimacy and willingness to self-disclose in social media, and how identification (real name vs. anonymous) and audience type (social ties vs. people nearby) moderate that relationship. Content intimacy is known to regulate self-disclosure in face-to-face communication: people self-disclose less as content intimacy increases. We show that such regulation persists in online social media settings. Further, although anonymity and an audience of social ties are both known to increase self-disclosure, it is unclear whether they (1) raise the self-disclosure baseline for content of all intimacy levels, or (2) weaken intimacy's regulation effect, making people more willing to disclose intimate content. We show that intimacy always regulates self-disclosure, regardless of settings. We also show that anonymity mainly raises the self-disclosure baseline and (sometimes) weakens the regulation. On the other hand, an audience of social ties raises the baseline but strengthens the regulation. Finally, we demonstrate that anonymity has a more salient effect on content of negative valence. The results are critical to understanding the dynamics and opportunities of self-disclosure in social media services that vary in level of identification and type of audience.
We consider how the I-V characteristics of emerging transistors (particularly those sponsored by STARnet) might be employed to enhance hardware security. An emphasis of this work is to move beyond hardware implementations of physically unclonable functions (PUFs) and random number generators (RNGs). We highlight how new devices (i) may enable more sophisticated logic obfuscation for IP protection, (ii) could help to prevent fault injection attacks, and (iii) could prevent differential power analysis in lightweight cryptographic systems, among other possibilities.
Hardware Trojan detection has emerged as a critical challenge in ensuring the security and trustworthiness of integrated circuits. The vast majority of research efforts in this area have utilized side-channel analysis for Trojan detection. Functional test generation for logic testing is a promising alternative, but it may not be helpful if a Trojan cannot be fully activated or the Trojan effect cannot be propagated to observable outputs. Side-channel analysis, on the other hand, can achieve significantly higher detection coverage for Trojans of all types and sizes, since it does not require activation/propagation of an unknown Trojan. However, it often has limited effectiveness due to poor detection sensitivity under large process variations and the small footprint a Trojan leaves in the side-channel signature. In this paper, we address this critical problem through a novel side-channel-aware test generation approach, based on the concept of Multiple Excitation of Rare Switching (MERS), that can significantly increase Trojan detection sensitivity. The paper makes several important contributions: i) it presents in detail the statistical test generation method, which can generate high-quality test sets for creating high relative activity in arbitrary Trojan instances; ii) it analyzes the effectiveness of the generated test sets in terms of Trojan coverage; and iii) it describes two judicious reordering methods that can further tune the test sets and greatly improve side-channel sensitivity. Simulation results demonstrate that the tests generated by MERS can significantly increase Trojan detection sensitivity, thereby making Trojan detection using side-channel analysis effective.
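To convey the reordering intuition, the sketch below greedily orders test vectors so that consecutive pairs toggle as many rare internal nodes as possible, approximating "multiple excitation of rare switching." This is an illustrative reduction of the idea with an invented input encoding, not the paper's exact algorithm:

```python
# Greedy reordering: given each test vector's set of activated rare nodes,
# chain tests so that consecutive pairs differ in as many rare nodes as
# possible (symmetric difference ~ nodes that switch value).
def reorder_tests(tests: dict) -> list:
    """tests maps test-vector id -> set of rare nodes it activates."""
    remaining = dict(tests)
    order = [remaining.popitem()[0]]
    while remaining:
        prev = tests[order[-1]]
        nxt = max(remaining, key=lambda t: len(tests[t] ^ prev))
        order.append(nxt)
        del remaining[nxt]
    return order

print(reorder_tests({"t1": {1, 2}, "t2": {2, 3}, "t3": {5, 6}}))
```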
The World Wide Web has become the most common platform for building applications and delivering content. Yet despite years of research, the web continues to face severe security challenges related to data integrity and confidentiality. Rather than continuing the exploit-and-patch cycle, we propose addressing these challenges at an architectural level, by supplementing the web's existing connection-based and server-based security models with a new approach: content-based security. With this approach, content is directly signed and encrypted at rest, enabling it to be delivered via any path and then validated by the browser. We explore how this new architectural approach can be applied to the web and analyze its security benefits. We then discuss a broad research agenda to realize this vision and the challenges that must be overcome.
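A minimal sketch of the content-based security idea follows: content is signed once at rest, so the receiver can validate it no matter which path, cache, or mirror delivered it. The bundle format and key handling here are invented placeholders (using Ed25519 from the Python `cryptography` library), not a proposed web standard:

```python
# Publisher signs content itself, independent of any TLS connection; the
# browser validates against the publisher's known public key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

publisher_key = Ed25519PrivateKey.generate()
content = b"<html>...article body...</html>"

# Hypothetical "bundle": the content plus a detached signature over it.
bundle = {"body": content.hex(),
          "sig": publisher_key.sign(content).hex()}

def validate(bundle: dict, public_key) -> bytes:
    body = bytes.fromhex(bundle["body"])
    public_key.verify(bytes.fromhex(bundle["sig"]), body)  # raises if forged
    return body

# Works whether the bundle arrived via CDN, peer, or an untrusted mirror.
print(validate(bundle, publisher_key.public_key())[:6])
```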
This paper introduces the design and implementation of a security scheme for the Internet of Things (IoT) based on ECQV implicit certificates and the Datagram Transport Layer Security (DTLS) protocol. In the proposed scheme, the elliptic-curve-based ECQV implicit certificate plays a key role, allowing mutual authentication and key establishment between two resource-constrained IoT devices. We present how IoT devices obtain ECQV implicit certificates and use them for authenticated key exchange in DTLS. An evaluation of the implementation's execution time is also conducted to assess the efficiency of the solution.
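For context, the reconstruction step that makes ECQV certificates "implicit" can be stated compactly. Following the standard SEC 4 formulation (encoding and validation details elided), the certificate carries a public reconstruction point rather than the public key itself:

```latex
% ECQV key reconstruction: anyone holding Cert_U and the CA public key
% Q_CA can recompute U's public key Q_U from the reconstruction point P_U
% embedded in the certificate; only U can form the matching private key
% d_U from its ephemeral secret k_U and the CA's response r (n = group order).
e = H(\mathrm{Cert}_U), \qquad
Q_U = e \cdot P_U + Q_{\mathrm{CA}}, \qquad
d_U = e \cdot k_U + r \pmod{n}
```

Because the public key is recomputed rather than transmitted and signed explicitly, the certificate is far smaller than an X.509 one, which is what makes the scheme attractive for resource-constrained devices.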
Large-scale biomedical research projects involve the analysis of huge amounts of genomic data owned by different data owners. Collecting and storing genomic data is sometimes beyond the capability of a single organization, and genomic data sharing is a feasible solution to this problem. These scenarios can be generalized into the problem of aggregating data distributed among multiple databases and owned by different data owners, while guaranteeing that an adversary cannot learn anything about the data or the individual contribution of each party towards the final output of the computation. In this paper, we propose a practical solution for secure sharing and computation of genomic data. We adopt the Paillier cryptosystem and order-preserving encryption to securely execute the count query and the ranked query. Experimental results demonstrate that the computation time is realistic enough to make our system adoptable in the real world.
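The property that makes Paillier suitable for the count query is its additive homomorphism: multiplying ciphertexts yields an encryption of the sum of the plaintexts. The toy sketch below (textbook Paillier with insecure toy parameters, not the paper's implementation) shows how an aggregator can total 0/1 match indicators from several data owners without decrypting any individual value:

```python
# Textbook Paillier with toy primes (NOT secure): enc(a)*enc(b) = enc(a+b).
import random

p, q = 293, 433                      # toy primes; real keys use ~1024-bit primes
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1)              # phi(n); the lcm variant works equally here
mu = pow(lam, -1, n)

def enc(m: int) -> int:
    r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Three data owners report whether their record matches the query.
indicators = [1, 0, 1]
encrypted_count = 1
for c in (enc(b) for b in indicators):
    encrypted_count = (encrypted_count * c) % n2  # product = sum of plaintexts
print(dec(encrypted_count))                       # -> 2
```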
Mobile ad hoc networks (MANETs) are a popular network technology for rapid deployment in critical situations. Because the network is ad hoc by nature, a number of security issues exist. To investigate MANET security, we surveyed a number of research articles and observed that most attacks succeed because of weaknesses in the routing methodology. To secure such networks, we propose an opinion-based trust model that operates on network properties. The model combines two techniques: trust calculation, which identifies the most trustworthy nodes, and opinion evaluation, which finds the most secure route to the destination. In experiments, the results are compared with a traditional trust-based security approach; the proposed model performs better on all evaluated parameters, making it better suited for secure routing in MANETs.
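The abstract does not give the model's formulas, but a generic opinion-based trust computation of the kind it gestures at can be sketched as follows; the weighting scheme and parameter names are assumptions for illustration only:

```python
# Generic trust score: blend a node's own direct observations of a peer
# with neighbor opinions, each opinion weighted by how much the node
# trusts the neighbor giving it.
def trust(direct: float, opinions: list, alpha: float = 0.6) -> float:
    """direct: own observed success ratio in [0, 1];
    opinions: (neighbor_trustworthiness, neighbor_opinion) pairs."""
    if opinions:
        w = sum(t for t, _ in opinions)
        indirect = sum(t * o for t, o in opinions) / w if w else direct
    else:
        indirect = direct
    return alpha * direct + (1 - alpha) * indirect

# Route selection would prefer next hops with the highest combined trust.
print(trust(0.9, [(0.8, 0.7), (0.5, 0.95)]))
```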
In this special session, members of the ACM Joint Task Force on Cyber Education to Develop Undergraduate Curricular Guidance will provide an overview of the task force mission, objectives, and work plan. After the overview, task force members will engage session participants in the curricular development process.
When people use social applications and services, their privacy is exposed to potentially serious threats. In this article, we present a novel, robust, and effective de-anonymization attack on mobility trace data and social data. First, we design a Unified Similarity (US) measurement that takes into account local and global structural characteristics of the data, information obtained from auxiliary data, and knowledge inherited from ongoing de-anonymization results. By analyzing this measurement on real datasets, we find that some data can be de-anonymized accurately while the rest can be de-anonymized only at a coarse granularity. Utilizing this property, we present a US-based De-Anonymization (DA) framework, which iteratively de-anonymizes data with an accuracy guarantee. Then, to de-anonymize large-scale data without knowledge of the overlap size between the anonymized data and the auxiliary data, we generalize DA to an Adaptive De-Anonymization (ADA) framework. By smartly working on two core matching subgraphs, ADA achieves high de-anonymization accuracy and reduces computational overhead. Finally, we examine the presented de-anonymization attack on three well-known mobility traces (St Andrews, Infocom06, and Smallblue) and three social datasets (ArnetMiner, Google+, and Facebook). The experimental results demonstrate that the presented de-anonymization framework is very effective and robust to noise. The source code and employed datasets are now publicly available at SecGraph [2015].
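To illustrate the iterative flavor of such a framework, the sketch below greedily matches nodes of an anonymized graph to an auxiliary graph, scoring pairs by a crude structural similarity (already-mapped common neighbors, penalized by degree difference). This toy score is a stand-in for the paper's much richer Unified Similarity, and the seed-based greedy loop is a simplification of DA/ADA:

```python
# Iterative seed-based graph de-anonymization sketch using networkx.
import networkx as nx

def de_anonymize(g_anon: nx.Graph, g_aux: nx.Graph, seeds: dict) -> dict:
    mapping = dict(seeds)  # known anonymized-node -> auxiliary-node pairs
    while True:
        best, best_score = None, 0.0
        for u in g_anon:
            if u in mapping:
                continue
            for v in g_aux:
                if v in mapping.values():
                    continue
                # Neighbors of u whose mapped image is adjacent to v.
                common = sum(1 for w in g_anon[u]
                             if mapping.get(w) in g_aux[v])
                score = common / (1 + abs(g_anon.degree(u) - g_aux.degree(v)))
                if score > best_score:
                    best, best_score = (u, v), score
        if best is None:           # no candidate pair left with positive score
            return mapping
        mapping[best[0]] = best[1]  # accept the best pair, then iterate

g1, g2 = nx.path_graph(4), nx.path_graph(4)
print(de_anonymize(g1, g2, {0: 0}))  # recovers the identity mapping
```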
Attack graphs are a powerful modeling technique with which to explore the attack surface of a system. However, they can be difficult to generate due to the exponential growth of the state space, often making exhaustive search impractical. This paper discusses an approach for generating large attack graphs with an emphasis on scalable generation over a distributed system. First, a serial algorithm is presented, highlighting bottlenecks and opportunities to exploit inherent concurrency in the generation process. Then a strategy to parallelize this process is presented. Finally, we discuss plans for future work to implement the parallel algorithm using a hybrid distributed/shared-memory programming model on a heterogeneous compute-node cluster.
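The serial generation loop typically looks like a worklist exploration of the attack state space: from each reachable state, apply every enabled exploit and enqueue any new state. The encoding of states and exploits below is an illustrative assumption (the paper's concrete model is not given in the abstract); the inner loop over exploits is the natural seam for the parallelization it discusses:

```python
# Worklist-based attack graph generation over condition-set states.
from collections import deque

def generate_attack_graph(initial: frozenset, exploits: list):
    """exploits: (preconditions, postcondition) pairs over condition sets."""
    states, edges = {initial}, []
    work = deque([initial])
    while work:
        state = work.popleft()
        for pre, post in exploits:
            if pre <= state and post not in state:  # enabled, adds new effect
                nxt = frozenset(state | {post})
                edges.append((state, (pre, post), nxt))
                if nxt not in states:
                    states.add(nxt)
                    work.append(nxt)
    return states, edges

exploits = [(frozenset({"net_access"}), "user_on_host"),
            (frozenset({"user_on_host"}), "root_on_host")]
print(generate_attack_graph(frozenset({"net_access"}), exploits)[1])
```

The exponential blow-up the abstract mentions shows up directly here: the worklist can grow with the powerset of conditions, which is what motivates distributing it.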
Poster presented at the 2017 Science of Security UIUC Lablet Summer Internship Poster Session held on July 27, 2017 in Urbana, IL.
By connecting devices, people, vehicles, and infrastructure everywhere in a city, governments and their partners can improve community wellbeing and other economic and financial aspects (e.g., cost and energy savings). Nonetheless, smart cities are complex ecosystems that comprise many different stakeholders (network operators, managed service providers, logistics centers, and so on) who must work together to provide the best services and unlock the commercial potential of the IoT. This is one of the major challenges facing today's smart city movement, and more generally the IoT as a whole. Indeed, while new smart connected objects hit the market every day, they mostly feed "vertical silos" (e.g., vertical apps, siloed apps) that are closed to the rest of the IoT, hampering developers from producing new added value across multiple platforms. Within this context, the contribution of this paper is twofold: (i) it presents the EU vision and ongoing activities to overcome the problem of vertical silos; (ii) it introduces recent IoT standards used as part of a recent Horizon 2020 IoT project to address this problem. The implementation of those standards for enhanced sporting event management in a smart city/government context (FIFA World Cup 2022) is developed, presented, and evaluated as a proof of concept.
In this workshop, participants coming from a variety of disciplinary backgrounds and countries (China, South Korea, the EU, and the US) will present their country's cyber security initiatives and challenges. Following the presentations, participants will discuss current trends, lessons learned in implementing the initiatives, and international collaboration. The workshop will culminate in setting an agenda for future collaborative studies in cyber security.
The Internet of Things (IoT) connects the physical world seamlessly and provides tremendous opportunities to a wide range of applications. However, potential risks exist when an IoT system collects local sensor data and uploads it to the Cloud. Private data leakage can be severe with a curious database administrator or malicious hackers who compromise the Cloud. In this demo, we address the problem of guaranteeing user data privacy and security using a compressive-sensing-based cryptographic method. We present CScrypt, a compressive-sensing-based encryption engine for Cloud-enabled IoT systems that secures the interaction between IoT devices and the Cloud. Our system exploits the fact that each individual's biometric data can be trained into a unique dictionary, which serves as an encryption key while simultaneously compressing the original data. We will demonstrate a functioning prototype of our system on a live data stream at the conference.
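A toy sketch of the principle follows: a secret sensing matrix (standing in here for CScrypt's per-user learned dictionary) acts as the key, and taking measurements simultaneously compresses the signal. Recovery uses orthogonal matching pursuit from scikit-learn; all dimensions, the sparsity level, and the use of a seeded random matrix are illustrative assumptions:

```python
# Compressive-sensing "encryption" sketch: y = A @ x compresses and hides x;
# recovery requires knowing the secret matrix A and x's sparsity.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(seed=42)          # seed plays the role of the key
n, m, k = 64, 24, 3                           # signal dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)  # secret sensing matrix

x = np.zeros(n)                               # k-sparse "sensor reading"
x[[3, 17, 40]] = [1.5, -2.0, 0.7]

y = A @ x                                     # encrypt + compress in one step

# Receiver (knows A): sparse recovery; an eavesdropper without A cannot solve.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(A, y)
print(np.flatnonzero(omp.coef_), np.round(omp.coef_[omp.coef_ != 0], 2))
```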
Code coverage is a widely used measure to determine how thoroughly an application is tested. There are many tools available for different languages. However, to the best of our knowledge, most of them focus on unit testing and ignore end-to-end tests such as UI or web tests. Furthermore, there is no support for determining the code coverage of transcompiled cross-platform applications. This kind of application is written in one language, but compiled to and executed in a different programming language, and may run on a different platform. In this paper, we propose a new code coverage testing method that calculates the code coverage of any kind of test (unit, integration, or UI/web test) for any type of (transcompiled) application (desktop, web, or mobile). Developers obtain information about which parts of the source code are uncovered by tests. The basis of our approach is generic and may be applied to numerous programming languages, since it operates on an abstract syntax tree. We present our approach for applications developed in Java and evaluate our tool on a web application created with Google Web Toolkit, on standard desktop applications, and on some small Java applications that use the Swing library to create user interfaces. Our results show that our tool is able to judge the code coverage of any kind of test. In particular, our tool is independent of the unit or UI/web test framework in use. The runtime performance is promising, although it is not as fast as existing tools in the area of unit testing.
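The language-neutral idea (instrument the source AST with coverage probes before compilation, so hits are recorded regardless of what the code is compiled to or which test framework drives it) can be illustrated with Python's `ast` module. The paper targets Java/GWT; this is an analogy under that assumption, not their tool, and it instruments only the top-level statements of each function for brevity:

```python
# Insert a coverage probe before each top-level statement of every function,
# then run a "test" and report which source lines were hit.
import ast

SOURCE = """
def grade(score):
    if score >= 90:
        return "A"
    return "B"
"""

COVERED = set()

class Probe(ast.NodeTransformer):
    def visit_FunctionDef(self, node):
        self.generic_visit(node)
        probes = [ast.parse(f"COVERED.add({stmt.lineno})").body[0]
                  for stmt in node.body]
        # Interleave a probe before each original statement.
        node.body = [x for pair in zip(probes, node.body) for x in pair]
        return node

tree = ast.fix_missing_locations(Probe().visit(ast.parse(SOURCE)))
ns = {"COVERED": COVERED}
exec(compile(tree, "<instrumented>", "exec"), ns)
ns["grade"](95)            # run a "test"
print(sorted(COVERED))     # source line numbers executed
```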
The validation of simulation models (e.g., of electronic control units for vehicles) in industry is becoming increasingly challenging due to their growing complexity. To systematically assess the quality of such models, software metrics seem promising. In this paper we explore the use of software metrics and outlier analysis as a means to assess the quality of model-based software. More specifically, we investigate how results from regression analysis applied to measurement data from size and complexity metrics can be mapped to software quality. Using the moving-averages approach, models were fit to data from over 65,000 software revisions for 71 simulation models that represent different electronic control units of real premium vehicles. Subsequent investigations using studentized deleted residuals and Cook's distance revealed outliers among the measurements. From these outliers we identified a subset that provides meaningful information (anomalies) by comparing outlier scores with expert opinions. Eight engineers were interviewed separately about outlier impact on software quality. Findings were validated in subsequent workshops. The results show correlations between outliers and their impact on four of the considered quality characteristics. They also demonstrate the applicability of this approach in industry.
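The outlier-detection step can be sketched with statsmodels: fit a trend to a metric's history across revisions, then flag revisions whose studentized deleted residuals or Cook's distance are extreme. The data below is synthetic and the cutoffs (|t| > 3, Cook's D > 4/n) are common rules of thumb, not the paper's calibrated thresholds:

```python
# Flag anomalous revisions in a synthetic complexity-metric history using
# studentized deleted (externally studentized) residuals and Cook's distance.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import OLSInfluence

rng = np.random.default_rng(0)
revision = np.arange(100)
complexity = 50 + 0.3 * revision + rng.normal(0, 1.0, 100)
complexity[60] += 12                 # an injected anomalous revision

model = sm.OLS(complexity, sm.add_constant(revision)).fit()
infl = OLSInfluence(model)
cooks_d, _ = infl.cooks_distance
suspects = np.flatnonzero(
    (np.abs(infl.resid_studentized_external) > 3) | (cooks_d > 4 / 100))
print(suspects)                      # includes revision 60
```

As in the paper's workflow, such flagged points are only candidates; deciding which outliers are meaningful anomalies still requires expert judgment.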