Biblio
A number of small Open Source projects let independent providers measure different aspects of their quality that would otherwise be hard to see. This paper describes this observation as the pattern Quality Attestation. Quality Attestation belongs to a family of Open Source patterns written by various authors.
Security of embedded devices is a timely and important issue, due to the proliferation of these devices into numerous and diverse settings, as well as their growing popularity as attack targets, especially via remote malware infestations. One important defense mechanism is remote attestation, whereby a trusted, and possibly remote, party (verifier) checks the internal state of an untrusted, and potentially compromised, device (prover). Despite much prior work, remote attestation remains a vibrant research topic. However, most attestation schemes naturally focus on the scenario where the verifier is trusted and the prover is not. The opposite setting, where the prover is benign and the verifier is malicious, has been side-stepped. To this end, this paper considers the issue of prover security, including verifier impersonation, denial-of-service (DoS), and replay attacks, all of which result in unauthorized invocation of attestation functionality on the prover. We argue that protection of the prover from these attacks must be treated as an important component of any remote attestation method. We formulate a new roaming adversary model for this scenario and present the trade-offs involved in countering this threat. We also identify new features and methods needed to protect the prover with minimal additional requirements.
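A minimal sketch of the kind of prover-side protection the abstract argues for (illustrative only, not the authors' protocol; key handling and names are assumptions): the prover runs the attestation routine only when a request carries a fresh, authenticated challenge, which blocks verifier impersonation, replay, and trivial DoS through unauthorized invocations.

```python
import hashlib
import hmac
import os
import time

# Hypothetical shared key provisioned between the prover and an authorized verifier.
VERIFIER_KEY = os.urandom(32)

seen_nonces = set()   # replay protection for recently served requests
MAX_SKEW = 30         # seconds of allowed clock skew (illustrative value)

def make_request(key):
    """Verifier side: build an authenticated, fresh attestation request."""
    nonce = os.urandom(16)
    ts = int(time.time())
    tag = hmac.new(key, nonce + ts.to_bytes(8, "big"), hashlib.sha256).digest()
    return nonce, ts, tag

def prover_handle(key, nonce, ts, tag):
    """Prover side: invoke attestation only for fresh, authenticated requests."""
    expected = hmac.new(key, nonce + ts.to_bytes(8, "big"), hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None                      # verifier impersonation: drop the request
    if abs(time.time() - ts) > MAX_SKEW or nonce in seen_nonces:
        return None                      # stale or replayed request: drop it
    seen_nonces.add(nonce)
    # Only now is the (potentially expensive) attestation routine invoked.
    return hashlib.sha256(b"device-memory-contents").hexdigest()

nonce, ts, tag = make_request(VERIFIER_KEY)
print(prover_handle(VERIFIER_KEY, nonce, ts, tag))   # attestation report
print(prover_handle(VERIFIER_KEY, nonce, ts, tag))   # replayed request -> None
```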
Anti-tampering is a form of software protection conceived to detect and avoid the execution of tampered programs. Tamper detection assesses programs' integrity with load-time or execution-time checks. Avoidance reacts to tampered programs by stopping or rendering them unusable. General-purpose reactions (such as halting the execution) stand out like a lighthouse in the code and are quite easy to defeat by an attacker. More sophisticated reactions, which degrade the user experience or the quality of service, are less easy to locate and remove but are too tangled with the program's business logic, and are thus difficult to automate by a general-purpose protection tool. In the present paper, we propose a novel approach to anti-tampering that (i) applies fully automatically to a target program, (ii) uses Remote Attestation for detection purposes and (iii) adopts a server-side reaction that is difficult to block by an attacker. By means of Client/Server Code Splitting, a crucial part of the program is removed from the client and executed on a remote trusted server in sync with the client. If a client program provides evidence of its integrity, the part moved to the server is executed. Otherwise, a server-side reaction logic may decide (temporarily or permanently) to stop serving it. Therefore, a tampered client application cannot continue its execution. We assessed our automatic protection tool on a case study Android application. Experimental results show that all the original and tampered executions are correctly detected, reactions are promptly applied, and execution overhead is at an acceptable level.
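A minimal sketch of the server-side reaction logic described above (names and the evidence format are invented for illustration, not the paper's tool): the server executes the split-out code fragment only while the client presents valid integrity evidence, and stops serving a client once tampering is detected.

```python
import hashlib

# Hypothetical reference hash of the untampered client binary.
EXPECTED_CLIENT_HASH = hashlib.sha256(b"original client code").hexdigest()

class SplitServer:
    """Server holding a business-logic fragment removed from the client."""

    def __init__(self):
        self.blocked = set()

    def secret_step(self, x):
        # The fragment that was moved out of the client via code splitting.
        return (x * 31 + 7) % 1009

    def handle(self, client_id, evidence, x):
        if client_id in self.blocked:
            return None                      # reaction: stop serving this client
        if evidence != EXPECTED_CLIENT_HASH:
            self.blocked.add(client_id)      # tampering detected
            return None
        return self.secret_step(x)

server = SplitServer()
print(server.handle("c1", EXPECTED_CLIENT_HASH, 42))                     # served
print(server.handle("c2", hashlib.sha256(b"patched").hexdigest(), 42))   # refused
```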
Large numbers of smart connected devices, also known as the Internet of Things (IoT), are permeating our environments (homes, factories, cars, and even our bodies, with wearable devices) to collect data and act on the insight derived. Ensuring software integrity (including OS, apps, and configurations) on such smart devices is then essential to guarantee both privacy and safety. A key mechanism to protect the software integrity of these devices is remote attestation: a process that allows a remote verifier to validate the integrity of the software of a device. This process usually makes use of a signed hash value of the actual device's software, generated by dedicated hardware. While individual device attestation is a well-established technique, to date integrity verification of a very large number of devices remains an open problem, due to scalability issues. In this paper, we present SANA, the first secure and scalable protocol for efficient attestation of large sets of devices that works under realistic assumptions. SANA relies on a novel signature scheme to allow anyone to publicly verify a collective attestation in constant time and space, for virtually an unlimited number of devices. We substantially improve existing swarm attestation schemes by supporting a realistic trust model where: (1) only the targeted devices are required to implement attestation; (2) compromising any device does not harm others; and (3) all aggregators can be untrusted. We implemented SANA and demonstrated its efficiency on tiny sensor devices. Furthermore, we simulated SANA at large scale, to assess its scalability. Our results show that SANA can provide efficient attestation of networks of 1,000,000 devices in only 2.5 seconds.
Proactive security reviews and test efforts are a necessary component of the software development lifecycle. Resource limitations often preclude reviewing the entire code base. Making informed decisions on what code to review can improve a team's ability to find and remove vulnerabilities. Risk-based attack surface approximation (RASA) is a technique that uses crash dump stack traces to predict what code may contain exploitable vulnerabilities. The goal of this research is to help software development teams prioritize security efforts by the efficient development of a risk-based attack surface approximation. We explore the use of RASA using Mozilla Firefox and Microsoft Windows stack traces from crash dumps. We create RASA at the file level for Firefox, in which the 15.8% of files that were part of the approximation contained 73.6% of the vulnerabilities seen for the product. We also explore the effect of random sampling of crashes on the approximation, as it may be impractical for organizations to store and process every crash received. We find that 10-fold random sampling of crashes at a rate of 10% resulted in 3% fewer vulnerabilities identified than using the entire set of stack traces for Mozilla Firefox. Sampling crashes in Windows 8.1 at a rate of 40% resulted in insignificant differences in vulnerability and file coverage as compared to a rate of 100%.
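The core of the approximation can be pictured with a small sketch (toy data and a 50% sampling rate chosen for illustration, not the study's pipeline or results): files that appear on crash stack traces form the approximated attack surface, and random sampling of crashes yields a smaller but similar file set.

```python
import random

# Toy crash stack traces; in practice these come from a crash reporting system.
crashes = [
    ["nsHttpChannel.cpp", "nsSocketTransport.cpp", "jsapi.cpp"],
    ["jsapi.cpp", "Interpreter.cpp"],
    ["imgLoader.cpp", "nsHttpChannel.cpp"],
    ["Interpreter.cpp", "jsapi.cpp"],
]

def attack_surface(traces):
    """Files appearing on any crash stack trace."""
    return {frame for trace in traces for frame in trace}

full = attack_surface(crashes)

# Random sampling of crashes (50% here; the paper studies rates such as 10% and 40%).
sampled = random.sample(crashes, k=max(1, len(crashes) // 2))
approx = attack_surface(sampled)

print(f"full surface:    {sorted(full)}")
print(f"sampled surface: {sorted(approx)}")
print(f"coverage: {len(approx & full) / len(full):.0%} of the full approximation")
```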
Android applications pose security and privacy risks for end-users. These risks are often quantified by performing dynamic analysis and permission analysis of the Android applications after release. Prediction of security and privacy risks associated with Android applications at early stages of application development, e.g. when the developer(s) are writing the code of the application, might help Android application developers release applications to end-users that have less security and privacy risk. The goal of this paper is to aid Android application developers in assessing the security and privacy risk associated with Android applications by using static code metrics as predictors. In our paper, we consider the security and privacy risk of an Android application as how susceptible the application is to leaking private information of end-users and to releasing vulnerabilities. We investigate how effectively static code metrics extracted from the source code of Android applications can be used to predict the security and privacy risk of Android applications. We collected 21 static code metrics from 1,407 Android applications and used the collected static code metrics to predict the security and privacy risk of the applications. As the oracle of security and privacy risk, we used Androrisk, a tool that quantifies the amount of security and privacy risk of an Android application using analysis of Android permissions and dynamic analysis. To accomplish our goal, we used statistical learners such as the radial-based support vector machine (r-SVM). For r-SVM, we observe a precision of 0.83. Findings from our paper suggest that, with proper selection of static code metrics, r-SVM can be used effectively to predict the security and privacy risk of Android applications.
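As an illustration of the kind of learner the abstract names (synthetic data and arbitrary hyperparameters, not the paper's setup or its 0.83 precision), an RBF-kernel SVM over static code metrics could be trained and evaluated as follows with scikit-learn:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-ins for static code metrics (e.g. LOC, cyclomatic complexity,
# counts of permission-related API calls) and a binary "risky / not risky" label.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 21))          # 21 metrics per app, as in the abstract
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Radial-basis-function SVM (the r-SVM of the abstract), with feature scaling.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))

scores = cross_val_score(model, X, y, cv=5, scoring="precision")
print(f"cross-validated precision: {scores.mean():.2f}")
```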
Scientific advancement is fueled by solid fundamental research, followed by replication, meta-analysis, and theory building. To support such advancement, researchers and government agencies have been working towards a "science of security". As in other sciences, security science requires high-quality fundamental research addressing important problems and reporting approaches that capture the information necessary for replication, meta-analysis, and theory building. The goal of this paper is to aid security researchers in establishing a baseline of the state of scientific reporting in security through an analysis of indicators of scientific research as reported in top security conferences, specifically the 2015 ACM CCS and 2016 IEEE S&P proceedings. To conduct this analysis, we employed a series of rubrics to analyze the completeness of information reported in papers relative to the type of evaluation used (e.g. empirical study, proof, discussion). Our findings indicated some important information is often missing from papers, including explicit documentation of research objectives and the threats to validity. Our findings show a relatively small number of replications reported in the literature. We hope that this initial analysis will serve as a baseline against which we can measure the advancement of the science of security.
Objective: To evaluate the effectiveness of domain highlighting in helping users identify whether webpages are legitimate or spurious.
Background: As a component of the URL, a domain name can be overlooked. Consequently, browsers highlight the domain name to help users identify which website they are visiting. Nevertheless, few studies have assessed the effectiveness of domain highlighting, and the only formal study confounded highlighting with instructions to look at the address bar.
Method: Two phishing detection experiments were conducted. Experiment 1 was run online: Participants judged the legitimacy of webpages in two phases. In phase one, participants were to judge the legitimacy based on any information on the webpage, whereas in phase two they were to focus on the address bar. Whether the domain was highlighted was also varied. Experiment 2 was conducted similarly but with participants in a laboratory setting, which allowed tracking of fixations.
Results: Participants differentiated the legitimate and fraudulent webpages better than chance. There was some benefit of attending to the address bar, but domain highlighting did not provide effective protection against phishing attacks. Analysis of eye-gaze fixation measures was in agreement with the task performance, but heat-map results revealed that participants’ visual attention was attracted by the domain highlighting.
Conclusion: Failure to detect many fraudulent webpages even when the domain was highlighted implies that users lacked knowledge of webpage security cues or how to use those cues.
Application: Potential applications include development of phishing-prevention training incorporating domain highlighting with other methods to help users identify phishing webpages.
When encountering a packet for which it has no matching forwarding rule, a software-defined networking (SDN) switch requests an appropriate rule from its controller; this request delays the routing of the flow until the controller responds. We show that this delay gives rise to a timing side channel in which an attacker can test for the recent occurrence of a target flow by judiciously probing the switch with forged flows and using the delays they encounter to discern whether covering rules were previously installed in the switch. We develop a Markov model of an SDN switch to permit the attacker to select the best probe (or probes) to infer whether a target flow has recently occurred. Our model captures practical challenges related to rule evictions to make room for other rules; rule timeouts due to inactivity; the presence of multiple rules that apply to overlapping sets of flows; and rule priorities. We show that our model enables detection of target flows with considerable accuracy in many cases.
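The side channel itself can be sketched in a few lines (a simplified simulation with an invented decision threshold, rather than the paper's Markov-model probe selection): the attacker times a probe packet and treats a fast response as evidence that a covering rule was already installed by the target flow.

```python
import random
import time

RULE_TIMEOUT = 10.0          # seconds of inactivity before a rule is evicted

class ToySwitch:
    """Forwarding delay depends on whether a matching rule is already cached."""

    def __init__(self):
        self.rules = {}      # match -> time the rule was last used

    def forward(self, match):
        now = time.monotonic()
        hit = match in self.rules and now - self.rules[match] < RULE_TIMEOUT
        self.rules[match] = now
        # Cache hit: data-plane latency only. Miss: extra round trip to the controller.
        return 0.0005 if hit else 0.0005 + 0.020 + random.uniform(0, 0.005)

switch = ToySwitch()
switch.forward(("10.0.0.5", 443))        # the victim's flow installs a covering rule

# Attacker probes with a forged flow covered by the same rule and times the delay.
probe_delay = switch.forward(("10.0.0.5", 443))
print("target flow likely occurred recently" if probe_delay < 0.010
      else "no evidence of the target flow")
```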
Enterprise networks today have highly diverse correctness requirements and relatively common performance objectives. As a result, preferred abstractions for enterprise networks are those which allow matching security and correctness specifications, while transparently managing performance. Existing SDN network management architectures, however, bundle correctness and performance as a single abstraction. We argue that this creates an SDN ecosystem that is unnecessarily hard to build, maintain and evolve. We advocate a separation of the diverse correctness abstractions from generic performance optimization, to enable easier evolution of SDN controllers and platforms. We propose Oreo, a first step towards a common and relatively transparent performance optimization layer for SDN. Oreo performs the optimization by first building a model that describes every flow in the network, and then performing network-wide, multi-objective optimization based on this model without disrupting higher level security and correctness.
Authors: Santhosh Prabhu, Mo Dong, Tong Meng, P. Brighten Godfrey, and Matthew Caesar
Presented at the Symposium and Bootcamp in the Science of Security (HotSoS 2017), poster session in Hanover, MD, April 4-5, 2017.
Attack graphs used in network security analysis are analyzed to determine sequences of exploits that lead to successful acquisition of privileges or data at critical assets. An attack graph edge corresponds to a vulnerability, tacitly assuming that a connection exists and that the vulnerability is known to exist. In this paper we explore the use of uncertain graphs to extend the paradigm to include lack of certainty in the connection and/or the existence of a vulnerability. We extend the standard notion of an uncertain graph (where the existence of each edge is probabilistically independent), however, because significant correlations among edge existence probabilities exist in practice, owing to common underlying causes for disconnectivity and/or the presence of vulnerabilities. Our extension describes each edge probability as a Boolean expression of independent indicator random variables. This paper (i) shows that this formalism is maximally descriptive in the sense that it can describe any joint probability distribution function of edge existence, (ii) shows that when these Boolean expressions are monotone we can easily perform uncertainty analysis of edge probabilities, and (iii) uses these results to model a partial attack graph of the Stuxnet worm and a small enterprise network and to answer important security-related questions in a probabilistic manner.
Poster presented at the Symposium and Bootcamp in the Science of Security in Hanover, MD, April 4-5, 2017.
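The edge formalism in the abstract above can be pictured with a small Monte Carlo sketch (illustrative only, not the paper's analysis technique): each edge exists when a monotone Boolean expression over independent indicator variables is true, correlated edges share indicators, and the probability that an attack path exists is estimated by sampling the indicators.

```python
import random

# Independent indicator random variables with their probabilities of being true.
indicators = {"vuln_A": 0.3, "vuln_B": 0.6, "link_up": 0.9}

# Each attack-graph edge exists iff its monotone Boolean expression is true.
# The two edges are correlated because both depend on the shared "link_up" indicator.
edges = {
    ("attacker", "web"): lambda v: v["vuln_A"] and v["link_up"],
    ("web", "db"):       lambda v: v["vuln_B"] and v["link_up"],
}

def path_exists(v):
    """Does the two-hop attack path attacker -> web -> db exist in this sample?"""
    return edges[("attacker", "web")](v) and edges[("web", "db")](v)

def estimate(trials=100_000):
    hits = 0
    for _ in range(trials):
        sample = {name: random.random() < p for name, p in indicators.items()}
        hits += path_exists(sample)
    return hits / trials

# Exact value here: P(vuln_A) * P(vuln_B) * P(link_up) = 0.3 * 0.6 * 0.9 = 0.162
print(f"estimated P(attacker reaches db) ~ {estimate():.3f}")
```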
A key use of software-defined networking is to enable scale-out of network data plane elements. Naively scaling networking elements, however, can cause incorrect security responses. For example, we show that an IDS which operates correctly as a single network element can erroneously and permanently block hosts when it is replicated. Similarly, a scaled-out firewall can incorrectly block hosts.
In this paper, we provide a system, COCONUT, for seamless scale-out of network forwarding elements; that is, an SDN application programmer can program to what functionally appears to be a single forwarding element, but which may be replicated behind the scenes. To do this, we identify the key property for seamless scale out, weak causality, and guarantee it through a practical and scalable implementation of vector clocks in the data plane. We formally prove that COCONUT enables seamless scale out of networking elements, i.e., the user-perceived behavior of any COCONUT element implemented with a distributed set of concurrent replicas is provably indistinguishable from its singleton implementation. Finally, we build a prototype of COCONUT and experimentally demonstrate its correct behavior. We also show that its abstraction enables a more efficient implementation of seamless scale-out compared to a naive baseline.
This work was funded by the SoS lablet at the University of Illinois at Urbana-Champaign.
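The weak-causality property COCONUT guarantees can be pictured with a generic vector-clock sketch (a textbook vector clock, not COCONUT's data-plane encoding): each replica of a logical forwarding element stamps its events with its clock, and merging on receipt preserves the happens-before order across replicas.

```python
class Replica:
    """One replica of a logical forwarding element, tracking causal order."""

    def __init__(self, rid, n_replicas):
        self.rid = rid
        self.clock = [0] * n_replicas

    def local_event(self):
        self.clock[self.rid] += 1
        return list(self.clock)            # stamp attached to the event/update

    def receive(self, stamp):
        # Merge: component-wise max, then count the receive as a local event.
        self.clock = [max(a, b) for a, b in zip(self.clock, stamp)]
        self.clock[self.rid] += 1

def happened_before(a, b):
    """True if the event stamped a causally precedes the event stamped b."""
    return all(x <= y for x, y in zip(a, b)) and a != b

r0, r1 = Replica(0, 2), Replica(1, 2)
stamp = r0.local_event()        # e.g. replica 0 installs a blocking rule
r1.receive(stamp)               # replica 1 learns of it before acting itself
print(happened_before(stamp, r1.clock))   # True: the update is causally ordered
```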
Detection of previously unknown attacks and malicious messages is a challenging problem faced by modern network intrusion detection systems. Anomaly-based solutions, despite being able to detect unknown attacks, have not been used often in practice due to their high false positive rate, and because they provide little actionable information to the security officer in case of an alert. In this paper we focus on intrusion detection in industrial control systems networks and we propose an innovative, practical and semantics-aware framework for anomaly detection. The network communication model and alerts generated by our framework are user-understandable, making them much easier to manage. At the same time the framework exhibits an excellent tradeoff between detection rate and false positive rate, which we show by comparing it with two existing payload-based anomaly detection methods on several ICS datasets.
In-vehicle network security is becoming a major concern for the automotive industry. Although significant research has been done in this area, there is still a large gap between research and what is actually applied in practice. The controller area network (CAN) receives most of the community's attention, while little is given to FlexRay. Many signs indicate that CAN usage is approaching its end and that other promising technologies are taking its place; FlexRay is considered one of the main players in the near future. We believe that the migration era is near enough that we should change our mindset in order to supply industry with complete and mature security proposals for FlexRay. This change of mindset is important to fix the lag between research and industry that appeared with CAN. We then provide a complete migration of a CAN authentication protocol to FlexRay, showing the applicability of the protocol across different technologies.
OpenFlow, as the prevailing technique for Software-Defined Networks (SDNs), introduces significant programmability, granularity, and flexibility for many network applications to effectively manage and process network flows. However, because OpenFlow attempts to keep the SDN data plane simple and efficient, it focuses solely on L2/L3 network transport and consequently lacks the fundamental ability of stateful forwarding for the data plane. Also, OpenFlow provides a very limited access to connection-level information in the SDN controller. In particular, for any network access management applications on SDNs that require comprehensive network state information, these inherent limitations of OpenFlow pose significant challenges in supporting network services. To address these challenges, we propose an innovative connection tracking framework called STATEMON that introduces a global state-awareness to provide better access control in SDNs. STATEMON is based on a lightweight extension of OpenFlow for programming the stateful SDN data plane, while keeping the underlying network devices as simple as possible. To demonstrate the practicality and feasibility of STATEMON, we implement and evaluate a stateful network firewall and port knocking applications for SDNs, using the APIs provided by STATEMON. Our evaluations show that STATEMON introduces minimal message exchanges for monitoring active connections in SDNs with manageable overhead (3.27% throughput degradation).
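The kind of stateful, connection-aware behavior the abstract describes can be shown with a minimal port-knocking state machine (an illustrative sketch with invented ports, not the STATEMON API): only after the secret knock sequence is observed from a source does the protected port open for that source.

```python
# Secret knock sequence of destination ports, then the protected service port.
KNOCK_SEQUENCE = [7000, 8000, 9000]
PROTECTED_PORT = 22

class PortKnockTracker:
    """Per-source connection state: progress through the knock sequence."""

    def __init__(self):
        self.progress = {}          # src_ip -> number of correct knocks seen
        self.allowed = set()        # sources granted access to PROTECTED_PORT

    def packet_in(self, src_ip, dst_port):
        if dst_port == PROTECTED_PORT:
            return "ALLOW" if src_ip in self.allowed else "DROP"
        step = self.progress.get(src_ip, 0)
        if step < len(KNOCK_SEQUENCE) and dst_port == KNOCK_SEQUENCE[step]:
            self.progress[src_ip] = step + 1
            if self.progress[src_ip] == len(KNOCK_SEQUENCE):
                self.allowed.add(src_ip)
        else:
            self.progress[src_ip] = 0   # a wrong knock resets the sequence
        return "DROP"                   # knock packets themselves are dropped

fw = PortKnockTracker()
for port in [7000, 8000, 9000]:
    fw.packet_in("10.0.0.9", port)
print(fw.packet_in("10.0.0.9", 22))    # ALLOW after a correct knock sequence
print(fw.packet_in("10.0.0.7", 22))    # DROP for a source that never knocked
```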
Connection setup in software-defined networks (SDN) requires considerable amounts of processing, communication, and memory resources. Attackers can target SDN controllers with simple attacks to cause denial of service. We proposed a defense mechanism based on a proof-of-work protocol. The key characteristics of this protocol, namely its one-way operation, its requirement for freshness in proofs of work, its adjustable difficulty, its ability to work with multiple network providers, and its use of existing TCP/IP header fields, ensure that this approach can be used in practice.
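A hash-based proof of work with adjustable difficulty (a generic sketch of the same idea, not the paper's TCP/IP-header encoding) can look like this: the client must find a nonce whose hash over a fresh, server-chosen challenge has a required number of leading zero bits before the controller spends resources on connection setup.

```python
import hashlib
import os

def leading_zero_bits(digest):
    """Count leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def solve(challenge, difficulty):
    """Client: brute-force a nonce meeting the difficulty (the one-way work)."""
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce
        nonce += 1

def verify(challenge, nonce, difficulty):
    """Controller: cheap check before doing expensive connection setup."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty

challenge = os.urandom(16)       # freshness: a new challenge per request
difficulty = 16                  # adjustable: raise the bar while under attack
nonce = solve(challenge, difficulty)
print(verify(challenge, nonce, difficulty))   # True: proceed with setup
```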
Security vulnerability assessment is an important process that must be conducted against any system before deployment, and emerging technologies are no exceptions. Software-Defined Networking (SDN) has aggressively evolved in the past few years and is now almost at the early adoption stage. At this stage, the attack surface of SDN should be thoroughly investigated and assessed in order to mitigate possible security breaches against SDN. Motivated by this necessity, we reveal three attack scenarios that leverage SDN applications to attack SDNs, and test the attack scenarios against three of the most popular SDN controllers available today. In addition, we discuss the possible defense mechanisms against such application-originated attacks.
In 2012, two academic groups reported having computed the RSA private keys for 0.5% of HTTPS hosts on the internet, and traced the underlying issue to widespread random number generation failures on networked devices. The vulnerability was reported to dozens of vendors, several of whom responded with security advisories, and the Linux kernel was patched to fix a boot-time entropy hole that contributed to the failures. In this paper, we measure the actions taken by vendors and end users over time in response to the original disclosure. We analyzed public internet-wide TLS scans performed between July 2010 and May 2016 and extracted 81 million distinct RSA keys. We then computed the pairwise common divisors for the entire set in order to factor over 313,000 keys vulnerable to the flaw, and fingerprinted implementations to study patching behavior over time across vendors. We find that many vendors appear to have never produced a patch, and observed little to no patching behavior by end users of affected devices. The number of vulnerable hosts increased in the years after notification and public disclosure, and several newly vulnerable implementations have appeared since 2012. Vendor notification, positive vendor responses, and even vendor-produced public security advisories appear to have little correlation with end-user security.
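The underlying computation can be illustrated at toy scale (small primes and naive pairwise GCDs, rather than the batch-GCD tree needed for 81 million keys): two RSA moduli that share a prime because of bad randomness are both factored by a single gcd.

```python
from itertools import combinations
from math import gcd

# Toy RSA moduli; host_b and host_c share the prime 2003 because the devices
# that generated them drew the same "random" prime.
moduli = {
    "host_a": 1009 * 2011,
    "host_b": 2003 * 3019,
    "host_c": 2003 * 4021,
}

def find_shared_factors(keys):
    """Naive pairwise common divisors; real internet-wide scans use batch GCD."""
    broken = {}
    for (name_a, n_a), (name_b, n_b) in combinations(keys.items(), 2):
        p = gcd(n_a, n_b)
        if 1 < p < n_a:                      # shared prime factor found
            broken[name_a] = (p, n_a // p)
            broken[name_b] = (p, n_b // p)
    return broken

for host, (p, q) in find_shared_factors(moduli).items():
    print(f"{host}: n = {p} * {q}  (private key recoverable)")
```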