Bibliography
Many popular online social networks, such as Twitter, Tumblr, and Sina Weibo, adopt overly simple privacy models that cannot satisfy users' diverse needs for privacy protection. In platforms with no access control (i.e., completely open) or binary access control (i.e., "public" and "friends-only"), users cannot control the dissemination boundary of the content they share. For instance, on Twitter, tweets in "public" accounts are accessible to everyone, including search engines, while tweets in "protected" accounts are visible to all the followers. In this work, we present Arcana to enable fine-grained access control for social network content sharing. In particular, we target the Twitter platform and introduce the "private tweet" function, which allows users to disseminate particular tweets to designated group(s) of followers. Arcana employs Ciphertext-Policy Attribute-Based Encryption (CP-ABE) to implement social circle detection and private tweet encryption, so that access-controlled tweets are only readable by designated recipients. To be stealthy, Arcana further embeds the protected content as digital watermarks in image tweets. We have implemented the Arcana prototype as a Chrome browser plug-in and demonstrated its flexibility and effectiveness. Different from existing approaches that require trusted third parties or an additional server/broker/mediator, Arcana is lightweight and completely transparent to Twitter: all the communications, including key distribution and private tweet dissemination, are exchanged as Twitter messages. Therefore, with small API modifications, Arcana could be easily ported to other online social networking platforms to support fine-grained access control.
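To make the private-tweet flow concrete, here is a minimal sketch of the CP-ABE semantics Arcana relies on. The paper does not publish its API, so `ToyCPABE`, the attribute names, and the policy syntax below are illustrative assumptions; the class only simulates the attribute-policy check rather than performing real encryption.

```python
# Illustrative sketch (not Arcana's implementation). ToyCPABE simulates
# CP-ABE semantics with a plain attribute-set check; a real deployment
# would use an actual CP-ABE scheme such as BSW07.

class ToyCPABE:
    def encrypt(self, policy, plaintext):
        # Real CP-ABE binds the access policy into the ciphertext itself.
        return {"policy": frozenset(policy), "blob": plaintext[::-1]}

    def decrypt(self, user_attrs, ct):
        # A user's key decrypts only if its attributes satisfy the policy.
        if ct["policy"] <= set(user_attrs):
            return ct["blob"][::-1]
        return None  # an unauthorized follower recovers nothing

abe = ToyCPABE()
private_tweet = abe.encrypt({"family"}, "BBQ at my place on Sunday!")
print(abe.decrypt({"family", "coworkers"}, private_tweet))  # readable
print(abe.decrypt({"coworkers"}, private_tweet))            # None
```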
Security practitioners use the attack surface of software systems to prioritize areas of systems to test and analyze. To date, approaches for predicting which code artifacts are vulnerable have utilized a binary classification of code as vulnerable or not vulnerable. To better understand the strengths and weaknesses of vulnerability prediction approaches, vulnerability datasets with classification and severity data are needed. The goal of this paper is to help researchers and practitioners make security effort prioritization decisions by evaluating which classifications and severities of vulnerabilities are on an attack surface approximated using crash dump stack traces. In this work, we use crash dump stack traces to approximate the attack surface of Mozilla Firefox. We then generate a dataset of 271 vulnerable files in Firefox, classified using the Common Weakness Enumeration (CWE) system. We use these files as an oracle for the evaluation of the attack surface generated using crash data. In the Firefox vulnerability dataset, 14 different classifications of vulnerabilities appeared at least once. In our study, 85.3% of vulnerable files were on the attack surface generated using crash data. We found no difference between the severity of vulnerabilities found on the attack surface generated using crash data and vulnerabilities not occurring on the attack surface. Additionally, we discuss lessons learned during the development of this vulnerability dataset.
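The evaluation logic described above reduces to a set computation. Here is a small sketch under the assumption that the stack traces and the CWE-classified oracle are available as plain lists; the file names and frame format are made up for illustration.

```python
# Sketch: approximate the attack surface as the set of files seen in
# crash dump stack traces, then measure what fraction of known
# vulnerable files (the oracle) lands on that surface.

stack_traces = [
    ["dom/Element.cpp", "layout/Frame.cpp"],   # hypothetical frames
    ["js/Parser.cpp", "dom/Element.cpp"],
]
vulnerable_files = {"dom/Element.cpp", "js/Parser.cpp", "gfx/Blur.cpp"}

attack_surface = {f for trace in stack_traces for f in trace}
on_surface = vulnerable_files & attack_surface
coverage = len(on_surface) / len(vulnerable_files)
print(f"{coverage:.1%} of vulnerable files are on the attack surface")
```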
Fundamentality is the central conceptual component of discussions concerning emergence. Most obviously, contemporary uses of the term "emergence" vary according to their users' views of fundamentality. This chapter provides a general characterization of fundamentality, explaining the challenges faced by anti-emergentist versions of fundamentalism. It discusses the limitations of one prominent account of ontological fundamentality, physicalism. Although physicalism does not present a viable alternative to emergentism, this does not mean that emergentists can declare victory. Completeness is essential to arguments against the possibility of strongly emergent properties. Three interlocking concepts (causation, completeness, and reality) are not straightforwardly scientific in nature but are instead metaphysical, or at least conceptual. Scientific models are intended to provide guidance with respect to explanations and predictions of emergent properties, or to offer possible interventions that would allow control over those properties.
Poor time predictability of multicore processors has been a long-standing challenge in the real-time systems community. In this paper, we make the case that a fundamental problem preventing efficient and predictable real-time computing on multicore is the lack of a proper memory abstraction to express memory criticality, which cuts across various layers of the system: the application, OS, and hardware. We therefore propose a new holistic resource management approach driven by a new memory abstraction, which we call Deterministic Memory. The key characteristic of deterministic memory is that the platform (the OS and hardware) guarantees small and tightly bounded worst-case memory access timing. In contrast, we call the conventional memory abstraction best-effort memory, for which only highly pessimistic worst-case bounds can be achieved. We propose to utilize both abstractions to achieve high time predictability without significantly sacrificing performance. We present deterministic memory-aware OS and architecture designs, including an OS-level page allocator, hardware-level cache, and DRAM controller designs. We implement the proposed OS and architecture extensions on Linux and the gem5 simulator. Our evaluation results, using a set of synthetic and real-world benchmarks, demonstrate the feasibility and effectiveness of our approach.
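A toy sketch of the two-abstraction idea at the page-allocator level follows. The pool sizes, the partitioning rule, and the API are assumptions, since the actual design spans the OS and hardware; here the two abstractions are just disjoint page pools.

```python
# Toy allocator keeping two pools: pages backed by resources reserved
# for tight worst-case timing ("deterministic") and ordinary shared
# pages ("best-effort"). Real designs partition cache and DRAM banks
# in hardware; this sketch only models the two-pool OS view.

class ToyPageAllocator:
    def __init__(self, det_pages, be_pages):
        self.pools = {"deterministic": list(det_pages),
                      "best_effort": list(be_pages)}

    def alloc(self, kind="best_effort"):
        pool = self.pools[kind]
        if pool:
            return pool.pop()
        if kind == "deterministic":
            raise MemoryError("deterministic pool exhausted; "
                              "no silent fallback to shared resources")
        return None

alloc = ToyPageAllocator(det_pages=range(0, 4), be_pages=range(4, 64))
critical_page = alloc.alloc("deterministic")  # tightly bounded timing
bulk_page = alloc.alloc()                     # best-effort timing
```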
System administrators are slowly coming to accept that nearly all systems are vulnerable and many should be assumed to be compromised. Rather than attempting to prevent all vulnerabilities in complex systems, the approach is shifting toward protecting systems under the assumption that they are already under attack.
Administrators do not know all the latent vulnerabilities in the systems they are charged with protecting. This work builds on prior approaches that assume more a priori knowledge [5]. Additionally, prior research does not necessarily guide administrators to gracefully degrade systems in response to threats [4]. Sophisticated attackers with high levels of resources, such as advanced persistent threats (APTs), might use zero-day exploits against novel vulnerabilities or be slow and stealthy to evade initial lines of detection.
However, defenders often have some knowledge of where attackers are. Additionally, it is possible to reasonably bound attacker resourcing: exploits have a cost to create [1], and even the most sophisticated attacks use a limited number of zero-day exploits [3].
Thus, defenders need a way to reason about and react to the impact of an attacker with an existing presence in a system. It may not be possible to maintain one hundred percent of the system's original utility; instead, the defender may need to gracefully degrade the system, trading off some functional utility to keep the attacker away from the most critical functionality.
We propose a method to "think like an attacker" to evaluate architectures and alternatives in response to knowledge of attacker presence. For each considered alternative architecture, our approach determines the types of exploits an attacker would need to achieve particular attacks, using the Datalog declarative logic programming language in a fashion that adapts others' prior work [2, 4]. With knowledge of how difficult particular exploits are to create, we can approximate the cost to an attacker of a particular attack trace. A bounded search of traces within a limited cost provides a set of hypothetical attacks for a given architecture. These attacks have varying impacts on the system's ability to achieve its functions. Using this knowledge, our approach outputs an architectural alternative that optimally balances keeping an attacker away from critical functionality while preserving that functionality. In the process, it provides evidence in the form of hypothetical attack traces that can be used to explain the reasoning.
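A compact sketch of the bounded-cost search over attack traces follows. The authors encode this in Datalog; we approximate it here with a uniform-cost search in Python, and the exploit graph, costs, budget, and goal are invented for illustration.

```python
import heapq

# Hypothetical "architecture": edges are exploits with creation costs.
exploits = {
    "internet":   [("web_server", 2)],
    "web_server": [("app_server", 3), ("db", 8)],
    "app_server": [("db", 4)],
}

def attacks_within_budget(start, goal, budget):
    """Enumerate attack traces from start to goal costing <= budget."""
    frontier = [(0, [start])]
    traces = []
    while frontier:
        cost, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            traces.append((cost, path))
            continue
        for nxt, c in exploits.get(node, []):
            if cost + c <= budget and nxt not in path:
                heapq.heappush(frontier, (cost + c, path + [nxt]))
    return traces

# Hypothetical attacks against the "db" within an exploit budget of 10.
for cost, path in attacks_within_budget("internet", "db", budget=10):
    print(cost, " -> ".join(path))
```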
This thinking enables a defender to reason about how potential defensive tactics could close off avenues of attack, or might inadvertently enable an ongoing attack. By thinking at the level of architecture, we avoid assumptions of knowledge of specific vulnerabilities. This enables reasoning in a highly uncertain domain.
We applied this approach to several small systems at varying levels of abstraction. These systems were chosen as exemplars of various "best practices" to see whether the approach could quantitatively validate the underpinnings of general rules of thumb, such as using perimeter security or trading off resilience for security. Ultimately, our approach successfully places architectural components in locations that correspond with current best practices and would be reasonable to system architects. In the process of applying the approach at different levels of abstraction, we were able to fine-tune our understanding of attacker movement through systems in a way that provides security-appropriate architectures despite poor knowledge of latent vulnerabilities; the result of the fine-tuning is a more granular way to understand and evaluate attacker movement in systems.
Future work will explore ways to enhance the performance of this approach so it can provide real-time planning to gracefully degrade systems as attacker knowledge is discovered. Additionally, we plan to explore ways to enhance the expressiveness of the approach to address additional security-related concerns; these might include aspects like timing and further levels of uncertainty.
The use of multi-objective probabilistic planning to synthesize the behavior of cyber-physical systems (CPSs) can play an important role in engineering systems that must self-optimize for multiple quality objectives and operate under uncertainty. However, the reasoning behind automated planning is opaque to end-users: they may not understand why a particular behavior is generated and may therefore be unable to calibrate their confidence in the system working properly. To address this problem, we propose a method to automatically generate verbal explanations of multi-objective probabilistic planning that explain why a particular behavior is generated on the basis of the optimization objectives. Our explanation method involves describing the objective values of a generated behavior and explaining any tradeoff made to reconcile competing objectives. We contribute: (i) an explainable planning representation that facilitates explanation generation, and (ii) an algorithm for generating contrastive justifications as explanations for why a generated behavior is best with respect to the planning objectives. We demonstrate our approach on a mobile robot case study.
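As a toy illustration of the contrastive-justification idea: given objective values for the selected behavior and a rejected alternative, emit a why-this-not-that explanation in terms of the objectives. The objective names, units, and wording template below are invented, not the paper's representation.

```python
# Sketch: contrastive justification of a chosen plan vs. an alternative,
# phrased in terms of the optimization objectives (names are made up).

def justify(chosen, alternative, objectives):
    lines = []
    for obj, lower_is_better in objectives.items():
        c, a = chosen[obj], alternative[obj]
        if c == a:
            continue
        if (c < a) == lower_is_better:
            lines.append(f"improves {obj} ({c} vs. {a})")
        else:
            lines.append(f"concedes {obj} ({c} vs. {a})")
    return "Chosen plan " + ", but ".join(lines) + "."

chosen = {"travel_time_s": 80, "collision_risk": 0.02}
alt = {"travel_time_s": 60, "collision_risk": 0.10}
objs = {"travel_time_s": True, "collision_risk": True}  # lower is better
print(justify(chosen, alt, objs))
# -> Chosen plan concedes travel_time_s (80 vs. 60),
#    but improves collision_risk (0.02 vs. 0.1).
```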
Much research has been devoted to better understanding adversarial examples, which are specially crafted inputs to machine-learning models that are perceptually similar to benign inputs, but are classified differently (i.e., misclassified). Both algorithms that create adversarial examples and strategies for defending against adversarial examples typically use Lp-norms to measure the perceptual similarity between an adversarial input and its benign original. Prior work has already shown, however, that two images need not be close to each other as measured by an Lp-norm to be perceptually similar. In this work, we show that nearness according to an Lp-norm is not just unnecessary for perceptual similarity, but is also insufficient. Specifically, focusing on datasets (CIFAR10 and MNIST), Lp-norms, and thresholds used in prior work, we show through online user studies that “adversarial examples” that are closer to their benign counterparts than required by commonly used Lp-norm thresholds can nevertheless be perceptually distinct to humans from the corresponding benign examples. Namely, the perceptual distance between two images that are “near” each other according to an Lp-norm can be high enough that participants frequently classify the two images as representing different objects or digits. Combined with prior work, we thus demonstrate that nearness of inputs as measured by Lp-norms is neither necessary nor sufficient for perceptual similarity, which has implications for both creating and defending against adversarial examples. We propose and discuss alternative similarity metrics to stimulate future research in the area.
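For reference, a short numpy sketch of the Lp-norm distances at issue. The image sizes, the noise, and the example threshold are arbitrary placeholders, not the study's data.

```python
import numpy as np

# Two 32x32 RGB "images" in [0, 1]; values are random placeholders.
rng = np.random.default_rng(0)
benign = rng.random((32, 32, 3))
adversarial = benign + rng.normal(0, 0.01, benign.shape)

delta = (adversarial - benign).ravel()
l2 = np.linalg.norm(delta, ord=2)         # L2 distance
linf = np.linalg.norm(delta, ord=np.inf)  # L-infinity distance

# Typical attacks/defenses accept any input within an Lp ball of the
# benign image; the paper's point is that such balls can contain images
# humans judge to be *different* objects or digits.
epsilon = 8 / 255
print(f"L2={l2:.3f}, Linf={linf:.3f}, within Linf ball: {linf <= epsilon}")
```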
In this work, we study the problem of keeping the objective functions of individual agents ε-differentially private in cloud-based distributed optimization, where agents are subject to global constraints and seek to minimize local objective functions. The communication architecture between agents is cloud-based: instead of communicating directly with each other, they coordinate by sharing states through a trusted cloud computer. In this problem, the difficulty is twofold: the objective functions are used repeatedly in every iteration, and the influence of perturbing them extends to other agents and lasts over time. To solve the problem, we analyze the propagation of perturbations on objective functions over time and derive an upper bound on them. With the upper bound, we design a noise-adding mechanism that randomizes the cloud-based distributed optimization algorithm to keep the individual objective functions ε-differentially private. In addition, we study the trade-off between the privacy of objective functions and the performance of the new cloud-based distributed optimization algorithm with noise. We present simulation results to numerically verify the theoretical results.
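A generic sketch of the kind of noise-adding step involved is below. The Laplace mechanism is the standard ε-differential-privacy tool; the sensitivity value and where the noise is injected are assumptions, since the paper derives its own bound on how perturbations propagate across iterations.

```python
import numpy as np

def laplace_perturb(update, sensitivity, epsilon, rng):
    """Standard Laplace mechanism: noise scale = sensitivity / epsilon.
    Smaller epsilon -> stronger privacy -> more noise -> worse accuracy,
    which is the utility/privacy tradeoff the paper quantifies."""
    scale = sensitivity / epsilon
    return update + rng.laplace(0.0, scale, size=np.shape(update))

rng = np.random.default_rng(1)
true_update = np.array([0.5, -1.2])   # illustrative local update
noisy = laplace_perturb(true_update, sensitivity=1.0, epsilon=0.5, rng=rng)
print(noisy)  # this perturbed state is what the cloud would share
```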
Security researchers do not have sufficient example systems for conducting research on advanced persistent threats, and companies and agencies that experience attacks in the wild are reluctant to release detailed information that can be examined. In this paper, we describe an Advanced Persistent Threat Exemplar that is intended to provide a real-world attack scenario with sufficient complexity for reasoning about defensive system adaptation, while not containing so much information as to be too complex. It draws from actual published attacks and from the authors' experience as security engineers.
Platform as a Service (PaaS) provides middleware resources to cloud customers. As demand for PaaS services increases, so do concerns about the security of PaaS. This paper discusses principal PaaS security and integrity requirements, along with vulnerabilities and corresponding countermeasures. We consider three core cloud elements (multi-tenancy, isolation, and virtualization) and how they relate to PaaS services, as well as security trends and concerns such as user and resource isolation, side-channel vulnerabilities in multi-tenant environments, and protection of sensitive data.
Cyber-attacks and breaches are often detected too late to avoid damage. While "classical" reactive cyber defenses usually work only if we have some prior knowledge about the attack methods and "allowable" patterns, properly constructed redundancy-based anomaly detectors can be more robust and are often able to detect even zero-day attacks. They are a step toward an oracle that uses knowable behavior of a healthy system to identify abnormalities. In the world of the Internet of Things (IoT), securing sensors and other IoT components, and detecting their anomalous behavior, will be orders of magnitude more difficult unless we make those elements security-aware from the start. In this article, we examine the ability of redundancy-based anomaly detectors to recognize some high-risk and difficult-to-detect attacks on web servers, a likely management interface for many stand-alone IoT elements. In real life, identifying some of these vulnerabilities and the related attacks has taken a long time, a number of years in some cases. We discuss the practical relevance of the approach in the context of providing high-assurance web services that may belong to autonomous IoT applications and devices.
Software as a Service (SaaS) is the most prevalent service delivery mode for cloud systems. This paper surveys common security vulnerabilities and corresponding countermeasures for SaaS. It is primarily focused on work published in the last five years. We observe current SaaS security trends and a lack of sufficiently broad and robust countermeasures in some SaaS security areas, such as identity and access management, due to the growth of SaaS applications.
We have developed a model for securing data-flow-based application chains and have implemented the model in the form of an add-on package for the scientific workflow system called Kepler. Our Security Analysis Package (SAP) leverages Kepler's Provenance Recorder (PR). SAP secures data flows from external input-based attacks, from access to unauthorized external sites, and from data integrity issues. Unsurprisingly, the cost of real-time security is a certain amount of run-time overhead. About half of the overhead appears to come from the use of the Kepler PR, and the other half from the security functions added by SAP.
This paper presents an approach for securing software application chains in cloud environments. We use the concept of workflow management systems to explain the model. Our prototype is based on the Kepler scientific workflow system enhanced with a security analytics package.
Proactive security reviews and test efforts are a necessary component of the software development lifecycle. Resource limitations often preclude reviewing the entire code base. Making informed decisions on what code to review can improve a team's ability to find and remove vulnerabilities. Risk-based attack surface approximation (RASA) is a technique that uses crash dump stack traces to predict what code may contain exploitable vulnerabilities. The goal of this research is to help software development teams prioritize security efforts through the efficient development of a risk-based attack surface approximation. We explore the use of RASA using Mozilla Firefox and Microsoft Windows stack traces from crash dumps. We create a RASA at the file level for Firefox, in which the 15.8% of files that were part of the approximation contained 73.6% of the vulnerabilities seen for the product. We also explore the effect of random sampling of crashes on the approximation, as it may be impractical for organizations to store and process every crash received. We find that 10-fold random sampling of crashes at a rate of 10% resulted in 3% fewer vulnerabilities identified than using the entire set of stack traces for Mozilla Firefox. Sampling crashes in Windows 8.1 at a rate of 40% resulted in insignificant differences in vulnerability and file coverage compared to a rate of 100%.
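A sketch of the crash-sampling experiment, under the assumption that the crash corpus is simply a list of stack traces. All data below is fabricated; only the sampling rates mirror those studied.

```python
import random

# Fabricated corpus: each crash is a list of files on its stack trace.
crashes = [[f"file{i % 50}.cpp", f"file{(i * 7) % 50}.cpp"]
           for i in range(1000)]

def surface(traces):
    return {f for t in traces for f in t}

full = surface(crashes)
for rate in (0.1, 0.4, 1.0):
    sample = random.sample(crashes, int(rate * len(crashes)))
    kept = len(surface(sample) & full) / len(full)
    print(f"rate={rate:.0%}: {kept:.1%} of full attack surface retained")
```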
Massive Open Online Courses (MOOCs) provide a unique opportunity to reach students who would not normally be reached, by alleviating the need to be physically present in the classroom. However, teaching software security coursework outside of a classroom setting can be challenging. What are the challenges when converting security material from an on-campus course to the MOOC format? The goal of this research is to assist educators in constructing software security coursework by providing a comparison of classroom courses and MOOCs. In this work, we compare demographic information, student motivations, and student results from an on-campus software security course and a MOOC version of the same course. We found that the two populations of students differed, with the MOOC reaching a more diverse set of students than the on-campus course. We found that students in the on-campus course had higher quiz scores, on average, than students in the MOOC. Finally, we document our experience running the courses and what we would do differently to assist future educators in constructing similar MOOCs.
Security requirements around software systems have become more stringent as society becomes more interconnected via the Internet. New ways of prioritizing security efforts are needed so security professionals can use their time effectively to find security vulnerabilities or prevent them from occurring in the first place. The goal of this work is to help software development teams prioritize security efforts by approximating the attack surface of a software system via stack trace analysis. Automated attack surface approximation is a technique that uses crash dump stack traces to predict what code may contain exploitable vulnerabilities. If a code entity (a binary, file, or function) appears on stack traces, then Attack Surface Approximation (ASA) considers that code entity to be on the attack surface of the software system. We also explore whether the number of appearances of code on stack traces correlates with where security vulnerabilities are found. To date, feasibility studies of ASA have been performed on Windows 8 and 8.1 and on Mozilla Firefox. The results from these studies indicate that ASA may be useful for practitioners trying to secure their software systems. We are now working toward establishing the ground truth of what the attack surface of software systems is, along with looking at how ASA could change over time, among other metrics.
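The second analysis mentioned above, checking whether appearance counts track vulnerability, can be sketched with a rank correlation. The traces and labels below are fabricated, and scipy's Spearman correlation stands in for whatever statistic the studies used.

```python
from collections import Counter
from scipy.stats import spearmanr

# Fabricated traces and per-file vulnerability labels.
traces = [["a.c", "b.c"], ["a.c", "c.c"], ["a.c"], ["d.c", "b.c"]]
is_vulnerable = {"a.c": 1, "b.c": 1, "c.c": 0, "d.c": 0, "e.c": 0}

counts = Counter(f for t in traces for f in t)   # stack-trace appearances
files = sorted(is_vulnerable)
x = [counts.get(f, 0) for f in files]            # 0 = off the surface
y = [is_vulnerable[f] for f in files]
rho, p = spearmanr(x, y)
print(f"Spearman rho={rho:.2f} (p={p:.2f})")
```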
Even though build automation tools help to reduce errors and enable rapid releases of software changes, the use of build automation tools is not widespread amongst software practitioners. Software practitioners perceive build automation tools as complex, which can hinder the adoption of these tools. How well-founded such perceptions are can be determined by systematic exploration of the adoption factors that influence usage of build automation tools. The goal of this paper is to aid software practitioners in increasing their usage of build automation tools by identifying the adoption factors that influence usage of these tools. We conducted a survey to empirically identify the adoption factors that influence usage of build automation tools. We obtained survey responses from 268 software professionals who work at NestedApps and Red Hat, as well as from contributors to open source software. We observe that adoption factors related to complexity do not have the strongest influence on usage of build automation tools. Instead, we observe compatibility-related adoption factors, such as adjustment with existing tools and adjustment with practitioners' existing workflows, to have a stronger influence on usage of build automation tools. Findings from our paper suggest that usage of build automation tools might increase if build automation tools fit well with practitioners' existing workflows and tool usage, and if usage of build automation tools is made more visible among practitioners' peers.
Android applications pose security and privacy risks for end-users. These risks are often quantified by performing dynamic analysis and permission analysis of the Android applications after release. Prediction of the security and privacy risks associated with Android applications at early stages of application development, e.g., when the developer(s) are writing the application's code, might help Android application developers release applications that pose less security and privacy risk to end-users. The goal of this paper is to aid Android application developers in assessing the security and privacy risk associated with Android applications by using static code metrics as predictors. In our paper, we consider the security and privacy risk of an Android application to be how susceptible the application is to leaking the private information of end-users and to containing vulnerabilities. We investigate how effectively static code metrics extracted from the source code of Android applications can be used to predict the security and privacy risk of Android applications. We collected 21 static code metrics from 1,407 Android applications and used the collected static code metrics to predict the security and privacy risk of the applications. As the oracle of security and privacy risk, we used AndroRisk, a tool that quantifies the amount of security and privacy risk of an Android application using analysis of Android permissions and dynamic analysis. To accomplish our goal, we used statistical learners such as the radial-basis-function support vector machine (r-SVM). For r-SVM, we observe a precision of 0.83. Findings from our paper suggest that with proper selection of static code metrics, r-SVM can be used effectively to predict the security and privacy risk of Android applications.
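A brief sklearn sketch of the prediction setup follows, with synthetic features standing in for the 21 static code metrics and a synthetic binary label standing in for the AndroRisk-derived oracle; the feature scaling step and split are our choices, not the paper's stated pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import precision_score

# Synthetic stand-ins: 1,407 apps x 21 static code metrics, with a
# binary risk label as an AndroRisk-style oracle would provide.
rng = np.random.default_rng(0)
X = rng.random((1407, 21))
y = (X[:, 0] + 0.3 * rng.random(1407) > 0.7).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))  # r-SVM
model.fit(X_tr, y_tr)
print("precision:", precision_score(y_te, model.predict(X_te)))
```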
Intrusion Detection Systems (IDSs) are crucial security mechanisms widely deployed for critical network protection. However, conventional IDSs are becoming incompetent due to the rapid growth in network size and the sophistication of large-scale attacks. To mitigate this problem, Collaborative IDSs (CIDSs) have been proposed in the literature. In CIDSs, a number of IDSs exchange their intrusion alerts and other relevant data so as to achieve better intrusion detection performance. Nevertheless, the required information exchange may result in privacy leakage, especially when these IDSs belong to different self-interested organizations. To obtain a quantitative understanding of the fundamental tradeoff between intrusion detection accuracy and the organizations' privacy, a repeated two-layer single-leader multi-follower game is proposed in this work. Based on our game-theoretic analysis, we are able to derive the expected behaviors of both the attacker and the IDSs and to obtain the utility-privacy tradeoff curve. In addition, the existence of a Nash equilibrium (NE) is proved, and an asynchronous dynamic update algorithm is proposed to compute the optimal collaboration strategies of IDSs. Finally, simulation results are presented to validate the analysis.
Recent years have seen significant advancement in the field of formal network verification. Tools have been proposed for offline data plane verification, real-time data plane verification, and configuration verification under arbitrary but static sets of failures. However, due to the fundamental limitation of not treating the network as an evolving system, current verification platforms have significant constraints in terms of scope. In real-world networks, correctness policies may be violated only through a particular combination of environment events and protocol actions, possibly in a non-deterministic sequence. Moreover, correctness specifications themselves may often correlate multiple data plane states, particularly when dynamic data plane elements are present. Tools in existence today are not capable of reasoning about all the possible network events and all the subsequent execution paths that are enabled by those events. We propose Plankton, a verification platform for identifying undesirable evolutions of networks. By combining symbolic modeling of the data plane and control plane with explicit state exploration, Plankton performs a goal-directed search on a finite-state transition system that captures the behavior of the network as well as the various events that can influence it. In this way, Plankton can automatically find policy violations that can occur due to a sequence of network events, starting from the current state. Initial experiments have successfully predicted scenarios like BGP Wedgies.
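A highly simplified sketch of goal-directed explicit-state exploration over network events follows. The states, events, and policy predicate are toy stand-ins for Plankton's symbolic model, invented purely to show the search structure.

```python
from collections import deque

# Toy transition system: each event maps a network state to a new state.
def events(state):
    # Hypothetical events: a link failure and a BGP route withdrawal.
    yield ("link_down", state | {"backup_path"})
    if "backup_path" in state:
        yield ("route_withdraw", state - {"reachable"})

def violates_policy(state):
    return "reachable" not in state  # policy: destination stays reachable

def find_violation(initial):
    """BFS from the current state over sequences of network events."""
    start = frozenset(initial)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, trace = queue.popleft()
        if violates_policy(state):
            return trace  # counterexample event sequence
        for name, nxt in events(state):
            nxt = frozenset(nxt)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [name]))
    return None

print(find_violation({"reachable"}))  # ['link_down', 'route_withdraw']
```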
Information security can benefit from real-time cyber threat indicator sharing, in which companies and government agencies share their knowledge of emerging cyberattacks to benefit their sector and society at large. As attacks become increasingly sophisticated by exploiting behavioral dimensions of human computer operators, there is an increased risk to systems that store personal information. In addition, risk increases as individuals blur the boundaries between workplace and home computing (e.g., using workplace computers for personal reasons). This paper describes an architecture to leverage individual perceptions of privacy risk to compute privacy risk scores over cyber threat indicator data. Unlike security risk, which is a risk to a particular system, privacy risk concerns an individual's personal information being accessed and exploited. The architecture integrates tools to extract information entities from textual threat reports expressed in the STIX format and privacy risk estimates computed using factorial vignettes to survey individual risk perceptions. The architecture aims to optimize for scalability and adaptability to achieve real-time risk scoring.
Transparent environments and social-coding platforms such as GitHub help developers stay abreast of changes during the development and maintenance phases of a project. In particular, notification feeds can help developers learn about relevant changes in other projects. Unfortunately, transparent environments can quickly overwhelm developers with so many notifications that they lose the important ones in a sea of noise. Complementing existing prioritization and filtering strategies based on binary compatibility and code ownership, we develop an anomaly-detection mechanism to identify unusual commits in a repository that stand out with respect to other changes in the same repository or by the same developer. Among other signals, we detect exceptionally large commits, commits at unusual times, and commits touching rarely changed file types, given the characteristics of a particular repository or developer. We automatically flag unusual commits on GitHub through a browser plugin. In an interactive survey with 173 active GitHub users, who rated commits in a project of their interest, we found that, though our unusual score is only a weak predictor of whether developers want to be notified about a commit, information about the unusual characteristics of a commit changes how developers regard commits. Our anomaly-detection mechanism is a building block for scaling transparent environments.
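One of the signals described, an unusually large commit relative to a repository's history, can be sketched with a simple z-score. The history, threshold, and scoring rule below are illustrative assumptions; the actual mechanism combines several per-repository and per-developer signals.

```python
import statistics

# Lines changed in a repository's recent commits (fabricated history).
history = [12, 40, 8, 25, 18, 30, 22, 15, 9, 27]
mu = statistics.mean(history)
sigma = statistics.stdev(history)

def unusual_score(lines_changed):
    """Z-score of commit size relative to this repository's history."""
    return (lines_changed - mu) / sigma

new_commit = 400
if unusual_score(new_commit) > 3:  # illustrative threshold
    print(f"flag: commit of {new_commit} lines is unusually large")
```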
Certification schemes exist to regulate software systems and prevent them from being deployed before they are judged fit for use. However, practitioners are often unsatisfied with the efficiency of certification standards and processes. In this study, we analyzed two certification standards, Common Criteria and DO-178C, and collected insights from the literature and from interviews with subject-matter experts to identify concepts affecting the efficiency of certification processes. Our results show that evaluation time, reusability of evaluation artifacts, and composition of systems and certified artifacts are barriers to achieving efficient certification.