Bibliography
Practical intrusion detection in Wireless Multihop Networks (WMNs) is a hard challenge. It has been shown that an active-probing-based network intrusion detection system (AP-NIDS) is practical for WMNs. However, how an AP-NIDS interacts with real networks remains largely unexplored. In this paper, we investigate this question in practice. We identify the general functional parameters that can be controlled and, by means of extensive experimentation, we tune these parameters and analyze the trade-offs between them, aiming to reduce false positives, overhead, and detection time. The traces we collected help us understand when and why active probing fails, and allow us to present countermeasures to prevent these failures.
Abnormal crowd behavior detection is an important research issue in video processing and computer vision. In this paper, we introduce a novel method to detect abnormal crowd behaviors in video surveillance based on interest points. A complex-network-based algorithm is used to detect interest points and extract the global texture features of a scene. The performance of the proposed method is evaluated on publicly available datasets. We present a detailed analysis of the characteristics of crowd behavior in crowd scenes of different densities. An analysis of crowd behavior features and simulation results further illustrate the effectiveness of the proposed method.
Smart grids, where cyber infrastructure is used to make power distribution more dependable and efficient, are prime examples of modern infrastructure systems. The cyber infrastructure provides monitoring and decision support intended to increase the dependability and efficiency of the system. This comes at the cost of vulnerability to accidental failures and malicious attacks, due to the greater extent of virtual and physical interconnection. Any failure can propagate more quickly and extensively, and as such, the net result could be lowered reliability. In this paper, we describe metrics for assessment of two phases of smart grid operation: the duration before a failure occurs, and the recovery phase after an inevitable failure. The former is characterized by reliability, which we determine based on information about cascading failures. The latter is quantified using resilience, which can in turn facilitate comparison of recovery strategies. We illustrate the application of these metrics to a smart grid based on the IEEE 9-bus test system.
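The two metrics lend themselves to a simple quantitative reading. As a minimal sketch, assuming resilience is measured as the normalized area under the system performance curve from failure until recovery is complete (the sampling interval, performance scale, and trapezoidal integration are illustrative assumptions, not the paper's exact formulation):

    # Hypothetical sketch: recovery resilience as the normalized area under the
    # system performance curve between failure and full recovery.
    def resilience(perf_samples, dt, nominal=1.0):
        """perf_samples: system performance (0..nominal) sampled every dt seconds
        from the moment of failure until recovery completes."""
        area = sum((a + b) / 2.0 * dt for a, b in zip(perf_samples, perf_samples[1:]))
        horizon = dt * (len(perf_samples) - 1)
        return area / (nominal * horizon)  # 1.0 means no performance was lost

    # Two recovery strategies for the same failure can then be compared directly:
    fast = resilience([0.4, 0.7, 0.9, 1.0, 1.0], dt=60)
    slow = resilience([0.4, 0.5, 0.6, 0.8, 1.0], dt=60)
    assert fast > slow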
This paper investigates the vulnerability of power grids using complex network theory combined with information entropy. The difference in current directions over a link is considered and is characterized by its information entropy. The electrical betweenness is then weighted by this entropy to evaluate the vulnerability of power grids. Attacking the link with the largest entropy-weighted electrical betweenness yields a larger giant cluster and a lower loss of load than attacks based on electrical betweenness without the information entropy. Finally, the IEEE 118-bus system is tested to validate the effectiveness of the novel index in characterizing the vulnerability of power grids.
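To make the entropy weighting concrete, here is a minimal sketch, assuming each link's current-direction distribution is summarized by a forward-flow probability; ordinary shortest-path edge betweenness stands in for the paper's power-flow-based electrical betweenness, so everything here is illustrative:

    import math
    import networkx as nx

    def direction_entropy(p_forward):
        """Shannon entropy (bits) of a link whose current flows 'forward' with
        probability p_forward across the considered operating states."""
        probs = [p for p in (p_forward, 1.0 - p_forward) if p > 0.0]
        return -sum(p * math.log2(p) for p in probs)

    def entropy_weighted_betweenness(graph, p_forward_by_edge):
        # Stand-in: true electrical betweenness comes from power-flow analysis.
        eb = nx.edge_betweenness_centrality(graph)
        return {e: eb[e] * (1.0 + direction_entropy(p_forward_by_edge.get(e, 1.0)))
                for e in graph.edges()}

    # Attack order: remove links with the largest weighted score first, then
    # measure the surviving giant component and the lost load.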
Most web applications have critical bugs (faults) affecting their security, which makes them vulnerable to attacks by hackers and organized crime. To prevent these security problems from occurring, it is of utmost importance to understand the typical software faults. This paper contributes to this body of knowledge by presenting a field study on two of the most widespread and critical web application vulnerabilities: SQL Injection and XSS. It analyzes the source code of security patches of widely used web applications written in weakly and strongly typed languages. Results show that only a small subset of software fault types, affecting a restricted collection of statements, is related to security. To understand how these vulnerabilities are really exploited by hackers, this paper also presents an analysis of the source code of the scripts used to attack them. The outcomes of this study can be used to train software developers and code inspectors in the detection of such faults, and they are also the foundation for research on realistic vulnerability and attack injectors that can be used to assess security mechanisms such as intrusion detection systems, vulnerability scanners, and static code analyzers.
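As a generic illustration of the dominant fault pattern behind SQL Injection (not an example drawn from the paper's dataset), compare a query built by string concatenation with its parameterized patch:

    import sqlite3

    def lookup_user_vulnerable(conn, username):
        # FAULT: attacker-controlled 'username' is concatenated into the query,
        # so an input like "x' OR '1'='1" changes the query's structure.
        cur = conn.execute("SELECT * FROM users WHERE name = '%s'" % username)
        return cur.fetchall()

    def lookup_user_patched(conn, username):
        # PATCH: a bound parameter keeps the input as data, never as SQL.
        cur = conn.execute("SELECT * FROM users WHERE name = ?", (username,))
        return cur.fetchall()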
Byzantine fault tolerance has been intensively studied over the past decade as a way to enhance the intrusion resilience of computer systems. However, state-machine-based Byzantine fault tolerance algorithms require deterministic application processing and sequential execution of totally ordered requests. One way of increasing the practicality of Byzantine fault tolerance is to exploit the application semantics, which we refer to as application-aware Byzantine fault tolerance. Application-aware Byzantine fault tolerance makes it possible to facilitate concurrent processing of requests, to minimize the use of Byzantine agreement, and to identify and control replica nondeterminism. In this paper, we provide an overview of recent works on application-aware Byzantine fault tolerance techniques. We elaborate the need for exploiting application semantics for Byzantine fault tolerance and the benefits of doing so, provide a classification of various approaches to application-aware Byzantine fault tolerance, and outline the mechanisms used in achieving application-aware Byzantine fault tolerance according to our classification.
Outsourcing spatial databases to the cloud provides an economical and flexible way for data owners to deliver spatial data to users of location-based services. However, in the database outsourcing paradigm, the third-party service provider is not always trustworthy; therefore, ensuring spatial query integrity is critical. In this paper, we propose an efficient road network k-nearest-neighbor query verification technique which utilizes the network Voronoi diagram and neighbors to prove the integrity of query results. Unlike previous work that verifies k-nearest-neighbor results in the Euclidean space, our approach needs to verify both the distances and the shortest paths from the query point to its kNN results on the road network. We evaluate our approach on real-world road networks together with both real and synthetic points of interest datasets. Our experiments run on Google Android mobile devices which communicate with the service provider through wireless connections. The experimental results show that our approach leads to compact verification objects (VOs) and that the verification algorithm on mobile devices is efficient, especially for queries with low selectivity.
An authenticated data structure (ADS) is a data structure whose operations can be carried out by an untrusted prover, the results of which a verifier can efficiently check as authentic. This is done by having the prover produce a compact proof that the verifier can check along with each operation's result. ADSs thus support outsourcing data maintenance and processing tasks to untrusted servers without loss of integrity. Past work on ADSs has focused on particular data structures (or limited classes of data structures), one at a time, often with support only for particular operations.
This paper presents a generic method, using a simple extension to an ML-like functional programming language we call λ• (lambda-auth), with which one can program authenticated operations over any data structure defined by standard type constructors, including recursive types, sums, and products. The programmer writes the data structure largely as usual and it is compiled to code to be run by the prover and verifier. Using a formalization of λ• we prove that all well-typed λ• programs result in code that is secure under the standard cryptographic assumption of collision-resistant hash functions. We have implemented λ• as an extension to the OCaml compiler, and have used it to produce authenticated versions of many interesting data structures including binary search trees, red-black+ trees, skip lists, and more. Performance experiments show that our approach is efficient, giving up little compared to the hand-optimized data structures developed previously.
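λ• itself is an OCaml extension; the following Python sketch only illustrates the underlying prover/verifier contract for one hand-written case, a Merkle binary tree in which the verifier keeps just the root digest and checks a membership proof (the tree encoding and hashing scheme are illustrative assumptions):

    import hashlib

    def h(*parts):
        return hashlib.sha256(b"|".join(parts)).hexdigest().encode()

    def leaf_digest(key):         return h(b"leaf", key)
    def node_digest(left, right): return h(b"node", left, right)

    def digest(tree):
        if isinstance(tree, bytes):
            return leaf_digest(tree)
        return node_digest(digest(tree[0]), digest(tree[1]))

    def prove(tree, key):
        """tree: nested pairs (left, right) with byte-string leaves. Returns
        (found, proof); the proof lists (sibling_digest, side) pairs bottom-up."""
        if isinstance(tree, bytes):
            return tree == key, []
        left, right = tree
        found, proof = prove(left, key)      # toy: search left subtree first
        if found:
            return True, proof + [(digest(right), "R")]
        found, proof = prove(right, key)
        return found, proof + [(digest(left), "L")]

    def verify(root_digest, key, proof):
        # The verifier recomputes the root from the leaf and sibling digests.
        d = leaf_digest(key)
        for sibling, side in proof:
            d = node_digest(d, sibling) if side == "R" else node_digest(sibling, d)
        return d == root_digest

    tree = ((b"a", b"b"), (b"c", b"d"))
    ok, pf = prove(tree, b"c")
    assert ok and verify(digest(tree), b"c", pf)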
This research focuses on hypervisor security from a holistic perspective. It centers on hypervisor architecture: the organization of the various subsystems which collectively comprise a virtualization platform. It holds that the path to a secure hypervisor begins with a big-picture focus on architecture. Unfortunately, little research has been conducted with this perspective. This study investigates the impact of monolithic and microkernel hypervisor architectures on the size and scope of the attack surface. Six architectural features are compared: management API, monitoring interface, hypercalls, interrupts, networking, and I/O. These subsystems are core hypervisor components which could be used as attack vectors. Specific examples and three leading hypervisor platforms are referenced (ESXi for the monolithic architecture; Xen and Hyper-V for the microkernel architecture). The results describe the relative strengths and vulnerabilities of both types of architectures. It is concluded that neither design is more secure, since both incorporate security tradeoffs in core processes.
In this paper, we propose an adaptive specification-based intrusion detection system (IDS) for detecting malicious unmanned air vehicles (UAVs) in an airborne system in which continuity of operation is of the utmost importance. An IDS audits UAVs in a distributed system to determine if the UAVs are functioning normally or are operating under malicious attacks. We investigate the impact of reckless, random, and opportunistic attacker behaviors (modes which many historical cyber attacks have used) on the effectiveness of our behavior rule-based UAV IDS (BRUIDS) which bases its audit on behavior rules to quickly assess the survivability of the UAV facing malicious attacks. Through a comparative analysis with the multiagent system/ant-colony clustering model, we demonstrate a high detection accuracy of BRUIDS for compliant performance. By adjusting the detection strength, BRUIDS can effectively trade higher false positives for lower false negatives to cope with more sophisticated random and opportunistic attackers to support ultrasafe and secure UAV applications.
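As a minimal sketch of behavior-rule auditing, assuming rules are predicates over an observed UAV state and the detection strength is a compliance threshold (the rules, state fields, and threshold semantics are illustrative, not BRUIDS's actual rule set):

    # Each rule maps an observed UAV state to pass/fail; the fraction of rules
    # satisfied (compliance degree) is compared against an adjustable threshold.
    RULES = [
        lambda s: s["altitude_m"] <= s["max_altitude_m"],   # stays in flight envelope
        lambda s: s["distance_to_formation_m"] <= 100.0,    # holds formation
        lambda s: not s["weapons_armed"] or s["in_engagement_zone"],
    ]

    def compliance_degree(state):
        return sum(1 for rule in RULES if rule(state)) / len(RULES)

    def is_malicious(state, detection_strength=0.67):
        # Raising detection_strength lowers false negatives at the cost of more
        # false positives, matching the trade-off described above.
        return compliance_degree(state) < detection_strength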
Signals intelligence analysts play a critical role in the United States government by providing information regarding potential national security threats to government leaders. Analysts perform complex decision-making tasks that involve gathering, sorting, and analyzing information. The current study evaluated how individual differences and training influence performance on an Internet search-based medical diagnosis task designed to simulate a signals analyst task. The implemented training emphasized the extraction and organization of relevant information and deductive reasoning. The individual differences of interest included working memory capacity and previous experience with elements of the task, specifically health literacy, prior experience using the Internet, and prior experience conducting Internet searches. Preliminary results indicated that the implemented training did not significantly affect performance; however, working memory capacity significantly predicted performance on the implemented task. These results support previous research and provide additional evidence that working memory capacity influences performance on cognitively complex decision-making tasks, whereas experience with elements of the task may not. These findings suggest that working memory capacity should be considered when screening individuals for signals intelligence positions. Future research should aim to generalize these findings within a broader sample, and ideally utilize a task that directly replicates those performed by signals analysts.
Central to the secure operation of a public key infrastructure (PKI) is the ability to revoke certificates. While much of users' security rests on this process taking place quickly, in practice, revocation typically requires a human to decide to reissue a new certificate and revoke the old one. Thus, having a proper understanding of how often systems administrators reissue and revoke certificates is crucial to understanding the integrity of a PKI. Unfortunately, this is typically difficult to measure: while it is relatively easy to determine when a certificate is revoked, it is difficult to determine whether and when an administrator should have revoked.
In this paper, we use a recent widespread security vulnerability as a natural experiment. Publicly announced in April 2014, the Heartbleed OpenSSL bug potentially (and undetectably) revealed servers' private keys. Administrators of servers that were susceptible to Heartbleed should have revoked their certificates and reissued new ones, ideally as soon as the vulnerability was publicly announced.
Using a set of all certificates advertised by the Alexa Top 1 Million domains over a period of six months, we explore the patterns of reissuing and revoking certificates in the wake of Heartbleed. We find that over 73% of vulnerable certificates had yet to be reissued and over 87% had yet to be revoked three weeks after Heartbleed was disclosed. Moreover, our results show a drastic decline in revocations on the weekends, even immediately following the Heartbleed announcement. These results are an important step in understanding the manual processes on which users rely for secure, authenticated communication.
Security subsystems are often designed with flawed assumptions arising from system designers' faulty mental models. Designers tend to assume that users behave according to some textbook ideal, and to consider each potential exposure/interface in isolation. However, fieldwork continually shows that even well-intentioned users often depart from this ideal and circumvent controls in order to perform daily work tasks, and that "incorrect" user behaviors can create unexpected links between otherwise "independent" interfaces. When it comes to security features and parameters, designers try to find the choices that optimize security utility, except that these flawed assumptions give rise to an incorrect curve and lead to choices that actually make security worse in practice. We propose that improving this situation requires giving designers more accurate models of real user behavior and of how it influences aggregate system security. Agent-based modeling can be a fruitful first step here. In this paper, we study a particular instance of this problem, propose user-centric techniques designed to strengthen the security of systems while simultaneously improving their usability, and propose further directions of inquiry.
Dynamic taint analysis and forward symbolic execution are quickly becoming staple techniques in security analyses. Example applications of dynamic taint analysis and forward symbolic execution include malware analysis, input filter generation, test case generation, and vulnerability discovery. Despite the widespread usage of these two techniques, there has been little effort to formally define the algorithms and summarize the critical issues that arise when these techniques are used in typical security contexts. The contributions of this paper are two-fold. First, we precisely describe the algorithms for dynamic taint analysis and forward symbolic execution as extensions to the run-time semantics of a general language. Second, we highlight important implementation choices, common pitfalls, and considerations when using these techniques in a security context.
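As a minimal sketch of the data-flow rule at the heart of dynamic taint analysis, over hypothetical three-address statements (real systems work at the binary level and must also confront control-flow and pointer taint, two of the pitfalls the paper discusses):

    # A value is tainted iff any of its inputs is tainted; overwriting a
    # variable with clean data removes its taint.
    def run_with_taint(program, tainted_inputs):
        """program: list of (dest, op, src1, src2) tuples over variable names."""
        taint = set(tainted_inputs)
        for dest, op, a, b in program:
            if a in taint or b in taint:
                taint.add(dest)       # taint propagates through the operation
            else:
                taint.discard(dest)   # clean result clears any old taint
        return taint

    prog = [("t1", "add", "user_input", "const"),
            ("t2", "mul", "clean", "const"),
            ("sink", "mov", "t1", "t1")]
    assert "sink" in run_with_taint(prog, {"user_input"})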
Security issues in computer networks have focused on attacks on end systems and the control plane. An entirely new class of emerging network attacks aims at the data plane of the network. Data plane forwarding in network routers has traditionally been implemented with custom-logic hardware, but recent router designs increasingly use software-programmable network processors for packet forwarding. These general-purpose processing devices exhibit software vulnerabilities and are susceptible to attacks. We demonstrate what is, to our knowledge, the first practical attack that exploits a vulnerability in packet processing software to launch a devastating denial-of-service attack from within the network infrastructure. This attack uses only a single attack packet to consume the full link bandwidth of the router's outgoing link. We also present a hardware-based defense mechanism that can detect situations where malicious packets try to change the operation of the network processor. Using a hardware monitor, our NetFPGA-based prototype system checks every instruction executed by the network processor and can detect deviations from correct processing within four clock cycles. A recovery system can restore the network processor to a safe state within six cycles. This high-speed detection and recovery system can ensure that network processors are protected effectively and efficiently from this new class of attacks.
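The following software model only illustrates the comparison logic of such a monitor, assuming a precomputed map from program-counter values to expected instruction hashes; the four- and six-cycle figures belong to the paper's hardware prototype and are not modeled here:

    import hashlib

    def instr_hash(instr_bytes):
        return hashlib.sha256(instr_bytes).digest()[:4]   # truncated table entry

    def monitor(expected, executed_trace):
        """expected: {pc: instruction hash}; executed_trace: [(pc, instr_bytes)].
        Flags the first executed instruction that deviates from the table."""
        for step, (pc, instr) in enumerate(executed_trace):
            if expected.get(pc) != instr_hash(instr):
                return ("deviation", step, pc)   # trigger recovery to a safe state
        return ("ok", None, None)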
This paper examines security faults/vulnerabilities reported for Fedora. Results indicate that, at least in some situations, the fault discovery rate remains roughly constant and may be used to guide estimation of residual vulnerabilities in an already released product, as well as possibly to guide testing of the next version of the product.
Access Control Policies (ACPs) evolve. Understanding the trends and evolution patterns of ACPs could provide guidance about the reliability and maintenance of ACPs. Our research goal is to help policy authors improve the quality of ACP evolution based on an understanding of trends and evolution patterns in ACPs. We performed an empirical study by analyzing the ACP changes over time for two systems: Security Enhanced Linux (SELinux) and an open-source virtual computing platform (VCL). We measured trends in terms of the number of policy lines and lines of code (LOC), respectively, and we observed evolution patterns. For example, an evolution pattern st1 → st2 says that st1 (e.g., "read") evolves into st2 (e.g., "read" and "write"); this pattern indicates that policy authors added "write" permission in addition to the existing "read" permission. We found that some evolution patterns occur more frequently than others.
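As a minimal sketch of mining such patterns, assuming policies are flattened to a map from (subject, object) pairs to permission sets (a simplification; real SELinux and VCL policies require proper parsers):

    from collections import Counter

    def evolution_patterns(old_policy, new_policy):
        """Count before -> after permission-set transitions across two versions."""
        patterns = Counter()
        for rule in old_policy.keys() & new_policy.keys():
            before, after = old_policy[rule], new_policy[rule]
            if before != after:
                patterns[(tuple(sorted(before)), tuple(sorted(after)))] += 1
        return patterns

    old = {("httpd", "logfile"): frozenset({"read"})}
    new = {("httpd", "logfile"): frozenset({"read", "write"})}
    assert evolution_patterns(old, new)[(("read",), ("read", "write"))] == 1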
This paper is a proposal for a poster. In it we describe a medical device security approach that researchers at Fraunhofer used to analyze different kinds of medical devices for security vulnerabilities. These medical devices were provided to Fraunhofer by a medical device manufacturer whose name we cannot disclose due to non-disclosure agreements.
Low-latency anonymity systems such as Tor rely on intermediate relays to forward user traffic; these relays, however, are often unreliable, resulting in a degraded user experience. Worse yet, malicious relays may introduce deliberate failures in a strategic manner in order to increase their chance of compromising anonymity. In this paper we propose using a reputation metric that can profile the reliability of relays in an anonymity system based on users' past experience. The two main challenges in building a reputation-based system for an anonymity system are: first, malicious participants can strategically oscillate between benign and malicious behavior to evade detection, and second, an observed failure in an anonymous communication cannot be uniquely attributed to a single relay. Our proposed framework addresses the former challenge by using a proportional-integral-derivative (PID) controller-based reputation metric that ensures malicious relays adopting time-varying strategic behavior obtain low reputation scores over time, and the latter by introducing a filtering scheme based on the evaluated reputation score to effectively discard relays mounting attacks. We collect data from the live Tor network and perform simulations to validate the proposed reputation-based filtering scheme. We show that an attacker does not gain any significant benefit by performing deliberate failures in the presence of the proposed reputation framework.
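As a minimal sketch of a PID-based reputation metric, assuming the error signal is the gap between a target success rate and a relay's observed success rate (the gains, target, and threshold are illustrative):

    class PIDReputation:
        def __init__(self, kp=0.5, ki=0.3, kd=0.2, target=0.95):
            self.kp, self.ki, self.kd, self.target = kp, ki, kd, target
            self.integral = 0.0
            self.prev_error = 0.0
            self.score = 1.0

        def observe(self, success_rate):
            error = self.target - success_rate    # positive when underperforming
            self.integral += error                # memory of past misbehavior keeps
                                                  # oscillating relays scored low
            derivative = error - self.prev_error  # reacts to sudden strategy shifts
            self.prev_error = error
            self.score -= (self.kp * error + self.ki * self.integral
                           + self.kd * derivative)
            self.score = max(0.0, min(1.0, self.score))
            return self.score

    def usable(relay_rep, threshold=0.5):
        # Filtering step: discard relays whose reputation falls below threshold.
        return relay_rep.score >= threshold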
Security features are often hardwired into software applications, making it difficult to adapt security responses to reflect changes in runtime context and new attacks. In prior work, we proposed the idea of architecture-based self-protection as a way of separating adaptation logic from application logic and providing a global perspective for reasoning about security adaptations in the context of other business goals. In this paper, we present an approach, based on this idea, for combating denial-of-service (DoS) attacks. Our approach allows DoS-related tactics to be composed into more sophisticated mitigation strategies that encapsulate possible responses to a security problem. Then, utility-based reasoning can be used to consider different business contexts and qualities. We describe how this approach forms the underpinnings of a scientific approach to self-protection, allowing us to reason about how to make the best choice of mitigation at runtime. Moreover, we also show how formal analysis can be used to determine whether the mitigations cover the range of conditions the system is likely to encounter, and the effect of mitigations on other quality attributes of the system. We evaluate the approach using the Rainbow self-adaptive framework and show how Rainbow chooses DoS mitigation tactics that are sensitive to different business contexts.
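As a minimal sketch of the utility-based selection step, assuming each DoS mitigation tactic is scored on a few quality dimensions and the business context supplies the weights (all tactics, dimensions, and numbers are illustrative):

    # Higher scores are better along every dimension (e.g., high "cost" means cheap).
    TACTICS = {
        "add_capacity":     {"availability": 0.9, "cost": 0.2, "user_annoyance": 0.9},
        "throttle_clients": {"availability": 0.6, "cost": 0.9, "user_annoyance": 0.4},
        "add_captcha":      {"availability": 0.7, "cost": 0.8, "user_annoyance": 0.3},
    }

    def best_tactic(weights):
        """Pick the tactic maximizing context-weighted utility."""
        def utility(scores):
            return sum(weights[dim] * scores[dim] for dim in weights)
        return max(TACTICS, key=lambda t: utility(TACTICS[t]))

    # A cost-sensitive business context prefers throttling; an
    # availability-critical one prefers buying capacity.
    assert best_tactic({"availability": 0.2, "cost": 0.7, "user_annoyance": 0.1}) == "throttle_clients"
    assert best_tactic({"availability": 0.8, "cost": 0.1, "user_annoyance": 0.1}) == "add_capacity"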

