Bibliography
Distributed denial-of-service attacks are an increasing problem facing web applications, for which many defense techniques have been proposed, including several moving-target strategies. These strategies typically work by relocating targeted services over time, increasing uncertainty for the attacker, while trying not to disrupt legitimate users or incur excessive costs. Prior work has not shown, however, whether and how a rational defender would choose a moving-target method against an adaptive attacker, and under what conditions. We formulate a denial-of-service scenario as a two-player game, and solve a restricted-strategy version of the game using the methods of empirical game-theoretic analysis. Using agent-based simulation, we evaluate the performance of strategies from prior literature under a variety of attacks and environmental conditions. We find evidence for the strategic stability of various proposed strategies, such as proactive server movement, delayed attack timing, and suspected insider blocking, along with guidelines for when each is likely to be most effective.
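For readers unfamiliar with empirical game-theoretic analysis, the sketch below shows the general shape of the method: payoffs for each restricted strategy profile are estimated from repeated simulation runs, and the resulting empirical game is checked for equilibria. The strategy names and the toy simulator are hypothetical stand-ins, not the paper's model.

```python
# Illustrative sketch of empirical game-theoretic analysis (EGTA), not the
# authors' implementation: payoffs for each restricted strategy profile are
# estimated by repeated simulation, then profiles are checked for equilibrium.
import itertools
import random

DEFENDER_STRATEGIES = ["proactive_move", "reactive_move", "static"]   # hypothetical
ATTACKER_STRATEGIES = ["immediate", "delayed", "probe_then_attack"]   # hypothetical

def simulate(defender, attacker, seed):
    """Stand-in for the agent-based simulator; returns (defender, attacker) payoffs."""
    rng = random.Random(hash((defender, attacker, seed)))
    base = {"proactive_move": 0.7, "reactive_move": 0.5, "static": 0.2}[defender]
    pressure = {"immediate": 0.2, "delayed": 0.4, "probe_then_attack": 0.5}[attacker]
    d = base - pressure + rng.gauss(0, 0.05)   # noisy defender payoff
    return d, -d                               # zero-sum for simplicity

def empirical_payoffs(samples=200):
    """Estimate the payoff table of the restricted game by simulation."""
    table = {}
    for d, a in itertools.product(DEFENDER_STRATEGIES, ATTACKER_STRATEGIES):
        runs = [simulate(d, a, s) for s in range(samples)]
        table[(d, a)] = tuple(sum(x) / samples for x in zip(*runs))
    return table

def pure_nash(table):
    """Profiles where neither player gains by a unilateral deviation."""
    eq = []
    for (d, a), (ud, ua) in table.items():
        if all(table[(d2, a)][0] <= ud for d2 in DEFENDER_STRATEGIES) and \
           all(table[(d, a2)][1] <= ua for a2 in ATTACKER_STRATEGIES):
            eq.append((d, a))
    return eq

print(pure_nash(empirical_payoffs()))
```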
As India is being digitized through the Digital India initiative, biometrics form the most basic unique identity for each individual. Since India is the second most populous nation, the database that has to be maintained is enormous. Whether present techniques can adequately shield this information has been questioned. This problem can be overcome by using cryptographic algorithms in combination with biometrics. Hence, the proposed system combines multimodal biometrics (fingerprint, retina, finger vein) with a cryptographic algorithm, achieving a Genuine Acceptance Rate of 94%, a False Acceptance Rate of 1.46%, and a False Rejection Rate of 1.07%.
The security game is a basic model for resource allocation in adversarial environments. Here there are two players, a defender and an attacker. The defender wants to allocate her limited resources to defend critical targets and the attacker seeks his most favorable target to attack. In the past decade, there has been a surge of research interest in analyzing and solving security games motivated by applications from various domains. Remarkably, these models and their game-theoretic solutions have led to real-world deployments in use by major security agencies such as the US Coast Guard and the Federal Air Marshal Service, and at LAX airport, as well as by non-governmental organizations. Across all this research and these applications, equilibrium computation serves as a foundation. This paper examines security games from a theoretical perspective and provides a unified view of various security game models. In particular, each security game can be characterized by a set system E which consists of the defender's pure strategies; the defender's best response problem can be viewed as a combinatorial optimization problem over E. Our framework captures most of the basic security game models in the literature, including all the deployed systems; the set system E arising from various domains encodes standard combinatorial problems like bipartite matching, maximum coverage, min-cost flow, and packing problems. Our main result shows that equilibrium computation in security games is essentially a combinatorial problem. In particular, we prove that, for any set system E, the following problems can be reduced to each other in polynomial time: (0) combinatorial optimization over E; (1) computing the minimax equilibrium for zero-sum security games over E; (2) computing the strong Stackelberg equilibrium for security games over E; (3) computing the best or worst (for the defender) Nash equilibrium for security games over E. Therefore, the hardness [polynomial solvability] of any of these problems implies the hardness [polynomial solvability] of all the others. Here, by "games over E" we mean the class of security games with arbitrary payoff structures but a fixed set E of defender pure strategies. This shows that the complexity of a security game is essentially determined by the set system E. We view drawing these connections as an important conceptual contribution of this paper.
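For the zero-sum case (problem 1), the connection to combinatorial optimization (problem 0) can be sketched with the standard minimax formulation; the notation below is a common textbook form, not necessarily the paper's.

```latex
% Zero-sum security game over a set system E of defender pure strategies
% (subsets of the target set [n]); x is a mixed strategy over E.
\[
\max_{x \in \Delta(E)} \; \min_{t \in [n]} \;
   \Big( c_t\, U^{c}(t) + (1 - c_t)\, U^{u}(t) \Big),
\qquad
c_t = \sum_{S \in E \,:\, t \in S} x_S,
\]
% where U^c(t) and U^u(t) are the defender's payoffs when the attacked target
% t is covered or uncovered. Solving this LP with the ellipsoid method needs a
% separation oracle of the form max_{S in E} sum_{t in S} w_t, which is
% precisely combinatorial optimization over E, i.e., problem (0) above.
```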
In view of the high demand for the security of data access in power systems, this paper proposes a network data security analysis method based on deep packet inspection (DPI) technology, to solve the problem of a security gateway judging the legality of network data. Since the legitimacy of data involves both its protocol and its contents, the method filters data through protocol matching and content detection: DPI technology screens the protocol, and protocol analysis inspects the data contents. We implement a function that allows secure data through the gateway and blocks threatening data. An example shows that the method more effectively guarantees the safety of data access.
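A minimal sketch of this two-stage filtering, with made-up protocol signatures and threat patterns (they are illustrative assumptions, not the paper's rule set):

```python
# Stage 1 matches the protocol by deep packet inspection of the payload;
# stage 2 checks the content against per-protocol threat patterns.
import re

PROTOCOL_SIGNATURES = {            # hypothetical DPI byte signatures
    "modbus": re.compile(rb"^\x00.\x00\x00\x00."),   # MBAP-like header
    "http":   re.compile(rb"^(GET|POST|PUT) "),
}
THREAT_PATTERNS = {                # hypothetical per-protocol content rules
    "modbus": [re.compile(rb"\x08\x00\x04")],        # e.g. force-listen-only
    "http":   [re.compile(rb"(?i)union\s+select")],  # e.g. SQL injection
}

def classify_protocol(payload: bytes):
    """Stage 1: DPI protocol screening."""
    for proto, sig in PROTOCOL_SIGNATURES.items():
        if sig.search(payload):
            return proto
    return None

def gateway_allows(payload: bytes) -> bool:
    """Stage 2: content detection; unknown protocols are blocked by default."""
    proto = classify_protocol(payload)
    if proto is None:
        return False
    return not any(p.search(payload) for p in THREAT_PATTERNS[proto])

print(gateway_allows(b"GET /index.html HTTP/1.1"))            # True: allowed
print(gateway_allows(b"GET /?q=1 UNION SELECT pw HTTP/1.1"))  # False: blocked
```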
In this paper, we propose a new risk analysis framework that enables the supervision of risks in complex and distributed systems. Our contribution is twofold. First, we provide Risk Assessment Graphs (RAGs) as a model for risk analysis. This graph-based model is adaptable to system changes over time. We also introduce the potentiality and accessibility functions which, during each time slot, evaluate respectively the chance of exploiting a RAG's nodes and the connection time between these nodes. In addition, we provide a worst-case risk evaluation approach, based on the assumption that intruders usually aim to maximise their benefit by inflicting the maximum damage on the target system (i.e., choosing the most likely paths in the RAG). We then introduce three security metrics: the propagated risk, the node risk, and the global risk. We illustrate the use of our framework through the simple example of an enterprise email service. Our framework achieves both flexibility and generality requirements; it can be used to assess external threats as well as insider ones, and it applies to a wide set of applications.
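One plausible reading of these metrics on a toy RAG is sketched below; the potentiality and accessibility values and the product-form aggregation are illustrative assumptions, not the paper's exact definitions.

```python
# An illustrative computation over a small Risk Assessment Graph: nodes carry
# a potentiality in [0, 1], edges carry an accessibility weight in [0, 1], and
# the worst case follows the most likely attack path, i.e. the path maximizing
# the product of these values (toy email-service example).
POTENTIALITY = {"internet": 1.0, "mail_gw": 0.6, "mail_srv": 0.4, "mailbox": 0.5}
EDGES = {  # (src, dst) -> accessibility during the current time slot
    ("internet", "mail_gw"): 0.9,
    ("mail_gw", "mail_srv"): 0.7,
    ("mail_srv", "mailbox"): 0.8,
}

def propagated_risk(path):
    """Risk carried along one path: product of potentialities and accessibilities."""
    risk = POTENTIALITY[path[0]]
    for u, v in zip(path, path[1:]):
        risk *= EDGES[(u, v)] * POTENTIALITY[v]
    return risk

def node_risk(node, source="internet"):
    """Worst case: maximum propagated risk over all paths from the source."""
    best = POTENTIALITY[source] if node == source else 0.0
    for (u, v) in EDGES:
        if v == node:
            best = max(best, node_risk(u, source) * EDGES[(u, v)] * POTENTIALITY[v])
    return best

def global_risk(nodes):
    """One possible aggregate: the maximum node risk in the system."""
    return max(node_risk(n) for n in nodes)

print(node_risk("mailbox"))        # risk propagated to the mailbox
print(global_risk(POTENTIALITY))   # global worst-case risk
```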
In threshold schemes one represents each sensitive variable by a number n of shares such that their (usually) bitwise sum equals that variable. These shares are initially generated in such a way that any subset of n-1 shares gives no information about the sensitive variable. Functions (S-boxes, mixing layers, round functions, etc.) are computed on the shares of the inputs, resulting in the output as a number of shares. An essential property of a threshold implementation of a function is that each output share is computed from at most n-1 input shares. This is called incompleteness and guarantees that the computation cannot leak information about sensitive variables. The resulting output is then typically subject to some further computation, again in the form of separate, incomplete computations on shares. For these subsequent computations not to leak information about the sensitive variables, the output of the previous stage must still be uniform. Hence, in an iterative cryptographic primitive such as a block cipher, we need a threshold implementation of the round function that yields a uniformly shared output if its input is uniformly shared. This property of the threshold implementation is called uniformity. Threshold schemes form a good protection mechanism against differential power analysis (DPA). In particular, using them allows building cryptographic hardware that is guaranteed to be unattackable with first-order DPA, assuming certain leakage models of the cryptographic hardware at hand and for a plausible definition of "first order". Constructing an incomplete threshold implementation of a non-linear function is rather straightforward. To offer resistance against first-order DPA, the number of shares equals the algebraic degree of the function plus one. However, constructing one that is at the same time incomplete and uniform may present a challenge. For instance, for the Keccak non-linear layer, incomplete 3-share threshold implementations are easy to generate but no uniform one is known. Exhaustive investigations have been performed on all small S-boxes (3 to 5 bits), and there are many S-boxes of algebraic degree d for which it is not known how to build uniform threshold implementations with d+1 shares. Uniformity of a threshold implementation is essential in its information-theoretic proof of resistance against first-order DPA. However, given a non-uniform threshold implementation, it is not immediate how to exploit its non-uniformity in an attack. In my talk I discuss the local and global effects of non-uniformity in iterated functions and their significance for the resistance against DPA. I treat methods to quantitatively limit the amount of non-uniformity and to keep it away from where it may be harmful. These techniques are relatively cheap and can reduce non-uniformity to such a low level that it would require an astronomical amount of samples to measure it.
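The sharing step itself is easy to make concrete. Below is a minimal sketch for Boolean (bitwise-XOR) shares, showing that any n-1 shares are uniformly random while all n reconstruct the variable; it illustrates only the sharing, not a shared, incomplete function evaluation.

```python
# A sensitive byte string is split into n shares whose bitwise XOR equals the
# secret; the first n-1 shares are fresh uniform randomness, so any subset of
# n-1 shares carries no information about the secret.
import os
from functools import reduce

def share(secret: bytes, n: int) -> list[bytes]:
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(secret, *shares))
    return shares + [last]

def unshare(shares: list[bytes]) -> bytes:
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*shares))

secret = b"key material"
s = share(secret, 3)
assert unshare(s) == secret   # all n shares together reconstruct the secret
# any n-1 shares (e.g. s[0], s[1]) are uniform and independent of the secret
```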
The infrastructures of Supervisory Control and Data Acquisition (SCADA) systems have evolved over time in order to provide more efficient supervision services. Despite the changes made to SCADA architectures, several enhancements are still required to address the need for: a) large-scale supervision using a high number of sensors; b) reduction of the reaction time when a malicious activity is detected; and c) the assurance of high interoperability between SCADA systems in order to prevent the propagation of incidents. In this context, we propose a novel sensor-cloud-based SCADA infrastructure to monitor large-scale and interdependent critical infrastructures, making effective use of sensor clouds to increase the supervision coverage and reduce the processing time. It also ensures interoperability between interdependent SCADAs by offering them a set of services, which are created from templates and associated with sets of virtual sensors. A simulation is conducted to demonstrate the effectiveness of the proposed architecture.
We explore the use of a new way to log into a web service, such as email or social media. Using on-demand biometrics, users sign in from a browser on a computer using just their name, which sends a request to their phone for approval. Users approve this request by authenticating on their phone using their fingerprint, which completes the login in the browser. On-demand biometrics thus replace passwords or temporary access codes found in two-step verification with the ease of use of biometrics. We present the results of an interview study on the use of on-demand biometrics with a live login backend. Participants perceived our system as convenient and fast to use and also expressed their trust in fingerprint authentication to keep their accounts safe. We motivate the design of on-demand biometrics, present an analysis of participants' use and responses around general account security and authentication, and conclude with implications for designing fast and easy cross-device authentication.
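The login flow can be summarized in three steps; the sketch below is a schematic of that flow with hypothetical function names and an in-memory store, not the authors' live backend.

```python
# Step 1: the browser submits only a username, which creates a pending request
# and (in a real system) triggers a push to the user's phone. Step 2: the phone
# resolves the request after a local fingerprint match. Step 3: the browser
# polls until the request is resolved and the login completes.
import secrets

PENDING = {}   # request_id -> {"user": ..., "approved": None | bool}

def browser_login(username: str) -> str:
    request_id = secrets.token_urlsafe(16)
    PENDING[request_id] = {"user": username, "approved": None}
    # push_to_phone(username, request_id)   # hypothetical push channel
    return request_id

def phone_approve(request_id: str, fingerprint_ok: bool) -> None:
    PENDING[request_id]["approved"] = fingerprint_ok

def browser_poll(request_id: str):
    return PENDING[request_id]["approved"]   # None while still pending

rid = browser_login("alice")
phone_approve(rid, fingerprint_ok=True)
assert browser_poll(rid) is True
```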
We present a code- and input-sensitive sanitization synthesis approach for repairing string vulnerabilities that are common in web applications. The synthesized sanitization patch modifies the user input in an optimal way while guaranteeing that the repaired web application is not vulnerable. Given a web application, an input pattern and an attack pattern, we use automata-based static string analysis techniques to compute a sanitization signature that characterizes safe input values that obey the given input pattern and are safe with respect to the given attack pattern. Using the sanitization signature, we synthesize an optimal sanitization patch that converts malicious user inputs to benign ones with minimal editing. When the generated patch is added to the web application, it is guaranteed that the repaired web application is no longer vulnerable. We present refinements to previous sanitization synthesis algorithms that reduce the runtime sanitization cost significantly. We evaluate our approach on open source web applications using common input and attack patterns, demonstrating the effectiveness of our approach.
In this paper, we investigate the impact of an information-theoretic secrecy constraint on the capacity and delay of mobile ad hoc networks (MANETs) with mobile legitimate nodes and static eavesdroppers whose location and channel state information (CSI) are both unknown. We assume n legitimate nodes move according to the fast i.i.d. mobility pattern and each desires to communicate with one randomly selected destination node. There are also n^v static eavesdroppers located uniformly in the network, and we assume the number of eavesdroppers is much larger than that of legitimate nodes, i.e., v > 1. We propose a novel simple secure communication model, i.e., the secure protocol model, and prove its equivalence to the widely accepted secure physical model under a few technical assumptions. Based on the proposed model, a framework for analyzing the secrecy capacity and delay in MANETs is established. Given a delay constraint D, we find that the optimal secrecy throughput capacity is Θ̃(W(D/n)^(2/3)), where W is the data rate of each link and the order notation Θ̃(·) omits polylogarithmic factors for better readability. We observe that: 1) the capacity-delay tradeoff is independent of the number of eavesdroppers, which indicates that adding more eavesdroppers will not degrade the performance of the legitimate network as long as v > 1; 2) the capacity-delay tradeoff of our paper outperforms the previous result Θ(1/(nψ_e)) in [11], where ψ_e = n^(v-1) = ω(1) is the density of the eavesdroppers. Throughout this paper, for functions f(n) and g(n), we denote f(n) = o(g(n)) if lim_(n→∞) f(n)/g(n) = 0; f(n) = ω(g(n)) if g(n) = o(f(n)); f(n) = O(g(n)) if there is a positive constant c such that f(n) ≤ c·g(n) for sufficiently large n; f(n) = Ω(g(n)) if g(n) = O(f(n)); and f(n) = Θ(g(n)) if both f(n) = O(g(n)) and f(n) = Ω(g(n)) hold.
We introduce OPTIK, a new practical design pattern for designing and implementing fast and scalable concurrent data structures. OPTIK relies on the commonly-used technique of version numbers for detecting conflicting concurrent operations. We show how to implement the OPTIK pattern using the novel concept of OPTIK locks. These locks enable the use of version numbers for implementing very efficient optimistic concurrent data structures. Existing state-of-the-art lock-based data structures acquire the lock and then check for conflicts. In contrast, with OPTIK locks, we merge the lock acquisition with the detection of conflicting concurrency in a single atomic step, similarly to lock-free algorithms. We illustrate the power of our OPTIK pattern and its implementation by introducing four new algorithms and by optimizing four state-of-the-art algorithms for linked lists, skip lists, hash tables, and queues. Our results show that concurrent data structures built using OPTIK are more scalable than the state of the art.
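A sketch of the pattern: validation of the version number and lock acquisition happen in one atomic step, so a conflicting concurrent update makes the acquisition fail fast. Real OPTIK locks do this with a single compare-and-swap; since Python has no user-level CAS, a mutex stands in for that atomic step in this illustrative sketch.

```python
# OPTIK-style lock: "acquire only if the version is still the one I read"
# merges conflict detection with lock acquisition, as in lock-free algorithms.
import threading

class OptikLock:
    def __init__(self):
        self._mutex = threading.Lock()   # stand-in for a hardware CAS
        self.version = 0

    def try_lock_at(self, seen_version: int) -> bool:
        """Acquire the lock and validate the version in one atomic step;
        fail fast if a concurrent update happened since `seen_version`."""
        if not self._mutex.acquire(blocking=False):
            return False
        if self.version != seen_version:
            self._mutex.release()
            return False
        return True

    def unlock(self, bump: bool = True):
        if bump:
            self.version += 1            # publish that the structure changed
        self._mutex.release()

# Optimistic update of a shared value guarded by an OPTIK lock.
lock, value = OptikLock(), [0]
while True:
    v, snapshot = lock.version, value[0]   # optimistic, lock-free read phase
    new = snapshot + 1                     # compute the update without the lock
    if lock.try_lock_at(v):                # validate + acquire in one step
        value[0] = new
        lock.unlock()
        break
    # else: a concurrent update interfered; re-read and retry
```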
Simple connectivity, modest data requirements, and long battery lifetime are the main requirements for machine-to-machine (M2M) communications. 3GPP focuses on three main licensed standardizations based on Long Term Evolution (LTE), GSM, and clean-slate technologies. This paper considers the last of these and proposes a modified slotted-Aloha method to increase the capability of supporting a massive number of low-throughput devices. The proposed method increases the access rate of users belonging to each class considered in the clean-slate standard, and consequently the total throughput offered by the system. To derive the mean access rate per class, we use a Markov chain approach; simulation results are provided for scenarios with different data rates, as well as in terms of average cell delay.
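The access mechanism is easy to illustrate by simulation. Below is a small Monte-Carlo sketch of multi-class slotted Aloha with illustrative class sizes and transmit probabilities; these parameters, and the simulation itself, are stand-ins for the paper's Markov-chain derivation.

```python
# Multi-class slotted Aloha: in each slot, a device of class k transmits with
# probability P_TX[k]; a slot succeeds only if exactly one device transmits.
import random

DEVICES = {0: 50, 1: 30, 2: 20}          # devices per class (hypothetical)
P_TX    = {0: 0.002, 1: 0.004, 2: 0.008} # per-slot transmit probability

def simulate(slots=20_000, seed=1):
    rng = random.Random(seed)
    success = {k: 0 for k in DEVICES}
    for _ in range(slots):
        senders = [k for k, n in DEVICES.items()
                     for _ in range(n) if rng.random() < P_TX[k]]
        if len(senders) == 1:            # no collision: the slot is won
            success[senders[0]] += 1
    # fraction of slots won by each class, a proxy for per-class access rate
    return {k: s / slots for k, s in success.items()}

print(simulate())
```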
This paper proposes a method for segmentation of nuclei of single/isolated and overlapping/touching immature white blood cells from microscopic images of B-lineage acute lymphoblastic leukemia (ALL) prepared from peripheral blood and bone marrow aspirate. We propose a deep belief network approach for the segmentation of these nuclei. Simulation results and comparison with some of the existing methods demonstrate the efficacy of the proposed method.
Path prediction on the Internet has been a topic of research in the networking community for close to a decade. Applications of path prediction solutions have ranged from optimizing the selection of peers in peer-to-peer networks to improving and debugging CDN predictions. Recently, revelations of traffic correlation and surveillance on the Internet have raised the topic of path prediction in the context of network security. Specifically, predicting network paths can allow us to identify and avoid given organizations on network paths (e.g., to avoid traffic correlation attacks in Tor) or to infer the impact of hijacks and interceptions when direct measurements are not available. In this poster we propose the design and implementation of PathCache, which aims to reuse measurement data to estimate AS-level paths on the Internet. Unlike similar systems, PathCache does not assume that routing on the Internet is destination-based. Instead, we develop an algorithm to compute confidence in paths between ASes. These multiple paths, ranked by their confidence values, are returned to the user.
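One way to make "confidence in paths" concrete is sketched below; the frequency-based edge confidence and product-form ranking are my illustrative formalization, not necessarily PathCache's algorithm.

```python
# Confidence of an AS-level edge is its observed frequency among measured
# paths leaving that AS; a candidate path between two ASes is ranked by the
# product of its edge confidences.
from collections import Counter

MEASURED_PATHS = [                 # hypothetical traceroute/BGP observations
    ["AS1", "AS2", "AS4"],
    ["AS1", "AS2", "AS5", "AS4"],
    ["AS1", "AS3", "AS4"],
    ["AS2", "AS5", "AS4"],
]

edge_count, out_count = Counter(), Counter()
for p in MEASURED_PATHS:
    for u, v in zip(p, p[1:]):
        edge_count[(u, v)] += 1
        out_count[u] += 1

def confidence(u, v):
    return edge_count[(u, v)] / out_count[u]

def ranked_paths(src, dst, path=None, conf=1.0):
    """DFS over observed edges, returning candidate paths with confidences."""
    path = path or [src]
    if src == dst:
        return [(path, conf)]
    out = []
    for (u, v) in edge_count:
        if u == src and v not in path:
            out += ranked_paths(v, dst, path + [v], conf * confidence(u, v))
    return sorted(out, key=lambda t: -t[1])

print(ranked_paths("AS1", "AS4"))   # multiple paths, highest confidence first
```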
TCP congestion control has been known for its crucial role in stabilizing the Internet and preventing congestion collapses. However, with the rapid advancement in networking technologies and the resulting emergence of challenging network environments such as data center networks (DCNs), the traditional TCP algorithm suffers several impairments. The shortcomings of TCP when deployed in DCNs have motivated the development of multiple new variants, including DCTCP, ICTCP, IA-TCP, and D2TCP, but all of these algorithms exhibit their advantages at the cost of a number of drawbacks in the global Internet. Motivated by the belief that new innovations need to be established on top of a solid foundation with a thorough understanding of the existing, well-established algorithms, we have been working towards a comprehensive analysis of various conventional TCP algorithms in DCNs and other modern networks. This paper presents our first milestone towards the completion of our comparative study, in which we present the results obtained by simulating multiple TCP variants: NewReno, Vegas, HighSpeed, Scalable, Westwood+, BIC, CUBIC, and YeAH using a fat-tree architecture. Each protocol is evaluated in terms of queue length, number of dropped packets, average packet delay, and aggregate bandwidth as a percentage of the channel bandwidth.
Screen locks are vulnerable to shoulder surfing: passwords, personal identification numbers (PINs), and unlock patterns can be observed when smartphones are used in public spaces, even though these devices store important information and are frequently used in public. In this paper, we propose a new method in which passwords are combined with biometric authentication that cannot be observed by shoulder surfing and is difficult to guess by brute-force attack. In our method, sensors measure the motion of a finger while a user operates a mobile terminal, and this motion, which includes characteristics of the user, is registered. The registered characteristics are classified by learning with self-organizing maps, and users are identified by referring to the self-organizing maps when they input passwords on their mobile terminals.
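A hedged sketch of the learning step, using the third-party minisom package on synthetic motion features; the feature choice and the acceptance threshold are illustrative assumptions, not the paper's design.

```python
# Train a self-organizing map on the genuine user's "finger motion" features,
# then accept a new sample only if it maps close to the learned map.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
# enrollment: feature vectors (e.g. accelerometer statistics) from the user
enroll = rng.normal(loc=0.0, scale=1.0, size=(200, 6))

som = MiniSom(6, 6, input_len=6, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train(enroll, num_iteration=2000)

# illustrative tolerance: 1.5x the average quantization error at enrollment
THRESHOLD = 1.5 * som.quantization_error(enroll)

def verify(sample: np.ndarray) -> bool:
    codebook = som.quantization(sample[None, :])[0]  # winning neuron's weights
    return float(np.linalg.norm(sample - codebook)) <= THRESHOLD

print(verify(rng.normal(0.0, 1.0, size=6)))   # genuine-like motion
print(verify(rng.normal(5.0, 1.0, size=6)))   # impostor-like motion
```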
Recent years have seen an exponential growth of the collection and processing of data from heterogeneous sources for a variety of purposes. Several methods and techniques have been proposed to transform and fuse data into "useful" information. However, the security aspects concerning the fusion of sensitive data are often overlooked. This paper investigates the problem of data fusion and derived data control. In particular, we identify the requirements for regulating the fusion process and eliciting restrictions on the access and usage of derived data. Based on these requirements, we propose an attribute-based policy framework to control the fusion of data from different information sources and under the control of different authorities. The framework comprises two types of policies: access control policies, which define the authorizations governing the resources used in the fusion process, and fusion policies, which define constraints on allowed fusion processes. We also discuss how such policies can be obtained for derived data.
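The two policy types can be illustrated with a toy attribute-based encoding; this is an illustrative sketch, not the paper's policy language.

```python
# Access control policies authorize use of each source; fusion policies
# constrain which combinations may be fused and label the derived data.
SOURCES = {
    "meter_readings": {"authority": "utility",    "sensitivity": "high"},
    "weather":        {"authority": "met_office", "sensitivity": "low"},
}
LEVEL = {"low": 0, "high": 1}

def access_policy(subject: dict, source: str) -> bool:
    """ABAC-style rule: a subject may read a source only at or below clearance."""
    return LEVEL[subject["clearance"]] >= LEVEL[SOURCES[source]["sensitivity"]]

def fusion_policy(sources: list[str]) -> bool:
    """Constraint on allowed fusions: at most one high-sensitivity input."""
    return sum(SOURCES[s]["sensitivity"] == "high" for s in sources) <= 1

def fuse(subject: dict, sources: list[str]) -> dict:
    if not all(access_policy(subject, s) for s in sources):
        raise PermissionError("access control policy denied a source")
    if not fusion_policy(sources):
        raise PermissionError("fusion policy forbids this combination")
    # derived data inherits the strictest sensitivity of its inputs
    sensitivity = max((SOURCES[s]["sensitivity"] for s in sources), key=LEVEL.get)
    return {"inputs": sources, "sensitivity": sensitivity}

analyst = {"clearance": "high"}
print(fuse(analyst, ["meter_readings", "weather"]))
```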
Multi-core processors are widely used in mobile devices due to their high performance and good energy efficiency. To maintain cache coherency among cores, mobile multi-core systems integrate new hardware, the ARM Cache Coherent Interconnect (CCI). In this study, we focus on the security aspect of mobile multi-core systems. We monitor cache coherency operations that occur during inter-core communication among PSL-related processes. After simple analysis, we can infer Android PSL information. Preliminary results show that we can efficiently identify PSL patterns. This is a significant security violation in terms of confidentiality. Moreover, since mobile multi-core processors are already prevalent, the attack is practical and can spread easily.
This paper presents a new type of online password guessing attack called "WiPING" (Wi-Fi signal-based PIN Guessing attack) to guess a victim's PIN (Personal Identification Number) within a small number of unlock attempts. WiPING uses wireless signal patterns identified from observing sequential finger movements involved in typing a PIN to unlock a mobile device. A list of possible PIN candidates is generated from the wireless signal patterns, and is used to improve performance of PIN guessing attacks. We implemented a proof-of-concept attack to demonstrate the feasibility of WiPING. Our results showed that WiPING could be practically effective: while pure guessing attacks failed to guess all 20 PINs, WiPING successfully guessed two PINs.
Pseudo-random number generators (PRNGs) are a critical infrastructure for cryptography and security of many computer applications. At the same time, PRNGs are surprisingly difficult to design, implement, and debug. This paper presents the first static analysis technique specifically for quality assurance of cryptographic PRNG implementations. The analysis targets a particular kind of implementation defect, the entropy loss. Entropy loss occurs when the entropy contained in the PRNG seed is not utilized to the full extent for generating the pseudo-random output stream. The Debian OpenSSL disaster, probably the most prominent PRNG-related security incident, was one but not the only manifestation of such a defect. Together with the static analysis technique, we present its implementation, a tool named Entroposcope. The tool offers a high degree of automation and practicality. We have applied the tool to five real-world PRNGs of different designs and show that it effectively detects both known and previously unknown instances of entropy loss.
Vehicular users are expected to consume large amounts of data, for both entertainment and navigation purposes. This will put a strain on cellular networks, which will be able to cope with such a load only if proper caching is in place; this in turn begs the question of which caching architecture is the best-suited to deal with vehicular content consumption. In this paper, we leverage a large-scale, crowd-sourced trace to (i) characterize the vehicular traffic demand, in terms of overall magnitude and content breakup; (ii) assess how different caching approaches perform against such a real-world load; (iii) study the effect of recommendation systems and local content items. We define a price-of-fog metric, expressing the additional caching capacity to deploy when moving from traditional, centralized caching architectures to a "fog computing" approach, where caches are closer to the network edge. We find that for location-specific items, such as the ones that vehicular users are most likely to request, such a price almost disappears. Vehicular networks thus make a strong case for the adoption of mobile-edge caching, as we are able to reap the benefit thereof – including a reduction in the distance travelled by data, within the core network – with little or none of the associated disadvantages.
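One plausible formalization of the price-of-fog metric, under the assumption that it compares the cache capacity required for equal performance; the paper's exact definition may differ.

```latex
% Price of fog: the factor by which total cache capacity must grow when the
% same hit ratio h is provided by a set F of edge ("fog") caches instead of
% one centralized cache.
\[
\mathrm{PoF}(h) \;=\; \frac{\sum_{f \in F} C_f(h)}{C_{\mathrm{central}}(h)},
\]
% where C_f(h) and C_central(h) are the capacities needed to reach hit ratio
% h. PoF close to 1 for location-specific content matches the observation
% above that the price "almost disappears" for the items vehicular users are
% most likely to request.
```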
Processing smart grid data for analytics purposes brings about a series of privacy-related risks. In order to allow for the most suitable mitigation strategies, reasonable privacy risks need to be addressed by taking into consideration the perspective of each smart grid stakeholder separately. In this context, we use the notion of privacy concerns to reflect potential privacy risks from the perspective of different smart grid stakeholders. Privacy concerns help to derive privacy goals, which we represent using the Goal Structuring Notation. Goals represented in this way can be addressed more comprehensibly through technical and non-technical strategies and solutions. The thread of argumentation - from concerns to goals to strategies and solutions - is presented in the form of a privacy case, which is analogous to the safety case used in the automotive domain. We provide an exemplar privacy case for the smart grid, developed as part of the Aspern Smart City Research project.
A major challenge for utilities is energy theft, wherein malicious actors steal energy for financial gain. One such form of theft in the smart grid is the fraudulent amplification of energy generation measurements from distributed energy resources (DERs), such as photovoltaics. It is important to detect this form of malicious activity, but in a way that ensures the privacy of customers. Failing to consider privacy aspects could result in a backlash from customers and, for example, a heavily curtailed deployment of services. In this short paper, we present a novel privacy-preserving approach to the detection of manipulated DER generation measurements.
The Internet of Things (IoT) offers new opportunities, but alongside them come many challenges for security and privacy. Most IoT devices offer users no choice as to where their data is published, which data is made available, and what identities are used for both devices and users. The aim of this work is to explore new middleware models and techniques that can provide users with more choice as well as enhance privacy and security. This paper outlines a new model and a prototype of a middleware system that implements this model.
Biometric verification systems have a security issue regarding the storage of biometric data: a user's biometric features cannot be replaced with new ones if a system is compromised. To address this issue, it may be safer to store the biometric data on a reliable remote server instead of in a local device. However, this approach may raise a privacy issue. In this paper, we propose a biometric verification system in which the biometric data are stored on a remote server in encrypted form, and the similarity of the user input to the registered biometric data is computed in the encrypted domain using homomorphic encryption. We evaluated the performance of the proposed system through an implementation on an Android-based smartphone and an i7-based server.
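A hedged sketch of how such a similarity computation can work in the encrypted domain, using the third-party phe (python-paillier) library on binarized features; the protocol split between client and server is an illustrative assumption, not the paper's exact scheme. It relies on the bit identity t XOR s = t + s - 2ts, which needs only Paillier's additive homomorphism when the fresh sample s is plaintext on the server side.

```python
# The server stores only the encrypted template and computes an encrypted
# Hamming distance to a fresh sample; only the key holder can decrypt the
# distance, so the stored biometric never appears in the clear at the server.
from phe import paillier

pub, priv = paillier.generate_paillier_keypair(n_length=1024)  # user's keypair

template = [1, 0, 1, 1, 0, 0, 1, 0]                 # enrolled biometric bits
enc_template = [pub.encrypt(b) for b in template]   # stored at the server

def encrypted_hamming(enc_t, sample):
    """Server side: Enc(sum_i t_i XOR s_i), without ever seeing t."""
    acc = pub.encrypt(0)
    for et, s in zip(enc_t, sample):
        acc = acc + et * (1 - 2 * s) + s   # Enc(t)*(1-2s) + s = Enc(t XOR s)
    return acc

sample = [1, 0, 1, 0, 0, 0, 1, 0]                   # fresh reading, 1 bit off
dist = priv.decrypt(encrypted_hamming(enc_template, sample))
print(dist, dist <= 2)                              # small distance => accept
```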