Biblio
Traditionally, power grid vulnerability assessment methods separate the study of node vulnerability from that of edge vulnerability, which makes the evaluation results inaccurate. A unified framework for vulnerability assessment of power grids is still required. Thus, this paper proposes a universal method for vulnerability assessment of power grids by establishing a complex network model with uniform weighting of nodes and edges. The concept of the virtual edge is introduced into the weighted complex network model of the power system, and selection functions for the edge weight and the virtual edge weight are constructed from electrical and physical parameters. In addition, to reflect the electrical characteristics of power grids more accurately, a weighted betweenness evaluation index incorporating transmission efficiency is defined. Finally, the method is demonstrated on the IEEE 39-bus system, and the results confirm its effectiveness.
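As a rough illustration of how such a weighted betweenness index might be computed (the weighting function, the line data, and the efficiency factor below are assumptions for illustration, not the paper's formulation), a small sketch in Python with networkx:

```python
# Minimal sketch (not the paper's exact index): weighted betweenness on a
# power-grid graph whose edge weights are derived from electrical parameters.
# The line data and the efficiency factor are illustrative assumptions.
import networkx as nx

G = nx.Graph()
# (bus_i, bus_j, reactance in p.u.) -- hypothetical line data
lines = [(1, 2, 0.025), (2, 3, 0.040), (1, 3, 0.060), (3, 4, 0.030)]
for i, j, x in lines:
    # Use reactance as the "electrical distance"; shorter paths carry more flow.
    G.add_edge(i, j, weight=x)

# Node and edge betweenness computed over shortest electrical paths.
node_bc = nx.betweenness_centrality(G, weight="weight", normalized=True)
edge_bc = nx.edge_betweenness_centrality(G, weight="weight", normalized=True)

# A simple transmission-efficiency factor could rescale the raw betweenness.
efficiency = {(u, v): 1.0 / d["weight"] for u, v, d in G.edges(data=True)}
weighted_index = {e: edge_bc[e] * efficiency[e] for e in edge_bc}
print(sorted(weighted_index.items(), key=lambda kv: -kv[1]))
```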
Nowadays citizens live in a world where communication technologies offer opportunities for new interactions between people and society. Clearly, e-government is changing the way citizens relate to their government, moving interaction with public administration from the physical environment towards digital participation. Therefore, e-government needs procedures in place to prevent and lessen the negative impact of an attack or intrusion by third parties. This research work focuses on the implementation of anonymous communication in a proof-of-concept application called "Delta", whose function is to allow auctions and offers of products, thus laying the basis for future implementations in e-government services.
Security within the IoT is currently below par. Common security issues include IoT device vendors not following security best practices and/or omitting crucial security controls and features from their devices, a lack of defined and mandated IoT security standards, default IoT device configurations, missing secure update mechanisms to rectify security flaws discovered in IoT devices, and the overall unintended consequence of complexity: the attack surface of networks comprising IoT devices can increase exponentially with the addition of each new device. In this paper we set out an approach using graphs and graph databases to understand IoT network complexity and the impact that different devices and their profiles have on the overall security of the underlying network and its associated data.
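To make the graph-based idea concrete, a minimal sketch (using networkx in place of a graph database; the device profiles and the toy attack-surface score are illustrative assumptions, not the paper's model):

```python
# Illustrative sketch: model an IoT network as a property graph and score how
# risky devices and their connectivity contribute to the attack surface.
# Device profiles, the scoring rule, and the use of networkx instead of a
# graph database are assumptions for illustration only.
import networkx as nx

G = nx.Graph()
devices = {
    "router":   {"default_creds": False, "auto_update": True},
    "camera":   {"default_creds": True,  "auto_update": False},
    "doorbell": {"default_creds": True,  "auto_update": False},
}
for name, profile in devices.items():
    G.add_node(name, **profile)
G.add_edges_from([("router", "camera"), ("router", "doorbell")])

def attack_surface(graph):
    # Toy metric: risky device profiles weighted by how well-connected they are.
    score = 0.0
    for n, p in graph.nodes(data=True):
        risk = (1 if p["default_creds"] else 0) + (0 if p["auto_update"] else 1)
        score += risk * graph.degree(n)
    return score

print(attack_surface(G))
```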
Use-After-Free (UAF) vulnerabilities are caused by a program operating on a dangling pointer and can be exploited to compromise critical software systems. While there have been many tools to mitigate UAF vulnerabilities, UAF remains one of the most common attack vectors. UAF is particularly difficult to detect in concurrent programs, in which a UAF may only occur under rare thread schedules. In this paper, we present a novel technique, UFO, that can precisely predict UAFs based on a single observed execution trace, with a provably higher detection capability than existing techniques and no false positives. The key technical advancement of UFO is an extended maximal thread causality model that captures the largest possible set of feasible traces that can be inferred from a given multithreaded execution trace. By formulating UAF detection as a constraint solving problem atop this model, we can explore a much larger thread scheduling space than classical happens-before based techniques. We have evaluated UFO on several real-world, large, complex C/C++ programs, including Chromium and Firefox. UFO scales to real-world systems with hundreds of millions of events in their executions and has detected a large number of real concurrency UAFs.
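As a toy illustration of the constraint-solving idea only (this is not UFO's maximal thread causality model; the events and ordering constraints are hypothetical), one could give each trace event an order variable and ask an SMT solver whether a free can feasibly precede a use:

```python
# Toy sketch of the constraint-solving idea only (not UFO's maximal thread
# causality model): give each trace event an integer order variable, keep
# per-thread program order, and ask an SMT solver whether some feasible
# reordering places free(p) before use(p). Requires the z3-solver package.
from z3 import Int, Solver, sat

o_alloc = Int("o_alloc")   # thread 1: p = malloc(...)
o_free = Int("o_free")     # thread 1: free(p)
o_use = Int("o_use")       # thread 2: *p

s = Solver()
s.add(o_alloc < o_free)    # program order inside thread 1
s.add(o_alloc < o_use)     # the use can only happen after the allocation
s.add(o_free < o_use)      # UAF query: can the free precede the use?

if s.check() == sat:
    print("feasible schedule with a use-after-free:", s.model())
```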
The paper analyzes the information security status of actuators and sensors in Industrial Control Systems (ICS). It presents ICS structures grouped by system layer and analyzes the operation of analog interfaces and the characteristics of data transmission protocols at the ICS field layer. The relevance of ICS information security is discussed both in terms of reported security incidents and in terms of regulators' responses in the form of legal and regulatory acts. The analysis of ICS security reveals a possibility of information leakage through technical channels at the field layer. Potential attack vectors against field-layer devices and the ICS data transmission network are outlined. The relevance of threats associated with attacks at the ICS field layer is analyzed, and the features of this layer that make such attacks attractive are discussed.
Because testing the delay time of passive barriers is costly and time-consuming, and because there is a wide range of passive barrier elements and tools for breaching them, such testing is feasible only as an experimental means of verifying expert judgements of those delay times. The article focuses on the possibility of creating and utilizing a new method for acquiring delay-time values for various passive barrier elements using expert judgements, which could support the creation of charts in which each interaction between an element of a mechanical barrier and a potential tool for bypassing it is assigned a temporal value. The article begins with a basic description of expert judgement methods previously applied to prognoses of socio-economic development and other societal areas, the so-called soft systems. For the problem of delay time, this method had to be modified so that the prospective output could be expressed as a specific quantitative value. To achieve this goal, each stage of the expert judgement was adjusted to use suitable scientific methods for selecting appropriate experts and then obtaining and processing the expert data. High emphasis was placed on evaluating the quality and reliability of the expert judgements, taking into account the specifics of expert selection such as their low numbers, specialization, and practical experience.
Mobile applications frequently request sensitive data. While prior work has focused on analyzing sensitive-data uses originating from well-defined API calls in the system, the security and privacy implications of inputs requested via application user interfaces have remained largely unexplored. In this paper, our goal is to understand the broad implications of such requests in terms of the types of sensitive data being requested by applications.
To this end, we propose UiRef (User Input REsolution Framework), an automated approach for resolving the semantics of user inputs requested by mobile applications. UiRef's design includes a number of novel techniques for extracting and resolving user interface labels and addressing ambiguity in semantics, resulting in significant improvements over prior work. We apply UiRef to 50,162 Android applications from Google Play and use outlier analysis to triage applications with questionable input requests. We identify concerning developer practices, including insecure exposure of account passwords and non-consensual input disclosures to third parties. These findings demonstrate the importance of user-input semantics when protecting end users.
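A minimal sketch of the outlier-triage step, assuming a per-app count of password-typed input fields and a simple z-score threshold (both are illustrative assumptions, not UiRef's actual pipeline):

```python
# Sketch of the triage idea only (not UiRef's actual pipeline): flag apps whose
# count of sensitive input requests of a given type is an outlier relative to
# the rest of the corpus. Data and the z-score threshold are hypothetical.
import statistics

# app -> number of "password"-typed input fields resolved from its UI labels
password_fields = {"app_a": 1, "app_b": 0, "app_c": 2, "app_d": 9, "app_e": 1}

mean = statistics.mean(password_fields.values())
stdev = statistics.stdev(password_fields.values())

outliers = [app for app, n in password_fields.items()
            if stdev > 0 and (n - mean) / stdev > 2.0]
print("apps to triage manually:", outliers)
```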
Recent years have witnessed rapid growth in the domain of the Internet of Things (IoT). This network of billions of devices generates and exchanges huge amounts of data. Limited cache capacity and memory bandwidth make transferring and processing such data on traditional CPUs and GPUs highly inefficient, both in terms of energy consumption and delay. However, many IoT applications are statistical at heart and can tolerate some inaccuracy in their computation. This enables designers to reduce processing complexity by approximating results to a desired accuracy. In this paper, we propose an ultra-efficient approximate processing in-memory architecture, called APIM, which exploits the analog characteristics of non-volatile memories to support addition and multiplication inside the crossbar memory while storing the data. The proposed design eliminates the overhead involved in transferring data to the processor by virtually bringing the processor inside memory. APIM dynamically configures the precision of computation for each application in order to tune the level of accuracy at runtime. Our experimental evaluation running six general OpenCL applications shows that the proposed design achieves up to 20x performance improvement and a 480x improvement in energy-delay product while ensuring acceptable quality of service. In exact mode, it achieves 28x energy savings and a 4.8x speedup compared to state-of-the-art GPU cores.
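A small software sketch of the precision/accuracy trade-off that APIM exploits (APIM itself is a crossbar-memory hardware design; truncating low-order operand bits here merely mimics configurable computation precision):

```python
# Software sketch of the accuracy/precision trade-off only -- APIM itself is a
# crossbar-memory hardware design. Truncating low-order bits before multiply
# mimics dynamically configuring computation precision per application.
import numpy as np

def approx_multiply(a, b, kept_bits=8, total_bits=16):
    drop = total_bits - kept_bits
    a_q = (a.astype(np.int64) >> drop) << drop   # zero out low-order bits
    b_q = (b.astype(np.int64) >> drop) << drop
    return a_q * b_q

rng = np.random.default_rng(0)
a = rng.integers(0, 2**16, size=1000)
b = rng.integers(0, 2**16, size=1000)

exact = a.astype(np.int64) * b
approx = approx_multiply(a, b, kept_bits=10)
rel_err = np.mean(np.abs(exact - approx) / np.maximum(exact, 1))
print(f"mean relative error at 10-bit precision: {rel_err:.4f}")
```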
Biometrics are widely used for authentication in several domains, services, and applications. However, only very few systems succeed in effectively combining highly secure user authentication with adequate privacy protection of the biometric templates, owing to the difficulty of jointly providing good authentication performance, unlinkability, and irreversibility to biometric templates. This thwarts the use of biometrics in remote authentication scenarios, despite the advantages that this kind of architecture provides. We propose a user-specific approach for decoupling the biometrics from their binary representation before applying biometric protection schemes based on fuzzy extractors. This allows for more reliable, flexible, irreversible, and unlinkable protected biometric templates. The proposed decoupling procedures generate biometric metadata that does not allow recovery of the original biometric template. However, different biometric metadata generated from the same biometric template remain statistically linkable, so we propose to additionally protect them using a second authentication factor (e.g., knowledge or possession based). We demonstrate the potential of this approach within a two-factor authentication protocol for remote biometric authentication in mobile scenarios.
Social network services enable users to conveniently share personal information. Often, the information shared concerns other people, especially other members of the social network service. In such situations, two or more people can have conflicting privacy preferences; thus, an appropriate sharing policy may not be apparent. We identify such situations as multiuser privacy scenarios. Current approaches propose finding a sharing policy through preference aggregation. However, studies suggest that users feel more confident in their decisions regarding sharing when they know the reasons behind each other's preferences. The goals of this paper are (1) understanding how people decide the appropriate sharing policy in multiuser scenarios where arguments are employed, and (2) developing a computational model to predict an appropriate sharing policy for a given scenario. We report on a study that involved a survey of 988 Amazon MTurk users about a variety of multiuser scenarios and the optimal sharing policy for each scenario. Our evaluation of the participants' responses reveals that contextual factors, user preferences, and arguments influence the optimal sharing policy in a multiuser scenario. We develop and evaluate an inference model that predicts the optimal sharing policy given the three types of features. We analyze the predictions of our inference model to uncover potential scenario types that lead to incorrect predictions, and to enhance our understanding of when multiuser scenarios are more or less prone to dispute.
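As a hedged sketch of what an inference model over the three feature types might look like (the feature encoding, toy data, and choice of a decision tree are illustrative assumptions, not the paper's model):

```python
# Hedged sketch of an inference model over the three feature types (contextual
# factors, user preferences, arguments). Encoding and data are hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Each row: [sensitivity, num_users_preferring_share, argument_strength_share]
X = [
    [0.9, 1, 0.2],   # sensitive photo, weak pro-sharing argument
    [0.2, 3, 0.8],   # harmless photo, strong pro-sharing argument
    [0.7, 2, 0.4],
    [0.1, 3, 0.9],
]
y = ["restrict", "share", "restrict", "share"]   # optimal sharing policy

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(model.predict([[0.5, 2, 0.7]]))   # predict the policy for a new scenario
```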
To appear
The advancement in technology has changed how people work and what software and hardware people use. From the conventional personal computer to the GPU, hardware technology and capability have dramatically improved, and so have the operating systems that run on them. Unfortunately, current industry practice is to compare operating systems from a single perspective: either benchmarking hardware-level performance or performing penetration testing to check the security features of an OS. This rigid method of benchmarking does not reflect the true performance of an OS, as the analysis is neither comprehensive nor conclusive. To illustrate this deficiency, the study performed hardware-level and operational-level benchmarking on Windows XP, Windows 7, and Windows 8, and the results indicate that there are instances where Windows XP excels over its newer counterparts. Overall, the research shows that Windows 8 is a superior OS compared to its predecessors running on the same hardware. Furthermore, the findings also show that automated benchmarking tools are less effective at benchmarking systems running Windows XP and older OSs, as these do not support DirectX 11 and other advanced features that the hardware supports. There is therefore a need for a unified benchmarking approach that also compares other aspects of an OS, such as user-oriented tasks and security parameters, to provide a complete comparison. This paper proposes such a unified approach for operating system (OS) comparison with the help of a Windows OS case study. The unified approach compares OSs from three aspects: hardware-level performance, operational-level performance, and security tests.
The Universally Composable (UC) framework provides the strongest security notion for designing fully trusted cryptographic protocols, and applying UC security to the design of RFID mutual authentication protocols is very challenging. In this paper, we formulate the necessary conditions for achieving UC-secure RFID mutual authentication protocols that can be fully trusted in arbitrary environments, and we indicate the inadequacy of some existing schemes under the UC framework. We define the ideal functionality for RFID mutual authentication and propose the first UC-secure RFID mutual authentication protocol based on public key encryption and certain trusted third parties which can be modeled as functionalities. We prove the security of our protocol under the strongest adversary model, allowing corruption of both tags and readers. We also present two (public) key update protocols for the case of multiple readers: one uses a Message Authentication Code (MAC) and the other uses trusted certificates in a Public Key Infrastructure (PKI). Furthermore, we address the relations between our UC framework and the zero-knowledge privacy model proposed by Deng et al. [1].
User privacy is an important issue in blockchain-based transaction systems. Bitcoin, one of the most widely used blockchain-based transaction systems, fails to provide enough protection of users' privacy. Many subsequent studies focus on establishing systems that hide the linkage between the identities (pseudonyms) of users and the transactions they carry out in order to provide a high level of anonymity; examples include Zerocoin and Zerocash. It thus becomes an interesting question whether such new transaction systems actually provide enough protection of users' privacy. In this paper, we propose a novel and effective approach for de-anonymizing these transaction systems by leveraging information in the system that is not directly identity-related, including the number of transactions made by each identity and the timestamps of sending and receiving. Combining probability studies with optimization tools, we establish a model which allows us to determine, among all possible ways of linking transactions and identities, the one that is most likely to be true. Subsequent transaction graph analysis can then be carried out, leading to the de-anonymization of the system. To solve the model, we provide exact algorithms based on mixed integer linear programming. Our research also establishes interesting relationships between the de-anonymization problem and other problems studied in the theoretical computer science literature, e.g., the graph matching problem and the scheduling problem.
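As a simplified sketch of the linking idea (the paper formulates a mixed integer linear program; the one-to-one matching below, solved with the Hungarian algorithm over a hypothetical cost matrix, only illustrates the flavor of the problem):

```python
# Simplified sketch of the linking idea. The paper formulates a mixed integer
# linear program; here a one-to-one matching special case is solved with the
# Hungarian algorithm. The cost matrix built from per-identity transaction
# counts and observed timing is a hypothetical illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows: candidate identities; columns: observed transaction clusters.
# cost[i, j] = how unlikely it is that identity i produced cluster j
# (mismatch in transaction counts plus a timing distance).
identity_tx_counts = np.array([12, 3, 7])
cluster_tx_counts = np.array([8, 11, 4])
timing_distance = np.array([[0.1, 0.9, 0.5],
                            [0.8, 0.7, 0.2],
                            [0.4, 0.3, 0.6]])

cost = np.abs(identity_tx_counts[:, None] - cluster_tx_counts[None, :]) \
       + 10.0 * timing_distance
rows, cols = linear_sum_assignment(cost)
print(list(zip(rows.tolist(), cols.tolist())))   # most likely identity->cluster map
```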
The evolution of convolutional neural networks (CNNs) into more complex forms of organization, with additional layers, larger convolutions, and increasing connections, has established the state of the art in accuracy for detection and classification challenges on images. Moreover, as they have evolved to a point where gigabytes of memory are required for their operation, it becomes fundamental to understand how their inference capabilities can be impaired if data elements somehow become corrupted in memory. This paper introduces fault injection into these systems by simulating failing bit-cells in hardware memories, brought on by relaxing the assumption of 100% reliable operation. We analyze the inference behavior of these networks under severe fault-injection rates and apply fault mitigation strategies to improve the CNNs' resilience. For the MNIST dataset, we show that 8x less memory is required for the feature-map memory space, and that in sub-100% reliable operation, fault-injection rates up to 10^-1 (with most-significant-bit protection) incur only a 1% degradation in error probability. Furthermore, considering the offload of the feature-map memory to an embedded dynamic RAM (eDRAM) system, using technology nodes from 65 down to 28 nm, up to 73-80% improved power efficiency can be obtained.
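A minimal sketch of the fault-injection idea, assuming 8-bit quantized feature maps and a per-bit fault rate with optional most-significant-bit protection (shapes, rates, and the injection routine are illustrative, not the paper's simulator):

```python
# Minimal sketch of the fault-injection idea: flip random bits of an 8-bit
# quantized feature map at a given per-bit fault rate, optionally protecting
# the most significant bit. Rates and tensor shapes are illustrative.
import numpy as np

def inject_faults(fmap_u8, rate=1e-1, protect_msb=True, seed=0):
    rng = np.random.default_rng(seed)
    bits = 7 if protect_msb else 8           # bit positions eligible for faults
    out = fmap_u8.copy()
    for b in range(bits):                    # bit 7 (MSB) is skipped if protected
        flip = rng.random(out.shape) < rate
        out[flip] ^= np.uint8(1 << b)
    return out

fmap = np.random.default_rng(1).integers(0, 256, size=(1, 8, 8), dtype=np.uint8)
faulty = inject_faults(fmap, rate=1e-1, protect_msb=True)
print("bits flipped:", int(np.unpackbits(fmap ^ faulty).sum()))
```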