Biblio
We present the first formalisation of a blockchain-based distributed consensus protocol with a proof of its consistency mechanised in an interactive proof assistant. Our development includes a reference mechanisation of the block forest data structure, necessary for implementing provably correct per-node protocol logic. We also define a model of a network, implementing the protocol in the form of a replicated state-transition system. The protocol's executions are modeled via a small-step operational semantics for asynchronous message passing, in which packets can be rearranged or duplicated. In this work, we focus on the notion of global system safety, proving a form of eventual consistency. To do so, we provide a library of theorems about a pure functional implementation of block forests, define an inductive system invariant, and show that, in a quiescent system state, it implies a global agreement on the state of per-node transaction ledgers. Our development is parametric with respect to implementations of several security primitives, such as hash functions, a notion of a proof object, a Validator Acceptance Function, and a Fork Choice Rule. We precisely characterise the assumptions made about these components in proving the global system consensus, and discuss their adequacy. All results described in this paper are formalised in Coq.
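To make the block forest idea concrete, here is a minimal Python sketch (not the paper's Coq development; all names are illustrative) of a forest keyed by block hash and parametric in the hash function and the fork choice rule:

import hashlib
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

GENESIS_HASH = "0" * 64

@dataclass(frozen=True)
class Block:
    prev: str                # hash of the parent block
    txs: Tuple[str, ...]     # transactions carried by the block

def sha256_hash(b: Block) -> str:
    return hashlib.sha256(repr((b.prev, b.txs)).encode()).hexdigest()

class BlockForest:
    """Blocks keyed by hash; chains are recovered by walking parent links."""

    def __init__(self,
                 hash_fn: Callable[[Block], str] = sha256_hash,
                 fork_choice=lambda chains: max(chains, key=len)):
        self.hash_fn = hash_fn          # parameter, like the paper's hash primitive
        self.fork_choice = fork_choice  # parameter, e.g. the longest-chain rule
        self.blocks: Dict[str, Block] = {}

    def add(self, b: Block) -> None:
        self.blocks[self.hash_fn(b)] = b      # duplicate delivery is harmless

    def chain_from(self, h: str) -> List[Block]:
        chain = []
        while h != GENESIS_HASH:
            b = self.blocks.get(h)
            if b is None:
                return []                     # dangling block: chain incomplete
            chain.append(b)
            h = b.prev
        return list(reversed(chain))

    def canonical_chain(self) -> List[Block]:
        chains = [c for h in self.blocks if (c := self.chain_from(h))]
        return self.fork_choice(chains) if chains else []

f = BlockForest()
b1 = Block(prev=GENESIS_HASH, txs=("t1",))
f.add(b1)
f.add(Block(prev=f.hash_fn(b1), txs=("t2",)))
print([b.txs for b in f.canonical_chain()])   # [('t1',), ('t2',)]

In this toy, the eventual-consistency statement would correspond to all nodes' forests computing the same canonical chain once all messages have been delivered.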
One of the effective ways to improve the quality of airport security (AS) is to improve the quality of management of the state of the system for countering acts of unlawful interference by intruders into airports (SCAUI): the set of AS employees, technical systems, and devices used for passenger and luggage screening and other operational procedures, as well as for protecting the restricted areas of airports. Proactive control of the SCAUI state includes ongoing expert assessment of airport AS quality, identification of the SCAUI elements (the functional state of AS employees, the characteristics of technical systems and devices) that have a predominant influence on AS, and improvement of their performance. This article presents the principles of a model and a method for expert quality assessment of airport AS whose application increases the efficiency and quality of AS assessment by experts and, consequently, the quality of SCAUI state control.
GPU-accelerated graphics is commonly used in mobile applications. Unfortunately, the graphics interface exposes a large amount of potentially vulnerable kernel code (i.e., the GPU device driver) to untrusted applications. This broad attack surface has resulted in numerous reported vulnerabilities that are exploitable from unprivileged mobile apps. We observe that web browsers have faced and addressed the exact same problem in WebGL, a framework used by web apps for graphics acceleration. Web browser vendors have developed and deployed a plethora of security checks for the WebGL interface. We introduce Milkomeda, a system solution for automatically repurposing WebGL security checks to safeguard the mobile graphics interface. We show that these checks can be used with minimal modifications (which we have automated using a tool called CheckGen), significantly reducing the engineering effort. Moreover, we introduce an in-process shield space for deploying these checks in mobile applications. Compared to the multi-process architecture used by web browsers to protect the integrity of the security checks, our solution improves graphics performance by eliminating the need for Inter-Process Communication and shared-memory data transfer, while still providing integrity guarantees for the evaluation of the security checks. Our evaluation shows that Milkomeda achieves close-to-native GPU performance at a moderately increased CPU utilization.
In this paper, we propose a framework to evaluate information retrieval systems in the presence of multidimensional relevance. This is an important problem in tasks such as consumer health search, where the understandability and trustworthiness of information greatly influence the decisions people make based on search engine results, yet common topicality-only evaluation measures ignore these aspects. We use synthetic and real data to compare our proposed framework, named MM, to understandability-biased information retrieval evaluation (UBIRE), an existing framework used in the context of consumer health search. We show how the proposed approach diverges from the UBIRE framework, and how MM can be used to better understand the trade-offs between topical relevance and the other relevance dimensions.
A process for identifying critical transmission lines is presented here. The criticality is based on network flow, which is essential for monitoring power grid connectivity as well as for vulnerability assessment. The proposed method can be used to supplement the traditional situational awareness tools in the energy management system of the power grid control center. First, a flow network is derived from the topological and functional features of the power grid. Then, using the duality property of linear programming, the maximum flow problem is converted into a minimum cut problem, and the critical transmission lines are identified from the solution of this dual problem. Simulations on standard IEEE test cases validate the application of the method in finding critical transmission lines of the power grid.
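As a rough illustration of the min-cut step, the following Python sketch uses networkx on an invented toy grid (not an IEEE test case); by LP duality the max-flow value equals the capacity of the minimum cut, and the cut edges are read off as the critical lines:

import networkx as nx

G = nx.DiGraph()
# edges: (from_bus, to_bus, line capacity in MW); values are made up
lines = [("gen", "b1", 100), ("gen", "b2", 80),
         ("b1", "b2", 30), ("b1", "load", 60), ("b2", "load", 70)]
for u, v, cap in lines:
    G.add_edge(u, v, capacity=cap)

# Max-flow value equals min-cut capacity; the edges crossing the cut
# are the critical transmission lines.
cut_value, (source_side, sink_side) = nx.minimum_cut(G, "gen", "load")
critical = [(u, v) for u, v in G.edges
            if u in source_side and v in sink_side]
print(f"max deliverable flow = {cut_value} MW")
print("critical lines:", critical)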
The class φ2 converter is a single-transistor, fast-transient inverter topology often associated with power conversion at very high frequency (VHF: 30 MHz-300 MHz). At VHF, gate drivers available on the market fail to provide an adequate transistor switching signal. Hence, there is a need for new power topologies that do not make use of gate drivers but are still suitable for power conversion at VHF. In this paper, we introduce a new class φ2 topology that incorporates an oscillator, which takes the drain signal through a feedback circuit in order to force the transistor switching. A design methodology is provided, and a 1 MHz, 20 V input prototype is built in order to validate the behaviour of the topology.
In recent years, increasing concerns about centralized cloud web services (e.g., privacy, governance, surveillance, security) have triggered the emergence of new distributed technologies, such as IPFS or the blockchain. These innovations have tackled technical challenges that were unresolved until their appearance. Existing models of peer-to-peer systems need a revision to cover the spectrum of systems that can now be implemented in a peer-to-peer fashion. This work presents a framework to build such systems. It uses an agent-oriented approach in an open environment where agents have only partial information about the system's data. The proposal covers data access, data discovery, and data trust in peer-to-peer systems where different actors may interact. Moreover, the framework proposes a distributed architecture for these open systems and provides guidelines to decide in which cases blockchain technology may be required, and when other technologies may be sufficient.
Because urban traffic is complex, and traditional logistics distribution path optimization algorithms are insensitive to changing road conditions and therefore of limited use in actual logistics distribution, an optimization algorithm for logistics distribution paths based on a deep belief network is proposed. First, a traffic forecasting model based on the deep belief network is built, trained on large volumes of traffic data, and verified. On this basis, the predicted road conditions are combined with the traffic network to build a time-share (time-dependent) traffic network; the accessibility set and the pheromone variables of the ant colony algorithm are amended according to this network, yielding a logistics distribution path optimization algorithm based on traffic forecasting. Finally, comparative experiments with other logistics distribution path optimization algorithms verify the superiority and practical value of the algorithm in actual distribution.
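The following Python sketch (with an invented two-route road graph and made-up travel-time functions standing in for the deep-belief-network forecast) illustrates how time-dependent edge costs enter the ant colony's choice rule and pheromone update:

import random

def congested(base):  # slower during a morning peak, as a forecast might predict
    return lambda t: base * (2.0 if 8 <= t % 24 <= 10 else 1.0)

graph = {"depot": ["a", "b"], "a": ["customer"], "b": ["customer"], "customer": []}
travel_time = {("depot", "a"): congested(0.5), ("depot", "b"): lambda t: 0.8,
               ("a", "customer"): congested(0.6), ("b", "customer"): lambda t: 0.7}

pheromone = {e: 1.0 for e in travel_time}
ALPHA, BETA, RHO, ANTS, ITERS = 1.0, 2.0, 0.1, 20, 50

def walk(start_time):
    """One ant walks depot -> customer, picking edges by pheromone and speed."""
    node, t, path = "depot", start_time, []
    while node != "customer":
        choices = graph[node]
        weights = [pheromone[(node, n)] ** ALPHA *
                   (1.0 / travel_time[(node, n)](t)) ** BETA for n in choices]
        nxt = random.choices(choices, weights=weights)[0]
        path.append((node, nxt))
        t += travel_time[(node, nxt)](t)
        node = nxt
    return path, t - start_time

best = (None, float("inf"))
for _ in range(ITERS):
    tours = [walk(start_time=8.5) for _ in range(ANTS)]   # depart in the peak
    for e in pheromone:                                   # evaporation
        pheromone[e] *= (1 - RHO)
    for path, cost in tours:                              # deposit
        for e in path:
            pheromone[e] += 1.0 / cost
        if cost < best[1]:
            best = (path, cost)

print("best route:", best[0], f"({best[1]:.2f} h)")

With a peak-hour departure, the congested route via "a" becomes slower than the route via "b", so the pheromone trail converges on the latter; outside the peak the preference reverses.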
The strength of an anonymity system depends on the number of users. Therefore, User eXperience (UX) and usability of these systems is of critical importance for boosting adoption and use. To this end, we carried out a study with 19 non-expert participants to investigate how users experience routine Web browsing via the Tor Browser, focusing particularly on encountered problems and frustrations. Using a mixed-methods quantitative and qualitative approach to study one week of naturalistic use of the Tor Browser, we uncovered a variety of UX issues, such as broken Web sites, latency, lack of common browsing conveniences, differential treatment of Tor traffic, incorrect geolocation, operational opacity, etc. We applied this insight to suggest a number of UX improvements that could mitigate the issues and reduce user frustration when using the Tor Browser.
In this work a platform-aware model-driven engineering process for building component-based embedded software systems using annotated analysis models is described. The process is supported by a framework, called MICOBS, that allows working with different component technologies and integrating different tools that, independently of the component technology, enable the analysis of non-functional properties based on the principles of composability and compositionality. An actor, called Framework Architect, is responsible for this integration. Three other actors take a relevant part in the analysis process. The Component Provider supplies the components, while the Component Tester is in charge of their validation. The latter also feeds MICOBS with the annotated analysis models that characterize the extra-functional properties of the components for the different platforms on which they can be deployed. The Application Architect uses these components to build new systems, performing the trade-off between different alternatives. At this stage, and in order to verify that the final system meets the extra-functional requirements, the Application Architect uses the reports generated by the integrated analysis tools. This process has been used to support the validation and verification of the on-board application software for the Instrument Control Unit of the Energetic Particle Detector of the Solar Orbiter mission.
Enterprises continue to migrate their services to the cloud on a massive scale, but the growing attack surface has made them a natural target for malevolent actors. We show policy injection, a novel algorithmic complexity attack that enables a tenant to add specially tailored ACLs into the data center fabric to mount a denial-of-service attack by exploiting the built-in security mechanisms of the cloud management systems (CMS). Our insight is that certain ACLs, when fed with special covert packets by an attacker, may be very difficult to evaluate, leading to an exhaustion of cloud resources. We show how a tenant can inject seemingly harmless ACLs into the cloud data plane to abuse an algorithmic deficiency in the most popular cloud hypervisor switch, Open vSwitch, reducing its effective peak performance by 80–90% and, in certain cases, denying network access altogether.
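A toy model of the kind of algorithmic deficiency at play (a tuple-space-search classifier, as used in Open vSwitch's flow caching, needs one lookup per distinct wildcard mask, so injected ACLs with many distinct masks make non-matching packets expensive; this is an illustration, not Open vSwitch code, and all rules and values are invented):

from typing import Optional

def classify(packet: int, tables: dict) -> Optional[str]:
    """tables: mask -> {masked_key: action}; one probe per distinct mask."""
    for mask, table in tables.items():          # cost grows with the number of masks
        action = table.get(packet & mask)
        if action is not None:
            return action
    return None

# Benign config: a handful of masks -> cheap lookups.
benign = {0xFF00: {0x1200: "allow"}, 0xFFFF: {0x1234: "deny"}}

# Injected ACLs: each rule introduces a distinct mask.
injected = {(1 << i): {0: "allow"} for i in range(1, 16)}
tables = {**benign, **injected}

# A covert packet crafted to match no rule probes every mask on every lookup.
covert = 0xFFFE
assert classify(covert, tables) is None
print(f"{len(tables)} masks probed per covert packet")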
As network security monitoring grows more sophisticated, there is an increasing need for outsourcing such tasks to third-party analysts. However, organizations are usually reluctant to share their network traces due to privacy concerns over sensitive information, e.g., network and system configuration, which may potentially be exploited for attacks. In cases where data owners are convinced to share their network traces, the data are typically subjected to certain anonymization techniques, e.g., CryptoPAn, which replaces real IP addresses with prefix-preserving pseudonyms. However, most such techniques either are vulnerable to adversaries with prior knowledge about some network flows in the traces, or require heavy data sanitization or perturbation, both of which may result in a significant loss of data utility. In this paper, we aim to preserve both privacy and utility through shifting the trade-off from between privacy and utility to between privacy and computational cost. The key idea is for the analysts to generate and analyze multiple anonymized views of the original network traces; those views are designed to be sufficiently indistinguishable even to adversaries armed with prior knowledge, which preserves the privacy, whereas one of the views will yield true analysis results privately retrieved by the data owner, which preserves the utility. We formally analyze the privacy of our solution and experimentally evaluate it using real network traces provided by a major ISP. The results show that our approach can significantly reduce the level of information leakage (e.g., less than 1% of the information leaked by CryptoPAn) with comparable utility.
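For intuition, here is a minimal Python sketch of prefix-preserving pseudonymisation in the style of CryptoPAn (not the paper's multi-view scheme itself): bit i of the pseudonym is the original bit XORed with a keyed PRF of the preceding i bits, so addresses sharing a k-bit prefix map to pseudonyms sharing a k-bit prefix, and independent keys yield independent views:

import hmac, hashlib, ipaddress

def pp_anonymise(ip: str, key: bytes) -> str:
    addr = int(ipaddress.IPv4Address(ip))
    out = 0
    for i in range(32):                          # most significant bit first
        prefix = addr >> (32 - i)                # the i bits already processed
        prf = hmac.new(key, f"{i}:{prefix}".encode(), hashlib.sha256).digest()
        flip = prf[0] & 1                        # PRF bit depends only on the prefix
        bit = (addr >> (31 - i)) & 1
        out = (out << 1) | (bit ^ flip)
    return str(ipaddress.IPv4Address(out))

key = b"view-1 secret"
a = pp_anonymise("10.1.2.3", key)
b = pp_anonymise("10.1.7.9", key)
print(a, b)   # the pseudonyms share a /21 prefix, like the originals

Multiple anonymized views are obtained simply by re-running the scheme with independent keys; only the data owner knows which key produced the view whose analysis results are retrieved.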
The Internet of Things (IoT) envisions a huge number of networked sensors connected to the internet. These sensors collect large streams of data which serve as input to a wide range of IoT applications and services, such as e-health, e-commerce, and automotive services. Complex Event Processing (CEP) is a powerful tool that transforms streams of raw sensor data into the meaningful information required by these IoT services. Often the streams of data collected by sensors carry privacy-sensitive information about the user. Thus, protecting privacy is of paramount importance in IoT services based on CEP. In this paper we present a novel pattern-level access control mechanism for CEP-based services that conceals private information while minimizing the impact on the useful non-sensitive information required by the services to provide a certain quality of service (QoS). The idea is to reorder events in the event stream to conceal privacy-sensitive event patterns while preserving non-privacy-sensitive event patterns so as to maximize QoS. We propose two approaches, an ILP-based approach and a graph-based approach, for calculating an optimal reordering of events. Our evaluation results show that these approaches are effective in concealing private patterns without significant loss of QoS.
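The sketch below gives the flavour of the reordering idea with a greedy toy (the paper computes optimal reorderings via ILP or a graph algorithm; the pattern names and streams here are invented): adjacent occurrences of a private pattern are swapped away, and a public pattern's match count is checked as a QoS proxy:

def conceal(stream, private=("A", "B")):
    """Greedily swap adjacent events to break contiguous private-pattern matches."""
    out = list(stream)
    i = 0
    while i < len(out) - 1:
        if (out[i], out[i + 1]) == private:
            out[i], out[i + 1] = out[i + 1], out[i]
            i += 2
        else:
            i += 1
    return out

def count_adjacent(stream, pat):
    return sum(1 for i in range(len(stream) - 1)
               if (stream[i], stream[i + 1]) == pat)

def count_subseq(stream, pat):
    """Occurrences of pat=(x, y) with x anywhere before y (a lax CEP semantics)."""
    count = opened = 0
    for e in stream:
        if e == pat[0]:
            opened += 1
        elif e == pat[1]:
            count += opened
    return count

s = ["A", "B", "x", "A", "C", "A", "B", "C"]
t = conceal(s)
print(count_adjacent(s, ("A", "B")), count_adjacent(t, ("A", "B")))  # 2 -> 0 private matches
print(count_subseq(s, ("A", "C")), count_subseq(t, ("A", "C")))      # 5 -> 5, QoS preserved here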
Machine learning algorithms have reached mainstream status and are widely deployed in many applications. The accuracy of such algorithms depends significantly on the size of the underlying training dataset; in reality, a small or medium-sized organization often does not have enough data to train a reasonably accurate model. For such organizations, a realistic solution is to train machine learning models on a joint dataset (the union of the individual ones). Unfortunately, privacy concerns prevent them from straightforwardly doing so. While a number of privacy-preserving solutions exist for collaborating organizations to securely aggregate the parameters in the process of training the models, we are not aware of any work that provides a rational framework for the participants to precisely balance the privacy loss and accuracy gain in their collaboration. In this paper, we model the collaborative training process as a two-player game where each player aims to achieve a higher accuracy while preserving the privacy of its own dataset. We introduce the notion of the Price of Privacy, a novel approach to measuring the impact of privacy protection on accuracy in the proposed framework. Furthermore, we develop a game-theoretical model for different player types, and then either find or prove the existence of a Nash Equilibrium with regard to the strength of privacy protection for each player.
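As a toy instance of the game-theoretic setup (the payoff numbers, standing for accuracy gain minus privacy loss at each protection level, are invented; the paper's model is richer), a pure-strategy Nash equilibrium of a small bimatrix game can be found by best-response enumeration:

import numpy as np

# Strategies: index 0 = weak privacy protection, 1 = strong protection.
# A[i][j] = payoff of player 1, B[i][j] = payoff of player 2.
A = np.array([[3.0, 1.0],
              [2.0, 1.5]])
B = np.array([[3.0, 2.0],
              [1.0, 1.5]])

def pure_nash(A, B):
    """(i, j) is an equilibrium iff neither player gains by deviating alone."""
    eq = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
                eq.append((i, j))
    return eq

print(pure_nash(A, B))   # [(0, 0), (1, 1)]: both players weak, or both strong

With these invented payoffs the game is a coordination game: matching protection levels is stable, which mirrors the intuition that the rational choice of privacy strength depends on the other participant's choice.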
Inter-vehicle communications disclose rich information about vehicle whereabouts. Pseudonymous authentication secures communication while enhancing user privacy thanks to a set of anonymized certificates, termed pseudonyms. Vehicles switch pseudonyms (and the corresponding private keys) frequently; we term this the pseudonym transition process. However, exactly because vehicles can in principle change their pseudonyms asynchronously, an adversary that eavesdrops on (pseudonymously) signed messages could link pseudonyms based on the times of pseudonym transitions. In this poster, we show how one can link the pseudonyms of a given vehicle simply by looking at the timing information of pseudonym transition processes, as sketched below. We also propose "mix-zone everywhere": time-aligned pseudonyms are issued to all vehicles to facilitate synchronous pseudonym updates; as a result, all vehicles update their pseudonyms simultaneously, thus achieving stronger user privacy protection.
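A toy Python sketch of the timing-based linking (all timestamps are invented): a pseudonym whose last message immediately precedes another pseudonym's first message is linked to it:

observations = {          # pseudonym -> (first_seen, last_seen), in seconds
    "P1": (0.0, 42.3), "P2": (42.4, 90.0),   # same vehicle, asynchronous switch
    "Q1": (0.0, 57.1), "Q2": (57.2, 90.0),
}
WINDOW = 0.5              # assumed maximum gap during a pseudonym transition

links = [(a, b) for a, (_, last) in observations.items()
         for b, (first, _) in observations.items()
         if a != b and 0 <= first - last <= WINDOW]
print(links)   # [('P1', 'P2'), ('Q1', 'Q2')]: distinct transition times are linkable

If all vehicles switched at the same aligned instant, every new pseudonym would fall within the window of every expiring one, so each link would have many equally plausible candidates; this is exactly the ambiguity that "mix-zone everywhere" creates.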
Awareness and knowledge management are key components to achieve a high level of information security in organizations. However, practical evidence suggests that there are significant discrepancies between the typical elements of security awareness campaigns, the decisions made and goals set by top-level management, and routine operations carried out by systems administration personnel. This paper presents Vis4Sec, a process framework for the generation and distribution of stakeholder-specific visualizations of security metrics, which assists in closing the gap between theoretical and practical information security by respecting the different points of view of the involved security report audiences. An implementation for patch management on Linux servers, deployed at a large data center, is used as a running example.
Digital signatures are replacing paper-based processes to make life easier for customers and employees in various industries. RSA and Elliptic Curve Cryptography (ECC) are the most widely used public-key cryptographic algorithms. Nowadays, ECDSA (the Elliptic Curve Digital Signature Algorithm) is gaining more popularity than the RSA algorithm because of its better performance. The main advantage of ECC over RSA is that ECC provides the same level of security with smaller key sizes and less overhead than RSA. This paper presents a brief review of the performance of ECDSA and RSA in various aspects such as time, security, and power, and explains why ECC has become the latest trend in the present cryptographic scenario.
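A rough benchmark sketch using the Python 'cryptography' package, comparing ECDSA over P-256 with RSA-3072 (commonly rated at a comparable ~128-bit security level); absolute numbers depend on hardware, and the typical outcome is that ECDSA signs considerably faster while RSA verification remains fast:

import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, rsa, padding

msg = b"digital signature benchmark"
ec_key = ec.generate_private_key(ec.SECP256R1())
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

def bench(label, fn, n=200):
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    print(f"{label}: {(time.perf_counter() - t0) / n * 1e3:.2f} ms/op")

bench("ECDSA P-256 sign", lambda: ec_key.sign(msg, ec.ECDSA(hashes.SHA256())))
bench("RSA-3072 sign",
      lambda: rsa_key.sign(msg, padding.PKCS1v15(), hashes.SHA256()))

The key-size gap is also visible directly: a P-256 private key is 256 bits, whereas RSA needs 3072-bit keys for comparable security.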
Close physical proximity among wireless devices that have never shared a secret key is sometimes used as a basis of trust. In these cases, devices in close proximity are deemed trustworthy while more distant devices are viewed as potential adversaries. Because radio waves are invisible, however, a user may believe a wireless device is communicating with a nearby device when in fact the user's device is communicating with a distant adversary. Researchers have previously proposed methods for multi-antenna devices to ascertain physical proximity with other devices, but devices with a single antenna, such as those commonly used in the Internet of Things, cannot take advantage of these techniques. We investigate a method for a single-antenna Wi-Fi device to quickly determine proximity with another Wi-Fi device. Our approach leverages the repeating nature of Wi-Fi's preamble and the characteristics of a transmitting antenna's near field to detect proximity with high probability. Our method never falsely declares proximity at ranges longer than 14 cm.
We present attacks that use only the volume of responses to range queries to reconstruct databases. Our focus is on practical attacks that work for large-scale databases with many values and records, without requiring assumptions on the data or query distributions. Our work improves on the previous state-of-the-art due to Kellaris et al. (CCS 2016) in all of these dimensions. Our main attack targets reconstruction of database counts and involves a novel graph-theoretic approach. It generally succeeds when R, the number of records, exceeds N^2/2, where N is the number of possible values in the database. For a uniform query distribution, we show that it requires volume leakage from only O(N^2 log N) queries (cf. O(N^4 log N) in prior work). We present two ancillary attacks. The first identifies the value of a new item added to a database using the volume leakage from fresh queries, in the setting where the adversary knows or has previously recovered the database counts. The second shows how to efficiently recover the ranges involved in queries in an online fashion, given an auxiliary distribution describing the database. Our attacks are all backed with mathematical analyses and extensive simulations using real data.
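A toy illustration of the underlying leakage (the basic principle only, not the paper's graph-theoretic attack, which works from an unordered multiset of volumes): if the volumes of the prefix ranges [1,1], [1,2], ..., [1,N] can be identified, the per-value counts fall out as successive differences:

secret_counts = [4, 0, 7, 2, 5]        # database histogram over N = 5 values

prefix_volumes = []
total = 0
for c in secret_counts:
    total += c
    prefix_volumes.append(total)       # volume of the range query [1, k]

recovered = [prefix_volumes[0]] + [
    prefix_volumes[k] - prefix_volumes[k - 1]
    for k in range(1, len(prefix_volumes))]
assert recovered == secret_counts
print(recovered)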
Security assurance is the confidence that a system meets its security requirements, based on specific evidence that assurance techniques provide. The notion of measuring security is complex and tricky. Existing approaches either (1) consider only one aspect of assurance, such as security requirements fulfilment or the existence of threats/vulnerabilities, or (2) do not consider the relevance of the different security requirements to the evaluated application context. Furthermore, they are mostly qualitative in nature and heavily based on manual processing, which makes them costly and time consuming. Therefore, they are not widely used and applied, especially by small and medium-sized enterprises (SMEs), which constitute the backbone of the Norwegian economy. In this paper, we propose a quantification method that aims to evaluate the security assurance of systems by measuring (1) the level of confidence that the mechanisms fulfilling the security requirements are present and (2) the level of confidence that the vulnerabilities associated with possible security threats are absent. Additionally, an assurance evaluation process is proposed. Two case studies applying our method are presented; they use our assurance method to evaluate the security level of two REST APIs developed by Statistics Norway, where one of the authors is employed. The analysis shows that the API with the most security mechanisms implemented obtained a slightly higher security assurance score. Security requirement relevance and vulnerability impact played a role in the overall scores.
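Purely as an invented toy in the spirit of the two measurements (the paper's actual formulas are not reproduced here), one could combine relevance-weighted requirement fulfilment with impact-weighted vulnerability absence:

# Invented toy scoring: all weights and confidence values are made up.
requirements = [    # (relevance weight, confidence the mechanism is present)
    (0.9, 1.0), (0.5, 0.8), (0.7, 0.0),
]
vulnerabilities = [  # (impact weight, confidence the vulnerability is absent)
    (0.8, 1.0), (0.6, 0.5),
]

req_score = sum(w * c for w, c in requirements) / sum(w for w, _ in requirements)
vul_score = sum(w * c for w, c in vulnerabilities) / sum(w for w, _ in vulnerabilities)
assurance = (req_score + vul_score) / 2
print(f"assurance score: {assurance:.2f}")   # 0.70 for these toy inputs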
Cyber-Physical Systems (CPS), such as Water Distribution Networks (WDNs), deploy digital devices to monitor and control the behavior of physical processes. These digital devices, however, are susceptible to cyber and physical attacks that may alter their functionality and, therefore, the integrity of their measurements/actions. In practice, industrial control systems utilize simple control laws, which rely on various sensor measurements and algorithms that are expected to operate normally. To reduce the impact of a potential failure, operators may deploy redundant components; this, however, may not help, e.g., when a cyber attack on a PLC component occurs. In this work, we address the problem of reducing vulnerability to cyber-physical attacks in water distribution networks. This is achieved by augmenting the graph which describes the information flow from sensors to actuators, adding new connections and algorithms to increase the number of redundant cyber components. These, in turn, increase the cyber-physical security level, which is defined in the present paper as the number of malicious attacks a CPS may sustain before becoming unable to satisfy the control requirements. A proof-of-concept of the approach is demonstrated over a simple WDN, with intuition on how this can be used to increase the cyber-physical security level of the system.
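A hedged sketch of the redundancy notion using networkx (the tiny sensor-PLC-actuator graph is invented): the node connectivity between a sensor and an actuator bounds how many compromised cyber components the information flow can sustain:

import networkx as nx

G = nx.Graph()
G.add_edges_from([("sensor1", "plc1"), ("sensor2", "plc1"),
                  ("plc1", "actuator")])
# 1: plc1 is a single point of failure
print(nx.node_connectivity(G, "sensor1", "actuator"))

# Augmentation: add a redundant PLC and new connections.
G.add_edges_from([("sensor1", "plc2"), ("sensor2", "plc2"),
                  ("plc2", "actuator")])
# 2: the information flow now survives one compromised PLC
print(nx.node_connectivity(G, "sensor1", "actuator"))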
Modern industrial control systems (ICS) have become victims of cyber attacks more and more often in recent years. These attacks are hard to detect, and their consequences can be catastrophic. Cyber attacks can cause anomalies in the operation of the ICS and its technological equipment. The presence of mutual interference and noise in this equipment significantly complicates anomaly detection. Moreover, the traditional means of protection used in corporate solutions require updating with each change in the structure of the industrial process. An approach based on machine learning for anomaly detection was used to overcome these problems. It complements traditional methods and makes it possible to detect correlations between signals and use them for anomaly detection. The "Additional Tennessee Eastman Process Simulation Data for Anomaly Detection Evaluation" dataset was analyzed as an example of an industrial process. In the course of the research, correlations between the sensor signals were detected and preliminary data processing was carried out. Algorithms from the most common machine learning techniques (decision trees, linear algorithms, support vector machines) and deep learning models (neural networks) were investigated for the task of industrial process anomaly detection. It is shown that linear algorithms are the least demanding on computational resources, but they do not achieve an acceptable result and make a significant number of errors. Decision-tree-based algorithms provide acceptable accuracy, but the amount of RAM required for their operation grows polynomially with the training sample volume. Deep neural networks provide the greatest accuracy, but they require considerable computing power for internal calculations.
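A minimal scikit-learn sketch of such a comparison on synthetic correlated signals (the Tennessee Eastman dataset itself is not bundled here; the data generation below is invented, with anomalies that break a cross-sensor correlation and are therefore hard for a linear model):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n = 4000
base = rng.normal(size=n)
X = np.column_stack([base + rng.normal(scale=0.1, size=n),        # correlated sensor
                     base * 0.5 + rng.normal(scale=0.1, size=n),  # correlated sensor
                     rng.normal(size=n)])                         # independent noise
y = np.zeros(n, dtype=int)
anom = rng.choice(n, size=200, replace=False)
y[anom] = 1
X[anom, 1] *= -1    # anomaly flips the cross-sensor correlation (not linearly separable)

Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
for name, clf in [("linear", LogisticRegression(max_iter=1000)),
                  ("trees", RandomForestClassifier(n_estimators=100, random_state=0)),
                  ("svm", SVC())]:
    clf.fit(Xtr, ytr)
    print(name, round(f1_score(yte, clf.predict(Xte)), 3))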