Bibliography
Mobile military networks are uniquely challenging to build and maintain because of their wireless nature and the hostility of the environment, resulting in unreliable and capacity-limited performance. Currently, most tactical networks implement TCP/IP, which was designed for fairly stable, infrastructure-based environments and requires sophisticated and often application-specific extensions to address the challenges of the communication scenario. Information Centric Networking (ICN) is a clean-slate networking approach that does not depend on stable connections to retrieve information and naturally supports node mobility and delay/disruption-tolerant communications; as a result, it is particularly interesting for tactical applications. However, although ICN seems to offer structural benefits for tactical environments over TCP/IP, a number of challenges, including naming, security, and performance tuning, still need to be addressed for practical adoption. This document, prepared within the NATO IST-161 RTG, evaluates the effectiveness of Named Data Networking (NDN), the de facto standard implementation of ICN, in the context of tactical edge networks and its potential for adoption.
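To make the name-based retrieval idea concrete, the following is a minimal, hypothetical Python sketch (not drawn from any NDN codebase): a node first tries to satisfy a request from its local content store and only then forwards it, so cached copies keep serving data when the original producer is unreachable. All class and method names are illustrative.

```python
# Minimal, illustrative sketch of name-based retrieval (not the NDN reference
# implementation): a node answers a request from its content store if it holds
# data under that name; otherwise it records the pending request and forwards
# it toward the producer, caching the data on the return path.
class Node:
    def __init__(self, name, neighbors=None):
        self.name = name
        self.content_store = {}      # name -> data payload (opportunistic cache)
        self.pending = {}            # name -> set of requesting nodes (PIT analogue)
        self.neighbors = neighbors or []

    def publish(self, name, payload):
        self.content_store[name] = payload

    def request(self, name, requester="local"):
        """Handle a request for `name`; return the data if it can be satisfied."""
        if name in self.content_store:           # cache hit: no end-to-end session needed
            return self.content_store[name]
        self.pending.setdefault(name, set()).add(requester)
        for nb in self.neighbors:                 # naive forwarding stands in for a FIB
            data = nb.request(name, requester=self.name)
            if data is not None:
                self.content_store[name] = data   # cache on the return path
                return data
        return None

# Usage: a producer, a relay and a consumer; the request is satisfied by name,
# and the relay's cache keeps serving data after the producer disappears.
producer, relay, consumer = Node("producer"), Node("relay"), Node("consumer")
consumer.neighbors, relay.neighbors = [relay], [producer]
producer.publish("/sensor/temp/1", b"21.5C")
assert consumer.request("/sensor/temp/1") == b"21.5C"
producer.content_store.clear(); consumer.content_store.clear()
assert consumer.request("/sensor/temp/1") == b"21.5C"   # served from the relay's cache
```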
Peer-to-Peer (P2P) is a dynamic and self-organized technology, popularly used in file-sharing applications to achieve better performance and avoid single points of failure. The popularity of these networks has attracted many attackers, who mount attacks including the Sybil attack, the Routing Table Insertion (RTI) attack and free riding. Many mitigation methods have been proposed to defend against or reduce the impact of such attacks, but most of these approaches are protocol specific. In this work, we propose a blockchain-based security framework for P2P networks to address such security issues, which can be tailored to any P2P file-sharing system.
Personal cryptographic keys are the foundation of many secure services, but storing these keys securely is a challenge, especially if they are used from multiple devices. Storing keys in a centralized location, like an Internet-accessible server, raises serious security concerns (e.g. server compromise). Hardware-based Trusted Execution Environments (TEEs) are a well-known solution for protecting sensitive data in untrusted environments, and are now becoming available on commodity server platforms. Although the idea of protecting keys using a server-side TEE is straightforward, in this paper we validate this approach and show that it enables new desirable functionality. We describe the design, implementation, and evaluation of a TEE-based Cloud Key Store (CKS), an online service for securely generating, storing, and using personal cryptographic keys. Using remote attestation, users receive strong assurance about the behaviour of the CKS, and can authenticate themselves using passwords while avoiding typical risks of password-based authentication like password theft or phishing. In addition, this design allows users to i) define policy-based access controls for keys; ii) delegate keys to other CKS users for a specified time and/or a limited number of uses; and iii) audit all key usages via a secure audit log. We have implemented a proof of concept CKS using Intel SGX and integrated this into GnuPG on Linux and OpenKeychain on Android. Our CKS implementation performs approximately 6,000 signature operations per second on a single desktop PC. The latency is in the same order of magnitude as using locally-stored keys, and 20x faster than smart cards.
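As a rough illustration of the service model described above (keys generated and used only inside the service, password authentication, usage-limited policies, and an audit log), here is a simplified, hypothetical Python sketch. It runs entirely in one process, uses an HMAC as a stand-in signature, and is not the paper's SGX-based implementation; all names are invented.

```python
# Illustrative key-store sketch: private keys never leave the service; clients
# authenticate with a password and receive signatures, subject to per-key policy.
import hmac, hashlib, os, secrets, time

class CloudKeyStore:
    def __init__(self):
        self._users = {}     # username -> (salt, password hash)
        self._keys = {}      # (username, key_id) -> secret key bytes
        self._policies = {}  # (username, key_id) -> {"expires": t, "uses_left": n}
        self.audit_log = []  # append-only record of every key use

    def register(self, user, password):
        salt = os.urandom(16)
        self._users[user] = (salt, hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000))

    def _authenticate(self, user, password):
        salt, stored = self._users[user]
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        if not hmac.compare_digest(stored, candidate):
            raise PermissionError("bad credentials")

    def generate_key(self, user, password, key_id, uses=None, expires=None):
        self._authenticate(user, password)
        self._keys[(user, key_id)] = secrets.token_bytes(32)
        self._policies[(user, key_id)] = {"expires": expires, "uses_left": uses}

    def sign(self, user, password, key_id, message: bytes) -> bytes:
        self._authenticate(user, password)
        policy = self._policies[(user, key_id)]
        if policy["expires"] is not None and time.time() > policy["expires"]:
            raise PermissionError("key expired")
        if policy["uses_left"] is not None:
            if policy["uses_left"] <= 0:
                raise PermissionError("usage quota exhausted")
            policy["uses_left"] -= 1
        self.audit_log.append((time.time(), user, key_id, hashlib.sha256(message).hexdigest()))
        # HMAC stands in for a real signature scheme in this sketch.
        return hmac.new(self._keys[(user, key_id)], message, hashlib.sha256).digest()

cks = CloudKeyStore()
cks.register("alice", "correct horse battery staple")
cks.generate_key("alice", "correct horse battery staple", "mail-key", uses=2)
sig = cks.sign("alice", "correct horse battery staple", "mail-key", b"hello")
```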
The collection of students' sensitive data provokes adverse reactions against Learning Analytics that decrease confidence in its adoption. The laws and policies that surround the use of educational data are not enough to ensure the privacy, security, validity, integrity and reliability of students' data. This problem has been identified through a literature review and can be addressed if a technological layer of automated checking rules is added on top of these policies. The aim of this thesis is to investigate an emerging technology, blockchain, to preserve the identity of students and secure their data. In a first stage, a systematic literature review will be conducted in order to set the context of the research. Afterwards, following the scientific method, we will develop a blockchain-based solution to automate rules and constraints, with the aim of giving students governance over their data and ensuring data privacy and security.
As a frequent participant in eSociety, Willeke is often preoccupied with paperwork because there is no easy-to-use, affordable way to act as a qualified person in the digital world. Confidential interactions take place over insecure channels like e-mail and post. This situation poses risks and costs for service providers, civilians and governments, while goals regarding confidentiality and privacy are not always met. The objective of this paper is to demonstrate an alternative architecture in which identifying persons, exchanging information, authorizing external parties and signing documents will become more user-friendly and secure. As a starting point, each person has their personal data space, provided by a qualified trust service provider that also issues an electronic ID with a high level of assurance. Three main building blocks are required: (1) secure exchange between the personal data space of each person, (2) coordination functionalities provided by a token-based infrastructure, and (3) governance over this infrastructure. Following the design science research approach, we developed prototypes of the building blocks that we will pilot in practice. Policy makers and practitioners who want to enable Willeke to get rid of her paperwork can find guidance throughout this paper and are welcome to join the pilots in the Netherlands.
Attribute-based access control (ABAC) is a general access control model that subsumes numerous earlier access control models. Its increasing popularity stems from the intuitive generic structure of granting permissions based on application and domain attributes of users, subjects, objects, and other entities in the system. Multiple formal and informal languages have been developed to express policies in terms of such attributes. The utility of ABAC policy languages is potentially undermined without a properly formalized underlying model. The high-level structure in a majority of ABAC models consists of sets of tokens and sets of sets, expressions that demand that the reader unpack multiple levels of sets and tokens to determine what things mean. The resulting reduced readability potentially endangers correct expression, reduces maintainability, and impedes validation. These problems could be magnified in models that employ nonuniform representations of actions and their governing policies. We propose to avoid these magnified problems by recasting the high-level structure of ABAC models in a logical formalism that treats all actions (by users and others) uniformly and that keeps existing policy languages in place by interpreting their attributes in terms of the restructured model. In comparison to existing ABAC models, use of a logical language for model formalization, including hierarchies of types of entities and attributes, promises improved expressiveness in specifying the relationships between and requirements on application and domain attributes. A logical modeling language also potentially improves flexibility in representing relationships as attributes to support some widely used policy languages. Consistency and intelligibility are improved by using uniform means for representing different types of controlled actions—such as regular access control actions, administrative actions, and user logins—and their governing policies. Logical languages also provide a well-defined denotational semantics supported by numerous formal inference and verification tools.
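As a reminder of the generic ABAC structure the paper builds on, the following minimal Python sketch grants an action only if every applicable rule over subject and object attributes holds. The attributes and rules are invented for illustration and are unrelated to the paper's logical formalism.

```python
# Generic ABAC sketch: a policy is a set of predicates over the attributes of
# the requesting subject, the object, and the action; access requires all rules.
from typing import Callable, Dict, Any

Attrs = Dict[str, Any]
Rule = Callable[[Attrs, Attrs, str], bool]

RULES: list[Rule] = [
    # Clearance must dominate the object's classification level.
    lambda sub, obj, act: sub["clearance"] >= obj["classification"],
    # Writes are restricted to members of the object's owning department.
    lambda sub, obj, act: act != "write" or sub["department"] == obj["department"],
]

def authorize(subject: Attrs, obj: Attrs, action: str) -> bool:
    return all(rule(subject, obj, action) for rule in RULES)

alice = {"clearance": 3, "department": "research"}
report = {"classification": 2, "department": "research"}
assert authorize(alice, report, "write")
assert not authorize(alice, {"classification": 4, "department": "research"}, "read")
```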
Interoperability has become a crucial value in European e-government developments, as promoted by the Digital Single Market strategy and the Tallinn Declaration. The European Union and its Member States have made considerable investments in improving the understanding of interoperability and in developing interoperable building blocks to support cross-border data exchange and public service provisioning. This includes recent updates of the European Interoperability Framework (EIF) and European Interoperability Reference Architecture (EIRA), as well as the publication of a number of generic and domain-specific architecture and solution building blocks such as digital identification or electronic delivery services. While interoperability governance was not clearly developed in the previous version of the EIF, the new version of 2017 positions interoperability governance as a concept that spans the different interoperability layers (legal, organizational, semantic and technical) and frames interoperability overall. In this paper, we develop a definition of interoperability governance from a literature review and put forward a model to investigate interoperability governance at European and Member State levels. Based on several case studies of EU institutions and Member States, we draw recommendations on the key aspects of interoperability governance needed to successfully diffuse interoperability into public service provisioning.
Exclusive-or (XOR) operations are common in cryptographic protocols, in particular in RFID protocols and electronic payment protocols. Although there are numerous applications, due to the inherent complexity of faithful models of XOR, there is only limited tool support for the verification of cryptographic protocols using XOR. The Tamarin prover is a state-of-the-art verification tool for cryptographic protocols in the symbolic model. In this paper, we improve the underlying theory and the tool to deal with an equational theory modeling XOR operations. The XOR theory can be freely combined with all equational theories previously supported, including user-defined equational theories. This makes Tamarin the first tool to support simultaneously this large set of equational theories, protocols with global mutable state, an unbounded number of sessions, and complex security properties including observational equivalence. We demonstrate the effectiveness of our approach by analyzing several protocols that rely on XOR, in particular multiple RFID protocols, where we can identify attacks as well as provide proofs.
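For reference, the equational theory of XOR that such an analysis must handle combines associativity and commutativity with a unit and nilpotence:

\[
x \oplus (y \oplus z) = (x \oplus y) \oplus z, \qquad
x \oplus y = y \oplus x, \qquad
x \oplus 0 = x, \qquad
x \oplus x = 0 .
\]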
Formal verification of infinite-state systems, and distributed systems in particular, is a long standing research goal. In the deductive verification approach, the programmer provides inductive invariants and pre/post specifications of procedures, reducing the verification problem to checking validity of logical verification conditions. This check is often performed by automated theorem provers and SMT solvers, substantially increasing productivity in the verification of complex systems. However, the unpredictability of automated provers presents a major hurdle to usability of these tools. This problem is particularly acute in case of provers that handle undecidable logics, for example, first-order logic with quantifiers and theories such as arithmetic. The resulting extreme sensitivity to minor changes has a strong negative impact on the convergence of the overall proof effort.
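As a tiny, generic illustration of this workflow (not an example from the paper), the sketch below uses the Z3 SMT solver's Python bindings to discharge a verification condition: the condition is valid exactly when its negation is unsatisfiable.

```python
# Checking a verification condition with an SMT solver (generic illustration):
# a VC of the form  invariant AND transition IMPLIES invariant'  is valid iff
# its negation is unsatisfiable.  Requires the z3-solver package.
from z3 import Ints, And, Implies, Not, Solver, unsat

x, y, x1, y1 = Ints("x y x1 y1")

invariant      = x + y >= 0                    # claimed inductive invariant
transition     = And(x1 == x + 1, y1 == y)     # one step of a toy program: x := x + 1
invariant_post = x1 + y1 >= 0                  # invariant restated over the post-state

vc = Implies(And(invariant, transition), invariant_post)

s = Solver()
s.add(Not(vc))                                 # search for a counterexample to the VC
assert s.check() == unsat                      # unsat => the VC is valid
```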
In the VLSI industry, the design cycle is categorized into front-end design and back-end design. The front-end design flow runs from specifications to functional verification of the RTL design; back-end design runs from logic synthesis to fabrication of the chip. Handheld devices are built around mobile SoCs, which amalgamate many components such as the GPU, camera, sensors and display on a single chip, and protocols are needed to integrate these components. One emerging protocol is I3C (Improved Inter Integrated Circuit), developed by the Mobile Industry Processor Interface (MIPI) Alliance and mostly used for interconnecting sensors in mobile SoCs. The main motivation for adopting the standard is its increased speed and low pin count, and the bus protocol is backward compatible with I2C devices. This paper includes a detailed study of the I3C bus protocol and the development of a verification environment for it. The testbench environment is written and verified using SystemVerilog and UVM. The Universal Verification Methodology (UVM) is a base class library built in SystemVerilog which provides the fundamental blocks needed to quickly develop reusable and well-constructed verification components and test environments. Functional coverage of around 93.55% and code coverage of around 98.89% were achieved at verification closure.
With the development of large-scale integrated circuits, the functions of IoT chips have become increasingly complete, and verification has become one of the most important aspects of their development. On the one hand, an efficient verification platform can ensure the correctness of the design; on the other hand, it can shorten the chip design cycle and reduce the design cost. In this paper, based on a transmission protocol for IoT nodes, we propose a verification method which combines simulation verification and FPGA-based prototype verification. We also construct a system verification platform for an IoT smart node chip combining the two kinds of verification above, and we have successfully simulated and verified the related functions of the node chip using this platform. The platform provides a valuable reference for similar work.
For aerospace FPGA software products, traditional simulation methods face severe challenges in verifying product requirements under complicated scenarios. Given the increasing maturity of formal verification technology, this method can significantly improve verification efficiency and product design quality by expanding coverage of those "blind spots" in the product design which were not easily identified previously. Taking UART communication as an example, this paper proposes several critical points for applying formal verification to asynchronous communication protocols. Experiments and practice indicate that formal verification of asynchronous communication protocols can effectively reduce the time required, ensure a complete verification process and, more importantly, achieve more accurate and intuitive results.
With the growth of technology, designs have become more complex and may contain bugs, which makes verification an indispensable part of product development. UVM describes a standard, reusable and portable method for the verification of designs. This paper verifies the IIC bus protocol using the Universal Verification Methodology. The IIC controller is designed in Verilog using Vivado and has an APB interface; its functional and code coverage is measured in Mentor Graphics Questasim 10.4e. This work achieved 83.87% code coverage and 91.11% functional coverage.
SDN (Software Defined Networking) with multiple controllers is drawing more attention as networks increase in scale. This architecture can handle what SDN with a single controller is not able to address. In order to understand precisely what this architecture can accomplish and what challenges it faces, we analyze it with formal methods. In this paper, we apply CSP (Communicating Sequential Processes) to model the routing service of SDN under the HyperFlow architecture based on the OpenFlow protocol. Using the model checker PAT (Process Analysis Toolkit), we verify that the models satisfy three properties: deadlock freedom, consistency and fault tolerance.
As a decentralized and distributed secure storage technology, blockchain is now widely used for electronic trading in finance, for issuing digital certificates, for copyright management, and for many other security-critical applications. With applications in so many domains with high-assurance requirements, the formalization and verification of the safety and security properties of blockchain become essential, and are the aim of the present paper. We present the model-based formalization, simulation and verification of a blockchain protocol using the SDL formalism of Telelogic Tau. We build a hierarchical and modular SDL model of the blockchain protocol and exercise a methodology to formally simulate and verify it. In this way, we show how to effectively increase the security and safety of blockchain in order to meet the high-assurance requirements demanded by its application domains. Our work also provides effective support for assessing different network consensus algorithms, which are key components of blockchain protocols, as well as different topologies of blockchain networks. In conclusion, our approach contributes to setting up a verification methodology for future blockchain standards in digital trading.
Mobile communication networks connect much of the world's population. The security of users' calls, SMSs, and mobile data depends on the guarantees provided by the Authenticated Key Exchange protocols used. For the next-generation network (5G), the 3GPP group has standardized the 5G AKA protocol for this purpose. We provide the first comprehensive formal model of a protocol from the AKA family: 5G AKA. We also extract precise requirements from the 3GPP standards defining 5G and we identify missing security goals. Using the security protocol verification tool Tamarin, we conduct a full, systematic, security evaluation of the model with respect to the 5G security goals. Our automated analysis identifies the minimal security assumptions required for each security goal and we find that some critical security goals are not met, except under additional assumptions missing from the standard. Finally, we make explicit recommendations with provably secure fixes for the attacks and weaknesses we found.
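To recall the basic shape of AKA-style protocols, the following deliberately simplified Python sketch shows a challenge-response exchange keyed by a long-term shared key. It omits sequence-number management, anonymity keys, the 3GPP f1-f5 functions and everything else that makes 5G AKA subtle, and it is not the standardized protocol.

```python
# Heavily simplified challenge-response in the spirit of the AKA family
# (illustration only; HMAC-SHA256 stands in for the 3GPP key-derivation functions).
import hmac, hashlib, os

K = os.urandom(32)            # long-term key shared by the USIM and the home network

def f1(key, rand, sqn):       # stand-in for the network-authentication MAC function
    return hmac.new(key, b"f1" + rand + sqn, hashlib.sha256).digest()

def f2(key, rand):            # stand-in for the response (RES) derivation
    return hmac.new(key, b"f2" + rand, hashlib.sha256).digest()

# Home network side: issue a challenge and compute the expected response.
sqn  = (1).to_bytes(6, "big")
rand = os.urandom(16)
mac, xres = f1(K, rand, sqn), f2(K, rand)

# Subscriber side: authenticate the network, then answer the challenge.
assert hmac.compare_digest(mac, f1(K, rand, sqn))   # network is authentic
res = f2(K, rand)

# Network side: authenticate the subscriber.
assert hmac.compare_digest(res, xres)
```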
We present the first formalisation of a blockchain-based distributed consensus protocol with a proof of its consistency mechanised in an interactive proof assistant. Our development includes a reference mechanisation of the block forest data structure, necessary for implementing provably correct per-node protocol logic. We also define a model of a network, implementing the protocol in the form of a replicated state-transition system. The protocol's executions are modeled via a small-step operational semantics for asynchronous message passing, in which packages can be rearranged or duplicated. In this work, we focus on the notion of global system safety, proving a form of eventual consistency. To do so, we provide a library of theorems about a pure functional implementation of block forests, define an inductive system invariant, and show that, in a quiescent system state, it implies a global agreement on the state of per-node transaction ledgers. Our development is parametric with respect to implementations of several security primitives, such as hash-functions, a notion of a proof object, a Validator Acceptance Function, and a Fork Choice Rule. We precisely characterise the assumptions, made about these components for proving the global system consensus, and discuss their adequacy. All results described in this paper are formalised in Coq.
Experimentation tools facilitate exploration of Tor performance and security research problems and allow researchers to safely and privately conduct Tor experiments without risking harm to real Tor users. However, researchers using these tools configure them to generate network traffic based on simplifying assumptions and outdated measurements and without understanding the efficacy of their configuration choices. In this work, we design a novel technique for dynamically learning Tor network traffic models using hidden Markov modeling and privacy-preserving measurement techniques. We conduct a safe but detailed measurement study of Tor using 17 relays (~2% of Tor bandwidth) over the course of 6 months, measuring general statistics and models that can be used to generate a sequence of streams and packets. We show how our measurement results and traffic models can be used to generate traffic flows in private Tor networks and how our models are more realistic than standard and alternative network traffic generation methods.
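The general idea of replaying a learned model as a traffic generator can be sketched as follows; the hidden states, transition matrix and emission distributions below are invented placeholders, not the measured Tor models.

```python
# Generic sketch of generating synthetic traffic from a hidden Markov model:
# walk the hidden states and sample an observation (inter-arrival time) per step.
import numpy as np

rng = np.random.default_rng(0)

states = ["burst", "idle"]
transition = np.array([[0.85, 0.15],     # P(next state | current state)
                       [0.30, 0.70]])
# Emissions: mean packet inter-arrival time (seconds) for each hidden state.
emission_mean = {"burst": 0.005, "idle": 0.5}

def generate_interarrivals(n, start=0):
    s, out = start, []
    for _ in range(n):
        out.append(rng.exponential(emission_mean[states[s]]))  # sample an observation
        s = rng.choice(len(states), p=transition[s])           # take a hidden transition
    return out

print(sum(generate_interarrivals(1000)))   # total duration of the synthetic trace
```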
This paper presents DeDoS, a novel platform for mitigating asymmetric DoS attacks. These attacks are particularly challenging since even attackers with limited resources can exhaust the resources of well-provisioned servers. DeDoS offers a framework to deploy code in a highly modular fashion. If part of the application stack is experiencing a DoS attack, DeDoS can massively replicate only the affected component, potentially across many machines. This allows scaling of the impacted resource separately from the rest of the application stack, so that resources can be precisely added where needed to combat the attack. Our evaluation results show that DeDoS incurs reasonable overheads in normal operations, and that it significantly outperforms standard replication techniques when defending against a range of asymmetric attacks.
The Internet of Things (IoT) holds a lot of benefits for our lives by removing menial tasks and improving the efficiency of everyday objects. However, you are trusting your personal data and device control to the manufacturers, and you may not be aware of how much risk you are putting your privacy at by sending your data over the Internet. The Internet of Things may not be as secure as you think: the devices used are constrained by many variables which attackers can exploit to gain access to your data, your devices and anything they are connected to, and because the Internet of Things is all about connecting devices together, one weak point can be all it takes to gain full access. In this paper we look at the current advances in IoT security and the most efficient methods to protect IoT devices.
Software container solutions have revolutionized application development approaches by enabling lightweight platform abstractions within the so-called "containers." Several solutions are being actively developed in attempts to bring the benefits of containers to high-performance computing systems, with their stringent security demands on the one hand and fundamental resource sharing requirements on the other. In this paper, we discuss the benefits and shortcomings of such solutions when deployed on real HPC systems and applied to production scientific applications. We highlight use cases that are either enabled by or significantly benefit from such solutions. We discuss the efforts by HPC system administrators and support staff to support users of these types of workloads on HPC systems that were not initially designed with them in mind, focusing on NCSA's Blue Waters system.
The existing state-of-the-art in the field of intrusion detection systems (IDSs) generally involves some use of machine learning algorithms. However, the computer security community is growing increasingly aware that a sophisticated adversary could target the learning module of these IDSs in order to circumvent future detections. Consequently, going forward, robustness of machine-learning based IDSs against adversarial manipulation (i.e., poisoning) will be the key factor for the overall success of these systems in the real world. In our work, we focus on adaptive IDSs that use anomaly-based detection to identify malicious activities in an information system. To be able to evaluate the susceptibility of these IDSs to deliberate adversarial poisoning, we have developed a novel framework for their performance testing under adversarial contamination. We have also studied the viability of using deep autoencoders in the detection of anomalies in adaptive IDSs, as well as their overall robustness against adversarial poisoning. Our experimental results show that our proposed autoencoder-based IDS outperforms a generic PCA-based counterpart by more than 15% in terms of detection accuracy. The obtained results concerning the detection ability of the deep autoencoder IDS under adversarial contamination, compared to that of the PCA-based IDS, are also encouraging, with the deep autoencoder IDS maintaining more stable detection while limiting the contamination of its training dataset to just below 2%.
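The underlying anomaly-detection principle, flagging inputs whose reconstruction error under a model fitted to benign data exceeds a threshold, can be sketched as follows (shown with a PCA reconstruction for brevity; this is not the paper's autoencoder architecture or poisoning framework, and the data is synthetic).

```python
# Generic reconstruction-error anomaly detector: fit on presumed-benign traffic,
# then flag records whose reconstruction error exceeds a threshold chosen from
# the training data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
benign = rng.normal(0, 1, size=(5000, 20))   # stand-in benign feature vectors
attack = rng.normal(4, 1, size=(50, 20))     # stand-in anomalous records

model = PCA(n_components=5).fit(benign)

def reconstruction_error(X):
    return np.linalg.norm(X - model.inverse_transform(model.transform(X)), axis=1)

threshold = np.percentile(reconstruction_error(benign), 99)   # ~1% false-positive budget
print((reconstruction_error(attack) > threshold).mean())      # fraction of anomalies caught
```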
The Internet of Things (IoT) market is growing rapidly, allowing continuous evolution of new technologies. Alongside this development, most IoT devices are easy to compromise, as security is often not a prioritized characteristic. This paper proposes a novel IoT Security Model (IoTSM) that can be used by organizations to formulate and implement a strategy for developing end-to-end IoT security. IoTSM is grounded in the Software Assurance Maturity Model (SAMM) framework, but expands it with new security practices and empirical data gathered from IoT practitioners. Moreover, we generalize the model into a conceptual framework. This approach allows the formal analysis of security in general and the evaluation of an organization's security practices. Overall, our proposed approach can help researchers, practitioners, and IoT organizations to discuss IoT security from an end-to-end perspective.
The principle of least privilege requires that components of a program have access to only those resources necessary for their proper function. Defining proper function is a difficult task. Existing methods of privilege separation, like Control Flow Integrity and Software Fault Isolation, attempt to infer proper function by bridging the gaps between language abstractions and hardware capabilities. However, it is programmer intent that defines proper function, as the programmer writes the code that becomes law. Codifying programmer intent into policy is a promising way to capture proper function; however, the often onerous process of policy creation can unnecessarily delay development and adoption. In this paper, we demonstrate the use of our ELF-based access control (ELFbac), a novel technique for policy definition and enforcement. ELFbac leverages the common programmer's existing mental model of scope, and allows for policy definition at the Application Binary Interface (ABI) level. We consider the roaming vulnerability found in OpenSSH, and demonstrate how using ELFbac would have provided strong mitigation with minimal program modification. This serves to illustrate the effectiveness of ELFbac as a means of privilege separation in further applications, and the intuitive, yet robust nature of our general approach to policy creation.
The security field relies on user studies, often including survey questions, to query end users' general security behavior and experiences, or hypothetical responses to new messages or tools. Self-report data has many benefits – ease of collection, control, and depth of understanding – but also many well-known biases stemming from people's difficulty remembering prior events or predicting how they might behave, as well as their tendency to shape their answers to a perceived audience. Prior work in fields like public health has focused on measuring these biases and developing effective mitigations; however, there is limited evidence as to whether and how these biases and mitigations apply specifically in a computer-security context. In this work, we systematically compare real-world measurement data to survey results, focusing on an exemplar, well-studied security behavior: software updating. We align field measurements about specific software updates (n=517,932) with survey results in which participants respond to the update messages that were used when those versions were released (n=2,092). This allows us to examine differences in self-reported and observed update speeds, as well as examining self-reported responses to particular message features that may correlate with these results. The results indicate that for the most part, self-reported data varies consistently and systematically with measured data. However, this systematic relationship breaks down when survey respondents are required to notice and act on minor details of experimental manipulations. Our results suggest that many insights from self-report security data can, when used with care, translate to real-world environments; however, insights about specific variations in message texts or other details may be more difficult to assess with surveys.