Biblio
As a decentralized and distributed secure storage technology, blockchain is now widely used for electronic trading in finance, for issuing digital certificates, for copyright management, and for many other security-critical applications. With applications in so many domains with high-assurance requirements, the formalization and verification of safety and security properties of blockchain becomes essential; this is the aim of the present paper. We present the model-based formalization, simulation and verification of a blockchain protocol using the SDL formalism of Telelogic Tau. We build a hierarchical and modular SDL model of the blockchain protocol and exercise a methodology to formally simulate and verify it. In this way, we show how to effectively increase the security and safety of blockchain in order to meet the high-assurance requirements demanded by its application domains. Our work also provides effective support for assessing different network consensus algorithms, which are key components of blockchain protocols, as well as the topology of blockchain networks. In conclusion, our approach contributes to setting up a verification methodology for future blockchain standards in digital trading.
Industrial control systems are changing from monolithic to distributed and interconnected architectures, entering the era of the industrial IoT. One fundamental issue is that security properties of such distributed control systems are typically only verified empirically, during development and after system deployment. We propose a novel modelling framework for the security verification of distributed industrial control systems, with the goal of moving towards early design-stage formal verification. In our framework we model industrial IoT infrastructures, attack patterns, and mitigation strategies for countering attacks. We conduct model checking-based formal analysis of system security through scenario execution, where the analysed system is exposed to attacks and mitigation strategies are applied. We study the applicability of our framework to large systems using a scalability analysis.
In the past decades, static code analysis has become a prevalent means to detect bugs and security vulnerabilities in software systems. As software becomes more complex, analysis tools also report lists of increasingly complex warnings that developers need to address on a daily basis. The novel insight we present in this work is that static analysis tools and video games both require users to take on repetitive and challenging tasks. Importantly, though, while good video games manage to keep players engaged, static analysis tools are notorious for their poor user experience, which prevents developers from using them to their full potential and frequently results in dissatisfaction and even tool abandonment. We show parallels between gaming and using static analysis tools, and advocate that the user-experience issues of analysis tools can be addressed by looking at the analysis tooling system as a whole, and by integrating gaming elements that keep users engaged, such as providing immediate and clear feedback, collaborative problem solving, or motivators such as points and badges.
Algebraic natural proofs were recently introduced by Forbes, Shpilka and Volk (Proc. of the 49th Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 653–664, 2017) and independently by Grochow, Kumar, Saks and Saraf (CoRR, abs/1701.01717, 2017) as an attempt to transfer Razborov and Rudich's famous barrier result (J. Comput. Syst. Sci., 55(1): 24–35, 1997) for Boolean circuit complexity to algebraic complexity theory. Razborov and Rudich's barrier result relies on a widely believed assumption, namely, the existence of pseudo-random generators. Unfortunately, there is no known analogous theory of pseudo-randomness in the algebraic setting. Therefore, Forbes et al. use a concept called succinct hitting sets instead. This assumption is related to polynomial identity testing, but it is currently not clear how plausible it is; Forbes et al. are only able to construct succinct hitting sets against rather weak models of arithmetic circuits. Generalized matrix completion is the following problem: given a matrix with affine linear forms as entries, find an assignment to the variables in the linear forms such that the rank of the resulting matrix is minimal. We call this rank the completion rank. Computing the completion rank is an NP-hard problem. As our first main result, we prove that it is also NP-hard to determine whether a given matrix can be approximated by matrices of completion rank $\leq b$. The minimum quantity $b$ for which this is possible is called the border completion rank (similar to the border rank of tensors). Naturally, algebraic natural proofs can only prove lower bounds for such border complexity measures. Furthermore, these border complexity measures play an important role in the geometric complexity theory program. Using our hardness result above, we can prove the following barrier: we construct a small family of matrices with affine linear forms as entries and a bound $b$, such that at least one of these matrices does not have an algebraic natural proof of polynomial size against all matrices of border completion rank $b$, unless coNP $\subseteq$ $\exists$BPP. This is an algebraic barrier result that is based on a well-established and widely believed conjecture. The complexity class $\exists$BPP is known to be contained in MA, the more widely known complexity class in the literature; thus $\exists$BPP can be replaced by MA in the statements of all our results. With similar techniques, we can also prove that tensor rank is hard to approximate. Furthermore, we prove a similar result for the variety of matrices with permanent zero: there are no algebraic natural proofs of polynomial size for this variety, unless P$^{\#P}$ $\subseteq$ $\exists$BPP. On the other hand, we are able to prove that the geometric complexity theory approach initiated by Mulmuley and Sohoni (SIAM J. Comput. 31(2): 496–526, 2001) yields proofs of polynomial size for this variety, therefore overcoming the natural-proofs barrier in this case.
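To fix notation for the two rank measures discussed in this abstract (the symbols $\operatorname{cr}$ and $\underline{\operatorname{cr}}$ are introduced here only for illustration and are not taken from the paper), for a matrix $M(x)$ whose entries are affine linear forms in $x = (x_1, \dots, x_k)$ over a field $\mathbb{F}$ one may write

$\operatorname{cr}(M) \;=\; \min_{x \in \mathbb{F}^{k}} \operatorname{rank}\bigl(M(x)\bigr), \qquad \underline{\operatorname{cr}}(M) \;=\; \min\bigl\{\, b \;:\; M \in \overline{\{\, N : \operatorname{cr}(N) \leq b \,\}} \,\bigr\},$

i.e. the completion rank is the smallest rank achievable by some assignment, and the border completion rank is the smallest $b$ such that $M$ can be approximated arbitrarily well by matrices of completion rank $\leq b$, mirroring the definition of border rank for tensors.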
A mobile ad hoc network (MANET) is a set of nodes that communicate together in a cooperative way using the wireless medium, without any central administration. Due to its inherently open nature and the lack of infrastructure, security is a more complicated issue than in other networks: these networks are vulnerable to a wide range of attacks at different network layers. At the network level, malicious nodes can perform several attacks ranging from passive eavesdropping to active interfering. The Wormhole attack is an example of a severe attack that has attracted much attention recently. It involves the redirection of traffic between two end-nodes through a Wormhole tunnel, and manipulates the routing algorithm to give the illusion that nodes located far from each other are neighbors. To address this issue, we propose a novel detection model that allows a node to check whether a presumed shortest path contains a Wormhole tunnel or not. Our approach is based on the fact that the Wormhole tunnel significantly reduces the length of the paths passing through it.
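As a rough illustration of that length-based idea (a sketch of the general principle only, not the authors' detection model; the radio range, the suspicion margin and the availability of end-node coordinates are assumptions), a node could flag an advertised shortest path whose hop count is physically too small for the distance it covers:

import math

RADIO_RANGE = 100.0      # assumed maximum one-hop transmission range, in metres
SUSPICION_MARGIN = 2     # assumed tolerance, in hops

def wormhole_suspected(path_hops, src_pos, dst_pos,
                       radio_range=RADIO_RANGE, margin=SUSPICION_MARGIN):
    """path_hops: hop count of the advertised shortest path.
    src_pos, dst_pos: (x, y) coordinates of the end-nodes (assumed known).
    Returns True when the path is too short to be physically possible."""
    min_physical_hops = math.ceil(math.dist(src_pos, dst_pos) / radio_range)
    return path_hops + margin < min_physical_hops

# End-nodes 1 km apart cannot be 3 hops away with a 100 m radio range.
print(wormhole_suspected(3, (0, 0), (1000, 0)))   # True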
The Advanced Encryption Standard (AES) enables secure transmission of confidential messages. Since its invention, there have been many proposed attacks against the scheme. For example, one can inject errors or faults to acquire the encryption keys. It has been shown that the AES algorithm itself does not provide protection against these types of attacks. Therefore, additional techniques like error control codes (ECCs) have been proposed to detect active attacks. However, not all of the proposed solutions are adequately effective. For instance, linear ECCs have some critical limitations, especially when the injected errors are beyond their fault detection or tolerance capabilities. In this paper, we propose a new method based on a non-linear code to protect all four internal stages of the AES hardware implementation. With this method, the protected AES system is able to (a) detect errors of any multiplicity with high probability and (b) correct them if the errors follow certain patterns or frequencies. Results show that the proposed method provides much higher security and reliability for the AES hardware implementation with minimal overhead.
In the distributed Internet of Things (IoT) architecture, sensors collect data from vehicles, home appliances, office equipment and other environments. Various objects contain sensors that process data, cooperate and exchange information with other embedded devices and end users in a distributed network. It is important to provide end-to-end communication security and an authentication system to guarantee the security and reliability of the data in such a distributed system. Two-factor authentication is a solution that improves the security level of password-based authentication processes and immunizes the system against many attacks. At the same time, the computational and storage overhead of an authentication method also needs to be considered in IoT scenarios. For this reason, many cryptographic schemes are designed especially for the IoT; however, we observe a lack of laboratory hardware test beds and modules, and of universal authentication hardware modules. This paper proposes the design and analysis of a hardware module for the IoT which allows the use of two-factor authentication based on smart cards, while taking into consideration the limited processing power and energy reserves of nodes, as well as designing the system with scalability in mind.
In spite of the numerous advantages of biometrics-based personal authentication systems over traditional security systems based on tokens or knowledge, they are vulnerable to attacks that can decrease their security considerably. In this paper, we propose a new hardware solution to protect biometric templates such as fingerprints. The proposed scheme is based on a chaotic N × N grid multi-scroll system and is implemented on a Xilinx FPGA. The hardware implementation is achieved by applying numerical solution methods; in our study, we use the Euler Method (EM). Simulation and experimental results show that the proposed scheme allows low-cost image encryption for embedded systems while still providing a good trade-off between performance and hardware resources. Indeed, the security analysis performed on our scheme shows that it is strong against different known attacks, such as brute-force, statistical, differential, and entropy attacks. Therefore, the proposed chaos-based multi-scroll encryption algorithm is suitable for use in securing embedded biometric systems.
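For illustration of this kind of pipeline (a minimal sketch only: the classic Lorenz system stands in for the paper's N × N grid multi-scroll system, and every constant below is a placeholder), the explicit Euler method can integrate a chaotic system and turn its trajectory into a keystream applied to the biometric template:

def euler_keystream(n_bytes, x=0.1, y=0.0, z=0.0, h=0.001,
                    sigma=10.0, rho=28.0, beta=8.0 / 3.0, burn_in=1000):
    """Integrate a chaotic system with the explicit Euler method and quantize
    the trajectory into a byte keystream (Lorenz is only a stand-in here)."""
    stream = bytearray()
    for i in range(burn_in + n_bytes):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + h * dx, y + h * dy, z + h * dz        # one Euler step
        if i >= burn_in:
            stream.append(int(abs(x + y + z) * 1e6) & 0xFF) # quantize state to a byte
    return bytes(stream)

def xor_encrypt(data: bytes) -> bytes:
    """XOR stream cipher: applying it twice with the same seed decrypts."""
    keystream = euler_keystream(len(data))
    return bytes(b ^ k for b, k in zip(data, keystream))

template = b"example fingerprint template bytes"
cipher = xor_encrypt(template)
assert xor_encrypt(cipher) == template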
The NEREIDA wave generation power plant installed in Mutriku, Spain, is a multiple Oscillating Water Column (OWC) plant. The power take-off consists of a Wells turbine coupled to a Doubly Fed Induction Generator (DFIG). The stalling behavior present in the Wells turbine limits the generated power. This paper presents the modeling and a Harmony Search Algorithm-based airflow control of the OWC. The Harmony Search Algorithm (HSA) is proposed to help overcome the limitations of a traditionally tuned PID. A comparison between the HSA-tuned controller and the traditionally tuned controller has been performed. Results for the controlled and uncontrolled plant prove the effectiveness of the airflow control and the superiority of the HSA-tuned controller.
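A minimal sketch of a Harmony Search loop used for this kind of controller tuning (the cost function, bounds and HSA parameters below are placeholders, not the Mutriku OWC plant model or the paper's settings) could look as follows:

import random

def harmony_search(cost, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.05, iters=500):
    """Minimize cost(params) over the box 'bounds' with a basic Harmony Search:
    hms = harmony memory size, hmcr = memory considering rate,
    par = pitch adjusting rate, bw = bandwidth (fraction of each range)."""
    memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    costs = [cost(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:                    # take value from memory
                value = random.choice(memory)[d]
                if random.random() < par:                 # pitch adjustment
                    value += random.uniform(-bw, bw) * (hi - lo)
            else:                                         # random re-initialisation
                value = random.uniform(lo, hi)
            new.append(min(max(value, lo), hi))
        worst = max(range(hms), key=lambda i: costs[i])
        c = cost(new)
        if c < costs[worst]:                              # replace worst harmony
            memory[worst], costs[worst] = new, c
    best = min(range(hms), key=lambda i: costs[i])
    return memory[best], costs[best]

# Placeholder cost standing in for an error integral of the OWC plant under PID
# control; the three decision variables play the role of Kp, Ki and Kd.
(kp, ki, kd), best_cost = harmony_search(
    lambda g: (g[0] - 2.0) ** 2 + (g[1] - 0.5) ** 2 + (g[2] - 0.1) ** 2,
    bounds=[(0.0, 10.0), (0.0, 5.0), (0.0, 1.0)])
print(kp, ki, kd, best_cost)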
When sensitive data is stored in the cloud, the only way to ensure its secrecy is by encrypting it before it is uploaded. The emerging multi-cloud model, in which data is stored redundantly in two or more independent clouds, provides an opportunity to protect sensitive data with secret-sharing schemes. Both data-protection approaches are considered computationally expensive, but recent advances reduce their costs considerably: (1) Hardware acceleration methods promise to eliminate the computational complexity of encryption, but leave clients with the challenge of securely managing encryption keys. (2) Secure RAID, a recently proposed scheme, minimizes the computational overheads of secret sharing, but requires non-negligible storage overhead and random data generation. Each data-protection approach offers different tradeoffs and security guarantees. However, when comparing them, it is difficult to determine which approach will provide the best application-perceived performance, because previous studies were performed before their recent advances were introduced. To bridge this gap, we present the first end-to-end comparison of state-of-the-art encryption-based and secret sharing data protection approaches. Our evaluation on a local cluster and on a multi-cloud prototype identifies the tipping point at which the bottleneck of data protection shifts from the computational overhead of encoding and random data generation to storage and network bandwidth and global availability.
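To make the secret-sharing side of this comparison concrete (a minimal n-of-n XOR split, not the Secure RAID construction or its storage-efficient threshold coding), data can be divided so that each cloud holds one share and all shares but one are pure random data, which also makes the random-data-generation cost mentioned above visible:

import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(data: bytes, n_clouds: int):
    """Return n_clouds shares; any n_clouds - 1 of them reveal nothing about data."""
    random_shares = [secrets.token_bytes(len(data)) for _ in range(n_clouds - 1)]
    last_share = reduce(xor_bytes, random_shares, data)   # data XOR r1 XOR ... XOR r_{n-1}
    return random_shares + [last_share]

def reconstruct(shares):
    return reduce(xor_bytes, shares)

shares = split(b"sensitive record", 3)          # one share per independent cloud
assert reconstruct(shares) == b"sensitive record"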
The Internet-of-Things designates the interconnection of a variety of communication-enabled physical objects. IoT systems and devices must operate with a deterministic behavior and respect user-defined system goals in any situation. We therefore define hybrid controller synthesis for decentralized and critical IoT systems, relying on a set of rules to handle situations with asynchronous and synchronous event processing. This framework provides a declarative, rule-driven governance mechanism for locally synchronous sub-systems, enabling the hybrid control of IoT systems with formal guarantees on the satisfaction of system-wide QoS requirements. To demonstrate the practicality of our framework, we applied it to a critical medical Internet-of-Things use case, showing its usability for safety-critical IoT applications.
Third-party software daemons called host agents are increasingly responsible for a modern host's security, automation, and monitoring tasks. Because of their location within the host, these agents are at risk of manipulation by malware and users. Additionally, in virtualized environments where multiple adjacent guests each run their own set of agents, the cumulative resources that agents consume add up rapidly. Consolidating agents onto the hypervisor can address these problems, but places a technical burden on agent developers. This work presents a development methodology to re-engineer a host agent into a hyperagent, an out-of-guest agent that gains unique hypervisor-based advantages while retaining its original in-guest capabilities. This three-phase methodology makes integrating Virtual Machine Introspection (VMI) functionality into existing code easier and more accessible, minimizing an agent developer's re-engineering effort. The benefits of hyperagents are illustrated by porting the GRR live forensics agent, which retains 89% of its codebase, uses 40% less memory than its in-guest counterparts, and enables a 4.9x speedup for a representative data-intensive workload. This work shows that a conventional off-the-shelf host agent can feasibly be transformed into a hyperagent that provides a powerful, efficient tool for defending virtualized systems.
An important source of cyber-attacks is malware, which proliferates in different forms such as botnets. The botnet malware typically looks for vulnerable devices across the Internet, rather than targeting specific individuals, companies or industries. It attempts to infect as many connected devices as possible, using their resources for automated tasks that may cause significant economic and social harm while being hidden to the user and device. Thus, it becomes very difficult to detect such activity. A considerable amount of research has been conducted to detect and prevent botnet infestation. In this paper, we attempt to create a foundation for an anomaly-based intrusion detection system using a statistical learning method to improve network security and reduce human involvement in botnet detection. We focus on identifying the best features to detect botnet activity within network traffic using a lightweight logistic regression model. The network traffic is processed by Bro, a popular network monitoring framework which provides aggregate statistics about the packets exchanged between a source and destination over a certain time interval. These statistics serve as features to a logistic regression model responsible for classifying malicious and benign traffic. Our model is easy to implement and simple to interpret. We characterized and modeled 8 different botnet families separately and as a mixed dataset. Finally, we measured the performance of our model on multiple parameters using F1 score, accuracy and Area Under Curve (AUC).
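A minimal sketch of the classification step (the feature columns and the synthetic data below are placeholders, not the actual Bro-derived features or the botnet datasets used in the paper) could look like this with scikit-learn:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

rng = np.random.default_rng(0)
# stand-in columns: duration, bytes_sent, bytes_received, packet_count
X = rng.random((1000, 4))
y = rng.integers(0, 2, 1000)          # 1 = botnet flow, 0 = benign flow

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = clf.predict(X_test)
score = clf.predict_proba(X_test)[:, 1]
print(accuracy_score(y_test, pred), f1_score(y_test, pred),
      roc_auc_score(y_test, score))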
This paper presents some verifications and improved considerations of the NAND PUF, which was introduced recently [1]. For embedded systems such as IC cards, the secret data in memory is vulnerable, so it has to be encrypted and secured. A PUF circuit is sensitive to environmental conditions, especially to temperature variations and to fluctuations of current and voltage. The proposed bank IC card would operate under the AB class standard, i.e., the voltage would be constant except when the power mode changes. Nevertheless, operational temperatures may vary, as in the case of an outdoor ATM. Thus, this paper presents results of our PUF work in Cadence and on an FPGA board. Around 5 ns is needed for our PUF output to stabilize under temperature variation when the power mode changes. The inter-chip Hamming distance is 48.9%, very close to the ideal uniqueness and robustness value, indicating that our PUF is feasible for use in bank cards. The maximum error rates are HD$_{intra}$(0$^\circ$C) = 3.9961 and HD$_{intra}$(80$^\circ$C) = 3.9916 at the temperature extremes, while the minimum error rate is HD$_{intra}$(20$^\circ$C) = 2.9 at room temperature. For improvement, repetition, LDPC and SEC-DED codes are considered to eliminate these error rates.
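The two figures of merit quoted above can be illustrated with a small helper (ours, not the authors' code; the 16-bit responses are toy values): the inter-chip Hamming distance measures uniqueness (ideally 50%), while the intra-chip Hamming distance measures the error rate of repeated measurements of the same PUF across temperatures (ideally 0):

def hamming_distance(a: int, b: int, n_bits: int) -> int:
    return bin((a ^ b) & ((1 << n_bits) - 1)).count("1")

def inter_hd_percent(responses, n_bits):
    """Average pairwise HD between responses of *different* PUF instances, in %."""
    pairs = [(i, j) for i in range(len(responses)) for j in range(i + 1, len(responses))]
    total = sum(hamming_distance(responses[i], responses[j], n_bits) for i, j in pairs)
    return 100.0 * total / (len(pairs) * n_bits)

def intra_hd_bits(reference, noisy, n_bits):
    """Average HD (in bits) between a reference response and re-measurements
    of the *same* PUF, e.g. at 0, 20 and 80 degrees C."""
    return sum(hamming_distance(reference, r, n_bits) for r in noisy) / len(noisy)

# toy 16-bit responses from three hypothetical PUF instances
print(inter_hd_percent([0xA5C3, 0x5A3C, 0x0F0F], 16))
print(intra_hd_bits(0xA5C3, [0xA5C2, 0xA7C3], 16))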
Industry 4.0 is based on the CPS architecture, which represents the next generation of industrial systems. The CPS architecture is a system based on Cloud Computing technology and the Internet of Things, in which computing elements collaborate to control physical entities. A security framework for this architecture must protect both of its parts (physical and informational), so security in CPS is basically classified into two main parts: information security (data) and security of control. In this work, we propose two models to address the two problems identified in this security framework. The first proposal, SCCAF (Smart Cloud Computing Adoption Framework), deals with the informational part and serves to detect and block threats to our basic CPS architecture. The second model is a detector related to the physical part, used for detecting node information.
Dynamic interaction appears in many real-world scenarios where players are able to observe (perhaps imperfectly) the actions of another player and react accordingly. We consider the baseline representation of dynamic games, the extensive form, and focus on computing a Stackelberg equilibrium (SE), where the leader commits to a strategy to which the follower plays a best response. For one-shot games (e.g., security games), strategy-generation (SG) algorithms offer dramatic speed-ups by incrementally expanding the strategy spaces. However, a direct application of SG to extensive-form games (EFGs) does not bring a similar speed-up, since it typically results in a nearly complete strategy space. Our contributions are twofold: (1) for the first time, we introduce an algorithm that allows us to incrementally expand the strategy space to find an SE in EFGs; (2) we introduce a heuristic variant of the algorithm that is theoretically incomplete, but in practice allows us to find exact (or close-to-optimal) Stackelberg equilibria by constructing a significantly smaller strategy space. Our experimental evaluation confirms that we are able to compute SE by considering only a fraction of the strategy space, which often leads to a significant speed-up in computation times.
As modern societies become more dependent on IT services, the potential impact of both adversarial cyberattacks and non-adversarial service management mistakes grows. This calls for better cyber situational awareness: decision-makers need to know what is going on. The main focus of this paper is to examine the information elements that need to be collected and included in a common operational picture in order for stakeholders to acquire cyber situational awareness. This problem is addressed through a survey conducted among the participants of a national information assurance exercise conducted in Sweden. Most participants were government officials and employees of commercial companies that operate critical infrastructure. The results give insight into which information elements are perceived as useful, which can be contributed to and required from other organizations, which roles and stakeholders would benefit from certain information, and how the organizations work with creating cyber common operational pictures today. Among the findings, it is noteworthy that adversarial behavior is not perceived as interesting, and that the respondents generally focus solely on their own organization.
Owing to the immense growth of internet-connected and learning-enabled cyber-physical systems (CPSs) [1], several new types of attack vectors have emerged. Analyzing the security and resilience of these complex CPSs is difficult, as it requires evaluating many subsystems and factors in an integrated manner. Integrated simulation of physical systems and communication networks can provide an underlying framework for creating a reusable and configurable testbed for such analyses. Using a model-based integration approach and distributed simulation software based on the IEEE High-Level Architecture (HLA) [2], we have created a testbed for the integrated evaluation of large-scale CPS systems. Our testbed supports web-based collaborative metamodeling and modeling of CPS systems and experiments, and a cloud computing environment for executing integrated networked co-simulations. A modular and extensible cyber-attack library enables validating the CPS under a variety of configurable cyber-attacks, such as DDoS and integrity attacks. Hardware-in-the-loop simulation is also supported, along with several hardware attacks. Further, a scenario modeling language allows modeling of alternative paths (Courses of Action), which enables validating the CPS under different what-if scenarios as well as conducting cyber-gaming experiments. These capabilities make our testbed well suited for analyzing the security and resilience of CPS. In addition, the web-based modeling and cloud-hosted execution infrastructure allow one to exercise the entire testbed using simply a web browser, with integrated live display of experimental results.
Nowadays, the security of data becomes more and more important, as people store a great deal of personal information on their phones. This stored information requires security and privacy. Encryption algorithms have become the main means of maintaining data security, so algorithmic complexity and encryption efficiency have become the main measures of whether an encryption algorithm is safe and practical. With the development of hardware, we now have many tools to improve such algorithms. Because modular exponentiation in the RSA algorithm can be divided into several parts mathematically, in this paper we introduce an approach that divides the encryption process and maps it onto the graphics processing unit (GPU). By exploiting the GPU's capacity for parallel computing, the core of RSA can be accelerated by using the central processing unit (CPU) and GPU together. The Compute Unified Device Architecture (CUDA) is a platform that combines the CPU and GPU to enable GPU parallel programming, and this is the tool we use to carry out our experiments on accelerating the RSA algorithm. This paper also builds up a mathematical model to help understand the mechanism of the RSA encryption algorithm.
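The mathematical split referred to above can be illustrated as follows (a pure-Python sketch of the exponent-splitting idea only; the paper's actual implementation offloads the work to CUDA kernels rather than computing the parts sequentially as done here):

def split_modexp(m: int, e: int, n: int, k: int) -> int:
    """Compute m^e mod n via the split e = e_hi * 2^k + e_lo:
    m^e = (m^e_hi)^(2^k) * m^e_lo (mod n); the two partial powers are
    independent and could be evaluated concurrently on the CPU and the GPU."""
    e_hi, e_lo = e >> k, e & ((1 << k) - 1)
    part_hi = pow(m, e_hi, n)   # independent work item 1
    part_lo = pow(m, e_lo, n)   # independent work item 2
    return (pow(part_hi, 1 << k, n) * part_lo) % n

# Textbook-RSA sanity check with the toy modulus n = 3233 and e = 17.
assert split_modexp(65, 17, 3233, 3) == pow(65, 17, 3233)
print(split_modexp(65, 17, 3233, 3))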
Memory-safety violations are the primary cause of security and reliability issues in software systems written in unsafe languages. Given the limited adoption of decades-long research in software-based memory safety approaches, as an alternative, Intel released Memory Protection Extensions (MPX), a hardware-assisted technique to achieve memory safety. In this work, we perform an exhaustive study of Intel MPX architecture along three dimensions: (a) performance overheads, (b) security guarantees, and (c) usability issues. We present the first detailed root cause analysis of problems in the Intel MPX architecture through a cross-layer dissection of the entire system stack, involving the hardware, operating system, compilers, and applications. To put our findings into perspective, we also present an in-depth comparison of Intel MPX with three prominent types of software-based memory safety approaches. Lastly, based on our investigation, we propose directions for potential changes to the Intel MPX architecture to aid the design space exploration of future hardware extensions for memory safety. A complete version of this work appears in the 2018 proceedings of the ACM on Measurement and Analysis of Computing Systems.
We describe a new way of compressing two-party communication protocols to get protocols with potentially smaller communication. We show that every communication protocol that communicates $C$ bits and reveals $I$ bits of information about the participants' private inputs to an observer that watches the communication can be simulated by a new protocol that communicates at most $\mathrm{poly}(I) \cdot \log\log(C)$ bits. Our result is tight up to polynomial factors, as it matches the recent work separating communication complexity from external information cost.
Interoperability has become a crucial value in European e-government developments, as promoted by the Digital Single Market strategy and the Tallinn Declaration. The European Union and its Member States have made considerable investments in improving the understanding of interoperability and in developing interoperable building blocks to support cross-border data exchange and public service provisioning. This includes recent updates of the European Interoperability Framework (EIF) and the European Interoperability Reference Architecture (EIRA), as well as the publication of a number of generic and domain-specific architecture and solution building blocks such as digital identification or electronic delivery services. While in the previous version of the EIF interoperability governance was not clearly developed, the new version of 2017 positions interoperability governance as a concept that spans the different interoperability layers (legal, organizational, semantic and technical) and that builds the frame for interoperability overall. In this paper, we develop a definition of interoperability governance from a literature review and put forward a model to investigate interoperability governance models at European and Member State levels. Based on several case studies of EU institutions and Member States, we draw recommendations on the key aspects of interoperability governance needed to successfully diffuse interoperability into public service provisioning.
Robots are a sophisticated form of IoT device: smart devices that scrutinize sensor data from multiple sources and observe events to decide the best course of action to supervise and manoeuvre objects in the physical world. In this paper, localization of the robot is addressed by QR-code detection and path optimization is accomplished by Dijkstra's algorithm. The robot navigates automatically in its environment using its sensors, and the shortest path is recomputed whenever heading measurements are updated by QR-code landmark recognition. The proposed approach greatly reduces the computational burden and deployment complexity, as it uses artificial intelligence to self-correct the robot's course when required. An encrypted communication channel is established over a wireless local area network using the SSHv2 protocol to transfer and receive sensor data (or commands), making it an IoT-enabled robot.
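A minimal sketch of the path-optimization step (the graph, QR-code IDs and edge costs are illustrative, not the paper's map) shows how Dijkstra's algorithm returns the shortest route between QR-code landmarks:

import heapq

def dijkstra(graph, start, goal):
    """graph: dict node -> list of (neighbor, edge_cost). Returns (cost, path)."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

# landmarks identified by their QR-code IDs
grid = {"QR1": [("QR2", 1.0), ("QR3", 4.0)],
        "QR2": [("QR3", 1.5)],
        "QR3": []}
print(dijkstra(grid, "QR1", "QR3"))   # (2.5, ['QR1', 'QR2', 'QR3'])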
Despite the additional protection it affords, two-factor authentication (2FA) adoption reportedly remains low. To better understand 2FA adoption and its barriers, we observed the deployment of a 2FA system at Carnegie Mellon University (CMU). We explore user behaviors and opinions around adoption, surrounding a mandatory adoption deadline. Our results show that (a) 2FA adopters found it annoying, but fairly easy to use, and believed it made their accounts more secure; (b) experience with CMU Duo often led to positive perceptions, sometimes translating into 2FA adoption for other accounts; and, (c) the differences between users required to adopt 2FA and those who adopted voluntarily are smaller than expected. We also explore the relationship between different usage patterns and perceived usability, and identify user misconceptions, insecure practices, and design issues. We conclude with recommendations for large-scale 2FA deployments to maximize adoption, focusing on implementation design, use of adoption mandates, and strategic messaging.