Bibliography
It has recently become apparent that both accidental and maliciously caused randomness failures pose a real and serious threat to the security of cryptographic primitives, and in response, researchers have begun developing primitives that provide robustness against such failures. In this paper, however, we focus on standardized, widely available primitives. Specifically, we analyze the RSA-OAEP encryption scheme and the RSA-PSS signature scheme, both specified in PKCS \#1, using the related randomness security notion introduced by Paterson et al. (PKC 2014) and its extension to signature schemes. We show that, under the RSA and $\Phi$-hiding assumptions, RSA-OAEP encryption is related randomness secure for a large class of related randomness functions in the random oracle model, as long as the recipient is honest, and remains secure even when malicious recipients are additionally considered, as long as the related randomness functions do not allow the malicious recipients to efficiently compute the randomness used for the honest recipient. We furthermore show that, under the RSA assumption, the RSA-PSS signature scheme is secure for any class of related randomness functions, although with a non-tight security reduction. However, under additional, albeit somewhat restrictive, assumptions on the related randomness functions and the adversary, a tight reduction can be recovered. Our results provide some reassurance regarding the use of RSA-OAEP and RSA-PSS in environments where randomness failures might be a concern. Lastly, we note that, unlike RSA-OAEP and RSA-PSS, several other schemes, including RSA-KEM, part of ISO 18033-2, and DHIES, part of IEEE P1363a, are not secure under simple repeated randomness attacks.
In this paper we conduct an empirical study with the purpose of identifying common software weaknesses of embedded devices used as part of industrial control systems in power grids. Data is gathered about the devices and software of six companies: ABB, General Electric, Schneider Electric, Schweitzer Engineering Laboratories, Siemens, and Wind River. The study uses data from the manufacturers' online databases, NVD, CWE, and ICS-CERT. We found that the most commonly reported problems are related to improper input validation, cryptographic issues, and programming errors.
We use model-based testing techniques to detect logical vulnerabilities in implementations of the Wi-Fi handshake. This reveals new fingerprinting techniques, multiple downgrade attacks, and Denial of Service (DoS) vulnerabilities. Stations use the Wi-Fi handshake to securely connect with wireless networks. In this handshake, mutually supported capabilities are determined, and fresh pairwise keys are negotiated. As a result, a proper implementation of the Wi-Fi handshake is essential in protecting all subsequent traffic. To detect the presence of erroneous behaviour, we propose a model-based technique that generates a set of representative test cases. These tests cover all states of the Wi-Fi handshake, and explore various edge cases in each state. We then treat the implementation under test as a black box, and execute all generated tests. Determining whether a failed test introduces a security weakness is done manually. We tested 12 implementations using this approach, and discovered irregularities in all of them. Our findings include fingerprinting mechanisms, DoS attacks, and downgrade attacks where an adversary can force usage of the insecure WPA-TKIP cipher. Finally, we explain how one of our downgrade attacks highlights incorrect claims made in the 802.11 standard.
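To make the test-generation idea concrete, the following is a minimal Python sketch, assuming a drastically simplified four-message handshake model; the states, messages, and generation strategy are illustrative stand-ins, not the authors' actual tool. For every state of the model, it emits a test that first drives the implementation into that state and then injects each possible message, covering both expected transitions and out-of-order edge cases.

```python
from itertools import product

# Drastically simplified model of the Wi-Fi connection stages; the real
# handshake (authentication, association, 4-way handshake) has more
# states and message types.
TRANSITIONS = {
    ("IDLE", "AuthReq"): "AUTH",
    ("AUTH", "AssocReq"): "ASSOC",
    ("ASSOC", "Msg1"): "PTK_STARTED",
    ("PTK_STARTED", "Msg3"): "CONNECTED",
}
MESSAGES = ["AuthReq", "AssocReq", "Msg1", "Msg3"]

def path_to(state):
    """Message sequence that drives the model into `state`."""
    paths = {"IDLE": []}
    frontier = ["IDLE"]
    while frontier:
        s = frontier.pop()
        for (src, msg), dst in TRANSITIONS.items():
            if src == s and dst not in paths:
                paths[dst] = paths[s] + [msg]
                frontier.append(dst)
    return paths[state]

def generate_tests():
    """For every (state, message) pair, emit a test reaching the state and
    then injecting the message: covers expected and unexpected inputs."""
    states = {"IDLE"} | set(TRANSITIONS.values())
    for state, msg in product(sorted(states), MESSAGES):
        yield path_to(state) + [msg]

for test in generate_tests():
    print(" -> ".join(test))
```

Each generated sequence would then be replayed against the black-box implementation, and any deviation from the model's expected response inspected manually, as the abstract describes.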
Today, the proportion of software in society as a whole is steadily increasing. As software grows in size, the amount of it handling personal information is also increasing, which underscores the importance of software security verification. However, verifying software security is very difficult when libraries are distributed without source code. To solve this problem, it is necessary to develop a technique for checking binaries for known security weaknesses. To this end, techniques for analyzing security weaknesses using intermediate languages are being actively discussed. In this paper, we propose a system that translates binary code into an intermediate language to effectively analyze security weaknesses within binary code.
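As a rough illustration of the binary-to-intermediate-language idea, here is a Python sketch using the Capstone disassembler (a real library); the three-tuple IR and the single weakness rule are invented stand-ins for the paper's intermediate language and analyses.

```python
# Lift raw machine code to a tiny (address, opcode, operands) IR and scan it.
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

def lift(code, base=0x1000):
    """Disassemble raw x86-64 bytes into a list of IR tuples."""
    md = Cs(CS_ARCH_X86, CS_MODE_64)
    return [(i.address, i.mnemonic, i.op_str) for i in md.disasm(code, base)]

def scan(ir):
    """Toy weakness rule: flag indirect calls (register targets), which
    complicate control-flow integrity; direct calls have '0x...' targets."""
    return [addr for addr, op, ops in ir
            if op == "call" and not ops.startswith("0x")]

code = b"\x48\x89\xd8\xff\xd0\xc3"   # mov rax, rbx; call rax; ret
ir = lift(code)
for addr, op, ops in ir:
    print(hex(addr), op, ops)
print("indirect calls at:", [hex(a) for a in scan(ir)])
```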
Due to their flexibility, low cost, and rapid deployment, wireless sensor networks (WSNs) have been drawing more and more interest from governments, researchers, application developers, and manufacturers in recent years. We are now in the age of Industry 4.0, in which traditional industrial control systems will be connected with each other and provide intelligent manufacturing. WSNs can therefore play a crucial role in monitoring environment and condition parameters for smart factories. Nevertheless, the introduction of WSNs brings weaknesses, especially for industrial applications: through the vulnerabilities of industrial WSNs (IWSNs), attackers may invade the information system. Risk evaluation is an efficient method to reduce the risk of an information system to an acceptable level. This paper aims to study the security issues of IWSNs and to put forward a practical solution for evaluating the risk of IWSNs, which can guide the risk evaluation process and improve the security of IWSNs through appropriate countermeasures.
In this paper, an industrial testbed is proposed utilizing commercial off-the-shelf equipment, and it is used to study the weaknesses of an industrial Ethernet protocol, namely PROFINET. The investigation is based on observation of the principles of operation of PROFINET and the functionality of industrial control systems.
Recently, Jung et al. [1] proposed a data access privilege scheme and claimed that their scheme addresses data and identity privacy as well as multi-authority settings, and provides data access privilege for attribute-based encryption. In this paper, we show that this scheme, and also its earlier and latest versions (i.e., [2] and [3], respectively), suffer from a number of weaknesses in terms of fine-grained access control, user and authority collusion attacks, user authorization, and user anonymity protection. We then propose a new scheme that overcomes these shortcomings. We also prove the security of our scheme against user collusion attacks, authority collusion attacks, and chosen-plaintext attacks. Lastly, we show that the efficiency of our scheme is comparable with existing related schemes.
The factors that threaten electric power information networks are analyzed. To address the weakness of existing approaches that cannot provide a numerical value of risk, this paper presents an evaluation index system, and an evaluation model and method for network security based on multilevel fuzzy comprehensive judgment. The steps and method of security evaluation with the synthesis evaluation model are provided. The results show that this method is effective for evaluating the risk of an electric power information network.
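The core of single-level fuzzy comprehensive judgment is the composition $B = W \circ R$ of a factor-weight vector $W$ with a membership matrix $R$ over the risk grades; in the multilevel model, each row of $R$ is itself the output of a lower-level evaluation. A minimal Python sketch follows, with made-up factors, weights, and membership degrees.

```python
import numpy as np

# Single-level fuzzy comprehensive evaluation: B = W . R, where W holds
# factor weights and R holds each factor's membership degrees over the
# risk grades. Factors, weights, and memberships below are illustrative.
grades = ["low", "medium", "high"]
W = np.array([0.5, 0.3, 0.2])        # e.g. access control, patching, monitoring
R = np.array([[0.6, 0.3, 0.1],       # membership of each factor in each grade
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

B = W @ R                            # weighted-average composition operator
B /= B.sum()                         # normalize to a grade distribution
print(dict(zip(grades, B.round(3))))
print("overall assessment:", grades[int(B.argmax())])
```

The weighted-average operator is used here for simplicity; max-min composition is a common alternative in fuzzy evaluation.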
This study is an empirical study of the factors affecting cyber security breaches. Based on previous research, factors affecting information security were classified into three aspects: system management; policy and institution; and awareness and culture. Measurement items were constructed for each aspect. The analysis found that outsourcing of information security, data backup, and information security awareness among top management and employees were significant influencing variables. The results show that cognitive elements are very important: ultimately, security depends on members accepting the security system, using it safely, and observing the relevant rules.
Improving e-government services by using data more effectively is a major focus globally. It requires Public Administrations to be transparent and accountable and to provide trustworthy services that improve citizen confidence. However, despite all the technological advances in developing such services and analysing security and privacy concerns, the literature provides no evidence of frameworks and platforms that enable privacy analysis from multiple perspectives and take into account citizens' needs with regard to transparency and the usage of citizens' information. This paper presents the VisiOn (Visual Privacy Management in User Centric Open Requirements) platform, an outcome of an H2020 European project. Our objective is to enable Public Administrations to analyse privacy and security from different perspectives, including requirements, threats, trust, and law compliance. Finally, our platform-supported approach introduces the concept of a Privacy Level Agreement (PLA), which allows Public Administrations to customise their privacy policies based on the privacy preferences of each citizen.
E-Governance is rising rapidly in various parts of the world. With the rising digitization of resources, threats to infrastructure and digital data are also rising within government departments. In developed nations, security parameters and optimization processes are well established, but in developing nations such as India, security has yet to be addressed strongly. This study proposes a framework for security assessment in E-Governance departments based on Information System principles. The major areas of security covered are hardware, network, software, server, and data security; physical environment security; and organizational-level policies for the security of Information Systems.
Authorization policy authoring has required tools from the start. With access policy governance now an executive-level responsibility, it is imperative that such a tool expose the policy to business users, with little or no IT intervention, as natural language. NIST SP 800-162 [1] first prescribes natural language policies (NLPs) as the preferred expression of policy and then implicitly calls for automated translation of NLP to machine-executable code. This paper therefore proposes an interoperable model for the NLP's human expression. It furthermore documents the research and development of a tool set for end-to-end authoring and translation. This R&D journey, focusing constantly on end users, has debunked certain myths, responded to steadily increasing market sophistication, applied formal disciplines (e.g., ontologies, grammars, and compiler design), and motivated an informal demonstration of autonomic code generation. The lessons learned should be of practical value to the entire ABAC community. The research in progress (increasingly complex policies, proactive rule analytics, and expanded NLP authoring language support) will require collaboration with an ever-expanding technical community from industry and academia.
This paper presents the authors' preliminary framework for drivers of Smart Governance. The research question of this study is: what are the drivers for Smart Governance to achieve evidence-based policy-making? The framework suggests that data governance and collaborative governance are the main drivers of a smart governance model. These pillars are supported by the legal framework, normative factors, principles and values, methods, data assets or human resources, and IT infrastructure. These aspects will guide a real-time evaluation process at all levels of the policy cycle, towards the implementation of evidence-based policies.
Embedded software is found everywhere from our highly visible mobile devices to the confines of our car in the form of smart sensors. Embedded software companies are under huge pressure to produce safe applications that limit risks, and testing is absolutely critical to alleviate concerns regarding safety and user privacy. This requires using large test suites throughout the development process, increasing time-to-market and ultimately hindering competitiveness. Speeding up test execution is, therefore, of paramount importance for embedded software developers. This is traditionally achieved by running, in parallel, multiple tests on large-scale clusters of computers. However, this approach is costly in terms of infrastructure maintenance and energy consumed, and is at times inconvenient as developers have to wait for their tests to be scheduled on a shared resource. We propose to look at exploiting GPUs (Graphics Processing Units) for running embedded software testing. GPUs are readily available in most computers and offer tremendous amounts of parallelism, making them an ideal target for embedded software testing. In this paper, we demonstrate, for the first time, how test executions of embedded C programs can be automatically performed on a GPU, without involving the end user. We take a compiler-assisted approach which automatically compiles the C program into GPU kernels for parallel execution of the input tests. Using this technique, we achieve an average speedup of 16× when compared to CPU execution of input tests across nine programs from an industry standard embedded benchmark suite.
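As a toy illustration of the one-thread-per-test-case execution model, here is a hand-written kernel using Numba's CUDA support; in the paper the kernel body is generated automatically by a compiler from the embedded C program, whereas here a stand-in function is inlined by hand. It assumes a CUDA-capable GPU and the numba package.

```python
# One GPU thread executes one test case; the kernel body stands in for the
# compiler-generated code of the (hypothetical) unit under test.
import numpy as np
from numba import cuda

@cuda.jit
def run_tests(inputs, expected, passed):
    i = cuda.grid(1)                      # global thread index = test index
    if i < inputs.shape[0]:
        a, b = inputs[i, 0], inputs[i, 1]
        out = a * b + 1                   # stand-in for the unit under test
        passed[i] = 1 if out == expected[i] else 0

n = 1 << 16
inputs = np.random.randint(0, 100, size=(n, 2)).astype(np.int32)
expected = (inputs[:, 0] * inputs[:, 1] + 1).astype(np.int32)
passed = np.zeros(n, dtype=np.uint8)

threads = 256
blocks = (n + threads - 1) // threads
run_tests[blocks, threads](inputs, expected, passed)
print(f"{int(passed.sum())}/{n} tests passed")
```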
An operating system kernel written in the Rust language would have extremely fine-grained isolation boundaries, have no memory leaks, and be safe from a wide range of security threats and memory bugs. Previous efforts towards this end concluded that writing a kernel requires changing Rust. This paper reaches a different conclusion, that no changes to Rust are needed and a kernel can be implemented with a very small amount of unsafe code. It describes how three sample kernel mechanisms (DMA, USB, and buffer caches) can be built using safe abstractions over this small unsafe core.
Over the years a lot of effort has been put on solving extensibility problems, while retaining important software engineering properties such as modular type-safety and separate compilation. Most previous work focused on operations that traverse and process extensible Abstract Syntax Tree (AST) structures. However, there is almost no work on operations that build such extensible ASTs, including parsing. This paper investigates solutions for the problem of modular parsing. We focus on semantic modularity and not just syntactic modularity. That is, the solutions should not only allow complete parsers to be built out of modular parsing components, but also enable the parsing components to be modularly type-checked and separately compiled. We present a technique based on parser combinators that enables modular parsing. Interestingly, the modularity requirements for modular parsing rule out several existing parser combinator approaches, which rely on some non-modular techniques. We show that Packrat parsing techniques provide solutions for such modularity problems, and enable reasonable performance in a modular setting. Extensibility is achieved using multiple inheritance and Object Algebras. To evaluate the approach we conduct a case study based on the "Types and Programming Languages" interpreters. The case study shows the approach's effectiveness at reusing parsing code from existing interpreters, and the total parsing code is 69% shorter than an existing code base using a non-modular parsing approach.
Artificial software diversity is an effective way to prevent software vulnerabilities and errors from being exploited in code-reuse attacks. This is achieved by lowering the individual probability of a successful attack to a level that makes the attack unfeasible. Unfortunately, the existing approaches are not applicable to safety-critical real-time systems as they induce unacceptable performance overheads, they violate safety and timing guarantees, or they assume hardware resources which are typically not available in embedded systems. To overcome these problems, we propose a safe diversity approach that preserves the timing properties of real-time processes by controlling the diversification's impact on the worst case execution time (WCET). Our main idea is to use block-level diversity with a large, but fixed set of movable instruction sequences, and to use static WCET analysis to identify non-critical areas of code where it can safely be split into more movable instruction sequences.
Trustworthy and safe operation of the power grid critical infrastructures relies on secure execution of low-level substation controller devices such as programmable logic controllers (PLCs). Currently, there are very few security protection solutions deployed on these devices to ensure provenance control: to execute controller code on the device that is developed by trusted parties and complies with safety/security policies that are defined by the code developer as well as the power grid operators. Resource-limited PLC controllers have become increasingly popular targets not only for legitimate system operators but also for malicious adversaries, as shown most recently by the Stuxnet and BlackEnergy malware, which caused damage including unauthorized violations of infrastructural safety and integrity. We present PLCtrust, a domain-specific solution that deploys virtual micro security-perimeters, so-called capsules, and enforces the corresponding device-level runtime power system-safety policies dynamically. PLCtrust makes use of data taint analysis to monitor and control data flow among the capsules based on data owner-defined policies. PLCtrust provides the operators with a transparent and lightweight solution to address various safety-critical data protection requirements. PLCtrust also provides the legitimate third-party controller code developers with a taint-aware programming interface to develop applications in compliance with the dynamic power system safety/security policies. Our experimental results on real-world settings show that PLCtrust is transparent to the end-users while ensuring the power grid safety maintenance with minimal performance overhead.
Byte-addressable non-volatile memory technology is emerging as an alternative for DRAM for main memory. This new Non-Volatile Main Memory (NVMM) allows programmers to store important data in data structures in memory instead of serializing it to the file system, thereby providing a substantial performance boost. However, modern systems reorder memory operations and utilize volatile caches for better performance, making it difficult to ensure a consistent state in NVMM. Intel recently announced a new set of persistence instructions: clflushopt, clwb, and pcommit. These new instructions make it possible to implement fail-safe code on NVMM, but few workloads have been written or characterized using these new instructions. In this work, we describe how these instructions work and how they can be used to implement write-ahead logging based transactions. We implement several common data structures and kernels and evaluate the performance overhead incurred over traditional non-persistent implementations. In particular, we find that persistence instructions occur in clusters along with expensive fence operations, they have long latency, and they add a significant execution time overhead of, on average, 20.3% over code with logging but without fence instructions to order persists. To deal with this overhead and alleviate the performance bottleneck, we propose to speculate past long latency persistency operations using checkpoint-based processing. Our speculative persistence architecture reduces the execution time overheads to only 3.6%.
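To illustrate the write-ahead-logging ordering that the persistence instructions enforce, here is a Python sketch over a memory-mapped file, where mmap.flush() stands in for the clwb/clflushopt-plus-fence sequence; offsets and the record format are illustrative. The essential discipline is that the undo record must persist before the in-place update, and be invalidated only after the update itself persists.

```python
import mmap, os, struct

FILE, SIZE = "nvmm.img", 4096
LOG_OFF, DATA_OFF = 0, 2048

fd = os.open(FILE, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)
mem = mmap.mmap(fd, SIZE)

def persist():
    # Stand-in for flushing the dirtied cache lines (clwb/clflushopt)
    # followed by a fence; on real NVMM this orders the preceding stores.
    mem.flush()

def tx_store(offset, value):
    """Write-ahead logged 8-byte store at DATA_OFF + offset."""
    old = mem[DATA_OFF + offset : DATA_OFF + offset + 8]
    mem[LOG_OFF : LOG_OFF + 16] = struct.pack("<Q", offset) + old  # undo record
    persist()                                   # 1. persist log before data
    mem[DATA_OFF + offset : DATA_OFF + offset + 8] = struct.pack("<Q", value)
    persist()                                   # 2. persist the data update
    mem[LOG_OFF : LOG_OFF + 16] = b"\x00" * 16  # 3. invalidate the undo record
    persist()

tx_store(0, 42)
print(struct.unpack("<Q", mem[DATA_OFF : DATA_OFF + 8])[0])  # -> 42
```

The three flush points correspond to the clustered flush-and-fence sequences whose latency the paper measures and then hides with speculative, checkpoint-based execution.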
This paper introduces a new efficient algorithm for computing Gröbner bases named M4GB. Like Faugère's algorithm F4, it is an extension of Buchberger's algorithm that describes: how to store already computed (tail-)reduced multiples of basis polynomials to prevent redundant work in the reduction step; and how to exploit efficient linear algebra for the reduction step. In comparison to F4, it removes further redundant work in the processing of reducible monomials. Furthermore, instead of translating the reduction of many critical pairs into the row reduction of some large matrix, our algorithm is described more natively and is efficient while processing critical pairs one by one. This feature implies that typically M4GB has to process fewer critical pairs than F4, and reduces the time and data complexity 'staircase' related to the increasing degree of regularity for a sequence of problems that one observes for F4. To achieve high efficiency, M4GB has been designed specifically to operate only on tail-reduced polynomials, i.e., polynomials of which all terms except the leading term are non-reducible. This allows it to perform full reduction directly in the computation of a term-polynomial multiplication, where all computations are done over coefficient vectors over the non-reducible monomials. We have implemented a version of our new algorithm tailored for dense overdefined polynomial systems as a proof of concept and made our source code publicly available. We have made a comparison of our implementation against the implementations of FGBlib, Magma and OpenF4 on various dense Fukuoka MQ challenge problems that we were able to compute in reasonable time and memory. We observed that M4GB uses the least total CPU time and the least memory of all these implementations for those MQ problems, often by a significant factor. In the Fukuoka MQ challenges, the starting challenges of Type V and Type VI have 16 equations, which was chosen based on an extrapolated computational runtime of more than a month using Magma. M4GB allowed us to set new records for these Fukuoka MQ challenges, breaking Type V ($\mathbb{F}_{2^8}$) up to 18 equations and Type VI ($\mathbb{F}_{31}$) up to 19 equations, each computed within at most 11 days on our dual-Xeon system.
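For reference, the tail-reduction invariant described informally above can be written as follows (a sketch in our own notation, not the paper's):

```latex
% A polynomial f is tail-reduced with respect to a basis G when no term of
% its tail is divisible by any leading term of G:
\[
  f \;=\; \mathrm{lt}(f) + \mathrm{tail}(f),
  \qquad
  \forall\, t \in \mathrm{supp}\bigl(\mathrm{tail}(f)\bigr)\;
  \forall\, g \in G :\; \mathrm{lt}(g) \nmid t .
\]
```

Maintaining this invariant for all stored multiples is what lets M4GB fold full reduction into each term-polynomial multiplication, working only with coefficient vectors over the non-reducible monomials.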
Meta-programs are programs that generate other programs, but in weakly type-safe systems, type-checking a meta-program only establishes its own type safety, and generated programs need additional type-checking after generation. Strong type safety of a meta-program implies type safety of any generated object program, a property with important engineering benefits. Current strongly type-safe systems suffer from expressivity limitations and cannot support many meta-programs found in practice, for example automatic generation of lenses. To overcome this, we move away from the idea of staged meta-programming. Instead, we use an off-the-shelf dependently-typed language as the meta-language and a relatively standard, intrinsically well-typed representation of the object language. We scale this approach to practical meta-programming, by choosing a high-level, explicitly typed intermediate representation as the object language, rather than a surface programming language. We implement our approach as a library for the Glasgow Haskell Compiler (GHC) and evaluate it on several meta-programs, including a deriveLenses meta-program taken from a real-world Haskell lens library. Our evaluation demonstrates expressivity beyond the state of the art and applicability to real settings, at little cost in terms of code size.
Network Intrusion Detection Systems (NIDS) are either signature based or anomaly based. The NIDS presented in this paper is an anomaly-based Neural Network Intrusion Detection System (NNIDS). The proposed NNIDS is able to successfully recognize learned malicious activities in a network environment. It was tested on the SYN flood attack, UDP flood attack, nMap scanning attack, and also on non-malicious communication.
Deep learning has been proven more effective than conventional machine-learning algorithms in solving classification problems with high dimensionality and complex features, especially when trained with big data. In this paper, a deep learning binomial classifier for Network Intrusion Detection Systems is proposed and experimentally evaluated using the UNSW-NB15 dataset. Three experiments were executed: to determine the optimal activation function, to select the most important features, and finally to test the proposed model on unseen data. The evaluation results demonstrate that the proposed classifier outperforms other models in the literature, with 98.99% accuracy and a 0.56% false alarm rate on unseen data.
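A minimal sketch of such a binomial (normal vs. attack) classifier in Keras follows; the feature count, layer sizes, and activation choices are illustrative stand-ins rather than the paper's tuned configuration, and the random arrays should be replaced with preprocessed UNSW-NB15 features and 0/1 labels.

```python
import numpy as np
from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import Dense

def build_classifier(n_features):
    """Small feed-forward network with a sigmoid output for P(attack)."""
    model = Sequential([
        Input(shape=(n_features,)),
        Dense(64, activation="relu"),
        Dense(32, activation="relu"),
        Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Smoke test with random stand-in data (replace with real UNSW-NB15 features).
X_train = np.random.rand(1000, 42).astype("float32")
y_train = np.random.randint(0, 2, size=1000)
model = build_classifier(42)
model.fit(X_train, y_train, epochs=3, batch_size=64, verbose=0)
print(model.evaluate(X_train, y_train, verbose=0))  # [loss, accuracy]
```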
This paper presents a novel feature learning model for cyber security tasks. We propose to use autoencoders (AEs), as a generative model, to learn latent representations of different feature sets. We show how well the AE is capable of automatically learning a reasonable notion of semantic similarity among input features. Specifically, the AE accepts a feature vector, obtained from cyber security phenomena, and extracts a code vector that captures the semantic similarity between feature vectors. This similarity is embedded in an abstract latent representation. Because the AE is trained in an unsupervised fashion, much of this success comes from the appropriate original feature set used in this paper. The approach can also provide more discriminative features than other feature engineering approaches. Furthermore, the scheme can reduce the dimensionality of the features, thereby significantly minimising the memory requirements. We selected two different cyber security tasks: network-based anomaly intrusion detection and malware classification. We analysed the proposed scheme with various classifiers using publicly available datasets for network anomaly intrusion detection and malware classification. Several appropriate evaluation metrics show improvement compared to prior results.
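The following Keras sketch shows the general pattern: train an autoencoder unsupervised (reconstructing its own input), then reuse the bottleneck code as a compact, similarity-preserving feature representation for downstream classifiers. All dimensions and data here are illustrative stand-ins for the paper's feature sets.

```python
import numpy as np
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense

X = np.random.rand(2000, 40).astype("float32")   # replace with real features

# Encoder compresses 40 -> 8 dimensions; decoder reconstructs the input.
inp = Input(shape=(40,))
code = Dense(8, activation="relu", name="code")(Dense(20, activation="relu")(inp))
out = Dense(40, activation="linear")(Dense(20, activation="relu")(code))

ae = Model(inp, out)
ae.compile(optimizer="adam", loss="mse")
ae.fit(X, X, epochs=5, batch_size=64, verbose=0)  # unsupervised: target = input

encoder = Model(inp, code)                 # reuse the trained encoder half
latent = encoder.predict(X, verbose=0)     # compact codes for classifiers
print(latent.shape)                        # (2000, 8): reduced-dimension features
```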
In this paper, we demonstrate a neuromorphic cognitive computing approach to a Network Intrusion Detection System (IDS) for cyber security using deep learning (DL). The algorithmic power of DL has been merged with fast and extremely power-efficient neuromorphic processors for cyber security. In this implementation, the data is numerically encoded and trained with an unsupervised deep learning technique called an autoencoder (AE) in the training phase. The generated weights of the AE are used as initial weights for the supervised training phase using neural networks. The final weights are converted to discrete values using Discrete Vector Factorization (DVF) to generate crossbar weights, synaptic weights, and thresholds for neurons. Finally, the generated crossbar weights, synaptic weights, thresholds, and leak values are mapped to crossbars and neurons. In the testing phase, the encoded test samples are converted to spiking form using a hybrid encoding technique. The model has been deployed and tested on the IBM Neurosynaptic Core Simulator (NSCS) and on an actual IBM TrueNorth neurosynaptic chip. The experimental results show around 90.12% accuracy for network intrusion detection on the physical neuromorphic chip. Furthermore, we investigated the proposed system not only for detecting malicious packets but also for classifying specific types of attacks, achieving 81.31% recognition accuracy. The neuromorphic implementation provides remarkable detection and classification accuracy for network intrusion detection at extremely low power.