Bibliography
The smart grid is a complex cyber-physical system (CPS) that poses challenges related to scale, integration, interoperability, processes, governance, and human elements. The US National Institute of Standards and Technology (NIST), together with its government, university, and industry collaborators, developed an approach, called the CPS Framework, for reasoning about CPS across multiple levels of concern and competency, including trustworthiness, privacy, reliability, and regulation. The approach uses ontology and reasoning techniques to achieve a greater understanding of the interdependencies among the elements of the CPS Framework model applied to use cases. This paper demonstrates that the approach extends naturally to automated and manual decision-making for smart grids: we apply it to smart grid use cases and illustrate how it can be used to analyze grid topologies and address concerns about the smart grid. Smart grid stakeholders whose decision making may be assisted by this approach include planners, designers, and operators.
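A minimal sketch of the concern-tree reasoning idea (concern names are illustrative; this is not the NIST ontology itself): a concern counts as addressed only when all of its sub-concerns are, so an unmet sub-concern propagates upward to the stakeholder-facing aspect.

```python
# Toy CPS Framework-style reasoning: concerns form a tree, and a parent
# concern is addressed only if all of its sub-concerns are. Illustrative names.
CONCERNS = {
    "trustworthiness": ["security", "privacy", "reliability"],
    "security": [], "privacy": [], "reliability": [],
}
ADDRESSED = {"security": True, "privacy": True, "reliability": False}

def addressed(concern: str) -> bool:
    subs = CONCERNS[concern]
    if not subs:                            # leaf concern: look up its status
        return ADDRESSED.get(concern, False)
    return all(addressed(s) for s in subs)  # propagate up the concern tree

print(addressed("trustworthiness"))  # False: reliability is unmet
```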
Event logs that originate from information systems enable comprehensive analysis of business processes, e.g., by process model discovery. However, logs potentially contain sensitive information about the individual employees involved in process execution, which is only partially hidden by an obfuscation of the event data. In this paper, we therefore address the risk of privacy-disclosure attacks on event logs with pseudonymized employee information. To this end, we introduce PRETSA, a novel algorithm for event log sanitization that provides privacy guarantees in terms of k-anonymity and t-closeness. It thereby avoids disclosure of employee identities, their membership in the event log, and their characterization based on sensitive attributes, such as performance information. Through step-wise transformations of a prefix-tree representation of an event log, we maintain its high utility for the discovery of a performance-annotated process model. Experiments with real-world data demonstrate that sanitization with PRETSA yields event logs of higher utility than methods based on frequency-based filtering, while providing the same privacy guarantees.
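A minimal sketch of the prefix-tree idea (a hypothetical simplification: rare prefixes are pruned outright, whereas PRETSA itself transforms traces into similar prefixes to preserve utility):

```python
from collections import defaultdict

K = 2  # k-anonymity threshold (assumed parameter)

class Node:
    def __init__(self):
        self.children = defaultdict(Node)
        self.cases = 0  # number of traces sharing this activity prefix

def build_prefix_tree(traces):
    root = Node()
    for trace in traces:
        node = root
        for activity in trace:
            node = node.children[activity]
            node.cases += 1
    return root

def sanitize(node):
    """Remove prefixes shared by fewer than K cases (simplified, lossy step)."""
    for activity in list(node.children):
        child = node.children[activity]
        if child.cases < K:
            del node.children[activity]  # a rare, potentially identifying variant
        else:
            sanitize(child)

tree = build_prefix_tree([
    ["register", "check", "pay"],
    ["register", "check", "pay"],
    ["register", "audit", "pay"],  # unique variant, violates 2-anonymity
])
sanitize(tree)
```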
In this paper, a new image encryption scheme based on singular value decomposition (SVD), the fractional discrete cosine transform (FrDCT), and a chaotic system is proposed for the security of medical images. It strengthens the reliability, vitality, and efficacy of medical image encryption. The proposed method discusses the benefits of the FrDCT over the fractional Fourier transform. The key sensitivity of the proposed algorithm for different medical images provides a platform for other researchers. Theoretical and statistical tests are carried out, demonstrating the high level of security of the proposed algorithm.
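As a rough illustration of the ingredients (not the paper's algorithm), the sketch below scrambles an image's singular values with a logistic-map key stream and then applies an ordinary DCT, which here stands in for the FrDCT:

```python
import numpy as np
from scipy.fft import dctn

def logistic_stream(x0, n, r=3.99):
    """Chaotic logistic map x -> r*x*(1-x) used as a key stream."""
    out = np.empty(n)
    for i in range(n):
        x0 = r * x0 * (1.0 - x0)
        out[i] = x0
    return out

def encrypt(image, key=0.4321):
    u, s, vt = np.linalg.svd(image.astype(float), full_matrices=False)
    perm = np.argsort(logistic_stream(key, s.size))  # key-dependent permutation
    scrambled = (u * s[perm]) @ vt   # pair singular values with the "wrong" vectors
    return dctn(scrambled, norm="ortho")  # plain DCT stands in for the FrDCT

cipher = encrypt(np.random.rand(64, 64))
```

Because the permutation depends on every bit of the chaotic seed, a tiny change to `key` yields a completely different cipher image, which is the key-sensitivity property the abstract highlights.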
Massive multiple-input multiple-output (mMIMO) with perfect channel state information (CSI) can provide array power gains proportional to the number of antennas. Despite this, constraints on power amplification still exist due to the envelope variations of high-order constellation signals. These constraints can be overcome by a transmitter with several amplification branches, each associated with a component signal that results from the decomposition of a multilevel constellation into a sum of several quasi-constant-envelope signals that are sent independently. When antenna arrays are combined at the end of each amplification branch, security improves due to the energy separation achieved by beamforming. However, to avoid distortion of the signal resulting from the combination of all components at the channel level, all the beams of the signal components should be directed in the same direction. Under such conditions it is crucial to assess the impact of misalignments between the beams associated with each user, which is the purpose of this work. The results presented here show the good tolerance of these transmission structures against misalignments.
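To illustrate the decomposition (a standard construction shown here for 16-QAM, not the paper's exact scheme): every 16-QAM symbol can be written as a weighted sum of two QPSK symbols, each with the constant envelope that nonlinear amplifiers favour.

```python
import numpy as np

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)  # |c| = 1

# Every 16-QAM point as 2*c1 + c2 with c1, c2 drawn from the QPSK alphabet:
c1 = np.repeat(qpsk, 4)   # component amplified on branch 1
c2 = np.tile(qpsk, 4)     # component amplified on branch 2
qam16 = 2 * c1 + c2       # combined at channel level (beams must stay aligned)

assert np.allclose(np.abs(c1), 1) and np.allclose(np.abs(c2), 1)
print(len(set(np.round(qam16, 6))))  # 16 distinct constellation points
```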
Deep packet inspection via regular expression (RE) matching is a crucial task of network intrusion detection systems (IDSes), which secure Internet connections against attacks and suspicious network traffic. Monitoring high-speed computer networks (100 Gbps and faster) in a single-box solution demands that RE matching, traditionally based on finite automata (FAs), be accelerated in hardware. In this paper, we describe a novel FPGA architecture for RE matching that is able to process network traffic beyond 100 Gbps. The key idea is to reduce the required FPGA resources by leveraging approximate nondeterministic FAs (NFAs). The NFAs are compiled into a multi-stage architecture, starting with the least precise stage with a high throughput and ending with the most precise stage with a low throughput. To obtain the reduced NFAs, we propose new approximate reduction techniques that take the profile of the network traffic into account. Our experiments showed that, using our approach, we were able to match large sets of REs from SNORT, a popular IDS, at unprecedented network speeds.
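A software analogy of the multi-stage idea (the paper's stages are reduced NFAs in FPGA logic; here a coarse over-approximating prefilter guards an exact matcher, with hypothetical patterns):

```python
import re

# Stage 2 (exact, low throughput): full REs, e.g., SNORT-like rules.
PATTERNS = [re.compile(p) for p in (r"GET /admin", r"\x90{8,}")]

# Stage 1 (approximate, high throughput): an over-approximation that may
# raise false positives but never misses a true match.
COARSE = ("admin", "\x90" * 8)

def matches(payload: str) -> bool:
    if not any(s in payload for s in COARSE):  # cheap reject for most traffic
        return False
    return any(p.search(payload) for p in PATTERNS)  # precise confirmation

print(matches("GET /admin HTTP/1.1"))  # True
print(matches("GET /index.html"))      # False: rejected by the coarse stage
```

The design rests on the same invariant as the approximate NFAs: an imprecise stage may only over-match, so later, more precise stages see a small fraction of the traffic yet the pipeline as a whole never drops a true match.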
With growing technology, the data related to it is increasing on a daily basis, and so is the daunting task of managing it. The present solution to this problem, i.e., our present databases, is not a long-term solution. These data volumes need to be stored and retrieved safely. This paper presents an overview of security issues for big data. Big data encompasses the configuration, distribution, and analysis of data in ways that overcome the drawbacks of traditional data processing technology. Big data manages, stores, and acquires data in a speedy and cost-effective manner with the help of tools, technologies, and frameworks.
Security of data in the Internet of Things (IoT) relies on encryption to provide a stable, secure system. IoT devices possess constrained main memory and secondary memory, which mandates the use of an elliptic curve cryptography (ECC) scheme. Scalar multiplication has a great impact on ECC implementations, reducing computation and space complexity and thereby enhancing the performance of an IoT system that provides high security and privacy. The proposed High Speed Split Multiplier (HSSM) for ECC in IoT is a lightweight multiplication technique that uses split multiplication with a pseudo-Mersenne prime number and a Montgomery curve to withstand power analysis attacks. The proposed algorithm reduces the computation time and space complexity of the cryptographic operations, in terms of clock cycles and RAM, when compared with the multiplication algorithms of Liu et al. [1].
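A minimal sketch of the ladder structure that underpins power-analysis resistance (a generic Montgomery ladder over a toy short-Weierstrass curve with hypothetical parameters; the HSSM split multiplier itself is not reproduced here):

```python
P = 2**255 - 19  # a pseudo-Mersenne prime, convenient for fast reduction
A = 486662       # hypothetical coefficient of a toy curve y^2 = x^3 + A*x + B

def ec_add(p1, p2):
    """Affine point addition/doubling on the toy curve; None is the identity."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def ladder(k, point):
    """Montgomery ladder: one add and one double per bit, whatever the bit
    value, so the operation sequence leaks nothing about the scalar k."""
    r0, r1 = None, point
    for bit in bin(k)[2:]:
        if bit == "1":
            r0, r1 = ec_add(r0, r1), ec_add(r1, r1)
        else:
            r0, r1 = ec_add(r0, r0), ec_add(r0, r1)
    return r0
```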
This article explores the effective implementation of arithmetic operations on points of an elliptic curve defined over a prime field. Given that the basic arithmetic operations on elliptic curve points are point addition and point doubling, we study the implementation of these operations in various coordinate systems using the weighted number system and the Residue Number System (RNS). We show that using a four-module RNS yields an average gain of 8.67% for elliptic curve point addition and 8.32% for point doubling, compared with an implementation based on modular multiplication with the special moduli from NIST FIPS 186.
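A minimal sketch of the RNS idea (with illustrative pairwise-coprime moduli, not the paper's four-module system): multiplication splits into independent channels with no carries between them, and the Chinese Remainder Theorem recovers the integer result.

```python
from math import prod

MODULI = (2**13 - 1, 2**16, 2**17 - 1, 2**19 - 1)  # pairwise coprime (illustrative)

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):
    # Channels are independent: ideal for parallel hardware, no carry chains.
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(residues):
    """Chinese Remainder Theorem reconstruction."""
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # modular inverse exists by coprimality
    return x % M

a, b = 123456789, 987654321
assert from_rns(rns_mul(to_rns(a), to_rns(b))) == a * b  # product fits below prod(MODULI)
```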
Modern large-scale technical systems often undergo iterative changes to their behaviour while requiring validated quality, which is not easy to achieve completely with traditional testing. Regression verification is a powerful tool for the formal correctness analysis of software-driven systems. By proving that a new revision of the software behaves similarly to the original version, some of the trust that the old software and system earned during validation processes or operation histories can be inherited by the new revision. This trust inheritance by formal analysis relies on a number of implicit assumptions that are not self-evident but easy to miss, and that may lead to a false sense of safety induced by a misunderstood regression verification process. This paper aims to point out hidden, implicit assumptions of regression verification in the context of cyber-physical systems by making them explicit using practical examples. The explicit analysis of trust inheritance clarifies for engineers the extent of the trust that regression verification provides and consequently facilitates their use of this formal technique for system validation.
Virtual reality (VR) has recently become a promising technology in both industry and academia due to its potential applications in immersive experiences, including websites, games, tourism, and museums. VR provides impressive three-dimensional (3D) experiences but requires a very large number of elements, such as images, texture, depth, and focal length. However, in order to apply VR to various devices, especially mobile ones, the required ultra-high transmission rate and extremely low latency are major challenges. Considering this problem, this paper proposes a novel combination model that transforms the computing capability of a VR device into an equivalent caching amount while maintaining low latency and fast transmission. In addition, classic caching models are applied to the computing and caching capabilities, which extends easily to multi-user models.
Social virtual reality based learning environments (VRLEs) such as vSocial render instructional content in a three-dimensional immersive computer experience for training youth with learning impediments. Few prior works have explored attack vulnerability in VR technology, and hence there is a need for systematic frameworks to quantify risks corresponding to security, privacy, and safety (SPS) threats. SPS threats can adversely impact the educational user experience and hinder delivery of VRLE content. In this paper, we propose a novel risk assessment framework that utilizes attack trees to calculate a risk score for varied VRLE threats, with the rate and duration of threats as inputs. We compare the impact of a well-constructed attack tree with an ad hoc attack tree to study the trade-offs between the overhead of managing attack trees and the cost of risk mitigation when vulnerabilities are identified. We use a vSocial VRLE testbed in a case study to showcase the effectiveness of our framework and demonstrate how a suitable attack tree formalism can result in a safer, more privacy-preserving, and more secure VRLE system.
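A toy evaluation in the spirit of such a framework (aggregation rules and threat values are assumptions, not the paper's formulas): leaves turn rate and duration into a probability, OR-nodes model alternative attacks, and AND-nodes model attack chains.

```python
from math import prod

def leaf(rate, duration):
    """Leaf threat: probability approximated by rate * duration, capped at 1."""
    return min(1.0, rate * duration)

def or_node(*children):   # attacker succeeds via any child
    return 1.0 - prod(1.0 - c for c in children)

def and_node(*children):  # attacker must succeed at every child
    return prod(children)

# Hypothetical VRLE threats with assumed rates (per session) and durations:
disruption     = leaf(rate=0.2,  duration=0.5)   # e.g., content tampering
phishing       = leaf(rate=0.1,  duration=0.3)
session_hijack = leaf(rate=0.05, duration=0.8)

risk_score = or_node(disruption, and_node(phishing, session_hijack))
print(round(risk_score, 4))  # 0.1011
```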
Techniques applied in response to detrimental digital incidents vary in many respects according to their attributes. Models of techniques exist in current research but are typically restricted to some subset with regard to the discipline of the incident. An enormous collection of techniques is actually available for use, yet there is no single model representing all of them. There is no current categorisation of digital forensics reactive techniques that classifies techniques according to the attribute of function, nor is there an attempt to classify techniques in a manner that goes beyond a subset. In this paper, an ontology that depicts digital forensic reactive techniques classified by function is presented. The ontology itself contains additional information for each technique, useful for merging into a cognate system where the relationship between techniques and other facets of the digital investigative process can be defined. A number of existing techniques were collected and described according to their function (a verb). The function then guided the placement and classification of the techniques in the ontology according to the ontology development process. The ontology contributes to a knowledge base for digital forensics, useful as a resource for the various people operating in the field. The benefit of this is that the information can be queried, assumptions can be made explicit, and there is a one-stop shop for digital forensics reactive techniques with their place in the investigation detailed.
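A minimal sketch of the function-keyed classification (entries are hypothetical, and a real ontology would use RDF/OWL rather than a dictionary):

```python
# Hypothetical mini-ontology: techniques keyed by their function (a verb),
# each carrying attributes that could be merged into a wider knowledge base.
ONTOLOGY = {
    "acquire": [{"name": "disk imaging",      "target": "storage media"}],
    "recover": [{"name": "file carving",      "target": "deleted files"}],
    "analyse": [{"name": "timeline analysis", "target": "event sequences"}],
}

def techniques_for(function):
    """Query the ontology by the classifying attribute: the function."""
    return [t["name"] for t in ONTOLOGY.get(function, [])]

print(techniques_for("recover"))  # ['file carving']
```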
In this paper, we present an extensive evaluation of face recognition and verification approaches performed by the European COST Action MULTI-modal Imaging of FOREnsic SciEnce Evidence (MULTI-FORESEE). The aim of the study is to evaluate various face recognition and verification methods, ranging from methods based on facial landmarks to state-of-the-art off-the-shelf pre-trained Convolutional Neural Networks (CNN), as well as CNN models directly trained for the task at hand. To fulfill this objective, we carefully designed and implemented a realistic data acquisition process that corresponds to a typical face verification setup, and collected a challenging dataset to evaluate the real-world performance of the aforementioned methods. Apart from verifying the effectiveness of deep learning approaches in a specific scenario, several important limitations are identified and discussed throughout the paper, providing valuable insight for future research directions in the field.
The world is continuously developing, and people's needs are increasing as well; so too is the number of thieves, especially electronic thieves. For that reason, companies and individuals are always searching for experts who will protect them from thieves, and these experts are called digital investigators. Digital forensics has a number of branches and different parts, and image forensics is one of them. The budget for the images branch grows every day in response to the need. In this paper we offer some information about images and image forensics: image components, how they are stored in digital devices, and how they can be deleted and recovered. We offer general information about digital forensics, focusing on image forensics.
Antifragile systems enhance their capabilities and become stronger when exposed to adverse conditions, stresses or attacks, making antifragility a desirable property for cyber defence systems that operate in contested military environments. Self-improvement in autonomic systems refers to the improvement of their self-* capabilities, so that they are able to (a) better handle previously known (anticipated) situations, and (b) deal with previously unknown (unanticipated) situations. In this position paper, we present a vision of using self-improvement through learning to achieve antifragility in autonomic cyber defence systems. We first enumerate some of the major challenges associated with realizing distributed self-improvement. We then propose a reference model for middleware frameworks for self-improving autonomic systems and a set of desirable features of such frameworks.