Bibliography
Digital forensic investigators today face numerous problems when recovering footprints of criminal activity that involve the use of computer systems. Investigators need the ability to recover evidence in a forensically sound manner, even when criminals actively work to alter the integrity, veracity, and provenance of the data, applications, and software used to support illicit activities. In many ways, operating systems (OS) can be strengthened from a technological viewpoint to support verifiable, accurate, and consistent recovery of system data when needed for forensic collection efforts. In this paper, we extend the ideas of forensic-friendly OS design by proposing a practical form of computing on encrypted data (CED) and computing with encrypted functions (CEF), which builds upon prior work on component encryption (in circuits) and white-box cryptography (in software). We conduct experiments on sample programs to analyze the approach in terms of security and efficiency, illustrating how component encryption can strengthen key OS functions and improve tamper resistance to anti-forensic activities. We analyze the tradeoff space for using the algorithm in a holistic approach that provides additional security and properties comparable to fully homomorphic encryption (FHE).
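As a minimal illustration of the CED idea this line of work builds on (not the paper's component-encryption scheme), the following Java sketch uses the multiplicative homomorphism of textbook RSA: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts, so an untrusted party can compute on values it cannot read. All parameters are toy-sized and for illustration only.

```java
import java.math.BigInteger;

public class CedDemo {
    public static void main(String[] args) {
        // Toy RSA parameters (far too small for real use; illustration only).
        BigInteger p = BigInteger.valueOf(61), q = BigInteger.valueOf(53);
        BigInteger n = p.multiply(q);                      // modulus
        BigInteger phi = p.subtract(BigInteger.ONE).multiply(q.subtract(BigInteger.ONE));
        BigInteger e = BigInteger.valueOf(17);             // public exponent
        BigInteger d = e.modInverse(phi);                  // private exponent

        BigInteger m1 = BigInteger.valueOf(7), m2 = BigInteger.valueOf(6);
        BigInteger c1 = m1.modPow(e, n), c2 = m2.modPow(e, n);

        // An untrusted party multiplies the ciphertexts without seeing m1, m2.
        BigInteger cProd = c1.multiply(c2).mod(n);

        // Decrypting the product yields m1 * m2 (mod n).
        System.out.println(cProd.modPow(d, n));            // prints 42
    }
}
```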
In this paper, we initiate the study of garbled protocols, a generalization of Yao's garbled circuits construction to distributed protocols. More specifically, in a garbled protocol construction, each party can independently generate a garbled protocol component along with pairs of input labels. Additionally, it generates an encoding of its input. The evaluation procedure takes as input the set of all garbled protocol components and the labels corresponding to the input encodings of all parties, and outputs the entire transcript of the distributed protocol. We provide constructions for garbling arbitrary protocols based on standard computational assumptions on bilinear maps (in the common random string model). Next, using garbled protocols we obtain a general compiler that compresses any arbitrary-round multiparty secure computation protocol into a two-round UC-secure protocol. Previously, two-round multiparty secure computation protocols were known only assuming witness encryption or learning with errors. Benefiting from our generic approach, we also obtain protocols (i) for the setting of random access machines (RAM programs), keeping communication and computational costs proportional to running times, and (ii) making only black-box use of the underlying group, eliminating the need for any expensive non-black-box group operations. Our results are obtained by a simple but powerful extension of the non-interactive zero-knowledge proof system of Groth, Ostrovsky and Sahai [Journal of the ACM, 2012].
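To ground the idea, here is a didactic Java sketch of the classic building block that garbled protocols generalize: Yao's garbling of a single AND gate. Wire labels are random keys, and each output label is encrypted under a hash of the corresponding pair of input labels; a zero tag lets the evaluator recognize the one row it can open. This is a toy, not the paper's bilinear-map construction.

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class GarbledAndGate {
    static final SecureRandom RNG = new SecureRandom();

    static byte[] label() { byte[] l = new byte[16]; RNG.nextBytes(l); return l; }

    static byte[] hash(byte[] a, byte[] b) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(a); md.update(b);
        return md.digest();                                // 32-byte one-time pad
    }

    static byte[] xor(byte[] x, byte[] y) {
        byte[] r = new byte[x.length];
        for (int i = 0; i < x.length; i++) r[i] = (byte) (x[i] ^ y[i]);
        return r;
    }

    public static void main(String[] args) throws Exception {
        byte[][] a = {label(), label()}, b = {label(), label()}, out = {label(), label()};

        // Garble: encrypt out-label||zero-tag under H(a_x || b_y) for each input pair.
        List<byte[]> table = new ArrayList<>();
        for (int x = 0; x <= 1; x++)
            for (int y = 0; y <= 1; y++) {
                byte[] plain = new byte[32];
                System.arraycopy(out[x & y], 0, plain, 0, 16);  // tag bytes stay zero
                table.add(xor(plain, hash(a[x], b[y])));
            }
        Collections.shuffle(table, RNG);                   // hide the row order

        // Evaluate with the labels for x=1, y=1; only one row has a valid zero tag.
        byte[] pad = hash(a[1], b[1]);
        for (byte[] entry : table) {
            byte[] cand = xor(entry, pad);
            if (Arrays.equals(Arrays.copyOfRange(cand, 16, 32), new byte[16]))
                System.out.println("recovered out[1]: "
                        + Arrays.equals(Arrays.copyOfRange(cand, 0, 16), out[1]));
        }
    }
}
```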
Access control offers mechanisms to control and limit the actions or operations that are performed by a user on a set of resources in a system. Many access control models exist that are able to support this basic requirement. One of the properties examined in the context of these models is their ability to successfully restrict access to resources. Nevertheless, considering only restriction of access may not be enough in some environments, such as critical infrastructures. The protection of systems in this type of environment requires a new line of enquiry. It is essential to ensure that appropriate access is always possible, even when users and resources are subjected to challenges of various sorts. Resilience in access control is conceived as the ability of a system not to restrict but rather to ensure access to resources. In order to demonstrate the application of resilience in access control, we formally define an attribute-based access control (ABAC) model based on guidelines provided by the National Institute of Standards and Technology (NIST). We examine how ABAC-based resilience policies can be specified in temporal logic and how these can be formally verified. Resilience is verified using an automated model checking technique, which may reduce the overall complexity of verifying resilience policies and serve as a valuable tool for administrators.
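As an illustrative, hypothetical example of the kind of temporal-logic resilience property that can be model-checked (the paper's exact formulas are not reproduced here), one can require in CTL that from every reachable state there remains a continuation in which access to a critical resource r is eventually granted:

```latex
% Resilience as guaranteed availability of access: from every reachable
% state there is some continuation in which access to r is granted.
\mathit{Resilient}(r) \;\equiv\; \mathbf{AG}\,\mathbf{EF}\;\mathit{granted}(r)
```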
In this talk, I will discuss my lab's work in the emerging field of adversarial stylometry and machine learning. Machine learning algorithms are increasingly being used in security and privacy domains, in areas that go beyond intrusion or spam detection. For example, in digital forensics, questions often arise about the authors of documents: their identity, demographic background, and whether they can be linked to other documents. The field of stylometry uses linguistic features and machine learning techniques to answer these questions. We have applied stylometry to difficult domains such as underground hacker forums, open source projects (code), and tweets. I will discuss our Doppelgänger Finder algorithm, which enables us to group Sybil accounts on underground forums and to link blogs to Twitter feeds and Reddit comments. In addition, I will discuss our work attributing unknown source code and binaries.
In this paper, an advanced security and stability defense framework that utilizes multi-source power system data to enhance power system security and resilience is proposed. The framework consists of early warning, preventive control, online state awareness, and emergency control, and requires in-depth collaboration between power engineering and data science. To realize this framework in practice, a cross-disciplinary research topic is investigated in depth: big data analytics for power system security and resilience enhancement, which consists of data conversion, data cleaning and integration, automatic labelling and learning-model establishment, power system parameter identification and feature extraction using the developed big data learning techniques, and security analysis and control based on the extracted knowledge. Domain considerations of power systems and specific data science technologies are studied, and a roadmap of future techniques for emerging problems is proposed.
In this paper, we propose a novel method, based on keystroke dynamics, to distinguish between fake and truthful personal information written via a computer keyboard. Our method needs no prior knowledge about the user who is providing the data. To our knowledge, this is the first work that associates typing behavior with the production of lies regarding personal information. Through an experimental analysis involving 190 subjects, we show that this method can distinguish between truth and lies on specific types of autobiographical information with an accuracy higher than 75%. Specifically, for information usually required in online registration forms (e.g., name, surname, and email), the typing behavior diverged significantly between truthful and untruthful answers. According to our results, keystroke analysis has great potential for detecting the veracity of self-declared information and could be applied to a large number of practical scenarios requiring users to input personal data remotely via keyboard.
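A minimal sketch (not the authors' exact feature set) of the kind of features keystroke-dynamics methods rely on: dwell time (how long a key is held) and flight time (the gap between consecutive keys), computed from timestamped key events. A classifier trained on such features would then separate truthful from deceptive answers.

```java
import java.util.ArrayList;
import java.util.List;

public class KeystrokeFeatures {
    // One key event: key identifier, press time and release time in ms.
    record KeyEvent(String key, long pressMs, long releaseMs) {}

    public static void main(String[] args) {
        List<KeyEvent> events = new ArrayList<>();
        events.add(new KeyEvent("j", 0, 95));
        events.add(new KeyEvent("o", 160, 240));
        events.add(new KeyEvent("h", 330, 470));   // hesitation before 'h'
        events.add(new KeyEvent("n", 520, 600));

        double dwellSum = 0, flightSum = 0;
        for (int i = 0; i < events.size(); i++) {
            dwellSum += events.get(i).releaseMs() - events.get(i).pressMs();
            if (i > 0)
                flightSum += events.get(i).pressMs() - events.get(i - 1).releaseMs();
        }
        double meanDwell = dwellSum / events.size();
        double meanFlight = flightSum / (events.size() - 1);

        // These two features (plus variances, error rates, etc.) would feed a
        // classifier that separates truthful from deceptive answers.
        System.out.printf("mean dwell %.1f ms, mean flight %.1f ms%n",
                meanDwell, meanFlight);
    }
}
```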
Robotic vehicles, and especially autonomous robotic vehicles, can be attractive targets for attacks that cross the cyber-physical divide, that is, cyber attacks or sensory-channel attacks affecting the ability to navigate or complete a mission. Detection of such threats is typically limited to knowledge-based and vehicle-specific methods, which are applicable only to specific known attacks, or to methods that require computational power that is prohibitive for resource-constrained vehicles. Here, we present a method based on Bayesian Networks that can not only tell whether an autonomous vehicle is under attack, but also whether the attack has originated from the cyber or the physical domain. We demonstrate the feasibility of the approach on an autonomous robotic vehicle built in accordance with the Generic Vehicle Architecture specification and equipped with a variety of popular communication and sensing technologies. The results of experiments involving command injection, rogue node, and magnetic interference attacks show that the approach is promising.
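A hedged toy version of the core inference step: given priors over the attack origin (none / cyber / physical) and per-origin likelihoods of two observable symptoms (all numbers hypothetical), Bayes' rule yields a posterior over the origin. The paper's Bayesian Networks generalize this to many interdependent variables.

```java
public class AttackOriginBayes {
    public static void main(String[] args) {
        String[] origin = {"none", "cyber", "physical"};
        double[] prior  = {0.90, 0.05, 0.05};              // hypothetical priors

        // P(symptom | origin): network anomaly and compass (sensor) anomaly.
        double[] pNetAnom     = {0.02, 0.80, 0.10};
        double[] pCompassAnom = {0.02, 0.10, 0.85};

        // Observed evidence: network anomaly present, compass anomaly absent.
        double[] post = new double[3];
        double norm = 0;
        for (int i = 0; i < 3; i++) {
            post[i] = prior[i] * pNetAnom[i] * (1 - pCompassAnom[i]);
            norm += post[i];
        }
        for (int i = 0; i < 3; i++)
            System.out.printf("P(%s | evidence) = %.3f%n", origin[i], post[i] / norm);
        // The cyber origin dominates, matching the intuition that a network
        // anomaly without sensor disturbance points to the cyber domain.
    }
}
```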
Air gaps can form inside composite insulators and can lead to serious accidents. To detect such internal defects in composite insulators operating on transmission lines, a new non-destructive technique is proposed. A mathematical model of heat diffusion around inner defects in composite insulators is built; it helps to analyze the propagation of heat and to infer the structure and defects under the surface. Compared with traditional detection methods and other non-destructive techniques, this technique has many advantages. In the study, air defects were fabricated artificially in composite insulator samples. The samples were first tested by flash thermography, which performed well in revealing subsurface structure and defects. Comparing flash and hot-air (hair dryer) excitation, the samples showed clearer results after flash heating, so flash excitation is preferable. Tests with different kinds of surface pollution show that pollution has little influence on revealing subsurface structure or defects and affects only the heat diffusion. Defective composite insulators from the field were then inspected, and the resulting defect images are clear. The proposed active thermography system can detect defects quickly, efficiently, and accurately, largely unaffected by surface pollution and other environmental restrictions, so it has broad prospects for revealing defects and structure in composite insulators and even other types of insulators.
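The heat-diffusion model underlying such thermographic inspection is, in its standard form (the paper's exact geometry and boundary conditions are not given here), the heat equation; a subsurface air gap locally changes the effective diffusivity and therefore the surface temperature decay after the flash pulse:

```latex
\frac{\partial T}{\partial t} \;=\; \alpha \,\nabla^{2} T ,
\qquad \alpha \;=\; \frac{k}{\rho\, c_{p}} ,
```

where T is temperature, k thermal conductivity, rho density, and c_p specific heat; the low conductivity of an air gap retards the heat flow and shows up as a surface temperature contrast.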
Complex safety-critical devices require dependable communication. Dependability includes confidentiality and integrity as much as safety. Encrypting gateways with demilitarized zones, Multiple Independent Levels of Security architectures, and the infamous Air Gap are diverse integration patterns for safety-critical infrastructure. However, resource-restricted embedded safety devices still lack simple, certifiable, and efficient cryptography implementations. Following the recommended formal-methods approach for safety-critical devices, we have implemented proven cryptographic algorithms in the qualified model-based language Scade as the Safety Leveraged Implementation of Data Encryption (SLIDE) library. Optimizations for the synchronous dataflow language are discussed in the paper. The implementation of public-key-based encryption and authentication is evaluated for real-world performance. Feasibility is shown by execution-time benchmarks on an industrial safety microcontroller platform running a train control safety application.
Tracing and integrating security requirements throughout the development process is a key challenge in security engineering. In socio-technical systems, security requirements for the organizational and technical aspects of a system are currently dealt with separately, giving rise to substantial misconceptions and errors. In this paper, we present a model-based security engineering framework for supporting system design at the organizational and technical levels. The key idea is to allow the involved experts to specify security requirements in the languages they are familiar with: business analysts use BPMN for procedural system descriptions; system developers use UML to design and implement the system architecture. Security requirements are captured via the language extensions SecBPMN2 and UMLsec. We provide a model transformation to bridge the conceptual gap between SecBPMN2 and UMLsec. Using UMLsec policies, various security properties of the resulting architecture can be verified. In a case study featuring an air traffic management system, we show how our framework can be practically applied.
This paper presents a new dynamic curtailment method for renewable energy sources that guarantees fulfillment of the (n-1) security criterion of the system. It is therefore applicable to high-voltage distribution grids and complies with their planning guidelines. The proposed dynamic curtailment method reduces the power feed-in of renewable energy sources specifically to the level at which no thermal constraint is exceeded in the (n-1) state of the system. Based on AC distribution factors, a new formulation of line outage distribution factors is presented that is applicable to outages consisting of a single line or multiple line segments. The proposed method is tested in a planning study of a real German high-voltage distribution grid. The results show that no thermal loading limits are exceeded when the dynamic curtailment approach is used. A significant reduction in grid reinforcement can therefore be achieved at the cost of a small amount of curtailed annual energy from renewable energy sources.
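The (n-1) check behind such a method can be written compactly with line outage distribution factors (a standard textbook relation, not the paper's AC-based formulation): the post-outage flow on a monitored line l after the outage of line k is estimated from pre-outage flows, and feed-in is curtailed until the estimate respects the thermal limit for every contingency:

```latex
\tilde{f}_{\ell} \;=\; f_{\ell} \;+\; \mathrm{LODF}_{\ell k}\, f_{k}
\;\;\le\;\; f_{\ell}^{\max}
\qquad \text{for all monitored lines } \ell \text{ and outages } k ,
```

where f denotes the pre-outage active power flows and f^max the thermal rating of the monitored line.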
Opportunistic Networks are delay-tolerant mobile networks with intermittent node contacts, in which data is transferred following the store-carry-forward principle. Owners of smartphones and smart objects form such networks through their social behaviour. Opportunistic networking can be used in remote areas with no access to the Internet, to establish communication after disasters or in emergency situations, to bypass censorship, or in parallel with conventional networking. In this work, we create a mobile network application that connects Android devices over Wi-Fi, offers identification and encryption, and gathers information for routing in the network. The application is constructed so that third-party applications can use it as a network layer to send and receive data packets. We create secure and reliable connections while maintaining a high transmission speed, and the gathered information about the network provides the knowledge needed by state-of-the-art routing protocols. We conduct tests on connectivity, transmission range and speed, battery life, and encryption speed, and show a proof of concept for routing in the network.
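A minimal Java sketch of the store-carry-forward principle (hypothetical types, not the app's actual API): each node buffers bundles it cannot deliver yet and replicates them whenever an opportunistic contact occurs.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class StoreCarryForward {
    record Bundle(String id, String destination, String payload) {}

    static class Node {
        final String name;
        final Map<String, Bundle> buffer = new HashMap<>();   // store
        Node(String name) { this.name = name; }

        void accept(Bundle b) {
            if (b.destination().equals(name))
                System.out.println(name + " delivered: " + b.payload());
            else
                buffer.put(b.id(), b);                        // carry
        }

        // On an opportunistic contact, replicate buffered bundles (epidemic style).
        void contact(Node other) {
            for (Bundle b : Set.copyOf(buffer.values())) other.accept(b); // forward
        }
    }

    public static void main(String[] args) {
        Node a = new Node("A"), b = new Node("B"), c = new Node("C");
        a.accept(new Bundle("m1", "C", "hello via opportunistic hops"));
        a.contact(b);   // A meets B: B now carries the bundle
        b.contact(c);   // B later meets C: the bundle is delivered
    }
}
```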
Symmetric block ciphers, a core element in building cryptographic communication systems and protocols, are used to provide message confidentiality, authentication, and integrity. Various limitations in hardware and software resources, especially in the terminal devices used in mobile communications, affect the selection of an appropriate cryptosystem and its parameters. In this paper, implementations of three symmetric ciphers (DES, 3DES, AES) used in different operating modes are analyzed on the Android platform. The cryptosystems' performance is analyzed in different scenarios using several variable parameters: cipher, key size, plaintext size, and number of threads. The influence of the parallelization supported by multi-core CPUs on cryptosystem performance is also analyzed. Finally, some conclusions about parameter selection for optimal efficiency are given.
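The measurement loop at the heart of such an analysis can be sketched with the standard javax.crypto API (a simplified, single-threaded version; the paper additionally varies key sizes, operating modes, and thread counts, and a real benchmark would add JIT warm-up and repeated runs):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class CipherBenchmark {
    public static void main(String[] args) throws Exception {
        byte[] plaintext = new byte[1 << 20];              // 1 MiB test buffer

        // "DESede" is the JCA name for 3DES.
        for (String algo : new String[]{"DES", "DESede", "AES"}) {
            KeyGenerator kg = KeyGenerator.getInstance(algo);
            SecretKey key = kg.generateKey();
            Cipher cipher = Cipher.getInstance(algo + "/ECB/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE, key);

            long t0 = System.nanoTime();
            cipher.doFinal(plaintext);
            long elapsedMs = (System.nanoTime() - t0) / 1_000_000;
            System.out.println(algo + ": " + elapsedMs + " ms per MiB");
        }
    }
}
```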
Cryptography is the fascinating science of constructing and breaking secret codes. The ongoing digitization of the modern era relies on cryptography as one of its backbones, enabling transactions to be performed with confidentiality and security wherever authentication is required. With modern technology, the use of codes has exploded, enriching cryptology and empowering citizens. One of the most important things encryption provides anyone using any kind of computing device is privacy. Since there is no true privacy without strong security, the approach taken here is to make the ciphertext more robust against being bypassed. In the current work, the well-known Caesar cipher and rail fence cipher techniques are combined with a cross-language cipher technique, and a detailed comparative analysis among them is carried out. The simulations were carried out in the Eclipse Juno IDE, and Java, an open-source language, was used to implement these techniques.
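Since the paper implements the techniques in Java, a compact sketch of the two classical ciphers being combined fits naturally here (the cross-language layer is omitted): a Caesar substitution followed by a rail-fence transposition.

```java
public class CombinedCipher {
    // Caesar: shift each letter by k positions (substitution stage).
    static String caesar(String s, int k) {
        StringBuilder out = new StringBuilder();
        for (char c : s.toCharArray()) {
            if (Character.isUpperCase(c)) out.append((char) ('A' + (c - 'A' + k) % 26));
            else if (Character.isLowerCase(c)) out.append((char) ('a' + (c - 'a' + k) % 26));
            else out.append(c);
        }
        return out.toString();
    }

    // Rail fence: write characters in a zig-zag over `rails` rows,
    // then read row by row (transposition stage).
    static String railFence(String s, int rails) {
        StringBuilder[] rows = new StringBuilder[rails];
        for (int i = 0; i < rails; i++) rows[i] = new StringBuilder();
        int row = 0, dir = 1;
        for (char c : s.toCharArray()) {
            rows[row].append(c);
            if (row == 0) dir = 1; else if (row == rails - 1) dir = -1;
            row += dir;
        }
        StringBuilder out = new StringBuilder();
        for (StringBuilder r : rows) out.append(r);
        return out.toString();
    }

    public static void main(String[] args) {
        // Substitution first, then transposition: "DFGWDNWDQWDZ".
        System.out.println(railFence(caesar("ATTACKATDAWN", 3), 3));
    }
}
```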
Next-generation 5G wireless networks pose several important security challenges. One fundamental challenge is key management between the two communicating parties: the goal is to establish a common secret key over an unsecured wireless medium. In this paper, we introduce a new physical-layer paradigm for secure key exchange between legitimate communication parties in the presence of a passive eavesdropper. The proposed method ensures secrecy via pre-equalization and guarantees reliable communication through the use of Low Density Parity Check (LDPC) codes. One of the main findings of this paper is to demonstrate through simulations that the diversity order of the eavesdropper is zero unless the main and eavesdropping channels are highly correlated, while the probability of key mismatch between the legitimate transmitter and receiver remains low. Simulation results demonstrate that the proposed approach achieves a very low secret-key mismatch rate between the legitimate users while ensuring a very high error probability at the eavesdropper.
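The intuition behind pre-equalization-based secrecy can be written compactly (a simplified flat-fading sketch, not the paper's full scheme): the transmitter inverts its estimate of the legitimate channel h, so the intended receiver sees the signal cleanly while an eavesdropper with an independent channel g sees it scaled by an unknown ratio:

```latex
y_{\mathrm{Bob}} \;=\; h \cdot \frac{x}{\hat{h}} + n_{B} \;\approx\; x + n_{B},
\qquad
y_{\mathrm{Eve}} \;=\; g \cdot \frac{x}{\hat{h}} + n_{E},
```

so Eve's observation is distorted by g/h, which is essentially random unless her channel is highly correlated with the main channel, consistent with the zero diversity order observed in the simulations.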
Cloud computing offers many advantages, such as flexibility and resource efficiency, and can significantly reduce costs. However, when sensitive data is outsourced to a cloud provider, classified records can leak. To protect data owners and application providers from a privacy breach, data must be encrypted before it is uploaded. In this work, we present a distributed key management scheme that handles user-specific keys in a single-tenant scenario. The underlying database is encrypted, and the secret key is split into parts and only reconstructed temporarily in memory. Our scheme distributes shares of the key to the different entities. We address bootstrapping, key recovery, the adversary model, and the resulting security guarantees.
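One simple way to realize the "key split into parts, reconstructed only temporarily" idea is n-of-n XOR secret sharing (a hedged sketch; the paper's actual scheme may differ): n-1 shares are random, the last closes the XOR to the key, so any proper subset of shares reveals nothing about it.

```java
import java.security.SecureRandom;
import java.util.Arrays;

public class XorKeySplit {
    static final SecureRandom RNG = new SecureRandom();

    // Split the key into n shares; all n are required for reconstruction.
    static byte[][] split(byte[] key, int n) {
        byte[][] shares = new byte[n][key.length];
        byte[] acc = key.clone();
        for (int i = 0; i < n - 1; i++) {
            RNG.nextBytes(shares[i]);
            for (int j = 0; j < key.length; j++) acc[j] ^= shares[i][j];
        }
        shares[n - 1] = acc;                               // last share closes the XOR
        return shares;
    }

    static byte[] combine(byte[][] shares) {
        byte[] key = new byte[shares[0].length];
        for (byte[] s : shares)
            for (int j = 0; j < key.length; j++) key[j] ^= s[j];
        return key;
    }

    public static void main(String[] args) {
        byte[] key = new byte[16];
        RNG.nextBytes(key);
        byte[][] shares = split(key, 3);                   // one share per entity
        // Reconstruct in memory only while the database key is actually needed.
        System.out.println(Arrays.equals(key, combine(shares)));  // true
    }
}
```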
Near-sensor data analytics is a promising direction for internet-of-things endpoints, as it minimizes energy spent on communication and reduces network load - but it also poses security concerns, as valuable data are stored or sent over the network at various stages of the analytics pipeline. Using encryption to protect sensitive data at the boundary of the on-chip analytics engine is a way to address data security issues. To cope with the combined workload of analytics and encryption in a tight power envelope, we propose Fulmine, a system-on-chip (SoC) based on a tightly-coupled multi-core cluster augmented with specialized blocks for compute-intensive data processing and encryption functions, supporting software programmability for regular computing tasks. The Fulmine SoC, fabricated in 65 nm technology, consumes less than 20 mW on average at 0.8 V, achieving an efficiency of up to 70 pJ/B in encryption, 50 pJ/px in convolution, or up to 25 MIPS/mW in software. As a strong argument for real-life flexible application of our platform, we show experimental results for three secure analytics use cases: secure autonomous aerial surveillance with a state-of-the-art deep convolutional neural network (CNN) consuming 3.16 pJ per equivalent reduced instruction set computer operation, local CNN-based face detection with secured remote recognition at 5.74 pJ/op, and seizure detection with encrypted data collection from electroencephalogram at 12.7 pJ/op.
This paper describes biometric-based cryptographic techniques for providing confidential communications and strong, mutual, multifactor authentication on the Internet of Things. The described security techniques support the goal of universal access by allowing users to select from multiple alternatives to authenticate their identities. By using a Biometric Authenticated Key Exchange (BAKE) protocol, user credentials are protected against phishing and man-in-the-middle attacks. Forward secrecy is achieved using a Diffie-Hellman key establishment scheme with fresh random values each time the BAKE protocol is run. Confidentiality is achieved using lightweight cryptographic algorithms that are well suited for implementation in resource-constrained environments, i.e., those limited in processing speed, memory, and power availability. Lightweight cryptography can offer strong confidentiality solutions that are practical to implement in Internet of Things systems, where efficient execution, small memory requirements, and small code size are required.
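The forward-secrecy ingredient is a standard ephemeral Diffie-Hellman exchange; a toy Java sketch with a deliberately small prime follows (real deployments use standardized 2048-bit groups or elliptic curves, and BAKE additionally binds biometric-protected credentials into the exchange):

```java
import java.math.BigInteger;
import java.security.SecureRandom;

public class EphemeralDh {
    public static void main(String[] args) {
        // Toy group parameters (far too small for real use; illustration only).
        BigInteger p = BigInteger.valueOf(2147483647);     // the Mersenne prime 2^31 - 1
        BigInteger g = BigInteger.valueOf(7);
        SecureRandom rng = new SecureRandom();

        // Fresh random exponents every run: a later compromise of long-term
        // secrets does not expose past session keys (forward secrecy).
        BigInteger a = new BigInteger(30, rng).add(BigInteger.TWO);  // Alice's secret
        BigInteger b = new BigInteger(30, rng).add(BigInteger.TWO);  // Bob's secret

        BigInteger A = g.modPow(a, p);                     // Alice -> Bob
        BigInteger B = g.modPow(b, p);                     // Bob -> Alice

        BigInteger kAlice = B.modPow(a, p);
        BigInteger kBob   = A.modPow(b, p);
        System.out.println(kAlice.equals(kBob));           // true: shared secret agrees
    }
}
```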
The modular multilevel converter with series and parallel connectivity has been shown to provide advantages in several industrial applications. Its reliability largely depends on the absence of failures in the power semiconductors. We propose and analyze a fault-diagnosis technique that identifies shorted switches based on features generated through a wavelet transform of the converter output and subsequent classification with support vector machines. The multi-class support vector machine is trained with multiple recordings of the output under each fault condition as well as under normal operation. Simulation results reveal that the proposed method has low classification latency and high robustness. Apart from monitoring of the output, which is required for converter control in any case, the method does not require additional module sensors.
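A hedged sketch of the feature-generation stage (the specific wavelet and feature set are assumptions here): one level of the Haar wavelet transform splits a sampled output waveform into approximation and detail coefficients, and detail-band energies per level serve as features for the multi-class SVM, whose training is left to a library.

```java
public class HaarFeatures {
    // One level of the Haar discrete wavelet transform.
    // Returns {approximation, detail}, each half the input length.
    static double[][] haarLevel(double[] x) {
        int n = x.length / 2;
        double[] approx = new double[n], detail = new double[n];
        for (int i = 0; i < n; i++) {
            approx[i] = (x[2 * i] + x[2 * i + 1]) / Math.sqrt(2);
            detail[i] = (x[2 * i] - x[2 * i + 1]) / Math.sqrt(2);
        }
        return new double[][]{approx, detail};
    }

    static double energy(double[] v) {
        double e = 0;
        for (double d : v) e += d * d;
        return e;
    }

    public static void main(String[] args) {
        // Synthetic converter output: a sine with a step glitch mimicking a fault.
        double[] signal = new double[64];
        for (int i = 0; i < signal.length; i++)
            signal[i] = Math.sin(2 * Math.PI * i / 16) + (i > 40 ? 0.5 : 0);

        // Feature vector: detail-band energy per decomposition level.
        double[] current = signal;
        for (int level = 1; level <= 3; level++) {
            double[][] ad = haarLevel(current);
            System.out.printf("level %d detail energy: %.3f%n", level, energy(ad[1]));
            current = ad[0];
        }
    }
}
```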
In the Energy Internet setting, a large volume of alarm information is generated when equipment exceptions and multiple faults occur in a large power grid, which seriously hampers information collection and fault analysis and delays incident handling by the operators. To address this, this paper proposes a method for real-time fault monitoring and diagnosis in power grid monitoring: it constructs an equipment fault logical model based on five-section alarm information, builds a standard fault information set, and uses a matching algorithm to realize fault information optimization, faulty equipment location, fault type diagnosis, and the analysis of false-report and missing-report messages. The validity and practicality of the proposed method are verified with an actual case; the method can shorten the time needed to obtain and analyze fault information, accelerate incident handling, and help ensure the safe and stable operation of the power grid.
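The matching step can be sketched as comparing the observed alarm set against standard fault signatures and scoring each candidate by overlap (a simplified stand-in for the paper's matching algorithm; the signature contents below are hypothetical). Alarms missing from the best-scoring signature hint at missing-report messages, while extra alarms hint at false reports.

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class AlarmMatcher {
    // Jaccard similarity between observed alarms and a standard signature.
    static double score(Set<String> observed, Set<String> signature) {
        Set<String> inter = new HashSet<>(observed);
        inter.retainAll(signature);
        Set<String> union = new HashSet<>(observed);
        union.addAll(signature);
        return (double) inter.size() / union.size();
    }

    public static void main(String[] args) {
        // Hypothetical standard fault information set, one signature per fault type.
        Map<String, Set<String>> standard = Map.of(
            "line fault, breaker tripped",
                Set.of("protection_trip", "breaker_open", "overcurrent"),
            "breaker failure",
                Set.of("protection_trip", "breaker_fail", "backup_trip"));

        Set<String> observed = Set.of("protection_trip", "breaker_open",
                                      "overcurrent", "voltage_dip");

        standard.forEach((fault, sig) ->
            System.out.printf("%s -> %.2f%n", fault, score(observed, sig)));
    }
}
```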
Transmission-line monitoring systems produce a large amount of data, which hinders fault diagnosis. For this reason, approaches that can acquire and automatically interpret the information coming from line monitoring are needed. Furthermore, human errors stemming from operator-dependent real-time decisions need to be reduced. In this paper, a multiple-fault diagnosis method to determine the operating conditions of transmission lines is proposed. Different scenarios, including contamination of insulator chains with different types and concentrations of pollutants, were modeled by equivalent circuits. Their performance was characterized by leakage current (LC) measurements and related to specific fault modes. A feature extraction algorithm relying on the differences between normal and faulty conditions was used to define qualitative trends for the diagnosis of the various fault modes.