Biblio
This paper presents Checked C, an extension to C designed to support spatial safety, implemented in Clang and LLVM. Checked C's design is distinguished by its focus on backward-compatibility, incremental conversion, developer control, and enabling highly performant code. Like past approaches to a safer C, Checked C employs a form of checked pointer whose accesses can be statically or dynamically verified. Performance evaluation on a set of standard benchmark programs shows overheads to be relatively low. More interestingly, Checked C introduces the notions of a checked region and bounds-safe interfaces.
State estimation allows continuous monitoring of a power system by estimating the power system state variables from measurement data. Unfortunately, the measurement data provided by the metering devices can serve as attack vectors for false data injection attacks. As more components are connected to the internet, the power system is exposed to various known and unknown cyber threats. Previous investigations have shown that false data injected into measurements from traditional meters can bypass bad data detection systems. This paper extends this line of investigation by giving an overview of cyber security threats to phasor measurement units, assessing the impact of false data injection on hybrid state estimators, and suggesting security recommendations. Simulations are performed on the IEEE 30-bus and 118-bus test systems.
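For reference, the undetectability condition underlying the false data injection attacks discussed in the entry above can be stated in the standard linearized (DC) state-estimation model; the notation below follows the common formulation in the literature rather than this paper's specific hybrid estimator:
\[
z = Hx + e, \qquad \hat{x} = (H^{\top} W H)^{-1} H^{\top} W z,
\]
\[
z_a = z + a \ \text{with} \ a = Hc \;\Rightarrow\; \hat{x}_a = \hat{x} + c, \qquad \lVert z_a - H\hat{x}_a \rVert = \lVert z - H\hat{x} \rVert,
\]
so a residual-based bad data detector cannot distinguish the attacked measurements from the clean ones.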
Over the past decade, the reliance on Unmanned Aerial Systems (UAS) to carry out critical missions has grown drastically. With the increased reliance on UAS as mission assets and the dependency of UAS on cyber resources, the cyber security of UAS must be improved by adopting sound security principles and relevant technologies from the computing community. At the same time, the traditional avionics community, aware of the importance of cyber security, is looking at new architectures and designs that can accommodate both the traditional safety-oriented principles and cyber security principles and techniques. It is with the effective and timely convergence of these domains that a holistic approach and co-design can meet the unique requirements of modern systems and operations. In this paper, we, authors from both the cyber security and avionics domains, describe our joint effort and the insights obtained during the course of designing secure and resilient embedded avionics systems.
Additive manufacturing (AM, or 3D printing) is a novel manufacturing technology that has been adopted in both industrial and consumer settings. However, the reliance of this technology on computerization has raised various security concerns. In this paper, we address sabotage via tampering during the 3D printing process by presenting an approach that can verify the integrity of a 3D printed object. Our approach operates on the acoustic side-channel emanations generated by the 3D printer's stepper motors, which yields a non-intrusive and real-time validation process that is difficult to compromise. The proposed approach comprises two algorithms. The first algorithm generates a master audio fingerprint for the verifiable, unaltered printing process. The second algorithm is applied when the same 3D object is printed again; it validates the monitored 3D printing process by assessing the similarity of its audio signature to the master audio fingerprint. To evaluate the quality of the proposed approach, we identify the detectability thresholds for the following minimal tampering primitives: insertion, deletion, replacement, and modification of a single tool path command. By detecting deviations at the time of occurrence, we can stop the printing process for compromised objects, thus saving time and preventing material waste. We discuss various factors that impact the method, such as background noise, changes of audio device, and different audio recorder positions.
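As a rough illustration of the verification idea in the entry above (not the paper's actual fingerprinting algorithm, whose features and thresholds are not given in the abstract), a frame-wise spectral fingerprint of the monitored print can be compared against a master fingerprint with a similarity threshold; frame length, hop size, and threshold below are assumptions:

```python
# Sketch: frame-wise spectral fingerprint comparison for 3D-printer audio.
# All parameters (frame_len, hop, threshold) are illustrative assumptions.
import numpy as np

def fingerprint(signal, frame_len=4096, hop=2048):
    """Magnitude spectra of overlapping frames, L2-normalized per frame."""
    frames = []
    for start in range(0, len(signal) - frame_len, hop):
        spec = np.abs(np.fft.rfft(signal[start:start + frame_len]))
        norm = np.linalg.norm(spec)
        frames.append(spec / norm if norm > 0 else spec)
    return np.array(frames)

def similarity(master_fp, monitored_fp):
    """Mean cosine similarity over aligned frames of the two fingerprints."""
    n = min(len(master_fp), len(monitored_fp))
    return float(np.mean(np.sum(master_fp[:n] * monitored_fp[:n], axis=1)))

def is_untampered(master_fp, monitored_fp, threshold=0.9):
    """True if the monitored print matches the master fingerprint closely enough."""
    return similarity(master_fp, monitored_fp) >= threshold
```

In practice such a threshold would need to be calibrated against the detectability limits for the single-command tampering primitives mentioned above.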
The rise of social networks during the last 10 years has created a situation in which up to 100 million new images and photographs are uploaded and shared by users every day. This environment provides an ideal background for those who wish to communicate covertly through steganography. It also creates a new set of challenges for steganalysts, who have to shift their field of work away from a purely scientific laboratory environment and into a diverse real-world scenario, while at the same time dealing with entirely new problems, such as the detection of steganographic channels or the impact that even a low false positive rate has when investigating the millions of images shared every day on social networks. We evaluate how to address these challenges with traditional steganographic and statistical methods, rather than with high-performance computing and machine learning. Using a double-embedding attack on the well-known F5 steganographic algorithm, we achieve a false positive rate well below that of known attacks.
In this paper, we analyze the effectiveness of quantum algorithms at addressing problems that are computationally hard for classical computers. In particular, we focus on several well-known quantum algorithms and discuss their impact on classical computing as used in computer science.
The wide use of smart devices has produced a huge amount of information, and handling this information requires new methods; it is from this perspective that the concept of big data arose. Most big data efforts concentrate on handling the data without focusing on its security. Encryption is the primary means of keeping data safe from malicious users. However, ordinary encryption methods are not suitable for big data. Selective encryption is an encryption method that encrypts only the important part of the message, but evaluating which part of the message is important involves uncertainty. A problem arises when an important part is left unencrypted, and this is the motivation for this paper. We propose a security framework that secures both the important and the unimportant portions of the message in order to overcome this uncertainty, with each portion encrypted by a different technique for better performance without losing security. The framework encrypts the important parts of the message with a strong algorithm and the remaining parts with a medium-strength algorithm. The importance of a word is defined according to how frequently its origin (root form) appears. The framework is deployed on Amazon EC2 (Elastic Compute Cloud). A comparison between the proposed framework, full encryption, and the Toss-A-Coin method is performed in terms of encryption time and throughput. The results show that the proposed method outperforms full encryption in both encryption time and throughput.
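A minimal sketch of the selection step described in the entry above, assuming a word-frequency table and a threshold; the frequency criterion, threshold, and the two cipher functions are injected placeholders rather than the specific algorithms used in the paper:

```python
# Sketch of selective encryption's splitting step: words deemed "important"
# (here: by an assumed corpus-frequency threshold) go to a strong cipher,
# the rest to a lighter one. encrypt_strong / encrypt_light are placeholders.
from collections import Counter

def split_by_importance(message, corpus_freq, threshold):
    words = message.split()
    important = [w for w in words if corpus_freq.get(w.lower(), 0) >= threshold]
    ordinary = [w for w in words if corpus_freq.get(w.lower(), 0) < threshold]
    return important, ordinary

def selective_encrypt(message, corpus_freq, threshold, encrypt_strong, encrypt_light):
    important, ordinary = split_by_importance(message, corpus_freq, threshold)
    return {
        "strong": encrypt_strong(" ".join(important).encode()),  # strong algorithm
        "light": encrypt_light(" ".join(ordinary).encode()),     # medium-strength algorithm
    }

# Hypothetical usage: build a frequency table from a reference corpus.
corpus_freq = Counter(w.lower() for w in "some reference corpus text".split())
```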
The advent of smart grids offers us the opportunity to better manage the electricity grids. One of the most interesting challenges in the modern grids is the consumer demand management. Indeed, the development in Information and Communication Technologies (ICTs) encourages the development of demand-side management systems. In this paper, we propose a distributed energy demand scheduling approach that uses minimal interactions between consumers to optimize the energy demand. We formulate the consumption scheduling as a constrained optimization problem and use game theory to solve this problem. On one hand, the proposed approach aims to reduce the total energy cost of a building's consumers. This imposes the cooperation between all the consumers to achieve the collective goal. On the other hand, the privacy of each user must be protected, which means that our distributed approach must operate with a minimal information exchange. The performance evaluation shows that the proposed approach reduces the total energy cost, each consumer's individual cost, as well as the peak to average ratio.
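A generic formulation of this class of scheduling problem (not necessarily the exact model of the paper above) is to minimize the building's total energy cost over the scheduling horizon subject to each consumer's energy requirement, with the peak-to-average ratio (PAR) as a secondary metric:
\[
\min_{\{x_n^h\}} \; \sum_{h=1}^{H} C_h\!\Big(\sum_{n \in \mathcal{N}} x_n^h\Big)
\quad \text{s.t.} \quad \sum_{h=1}^{H} x_n^h = E_n \;\; \forall n \in \mathcal{N},
\qquad
\mathrm{PAR} = \frac{H \, \max_h \sum_{n} x_n^h}{\sum_{h}\sum_{n} x_n^h},
\]
where $x_n^h$ is consumer $n$'s scheduled load in hour $h$, $C_h(\cdot)$ the hourly cost function, and $E_n$ consumer $n$'s daily energy requirement; the game-theoretic approach lets each consumer update its own schedule with minimal information exchange.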
We introduce FairSwap – an efficient protocol for the fair exchange of digital goods using smart contracts. A fair exchange protocol allows a sender S to sell a digital commodity x for a fixed price p to a receiver R. The protocol is said to be secure if R only pays if he receives the correct x. Our solution guarantees fairness by relying on smart contracts executed over decentralized cryptocurrencies, where the contract takes the role of an external judge that completes the exchange in case of disagreement. While there have been several past proposals for building fair exchange protocols over cryptocurrencies, our solution has two distinctive features that make it particularly attractive when users deal with large commodities: (1) it minimizes the cost of running the smart contract on the blockchain, and (2) it avoids expensive cryptographic tools such as zero-knowledge proofs. In addition to our new protocols, we provide formal security definitions for smart contract based fair exchange and prove the security of our construction. Finally, we illustrate several applications of our basic protocol and evaluate the practicality of our approach via a prototype implementation for fairly selling large files over the cryptocurrency Ethereum.
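The skeleton of the contract-as-judge idea can be sketched as a hash-locked key reveal; this deliberately omits FairSwap's Merkle-tree encoding of x and the proof-of-misbehaviour phase that lets R dispute an incorrect delivery, so it shows only the surrounding pattern, not the FairSwap protocol itself:

```python
# Simplified judge-contract pattern (illustrative only, not FairSwap):
# S commits to a key, R locks the price, payment is released when the key
# matching the commitment is revealed on-chain.
import hashlib

class JudgeContract:
    def __init__(self, key_commitment: str, price: int):
        self.key_commitment = key_commitment  # H(k), published by the seller S
        self.price = price
        self.deposit = 0
        self.revealed_key = None

    def deposit_payment(self, amount: int):
        """Receiver R locks the agreed price p in the contract."""
        assert amount == self.price
        self.deposit = amount

    def reveal_key(self, key: bytes) -> int:
        """Seller S reveals k; the contract checks it against H(k) and pays out."""
        assert hashlib.sha256(key).hexdigest() == self.key_commitment
        self.revealed_key = key
        return self.deposit  # payment released to S
```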
Blockchain-based cryptocurrencies secure a decentralized consensus protocol by incentives. The protocol participants, called miners, generate (mine) a series of blocks, each containing monetary transactions created by system users. As incentive for participation, miners receive newly minted currency and transaction fees paid by transaction creators. Blockchain bandwidth limits lead users to pay increasing fees in order to prioritize their transactions. However, most prior work focused on models where fees are negligible. In a notable exception, Carlsten et al. [17] postulated that if incentives come only from fees then a mining gap would form: miners would avoid mining when the available fees are insufficient. In this work, we analyze cryptocurrency security in realistic settings, taking into account all elements of expenses and rewards. To study when gaps form, we analyze the system as a game we call the gap game. We analyze the game with a combination of symbolic and numeric analysis tools in a wide range of scenarios. Our analysis confirms Carlsten et al.'s postulate; indeed, we show that gaps form well before fees are the only incentive, and analyze the implications on security. Perhaps surprisingly, we show that different miners choose different gap sizes to optimize their utility, even when their operating costs are identical. Alarmingly, we see that the system incentivizes large miner coalitions, reducing system decentralization. We describe the required conditions to avoid the incentive misalignment, providing guidelines for future cryptocurrency design.
ARM has become the leading processor architecture for mobile and IoT devices, and it has recently started claiming a bigger slice of the server market pie as well. As such, it will not be long before malware more regularly targets the ARM architecture. Therefore, the stealthy operation of Virtual Machine Introspection (VMI) is a prerequisite for successfully analyzing and proactively mitigating this growing threat. Stealthy VMI has proven itself perfectly suitable for malware analysis on Intel's architecture, yet it often lacks the foundation required to be equally effective on ARM.
This paper presents an image steganography technique that combines the Discrete Wavelet Transform and Singular Value Decomposition. A text file is converted into an image that serves as the watermark, and the watermark is embedded into the cover image. We evaluate the performance and compare this method with other methods, namely Least Significant Bit, Discrete Cosine Transform, and Discrete Wavelet Transform, using the Peak Signal-to-Noise Ratio and the Mean Squared Error. The experimental results show that the combination of the Discrete Wavelet Transform and Singular Value Decomposition performs better than the Least Significant Bit, Discrete Cosine Transform, and Discrete Wavelet Transform alone. The Peak Signal-to-Noise Ratio values obtained with the proposed method are 57.0519 and 56.9520, while the Mean Squared Error values are 0.1282 and 0.1311. Future work is to encrypt the embedded data so that, in the event of an attack, the hidden data remains secure.
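A minimal sketch of DWT+SVD embedding and of the two reported metrics, assuming PyWavelets, a Haar wavelet, and a watermark already resized to the LL-subband shape; the subband choice and the scaling factor alpha are assumptions, not the paper's exact procedure:

```python
# Sketch of DWT+SVD watermark embedding: the cover's LL subband is decomposed
# with SVD and its singular values are additively modified by the watermark's
# singular values. `alpha` is an assumed embedding strength.
import numpy as np
import pywt

def embed(cover, watermark, alpha=0.05):
    LL, (LH, HL, HH) = pywt.dwt2(cover, 'haar')
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    Sw = np.linalg.svd(watermark, compute_uv=False)   # watermark same shape as LL
    LL_marked = (U * (S + alpha * Sw)) @ Vt           # modified singular values
    return pywt.idwt2((LL_marked, (LH, HL, HH)), 'haar')

def psnr_mse(original, stego, peak=255.0):
    """Return (PSNR in dB, MSE) between the original and the stego image."""
    mse = np.mean((original - stego) ** 2)
    return 10 * np.log10(peak ** 2 / mse), mse
```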
Converged Multi-Level Secure systems allow users to interact with and freely move between applications and data of varying sensitivity on a single user interface. They promise unprecedented usability and security, especially in security-critical environments like Defence. Yet these promises rely on hard assumptions about secure user behaviour. We present initial work to test the validity of these assumptions in the absence of deception by an adversary. We conducted a user study with 21 participants on the Cross Domain Desktop Compositor. Chief amongst our findings is that the vast majority of participants (19 of 21) behave securely, even when doing so requires more effort than behaving insecurely. Our findings suggest that there is large scope for further research on converged Multi-Level Secure systems, and highlight the value of user studies to complement formal security analyses of critical systems.
Modern operating systems, such as iOS, use multiple access control policies to define an overall protection system. However, the complexity of these policies and their interactions can hide policy flaws that compromise the security of the protection system. We propose iOracle, a framework that logically models the iOS protection system such that queries can be made to automatically detect policy flaws. iOracle models policies and runtime context extracted from iOS firmware images, developer resources, and jailbroken devices, and iOracle significantly reduces the complexity of queries by modeling policy semantics. We evaluate iOracle by using it to successfully triage executables likely to have policy flaws and comparing our results to the executables exploited in four recent jailbreaks. When applied to iOS 10, iOracle identifies previously unknown policy flaws that allow attackers to modify or bypass access control policies. For compromised system processes, consequences of these policy flaws include sandbox escapes (with respect to read/write file access) and changing the ownership of arbitrary files. By automating the evaluation of iOS access control policies, iOracle provides a practical approach to hardening iOS security by identifying policy flaws before they are exploited.
To improve cyber defense, researchers have developed algorithms to allocate limited defense resources optimally. Through signaling theory, we have learned that it is possible to trick the human mind when using deceptive signals. The present work is an initial step towards developing a psychological theory of cyber deception. We use simulations to investigate how humans might make decisions under various conditions of deceptive signals in cyber-attack scenarios. We created an Instance-Based Learning (IBL) model of the attacker decisions using the ACT-R cognitive architecture. We ran simulations against the optimal deceptive signaling algorithm and against four alternative deceptive signal schemes. Our results show that the optimal deceptive algorithm is more effective at reducing the probability of attack and protecting assets compared to other signaling conditions, but it is not perfect. These results shed some light on the expected effectiveness of deceptive signals for defense. The implications of these findings are discussed.
Critical Infrastructures (CIs) use Supervisory Control And Data Acquisition (SCADA) systems for remote control and monitoring. Sophisticated security measures are needed to address malicious intrusions, which are steadily increasing in number and variety due to the massive spread of connectivity and the standardisation of open SCADA protocols. Traditional Intrusion Detection Systems (IDSs) cannot detect attacks that are not already present in their databases. Therefore, in this paper, we assess Machine Learning (ML) for intrusion detection in SCADA systems using a real data set collected from a gas pipeline system and provided by the Mississippi State University (MSU). The contribution of this paper is two-fold: 1) the evaluation of four techniques for missing data estimation and two techniques for data normalization, and 2) the assessment of Support Vector Machine (SVM) and Random Forest (RF) in terms of accuracy, precision, recall and F1-score for intrusion detection. Two cases are differentiated: binary and categorical classification. Our experiments reveal that RF detects intrusions effectively, with an F1-score above 99%.
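An illustrative scikit-learn pipeline that mirrors the evaluation steps named above (missing-data estimation, normalization, RF classification, and the four metrics); the imputation strategy, scaler, split, and hyper-parameters are placeholders rather than the paper's choices, and the gas-pipeline features are not reproduced here:

```python
# Illustrative SCADA intrusion-detection pipeline (assumed parameters only).
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def evaluate_rf(X, y, binary=True):
    """X: feature matrix; y: 0/1 labels (binary case) or attack categories."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=0)
    model = make_pipeline(
        SimpleImputer(strategy="mean"),   # one possible missing-data estimator
        StandardScaler(),                 # one possible normalization
        RandomForestClassifier(n_estimators=100, random_state=0),
    )
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    avg = "binary" if binary else "macro"
    return {
        "accuracy": accuracy_score(y_te, pred),
        "precision": precision_score(y_te, pred, average=avg),
        "recall": recall_score(y_te, pred, average=avg),
        "f1": f1_score(y_te, pred, average=avg),
    }
```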
The increasing publication of large amounts of theoretically anonymous data can enable a number of attacks on people's privacy. Publishing sensitive data without exposing the data owners is generally not among software developers' concerns. Data privacy regulations create an appropriate scenario for focusing on privacy from the perspective of the data use and exploration that take place in an organization. The increasing number of sanctions for privacy violations motivates a systematic comparison of three well-known machine learning algorithms in order to measure the utility of privacy-preserved data. The scope of the evaluation is extended by comparing them against a known privacy preservation metric, using different parameter scenarios and privacy levels. The use of publicly available implementations, together with the presentation of the methodology, the experiments, and the analysis, provides a framework for working on the problem of privacy preservation. Problems are identified in measuring the utility of the data and its relationship with privacy preservation. The findings motivate the need for metrics tailored to the privacy preferences of the data owners, since the risk of predicting sensitive attributes by means of machine learning techniques is usually not eliminated. In addition, it is shown that full privacy preservation may be attainable but cannot be measured, while the machine learning models of interest to the data-publishing organization must still perform adequately.
Attack simulations may be used to assess the cyber security of systems. In such simulations, the steps taken by an attacker in order to compromise sensitive system assets are traced, and a time estimate may be computed from the initial step to the compromise of assets of interest. Attack graphs constitute a suitable formalism for the modeling of attack steps and their dependencies, allowing the subsequent simulation. To avoid the costly proposition of building new attack graphs for each system of a given type, domain-specific attack languages may be used. These languages codify the generic attack logic of the considered domain, thus facilitating the modeling, or instantiation, of a specific system in the domain. Examples of possible cyber security domains suitable for domain-specific attack languages are generic types such as cloud systems or embedded systems but may also be highly specialized kinds, e.g. Ubuntu installations; the objects of interest as well as the attack logic will differ significantly between such domains. In this paper, we present the Meta Attack Language (MAL), which may be used to design domain-specific attack languages such as the aforementioned. The MAL provides a formalism that allows the semi-automated generation as well as the efficient computation of very large attack graphs. We declare the formal background to MAL, define its syntax and semantics, exemplify its use with a small domain-specific language and instance model, and report on the computational performance.
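As a toy illustration of the time estimate mentioned above, the shortest expected time from an entry step to a target asset can be computed over an attack graph with OR-steps only; MAL additionally supports AND-steps and probability distributions over step times, which are omitted here, and the step names and times below are hypothetical:

```python
# Toy time-to-compromise computation over an OR-step attack graph (Dijkstra).
# Edge weights are assumed expected times for each attack step.
import heapq

def time_to_compromise(graph, entry, target):
    """graph: {step: [(next_step, expected_time), ...]}."""
    dist = {entry: 0.0}
    queue = [(0.0, entry)]
    while queue:
        t, step = heapq.heappop(queue)
        if step == target:
            return t
        if t > dist.get(step, float("inf")):
            continue
        for nxt, dt in graph.get(step, []):
            if t + dt < dist.get(nxt, float("inf")):
                dist[nxt] = t + dt
                heapq.heappush(queue, (t + dt, nxt))
    return float("inf")

# Hypothetical instance model:
g = {"internet": [("connect_host", 1.0)],
     "connect_host": [("exploit_service", 4.0)],
     "exploit_service": [("access_data", 0.5)]}
print(time_to_compromise(g, "internet", "access_data"))  # 5.5
```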
With the recent advances in information and communication technology, Web and mobile Internet applications have become a part of our daily lives. These developments have also brought the concept of information security to the fore, owing to the necessity of protecting institutional information from Internet attackers. There are many security approaches for providing information security in enterprise applications; however, using only one of these approaches may not be sufficient to achieve security. This paper describes a multi-layered framework that implements two-factor and single sign-on authentication together. The proposed framework generates unique one-time passwords (OTPs), which are used to authenticate application data. Nevertheless, using only an OTP mechanism does not meet the security requirements; therefore, implementing a separate authentication application with single sign-on capability is necessary.
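For concreteness, a standard way to generate such one-time passwords is the HMAC-based HOTP/TOTP construction (RFC 4226/6238); the sketch below illustrates that general mechanism only and is not taken from the paper, whose OTP scheme and single sign-on integration are not detailed in the abstract:

```python
# Minimal HOTP/TOTP sketch (RFC 4226 / RFC 6238 style, HMAC-SHA1, 6 digits).
import hmac, hashlib, struct, time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """Time-based OTP: HOTP over the current 30-second time step."""
    return hotp(secret, int(time.time()) // period, digits)
```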
We consider linear time-invariant networks with unknown interaction topology where only a subset of the nodes, termed manifest, can be directly controlled and observed. The remaining nodes are termed latent and their number is also unknown. Our goal is to identify the transfer function of the manifest subnetwork and determine whether interactions between manifest nodes are direct or mediated by latent nodes. We show that, if there are no inputs to the latent nodes, then the manifest transfer function can be approximated arbitrarily well in the $H_\infty$-norm sense by the transfer function of an auto-regressive model. Motivated by this result, we present a least-squares estimation method to construct the auto-regressive model from measured data. We establish that the least-squares matrix estimate converges in probability to the matrix sequence defining the desired auto-regressive model as the length of data and the model order grow. We also show that the least-squares auto-regressive method guarantees an arbitrarily small $H_\infty$-norm error in the approximation of the manifest transfer function, exponentially decaying once the model order exceeds a certain threshold. Finally, we show that when the latent subnetwork is acyclic, the proposed method achieves perfect identification of the manifest transfer function above a specific model order as the length of the data increases. Various examples illustrate our results.
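A least-squares fit of the auto-regressive model referred to above can be sketched with an ordinary vector AR estimator; the model order d, the data layout, and the estimator below are generic assumptions rather than the paper's exact construction:

```python
# Generic least-squares vector auto-regression:
# y_t ~ A_1 y_{t-1} + ... + A_d y_{t-d} + e_t.
import numpy as np

def fit_var(Y, d):
    """Y: (T, n) array of manifest-node measurements; returns [A_1, ..., A_d]."""
    T, n = Y.shape
    # Regressor matrix stacking lagged observations [y_{t-1}, ..., y_{t-d}].
    X = np.hstack([Y[d - k - 1:T - k - 1] for k in range(d)])
    target = Y[d:]
    coeffs, *_ = np.linalg.lstsq(X, target, rcond=None)   # shape (n*d, n)
    return [coeffs[k * n:(k + 1) * n].T for k in range(d)]
```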
To appear
Hardware Trojans have become a major threat to the Integrated Circuit industry over the last decade. Many techniques have been proposed in the literature aiming at detecting such malicious modifications in fabricated ICs. For the most critical circuits, prevention methods are also of interest. The goal of such methods is to prevent the insertion of a Hardware Trojan through ad-hoc design rules. In this paper, we present a novel prevention technique based on approximation. An approximate logic circuit is a circuit that performs a possibly different but closely related logic function, so that it can be used for error detection or error masking where it overlaps with the original circuit. We show how this technique can successfully detect the presence of Hardware Trojans, with a solution that has a smaller impact than triplication.
The collection of students' sensitive data provokes adverse reactions against Learning Analytics and decreases confidence in its adoption. The laws and policies that surround the use of educational data are not enough to ensure the privacy, security, validity, integrity and reliability of students' data. This problem has been identified through a literature review and can be addressed if a technological layer of automated checking rules is added on top of these policies. The aim of this thesis is to investigate an emerging technology, blockchain, to preserve the identity of students and secure their data. In a first stage, a systematic literature review will be conducted in order to set the context of the research. Afterwards, following the scientific method, we will develop a blockchain-based solution to automate rules and constraints, with the aim of giving students governance of their data and ensuring data privacy and security.
This article describes a privacy policy framework that can represent and reason about complex privacy policies. By using a Common Data Model together with a formal shareability theory, this framework enables the specification of expressive policies in a concise way without burdening the user with technical details of the underlying formalism. We also build a privacy policy decision engine that implements the framework and that has been deployed as the policy decision point in a novel enterprise privacy prototype system. Our policy decision engine supports two main uses: (1) interfacing with user interfaces for the creation, validation, and management of privacy policies; and (2) interfacing with systems that manage data requests and replies by coordinating privacy policy engine decisions and access to (encrypted) databases using various privacy enhancing technologies.