Bibliography
The term "Cyber Physical System" (CPS) has been used in recent years to describe a class of systems that use powerful communication networks to functionally combine systems previously thought of as independent. The common theme is that, through communication, CPSs can make decisions together and achieve common goals. Yet, in contrast to traditional system types such as embedded systems, the functional dependence between CPSs can change dynamically at runtime. This dependence may cause unforeseen runtime behavior, e.g., when a CPS becomes unavailable while others depend on its correct operation. During development of any individual CPS, this runtime behavior must therefore be predicted, and the system must be developed with an appropriate level of robustness. Since research at present is mainly concerned with how functional dependence in CPSs affects development, its impact on runtime behavior remains conjecture. In this paper, we present AirborneCPS, a simulation tool for functionally dependent CPSs that simulates runtime behavior and aids in the identification of undesired functional interactions.
Contemporary enterprise systems focus primarily on performance and development/maintenance costs. Dealing with cyber-threats and system compromise is relegated to good coding (i.e., defensive programming) and a secure environment (e.g., patched OS, firewalls, etc.). This approach, while a necessary start, is not sufficient: such security relies on no missteps, while compromise needs only a single flaw. Consequently, we must design for compromise and mitigate its impact. One approach is to utilize fine-grained modularization and isolation. In such a system, decomposition ensures that compromise of a single module presents limited and known risk of data/resource theft and denial. We propose mechanisms for automating such modular composition and consider their system performance impact.
Complex software is built by composing components implementing largely independent blocks of functionality. However, once the sources are compiled into an executable, that modularity is lost. This is unfortunate for code recipients, for whom knowing the components has many potential benefits, such as improved program understanding for reverse engineering, identifying shared code across different programs, binary code reuse, and authorship attribution. Here, a novel approach is proposed for decomposing such source-free program executables into components. Given an executable, the approach first statically builds a decomposition graph, where nodes are functions and edges capture three types of relationships: code locality, data references, and function calls. It then applies a graph-theoretic approach to partition the functions into disjoint components. A prototype implementation, BCD, demonstrates the approach's efficacy: evaluated on 25 C++ binary programs, BCD recovers the methods belonging to each class with high precision and recall.
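As a rough illustration of the graph construction and partitioning step described in this abstract (not the authors' implementation), the following Python sketch folds the three edge types into one weighted graph with networkx and partitions it with modularity-based community detection; the edge weights and the choice of community algorithm are assumptions made for this example.

```python
# Illustrative BCD-style decomposition sketch. Assumes function names and the
# three relationship edge sets were already extracted by a disassembler.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def build_decomposition_graph(functions, locality_edges, data_ref_edges, call_edges):
    g = nx.Graph()
    g.add_nodes_from(functions)
    # Fold the three relationship types into a single weighted graph
    # (the relative weights here are arbitrary illustrative choices).
    for edges, weight in ((locality_edges, 1.0), (data_ref_edges, 2.0), (call_edges, 3.0)):
        for u, v in edges:
            prev = g.get_edge_data(u, v, {"weight": 0.0})["weight"]
            g.add_edge(u, v, weight=prev + weight)
    return g

def partition_into_components(g):
    # Graph-theoretic partition of functions into disjoint components.
    return [set(c) for c in greedy_modularity_communities(g, weight="weight")]

# Toy usage: two "classes" linked internally by calls/data, loosely by locality.
g = build_decomposition_graph(
    functions=["A::f", "A::g", "B::f", "B::g"],
    locality_edges=[("A::g", "B::f")],
    data_ref_edges=[("A::f", "A::g"), ("B::f", "B::g")],
    call_edges=[("A::f", "A::g"), ("B::f", "B::g")],
)
print(partition_into_components(g))  # expect {A::f, A::g} and {B::f, B::g}
```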
There is an inevitable trade-off between spatial and spectral resolution in optical remote sensing images. A number of techniques have been developed for fusing multimodal images with different spatial and spectral characteristics to generate optical images with both high spatial and high spectral resolution. Although some of these techniques take the spectral and spatial blurring process into account, no existing method attempts to retrieve an optical image with both high spatial and high spectral resolution, a spatial blurring filter, and a spectral response simultaneously. In this paper, we propose a new framework for spatial resolution enhancement by fusing multiple optical images with different characteristics based on tensor decomposition. An optical image with both high spatial and high spectral resolution, together with a spatial blurring filter and a spectral response, is generated via canonical polyadic (CP) decomposition of a set of tensors. Experimental results show that reasonable reconstructions are obtained through regularization based on nonnegativity and coupling.
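The CP decomposition at the core of this framework can be sketched in plain numpy. The toy alternating-least-squares routine below fits a nonnegative 3-way CP model; the rank, the clipping-based nonnegativity, and the iteration count are arbitrary assumptions, and the paper's coupled, regularized formulation is more involved.

```python
# Toy nonnegative CP (PARAFAC) via alternating least squares; illustrative
# only, not the authors' implementation.
import numpy as np

def unfold(t, mode):
    # Mode-n unfolding (row-major): shape (t.shape[mode], prod of the rest).
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def khatri_rao(a, b):
    # Column-wise Kronecker product: (I*J, R) from (I, R) and (J, R).
    return np.einsum("ir,jr->ijr", a, b).reshape(-1, a.shape[1])

def nn_cp(t, rank, iters=200):
    assert t.ndim == 3
    factors = [np.random.rand(s, rank) for s in t.shape]
    for _ in range(iters):
        for mode in range(3):
            others = [f for i, f in enumerate(factors) if i != mode]
            kr = khatri_rao(others[0], others[1])
            gram = (others[0].T @ others[0]) * (others[1].T @ others[1])
            f = unfold(t, mode) @ kr @ np.linalg.pinv(gram)
            factors[mode] = np.clip(f, 1e-12, None)  # crude nonnegativity
    return factors

# Example: recover rank-2 factors of a small synthetic tensor.
a, b, c = (np.random.rand(s, 2) for s in (6, 5, 4))
tensor = np.einsum("ir,jr,kr->ijk", a, b, c)
A, B, C = nn_cp(tensor, rank=2)
print(np.linalg.norm(tensor - np.einsum("ir,jr,kr->ijk", A, B, C)))  # small
```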
Modern personal computers have embraced increasingly powerful Graphics Processing Units (GPUs). Recently, GPU-based graphics acceleration in web apps (i.e., applications running inside a web browser) has become popular. WebGL is the main effort to provide OpenGL-like graphics for web apps, and it is currently used in 53% of the top-100 websites. Unfortunately, WebGL has posed serious security concerns, as several attack vectors have been demonstrated through it. Web browsers' solutions to these attacks have been reactive: discovered vulnerabilities have been patched and new runtime security checks have been added. Unfortunately, this approach leaves the system vulnerable to zero-day exploits, especially given the large Trusted Computing Base of the graphics plane. We present Sugar, a novel operating system solution that enhances the security of GPU acceleration for web apps by design. The key idea behind Sugar is using a dedicated virtual graphics plane for each web app by leveraging modern GPU virtualization solutions. A virtual graphics plane consists of a dedicated virtual GPU (or vGPU) as well as the entire software graphics stack (including the device driver). Sugar enhances system security since a virtual graphics plane is fully isolated from the rest of the system. Despite GPU virtualization overhead, we show that Sugar achieves high performance. Moreover, unlike current systems, Sugar is able to use two underlying physical GPUs, when available, to co-render the User Interface (UI): one GPU provides virtual graphics planes for web apps and the other provides the primary graphics plane for the rest of the system. Such a design not only provides strong security guarantees but also enhanced performance isolation.
As an extension of Network Function Virtualization, microservice architectures are a promising way to design future network services. At the same time, Information-Centric Networking architectures like NDN would benefit from this paradigm, which offers more design choices to the network architect while facilitating the deployment and operation of the network. We propose μNDN, an orchestrated suite of microservices as an alternative way to implement NDN forwarding and support functions. We describe the seven essential microservices we developed, explain the design choices behind our solution, and describe how it is orchestrated. We evaluate each service in isolation and the entire microservice architecture through two realistic scenarios to show its ability to react to and mitigate certain performance and security issues thanks to orchestration. Our results show that μNDN can replace a monolithic NDN forwarder while being more powerful and scalable.
Networked control systems improve the efficiency of cyber-physical plants both functionally, by the availability of data generated even in far-flung locations, and operationally, by the adoption of standard protocols. A side-effect, however, is that the safety and stability of a local process and, in turn, of the entire plant are now more vulnerable to malicious agents. Leveraging the communication infrastructure, the authors here present the design of networked control systems with built-in resilience. Specifically, the paper addresses attacks known as false data injections that originate within compromised sensors. In the proposed framework for closed-loop control, the feedback signal is constructed by weighted consensus of estimates of the process state gathered from other interconnected processes. Observers are introduced to generate the state estimates from the local data. Side-channel monitors are attached to each primary sensor in order to assess proper code execution. These monitors provide estimates of the trust assigned to each observer output that are, importantly, independent of that output; these trust estimates serve as weights in the consensus algorithm. The authors tested the concept on a multi-sensor networked physical experiment with six primary sensors. The weighted consensus was demonstrated to yield a feedback signal within specified accuracy even if four of the six primary sensors were injecting false data.
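The trust-weighted consensus step lends itself to a small sketch. Below, each observer's state estimate is averaged with weights supplied by its side-channel monitor; the array shapes and the normalization are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal trust-weighted consensus sketch: each observer i supplies a state
# estimate x_i, and its side-channel monitor supplies a trust weight w_i.
import numpy as np

def consensus_feedback(estimates, trust):
    """estimates: (n_sensors, state_dim) array; trust: (n_sensors,) weights."""
    w = np.asarray(trust, dtype=float)
    if w.sum() <= 0:
        raise RuntimeError("no trusted sensors available")
    w = w / w.sum()                      # normalize trust weights
    return w @ np.asarray(estimates)     # weighted average of estimates

# Example: six sensors, four compromised (biased readings, low trust).
x = np.array([[1.00], [1.02], [9.0], [8.5], [7.9], [9.3]])
w = np.array([0.90, 0.95, 0.05, 0.02, 0.04, 0.01])
print(consensus_feedback(x, w))  # stays close to the two trusted readings
```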
Software deobfuscation is a key challenge in malware analysis: one must understand the internal logic of the code and establish adequate countermeasures. In order to defeat recent obfuscation techniques, state-of-the-art generic deobfuscation methodologies are based on dynamic symbolic execution (DSE). However, DSE suffers from limitations in code coverage and scalability. In the race to counter and remove the most advanced obfuscation techniques, there is a need to reduce the amount of code to cover. To that end, we propose DoSE, a novel deobfuscation approach based on semantic equivalence. With DoSE, we aim to improve and complement DSE-based deobfuscation techniques by statically eliminating obfuscation transformations built on code reuse, which improves code coverage. Our method's novelty comes from transposing existing binary diffing techniques, namely semantic equivalence checking, to the deobfuscation of previously untreated techniques, such as the two-way opaque constructs encountered in surreptitious software. To challenge DoSE, we used both known malware, such as Cryptowall, WannaCry, Flame, and BitCoinMiner, and obfuscated code samples. Our experimental results show that DoSE is an efficient strategy for detecting code-reuse-based obfuscation transformations with low false-positive and false-negative rates in practice, achieving up to 63% code reduction on certain types of malware.
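As a simplified stand-in for the semantic equivalence checking that DoSE transposes from binary diffing, the sketch below models two code fragments as functions over a machine word and compares them on random inputs. Random testing can only refute equivalence, whereas the paper's static approach is more principled; the 32-bit word size and trial count are assumptions.

```python
# Sampling-based semantic equivalence check between two code fragments,
# modeled here as Python functions over a fixed-width machine word.
import random

def likely_equivalent(f, g, trials=10_000, bits=32):
    mask = (1 << bits) - 1
    for _ in range(trials):
        x = random.getrandbits(bits)
        if (f(x) & mask) != (g(x) & mask):
            return False            # witness found: definitely not equivalent
    return True                     # no counterexample found in the samples

# Example: x*2 and x<<1 are semantically equivalent fragments an obfuscator
# might duplicate; detecting the duplication lets us merge/eliminate code.
print(likely_equivalent(lambda x: x * 2, lambda x: x << 1))  # True
```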
With the expansion of the Internet and the growth of datasets, many organizations have started to use the cloud. Cloud computing moves application software and databases to centralized large data centers, where the management of the data and services may not be fully trustworthy; as a result, the cloud faces many threats. In this work, we study the problem of ensuring the integrity of data storage in cloud computing. To reduce the computational cost at the user side during integrity verification of their data, the notion of public verifiability has been proposed. Our approach is to create a new entity named the Cloud Service Controller (CSC), which helps reduce the trust placed in the Third Party Auditor (TPA). We strengthen the security model by using AES encryption with SHA-512 tag generation. In this paper we briefly introduce the file upload phase, file integrity verification, and Proof of Retrievability of the file.
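Since the abstract does not pin down the exact scheme, the following sketch shows one standard way to combine AES encryption with SHA-512 tag generation (AES-CTR plus HMAC-SHA512, encrypt-then-MAC) using the Python cryptography package; the key handling and block layout are illustrative assumptions.

```python
# Encrypt-then-tag sketch: AES-CTR for confidentiality, HMAC-SHA512 tags for
# integrity checks (e.g., by a TPA/CSC). Requires: pip install cryptography.
import os, hmac, hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def protect_block(enc_key: bytes, mac_key: bytes, block: bytes):
    # enc_key must be a 16/24/32-byte AES key.
    nonce = os.urandom(16)
    enc = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
    ct = enc.update(block) + enc.finalize()
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha512).digest()
    return nonce, ct, tag

def verify_block(mac_key: bytes, nonce: bytes, ct: bytes, tag: bytes) -> bool:
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha512).digest()
    return hmac.compare_digest(tag, expected)  # constant-time comparison

ek, mk = os.urandom(32), os.urandom(32)
n, c, t = protect_block(ek, mk, b"file block 0")
print(verify_block(mk, n, c, t))  # True; any tampering flips this to False
```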
Named Data Networking (NDN) is the most mature proposal of the Information Centric Networking paradigm, a clean-slate approach for the Future Internet. Although NDN was designed to tackle security issues inherent to IP networks natively, newly introduced security attacks in its transitional phase threaten NDN's practical deployment. Therefore, a security monitoring plane for NDN is indispensable before any potential deployment of this novel architecture in an operating context by any provider. We propose an approach for monitoring and anomaly detection in NDN nodes leveraging Bayesian network techniques. A list of monitored metrics is introduced as a quantitative measure to characterize the behavior of an NDN node. By leveraging hypothesis testing theory, a micro detector is developed to detect whenever a metric deviates significantly from its normal behavior. A Bayesian network structure that correlates alarms from the micro detectors is designed based on expert knowledge of the NDN specification and the NFD implementation. The relevance and performance of our security monitoring approach are demonstrated on the Content Poisoning Attack (CPA), one of the most critical attacks in NDN, using extensive experimental data collected from a real NDN deployment.
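A micro detector of the kind described can be sketched as a simple significance test on a single metric. The Gaussian baseline model and the three-sigma threshold below are illustrative assumptions; the paper's detectors are derived from hypothesis testing theory and may differ in form.

```python
# Toy micro detector: learn a Gaussian baseline for one monitored metric and
# alarm when a new observation deviates significantly from it.
import math

class MicroDetector:
    def __init__(self, baseline, threshold=3.0):
        n = len(baseline)
        self.mean = sum(baseline) / n
        var = sum((x - self.mean) ** 2 for x in baseline) / n
        self.std = math.sqrt(var) or 1e-9   # guard against zero variance
        self.threshold = threshold          # alarm beyond this many sigmas

    def alarm(self, value) -> bool:
        return abs(value - self.mean) / self.std > self.threshold

# Example: monitor a hypothetical per-second Interest count on an NDN node;
# alarms from many such detectors would feed the Bayesian network.
d = MicroDetector(baseline=[10, 12, 11, 9, 13, 10, 11])
print(d.alarm(11), d.alarm(40))   # False, True
```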
Modern cyber-physical systems are complex networked computing systems that electronically control physical systems. Autonomous road vehicles are an important and increasingly ubiquitous instance. Unfortunately, their increasing complexity often leads to security vulnerabilities. Network connectivity exposes these vulnerable systems to remote software attacks that can result in real-world physical damage, including vehicle crashes and loss of control authority. We introduce an integrated architecture to provide provable security and safety assurance for cyber-physical systems by ensuring that safety-critical operations and control cannot be unintentionally affected by potentially malicious parts of the system. Fine-grained information flow control is used to design both hardware and software, determining how low-integrity information can affect high-integrity control decisions. This security assurance is used to improve end-to-end security across the entire cyber-physical system. We demonstrate this integrated approach by developing a mobile robotic testbed modeling a self-driving system and testing it with a malicious attack.
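A toy software analogue of the fine-grained integrity tracking described here may help fix ideas; the paper co-designs hardware and software, whereas this sketch shows only the core rule that low-integrity information must not drive high-integrity control decisions without an explicit, checked endorsement. All names are hypothetical.

```python
# Minimal integrity-label sketch: values carry a label, actuation demands
# HIGH integrity, and upgrades happen only through declared endorsement.
from dataclasses import dataclass

HIGH, LOW = "high", "low"

@dataclass
class Labeled:
    value: float
    integrity: str   # HIGH or LOW

def actuate(command: Labeled):
    if command.integrity != HIGH:
        raise PermissionError("low-integrity data cannot drive actuators")
    print(f"steering command applied: {command.value}")

def endorse(x: Labeled, sanity_check) -> Labeled:
    # Explicit endorsement point: upgrade only after a declared check passes.
    if not sanity_check(x.value):
        raise ValueError("endorsement check failed")
    return Labeled(x.value, HIGH)

net_input = Labeled(0.3, LOW)                    # arrived over the network
actuate(endorse(net_input, lambda v: -1 <= v <= 1))
```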
We present an intelligent system that focuses on automatically ensuring the stability of a ZigBee network. First, we discuss the characteristics of ZigBee compared with Wi-Fi, pointing out that ZigBee's advantages lie in security, stability, low power consumption, and better expandability. Second, we identify ZigBee's practical shortcomings: the physical limitations of its frequency band and its weak diffraction capability, especially when a signal must pass through a wall or a door in a real home environment. Third, we put forward a method to ensure the strength of the ZigBee signal: the strength of the ZigBee relay is detected in advance and compared with a previously defined threshold, where the threshold is the minimal tolerable value that can ensure stable ZigBee transmission. If the detected value falls below the threshold, the system prompts a warning message, prompting the user to add a ZigBee relay between the original ZigBee node and the ZigBee gateway.
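The threshold check described above amounts to a few lines of code. In the sketch below, the RSSI threshold value and the read_rssi() helper are hypothetical placeholders for hardware-specific values and drivers.

```python
# Sketch of the described signal-strength check, assuming the relay exposes
# an RSSI reading in dBm via some hardware-specific helper.
RSSI_THRESHOLD_DBM = -80   # assumed minimal level for stable transmission

def check_relay(read_rssi):
    rssi = read_rssi()     # measure the ZigBee relay's signal strength
    if rssi < RSSI_THRESHOLD_DBM:
        print(f"Warning: signal {rssi} dBm is below threshold; add a ZigBee "
              "relay between the original node and the gateway.")
        return False
    return True

check_relay(lambda: -85)   # simulated weak reading -> prints the warning
```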
Dynamic interaction appears in many real-world scenarios where players are able to observe (perhaps imperfectly) the actions of another player and react accordingly. We consider the baseline representation of dynamic games, the extensive form, and focus on computing a Stackelberg equilibrium (SE), where the leader commits to a strategy to which the follower plays a best response. For one-shot games (e.g., security games), strategy-generation (SG) algorithms offer dramatic speed-ups by incrementally expanding the strategy spaces. However, a direct application of SG to extensive-form games (EFGs) does not bring a similar speed-up, since it typically results in a nearly complete strategy space. Our contributions are twofold: (1) for the first time, we introduce an algorithm that incrementally expands the strategy space to find an SE in EFGs; (2) we introduce a heuristic variant of the algorithm that is theoretically incomplete but in practice finds an exact (or close-to-optimal) Stackelberg equilibrium while constructing a significantly smaller strategy space. Our experimental evaluation confirms that we are able to compute SE by considering only a fraction of the strategy space, which often leads to a significant speed-up in computation times.
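The incremental expansion idea can be outlined as a generic double-oracle-style skeleton; solve_restricted and best_response below are abstract stand-ins for the paper's subroutines, and this sketch omits the EFG-specific machinery (and the heuristic variant) that makes the approach work.

```python
# Generic incremental strategy-generation skeleton for a leader-follower
# setting: solve a restricted game, expand with best responses, repeat.
def incremental_stackelberg(init_leader, init_follower,
                            solve_restricted, best_response, max_iters=1000):
    L, F = {init_leader}, {init_follower}        # restricted strategy spaces
    for _ in range(max_iters):
        leader_strat, follower_strat = solve_restricted(L, F)  # SE of restricted game
        grew = False
        for side, pool, opponent_strat in (("leader", L, follower_strat),
                                           ("follower", F, leader_strat)):
            br = best_response(side, opponent_strat)  # oracle over the full game
            if br not in pool:
                pool.add(br)                     # expand the restricted space
                grew = True
        if not grew:                             # no improving strategy exists,
            return leader_strat, follower_strat  # so the restricted SE stands
    raise RuntimeError("iteration budget exhausted")
```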