Biblio
The only book of its kind, Cyber-Physical Systems addresses CPS from three perspectives. First, it presents the challenges and innovations associated with this class of systems as they have arisen in a wide spectrum of application domains. Second, it describes the foundations that underlie CPS solutions, both in terms of what we know and emerging research challenges. Finally, it offers guiding principles for all levels, from specific design and analysis advice for practitioners to high-level perspectives that can guide the direction of new innovations.
As systems continue to evolve, they rely less on human decision-making and more on computational intelligence. This trend, in conjunction with available technologies for advanced sensing, measurement, process control, and communication, leads toward the new field of Cyber-Physical Systems (CPS). Cyber-physical systems are expected to play a major role in the design and development of future engineering platforms, with new capabilities that far exceed today's levels of autonomy, functionality, and usability. Although these systems exhibit remarkable characteristics, their design and implementation are challenging, as numerous heterogeneous components and services have to be appropriately modeled and simulated together. The problem of designing efficient CPS becomes far more challenging when the target system must also meet real-time constraints.
Cyber-Physical Systems: Decision Making Mechanisms and Applications describes essential theory, recent research, and large-scale use cases that address urgent challenges in CPS architectures. In particular, it includes chapters on:
- Decision making for large-scale CPS
- Modeling of CPS with emphasis on the control mechanisms
- Hardware/software implementation of the control mechanisms
- Fault tolerance and reliability issues for the control mechanisms
- Cyber-physical use cases that incorporate challenging decision making
With the rapid development of the smart grid, smart meters are deployed at energy consumers' premises to collect real-time usage data. Although such a communication model can help the control center of the energy producer improve the efficiency and reliability of electricity delivery, it also raises security issues. For example, the real-time data involves the customers' privacy: attackers may violate that privacy to plan break-ins, or they may tamper with the transmitted data for their own benefit. Many data aggregation schemes have therefore been proposed for privacy preservation. However, few of them address both data aggregation and fine-grained access control to improve data utility. In this paper, we propose a data aggregation scheme based on an attribute decision tree. Security analysis illustrates that our scheme achieves data integrity, data privacy preservation, and fine-grained data access control. Experimental results show that our scheme is more efficient than existing schemes.
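The abstract does not spell out the aggregation mechanism, so the following is only a minimal sketch of the general idea behind privacy-preserving metering aggregation, using pairwise-cancelling random masks rather than the paper's attribute-decision-tree construction; all names and parameters are hypothetical.

```java
import java.math.BigInteger;
import java.security.SecureRandom;

// Illustrative masking-based aggregation: each meter blinds its reading with
// a random mask; the masks are generated to sum to zero mod P, so the
// aggregator can recover only the total consumption, not individual readings.
public class MaskedAggregationDemo {
    static final BigInteger P = BigInteger.probablePrime(64, new SecureRandom());

    public static void main(String[] args) {
        SecureRandom rng = new SecureRandom();
        long[] readings = {42, 17, 63, 8};            // hypothetical meter readings
        int n = readings.length;

        // Generate n masks that sum to zero mod P.
        BigInteger[] masks = new BigInteger[n];
        BigInteger sum = BigInteger.ZERO;
        for (int i = 0; i < n - 1; i++) {
            masks[i] = new BigInteger(63, rng);
            sum = sum.add(masks[i]);
        }
        masks[n - 1] = sum.negate().mod(P);

        // Each meter reports reading + mask (mod P); individual reports look random.
        BigInteger aggregate = BigInteger.ZERO;
        for (int i = 0; i < n; i++) {
            BigInteger report = BigInteger.valueOf(readings[i]).add(masks[i]).mod(P);
            aggregate = aggregate.add(report);
        }

        // Masks cancel, leaving the true total (readings are far smaller than P).
        System.out.println("Recovered total: " + aggregate.mod(P)); // prints 130
    }
}
```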
Applying trust principles in the Internet of Things (IoT) makes it possible to provide more trustworthy services to the corresponding stakeholders. The most common method of assessing trust in IoT applications is to estimate the trust level of the end entities (entity-centric) relative to the trustor. In these systems, the trust level of the data is assumed to be the same as the trust level of the data source. However, most IoT-based systems are data-centric and operate in dynamic environments, which require immediate actions without waiting for a trust report from end entities. We address this challenge by extending our previous proposals on trust establishment for entities, based on their reputation, experience, and knowledge, to trust estimation of data items [1-3]. First, we present a hybrid trust framework for evaluating both data trust and entity trust, which could serve as a basis for standardization in a future data-driven society. The modules of the proposed framework, including data trust metric extraction, data trust aggregation, evaluation, and prediction, are elaborated. Finally, a possible design model is described to implement the proposed ideas.
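The framework's aggregation step is described only at the block-diagram level; the sketch below shows one plausible reading of it, a weighted blend of the source entity's reputation, experience, and knowledge scores with data-level metrics such as freshness. The weights, score ranges, and metric names are assumptions, not the paper's actual formulas.

```java
// Hypothetical trust aggregation: entity trust is a weighted mix of
// reputation, experience, and knowledge; data trust blends the source's
// entity trust with data-level metrics. All scores lie in [0, 1].
public class TrustDemo {
    static double entityTrust(double reputation, double experience, double knowledge) {
        final double wR = 0.4, wE = 0.3, wK = 0.3;   // assumed weights, summing to 1
        return wR * reputation + wE * experience + wK * knowledge;
    }

    // Freshness decays exponentially with the data item's age.
    static double freshness(double ageSeconds, double halfLifeSeconds) {
        return Math.pow(0.5, ageSeconds / halfLifeSeconds);
    }

    static double dataTrust(double sourceEntityTrust, double freshness, double consistency) {
        final double wS = 0.5, wF = 0.25, wC = 0.25; // assumed weights
        return wS * sourceEntityTrust + wF * freshness + wC * consistency;
    }

    public static void main(String[] args) {
        double et = entityTrust(0.9, 0.7, 0.8);
        double dt = dataTrust(et, freshness(120, 300), 0.95);
        System.out.printf("entity trust = %.3f, data trust = %.3f%n", et, dt);
    }
}
```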
Deregulated electricity markets rely on a two-settlement system consisting of day-ahead and real-time markets, across which electricity price is volatile. In such markets, locational marginal pricing is widely adopted to set electricity prices and manage transmission congestion. Locational marginal prices are vulnerable to measurement errors. Existing studies show that if the adversaries are omniscient, they can design profitable attack strategies without being detected by the residue-based bad data detectors. This paper focuses on a more realistic setting, in which the attackers have only partial and imperfect information due to their limited resources and restricted physical access to the grid. Specifically, the attackers are assumed to have uncertainties about the state of the grid, and the uncertainties are modeled stochastically. Based on this model, this paper offers a framework for characterizing the optimal stochastic guarantees for the effectiveness of the attacks and the associated pricing impacts.
As cloud computing becomes increasingly pervasive, it is critical for cloud providers to support basic security controls. Although major cloud providers tout such features, relatively little is known in many cases about their design and implementation. In this paper, we describe several security features in OpenStack, a widely used, open-source cloud computing platform. Our contributions to OpenStack range from key management and storage encryption to guaranteeing the integrity of virtual machine (VM) images prior to boot. We describe the design and implementation of these features in detail and provide a security analysis that enumerates the threats that each mitigates. Our performance evaluation shows that these security features have an acceptable cost; in some cases, the overhead is within the measurement error observed in an operational cloud deployment. Finally, we highlight lessons learned from our real-world experience contributing these features to OpenStack, as a way to encourage others to transition their research into practice.
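The abstract mentions verifying VM image integrity prior to boot but not how. As a generic illustration (not OpenStack's actual mechanism), the sketch below checks an image file's SHA-256 digest against an expected value recorded at upload time; the file name and digest are placeholders.

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;

// Generic integrity check: recompute the image digest and compare it with the
// digest recorded when the image was uploaded. A mismatch means the image was
// modified at rest and must not be booted.
public class ImageIntegrityCheck {
    static String sha256Hex(Path image) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        try (InputStream in = Files.newInputStream(image)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) md.update(buf, 0, n);
        }
        return HexFormat.of().formatHex(md.digest());
    }

    public static void main(String[] args) throws Exception {
        Path image = Path.of("vm-image.qcow2");      // placeholder image path
        String expected = "...";                     // placeholder: digest stored at upload time
        boolean ok = sha256Hex(image).equalsIgnoreCase(expected);
        System.out.println(ok ? "image intact, safe to boot" : "image tampered, refuse boot");
    }
}
```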
The start-up value of an SRAM cell is unique, random, and unclonable, as it is determined by the inherent process mismatch between transistors. These properties make SRAM an attractive circuit for generating encryption keys. The primary challenge for SRAM-based key generation, however, is poor stability when the circuit is subject to random noise, temperature and voltage changes, and device aging. Temporal majority voting (TMV) and bit masking were used in previous works to identify and store the locations of unstable or marginally stable SRAM cells. However, TMV requires a long test time and significant hardware resources. In addition, the number of repetitive power-ups required to find the most stable cells is prohibitively high. To overcome the shortcomings of TMV, we propose a novel data-remanence-based technique to detect the SRAM cells with the highest stability for reliable key generation. This approach requires only two remanence tests: writing '1' (or '0') to the entire array and momentarily shutting down the power until a few cells flip. We exploit the fact that the cells that are easily flipped are the most robust cells when written with the opposite data. The proposed method is more effective at finding the most stable cells in a large SRAM array than a TMV scheme with 1,000 power-up tests. Experimental studies show that the 256-bit key generated from a 512 kbit SRAM using the proposed data remanence method is 100% stable under different temperatures, power ramp-up times, and device aging.
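The selection step described above translates naturally into code. Below is a minimal simulation of the cell-selection logic under the stated premise: a cell that flips quickly after writing '1' is strongly skewed toward 0 (a robust 0-cell), and vice versa. The array contents stand in for two hypothetical remanence read-backs; acquiring real remanence data is hardware-specific and not shown.

```java
import java.util.ArrayList;
import java.util.List;

// Selects stable key cells from two remanence tests:
//   test1: array read-back after writing all 1s and a brief power-down
//   test0: array read-back after writing all 0s and a brief power-down
// Cells that flipped in test1 (now 0) are robust 0-cells; cells that flipped
// in test0 (now 1) are robust 1-cells. The skew direction is the key bit.
public class RemanenceKeyCells {
    record KeyCell(int index, int bit) {}

    static List<KeyCell> selectStableCells(int[] test1, int[] test0) {
        List<KeyCell> cells = new ArrayList<>();
        for (int i = 0; i < test1.length; i++) {
            if (test1[i] == 0) cells.add(new KeyCell(i, 0));      // flipped 1 -> 0
            else if (test0[i] == 1) cells.add(new KeyCell(i, 1)); // flipped 0 -> 1
        }
        return cells;
    }

    public static void main(String[] args) {
        int[] test1 = {1, 0, 1, 1, 0, 1, 1, 1};   // hypothetical read-back data
        int[] test0 = {0, 0, 0, 1, 0, 0, 1, 0};
        for (KeyCell c : selectStableCells(test1, test0))
            System.out.println("cell " + c.index() + " -> key bit " + c.bit());
    }
}
```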
Nowadays, cloud technology is a new paradigm of computing that attracts growing numbers of computer users, government agencies, and businesses. Cloud technology brings many advantages, particularly ubiquitous services that anyone can access over the Internet. With cloud computing, a company no longer needs its own physical servers or hardware to support its computer systems, networks, and Internet services. One of the core services offered by cloud technology is storing data in remote storage. In recent years, data storage has been recognized as an important problem in information technology. Cloud data storage raises a set of significant policy issues, including privacy, anonymity, security, government surveillance, telecommunication capacity, liability, and reliability, among others. Although cloud technology provides many benefits, security remains the most significant issue between the customer and the cloud. Cloud computing serves many kinds of customers, such as academia, enterprises, and ordinary users, who have various incentives to move to the cloud. For academic clients, security measures affect computing performance, so the cloud provider needs to find a method that combines performance and security; for such clients, high performance may be less critical than security. In this paper, the central issue is security, but viewed from a different perspective. We design an efficient, secure, and verifiable protocol for outsourcing data. We develop an extended QP (quadratic programming) problem protocol for storing and outsourcing data securely. To verify correctness, we validate the result returned by the cloud using the Karush-Kuhn-Tucker (KKT) conditions, which are necessary and sufficient for the optimal solution.
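The verification idea, checking the cloud's returned QP solution against the KKT conditions, can be sketched directly. For a convex QP, minimize (1/2) x^T Q x + c^T x subject to Ax <= b, the KKT conditions are stationarity (Qx + c + A^T lambda = 0), primal feasibility (Ax <= b), dual feasibility (lambda >= 0), and complementary slackness (lambda_i (Ax - b)_i = 0). The code below checks these numerically; the matrices and tolerance are illustrative, and the paper's extended protocol (e.g., masking the problem before outsourcing) is not shown.

```java
// Verifies a returned QP solution via the KKT conditions for
//   minimize (1/2) x^T Q x + c^T x   subject to   A x <= b,
// given the primal solution x and dual multipliers lambda from the cloud.
public class KktVerifier {
    static double[] matVec(double[][] M, double[] v) {
        double[] r = new double[M.length];
        for (int i = 0; i < M.length; i++)
            for (int j = 0; j < v.length; j++) r[i] += M[i][j] * v[j];
        return r;
    }

    static boolean verify(double[][] Q, double[] c, double[][] A, double[] b,
                          double[] x, double[] lambda, double tol) {
        double[] Ax = matVec(A, x);
        double[] Qx = matVec(Q, x);
        for (int i = 0; i < x.length; i++) {           // stationarity: Qx + c + A^T lambda = 0
            double g = Qx[i] + c[i];
            for (int k = 0; k < lambda.length; k++) g += A[k][i] * lambda[k];
            if (Math.abs(g) > tol) return false;
        }
        for (int k = 0; k < lambda.length; k++) {
            if (Ax[k] > b[k] + tol) return false;      // primal feasibility
            if (lambda[k] < -tol) return false;        // dual feasibility
            if (Math.abs(lambda[k] * (Ax[k] - b[k])) > tol) return false; // compl. slackness
        }
        return true;
    }

    public static void main(String[] args) {
        // Toy problem: minimize (1/2)x^2 - x subject to x <= 0.5; optimum x = 0.5, lambda = 0.5.
        double[][] Q = {{1}}; double[] c = {-1};
        double[][] A = {{1}}; double[] b = {0.5};
        System.out.println(verify(Q, c, A, b, new double[]{0.5}, new double[]{0.5}, 1e-9));
    }
}
```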
Data security has become an issue of increasing importance, especially for Web applications and distributed databases. One solution is to use cryptographic algorithms, whose improvement has become a constant concern. The increasing complexity of these algorithms involves higher execution times, leading to a decrease in application performance. This paper presents a comparison of execution times for three asymmetric-key algorithms, depending on the size of the encryption/decryption keys: RSA, ElGamal, and ECIES. For this comparison, a benchmark using Java APIs and an application for testing the algorithms on a test database were created.
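To give a flavor of such a benchmark, the sketch below times RSA encryption through the standard Java Cryptography Architecture for two key sizes. It is a minimal stand-in, not the paper's benchmark: ElGamal and ECIES are not in the default JCA providers and would need a third-party provider such as Bouncy Castle, and serious measurements would also require JVM warm-up and statistical repetition.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;

// Rough RSA encryption timing via the standard JCA, for two key sizes.
public class RsaTimingDemo {
    public static void main(String[] args) throws Exception {
        byte[] plaintext = new byte[100];   // small message; fits PKCS#1 padding at 1024 bits
        for (int keySize : new int[]{1024, 2048}) {
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
            kpg.initialize(keySize);
            KeyPair kp = kpg.generateKeyPair();

            Cipher cipher = Cipher.getInstance("RSA/ECB/PKCS1Padding");
            int iterations = 1000;
            long start = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                cipher.init(Cipher.ENCRYPT_MODE, kp.getPublic());
                cipher.doFinal(plaintext);
            }
            long avgMicros = (System.nanoTime() - start) / iterations / 1000;
            System.out.println("RSA-" + keySize + ": ~" + avgMicros + " us per encryption");
        }
    }
}
```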
The risk posed by insider threats has usually been approached by analyzing the behavior of users solely in the cyber domain. In this paper, we show the viability of using physical movement logs, collected via a building access control system, together with an understanding of the layout of the building housing the system's assets, to detect malicious insider behavior that manifests itself in the physical domain. In particular, we propose a systematic framework that uses contextual knowledge about the system and its users, learned from historical data gathered from a building access control system, to select suitable models for representing movement behavior. We then explore the online usage of the learned models, together with knowledge about the layout of the building being monitored, to detect malicious insider behavior. Finally, we show the effectiveness of the developed framework using real-life data traces of user movement in railway transit stations.
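The paper selects movement models from contextual knowledge; as one simple concrete possibility (an assumption, not necessarily the authors' choice), the sketch below fits a first-order Markov chain over building zones from historical access-log sequences and scores a new movement sequence by its average negative log-likelihood, so that unusually improbable paths stand out. Zone names and data are hypothetical.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// First-order Markov model of zone-to-zone movement, trained on historical
// access-control logs, with Laplace smoothing for unseen transitions.
public class MovementAnomalyDetector {
    private final Map<String, Map<String, Integer>> counts = new HashMap<>();
    private final Map<String, Integer> totals = new HashMap<>();
    private int zoneCount;

    void train(List<List<String>> historicalPaths) {
        var zones = new java.util.HashSet<String>();
        for (List<String> path : historicalPaths) {
            zones.addAll(path);
            for (int i = 0; i + 1 < path.size(); i++) {
                counts.computeIfAbsent(path.get(i), k -> new HashMap<>())
                      .merge(path.get(i + 1), 1, Integer::sum);
                totals.merge(path.get(i), 1, Integer::sum);
            }
        }
        zoneCount = zones.size();
    }

    // Average negative log-likelihood of a path; higher means more anomalous.
    double anomalyScore(List<String> path) {
        double nll = 0;
        for (int i = 0; i + 1 < path.size(); i++) {
            int c = counts.getOrDefault(path.get(i), Map.of()).getOrDefault(path.get(i + 1), 0);
            int t = totals.getOrDefault(path.get(i), 0);
            nll -= Math.log((c + 1.0) / (t + zoneCount));   // Laplace smoothing
        }
        return nll / Math.max(1, path.size() - 1);
    }

    public static void main(String[] args) {
        var d = new MovementAnomalyDetector();
        d.train(List.of(List.of("lobby", "office", "lab"), List.of("lobby", "office", "office")));
        System.out.println(d.anomalyScore(List.of("lobby", "lab")));  // rare transition scores high
    }
}
```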
To appear
A Distributed Denial of Service (DDoS) attack is a congestion-based attack that makes both network and host-based resources unavailable to legitimate users by sending flooding attack packets to the victim's resources. The absence of predefined rules for correctly identifying genuine network flows makes DDoS attack detection very difficult. In this paper, a combination of unsupervised data mining techniques is introduced as an intrusion detection system. The entropy concept, applied over windows of incoming packets, is combined with the Clustering Using REpresentatives (CURE) cluster analysis technique to detect DDoS attacks in network flows. The data is mainly collected from the DARPA2000, CAIDA2007, and CAIDA2008 datasets. The proposed approach has been evaluated and compared with several existing approaches in terms of accuracy, false alarm rate, detection rate, F-measure, and Phi coefficient. Results indicate the superiority of the proposed approach, with four out of five attack phases detected, more than 99% accuracy, a 96.29% detection rate, a false alarm rate of around 0%, a 97.98% F-measure, and a 97.98% Phi coefficient.
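The entropy-windowing step is straightforward to make concrete. The sketch below computes the Shannon entropy of a chosen packet attribute (here, source address) over fixed-size windows of incoming packets; a sharp deviation from the baseline entropy is the kind of signal the approach above would feed into clustering. The window size and attribute choice are illustrative assumptions.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Shannon entropy of packet source addresses over fixed-size windows.
// Flooding traffic typically shifts this entropy sharply away from the baseline.
public class WindowEntropy {
    static double entropy(List<String> window) {
        Map<String, Integer> freq = new HashMap<>();
        for (String src : window) freq.merge(src, 1, Integer::sum);
        double h = 0, n = window.size();
        for (int count : freq.values()) {
            double p = count / n;
            h -= p * (Math.log(p) / Math.log(2));   // entropy in bits
        }
        return h;
    }

    public static void main(String[] args) {
        // Hypothetical windows of packet source IPs (window size 8).
        List<String> normal = List.of("10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4",
                                      "10.0.0.5", "10.0.0.6", "10.0.0.1", "10.0.0.7");
        List<String> flood  = List.of("6.6.6.6", "6.6.6.6", "6.6.6.6", "6.6.6.6",
                                      "6.6.6.6", "6.6.6.6", "6.6.6.6", "10.0.0.1");
        System.out.printf("normal window entropy: %.2f bits%n", entropy(normal)); // high
        System.out.printf("flood window entropy:  %.2f bits%n", entropy(flood));  // low
    }
}
```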