Bibliography
This paper investigates the privacy-preservation problem in distributed consensus-based energy management considering both generation units and responsive demands in the smart grid. First, we show that, without any privacy-preserving strategy, private consumer information, including electricity consumption and the sensitivity of that consumption to the electricity price, can be disclosed. Then, we propose a privacy-preserving algorithm that protects this information by designing secret functions and adding zero-sum, exponentially decreasing noises. We also prove that the proposed algorithm preserves privacy while leaving the optimality of the final state and the convergence performance unchanged. Extensive simulations validate the theoretical results and demonstrate the effectiveness of the proposed algorithm.
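A minimal numerical sketch of the zero-sum, exponentially decreasing noise idea, assuming a complete communication graph and Gaussian noise; the averaging matrix W, decay rate rho, and telescoping noise construction are illustrative choices, not the paper's exact design.

```python
import numpy as np

# Each agent i adds theta_i(k) = nu_i(k) - nu_i(k-1) with nu_i(k) = rho**k * xi_i(k),
# so the injected noise telescopes toward zero and average consensus is preserved.
rng = np.random.default_rng(0)
n, rho, iters = 5, 0.6, 200
x = rng.uniform(0, 10, n)          # private initial states (e.g., consumption data)
true_avg = x.mean()

W = np.full((n, n), 1.0 / n)       # doubly stochastic averaging matrix (complete graph)
nu_prev = np.zeros(n)
for k in range(iters):
    nu = rho ** k * rng.standard_normal(n)   # exponentially decaying random sequence
    theta = nu - nu_prev                      # zero-sum (telescoping) noise
    x = W @ (x + theta)
    nu_prev = nu

print(abs(x.mean() - true_avg) < 1e-6)  # consensus value still matches the true average
```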
As the key component of the smart grid, smart meters bridge the gap between electrical utilities and household users. Today's smart meters can collect household power information in real time, providing precise power dispatching control services for utilities and informing users of real-time power prices, which significantly improves the user experience. However, the use of this data raises concerns about privacy leakage, and the trade-off between data usability and user privacy becomes a vital problem. Existing works propose privacy-utility trade-off frameworks against statistical inference attacks. However, these algorithms operate on distorted data, and therefore produce cumulative errors when tracing household power usage, lead to false power state estimates, mislead dispatching control, and become an obstacle to practical application. Furthermore, previous works model power usage as discrete variables in their optimization problems, whereas realistic smart meter data is continuous. In this paper, we propose a mechanism to estimate the trade-off between utility and privacy on a continuous time-series distorted dataset, extending previous optimization problems to a continuous-variable version. Experimental results on a smart meter dataset reveal that the proposed mechanism prevents inference of sensitive appliances, preserves insensitive appliances, and permits electrical utilities to trace household power usage periodically and efficiently.
Security is an important requirement of every reactive system in the smart grid. The devices connected to such smart systems are extensively used to provide digital information to the outside world, so the security of these systems is an essential requirement. The most important component of such smart systems is the Operating System (OS). This paper focuses on the security of the OS by incorporating an Access Control Mechanism (ACM), which improves the efficiency of the smart system. Formal methods use applied mathematics for the modelling and analysis of smart systems; in the proposed work, Formal Security Analysis (FSA) is used with model checking to prove the security of smart systems. When an OS is taken into consideration, it never reaches a halt state, so in the proposed work a Transition System (TS) is designed and the desired security rules are specified using Linear Temporal Logic (LTL). Unlike propositional and predicate logic, LTL can model reactive systems and reason about future states of the system. The Simple Promela Interpreter (SPIN) is used as a model checker that takes the LTL formulas and the TS of the system as input. It is then possible to derive a Büchi automaton from the LTL formulas, which provides traces of both successful and erroneous computations. Comparing the Büchi automaton with the transition behaviour of the OS reveals security violations in the system, and validating automaton operations on infinite computational sequences verifies whether the system is provably secure. Hence, the proposed formal security analysis provably ensures the security of smart systems in smart grid applications.
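As a hedged illustration of the kind of LTL property such an analysis checks (the propositions below are hypothetical, not taken from the paper), a liveness rule and a safety rule for an access control mechanism might read:

```latex
% Two hypothetical LTL security properties of the kind fed to SPIN:
% (1) every access request is eventually granted or denied by the ACM,
% (2) a process never holds kernel privileges without authorization.
\Box\,\bigl(\mathit{access\_request} \rightarrow \Diamond(\mathit{granted} \lor \mathit{denied})\bigr)
\qquad
\Box\,\lnot\bigl(\mathit{unauthorized} \land \mathit{kernel\_privilege}\bigr)
```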
Symmetric block ciphers, a core element for building cryptographic communication systems and protocols, are used to provide message confidentiality, authentication, and integrity. Various limitations in hardware and software resources, especially in terminal devices used in mobile communications, affect the selection of an appropriate cryptosystem and its parameters. In this paper, an implementation of three symmetric ciphers (DES, 3DES, AES) in different modes of operation is analyzed on the Android platform. The cryptosystems' performance is analyzed in different scenarios using several variable parameters: cipher, key size, plaintext size, and number of threads. The influence of parallelization supported by multi-core CPUs on cryptosystem performance is also analyzed. Finally, some conclusions about parameter selection for optimal efficiency are given.
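A minimal benchmarking sketch in Python using PyCryptodome, mirroring the kind of measurement the paper performs on Android (varying cipher, key size, and plaintext size); the buffer size and mode below are illustrative choices, and absolute timings will differ from the paper's platform.

```python
import time
from Crypto.Cipher import AES, DES, DES3
from Crypto.Random import get_random_bytes

plaintext = get_random_bytes(1024 * 1024)  # 1 MiB, a multiple of both block sizes

def bench(name, make_cipher):
    t0 = time.perf_counter()
    make_cipher().encrypt(plaintext)       # time one bulk encryption pass
    print(f"{name}: {time.perf_counter() - t0:.4f} s")

bench("DES-CBC (56-bit)",
      lambda: DES.new(get_random_bytes(8), DES.MODE_CBC, get_random_bytes(8)))
bench("3DES-CBC (168-bit)",
      lambda: DES3.new(DES3.adjust_key_parity(get_random_bytes(24)),
                       DES3.MODE_CBC, get_random_bytes(8)))
bench("AES-CBC (256-bit)",
      lambda: AES.new(get_random_bytes(32), AES.MODE_CBC, get_random_bytes(16)))
```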
In the Content-Centric Networking (CCN) architecture, content confidentiality is treated as an application-layer concern. Data is only encrypted if the producer and consumer agree on a suitable access control policy and enforcement mechanism. In contrast, transport encryption in TCP/IP applications is increasingly opportunistic for better privacy. This type of encryption is woefully lacking in CCN. To that end, we present TRAPS, a protocol to enable transparent packet security and opportunistic encryption for all CCN data. TRAPS builds on the assumption that knowledge of a name gives one access to the corresponding content; otherwise, by design, the content remains encrypted and secure. TRAPS builds on recent advances in memory hard functions and message-locked encryption to protect data in transit. We show that the security of TRAPS is dependent on the distribution of content names and argue that it can be significantly improved if secure sessions are used to transmit small pieces of information from producers to consumers. Our performance assessment indicates TRAPS is capable of providing opportunistic encryption to CCN without significant throughput loss for reasonable packet throughput measurements.
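A hedged sketch of the message-locked premise that knowing the name and content yields the key; the KDF labels, nonce derivation, and use of AES-GCM here are our assumptions for illustration, not the protocol's exact construction.

```python
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def mle_encrypt(name: bytes, content: bytes) -> bytes:
    # Key derived from the data itself: anyone who knows (name, content) can derive it.
    key = hashlib.sha256(b"traps-key" + name + content).digest()
    # Deterministic nonce per name (safe here because the key binds the content).
    nonce = hashlib.sha256(b"traps-nonce" + name).digest()[:12]
    return AESGCM(key).encrypt(nonce, content, name)

# Two parties that both know (name, content) derive identical ciphertext and key;
# anyone without that knowledge sees only opaque encrypted packets in transit.
c1 = mle_encrypt(b"/videos/demo/seg0", b"payload bytes")
c2 = mle_encrypt(b"/videos/demo/seg0", b"payload bytes")
assert c1 == c2
```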
As robots are applied in more and more fields, security issues have attracted increasing attention. In this paper, we propose that a key center in the cloud send polynomial information to each robot component; each component substitutes its information into the polynomial to derive a group of new keys for the next time slot, then checks and updates its key groups after a successful hash verification. Because the degree of the polynomial is higher than the number of components, an attacker cannot reconstruct the polynomial even after obtaining all the components' key values. Key material is discarded immediately after use, so an attacker cannot recover previously used session keys by compromising a component. This article addresses security problems in robotic systems caused by both cyber attacks and physical attacks.
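A toy sketch of the degree argument under stated assumptions (the field, degree, and key derivation below are hypothetical): with polynomial degree d greater than n - 1, interpolation needs d + 1 points, so all n component evaluations together still underdetermine the polynomial.

```python
import hashlib, secrets

P = 2**127 - 1                 # prime modulus (illustrative)
n, d = 4, 6                    # n components, polynomial degree d > n - 1

coeffs = [secrets.randbelow(P) for _ in range(d + 1)]   # held only by the key center

def f(x: int) -> int:
    # Horner evaluation of the secret polynomial over the prime field.
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

# Each component i hashes its evaluation into the next slot's session key;
# even all n points (i, f(i)) cannot recover the d+1 coefficients.
next_slot_keys = [hashlib.sha256(f(i).to_bytes(16, "big")).hexdigest()
                  for i in range(1, n + 1)]
```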
Authentication is one of the key aspects of securing applications and systems alike. While most existing systems achieve this using usernames and passwords, it has been repeatedly shown that this authentication method is not secure. Studies have shown that these systems have vulnerabilities which lead to cases of impersonation and identity theft, so such systems must be improved to protect sensitive data. In this research, we explore combining the user's location with traditional usernames and passwords as a multi-factor authentication system to make authentication more secure. The idea involves comparing a user's mobile device location with that of the browser and comparing the device's Bluetooth key with the key used during registration. We believe that by leveraging existing technologies such as Bluetooth and GPS we can reduce implementation costs while improving security.
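A minimal sketch of the two proposed checks (the distance threshold and data shapes are hypothetical): geographic proximity between the device's GPS fix and the browser's reported location, plus a constant-time comparison of the Bluetooth key against the registered one.

```python
import hmac, math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in kilometres.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 6371 * 2 * math.asin(math.sqrt(a))

def second_factor_ok(device_fix, browser_fix, bt_key, registered_bt_key, max_km=0.5):
    close_enough = haversine_km(*device_fix, *browser_fix) <= max_km
    key_matches = hmac.compare_digest(bt_key, registered_bt_key)  # constant-time compare
    return close_enough and key_matches
```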
As a consequence of the recent development of situational awareness technologies for smart grids, the gathering and analysis of data from multiple sources offer a significant opportunity for enhanced fault diagnosis. In order to achieve improved accuracy for both fault detection and classification, a novel combined data analytics technique is presented and demonstrated in this paper. The proposed technique is based on a segmented approach to Bayesian modelling that provides probabilistic graphical representations of both the electrical power and data communication networks. In this manner, the reliability of both networks is considered in order to improve overall power system transmission line fault diagnosis.
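A toy Bayes-rule calculation of the combined-evidence idea, with all probabilities hypothetical: the weight given to an electrical alarm is discounted by the reliability of the communication link that carried it.

```python
p_fault = 0.01                          # prior probability of a line fault
p_alarm_given_fault = 0.95              # alarm sensitivity
p_alarm_given_no_fault = 0.05           # false-alarm rate
p_link_ok = 0.9                         # reliability of the reporting comms link

# If the link failed, treat the received alarm as uninformative (likelihood 0.5).
lik_fault = p_link_ok * p_alarm_given_fault + (1 - p_link_ok) * 0.5
lik_clear = p_link_ok * p_alarm_given_no_fault + (1 - p_link_ok) * 0.5
posterior = lik_fault * p_fault / (lik_fault * p_fault + lik_clear * (1 - p_fault))
print(f"P(fault | alarm, link state) = {posterior:.3f}")
```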
Web applications have become the leading solution for systems that require global access, distribution, cost-effectiveness, and diverse content. At the same time, web application security has always been a major concern, given that 60% of Internet attacks target the web application platform. One of the biggest threats to this technology is the Cross-Site Scripting (XSS) attack, which occurs most frequently and consistently appears in the Top 10 list of the Open Web Application Security Project (OWASP). The vulnerabilities exploited by this attack arise from the absence of checking, testing, and attention to secure coding practices. There are several alternatives to prevent the attacks associated with this threat; a Network Intrusion Detection System can be used as one solution to counter XSS attacks. This paper investigates XSS attack recognition and detection using regular expression pattern matching and a preprocessing method. Experiments are conducted on a testbed with the aim of revealing the behaviour of the attack.
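A minimal sketch of regex-based XSS recognition with a preprocessing step; the patterns below are illustrative and far from exhaustive (and can produce false positives), whereas production rulesets are much larger.

```python
import re
from urllib.parse import unquote

XSS_PATTERNS = [
    re.compile(r"<\s*script[^>]*>", re.IGNORECASE),   # script tag injection
    re.compile(r"on\w+\s*=", re.IGNORECASE),          # inline event handlers
    re.compile(r"javascript\s*:", re.IGNORECASE),     # javascript: URIs
]

def looks_like_xss(payload: str) -> bool:
    # Preprocessing: double URL-decode and lowercase to defeat simple encoding evasion.
    normalized = unquote(unquote(payload)).lower()
    return any(p.search(normalized) for p in XSS_PATTERNS)

print(looks_like_xss("q=%3Cscript%3Ealert(1)%3C/script%3E"))  # True
```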
Large-scale sensing and actuation infrastructures have allowed buildings to achieve significant energy savings; at the same time, these technologies introduce significant privacy risks that must be addressed. In this paper, we present a framework for modeling the trade-off between improved control performance and increased privacy risks due to occupancy sensing. More specifically, we consider occupancy-based HVAC control as the control objective and the location traces of individual occupants as the private variables. Previous studies have shown that individual location information can be inferred from occupancy measurements. To ensure privacy, we design an architecture that distorts the occupancy data in order to hide individual occupant location information while maintaining HVAC performance. Using mutual information between the individual's location trace and the reported occupancy measurement as a privacy metric, we are able to optimally design a scheme to minimize privacy risk subject to a control performance guarantee. We evaluate our framework using real-world occupancy data: first, we verify that our privacy metric accurately assesses the adversary's ability to infer private variables from the distorted sensor measurements; then, we show that control performance is maintained through simulations of building operations using these distorted occupancy readings.
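A small sketch of the privacy metric on synthetic stand-in traces: empirical mutual information, in bits, between an occupant's location and the distorted occupancy reading, estimated from a joint histogram.

```python
import numpy as np

def mutual_information(x, y):
    # I(X;Y) = sum p(x,y) log2( p(x,y) / (p(x) p(y)) ) from empirical joint counts.
    joint = np.histogram2d(x, y, bins=(len(set(x)), len(set(y))))[0]
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
location = rng.integers(0, 4, 1000)                 # occupant zone per time step
occupancy = location + rng.integers(0, 3, 1000)     # distorted aggregate reading
print(f"I(location; occupancy) = {mutual_information(location, occupancy):.3f} bits")
```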
We propose a privacy-preserving framework for learning visual classifiers by leveraging distributed private image data. This framework is designed to aggregate multiple classifiers updated locally using private data and to ensure that no private information about the data is exposed during and after its learning procedure. We utilize a homomorphic cryptosystem that can aggregate the local classifiers while they are encrypted and thus kept secret. To overcome the high computational cost of homomorphic encryption of high-dimensional classifiers, we (1) impose sparsity constraints on local classifier updates and (2) propose a novel efficient encryption scheme named doubly-permuted homomorphic encryption (DPHE), which is tailored to sparse high-dimensional data. DPHE (i) decomposes sparse data into its constituent non-zero values and their corresponding support indices, (ii) applies homomorphic encryption only to the non-zero values, and (iii) employs double permutations on the support indices to keep them secret. Our experimental evaluation on several public datasets shows that the proposed approach achieves performance comparable to state-of-the-art visual recognition methods while preserving privacy, and significantly outperforms other privacy-preserving methods.
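A simplified single-party sketch of the DPHE decomposition, using the python-paillier library as a stand-in additively homomorphic cryptosystem and abbreviating the double permutation to a single one: encrypt only the non-zero values and hide the support behind a secret permutation.

```python
import random
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

w = [0.0, 0.0, 1.5, 0.0, -0.7, 0.0]                  # sparse local classifier update
support = [i for i, v in enumerate(w) if v != 0.0]    # non-zero support indices
values = [public_key.encrypt(w[i]) for i in support]  # HE applied only to non-zeros

perm = list(range(len(w)))                            # secret permutation over indices
random.shuffle(perm)
permuted_support = [perm[i] for i in support]         # support hidden from the server

# The server can add encrypted values from many parties without decrypting;
# here we just check round-trip correctness for a single party.
assert [round(private_key.decrypt(c), 6) for c in values] == [1.5, -0.7]
```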
Deep learning techniques have demonstrated the ability to perform a variety of object recognition tasks using visible imager data; however, deep learning has not been implemented as a means to autonomously detect and assess targets of interest in a physical security system. We demonstrate the use of transfer learning on a convolutional neural network (CNN) to significantly reduce training time while keeping detection accuracy high for targets relevant to physical security. Unlike many detection algorithms employed by video analytics within physical security systems, this method does not rely on temporal data to construct a background scene; targets of interest can halt motion indefinitely and still be detected by the implemented CNN. A key advantage of using deep learning is the ability of a network to improve over time: periodic retraining can lead to better detection and higher confidence rates. We investigate training data size versus CNN test accuracy using physical security video data. Due to the large number of visible imagers, the significant volume of data collected daily, and currently deployed human-in-the-loop ground-truth data, physical security systems present a unique environment that is well suited for analysis via CNNs. This could lead to the creation of an algorithmic element that reduces the human burden and decreases the number of human-analyzed nuisance alarms.
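A minimal transfer-learning sketch in PyTorch; the backbone, two-class head, and single training step are illustrative choices, not the paper's network, data, or training regime.

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse a pretrained CNN backbone and retrain only a new classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                       # freeze the feature extractor
model.fc = nn.Linear(model.fc.in_features, 2)         # new head: target vs. nuisance

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # train only the head
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)          # stand-in batch of security video frames
loss = criterion(model(x), torch.tensor([0, 1, 0, 1]))
loss.backward()
optimizer.step()
```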
In this paper, a game-theoretic solution concept is utilized to tackle the collusion attack in an SDN-based framework. In our proposed setting, the defenders (i.e., switches) are incentivized not to collude with the attackers in a repeated-game setting that utilizes a reputation system. We first illustrate our model and its components. We then use a socio-rational approach to provide a new anti-collusion solution, showing that cooperation with the SDN controller is always a Nash equilibrium due to the existence of a long-term utility function in our model.
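A toy calculation of the repeated-game argument, with hypothetical payoffs: the one-shot gain from collusion is dominated by the discounted stream of cooperation payoffs once a reputation system makes punishment permanent.

```python
delta = 0.9            # discount factor for future rounds
u_cooperate = 2.0      # per-round payoff of honest switch behaviour
u_collude = 5.0        # one-shot payoff from colluding with the attacker
u_punished = 0.0       # per-round payoff once reputation marks the switch as colluding

long_term_cooperate = u_cooperate / (1 - delta)                   # 20.0
long_term_collude = u_collude + delta * u_punished / (1 - delta)  # 5.0
assert long_term_cooperate > long_term_collude                    # deviation is irrational
```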
Each year, thousands of software vulnerabilities are discovered and reported to the public. Unpatched known vulnerabilities are a significant security risk. It is imperative that software vendors quickly provide patches once vulnerabilities are known and that users quickly install those patches as soon as they are available. However, most vulnerabilities are never actually exploited. Since writing, testing, and installing software patches can involve considerable resources, it would be desirable to prioritize the remediation of vulnerabilities that are likely to be exploited. Several published research studies have reported moderate success in applying machine learning techniques to the task of predicting whether a vulnerability will be exploited. These approaches typically use features derived from vulnerability databases (such as the summary text describing the vulnerability) or social media posts that mention the vulnerability by name. However, these prior studies share multiple methodological shortcomings that inflate the predictive power of their approaches. We replicate key portions of the prior work, compare their approaches, and show how the selection of training and test data critically affects the estimated performance of predictive models. The results of this study point to important methodological considerations that should be taken into account so that results reflect real-world utility.
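A small sketch of the central methodological point, on synthetic placeholder data: train/test splits should respect disclosure time, since a random split lets the model see vulnerabilities disclosed after the ones it is asked to predict.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
disclosure_day = np.sort(rng.integers(0, 365, n))   # vulnerability disclosure dates
X = rng.standard_normal((n, 10))                     # e.g., text/social-media features
y = rng.binomial(1, 0.1, n)                          # exploited-in-the-wild label

# Temporal split: train only on vulnerabilities disclosed before the cutoff,
# evaluate on strictly later ones (a random split would leak future information).
train = disclosure_day < 300
clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
print("AUC on strictly later vulnerabilities:",
      roc_auc_score(y[~train], clf.predict_proba(X[~train])[:, 1]))
```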
Once organizations suffer security incidents and breaches, they have to pay tremendous costs. Although visible costs, such as incident response, customer follow-up care, and legal costs, are predictable and calculable, it is difficult to evaluate and estimate invisible damage, such as lost customer loyalty, reputational impact, and damage to the brand. This paper proposes a new method, called "Event Study Methodology with Twitter Sentimental Analysis", to evaluate this invisible cost. The method helps to assess the impact of a security breach, including its effect on corporate valuation.
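A compact event-study sketch on synthetic numbers (the Twitter sentiment component is omitted): fit a market model over an estimation window, then cumulate abnormal returns around the breach announcement.

```python
import numpy as np

rng = np.random.default_rng(2)
market = rng.normal(0.0002, 0.01, 160)                   # daily market returns
stock = 0.0001 + 1.1 * market + rng.normal(0, 0.005, 160)
stock[150:] -= 0.004                                      # hypothetical post-breach drift

est, evt = slice(0, 140), slice(150, 160)                 # estimation vs. event window
beta, alpha = np.polyfit(market[est], stock[est], 1)      # market model: R = a + b*Rm
abnormal = stock[evt] - (alpha + beta * market[evt])      # actual minus expected return
print(f"CAR over event window: {abnormal.sum():+.4f}")    # cumulative abnormal return
```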
In the smart grid, large quantities of data are collected from various applications, such as smart metering, substation state monitoring, electric energy data acquisition, and smart home. Big data acquired in smart grid applications is usually sensitive. For instance, in order to dispatch accurately and support dynamic pricing, many smart meters are installed in users' houses to collect real-time data, but all of this collected data relates to user privacy. In this paper, we propose a data aggregation scheme based on secret sharing with fault tolerance in the smart grid, which ensures that the control center obtains the aggregated data without learning individual users' private readings. Meanwhile, we also consider fault tolerance during the data aggregation. Finally, we analyze the security of our scheme and carry out experiments to validate the results.
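A minimal additive secret-sharing sketch of the aggregation goal (the paper's exact scheme and its fault-tolerance mechanism are not reproduced): each meter splits its reading into random shares, and the control center can recover only the sum.

```python
import secrets

P = 2**61 - 1                                  # prime modulus (illustrative)
readings = [37, 52, 18]                        # private meter readings

def share(value, n_servers=3):
    # Random shares that sum to the value modulo P.
    shares = [secrets.randbelow(P) for _ in range(n_servers - 1)]
    return shares + [(value - sum(shares)) % P]

# Each aggregation server sums the shares it received from every meter...
per_meter_shares = [share(r) for r in readings]
server_sums = [sum(col) % P for col in zip(*per_meter_shares)]

# ...and the control center combines the per-server sums to learn only the total.
assert sum(server_sums) % P == sum(readings) % P
```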
Data outsourcing to the cloud is emerging as a successful paradigm that benefits organizations and enterprises with high-performance, low-cost, scalable data storage and sharing services. However, this paradigm also brings new challenges for data confidentiality, because the outsourced data are not under the physical control of the data owners. Existing schemes that aim at both security and usability usually apply encryption to the data before outsourcing them to the storage service providers (SSP), and disclose the decryption keys only to authorized users. However, they cannot ensure data security while operating on data in the cloud, where third-party services are usually semi-trusted, and they require considerable time to process the data. We construct a privacy-preserving data management system with hierarchical access control, called HAC-DMS, which not only assures security but also saves considerable time when updating data in the cloud.
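A small sketch of hierarchical access control via one-way key derivation (the labels and structure are hypothetical; HAC-DMS's actual construction is not reproduced): holding a parent key lets an authorized user derive every descendant key locally.

```python
import hashlib

def derive(parent_key: bytes, child_label: str) -> bytes:
    # One-way derivation: child keys are computable from the parent, never vice versa.
    return hashlib.sha256(parent_key + child_label.encode()).digest()

root = hashlib.sha256(b"data-owner-master-secret").digest()
dept_key = derive(root, "department/engineering")
team_key = derive(dept_key, "team/storage")

# The owner hands dept_key to a department admin, who can locally derive team_key
# without contacting the owner, while sibling subtrees' keys remain underivable.
assert derive(dept_key, "team/storage") == team_key
```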
We propose a new class of post-quantum digital signature schemes that: (a) derive their security entirely from the security of symmetric-key primitives, believed to be quantum-secure; (b) have extremely small keypairs; and (c) are highly parameterizable. In our signature constructions, the public key is an image y=f(x) of a one-way function f, and the secret key is x. A signature is a non-interactive zero-knowledge proof of knowledge of x that incorporates the message to be signed. For this proof, we leverage recent progress by Giacomelli et al. (USENIX'16) in constructing an efficient Σ-protocol for statements over general circuits. We improve this Σ-protocol to reduce proof sizes by a factor of two, at no additional computational cost. While this is of independent interest, as it yields more compact proofs for any circuit, it also decreases our signature sizes. We consider two ways to make the proof non-interactive: the Fiat-Shamir transform and Unruh's transform (EUROCRYPT'12, '15, '16). The former yields smaller signatures, while the latter has a security analysis in the quantum-accessible random oracle model. By customizing Unruh's transform to our application, its overhead is reduced to 1.6x compared to the Fiat-Shamir transform, which does not have a rigorous post-quantum security analysis. We implement and benchmark both approaches and explore possible choices of f, taking advantage of the recent trend toward practical symmetric ciphers with a particularly low number of multiplications, and end up using LowMC (EUROCRYPT'15).
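To make the Fiat-Shamir transform concrete, here is a toy illustration on a Schnorr-style protocol: the verifier's random challenge is replaced by a hash of the commitment and the message. This is for intuition only; the paper's Σ-protocol is over general circuits with a symmetric-key f, the parameters below are insecure toy sizes, and discrete-log-based examples are not post-quantum.

```python
import hashlib, secrets

p = 2**61 - 1               # Mersenne prime (toy modulus, NOT a secure size)
q = p - 1                   # exponent modulus; g**q = 1 mod p by Fermat
g = 3

x = secrets.randbelow(q)    # secret key
y = pow(g, x, p)            # public key y = f(x)

def challenge(commit: int, msg: bytes) -> int:
    # Fiat-Shamir: the challenge is a hash of the commitment and the message.
    return int.from_bytes(hashlib.sha256(commit.to_bytes(32, "big") + msg).digest(),
                          "big") % q

def sign(msg: bytes):
    r = secrets.randbelow(q)
    commit = pow(g, r, p)
    return commit, (r + challenge(commit, msg) * x) % q

def verify(msg: bytes, commit: int, s: int) -> bool:
    return pow(g, s, p) == commit * pow(y, challenge(commit, msg), p) % p

commit, s = sign(b"hello")
assert verify(b"hello", commit, s)
```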
Authentication and encryption within an embedded system environment using cameras, sensors, thermostats, autonomous vehicles, medical implants, RFID, etc. is becoming increasingly important with ubiquitous wireless connectivity. Hardware-based authentication and encryption offer several advantages in these types of resource-constrained applications, including smaller footprints and lower energy consumption. Bitstring and key generation implemented with Physical Unclonable Functions (PUFs) can further reduce resource utilization for authentication and encryption operations and reduce overall system cost by eliminating on-chip non-volatile memory (NVM). In this paper, we propose a dynamic partial reconfiguration (DPR) strategy for implementing both authentication and encryption using a PUF for bitstring and key generation on FPGAs, as a means of optimizing the utilization of the limited area resources. We show that the time and energy penalties associated with DPR are small in modern SoC-based architectures, such as the Xilinx Zynq SoC, and therefore the overall approach is very attractive for emerging resource-constrained IoT applications.
Cyber-physical systems (CPS) are interconnections of heterogeneous hardware and software components (e.g., sensors, actuators, physical systems/processes, computational nodes and controllers, and communication subsystems). Increasing network connectivity of CPS computational nodes facilitates maintenance and on-demand reprogrammability and reduces operator workload. However, such increasing connectivity also raises the potential for cyber-attacks that attempt unauthorized modifications of run-time parameters or control logic in the computational nodes to hamper process stability or performance. In this paper, we analyze the effectiveness of real-time monitoring using digital and analog side channels. While analog side channels might not typically provide sufficient granularity to observe each iteration of a periodic loop in the code on the CPS device, the temporal averaging inherent to side-channel sensory modalities enables observation of persistent changes to the contents of a computational loop through their resulting effect on the level of activity of the device. Changes to code can therefore be detected by observing readings from side-channel sensors over a period of time. Experimental studies are performed on an ARM-based single-board computer.
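A sketch of the temporal-averaging observation on a synthetic trace (the threshold and magnitudes are hypothetical): a persistent change to a control loop shifts the average side-channel activity level, which a windowed monitor can flag even without per-iteration granularity.

```python
import numpy as np

rng = np.random.default_rng(3)
trace = rng.normal(1.00, 0.05, 10_000)       # analog side-channel samples, normal code
trace[6_000:] += 0.08                         # modified loop draws persistently more power

window = 500
baseline = trace[:3_000].mean()               # learned activity level of untampered code
means = np.convolve(trace, np.ones(window) / window, mode="valid")  # windowed averages
alarm_at = np.argmax(np.abs(means - baseline) > 0.05)  # first window past the threshold
print(f"change flagged near sample {alarm_at}")
```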
Customers need to know how reliable a new release is, and whether or not the new release has substantially different reliability, either better or worse, than the one currently in production. Customers are demanding quantitative evidence, based on pre-release metrics, to help them decide whether or not to upgrade (and thereby offer new features and capabilities to their customers). Finding ways to estimate future reliability performance is not easy; we have evaluated many pre-release development and test metrics in search of reliability predictors that are sufficiently accurate and also apply to a broad range of software products. This paper describes a successful model that has resulted from these efforts, and also presents both a functional extension and a further conceptual simplification of the extended model that enables us to better communicate key release information to internal stakeholders and customers, without sacrificing predictive accuracy or generalizability. Work remains to be done, but the results of the original model, the extended model, and the simplified version are encouraging and are currently being applied across a range of products and releases. To evaluate whether or not these early predictions are accurate, and also to compare releases that are available to customers, we use a field software reliability assessment mechanism that incorporates two types of customer experience metrics: field bug encounters normalized by usage, and field bug counts, also normalized by usage. Our 'release-over-release' strategy combines a 'maturity assessment' component (i.e., estimating reliability prior to release to the field) and a 'reliability assessment' component (i.e., gauging actual reliability after release to the field). This overall approach enables us both to predict reliability and to compare reliability results for recent releases of a product.