Biblio
Peer-to-peer (P2P) energy trading is one of the most promising approaches for implementing decentralized electricity market paradigms. In P2P trading, each actor negotiates directly with a set of trading partners. Since the physical network or grid is used for energy transfer, power losses are inevitable and grid-related costs always arise during P2P trading. A proper market clearing mechanism is therefore required for P2P energy trading between different producers and consumers. This paper proposes a decentralized market clearing mechanism for P2P energy trading that accounts for the privacy of the agents, power losses, and the utilization fees for using the third-party-owned network. Grid-related costs are captured by calculating network utilization fees with an electrical distance approach. Simulation results are presented to verify the effectiveness of the proposed decentralized approach for market clearing in P2P energy trading.
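The fee model below is only a minimal sketch, not the mechanism from the paper: it assumes the network utilization fee for a bilateral trade is proportional to a precomputed electrical distance between the two peers and to the traded energy, with hypothetical names (utilization_fee, rate_per_dist_kwh) and made-up numbers.

    # Minimal sketch: network utilization fee proportional to electrical distance,
    # assuming the distance matrix D has been precomputed from grid impedance data.
    import numpy as np

    def utilization_fee(distance, energy_kwh, rate_per_dist_kwh=0.01):
        """Fee charged by the grid owner for a single bilateral P2P trade."""
        return rate_per_dist_kwh * distance * energy_kwh

    # hypothetical electrical distances between three peers
    D = np.array([[0.0, 2.5, 4.0],
                  [2.5, 0.0, 1.5],
                  [4.0, 1.5, 0.0]])

    # trades[i][j] = energy sold by peer i to peer j (kWh)
    trades = np.array([[0.0, 3.0, 1.0],
                       [0.0, 0.0, 0.0],
                       [2.0, 0.0, 0.0]])

    total_fees = sum(utilization_fee(D[i, j], trades[i, j])
                     for i in range(3) for j in range(3))
    print(f"total network utilization fees: {total_fees:.2f}")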
Demand response has emerged as one of the most promising methods for the deployment of sustainable energy systems. Attempts to democratize demand response and establish programs for residential consumers have run into scalability issues and risks of leaking sensitive consumer data. In this work, we propose a privacy-friendly, incentive-based demand response market in which consumers offer their flexibility to utilities in exchange for financial compensation. Consumers submit encrypted offers, which are aggregated using computation over encrypted data to ensure consumer privacy and the scalability of the approach. The optimal allocation of flexibility is then determined via double auctions, along with the optimal consumption schedule for the users with respect to day-ahead electricity prices, thus also shielding participants from high electricity prices. A case study is presented to show the effectiveness of the proposed approach.
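As an illustration of the computation-over-encrypted-data step only (not the full double-auction protocol), the sketch below uses additively homomorphic Paillier encryption via the python-paillier package (phe); the offer values and variable names are made up.

    # Minimal sketch: consumers encrypt flexibility offers; the aggregator sums
    # ciphertexts without ever seeing individual values.
    from phe import paillier

    public_key, private_key = paillier.generate_paillier_keypair()

    offers_wh = [1500, 800, 2200]                        # individual offers (Wh)
    encrypted = [public_key.encrypt(o) for o in offers_wh]

    # ciphertext-only aggregation at the untrusted aggregator
    encrypted_total = sum(encrypted[1:], encrypted[0])

    # only the key holder learns the aggregate flexibility
    print(private_key.decrypt(encrypted_total))          # -> 4500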
Cloud computing provides customers with enormous compute power and storage capacity, allowing them to deploy computation- and data-intensive applications without having to invest in infrastructure. Many firms use cloud computing as a means of relocating and maintaining resources outside of their enterprise, regardless of the cloud server's location. However, storing data in the cloud raises a number of issues related to data loss, accountability, and security. Such concerns are a major barrier to the adoption of cloud services by users. Cloud computing offers large-scale storage to internet users, with costs based on the usage of the facilities provided. Protecting the privacy of a user's data is challenging because users cannot inspect the internal operations of the service providers. Hence, it becomes necessary to monitor how a client's data is used in the cloud. In this research, we propose an effective cloud storage solution for accessing patient medical records across hospitals in different countries while maintaining data security and integrity. In the proposed system, multifactor authentication for user login to the cloud, homomorphic encryption for data storage, and integrity verification have all been implemented effectively. To illustrate the efficacy of the proposed strategy, an experimental investigation was conducted.
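The snippet below sketches only the integrity-verification idea, not the proposed system: a client keeps a SHA-256 digest of the stored ciphertext and re-checks it on retrieval. The helper names and the placeholder ciphertext are hypothetical.

    # Minimal sketch of ciphertext integrity verification for a stored record.
    import hashlib

    def digest(ciphertext: bytes) -> str:
        return hashlib.sha256(ciphertext).hexdigest()

    def verify(ciphertext: bytes, expected_digest: str) -> bool:
        return digest(ciphertext) == expected_digest

    record_ct = b"...encrypted patient record..."   # produced by the encryption layer
    stored_digest = digest(record_ct)               # retained by the client at upload

    # later, after downloading the record from the cloud
    assert verify(record_ct, stored_digest), "integrity check failed: record modified"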
Static information flow control (IFC) systems provide the ability to restrict data flows within a program, enabling vulnerable functionality or confidential data to be statically isolated from unsecured data or program logic. Despite the wide applicability of IFC as a mechanism for guaranteeing confidentiality and integrity -- the fundamental properties on which computer security relies -- existing IFC systems have seen little use, requiring users to reason about complicated mechanisms such as lattices of security labels and dual notions of confidentiality and integrity within these lattices. We propose a system that diverges significantly from previous work on information flow control, opting to reason directly about the data that programmers already work with. In doing so, we naturally and seamlessly combine the classically separate notions of confidentiality and integrity into one unified framework, further simplifying reasoning. We motivate and showcase our work through two case studies on TLS private key management: one for Rocket, a popular Rust web framework, and another for Conduit, a server implementation for the Matrix messaging service written in Rust.
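The toy sketch below is not the proposed system (which targets Rust's type system); it only illustrates the idea of reasoning directly about the data itself, with a hypothetical Secret wrapper that keeps derived values secret and a public sink that refuses them.

    # Toy illustration of value-level flow tracking (names are hypothetical).
    class Secret:
        def __init__(self, value):
            self._value = value

        def map(self, fn):
            # any computation over a secret yields another secret
            return Secret(fn(self._value))

    def public_log(value):
        if isinstance(value, Secret):
            raise PermissionError("refusing to send secret-derived data to a public sink")
        print(value)

    tls_key = Secret(b"-----BEGIN PRIVATE KEY-----...")
    fingerprint = tls_key.map(hash)      # still secret

    public_log("server started")         # allowed
    public_log(fingerprint)              # raises PermissionError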
This study contributes to the analysis of civil society and knowledge by examining mobilizations by civil society organizations and grassroots networks in opposition to wireless smart meters in the United States. Three types of mobilizations are reviewed: grassroots anti-smart-meter networks, privacy organizations, and organizations that advocate for reduced exposure to non-ionizing electromagnetic fields. The study shows different relationships to scientific knowledge that include publicizing risks and conducting citizen science, identifying non-controversial areas of future research, and pointing to deeper problems of undone science (a particular type of non-knowledge that emerges when actors mobilize in the public interest and find an absence or low volume of research that could have been used to support their concerns). By comparing different types of knowledge claims made by the civil society organizations and networks, the study examines the conditions under which mobilized civil society generates positive responses from incumbent organizations versus resistance and undone science.
The complexity and scale of modern software programs often lead to overlooked programming errors and security vulnerabilities. Developers often rely on automatic tools, such as static analysis tools, to look for bugs and vulnerabilities. Static analysis tools are widely used because they can understand nontrivial program behaviors, scale to millions of lines of code, and detect subtle bugs. However, they are known to generate an excess of false alarms, which hinders their adoption, as it is counterproductive for developers to go through a long list of reported issues only to find a few true positives. One proposed way to suppress false positives is to use machine learning to identify them. However, training machine learning models requires good-quality labeled datasets. For this purpose, we developed D2A [3], a differential-analysis-based approach that uses the commit history of a code repository to create a labeled dataset from the output of the Infer [2] static analyzer.
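The sketch below captures only the differential-labeling intuition behind D2A, with hypothetical issue identifiers rather than real Infer output: issues reported before a bug-fixing commit that disappear afterwards are labeled as likely true positives, and the rest as likely false positives.

    # Minimal sketch of differential labeling across a bug-fixing commit.
    def label_issues(issues_before: set, issues_after: set) -> dict:
        labels = {}
        for issue in issues_before:
            # an issue that the fix makes disappear is evidence of a real bug
            labels[issue] = 1 if issue not in issues_after else 0
        return labels

    before = {"BUFFER_OVERRUN@foo.c:42", "NULL_DEREFERENCE@bar.c:10"}
    after  = {"NULL_DEREFERENCE@bar.c:10"}        # the overrun was fixed

    # the fixed overrun gets label 1 (likely true positive), the surviving issue 0
    print(label_issues(before, after))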
Smart metering is a mechanism through which fine-grained electricity usage data of consumers is collected periodically in a smart grid. However, a growing concern in this regard is that the leakage of consumers' consumption data may reveal their daily life patterns, as state-of-the-art metering strategies lack adequate security and privacy measures. Many proposed solutions have demonstrated how aggregated metering information can be transformed to obscure individual consumption patterns without affecting the intended semantics of smart grid operations. In this paper, we expose a complete break of such an existing privacy-preserving metering scheme [10] by determining individual consumption patterns efficiently, thus compromising its privacy guarantees. The underlying methodology of this scheme allows us to: i) retrieve the lower bounds of the privacy parameters, and ii) establish a relationship between the privacy-preserved output readings and the initial input readings. Subsequently, we present a rigorous experimental validation of our proposed attack methodology using a real-life dataset to highlight its efficacy. In summary, the present paper asks: Is the Whole lesser than its Parts? for such privacy-aware metering algorithms, which attempt to reduce the information leakage of the aggregated consumption patterns of individuals.
Self-adaptive systems commonly operate in heterogeneous contexts and need to consider multiple quality attributes. Human stakeholders often express their quality preferences by defining utility functions, which are used by self-adaptive systems to automatically generate adaptation plans. However, the adaptation space of realistic systems is large, and it is unclear how utility functions, together with structural, behavioral, and quality constraints, impact the generated adaptation behavior. Moreover, human stakeholders are often not aware of the underlying tradeoffs between quality attributes. To address this issue, we present an approach that uses machine learning techniques (dimensionality reduction, clustering, and decision tree learning) to explain the reasoning behind automated planning. Our approach focuses on the tradeoffs between quality attributes and how the choice of weights in utility functions results in different plans being generated. We help humans understand quality attribute tradeoffs, identify key decisions in adaptation behavior, and explore how differences in utility functions result in different adaptation alternatives. We present two systems to demonstrate the approach’s applicability and consider its potential application to 24 exemplar self-adaptive systems. Moreover, we assess the tradeoff between information reduction and the amount of explained variance retained by the results of our approach.
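The recipe below is a generic sketch of the technique combination named above (PCA, clustering, decision tree learning) on synthetic data; it is not the authors' tool, and the feature names are invented.

    # Sketch: reduce plan quality metrics with PCA, cluster the plans, then fit a
    # decision tree on the utility weights to explain which weights yield which plans.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    weights = rng.dirichlet(np.ones(3), size=200)            # utility weights per run
    qualities = weights @ rng.normal(size=(3, 5)) + 0.1 * rng.normal(size=(200, 5))

    reduced = PCA(n_components=2).fit_transform(qualities)   # tradeoff space
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(reduced)

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(weights, clusters)
    print(export_text(tree, feature_names=["w_performance", "w_cost", "w_safety"]))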
In software design, guaranteeing the correctness of run-time system behavior while achieving an acceptable balance among multiple quality attributes remains a challenging problem. Moreover, providing guarantees about the satisfaction of those requirements when systems are subject to uncertain environments is even more challenging. While recent developments in architectural analysis techniques can assist architects in exploring the satisfaction of quantitative guarantees across the design space, existing approaches are still limited because they do not explicitly link design decisions to the satisfaction of quality requirements. Furthermore, the amount of information they yield can be overwhelming to a human designer, making it difficult to see the forest for the trees. In this paper we present ExTrA (Explaining Tradeoffs of software Architecture design spaces), an approach to analyzing architectural design spaces that addresses these limitations and provides a basis for explaining design tradeoffs. Our approach employs machine learning techniques such as Principal Component Analysis (PCA) and Decision Tree Learning (DTL) to enable architects to understand how design decisions contribute to the satisfaction of extra-functional properties across the design space. Our results show the feasibility of the approach in two case studies and provide evidence that combining complementary techniques such as PCA and DTL is a viable way to facilitate comprehension of tradeoffs in poorly understood design spaces.
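As a small, synthetic illustration of the decision tree learning component only (not ExTrA itself), the sketch below encodes hypothetical design decisions per configuration and learns which of them drive satisfaction of a made-up quantitative guarantee.

    # Sketch: explain satisfaction of a quality requirement from design decisions.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(1)
    decisions = rng.integers(0, 2, size=(300, 4))       # binary design decisions
    # hypothetical analysis result per configuration: probability of success
    p_success = (0.5 + 0.3 * decisions[:, 0] + 0.15 * decisions[:, 2]
                 + 0.05 * rng.normal(size=300))
    satisfied = (p_success >= 0.9).astype(int)          # guarantee: P(success) >= 0.9

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(decisions, satisfied)
    print(export_text(tree,
                      feature_names=["redundancy", "caching", "encryption", "batching"]))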
Many self-adaptive systems benefit from human involvement and oversight, where a human operator can provide expertise not available to the system and detect problems that the system is unaware of. One way of achieving this synergy is by placing the human operator on the loop—i.e., providing supervisory oversight and intervening in the case of questionable adaptation decisions. To make such interaction effective, an explanation can play an important role in allowing the human operator to understand why the system is making certain decisions and improve the level of knowledge that the operator has about the system. This, in turn, may improve the operator’s capability to intervene and, if necessary, override the decisions being made by the system. However, explanations may incur costs, in terms of delay in actions and the possibility that a human may make a bad judgment. Hence, it is not always obvious whether an explanation will improve overall utility and, if so, then what kind of explanation should be provided to the operator. In this work, we define a formal framework for reasoning about explanations of adaptive system behaviors and the conditions under which they are warranted. Specifically, we characterize explanations in terms of explanation content, effect, and cost. We then present a dynamic system adaptation approach that leverages a probabilistic reasoning technique to determine when an explanation should be used to improve overall system utility. We evaluate our explanation framework in the context of a realistic industrial control system with adaptive behaviors.
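The numbers and function below are hypothetical; they only sketch the expected-utility comparison in spirit: explain when the gain from a better-informed operator outweighs the cost of the delay that the explanation introduces.

    # Sketch: decide whether to explain based on expected utility.
    def expected_utility(p_good_intervention, u_good, u_bad, cost_of_delay=0.0):
        return (p_good_intervention * u_good
                + (1 - p_good_intervention) * u_bad
                - cost_of_delay)

    # without an explanation the operator intervenes correctly less often
    u_no_explain = expected_utility(p_good_intervention=0.6, u_good=10, u_bad=2)
    # the explanation improves the operator's judgment but delays the action
    u_explain = expected_utility(p_good_intervention=0.9, u_good=10, u_bad=2,
                                 cost_of_delay=1.5)

    decision = "explain" if u_explain > u_no_explain else "act without explaining"
    print(u_no_explain, u_explain, decision)             # 6.8 7.7 explain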
The Advanced Encryption Standard (AES) algorithm plays an important role in data security applications. In general, the S-box module in AES provides most of the confusion and diffusion during encryption and causes significant path delay overhead. In most cases, either LUTs or embedded memories are used for S-box computation, and these are vulnerable to attacks that pose a serious risk to real-world applications. In this paper, the SubBytes and inverse SubBytes operations of AES are implemented using composite field arithmetic. The proposed work includes an efficient multiple-round AES cryptosystem with a higher-order transformation and a composite field S-box formulation, together with possible inner-stage pipelining schemes that can be used for throughput enhancement and path delay optimization. Finally, biometric-driven key generation schemes are used to formulate the cipher key dynamically, which provides a higher degree of security for computing devices.
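For reference, the sketch below computes the SubBytes value the standard way, as a multiplicative inverse in GF(2^8) followed by the affine transformation; the paper's contribution is a composite field GF((2^4)^2) hardware formulation, which this software sketch does not reproduce.

    # Reference SubBytes: inverse in GF(2^8) (AES polynomial 0x11B) + affine map.
    def gf_mul(a, b):
        p = 0
        for _ in range(8):
            if b & 1:
                p ^= a
            carry = a & 0x80
            a = (a << 1) & 0xFF
            if carry:
                a ^= 0x1B            # reduce modulo x^8 + x^4 + x^3 + x + 1
            b >>= 1
        return p

    def gf_inv(a):
        if a == 0:
            return 0                 # AES maps 0 to 0 before the affine step
        return next(x for x in range(1, 256) if gf_mul(a, x) == 1)

    def rotl8(x, n):
        return ((x << n) | (x >> (8 - n))) & 0xFF

    def sbox(byte):
        inv = gf_inv(byte)
        return inv ^ rotl8(inv, 1) ^ rotl8(inv, 2) ^ rotl8(inv, 3) ^ rotl8(inv, 4) ^ 0x63

    print(hex(sbox(0x00)), hex(sbox(0x53)))   # 0x63 0xed (matches the AES S-box)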