Bibliography
The smart grid concept has empowered active consumers. To reduce reliance on fossil fuels, the demand side must provide flexibility through Demand Response events. However, selecting the proper participants for an event can be complex due to response uncertainty. The authors design a Contextual Consumer Rate that identifies trustworthy participants according to their past performance. In the present case study, the authors address the problem of new players for whom no information is available, comparing two different methods to predict their rate. The authors also examine consumer privacy by testing the dataset with and without information that could lead to participant identification. The results show that, for the proposed methodology, private information does not have a high impact on the attributed rate.
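The abstract does not name the two prediction methods; as a minimal sketch, assuming each consumer is described by a few contextual features and existing consumers carry a known rate, a global-mean baseline can be compared against a k-nearest-neighbours estimate for a new, history-less participant. All names and features below are hypothetical.

```python
# Minimal sketch: two ways to estimate a trust rate for a new participant
# with no event history. Features and rates are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
X_known = rng.random((100, 3))        # context: e.g. dwelling size, tariff, region
r_known = rng.random(100)             # previously earned Contextual Consumer Rates
x_new = rng.random(3)                 # a new player with no event history

# Method 1: global average of existing rates (ignores context).
rate_global = r_known.mean()

# Method 2: k-nearest neighbours in context space.
k = 5
dists = np.linalg.norm(X_known - x_new, axis=1)
rate_knn = r_known[np.argsort(dists)[:k]].mean()

print(f"global-mean rate: {rate_global:.3f}, kNN rate: {rate_knn:.3f}")
```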
Demand response has emerged as one of the most promising methods for the deployment of sustainable energy systems. Attempts to democratize demand response and establish programs for residential consumers have run into scalability issues and risks of leaking sensitive consumer data. In this work, we propose a privacy-friendly, incentive-based demand response market, where consumers offer their flexibility to utilities in exchange for financial compensation. Consumers submit encrypted offers, which are aggregated using Computation Over Encrypted Data to ensure consumer privacy and the scalability of the approach. The optimal allocation of flexibility is then determined via double auctions, along with the optimal consumption schedule for the users with respect to the day-ahead electricity prices, thus also shielding participants from high electricity prices. A case study is presented to show the effectiveness of the proposed approach.
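As a minimal sketch of the double-auction clearing step alone: the paper's encrypted-aggregation layer is omitted here, each offer is assumed to cover one unit of flexibility, and the midpoint clearing rule is an illustrative choice, not necessarily the paper's.

```python
# Minimal sketch of double-auction clearing on plaintext offers.
# One unit of flexibility per offer and midpoint pricing are assumptions.

def clear_double_auction(bids, asks):
    """bids: utility prices (willingness to pay); asks: consumer prices
    (required compensation). Returns matched pairs and a clearing price."""
    bids = sorted(bids, reverse=True)   # highest willingness to pay first
    asks = sorted(asks)                 # cheapest flexibility first
    matched = []
    for b, a in zip(bids, asks):
        if b < a:                       # breakeven: no further gains from trade
            break
        matched.append((b, a))
    if not matched:
        return [], None
    b_last, a_last = matched[-1]
    price = (b_last + a_last) / 2       # midpoint clearing rule
    return matched, price

matched, price = clear_double_auction(bids=[0.30, 0.25, 0.20, 0.10],
                                      asks=[0.12, 0.18, 0.22, 0.35])
print(len(matched), "trades at", price)
```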
Objective measures are ubiquitous in the formulation, design, and implementation of deep space missions. Tour durations, flyby altitudes, propellant budgets, power consumption, and other metrics are essential to developing and managing NASA missions. But beyond the simple metrics of cost and workforce, it has been difficult to identify objective, quantitative measures that assist in evaluating choices made during formulation or implementation phases in terms of their impact on flight operations. As part of the development of the Europa Clipper Mission system, a set of operations metrics has been defined, along with the necessary design information and software tooling to calculate them. We have applied these methods and metrics to help assess the impact on the flight team of the six options for the Clipper Tour that are currently being vetted for selection in the fall of 2021. To generate these metrics, the Clipper MOS team first designed the set of essential processes by which flight operations will be conducted, using a standard approach and template to identify (among other aspects) timelines for each process, along with their time constraints (e.g., uplinks for sequence execution). Each of the resulting 50 processes is documented in a common format and concurred on by stakeholders. Process timelines were converted into generic schedules and workforce-loaded using COTS scheduling software, based on the inputs of the process authors and domain experts. Custom code was written to create an operations schedule for a specific portion of Clipper's prime mission, with instances of a given process scheduled based on specific timing rules (e.g., process X starts once per week on Thursdays) or relative to mission events (e.g., the sequence generation process begins on a Monday, at least three weeks before each Europa closest approach). Over a 5-month period, and for each of six Clipper candidate tours, the result was a 20,000+ line, workforce-loaded schedule that documents all of the process-driven work effort at the level of individual roles, along with a significant portion of the level-of-effort work. Post-processing code calculated the absolute and relative number of work hours during a nominal 5-day / 40-hour work week, the work effort during 2nd and 3rd shift, as well as 1st shift on weekends. The resulting schedules and shift tables were used to generate objective measures that can be related both to human factors and to operational risk, and showed that Clipper tours which utilize 6:1 resonant (21.25-day) orbits instead of 4:1 resonant (14.17-day) orbits during the first dozen or so Europa flybys are advantageous to flight operations. A similar approach can be extended to assist missions in more objective assessments of a number of mission issues and trades, including tour selection and spacecraft design for operability.
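A minimal sketch of the post-processing step described above: tallying workforce hours into shift buckets from a workforce-loaded schedule. The record format and shift boundaries below are assumptions for illustration, not the Clipper MOS definitions.

```python
# Minimal sketch: bucket scheduled work hours into shifts.
# Shift boundaries (06-14, 14-22, 22-06) are assumed, not official.
from datetime import datetime, timedelta

def classify(dt):
    """Bucket a working hour into 1st/2nd/3rd shift or a weekend bucket."""
    if dt.weekday() >= 5:                     # Saturday/Sunday
        return "weekend-1st" if 6 <= dt.hour < 14 else "weekend-other"
    if 6 <= dt.hour < 14:
        return "1st"
    if 14 <= dt.hour < 22:
        return "2nd"
    return "3rd"

def shift_table(entries):
    """entries: (role, start datetime, duration in hours) tuples."""
    hours = {}
    for role, start, dur in entries:
        for h in range(int(dur)):
            bucket = classify(start + timedelta(hours=h))
            hours[bucket] = hours.get(bucket, 0) + 1
    return hours

schedule = [("sequence engineer", datetime(2021, 4, 1, 20, 0), 8),
            ("ace",               datetime(2021, 4, 3, 8, 0), 8)]
print(shift_table(schedule))
```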
Concurrent programs often produce buggy results due to unexpected interactions among threads. Detecting these concurrency bugs is costly because they usually appear only under specific execution traces. How to efficiently explore different thread schedules to detect concurrency bugs is therefore an important research topic. Many techniques have been proposed, including lightweight techniques like adaptive randomized scheduling (ARS) and heavyweight techniques like maximal causality reduction (MCR). Compared to heavyweight techniques, ARS is efficient in exploring different schedules and achieves state-of-the-art performance. However, it explores large numbers of redundant thread schedules, which reduces its efficiency. Moreover, it suffers from a "cold start" issue, when little information is available to guide the distance calculation at the beginning of the exploration. In this work, we propose a Heuristic-Enhanced Adaptive Randomized Scheduling (HARS) algorithm, which improves ARS to detect concurrency bugs guided by novel distance metrics and heuristics derived from existing research findings. Compared with adaptive randomized scheduling, HARS more effectively distinguishes the traces that may contain concurrency bugs and avoids redundant schedules, thus exploring diverse thread schedules effectively. We conduct an evaluation on 45 concurrent Java programs. The results show that our algorithm performs more stably in terms of effectiveness and efficiency in detecting concurrency bugs. Notably, HARS detects hard-to-expose bugs more effectively, where the buggy traces are rare or the bug-triggering conditions are tricky.
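A minimal sketch of the general distance-guided exploration idea behind ARS-style techniques: among candidate schedules, run the one whose abstracted trace is farthest from everything explored so far. The trace abstraction and Jaccard distance below are illustrative stand-ins, not HARS's actual metrics or heuristics.

```python
# Minimal sketch of distance-guided schedule exploration (ARS-style).
# trace_of() stands in for executing the program under a seeded scheduler.
import random

def trace_of(schedule_seed):
    """Stand-in: run under a seeded scheduler and abstract the trace as a
    set of (thread, shared-access) pairs."""
    rng = random.Random(schedule_seed)
    return frozenset((rng.randrange(3), rng.randrange(8)) for _ in range(6))

def distance(t1, t2):
    """Jaccard distance between two abstracted traces."""
    return 1 - len(t1 & t2) / len(t1 | t2)

explored = [trace_of(0)]                     # cold start: one random run
for _ in range(10):
    candidates = [random.randrange(10**6) for _ in range(20)]
    # Heuristic: maximize the minimum distance to already-explored traces,
    # steering exploration away from redundant schedules.
    best = max(candidates,
               key=lambda s: min(distance(trace_of(s), t) for t in explored))
    explored.append(trace_of(best))
print(f"explored {len(explored)} diverse schedules")
```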
Over the past decade, distributed CSMA, which forms the basis for WiFi, has been deployed ubiquitously to provide seamless and high-speed mobile internet access. However, distributed CSMA might not be ideal for future IoT/M2M applications, where the density of connected devices/sensors/controllers is expected to be orders of magnitude higher than in present wireless networks. In such high-density networks, the overhead associated with completely distributed MAC protocols will become a bottleneck. Moreover, IoT communications are likely to have strict QoS requirements, for which the "best-effort" scheduling of present WiFi networks may be unsuitable. This calls for a clean-slate redesign of the wireless MAC taking into account the requirements of future IoT/M2M networks. In this paper, we propose a reservation-based (for minimal overhead) wireless MAC designed specifically with IoT/M2M applications in mind.
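To make "reservation-based" concrete, here is a minimal sketch of TDMA-style slot allocation, where devices request slots per frame and a coordinator grants them deterministically instead of contending per packet. The frame structure and priority rule are assumptions for illustration, not the paper's protocol.

```python
# Minimal sketch of reservation-based slot allocation within one frame.
# Devices are served in a priority (QoS) order given by insertion order.

FRAME_SLOTS = 10

def allocate(requests, frame_slots=FRAME_SLOTS):
    """requests: {device_id: slots_needed}. Returns {device_id: slot list}."""
    grants, next_slot = {}, 0
    for dev, need in requests.items():
        take = min(need, frame_slots - next_slot)
        grants[dev] = list(range(next_slot, next_slot + take))
        next_slot += take
        if next_slot == frame_slots:
            break                      # frame full; remaining devices wait
    return grants

print(allocate({"sensor-7": 2, "actuator-3": 3, "meter-12": 6}))
```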
The high penetration of third-party intellectual property (3PIP) brings a high risk of malicious inclusions and data leakage in products due to planted hardware Trojans, and system-level security constraints have recently been proposed to protect MPSoCs against hardware Trojans. However, secret communication can still be established in the context of the proposed security constraints, so another type of security constraint is introduced to fully prevent such malicious inclusions. In addition, fulfilling the security constraints incurs serious schedule-length overhead, so a two-stage performance-constrained task scheduling algorithm is proposed to maintain most of the security constraints. In the first stage, the schedule length is iteratively reduced by assigning sets of adjacent tasks to the same core after calculating the maximum weight independent set of a graph consisting of all timing-critical paths. In the second stage, tasks are assigned to proper IP vendors and scheduled into time periods while minimizing the number of cores required. The experimental results show that our work reduces the schedule length of a task graph while violating only a small number of security constraints.
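A minimal sketch of the maximum weight independent set (MWIS) computation used in the first stage, on a toy conflict graph whose vertices are hypothetical task-merge candidates on timing-critical paths. Exhaustive search is fine at this scale; the paper's algorithm would need a more scalable solver.

```python
# Minimal sketch: MWIS by exhaustive search over a small conflict graph.
# Vertex weights model the hypothetical schedule-length gain of each merge.
from itertools import combinations

def mwis(weights, edges):
    """weights: {v: w}; edges: set of frozenset pairs that conflict."""
    best, best_w = set(), 0
    verts = list(weights)
    for r in range(1, len(verts) + 1):
        for subset in combinations(verts, r):
            if any(frozenset(p) in edges for p in combinations(subset, 2)):
                continue               # not independent: two vertices conflict
            w = sum(weights[v] for v in subset)
            if w > best_w:
                best, best_w = set(subset), w
    return best, best_w

weights = {"ab": 4, "bc": 3, "cd": 5, "de": 2}   # hypothetical merge gains
edges = {frozenset(p) for p in [("ab", "bc"), ("bc", "cd"), ("cd", "de")]}
print(mwis(weights, edges))            # merges to apply within the same core
```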