Bibliography
Objective measures are ubiquitous in the formulation, design, and implementation of deep space missions. Tour durations, flyby altitudes, propellant budgets, power consumption, and other metrics are essential to developing and managing NASA missions. But beyond the simple metrics of cost and workforce, it has been difficult to identify objective, quantitative measures that help evaluate choices made during the formulation or implementation phases in terms of their impact on flight operations. As part of the development of the Europa Clipper mission system, a set of operations metrics has been defined, along with the design information and software tooling needed to calculate them. We have applied these methods and metrics to help assess the impact on the flight team of the six options for the Clipper tour that are currently being vetted for selection in the fall of 2021. To generate these metrics, the Clipper MOS team first designed the set of essential processes by which flight operations will be conducted, using a standard approach and template to identify (among other aspects) timelines for each process, along with their time constraints (e.g., uplinks for sequence execution). Each of the resulting 50 processes is documented in a common format and concurred on by stakeholders. Process timelines were converted into generic schedules and workforce-loaded using COTS scheduling software, based on the inputs of the process authors and domain experts. Custom code was written to create an operations schedule for a specific portion of Clipper's prime mission, with instances of a given process scheduled according to specific timing rules (e.g., process X starts once per week on Thursdays) or relative to mission events (e.g., the sequence generation process begins on a Monday, at least three weeks before each Europa closest approach), as in the sketch below. Over a 5-month period, and for each of the six Clipper candidate tours, the result was a 20,000+ line, workforce-loaded schedule documenting all of the process-driven work effort at the level of individual roles, along with a significant portion of the level-of-effort work. Post-processing code calculated the absolute and relative number of work hours during a nominal 5-day / 40-hour work week, the work effort during 2nd and 3rd shifts, and the 1st-shift work on weekends. The resulting schedules and shift tables were used to generate objective measures relatable to both human factors and operational risk, and they showed that Clipper tours which use 6:1 resonant (21.25-day) orbits instead of 4:1 resonant (14.17-day) orbits during the first dozen or so Europa flybys are advantageous to flight operations. A similar approach can be extended to help missions assess a range of issues and trades more objectively, including tour selection and spacecraft design for operability.
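To make the two kinds of timing rules concrete, the following Python sketch (all function and variable names are hypothetical, and the dates are illustrative only, not actual tour events) expands a weekly rule and an event-relative rule into calendar dates, in the spirit of the custom schedule-generation code:

    from datetime import date, timedelta

    def weekly_on(weekday, start, end):
        """Yield every date in [start, end] falling on `weekday` (Mon == 0),
        e.g. 'process X starts once per week on Thursdays'."""
        d = start + timedelta(days=(weekday - start.weekday()) % 7)
        while d <= end:
            yield d
            d += timedelta(weeks=1)

    def mondays_before(events, lead_weeks):
        """Yield the Monday at least `lead_weeks` weeks before each event,
        e.g. sequence generation before each Europa closest approach."""
        for e in events:
            d = e - timedelta(weeks=lead_weeks)
            yield d - timedelta(days=d.weekday())  # back up to that Monday

    closest_approaches = [date(2031, 5, 8), date(2031, 5, 29)]  # illustrative
    uplink_starts = list(weekly_on(3, date(2031, 4, 1), date(2031, 6, 1)))
    seq_gen_starts = list(mondays_before(closest_approaches, lead_weeks=3))

A real generator would also attach role-level workforce loading to each scheduled instance before export to the scheduling tool.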
With the objective of eliminating the input current sensor in a totem-pole boost power factor corrector (PFC) for a low-cost design, a novel discretized sampling-based robust control scheme is proposed in this work. The proposed control methodology proves beneficial due to its ease of implementation and its ability to support high-frequency operation, while eliminating one sensor and thus enhancing reliability and cost-effectiveness. In addition, a detailed closed-loop stability analysis is carried out for the controller in the discrete domain to ascertain brisk dynamic operation when subjected to sudden load fluctuations. To establish the robustness of the proposed control scheme, a detailed sensitivity analysis of the closed-loop performance metrics with respect to undesired changes and inherent uncertainty in system parameters is presented in this article. A comparison with state-of-the-art (SOA) methods is provided, and conclusive results in terms of better dynamic performance are established. To verify and elaborate on the specifics of the proposed scheme, a detailed simulation study is conducted; the results show a 25% reduction in response time compared to SOA approaches. A 500-W boost PFC prototype is developed and tested with the proposed control scheme to evaluate and benchmark the system's steady-state and dynamic performance. A total harmonic distortion of 1.68% is obtained at the rated load with a resultant power factor of 0.998 (lagging), demonstrating the effectiveness and superiority of the proposed control scheme.
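As a minimal illustration of the discrete-domain stability check mentioned above (the polynomial coefficients below are illustrative, not the paper's actual closed-loop characteristic polynomial), the standard discrete-time criterion is that all closed-loop poles lie strictly inside the unit circle:

    import numpy as np

    def is_stable(char_poly):
        """True if every root of the closed-loop characteristic polynomial
        in z (coefficients listed highest power first) has magnitude < 1."""
        return bool(np.all(np.abs(np.roots(char_poly)) < 1.0))

    # Example: z^2 - 1.2 z + 0.45 has poles 0.6 +/- 0.3j (|z| ~ 0.67) -> stable
    print(is_stable([1.0, -1.2, 0.45]))  # True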
With the increasing number of catastrophic weather events and the resulting disruption of the energy supply to essential loads, distribution grid operators' focus has shifted from reliability to resiliency against high-impact, low-frequency events. Given the enhanced automation enabling the smarter grid, electric utilities have several assets and resources at their disposal to enhance resiliency. However, lacking comprehensive resilience tools for informed operational decisions and planning, utilities face a challenge in investing in and prioritizing operational control actions for resiliency. Distribution system resilience is also highly dependent on system attributes, including the network, controls, generating resources, locations of loads and resources, and the progression of an extreme event. In this work, we present a novel multi-stage resilience measure called the Anticipate-Withstand-Recover (AWR) metrics. The AWR metrics are based on integrating relevant 'system-characteristics-based factors' before, during, and after the extreme event. The methodology takes a pragmatic and flexible approach, adopting concepts from the national emergency preparedness paradigm, proactive and reactive controls of grid assets, graph theory with system and component constraints, and a multi-criteria decision-making process; a minimal aggregation sketch follows. The proposed metrics are applied to provide decision support for (a) operational resilience and (b) planning investments, and are validated on a real system in Alaska over the entire progression of an event.
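As one hedged illustration of the multi-criteria aggregation step (the factor names, scores, and weights below are hypothetical, not the paper's), a per-stage AWR score can be formed as a weighted sum of normalized factor scores:

    def stage_score(factors, weights):
        """Weighted aggregation of normalized factor scores (all in [0, 1]);
        one simple multi-criteria decision-making combination rule."""
        assert abs(sum(weights.values()) - 1.0) < 1e-9
        return sum(weights[name] * factors[name] for name in weights)

    anticipate = stage_score(
        {"forecast_lead_time": 0.8, "proactive_switching": 0.6},
        {"forecast_lead_time": 0.5, "proactive_switching": 0.5})  # -> 0.7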
Recent approaches have proven the effectiveness of local outlier factor-based outlier detection applied to traffic flow probability distributions. However, these approaches measured distribution similarity with distance metrics based on the Bhattacharyya coefficient, whose limited expressiveness restricted their accuracy. The crucial deficiency of the Bhattacharyya distance metric is its inability to compare distributions with non-overlapping sample spaces over the domain of natural numbers. Traffic flow intensity varies greatly, which produces numerous non-overlapping sample spaces and renders metrics based on the Bhattacharyya coefficient inappropriate, as the sketch below illustrates. In this work, we address this issue by exploring alternative distance metrics and showing their applicability on a massive real-life traffic flow data set from 26 vital intersections in The Hague. The results on these data, collected from 272 sensors over more than two years, show various advantages of the Earth Mover's distance in both effectiveness and efficiency.
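The failure mode is easy to reproduce. In the Python sketch below (the histograms are toy examples), two traffic-intensity distributions with disjoint supports get the same degenerate Bhattacharyya distance regardless of how far apart they are, whereas the Earth Mover's (Wasserstein) distance grades them sensibly:

    import numpy as np
    from scipy.stats import wasserstein_distance

    def bhattacharyya_distance(p, q):
        """D_B = -ln(sum_i sqrt(p_i * q_i)); diverges whenever the supports
        are disjoint, no matter how near or far the probability masses are."""
        bc = np.sum(np.sqrt(p * q))
        return np.inf if bc == 0 else -np.log(bc)

    counts = np.arange(6)                          # vehicle counts 0..5
    p = np.array([0.5, 0.5, 0.0, 0.0, 0.0, 0.0])   # low-intensity flow
    q = np.array([0.0, 0.0, 0.0, 0.0, 0.5, 0.5])   # high-intensity flow
    r = np.array([0.0, 0.0, 0.5, 0.5, 0.0, 0.0])   # moderate flow

    print(bhattacharyya_distance(p, q))                # inf
    print(bhattacharyya_distance(p, r))                # inf -- no discrimination
    print(wasserstein_distance(counts, counts, p, q))  # 4.0 -- far apart
    print(wasserstein_distance(counts, counts, p, r))  # 2.0 -- closer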
Flexibility and speed in the development of new industrial machines are essential factors for the success of capital goods industries. When assembling a printed circuit board (PCB), if all the components are surface-mount devices (SMD), the whole process is automatic. However, many PCBs also require components that are not SMDs, called pin-through-hole (PTH) components, which must be inserted manually, leading to delays in the production line. This work proposes and validates a prototype work cell, based on a collaborative robot and vision systems, whose objective is to insert these components in a completely autonomous or semi-autonomous way. Several tests were performed to validate this work cell, demonstrating the correctness of the implementation and the possibility of replacing the human worker in this PCB assembly task.
Bus factor is a metric that identifies how resilient a project is to sudden engineer turnover: the minimum number of engineers that would have to be hit by a bus for the project to stall. Even though the metric is often discussed in the community, few studies consider its general relevance. Moreover, existing tools for bus factor estimation focus solely on data from version control systems, even though other channels for knowledge generation and distribution exist. In a survey of 269 engineers, we find that the bus factor is perceived as an important problem in collective development, and we determine the highest-impact channels of knowledge generation and distribution in software development teams. We also propose a multimodal bus factor estimation algorithm that uses data on code reviews and meetings together with the VCS data; a simplified VCS-only sketch of the underlying idea appears below. We test the algorithm on 13 projects developed at JetBrains and compare its results to those of the state-of-the-art tool by Avelino et al. against the ground truth collected in a survey of the engineers working on these projects. Our algorithm is slightly better than Avelino et al.'s at predicting both the bus factor and the key developers. Finally, we use the interviews and surveys to derive a set of best practices for addressing the bus factor issue and proposals for a possible bus factor assessment tool.
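For reference, the classic VCS-only estimation (which the multimodal algorithm extends with review and meeting data) can be sketched as a greedy simulation, assuming a simple file-to-knowledgeable-authors map:

    def bus_factor(ownership):
        """ownership: dict mapping file -> set of authors who know it.
        Greedily remove the author covering the most files until more than
        half of the files are left with no knowledgeable author."""
        files = {f: set(authors) for f, authors in ownership.items()}
        removed = 0
        while sum(1 for a in files.values() if not a) <= len(files) / 2:
            top = max({p for a in files.values() for p in a},
                      key=lambda p: sum(p in a for a in files.values()))
            for a in files.values():
                a.discard(top)
            removed += 1
        return removed

    print(bus_factor({"a.py": {"ann"}, "b.py": {"ann"}, "c.py": {"bob"}}))  # 1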
With the arrival of several face-swapping applications such as FaceApp, SnapChat, MixBooth, FaceBlender, and many more, the authenticity of digital media content is hanging by a thread. On social media platforms, videos are widely circulated, often at a high compression factor. In this work, we analyze several deep learning approaches to deepfake classification in high-compression scenarios and demonstrate that a proposed approach based on metric learning can be very effective for this task. Using fewer frames per video to assess its realism, the metric learning approach with a triplet network architecture proves fruitful: it learns to enlarge the feature-space distance between the clusters of real and fake video embedding vectors, as sketched below. We validated our approach on two datasets to analyze its behavior in different environments, achieving a state-of-the-art AUC score of 99.2% on the Celeb-DF dataset and an accuracy of 90.71% on a highly compressed Neural Texture dataset. Our approach is especially helpful on social media platforms, where data compression is inevitable.
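The core of the triplet objective can be sketched in a few lines of PyTorch (the embedding network and tensor shapes below are illustrative stand-ins, not the paper's model): the loss pulls an anchor's embedding toward a sample of the same class and pushes it away from the opposite class by at least a margin, which is what enlarges the real/fake cluster separation:

    import torch
    import torch.nn.functional as F

    def triplet_loss(anchor, positive, negative, margin=1.0):
        d_pos = F.pairwise_distance(anchor, positive)   # same-class distance
        d_neg = F.pairwise_distance(anchor, negative)   # cross-class distance
        return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()

    embed = torch.nn.Linear(512, 128)   # stand-in for the embedding network
    a, p, n = (embed(torch.randn(8, 512)) for _ in range(3))
    loss = triplet_loss(a, p, n)        # backpropagate to shape the space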