Biblio

Filters: Keyword is work factor metrics
2022-12-01
Bindschadler, Duane, Hwangpo, Nari, Sarrel, Marc.  2022.  Metrics for Flight Operations: Application to Europa Clipper Tour Selection. 2022 IEEE Aerospace Conference (AERO). :1–12.

Objective measures are ubiquitous in the formulation, design and implementation of deep space missions. Tour durations, flyby altitudes, propellant budgets, power consumption, and other metrics are essential to developing and managing NASA missions. But beyond the simple metrics of cost and workforce, it has been difficult to identify objective, quantitative measures that assist in evaluating choices made during formulation or implementation phases in terms of their impact on flight operations. As part of the development of the Europa Clipper Mission system, a set of operations metrics has been defined along with the necessary design information and software tooling to calculate them. We have applied these methods and metrics to help assess the impact on the flight team of the six options for the Clipper Tour that are currently being vetted for selection in the fall of 2021. To generate these metrics, the Clipper MOS team first designed the set of essential processes by which flight operations will be conducted, using a standard approach and template to identify (among other aspects) timelines for each process, along with their time constraints (e.g., uplinks for sequence execution). Each of the resulting 50 processes is documented in a common format and concurred on by stakeholders. Process timelines were converted into generic schedules and workforce-loaded using COTS scheduling software, based on the inputs of the process authors and domain experts. Custom code was generated to create an operations schedule for a specific portion of Clipper's prime mission, with instances of a given process scheduled based on specific timing rules (e.g., process X starts once per week on Thursdays) or relative to mission events (e.g., sequence generation process begins on a Monday, at least three weeks before each Europa closest approach).
Over a 5-month period, and for each of six Clipper candidate tours, the result was a 20,000+ line, workforce-loaded schedule that documents all of the process-driven work effort at the level of individual roles, along with a significant portion of the level-of-effort work. Post-processing code calculated the absolute and relative number of work hours during a nominal 5 day / 40 hour work week, the work effort during 2nd and 3rd shift, as well as 1st shift on weekends. The resultant schedules and shift tables were used to generate objective measures that can be related to both human factors and to operational risk and showed that Clipper tours which utilize 6:1 resonant (21.25 day) orbits instead of 4:1 resonant (14.17 day) orbits during the first dozen or so Europa flybys are advantageous to flight operations. A similar approach can be extended to assist missions in more objective assessments of a number of mission issues and trades, including tour selection and spacecraft design for operability.
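The post-processing step described above can be illustrated with a minimal sketch. The shift boundaries, schedule format, and function names below are illustrative assumptions, not the mission's actual tooling: each schedule entry is a (start time, hours) pair, and hours are tallied into nominal weekday 1st shift, 2nd and 3rd shift, and weekend 1st shift buckets.

```python
from datetime import datetime, timedelta

# Hypothetical shift boundaries: 1st shift 07:00-15:00, 2nd 15:00-23:00, 3rd 23:00-07:00.
def classify_hours(entries):
    """Tally workforce-loaded schedule entries (start, hours) into shift buckets."""
    buckets = {"nominal": 0, "second_shift": 0, "third_shift": 0, "weekend_first": 0}
    for start, hours in entries:
        for h in range(hours):
            t = start + timedelta(hours=h)
            weekend = t.weekday() >= 5  # Saturday=5, Sunday=6
            if 7 <= t.hour < 15:
                buckets["weekend_first" if weekend else "nominal"] += 1
            elif 15 <= t.hour < 23:
                buckets["second_shift"] += 1
            else:
                buckets["third_shift"] += 1
    return buckets

schedule = [
    (datetime(2021, 9, 2, 7, 0), 8),   # Thursday, 1st shift
    (datetime(2021, 9, 2, 16, 0), 4),  # Thursday, 2nd shift
    (datetime(2021, 9, 4, 9, 0), 4),   # Saturday, 1st shift
]
print(classify_hours(schedule))
```

Relative measures such as the fraction of off-nominal hours then follow directly from the bucket totals.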

Chandwani, Ashwin, Dey, Saikat, Mallik, Ayan.  2022.  Parameter-Variation-Tolerant Robust Current Sensorless Control of a Single-Phase Boost PFC. IEEE Journal of Emerging and Selected Topics in Industrial Electronics. 3:933–945.

With the objective to eliminate the input current sensor in a totem-pole boost power factor corrector (PFC) for its low-cost design, a novel discretized sampling-based robust control scheme is proposed in this work. The proposed control methodology proves to be beneficial due to its ease of implementation and its ability to support high-frequency operation, while being able to eliminate one sensor and, thus, enhancing reliability and cost-effectiveness. In addition, a detailed closed-loop stability analysis is carried out for the controller in the discrete domain to ascertain brisk dynamic operation when subjected to sudden load fluctuations. To establish the robustness of the proposed control scheme, a detailed sensitivity analysis of the closed-loop performance metrics with respect to undesired changes and inherent uncertainty in system parameters is presented in this article. A comparison with the state-of-the-art (SOA) methods is provided, and conclusive results in terms of better dynamic performance are also established. To verify and elaborate on the specifics of the proposed scheme, a detailed simulation study is conducted, and the results show a 25% reduction in response time as compared to SOA approaches. A 500-W boost PFC prototype is developed and tested with the proposed control scheme to evaluate and benchmark the system steady-state and dynamic performance. A total harmonic distortion of 1.68% is obtained at the rated load with a resultant power factor of 0.998 (lag), which proves the effectiveness and superiority of the proposed control scheme.
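As a rough illustration of the reported figures, total harmonic distortion and the distortion component of the power factor can be computed from RMS harmonic amplitudes. The amplitude values below are invented for illustration and are unrelated to the paper's prototype:

```python
import math

def thd(harmonics):
    """Total harmonic distortion from RMS amplitudes [I1, I2, I3, ...]."""
    fundamental, rest = harmonics[0], harmonics[1:]
    return math.sqrt(sum(h * h for h in rest)) / fundamental

def power_factor(harmonics, cos_phi=1.0):
    """Power factor = distortion factor x displacement factor cos(phi)."""
    d = thd(harmonics)
    return cos_phi / math.sqrt(1.0 + d * d)

currents = [10.0, 0.1, 0.08, 0.1]  # hypothetical RMS amplitudes (A)
print(f"THD = {thd(currents) * 100:.2f}%")
print(f"PF  = {power_factor(currents):.4f}")
```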


Kandaperumal, Gowtham, Pandey, Shikhar, Srivastava, Anurag.  2022.  AWR: Anticipate, Withstand, and Recover Resilience Metric for Operational and Planning Decision Support in Electric Distribution System. IEEE Transactions on Smart Grid. 13:179–190.

With the increasing number of catastrophic weather events and resulting disruption in the energy supply to essential loads, the distribution grid operators’ focus has shifted from reliability to resiliency against high impact, low-frequency events. Given the enhanced automation to enable the smarter grid, there are several assets/resources at the disposal of electric utilities to enhance resiliency. However, with a lack of comprehensive resilience tools for informed operational decisions and planning, utilities face a challenge in investing and prioritizing operational control actions for resiliency. The distribution system resilience is also highly dependent on system attributes, including network, control, generating resources, location of loads and resources, as well as the progression of an extreme event. In this work, we present a novel multi-stage resilience measure called the Anticipate-Withstand-Recover (AWR) metrics. The AWR metrics are based on integrating relevant ‘system characteristics based factors’ before, during, and after the extreme event. The developed methodology utilizes a pragmatic and flexible approach by adopting concepts from the national emergency preparedness paradigm, proactive and reactive controls of grid assets, graph theory with system and component constraints, and a multi-criteria decision-making process. The proposed metrics are applied to provide decision support for (a) operational resilience and (b) planning investments, and validated for a real system in Alaska during the entirety of the event progression.
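The multi-stage integration described above can be sketched as a weighted aggregation. The factor names, weights, and two-level structure below are illustrative assumptions, not the paper's actual formulation:

```python
# Hypothetical sketch: each stage score is a weighted sum of normalized
# system-characteristic factors in [0, 1]; the composite AWR value is a
# weighted combination of the three stage scores.
def stage_score(factors, weights):
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(f * w for f, w in zip(factors, weights))

def awr(anticipate, withstand, recover, stage_weights=(0.3, 0.4, 0.3)):
    return sum(s * w for s, w in zip((anticipate, withstand, recover), stage_weights))

# Example factors (all invented): (preparedness, forecast lead time), etc.
a = stage_score([0.8, 0.6], [0.5, 0.5])              # anticipate
w = stage_score([0.5, 0.9, 0.4], [0.4, 0.3, 0.3])    # withstand
r = stage_score([0.7, 0.5], [0.6, 0.4])              # recover
print(round(awr(a, w, r), 3))
```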

Andersen, Erik, Chiarandini, Marco, Hassani, Marwan, Jänicke, Stefan, Tampakis, Panagiotis, Zimek, Arthur.  2022.  Evaluation of Probability Distribution Distance Metrics in Traffic Flow Outlier Detection. 2022 23rd IEEE International Conference on Mobile Data Management (MDM). :64–69.

Recent approaches have proven the effectiveness of local outlier factor-based outlier detection when applied over traffic flow probability distributions. However, these approaches used distance metrics based on the Bhattacharyya coefficient when calculating probability distribution similarity. Consequently, the limited expressiveness of the Bhattacharyya coefficient restricted the accuracy of the methods. The crucial deficiency of the Bhattacharyya distance metric is its inability to compare distributions with non-overlapping sample spaces over the domain of natural numbers. Traffic flow intensity varies greatly, which results in numerous non-overlapping sample spaces, rendering metrics based on the Bhattacharyya coefficient inappropriate. In this work, we address this issue by exploring alternative distance metrics and showing their applicability in a massive real-life traffic flow data set from 26 vital intersections in The Hague. The results on these data collected from 272 sensors for more than two years show various advantages of the Earth Mover's distance in both effectiveness and efficiency.
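The deficiency can be demonstrated in a few lines. For discrete distributions on the natural numbers, the Bhattacharyya coefficient is zero whenever supports do not overlap, so the distance is infinite regardless of how far apart the distributions sit, while the Earth Mover's (Wasserstein-1) distance still distinguishes them. This is a minimal sketch, not the authors' code:

```python
import math

def bhattacharyya_distance(p, q):
    """D_B = -ln(BC); infinite when supports do not overlap (BC = 0)."""
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    return float("inf") if bc == 0 else -math.log(bc)

def emd_1d(p, q):
    """Wasserstein-1 distance for distributions on bins 0..n-1:
    the sum of absolute differences of the CDFs."""
    total, cp, cq = 0.0, 0.0, 0.0
    for pi, qi in zip(p, q):
        cp += pi
        cq += qi
        total += abs(cp - cq)
    return total

# Three flow distributions over bins 0..5 with disjoint supports.
a = [1, 0, 0, 0, 0, 0]
b = [0, 1, 0, 0, 0, 0]   # close to a
c = [0, 0, 0, 0, 0, 1]   # far from a
print(bhattacharyya_distance(a, b), bhattacharyya_distance(a, c))  # inf inf
print(emd_1d(a, b), emd_1d(a, c))  # 1.0 5.0
```

Bhattacharyya rates b and c as equally (infinitely) far from a; EMD correctly rates c five times farther.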

Queirós, Mauro, Pereira, João Lobato, Leiras, Valdemar, Meireles, José, Fonseca, Jaime, Borges, João.  2022.  Work cell for assembling small components in PCB. 2022 IEEE 27th International Conference on Emerging Technologies and Factory Automation (ETFA). :1–4.

Flexibility and speed in the development of new industrial machines are essential factors for the success of capital goods industries. When assembling a printed circuit board (PCB), since all the components are surface mounted devices (SMD), the whole process is automatic. However, many PCBs require components that are not SMDs, called pin-through-hole (PTH) components, which must be inserted manually, leading to delays in the production line. This work proposes and validates a prototype work cell based on a collaborative robot and vision systems whose objective is to insert these components in a completely autonomous or semi-autonomous way. Different tests were made to validate this work cell, showing the correct implementation and the possibility of replacing the human worker on this PCB assembly task.

Jabrayilzade, Elgun, Evtikhiev, Mikhail, Tüzün, Eray, Kovalenko, Vladimir.  2022.  Bus Factor in Practice. 2022 IEEE/ACM 44th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP). :97–106.

Bus factor is a metric that measures how resilient a project is to sudden engineer turnover. It states the minimal number of engineers that have to be hit by a bus for a project to be stalled. Even though the metric is often discussed in the community, few studies consider its general relevance. Moreover, the existing tools for bus factor estimation focus solely on the data from version control systems, even though there exist other channels for knowledge generation and distribution. With a survey of 269 engineers, we find that the bus factor is perceived as an important problem in collective development, and determine the highest impact channels of knowledge generation and distribution in software development teams. We also propose a multimodal bus factor estimation algorithm that uses data on code reviews and meetings together with the VCS data. We test the algorithm on 13 projects developed at JetBrains and compare its results against the ground truth collected in a survey of the engineers working on these projects. Our algorithm is slightly better at predicting both the bus factor and the key developers than the state-of-the-art tool by Avelino et al. Finally, we use the interviews and the surveys to derive a set of best practices to address the bus factor issue and proposals for a possible bus factor assessment tool.
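A common VCS-only formulation (in the spirit of the Avelino et al. baseline, not the paper's multimodal algorithm) counts the smallest set of developers whose departure leaves more than half of the files without any author. A minimal greedy sketch, with a made-up authorship map:

```python
def bus_factor(file_authors):
    """Smallest number of developers whose departure leaves more than half
    of the files without any remaining author (greedy approximation)."""
    files = set(file_authors)
    gone, removed = set(), 0
    while len([f for f in files if set(file_authors[f]) <= gone]) * 2 <= len(files):
        # Remove the developer covering the most not-yet-abandoned files.
        remaining = {f for f in files if not set(file_authors[f]) <= gone}
        devs = {d for f in remaining for d in file_authors[f]}
        top = max(devs, key=lambda d: sum(d in file_authors[f] for f in remaining))
        gone.add(top)
        removed += 1
    return removed

repo = {
    "core.py": ["alice"],
    "api.py": ["alice", "bob"],
    "ui.py": ["bob"],
    "docs.md": ["carol"],
}
print(bus_factor(repo))  # losing alice and bob abandons 3 of 4 files -> 2
```

The paper's contribution is to feed such an estimator with code review and meeting data in addition to commit authorship.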

2022-01-31
Dai, Wei, Berleant, Daniel.  2021.  Benchmarking Robustness of Deep Learning Classifiers Using Two-Factor Perturbation. 2021 IEEE International Conference on Big Data (Big Data). :5085–5094.
Deep learning (DL) classifiers are often unstable in that they may change significantly when retested on perturbed images or low quality images. This paper adds to the fundamental body of work on the robustness of DL classifiers. We introduce a new two-dimensional benchmarking matrix to evaluate robustness of DL classifiers, and we also introduce a four-quadrant statistical visualization tool, including minimum accuracy, maximum accuracy, mean accuracy, and coefficient of variation, for benchmarking robustness of DL classifiers. To measure robust DL classifiers, we create 69 comprehensive benchmarking image sets, including a clean set, sets with single-factor perturbations, and sets with two-factor perturbation conditions. After collecting experimental results, we first report that using two-factor perturbed images improves both robustness and accuracy of DL classifiers. The two-factor perturbation includes (1) two digital perturbations (salt & pepper noise and Gaussian noise) applied in both orders, and (2) one digital perturbation (salt & pepper noise) and a geometric perturbation (rotation) applied in both orders. All source codes, related image sets, and results are shared on the GitHub website at https://github.com/caperock/robustai to support future academic research and industry projects.
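The two-factor digital perturbation can be sketched as follows. This is a simplified stand-in for the released code, operating on a grayscale image as a list of pixel rows, with made-up noise parameters:

```python
import random

def salt_and_pepper(img, rate, rng):
    """Flip a fraction of pixels to 0 or 255."""
    out = [row[:] for row in img]
    for r, row in enumerate(out):
        for c in range(len(row)):
            if rng.random() < rate:
                out[r][c] = rng.choice((0, 255))
    return out

def gaussian_noise(img, sigma, rng):
    """Add zero-mean Gaussian noise, clipped to [0, 255]."""
    return [[min(255, max(0, round(p + rng.gauss(0, sigma)))) for p in row]
            for row in img]

rng = random.Random(0)
clean = [[128] * 8 for _ in range(8)]
# Two-factor perturbation applied in both orders, as in the benchmark sets.
sp_then_gauss = gaussian_noise(salt_and_pepper(clean, 0.05, rng), 10, rng)
gauss_then_sp = salt_and_pepper(gaussian_noise(clean, 10, rng), 0.05, rng)
```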
Stevens, Clay, Soundy, Jared, Chan, Hau.  2021.  Exploring the Efficiency of Self-Organizing Software Teams with Game Theory. 2021 IEEE/ACM 43rd International Conference on Software Engineering: New Ideas and Emerging Results (ICSE-NIER). :36–40.
Over the last two decades, software development has moved away from centralized, plan-based management toward agile methodologies such as Scrum. Agile methodologies are founded on a shared set of core principles, including self-organizing software development teams. Such teams are promoted as a way to increase both developer productivity and team morale, which is echoed by academic research. However, recent works on agile neglect to consider strategic behavior among developers, particularly during task assignment, one of the primary functions of a self-organizing team. This paper argues that self-organizing software teams could be readily modeled using game theory, providing insight into how agile developers may act when behaving strategically. We support our argument by presenting a general model for self-assignment of development tasks based on and extending concepts drawn from established game theory research. We further introduce the software engineering community to two metrics drawn from game theory, the price of stability and the price of anarchy, which can be used to gauge the efficiencies of self-organizing teams compared to centralized management. We demonstrate how these metrics can be used in a case study evaluating the hypothesis that smaller teams self-organize more efficiently than larger teams, with conditional support for that hypothesis. Our game-theoretic framework provides new perspective for the software engineering community, opening many avenues for future research.
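Both metrics can be computed by brute force on a toy self-assignment game. The game below (four developers with task weights 2, 2, 1, 1 choosing between two work streams, individual cost = load of the chosen stream, social cost = makespan) is an illustrative assumption, not the paper's model:

```python
from itertools import product

weights = [2, 2, 1, 1]          # task sizes owned by developers 0..3
streams = (0, 1)

def loads(assignment):
    l = [0, 0]
    for w, s in zip(weights, assignment):
        l[s] += w
    return l

def cost(assignment, i):
    return loads(assignment)[assignment[i]]

def is_nash(assignment):
    """No developer can strictly lower their cost by switching streams."""
    for i in range(len(weights)):
        for s in streams:
            if s != assignment[i]:
                dev = list(assignment)
                dev[i] = s
                if cost(tuple(dev), i) < cost(assignment, i):
                    return False
    return True

all_assignments = list(product(streams, repeat=len(weights)))
opt = min(max(loads(a)) for a in all_assignments)
nash = [a for a in all_assignments if is_nash(a)]
poa = max(max(loads(a)) for a in nash) / opt   # price of anarchy
pos = min(max(loads(a)) for a in nash) / opt   # price of stability
print(poa, pos)  # 4/3 and 1: the worst equilibrium is a third slower than optimal
```

Here the self-organized team can get stuck in the equilibrium that puts both size-2 tasks on one stream (makespan 4) even though the centralized optimum is 3, which is exactly the inefficiency the two metrics quantify.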
Sandhu, Amandeep Kaur, Batth, Ranbir Singh.  2021.  A Hybrid approach to identify Software Reusable Components in Software Intelligence. 2021 2nd International Conference on Intelligent Engineering and Management (ICIEM). :353–356.
Reusability is defined as the practice of utilizing existing software components in software development. It plays a significant role in component-based software engineering. Extracting the components from the source code and checking the reusability factors is the most crucial part. Software Intelligence, a combination of data mining and artificial intelligence, helps to cope with the extraction and detection of the reusability factor of a component. This work considers the prediction of the reusability factor. This paper proposes a hybrid PSO-NSGA III approach to detect whether an extracted component is reusable or not. Existing models lack hyperparameter tuning for prediction, which is addressed in this work. The proposed approach was compared with four models, showing better outcomes in terms of performance metrics.
Alexopoulos, Ilias, Neophytou, Stelios, Kyriakides, Ioannis.  2021.  Identifying Metrics for an IoT Performance Estimation Framework. 2021 10th Mediterranean Conference on Embedded Computing (MECO). :1–6.
In this work we introduce a framework to support design decisions for heterogeneous IoT platforms and devices. The framework methodology as well as the development of software and hardware models are outlined. Specific factors that affect the performance of a device are identified and formulated in a metric form. The performance aspects are embedded in a flexible and scalable framework for decision support. An indicative experimental setup investigates the applicability of the framework for a specific functional block. The experimental results are used to assess the significance of the framework under development.
Chang, Mai Lee, Trafton, Greg, McCurry, J. Malcolm, Lockerd Thomaz, Andrea.  2021.  Unfair! Perceptions of Fairness in Human-Robot Teams. 2021 30th IEEE International Conference on Robot Human Interactive Communication (RO-MAN). :905–912.
How team members are treated influences their performance in the team and their desire to be a part of the team in the future. Prior research proposes fairness definitions for human-robot teaming that are based on the work completed by each team member. However, metrics that properly capture people’s perception of fairness in human-robot teaming remain a research gap. We present work on assessing how well objective metrics capture people’s perception of fairness. First, we extend prior fairness metrics based on team members’ capabilities and workload to a larger team. We also develop a new metric to quantify the amount of time that the robot spends working on the same task as each person. We conduct an online user study (n=95) and show that these metrics align with perceived fairness. Importantly, we discover that there are bleed-over effects in people’s assessment of fairness. When asked to rate fairness based on the amount of time that the robot spends working with each person, participants used two factors (fairness based on the robot’s time and teammates’ capabilities). This bleed-over effect is stronger when people are asked to assess fairness based on capability. From these insights, we propose design guidelines for algorithms to enable robotic teammates to consider fairness in their decision-making to maintain positive team social dynamics and team task performance.
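A time-based fairness metric of the kind described can be sketched very simply. The min/max-share summary below is an illustrative assumption, not the paper's actual metric:

```python
# Hypothetical sketch of a time-based fairness metric: the robot's working
# time alongside each teammate, summarized as the ratio of the smallest to
# the largest share (1.0 = perfectly even; near 0 = one person monopolized).
def time_fairness(robot_minutes_with):
    shares = list(robot_minutes_with.values())
    return min(shares) / max(shares) if max(shares) > 0 else 1.0

even = {"ann": 10.0, "ben": 10.0, "cam": 10.0}
skewed = {"ann": 18.0, "ben": 6.0, "cam": 6.0}
print(time_fairness(even), time_fairness(skewed))  # 1.0 0.333...
```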
Yao, Chunxing, Sun, Zhenyao, Xu, Shuai, Zhang, Han, Ren, Guanzhou, Ma, Guangtong.  2021.  Optimal Parameters Design for Model Predictive Control using an Artificial Neural Network Optimized by Genetic Algorithm. 2021 13th International Symposium on Linear Drives for Industry Applications (LDIA). :1–6.
Model predictive control (MPC) has become one of the most attractive control techniques due to its outstanding dynamic performance for motor drives. Besides, MPC with constant switching frequency (CSF-MPC) maintains the advantages of MPC as well as constant frequency, but the selection of weighting factors in the cost function is difficult for CSF-MPC. Fortunately, the application of artificial neural networks (ANN) can accelerate the selection without any additional computation burden. Therefore, this paper designs a specific artificial neural network optimized by genetic algorithm (GA-ANN) to select the optimal weighting factors of CSF-MPC for permanent magnet synchronous motor (PMSM) drives fed by a three-level T-type inverter. The key performance metrics like THD and switching frequency error (ferr) are extracted from simulation, and these data are utilized to train and evaluate the GA-ANN. The trained GA-ANN model can automatically and precisely select the optimal weighting factors for minimizing THD and ferr under different working conditions of the PMSM. Furthermore, the experimental results demonstrate the validity of the GA-ANN model and the robustness of the optimal weighting factors under different torque loads. Accordingly, any arbitrary user-defined working condition combining THD and ferr can be defined, and the optimal weighting factors can be quickly and explicitly determined via the trained GA-ANN model.
Bergmans, Lodewijk, Schrijen, Xander, Ouwehand, Edwin, Bruntink, Magiel.  2021.  Measuring source code conciseness across programming languages using compression. 2021 IEEE 21st International Working Conference on Source Code Analysis and Manipulation (SCAM). :47–57.
It is well-known, and often a topic of heated debates, that programs in some programming languages are more concise than in others. This is a relevant factor when comparing or aggregating volume-impacted metrics on source code written in a combination of programming languages. In this paper, we present a model for measuring the conciseness of programming languages in a consistent, objective and evidence-based way. We present the approach, explain how it is founded on information theoretical principles, present detailed analysis steps and show the quantitative results of applying this model to a large benchmark of diverse commercial software applications. We demonstrate that our metric for language conciseness is strongly correlated with both an alternative analytical approach, and with a large scale developer survey, and show how its results can be applied to improve software metrics for multi-language applications.
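The information-theoretic idea is easy to demonstrate: compressing source code discounts repetition and boilerplate, so the compressed size is a better proxy for information content than raw volume. A minimal sketch using DEFLATE (the specific snippets and the choice of zlib are illustrative assumptions, not the paper's benchmark setup):

```python
import zlib

def compressed_volume(source: str) -> int:
    """Proxy for the information content of a code base: the size of its
    DEFLATE-compressed bytes, which discounts boilerplate and repetition."""
    return len(zlib.compress(source.encode("utf-8"), 9))

verbose = "\n".join(f"int temp{i} = 0; // initialise temp{i}" for i in range(50))
concise = "int temp[50] = {0};"
for src in (verbose, concise):
    print(len(src), "raw bytes ->", compressed_volume(src), "compressed")
```

The ratio of compressed volumes for the same functionality written in two languages then serves as a conciseness factor between those languages.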
Liu, Yong, Zhu, Xinghua, Wang, Jianzong, Xiao, Jing.  2021.  A Quantitative Metric for Privacy Leakage in Federated Learning. ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :3065–3069.
In the federated learning system, parameter gradients are shared among participants and the central modulator, while the original data never leave their protected source domain. However, the gradient itself might carry enough information for precise inference of the original data. By reporting their parameter gradients to the central server, client datasets are exposed to inference attacks from adversaries. In this paper, we propose a quantitative metric based on mutual information for clients to evaluate the potential risk of information leakage in their gradients. Mutual information has received increasing attention in the machine learning and data mining community over the past few years. However, existing mutual information estimation methods cannot handle high-dimensional variables. In this paper, we propose a novel method to approximate the mutual information between the high-dimensional gradients and batched input data. Experimental results show that the proposed metric reliably reflects the extent of information leakage in federated learning. In addition, using the proposed metric, we investigate the influential factors of risk level. It is shown that the risk of information leakage is related to the status of the task model, as well as the inherent data distribution.
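The intuition behind the metric can be shown in one dimension. A plug-in mutual information estimate is 1 bit when a shared value perfectly reveals a binary input and 0 when it is independent of it; the paper's contribution is precisely that such simple estimators break down for high-dimensional gradients, which this low-dimensional sketch does not capture:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in MI estimate (in bits) from discretized samples."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        pj = c / n
        mi += pj * math.log2(pj / ((px[x] / n) * (py[y] / n)))
    return mi

x = [0, 1] * 50              # binary "input data"
leaky = x[:]                 # shared value perfectly reveals the input
safe = [0] * 50 + [1] * 50   # statistically independent of x
print(mutual_information(x, leaky), mutual_information(x, safe))  # 1.0 0.0
```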
Freire, Sávio, Rios, Nicolli, Pérez, Boris, Castellanos, Camilo, Correal, Darío, Ramač, Robert, Mandić, Vladimir, Taušan, Nebojša, López, Gustavo, Pacheco, Alexia et al.  2021.  How Experience Impacts Practitioners' Perception of Causes and Effects of Technical Debt. 2021 IEEE/ACM 13th International Workshop on Cooperative and Human Aspects of Software Engineering (CHASE). :21–30.
Context: The technical debt (TD) metaphor helps to conceptualize the pending issues and trade-offs made during software development. Knowing TD causes can support the definition of preventive actions, and information about effects aids in the prioritization of TD payment. Goal: To investigate the impact of the experience level on how practitioners perceive the most likely causes that lead to TD and the effects of TD that have the highest impacts on software projects. Method: We approach this topic by surveying 227 practitioners. Results: While experienced software developers focus on human factors as TD causes and external quality attributes as TD effects, less experienced developers seem to concentrate on technical issues as causes and internal quality issues and increased project effort as effects. Missing any of these types of causes could lead a team to miss the identification of important TD, or miss opportunities to preempt TD. On the other hand, missing important effects could hamper effective planning or erode the effectiveness of decisions about prioritizing TD items. Conclusion: Having software development teams composed of practitioners with a homogeneous experience level can erode the team's ability to effectively manage TD.
Peitek, Norman, Apel, Sven, Parnin, Chris, Brechmann, André, Siegmund, Janet.  2021.  Program Comprehension and Code Complexity Metrics: An fMRI Study. 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE). :524–536.
Background: Researchers and practitioners have been using code complexity metrics for decades to predict how developers comprehend a program. While it is plausible and tempting to use code metrics for this purpose, their validity is debated, since they rely on simple code properties and rarely consider particularities of human cognition. Aims: We investigate whether and how code complexity metrics reflect difficulty of program comprehension. Method: We have conducted a functional magnetic resonance imaging (fMRI) study with 19 participants observing program comprehension of short code snippets at varying complexity levels. We dissected four classes of code complexity metrics and their relationship to neuronal, behavioral, and subjective correlates of program comprehension, overall analyzing more than 41 metrics. Results: While our data corroborate that complexity metrics can, to a limited degree, explain programmers' cognition in program comprehension, fMRI allowed us to gain insights into why some code properties are difficult to process. In particular, a code's textual size drives programmers' attention, and vocabulary size burdens programmers' working memory. Conclusion: Our results provide neuro-scientific evidence supporting warnings of prior research questioning the validity of code complexity metrics and pin down factors relevant to program comprehension. Future Work: We outline several follow-up experiments investigating fine-grained effects of code complexity and describe possible refinements to code complexity metrics.
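The two properties the study singles out, textual size and vocabulary size, are cheap to compute. A minimal sketch using a naive regex tokenizer (an illustrative simplification, not the study's instrumentation):

```python
import re

def size_metrics(snippet: str):
    """Textual size (token count) and vocabulary size (distinct tokens),
    the two code properties linked above to attention and working memory."""
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", snippet)
    return len(tokens), len(set(tokens))

snippet = """
def mean(values):
    total = 0
    for v in values:
        total = total + v
    return total / len(values)
"""
print(size_metrics(snippet))  # (26, 16)
```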
2021-03-01
Said, S., Bouloiz, H., Gallab, M..  2020.  Identification and Assessment of Risks Affecting Sociotechnical Systems Resilience. 2020 IEEE 6th International Conference on Optimization and Applications (ICOA). :1–10.
Resilience is regarded nowadays as the ideal solution that can be envisaged by sociotechnical systems for coping with potential threats and crises. This being said, gaining and maintaining this ability is not always easy, given the multitude of risks driving adverse and challenging events. This paper aims to propose a method for the assessment of risks directly affecting resilience. This work is conducted within the framework of risk assessment and resilience engineering approaches. A 5×5 matrix, dedicated to the identification and assessment of risk factors that constitute threats to system resilience, has been developed. This matrix consists of two axes, namely, the impact on resilience metrics and the availability and effectiveness of resilience planning. Checklists serving to collect information about these two attributes are established, and a case study is undertaken. In this paper, a new method for identifying and assessing risk factors directly threatening the resilience of a given system is presented. The analysis of these risks must be given priority to make the system more resilient to shocks.
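A matrix of this shape is straightforward to encode. The grading scales, score bands, and level names below are illustrative assumptions, not the paper's actual matrix:

```python
# Hypothetical rendering of a 5x5 risk matrix: one axis grades the impact on
# resilience metrics (1..5), the other the gap in availability and
# effectiveness of resilience planning (1 = strong planning .. 5 = none).
LEVELS = ["negligible", "low", "moderate", "high", "critical"]

def risk_level(impact: int, planning_gap: int) -> str:
    assert 1 <= impact <= 5 and 1 <= planning_gap <= 5
    score = impact * planning_gap          # 1..25
    return LEVELS[min((score - 1) // 5, 4)]

print(risk_level(1, 1))  # negligible
print(risk_level(3, 4))  # moderate
print(risk_level(5, 5))  # critical
```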
Kerim, A., Genc, B..  2020.  Mobile Games Success and Failure: Mining the Hidden Factors. 2020 7th International Conference on Soft Computing Machine Intelligence (ISCMI). :167–171.
Predicting the success of a mobile game is a prime issue in the game industry. Thousands of games are being released each day. However, a few of them succeed while the majority fail. This work investigates the potential correlation between the success of a mobile game and its specific attributes; more than 17 thousand games were considered for that reason. We show that specific game attributes, such as the number of IAPs (In-App Purchases), belonging to the puzzle genre, supporting different languages and being produced by a mature developer, highly and positively affect the success of the game in the future. Moreover, we show that releasing the game in July and not including any IAPs seems to be highly associated with the game’s failure. Our second main contribution is the proposal of a novel success score metric that reflects multiple objectives, in contrast to evaluating only revenue, average rating or rating count. We also employ different machine learning models, namely SVM (Support Vector Machine), RF (Random Forest) and Deep Learning (DL), to predict this success score metric of a mobile game given its attributes. The trained models were able to predict this score, as well as the rating average and rating count of a mobile game, with more than 70% accuracy. This prediction can help developers before releasing their game to the market to avoid any potential disappointments.
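A multi-objective success score of the kind proposed can be sketched as a weighted blend of normalized signals. The normalizations and weights below are invented for illustration and are not the paper's metric:

```python
import math

def success_score(revenue_usd, rating_avg, rating_count,
                  weights=(0.4, 0.3, 0.3)):
    """Blend revenue, average rating and rating count instead of any single one."""
    r = math.log10(1 + revenue_usd) / 9        # ~1.0 around $1B revenue
    a = rating_avg / 5.0                       # 5-star scale
    c = math.log10(1 + rating_count) / 7       # ~1.0 around 10M ratings
    wr, wa, wc = weights
    return wr * r + wa * a + wc * c

hit = success_score(5_000_000, 4.6, 800_000)
flop = success_score(1_000, 2.9, 40)
print(round(hit, 3), round(flop, 3))
```

The log scales keep a single runaway dimension (e.g. revenue) from dominating the composite.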
Raj, C., Khular, L., Raj, G..  2020.  Clustering Based Incident Handling For Anomaly Detection in Cloud Infrastructures. 2020 10th International Conference on Cloud Computing, Data Science Engineering (Confluence). :611–616.
Incident Handling for Cloud Infrastructures focuses on how clustering-based and non-clustering-based algorithms can be implemented. Our research focuses on identifying anomalies and suspicious activities that might happen inside a cloud infrastructure over available datasets. A brief study has been conducted, where a network statistics dataset, NSL-KDD, has been chosen as the model to be worked upon, such that it can mirror the cloud infrastructure and its components. An important aspect of cloud security is to implement anomaly detection mechanisms, in order to monitor the incidents that inhibit the development and the efficiency of the cloud. Several methods have been discovered which help in achieving our present goal, some of these are highlighted as the following: applying algorithms such as the Local Outlier Factor to cancel the noise created by irrelevant data points; applying the DBSCAN algorithm, which can detect less dense areas, in order to identify their cause of clustering; applying the K-Means algorithm to generate positive and negative clusters to identify the anomalous clusters; and applying the Isolation Forest algorithm in order to implement a decision-based approach to detect anomalies. The best algorithm would help in finding and fixing the anomalies efficiently and would help us in developing an Incident Handling model for the Cloud.
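The density-based intuition shared by these detectors can be illustrated with a much simpler k-nearest-neighbour distance score (a stand-in sketch, not the actual LOF, DBSCAN, K-Means or Isolation Forest implementations): points far from their neighbours score highest.

```python
def knn_outlier_scores(points, k=2):
    """Average distance to the k nearest neighbours; larger = more anomalous."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    scores = []
    for p in points:
        nearest = sorted(dist(p, q) for q in points if q is not p)[:k]
        scores.append(sum(nearest) / k)
    return scores

# Four clustered network-statistics records and one anomalous record.
records = [(1, 1), (1, 2), (2, 1), (2, 2), (9, 9)]
scores = knn_outlier_scores(records)
print(max(range(len(records)), key=scores.__getitem__))  # index 4 is flagged
```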
Perisetty, A., Bodempudi, S. T., Shaik, P. Rahaman, Kumar, B. L. N. Phaneendra.  2020.  Classification of Hyperspectral Images using Edge Preserving Filter and Nonlinear Support Vector Machine (SVM). 2020 4th International Conference on Intelligent Computing and Control Systems (ICICCS). :1050–1054.
A hyperspectral image is acquired with a special sensor in which the information is collected continuously. This sensor provides abundant data from the captured scene. The highly voluminous data in such an image enable the extraction of materials and other valuable items in it. This paper proposes a methodology to extract rich information from hyperspectral images. As the information is collected in a contiguous manner, there is a need to extract spectral bands that are uncorrelated. A factor analysis based dimensionality reduction technique is employed to extract the spectral bands, and a weighted least squares filter is used to get the spatial information from the data. Due to the preservation of the edge property in the spatial filter, much information is extracted during the feature extraction phase. Finally, a nonlinear SVM is applied to assign a class label to the pixels in the image. The research work is tested on the standard Indian Pines dataset. The performance of the proposed method on this dataset is assessed through various accuracy measures; these accuracies are 96%, 92.6%, and 95.4%, higher than those of the other methods. This methodology can be applied to forestry applications to extract various metrics in the real world.
Nasir, J., Norman, U., Bruno, B., Dillenbourg, P..  2020.  When Positive Perception of the Robot Has No Effect on Learning. 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :313–320.
Humanoid robots, with a focus on personalised social behaviours, are increasingly being deployed in educational settings to support learning. However, crafting pedagogical HRI designs and robot interventions that have a real, positive impact on participants' learning, as well as effectively measuring such impact, is still an open challenge. As a first effort in tackling the issue, in this paper we propose a novel robot-mediated, collaborative problem-solving activity for school children, called JUSThink, aimed at improving their computational thinking skills. JUSThink will serve as a baseline and reference for investigating how the robot's behaviour can influence the engagement of the children with the activity, as well as their collaboration and mutual understanding while working on it. To this end, this first iteration aims at investigating (i) participants' engagement with the activity (Intrinsic Motivation Inventory, IMI), their mutual understanding (IMI-like) and perception of the robot (Godspeed Questionnaire); and (ii) participants' performance during the activity, using several performance and learning metrics. We carried out an extensive user study in two international schools in Switzerland, in which around 100 children participated in pairs in hour-long interactions with the activity. Surprisingly, we observe that while a team's performance significantly affects how team members evaluate their competence, mutual understanding and task engagement, it does not affect their perception of the robot and its helpfulness, a fact which highlights the need for baseline studies and multi-dimensional evaluation metrics when assessing the impact of robots in educational activities.
Zhang, Y., Groves, T., Cook, B., Wright, N. J., Coskun, A. K..  2020.  Quantifying the impact of network congestion on application performance and network metrics. 2020 IEEE International Conference on Cluster Computing (CLUSTER). :162–168.
In modern high-performance computing (HPC) systems, network congestion is an important factor contributing to performance degradation; however, how network congestion impacts application performance is not fully understood. As the Aries network, a recent HPC network architecture featuring a dragonfly topology, is equipped with network counters measuring packet transmission statistics on each router, these network metrics can potentially be utilized to understand network performance. In this work, through experiments on a large HPC system, we quantify the impact of network congestion on various applications' performance in terms of execution time, and we correlate application performance with network metrics. Our results demonstrate diverse impacts of network congestion: while applications with intensive MPI operations (such as HACC and MILC) suffer execution times more than 40% longer under network congestion, applications with less intensive MPI operations (such as Graph500 and HPCG) are mostly unaffected. We also demonstrate that a stall-to-flit ratio metric derived from Aries network counters is positively correlated with performance degradation, and thus this metric can serve as an indicator of network congestion in HPC systems.
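The stall-to-flit ratio mentioned in the abstract can be illustrated with a small hypothetical computation over per-router counter deltas sampled in the same interval; the counter values below are made up for illustration and do not come from real Aries hardware.

```python
def stall_to_flit_ratio(stalls: int, flits: int) -> float:
    """Ratio of credit-stall cycles to flits forwarded over a sampling
    interval; a higher value suggests congestion at that router."""
    if flits == 0:
        return 0.0          # idle interval: no traffic, no congestion signal
    return stalls / flits

# Two routers sampled over the same interval: the congested one stalls
# far more often per flit it forwards.
quiet = stall_to_flit_ratio(stalls=1_200, flits=480_000)
congested = stall_to_flit_ratio(stalls=950_000, flits=510_000)
print(f"quiet={quiet:.4f} congested={congested:.4f}")
```

Normalizing stalls by flits, rather than using raw stall counts, is what lets the metric compare routers carrying very different traffic volumes.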
Hynes, E., Flynn, R., Lee, B., Murray, N..  2020.  An Evaluation of Lower Facial Micro Expressions as an Implicit QoE Metric for an Augmented Reality Procedure Assistance Application. 2020 31st Irish Signals and Systems Conference (ISSC). :1–6.
Augmented reality (AR) has been identified as a key technology to enhance worker utility in the context of increasing automation of repeatable procedures. AR can achieve this by assisting the user in performing complex and frequently changing procedures. Crucial to the success of procedure assistance AR applications is user acceptability, which can be measured by user quality of experience (QoE). An active research topic in QoE is the identification of implicit metrics that can be used to continuously infer user QoE during a multimedia experience. A user's QoE is linked to their affective state, and affective state is reflected in facial expressions. Emotions shown in micro facial expressions resemble those expressed in normal expressions but are distinguished from them by their brief duration. The novelty of this work lies in the evaluation of micro facial expressions as a continuous QoE metric by means of correlation analysis against the more traditional and accepted post-experience self-reporting. In this work, an optimal Rubik's Cube solver AR application was used as a proof of concept for complex procedure assistance. This was compared with a paper-based procedure assistance control. QoE expressed by affect in normal and micro facial expressions was evaluated through correlation analysis with post-experience reports. The results show that the AR application yielded higher task success rates and shorter task durations. Micro facial expressions reflecting disgust correlated moderately with the questionnaire responses for instruction disinterest in the AR application.
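The correlation analysis the abstract relies on — relating a continuous facial-expression signal to post-experience questionnaire scores — reduces to a Pearson coefficient per expression/question pair. A minimal sketch with fabricated per-participant numbers (not the study's data):

```python
import numpy as np

# Per participant: mean "disgust" micro-expression intensity during the task,
# and self-reported instruction disinterest (1-7 Likert) from the questionnaire.
disgust = np.array([0.10, 0.35, 0.22, 0.50, 0.05, 0.41, 0.30, 0.15])
disinterest = np.array([2, 5, 3, 6, 1, 5, 4, 2], dtype=float)

# Pearson correlation coefficient between the implicit and explicit measures.
r = np.corrcoef(disgust, disinterest)[0, 1]
print(f"Pearson r = {r:.2f}")
```

A positive r indicates the implicit micro-expression signal tracks the self-reported measure, which is the evidence needed to use it as a continuous QoE metric.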
Golagha, M., Pretschner, A., Briand, L. C..  2020.  Can We Predict the Quality of Spectrum-based Fault Localization? 2020 IEEE 13th International Conference on Software Testing, Validation and Verification (ICST). :4–15.
Fault localization and repair are time-consuming and tedious, and there is a significant and growing need for automated techniques to support such tasks. Despite significant progress in this area, existing fault localization techniques are not yet widely applied in practice, and their effectiveness varies greatly from case to case. Existing work suggests new algorithms and ideas, as well as adjustments to test suites, to improve the effectiveness of automated fault localization. However, important questions remain open: Why is the effectiveness of these techniques so unpredictable? What factors influence the effectiveness of fault localization? Can we accurately predict fault localization effectiveness? In this paper, we try to answer these questions by collecting 70 static, dynamic, test-suite, and fault-related metrics that we hypothesize are related to effectiveness. Our analysis shows that a combination of only a few static, dynamic, and test metrics enables the construction of a prediction model with excellent discrimination power between levels of effectiveness (eight metrics yielding an AUC of 0.86; fifteen metrics yielding an AUC of 0.88). The model hence yields a practically useful confidence factor for assessing the potential effectiveness of fault localization. Given that these metrics are the most influential in explaining the effectiveness of fault localization, they can also serve as a guide for corrective actions on code and test suites, leading to more effective fault localization.
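The kind of prediction model the abstract evaluates can be sketched schematically: a classifier trained on per-fault metrics to discriminate effective from ineffective localization, scored by AUC. The metrics and labels below are synthetic, and logistic regression is only one plausible model choice, not necessarily the authors'.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_faults, n_metrics = 400, 8          # eight metrics, echoing the abstract
X = rng.normal(size=(n_faults, n_metrics))
# Effectiveness label driven by a few of the metrics plus noise.
signal = X[:, 0] + 0.8 * X[:, 1] - 0.6 * X[:, 2]
y = (signal + rng.normal(scale=0.5, size=n_faults) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=2)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"held-out AUC: {auc:.2f}")
```

The held-out AUC plays the role of the 0.86/0.88 discrimination scores reported in the abstract: it measures how well the metric combination separates effective from ineffective localization runs.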
2021-01-15
Kumar, A., Bhavsar, A., Verma, R..  2020.  Detecting Deepfakes with Metric Learning. 2020 8th International Workshop on Biometrics and Forensics (IWBF). :1—6.

With the arrival of several face-swapping applications such as FaceApp, SnapChat, MixBooth, FaceBlender and many more, the authenticity of digital media content is hanging by a very loose thread. On social media platforms, videos are widely circulated, often at a high compression factor. In this work, we analyze several deep learning approaches in the context of deepfake classification in high-compression scenarios and demonstrate that a proposed approach based on metric learning can be very effective in performing such a classification. Using fewer frames per video to assess its realism, the metric learning approach using a triplet network architecture proves fruitful. It learns to enhance the feature-space distance between the clusters of real- and fake-video embedding vectors. We validated our approaches on two datasets to analyze the behavior in different environments. We achieved a state-of-the-art AUC score of 99.2% on the Celeb-DF dataset and an accuracy of 90.71% on a highly compressed Neural Texture dataset. Our approach is especially helpful on social media platforms, where data compression is inevitable.
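The triplet objective underlying the metric-learning approach can be sketched in a few lines of NumPy: an anchor embedding is pulled toward a same-class positive (real/real or fake/fake) and pushed away from an opposite-class negative. This is a minimal illustration of the loss, not the paper's network.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet margin loss: max(0, d(a, p) - d(a, n) + margin)."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Toy 2-D embeddings: a well-separated triplet incurs zero loss, while an
# inverted one (fake closer to the real anchor than another real) is penalised.
real_a = np.array([0.0, 0.0])
real_b = np.array([0.1, 0.0])
fake = np.array([3.0, 0.0])
print(triplet_loss(real_a, real_b, fake))               # separated: 0.0
print(triplet_loss(real_a, fake, real_b, margin=1.0))   # inverted: 3.9
```

Minimising this loss over many triplets is what widens the feature-space gap between the real and fake embedding clusters that the abstract describes.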