
Filters: Keyword is Man-machine systems
2023-02-17
Patel, Sabina M., Phillips, Elizabeth, Lazzara, Elizabeth H..  2022.  Updating the paradigm: Investigating the role of swift trust in human-robot teams. 2022 IEEE 3rd International Conference on Human-Machine Systems (ICHMS). :1–1.
With the influx of technology use and human-robot teams, it is important to understand how swift trust develops within these teams. Given this influx, we plan to study how surface cues (i.e., observable characteristics) and imported information (i.e., knowledge from external sources or personal experiences) affect the development of swift trust. We hypothesize that human-like surface-level cues and positive imported information will yield higher swift trust. These findings will help inform the assignment of human-robot teams in the future.
2022-06-10
Nguyen, Tien N., Choo, Raymond.  2021.  Human-in-the-Loop XAI-enabled Vulnerability Detection, Investigation, and Mitigation. 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE). :1210–1212.
The need for cyber resilience is increasingly important in our technology-dependent society, where computing systems, devices and data will continue to be the target of cyber attackers. Hence, we propose a conceptual framework called ‘Human-in-the-Loop Explainable-AI-Enabled Vulnerability Detection, Investigation, and Mitigation’ (HXAI-VDIM). Specifically, instead of resolving complex security vulnerability scenarios solely as the output of an AI/ML model, we integrate the security analyst or forensic investigator into the man-machine loop and leverage explainable AI (XAI) to combine AI and Intelligence Assistant (IA) capabilities to amplify human intelligence in both proactive and reactive processes. Our goal is for HXAI-VDIM to integrate human and machine in an interactive and iterative loop with security visualization, using human intelligence to guide the XAI-enabled system and generate refined solutions.
2022-06-09
Yin, Weiru, Chai, Chen, Zhou, Ziyao, Li, Chenhao, Lu, Yali, Shi, Xiupeng.  2021.  Effects of trust in human-automation shared control: A human-in-the-loop driving simulation study. 2021 IEEE International Intelligent Transportation Systems Conference (ITSC). :1147–1154.
Human-automation shared control is proposed to reduce the risk of driver disengagement in Level-3 autonomous vehicles. Although previous studies have shown that a shared control strategy is effective at keeping the driver in the loop and improving the driver's performance, over- and under-trust may affect cooperation between the driver and the automation system. This study conducted a human-in-the-loop driving simulation experiment to assess the effects of trust on drivers' shared-control behavior. An expert shared control strategy with longitudinal and lateral driving assistance was proposed and implemented in the experiment platform. Based on the experiment (N=24), trust in shared control was evaluated, followed by a correlation analysis of trust and behaviors. The moderating effects of trust on the relationship between gaze focalization and the minimum time to collision were then explored. Results showed that self-reported trust in shared control could be evaluated on three subscales: safety, efficiency, and ease of control, all of which correlate more strongly with gaze focalization than with other behaviors. In addition, with more trust in ease of control, there is a gentle decrease in human-machine conflicts in mean brake inputs. The moderating effects show that trust amplifies the decrease in the minimum time to collision as eyes-off-road time increases. These results indicate that over-trust in automation leads to unsafe behaviors, particularly in monitoring behavior. This study contributes to revealing the link between trust and behavior in the context of human-automation shared control, and can be applied to improve the design of shared control and reduce risky driver behaviors through further trust calibration.
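A moderating effect of this kind is typically tested as a regression with an interaction term. The sketch below is a minimal, hypothetical illustration of that analysis, not the authors' code: `x` stands in for eyes-off-road time, `m` for a trust subscale score, and `y` for the minimum time to collision.

```python
import numpy as np

def moderation_ols(x, m, y):
    """Fit y = b0 + b1*x + b2*m + b3*(x*m) by ordinary least squares.

    x : predictor (e.g. eyes-off-road time)
    m : moderator (e.g. self-reported trust score)
    y : outcome   (e.g. minimum time to collision)
    Returns [b0, b1, b2, b3]; a non-zero b3 indicates that the
    moderator changes the slope of y on x (a moderating effect).
    """
    X = np.column_stack([np.ones_like(x), x, m, x * m])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```

In practice the interaction coefficient would be accompanied by a significance test; the sketch only shows where the moderating effect lives in the model.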
Dizaji, Lida Ghaemi, Hu, Yaoping.  2021.  Building And Measuring Trust In Human-Machine Systems. 2021 IEEE International Conference on Autonomous Systems (ICAS). :1–5.
In human-machine systems (HMS), the trust humans place in machines is a complex concept that attracts increasing research effort. Herein, we reviewed recent studies on building and measuring trust in HMS. The review was based on one comprehensive model of trust, IMPACTS, which has seven features: intention, measurability, performance, adaptivity, communication, transparency, and security. The review found that, in the past five years, HMS fulfill the features of intention, measurability, communication, and transparency, and most HMS consider the feature of performance. However, HMS rarely address the feature of adaptivity and neglect the feature of security, owing to the use of stand-alone simulations. These findings indicate that future work considering the features of adaptivity and/or security is imperative to foster human trust in HMS.
2022-06-06
Matsushita, Haruka, Sato, Kaito, Sakura, Mamoru, Sawada, Kenji, Shin, Seiichi, Inoue, Masaki.  2020.  Rear-wheel steering control reflecting driver personality via Human-In-The-Loop System. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :356–362.
One typical autonomous driving system is a human-machine cooperative system that intervenes in the driver's operation. Autonomous driving needs to take driver individuality into account in addition to safety. This paper considers a human-machine cooperative system that balances safety with driver individuality, using a Human-In-The-Loop System (HITLS) for rear-wheel steering control. We assume that it is safe for the HITLS to follow the target side-slip angle and target angular velocity without conflicts between the controller and driver operations. We propose a HITLS using the primal-dual algorithm and an internal model control (IMC) type I-PD controller. In the HITLS, a signal expander delimits the human-selectable operating range, and the controller stably coordinates human operation and automated control within that range. The primal-dual algorithm realizes the driver and the signal expander. Our outcome is a rear-wheel steering system that converges to the target value while reflecting driver individuality.
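For readers unfamiliar with the I-PD structure mentioned above, the sketch below shows a generic discrete-time I-PD controller. It is a textbook form with illustrative gains, not the paper's IMC-tuned design: the integral term acts on the tracking error, while the proportional and derivative terms act on the measurement only, which avoids output kicks on setpoint changes.

```python
class IPDController:
    """Minimal discrete-time I-PD controller sketch (illustrative gains)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0   # integral of the error
        self.prev_y = None    # previous measurement, for the derivative

    def update(self, setpoint, y):
        # Integral term acts on the error...
        self.integral += (setpoint - y) * self.dt
        # ...while P and D act on the measurement only.
        dy = 0.0 if self.prev_y is None else (y - self.prev_y) / self.dt
        self.prev_y = y
        return self.ki * self.integral - self.kp * y - self.kd * dy
```

Driving a simple first-order plant (e.g. Euler-integrated `y += (-y + u) * dt`) with this controller converges to the setpoint without a proportional kick when the setpoint steps.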
Yeruva, Vijaya Kumari, Chandrashekar, Mayanka, Lee, Yugyung, Rydberg-Cox, Jeff, Blanton, Virginia, Oyler, Nathan A.  2020.  Interpretation of Sentiment Analysis with Human-in-the-Loop. 2020 IEEE International Conference on Big Data (Big Data). :3099–3108.
Human-in-the-Loop has been receiving special attention from the data science and machine learning community. It is essential to realize the advantages of human feedback and the pressing need for manual annotation to improve machine learning performance. Recent advancements in natural language processing (NLP) and machine learning have created unique challenges and opportunities for digital humanities research. In particular, there are ample opportunities for NLP and machine learning researchers to analyze data from literary texts and use these complex source texts to broaden our understanding of human sentiment using the human-in-the-loop approach. This paper presents our understanding of how human annotators differ from machine annotators in sentiment analysis tasks and how these differences can contribute to designing systems for the "human in the loop" sentiment analysis in complex, unstructured texts. We further explore the challenges and benefits of the human-machine collaboration for sentiment analysis using a case study in Greek tragedy and address some open questions about collaborative annotation for sentiments in literary texts. We focus primarily on (i) an analysis of the challenges in sentiment analysis tasks for humans and machines, and (ii) whether consistent annotation results are generated from multiple human annotators and multiple machine annotators. For human annotators, we have used a survey-based approach with about 60 college students. We have selected six popular sentiment analysis tools for machine annotators, including VADER, CoreNLP's sentiment annotator, TextBlob, LIME, Glove+LSTM, and RoBERTa. We have conducted a qualitative and quantitative evaluation with the human-in-the-loop approach and confirmed our observations on sentiment tasks using the Greek tragedy case study.
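Annotation consistency across multiple human or machine annotators, as examined in this study, is commonly quantified with chance-corrected agreement such as Cohen's kappa. A minimal sketch (not the authors' evaluation code), with illustrative sentiment labels:

```python
from collections import Counter
from itertools import combinations

def cohen_kappa(a, b):
    """Cohen's kappa between two annotators' label sequences."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    labels = set(a) | set(b)
    p_e = sum((ca[l] / n) * (cb[l] / n) for l in labels)  # chance agreement
    return 1.0 if p_e == 1.0 else (p_o - p_e) / (1 - p_e)

def mean_pairwise_kappa(annotations):
    """Average kappa over all annotator pairs, as a rough consistency score."""
    pairs = list(combinations(annotations, 2))
    return sum(cohen_kappa(a, b) for a, b in pairs) / len(pairs)
```

Averaging pairwise kappa over the roughly 60 student annotators (or over the six machine annotators) would give a single consistency score per group for comparison.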
2022-05-23
Zhang, Zuyao, Gao, Jing.  2021.  Design of Immersive Interactive Experience of Intangible Cultural Heritage based on Flow Theory. 2021 13th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC). :146–149.
At present, the limitation of intangible cultural heritage experiences lies in the lack of long-term immersive cultural experience for users. To address this problem, this study divides the process from the perspective of Freudian psychology and combines theoretical research on intangible cultural heritage and flow experience to establish a preliminary research direction. Then, based on existing interactive experience cases of intangible cultural heritage, a method model for the immersive interactive experience of intangible cultural heritage based on flow theory is summarized through user interviews. Finally, the model is validated against the collected data. In addition, this study offers important insights into the differences between first-time and experienced users, and proposes specific guiding suggestions for future immersive interactive experience design of intangible cultural heritage based on flow theory.
2020-02-10
Palacio, David N., McCrystal, Daniel, Moran, Kevin, Bernal-Cárdenas, Carlos, Poshyvanyk, Denys, Shenefiel, Chris.  2019.  Learning to Identify Security-Related Issues Using Convolutional Neural Networks. 2019 IEEE International Conference on Software Maintenance and Evolution (ICSME). :140–144.
Software security is becoming a high priority for both large companies and start-ups alike due to the increasing potential for harm that vulnerabilities and breaches carry with them. However, attaining robust security assurance while delivering features requires a precarious balancing act in the context of agile development practices. One path forward to help aid development teams in securing their software products is through the design and development of security-focused automation. Ergo, we present a novel approach, called SecureReqNet, for automatically identifying whether issues in software issue tracking systems describe security-related content. Our approach consists of a two-phase neural net architecture that operates purely on the natural language descriptions of issues. The first phase of our approach learns high dimensional word embeddings from hundreds of thousands of vulnerability descriptions listed in the CVE database and issue descriptions extracted from open source projects. The second phase then utilizes the semantic ontology represented by these embeddings to train a convolutional neural network capable of predicting whether a given issue is security-related. We evaluated SecureReqNet by applying it to identify security-related issues from a dataset of thousands of issues mined from popular projects on GitLab and GitHub. In addition, we also applied our approach to identify security-related requirements from a commercial software project developed by a major telecommunication company. Our preliminary results are encouraging, with SecureReqNet achieving an accuracy of 96% on open source issues and 71.6% on industrial requirements.
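The second phase described above, a convolutional classifier over word embeddings, can be sketched as a single forward pass. The snippet below is a toy illustration with random weights and an invented six-word vocabulary, not SecureReqNet itself (which learns its embeddings from CVE and issue-tracker text and trains the filters end-to-end):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy vocabulary and embeddings (random here, learned in the real system).
vocab = {"buffer": 0, "overflow": 1, "login": 2,
         "button": 3, "misaligned": 4, "auth": 5}
EMB_DIM, N_FILTERS, KERNEL = 8, 4, 2
embeddings = rng.normal(size=(len(vocab), EMB_DIM))
filters = rng.normal(size=(N_FILTERS, KERNEL, EMB_DIM))
w_out = rng.normal(size=N_FILTERS)

def predict(tokens):
    """Forward pass: embed -> 1-D convolution -> max-pool -> sigmoid."""
    x = embeddings[[vocab[t] for t in tokens]]            # (len, EMB_DIM)
    conv = np.array([
        [np.sum(x[i:i + KERNEL] * f)                      # slide each filter
         for i in range(len(tokens) - KERNEL + 1)]
        for f in filters
    ])                                                    # (N_FILTERS, len-KERNEL+1)
    pooled = conv.max(axis=1)                             # strongest response per filter
    return 1.0 / (1.0 + np.exp(-(pooled @ w_out)))        # P(security-related)
```

With trained weights, an issue description like "buffer overflow in auth" would score high while "login button misaligned" would score low; with the random weights here the output is only a well-formed probability.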
2018-01-10
Bhattacharjee, S. Das, Talukder, A., Al-Shaer, E., Doshi, P..  2017.  Prioritized active learning for malicious URL detection using weighted text-based features. 2017 IEEE International Conference on Intelligence and Security Informatics (ISI). :107–112.

Data analytics is increasingly used in cyber-security problems and has proved useful where data volume and heterogeneity make manual assessment by security experts cumbersome. In practical cyber-security scenarios involving data-driven analytics, obtaining annotated data (i.e., ground-truth labels) is a challenging and well-known limiting factor for many supervised security analytics tasks. Large portions of these datasets typically remain unlabelled, as annotation is an extensively manual task requiring substantial expert intervention. In this paper, we propose an effective active learning approach that can efficiently address this limitation in a practical cyber-security problem of phishing categorization, whereby we use a human-machine collaborative approach to design a semi-supervised solution. An initial classifier is learnt on a small amount of annotated data and then gradually updated, in an iterative manner, by shortlisting from the large pool of unlabelled data only those samples most likely to improve classifier performance quickly. Prioritized active learning shows significant promise for faster convergence of classification performance in a batch learning framework, thus requiring even less human annotation effort. A useful feature-weight update technique combined with active learning shows promising classification performance for categorizing phishing/malicious URLs without requiring a large amount of annotated training samples during training. In experiments with several collections of the PhishMonger Targeted Brand dataset, the proposed method improves over the baseline by as much as 12%.
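The iterative loop described above, training on a small labelled seed and repeatedly querying the unlabelled pool, is the classic active-learning pattern. A minimal sketch under simplifying assumptions (a tiny logistic-regression base learner, plain uncertainty sampling rather than the authors' prioritization, and an `oracle` callback standing in for the human annotator):

```python
import numpy as np

def train_logistic(X, y, lr=0.1, steps=500):
    """Tiny logistic-regression trainer; stands in for the base classifier."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)      # gradient of the log-loss
    return w

def active_learning_loop(X_lab, y_lab, X_pool, oracle, rounds=5, batch=5):
    """Each round: retrain, query the most uncertain pool samples
    (probability closest to 0.5), and add their oracle labels."""
    pool_idx = np.arange(len(X_pool))
    for _ in range(rounds):
        w = train_logistic(X_lab, y_lab)
        p = 1 / (1 + np.exp(-X_pool[pool_idx] @ w))
        pick = pool_idx[np.argsort(np.abs(p - 0.5))[:batch]]
        X_lab = np.vstack([X_lab, X_pool[pick]])
        y_lab = np.concatenate([y_lab, oracle(pick)])   # human annotation step
        pool_idx = np.setdiff1d(pool_idx, pick)
    return train_logistic(X_lab, y_lab)
```

On separable data, a few rounds of such querying reach high accuracy while labelling only a small fraction of the pool, which is the effort saving the abstract describes.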