Biblio

Filters: Author is Huang, Chao
2022-06-09
Luo, Ruijiao, Huang, Chao, Peng, Yuntao, Song, Boyi, Liu, Rui.  2021.  Repairing Human Trust by Promptly Correcting Robot Mistakes with An Attention Transfer Model. 2021 IEEE 17th International Conference on Automation Science and Engineering (CASE). :1928–1933.

In human-robot collaboration (HRC), human trust in a robot is the human's expectation that the robot will execute tasks with the desired performance. Higher trust increases an operator's willingness to assign tasks and share plans, and it reduces interruptions during robot execution, facilitating human-robot integration both physically and mentally. However, real-world disturbances mean robots inevitably make mistakes, which decreases human trust and degrades collaboration. Trust is fragile: it is easily lost when a robot shows itself incapable of executing a task, which makes trust maintenance challenging. To maintain human trust, this research develops a trust repair framework based on a human-to-robot attention transfer (H2R-AT) model and a user trust study. The rationale of the framework is that promptly correcting a mistake restores human trust. With H2R-AT, a robot localizes the mistake from the human's verbal concern and corrects it promptly, avoiding task failure at an early stage and ultimately improving human trust. The user trust study measures trust before and after the behavior correction to quantify trust loss and recovery. Robot experiments covering four typical mistakes (wrong action, wrong region, wrong pose, and wrong spatial relation) validated the accuracy of H2R-AT in correcting robot behavior, and a user study with 252 participants evaluated the change in trust levels before and after correction. The effectiveness of trust repair was evaluated by the mistake correction accuracy and the trust improvement.
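The core loop the abstract describes is: hear a verbal concern, localize which of the four mistake types it refers to, and correct the behavior before the task fails. The sketch below illustrates that loop only; the keyword matcher merely stands in for the H2R-AT attention model, and the robot API and trust update are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch of the prompt-correction loop; the keyword matcher stands in
# for H2R-AT, and robot.correct and the trust update are assumptions.
from typing import Optional

MISTAKE_TYPES = ("wrong action", "wrong region", "wrong pose", "wrong spatial relation")

def localize_mistake(verbal_concern: str) -> Optional[str]:
    """Map a human verbal concern to one of the four mistake types."""
    cues = {
        "wrong action": ("stop", "not that", "wrong move"),
        "wrong region": ("not there", "other side", "wrong area"),
        "wrong pose": ("tilted", "upside down", "wrong angle"),
        "wrong spatial relation": ("behind", "on top of", "next to"),
    }
    concern = verbal_concern.lower()
    for mistake, keywords in cues.items():
        if any(k in concern for k in keywords):
            return mistake
    return None

def repair_trust(robot, verbal_concern: str, trust_before: float) -> float:
    """Promptly correct the localized mistake, then return an updated trust score.
    The fixed +0.1 trust gain is a placeholder, not a value from the paper."""
    mistake = localize_mistake(verbal_concern)
    if mistake is None:
        return trust_before               # nothing to correct
    robot.correct(mistake)                # hypothetical robot API
    return min(1.0, trust_before + 0.1)   # prompt correction restores some trust
```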

Pang, Yijiang, Huang, Chao, Liu, Rui.  2021.  Synthesized Trust Learning from Limited Human Feedback for Human-Load-Reduced Multi-Robot Deployments. 2021 30th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :778–783.
Human multi-robot system (MRS) collaboration shows potential in a wide range of application scenarios because it combines human cognitive skills with the powerful capabilities a robot team gains from its multi-member structure. However, limited human cognitive capacity means a human cannot simultaneously monitor multiple robots and identify the abnormal ones, which largely limits the efficiency of human-MRS collaboration. There is an urgent need to proactively reduce unnecessary human engagement and, in turn, human cognitive load. Human trust in human-MRS collaboration reflects human expectations of robot performance. Based on trust estimation, work between the human and the MRS can be reallocated so that the MRS self-monitors and requests human guidance only in critical situations. Motivated by this, a novel Synthesized Trust Learning (STL) method was developed to model human trust in the collaboration. STL captures two aspects of human trust, trust level and trust preference, and accelerates convergence by integrating active learning to reduce human workload. To validate the method, victim-search tasks in a city rescue scenario were designed in an open-world simulation environment, and a user study with 10 volunteers was conducted to collect real human trust feedback. The results showed that by making maximal use of human feedback, STL achieved higher trust-modeling accuracy from only a small amount of feedback, reducing the human interventions needed to model trust accurately and therefore reducing human cognitive load in the collaboration.
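A central idea in the abstract is using active learning so the MRS asks for trust feedback only when it needs it. The sketch below illustrates that query-on-uncertainty pattern under simplifying assumptions (a scalar trust estimate with a Kalman-style update); it is not the authors' STL model, and ask_human is a hypothetical callback.

```python
# Illustrative query-on-uncertainty trust estimation; the scalar Kalman-style
# update and all names are assumptions, not the STL model itself.
from dataclasses import dataclass

@dataclass
class TrustEstimate:
    mean: float = 0.5       # estimated trust level in [0, 1]
    variance: float = 0.25  # model uncertainty about that estimate

def fuse_feedback(est: TrustEstimate, rating: float, obs_var: float = 0.05) -> TrustEstimate:
    """Blend a new human trust rating into the current estimate."""
    gain = est.variance / (est.variance + obs_var)
    return TrustEstimate(mean=est.mean + gain * (rating - est.mean),
                         variance=(1.0 - gain) * est.variance)

def maybe_query_human(est: TrustEstimate, ask_human, threshold: float = 0.1) -> TrustEstimate:
    """Active-learning step: interrupt the operator only when uncertainty is high."""
    if est.variance > threshold:
        return fuse_feedback(est, ask_human())  # ask_human() returns a rating in [0, 1]
    return est
```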
2022-02-03
Huang, Chao, Luo, Wenhao, Liu, Rui.  2021.  Meta Preference Learning for Fast User Adaptation in Human-Supervisory Multi-Robot Deployments. 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). :5851–5856.
As multi-robot systems (MRS) are widely used in tasks such as natural disaster response and public security, there is a strong expectation that an MRS should be ubiquitous and easily operable by a general user without heavy training. However, humans differ in how they prefer to balance task performance against safety, which imposes different requirements on MRS control. Failing to comply with these preferences makes an MRS difficult to operate and decreases people's willingness to use it. Therefore, to improve both social acceptance and performance, there is an urgent need to adjust MRS behaviors to human preferences before humans are forced to intervene with corrections, which would increase their cognitive load. In this paper, a novel Meta Preference Learning (MPL) method was developed to enable an MRS to adapt quickly to user preferences. Based on a meta-learning mechanism, MPL quickly assesses human preferences from limited instructions; a neural-network-based preference model then adjusts MRS behaviors for preference adaptation. To validate the method, a task scenario in which an MRS searches for victims at an earthquake disaster site was designed; 20 human users were involved to identify their preferences as "aggressive", "medium", or "reserved"; and, based on user guidance and domain knowledge, about 20,000 preferences were simulated to cover different operations related to task quality, task progress, and robot safety. The effectiveness of MPL in preference adaptation was validated by the reduced duration and frequency of human interventions.
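The adaptation step the abstract outlines, quickly specializing a meta-learned preference model from a handful of user instructions, can be illustrated by a few gradient steps on user-labelled examples. The sketch below is a simplified stand-in using a linear model and squared-error loss; it is not the authors' MPL network, and all names and values are hypothetical.

```python
# Illustrative few-shot preference adaptation: fine-tune meta-learned weights
# on a handful of user-rated behaviors. The linear model, squared-error loss,
# and all names are assumptions, not the authors' MPL network.
import numpy as np

def adapt_to_user(meta_weights: np.ndarray,
                  features: np.ndarray,   # (n, d) behavior features, e.g. speed, clearance
                  ratings: np.ndarray,    # (n,) user preference scores for those behaviors
                  lr: float = 0.1,
                  steps: int = 5) -> np.ndarray:
    """Inner-loop update: a few gradient steps on limited user feedback."""
    w = meta_weights.copy()
    for _ in range(steps):
        residual = features @ w - ratings
        grad = features.T @ residual / len(ratings)   # gradient of mean squared error
        w -= lr * grad
    return w

# Usage sketch: a few rated behaviors specialize the meta model to one user.
# user_w = adapt_to_user(np.zeros(3), np.random.rand(5, 3), np.random.rand(5))
```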
2018-05-11
Zhang, Daniel Yue, Zheng, Chao, Wang, Dong, Thain, Doug, Mu, Xin, Madey, Greg, Huang, Chao.  2017.  Towards Scalable and Dynamic Social Sensing Using a Distributed Computing Framework. 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS). :966–976.
Huang, Chao, Wang, Dong, Zhu, Shenglong.  2017.  Where Are You From: Home Location Profiling of Crowd Sensors from Noisy and Sparse Crowdsourcing Data. IEEE INFOCOM 2017 - IEEE Conference on Computer Communications. :1–9.
Zhang, Daniel Yue, Han, Rungang, Wang, Dong, Huang, Chao.  2016.  On Robust Truth Discovery in Sparse Social Media Sensing. 2016 IEEE International Conference on Big Data (Big Data). :1076–1081.
Huang, Chao, Wang, Dong.  2016.  Topic-Aware Social Sensing with Arbitrary Source Dependency Graphs. Proceedings of the 15th International Conference on Information Processing in Sensor Networks (IPSN). :7.