Biblio

Filters: Author is Hou, Ming
2022-06-09
Hou, Ming.  2021.  Enabling Trust in Autonomous Human-Machine Teaming. 2021 IEEE International Conference on Autonomous Systems (ICAS). :1–1.
The advancement of AI enables the evolution of machines from relatively simple automation to completely autonomous systems that augment human capabilities with improved quality and productivity in work and life. The singularity is near! However, humans are still vulnerable. The COVID-19 pandemic reminds us of our limited knowledge about nature. The recent Boeing 737 Max accidents ring the alarm again about the potential risks of human-autonomy symbiosis technologies. A key challenge of safe and effective human-autonomy teaming is enabling “trust” within the human-machine team. It is even more challenging when we face insufficient data, incomplete information, indeterministic conditions, and inexhaustive solutions for uncertain actions. This creates an imperative need for appropriate design guidance and scientific methodologies for developing safety-critical autonomous systems and AI functions. The question is how to build and maintain a safe, effective, and trusted partnership between humans and autonomous systems. This talk discusses a context-based and interaction-centred design (ICD) approach for developing a safe and collaborative partnership between humans and technology by optimizing the interaction between human intelligence and AI. An associated trust model, IMPACTS (Intention, Measurability, Performance, Adaptivity, Communications, Transparency, and Security), will also be introduced to enable practitioners to foster an assured and calibrated trust relationship between humans and their partner autonomous systems. A real-world example of human-autonomy teaming in a military context will illustrate the utility and effectiveness of these trust enablers.
2019-12-16
Hou, Ming, Li, Dequan, Wu, Xiongjun, Shen, Xiuyu.  2019.  Differential Privacy of Online Distributed Optimization under Adversarial Nodes. 2019 Chinese Control Conference (CCC). :2172–2177.
Nowadays, many applications involve big data, and big-data analysis methods appear in many fields. As a preliminary attempt to address the challenge of big data analysis, this paper presents a distributed online learning algorithm based on differential privacy. Since online learning routinely processes sensitive data, we introduce differential privacy into distributed online learning algorithms, with the aim of preserving data privacy during online learning and preventing adversarial nodes from inferring sensitive information. In particular, for different adversary models, we consider different graph topologies that tolerate either a limited number of adversaries in the neighborhood of each regular node or a limited number of adversaries globally.
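To make the general idea concrete, here is a minimal sketch of one differentially private update at a single node in a distributed online setting: the node averages its state with its neighbors' states (a consensus step), then takes a gradient step perturbed by Laplace noise calibrated to a sensitivity/epsilon scale. This is an illustrative assumption about how such an algorithm can be structured, not the paper's actual method; the function name, parameters, and the choice to perturb the gradient are all hypothetical.

```python
import random


def dp_online_gradient_step(x, grad, neighbor_states,
                            step_size=0.1, epsilon=1.0, sensitivity=1.0):
    """One hypothetical differentially private update for a single node.

    x               -- this node's current scalar state
    grad            -- callable returning the local loss gradient at a point
    neighbor_states -- states received from neighboring nodes
    epsilon         -- privacy budget for this step (smaller = more private)
    sensitivity     -- assumed bound on how much one data point moves the gradient
    """
    # Consensus step: average own state with the neighbors' states.
    avg = sum(neighbor_states + [x]) / (len(neighbor_states) + 1)

    # Laplace(0, sensitivity/epsilon) noise, generated as a random sign
    # times an exponential variate with that scale.
    scale = sensitivity / epsilon
    noise = random.choice([-1.0, 1.0]) * random.expovariate(1.0 / scale)

    # Gradient descent on the consensus state, with the gradient perturbed
    # so that adversarial neighbors only ever see noisy information.
    return avg - step_size * (grad(avg) + noise)
```

With a very large epsilon the noise vanishes and the update reduces to plain consensus-plus-gradient-descent; shrinking epsilon trades accuracy for stronger privacy against inference by adversarial nodes.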