Biblio

Filters: Keyword is autonomous agents
2023-02-17
Biström, Dennis, Westerlund, Magnus, Duncan, Bob, Jaatun, Martin Gilje.  2022.  Privacy and security challenges for autonomous agents: A study of two social humanoid service robots. 2022 IEEE International Conference on Cloud Computing Technology and Science (CloudCom). :230–237.
The development of autonomous agents has gained renewed interest, largely due to the recent successes of machine learning. Social robots can be considered a special class of autonomous agents that are often intended to be integrated into sensitive environments. We present experiences from our work with two specific humanoid social service robots, and highlight how eschewing privacy- and security-by-design principles leads to implementations with serious privacy and security flaws. The paper introduces the robots as platforms and their associated features, ecosystems and cloud platforms that are required for certain use cases or tasks. The paper lays out design aims for privacy and security, and then studies the implementations from two different manufacturers in this light. The results show a worrisome lack of design focus on handling privacy and security. The paper does not aim to cover all the security flaws and possible mitigations, but it does look more closely at the use of the WebSocket protocol and its challenges when used for operational control. The conclusions of the paper provide insights into how manufacturers can rectify the discovered security flaws and present key policies, such as accountability, for implementing technical features of autonomous agents.
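The abstract does not spell out the WebSocket weaknesses, but a recurring one when the protocol carries operational control is an endpoint that executes commands from any client that connects. The sketch below is purely illustrative and not taken from the paper: it assumes the third-party Python websockets package, and EXPECTED_TOKEN and control_handler are hypothetical names. It shows a control handler that at least demands a shared token before accepting commands.

# Illustrative sketch (not from the paper): a WebSocket control endpoint
# that rejects commands unless the client first presents a shared token.
# Assumes the third-party `websockets` package (pip install websockets);
# note the handler signature varies across `websockets` versions
# (older versions pass a second `path` argument).
import asyncio
import websockets

EXPECTED_TOKEN = "replace-with-a-real-secret"  # hypothetical shared secret

async def control_handler(ws):
    # The first message must authenticate; otherwise drop the connection.
    token = await ws.recv()
    if token != EXPECTED_TOKEN:
        await ws.close(code=4001, reason="unauthorized")
        return
    async for command in ws:
        # Only authenticated clients reach this point; a real robot
        # controller would also validate and rate-limit each command.
        print(f"executing command: {command}")

async def main():
    # In practice the connection should also use TLS (wss://);
    # plain ws:// is shown here only for brevity.
    async with websockets.serve(control_handler, "localhost", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())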
ISSN: 2330-2186
2022-06-09
Dekarske, Jason, Joshi, Sanjay S..  2021.  Human Trust of Autonomous Agent Varies With Strategy and Capability in Collaborative Grid Search Task. 2021 IEEE 2nd International Conference on Human-Machine Systems (ICHMS). :1–6.
Trust is an important emerging area of study in human-robot cooperation. Many studies have begun to look at robot (agent) capability as a predictor of human trust in the robot. However, the assumption that agent capability is the sole predictor of human trust could underestimate the complexity of the problem. This study investigates the effects of agent strategy and agent capability in a visual search task. Fourteen subjects were recruited to partake in a web-based grid search task. They were each paired with a series of autonomous agents to search an on-screen grid to find a number of outlier objects as quickly as possible. Both the human and the agent searched the grid concurrently, and the human was able to see the movement of the agent. On each trial, a different autonomous agent, with its assigned capability, used one of three search strategies to assist its human counterpart. After each trial, the autonomous agent reported the number of outliers it found, and the human subject was asked to determine the total number of outliers in the area. Some autonomous agents reported only a fraction of the outliers they encountered, thus encoding a varying level of agent capability. Human subjects then evaluated statements related to the behavior, reliability, and trust of the agent. The results showed increased measures of trust and reliability with increasing capability. Additionally, the most legible search strategies received the highest average ratings in a measure of familiarity. Remarkably, given no prior information about the capabilities or strategies that they would see, subjects were able to determine consistent trustworthiness of the agent. Furthermore, both the capability and the strategy of the agent had statistically significant effects on the human’s trust in the agent.
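The capability manipulation lends itself to a compact illustration. The following sketch is hypothetical: the grid size, outlier rate, and report_rate parameter are all invented for illustration, not taken from the study. It simulates an agent that finds outliers but reports only a fraction of them, the same coding of capability the abstract describes.

# Hypothetical sketch of the capability manipulation described above:
# an agent sweeps the grid, detects outliers, and under-reports them
# according to its assigned capability (report_rate).
import random

def simulate_agent(grid, report_rate):
    """Sweep the grid, collect outliers, and report only a fraction."""
    found = [cell for row in grid for cell in row if cell == "outlier"]
    return int(len(found) * report_rate)  # capability coding

# 10x10 grid with roughly 10% outliers (illustrative numbers)
grid = [["outlier" if random.random() < 0.1 else "empty" for _ in range(10)]
        for _ in range(10)]

for rate in (0.25, 0.5, 1.0):  # low-, mid-, and high-capability agents
    print(f"capability {rate:.2f}: agent reports "
          f"{simulate_agent(grid, rate)} outliers")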
2021-03-29
Ouiazzane, S., Addou, M., Barramou, F..  2020.  Toward a Network Intrusion Detection System for Geographic Data. 2020 IEEE International conference of Moroccan Geomatics (Morgeo). :1–7.
The objective of this paper is to propose a model of a distributed intrusion detection system based on the multi-agent paradigm and the Hadoop Distributed File System (HDFS). Multi-agent systems (MAS) are well suited to intrusion detection, as they can address the issue of geographic data security in terms of autonomy, distribution and performance. The proposed system is based on a set of autonomous agents that cooperate and collaborate with each other to effectively detect intrusions and suspicious activities that may impact geographic information systems. Our system allows the detection of known and unknown computer attacks without any human intervention (security experts), unlike traditional intrusion detection systems that rely on knowledge bases as the mechanism for detecting known attacks. The proposed model allows real-time detection of known and unknown attacks within large networks hosting geographic data.
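As a rough illustration of the cooperating-agents idea (not the authors' system; the agent names, threshold, and corroboration scheme below are assumptions), consider detection agents that score events locally and share suspicious ones with their peers:

# A minimal sketch of cooperating detection agents: each agent flags
# events that exceed its local anomaly threshold and broadcasts them
# so peers can corroborate. All names and values are illustrative.
from dataclasses import dataclass, field

@dataclass
class DetectionAgent:
    name: str
    threshold: float = 0.7
    alerts: list = field(default_factory=list)

    def observe(self, event, score, peers):
        # Flag locally anomalous events and share them with peers.
        if score >= self.threshold:
            self.alerts.append(event)
            for peer in peers:
                peer.corroborate(event, reporter=self.name)

    def corroborate(self, event, reporter):
        # A peer records alerts shared by other agents; a fuller system
        # would raise its own confidence for corroborated events.
        self.alerts.append((event, reporter))

a, b = DetectionAgent("edge-gw"), DetectionAgent("hdfs-node")
a.observe("repeated failed logins from 10.0.0.5", score=0.9, peers=[b])
print(b.alerts)  # [('repeated failed logins from 10.0.0.5', 'edge-gw')]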

2021-02-01
Hou, M..  2020.  IMPACT: A Trust Model for Human-Agent Teaming. 2020 IEEE International Conference on Human-Machine Systems (ICHMS). :1–4.
A trust model, IMPACT (Intention, Measurability, Predictability, Agility, Communication, and Transparency), has been conceptualized to build human trust in autonomous agents. These six critical characteristics must be exhibited by agents in order to gain and maintain the trust of their human partners toward an effective and collaborative team achieving common goals. The IMPACT model guided the design of an intelligent adaptive decision aid for dynamic target engagement processes in a military context. Positive feedback from subject matter experts who participated in a large-scale joint exercise controlling multiple unmanned vehicles indicated the effectiveness of the decision aid. It also demonstrated the utility of the IMPACT model as a set of design principles for building trusted human-agent teams.
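The model itself is conceptual, but one hedged way to make the six characteristics concrete is a profile that rates an agent design on each dimension. The 0-1 scale and the unweighted mean below are illustrative assumptions, not part of the paper:

# Illustrative only: a checklist-style scoring of the six IMPACT
# characteristics for an agent design. The scale and aggregation
# are assumptions made for this sketch.
from dataclasses import dataclass

@dataclass
class ImpactProfile:
    intention: float       # each dimension rated 0.0-1.0
    measurability: float
    predictability: float
    agility: float
    communication: float
    transparency: float

    def overall(self):
        dims = (self.intention, self.measurability, self.predictability,
                self.agility, self.communication, self.transparency)
        return sum(dims) / len(dims)  # unweighted mean as a placeholder

aid = ImpactProfile(0.8, 0.7, 0.9, 0.6, 0.8, 0.7)
print(f"IMPACT score: {aid.overall():.2f}")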
2020-02-10
Carneiro, Lucas R., Delgado, Carla A.D.M., da Silva, João C.P..  2019.  Social Analysis of Game Agents: How Trust and Reputation can Improve Player Experience. 2019 8th Brazilian Conference on Intelligent Systems (BRACIS). :485–490.
Video games normally use Artificial Intelligence techniques to improve Non-Player Character (NPC) behavior, creating a more realistic experience for their players. However, rational behavior in general does not consider social interactions between player and bots. Because of that, a new framework for NPCs was proposed, which uses a social bias to combine the default strategy of finding the best possible plays to win with an analysis that decides whether other players should be categorized as allies or foes. Trust and reputation models were used together to implement this behavior. In this paper we discuss an implementation of this framework inside the game Settlers of Catan. New NPC agents were created for this implementation. We also analyze the results obtained from simulations among agents and players to determine how the use of trust and reputation in NPCs can create a better gaming experience.
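A minimal sketch of the ally/foe mechanism the abstract describes might look as follows; the update rules, thresholds, and class names are illustrative assumptions rather than the paper's actual framework. The NPC keeps a per-player trust score, updates it after each interaction, and blends in reputation reported by other agents:

# Hedged sketch: an NPC that tracks trust per player and classifies
# players as allies or foes. All rules and thresholds are invented
# for illustration.
class SocialNPC:
    def __init__(self, ally_threshold=0.6, foe_threshold=0.4):
        self.trust = {}  # player -> score in [0, 1]
        self.ally_threshold = ally_threshold
        self.foe_threshold = foe_threshold

    def record_interaction(self, player, cooperative, lr=0.2):
        # Move trust toward 1 after cooperative moves, toward 0 otherwise.
        old = self.trust.get(player, 0.5)
        target = 1.0 if cooperative else 0.0
        self.trust[player] = old + lr * (target - old)

    def incorporate_reputation(self, player, reported, weight=0.3):
        # Blend in what other agents report about this player.
        old = self.trust.get(player, 0.5)
        self.trust[player] = (1 - weight) * old + weight * reported

    def stance(self, player):
        t = self.trust.get(player, 0.5)
        if t >= self.ally_threshold:
            return "ally"
        if t <= self.foe_threshold:
            return "foe"
        return "neutral"

npc = SocialNPC()
npc.record_interaction("player1", cooperative=True)   # trust: 0.5 -> 0.6
npc.incorporate_reputation("player1", reported=0.8)   # trust: 0.6 -> 0.66
print(npc.stance("player1"))  # "ally"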