Bibliography
Computer networks and the surging advancement of innovative information technology form a critical infrastructure for the network transactions of business entities. Information exchange and data access through such infrastructure are scrutinized by adversaries for vulnerabilities that lead to cyber-attacks. This paper presents an agent-based system model to conceptualize and extract the explicit and latent structure of complex enterprise systems, as well as the human interactions within them, in order to determine the common vulnerabilities of the entity. The model captures emergent behavior resulting from the interactions of multiple network agents, including the number of workstations; regular, administrator, and third-party users; external and internal attacks; defense mechanisms for the network setting; and many other parameters. A risk-based approach to modelling the cybersecurity of a business entity is used to derive the rate of attacks. A neural network model will generalize the type of attack based on network traffic features, allowing dynamic state changes. Rules of engagement that generate self-organizing behavior will be leveraged to appoint a defense mechanism suited to the attack state of the model. The effectiveness of the model will be depicted by a time-state chart showing the number of affected assets for the different types of attacks triggered by the entity's risk, and the time it takes to revert to the normal state. The model will also associate a relevant cost with each incident occurrence, motivating the enhancement of security solutions.
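As a rough illustration of the mechanism this abstract describes (a risk-driven attack rate, a defense rule that reverts compromised assets, a per-incident cost, and a time series of affected assets), a minimal agent-based sketch might look like the following. This is not the paper's model; the Workstation class, the recovery probability, and the cost constant are assumptions introduced here purely for illustration.

```python
# Minimal sketch (not the authors' implementation): a toy agent-based
# simulation in which a risk score drives the attack rate, compromised
# workstations are counted over time, and a defense rule reverts them
# to the normal state. All names and parameters are illustrative.
import random

NORMAL, COMPROMISED = "normal", "compromised"

class Workstation:
    def __init__(self):
        self.state = NORMAL

def step(assets, risk, recovery_prob=0.3):
    """One simulation tick: attacks arrive at a risk-driven rate,
    then the defense mechanism tries to restore affected assets."""
    for ws in assets:
        if ws.state == NORMAL and random.random() < risk:
            ws.state = COMPROMISED          # successful attack
        elif ws.state == COMPROMISED and random.random() < recovery_prob:
            ws.state = NORMAL               # defense reverts the asset

def run(n_assets=50, risk=0.05, ticks=100, cost_per_incident=1000.0):
    random.seed(1)
    assets = [Workstation() for _ in range(n_assets)]
    affected_over_time, total_cost = [], 0.0
    for _ in range(ticks):
        before = sum(ws.state == COMPROMISED for ws in assets)
        step(assets, risk)
        after = sum(ws.state == COMPROMISED for ws in assets)
        # net newly affected assets this tick accrue an incident cost
        total_cost += max(0, after - before) * cost_per_incident
        affected_over_time.append(after)
    return affected_over_time, total_cost

if __name__ == "__main__":
    series, cost = run()
    print("peak affected assets:", max(series), "| total incident cost:", cost)
```

The `affected_over_time` series plays the role of the time-state chart: it shows how many assets are affected at each tick and how quickly the system reverts to the normal state.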
The objective of this paper is to propose a model of a distributed intrusion detection system based on the multi-agent paradigm and the Hadoop Distributed File System (HDFS). Multi-agent systems (MAS) are well suited to intrusion detection because they address the issue of geographic data security in terms of autonomy, distribution, and performance. The proposed system is based on a set of autonomous agents that cooperate and collaborate with one another to effectively detect intrusions and suspicious activities that may impact geographic information systems. Our system allows the detection of known and unknown computer attacks without any human intervention (security experts), unlike traditional intrusion detection systems that rely on knowledge bases as the mechanism for detecting known attacks. The proposed model enables real-time detection of known and unknown attacks within large networks hosting geographic data.
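A minimal sketch of the cooperation pattern this abstract suggests, not the paper's actual system: each autonomous agent combines a signature check (known attacks) with an anomaly heuristic (unknown attacks) and shares its alerts with peer agents. The signatures, threshold, and event format below are all assumptions.

```python
# Minimal sketch, not the paper's system: autonomous agents that each
# combine a signature check (known attacks) with a simple anomaly
# heuristic (unknown attacks) and share their alerts with peers.
# Signatures, thresholds, and the event format are assumptions.
from dataclasses import dataclass, field

KNOWN_SIGNATURES = {"sql_injection", "port_scan"}  # illustrative only

@dataclass
class DetectionAgent:
    name: str
    anomaly_threshold: float = 0.8
    peers: list = field(default_factory=list)
    alerts: list = field(default_factory=list)

    def inspect(self, event):
        """Flag an event via signatures (known) or anomaly score (unknown),
        then broadcast the alert to cooperating agents."""
        if event["signature"] in KNOWN_SIGNATURES:
            self._raise(f"known attack: {event['signature']}")
        elif event["anomaly_score"] >= self.anomaly_threshold:
            self._raise(f"suspected unknown attack (score={event['anomaly_score']})")

    def _raise(self, message):
        self.alerts.append(message)
        for peer in self.peers:            # cooperation: share the alert
            peer.alerts.append(f"from {self.name}: {message}")

a, b = DetectionAgent("node-a"), DetectionAgent("node-b")
a.peers.append(b)
a.inspect({"signature": "port_scan", "anomaly_score": 0.1})   # known attack
a.inspect({"signature": "unseen", "anomaly_score": 0.93})     # unknown attack
print(b.alerts)  # node-b receives both alerts from node-a
```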
Human behaviors are often prohibited or permitted by social norms. Therefore, if autonomous agents are to interact with humans, they also need to reason about various legal rules and social and ethical norms, so that they will be trusted and accepted by humans. Inverse Reinforcement Learning (IRL) can be used by autonomous agents to learn norm-compliant behavior from expert demonstrations. However, norms are context-sensitive, i.e., different norms are activated in different contexts. For example, the privacy norm is activated for a domestic robot entering a bathroom where a person may be present, whereas it is not activated when the robot enters the kitchen. Representing the various contexts in the state space of the robot, as well as obtaining expert demonstrations under all possible tasks and contexts, is extremely challenging. Inspired by recent work on Modularized Normative MDPs (MNMDPs) and early work on context-sensitive RL, we propose a new IRL framework, Context-Sensitive Norm IRL (CNIRL). CNIRL treats states and contexts separately and assumes that the expert determines the priority of every possible norm in the environment, where each norm is associated with a distinct reward function. The agent chooses its actions to maximize its cumulative reward. We present the CNIRL model and show that its computational complexity scales in the number of norms. We also show, via two experimental scenarios, that CNIRL can handle problems with changing context spaces.
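The reward structure implied by this abstract can be sketched as follows. This is a one-step illustration under assumed names, not the CNIRL algorithm itself: each norm carries its own reward function and an expert-assigned priority, contexts activate subsets of norms, and the agent's effective reward adds the active, priority-weighted norm rewards to the task reward.

```python
# Minimal sketch of the reward structure suggested by the abstract, not
# the CNIRL algorithm itself: each norm has its own reward function and
# a priority, norms are activated by context, and the effective reward
# is the task reward plus the active, priority-weighted norm rewards.
# The norms, priorities, and activation map below are invented examples.

def privacy_reward(state, action):
    return -10.0 if action == "enter" and state == "occupied" else 0.0

NORMS = {"privacy": (privacy_reward, 1.0)}               # name -> (reward_fn, priority)
ACTIVE_IN = {"bathroom": {"privacy"}, "kitchen": set()}  # context -> active norms

def effective_reward(state, context, action, task_reward):
    reward = task_reward(state, action)
    for name in ACTIVE_IN.get(context, set()):
        fn, priority = NORMS[name]
        reward += priority * fn(state, action)           # norm only counts when active
    return reward

def greedy_action(state, context, actions, task_reward):
    """One-step greedy stand-in for 'maximize cumulative rewards'."""
    return max(actions, key=lambda a: effective_reward(state, context, a, task_reward))

task = lambda s, a: 1.0 if a == "enter" else 0.0         # robot is rewarded for entering
print(greedy_action("occupied", "bathroom", ["enter", "wait"], task))  # -> wait
print(greedy_action("occupied", "kitchen", ["enter", "wait"], task))   # -> enter
```

The two printed lines reproduce the abstract's bathroom/kitchen example: the same task reward yields different behavior because the privacy norm is active only in the bathroom context.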
The article is devoted to the analysis of the use of blockchain technology for the self-organization of network communities. Network communities are characterized by the key role of trust in personal interactions, the need for repeated interactions, strong and weak ties within the network, and social learning as the mechanism of self-organization. In network communities, therefore, reputation is the central component of social action, of the assessment of situations, and of the formation of expectations. The current proliferation of virtual network communities requires the development of an appropriate technical infrastructure in the form of reputation systems: programs that compute the reputation of network members and organize their cooperation and interaction. Traditional reputation systems have vulnerabilities in the areas of information security and the prevention of abusive agent behavior. Overcoming these restrictions is possible through the integration of reputation systems with blockchain technology, which increases the transparency of the reputation assessment system and prevents attempts to manipulate it, as well as social engineering. The most promising direction is the use of blockchain oracles to provide communication between the algorithms of a blockchain-based reputation system and the external information environment. The popularization of blockchain technology and its implementation in various spheres of social management, production control, and economic exchange raises the problem of how digital technologies are used in political processes and how they shape digital authoritarianism, digital democracy, and digital anarchism. The paper emphasizes that blockchain technology and reputation systems can equally serve as resources of government control, as tools of democratization and public accountability to civil society, or even as practices for avoiding government. It is therefore important to take into account the problems of political institutionalization, path dependence, and the creation of differentiated incentives, as well as the technological aspects.
How does information regarding an adversary's intentions affect optimal system design? This paper addresses this question in the context of graphical coordination games, where an adversary can indirectly influence the behavior of agents by modifying their payoffs. We study a situation in which a system operator must select a graph topology in anticipation of the action of an unknown adversary. The designer can limit her worst-case losses by playing a security strategy, effectively planning for an adversary who intends maximum harm. However, fine-grained information regarding the adversary's intent may help the system operator to fine-tune the defenses and obtain better system performance. Using a simple model of adversarial behavior, this paper asks how much a system operator can gain by fine-tuning a defense to a known adversarial intent. We find that if the adversary is weak, a security strategy is approximately optimal for any adversary type; for moderately strong adversaries, however, security strategies are far from optimal.
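The design question can be illustrated with a toy maximin computation; this is not the paper's model, and the topologies, adversary types, and welfare numbers below are invented (chosen to echo the abstract's qualitative finding).

```python
# Toy sketch of the design question, not the paper's model: pick the graph
# topology whose worst-case welfare over adversary types is largest
# (the security strategy), then compare it with the best topology for a
# known adversary type. Topologies, types, and the welfare table are
# invented for illustration.

# welfare[topology][adversary_type] = system performance (illustrative numbers)
WELFARE = {
    "ring":     {"weak": 9.8, "moderate": 6.0, "strong": 2.0},
    "star":     {"weak": 10.0, "moderate": 3.0, "strong": 1.0},
    "complete": {"weak": 9.9, "moderate": 4.0, "strong": 2.5},
}

def security_strategy():
    """Maximin choice: plan for the adversary intending maximum harm."""
    return max(WELFARE, key=lambda g: min(WELFARE[g].values()))

def best_response(adversary_type):
    """Fine-tuned choice when the adversary's intent is known."""
    return max(WELFARE, key=lambda g: WELFARE[g][adversary_type])

secure = security_strategy()
for adv in ("weak", "moderate", "strong"):
    tuned = best_response(adv)
    gap = WELFARE[tuned][adv] - WELFARE[secure][adv]
    print(f"{adv}: security={secure} tuned={tuned} gain from intent info={gap}")
```

With these contrived numbers, knowing the intent of a weak adversary gains almost nothing over the security strategy, while for the moderately strong adversary the fine-tuned topology is clearly better, mirroring the abstract's conclusion.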
With the recent advances in computing, artificial intelligence (AI) is quickly becoming a key component of advanced applications. In one application in particular, AI has played a major role: revolutionizing traditional healthcare assistance. Using embodied interactive agents, or interactive robots, in healthcare scenarios has emerged as an innovative way to interact with patients. As an essential factor in interpersonal interaction, trust plays a crucial role in establishing and maintaining a patient-agent relationship. In this paper, we discuss a healthcare study in which we examine aspects of trust between humans and interactive robots during a therapy intervention in which the agent provides corrective feedback. A total of twenty participants were randomly assigned to receive corrective feedback from either a robotic agent or a human agent. Survey results indicate that trust in a therapy intervention coupled with a robotic agent is comparable to trust in an intervention coupled with a human agent. Results also show a trend suggesting that the agent condition has a medium-sized effect on trust. In addition, we found that participants in the robot-therapist condition were 3.5 times more likely to have trust involved in their decisions than participants in the human-therapist condition. These results indicate that deploying interactive robot agents in healthcare scenarios has the potential to maintain quality of health for future generations.