Biblio
This paper explores the possibility of exposing the location of an active eavesdropper in a commodity passive RFID system. Such an active eavesdropper can activate commodity passive RFID tags to carry out data eavesdropping and jamming. In this paper, we show that these active eavesdroppers can significantly undermine the data security and feasibility of commodity passive RFID systems. We believe that the best way to defeat an active eavesdropper in such a system is to expose its location and remove it. To do so, we need to localize the active eavesdropper. However, we cannot directly extract the channel from the active eavesdropper, since its transmission is unknown and is interfered with by the tag's backscattered signals. We therefore propose an approach that mitigates the tag's interference and cancels out the active eavesdropper's transmission to obtain subtraction-and-division features, which are used as the input of a machine learning model to predict the location of the active eavesdropper. Our preliminary results show an average accuracy of 96% for predicting the active eavesdropper's position across four grids of the surveillance plane.
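As a rough illustration of the pipeline sketched in this abstract, the Python snippet below combines a measurement taken while the eavesdropper is active with a tag-only baseline into subtraction and division components, then feeds them to an off-the-shelf classifier over four grid cells. The feature definition, array shapes, signal values, and classifier choice are illustrative assumptions, not the paper's exact method.

```python
# Hypothetical sketch: "subtraction-and-division" style features feeding a
# grid-cell classifier. Feature definitions and names are assumptions, not
# the paper's exact formulation.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def sub_div_features(rss_with_eve, rss_baseline):
    """Combine a measurement taken with the eavesdropper active and a
    tag-only baseline into subtraction and division components."""
    diff = rss_with_eve - rss_baseline             # subtraction feature
    ratio = rss_with_eve / (rss_baseline + 1e-9)   # division feature (avoid /0)
    return np.concatenate([diff, ratio])

# Toy training data: each row is one observation across 4 reader antennas;
# labels are the grid cell (0..3) the eavesdropper occupied.
rng = np.random.default_rng(0)
X = np.stack([sub_div_features(rng.normal(-50, 3, 4), rng.normal(-55, 3, 4))
              for _ in range(200)])
y = rng.integers(0, 4, size=200)                   # placeholder labels

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(clf.predict(X[:1]))                          # predicted grid cell
```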
A significant percentage of cyber security incidents can be prevented by changing human behavior. The humans in the loop include system administrators, software developers, end users, and the personnel responsible for securing the system. Each of these groups works in a given context and is affected both by soft factors, such as management influences and workload, and by more tangible real-world factors, such as errors in procedures and scanning devices, faulty code, or the usability of the systems they work with.
Human behaviors are often prohibited or permitted by social norms. Therefore, if autonomous agents interact with humans, they also need to reason about various legal rules and social and ethical norms, so that they will be trusted and accepted by humans. Inverse Reinforcement Learning (IRL) can be used by autonomous agents to learn norm-compliant behavior from expert demonstrations. However, norms are context-sensitive, i.e., different norms are activated in different contexts. For example, the privacy norm is activated for a domestic robot entering a bathroom where a person may be present, whereas it is not activated when the robot enters the kitchen. Representing all possible contexts in the robot's state space, and obtaining expert demonstrations under all possible tasks and contexts, is extremely challenging. Inspired by recent work on Modularized Normative MDPs (MNMDPs) and early work on context-sensitive RL, we propose a new IRL framework, Context-Sensitive Norm IRL (CNIRL). CNIRL treats states and contexts separately, and assumes that the expert determines the priority of every possible norm in the environment, where each norm is associated with a distinct reward function. The agent chooses actions to maximize its cumulative reward. We present the CNIRL model and show that its computational complexity scales in the number of norms. We also show via two experimental scenarios that CNIRL can handle problems with changing context spaces.
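A minimal sketch of the context-sensitive reward composition described in this abstract: each norm carries its own reward function, the context determines which norms are active, and active norms are weighted by expert-assigned priorities. The norm names, priorities, and summation rule below are illustrative assumptions rather than CNIRL's exact formulation.

```python
# Minimal sketch of a context-sensitive, per-norm reward composition.
# Norm names, priorities, and reward signatures are illustrative assumptions.
from typing import Callable, Dict, Set

State, Action, Context = str, str, str
RewardFn = Callable[[State, Action], float]

class ContextSensitiveReward:
    def __init__(self,
                 norm_rewards: Dict[str, RewardFn],             # one reward per norm
                 norm_priority: Dict[str, float],               # expert-given priorities
                 active_norms: Callable[[Context], Set[str]]):  # context -> active norms
        self.norm_rewards = norm_rewards
        self.norm_priority = norm_priority
        self.active_norms = active_norms

    def reward(self, state: State, action: Action, context: Context) -> float:
        """Sum the rewards of norms activated by the context, weighted by priority."""
        total = 0.0
        for norm in self.active_norms(context):
            total += self.norm_priority[norm] * self.norm_rewards[norm](state, action)
        return total

# Example: a privacy norm that is only active in the bathroom context.
privacy = lambda s, a: -1.0 if a == "enter" else 0.0
task = lambda s, a: 1.0 if a == "enter" else 0.0
r = ContextSensitiveReward(
    norm_rewards={"privacy": privacy, "task": task},
    norm_priority={"privacy": 2.0, "task": 1.0},
    active_norms=lambda c: {"privacy", "task"} if c == "bathroom" else {"task"})
print(r.reward("door", "enter", "bathroom"))  # -> -1.0 (privacy outweighs task reward)
print(r.reward("door", "enter", "kitchen"))   # ->  1.0 (privacy norm not active)
```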