Biblio

Filters: Keyword is AI and Privacy
Postnikoff, Brittany, Goldberg, Ian.  2018.  Robot Social Engineering: Attacking Human Factors with Non-Human Actors. Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. :313–314.

Social robots may make use of social abilities such as persuasion, commanding obedience, and lying. Meanwhile, the field of computer security and privacy has shown that these interpersonal skills can be applied by humans to perform social engineering attacks. Social engineering attacks are the deliberate application of manipulative social skills by an individual in an attempt to achieve a goal by convincing others to do or say things that may or may not be in their best interests. In our work we argue that robot social engineering attacks are already possible and that defenses should be developed to protect against these attacks. We do this by defining what a robot social engineer is, outlining how previous research has demonstrated robot social engineering, and discussing the risks that can accompany robot social engineering attacks.

Abou-Zahra, Shadi, Brewer, Judy, Cooper, Michael.  2018.  Artificial Intelligence (AI) for Web Accessibility: Is Conformance Evaluation a Way Forward? Proceedings of the Internet of Accessible Things. :20:1–20:4.

The term "artificial intelligence" is a buzzword today and is heavily used to market products, services, research, conferences, and more. It is scientifically disputed which types of products and services do actually qualify as "artificial intelligence" versus simply advanced computer technologies mimicking aspects of natural intelligence. Yet it is undisputed that, despite often inflationary use of the term, there are mainstream products and services today that for decades were only thought to be science fiction. They range from industrial automation, to self-driving cars, robotics, and consumer electronics for smart homes, workspaces, education, and many more contexts. Several technological advances enable what is commonly referred to as "artificial intelligence". It includes connected computers and the Internet of Things (IoT), open and big data, low cost computing and storage, and many more. Yet regardless of the definition of the term artificial intelligence, technological advancements in this area provide immense potential, especially for people with disabilities. In this paper we explore some of these potential in the context of web accessibility. We review some existing products and services, and their support for web accessibility. We propose accessibility conformance evaluation as one potential way forward, to accelerate the uptake of artificial intelligence, to improve web accessibility.

Bahirat, Paritosh, He, Yangyang, Menon, Abhilash, Knijnenburg, Bart.  2018.  A Data-Driven Approach to Developing IoT Privacy-Setting Interfaces. 23rd International Conference on Intelligent User Interfaces. :165–176.

User testing is often used to inform the development of user interfaces (UIs). But what if an interface needs to be developed for a system that does not yet exist? In that case, existing datasets can provide valuable input for UI development. We apply a data-driven approach to the development of a privacy-setting interface for Internet-of-Things (IoT) devices. Applying machine learning techniques to an existing dataset of users' sharing preferences in IoT scenarios, we develop a set of "smart" default profiles. Our resulting interface asks users to choose among these profiles, which capture their preferences with an accuracy of 82%—a 14% improvement over a naive default setting and a 12% improvement over a single smart default setting for all users.
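
The abstract does not give the authors' method in detail; the following minimal Python sketch (using scikit-learn, a placeholder dataset, and an assumed profile count) only illustrates the general idea of deriving a small set of "smart" default profiles by clustering users' sharing decisions and scoring how well each user's nearest profile matches their actual choices.

# Hypothetical sketch, not the authors' code: cluster a binary matrix of
# sharing decisions (1 = allow in that IoT scenario) into k default profiles,
# then measure how well each user's assigned profile predicts their choices.
# The data, k, and the scoring are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
decisions = (rng.random((200, 20)) > 0.5).astype(int)  # placeholder preferences

k = 4  # assumed number of profiles offered in the interface
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(decisions)

# Each profile is the majority decision of its cluster, per scenario.
profiles = (km.cluster_centers_ > 0.5).astype(int)

# Accuracy if every user simply adopts the profile of their cluster.
predicted = profiles[km.labels_]
print(f"profile accuracy: {(predicted == decisions).mean():.2%}")

In an interface built on such profiles, a user would choose the profile closest to their preferences and adjust individual settings from that starting point.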

Bahirat, Paritosh, Sun, Qizhang, Knijnenburg, Bart P..  2018.  Scenario Context V/s Framing and Defaults in Managing Privacy in Household IoT. Proceedings of the 23rd International Conference on Intelligent User Interfaces Companion. :63:1–63:2.

The Internet of Things provides household device users with the ability to connect and manage numerous devices over a common platform. However, the sheer number of possible privacy settings creates issues such as choice overload. This article outlines a data-driven approach to understanding how users make privacy decisions in household IoT scenarios. We demonstrate that users are not just influenced by the specifics of the IoT scenario, but also by aspects immaterial to the decision, such as the default setting and its framing.

Golbeck, Jennifer.  2018.  Surveillance or Support?: When Personalization Turns Creepy. 23rd International Conference on Intelligent User Interfaces. :5–5.

Personalization, recommendations, and user modeling can be powerful tools to improve people's experiences with technology and to help them find information. However, we also know that people underestimate how much of their personal information is used by our technology and they generally do not understand how much algorithms can discover about them. Both privacy and ethical technology have issues of consent at their heart. While many personalization systems assume most users would consent to the way they employ personal data, research shows this is not necessarily the case. This talk will look at how to consider issues of privacy and consent when users cannot explicitly state their preferences (the "Creepy Factor"), and how to balance users' concerns with the benefits personalized technology can offer.

Wagner, Alan R..  2018.  An Autonomous Architecture That Protects the Right to Privacy. Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. :330–334.

The advent and widespread adoption of wearable cameras and autonomous robots raises important issues related to privacy. The mobile cameras on these systems record and may re-transmit enormous amounts of video data that can then be used to identify, track, and characterize the behavior of the general populace. This paper presents a preliminary computational architecture designed to preserve specific types of privacy over a video stream by identifying categories of individuals, places, and things that require higher than normal privacy protection. This paper describes the architecture as a whole as well as preliminary results testing aspects of the system. Our intention is to implement and test the system on ground robots and small UAVs and demonstrate that the system can provide selective low-level masking or deletion of data requiring higher privacy protection.
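
As a rough illustration only (not the paper's architecture), the Python/OpenCV snippet below shows what selective low-level masking of a video stream can look like: regions a detector flags as privacy-sensitive are blurred before the frame is stored or transmitted. The Haar-cascade face detector here is a stand-in assumption for the paper's classifiers of protected individuals, places, and things.

# Illustrative sketch: blur any region a detector flags as sensitive.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def mask_sensitive_regions(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        # Low-level masking: blur the flagged region in place.
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w], (51, 51), 0)
    return frame

cap = cv2.VideoCapture(0)  # e.g. a robot's or UAV's onboard camera
ok, frame = cap.read()
if ok:
    safe_frame = mask_sensitive_regions(frame)
cap.release()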

Manikonda, Lydia, Deotale, Aditya, Kambhampati, Subbarao.  2018.  What's Up with Privacy?: User Preferences and Privacy Concerns in Intelligent Personal Assistants. Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. :229–235.

The recent breakthroughs in Artificial Intelligence (AI) have allowed individuals to rely on automated systems for a variety of reasons. Some of these systems are the currently popular voice-enabled systems like Echo by Amazon and Home by Google, also known as Intelligent Personal Assistants (IPAs). Though there are rising concerns about privacy and ethical implications, users of these IPAs seem to continue using these systems. We aim to investigate to what extent users are concerned about privacy and how they are handling these concerns while using the IPAs. By utilizing the reviews posted online along with the responses to a survey, this paper provides a set of insights about the detected markers related to user interests and privacy challenges. The insights suggest that users of these systems, irrespective of their concerns about privacy, are generally positive in terms of utilizing IPAs in their everyday lives. However, there is a significant percentage of users who are concerned about privacy and take further actions to address related concerns. Some users expressed that they do not have any privacy concerns, but when they learned about the "always listening" feature of these devices, their concern about privacy increased.

Riazi, M. Sadegh, Koushanfar, Farinaz.  2018.  Privacy-Preserving Deep Learning and Inference. Proceedings of the International Conference on Computer-Aided Design. :18:1–18:4.

We provide a systemization of knowledge of the recent progress made in addressing the crucial problem of deep learning on encrypted data. The problem is important due to the prevalence of deep learning models across various applications, and privacy concerns over the exposure of deep learning IP and users' data. Our focus is on provably secure methodologies that rely on cryptographic primitives and not on trusted third parties/platforms. The computational intensity of the learning models, together with the complexity of realizing the cryptographic algorithms, makes practical implementation a challenge. We provide a summary of the state of the art, a comparison of the existing solutions, as well as future challenges and opportunities.
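
The abstract discusses cryptographic approaches only at a high level; as a toy Python illustration of computing on data that is never revealed in the clear, the sketch below additively secret-shares a user's input between two non-colluding servers, each of which evaluates a public-weight linear layer on its share alone. Frameworks surveyed in such work combine primitives like secret sharing, garbled circuits, or homomorphic encryption to handle full networks; this example covers only the linear part and uses made-up values.

# Toy illustration of secure inference on additive secret shares.
import numpy as np

P = 2**31 - 1                       # public prime modulus
rng = np.random.default_rng(0)

x = np.array([3, 1, 4, 1, 5])       # user's private integer input
W = rng.integers(0, 10, (2, 5))     # public model weights (one linear layer)

# Client: split x into two random shares that sum to x modulo P.
x1 = rng.integers(0, P, x.shape)
x2 = (x - x1) % P

# Each server sees only its own share and computes the layer on it.
y1 = (W @ x1) % P
y2 = (W @ x2) % P

# Client: recombine the output shares to recover W @ x (mod P).
y = (y1 + y2) % P
assert np.array_equal(y, (W @ x) % P)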

Menet, François, Berthier, Paul, Gagnon, Michel, Fernandez, José M..  2018.  Spartan Networks: Self-Feature-Squeezing Networks for Increased Robustness in Adversarial Settings. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :2246–2248.

Deep Learning models are vulnerable to adversarial inputs: samples modified in order to maximize the error of the system. We hereby introduce Spartan Networks, Deep Learning models that are inherently more resistant to adversarial examples, without any input preprocessing outside the network or adversarial training. These networks have an adversarial layer within the network designed to starve the network of information, using a new activation function to discard data. This layer trains the neural network to filter out usually-irrelevant parts of its input. These models thus have slightly lower precision, but exhibit higher robustness under attack than unprotected models.
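
The abstract does not specify the activation function itself, so the following PyTorch sketch is only a hypothetical illustration of an in-network, information-discarding ("self-feature-squeezing") layer: activations below a threshold are zeroed, starving later layers of fine-grained detail while still letting gradients flow through the surviving units.

# Hypothetical sketch of an information-starving activation layer.
import torch
import torch.nn as nn

class SqueezeActivation(nn.Module):
    """Discards low-magnitude activations to remove fine-grained detail."""
    def __init__(self, threshold=0.5):
        super().__init__()
        self.threshold = threshold

    def forward(self, x):
        x = torch.relu(x)
        # Keep only activations above the threshold; the rest are dropped.
        return torch.where(x >= self.threshold, x, torch.zeros_like(x))

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(784, 128),
    SqueezeActivation(),   # the deliberately "starved" layer
    nn.Linear(128, 10),
)
print(model(torch.randn(8, 1, 28, 28)).shape)  # torch.Size([8, 10])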

Toch, Eran, Birman, Yoni.  2018.  Towards Behavioral Privacy: How to Understand AI's Privacy Threats in Ubiquitous Computing. Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers. :931–936.

Human behavior is increasingly sensed, recorded, and used to create models that accurately predict the behavior of consumers, employees, and citizens. While behavioral models are important in many domains, the ability to predict individuals' behavior is the focus of growing privacy concerns. The legal and technological measures for privacy do not adequately recognize and address the ability to infer behavior and traits. In this position paper, we first analyze the shortcomings of existing privacy theories in addressing AI's inferential abilities. We then point to legal and theoretical frameworks that can adequately describe the potential of AI to negatively affect people's privacy. Finally, we present a technical privacy measure that can help bridge the divide between legal and technical thinking with respect to AI and privacy.