Biblio

Filters: Keyword is AI
2021-05-25
Laato, Samuli, Farooq, Ali, Tenhunen, Henri, Pitkamaki, Tinja, Hakkala, Antti, Airola, Antti.  2020.  AI in Cybersecurity Education- A Systematic Literature Review of Studies on Cybersecurity MOOCs. 2020 IEEE 20th International Conference on Advanced Learning Technologies (ICALT). :6—10.

Machine learning (ML) techniques are changing both the offensive and defensive aspects of cybersecurity. The implications are especially strong for privacy, as ML approaches provide unprecedented opportunities to make use of collected data. Thus, education on cybersecurity and AI is needed. To investigate how AI and cybersecurity should be taught together, we look at previous studies on cybersecurity MOOCs by conducting a systematic literature review. The initial search resulted in 72 items, and after screening for only peer-reviewed publications on cybersecurity online courses, 15 studies remained. Three of the studies concerned multiple cybersecurity MOOCs, whereas 12 focused on individual courses. The number of published works evaluating specific cybersecurity MOOCs was found to be small compared to the number of available cybersecurity MOOCs. Analysis of the studies revealed that cybersecurity education is, in almost all cases, organised by topic rather than by the tools used, making it difficult for learners to find focused information on AI applications in cybersecurity. Furthermore, there is a gap in the academic literature on how AI applications in cybersecurity should be taught in online courses.

2021-02-03
Aliman, N.-M., Kester, L..  2020.  Malicious Design in AIVR, Falsehood and Cybersecurity-oriented Immersive Defenses. 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR). :130—137.

Advancements in the AI field unfold tremendous opportunities for society. Simultaneously, it becomes increasingly important to address emerging ramifications. Thereby, the focus is often set on ethical and safe design forestalling unintentional failures. However, cybersecurity-oriented approaches to AI safety additionally consider instantiations of intentional malice – including unethical malevolent AI design. Recently, an analogous emphasis on malicious actors has been expressed regarding security and safety for virtual reality (VR). In this vein, while the intersection of AI and VR (AIVR) offers a wide array of beneficial cross-fertilization possibilities, it is prudent to anticipate future malicious AIVR design from the outset, given the potential socio-psycho-technological impacts. For a simplified illustration, this paper analyzes the conceivable use case of Generative AI (here deepfake techniques) utilized for disinformation in immersive journalism. In our view, defenses against such future AIVR safety risks related to falsehood in immersive settings should be transdisciplinarily conceived from an immersive co-creation stance. As a first step, we motivate a cybersecurity-oriented procedure to generate defenses via immersive design fictions. Overall, there may be no panacea, but updatable transdisciplinary tools, including AIVR itself, could be used to incrementally defend against malicious actors in AIVR.

2021-01-11
Whyte, C..  2020.  Problems of Poison: New Paradigms and "Agreed" Competition in the Era of AI-Enabled Cyber Operations. 2020 12th International Conference on Cyber Conflict (CyCon). 1300:215–232.
Few developments seem as poised to alter the characteristics of security in the digital age as the advent of artificial intelligence (AI) technologies. For national defense establishments, the emergence of AI techniques is particularly worrisome, not least because prototype applications already exist. Cyber attacks augmented by AI portend the tailored manipulation of human vectors within the attack surface of important societal systems at great scale, as well as opportunities for calamity resulting from the secondment of technical skill from the hacker to the algorithm. Arguably most important, however, is the fact that AI-enabled cyber campaigns contain great potential for operational obfuscation and strategic misdirection. At the operational level, techniques for piggybacking onto routine activities and for adaptive evasion of security protocols add uncertainty, complicating the defensive mission particularly where adversarial learning tools are employed in offense. Strategically, AI-enabled cyber operations cut both ways: on the one hand, actors that attempt to persistently shape the spectrum of cyber contention may be able to pursue conflict outcomes beyond the expected scope of adversary operation; on the other, AI-augmented cyber defenses incorporated into national defense postures are likely to be vulnerable to "poisoning" attacks that predict, manipulate and subvert the functionality of defensive algorithms. This article takes on two primary tasks. First, it considers and categorizes the primary ways in which AI technologies are likely to augment offensive cyber operations, including the shape of cyber activities designed to target AI systems. Then, it frames a discussion of implications for deterrence in cyberspace by referring to the policy of persistent engagement, agreed competition and forward defense promulgated in 2018 by the United States. Here, it is argued that the centrality of cyberspace to the deployment and operation of soon-to-be-ubiquitous AI systems implies new motivations for operation within the domain, complicating numerous assumptions that underlie current approaches. In particular, AI cyber operations pose unique measurement issues for the policy regime.
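
To make the "poisoning" threat concrete, here is a minimal sketch of a label-flipping attack against a learned defense; the synthetic data, the logistic-regression model, and the 20% flip rate are illustrative assumptions, not drawn from the paper.

```python
# Label-flipping poisoning sketch: an adversary corrupts a fraction of a
# defensive classifier's training labels and degrades its test accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Adversary flips 20% of the training labels.
rng = np.random.default_rng(0)
flip = rng.choice(len(y_tr), size=int(0.2 * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print("clean accuracy:   ", clean.score(X_te, y_te))
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```
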
2020-08-07
Hasan, Kamrul, Shetty, Sachin, Ullah, Sharif.  2019.  Artificial Intelligence Empowered Cyber Threat Detection and Protection for Power Utilities. 2019 IEEE 5th International Conference on Collaboration and Internet Computing (CIC). :354—359.
Cyber threats have increased extensively during the last decade, especially in smart grids, and cybercriminals have become more sophisticated. Current security controls are not enough to defend networks against the growing number of highly skilled cybercriminals, who have learned how to evade the most sophisticated tools, such as Intrusion Detection and Prevention Systems (IDPS), while Advanced Persistent Threats (APT) are almost invisible to current tools. Fortunately, the application of Artificial Intelligence (AI) may increase the detection rate of IDPS systems, and Machine Learning (ML) techniques can mine data to detect the different attack stages of APT. However, the implementation of AI may bring other risks, and cybersecurity experts need to find a balance between risk and benefits.
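
A hedged sketch of the kind of ML-based anomaly detection the abstract envisions for raising IDPS detection rates; the flow features and the choice of an Isolation Forest are illustrative assumptions, since the paper does not prescribe a specific model.

```python
# Flag "low-and-slow" APT-like traffic as anomalous relative to a
# baseline of normal flows.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Toy flow features: [bytes_sent, packets, duration_s]
normal = rng.normal(loc=[500, 20, 5], scale=[100, 5, 2], size=(1000, 3))
apt_like = rng.normal(loc=[50, 2, 600], scale=[10, 1, 60], size=(10, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
scores = model.predict(apt_like)  # -1 = anomaly, 1 = normal
print("flagged as anomalous:", int((scores == -1).sum()), "of", len(apt_like))
```
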
Torkzadehmahani, Reihaneh, Kairouz, Peter, Paten, Benedict.  2019.  DP-CGAN: Differentially Private Synthetic Data and Label Generation. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :98—104.
Generative Adversarial Networks (GANs) are one of the well-known models used to generate synthetic data, including images, especially for research communities that cannot use the original sensitive datasets because they are not publicly accessible. One of the main challenges in this area is to preserve the privacy of individuals who participate in the training of the GAN models. To address this challenge, we introduce a Differentially Private Conditional GAN (DP-CGAN) training framework based on a new clipping and perturbation strategy, which improves the performance of the model while preserving the privacy of the training dataset. DP-CGAN generates both synthetic data and corresponding labels and leverages the recently introduced Rényi differential privacy accountant to track the spent privacy budget. The experimental results show that DP-CGAN can generate visually and empirically promising results on the MNIST dataset with a single-digit epsilon parameter in differential privacy.
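
The "clipping and perturbation" at the core of DP-CGAN follows the familiar DP-SGD pattern: clip each example's gradient, then add calibrated Gaussian noise. A minimal numpy sketch of that step, with illustrative hyperparameters (clip_norm, noise_multiplier); the paper's exact strategy and its Rényi accounting are not reproduced here.

```python
import numpy as np

def dp_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, seed=0):
    """Clip each example's gradient to clip_norm, sum, add Gaussian noise, average."""
    rng = np.random.default_rng(seed)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

# Stand-in batch of 32 per-example gradients over 10 parameters.
grads = [np.random.default_rng(i).normal(size=10) for i in range(32)]
print(dp_gradient(grads))
```
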
Ramezanian, Sara, Niemi, Valtteri.  2019.  Privacy Preserving Cyberbullying Prevention with AI Methods in 5G Networks. 2019 25th Conference of Open Innovations Association (FRUCT). :265—271.
Children and teenagers who have been victims of bullying can suffer its psychological effects for a lifetime. With the growth of online social media, cyberbullying incidents have increased as well. In this paper we discuss how we can detect cyberbullying with AI techniques, using term frequency-inverse document frequency, labelling messages as benign or bully. We want our method of cyberbullying detection to be privacy-preserving, such that subscribers' benign messages are not revealed to the operator. Moreover, the operator labels subscribers as normal, bully, or victim, and utilizes policy control in 5G networks to protect victims of cyberbullying from harmful traffic.
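
A minimal sketch of TF-IDF-based benign/bully message labelling of the kind the abstract describes; the toy corpus and the Naive Bayes classifier are illustrative assumptions, and the privacy-preserving protocol between subscriber and operator is not modelled here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["see you at practice tonight", "nobody likes you loser",
            "great game today", "you are so stupid and ugly"]
labels = ["benign", "bully", "benign", "bully"]

# TF-IDF features feed a simple classifier that labels messages.
clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(messages, labels)
print(clf.predict(["you played great", "everyone thinks you are stupid"]))
```
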
Chen, Huili, Cammarota, Rosario, Valencia, Felipe, Regazzoni, Francesco.  2019.  PlaidML-HE: Acceleration of Deep Learning Kernels to Compute on Encrypted Data. 2019 IEEE 37th International Conference on Computer Design (ICCD). :333—336.

Machine Learning as a Service (MLaaS) is becoming a popular practice where Service Consumers, e.g., end-users, send their data to a ML Service and receive the prediction outputs. However, the emerging usage of MLaaS has raised severe privacy concerns about users' proprietary data. Privacy-Preserving Machine Learning (PPML) techniques aim to incorporate cryptographic primitives such as Homomorphic Encryption (HE) and Multi-Party Computation (MPC) into ML services to address privacy concerns from a technology standpoint. Existing PPML solutions have not been widely adopted in practice due to their assumed high overhead and integration difficulty within various ML front-end frameworks as well as hardware backends. In this work, we propose PlaidML-HE, the first end-to-end HE compiler for PPML inference. Leveraging the capability of Domain-Specific Languages, PlaidML-HE enables automated generation of HE kernels across diverse types of devices. We evaluate the performance of PlaidML-HE on different ML kernels and demonstrate that PlaidML-HE greatly reduces the overhead of the HE primitive compared to the existing implementations.
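
For intuition about why HE enables computing on encrypted data, a toy textbook Paillier example showing the additive homomorphism that HE-based kernels exploit: multiplying ciphertexts decrypts to the sum of the plaintexts. The tiny primes are insecure demo values; this is a didactic sketch, not PlaidML-HE's implementation.

```python
import math, random

p, q = 1789, 1861                       # insecure demo primes
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(123), encrypt(456)
print(decrypt((c1 * c2) % n2))          # 579: addition happened under encryption
```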

Dilmaghani, Saharnaz, Brust, Matthias R., Danoy, Grégoire, Cassagnes, Natalia, Pecero, Johnatan, Bouvry, Pascal.  2019.  Privacy and Security of Big Data in AI Systems: A Research and Standards Perspective. 2019 IEEE International Conference on Big Data (Big Data). :5737—5743.

The huge volume, variety, and velocity of big data have empowered Machine Learning (ML) techniques and Artificial Intelligence (AI) systems. However, a vast portion of the data used to train AI systems is sensitive information, so any vulnerability has a potentially disastrous impact on privacy and security. Nevertheless, the increased demand for high-quality AI from governments and companies requires the utilization of big data in such systems. Several studies have highlighted the threats of big data on different platforms and the countermeasures to reduce the risks caused by attacks. In this paper, we provide an overview of the existing threats to privacy and security inflicted by big data as a primary driving force within the AI/ML workflow. We define an adversarial model to investigate the attacks, and we analyze and summarize the defense strategies and countermeasures against these attacks. Furthermore, given the impact of AI systems on the market and across the vast majority of business sectors, we also investigate Standards Developing Organizations (SDOs) that are actively involved in providing guidelines to protect the privacy and ensure the security of big data and AI systems. Our far-reaching goal is to bridge the research and standardization frames to increase the consistency and efficiency of AI system development, guaranteeing customer satisfaction while conveying a high degree of trustworthiness.

Zhu, Tianqing, Yu, Philip S..  2019.  Applying Differential Privacy Mechanism in Artificial Intelligence. 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS). :1601—1609.
Artificial Intelligence (AI) has attracted a large amount of attention in recent years. However, several new problems, such as privacy violations, security issues, and questions of effectiveness, have been emerging. Differential privacy has several attractive properties that make it quite valuable for AI, such as privacy preservation, security, randomization, composition, and stability. Therefore, this paper presents differential privacy mechanisms for multi-agent systems, reinforcement learning, and knowledge transfer based on those properties, showing that current AI can benefit from differential privacy mechanisms. In addition, the previous usage of differential privacy mechanisms in private machine learning, distributed machine learning, and fairness in models is discussed, suggesting several possible avenues for using differential privacy mechanisms in AI. The purpose of this paper is to deliver the initial idea of how to integrate AI with differential privacy mechanisms and to explore further possibilities to improve AI's performance.
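
A minimal sketch of the Laplace mechanism, the elementary differential-privacy building block that such mechanisms compose; the count query, its sensitivity of 1, and epsilon = 0.5 are illustrative assumptions.

```python
import numpy as np

def laplace_count(data, predicate, epsilon, sensitivity=1.0, seed=0):
    """Return a differentially private count: true count + Laplace noise."""
    rng = np.random.default_rng(seed)
    true_count = sum(1 for x in data if predicate(x))
    return true_count + rng.laplace(0.0, sensitivity / epsilon)

ages = [23, 37, 41, 19, 52, 33, 60, 28]
print(laplace_count(ages, lambda a: a >= 40, epsilon=0.5))  # noisy count of people aged 40+
```
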
Mehta, Brijesh B., Gupta, Ruchika, Rao, Udai Pratap, Muthiyan, Mukesh.  2019.  A Scalable (α, k)-Anonymization Approach using MapReduce for Privacy Preserving Big Data Publishing. 2019 10th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1—6.
Different tools and sources are used to collect big data, which may create privacy issues. Privacy-preserving data publishing approaches such as k-anonymity, l-diversity, and t-closeness are used for data de-identification, but as multiple sources are used to collect the data, the chance of re-identification is very high. Anonymizing large data is not a trivial task; hence, the scalability of privacy-preserving approaches has become a challenging research area. Researchers explore it by proposing algorithms for scalable anonymization. We further found that in some scenarios efficient anonymization is not enough; timely anonymization is also required. Hence, to incorporate the velocity of data into the Scalable k-Anonymization (SKA) approach, we propose a novel approach, Scalable (α, k)-Anonymization (SAKA). Our proposed approach outperforms existing approaches in terms of information loss and running time. To the best of our knowledge, this is the first scalable anonymization approach proposed for the velocity of data.
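
A hedged single-machine sketch of the (α, k)-anonymity condition that SAKA scales out with MapReduce: every quasi-identifier group must contain at least k records, and no sensitive value may exceed an α fraction of any group. Column names and thresholds are illustrative assumptions.

```python
import pandas as pd

df = pd.DataFrame({
    "age_band": ["20-29", "20-29", "20-29", "30-39", "30-39", "30-39"],
    "zip3":     ["900",   "900",   "900",   "901",   "901",   "901"],
    "disease":  ["flu",   "flu",   "cold",  "flu",   "cold",  "cold"],
})

def satisfies_alpha_k(df, quasi_ids, sensitive, alpha=0.7, k=3):
    for _, group in df.groupby(quasi_ids):
        if len(group) < k:                    # k-anonymity condition
            return False
        if (group[sensitive].value_counts() / len(group)).max() > alpha:
            return False                      # alpha-deassociation condition
    return True

print(satisfies_alpha_k(df, ["age_band", "zip3"], "disease"))  # True for this toy table
```
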
Smith, Gary.  2019.  Artificial Intelligence and the Privacy Paradox of Opportunity, Big Data and The Digital Universe. 2019 International Conference on Computational Intelligence and Knowledge Economy (ICCIKE). :150—153.
Artificial Intelligence (AI) can and does use individuals' data to make predictions about their wants, their needs, the influences on them, and what they could do. The use of individuals' data naturally raises privacy concerns. This article focuses on AI and the privacy issue against the backdrop of the endless growth of the Digital Universe, where Big Data, AI, Data Analytics, and 5G technology live and grow in the Internet of Things (IoT).
Moriai, Shiho.  2019.  Privacy-Preserving Deep Learning via Additively Homomorphic Encryption. 2019 IEEE 26th Symposium on Computer Arithmetic (ARITH). :198—198.

We aim at creating a society where we can resolve various social challenges by incorporating the innovations of the fourth industrial revolution (e.g. IoT, big data, AI, robots, and the sharing economy) into every industry and social life. By doing so, the society of the future will be one in which new values and services are created continuously, making people's lives more comfortable and sustainable. This is Society 5.0, a super-smart society. Security and privacy are key issues to be addressed to realize Society 5.0, and privacy-preserving data analytics will play an important role. In this talk we show our recent works on privacy-preserving data analytics such as privacy-preserving logistic regression and privacy-preserving deep learning. Finally, we show our ongoing research project under JST CREST “AI”. In this project we are developing privacy-preserving financial data analytics systems that can detect fraud with high security and accuracy. To validate the systems, we will perform demonstration tests with several financial institutions and solve the problems necessary for their implementation in the real world.

Nawaz, A., Gia, T. N., Queralta, J. Peña, Westerlund, T..  2019.  Edge AI and Blockchain for Privacy-Critical and Data-Sensitive Applications. 2019 Twelfth International Conference on Mobile Computing and Ubiquitous Network (ICMU). :1—2.
The edge and fog computing paradigms enable more responsive and smarter systems without relying on cloud servers for data processing and storage. This reduces network load as well as latency. Nonetheless, the addition of new layers in the network architecture increases the number of security vulnerabilities. In privacy-critical systems, the appearance of new vulnerabilities is more significant. To cope with this issue, we propose and implement an Ethereum Blockchain based architecture with edge artificial intelligence to analyze data at the edge of the network and keep track of the parties that access the results of the analysis, which are stored in distributed databases.
Liu, Bo, Xiong, Jian, Wu, Yiyan, Ding, Ming, Wu, Cynthia M..  2019.  Protecting Multimedia Privacy from Both Humans and AI. 2019 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB). :1—6.
With the development of artificial intelligence (AI), multimedia privacy issues have become more challenging than ever. AI-assisted malicious entities can steal private information from multimedia data more easily than humans can. Traditional multimedia privacy protection only considers situations where humans are the adversaries, and it is therefore ineffective against AI-assisted attackers. In this paper, we develop a new framework and new algorithms that can protect image privacy from both humans and AI. We combine the idea of adversarial image perturbation, which is effective against AI, with the obfuscation technique for human adversaries. Experiments show that our proposed methods work well for all types of attackers.
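
One common form of the adversarial image perturbation the paper builds on is the fast gradient sign method; a hedged PyTorch sketch with a stand-in model and an illustrative epsilon follows (this is not the paper's specific algorithm or its human-facing obfuscation step).

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in image in [0, 1]
label = torch.tensor([3])

# FGSM: step each pixel by epsilon in the direction of the loss gradient's sign.
loss = loss_fn(model(image), label)
loss.backward()

epsilon = 0.05
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()
print("max pixel change:", (adversarial - image).abs().max().item())
```
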
2020-07-20
Stroup, Ronald L., Niewoehner, Kevin R..  2019.  Application of Artificial Intelligence in the National Airspace System – A Primer. 2019 Integrated Communications, Navigation and Surveillance Conference (ICNS). :1–14.

The National Airspace System (NAS), as a portion of the US' transportation system, has not yet begun to model or adopt integration of Artificial Intelligence (AI) technology. However, users of the NAS, e.g., air transport operators, UAS operators, etc., are beginning to use this technology throughout their operations. At issue within the broader aviation marketplace is the continued search for a solution set to the persistent daily delays and schedule perturbations that occur within the NAS. Despite billions invested through the NAS Modernization Program, the delays persist in the face of reduced demand for commercial routings. Every delay represents an economic loss to commercial transport operators, passengers, freighters, and any business depending on the transportation performance. Therefore, the FAA needs to begin to address, from an advanced concepts perspective, what this wave of new technology will affect as it is brought to bear on various operations performance parameters, including safety, security, efficiency, and resiliency solution sets. This paper is the first in a series of papers we are developing to explore the application of AI in the National Airspace System (NAS). This first paper is meant to get everyone in the aviation community on the same page, a primer if you will, to start the technical discussions. This paper will define AI; the capabilities associated with AI; current use cases within the aviation ecosystem; and how to prepare for the insertion of AI in the NAS. The next series of papers will look at NAS Operations Theory utilizing AI capabilities and eventually lead to a future intelligent NAS (iNAS) environment.

2020-02-17
Thomopoulos, Stelios C. A..  2019.  Maritime Situational Awareness Forensics Tools for a Common Information Sharing Environment (CISE). 2019 4th International Conference on Smart and Sustainable Technologies (SpliTech). :1–5.
CISE stands for Common Information Sharing Environment and refers to an architecture and set of protocols, procedures and services for the exchange of data and information across Maritime Authorities of EU (European Union) Member States (MS's). In the context of enabling the implementation and adoption of CISE by different MS's, EU has funded a number of projects that enable the development of subsystems and adaptors intended to allow MS's to connect and make use of CISE. In this context, the Integrated Systems Laboratory (ISL) has led the development of the corresponding Hellenic and Cypriot CISE by developing a Control, Command & Information (C2I) system that unifies all partial maritime surveillance systems into one National Situational Picture Management (NSPM) system, and adaptors that allow the interconnection of the corresponding national legacy systems to CISE and the exchange of data, information and requests between the two MS's. Furthermore, a set of forensics tools that allow geospatial & time filtering and detection of anomalies, risk incidents, fake MMSIs, suspicious speed changes, collision paths, and gaps in AIS (Automatic Identification System), have been developed by combining motion models, AI, deep learning and fusion algorithms using data from different databases through CISE. This paper briefly discusses these developments within the EU CISE-2020, Hellenic CISE and CY-CISE projects and the benefits from the sharing of maritime data across CISE for both maritime surveillance and security. The prospect of using CISE for the creation of a considerably rich database that could be used for forensics analysis and detection of suspicious maritime traffic and maritime surveillance is discussed.
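
A hedged pandas sketch of two of the AIS forensics checks listed above, reporting gaps and suspicious speed changes; the thresholds, column names, and toy track are illustrative assumptions, not ISL's fusion or deep-learning pipeline.

```python
import pandas as pd

track = pd.DataFrame({
    "mmsi": [244660000] * 5,
    "ts": pd.to_datetime(["2019-06-01 10:00", "2019-06-01 10:06",
                          "2019-06-01 10:12", "2019-06-01 11:40",
                          "2019-06-01 11:46"]),
    "sog_knots": [12.1, 12.3, 12.0, 2.5, 24.0],
})

track = track.sort_values("ts")
gap_minutes = track["ts"].diff().dt.total_seconds() / 60
speed_jump = track["sog_knots"].diff().abs()

track["ais_gap"] = gap_minutes > 30        # long silence between reports
track["speed_anomaly"] = speed_jump > 8    # implausible speed change
print(track[["ts", "sog_knots", "ais_gap", "speed_anomaly"]])
```
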
2020-01-27
Hibti, Meryem, Baïna, Karim, Benatallah, Boualem.  2019.  Towards Swarm Intelligence Architectural Patterns: an IoT-Big Data-AI-Blockchain convergence perspective. Proceedings of the 4th International Conference on Big Data and Internet of Things. :1–8.
The Internet of Things (IoT) is exploding. It is made up of billions of smart devices - from minuscule chips to mammoth machines - that use wireless technology to talk to each other (and to us). IoT infrastructures can vary from instrumented connected devices providing data externally to smart and autonomous systems. To accompany the data explosion resulting, among other sources, from IoT, big data analytics processes examine large data sets to uncover hidden patterns and unknown correlations between collected events, either at a very technical level (incident/anomaly detection, predictive maintenance) or at a business level (customer preferences, market trends, revenue opportunities), to provide improved operational efficiency, better customer service, competitive advantages over rival organizations, etc. In order to capitalize on the business value of the data generated by IoT sensors, IoT and Big Data Analytics/AI need to meet in the middle. One critical use case for IoT is to warn organizations when a product or service is at risk. The aim of this paper is to present a first proposal of IoT-Big Data-AI architectural pattern catalogues with a Blockchain implementation perspective, in search of design methodology artifacts.
2019-03-06
Calo, Seraphin, Verma, Dinesh, Chakraborty, Supriyo, Bertino, Elisa, Lupu, Emil, Cirincione, Gregory.  2018.  Self-Generation of Access Control Policies. Proceedings of the 23rd ACM Symposium on Access Control Models and Technologies. :39–47.

Access control for information has primarily focused on access statically granted to subjects by administrators usually in the context of a specific system. Even if mechanisms are available for access revocation, revocations must still be executed manually by an administrator. However, as physical devices become increasingly embedded and interconnected, access control needs to become an integral part of the resource being protected and be generated dynamically by resources depending on the context in which the resource is being used. In this paper, we discuss a set of scenarios for access control needed in current and future systems and use that to argue that an approach for resources to generate and manage their access control policies dynamically on their own is needed. We discuss some approaches for generating such access control policies that may address the requirements of the scenarios.

2019-01-31
Postnikoff, Brittany, Goldberg, Ian.  2018.  Robot Social Engineering: Attacking Human Factors with Non-Human Actors. Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. :313–314.

Social robots may make use of social abilities such as persuasion, commanding obedience, and lying. Meanwhile, the field of computer security and privacy has shown that these interpersonal skills can be applied by humans to perform social engineering attacks. Social engineering attacks are the deliberate application of manipulative social skills by an individual in an attempt to achieve a goal by convincing others to do or say things that may or may not be in their best interests. In our work we argue that robot social engineering attacks are already possible and that defenses should be developed to protect against these attacks. We do this by defining what a robot social engineer is, outlining how previous research has demonstrated robot social engineering, and discussing the risks that can accompany robot social engineering attacks.

Abou-Zahra, Shadi, Brewer, Judy, Cooper, Michael.  2018.  Artificial Intelligence (AI) for Web Accessibility: Is Conformance Evaluation a Way Forward? Proceedings of the Internet of Accessible Things. :20:1–20:4.

The term "artificial intelligence" is a buzzword today and is heavily used to market products, services, research, conferences, and more. It is scientifically disputed which types of products and services do actually qualify as "artificial intelligence" versus simply advanced computer technologies mimicking aspects of natural intelligence. Yet it is undisputed that, despite often inflationary use of the term, there are mainstream products and services today that for decades were only thought to be science fiction. They range from industrial automation, to self-driving cars, robotics, and consumer electronics for smart homes, workspaces, education, and many more contexts. Several technological advances enable what is commonly referred to as "artificial intelligence". It includes connected computers and the Internet of Things (IoT), open and big data, low cost computing and storage, and many more. Yet regardless of the definition of the term artificial intelligence, technological advancements in this area provide immense potential, especially for people with disabilities. In this paper we explore some of these potential in the context of web accessibility. We review some existing products and services, and their support for web accessibility. We propose accessibility conformance evaluation as one potential way forward, to accelerate the uptake of artificial intelligence, to improve web accessibility.

Bahirat, Paritosh, He, Yangyang, Menon, Abhilash, Knijnenburg, Bart.  2018.  A Data-Driven Approach to Developing IoT Privacy-Setting Interfaces. 23rd International Conference on Intelligent User Interfaces. :165–176.

User testing is often used to inform the development of user interfaces (UIs). But what if an interface needs to be developed for a system that does not yet exist? In that case, existing datasets can provide valuable input for UI development. We apply a data-driven approach to the development of a privacy-setting interface for Internet-of-Things (IoT) devices. Applying machine learning techniques to an existing dataset of users' sharing preferences in IoT scenarios, we develop a set of "smart" default profiles. Our resulting interface asks users to choose among these profiles, which capture their preferences with an accuracy of 82%—a 14% improvement over a naive default setting and a 12% improvement over a single smart default setting for all users.
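
A hedged sketch of deriving "smart" default profiles by clustering users' sharing decisions, in the spirit of the data-driven approach described above; the binary allow/deny matrix over five toy IoT scenarios and the choice of three clusters are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Rows: users; columns: allow (1) / deny (0) for five IoT sharing scenarios.
rng = np.random.default_rng(1)
prefs = (rng.random((200, 5)) > rng.random(5)).astype(float)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(prefs)

# Each candidate profile is the majority decision of its cluster per scenario.
profiles = (kmeans.cluster_centers_ > 0.5).astype(int)
print("candidate default profiles:\n", profiles)
```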

Bahirat, Paritosh, Sun, Qizhang, Knijnenburg, Bart P..  2018.  Scenario Context V/s Framing and Defaults in Managing Privacy in Household IoT. Proceedings of the 23rd International Conference on Intelligent User Interfaces Companion. :63:1–63:2.

The Internet of Things provides household device users with an ability to connect and manage numerous devices over a common platform. However, the sheer number of possible privacy settings creates issues such as choice overload. This article outlines a data-driven approach to understand how users make privacy decisions in household IoT scenarios. We demonstrate that users are not just influenced by the specifics of the IoT scenario, but also by aspects immaterial to the decision, such as the default setting and its framing.

Golbeck, Jennifer.  2018.  Surveillance or Support?: When Personalization Turns Creepy. 23rd International Conference on Intelligent User Interfaces. :5–5.

Personalization, recommendations, and user modeling can be powerful tools to improve people's experiences with technology and to help them find information. However, we also know that people underestimate how much of their personal information is used by our technology, and they generally do not understand how much algorithms can discover about them. Both privacy and ethical technology have issues of consent at their heart. While many personalization systems assume most users would consent to the way they employ personal data, research shows this is not necessarily the case. This talk will look at how to consider issues of privacy and consent when users cannot explicitly state their preferences, the "Creepy Factor," and how to balance users' concerns with the benefits personalized technology can offer.

Wagner, Alan R..  2018.  An Autonomous Architecture That Protects the Right to Privacy. Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. :330–334.

The advent and widespread adoption of wearable cameras and autonomous robots raises important issues related to privacy. The mobile cameras on these systems record and may re-transmit enormous amounts of video data that can then be used to identify, track, and characterize the behavior of the general populace. This paper presents a preliminary computational architecture designed to preserve specific types of privacy over a video stream by identifying categories of individuals, places, and things that require higher than normal privacy protection. The paper describes the architecture as a whole as well as preliminary results testing aspects of the system. Our intention is to implement and test the system on ground robots and small UAVs and to demonstrate that the system can provide selective low-level masking or deletion of data requiring higher privacy protection.
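
A hedged OpenCV sketch of the kind of low-level masking such an architecture might perform, using face detection and Gaussian blur as a stand-in for the paper's category identification; the file paths are illustrative assumptions.

```python
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Stand-in frame; in the target system this would come from the robot's
# or UAV's video stream.
frame = cv2.imread("frame.jpg")
if frame is None:                       # fall back to a blank test frame
    frame = np.zeros((480, 640, 3), dtype=np.uint8)

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    roi = frame[y:y + h, x:x + w]
    frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)  # mask the region

cv2.imwrite("frame_masked.jpg", frame)
```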

Manikonda, Lydia, Deotale, Aditya, Kambhampati, Subbarao.  2018.  What's Up with Privacy?: User Preferences and Privacy Concerns in Intelligent Personal Assistants. Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. :229–235.

The recent breakthroughs in Artificial Intelligence (AI) have allowed individuals to rely on automated systems for a variety of reasons. Some of these systems are the currently popular voice-enabled systems, such as Echo by Amazon and Home by Google, also called Intelligent Personal Assistants (IPAs). Though there are rising concerns about privacy and ethical implications, users of these IPAs seem to continue using them. We aim to investigate to what extent users are concerned about privacy and how they are handling these concerns while using IPAs. By utilizing reviews posted online along with responses to a survey, this paper provides a set of insights about the detected markers related to user interests and privacy challenges. The insights suggest that users of these systems, irrespective of their concerns about privacy, are generally positive about utilizing IPAs in their everyday lives. However, a significant percentage of users are concerned about privacy and take further actions to address related concerns. Some users expressed that they did not have any privacy concerns, but when they learned about the "always listening" feature of these devices, their concern about privacy increased.