Torkzadehmahani, Reihaneh, Kairouz, Peter, Paten, Benedict.
2019.
DP-CGAN: Differentially Private Synthetic Data and Label Generation. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :98—104.
Generative Adversarial Networks (GANs) are among the best-known models for generating synthetic data, including images, especially for research communities that cannot use the original sensitive datasets because they are not publicly accessible. One of the main challenges in this area is preserving the privacy of the individuals whose data is used to train the GAN models. To address this challenge, we introduce a Differentially Private Conditional GAN (DP-CGAN) training framework based on a new clipping and perturbation strategy, which improves the performance of the model while preserving the privacy of the training dataset. DP-CGAN generates both synthetic data and corresponding labels, and leverages the recently introduced Rényi differential privacy accountant to track the privacy budget spent. The experimental results show that DP-CGAN can generate visually and empirically promising results on the MNIST dataset with a single-digit epsilon parameter in differential privacy.
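As a generic illustration of the clipping-and-perturbation step used in differentially private training, the following Python sketch clips per-example gradients and adds Gaussian noise. Parameter names and values are illustrative; DP-CGAN's exact clipping strategy and the Rényi-accountant bookkeeping are not reproduced here.

```python
import math
import random

def clip_and_perturb(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, seed=0):
    """One DP-SGD-style gradient step: clip each per-example gradient to
    L2 norm clip_norm, sum the clipped gradients, then add Gaussian noise
    with standard deviation noise_multiplier * clip_norm."""
    rng = random.Random(seed)
    dim = len(per_example_grads[0])
    clipped_sum = [0.0] * dim
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / max(norm, 1e-12))  # shrink only if too long
        for i, x in enumerate(g):
            clipped_sum[i] += x * scale
    sigma = noise_multiplier * clip_norm
    return [x + rng.gauss(0.0, sigma) for x in clipped_sum]
```

The clipping bounds each individual's influence on the update, which is what lets the accountant convert the noise scale into a spent epsilon.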
Ramezanian, Sara, Niemi, Valtteri.
2019.
Privacy Preserving Cyberbullying Prevention with AI Methods in 5G Networks. 2019 25th Conference of Open Innovations Association (FRUCT). :265—271.
Children and teenagers who have been victims of bullying can suffer its psychological effects for a lifetime. With the rise of online social media, cyberbullying incidents have increased as well. In this paper we discuss how we can detect cyberbullying with AI techniques, using term frequency-inverse document frequency, labeling messages as benign or bully. We want our method of cyberbullying detection to be privacy-preserving, such that subscribers' benign messages are not revealed to the operator. Moreover, the operator labels subscribers as normal, bully, or victim, and utilizes policy control in 5G networks to protect victims of cyberbullying from harmful traffic.
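The term frequency-inverse document frequency weighting used for detection can be sketched in a few lines of Python (standard tf-idf formulation; the paper's preprocessing and privacy-preserving protocol are not shown):

```python
import math
from collections import Counter

def tfidf_vectors(messages):
    """Compute a tf-idf weight for every term in each tokenized message.

    tf is the term's frequency within the message; idf = log(N / df)
    down-weights terms that appear in many messages, so words shared by
    benign and bully messages alike carry little weight."""
    n = len(messages)
    df = Counter()
    for msg in messages:
        df.update(set(msg))
    idf = {term: math.log(n / df[term]) for term in df}
    return [
        {term: (count / len(msg)) * idf[term] for term, count in Counter(msg).items()}
        for msg in messages
    ]
```

These sparse vectors would then feed a classifier that assigns the benign/bully label.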
Chen, Huili, Cammarota, Rosario, Valencia, Felipe, Regazzoni, Francesco.
2019.
PlaidML-HE: Acceleration of Deep Learning Kernels to Compute on Encrypted Data. 2019 IEEE 37th International Conference on Computer Design (ICCD). :333—336.
Machine Learning as a Service (MLaaS) is becoming a popular practice where Service Consumers, e.g., end-users, send their data to a ML Service and receive the prediction outputs. However, the emerging usage of MLaaS has raised severe privacy concerns about users' proprietary data. Privacy-Preserving Machine Learning (PPML) techniques aim to incorporate cryptographic primitives such as Homomorphic Encryption (HE) and Multi-Party Computation (MPC) into ML services to address privacy concerns from a technology standpoint. Existing PPML solutions have not been widely adopted in practice due to their assumed high overhead and integration difficulty within various ML front-end frameworks as well as hardware backends. In this work, we propose PlaidML-HE, the first end-to-end HE compiler for PPML inference. Leveraging the capability of Domain-Specific Languages, PlaidML-HE enables automated generation of HE kernels across diverse types of devices. We evaluate the performance of PlaidML-HE on different ML kernels and demonstrate that PlaidML-HE greatly reduces the overhead of the HE primitive compared to the existing implementations.
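The additive homomorphism that such compiled kernels exploit can be illustrated with a toy textbook Paillier cryptosystem (deliberately tiny, insecure parameters for demonstration only; PlaidML-HE itself generates kernels for production HE libraries, not this scheme):

```python
import math
import random

# Toy Paillier keypair with tiny hard-coded primes (insecure, illustrative only).
P, Q = 17, 19
N = P * Q
N2 = N * N
G = N + 1
LAM = math.lcm(P - 1, Q - 1)

def _L(x):
    return (x - 1) // N

MU = pow(_L(pow(G, LAM, N2)), -1, N)  # modular inverse (Python 3.8+)

def encrypt(m, rng=None):
    """Paillier encryption: c = g^m * r^N mod N^2 for a random r coprime to N."""
    rng = rng or random.Random(0)
    while True:
        r = rng.randrange(2, N)
        if math.gcd(r, N) == 1:
            break
    return (pow(G, m, N2) * pow(r, N, N2)) % N2

def decrypt(c):
    return (_L(pow(c, LAM, N2)) * MU) % N

def add_encrypted(c1, c2):
    """Homomorphic addition: multiplying ciphertexts adds the plaintexts."""
    return (c1 * c2) % N2
```

A server can therefore sum (and, by repeated addition, scale) encrypted inputs without ever seeing them, which is the building block behind encrypted linear layers.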
Dilmaghani, Saharnaz, Brust, Matthias R., Danoy, Grégoire, Cassagnes, Natalia, Pecero, Johnatan, Bouvry, Pascal.
2019.
Privacy and Security of Big Data in AI Systems: A Research and Standards Perspective. 2019 IEEE International Conference on Big Data (Big Data). :5737—5743.
The huge volume, variety, and velocity of big data have empowered Machine Learning (ML) techniques and Artificial Intelligence (AI) systems. However, a vast portion of the data used to train AI systems is sensitive information. Hence, any vulnerability has a potentially disastrous impact on privacy and security. Nevertheless, the increased demand for high-quality AI from governments and companies requires the utilization of big data in these systems. Several studies have highlighted the threats of big data on different platforms and the countermeasures to reduce the risks caused by attacks. In this paper, we provide an overview of the existing threats that violate privacy and security within the AI/ML workflow, with big data as a primary driving force. We define an adversarial model to investigate the attacks. Additionally, we analyze and summarize the defense strategies and countermeasures against these attacks. Furthermore, due to the impact of AI systems on the market and on the vast majority of business sectors, we also investigate Standards Developing Organizations (SDOs) that are actively involved in providing guidelines to protect the privacy and ensure the security of big data and AI systems. Our far-reaching goal is to bridge the research and standardization frames to increase the consistency and efficiency of AI system development, guaranteeing customer satisfaction while transferring a high degree of trustworthiness.
Zhu, Tianqing, Yu, Philip S..
2019.
Applying Differential Privacy Mechanism in Artificial Intelligence. 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS). :1601—1609.
Artificial Intelligence (AI) has attracted a large amount of attention in recent years. However, several new problems, such as privacy violations, security issues, and reduced effectiveness, have been emerging. Differential privacy has several attractive properties that make it quite valuable for AI, such as privacy preservation, security, randomization, composition, and stability. Therefore, this paper presents differential privacy mechanisms for multi-agent systems, reinforcement learning, and knowledge transfer based on those properties, showing that current AI can benefit from differential privacy mechanisms. In addition, previous uses of differential privacy mechanisms in private machine learning, distributed machine learning, and fairness in models are discussed, suggesting several possible avenues for using differential privacy mechanisms in AI. The purpose of this paper is to deliver an initial idea of how to integrate AI with differential privacy mechanisms and to explore more possibilities to improve AI's performance.
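The basic Laplace mechanism that underlies many such designs can be sketched as follows (a textbook sketch, not code from the paper):

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a numeric query answer with epsilon-differential privacy by
    adding Laplace noise of scale sensitivity / epsilon (inverse-CDF sampling)."""
    rng = rng or random.Random(0)
    scale = sensitivity / epsilon
    u = rng.random() - 0.5          # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise
```

Smaller epsilon means larger noise and stronger privacy; composition then accounts for repeated releases.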
Mehta, Brijesh B., Gupta, Ruchika, Rao, Udai Pratap, Muthiyan, Mukesh.
2019.
A Scalable (α, k)-Anonymization Approach using MapReduce for Privacy Preserving Big Data Publishing. 2019 10th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1—6.
Big data is collected with different tools and from different sources, which may create privacy issues. Privacy-preserving data publishing approaches such as k-anonymity, l-diversity, and t-closeness are used for data de-identification, but because data is collected from multiple sources, the chance of re-identification is very high. Anonymizing large data is not a trivial task; hence, the scalability of privacy-preserving approaches has become a challenging research area, which researchers explore by proposing algorithms for scalable anonymization. We further found that in some scenarios efficient anonymization is not enough; timely anonymization is also required. Hence, to incorporate the velocity of data into the Scalable k-Anonymization (SKA) approach, we propose a novel approach, Scalable (α, k)-Anonymization (SAKA). Our proposed approach outperforms existing approaches in terms of information loss and running time. To the best of our knowledge, this is the first scalable anonymization approach proposed for the velocity of data.
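The (α, k)-anonymity condition that the approach scales up with MapReduce can be checked, on a single machine, with a short sketch (record field names here are illustrative):

```python
from collections import defaultdict

def satisfies_alpha_k(records, quasi_ids, sensitive_attr, alpha, k):
    """Check (alpha, k)-anonymity: every equivalence class induced by the
    quasi-identifiers contains at least k records, and no single sensitive
    value makes up more than an alpha fraction of any class."""
    classes = defaultdict(list)
    for rec in records:
        key = tuple(rec[q] for q in quasi_ids)
        classes[key].append(rec[sensitive_attr])
    for values in classes.values():
        if len(values) < k:
            return False
        if any(values.count(v) / len(values) > alpha for v in set(values)):
            return False
    return True
```

The α bound is what distinguishes this from plain k-anonymity: it also caps how confidently an attacker can infer the sensitive value within a class.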
Smith, Gary.
2019.
Artificial Intelligence and the Privacy Paradox of Opportunity, Big Data and The Digital Universe. 2019 International Conference on Computational Intelligence and Knowledge Economy (ICCIKE). :150—153.
Artificial Intelligence (AI) can and does use individuals' data to make predictions about their wants, their needs, the influences on them, and what they could do. The use of individuals' data naturally raises privacy concerns. This article focuses on AI and the privacy issue against the backdrop of the endless growth of the Digital Universe, where Big Data, AI, Data Analytics, and 5G technology live and grow in the Internet of Things (IoT).
Moriai, Shiho.
2019.
Privacy-Preserving Deep Learning via Additively Homomorphic Encryption. 2019 IEEE 26th Symposium on Computer Arithmetic (ARITH). :198—198.
We aim at creating a society where we can resolve various social challenges by incorporating the innovations of the fourth industrial revolution (e.g., IoT, big data, AI, robots, and the sharing economy) into every industry and social life. By doing so, the society of the future will be one in which new values and services are created continuously, making people's lives more comfortable and sustainable. This is Society 5.0, a super-smart society. Security and privacy are key issues to be addressed to realize Society 5.0, and privacy-preserving data analytics will play an important role. In this talk we show our recent work on privacy-preserving data analytics, such as privacy-preserving logistic regression and privacy-preserving deep learning. Finally, we show our ongoing research project under JST CREST “AI”. In this project we are developing privacy-preserving financial data analytics systems that can detect fraud with high security and accuracy. To validate the systems, we will perform demonstration tests with several financial institutions and solve the problems necessary for their implementation in the real world.
Nawaz, A., Gia, T. N., Queralta, J. Peña, Westerlund, T..
2019.
Edge AI and Blockchain for Privacy-Critical and Data-Sensitive Applications. 2019 Twelfth International Conference on Mobile Computing and Ubiquitous Network (ICMU). :1—2.
The edge and fog computing paradigms enable more responsive and smarter systems without relying on cloud servers for data processing and storage. This reduces network load as well as latency. Nonetheless, the addition of new layers in the network architecture increases the number of security vulnerabilities. In privacy-critical systems, the appearance of new vulnerabilities is more significant. To cope with this issue, we propose and implement an Ethereum Blockchain based architecture with edge artificial intelligence to analyze data at the edge of the network and keep track of the parties that access the results of the analysis, which are stored in distributed databases.
Liu, Bo, Xiong, Jian, Wu, Yiyan, Ding, Ming, Wu, Cynthia M..
2019.
Protecting Multimedia Privacy from Both Humans and AI. 2019 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB). :1—6.
With the development of artificial intelligence (AI), multimedia privacy issues have become more challenging than ever. AI-assisted malicious entities can steal private information from multimedia data more easily than humans can. Traditional multimedia privacy protection only considers the situation where humans are the adversaries, and is therefore ineffective against AI-assisted attackers. In this paper, we develop a new framework and new algorithms that can protect image privacy from both humans and AI. We combine the idea of adversarial image perturbation, which is effective against AI, with obfuscation techniques for human adversaries. Experiments show that our proposed methods work well against all types of attackers.
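The adversarial-perturbation half of such a framework is often instantiated with a fast-gradient-sign step; a minimal sketch, assuming the adversary model's per-pixel loss gradient is available:

```python
def fgsm_perturb(image, loss_grad, epsilon):
    """Fast gradient sign method: shift every pixel by epsilon in the
    direction that increases the recognizer's loss, clamping the result
    to the valid pixel range [0, 1]."""
    def sign(g):
        return 1.0 if g > 0 else (-1.0 if g < 0 else 0.0)
    return [min(1.0, max(0.0, x + epsilon * sign(g)))
            for x, g in zip(image, loss_grad)]
```

A small epsilon keeps the change nearly invisible to humans while degrading the AI recognizer; the human-facing obfuscation in the paper is a separate, complementary step.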
Lou, Xin, Tran, Cuong, Yau, David K.Y., Tan, Rui, Ng, Hongwei, Fu, Tom Zhengjia, Winslett, Marianne.
2019.
Learning-Based Time Delay Attack Characterization for Cyber-Physical Systems. 2019 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm). :1—6.
Cyber-physical systems (CPSes) rely on computing and control techniques to achieve system safety and reliability. However, recent attacks show that these techniques are vulnerable once cyber-attackers have bypassed air gaps. The attacks may cause service disruptions or even physical damage. This paper designs a built-in attack characterization scheme for one general type of cyber-attack in CPS, which we call the time delay attack, that delays the transmission of system control commands. We use recurrent neural networks in deep learning to estimate the delay values from the input trace. Specifically, to deal with long time-sequence data, we design the deep learning model using stacked bidirectional long short-term memory (LSTM) units. The proposed approach is tested using data generated from a power plant control system. The results show that the LSTM-based deep learning approach works well on data traces from three sensor measurements, i.e., temperature, pressure, and power generation, in the power plant control system. Moreover, we show that the proposed approach outperforms a baseline approach based on k-nearest neighbors.
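For a sense of what the learned model must estimate, a classical baseline recovers the delay by aligning the command and sensor traces at the shift of maximum correlation (a simple reference sketch, not the paper's LSTM method):

```python
def estimate_delay(command_trace, sensor_trace, max_delay):
    """Estimate the delay (in samples) of sensor_trace relative to
    command_trace by maximizing the dot-product over candidate shifts."""
    best_delay, best_score = 0, float("-inf")
    for d in range(max_delay + 1):
        score = sum(c * s for c, s in zip(command_trace, sensor_trace[d:]))
        if score > best_score:
            best_delay, best_score = d, score
    return best_delay
```

Such correlation baselines degrade when the plant dynamics distort the response, which is the regime where a sequence model like a stacked bidirectional LSTM can help.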
Safar, Jamie L., Tummala, Murali, McEachen, John C., Bollmann, Chad.
2019.
Modeling Worm Propagation and Insider Threat in Air-Gapped Network using Modified SEIQV Model. 2019 13th International Conference on Signal Processing and Communication Systems (ICSPCS). :1—6.
Computer worms pose a major threat to computer and communication networks due to the rapid speed at which they propagate. Biologically based epidemic models have been widely used to analyze the propagation of worms in computer networks. For an air-gapped network with an insider threat, we propose a modified Susceptible-Exposed-Infected-Quarantined-Vaccinated (SEIQV) model called the Susceptible-Exposed-Infected-Quarantined-Patched (SEIQP) model. We describe the assumptions that apply to this model, define a set of differential equations that characterize the system dynamics, and solve for the basic reproduction number. We then simulate and analyze the parameters controlled by the insider threat to determine where resources should be allocated to attain different objectives and results.
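The compartment structure of such a model can be integrated numerically with forward Euler; the following sketch uses generic SEIQP-style equations and illustrative parameter values, not the paper's exact model:

```python
def simulate_seiqp(beta=0.3, sigma=0.2, gamma=0.1, delta=0.05,
                   population=1000.0, steps=200, dt=1.0):
    """Forward-Euler integration of an illustrative SEIQP-style worm model.

    beta: contact/infection rate, sigma: activation rate (E -> I),
    gamma: quarantine rate (I -> Q), delta: patch rate (Q -> P).
    Starts with one infected host in an otherwise susceptible population."""
    S, E, I, Q, P = population - 1.0, 0.0, 1.0, 0.0, 0.0
    for _ in range(steps):
        new_exposed = beta * S * I / population
        dS = -new_exposed
        dE = new_exposed - sigma * E
        dI = sigma * E - gamma * I
        dQ = gamma * I - delta * Q
        dP = delta * Q
        S, E, I, Q, P = (S + dS * dt, E + dE * dt, I + dI * dt,
                         Q + dQ * dt, P + dP * dt)
    return S, E, I, Q, P
```

Because every outflow is another compartment's inflow, the population is conserved; sweeping the insider-controlled rates in such a simulation is how one studies where defensive resources matter most.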
Guri, Mordechai, Bykhovsky, Dima, Elovici, Yuval.
2019.
Brightness: Leaking Sensitive Data from Air-Gapped Workstations via Screen Brightness. 2019 12th CMI Conference on Cybersecurity and Privacy (CMI). :1—6.
Air-gapped computers are systems that are kept isolated from the Internet because they store or process sensitive information. In this paper, we introduce an optical covert channel in which an attacker can leak (or, exfiltrate) sensitive information from air-gapped computers through manipulations of the screen brightness. This covert channel is invisible and works even while the user is working on the computer. Malware on a compromised computer can obtain sensitive data (e.g., files, images, encryption keys and passwords) and modulate it within the screen brightness, invisible to users. The small changes in brightness are invisible to humans but can be recovered from video streams taken by cameras such as a local security camera, smartphone camera or webcam. We present related work and discuss the technical and scientific background of this covert channel. We examined the channel's boundaries under various parameters, with different types of computer and TV screens, and at several distances. We also tested different types of camera receivers to demonstrate the covert channel. Lastly, we present relevant countermeasures to this type of attack.
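The modulation idea can be illustrated with a minimal on-off-keying sketch (the base and delta values are illustrative; a real receiver must also handle framing, synchronization, and camera noise):

```python
def encode_brightness(bits, base=0.50, delta=0.01):
    """On-off keying: a '1' bit nudges the screen brightness up by delta,
    a '0' bit keeps it at the base level. With a small enough delta the
    change is invisible to users but measurable in camera footage."""
    return [base + delta if b == "1" else base for b in bits]

def decode_brightness(levels, base=0.50, delta=0.01):
    """Recover the bit string by thresholding the measured brightness."""
    threshold = base + delta / 2.0
    return "".join("1" if level > threshold else "0" for level in levels)
```

One brightness level per frame gives a bit rate bounded by the camera's frame rate, which is why the paper evaluates different receiver types and distances.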
Guri, Mordechai, Zadov, Boris, Bykhovsky, Dima, Elovici, Yuval.
2019.
CTRL-ALT-LED: Leaking Data from Air-Gapped Computers Via Keyboard LEDs. 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC). 1:801—810.
Using the keyboard LEDs to send data optically was proposed in 2002 by Loughry and Umphress [1] (Appendix A). In this paper we extensively explore this threat in the context of a modern cyber-attack with current hardware and optical equipment. In this type of attack, an advanced persistent threat (APT) uses the keyboard LEDs (Caps-Lock, Num-Lock and Scroll-Lock) to encode information and exfiltrate data from air-gapped computers optically. Notably, this exfiltration channel is not monitored by existing data leakage prevention (DLP) systems. We examine this attack and its boundaries for today's keyboards with USB controllers and sensitive optical sensors. We also introduce smartphone and smartwatch cameras as components of malicious insider and 'evil maid' attacks. We provide the necessary scientific background on optical communication and the characteristics of modern USB keyboards at the hardware and software level, and present a transmission protocol and modulation schemes. We implement the exfiltration malware, discuss its design and implementation issues, and evaluate it with different types of keyboards. We also test various receivers, including light sensors, remote cameras, 'extreme' cameras, security cameras, and smartphone cameras. Our experiments show that data can be leaked from air-gapped computers via the keyboard LEDs at a maximum bit rate of 3000 bit/sec per LED given a light sensor as a receiver, and more than 120 bit/sec if smartphones are used. The attack does not require any modification of the keyboard at the hardware or firmware level.
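A minimal sketch of the encoding side, packing three bits per frame onto the three LEDs (the transmission protocol and modulation schemes in the paper are more elaborate):

```python
# The three standard keyboard LEDs used as one 3-bit symbol per frame.
LEDS = ("CapsLock", "NumLock", "ScrollLock")

def bits_to_led_frames(bitstring):
    """Pack a bit string into frames of three bits, one bit per LED state
    (True = LED on). The string is zero-padded to a multiple of three."""
    padded = bitstring + "0" * (-len(bitstring) % 3)
    return [{led: padded[i + j] == "1" for j, led in enumerate(LEDS)}
            for i in range(0, len(padded), 3)]

def led_frames_to_bits(frames):
    """Invert the encoding: read the LED states back into a bit string."""
    return "".join("1" if frame[led] else "0" for frame in frames for led in LEDS)
```

Driving the LED states fast enough that users do not notice, while a light sensor or camera samples them, is what sets the achievable bit rates reported above.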
Davenport, Amanda, Shetty, Sachin.
2019.
Modeling Threat of Leaking Private Keys from Air-Gapped Blockchain Wallets. 2019 IEEE International Smart Cities Conference (ISC2). :9—13.
In this paper we consider the threat surface and security of air-gapped wallet schemes for permissioned blockchains, in preparation for a Markov-based mathematical model, and quantify the risk associated with private key leakage. We identify existing threats to the wallet scheme and existing work done to both attack and secure the scheme. We provide an overview of the proposed model and outline the justification for our methods. We conclude with the next steps in our remaining work and the overarching goals and motivation for our methods.
Guri, Mordechai.
2019.
HOTSPOT: Crossing the Air-Gap Between Isolated PCs and Nearby Smartphones Using Temperature. 2019 European Intelligence and Security Informatics Conference (EISIC). :94—100.
Air-gapped computers are hermetically isolated from the Internet to eliminate any means of information leakage. In this paper we present HOTSPOT - a new type of air-gap crossing technique. Signals can be sent secretly from air-gapped computers to nearby smartphones and then on to the Internet - in the form of thermal pings. The thermal signals are generated by the CPUs and GPUs and intercepted by a nearby smartphone. We examine this covert channel and discuss other work in the field of air-gap covert communication channels. We present the technical background and describe thermal sensing in modern smartphones. We implement a transmitter on the computer side and a receiver Android app on the smartphone side, and discuss the implementation details. We evaluate the covert channel and test it in a typical workplace. Our results show that it is possible to send covert signals from air-gapped PCs to an attacker on the Internet through thermal pings. We also propose countermeasures for this type of covert channel, which has thus far been overlooked.
Davenport, Amanda, Shetty, Sachin.
2019.
Air Gapped Wallet Schemes and Private Key Leakage in Permissioned Blockchain Platforms. 2019 IEEE International Conference on Blockchain (Blockchain). :541—545.
In this paper we consider the threat surface and security of air-gapped wallet schemes for permissioned blockchains, in preparation for a Markov-based mathematical model, and quantify the risk associated with private key leakage. We identify existing threats to the wallet scheme and existing work done to both attack and secure the scheme. We provide an overview of the proposed model and outline the justification for our methods. We conclude with the next steps in our remaining work and the overarching goals and motivation for our methods.
Zhu, Weijun, Liu, Yichen, Fan, Yongwen, Liu, Yang, Liu, Ruitong.
2019.
If Air-Gap Attacks Encounter the Mimic Defense. 2019 9th International Conference on Information Science and Technology (ICIST). :485—490.
Air-gap attacks and mimic defense are two emerging techniques in the fields of network attack and defense, respectively. However, a direct confrontation between them has not yet appeared in the real world. Who will be the winner if air-gap attacks encounter mimic defense? To this end, a preliminary analysis is conducted to explore the possible strategy space of the game according to the core principles of air-gap attacks and mimic defense. On this basis, an architecture model is proposed, which combines detectors for air-gap attacks with mimic defense devices. First, a Dynamic Heterogeneous Redundancy (DHR) structure is employed to guard against the malicious software of air-gap attacks. Second, detectors for air-gap attacks are used to detect signals sent by air-gap attackers' transmitters. Third, the proposed architecture model is obtained by organizing the DHR structure and the detectors for air-gap attacks with a logical relationship. The simulated experimental results preliminarily confirm the power of the new model.