Title | Preserving Privacy in Convolutional Neural Network: An ∊-tuple Differential Privacy Approach |
Publication Type | Conference Paper |
Year of Publication | 2019 |
Authors | Adesuyi, Tosin A., Kim, Byeong Man |
Conference Name | 2019 IEEE 2nd International Conference on Knowledge Innovation and Invention (ICKII) |
Date Published | July
Keywords | classification, cloud computing, CNN, CNN model, complex data features, composability, convolutional neural nets, convolutional neural network, data privacy, Deep Neural Network, deep neural networks, Differential privacy, financial data, Human Behavior, image recognition, learning (artificial intelligence), medical data, model buildup data, model inversion attack, privacy, privacy concern, privacy preserving model, pubcrawl, Resiliency, reusable output model, salient data features, Scalability, significant accuracy degradation, Training data, transfer learning, ϵ-tuple differential privacy approach |
Abstract | Recent breakthroughs in neural networks have led to the birth of the Convolutional Neural Network (CNN), which has been found to be very efficient, especially in the areas of image recognition and classification. This success is attributable to the availability of large datasets and the network's capability to learn salient and complex data features, which subsequently produce a reusable output model (Fth). The model Fth is often made available (e.g., on the cloud as-a-service) for others (clients) to train their data or perform transfer learning; however, an adversary can perpetrate a model inversion attack on Fth to recover training data, thereby compromising the sensitivity of the model buildup data. This is possible because CNN, as a variant of the deep neural network, memorizes most of its training data during learning. Consequently, this poses a privacy concern, especially when medical or financial data are used as model buildup data. Existing research that proffers privacy-preserving approaches suffers from significant accuracy degradation, and this has left privacy-preserving models on a theoretical desk. In this paper, we propose an ϵ-tuple differential privacy approach, based on neuron impact factor estimation, to preserve the privacy of a CNN model without significant accuracy degradation. We experimented with our approach on two large datasets, and the results show no significant accuracy degradation.
DOI | 10.1109/ICKII46306.2019.9042653 |
Citation Key | adesuyi_preserving_2019 |