Biblio
Filters: Author is Adesuyi, Tosin A.
Adesuyi, Tosin A. 2019. Preserving Privacy in Convolutional Neural Network: An ϵ-tuple Differential Privacy Approach. 2019 IEEE 2nd International Conference on Knowledge Innovation and Invention (ICKII). :570–573.
Recent breakthroughs in neural networks have led to the convolutional neural network (CNN), which has proven highly effective, particularly in image recognition and classification. This success is traceable to the availability of large datasets and the network's capability to learn salient and complex data features, subsequently producing a reusable output model (Fθ). The model Fθ is often made available (e.g., in the cloud, as-a-service) for clients to train on their own data or perform transfer learning; however, an adversary can mount a model inversion attack on Fθ to recover training data, thereby compromising the sensitive data used to build the model. This is possible because a CNN, as a variant of the deep neural network, memorizes much of its training data during learning. Consequently, this poses a privacy concern, especially when medical or financial data are used to build the model. Existing research that proffers privacy-preserving approaches, however, suffers from significant accuracy degradation, leaving privacy-preserving models largely theoretical. In this paper, we propose an ϵ-tuple differential privacy approach based on neuron impact factor estimation to preserve the privacy of a CNN model without significant accuracy degradation. We evaluate our approach on two large datasets, and the results show no significant accuracy degradation.
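The abstract does not spell out the mechanism, but a minimal sketch can illustrate the general idea of splitting an ϵ differential-privacy budget across neurons according to an estimated impact factor. Everything below is an assumption for illustration: the mean-|activation| impact heuristic, the per-neuron budget split, and the Laplace mechanism are standard DP building blocks, not the authors' published algorithm.

```python
# Illustrative sketch only: the paper's exact epsilon-tuple mechanism and
# neuron-impact-factor estimation are not detailed in the abstract, so the
# heuristics below are assumptions, not the authors' published algorithm.
import numpy as np

rng = np.random.default_rng(0)

def impact_factors(activations: np.ndarray) -> np.ndarray:
    """Estimate each neuron's impact as its normalized mean |activation|
    over a batch (assumed heuristic). activations: shape (batch, neurons)."""
    mean_abs = np.abs(activations).mean(axis=0)
    return mean_abs / (mean_abs.sum() + 1e-12)

def dp_noisy_activations(activations, epsilon=1.0, sensitivity=1.0):
    """Add per-neuron Laplace noise calibrated to a total epsilon budget.
    High-impact neurons get a larger share of the budget and thus less
    noise, one plausible way to limit accuracy degradation."""
    w = impact_factors(activations)
    eps_per_neuron = epsilon * w              # split the budget by impact
    scale = sensitivity / np.maximum(eps_per_neuron, 1e-6)  # Laplace b = Δf/ε
    noise = rng.laplace(0.0, scale, size=activations.shape)
    return activations + noise

# Toy usage: a batch of 8 examples with 4 neurons.
acts = rng.normal(size=(8, 4))
print(dp_noisy_activations(acts, epsilon=0.5))
```

Under sequential composition the per-neuron budgets sum back to the total ϵ, so releasing all noised neurons still respects the stated budget in this simplified setting.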