Biblio

Filters: Keyword is stochastic gradient descent algorithm
2021-01-11
Wang, J., Wang, A.  2020.  An Improved Collaborative Filtering Recommendation Algorithm Based on Differential Privacy. 2020 IEEE 11th International Conference on Software Engineering and Service Science (ICSESS). :310–315.
In this paper, a differential privacy protection method is applied to the matrix factorization approach used to solve the recommendation problem. For centralized recommendation scenarios, a collaborative filtering recommendation model based on matrix factorization is established, and a matrix factorization mechanism satisfying ε-differential privacy is proposed. First, the latent factor matrices of users and items are constructed. Second, noise is added by the method of objective (target) perturbation so that the differential privacy constraint is satisfied, yielding a noisy matrix factorization model. The parameters of the model are learned by the stochastic gradient descent algorithm. Finally, the differentially private matrix factorization model is used for score prediction. The effectiveness of the algorithm is evaluated on public datasets including MovieLens and Netflix. The experimental results show that, compared with existing typical recommendation methods, the new privacy-preserving matrix factorization method protects users' private information at the cost of only a limited loss of recommendation accuracy.
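The abstract's recipe lends itself to a compact illustration: latent factor matrices for users and items, noise injected to satisfy differential privacy, SGD for parameter fitting, and dot-product score prediction. The following Python/NumPy sketch follows that outline under loud assumptions: the function name, toy data, and all hyperparameters are invented for illustration, and Laplace noise on each gradient step stands in for the paper's objective (target) perturbation, whose exact mechanism and sensitivity calibration the abstract does not give.

```python
import numpy as np

def dp_matrix_factorization(ratings, n_factors=8, epochs=20,
                            lr=0.01, reg=0.05, epsilon=1.0, seed=0):
    """Sketch of differentially private matrix factorization via SGD.

    ratings : list of (user, item, rating) triples.
    epsilon : privacy budget; smaller values mean more Laplace noise.
    Illustrative stand-in, not the paper's implementation.
    """
    rng = np.random.default_rng(seed)
    n_users = 1 + max(u for u, _, _ in ratings)
    n_items = 1 + max(i for _, i, _ in ratings)
    # Latent factor matrices for users and items.
    P = rng.normal(0.0, 0.1, (n_users, n_factors))
    Q = rng.normal(0.0, 0.1, (n_items, n_factors))
    scale = 1.0 / epsilon  # Laplace scale; simplified, no sensitivity analysis.
    for _ in range(epochs):
        for u, i, r in ratings:
            p, q = P[u].copy(), Q[i].copy()
            err = r - p @ q
            # Laplace noise perturbs each gradient step, standing in for
            # the paper's objective (target) perturbation mechanism.
            P[u] += lr * (err * q - reg * p + rng.laplace(0.0, scale, n_factors))
            Q[i] += lr * (err * p - reg * q + rng.laplace(0.0, scale, n_factors))
    return P, Q

# Toy usage: fit on a few ratings, then predict an unseen user-item score.
data = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0)]
P, Q = dp_matrix_factorization(data, n_factors=4, epsilon=0.5)
print("predicted r(2, 0):", P[2] @ Q[0])
```

A faithful implementation would calibrate the noise scale from the sensitivity of the perturbed objective and account the total privacy budget ε across all SGD updates.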
2020-02-18
Nasr, Milad, Shokri, Reza, Houmansadr, Amir.  2019.  Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-Box Inference Attacks against Centralized and Federated Learning. 2019 IEEE Symposium on Security and Privacy (SP). :739–753.
Deep neural networks are susceptible to various inference attacks as they remember information about their training data. We design white-box inference attacks to perform a comprehensive privacy analysis of deep learning models. We measure the privacy leakage through parameters of fully trained models as well as the parameter updates of models during training. We design inference algorithms for both centralized and federated learning, with respect to passive and active inference attackers, and assuming different adversary prior knowledge. We evaluate our novel white-box membership inference attacks against deep learning algorithms to trace their training data records. We show that a straightforward extension of the known black-box attacks to the white-box setting (through analyzing the outputs of activation functions) is ineffective. We therefore design new algorithms tailored to the white-box setting by exploiting the privacy vulnerabilities of the stochastic gradient descent algorithm, which is the algorithm used to train deep neural networks. We investigate the reasons why deep learning models may leak information about their training data. We then show that even well-generalized models are significantly susceptible to white-box membership inference attacks, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset. We also show how adversarial participants, in the federated learning setting, can successfully run active membership inference attacks against other participants, even when the global model achieves high prediction accuracies.
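The attack surface the abstract pinpoints is SGD itself: training drives members' per-example gradients toward zero while non-members' gradients stay comparatively large. The sketch below illustrates that signal on a toy NumPy logistic model by thresholding per-example gradient norms; the synthetic data, model, and threshold rule are all illustrative assumptions, and the paper's actual white-box attacks instead train an inference model on layer-wise gradients and activations of a deep network.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(w, x, y):
    """Per-example gradient of the logistic loss w.r.t. parameters w."""
    z = np.clip(x @ w, -30.0, 30.0)  # clip logits to avoid exp overflow
    p = 1.0 / (1.0 + np.exp(-z))
    return (p - y) * x

# Synthetic data: first half used for training ("members"), rest held out.
X = rng.normal(size=(200, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(float)
members, nonmembers = (X[:100], y[:100]), (X[100:], y[100:])

# Train a logistic model with plain SGD on the member set only.
w = np.zeros(20)
for _ in range(100):
    for xi, yi in zip(*members):
        w -= 0.05 * grad(w, xi, yi)

# White-box feature: per-example gradient norm at the final parameters.
# SGD has driven members' losses (hence gradients) down, so members
# typically show smaller norms than non-members.
g_mem = [np.linalg.norm(grad(w, xi, yi)) for xi, yi in zip(*members)]
g_non = [np.linalg.norm(grad(w, xi, yi)) for xi, yi in zip(*nonmembers)]

# Thresholding the gradient norm gives a crude membership inference attack.
thresh = np.median(g_mem + g_non)
tpr = np.mean(np.array(g_mem) < thresh)  # members correctly flagged
fpr = np.mean(np.array(g_non) < thresh)  # non-members wrongly flagged
print(f"attack TPR={tpr:.2f}  FPR={fpr:.2f}")
```

In the federated setting the abstract describes, an active adversary can amplify this gap by crafting model updates that probe a target record, which is what makes the attacks effective even against well-generalized global models.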