Biblio

Filters: Keyword is perturbation  [Clear All Filters]
2022-12-20
Lin, Xuanwei, Dong, Chen, Liu, Ximeng, Zhang, Yuanyuan.  2022.  SPA: An Efficient Adversarial Attack on Spiking Neural Networks using Spike Probabilistic. 2022 22nd IEEE International Symposium on Cluster, Cloud and Internet Computing (CCGrid). :366–375.
With the coming 6G era, spiking neural networks (SNNs) can be powerful processing tools in various areas, such as biometric recognition, AI robotics, autonomous driving, and healthcare, thanks to their strong artificial intelligence (AI) processing capabilities. However, within Cyber-Physical Systems (CPS), SNNs are surprisingly vulnerable to adversarial examples generated from benign samples with human-imperceptible noise, which can lead to serious consequences such as face-recognition anomalies, loss of control in autonomous driving, and wrong medical diagnoses. Only by fully understanding the principles of adversarial attacks can we defend against them. Most existing adversarial attacks cause severe accuracy degradation in trained SNNs, but they generate adversarial samples only by randomly adding, deleting, and flipping spike trains, making the samples easy to identify by filters, or even by human eyes; their attack performance and speed can also be improved further. Hence, the Spike Probabilistic Attack (SPA) is presented in this paper, aiming to generate adversarial samples with smaller perturbations, greater model accuracy degradation, and faster iteration. SPA uses Poisson coding to generate spikes as probabilities, directly converting input data into spikes for faster speed and generating uniformly distributed perturbations for better attack performance. Moreover, an objective function is constructed to keep perturbations small while maintaining the attack success rate, and convergence is sped up by adjusting its parameters. Both white-box and black-box settings are used to evaluate the merits of SPA. Experimental results show that under white-box attack the model's accuracy decreases by 9.2%–31.1% more than under other attacks, and the average success rate is 74.87% in the black-box setting. These results indicate that SPA has better attack performance than other existing attacks in the white-box setting and better transferability in the black-box setting.
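The paper itself includes no code, but the Poisson-coding step the abstract describes (input intensities converted directly into spike trains, with each value acting as a per-step firing probability) can be sketched as follows. The function name and array shapes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of Poisson rate coding, the encoding step SPA builds on:
# a normalized input intensity p emits a spike at each time step with
# probability p. Names and shapes are assumptions for illustration.
import numpy as np

def poisson_encode(x, n_steps, rng=None):
    """Convert intensities in [0, 1] to a binary spike train of shape
    (n_steps, *x.shape), spiking with per-step probability x."""
    rng = np.random.default_rng() if rng is None else rng
    return (rng.random((n_steps, *x.shape)) < x).astype(np.uint8)

# Example: encode a toy 2x2 "image" over 100 time steps.
image = np.array([[0.1, 0.9], [0.5, 0.0]])
spikes = poisson_encode(image, n_steps=100)
print(spikes.mean(axis=0))  # empirical firing rates, roughly equal to `image`
```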
2022-04-20
Keshk, Marwa, Turnbull, Benjamin, Sitnikova, Elena, Vatsalan, Dinusha, Moustafa, Nour.  2021.  Privacy-Preserving Schemes for Safeguarding Heterogeneous Data Sources in Cyber-Physical Systems. IEEE Access. 9:55077–55097.
Cyber-Physical Systems (CPS) underpin global critical infrastructure, including power, water, and gas systems and smart grids. CPS, as a technology platform, is a unique target for Advanced Persistent Threats (APTs), given the potentially high impact of a successful breach. Additionally, CPSs are targets because they produce significant amounts of heterogeneous data from the multitude of devices and networks included in their architecture. It is, therefore, essential to develop efficient privacy-preserving techniques for safeguarding system data from cyber attacks. This paper presents a comprehensive review of current privacy-preserving techniques for protecting CPSs and their data from cyber attacks. The concepts of privacy preservation and CPSs are discussed, demonstrating the components of CPSs and the ways these systems can be exploited by either cyber or physical hacking scenarios. A classification of privacy-preservation techniques, including perturbation, authentication, machine learning (ML), cryptography, and blockchain, is then explained to illustrate how each would be employed for data privacy preservation. Finally, we show existing challenges, solutions, and future research directions for privacy preservation in CPSs.
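As a minimal, generic illustration of the perturbation class of techniques the survey covers (not an example from the paper itself), Laplace noise calibrated to a query's sensitivity in the differential-privacy style looks like this; the epsilon and sensitivity values are placeholders.

```python
# One concrete instance of the "perturbation" class of privacy techniques:
# Laplace noise with scale sensitivity/epsilon, as in differential privacy.
# The parameter values below are illustrative only.
import numpy as np

def laplace_perturb(value, sensitivity, epsilon, rng=None):
    """Return value plus Laplace noise with scale sensitivity/epsilon."""
    rng = np.random.default_rng() if rng is None else rng
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: release a perturbed aggregate reading from a smart-grid feed.
true_mean = 42.7  # aggregate computed over raw CPS sensor data (toy value)
noisy_mean = laplace_perturb(true_mean, sensitivity=1.0, epsilon=0.5)
print(noisy_mean)
```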
2021-07-08
Chaturvedi, Amit Kumar, Chahar, Meetendra Singh, Sharma, Kalpana.  2020.  Proposing Innovative Perturbation Algorithm for Securing Portable Data on Cloud Servers. 2020 9th International Conference System Modeling and Advancement in Research Trends (SMART). :360—364.
Cloud computing provides an open architecture and a resource-sharing computing platform with a pay-per-use model. It is now a popular computing platform, and most new internet-based computing services run on this innovation-supporting environment. We consider it innovation-supporting because developers can focus on service design rather than on arranging infrastructure, networking, resource management, and the like; all of these are available in cloud computing on a hired basis. A big question that arises here is the security and privacy of data, because the service provider itself uses infrastructure, network, storage, processors, and other resources from third parties. The security and privacy of the portable user's data is therefore the main motivation for this research paper, in which we propose an innovative perturbation algorithm, MAP(), to secure the portable user's data on the cloud server.
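The abstract does not describe the internals of MAP(), so the following is only a hypothetical sketch of the general idea of owner-reversible perturbation of numeric records before cloud upload; it should not be read as the authors' algorithm.

```python
# Hypothetical sketch: perturb numeric records before cloud upload and
# reverse the perturbation on retrieval. This is NOT the authors' MAP()
# algorithm, whose details the abstract does not give; it only illustrates
# the idea of owner-reversible perturbation keyed by a secret.
import numpy as np

def perturb(records, key, scale=10.0):
    rng = np.random.default_rng(key)          # secret key seeds the noise
    noise = rng.normal(0.0, scale, size=len(records))
    return records + noise

def recover(perturbed, key, scale=10.0):
    rng = np.random.default_rng(key)          # same key regenerates the noise
    noise = rng.normal(0.0, scale, size=len(perturbed))
    return perturbed - noise

salaries = np.array([52000.0, 61000.0, 48000.0])
stored = perturb(salaries, key=2024)          # what the cloud server sees
print(recover(stored, key=2024))              # owner recovers the originals
```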
2021-02-22
Fang, S., Kennedy, S., Wang, C., Wang, B., Pei, Q., Liu, X..  2020.  Sparser: Secure Nearest Neighbor Search with Space-filling Curves. IEEE INFOCOM 2020 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :370–375.
Nearest neighbor search, a classic way of identifying similar data, can be applied to various areas, including databases, machine learning, natural language processing, and software engineering. Secure nearest neighbor search aims to find the nearest neighbors to a given query point over encrypted data without accessing the data in plaintext. It provides privacy protection to datasets when nearest neighbor queries need to be operated by an untrusted party (e.g., a public server). While different solutions have been proposed to support nearest neighbor queries on encrypted data, these existing solutions still encounter critical drawbacks in either efficiency or privacy. In light of the limitations in the current literature, we propose a novel approximate nearest neighbor search solution, referred to as Sparser, that leverages a combination of space-filling curves, perturbation, and Order-Preserving Encryption. The advantages of Sparser are twofold: strengthening privacy and improving efficiency. Specifically, Sparser pre-processes plaintext data with space-filling curves and perturbation so that the data is sparse, which mitigates leakage-abuse attacks and yields stronger privacy. In addition to this privacy enhancement, Sparser can efficiently find approximate nearest neighbors over encrypted data in logarithmic time. Through extensive experiments over real-world datasets, we demonstrate that Sparser achieves strong privacy protection under leakage-abuse attacks and minimizes search time.
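Sparser's exact construction is not given in the abstract; as a rough sketch of the space-filling-curve step alone (omitting the perturbation and Order-Preserving Encryption layers), a Morton (Z-order) encoding reduces 2-D nearest neighbor search to a sorted 1-D lookup:

```python
# Minimal sketch of the space-filling-curve step behind approximate nearest
# neighbor search. Which curve Sparser uses is not stated in the abstract;
# a Morton (Z-order) curve is used here purely for illustration.
import bisect

def morton(x, y, bits=16):
    """Interleave the bits of integer coordinates x and y into one key."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return code

points = [(3, 7), (10, 2), (8, 8), (1, 1), (12, 13)]
keys = sorted((morton(x, y), (x, y)) for x, y in points)

def approx_nn(qx, qy):
    """Return the point whose curve position is closest to the query's."""
    q = morton(qx, qy)
    i = bisect.bisect_left(keys, (q, (qx, qy)))
    # Candidates are the curve neighbors on either side of the query key.
    cands = [keys[j][1] for j in (i - 1, i) if 0 <= j < len(keys)]
    return min(cands, key=lambda p: (p[0] - qx) ** 2 + (p[1] - qy) ** 2)

print(approx_nn(9, 9))  # (8, 8): nearby both on the plane and on the curve
```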
2020-04-20
Wang, Chong Xiao, Song, Yang, Tay, Wee Peng.  2018.  Preserving Parameter Privacy in Sensor Networks. 2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP). :1316–1320.
We consider the problem of preserving the privacy of a set of private parameters while allowing inference of a set of public parameters based on observations from sensors in a network. We assume that the public and private parameters are correlated with the sensor observations via a linear model. We define the utility loss and privacy gain functions based on the Cramér-Rao lower bounds for estimating the public and private parameters, respectively. Our goal is to minimize the utility loss while ensuring that the privacy gain is no less than a predefined privacy gain threshold, by allowing each sensor to perturb its own observation before sending it to the fusion center. We propose methods to determine the amount of noise each sensor needs to add to its observation under the cases where prior information is available or unavailable.
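The paper's utility-loss and privacy-gain functions are built on Cramér-Rao lower bounds; for a linear Gaussian model the bound is the inverse Fisher information, which a short sketch with toy values can make concrete. The matrices below are assumptions for illustration, not the paper's setup.

```python
# Sketch of the Cramér-Rao machinery for a linear Gaussian model
# y = H @ theta + noise, showing how sensor-side perturbation noise
# raises the estimation floor. All values below are toy assumptions.
import numpy as np

H = np.array([[1.0, 0.5],     # each row: how one sensor's observation
              [0.3, 1.0],     # depends on the two parameters
              [0.8, 0.2]])
sigma2 = np.array([0.1, 0.1, 0.1])   # per-sensor noise variance, unperturbed
extra = np.array([0.0, 0.4, 0.0])    # perturbation noise added by sensor 2

def crlb(H, noise_var):
    """CRLB = inverse Fisher information for a linear Gaussian model."""
    fim = H.T @ np.diag(1.0 / noise_var) @ H
    return np.linalg.inv(fim)

print(np.diag(crlb(H, sigma2)))          # variance floor without perturbation
print(np.diag(crlb(H, sigma2 + extra)))  # floor rises once noise is added
```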
2019-01-31
Nakamura, T., Nishi, H..  2018.  TMk-Anonymity: Perturbation-Based Data Anonymization Method for Improving Effectiveness of Secondary Use. IECON 2018 - 44th Annual Conference of the IEEE Industrial Electronics Society. :3138–3143.
The recent emergence of smartphones, cloud computing, and the Internet of Things has brought about an explosion of data creation. By collating and merging these enormous data with other information, information-based services become more sophisticated and advanced. At the same time, however, consideration of the privacy violations caused by such merging is indispensable. Various anonymization methods have been proposed to preserve privacy. Conventional perturbation-based anonymization methods for location data add comparatively large noise, which makes it difficult to utilize the data effectively for secondary use. In this research, to solve these problems, we first clarify the definition of privacy preservation and then propose TMk-anonymity according to that definition.
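TMk-anonymity itself is not specified in the abstract; the following generic sketch only illustrates the perturbation baseline being critiqued, namely that larger planar noise on location records pushes them further from the true trajectory and erodes secondary-use value. Coordinates and noise scales are toy values.

```python
# Generic sketch of perturbation-based location anonymization: add planar
# noise to each record. Larger noise gives stronger anonymity but degrades
# secondary use. This is not the TMk-anonymity construction itself.
import numpy as np

rng = np.random.default_rng(7)
locations = np.array([[35.6581, 139.7017],   # toy lat/lon trajectory
                      [35.6590, 139.7030],
                      [35.6602, 139.7044]])

def perturb_locations(points, scale_deg):
    """Add zero-mean Gaussian noise of the given scale (in degrees)."""
    return points + rng.normal(0.0, scale_deg, size=points.shape)

small = perturb_locations(locations, scale_deg=0.0005)  # roughly 50 m of noise
large = perturb_locations(locations, scale_deg=0.01)    # roughly 1 km of noise
print(np.abs(small - locations).max(), np.abs(large - locations).max())
```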

2018-07-06
Zhang, F., Chan, P. P. K., Tang, T. Q..  2015.  L-GEM based robust learning against poisoning attack. 2015 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR). :175–178.
A poisoning attack, in which an adversary misleads the learning process by manipulating its training set, significantly affects the performance of classifiers in security applications. This paper proposes a robust learning method that reduces the influence of attack samples on learning. The sensitivity, defined as the fluctuation of the output under a small perturbation of the input, in the Localized Generalization Error Model (L-GEM) is measured for each training sample. The classifier's output on attack samples may be sensitive and inaccurate, since these samples differ from the untainted samples. An importance score is assigned to each sample according to its localized generalization error bound, and the classifier is trained on a new training set obtained by resampling the samples according to their importance scores. An RBFNN is applied as the classifier in the experimental evaluation. The proposed model outperforms the traditional one under the well-known label-flip poisoning attacks, including nearest-first and farthest-first flip attacks.
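As a rough sketch of the sensitivity-driven resampling idea, with an assumed sensitivity proxy and an MLP standing in for the paper's RBFNN and its L-GEM bound, the pipeline might look like this:

```python
# Sketch of sensitivity-driven resampling: estimate each training sample's
# output fluctuation under small input perturbations, down-weight the most
# sensitive (likely poisoned) samples, resample, and retrain. The scoring
# rule is an assumption; the paper derives importance scores from the L-GEM
# bound and uses an RBFNN rather than the MLP below.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
y[:10] = 1 - y[:10]                      # simulate label-flip poisoning

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)

# Sensitivity proxy: variance of predicted probability under input noise.
probs = np.stack([clf.predict_proba(X + rng.normal(0, 0.1, X.shape))[:, 1]
                  for _ in range(20)])
sensitivity = probs.var(axis=0)

# Importance inversely related to sensitivity; resample and retrain.
weights = 1.0 / (sensitivity + 1e-6)
idx = rng.choice(len(X), size=len(X), replace=True, p=weights / weights.sum())
robust_clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                           random_state=0).fit(X[idx], y[idx])
```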