Biblio
Filters: Author is Calderon, Juan
A Generative Adversarial Approach for Sybil Attacks Recognition for Vehicular Crowdsensing. 2022 International Conference on Connected Vehicle and Expo (ICCVE). :1–7.
2022. Vehicular crowdsensing (VCS) is a subset of crowdsensing in which data collection is outsourced to a group of vehicles. Here, an entity interested in collecting data from a set of Places of Sensing Interest (PsI) advertises a set of sensing tasks and the associated rewards. Vehicles attracted by the offered rewards deviate from their ongoing trajectories to visit one or more PsI and collect data there. In this win-win scenario, vehicles reach their final destinations with an extra reward, and the entity obtains the desired samples. Unfortunately, the efficiency of VCS can be undermined by the Sybil attack, in which an attacker benefits from injecting false vehicle identities. In this paper, we present a case study and analyze the effects of such an attack. We also propose a defense mechanism based on generative adversarial networks (GANs). We discuss the advantages and drawbacks of GANs in the context of VCS, as well as new trends in GAN training that make them suitable for VCS.
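The abstract includes no implementation, but the core defense idea, training a GAN so that its discriminator learns the distribution of legitimate vehicle reports and can then flag Sybil-injected ones, can be sketched in Python as below. Everything in the sketch is an illustrative assumption: the fixed-length report encoding, the network sizes, and the looks_sybil helper are hypothetical, not the authors' design.

# Hypothetical sketch (not from the paper): a GAN discriminator,
# trained against legitimate vehicle reports, used to flag likely
# Sybil reports. Random vectors stand in for encoded reports
# (e.g., position traces and timestamps of a sensing task).
import torch
import torch.nn as nn

FEAT = 16   # assumed size of an encoded vehicle report
NOISE = 8   # generator latent size

G = nn.Sequential(nn.Linear(NOISE, 32), nn.ReLU(), nn.Linear(32, FEAT))
D = nn.Sequential(nn.Linear(FEAT, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, FEAT)        # placeholder for genuine reports
    fake = G(torch.randn(64, NOISE))

    # Discriminator: score real reports high, generated ones low.
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator: try to make fakes the discriminator accepts.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

def looks_sybil(report, threshold=0.5):
    # A low discriminator score means the report does not resemble
    # the legitimate-report distribution, i.e., it is suspect.
    with torch.no_grad():
        return torch.sigmoid(D(report)).item() < threshold

In practice the real batch would come from verified trajectories and the threshold would be tuned on a validation set; the newer GAN training trends the abstract mentions would replace this vanilla setup.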
Attribution Based Approach for Adversarial Example Generation. SoutheastCon 2021. :1–6.
2021. Neural networks with deep architectures have been used to construct state-of-the-art classifiers that can match human-level accuracy in areas such as image classification. However, many of these classifiers can be fooled by examples slightly modified from their original forms. In this work, we propose a novel approach for generating adversarial examples that uses only the attribution information of the features and perturbs only those features that are highly influential on the classifier's output. We call this approach Attribution Based Adversarial Generation (ABAG). To demonstrate its effectiveness, three algorithms are proposed and examined. In the first, all non-zero attributions are utilized and the associated features perturbed; in the second, only the top-n most positive and top-n most negative attributions are used and the corresponding features perturbed; and in the third, the level of perturbation is increased iteratively until an adversarial example is discovered. All three algorithms are implemented, and experiments are performed on the well-known MNIST dataset. The results show that adversarial examples can be generated very efficiently, demonstrating the validity and efficacy of ABAG: utilizing attributions for the generation of adversarial examples. Furthermore, as shown by examples, ABAG can be adapted to provide a systematic search procedure that generates adversarial examples by perturbing a minimal number of features.
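As a rough illustration of the second and third algorithms combined, the sketch below uses plain input gradients as the attribution signal (the abstract does not say which attribution method the paper uses), perturbs only the top-n most positive and most negative features, and raises the perturbation level step by step until the prediction flips. The stand-in classifier, step sizes, and function names are assumptions for illustration, not the authors' implementation.

# Illustrative sketch of ABAG's second and third variants (not the
# authors' code). Input gradients approximate the attributions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))  # stand-in MNIST classifier

def abag_attack(x, label, n=20, eps_step=0.05, max_iter=40):
    # Attribute via the gradient of the true-class logit w.r.t. the input.
    x = x.clone().requires_grad_(True)
    model(x)[0, label].backward()
    attr = x.grad.flatten()

    # Keep only the top-n most positive and top-n most negative attributions.
    pos = attr.topk(n).indices
    neg = (-attr).topk(n).indices
    delta = torch.zeros_like(attr)
    delta[pos] = -1.0   # push the class-supporting features down
    delta[neg] = 1.0    # and the class-opposing features up

    # Increase the perturbation level until the predicted label flips.
    for i in range(1, max_iter + 1):
        adv = (x.detach().flatten() + i * eps_step * delta).clamp(0, 1).view_as(x)
        if model(adv).argmax(dim=1).item() != label:
            return adv          # adversarial example found
    return None                 # no label flip within max_iter steps

adv = abag_attack(torch.rand(1, 1, 28, 28), label=3)

Because only 2n of the 784 pixels are ever touched, the same loop also behaves like the minimal-perturbation search the abstract says ABAG can be adapted into.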