Adversarial Deep Learning Models With Multiple Adversaries

Title: Adversarial Deep Learning Models With Multiple Adversaries
Publication Type: Conference Paper
Year of Publication: 2021
Authors: Janapriya, N., Anuradha, K., Srilakshmi, V.
Conference Name: 2021 Third International Conference on Inventive Research in Computing Applications (ICIRCA)
Keywords: adversarial learning, Adversarial Machine Learning, Adversary Models, Computational modeling, Deep Learning, game theory, Games, Human Behavior, Metrics, pubcrawl, Resiliency, Scalability, Semantics, Skeleton, Stochastic processes, supervised learning
Abstract: Adversarial machine learning algorithms address adversarial example generation, producing false input data capable of fooling any machine learning model. As the word implies, an "adversary" is an opponent. To strengthen machine learning models, this work discusses their weaknesses and how misclassification arises during the learning cycle. Existing approaches, such as crafting adversarial examples and devising robust ML algorithms, frequently ignore semantics and the overall skeleton of the ML pipeline. This research develops an adversarial learning algorithm that considers a coordinated representation of all features, with Convolutional Neural Networks (CNNs) specifically. The algorithm expresses minimal perturbations of the data distribution over positive and negative class labels such that the perturbed data are misclassified by the CNN. The results suggest a game-theoretic, evolutionary-computing approach that performs well in securing deep learning models against the exploitation of their weaknesses, reproduced as attack scenarios against multiple adversaries.
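The abstract's core idea of minimal perturbations that flip a classifier's predicted class label can be illustrated with the classic Fast Gradient Sign Method (FGSM). The sketch below is not the paper's algorithm; it uses a plain logistic-regression classifier (implemented with NumPy) as a stand-in for the paper's CNN, and all names (`fgsm_perturb`, `predict`) are illustrative:

```python
import numpy as np

def predict(x, w, b):
    """Hard class label of a logistic-regression classifier."""
    return int((x @ w + b) > 0)

def fgsm_perturb(x, w, b, y, eps):
    """Fast Gradient Sign Method: a minimal, bounded perturbation
    of the input that increases the classification loss.

    x: input feature vector
    w, b: classifier weights and bias (stand-in for a CNN)
    y: true label in {0, 1}
    eps: per-feature perturbation budget (L-infinity bound)
    """
    # Forward pass: predicted probability p = sigmoid(w.x + b).
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    # Gradient of the cross-entropy loss with respect to the input.
    grad_x = (p - y) * w
    # Step eps in the sign direction of the gradient: the smallest
    # L-infinity-bounded move that maximally increases the loss.
    return x + eps * np.sign(grad_x)

# Example: a correctly classified point is flipped by a small perturbation.
w = np.array([1.0, 1.0])
b = 0.0
x = np.array([0.1, 0.1])
x_adv = fgsm_perturb(x, w, b, y=1, eps=0.3)
print(predict(x, w, b), predict(x_adv, w, b))  # original vs. adversarial label
```

Each feature moves by at most `eps`, yet the predicted label changes, which is the misclassification behaviour the abstract describes; against a real CNN the same idea applies with the gradient taken through the network.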
DOI: 10.1109/ICIRCA51532.2021.9544889
Citation Key: janapriya_adversarial_2021