
Title: Creation of Adversarial Examples with Keeping High Visual Performance
Publication Type: Conference Paper
Year of Publication: 2019
Authors: Azakami, Tomoka, Shibata, Chihiro, Uda, Ryuya, Kinoshita, Toshiyuki
Conference Name: 2019 IEEE 2nd International Conference on Information and Computer Technologies (ICICT)
Keywords: adversarial examples, artificial, CAPTCHA, captchas, character images, character recognition, character string CAPTCHA, CNN, composability, convolutional neural nets, convolutional neural network, convolutional neural network (CNN), FGSM, high visual performance, Human Behavior, human readability, image classification, image recognition, image recognition technology, intelligence, learning (artificial intelligence), machine learning, Mathematical model, Neural networks, Perturbation methods, pubcrawl, Resistance, security, visualization
Abstract: The accuracy of image classification by convolutional neural networks now exceeds human ability and contributes to various fields. However, this improvement in image recognition technology is a serious blow to image-based security systems such as CAPTCHA. In particular, because character-string CAPTCHAs already add distortion and noise so that computers cannot read them, reduced human readability has become a problem. Adversarial examples are a technique for producing images that intentionally cause a machine-learning image classifier to misclassify. The best feature of this technique is that when humans compare the original image with the adversarial example, they cannot perceive any difference in appearance. However, adversarial examples created with the conventional FGSM cannot reliably cause misclassification in strongly nonlinear networks such as CNNs. Osadchy et al. researched applying adversarial examples to CAPTCHAs and attempted to make a CNN misclassify them, but they could not make the CNN misclassify character images. In this research, we propose a method that applies FGSM to character-string CAPTCHAs and causes a CNN to misclassify them.
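For reference, the core technique named in the abstract is the Fast Gradient Sign Method (FGSM), which perturbs an input x by epsilon * sign(grad_x J(theta, x, y)). The sketch below is a minimal, generic PyTorch illustration of that standard update, not the authors' CAPTCHA-specific method; the model, image, label, and epsilon names are placeholders assumed for the example.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon):
    """Standard FGSM update: x_adv = x + epsilon * sign(grad_x J(theta, x, y)).
    Placeholder sketch; the paper's CAPTCHA-specific adaptation is not shown."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # classification loss J(theta, x, y)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixels in range.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```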
DOI: 10.1109/INFOCT.2019.8710918
Citation Key: azakami_creation_2019