Audio CAPTCHA for Visually Impaired

Title: Audio CAPTCHA for Visually Impaired
Publication Type: Conference Paper
Year of Publication: 2021
Authors: Mathai, Angelo; Nirmal, Atharv; Chaudhari, Purva; Deshmukh, Vedant; Dhamdhere, Shantanu; Joglekar, Pushkar
Conference Name: 2021 International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME)
Date Published: October 2021
Keywords: arithmetic, Audio CAPTCHA, CAPTCHA, captchas, composability, Computers, Fourier transforms, Generative Adversarial Network (GAN), generative adversarial networks, Human Behavior, Mechatronics, music, pubcrawl, Short-time Fourier Transform (STFT), Visually Impaired
Abstract: Completely Automated Public Turing Tests (CAPTCHAs) have been used to differentiate between computers and humans for quite some time now. There are many varieties of CAPTCHAs: text-based, image-based, audio, video, arithmetic, etc. However, not all varieties are suitable for the visually impaired. As spambots and APIs have grown more accurate, CAPTCHA tests have been updated constantly to stay relevant, but the same has not happened with the audio CAPTCHA. An audio CAPTCHA intended for blind/visually impaired users does exist, but many of them find it difficult to solve. We propose an alternative to the existing system, which makes use of unique sound samples layered with music generated through GANs (Generative Adversarial Networks), along with noise and other layers of sound, to make it difficult to dissect. The user has to count the number of times the unique sound is heard in the sample and then input that number. Since there are no letters or numbers in the samples, speech-to-text bots/APIs cannot be used directly to decipher this system. Also, any user, regardless of their native language, can comfortably use it.
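As an aside, the counting-based scheme the abstract describes can be sketched in a few lines. The sketch below is illustrative only and is not the authors' implementation: it stands in a synthetic sine beep for the paper's unique sound sample and uniform noise for the GAN-generated music layer, overlays the beep a random number of times, and verifies the user's count against the stored ground truth. All function names are hypothetical.

```python
import math
import random
import struct
import wave

SAMPLE_RATE = 8000  # Hz; arbitrary choice for this sketch


def sine_beep(freq_hz=880.0, dur_s=0.2):
    """A short pure tone standing in for the 'unique sound' sample."""
    n = int(SAMPLE_RATE * dur_s)
    return [0.5 * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]


def make_challenge(total_s=5.0, rng=random):
    """Overlay the beep a random number of times onto low-level noise.

    Returns (samples, count); `count` is the ground truth the server
    stores and later compares against the user's answer. A real system
    would use GAN-generated music instead of noise and would ensure the
    occurrences are clearly separable by ear.
    """
    n = int(SAMPLE_RATE * total_s)
    audio = [rng.uniform(-0.05, 0.05) for _ in range(n)]  # noise bed
    beep = sine_beep()
    count = rng.randint(2, 6)
    for _ in range(count):
        start = rng.randrange(0, n - len(beep))
        for i, s in enumerate(beep):
            audio[start + i] += s
    return audio, count


def verify(user_answer, true_count):
    """Accept the response iff the user counted correctly."""
    return int(user_answer) == true_count


def write_wav(path, samples):
    """Serialize the challenge as 16-bit mono PCM for playback."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        w.writeframes(b"".join(
            struct.pack("<h", max(-32768, min(32767, int(s * 32767))))
            for s in samples))
```

Because the answer is a count rather than transcribed letters or digits, a speech-to-text API gains nothing by transcribing the audio, which is the core of the abstract's argument.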
DOI: 10.1109/ICECCME52200.2021.9590892
Citation Key: mathai_audio_2021