Biblio

Filters: Keyword is handicapped aids
2020-09-11
Shekhar, Heemany, Moh, Melody, Moh, Teng-Sheng.  2019.  Exploring Adversaries to Defend Audio CAPTCHA. 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA). :1155-1161.
CAPTCHA is a web-based authentication method used by websites to distinguish between humans (valid users) and bots (attackers). Audio captcha is an accessible captcha meant for visually impaired users, such as color-blind, blind, and near-sighted users. Firstly, this paper analyzes how secure current audio captchas are against attacks using machine learning (ML) and deep learning (DL) models. Each audio captcha is made up of five, seven, or ten random digits [0-9] spoken one after the other, along with varying background noise throughout the length of the audio. If the ML or DL model is able to correctly identify all spoken digits in the correct order of occurrence in a single audio captcha, we consider that captcha broken and the attack successful. Throughout the paper, accuracy refers to the attack model's success at breaking audio captchas; the higher the attack accuracy, the more insecure the audio captchas are. In our baseline experiments, we found that attack models could break audio captchas that had no background noise or medium background noise, with any number of spoken digits, with nearly 99% to 100% accuracy. Audio captchas with high background noise, by contrast, were relatively more secure, with an attack accuracy of 85%. Secondly, we propose that the concepts of adversarial-example algorithms can be used to create a new kind of audio captcha that is more resilient to attacks. We found that even after retraining the models on the new adversarial audio data, the attack accuracy remained as low as 25% to 36%. Lastly, we explore the benefits of creating adversarial audio captchas through different algorithms such as the Basic Iterative Method (BIM) and DeepFool. We found that as long as the attacker has less than a 45% sample of each kind of adversarial audio dataset, the defense will be successful at preventing attacks.
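For readers unfamiliar with the Basic Iterative Method named in the abstract, the following is a minimal Python sketch of how BIM-style adversarial audio could be generated. It assumes a differentiable PyTorch digit classifier (model) and uses illustrative values for the perturbation budget, step size, and iteration count; none of these details are taken from the paper.

    import torch
    import torch.nn.functional as F

    def bim_perturb(model, audio, label, eps=0.01, alpha=0.002, steps=10):
        # Basic Iterative Method: repeatedly nudge the waveform along the sign of the
        # loss gradient while keeping the total perturbation inside an L-infinity
        # ball of radius eps around the original audio.
        x_adv = audio.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), label)
            grad, = torch.autograd.grad(loss, x_adv)
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = audio + torch.clamp(x_adv - audio, -eps, eps)  # project back into the eps-ball
        return x_adv.detach()

DeepFool, the other algorithm mentioned, instead moves the input iteratively toward the nearest decision boundary rather than following the sign of the loss gradient.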
2020-06-04
Gupta, Avinash, Cecil, J., Tapia, Oscar, Sweet-Darter, Mary.  2019.  Design of Cyber-Human Frameworks for Immersive Learning. 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC). :1563-1568.

This paper focuses on the creation of information-centric Cyber-Human Learning Frameworks involving Virtual Reality-based mediums. A generalized framework is proposed and adapted for two educational domains: one supporting the education and training of residents in orthopedic surgery, and the other focusing on science learning for children with autism. Users, experts, and technology-based mediums play a key role in the design of such a Cyber-Human framework. Virtual Reality-based immersive and haptic mediums were two of the technologies explored in the implementation of the framework for these learning domains. The proposed framework emphasizes Information-Centric Systems Engineering (ICSE) principles, which stress a user-centric approach along with a formalized understanding of the target subjects or processes for which the learning environments are being created.

2019-11-26
Shukla, Anjali, Rakshit, Arnab, Konar, Amit, Ghosh, Lidia, Nagar, Atulya K..  2018.  Decoding of Mind-Generated Pattern Locks for Security Checking Using Type-2 Fuzzy Classifier. 2018 IEEE Symposium Series on Computational Intelligence (SSCI). :1976-1981.

Brain-Computer Interface (BCI) aims at providing a better quality of life to people suffering from neuromuscular disability. This paper establishes a BCI paradigm that provides a biometric security option, used for locking and unlocking personal computers or mobile phones. Although it is primarily meant for people with neurological disorders, its application can safely be extended to the use of normal people. The proposed scheme decodes the electroencephalogram signals generated by the brains of the subjects while they are engaged in selecting a sequence of dots in a (6×6) 2-dimensional array representing a pattern lock. The subject, while selecting the right dot in a row, yields a P300 signal, which is later decoded by the brain-computer interface system to understand the subject's intention. If the right dots in all six rows are correctly selected, the subject yields P300 signals six times, which, on being decoded by the BCI system, allow the subject to access the system. Because of intra-subject variation in the amplitude and wave shape of the P300 signal, a type-2 fuzzy classifier has been employed to classify the presence/absence of the P300 signal in the desired window. A comparison of the performance of the proposed classifier with that of others is also included. The functionality of the proposed system has been validated using the training instances generated for 30 subjects. Experimental results confirm that the classification accuracy for the present scheme is above 90% irrespective of subject.
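To make the dot-selection step concrete, the following is a toy Python sketch of how P300 epochs could be scored to pick the intended dot in a row. The paper uses an interval type-2 fuzzy classifier; here a generic scikit-learn classifier (linear discriminant analysis) stands in purely to illustrate the selection logic, and the sampling rate, post-stimulus window, and array shapes are assumptions rather than details from the paper.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Hypothetical epoching parameters: 256 Hz sampling, P300 window 300-500 ms post-stimulus.
    FS = 256
    WIN = slice(int(0.3 * FS), int(0.5 * FS))

    def unlock_row(eeg_epochs, clf):
        # eeg_epochs: array of shape (6 candidate dots, samples, channels), one epoch
        # per flashed dot. Returns the index of the dot whose epoch looks most P300-like.
        feats = eeg_epochs[:, WIN, :].reshape(len(eeg_epochs), -1)  # flatten the post-stimulus window
        scores = clf.decision_function(feats)                       # higher score = more P300-like
        return int(np.argmax(scores))

    # Training sketch: X has shape (n_epochs, samples, channels), y is 0 (non-target) or 1 (P300).
    # clf = LinearDiscriminantAnalysis().fit(X[:, WIN, :].reshape(len(X), -1), y)

Repeating this once per row yields the six selections needed to reconstruct the full pattern lock.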

2018-06-07
Balaji, V., Kuppusamy, K. S..  2017.  Towards accessible mobile pattern authentication for persons with visual impairments. 2017 International Conference on Computational Intelligence in Data Science (ICCIDS). :1-5.

Security in smartphones has become a major concern with the prolific growth in their usage. Many applications are available for Android users to protect their applications and data, but these security applications are not easily accessible to persons with disabilities. For persons with color blindness, authentication mechanisms pose user-interface-related issues: color-blind users find inaccessible and complex interface designs difficult to access and interpret when using mobile locks. This paper focuses on a novel method for providing a color- and touch-sensitivity-based dot pattern lock. This model automatically replaces the existing display style of a pattern lock with a new, user-preferred color combination. In addition, Pressure Gradient Input (PGI) has been incorporated to enhance authentication strength. The feedback collected from users shows that this accessible security application is easy to use without any major access barriers.
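The abstract does not detail how pressure readings are matched, so the following Python sketch is only a hypothetical illustration of a pressure-augmented pattern check: each enrolled dot is assumed to store a normalized pressure level, and an attempt succeeds only if the dot sequence matches exactly and every pressure falls within a tolerance band. The names, tolerance value, and matching rule are illustrative, not taken from the paper.

    def verify_pattern(attempt, enrolled, tol=0.15):
        # attempt / enrolled: sequences of (dot_index, normalized_pressure) pairs.
        # The dot sequence must match exactly; each pressure must fall within +/- tol
        # of the enrolled value.
        if len(attempt) != len(enrolled):
            return False
        return all(d_a == d_e and abs(p_a - p_e) <= tol
                   for (d_a, p_a), (d_e, p_e) in zip(attempt, enrolled))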

2015-05-04
Ghatak, S., Lodh, A., Saha, E., Goyal, A., Das, A., Dutta, S..  2014.  Development of a keyboardless social networking website for visually impaired: SocialWeb. Global Humanitarian Technology Conference - South Asia Satellite (GHTC-SAS), 2014 IEEE. :232-236.

Over the past decade, we have witnessed a huge upsurge in social networking, which continues to touch and transform our lives to the present day. Social networks help us communicate with acquaintances and friends with whom we share similar interests on a common platform. Globally, there are more than 200 million visually impaired people. Visual impairment has many issues associated with it, but the one that stands out is the lack of accessible content for entertainment and for socializing safely. This paper deals with the development of a keyboardless social networking website for the visually impaired. The term keyboardless signifies minimal use of the keyboard: the user explores the contents of the website using assistive technologies such as screen readers and speech-to-text (STT) conversion, which in turn provides a user-friendly experience for the target audience. As soon as a user with minimal computer proficiency opens the website, he or she identifies the username and password fields with the help of a screen reader. The user speaks the username, and with the help of STT conversion (using the Web Speech API) the username is entered. Control then moves to the password field and, similarly, the user's password is obtained and matched against the one saved in the website database. The concept of acoustic fingerprinting has been implemented to successfully validate the passwords of registered users and foil the intentions of malicious attackers. On successful matching of the passwords, the user is able to enjoy the services of the website without any further hassle. Once the access obstacles associated with social networking sites are resolved and proper technologies are put in place, social networking sites can be a rewarding, fulfilling, and enjoyable experience for visually impaired people.
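The abstract does not specify which acoustic fingerprinting scheme is used, so the following Python sketch shows only one common, simplified approach: record the strongest frequency bin in a few coarse spectrogram bands of each frame and hash the resulting peak sequence. The sample rate, band count, and FFT parameters are assumptions, and a real system would compare fingerprints with some tolerance rather than requiring an exact hash match.

    import hashlib
    import numpy as np
    from scipy.signal import spectrogram

    def acoustic_fingerprint(samples, fs=16000, n_bands=8):
        # Toy constellation-style fingerprint: for every spectrogram frame, keep the
        # strongest frequency bin in each coarse band, then hash the peak sequence.
        f, t, S = spectrogram(samples, fs=fs, nperseg=512, noverlap=256)
        band_edges = np.linspace(0, len(f), n_bands + 1, dtype=int)
        peaks = []
        for frame in S.T:                                   # one spectrum per time frame
            for lo, hi in zip(band_edges[:-1], band_edges[1:]):
                peaks.append(int(frame[lo:hi].argmax()) + lo)
        return hashlib.sha256(np.asarray(peaks, dtype=np.int16).tobytes()).hexdigest()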