Biblio

Filters: Keyword is hand gestures
2021-03-29
John, A., MC, A., Ajayan, A. S., Sanoop, S., Kumar, V. R.  2020.  Real-Time Facial Emotion Recognition System With Improved Preprocessing and Feature Extraction. 2020 Third International Conference on Smart Systems and Inventive Technology (ICSSIT). :1328–1333.

Human emotion recognition plays a vital role in interpersonal communication and human-machine interaction. Emotions are expressed through speech, hand gestures, the movements of other body parts, and facial expressions. Facial emotions are among the most important factors in human communication, helping us understand what the other person is trying to convey. People take in only one-third of a message verbally; the remaining two-thirds is communicated through non-verbal means. Many facial emotion recognition (FER) systems exist today, but they do not perform efficiently in real-life scenarios, even though many claim to be near-perfect systems that achieve their results under favourable, optimal conditions. The wide variety of expressions people display and the diversity of facial features across individuals make it difficult to devise a system that works definitively. Hence, developing a reliable system free of the flaws shown by existing systems is a challenging task. This paper aims to build an enhanced system that can analyse a user's exact facial expression at a particular moment and generate the corresponding emotion. The JAFFE and FER2013 datasets were used for performance analysis. Pre-processing methods such as facial landmark detection and histogram of oriented gradients (HOG) feature extraction were incorporated into a convolutional neural network (CNN), which achieved good accuracy compared with existing models.
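
The abstract names the pipeline (HOG preprocessing feeding a CNN) but gives no implementation details. As a rough, hedged sketch of that kind of approach, the fragment below stacks a HOG rendering with the raw 48×48 grayscale face crop (FER2013-style) as a two-channel CNN input; the channel layout, network shape, and all names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: HOG-preprocessed input for a small FER CNN.
# Assumes 48x48 grayscale face crops in [0, 1]; all names are illustrative.
import numpy as np
from skimage.feature import hog
import tensorflow as tf

def preprocess(face48):
    # HOG rendering of the face, used here as a second input channel.
    _, hog_img = hog(face48, orientations=9, pixels_per_cell=(8, 8),
                     cells_per_block=(2, 2), visualize=True)
    return np.stack([face48, hog_img / (hog_img.max() + 1e-8)], axis=-1)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(48, 48, 2)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(7, activation="softmax"),  # 7 FER2013 emotion classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```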

2020-06-04
Asiri, Somayah, Alzahrani, Ahmad A.  2019.  The Effectiveness of Mixed Reality Environment-Based Hand Gestures in Distributed Collaboration. 2019 2nd International Conference on Computer Applications & Information Security (ICCAIS). :1–6.

Mixed reality (MR) technologies are widely used in distributed collaborative learning scenarios and have made learning and training more flexible and intuitive. However, there are many challenges in the use of MR due to the difficulty in creating a physical presence, particularly when a physical task is being performed collaboratively. We therefore developed a novel MR system to overcomes these limitations and enhance the distributed collaboration user experience. The primary objective of this paper is to explore the potential of a MR-based hand gestures system to enhance the conceptual architecture of MR in terms of both visualization and interaction in distributed collaboration. We propose a synchronous prototype named MRCollab as an immersive collaborative approach that allows two or more users to communicate with a peer based on the integration of several technologies such as video, audio, and hand gestures.
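
The abstract does not describe MRCollab's internals, but one minimal way to picture the synchronous hand-gesture channel it mentions is a small serializable event that each peer broadcasts alongside its video and audio streams. Everything below (field names, gesture labels, coordinate frame) is a hypothetical sketch, not the authors' protocol.

```python
# Illustrative sketch only: the paper does not specify MRCollab's wire format.
# A minimal synchronous hand-gesture event that two or more peers could exchange.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class GestureEvent:
    user_id: str
    gesture: str            # e.g. "point", "grab" (hypothetical labels)
    palm_position: tuple    # (x, y, z) in the shared MR coordinate frame
    timestamp: float

def encode(event: GestureEvent) -> bytes:
    # JSON keeps the event readable; a binary format would cut latency.
    return json.dumps(asdict(event)).encode("utf-8")

msg = encode(GestureEvent("alice", "point", (0.1, 1.3, 0.4), time.time()))
# msg would be broadcast to each peer alongside the video and audio streams.
```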

2017-09-27
Yokota, Tomohiro, Hashida, Tomoko.  2016.  Hand Gesture and On-body Touch Recognition by Active Acoustic Sensing Throughout the Human Body. Proceedings of the 29th Annual Symposium on User Interface Software and Technology. :113–115.
In this paper, we present a novel acoustic sensing technique that recognizes two convenient input actions: hand gestures and on-body touch. We achieve this by observing the frequency spectrum of waves propagated through the body, measured around the periphery of the wrist. Our approach recognizes hand gestures and on-body touches concurrently in real time, and combining the two is expected to yield rich input variations. We conducted a user study that showed classification accuracies of 97%, 96%, and 97% for hand gestures, touches on the forearm, and touches on the back of the hand, respectively.
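
The abstract identifies the core signal, the frequency spectrum of a wave propagated through the body, without giving code. A minimal sketch of that classification step might look like the following, where the sampling rate, FFT size, and SVM classifier are all assumptions rather than the authors' design.

```python
# Hedged sketch of the classification step: gestures and touch locations are
# recognized from the spectrum of the received signal; parameters are assumed.
import numpy as np
from sklearn.svm import SVC

FS = 48_000  # assumed audio sampling rate (Hz)

def spectrum_features(received, n_fft=2048):
    # Magnitude spectrum of the received signal; changes in body contact and
    # hand pose attenuate different bands, which the classifier picks up.
    mag = np.abs(np.fft.rfft(received, n=n_fft))
    return mag / (mag.max() + 1e-12)  # normalize out playback volume

# X: one spectrum per labeled recording, y: gesture / touch-location labels
# clf = SVC(kernel="rbf").fit(np.stack([spectrum_features(r) for r in recs]), y)
```
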
2017-03-07
Ruan, Wenjie, Sheng, Quan Z., Yang, Lei, Gu, Tao, Xu, Peipei, Shangguan, Longfei.  2016.  AudioGest: Enabling Fine-grained Hand Gesture Detection by Decoding Echo Signal. Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing. :474–485.
Hand gestures are becoming an increasingly popular means of interacting with consumer electronic devices such as mobile phones, tablets, and laptops. In this paper, we present AudioGest, a device-free gesture recognition system that can accurately sense in-air hand movement around a user's device. Compared to the state of the art, AudioGest is superior in that it uses only a single pair of built-in speaker and microphone, with no extra hardware or infrastructure support and no training, to achieve fine-grained hand detection. Our system can accurately recognize various hand gestures and estimate the hand's in-air time as well as its average moving speed and waving range. We achieve this by transforming the device into an active sonar system that transmits an inaudible audio signal and decodes the hand's echoes at the microphone. We address various challenges, including cleaning the noisy reflected sound signal, interpreting the echo spectrogram into hand gestures, decoding the Doppler frequency shifts into the hand's waving speed and range, and remaining robust to environmental motion and signal drift. We implement a proof-of-concept prototype on three different electronic devices and extensively evaluate the system in four real-world scenarios using 3,900 hand gestures collected by five users over more than two weeks. Our results show that AudioGest can detect six hand gestures with an accuracy of up to 96%, and by distinguishing gesture attributes, it can provide up to 162 control commands for various applications.
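
The Doppler-decoding step described in the abstract admits a compact illustration: the speed of a hand moving toward or away from the device follows from the frequency shift of its echo as v = Δf·c/(2·f0). The sketch below applies that relation to a single microphone frame; the carrier frequency, window length, and search band are illustrative assumptions, not AudioGest's actual parameters.

```python
# Hedged sketch of Doppler decoding for an active-sonar gesture sensor.
# An inaudible tone at F0 is emitted; the hand's echo is shifted by delta_f,
# giving its speed via v = delta_f * c / (2 * F0). Parameters are assumed.
import numpy as np

FS = 48_000   # sampling rate (Hz)
F0 = 20_000   # assumed inaudible carrier (Hz)
C = 343.0     # speed of sound (m/s)

def hand_speed(mic_frame):
    """Estimate hand speed from one microphone frame (e.g. 4096 samples)."""
    win = mic_frame * np.hanning(len(mic_frame))
    spec = np.abs(np.fft.rfft(win))
    freqs = np.fft.rfftfreq(len(win), d=1.0 / FS)
    # Search near the carrier, excluding the strong direct-path peak at F0.
    band = (freqs > F0 - 500) & (freqs < F0 + 500) & (np.abs(freqs - F0) > 20)
    peak = freqs[band][np.argmax(spec[band])]
    delta_f = peak - F0
    return delta_f * C / (2 * F0)  # signed speed: + toward, - away from device
```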