Bibliography
Human emotion recognition plays a vital role in interpersonal communication and human-machine interaction. Emotions are expressed through speech, hand gestures, the movements of other body parts, and facial expressions. Facial expressions are among the most important cues in human communication, helping us understand what the other person is trying to convey: roughly one-third of a message is understood verbally, while the remaining two-thirds is conveyed through non-verbal means. Many facial emotion recognition (FER) systems exist today, but they do not perform efficiently in real-life scenarios; systems that claim near-perfect performance typically achieve those results only under favourable, controlled conditions. The wide variety of expressions people show and the diversity of facial features across individuals make it difficult to build a system that works reliably for everyone, so developing a system free of the flaws shown by existing ones remains a challenging task. This paper aims to build an enhanced system that analyses a user's facial expression at a given moment and generates the corresponding emotion. The JAFFE and FER2013 datasets were used for performance analysis. Pre-processing methods such as facial landmark detection and the histogram of oriented gradients (HOG) were incorporated into a convolutional neural network (CNN), and the resulting model achieved good accuracy compared with existing models.
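The abstract does not detail how the HOG features are combined with the CNN, so the following is only a minimal sketch, assuming 48x48 grayscale FER2013-style face crops, 7 emotion classes, and a two-branch model in which raw pixels feed a small CNN and a precomputed HOG descriptor feeds a dense branch before fusion. All layer sizes and HOG parameters are illustrative assumptions, not values reported in the paper.

```python
# Minimal sketch (NOT the authors' implementation): fuse a CNN over raw
# 48x48 face crops with a dense branch over HOG features for 7-way
# emotion classification on FER2013-style data.
import numpy as np
from skimage.feature import hog
from tensorflow.keras import layers, Model

NUM_CLASSES = 7  # FER2013 emotion categories

def hog_descriptor(face_48x48):
    """Compute a HOG feature vector for a single grayscale face crop."""
    return hog(face_48x48, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def build_model(hog_dim):
    # CNN branch over the raw grayscale image.
    img_in = layers.Input(shape=(48, 48, 1))
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(img_in)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)

    # Dense branch over the precomputed HOG descriptor.
    hog_in = layers.Input(shape=(hog_dim,))
    h = layers.Dense(128, activation="relu")(hog_in)

    # Fuse both branches and classify into the 7 emotions.
    merged = layers.Concatenate()([x, h])
    merged = layers.Dense(128, activation="relu")(merged)
    out = layers.Dense(NUM_CLASSES, activation="softmax")(merged)

    model = Model([img_in, hog_in], out)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Example with a random face crop, just to show the expected shapes.
face = np.random.rand(48, 48).astype("float32")
hog_vec = hog_descriptor(face)
model = build_model(hog_dim=hog_vec.shape[0])
model.summary()
```

In this sketch the facial landmark step mentioned in the abstract would sit upstream, cropping and aligning the face before the 48x48 resize; it is omitted here to keep the example self-contained.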
Mixed reality (MR) technologies are widely used in distributed collaborative learning scenarios and have made learning and training more flexible and intuitive. However, MR still faces many challenges, chiefly the difficulty of creating a sense of physical presence, particularly when a physical task is performed collaboratively. We therefore developed a novel MR system that overcomes these limitations and enhances the user experience in distributed collaboration. The primary objective of this paper is to explore the potential of an MR-based hand-gesture system to enhance the conceptual architecture of MR in terms of both visualization and interaction in distributed collaboration. We propose a synchronous prototype named MRCollab, an immersive collaborative approach that allows two or more users to communicate with a peer through the integration of several technologies such as video, audio, and hand gestures.
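The abstract does not describe how MRCollab transmits hand gestures between peers, so the following is only a minimal sketch of what such a gesture channel could look like: webcam frames are processed with MediaPipe Hands to extract 21 landmarks per detected hand, and each frame's landmarks are serialized as a JSON message that a remote peer could consume to render the collaborator's hands. The message schema, field names, and transport are hypothetical placeholders, not the MRCollab protocol.

```python
# Minimal sketch (assumption, not the MRCollab implementation) of a
# hand-gesture channel: capture frames, extract hand landmarks with
# MediaPipe Hands, and emit JSON messages describing each hand pose.
import json
import time
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def landmark_messages(camera_index=0, max_frames=100):
    """Yield one JSON-encoded hand-pose message per captured frame."""
    cap = cv2.VideoCapture(camera_index)
    with mp_hands.Hands(max_num_hands=2,
                        min_detection_confidence=0.5) as hands:
        for _ in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV captures BGR.
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            hands_payload = []
            if results.multi_hand_landmarks:
                for hand in results.multi_hand_landmarks:
                    # 21 normalized (x, y, z) landmarks per hand.
                    hands_payload.append(
                        [[lm.x, lm.y, lm.z] for lm in hand.landmark])
            # Hypothetical message schema for the collaboration session.
            yield json.dumps({"timestamp": time.time(),
                              "type": "hand_pose",
                              "hands": hands_payload})
    cap.release()

# Each message could be pushed over the session's network channel
# (e.g. a WebSocket) alongside the audio and video streams.
for msg in landmark_messages(max_frames=5):
    print(msg)
```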