Biblio

Filters: Keyword is smartphone sensing
2022-06-06
Silva, J. Sá, Saldanha, Ruben, Pereira, Vasco, Raposo, Duarte, Boavida, Fernando, Rodrigues, André, Abreu, Madalena.  2019.  WeDoCare: A System for Vulnerable Social Groups. 2019 International Conference on Computational Science and Computational Intelligence (CSCI). :1053–1059.
People's safety is one of the biggest problems in today's society. Safety measures and mechanisms are especially important for vulnerable social groups, such as migrants, the homeless, and victims of domestic and/or sexual violence. To cope with this problem, a growing number of personal alarm systems have appeared on the market, most of them based on panic buttons. Nevertheless, none of them has gained widespread acceptance, mainly because of limited Human-Computer Interaction. In the context of this work, we developed an innovative mobile application that recognizes an attack through speech and gesture recognition. This paper describes the system and presents its features, some of them based on the emerging concept of Human-in-the-Loop Cyber-Physical Systems and new concepts of Human-Computer Interaction.
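The abstract does not give implementation details, but the core idea of combining speech and gesture cues on a phone to raise an alarm can be illustrated with a minimal Python sketch. Everything below (the keyword set, the acceleration threshold, and the function names) is a hypothetical stand-in for illustration, not the WeDoCare implementation.

```python
import numpy as np

# Illustrative parameters only; the paper does not disclose its detection logic.
DISTRESS_KEYWORDS = {"help", "stop", "police"}   # hypothetical keyword set
SHAKE_THRESHOLD = 25.0                           # assumed accel magnitude (m/s^2) for a violent gesture

def speech_indicates_attack(transcript):
    """Crude keyword spotting over a speech-to-text transcript,
    standing in for a real speech-recognition model."""
    return any(word in DISTRESS_KEYWORDS for word in transcript.lower().split())

def gesture_indicates_attack(accel_samples):
    """Flag a burst of high acceleration as a crude proxy for gesture recognition.
    accel_samples: array of shape (n, 3) with x/y/z accelerometer readings."""
    magnitudes = np.linalg.norm(accel_samples, axis=1)
    return float(np.max(magnitudes)) > SHAKE_THRESHOLD

def should_raise_alarm(transcript, accel_samples):
    # In this sketch either modality alone can trigger the alert.
    return speech_indicates_attack(transcript) or gesture_indicates_attack(accel_samples)

# Toy example: a shouted phrase plus a sudden shake.
accel = np.concatenate([np.random.normal(0, 1, (50, 3)), [[30.0, 5.0, 2.0]]])
print(should_raise_alarm("please help me", accel))   # True
```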
2019-01-21
Yu, Z., Du, H., Xiao, D., Wang, Z., Han, Q., Guo, B.  2018.  Recognition of Human Computer Operations Based on Keystroke Sensing by Smartphone Microphone. IEEE Internet of Things Journal. 5:1156–1168.
Human computer operations such as writing documents and playing games have become popular in our daily lives. These activities (especially if identified in a non-intrusive manner) can be used to facilitate context-aware services. In this paper, we propose to recognize human computer operations through keystroke sensing with a smartphone. Specifically, we first utilize the microphone embedded in a smartphone to sense the input audio from a computer keyboard. We then identify keystrokes using fingerprint identification techniques. The determined keystrokes are then corrected with a word recognition procedure, which utilizes the relations of adjacent letters in a word. Finally, by fusing both semantic and acoustic features, a classification model is constructed to recognize four typical human computer operations: 1) chatting; 2) coding; 3) writing documents; and 4) playing games. We recruited 15 volunteers to complete these operations, and evaluated the proposed approach from multiple aspects in realistic environments. Experimental results validated the effectiveness of our approach.
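As a rough illustration of the final fusion-and-classification step described above, the sketch below combines toy acoustic features with toy semantic features and trains an off-the-shelf classifier on synthetic data. The feature choices, the RandomForestClassifier, and the generated data are all assumptions; the authors' actual fingerprinting, word-correction, and classification models are not specified in the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# The four operations named in the paper.
OPERATIONS = ["chatting", "coding", "writing documents", "playing games"]

def extract_features(audio_frame, recognized_words):
    """Fuse acoustic and semantic cues into one feature vector.

    audio_frame: 1-D array of microphone samples for a short window.
    recognized_words: words reconstructed from the detected keystrokes.
    Both inputs and the exact features are illustrative placeholders.
    """
    acoustic = [
        np.mean(np.abs(audio_frame)),                                         # average energy
        np.std(audio_frame),                                                  # energy variation
        np.count_nonzero(np.diff(np.sign(audio_frame))) / len(audio_frame),   # zero-crossing rate
    ]
    semantic = [
        len(recognized_words),                                                # typing volume
        np.mean([len(w) for w in recognized_words]) if recognized_words else 0.0,  # avg word length
    ]
    return np.array(acoustic + semantic)

# Synthetic stand-in data: 200 windows, each with fake audio and fake word lists.
rng = np.random.default_rng(0)
X = np.stack([
    extract_features(rng.normal(size=1024), ["word"] * int(rng.integers(1, 20)))
    for _ in range(200)
])
y = rng.integers(0, len(OPERATIONS), size=200)   # random labels, for illustration only

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```

With real recordings, the labels would come from the volunteers' annotated sessions rather than random draws, and the accuracy reported by cross-validation would reflect how separable the fused features make the four operations.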