Bibliography
We present Diana, an embodied agent who is aware of her own virtual space and of the physical space around her. Using video and depth sensors, Diana attends to the user's gestures, body language, gaze, and (soon) facial expressions, as well as their words. Diana also gestures and emotes in addition to speaking, and exists in a 3D virtual world that the user can see. This produces symmetric, shared perception: Diana can see the user, the user can see Diana, and both can see the virtual world. The result is an embodied agent that begins to develop the conceit that the user is interacting with a peer rather than with a program.
A lack of awareness of secure online behaviour can leave end-users, and their personal details, vulnerable to compromise. This paper describes an ongoing research project in the field of usable security, examining the relationship between end-user security behaviour and the use of affective feedback to educate end-users. Part of this project considers the link between the categories of information users actually reveal about themselves online and the information they believe, or report, they have revealed. The experimental results confirm a disparity between the information revealed and what users think they have revealed, highlighting a deficit in security awareness. Results relating to the affective feedback delivered are mixed, indicating limited short-term impact. Future work will involve a long-term study, with the expectation that positive behavioural changes will be reflected in the results as end-users become more knowledgeable about security.
An enormous number of images are shared through social networking services such as Facebook. These images often depict people and may violate their privacy if published without each person's permission. To address this concern, visual privacy protection, such as blurring, is applied to the facial regions of people who have not given permission. However, besides degrading image quality, this can spoil the context of the image: if some faces are filtered while others are not, the missing facial expressions make the image harder to comprehend. This paper proposes an image melding-based method that modifies facial regions in a visually unintrusive way while preserving facial expression. Our experimental results demonstrate that the proposed method can retain facial expression while protecting privacy.
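For context, the conventional baseline that this abstract contrasts against is straightforward face blurring. Below is a minimal sketch of that baseline, assuming OpenCV (opencv-python, imported as cv2) and its bundled Haar cascade for face detection; the file names are hypothetical, and this is illustrative only, not the image melding method the paper proposes.

# Illustrative baseline: blur detected faces for visual privacy protection.
# This sketches the conventional blurring approach the abstract contrasts
# against, not the paper's image melding method. Assumes opencv-python.
import cv2

def blur_faces(image_path: str, output_path: str) -> None:
    # Load the image and the Haar cascade bundled with OpenCV.
    image = cv2.imread(image_path)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    # Detect candidate face regions on a grayscale copy.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Replace each detected face region with a heavily blurred version.
    # Note: this discards facial expression, which is exactly the drawback
    # the abstract points out.
    for (x, y, w, h) in faces:
        roi = image[y:y + h, x:x + w]
        image[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)

    cv2.imwrite(output_path, image)

if __name__ == "__main__":
    # Hypothetical input and output paths, for illustration only.
    blur_faces("group_photo.jpg", "group_photo_blurred.jpg")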