Biblio

Filters: Keyword is Avatars
2022-08-12
Liu, Kui, Koyuncu, Anil, Kim, Dongsun, Bissyandé, Tegawendé F..  2019.  AVATAR: Fixing Semantic Bugs with Fix Patterns of Static Analysis Violations. 2019 IEEE 26th International Conference on Software Analysis, Evolution and Reengineering (SANER). :1–12.

Fix pattern-based patch generation is a promising direction in Automated Program Repair (APR). Notably, it has been demonstrated to produce more acceptable and correct patches than the patches obtained with mutation operators through genetic programming. The performance of pattern-based APR systems, however, depends on the fix ingredients mined from fix changes in development histories. Unfortunately, collecting a reliable set of bug fixes in repositories can be challenging. In this paper, we propose to investigate, in an APR scenario, the possibility of leveraging code changes that address violations reported by static bug detection tools. To that end, we build the AVATAR APR system, which exploits fix patterns of static analysis violations as ingredients for patch generation. Evaluated on the Defects4J benchmark, we show that, assuming a perfect localization of faults, AVATAR can generate correct patches to fix 34/39 bugs. We further find that AVATAR yields performance metrics that are comparable to those of closely related approaches in the literature. While AVATAR outperforms many of the state-of-the-art pattern-based APR systems, it is mostly complementary to current approaches. Overall, our study highlights the relevance of static bug finding tools as indirect contributors of fix ingredients for addressing code defects identified with functional test cases.
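
As a purely illustrative companion to the abstract above, the short Python sketch below mimics the fix-pattern idea: a single pattern (a None-check guard, of the kind mined from static-analyzer violations) is applied to a suspicious statement, and a candidate patch is kept only if a stand-in test suite passes. This is a minimal sketch, not the AVATAR implementation, which targets Java programs from the Defects4J benchmark; the null_check_pattern, generate_patches, and passes_tests helpers and the toy buggy program are all invented for illustration.

```python
"""Minimal, self-contained illustration of fix-pattern-based patch generation.

This is NOT the AVATAR implementation (AVATAR targets Java programs from
Defects4J); it only sketches the idea: a fix pattern of the kind mined from
static analysis violations is applied to a suspicious statement, and each
candidate patch is kept only if a (stand-in) test suite passes.
"""
import re


def null_check_pattern(line):
    """Toy fix pattern: guard a `return var.method()` statement against a
    None receiver, mirroring null-check patterns mined from NPE-style violations."""
    m = re.match(r"(\s*)return (\w+)\.(\w+)\(\)\s*$", line)
    if not m:
        return []
    indent, var, method = m.groups()
    return [f"{indent}return {var}.{method}() if {var} is not None else None"]


def generate_patches(buggy_lines, suspicious_idx, patterns):
    """Apply every fix pattern to the suspicious line and yield candidate programs."""
    for pattern in patterns:
        for replacement in pattern(buggy_lines[suspicious_idx]):
            candidate = list(buggy_lines)
            candidate[suspicious_idx] = replacement
            yield candidate


def passes_tests(candidate_lines, test_inputs):
    """Stand-in for running a project's test suite against a candidate patch."""
    namespace = {}
    exec("\n".join(candidate_lines), namespace)
    f = namespace["first_upper"]
    try:
        return all(f(s) == (s.upper() if s is not None else None) for s in test_inputs)
    except Exception:
        return False


# Toy buggy program: crashes with AttributeError when s is None.
buggy_program = [
    "def first_upper(s):",
    "    return s.upper()",
]

for candidate in generate_patches(buggy_program, 1, [null_check_pattern]):
    if passes_tests(candidate, ["abc", None]):
        print("Plausible patch:")
        print("\n".join(candidate))
        break
```

The sketch prints the single plausible patch it finds; real pattern-based APR systems apply many mined patterns across many suspicious locations and then rank or filter the resulting candidates.
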
2020-07-16
McNeely-White, David G., Ortega, Francisco R., Beveridge, J. Ross, Draper, Bruce A., Bangar, Rahul, Patil, Dhruva, Pustejovsky, James, Krishnaswamy, Nikhil, Rim, Kyeongmin, Ruiz, Jaime et al..  2019.  User-Aware Shared Perception for Embodied Agents. 2019 IEEE International Conference on Humanized Computing and Communication (HCC). :46–51.

We present Diana, an embodied agent who is aware of her own virtual space and the physical space around her. Using video and depth sensors, Diana attends to the user's gestures, body language, gaze and (soon) facial expressions as well as their words. Diana also gestures and emotes in addition to speaking, and exists in a 3D virtual world that the user can see. This produces symmetric and shared perception, in the sense that Diana can see the user, the user can see Diana, and both can see the virtual world. The result is an embodied agent that begins to develop the conceit that the user is interacting with a peer rather than a program.

2018-02-06
Shepherd, L. A., Archibald, J..  2017.  Security Awareness and Affective Feedback: Categorical Behaviour vs. Reported Behaviour. 2017 International Conference On Cyber Situational Awareness, Data Analytics And Assessment (Cyber SA). :1–6.

A lack of awareness surrounding secure online behaviour can lead to end-users and their personal details becoming vulnerable to compromise. This paper describes an ongoing research project in the field of usable security, examining the relationship between end-user security behaviour and the use of affective feedback to educate end-users. Part of the aforementioned research project considers the link between the categorical information users reveal about themselves online and the information users believe, or report, that they have revealed online. The experimental results confirm a disparity between the information revealed and what users think they have revealed, highlighting a deficit in security awareness. Results relating to the affective feedback delivered are mixed, indicating limited short-term impact. Future work seeks to perform a long-term study, with the view that positive behavioural changes may be reflected in the results as end-users become more knowledgeable about security awareness.

2017-03-08
Nakashima, Y., Koyama, T., Yokoya, N., Babaguchi, N..  2015.  Facial Expression Preserving Privacy Protection Using Image Melding. 2015 IEEE International Conference on Multimedia and Expo (ICME). :1–6.

An enormous number of images are currently shared through social networking services such as Facebook. These images usually contain the appearance of people and may violate their privacy if they are published without each person's permission. To remedy this privacy concern, visual privacy protection, such as blurring, is applied to the facial regions of people who have not given permission. However, in addition to degrading image quality, this may spoil the context of the image: if some people's faces are filtered while others' are not, the missing facial expressions make the image difficult to comprehend. This paper proposes an image melding-based method that modifies facial regions in a visually unintrusive way while preserving facial expression. Our experimental results demonstrate that the proposed method can retain facial expression while protecting privacy.
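
As a rough sketch of the pipeline the abstract describes (detect the faces of people who have not given permission, synthesize a replacement region, and blend it back without visible seams), the Python example below uses OpenCV. It is not the paper's method: Poisson blending via cv2.seamlessClone stands in for image melding, and the protect_face helper and the input file names are hypothetical.

```python
"""Toy sketch of region-wise facial privacy protection with smooth blending.

The paper's method is based on image melding, which is not implemented here;
this sketch only illustrates the surrounding pipeline: detect face regions,
synthesize a replacement, and blend it back with an unobtrusive boundary.
Poisson blending (cv2.seamlessClone) stands in for the melding step.
"""
import cv2
import numpy as np


def protect_face(image, replacement):
    """Replace every detected face with `replacement`, blended into the scene."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    out = image.copy()
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        src = cv2.resize(replacement, (w, h))
        mask = np.full(src.shape[:2], 255, dtype=np.uint8)
        center = (x + w // 2, y + h // 2)
        # Poisson blending keeps the region boundary visually unobtrusive,
        # standing in for the paper's image-melding step. Assumes the face
        # is not right at the image border.
        out = cv2.seamlessClone(src, out, mask, center, cv2.NORMAL_CLONE)
    return out


if __name__ == "__main__":
    scene = cv2.imread("group_photo.jpg")          # hypothetical input image
    donor_face = cv2.imread("consented_face.jpg")  # hypothetical replacement
    cv2.imwrite("protected.jpg", protect_face(scene, donor_face))
```

The Haar cascade is used only to keep the example self-contained; any face detector (and, in the paper's setting, a per-person consent list) would slot into the same loop.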