Biblio

Filters: Keyword is intent
2022-08-26
Zhu, Jessica, Van Brummelen, Jessica.  2021.  Teaching Students About Conversational AI Using Convo, a Conversational Programming Agent. 2021 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC). :1–5.
Smart assistants, like Amazon's Alexa or Apple's Siri, have become commonplace in many people's lives, appearing in their phones and homes. Despite their ubiquity, these conversational AI agents still largely remain a mystery to many, in terms of how they work and what they can do. To lower the barrier to entry to understanding and creating these agents for young students, we expanded on Convo, a conversational programming agent that can respond to both voice and text inputs. The previous version of Convo focused on teaching only programming skills, so we created a simple, intuitive user interface for students to use those programming skills to train and create their own conversational AI agents. We also developed a curriculum to teach students about key concepts in AI and conversational AI in particular. We ran a 3-day workshop with 15 participating middle school students. Through the data collected from the pre- and post-workshop surveys as well as a mid-workshop brainstorming session, we found that after the workshop, students tended to think that conversational AI agents were less intelligent than originally perceived, gained confidence in their abilities to build these agents, and learned some key technical concepts about conversational AI as a whole. Based on these results, we are optimistic about Convo's ability to teach and empower students to develop conversational AI agents in an intuitive way.
2020-08-10
Kim, Byoungchul, Jung, Jaemin, Han, Sangchul, Jeon, Soyeon, Cho, Seong-je, Choi, Jongmoo.  2019.  A New Technique for Detecting Android App Clones Using Implicit Intent and Method Information. 2019 Eleventh International Conference on Ubiquitous and Future Networks (ICUFN). :478–483.
Detecting repackaged apps is one of the important issues in the Android ecosystem. Attackers often reverse engineer a legitimate app, modify it or embed malicious code into it, then repackage and distribute it in online markets. They also employ code obfuscation techniques to hide app cloning or repackaging. In this paper, we propose a new technique for detecting repackaged Android apps, which is robust to code obfuscation. The technique analyzes the similarity of Android apps based on the method call information of component classes that receive implicit intents. We developed a tool, Calldroid, that implements the proposed technique, and evaluated it on apps transformed using well-known obfuscators. The evaluation results showed that the proposed technique can effectively detect repackaged apps.
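A minimal sketch of the underlying idea, comparing two apps by the method calls made by components that receive implicit intents, is given below. This is not the Calldroid tool itself; the feature choice (one set of called method signatures per intent action), the Jaccard similarity measure, and the 0.7 threshold are illustrative assumptions.

# Hypothetical sketch: compare two apps by the method-call sets of their
# intent-handling components (not the authors' Calldroid implementation).

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two sets; 1.0 means identical call sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def app_similarity(app_a: dict, app_b: dict) -> float:
    """
    app_a / app_b map each component that declares an implicit-intent filter
    (keyed here by its intent action string) to the set of method signatures
    that component calls. Components are matched across apps by intent action.
    """
    shared_actions = app_a.keys() & app_b.keys()
    if not shared_actions:
        return 0.0
    scores = [jaccard(app_a[act], app_b[act]) for act in shared_actions]
    return sum(scores) / len(scores)

# Toy example: a repackaged clone keeps the original calls but injects one more.
original = {"android.intent.action.SEND":
            {"openConnection()", "readStream()", "parseJson()"}}
suspect = {"android.intent.action.SEND":
           {"openConnection()", "readStream()", "parseJson()", "sendSms(String)"}}

if app_similarity(original, suspect) > 0.7:   # threshold chosen arbitrarily
    print("possible repackaged clone")

Matching components by intent action rather than by class or method names loosely mirrors the paper's rationale that implicit-intent information is harder for common identifier-renaming obfuscators to rewrite, though the exact features Calldroid extracts are described in the paper itself.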
2018-02-27
Soleymani, Mohammad, Riegler, Michael, Halvorsen, Pål.  2017.  Multimodal Analysis of Image Search Intent: Intent Recognition in Image Search from User Behavior and Visual Content. Proceedings of the 2017 ACM on International Conference on Multimedia Retrieval. :251–259.
Users search for multimedia content with different underlying motivations or intentions. The study of user search intentions is an emerging topic in information retrieval, since understanding why a user is searching for particular content is crucial for satisfying the user's need. In this paper, we aimed at automatically recognizing a user's intent for image search in the early stage of a search session. We designed seven different search scenarios under the intent conditions of finding items, re-finding items and entertainment. We collected facial expressions, physiological responses, eye gaze and implicit user interactions from 51 participants who performed seven different search tasks on a custom-built image retrieval platform. We analyzed the users' spontaneous and explicit reactions under different intent conditions. Finally, we trained machine learning models to predict users' search intentions from the visual content of the visited images, the user interactions and the spontaneous responses. After fusing the visual and user interaction features, our system achieved an F-1 score of 0.722 for classifying three classes in a user-independent cross-validation. We found that eye gaze and implicit user interactions, including mouse movements and keystrokes, are the most informative features. Given that the most promising results are obtained by modalities that can be captured unobtrusively and online, the results demonstrate the feasibility of deploying such methods for improving multimedia retrieval platforms.
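The setup the abstract describes, fusing visual and user-interaction features and evaluating intent classification with user-independent cross-validation, can be sketched roughly as follows. The feature dimensions, the synthetic data, the SVM classifier and the GroupKFold split are assumptions for illustration, not the authors' exact pipeline.

# Illustrative sketch of early-fusion intent classification with a
# user-independent (leave-users-out) cross-validation. Feature names,
# synthetic data and the SVM choice are assumptions, not the paper's setup.
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_sessions, n_interaction, n_visual = 300, 20, 50

# Per-session features: user interactions (gaze/mouse/keystroke statistics)
# and visual content of the visited images, concatenated (feature-level fusion).
X_interaction = rng.normal(size=(n_sessions, n_interaction))
X_visual = rng.normal(size=(n_sessions, n_visual))
X = np.hstack([X_interaction, X_visual])

y = rng.integers(0, 3, size=n_sessions)       # 3 intent classes: find / re-find / entertain
users = rng.integers(0, 51, size=n_sessions)  # session -> participant id

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# GroupKFold keeps each participant's sessions in a single fold, so no user
# appears in both the training and test sides of a split.
scores = cross_val_score(clf, X, y, groups=users,
                         cv=GroupKFold(n_splits=5), scoring="f1_macro")
print("user-independent macro F1: %.3f +/- %.3f" % (scores.mean(), scores.std()))

Grouping the folds by participant is what makes the estimate user-independent, which matches the evaluation protocol the abstract reports, even though the features here are random placeholders.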