Biblio
Never Alone (2016) is a generative large-scale urban-screen video-sound installation that presents the idea of generative choreographies among multiple video agents, or "digital performers". The installation questions how we navigate urban spaces and the ubiquity and disruptive nature of encounters within the city's landscape. The video agents explore precarious movement paths along the façade, inhabiting landscapes that are both architectural and emotional.
Personal agent software is now in daily use on personal devices and in some organizational settings. While many advocate an agent sociality design paradigm that incorporates human-like features and social dialogues, it is unclear whether this is a good match for professionals who seek productivity rather than leisure. We conducted a 17-day field study of a prototype of a personal AI agent that helps employees find work-related information. Using log data, surveys, and interviews, we found individual differences in the preference for humanized social interactions (social-agent orientation), which led to different user needs and requirements for agent design. We also explored the effect of proactive agent interactions and found that they carried the risk of interruption, especially for users who were generally averse to interruptions at work. Further, we found that user differences in social-agent orientation and aversion to proactive agent interactions can be inferred from behavioral signals. Our results inform research into social agent design, proactive agent interaction, and personalization of AI agents.
Embodied conversational agents (ECAs) are changing the way humans interact with technology. In order to develop humanlike ECAs, they need to be able to perform the natural gestures used in day-to-day conversation. Gestures can give insight into an ECA's personality trait of extraversion, but what factors into it is still being explored. Our study focuses on two aspects of gesture: amplitude and frequency. Our goal is to find out whether agents should use specific gestures more frequently than others depending on the personality type they have been designed with. We also quantify gesture amplitude and compare it to a previous study on the perceived naturalness of an agent's gestures. Our results showed some indication that introverts and extraverts judge the agent's naturalness similarly. The larger the amplitude our agent used, the more natural its gestures were perceived to be. The frequency of gestures used by extraverts and introverts showed hardly any difference, even in the types of gesture used.
In this study, we used a multi-party recording as a template for building a parametric speech synthesiser able to express different levels of attentiveness in backchannel tokens. This allowed us to investigate (i) whether it is possible to express the same perceived level of attentiveness in synthesised backchannels as in natural ones; and (ii) whether it is possible to increase and decrease the perceived level of attentiveness of backchannels beyond the range observed in the original corpus.
This paper describes a system for embodied conversational agents developed by Inmerssion and one of the applications built with this system, Young Merlin: Trial by Fire. In the Merlin application, the ECA and a human interact through speech in virtual reality. The goal of this application is to provide engaging VR experiences that build rapport through storytelling and verbal interactions. The agent is fully automated, and his attitude towards the user changes over time depending on the interaction. The conversational system was built through a declarative approach that supports animations, markup language, and gesture recognition. Future versions of Merlin will implement multi-character dialogs, additional actions, and extended interaction time.
We present a demonstration of the ARIA framework, a modular approach for rapid development of virtual humans for information retrieval that have linguistic, emotional, and social skills and a strong personality. We demonstrate the framework's capabilities in a scenario where `Alice in Wonderland', a popular English literature book, is embodied by a virtual human representing Alice. The user can engage in an information exchange dialogue, where Alice acts as the expert on the book and the user as an interested novice. Besides speech recognition, sophisticated audio-visual behaviour analysis is used to inform the core agent dialogue module about the user's state and intentions, so that it can go beyond simple chat-bot dialogue. The behaviour generation module features a new capability: it can deal gracefully with interruptions of the agent.
Protecting the confidentiality of information manipulated by a computing system is one of the most important challenges facing today's cybersecurity community. A promising step toward conquering this challenge is to formally verify that the end-to-end behavior of the computing system really satisfies various information-flow policies. Unfortunately, because today's system software still consists of both C and assembly programs, the end-to-end verification necessarily requires that we not only prove the security properties of individual components, but also carefully preserve these properties through compilation and cross-language linking. In this paper, we present a novel methodology for formally verifying end-to-end security of a software system that consists of both C and assembly programs. We introduce a general definition of observation function that unifies the concepts of policy specification, state indistinguishability, and whole-execution behaviors. We show how to use different observation functions for different levels of abstraction, and how to link different security proofs across abstraction levels using a special kind of simulation that is guaranteed to preserve state indistinguishability. To demonstrate the effectiveness of our new methodology, we have successfully constructed an end-to-end security proof, fully formalized in the Coq proof assistant, of a nontrivial operating system kernel (running on an extended CompCert x86 assembly machine model). Some parts of the kernel are written in C and some are written in assembly; we verify all of the code, regardless of language.
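As a rough, illustrative sketch (not the paper's exact formulation): an observation function O_p returns what principal p may observe of a machine state, state indistinguishability becomes equality of observations, and the security proof shows that this equality is preserved by every execution step, so whole-execution behaviors of indistinguishable initial states coincide. The names O_p and step below are our own:

\[
s_1 \approx_p s_2 \;\triangleq\; O_p(s_1) = O_p(s_2),
\qquad
s_1 \approx_p s_2 \;\Longrightarrow\; \mathrm{step}(s_1) \approx_p \mathrm{step}(s_2).
\]

In the paper's setting, the observation function is chosen per abstraction level, and indistinguishability-preserving simulations transport the property from C and assembly components down to the whole-system assembly machine.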
Different applications concurrently running on modern MPSoCs can interfere with each other when they use shared resources. This interference can cause side channels, i.e., sources of unintended information flow between applications. To prevent such side channels, we propose a hybrid mapping methodology that attempts to ensure spatial isolation, i.e., a mutually-exclusive allocation of resources to applications in the MPSoC. At design time and as a first step, we compute compact and connected application mappings (called shapes). In a second step, run-time management uses this information to map multiple spatially segregated shapes to the architecture. We present and evaluate a (fast) heuristic and an (exact) SAT-based mapper, demonstrating the viability of the approach.
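As a minimal sketch of the spatial-isolation idea (not the paper's actual heuristic or SAT formulation), the following Python fragment places precomputed rectangular shapes onto a tile grid so that no tile is ever shared between applications; all names are illustrative:

    # Illustrative only: place rectangular application "shapes" on a W x H tile
    # grid so that every tile belongs to at most one application (spatial isolation).
    def place_shapes(width, height, shapes):
        owner = [[None] * width for _ in range(height)]          # tile -> app id

        def fits(x, y, w, h):
            return (x + w <= width and y + h <= height and
                    all(owner[y + dy][x + dx] is None
                        for dy in range(h) for dx in range(w)))

        placements = {}
        for app, (w, h) in shapes.items():                        # first-fit scan
            for y in range(height):
                for x in range(width):
                    if fits(x, y, w, h):
                        for dy in range(h):
                            for dx in range(w):
                                owner[y + dy][x + dx] = app
                        placements[app] = (x, y)
                        break
                if app in placements:
                    break
        return placements, owner

    # e.g. a 4x4 MPSoC with two applications needing 2x2 and 1x3 tile regions
    placements, grid = place_shapes(4, 4, {"appA": (2, 2), "appB": (1, 3)})

A run-time manager in this spirit only needs to check tile ownership, since the hard work of finding compact, connected shapes is done at design time.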
We present PrivInfer, an expressive framework for writing and verifying differentially private Bayesian machine learning algorithms. Programs in PrivInfer are written in a rich functional probabilistic programming language with constructs for performing Bayesian inference. Differential privacy of programs is then established using a relational refinement type system, in which refinements on probability types are indexed by a metric on distributions. Our framework leverages recent developments in Bayesian inference, probabilistic programming languages, and relational refinement types. We demonstrate the expressiveness of PrivInfer by verifying privacy for several examples of private Bayesian inference.
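For reference, the standard (ε, δ)-differential-privacy guarantee that such a type system tracks (written in our notation, not PrivInfer's concrete syntax) is

\[
\forall D \simeq D',\ \forall S:\quad \Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] + \delta,
\]

where D ≃ D' ranges over adjacent databases; in a relational refinement type system, the index on a probability type plays the role of the privacy budget ε (and δ) accumulated by the program.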
Different wireless Peer-to-Peer (P2P) routing protocols rely on cooperative interaction among peers, yet most of the surveyed protocols provide little detail on how peers can take one another's reliability into consideration to improve routing efficiency in collaborative networks. Previous research has shown that in most trust and reputation evaluation schemes, peers' rating behaviour can be extended to include peers' attributes as a way of assessing their reliability. This paper proposes a reliability-based trust model for dynamic trust evaluation between peers in P2P networks for collaborative routing. Since peers' routing attributes vary dynamically, our proposed model must also accommodate dynamic changes of peers' attributes and behaviour. We introduce peers' buffers as a scaling factor for peers' trust evaluation in trust and reputation routing protocols. A simulation-based comparison between reliability-based and non-reliability-based trust models shows the improved performance of our proposed model in terms of delivery ratio and average message latency.
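A toy illustration of how a buffer-based scaling factor might enter a trust update; the operators and parameter names below are our own assumptions, not the paper's exact model:

    # Illustrative only: scale the newly observed routing evidence by the peer's
    # available buffer before folding it into the running trust score.
    def update_trust(old_trust, delivery_success_rate, free_buffer, buffer_capacity,
                     alpha=0.3):
        buffer_factor = free_buffer / buffer_capacity      # in [0, 1]
        evidence = delivery_success_rate * buffer_factor   # reliability-weighted evidence
        return (1 - alpha) * old_trust + alpha * evidence  # exponential moving average

    update_trust(old_trust=0.6, delivery_success_rate=0.9,
                 free_buffer=50, buffer_capacity=200)       # -> 0.4875

The intent of such a factor is that a peer with little spare buffer is a less reliable relay right now, regardless of its past delivery record.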
Social recommendation takes advantage of the influence of social relationships in decision making and the ready availability of social data through social networking systems. Trust relationships in particular can be exploited in such systems for rating prediction and recommendation, which has been shown to have the potential to improve the quality of the recommender and to alleviate the issues of data sparsity, cold start, and adversarial attacks. An appropriate trust inference mechanism is necessary for extending the knowledge base of trust opinions and tackling the problem of limited trust information due to the connection sparsity of social networks. In this work, we offer a new solution to trust inference in social networks to provide a better knowledge base for trust-aware recommender systems. We propose a semiring framework as a nonlinear way to combine trust evidence for inferring trust, where a trust relationship is modeled as a 2-D vector containing both trust and certainty information. The trust propagation and aggregation rules, the building blocks of our trust inference scheme, are based upon the properties of trust relationships. In our approach, both trust and distrust (i.e., positive and negative trust) are considered, and opinion conflict resolution is supported. We evaluate the proposed approach on real-world datasets and show that our trust inference framework has high accuracy and is capable of handling trust relationships in large networks. The inferred trust relationships can enlarge the knowledge base of trust information and improve the quality of trust-aware recommendation.
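A minimal sketch of semiring-style trust propagation and aggregation over (trust, certainty) pairs; the specific operators below are illustrative choices, not necessarily the ones defined in the paper:

    # Illustrative only: opinions are (trust, certainty) pairs in [0, 1] x [0, 1].
    def propagate(op1, op2):
        # path operator: concatenate two hops; both trust and certainty attenuate
        t1, c1 = op1
        t2, c2 = op2
        return (t1 * t2, c1 * c2)

    def aggregate(op1, op2):
        # parallel operator: weight each path's trust by its certainty
        t1, c1 = op1
        t2, c2 = op2
        if c1 + c2 == 0:
            return (0.0, 0.0)
        return ((t1 * c1 + t2 * c2) / (c1 + c2), max(c1, c2))

    # Two paths from Alice to Carol: via Bob and via Dave.
    via_bob  = propagate((0.9, 0.8), (0.7, 0.9))   # (0.63, 0.72)
    via_dave = propagate((0.4, 0.5), (0.8, 0.6))   # (0.32, 0.30)
    inferred = aggregate(via_bob, via_dave)        # certainty-weighted combination

Working over pairs rather than scalars is what lets the framework keep trust and confidence separate while still composing them along and across paths.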
Content-centric networking (CCN) is a networking paradigm that emphasizes request-response-based data transfer. A consumer issues a request explicitly referencing desired data by name. A producer assigns a name to each data item it publishes. Names are used both to identify data and to route traffic between consumers and producers. The type, format, and representation of names are fundamental to CCN. Currently, names are represented as human-readable application-layer URIs. This has several important security and performance implications for the network. In this paper, we propose to transparently decouple application-layer names from their network-layer counterparts. We demonstrate a mapping between the two namespaces that can be deterministically computed by consumers and producers, using application names formatted according to the standard CCN URI scheme. Meanwhile, consumers and producers can continue to use application-layer names. We detail the computation and mapping function requirements and discuss their impact on consumers, producers, and routers. Finally, we comprehensively analyze several mapping functions, show their functional equivalence to standard application names, and argue that they address several issues that stem from propagating application names into the network.
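One way such a deterministic application-to-network name mapping could be realized, sketched purely for illustration (the paper analyzes several candidate functions; this is not necessarily one of them), is to hash each cumulative prefix of a CCN URI into a fixed-length component:

    import hashlib

    # Illustrative only: map a human-readable CCN name such as
    # "ccnx:/parc/videos/cats.mpg" to fixed-length network-layer components.
    def to_network_name(app_name, digest_bytes=8):
        components = [c for c in app_name.split("/") if c and c != "ccnx:"]
        network = []
        prefix = b""
        for comp in components:
            prefix += comp.encode()                     # hash the cumulative prefix so
            h = hashlib.sha256(prefix).hexdigest()      # routers can still match prefixes
            network.append(h[: 2 * digest_bytes])
        return "/" + "/".join(network)

    # Consumers and producers compute the same result independently.
    to_network_name("ccnx:/parc/videos/cats.mpg")

Because the function is deterministic and keyless, consumers and producers derive identical network names without coordination, while routers never see the human-readable name.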
Code reuse attacks based on return-oriented programming (ROP) are becoming more prevalent every year. They started as a way to circumvent operating system protections against injected code, but they are now also used as a technique to keep malicious code hidden from detection and analysis systems. This means that while in the past ROP chains were short and simple (and therefore did not require any dedicated tool for their analysis), we have recently started to observe very complex algorithms, such as a complete rootkit, implemented entirely as a sequence of ROP gadgets. In this paper, we present a set of techniques to analyze complex code reuse attacks. First, we identify and discuss the main challenges that complicate the reverse engineering of code implemented using ROP. Second, we propose an emulation-based framework to dissect, reconstruct, and simplify ROP chains. Finally, we test our tool on the most complex example available to date: a ROP rootkit containing four separate chains, two of them dynamically generated at runtime.
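To make the idea of emulation-based ROP analysis concrete, here is a toy chain "emulator" (entirely illustrative and unrelated to the paper's actual framework): gadget addresses map to tiny instruction lists, and walking the chain yields a readable trace of what the gadgets compute.

    # Illustrative-only toy ROP-chain emulator: gadgets are modeled as tiny
    # instruction lists ending in an implicit 'ret', and the chain is a list of
    # stack slots (gadget addresses interleaved with data the gadgets pop).
    GADGETS = {                           # hypothetical gadget addresses and semantics
        0x1000: [("pop", "rdi")],         # pop rdi ; ret
        0x2000: [("pop", "rsi")],         # pop rsi ; ret
        0x3000: [("add", "rdi", "rsi")],  # add rdi, rsi ; ret
    }

    def emulate(chain):
        regs, sp, trace = {}, 0, []
        while sp < len(chain):
            addr = chain[sp]; sp += 1
            for ins in GADGETS[addr]:
                if ins[0] == "pop":                    # consume the next stack slot
                    regs[ins[1]] = chain[sp]; sp += 1
                elif ins[0] == "add":
                    regs[ins[1]] += regs[ins[2]]
                trace.append((hex(addr), ins, dict(regs)))
        return regs, trace

    # chain: pop rdi=2 ; pop rsi=40 ; add rdi, rsi  -> rdi == 42
    final_regs, trace = emulate([0x1000, 2, 0x2000, 40, 0x3000])

In a real tool, the gadget semantics would be lifted from the target binary rather than written by hand, and the trace would feed simplification passes that recover higher-level logic from the chain.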
Diversity has been suggested as an effective alternative to the current trend in rules-based approaches to cybersecurity. However, little work to date has focused on how various techniques generalize to new attacks. That is, there is no accepted methodology that researchers use to evaluate diversity techniques. Starting with the hypothesis that an attacker's effort increases as the common set of executable code snippets (return-oriented programming (ROP) gadgets) decreases across application variants, we explore how different diversification techniques affect the set of ROP gadgets that is available to an attacker. We show that a small population of diversified variants is sufficient to eliminate 90-99% of ROP gadgets across a collection of real-world applications. Finally, we observe that the number of remaining gadgets may still be sufficient for an attacker to mount an effective attack regardless of the presence of software diversity.
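The hypothesis can be phrased directly as set arithmetic over the gadgets shared by all variants; a small sketch follows (the gadget strings are made up for illustration):

    # Hypothetical helper: each variant is summarized by the set of gadgets it
    # contains (e.g. hashes of the instruction byte sequences ending in 'ret').
    def surviving_gadgets(variants):
        common = set.intersection(*variants)
        baseline = variants[0]
        eliminated = 1 - len(common) / len(baseline)
        return common, eliminated

    v1 = {"pop rdi; ret", "pop rsi; ret", "mov rax, rdi; ret", "syscall; ret"}
    v2 = {"pop rdi; ret", "xchg rax, rdi; ret", "syscall; ret"}
    v3 = {"pop rdi; ret", "syscall; ret", "pop rbx; ret"}
    common, frac = surviving_gadgets([v1, v2, v3])
    # common == {"pop rdi; ret", "syscall; ret"}; half the baseline gadgets eliminated

The paper's caution applies here too: even a small surviving common set may still be enough to assemble a working attack.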
This demo dramatically illustrates how replacing 'Classic' TCP congestion control (Reno, Cubic, etc.) with a 'Scalable' alternative like Data Centre TCP (DCTCP) keeps queuing delay ultra-low, not just for a select few light applications like voice or gaming, but even when a variety of interactive applications all heavily load the same (emulated) Internet access. DCTCP has so far been confined to data centres because it is too aggressive: it starves Classic TCP flows. To allow DCTCP to be exploited on the public Internet, we developed DualQ Coupled Active Queue Management (AQM), which allows the two TCP types to safely co-exist. Visitors can test all these claims. As well as running Web-based apps, they can pan and zoom a panoramic video of a football stadium on a touch-screen and experience how their personalized HD scene seems to stick to their finger, even though it is encoded on the fly on servers accessed via an emulated delay representing 'the cloud'. A pair of VR goggles can be used at the same time, making a similar point. The demo provides a dashboard so that visitors can not only experience the interactivity of each application live, but also quantify it via a wide range of performance stats, updated live. It also includes controls so visitors can configure different TCP variants, AQMs, network parameters, and background loads, and immediately test the effect.
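A simplified reading of the coupling at the heart of DualQ Coupled AQM (the constants and names below are our own illustration, not the demo's implementation): a single base probability drives both queues, with the Classic side squared so that Classic and Scalable flows see roughly equal throughput when they share the link.

    # Simplified sketch of the DualQ coupling idea (names and constants are ours).
    def coupled_probabilities(p_base, k=2.0):
        p_classic = p_base ** 2          # drop probability for the Classic queue
        p_l4s = min(1.0, k * p_base)     # ECN-marking probability for the Scalable queue
        return p_classic, p_l4s

    coupled_probabilities(0.1)           # -> (0.01, 0.2)

The asymmetry reflects the different control laws of the two TCP families: Classic flows respond to the square root of the signal rate, Scalable flows respond linearly.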
In cloud computing, computationally weak users often wish to outsource costly computations to a cloud, and at the same time they need to check the correctness of the result provided by the cloud. Such activities motivate verifiable computation (VC). Recently, Parno, Raykova and Vaikuntanathan showed that a VC protocol can be constructed from an attribute-based encryption (ABE) scheme for the same class of functions. In this paper, we propose two practical and efficient semi-adaptively secure key-policy attribute-based encryption (KP-ABE) schemes with constant-size ciphertexts. Semi-adaptive security requires that the adversary designate the challenge attribute set after it receives the public parameters but before it issues any secret key query, which is stronger than the selective security guarantee. Our first construction deals with a small universe while the second supports a large universe. Both constructions employ the technique underlying the prime-order instantiation of nested dual system groups, which is based on the d-linear assumption (including the SXDH and DLIN assumptions). To evaluate performance, we implemented our ABE schemes in the Python language in Charm. Compared with previous KP-ABE schemes with constant-size ciphertexts, our constructions achieve shorter ciphertext and secret key sizes and require lower computation costs, especially under the SXDH assumption.
We study coding schemes for multiparty interactive communication over synchronous networks that suffer from stochastic noise, where each bit is independently flipped with probability ε. We analyze the minimal overhead that must be added by the coding scheme in order to succeed in performing the computation despite the noise. Our main result is a lower bound on the communication of any noise-resilient protocol over a synchronous star network with n-parties (where all parties communicate in every round). Specifically, we show a task that can be solved by communicating T bits over the noise-free network, but for which any protocol with success probability of 1-o(1) must communicate at least Ω(T log n / log log n) bits when the channels are noisy. By a 1994 result of Rajagopalan and Schulman, the slowdown we prove is the highest one can obtain on any topology, up to a log log n factor. We complete our lower bound with a matching coding scheme that achieves the same overhead; thus, the capacity of (synchronous) star networks is Θ(log log n / log n). Our bounds prove that, despite several previous coding schemes with rate Ω(1) for certain topologies, no coding scheme with constant rate Ω(1) exists for arbitrary n-party noisy networks.
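In our (informal) notation, writing CC_noisy(f) for the number of bits any protocol with success probability 1-o(1) must send over the noisy star network to solve a task solvable with T noise-free bits, the result reads

\[
\mathrm{CC}_{\mathrm{noisy}}(f) \;=\; \Omega\!\left(\frac{T \log n}{\log\log n}\right)
\quad\Longrightarrow\quad
\text{rate} \;=\; \frac{T}{\mathrm{CC}_{\mathrm{noisy}}(f)} \;=\; O\!\left(\frac{\log\log n}{\log n}\right),
\]

and the matching coding scheme shows this bound is tight, giving the stated capacity \(\Theta(\log\log n / \log n)\) for synchronous star networks.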