Annotated Bibliography
This paper first introduces big data services and cloud computing models based on different processing forms, and analyzes the authentication technologies and security services of existing big data systems to understand their processing characteristics. The operating principles and complexity of big data services and cloud computing are also studied, and their suitable environments, advantages, and disadvantages are summarized. Building on cloud computing, the author proposes the Model of Big Data Cloud Computing based on Extended Subjective Logic (MBDCC-ESL), which introduces Jøsang's subjective logic to test data credibility and extends it to address the trustworthiness of big data in cloud computing environments. Simulation results show that the model performs well.
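For reference, a binomial opinion in Jøsang's subjective logic is a tuple (b, d, u, a) of belief, disbelief, uncertainty, and base rate with b + d + u = 1, and independent opinions can be combined with the cumulative fusion operator. Below is a minimal Python sketch of that standard operator; it illustrates the underlying logic only, not the paper's MBDCC-ESL extension, whose details are not given here.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """Binomial opinion (belief, disbelief, uncertainty, base rate), b + d + u = 1."""
    b: float
    d: float
    u: float
    a: float = 0.5

    def expectation(self) -> float:
        # Projected probability, usable to rank data credibility.
        return self.b + self.a * self.u

def cumulative_fusion(x: Opinion, y: Opinion) -> Opinion:
    """Combine two independent opinions about the same data source."""
    k = x.u + y.u - x.u * y.u
    if k == 0:  # both opinions are dogmatic (u = 0): fall back to averaging
        return Opinion((x.b + y.b) / 2, (x.d + y.d) / 2, 0.0, (x.a + y.a) / 2)
    return Opinion(
        (x.b * y.u + y.b * x.u) / k,
        (x.d * y.u + y.d * x.u) / k,
        (x.u * y.u) / k,
        (x.a + y.a) / 2,  # simplification: full definitions weight base rates by certainty
    )

# Example: two partially trusting observers of the same big-data source.
fused = cumulative_fusion(Opinion(0.7, 0.1, 0.2), Opinion(0.5, 0.2, 0.3))
print(fused.expectation())
```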
Context-aware applications are becoming a major driver of the growth of future connected, smart, autonomous environments. However, with increasing security risks around critical shared massive-data capabilities and growing regulatory requirements on privacy, there is a significant need for new paradigms to manage security and privacy compliance. These challenges call for context-aware, fine-grained security policies to be enforced in such dynamic environments in order to achieve efficient real-time authorization between applications and connected devices. In this work we propose a novel solution that provides a context-aware security model for Android applications. Specifically, our approach provides an automated context-aware access control model and leverages Attribute-Based Encryption (ABE) to secure data communications. Thorough experiments have been performed, and the evaluation results demonstrate that the proposed solution provides an effective, lightweight, adaptable context-aware encryption model.
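To illustrate the kind of context-aware, attribute-based policy check such a model enforces, here is a small Python sketch; the attribute names and policy are hypothetical, and in the actual scheme checks like this are paired with ABE so that data is decryptable only when the policy attributes are satisfied.

```python
from datetime import datetime

# Hypothetical context attributes an Android app might present at request time.
def current_context() -> dict:
    return {
        "role": "nurse",
        "location": "hospital_ward",
        "hour": datetime.now().hour,
        "device_rooted": False,
    }

# A policy is a set of attribute predicates; all must hold (logical AND).
POLICY = {
    "role": lambda v: v in {"doctor", "nurse"},
    "location": lambda v: v == "hospital_ward",
    "hour": lambda v: 7 <= v <= 19,        # working hours only
    "device_rooted": lambda v: v is False, # refuse compromised devices
}

def authorize(ctx: dict) -> bool:
    """Grant access only if every contextual predicate is satisfied."""
    return all(pred(ctx.get(attr)) for attr, pred in POLICY.items())

if authorize(current_context()):
    print("access granted: release ABE decryption key")
else:
    print("access denied")
```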
Although sequence-to-sequence attentional neural machine translation (NMT) has achieved great progress recently, it is confronted with two challenges: learning optimal model parameters for long parallel sentences and properly exploiting different scopes of context. In this paper, partially inspired by the idea of segmenting a long sentence into short clauses, each of which can be easily translated by NMT, we propose a hierarchy-to-sequence attentional NMT model to handle these two challenges. Our encoder takes the segmented clause sequence as input and explores a hierarchical neural network structure to model words, clauses, and sentences at different levels, particularly with two layers of recurrent neural networks modeling semantic compositionality at the word and clause levels. Correspondingly, the decoder sequentially translates segmented clauses and simultaneously applies two types of attention models to capture interclause and intraclause contexts for translation prediction. In this way, we not only improve parameter learning but also better exploit different scopes of context for translation. Experimental results on Chinese-English and English-German translation demonstrate the superiority of the proposed model over the conventional NMT model.
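To make the two-level encoder concrete, here is a minimal PyTorch sketch in which a word-level RNN summarizes each clause and its final states feed a clause-level RNN; the GRU choice, dimensions, attention models, and decoder are assumptions and omissions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    """Word-level GRU per clause, clause-level GRU over clause vectors."""
    def __init__(self, vocab_size: int, emb_dim: int = 64, hid_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.word_rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.clause_rnn = nn.GRU(hid_dim, hid_dim, batch_first=True)

    def forward(self, clauses):
        # clauses: list of (batch, clause_len) token-id tensors, one per clause
        word_states, clause_vecs = [], []
        for clause in clauses:
            states, last = self.word_rnn(self.embed(clause))
            word_states.append(states)    # word states, for intraclause attention
            clause_vecs.append(last[-1])  # final hidden state summarizes the clause
        clause_seq = torch.stack(clause_vecs, dim=1)    # (batch, n_clauses, hid)
        clause_states, _ = self.clause_rnn(clause_seq)  # for interclause attention
        return word_states, clause_states

enc = HierarchicalEncoder(vocab_size=1000)
demo = [torch.randint(0, 1000, (1, 5)), torch.randint(0, 1000, (1, 7))]
word_states, clause_states = enc(demo)
print(clause_states.shape)  # torch.Size([1, 2, 128])
```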
This paper presents a computational model for managing an Embodied Conversational Agent's first impressions of warmth and competence towards the user. These impressions are important to manage because they can affect users' perception of the agent and their willingness to continue interacting with it. The model aims at detecting the user's impressions of the agent and producing appropriate verbal and nonverbal agent behaviours in order to maintain a positive impression of warmth and competence. The user's impressions are recognized using a machine learning approach over facial expressions (action units), which are important indicators of users' affective states and intentions. The agent adapts its verbal and nonverbal behaviour in real time with a reinforcement learning algorithm that takes the user's impressions as the reward for selecting the most appropriate combination of verbal and nonverbal behaviours to perform. A user study testing the model in a contextualized interaction with users is also presented. Our hypothesis is that users' ratings differ when the agent adapts its behaviour according to our reinforcement learning algorithm, compared to when it does not adapt its behaviour to users' reactions (i.e., when it selects its behaviours randomly). The study shows a general tendency for the agent to perform better when using our model than in the random condition. Significant results show that users' ratings of the agent's warmth are influenced by their a priori attitudes toward virtual characters, and that users judged the agent as more competent when it adapted its behaviour compared to the random condition.
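The adaptation loop can be approximated as a bandit-style learner: the agent picks a verbal/nonverbal behaviour combination, observes the detected impression as a reward, and updates its value estimates. The epsilon-greedy Python sketch below is only illustrative; the behaviour set and reward scale are assumptions, not the paper's exact algorithm.

```python
import random

BEHAVIOURS = [  # hypothetical verbal/nonverbal combinations
    ("self_praise", "smile"), ("self_praise", "neutral"),
    ("task_talk", "smile"), ("task_talk", "gesture"),
]

q = {b: 0.0 for b in BEHAVIOURS}  # running value estimate per combination
n = {b: 0 for b in BEHAVIOURS}    # selection counts
EPSILON = 0.1

def select_behaviour():
    """Epsilon-greedy: mostly exploit the best-rated combination, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(BEHAVIOURS)
    return max(q, key=q.get)

def update(behaviour, reward):
    """Incremental mean update, with the detected impression as reward in [-1, 1]."""
    n[behaviour] += 1
    q[behaviour] += (reward - q[behaviour]) / n[behaviour]

# One interaction step: act, read the user's impression from facial AUs, learn.
b = select_behaviour()
impression_reward = 0.6  # stand-in for the impression classifier's output
update(b, impression_reward)
```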
Vehicular networks are susceptible to a variety of attacks such as denial-of-service (DoS) attacks, Sybil attacks, and false alert generation attacks. Different cryptographic methods have been proposed to protect vehicular networks from these kinds of attacks. However, cryptographic methods have proven less effective against insider attacks, which are generated from within the vehicular network system itself. Misbehavior detection systems are more effective at detecting and preventing insider attacks. In this paper, we propose a machine learning based misbehavior detection system trained on datasets generated through extensive simulation of a realistic vehicular network environment. The simulation results demonstrate that our proposed scheme outperforms previous methods in accurately identifying various types of misbehavior.
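A representative training pipeline for such a detector, assuming scikit-learn and hypothetical features extracted from the simulated traces (e.g., position error, message rate), might look like the following; the random arrays are placeholders for the paper's simulation datasets.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical feature matrix from simulated vehicular traces:
# [position_error_m, speed_delta_mps, msg_rate_hz, alert_mismatch]
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))           # placeholder feature vectors
y = rng.integers(0, 3, size=5000)        # 0 = benign, 1 = Sybil, 2 = false alert

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```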
Machine learning (ML) models are often trained on private datasets that are very expensive to collect or highly sensitive, using large amounts of computing power. The models are commonly exposed through online APIs, or used in hardware devices deployed in the field or given to end users. This provides an incentive for adversaries to steal these ML models as a proxy for gathering the underlying datasets. While API-based model exfiltration has been studied before, the theft and protection of machine learning models on hardware devices have not yet been explored. In this work, we examine this important aspect of the design and deployment of ML models. We illustrate how an attacker may acquire either the model or the model architecture through memory probing, side channels, or crafted input attacks, and propose (1) power-efficient obfuscation as an alternative to encryption, and (2) timing side-channel countermeasures.
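As one common form of timing side-channel countermeasure, inference latency can be padded to a fixed deadline so that execution time reveals nothing about data-dependent paths through the model. The Python sketch below illustrates the idea under assumptions of our own (the deadline value and toy model are hypothetical, not the paper's design).

```python
import time

DEADLINE_S = 0.050  # fixed response time; must exceed worst-case inference latency

def constant_time_infer(model, x):
    """Run inference, then sleep so total latency is (nearly) constant."""
    start = time.monotonic()
    result = model(x)
    remaining = DEADLINE_S - (time.monotonic() - start)
    if remaining > 0:
        time.sleep(remaining)  # hide data-dependent timing variation
    return result

print(constant_time_infer(lambda v: v * 2, 21))  # toy "model" for demonstration
```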