Bibliography
To improve the information security of network information platforms, an information security evaluation method based on artificial neural networks is proposed. Building on a comprehensive analysis of security events in the construction of network information platforms, a risk assessment model is constructed from artificial neural network theory. A weight calculation algorithm and a minimal-network pruning algorithm are also given, enabling quantitative evaluation of network information security. A fuzzy-neural-network weighted control method is used to control information security, and a non-recursive traversal method provides adaptive training of the assessment process. The network is adaptively trained according to the assessment conditions, improving information encryption and transmission, and the information security assessment is thereby realized. Simulation results show that the method is accurate and ensures information security.
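The abstract does not specify the weight-calculation or pruning algorithms themselves; as a minimal sketch of the kind of magnitude-based pruning such a "minimum network" step could use (the threshold fraction, the layer shape, and the use of NumPy are assumptions, not taken from the paper):

```python
import numpy as np

def prune_smallest_weights(weights, prune_fraction=0.2):
    """Zero out the smallest-magnitude weights of one layer.

    Generic magnitude-pruning illustration only; the paper's pruning
    criterion is not given in the abstract, so this one is assumed.
    """
    flat = np.abs(weights).ravel()
    k = int(len(flat) * prune_fraction)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

# Example: prune 20% of a hypothetical 4x8 hidden-layer weight matrix
rng = np.random.default_rng(0)
layer = rng.normal(size=(4, 8))
print(prune_smallest_weights(layer, 0.2))
```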
The current evaluation of API recommendation systems focuses mainly on correctness, calculated by matching results against ground-truth APIs. However, this measurement can be misleading when a result contains more than one API. In practice, some APIs implement basic functionality (e.g., printing and log generation); they can be invoked anywhere and may contribute less to the given requirement than functionally related APIs. To study the impact of such correct-but-useless APIs, we measure them with a utility metric. Our study covers more than 5,000 matched results generated by two specification-based API recommendation techniques. The results show that the matched APIs overlap heavily: 10% of the APIs account for more than 80% of the matched results. These 10% are all correct, yet few of them implement the required functionality. We further propose a heuristic approach to measure utility and conduct an online evaluation with 15 developers. Their reports confirm that matched results with higher utility scores usually involve more programming effort than those with lower scores.
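As a rough illustration of how an overlap statistic like the one above could be computed (the data layout and cutoff handling are assumptions, not the authors' pipeline):

```python
from collections import Counter

def top_fraction_coverage(matched_results, api_fraction=0.10):
    """Share of matched API occurrences covered by the most frequent
    `api_fraction` of distinct APIs (illustrative only; the paper's
    exact counting procedure is not given in the abstract)."""
    counts = Counter(api for result in matched_results for api in result)
    n_top = max(1, int(len(counts) * api_fraction))
    top_total = sum(c for _, c in counts.most_common(n_top))
    return top_total / sum(counts.values())

# Hypothetical matched results, each a list of recommended APIs
results = [
    ["Logger.log", "List.add", "Logger.log"],
    ["System.out.println", "Map.get"],
    ["Logger.log", "System.out.println"],
]
print(top_fraction_coverage(results))
```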
Expected and unexpected risks in cloud computing, including data security, data segregation, and the lack of control and knowledge, have led to dilemmas in several fields. Among them, the privacy problem is the most paramount and has largely constrained the prevalence and development of cloud computing. The privacy protection algorithms proposed to date generally fall into two categories: anonymity algorithms and differential privacy mechanisms. While much research has focused on the efficiency of these algorithms, few studies have emphasized their differing orientations and drawbacks. Motivated by this emerging research challenge, we conduct a comprehensive survey of two popular privacy protection algorithms, the K-Anonymity Algorithm and the Differential Privacy Algorithm. We evaluate both based on their principles, implementations, and algorithmic orientations. Several expectations and comparisons are also drawn from the current cloud computing privacy environment and its future requirements.
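For concreteness, a minimal sketch of the two mechanisms the survey compares, using their textbook formulations rather than any surveyed system: a Laplace-noised count for differential privacy and a basic k-anonymity check (the records, quasi-identifiers, and parameter values are hypothetical):

```python
from collections import Counter
import numpy as np

def laplace_mechanism(true_count, epsilon, sensitivity=1.0):
    """Release a count with epsilon-differential privacy by adding
    Laplace noise scaled to sensitivity/epsilon (the standard mechanism
    for counting queries)."""
    scale = sensitivity / epsilon
    return true_count + np.random.laplace(loc=0.0, scale=scale)

def is_k_anonymous(records, quasi_identifiers, k):
    """Check the basic k-anonymity condition: every combination of
    quasi-identifier values appears in at least k records."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(size >= k for size in groups.values())

# Hypothetical table with 'zip' and 'age' as quasi-identifiers
table = [{"zip": "12345", "age": 30}, {"zip": "12345", "age": 30},
         {"zip": "12345", "age": 31}]
print(is_k_anonymous(table, ["zip", "age"], k=2))
print(laplace_mechanism(true_count=42, epsilon=0.5))
```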
Smart grids use cloud services to realize reliable, efficient, secure, and cost-effective power management, but the cloud services of a smart grid carry a number of security risks. These risks are particularly problematic for operators of power information infrastructure who want to leverage the benefits of the cloud. In this paper, the security risks of cloud services in the smart grid are categorized and their characteristics analyzed, and a multi-layered index system of general technical risks is established that applies to different patterns of cloud service. The cloud service risk of a smart grid can then be evaluated according to these indexes.
Faced with a turbulent economic, political, and social environment, companies need to build effective risk management systems in their supply chains. Risk management can only be effective when risk identification and analysis are sufficiently accurate. From this perspective, this paper proposes a risk assessment approach based on the analytic hierarchy process and group decision making. It introduces a new method, the "reduced weight function", which limits the impact of incoherent judgments on group decision making by decreasing the weight assigned to a member of the expert panel according to the consistency of that member's judgments.
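The paper's reduced weight function is not given in the abstract; the sketch below computes Saaty's standard consistency ratio for one expert's pairwise comparison matrix and applies a purely hypothetical linear weight reduction as a stand-in for the authors' function:

```python
import numpy as np

# Saaty's random consistency index by matrix size
SAATY_RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
            6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(pairwise):
    """Saaty's consistency ratio CR = CI / RI, with CI = (lambda_max - n) / (n - 1)."""
    n = pairwise.shape[0]
    lam_max = np.max(np.real(np.linalg.eigvals(pairwise)))
    ci = (lam_max - n) / (n - 1)
    return ci / SAATY_RI[n]

def reduced_expert_weight(cr, cr_threshold=0.10):
    """Hypothetical reduced weight function: the expert's weight shrinks
    linearly as CR grows, reaching 0 at twice the usual 0.10 threshold.
    This is NOT the paper's formula, which the abstract does not give."""
    return max(0.0, 1.0 - cr / (2 * cr_threshold))

# Example: one expert's 3x3 pairwise comparison matrix
m = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]], dtype=float)
cr = consistency_ratio(m)
print(cr, reduced_expert_weight(cr))
```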
This paper presents the authors' preliminary framework for the drivers of Smart Governance. The research question of this study is: what are the drivers for Smart Governance to achieve evidence-based policy-making? The framework suggests that, in order to create a smart governance model, data governance and collaborative governance are the main drivers. These pillars are supported by the legal framework, normative factors, principles and values, methods, data assets or human resources, and IT infrastructure. These aspects guide a real-time evaluation process at all levels of the policy cycle, towards the implementation of evidence-based policies.
Information shared on Twitter is ever increasing, and recipients are overwhelmed by the number of tweets they receive, many of which are of no interest. Filters that estimate the interest of each incoming post can alleviate this problem, for example by allowing users to sort incoming posts by predicted interest (e.g., "top stories" vs. "most recent" in Facebook). Global and personal filters have been used to detect interesting posts in social networks. Global filters are trained on large collections of posts and reactions to posts (e.g., retweets), aiming to predict how interesting a post is for a broad audience. In contrast, personal filters are trained on the posts received by a particular user and that user's reactions. Personal filters can provide recommendations tailored to a particular user's interests, which may not coincide with the interests of the majority of users that global filters are trained to predict. On the other hand, global filters are typically trained on much larger datasets than personal filters, so they may work better in practice, especially for new users, for whom personal filters may have very few training instances (the "cold start" problem). Following Uysal and Croft, we devise a hybrid approach that combines the strengths of both global and personal filters: as in global filters, we train a single system on a large, multi-user collection of tweets, but each tweet is represented as a feature vector that includes a number of user-specific features.
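A minimal sketch of the kind of mixed feature vector such a hybrid filter could use; the feature names, the user-profile structure, and the tweet fields are illustrative assumptions, not the authors' feature set:

```python
def tweet_features(tweet, user_profile):
    """Build one feature vector mixing tweet-level (global) features
    with user-specific ones, in the spirit of the hybrid approach above."""
    text = tweet["text"].lower()
    author = tweet["author"]
    return {
        # global features: properties of the tweet itself
        "length": len(text),
        "has_url": int("http" in text),
        "has_hashtag": int("#" in text),
        # user-specific features: how the recipient relates to the tweet
        "author_followed": int(author in user_profile["followed"]),
        "past_retweet_rate_of_author": user_profile["retweet_rate"].get(author, 0.0),
    }

# Hypothetical recipient profile and incoming tweet
user = {"followed": {"alice"}, "retweet_rate": {"alice": 0.3}}
tweet = {"author": "alice", "text": "New paper on tweet filtering http://example.org #nlp"}
print(tweet_features(tweet, user))
```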
Recognizing Families In the Wild (RFIW) is a large-scale, multi-track automatic kinship recognition evaluation supporting both kinship verification and family classification at scales much larger than ever before. It was organized as a Data Challenge Workshop hosted in conjunction with ACM Multimedia 2017 and was built on the largest image collection that supports kin-based vision tasks. In this manuscript we summarize the evaluation protocols, the progress made, the technical background and performance of the participating algorithms, and promising directions for both researchers and engineers to take next in this line of work.
With the increasing use of mobile phones in contemporary society, more and more networked computers are connected to each other, which has brought along security issues. To address them, both the research and development communities are trying to build more secure software. However, the questions remain of how secure software is defined and how security can be measured. In this paper, we study this problem by examining what security measurement tools (i.e., metrics) are available and what these tools and metrics reveal about the security of software. As a result of the study, we observe that security verification activities fall into two main categories, evaluation and assurance. There are 34 metrics for measuring security, of which 29 are assurance metrics and 5 are evaluation metrics. Evaluating and studying these metrics leads us to conclude that their general quality is not at a level suitable for daily engineering workflows: they have both theoretical and practical issues that require further research and improvement.
The past four years have seen the rise of conversational agents (CAs) in everyday life. Apple, Microsoft, Amazon, Google and Facebook have all embedded proprietary CAs within their software and, increasingly, conversation is becoming a key mode of human-computer interaction. Whilst we have long been familiar with the notion of computers that speak, the investigative concern within HCI has been upon multimodality rather than dialogue alone, and there is no sense of how such interfaces are used in everyday life. This paper reports the findings of interviews with 14 users of CAs in an effort to understand the current interactional factors affecting everyday use. We find user expectations dramatically out of step with the operation of the systems, particularly in terms of known machine intelligence, system capability and goals. Using Norman's 'gulfs of execution and evaluation' [30] we consider the implications of these findings for the design of future systems.
Diversity has been suggested as an effective alternative to the current trend in rules-based approaches to cybersecurity. However, little work to date has focused on how various techniques generalize to new attacks. That is, there is no accepted methodology that researchers use to evaluate diversity techniques. Starting with the hypothesis that an attacker's effort increases as the common set of executable code snippets (return-oriented programming (ROP) gadgets) decreases across application variants, we explore how different diversification techniques affect the set of ROP gadgets that is available to an attacker. We show that a small population of diversified variants is sufficient to eliminate 90-99% of ROP gadgets across a collection of real-world applications. Finally, we observe that the number of remaining gadgets may still be sufficient for an attacker to mount an effective attack regardless of the presence of software diversity.
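As an illustration of the kind of measurement involved, the sketch below computes the fraction of ROP gadgets that survive (by set intersection) across all diversified variants; the gadget representation and data are hypothetical and do not reproduce the paper's pipeline:

```python
def surviving_gadget_fraction(original_gadgets, variant_gadget_sets):
    """Fraction of the original binary's ROP gadgets present in every
    diversified variant, i.e. the gadgets an attacker could still rely
    on across the whole population (illustrative metric only)."""
    common = set(original_gadgets)
    for gadgets in variant_gadget_sets:
        common &= set(gadgets)
    return len(common) / len(set(original_gadgets))

# Hypothetical gadget sets identified by their instruction sequences
original = {"pop rdi; ret", "pop rsi; ret", "syscall; ret", "ret"}
variants = [
    {"pop rdi; ret", "ret"},
    {"pop rdi; ret", "syscall; ret", "ret"},
]
print(surviving_gadget_fraction(original, variants))  # 0.5
```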
The Wikidata platform is a crowdsourced, structured knowledge base aiming to provide integrated, free, and language-agnostic facts which are used, amongst others, by the Wikipedias. Users who actively enter, review, and revise data on Wikidata are assisted by a property-suggesting system that proposes properties that might also be applicable to a given item. We argue that evaluating and subsequently improving this recommendation mechanism, and hence assisting users, can directly contribute to an even more integrated, consistent, and extensive knowledge base serving a huge variety of applications. However, the quality and usefulness of such recommendations have not been evaluated yet. In this work, we provide the first evaluation of different approaches that aim to provide users with property recommendations in the process of curating information on Wikidata. We compare the approach currently deployed on Wikidata with two state-of-the-art recommendation approaches stemming from the fields of RDF recommender systems and collaborative information systems, and we also evaluate hybrid recommender systems combining these approaches. Our evaluations show that the current recommendation algorithm works well with regard to recall and precision, reaching a recall@7 of 79.71% and a precision@7 of 27.97%. We also find that, in general, incorporating contextual as well as classifying information into the computation of property recommendations can further improve performance significantly.
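For reference, recall@k and precision@k can be computed as in the sketch below; the ranked recommendations and the set of relevant Wikidata properties are hypothetical examples, not the paper's data:

```python
def precision_recall_at_k(recommended, relevant, k=7):
    """precision@k and recall@k for one item's property recommendations.
    `recommended` is a ranked list; `relevant` is the set of properties
    the editors actually added."""
    top_k = recommended[:k]
    hits = sum(1 for p in top_k if p in relevant)
    precision = hits / k
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical ranked suggestions and ground-truth properties for one item
recommended = ["P31", "P21", "P569", "P19", "P106", "P27", "P735", "P570"]
relevant = {"P31", "P21", "P569", "P106"}
print(precision_recall_at_k(recommended, relevant, k=7))
```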
This paper presents the application of fusion methods to a visual surveillance scenario. The range of features relevant for re-identifying vehicles is discussed, along with methods for fusing the probabilistic estimates derived from these features. In particular, two statistical parametric fusion methods are considered: Bayesian networks and the Dempster-Shafer approach. The main contribution of this paper is the development of a metric that allows direct comparison of the benefits of the two methods. This is achieved by generalising the Kelly betting strategy to accommodate a variable total stake for each sample, subject to a fixed expected (mean) stake. The metric quantifies the extra information provided by the Dempster-Shafer method in comparison to a Bayesian fusion approach.
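The abstract does not spell out the variable-stake generalisation itself; for reference, the classical Kelly fraction that it builds on, for a bet won with probability $p$ (lost with probability $q = 1 - p$) at net odds $b$, is $f^{*} = \frac{bp - q}{b} = p - \frac{q}{b}$. The paper's extension, as stated above, lets the total stake vary per sample while fixing its expected (mean) value.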