Bibliography
In order to deliver personalized stories suited to the needs of large numbers of visitors and tourists, our work aims at defining appropriate models and fruition solutions that make the visit experience more appealing and immersive. This paper draws on the characteristic features of narratology and storytelling techniques for the dynamic creation of experiential stories on a semantic basis. It thus reports on scenarios, implementation models, and architectural and functional specifications of storytelling for the dynamic creation of content for the visit. Our purpose is to outline an approach for realizing a dynamic storytelling engine that supplies narrative content that is not necessarily predetermined and is pertinent to the needs and dynamic behaviors of the users. In particular, we adopt an adaptive, social, and mobile approach, using an ontological model to realize a dynamic digital storytelling system able to collect and process social information and content about users, giving them a personalized story based on the place they are visiting. A case study and some experimental results are presented and discussed.
Institutions use the information security (InfoSec) policy document as a set of rules and guidelines to govern the use of institutional information resources. However, a common problem is that these policies are often not followed or complied with. This study explores the extent to which the problem lies with the policy documents themselves. InfoSec policies are written in natural language, which is prone to ambiguity and misinterpretation. Consequently, such policies may be ambiguous, making them hard, if not impossible, for users to comply with. A case study approach with content analysis was conducted. The research explores the extent of the problem using a case study of an educational institution in South Africa.
The smart grid is an evolving new power system framework with an ICT-driven, massively layered structure of power equipment. New-generation sensors, smart meters, and electronic devices are integral components of the smart grid. However, the upcoming deployment of smart devices at different layers, followed by their integration with communication networks, may introduce cyber threats. The interdependent subsystems functioning in the smart grid, if affected by a cyber-attack, may be vulnerable, and efficiency and reliability may be greatly reduced if any one device fails to respond in real time. The cyber security vulnerabilities become even more evident given the existing superannuated cyber infrastructure. This paper presents a critical review of expected cyber security threats in this complex environment and addresses the grave concern of a secure cyber infrastructure and related developments. An extensive review of cyber security objectives and requirements, along with the risk evaluation process, has been undertaken. The paper analyses confidentiality and privacy issues across all components of the smart power system. A critical evaluation of upcoming challenges and innovative research concerns is highlighted to achieve a roadmap towards an immune smart grid infrastructure. This will further facilitate R&D and associated developments.
With the increasing popularity of cloud computing, data owners are motivated to outsource their sensitive data to cloud servers for flexibility and reduced cost in data management. However, privacy is a big concern when outsourcing data to the cloud, so data owners typically encrypt documents before outsourcing. As the volume of data increases at a dramatic rate, it is essential to develop efficient and reliable ciphertext search techniques so that data owners can easily access and update cloud data. In this paper, we propose a privacy-preserving multi-keyword ranked search scheme over encrypted cloud data, with data integrity provided by a new authenticated data structure, the MIR-tree. The MIR-tree-based index combines the widely used vector space model and the TF×IDF model in index construction and query generation. We use an inverted file index for storing word digests, which provides efficient and fast relevance matching between the query and cloud data. We design an authentication set (AS) for authenticating queries and verifying the top-k search results. Thanks to the tree-based index, our scheme achieves optimal search efficiency and reduces the communication overhead of verifying search results. The analysis shows the security and efficiency of our scheme.
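As a concrete illustration of the vector space and TF×IDF models this index builds on, the following minimal Python sketch ranks plaintext documents for a multi-keyword query. It is a plain-text stand-in for intuition only, not the paper's encrypted MIR-tree construction, and all names are illustrative.

```python
import math
from collections import Counter

def rank_documents(query_terms, documents):
    """Rank documents by TF x IDF relevance to a multi-keyword query.

    documents: dict mapping doc_id -> list of terms.
    Returns doc_ids sorted by descending score; a top-k search
    simply takes a prefix of this ranking.
    """
    n_docs = len(documents)
    # Document frequency of each query term.
    df = {t: sum(1 for terms in documents.values() if t in terms)
          for t in query_terms}
    scores = {}
    for doc_id, terms in documents.items():
        tf = Counter(terms)
        score = 0.0
        for t in query_terms:
            if df[t]:
                idf = math.log(n_docs / df[t])
                score += (tf[t] / len(terms)) * idf
        scores[doc_id] = score
    return sorted(scores, key=scores.get, reverse=True)

docs = {"d1": ["cloud", "search", "privacy"],
        "d2": ["cloud", "cloud", "index"]}
print(rank_documents(["cloud", "privacy"], docs))  # ['d1', 'd2']
```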
Cyber ranges are well-defined controlled virtual environments used in cybersecurity training as an efficient way for trainees to gain practical knowledge through hands-on activities. However, creating an environment that contains all the necessary features and settings, such as virtual machines, network topology, and security-related content, is not an easy task, especially for a large number of participants. Therefore, we propose CyRIS (Cyber Range Instantiation System) as a solution to this problem. CyRIS provides a mechanism to automatically prepare and manage cyber ranges for cybersecurity education and training based on specifications defined by the instructors. In this paper, we first describe the design and implementation of CyRIS, as well as its utilization. We then present an evaluation of CyRIS in terms of feature coverage compared to the Technical Guide to Information Security Testing and Assessment of the U.S. National Institute of Standards and Technology, and in terms of functionality compared to other similar tools. We also discuss the execution performance of CyRIS for several representative scenarios.
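To make the idea of specification-driven range instantiation concrete, here is a hypothetical Python sketch that turns an instructor-supplied description into per-trainee disk-clone commands. The YAML field names and directory paths are invented for illustration and do not reflect CyRIS's actual specification format.

```python
import yaml  # pip install pyyaml

# Invented range specification; CyRIS defines its own description
# format, which may differ from this sketch.
SPEC = """
range_id: training-01
guests:
  - id: desktop
    base_image: /images/ubuntu-base.qcow2
    count: 2
  - id: webserver
    base_image: /images/centos-web.qcow2
    count: 1
"""

def plan_instantiation(spec_text):
    """Turn an instructor-supplied spec into per-trainee disk-clone
    commands (copy-on-write clones keep per-VM disks cheap)."""
    spec = yaml.safe_load(spec_text)
    commands = []
    for guest in spec["guests"]:
        for i in range(guest["count"]):
            name = f'{spec["range_id"]}-{guest["id"]}-{i}'
            commands.append(
                f'qemu-img create -f qcow2 -b {guest["base_image"]} '
                f'-F qcow2 /ranges/{name}.qcow2')
    return commands

for cmd in plan_instantiation(SPEC):
    print(cmd)
```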
Spoofing is a serious threat to the widespread use of Global Navigation Satellite Systems (GNSSs) such as GPS and can be expected to play an important role in the security of many future IoT systems that rely on time, location, or navigation information. In this paper, we focus on the technique of multi-receiver GPS spoofing detection, so far only proposed theoretically. This technique promises to detect malicious spoofing signals by making use of the reported positions of several GPS receivers deployed in a fixed constellation. We scrutinize the assumptions of prior work, in particular the error models, and investigate how these models and their results can be improved by accounting for the correlation of errors at co-located receiver positions. We show that by leveraging spatial noise correlations, the false acceptance rate of the countermeasure can be improved while preserving its sensitivity to attacks. As a result, receivers can be placed significantly closer together than previously expected, which broadens the applicability of the countermeasure. Based on theoretical and practical investigations, we build the first realization of a multi-receiver countermeasure and experimentally evaluate its performance in both authentic and spoofing scenarios.
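The core geometric idea can be sketched in a few lines: under a single spoofing transmitter, all receivers resolve to (almost) the same position, so reported inter-receiver distances collapse relative to the known fixed constellation. The following Python sketch illustrates this check under simplified 2-D assumptions; the threshold and positions are illustrative, and the paper's correlated-noise modelling is not reproduced here.

```python
import numpy as np

def spoofing_suspected(reported, expected, threshold=2.0):
    """Flag spoofing when reported inter-receiver distances collapse.

    Under a single-transmitter attack, all receivers compute (almost)
    the same position, so pairwise reported distances shrink far below
    the known physical distances of the fixed constellation.

    reported, expected: (n, 2) arrays of positions in metres.
    threshold: decision margin in metres; in practice it is tuned
    against receiver noise (and, per the paper, correlated noise at
    co-located receivers allows tighter constellations).
    """
    n = len(reported)
    for i in range(n):
        for j in range(i + 1, n):
            d_rep = np.linalg.norm(reported[i] - reported[j])
            d_exp = np.linalg.norm(expected[i] - expected[j])
            if abs(d_rep - d_exp) > threshold:
                return True
    return False

expected = np.array([[0, 0], [5, 0], [0, 5], [5, 5]], dtype=float)
spoofed = np.array([[2.4, 2.5], [2.5, 2.5], [2.5, 2.6], [2.6, 2.5]],
                   dtype=float)
print(spoofing_suspected(spoofed, expected))  # True: distances collapsed
```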
Users’ online behaviors such as ratings and examination of items are recognized as one of the most valuable sources of information for learning users’ preferences in order to make personalized recommendations. But most previous works focus on modeling only one type of users’ behaviors such as numerical ratings or browsing records, which are referred to as explicit feedback and implicit feedback, respectively. In this article, we study a Semisupervised Collaborative Recommendation (SSCR) problem with labeled feedback (for explicit feedback) and unlabeled feedback (for implicit feedback), in analogy to the well-known Semisupervised Learning (SSL) setting with labeled instances and unlabeled instances. SSCR is associated with two fundamental challenges, that is, heterogeneity of two types of users’ feedback and uncertainty of the unlabeled feedback. As a response, we design a novel Self-Transfer Learning (sTL) algorithm to iteratively identify and integrate likely positive unlabeled feedback, which is inspired by the general forward/backward process in machine learning. The merit of sTL is its ability to learn users’ preferences from heterogeneous behaviors in a joint and selective manner. We conduct extensive empirical studies of sTL and several very competitive baselines on three large datasets. The experimental results show that our sTL is significantly better than the state-of-the-art methods.
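To convey the flavour of such an iterative identify-and-integrate process, here is a generic self-training sketch in Python: each round, the most confidently positive implicit-feedback pairs are promoted into the explicit training set and the model is refit. The toy item-mean model and the promotion fraction are illustrative assumptions, not the authors' sTL algorithm.

```python
def item_mean_model(triples):
    """Toy stand-in rating model: score(user, item) = mean rating of item."""
    sums, counts = {}, {}
    for _, item, rating in triples:
        sums[item] = sums.get(item, 0.0) + rating
        counts[item] = counts.get(item, 0) + 1
    return lambda user, item: sums.get(item, 0.0) / max(counts.get(item, 0), 1)

def self_transfer(fit, labeled, unlabeled, rounds=3, top_frac=0.2):
    """Generic self-training loop in the spirit of sTL (illustrative only):
    each round promotes the most confidently positive implicit feedback
    into the explicit training set, then refits."""
    labeled, pool = list(labeled), list(unlabeled)
    for _ in range(rounds):
        predict = fit(labeled)
        pool.sort(key=lambda ui: predict(*ui), reverse=True)
        k = max(1, int(top_frac * len(pool)))
        promoted, pool = pool[:k], pool[k:]
        labeled += [(u, i, 1.0) for u, i in promoted]  # imputed positive
        if not pool:
            break
    return fit(labeled)

explicit = [("u1", "i1", 1.0), ("u2", "i1", 0.8), ("u2", "i2", 0.1)]
implicit = [("u3", "i1"), ("u3", "i2"), ("u1", "i2")]
model = self_transfer(item_mean_model, explicit, implicit)
print(model("u3", "i1"))
```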
The Internet of Things (IoT) connects objects of different application fields, functionalities, and technologies. These objects are fully addressable and use standard communication protocols. Intelligent agents are used to integrate the IoT with heterogeneous, low-power, embedded, resource-constrained networked devices. This paper discusses an implemented real-world scenario of smart autonomous patient management assisted by semantic technology in the IoT. It uses a smart semantic framework with domain ontologies to encapsulate the processed information from sensor networks. This embedded Agent-based Semantic Internet of Things in Healthcare (ASIOTH) system employs semantic logic and semantic-value-based information to make the system smart and intelligent. This paper aims at explaining in detail the technology drivers behind IoT and healthcare, with information on data modeling, mapping of existing IoT data into other associated system data, the workflow or process flow behind the technical operations of remote device coordination, and the architecture of the network, middleware, databases, and application services. The challenges and the associated solutions in this field are discussed with a use case.
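As an illustration of how a domain ontology can encapsulate processed sensor information, the sketch below records a reading as RDF triples using the rdflib library and applies a simple semantic rule over the graph. The vocabulary is invented for illustration and is not the ASIOTH ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF  # pip install rdflib

# Invented mini-vocabulary for sensor readings (not the ASIOTH ontology).
HC = Namespace("http://example.org/healthcare#")
g = Graph()

def record_reading(patient_id, sensor, value, unit):
    """Encapsulate a raw sensor reading as semantic triples."""
    reading = HC[f"reading/{patient_id}/{sensor}"]
    g.add((reading, RDF.type, HC.SensorReading))
    g.add((reading, HC.patient, HC[f"patient/{patient_id}"]))
    g.add((reading, HC.measures, Literal(value)))
    g.add((reading, HC.unit, Literal(unit)))

record_reading("p42", "heart-rate", 118, "bpm")

# A simple "semantic logic" rule evaluated over the graph.
for reading, _, value in g.triples((None, HC.measures, None)):
    if value.toPython() > 100:
        print(f"alert: {reading} exceeds threshold")
```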
Control of a large engineered swarm can be achieved by influencing key agents within the swarm. The swarm can rely on its communication network to spread the external perturbation and transition to a new state when all agents reach a consensus. Maximising this consensus speed is a vital design parameter when a fast response is desirable. The systems analysed consist of N interacting agents that have the same number of outward (observing) connections, follow k-nearest-neighbour rules, and are represented by a directed graph Laplacian. The spectral properties of this graph are exploited to identify leaders with a newly presented semi-analytical approach referred to as the Leaders of Influence (LoI) method. This method is demonstrated on k-NNR graphs for a set number of leaders. These methods are compared with a genetic algorithm and are shown to be efficient and effective at leader identification. A focus of this work is the effect of leadership style on consensus speed, where an autocratic approach (leaders that are not influenced by other nodes in the graph) is shown to always produce faster consensus than a democratic leadership model.
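A minimal sketch of the spectral machinery involved: build the directed k-nearest-neighbour graph Laplacian, model autocratic leaders as grounded nodes, and score a leader set by the smallest real part of the grounded Laplacian's eigenvalues (larger means faster consensus). This is standard leader-follower consensus analysis for intuition, not the LoI method itself; the greedy single-leader search stands in for the paper's approach.

```python
import numpy as np

def knn_laplacian(positions, k):
    """Directed k-nearest-neighbour graph Laplacian L = D - A.

    Each agent observes its k nearest neighbours (outward edges),
    so every row of A has exactly k ones and D = k * I.
    """
    n = len(positions)
    A = np.zeros((n, n))
    for i in range(n):
        dists = np.linalg.norm(positions - positions[i], axis=1)
        dists[i] = np.inf  # no self-loop
        A[i, np.argsort(dists)[:k]] = 1.0
    return k * np.eye(n) - A

def consensus_rate(L, leaders):
    """Convergence rate with autocratic leaders modelled as grounded
    nodes: remove leader rows/columns and take the smallest real part
    of the eigenvalues of the grounded Laplacian."""
    keep = [i for i in range(len(L)) if i not in set(leaders)]
    grounded = L[np.ix_(keep, keep)]
    return min(np.linalg.eigvals(grounded).real)

rng = np.random.default_rng(0)
L = knn_laplacian(rng.random((30, 2)), k=4)
best = max(range(30), key=lambda i: consensus_rate(L, [i]))
print("best single leader:", best, "rate:", consensus_rate(L, [best]))
```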
In Pixar's Finding Dory, we are introduced to a new character: Hank the Octopus. This is a very different character than Pixar has been asked to animate before. Our directors demanded both precise control and graceful, clean silhouettes. The reference artwork we were given showed complex curves between arms and body without any disjointed shapes or breaks in form. Video of octopuses in motion reveals an infinitely malleable creature capable of an enormous shape language. This art direction required a small group of TDs to create a control scheme that was sensible and flexible, with a new level of control, for animators to bring Hank to life. We had to think deeply about everything from the tips of the fingers through to how the tentacles connect to the mouth corners and eye sockets. Each of these issues raised concerns around design, deformation, and finally how the end user can manipulate such complexity effectively.
Standardization and harmonization efforts have reached a consensus towards using a special-purpose Vehicular Public-Key Infrastructure (VPKI) in upcoming Vehicular Communication (VC) systems. However, there are still several technical challenges with no conclusive answers; one such important yet open challenge is the acquisition of short-term credentials, or pseudonyms: how should each vehicle interact with the VPKI, e.g., how frequently and for how long? Should each vehicle itself determine the pseudonym lifetime? Answering these questions is far from trivial: each choice can affect both user privacy and system performance and possibly, as a result, security. In this paper, we make a novel systematic effort to address this multifaceted question. We craft three generally applicable policies and experimentally evaluate the VPKI system performance, leveraging two large-scale mobility datasets. We consider the pseudonym acquisition policies that are most promising in terms of efficiency, and we find that within this class of policies, the most promising policy in terms of privacy protection can be supported with moderate overhead. Moreover, in all cases, this work is the first to provide tangible evidence that the state-of-the-art VPKI can serve sizable areas or domains with modest computing resources.
Evolutionary Computation (EC) has been used with great success on various real-world problems. One domain abundant with difficult problems is cryptology. Cryptology can be divided into cryptography, which, informally speaking, considers methods to ensure secrecy (but also authenticity, privacy, etc.), and cryptanalysis, which deals with methods to break cryptographic systems. Although not always in an obvious way, EC can be applied to problems from both domains. This tutorial will first give a brief introduction to cryptology intended for a general audience (therefore omitting proofs and the mathematics behind many concepts). Afterwards, we concentrate on several topics from cryptography that have been successfully tackled with EC so far and discuss why those topics are suitable for applying EC. However, care must be taken, since there exist a number of problems that seem impossible to solve with EC, and one needs to realize the limitations of the heuristics. We will discuss the choice of appropriate EC techniques (GA, GP, CGP, ES, multi-objective optimization, etc.) for various problems and evaluate the importance of that choice. Furthermore, we will discuss the gap between the cryptographic community and the EC community and what that means for the results. In doing so, we give special emphasis to the perspective that cryptography presents a source of benchmark problems for the EC community. To conclude, we will present a number of topics we consider to be strong research choices that can have a real-world impact. In that part, we give special attention to cryptographic problems where the cryptographic community has successfully applied EC, but where those problems remained out of the focus of the EC community. This tutorial will also present some live demos of EC in action when dealing with cryptographic problems. We will present several problems, ways of encoding solutions, and the impact of the choice of algorithm, and finally we will run some experiments to show the results and discuss how to assess them from a cryptographic perspective.
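As a taste of the kind of problem such a tutorial covers, the toy Python sketch below uses a simple genetic algorithm to evolve Boolean functions with high nonlinearity, a standard cryptographic fitness computed via the Walsh-Hadamard transform. The parameters and operators are illustrative choices, not taken from the tutorial.

```python
import random

N_VARS = 4
SIZE = 2 ** N_VARS  # truth-table length

def nonlinearity(truth_table):
    """Nonlinearity of a Boolean function via the Walsh-Hadamard
    transform: NL = 2^(n-1) - max|W| / 2."""
    w = [1 - 2 * bit for bit in truth_table]  # {0,1} -> {+1,-1}
    step = 1
    while step < SIZE:  # in-place fast WHT
        for i in range(0, SIZE, 2 * step):
            for j in range(i, i + step):
                w[j], w[j + step] = w[j] + w[j + step], w[j] - w[j + step]
        step *= 2
    return SIZE // 2 - max(abs(v) for v in w) // 2

def evolve(pop_size=50, generations=100, mutation_rate=0.05, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(SIZE)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=nonlinearity, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(SIZE)  # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([bit ^ (rng.random() < mutation_rate)
                             for bit in child])
        pop = parents + children
    return max(pop, key=nonlinearity)

best = evolve()
print("best nonlinearity:", nonlinearity(best))  # bent functions reach 6
```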
Repairing erroneous or conflicting data that violate a set of constraints is an important problem in data management. Many automatic or semi-automatic data-repairing algorithms have been proposed in the last few years, each with its own strengths and weaknesses. Bart is an open-source error-generation system conceived to support thorough experimental evaluations of these data-repairing systems. The demo is centered around three main lessons. To start, we discuss how generating errors in data is a complex problem with several facets. We introduce the important notions of the detectability and repairability of an error, which stand at the core of Bart. Then, we show how, by changing the features of errors, it is possible to significantly influence the performance of the tools. Finally, we concretely put five data-repairing algorithms to work on dirty data of various kinds generated using Bart, and discuss their performance.
Information Systems curricula require on-going and frequent review [2] [11]. Furthermore, such curricula must be flexible because of the fast-paced, dynamic nature of the workplace. Such flexibility can be maintained by modernizing course content or, inclusively, exchanging hardware or software for newer versions. Alternatively, flexibility can arise from incorporating new information into curricula from other disciplines. One field where the pace of change is extremely high is cybersecurity [3]. Students are left with outdated skills when curricula lag behind the pace of change in industry. For example, cryptography is a required learning objective in the DHS/NSA Center of Academic Excellence (CAE) knowledge criteria [1]. However, the overarching curriculum associated with basic ciphers has gone unchanged for decades. Indeed, a general problem in cybersecurity education is that students lack fundamental knowledge in areas such as ciphers [5]. In response, researchers have developed a variety of interactive classroom visualization tools [5] [8] [9]. Such tools visualize the standard approach to frequency analysis of simple substitution ciphers, which includes reviewing the most common single letters in the ciphertext. While fundamental ciphers such as the monoalphabetic substitution cipher have not been updated (these are historical ciphers), our collective understanding of how humans interact with language has changed. Updated understanding in both English language pedagogy [10] [12] and automated cryptanalysis of substitution ciphers [4] potentially renders the interactive classroom visualization tools incomplete or outdated. Classroom visualization tools are powerful teaching aids, particularly for abstract concepts. Existing research has established that such tools promote an active learning environment that translates not only to effective learning conditions but also to higher student retention rates [7]. However, visualization tools require extensive planning and design when used to actively engage students with detailed, specific knowledge units such as ciphers [7] [8]. Accordingly, we propose a heatmap-based frequency analysis visualization solution that (a) incorporates digraph and trigraph language processing norms and (b) enhances the active learning pedagogy inherent in visualization tools. Preliminary results indicate that study participants take approximately 15% longer to learn the heatmap-based frequency analysis technique compared to traditional frequency analysis but demonstrate a 50% increase in efficacy when tasked with solving simple substitution ciphers. Further, a heatmap-based solution contributes positively to the field insofar as educators have an additional tool to use in the classroom. As well, the heatmap visualization tool may allow researchers to comparatively examine the efficacy of visualization tools in the cryptanalysis of monoalphabetic substitution ciphers.
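The data behind such a digraph heatmap is just a 26x26 matrix of letter-pair frequencies; the minimal Python sketch below (illustrative, not the proposed tool) computes it.

```python
import string
from collections import Counter

def digraph_matrix(text):
    """26x26 relative-frequency matrix of letter pairs: the value at
    (row = first letter, col = second letter) drives one heatmap cell."""
    letters = [c for c in text.upper() if c in string.ascii_uppercase]
    counts = Counter(zip(letters, letters[1:]))
    total = sum(counts.values()) or 1
    return [[counts[(a, b)] / total for b in string.ascii_uppercase]
            for a in string.ascii_uppercase]

m = digraph_matrix("the theory of the heatmap")
print(f"P(TH) = {m[ord('T') - 65][ord('H') - 65]:.2f}")  # dominant pair
```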
We present Falcon, an interactive, deterministic, and declarative data cleaning system which uses SQL update queries as the language to repair data. Falcon does not rely on the existence of a set of pre-defined data quality rules. On the contrary, it encourages users to explore the data, identify possible problems, and make updates to fix them. Bootstrapped by one user update, Falcon guesses a set of possible SQL update queries that can be used to repair the data. The main technical challenge addressed in this paper consists in finding a set of SQL update queries that is minimal in size and at the same time fixes the largest number of errors in the data. We formalize this problem as a search in a lattice-shaped space. To guarantee that the chosen updates are semantically correct, Falcon navigates the lattice by interacting with users to gradually validate the set of SQL update queries. Besides traditional one-hop traversal algorithms (e.g., BFS or DFS), we describe novel multi-hop search algorithms such that Falcon can dive across the lattice and conduct the search efficiently. Our novel search strategy is coupled with a number of optimization techniques to further prune the search space and efficiently maintain the lattice. We have conducted extensive experiments using both real-world and synthetic datasets to show that Falcon can effectively communicate with users in data repairing.
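For intuition about the lattice-shaped search space, the hypothetical Python sketch below enumerates candidate SQL update queries generalising a single user fix: each lattice node keeps a subset of the repaired tuple's other attributes as the WHERE clause. This illustrates the space being searched, not Falcon's actual algorithms.

```python
from itertools import combinations

def candidate_updates(repaired_row, target_attr, new_value):
    """Enumerate the lattice of SQL update queries that generalise one
    user fix. Each node keeps a subset of the tuple's other attributes
    as the WHERE clause; fewer predicates means a more general query
    (a higher node in the lattice)."""
    attrs = [a for a in repaired_row if a != target_attr]
    for level in range(len(attrs), 0, -1):  # specific -> general
        for subset in combinations(attrs, level):
            where = " AND ".join(f"{a} = '{repaired_row[a]}'"
                                 for a in subset)
            yield f"UPDATE t SET {target_attr} = '{new_value}' WHERE {where};"

row = {"name": "Anna", "city": "NY", "zip": "10001"}
for query in candidate_updates(row, "city", "New York"):
    print(query)
```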
Ubiquitous WiFi infrastructure and smart phones offer a great opportunity to study physical activities. In this paper, we present MobiCamp, a large-scale testbed for studying mobility-related activities of residents on a campus. MobiCamp consists of ~2,700 APs, ~95,000 smart phones, and an App with ~2,300 opt-in volunteer users. More specifically, we capture how mobile users interact with different types of buildings, with other users, and with classroom courses, etc. To achieve this goal, we first obtain a relatively complete coverage of the users' mobility traces by utilizing four types of information from SNMP and by relaxing the location granularity to roughly the room level. The popular App then provides user attributes (grade, gender, etc.) and fine-grained behavior information (phone usage, course timetables, etc.) for the sampled population. These detailed mobile data are then correlated with the mobility traces from SNMP to estimate the physical activities of the entire campus population. We use two applications to show the power of MobiCamp.
Semantic Big Data is about the creation of new applications exploiting the richness and flexibility of declarative semantics combined with scalable and highly distributed data management systems. In this work, we present an application scenario in which a domain ontology, OpenRefine, and the Okkam Entity Name System enable a frictionless and scalable data integration process leading to a knowledge base for tax assessment. Further, we introduce the concept of the Entiton as a flexible and efficient data model suitable for large-scale data inference and analytic tasks. We successfully tested our data processing pipeline on a real-world dataset, supporting ACI Informatica in the investigation of Vehicle Excise Duty (VED) evasion in the Aosta Valley region (Italy). Besides producing useful business intelligence indicators, we implemented a distributed temporal inference engine to unveil VED evasion and circulation ban violations. The results of the integration are presented to the tax agents in a powerful Siren Solutions Kibi dashboard, enabling seamless data exploration and business intelligence.
The growth of the Internet era, with the corporate sector dealing and communicating online, has introduced crucial security challenges in cyberspace. Statistics from recent large-scale attacks define a new class of threat to the online world: the advanced persistent threat (APT), able to impact the national security and economic stability of any country. Among APTs, the botnet is one of the most well-articulated and stealthy attacks for performing cybercrime. Botnet owners and their criminal organizations are continuously developing innovative ways to infect new targets and exploit them. The concept of a botnet refers to a collection of compromised computers (bots) infected by automated software robots that interact to accomplish some distributed task and run without human intervention for illegal purposes. They are mostly malicious in nature and allow cyber criminals to control the infected machines remotely without the victim's knowledge. They use various techniques, communication protocols, and topologies at different stages of their lifecycle; moreover, they can upgrade their methods at any time. Botnets are global in nature, and their target is to steal or destroy valuable information from organizations as well as individuals. In this paper we present a survey of real-world botnets (APTs).
Mobile Ad-hoc Networks (MANETs) suffer from network partitioning when there is group mobility and thus cannot efficiently provide connectivity to all nodes in the network. The Autonomous Mobile Mesh Network (AMMNET) is a new class of MANET that overcomes this weakness, especially network partitioning. However, AMMNET is vulnerable to routing attacks such as the blackhole attack, in which a malicious node can make itself an intragroup, intergroup, or intergroup bridge router and disrupt the network. In AMMNET, network survivability is an important aspect of maintaining connectivity and reliable communication, and maintaining security is a challenge given the self-organising nature of the topology. To address this weakness, the proposed approach measures the impact of a security enhancement on AMMNET on the basis of a bait detection scheme. The modified bait approach prevents blackhole nodes from entering the network and helps to maintain the reliability of the network. The proposed scheme uses the idea of the Wumpus World concept from Artificial Intelligence. The modified bait scheme prevents the blackhole attack and secures the network.
Once data is released to the Internet, there is little hope of successfully deleting it, as it may have been duplicated, reposted, and archived in multiple places. This poses a significant threat to users' privacy and their right to permanently erase their very own data. One approach to controlling the implications for privacy is to assign a lifetime value to the published data and ensure that the data is no longer accessible after this point in time. However, such an approach suffers from the inability to successfully predict the right time at which the data should vanish. Consequently, the author of the data can only estimate the correct time, which unfortunately can cause the premature or belated deletion of data. This paper tackles the problem of fixed lifetimes in data deletion from a different angle and argues that alternative approaches are a desideratum for research. In our approach, we consider different criteria for when data should be deleted, such as keeping data available as long as there is sufficient interest in it, or deleting it early in cases of excessive access. To assist the self-destruction of data, we propose a protocol and develop a prototype, called Neuralyzer, which leverages the caching mechanisms of the Domain Name System (DNS) to ensure the successful deletion of data. Our experimental results demonstrate that our approach can completely delete published data while at the same time achieving flexible expiration times varying from a few days to several months depending on the users' interest.
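The underlying intuition can be sketched as follows: tie the availability of the decryption key to a DNS name so that, while readers keep requesting the data, resolvers keep the record alive, and once interest fades the record expires and the data becomes unrecoverable. The Python sketch below is a hypothetical illustration of that gate; the domain name is invented and the paper's actual protocol is not reproduced.

```python
import socket

def key_available(key_domain):
    """Return True while the key-bearing DNS name still resolves.

    Hypothetical sketch: while the data is being accessed, lookups keep
    the DNS record cached and refreshed; once interest fades, the record
    expires and the key (and hence the data) becomes unrecoverable.
    """
    try:
        socket.gethostbyname(key_domain)
        return True
    except socket.gaierror:
        return False

def read_data(ciphertext, key_domain="key-3f9a.example.org"):
    # 'key_domain' is an invented name for illustration.
    if not key_available(key_domain):
        raise PermissionError("data has expired and self-destructed")
    # ... retrieve key material and decrypt (omitted) ...
    return ciphertext

print(key_available("example.org"))  # True while the name resolves
```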