Biblio
We investigate minimization of tree pattern queries that use the child relation, descendant relation, node labels, and wildcards. We prove that minimization for such tree patterns is Sigma2P-complete and thus solve a problem first attacked by Flesca, Furfaro, and Masciari in 2003. We first provide an example that shows that tree patterns cannot be minimized by deleting nodes. This example shows that the M-NR conjecture, which states that minimality of tree patterns is equivalent to their nonredundancy, is false. We then show how the example can be turned into a gadget that allows us to prove Sigma2P-completeness.
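As a hedged illustration of the query class (our own toy encoding, not the paper's formalism), a tree pattern is a node-labeled tree whose edges are typed as child or descendant, with * as a wildcard label; minimization asks for a smallest pattern equivalent to a given one:

# Toy encoding of a tree pattern with child/descendant edges, node
# labels, and wildcards (*); not the paper's formalism.
from dataclasses import dataclass, field

@dataclass
class PatternNode:
    label: str                                     # a tag name, or "*" for a wildcard
    children: list = field(default_factory=list)   # pairs (edge_type, PatternNode)

    def add(self, edge_type, node):                # edge_type: "child" or "descendant"
        self.children.append((edge_type, node))
        return node

# The XPath-like pattern a[.//b]/*: an a-node with a descendant
# labeled b and a wildcard child.
root = PatternNode("a")
root.add("descendant", PatternNode("b"))
root.add("child", PatternNode("*"))

def size(p):
    # Minimization seeks an equivalent pattern minimizing this count.
    return 1 + sum(size(c) for _, c in p.children)

print(size(root))  # 3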
Distributed denial-of-service (DDoS) attacks represent a continuous threat to the availability of information and communication resources. This research analyzes the relevant scientific literature and synthesizes parameters at the packet and traffic-flow levels that are applicable to the detection of infrastructure-layer DDoS attacks. We conclude that packet-level detection uses two or more parameters, while traffic-flow-level detection often uses only one, which makes the latter a more convenient and resource-efficient approach to DDoS detection.
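As a hedged illustration of the single-parameter, flow-level style of detection described above (an assumed example, not a method from the surveyed literature), a detector might track one statistic per time window, such as destination-address entropy:

# Hypothetical single-parameter, flow-level detector: flag a window as
# suspicious when destination-address entropy drops sharply, i.e. when
# many flows concentrate on few targets, as in a volumetric flood.
import math
from collections import Counter

def dest_entropy(flows):
    counts = Counter(dst for _src, dst in flows)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def is_suspicious(flows, baseline_entropy, factor=0.5):
    # One statistic per window: a large drop relative to the baseline.
    return dest_entropy(flows) < baseline_entropy * factor

# 95% of flows in this toy window target a single host.
window = [("10.0.%d.%d" % (i // 256, i % 256), "198.51.100.7") for i in range(950)] \
       + [("10.1.0.%d" % i, "203.0.113.%d" % i) for i in range(50)]
print(is_suspicious(window, baseline_entropy=4.0))  # True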
The usage of Information and Communication Technologies (ICTs) pervades everyday life. While ICTs have contributed to improving our quality of life, new forms of (cyber)crime have also emerged in this setting. The diversity and amount of information that forensic investigators must cope with when tackling a cyber-crime case call for tools and techniques in which knowledge is the main actor. Current approaches leave to the investigator the chore of integrating the diverse sources of evidence relevant to a case, thus hindering the automatic generation of reusable knowledge. This paper describes an architecture that lifts the classical phases of a digital forensic investigation to a knowledge-driven setting. We discuss how languages and technologies originating from the Semantic Web proposal can complement digital forensics tools so that knowledge becomes a first-class citizen. Our architecture makes it possible to perform complex forensic investigations in an integrated way and, as a by-product, to build a knowledge base that can be consulted to gain insights from previous cases. Our proposal has been inspired by real-world scenarios emerging in the context of an Italian research project on cyber security.
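As a hedged sketch of the knowledge-driven style of investigation described above (the ontology names are hypothetical, and this is not the paper's architecture), evidence can be represented as RDF triples and queried with SPARQL using a library such as rdflib:

# Minimal sketch with a hypothetical forensics ontology: evidence items
# become RDF triples that can be queried across sources in one place.
from rdflib import Graph, Namespace, Literal

CASE = Namespace("http://example.org/forensics#")
g = Graph()
g.add((CASE.email42, CASE.foundOn, CASE.laptop1))
g.add((CASE.email42, CASE.mentions, Literal("bank transfer")))
g.add((CASE.laptop1, CASE.belongsTo, CASE.suspectA))

# Which suspects own devices holding evidence that mentions something?
q = """
SELECT ?suspect WHERE {
  ?item   <http://example.org/forensics#mentions>  ?text .
  ?item   <http://example.org/forensics#foundOn>   ?device .
  ?device <http://example.org/forensics#belongsTo> ?suspect .
}
"""
for row in g.query(q):
    print(row.suspect)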
Phishing is a technique aimed at imitating the official websites of companies such as banks and institutions. The purpose of phishing is to steal private and sensitive user credentials such as passwords, usernames, or PINs. Phishing detection is a technique for dealing with this kind of malicious activity. In this paper we propose a method able to discriminate between web pages designed to perform phishing attacks and legitimate ones. We exploit state-of-the-art machine learning algorithms to build models using indicators that are able to detect phishing activity.
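As a hedged sketch of this kind of indicator-based detection (the features and data below are illustrative, not the paper's indicator set), one can train a standard classifier on simple page and URL features:

# Illustrative indicator-based phishing classifier; features and labels
# are made up, not the paper's data.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Each row: [url_length, num_subdomains, has_ip_in_url, num_external_links]
X = [[72, 3, 1, 40], [25, 1, 0, 5], [90, 4, 1, 55], [30, 1, 0, 8]]
y = [1, 0, 1, 0]  # 1 = phishing, 0 = legitimate

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(accuracy_score(y_te, model.predict(X_te)))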
The push for data sharing and data processing across organisational boundaries creates challenges at many levels of the software stack. Data sharing and processing rely on the participating parties agreeing on the permissible operations and expressing them as actionable contracts and policies. Converting these contracts and policies into an operational infrastructure is still a matter of research, which raises the question: what should a digital data marketplace infrastructure look like? In this paper we investigate how the communication fabric and applications can be tightly coupled into a multi-domain overlay network that enforces accountability. We demonstrate our concepts with a prototype showing how a simple workflow can run across organisational boundaries.
In contrast with goal-oriented dialogue, social dialogue has no clear measure of task success. Consequently, evaluation of these systems is notoriously hard. In this paper, we review current evaluation methods, focusing on automatic metrics. We conclude that turn-based metrics often ignore the context and do not account for the fact that several replies can be valid, while end-of-dialogue rewards are mainly hand-crafted. Both lack grounding in human perceptions.
Multifactor authentication presents a robust security method, but it typically requires multiple steps on the part of the user, resulting in a high cost to usability and limiting adoption. Furthermore, a truly usable system must be unobtrusive and inconspicuous. Here, we present a system that provides all three factors of authentication (knowledge, possession, and inherence) in a single step, in the form of an earpiece that implements brain-based authentication via custom-fit, in-ear electroencephalography (EEG). We demonstrate its potential by collecting EEG data using manufactured custom-fit earpieces with embedded electrodes. Across 7 participants, we achieve perfect performance, with a mean 0% false acceptance rate (FAR) and 0% false rejection rate (FRR), using each participant's best-performing task, collected in one session by one earpiece with three electrodes. Our results indicate that a single earpiece with embedded electrodes could provide a discreet, convenient, and robust method for secure one-step, three-factor authentication.
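As a hedged illustration (made-up scores, not the study's EEG data), FAR and FRR of the kind reported above are computed by thresholding match scores for genuine and impostor attempts:

# Illustrative FAR/FRR computation at a fixed threshold; the scores
# below are fabricated for the example.
def far_frr(genuine_scores, impostor_scores, threshold):
    # FAR: fraction of impostor attempts wrongly accepted.
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    # FRR: fraction of genuine attempts wrongly rejected.
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

genuine = [0.91, 0.88, 0.95, 0.90]
impostor = [0.30, 0.42, 0.25, 0.51]
print(far_frr(genuine, impostor, threshold=0.7))  # (0.0, 0.0): "perfect"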
This paper presents a solution that simultaneously addresses both reliability and security (RnS) in a monitoring framework. We identify the commonalities between reliability and security to guide the design of HyperTap, a hypervisor-level framework that efficiently supports both types of monitoring in virtualization environments. In HyperTap, the logging of system events and states is common across monitors and constitutes the core of the framework. The audit phase of each monitor is implemented and operated independently. In addition, HyperTap relies on hardware invariants to provide a strongly isolated root of trust. HyperTap uses active monitoring, which can be adapted to enforce a wide spectrum of RnS policies. We validate HyperTap by introducing three example monitors: Guest OS Hang Detection (GOSHD), Hidden RootKit Detection (HRKD), and Privilege Escalation Detection (PED). Our experiments with fault injection and real rootkits/exploits demonstrate that HyperTap provides robust monitoring with low performance overhead.
Winner of the William C. Carter Award for Best Paper based on PhD work and Best Paper Award voted by conference participants.
Reliability and security tend to be treated separately because they appear orthogonal: reliability focuses on accidental failures, security on intentional attacks. Because of the apparent dissimilarity between the two, tools to detect and recover from different classes of failures and attacks are usually designed and implemented differently. So, integrating support for reliability and security in a single framework is a significant challenge.
Here, we discuss how to address this challenge in the context of cloud computing, for which reliability and security are growing concerns. Because cloud deployments usually consist of commodity hardware and software, efficient monitoring is key to achieving resiliency. Although reliability and security monitoring might use different types of analytics, the same sensing infrastructure can provide inputs to monitoring modules.
We split monitoring into two phases: logging and auditing. Logging captures data or events; it constitutes the framework’s core and is common to all monitors. Auditing analyzes data or events; it’s implemented and operated independently by each monitor. To support a range of auditing policies, logging must capture a complete view, including both actions and states of target systems. It must also provide useful, trustworthy information regarding the captured view.
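A hedged structural sketch of this split, with hypothetical interfaces rather than HyperTap's actual code: a shared logging core captures events once and fans them out to independently implemented auditors:

# Hypothetical sketch: one shared logging core, independent auditors.
class LoggingCore:
    def __init__(self):
        self.auditors = []

    def register(self, auditor):
        self.auditors.append(auditor)

    def log(self, event):
        # Logging is common to all monitors: capture once, fan out.
        for auditor in self.auditors:
            auditor.audit(event)

class HangDetector:
    def audit(self, event):
        if event.get("type") == "timer" and event.get("missed", 0) > 3:
            print("possible guest OS hang")

class RootkitDetector:
    def audit(self, event):
        if event.get("type") == "syscall_table_write":
            print("possible hidden rootkit activity")

core = LoggingCore()
core.register(HangDetector())
core.register(RootkitDetector())
core.log({"type": "syscall_table_write"})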
We applied these principles when designing HyperTap, a hypervisor-level monitoring framework for virtual machines (VMs). Unlike most VM-monitoring techniques, HyperTap employs hardware architectural invariants (hardware invariants, for short) to establish the root of trust for logging. Hardware invariants are properties defined and enforced by a hardware platform (for example, the x86 instruction set architecture). Additionally, HyperTap supports continuous, event-driven VM monitoring, which enables both capturing the system state and responding rapidly to actions of interest.
We initiate the study of a quantity that we call coordination complexity. In a distributed optimization problem, the information defining a problem instance is distributed among n parties, who each need to choose an action; jointly, these actions form a solution to the optimization problem. The coordination complexity represents the minimal amount of information that a centralized coordinator, who has full knowledge of the problem instance, needs to broadcast in order to coordinate the n parties to play a nearly optimal solution. We show that upper bounds on the coordination complexity of a problem imply the existence of good jointly differentially private algorithms for solving that problem, which in turn are known to upper bound the price of anarchy in certain games with dynamically changing populations. We show several results. We fully characterize the coordination complexity for the problem of computing a many-to-one matching in a bipartite graph. Our upper bound in fact extends much more generally to the problem of solving a linearly separable convex program. We also give a different upper bound technique, which we use to bound the coordination complexity of coordinating a Nash equilibrium in a routing game, and of computing a stable matching.
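One hedged way to formalize the quantity described above, in our own notation rather than the paper's exact definition:

% Hedged formalization, our notation: \sigma(I) is the coordinator's
% broadcast on instance I, I_i is party i's local share, and f_i maps
% the broadcast plus the local share to party i's action.
\[
  \mathrm{CC}_{\alpha}(\Pi) \;=\; \min_{\sigma,\, f_1,\dots,f_n}\; \max_{I \in \Pi}\; |\sigma(I)|
\]
\[
  \text{subject to}\quad
  \mathrm{val}_I\bigl(f_1(\sigma(I), I_1), \dots, f_n(\sigma(I), I_n)\bigr)
  \;\ge\; \alpha \cdot \mathrm{OPT}(I) \quad \text{for all } I \in \Pi .
\]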
Traditional security measures for large-scale critical infrastructure systems have focused on keeping adversaries out of the system. As the Internet of Things (IoT) extends into millions of homes, with tens or hundreds of devices each, the threat landscape becomes more complicated: IoT devices have unknown access capabilities and unknown reach into other systems. This paper presents ongoing work on how techniques in sensor verification and cyber-physical modeling and analysis of bulk power systems can be applied to identify malevolent IoT devices and secure smart and connected communities against the most impactful threats.
In today's society, even though technology is highly developed, the colorization of images has largely remained a manual process. As a carrier of human culture and art, film has existed for over a hundred years. With the development of science and technology, movies have evolved from the simple black-and-white era to the current digital age. Coloring old movies is a very complicated process. Aside from traditional hand-painting techniques, the most common method is to use post-processing software to color movie frames. This kind of operation requires extraordinary skill, patience, and aesthetic judgment, which is a great test for the operator. In recent years, the extensive use of machine learning and neural networks has made it possible for computers to process images intelligently. Since 2016, various generative adversarial network (GAN) models have been proposed, making deep learning shine in the fields of image style transfer, image coloring, and image style change. In this work, the experiment uses the generative adversarial network principle to process pictures and videos and realize the automatic colorization of old documentary films.
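As a hedged sketch of the adversarial setup described above (toy shapes and a hypothetical architecture, not the experiment's model), a conditional GAN for colorization trains a generator that maps a grayscale frame to color channels against a discriminator that judges (grayscale, color) pairs:

# Toy conditional-GAN colorization step; architecture and data are
# placeholders, not the paper's model.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 2, 3, padding=1), nn.Tanh())   # predict color (ab) channels
D = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))              # per-patch real/fake scores

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

gray = torch.rand(4, 1, 64, 64)              # batch of grayscale frames
real_ab = torch.rand(4, 2, 64, 64) * 2 - 1   # ground-truth color channels

# Discriminator step: real (gray, color) pairs vs. generated pairs.
fake_ab = G(gray).detach()
d_loss = bce(D(torch.cat([gray, real_ab], 1)), torch.ones(4, 1, 64, 64)) + \
         bce(D(torch.cat([gray, fake_ab], 1)), torch.zeros(4, 1, 64, 64))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: produce colors that fool the discriminator.
fake_ab = G(gray)
g_loss = bce(D(torch.cat([gray, fake_ab], 1)), torch.ones(4, 1, 64, 64))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()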
Outsourcing fabrication introduces security threats, namely hardware Trojans (HTs). Many design-for-trust (DFT) techniques have been proposed to address such threats. However, many HT detection techniques are ineffective because of their dependence on golden chips, the limited useful information available, and process variations. In this paper, we data-mine path delay information and propose a variation-tolerant path-delay order encoding technique to detect HTs.
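As a hedged toy sketch of the order-encoding idea (the data and details are assumptions, not the paper's algorithm): process variation shifts absolute path delays but tends to preserve their relative order, so a chip whose delay ordering disagrees with the population consensus is a Trojan suspect:

# Toy sketch: encode each chip's path delays as a relative order and
# flag chips that deviate from the consensus order. Numbers are made up.
from itertools import combinations
from collections import Counter

def order_code(delays):
    # One bit per path pair: is path i faster than path j?
    return tuple(delays[i] < delays[j] for i, j in combinations(range(len(delays)), 2))

chips = {
    "chip1": [1.00, 1.20, 1.50],   # delays (ns) of three selected paths
    "chip2": [1.05, 1.26, 1.58],   # same order despite process variation
    "chip3": [0.97, 1.17, 1.46],
    "chip4": [1.31, 1.21, 1.52],   # path 0 slowed: Trojan suspect
}

codes = {name: order_code(d) for name, d in chips.items()}
consensus, _ = Counter(codes.values()).most_common(1)[0]
suspects = [name for name, code in codes.items() if code != consensus]
print(suspects)  # ['chip4']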