Biblio
Head-mounted augmented reality (AR) enables embodied in situ drawing in three dimensions (3D). We explore 3D drawing interactions based on uninstrumented, unencumbered (bare) hands that preserve the user's ability to freely navigate and interact with the physical environment. We derive three alternative interaction techniques supporting bare-handed drawing in AR from the literature and by analysing several envisaged use cases. The three interaction techniques are evaluated in a controlled user study examining three distinct drawing tasks: planar drawing, path description, and 3D object reconstruction. The results indicate that continuous freehand drawing supports faster line creation than the control point based alternatives, although with reduced accuracy. User preferences for the different techniques are mixed and vary considerably between the different tasks, highlighting the value of diverse and flexible interactions. The combined effectiveness of these three drawing techniques is illustrated in an example application of 3D AR drawing.
Complex software is built by composing components implementing largely independent blocks of functionality. However, once the sources are compiled into an executable, that modularity is lost. This is unfortunate for code recipients, for whom knowing the components has many potential benefits, such as improved program understanding for reverse-engineering, identifying shared code across different programs, binary code reuse, and authorship attribution. We propose a novel approach for decomposing such source-free program executables into components. Given an executable, the approach first statically builds a decomposition graph, where nodes are functions and edges capture three types of relationships: code locality, data references, and function calls. It then applies a graph-theoretic approach to partition the functions into disjoint components. A prototype implementation, BCD, demonstrates the approach's efficacy: evaluated on 25 C++ binary programs with the goal of recovering the methods belonging to each class, BCD achieves high precision and recall scores.
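To make the decomposition-graph idea concrete, the following is a minimal sketch, not the authors' BCD implementation: nodes are functions, weighted edges stand in for the three relationship types, and an off-the-shelf modularity-based partitioning plays the role of the graph-theoretic step. The function names, edge weights, and the choice of networkx's greedy modularity algorithm are all illustrative assumptions.

```python
# Minimal sketch of a decomposition graph (illustrative, not BCD's code).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
# Nodes are functions recovered from the executable (names assumed).
functions = ["A::ctor", "A::run", "B::ctor", "B::helper"]
G.add_nodes_from(functions)

# Edges capture the three relationship types, each with a heuristic weight.
G.add_edge("A::ctor", "A::run", weight=3.0)      # code locality (adjacent in .text)
G.add_edge("A::run", "B::helper", weight=1.0)    # function call
G.add_edge("B::ctor", "B::helper", weight=2.0)   # shared data reference

# Graph-theoretic partitioning into disjoint components (here: modularity-based).
for component in greedy_modularity_communities(G, weight="weight"):
    print(sorted(component))
```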
Community detection in complex networks is a fundamental problem that attracts much attention across various disciplines. Previous studies have mostly focused on external connections between nodes (i.e., topology structure) in the network while largely ignoring the internal intricacies (i.e., local behavior) of each node. A pair of nodes without any interaction can still share similar internal behaviors. For example, in an enterprise information network, compromised computers controlled by the same intruder often demonstrate similar abnormal behaviors even if they do not connect with each other. In this paper, we study the problem of community detection in enterprise information networks, where large-scale internal events and external events coexist on each host. The discovered host communities, capturing behavioral affinity, can benefit many comparative analysis tasks such as host anomaly assessment. In particular, we propose a novel community detection framework to identify behavior-based host communities in enterprise information networks, purely based on large-scale heterogeneous event data. We further propose an efficient method for assessing a host's anomaly level by leveraging the detected host communities. Experimental results on enterprise networks demonstrate the effectiveness of our model.
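As a rough illustration of behavior-based grouping, not the paper's actual framework, the sketch below summarizes each host by counts of assumed event types, normalizes the profiles, and clusters them, so hosts with similar behavior land in one community even without a direct connection. The feature set and the use of k-means are assumptions for exposition.

```python
# Illustrative sketch: hosts grouped by behavior profile, not by topology.
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.cluster import KMeans

# Rows: hosts; columns: counts of heterogeneous event types (assumed features,
# e.g., process starts, file writes, outbound connections, DNS queries).
events = np.array([
    [120,  3, 45, 10],   # host0
    [118,  4, 44, 12],   # host1 -- behaves like host0, no direct link needed
    [  5, 90,  2, 60],   # host2
    [  6, 88,  1, 58],   # host3
], dtype=float)

profiles = normalize(events)            # compare behavior shapes, not volumes
labels = KMeans(n_clusters=2, n_init=10).fit_predict(profiles)
print(labels)                           # e.g., [0 0 1 1]: two behavior communities
```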
We propose a process algebraic calculus to formally model Intrusion Detection Systems (IDS) for secure routing in Mobile Ad-hoc Networks. The proposed calculus, named dRi, is an extension of the Distributed pi calculus (Dpi). The calculus models unicast, multicast, and broadcast communication, node mobility, energy conservation at nodes, and detection of malicious nodes in Mobile Ad-hoc Networks. The calculus has two syntactic categories: one for describing nodes and another for the processes which reside in nodes. We also present two views of semantic reduction: one as reduction on configurations, and another as Labelled Transition Systems (LTSs), a behavioural semantics in which reductions on configurations are described through various actions. We present an example described using LTSs to show the capability of the proposed calculus. We define a bisimulation-based equivalence between configurations. Further, we define a touchstone equivalence on its reduction semantics and present a proof outline showing that the bisimulation-based equivalence can be recovered from the touchstone equivalence and vice versa.
Cyber-Physical Systems (CPSs) are engineered systems seamlessly integrating computational algorithms and physical components. CPS advances offer numerous benefits to domains such as health, transportation, smart homes, and manufacturing. Despite these advances, the overall cybersecurity posture of CPS devices remains unclear. In this paper, we show how to improve CPS resiliency by evaluating and comparing the accuracy and scalability of two popular vulnerability assessment tools, Nessus and OpenVAS. Accuracy and suitability are evaluated with a diverse sample of pre-defined vulnerabilities in Industrial Control Systems (ICS), smart cars, smart home devices, and a smart water system. Scalability is evaluated using a large-scale vulnerability assessment of 1,000 Internet-accessible CPS devices found on Shodan, the search engine for the Internet of Things (IoT). Assessment results indicate that several CPS devices from major vendors suffer from critical vulnerabilities such as unsupported operating systems, OpenSSH vulnerabilities allowing unauthorized information disclosure, and PHP vulnerabilities susceptible to denial-of-service attacks.
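For a sense of how such Internet-accessible devices are located, here is a hedged sketch using Shodan's Python client; the query string is an illustrative guess at an ICS search (Modbus on port 502), the API key is a placeholder, and any active vulnerability scanning of the discovered hosts requires authorization.

```python
# Hedged sketch: discovering candidate CPS devices via Shodan (illustrative).
import shodan

api = shodan.Shodan("YOUR_API_KEY")            # placeholder key from shodan.io
results = api.search('port:502 "Modbus"')      # assumed ICS discovery query
for match in results["matches"][:5]:
    print(match["ip_str"], match.get("org", "unknown org"))
```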
To manage cybersecurity risks in practice, a simple yet effective method to assess such risks for individual systems is needed. With time-to-compromise (TTC), McQueen et al. (2005) introduced such a metric: it measures the expected time that a system remains uncompromised given a specific threat landscape. Unlike other approaches that require complex system modeling to proceed, TTC combines simplicity with expressiveness and has therefore evolved into one of the most successful cybersecurity metrics in practice. We revisit TTC and identify several mathematical and methodological shortcomings, which we address by embedding all aspects of the metric in the continuous domain and by incorporating information about vulnerability characteristics and other cyber threat intelligence into the model. We propose $\beta$-TTC, a formal extension of TTC which includes information from CVSS vectors as well as a continuous attacker skill based on a $\beta$-distribution. We show that our new metric (1) remains simple enough for practical use and (2) gives more realistic predictions than the original TTC, using data from a modern, productively used vulnerability database of a national CERT.
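A toy Monte Carlo sketch of the skill idea follows; it is not the paper's $\beta$-TTC formulas. We assume a simple model in which attacker skill $s \sim \mathrm{Beta}(a, b)$ scales down per-vulnerability baseline compromise times, which in practice would be derived from CVSS data.

```python
# Toy Monte Carlo sketch of beta-distributed attacker skill (assumed model,
# not the paper's beta-TTC formulas). Skill s in [0, 1] scales down each
# vulnerability's baseline compromise time; the fastest vulnerability
# determines the system's time-to-compromise.
import numpy as np

rng = np.random.default_rng(0)
a, b = 2.0, 5.0                              # Beta(a, b): mostly low-to-medium skill
base_days = np.array([30.0, 90.0, 14.0])     # per-vulnerability baselines (assumed)

skills = rng.beta(a, b, size=100_000)
# Assumed scaling: a maximally skilled attacker (s = 1) needs 10% of baseline.
ttc = (base_days[None, :] * (1.0 - 0.9 * skills[:, None])).min(axis=1)
print(f"expected TTC ~ {ttc.mean():.1f} days")
```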
The celebrated Nakamoto consensus protocol [16] ushered in several new consensus applications including cryptocurrencies. A few recent works [7, 17] have analyzed important properties of blockchains, including, most significantly, consistency, which is a guarantee that all honest parties output the same sequence of blocks throughout the execution of the protocol. To establish consistency, the prior analysis of Pass, Seeman and Shelat [17] required a careful counting of certain combinatorial events that was difficult to apply to variations of Nakamoto. The work of Garay, Kiayias, and Leonardos [7] provides another method of analyzing the blockchain under the simplifying assumption that the network is synchronous. The contribution of this paper is the development of a simple Markov-chain based method for analyzing consistency properties of blockchain protocols. The method includes a formal way of stating strong concentration bounds as well as easy ways to concretely compute the bounds. We use our new method to answer a number of basic questions about the consistency of blockchains: our new analysis provides a tighter guarantee on the consistency property of Nakamoto's protocol, including for parameter regimes which [17] could not consider; we analyze a family of delaying attacks first presented in [17] and extend them to other protocols; we analyze how long a participant should wait before considering a high-value transaction "confirmed"; we analyze the consistency of CliqueChain, a variation of the Chainweb [14] system; we provide the first rigorous consistency analysis of GHOST [20] and also analyze a folklore "balancing" attack. In each case, we use our framework to experimentally analyze the consensus bounds for various network delay parameters and adversarial computing percentages. We hope our techniques enable authors of future blockchain proposals to provide a more rigorous analysis of their schemes.
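To make the "how long to wait" question concrete, the snippet below computes Nakamoto's classic catch-up probability (the whitepaper's own calculation, not this paper's Markov-chain method): the chance that an attacker controlling a fraction q of the hash power ever overtakes a z-block lead.

```python
# Nakamoto's catch-up probability (whitepaper, section 11).
from math import exp, factorial

def catch_up_probability(q: float, z: int) -> float:
    """P(attacker with hash share q ever overtakes a z-block lead)."""
    p = 1.0 - q
    lam = z * q / p                      # expected attacker progress
    total = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam**k / factorial(k)
        total -= poisson * (1.0 - (q / p) ** (z - k))
    return total

for z in (1, 3, 6, 12):
    print(z, f"{catch_up_probability(0.10, z):.6f}")
```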
In-network caching is a feature shared by all proposed Information Centric Networking (ICN) architectures as it is critical to achieving more efficient retrieval of content. However, the default "cache everything everywhere" universal caching scheme has caused the emergence of several privacy threats. Timing attacks are one such privacy breach, where attackers can probe caches and use timing analysis of data retrievals to identify whether content was retrieved from the data source or from the cache, the latter case implying that this content was requested recently. We have previously proposed a betweenness centrality based caching strategy to mitigate such attacks by increasing user anonymity, and we demonstrated its efficacy in a transit-stub topology. In this paper, we further investigate the effect of betweenness centrality based caching on cache privacy and user anonymity in more general synthetic and real-world Internet topologies. It has also been shown that an attacker with access to multiple compromised routers can locate and track a mobile user by carrying out multiple timing analysis attacks from various parts of the network. We extend our privacy evaluation to a scenario with mobile users and show that a betweenness centrality based caching policy provides a mobile user with path privacy by increasing an attacker's difficulty in locating a moving user or identifying his/her route.
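A minimal sketch of the policy's mechanics follows; the topology, threshold, and helper function are illustrative assumptions, not our evaluation code. Content is cached only at high-betweenness routers along the delivery path rather than everywhere, which narrows what a timing probe at an arbitrary edge cache can reveal about nearby users.

```python
# Sketch: betweenness centrality based cache placement (illustrative).
import networkx as nx

G = nx.barabasi_albert_graph(50, 2, seed=1)   # stand-in for an ISP topology
centrality = nx.betweenness_centrality(G)

def caches_on_path(path, top_fraction=0.2):
    """Return the routers on the delivery path allowed to cache the content."""
    cutoff = sorted(centrality.values(), reverse=True)[int(top_fraction * len(G))]
    return [node for node in path if centrality[node] >= cutoff]

path = nx.shortest_path(G, source=0, target=37)
print("delivery path:", path)
print("caching nodes:", caches_on_path(path))
```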
With the arrival of the Internet of Things (IoT), more devices appear online with default credentials or without proper security protocols. Consequently, we have seen a rise of powerful DDoS attacks originating from IoT devices in recent years. In most cases the devices were infected by bot malware through the telnet protocol, which has led to several honeypot studies on telnet-based attacks. However, IoT installations also involve other protocols, for example for Machine-to-Machine communication, and those protocols often provide little security by default. In this paper, we present a measurement study of attacks against or based on those protocols. To this end, we use data obtained from a /15 network telescope and three honeypots with 15 IPv4 addresses. We find that telnet-based malware is still widely used and that infected devices are employed not only for DDoS attacks but also for cryptocurrency mining. We also see, although at a much lower frequency, that attackers are looking for IoT-specific services using MQTT, CoAP, UPnP, and HNAP, and that they target vulnerabilities of routers and cameras over HTTP.
The need for security continues to grow in distributed computing. Today's security solutions require greater scalability and convenience in cloud-computing architectures, in addition to the ability to store and process larger volumes of data to address very sophisticated attacks. This paper explores some of the existing architectures for big data intelligence analytics and proposes an architecture that promises to provide greater security for data-intensive environments. The architecture is designed to leverage the wealth of multi-source information for security intelligence.
Compile-time specialization and feature pruning through static binary rewriting have been proposed repeatedly as techniques for reducing the attack surface of large programs and for minimizing the trusted computing base. We propose a new approach to attack surface reduction: dynamic binary lifting and recompilation. We present BinRec, a binary recompilation framework that lifts binaries to a compiler-level intermediate representation (IR) to allow complex transformations on the captured code. After transformation, BinRec lowers the IR back to a "recovered" binary, which is semantically equivalent to the input binary but has its unnecessary features removed. Unlike existing approaches, which are mostly based on static analysis and rewriting, our framework analyzes and lifts binaries dynamically. The crucial advantage is that we can not only observe the full program, including all of its dependencies, but also determine which program features the end-user actually uses. We evaluate the correctness and performance of BinRec and show that our approach enables aggressive pruning of unwanted features in COTS binaries.
We present improved protocols for the conversion of secret-shared bit-vectors into secret-shared integers and vice versa, for use as subroutines in secure multiparty computation (SMC) protocols and in protocols verifying the adherence of parties to prescribed SMC protocols. The protocols are primarily designed for three-party computation with an honest majority. We evaluate our protocols as part of the Sharemind three-party protocol set and see a general reduction in verification overheads, thereby increasing the practicality of covertly or actively secure Sharemind protocols.
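As a toy illustration of the bits-to-integer direction (not Sharemind's actual protocol), assume each bit is already additively shared modulo a prime among the three parties; composing them into a shared integer is then a purely local weighted sum, since $\sum_i 2^i [b_i] = [\sum_i 2^i b_i]$. The nontrivial part, which the improved protocols address, is obtaining such arithmetic sharings from XOR-shared bit-vectors in the first place.

```python
# Toy sketch: local composition of additively shared bits into a shared integer.
import random

P = 2**61 - 1          # field modulus (assumed)
N_PARTIES, N_BITS = 3, 8

def share(x):
    """Split x into N_PARTIES additive shares modulo P."""
    parts = [random.randrange(P) for _ in range(N_PARTIES - 1)]
    return parts + [(x - sum(parts)) % P]

bits = [1, 0, 1, 1, 0, 0, 1, 0]                     # LSB first -> value 77
shared_bits = [share(b) for b in bits]

# Each party locally combines its bit shares into one integer share.
int_shares = [sum((1 << i) * shared_bits[i][party] for i in range(N_BITS)) % P
              for party in range(N_PARTIES)]
print(sum(int_shares) % P)                          # reconstructs 77
```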
Least Significant Bit (LSB) embedding, one of the existing steganography methods, is widely used because it is easy to apply, but it has the weakness that the hidden message is too easy to decode: in LSB, the message is embedded evenly across all pixels of an image. This paper introduces a steganography method that combines LSB with the Fuzzy C-Means (FCM) clustering method, abbreviated LSB\_FCM, and compares the resulting stego images with those of plain LSB. Each image is divided into two clusters, the cluster with the larger capacity is chosen, and the coordinates of that cluster are saved as the key identifying where the message is embedded. The key serves as the reference when decoding the message, and each image has its own cluster capacity key. LSB\_FCM has the disadvantage of limited embedding capacity, but it is harder to decrypt than plain LSB: the message is embedded randomly in the pixels of the best cluster of an image, so decryption requires possession of the image's cluster coordinate key. Evaluation results show that the MSE and PSNR values of LSB\_FCM are similar to those of plain LSB, meaning that LSB\_FCM can produce stego images as imperceptible as plain LSB while offering better security of the embedding locations.
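The embedding step can be sketched as follows. Here `coords` stands in for the output of the FCM step (the pixel coordinates of the larger cluster, kept as the secret key); it is a placeholder list in this sketch. Plain LSB would instead write to pixels in raster order, which is exactly what makes it easy to decode.

```python
# Sketch of LSB embedding at cluster-selected coordinates (illustrative).
import numpy as np

def embed(image: np.ndarray, message: bytes, coords) -> np.ndarray:
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(coords), "cluster too small for this message"
    stego = image.copy()
    for bit, (y, x) in zip(bits, coords):
        stego[y, x] = (stego[y, x] & 0xFE) | bit   # overwrite least significant bit
    return stego

def extract(stego: np.ndarray, n_bytes: int, coords) -> bytes:
    bits = [int(stego[y, x]) & 1 for (y, x) in coords[:8 * n_bytes]]
    return bytes(sum(b << i for i, b in enumerate(bits[k:k + 8]))
                 for k in range(0, len(bits), 8))

img = np.zeros((64, 64), dtype=np.uint8)
coords = [(y, x) for y in range(8) for x in range(8)]   # stand-in for FCM output
print(extract(embed(img, b"hi", coords), 2, coords))    # b'hi'
```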
Although various techniques have been proposed to generate adversarial samples for white-box attacks on text, little attention has been paid to a black-box attack, which is a more realistic scenario. In this paper, we present a novel algorithm, DeepWordBug, to effectively generate small text perturbations in a black-box setting that forces a deep-learning classifier to misclassify a text input. We develop novel scoring strategies to find the most important words to modify such that the deep classifier makes a wrong prediction. Simple character-level transformations are applied to the highest-ranked words in order to minimize the edit distance of the perturbation. We evaluated DeepWordBug on two real-world text datasets: Enron spam emails and IMDB movie reviews. Our experimental results indicate that DeepWordBug can reduce the classification accuracy from 99% to 40% on Enron and from 87% to 26% on IMDB. Our results strongly demonstrate that the generated adversarial sequences from a deep-learning model can similarly evade other deep models.
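The sketch below shows the shape of the attack, not the actual DeepWordBug scoring strategies: in the real algorithm the word scores are computed by querying the target classifier in a black-box fashion, whereas here `score_word` is a stand-in stub, and each edit is a single adjacent-character swap so the total edit distance stays small.

```python
# Illustrative black-box text perturbation in the spirit of DeepWordBug.
import random

def score_word(words, i):
    """Stub: the real attack ranks words by how much removing or masking
    word i changes the target model's output probability."""
    return len(words[i])                 # assumption: longer = more important

def swap_chars(word):
    """Single adjacent-character swap, keeping edit distance minimal."""
    if len(word) < 3:
        return word
    i = random.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def perturb(sentence, n_edits=2):
    words = sentence.split()
    ranked = sorted(range(len(words)), key=lambda i: score_word(words, i),
                    reverse=True)
    for i in ranked[:n_edits]:           # edit only the highest-ranked words
        words[i] = swap_chars(words[i])
    return " ".join(words)

random.seed(0)
print(perturb("this movie was absolutely wonderful"))
```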
This paper considers a pilot spoofing attack scenario in a massive MIMO system. A malicious user tries to disturb the channel estimation process by sending interference symbols to the base-station (BS) via the uplink, while a legitimate user counters by sending random symbols. The BS possesses no a priori partial channel state information (CSI) and no knowledge of the distribution of the symbols sent by the malicious user. For this scenario, the paper aims to separate the channel directions from the legitimate and malicious users to the BS. A blind channel separation algorithm based on estimating the characteristic function of the distribution of the signal space vector is proposed. Simulation results show that the proposed algorithm provides good channel separation performance in a typical massive MIMO system.
With the widespread deployment of Control-Flow Integrity (CFI), control-flow hijacking attacks, and consequently code reuse attacks, are significantly more difficult. CFI limits control flow to well-known locations, severely restricting arbitrary code execution. Assessing the remaining attack surface of an application under advanced control-flow hijack defenses such as CFI and shadow stacks remains an open problem. We introduce BOPC, a mechanism to automatically assess whether an attacker can execute arbitrary code on a binary hardened with CFI/shadow stack defenses. BOPC computes exploits for a target program from payload specifications written in a Turing-complete, high-level language called SPL that abstracts away architecture and program-specific details. SPL payloads are compiled into a program trace that executes the desired behavior on top of the target binary. The input for BOPC is an SPL payload, a starting point (e.g., from a fuzzer crash) and an arbitrary memory write primitive that allows application state corruption. To map SPL payloads to a program trace, BOPC introduces Block Oriented Programming (BOP), a new code reuse technique that utilizes entire basic blocks as gadgets along valid execution paths in the program, i.e., without violating CFI or shadow stack policies. We find that the problem of mapping payloads to program traces is NP-hard, so BOPC first reduces the search space by pruning infeasible paths and then uses heuristics to guide the search to probable paths. BOPC encodes the BOP payload as a set of memory writes. We execute 13 SPL payloads applied to 10 popular applications. BOPC successfully finds payloads and complex execution traces – which would likely not have been found through manual analysis – while following the target's Control-Flow Graph under an ideal CFI policy in 81% of the cases.
The core technology underlying the Bitcoin cryptocurrency, Blockchain, has become a boon in an era in which everything is gradually being digitized. Bitcoin creates virtual money using a Blockchain that has become popular all over the world. A Blockchain is a shared public ledger that includes all confirmed transactions, and it is almost impossible to tamper with the information recorded in its blocks. However, there are certain security and technical challenges, such as scalability, privacy leakage, and selfish mining, which hamper the wide application of Blockchain. In this paper, we briefly discuss this emerging technology and provide in-depth insight into Blockchain technology.
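A minimal hash-chain sketch of the shared-ledger idea (illustrative only, not Bitcoin's actual block format): each block commits to the hash of its predecessor, so altering any confirmed transaction changes every later block's hash, which is what makes tampering detectable.

```python
# Minimal hash-chain sketch of a blockchain ledger (illustrative).
import hashlib, json

def make_block(prev_hash: str, transactions: list) -> dict:
    body = {"prev": prev_hash, "txs": transactions}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

genesis = make_block("0" * 64, ["coinbase -> alice"])
block1 = make_block(genesis["hash"], ["alice -> bob: 5"])
print(block1["hash"][:16], "links back to", genesis["hash"][:16])
```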
Peer-to-Peer (P2P) is a dynamic and self-organized technology, popularly used in file-sharing applications to achieve better performance and to avoid single points of failure. The popularity of such networks has attracted many attackers mounting different attacks, including the Sybil attack, the Routing Table Insertion (RTI) attack, and free riding. Many mitigation methods have been proposed to defend against or reduce the impact of such attacks, but most of those approaches are protocol specific. In this work, we propose a Blockchain-based security framework for P2P networks that addresses these security issues and can be tailored to any P2P file-sharing system.