Biblio
Mining is the foundation of blockchain-based cryptocurrencies such as Bitcoin, rewarding miners for finding blocks of new transactions. The Monero currency enables mining with standard hardware, in contrast to the specialized hardware (ASICs) commonly used for Bitcoin, paving the way for in-browser mining as a new revenue model for website operators. In this work, we study the prevalence of this new phenomenon. We identify and classify mining websites in 138M domains and present a new fingerprinting method which finds up to a factor of 5.7 more miners than publicly available block lists. Our work identifies and dissects Coinhive as the major browser-mining stakeholder. Further, we present a new method to associate mined blocks in the Monero blockchain with mining pools and uncover that Coinhive currently contributes 1.18% of mined blocks, having turned over 1293 Moneros in June 2018.
This study discusses the results and findings of an augmented reality navigation app built from vector data uploaded to an online mapping service for indoor navigation. The main objective of this research is to address the shortcomings of indoor navigation solutions that rely on GPS signals, as these signals are sparse inside buildings. The data was uploaded as GeoJSON files to MapBox, which relayed it to the app through an API in the form of tilesets. The application converted the tilesets to a miniaturized map, calculated the navigation path, and then overlaid the navigation line onto the floor via the camera. Once the project setup was complete, multiple navigation paths were tested numerous times between the different sync points and destination rooms, and their accuracy, ease of access, and several other factors, along with their issues, were recorded. The testing revealed that the navigation system was not only accurate despite the lack of GPS signal but also detected device motion precisely. Furthermore, the system did not take much time to generate the navigation path, as the app processed the data tile by tile. The application was also able to accurately measure the ground plane along with the walls, perfectly overlaying the navigation line. However, a few observations indicated various factors affecting the accuracy of the navigation, and testing revealed areas where major improvements can be made to both accuracy and ease of access.
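The vector data described above is uploaded in the GeoJSON format before MapBox serves it back as tilesets. A minimal sketch of what such an indoor-corridor feature could look like is shown below; the coordinates, property names, and labels are illustrative assumptions, not taken from the study.

```python
import json

# Hypothetical indoor corridor segment expressed as a GeoJSON LineString,
# the vector format uploaded to MapBox as the basis for a tileset.
# Coordinates are [longitude, latitude]; the properties are illustrative.
corridor = {
    "type": "Feature",
    "geometry": {
        "type": "LineString",
        "coordinates": [
            [-73.9857, 40.7484],   # sync point (e.g. building entrance)
            [-73.9855, 40.7486],   # hallway junction
            [-73.9853, 40.7488],   # destination room
        ],
    },
    "properties": {"floor": 2, "segment": "hallway-A"},
}

feature_collection = {"type": "FeatureCollection", "features": [corridor]}
geojson_text = json.dumps(feature_collection, indent=2)
print(geojson_text.splitlines()[0])  # "{"
```

A navigation path between a sync point and a destination room would then be computed over a set of such connected segments.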
Machine learning is widely used in security-sensitive settings like spam and malware detection, although it has been shown that malicious data can be carefully modified at test time to evade detection. To overcome this limitation, adversary-aware learning algorithms have been developed, exploiting robust optimization and game-theoretical models to incorporate knowledge of potential adversarial data manipulations into the learning algorithm. Although these techniques have been shown to be effective in some adversarial learning tasks, their adoption in practice is hindered by several factors, including the difficulty of meeting specific theoretical requirements, the complexity of implementation, and scalability issues in terms of the computational time and space required during training. In this work, we aim to develop secure kernel machines against evasion attacks that are not computationally more demanding than their non-secure counterparts. In particular, leveraging recent work on robustness and regularization, we show that the security of a linear classifier can be drastically improved by selecting a proper regularizer, depending on the kind of evasion attack, as well as by unbalancing the cost of classification errors. We then discuss the security of nonlinear kernel machines and show that a proper choice of the kernel function is crucial. We also show that unbalancing the cost of classification errors and varying some kernel parameters can further improve classifier security, yielding decision functions that better enclose the legitimate data. Our results on spam and PDF malware detection corroborate our analysis.
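The intuition behind matching the regularizer to the attack can be sketched with a toy linear classifier: against a sparse evasion attack that modifies at most one feature by a bounded amount, the worst-case change of the score f(x) = w·x is governed by the largest weight, so spreading the weight mass more evenly limits the attacker's influence. The weight vectors below are illustrative assumptions, not the paper's models.

```python
# Sketch: against an attacker who can change at most one feature by up to
# `budget`, the worst-case shift of a linear score f(x) = w.x is
# budget * max|w_i|. Evenly spread weights (same L1 norm) are more robust.
def worst_case_score_shift(w, budget):
    """Largest |f(x') - f(x)| achievable by perturbing a single feature."""
    return budget * max(abs(wi) for wi in w)

w_peaked = [4.0, 0.0, 0.0, 0.0]  # weight mass concentrated on one feature
w_flat   = [1.0, 1.0, 1.0, 1.0]  # same L1 norm, evenly spread weights

print(worst_case_score_shift(w_peaked, budget=1.0))  # 4.0
print(worst_case_score_shift(w_flat, budget=1.0))    # 1.0
```

In other words, a regularizer that discourages peaked weight vectors shrinks the per-feature leverage available to a sparse evasion attack.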
This paper reviews the definitions and characteristics of military effects, the Internet of Battlefield Things (IoBT), and their impact on decision processes in a Multi-Domain Operations (MDO) environment. Aspects of contemporary military decision processes are illustrated, and an MDO Effect Loop decision process is introduced. We examine the concept of IoBT effects and their implications in MDO. These implications suggest that, when considering MDO as a doctrine, the technological advances of IoBTs empower enhancements in decision frameworks and increase the viability of novel operational approaches and options for military effects.
Notable recent security incidents have generated intense interest in adversaries which attempt to subvert, perhaps covertly, cryptographic algorithms. In this paper we develop (IND-CPA) semantically secure encryption in this challenging setting. This fundamental encryption primitive has been previously studied in the "kleptographic setting," though existing results must relax the model by introducing trusted components or otherwise constraining the subversion power of the adversary: designing a public-key system that is kleptographically semantically secure (with minimal trust) has remained elusive to date. In this work, we finally achieve such systems, even when all relevant cryptographic algorithms are subject to adversarial (kleptographic) subversion. To this end we exploit novel inter-component randomized cryptographic checking techniques (with an offline checking component), combined with common and simple software engineering modular programming techniques (applied at the system's black-box specification level). Moreover, our methodology yields a strong generic technique for the preservation of any semantically secure cryptosystem when incorporated into the strong kleptographic adversary setting.
Delay-Tolerant Networks exhibit highly asynchronous connections, often routed over many mobile hops before reaching their intended destination. The Bundle Security Protocol has been standardized, providing properties such as authenticity, integrity, and confidentiality of bundles using traditional public-key cryptography. Other protocols based on identity-based cryptography have been proposed to reduce the key distribution overhead. However, in both schemes, secret keys are usually valid for several months. Thus, a secret key extracted from a compromised node allows for decryption of past communications since its creation. We solve this problem and propose the first forward-secure protocol for Delay-Tolerant Networking. To this end, we apply the puncturable encryption construction designed by Green and Miers, integrate it into the Bundle Security Protocol, and adapt its parameters for different highly asynchronous scenarios. Finally, we provide performance measurements and discuss their impact.
Cloud computing provides customers with enormous compute power and storage capacity, allowing them to deploy their computation and data-intensive applications without having to invest in infrastructure. Many firms use cloud computing as a means of relocating and maintaining resources outside of their enterprise, regardless of the cloud server's location. However, storing data in the cloud raises a number of issues related to data loss, accountability, and security. Such concerns have become a major barrier to the adoption of cloud services by users. Cloud computing offers large-scale storage for internet users, billed according to the usage of the facilities provided. Protecting the privacy of a user's data is a challenge, as the internal operations of the service providers cannot be inspected by the users. Hence, it becomes necessary to monitor the usage of the client's data in the cloud. In this research, we suggest an effective cloud storage solution for accessing patient medical records across hospitals in different countries while maintaining data security and integrity. In the suggested system, multifactor authentication for user login to the cloud and homomorphic encryption for data storage with integrity verification have been implemented effectively. To illustrate the efficacy of the proposed strategy, an experimental investigation was conducted.
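The homomorphic encryption mentioned above allows the cloud to compute on data it cannot read. A toy sketch of one standard additively homomorphic scheme (Paillier) illustrates the idea; the parameters are tiny and fixed for demonstration only, and the abstract does not specify which homomorphic scheme the system actually uses.

```python
from math import gcd

# Toy Paillier cryptosystem: multiplying two ciphertexts yields an
# encryption of the SUM of their plaintexts, so the cloud can aggregate
# encrypted values without decrypting them. Real deployments use large
# random primes; these values are for illustration only.
p, q = 17, 19
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)            # decryption constant

def encrypt(m, r):
    assert gcd(r, n) == 1                      # r must be coprime with n
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 42, 100
ca, cb = encrypt(a, 23), encrypt(b, 29)
print(decrypt((ca * cb) % n2))  # 142 -- the sum, computed under encryption
```

Integrity verification would be layered on top of such ciphertexts, e.g. via signed hashes, which the abstract leaves unspecified.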
This work introduces concepts and algorithms, along with a case study validating them, to enhance event detection, pattern recognition, and anomaly identification in real-life video surveillance. The work is motivated by the observation that human behavioral patterns generally evolve and adapt continuously over time rather than being static. First, limitations of existing work with respect to this phenomenon are identified. Accordingly, the notion and algorithms of Dynamic Clustering are introduced to overcome these drawbacks. Correspondingly, we propose maintaining two separate sets of data in parallel, namely the Normal Plane and the Anomaly Plane, to achieve the task of learning continuously. The practicability of the proposed algorithms in a real-life scenario is demonstrated through a case study. The analysis presented in this work shows that a more comprehensive analysis, closely following human perception, can be accomplished by incorporating the proposed notions and algorithms into video surveillance.
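The two-plane idea can be sketched in miniature: observations close to an existing behaviour cluster update the Normal Plane (letting clusters drift with evolving behaviour), while everything else is routed to the Anomaly Plane. This is our reading of the concept, not the paper's algorithm; the 1-D events, threshold, and adaptation rate are made up.

```python
# Minimal sketch of maintaining two parallel data sets: a Normal Plane
# of drifting cluster centroids and an Anomaly Plane of outlying events.
normal_plane = [10.0, 50.0]      # 1-D centroids of known "normal" behaviour
anomaly_plane = []
THRESHOLD = 5.0                  # max distance to count as normal
ALPHA = 0.2                      # centroid adaptation rate

def observe(x):
    i, d = min(enumerate(abs(c - x) for c in normal_plane), key=lambda t: t[1])
    if d <= THRESHOLD:
        normal_plane[i] += ALPHA * (x - normal_plane[i])  # drift with behaviour
    else:
        anomaly_plane.append(x)                           # flag for analysis

for event in [11.0, 49.0, 12.0, 90.0]:
    observe(event)

print(anomaly_plane)  # [90.0]
```

Because the centroids keep moving, gradual behavioural change stays in the Normal Plane while abrupt deviations surface in the Anomaly Plane.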
The domain name system (DNS) offers an ideal distributed database for big data mining related to different cyber security questions. Besides infrastructural problems, scalability issues, and security challenges related to the protocol itself, information from DNS is often also required for more nuanced cyber security questions. Against this backdrop, this paper discusses the fundamental characteristics of DNS in relation to cyber security, along with different research prototypes designed for passive but continuous DNS-based monitoring of domains and addresses. With this discussion, the paper also illustrates a few general software design aspects.
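A common core of passive DNS monitoring, as described in surveys of the technique, is aggregating observed (name, record) pairs with first-seen and last-seen timestamps so domain-to-address history can be mined later. The sketch below shows that data structure; the observation tuples are fabricated, and the paper's prototypes may store more fields.

```python
# Sketch of a passive-DNS aggregation table: each observed
# (query name, record data) pair keeps first/last-seen timestamps and a
# hit count, turning a stream of observations into minable history.
pdns = {}  # (qname, rdata) -> {"first_seen": ts, "last_seen": ts, "count": n}

def record(qname, rdata, ts):
    key = (qname, rdata)
    entry = pdns.setdefault(key, {"first_seen": ts, "last_seen": ts, "count": 0})
    entry["last_seen"] = max(entry["last_seen"], ts)
    entry["count"] += 1

record("example.com", "93.184.216.34", 100)
record("example.com", "93.184.216.34", 250)
record("example.com", "198.51.100.7", 300)  # an address change becomes a new row

print(len(pdns))  # 2 distinct (name, address) pairs
```

Queries over such a table (e.g. "which addresses has this domain ever pointed to?") are the basis of the monitoring the paper discusses.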
This exploratory empirical paper investigates whether the sharing of unique malware files between domains is empirically associated with the sharing of Internet Protocol (IP) addresses and the sharing of normal, non-malware files. Utilizing a graph-theoretical approach with a web crawling dataset from F-Secure, however, the paper finds no robust statistical associations. Unlike what might be expected from the continuing popularity of shared hosting services, the sharing of IP addresses through the domain name system (DNS) seems to neither increase nor decrease the sharing of malware files. In addition to these exploratory empirical results, the paper contributes to the field of DNS mining by elaborating graph-theoretical representations that are applicable to analyzing different network forensics problems.
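The graph-theoretical setup can be sketched as projecting two bipartite graphs (domain-to-IP and domain-to-file) onto domain pairs and checking whether IP sharing co-occurs with malware-file sharing. The data below is fabricated for illustration and the paper's actual statistics are more elaborate.

```python
from itertools import combinations

# Sketch: for every pair of domains, test whether sharing an IP address
# co-occurs with sharing a malware file. Toy data, for illustration only.
domain_ips = {"a.com": {"1.2.3.4"}, "b.com": {"1.2.3.4"}, "c.com": {"5.6.7.8"}}
domain_malware = {"a.com": {"hash1"}, "b.com": set(), "c.com": {"hash1"}}

shared_ip, shared_both = 0, 0
for d1, d2 in combinations(domain_ips, 2):
    if domain_ips[d1] & domain_ips[d2]:          # pair shares an IP address
        shared_ip += 1
        if domain_malware[d1] & domain_malware[d2]:  # ...and a malware file
            shared_both += 1

print(shared_ip, shared_both)  # 1 0
```

Counting such co-occurrences over a full crawl dataset, and comparing them against what chance would predict, is the kind of association test the paper reports as finding no robust signal.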
We consider a compositional construction of approximate abstractions of interconnected control systems. In our framework, an abstraction acts as a substitute in the controller design process and is itself a continuous control system. The abstraction is related to the concrete control system via a so-called simulation function: a Lyapunov-like function, which is used to establish a quantitative bound between the behavior of the approximate abstraction and the concrete system. In the first part of the paper, we provide a small-gain type condition that facilitates the compositional construction of an abstraction of an interconnected control system, together with a simulation function, from the abstractions and simulation functions of the individual subsystems. In the second part of the paper, we restrict our attention to linear control systems and characterize simulation functions in terms of controlled invariant, externally stabilizable subspaces. Based on these characterizations, we propose a particular scheme to construct abstractions for linear control systems. We illustrate the compositional construction of an abstraction on an interconnected system consisting of four linear subsystems. We use the abstraction as a substitute to synthesize a controller to enforce a certain linear temporal logic specification.
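One common form of the simulation-function definition in the literature (which may differ in detail from the paper's) makes the "Lyapunov-like" role precise: a function V bounds the output mismatch from below and decreases along matched trajectories up to an input-dependent offset.

```latex
% Simulation function V from abstraction \hat\Sigma to concrete system \Sigma
% (one common formulation; symbols: outputs h, \hat h, dynamics f, \hat f,
% comparison functions \alpha, \eta \in \mathcal{K}_\infty, \rho \in \mathcal{K}):
\alpha\big(\|h(x) - \hat h(\hat x)\|\big) \le V(x, \hat x),
\qquad \text{and for every } \hat u \text{ there exists } u \text{ such that}
\frac{\partial V}{\partial x} f(x, u)
  + \frac{\partial V}{\partial \hat x} \hat f(\hat x, \hat u)
\le -\eta\big(V(x, \hat x)\big) + \rho\big(\|\hat u\|\big).
```

The first inequality turns any bound on V into a quantitative bound on the output mismatch; the second keeps V (and hence the mismatch) small along trajectories, which is what lets the abstraction substitute for the concrete system during controller design.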
The application of mobile Wireless Sensor Networks (WSNs) with a large number of participants poses many challenges. For instance, high transmission loss rates, caused among other things by collisions, might occur. Additionally, WSNs frequently operate under harsh conditions, where a high probability of link or node failures is inherently given. This makes reliable data maintenance a key issue. Existing approaches developed to keep data dependably in WSNs often either perform well in highly dynamic or in completely static scenarios, or require complex calculations. Herein, we present the Network Coding based Multicast Growth Codes (MCGC), a solution for reliable data maintenance in large-scale WSNs. MCGC are able to tolerate high fault rates and reconstruct more of the originally collected data in a shorter period of time than existing approaches. Simulation results show performance improvements of up to 75% in comparison to Growth Codes (GC). These results are achieved independently of the system's dynamics and despite high fault probabilities.
We present a formal method for computing the best security provisioning for Internet of Things (IoT) scenarios characterized by a high degree of mobility. The security infrastructure is intended as a security resource allocation plan, computed as the solution of an optimization problem that minimizes the risk of having IoT devices not monitored by any resource. We employ the shortfall as a risk measure, a concept mostly used in economics, and adapt it to our scenario. We show how to compute and evaluate an allocation plan, and how such security solutions address the continuous topology changes that affect an IoT environment.
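One standard formulation of the shortfall risk measure, expected shortfall, is the mean of the worst (1 - alpha) fraction of outcomes; the paper's precise definition may differ. The loss samples below (e.g. counts of unmonitored devices per scenario) are illustrative.

```python
# Sketch of expected shortfall: instead of the average outcome, score a
# plan by the average of its WORST (1 - alpha) share of outcomes, which
# penalizes plans that occasionally leave many devices unmonitored.
def expected_shortfall(losses, alpha=0.75):
    """Average of the worst (1 - alpha) fraction of losses."""
    worst = sorted(losses, reverse=True)
    k = max(1, round(len(losses) * (1 - alpha)))
    return sum(worst[:k]) / k

losses = [1, 2, 3, 4, 5, 6, 7, 8]   # unmonitored devices across 8 scenarios
print(expected_shortfall(losses, alpha=0.75))  # 7.5 (mean of 8 and 7)
```

Minimizing this quantity over candidate allocation plans favors plans whose worst cases under mobility-induced topology changes are mild, not merely plans that are good on average.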
Cloud Computing has many significant benefits, such as the provision of computing resources and virtual networks on demand. However, there remains the problem of assuring the security of these networks against Distributed Denial-of-Service (DDoS) attacks. Over the past few decades, the development of protection methods based on data mining has attracted many researchers because of its effectiveness and practical significance. Most commonly, these detection methods use pre-learned models or models based on rules. Because of this, the proposed DDoS detection methods often fail in dynamically changing cloud virtual networks. In this paper, we propose a self-learning method that allows a detection model to adapt to network changes. This minimizes false detections and reduces the possibility of marking legitimate users as malicious and vice versa. The developed method consists of two steps: collecting data about the network traffic via the NetFlow protocol, and relearning the detection model with the new data. During data collection, we separate the traffic into legitimate and malicious. The separated traffic is labeled and sent to the relearning pool, and the detection model is relearned with data from the pool of current traffic. The experimental results show that the proposed method can increase the efficiency of DDoS detection systems that use data mining.
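The two-step loop can be sketched in miniature: label newly collected flow statistics with the current model, append them to the relearning pool, and refit the model from the pool. The toy "model" below is just a midpoint threshold on packet rate, an assumption for illustration; the paper's detection model is a data-mining one and is not specified at this level.

```python
# Sketch of the collect-label-relearn loop: the detection threshold is
# refit from a pool of labeled traffic, so the model tracks network change.
pool = [(100, "legit"), (120, "legit"), (900, "ddos")]  # (packets/s, label)

def relearn_threshold(pool):
    """Toy model: midpoint between mean legitimate and mean malicious rates."""
    legit = [r for r, lab in pool if lab == "legit"]
    ddos = [r for r, lab in pool if lab == "ddos"]
    return (sum(legit) / len(legit) + sum(ddos) / len(ddos)) / 2

threshold = relearn_threshold(pool)          # initial model: 505.0

for rate in [150, 800]:                      # newly collected flow records
    label = "ddos" if rate > threshold else "legit"
    pool.append((rate, label))               # labeled traffic feeds the pool

threshold = relearn_threshold(pool)          # model adapts to current traffic
print(round(threshold, 1))
```

Each relearning pass shifts the decision boundary toward the traffic actually seen, which is how the method tracks dynamically changing virtual networks instead of relying on a fixed pre-learned model.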