Biblio
Client-side JavaScript has become ubiquitous in web applications to improve user experience and reduce server load. However, since clients are untrusted, servers cannot rely on the confidentiality or integrity of client-side JavaScript code and the data that it operates on. For example, client-side input validation must be repeated at server side, and confidential business logic cannot be offloaded. In this paper, we present TrustJS, a framework that enables trustworthy execution of security-sensitive JavaScript inside commodity browsers. TrustJS leverages trusted hardware support provided by Intel SGX to protect the client-side execution of JavaScript, enabling a flexible partitioning of web application code. We present the design of TrustJS and provide initial evaluation results, showing that trustworthy JavaScript offloading can further improve user experience and conserve more server resources.
As the Internet of Things (IoT) matures, many concerns are being raised about security, privacy, and interoperability. The Web of Things (WoT) model leverages web technologies to improve interoperability. Thanks to its distributed components, the web scaled well beyond initial expectations. Still, secure authentication and communication across organization boundaries rely on the Public Key Infrastructure (PKI), which is a non-transparent, centralized single point of failure. We can improve transparency and reduce the chain of trust, and thus significantly improve IoT security, by leveraging blockchain technology and web security standards. In this paper, we build a scalable, decentralized IoT-centric PKI and discuss how we can combine it with the emerging web authentication and authorization framework for constrained environments.
Knowledge Graphs (KG) are becoming core components of most artificial intelligence applications. Linked Data, as a method of publishing KGs, allows applications to traverse within, and even out of, the graph thanks to globally dereferenceable identifiers denoting entities, in the form of IRIs. However, as we show in this work after analyzing several popular datasets (namely DBpedia, LOD Cache, and Web Data Commons JSON-LD data), many entities are represented using literal strings where IRIs should be used, diminishing the advantages of using Linked Data. To remedy this, we propose an approach for identifying such strings and replacing them with their corresponding entity IRIs. The proposed approach identifies relations between entities using both ontological axioms and data profiling information, and converts strings to entity IRIs based on the types of entities linked by each relation. Our approach showed 98% recall and 76% precision in identifying such strings and 97% precision in converting them to their corresponding IRIs in the considered KG. Further, we analyzed how the connectivity of the KG increases when new relevant links are added to the entities as a result of our method. Our experiments on a subset of the Spanish DBpedia data show that it could add 25% more links to the KG and improve the overall connectivity by 17%.
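A minimal sketch of the repair idea described above, reduced to plain Python over toy triples: a predicate is treated as entity-valued (via an assumed ontology axiom or profiling evidence) and literal objects of that predicate are rewritten to matching entity IRIs. All triples, labels, and the `ex:` prefix are hypothetical; the actual approach and datasets are far richer.

```python
# Toy knowledge graph: (subject, predicate, object) triples; objects starting
# with "ex:" are IRIs, everything else is a literal string.
triples = [
    ("ex:Don_Quixote", "ex:author", "Miguel de Cervantes"),   # literal where an IRI belongs
    ("ex:Don_Quixote", "ex:title", "Don Quixote"),            # genuinely a literal
    ("ex:Miguel_de_Cervantes", "ex:birthPlace", "ex:Alcala_de_Henares"),
]

# Step 1: decide which predicates relate entities. Here a profiling heuristic
# (the predicate already links to an IRI somewhere) plus an assumed
# owl:ObjectProperty axiom for ex:author.
entity_predicates = {p for (_, p, o) in triples if o.startswith("ex:")}
entity_predicates.add("ex:author")

# Step 2: index known entities by label so strings can be resolved to IRIs.
labels_to_iri = {
    "Miguel de Cervantes": "ex:Miguel_de_Cervantes",
    "Alcala de Henares": "ex:Alcala_de_Henares",
}

# Step 3: rewrite literal objects of entity-valued predicates into IRIs.
repaired = []
for s, p, o in triples:
    if p in entity_predicates and not o.startswith("ex:") and o in labels_to_iri:
        repaired.append((s, p, labels_to_iri[o]))
    else:
        repaired.append((s, p, o))

print(repaired)
```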
Recent years have witnessed rapid growth in the domain of the Internet of Things (IoT). This network of billions of devices generates and exchanges huge amounts of data. The limited cache capacity and memory bandwidth make transferring and processing such data on traditional CPUs and GPUs highly inefficient, both in terms of energy consumption and delay. However, many IoT applications are statistical at heart and can tolerate some inaccuracy in their computation. This enables designers to reduce the complexity of processing by approximating the results for a desired accuracy. In this paper, we propose an ultra-efficient approximate processing in-memory architecture, called APIM, which exploits the analog characteristics of non-volatile memories to support addition and multiplication inside the crossbar memory, while storing the data. The proposed design eliminates the overhead involved in transferring data to the processor by virtually bringing the processor inside memory. APIM dynamically configures the precision of computation for each application in order to tune the level of accuracy during runtime. Our experimental evaluation running six general OpenCL applications shows that the proposed design achieves up to 20x performance improvement and provides 480x improvement in energy-delay product, while ensuring acceptable quality of service. In exact mode, it achieves 28x energy savings and a 4.8x speedup compared to state-of-the-art GPU cores.
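The in-memory analog arithmetic itself cannot be reproduced in software, but the runtime precision tuning that APIM exposes can be illustrated with a simple bit-truncation model. The sketch below is a conceptual stand-in, not the paper's mechanism: it only shows how keeping fewer operand bits trades accuracy for (in hardware) energy and latency.

```python
# Conceptual software model of runtime-configurable approximate arithmetic.
# Hypothetical operands; truncation of low-order bits stands in for reduced
# analog precision in the crossbar.

def approx_multiply(a: int, b: int, kept_bits: int) -> int:
    """Multiply after zeroing all but the `kept_bits` most significant bits
    of each operand."""
    def truncate(x: int) -> int:
        drop = max(x.bit_length() - kept_bits, 0)
        return (x >> drop) << drop
    return truncate(a) * truncate(b)

exact = 12345 * 6789
for bits in (16, 8, 4):
    approx = approx_multiply(12345, 6789, bits)
    print(f"{bits:2d} kept bits: {approx} (relative error {abs(exact - approx) / exact:.3%})")
```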
Recent methods for learning vector space representations of words (word embeddings), such as GloVe and Word2Vec, have succeeded in capturing fine-grained semantic and syntactic regularities. We analyzed the effectiveness of these methods for e-commerce recommender systems by translating the sequence of items generated by a user's browsing journey in an e-commerce website into a sentence of words. We examined the prediction of fine-grained item similarity (such as the item most similar to an iPhone 6 64GB smartphone) and item analogy (such as iPhone 5 is to iPhone 6 as Samsung S5 is to Samsung S6) using the real-life browsing history of users of an online European department store. Our results reveal that such methods outperform related models such as singular value decomposition (SVD) on item similarity and analogy tasks across different product categories. Furthermore, these methods produce a highly condensed item vector space representation (item embedding) with behaviorally meaningful sub-structure. These vectors can be used as features in a variety of recommender system applications. In particular, we used these vectors as features in neural-network-based models for anonymous user recommendation based on a session's first few clicks. We find that a recurrent neural network that preserves the order of the user's clicks outperforms a standard neural network, item-to-item similarity, and SVD (recall@10 of 42% based on the first three clicks) for this task.
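A minimal sketch of the session-as-sentence idea, assuming the gensim library is available; the item IDs, sessions, and hyperparameters below are invented for illustration and are not the paper's data or settings.

```python
from gensim.models import Word2Vec

# Each browsing session becomes a "sentence" whose "words" are item IDs.
sessions = [
    ["iphone6_64gb", "iphone6_case", "lightning_cable"],
    ["iphone5_32gb", "iphone6_64gb", "iphone6_case"],
    ["samsung_s5", "samsung_s6", "s6_case"],
]

# Skip-gram model over item sequences (hyperparameters are placeholders).
model = Word2Vec(sessions, vector_size=32, window=3, min_count=1, sg=1, epochs=50)

# Item similarity: items that appear in similar browsing contexts.
print(model.wv.most_similar("iphone6_64gb", topn=3))

# Item analogy: iphone5 : iphone6 :: samsung_s5 : ?
print(model.wv.most_similar(positive=["samsung_s5", "iphone6_64gb"],
                            negative=["iphone5_32gb"], topn=1))
```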
FPGAs have been used as accelerators in a wide variety of domains such as learning, search, genomics, signal processing, compression, analytics and so on. In recent years, the availability of tools and flows such as high-level synthesis has made it even easier to accelerate a variety of high-performance computing applications onto FPGAs. In this paper we propose a systematic methodology for optimizing the performance of an accelerated block using the notion of compute intensity to guide optimizations in high-level synthesis. We demonstrate the effectiveness of our methodology on an FPGA implementation of a non-uniform discrete Fourier transform (NUDFT), used to convert a wireless channel model from the time domain to the frequency domain. The acceleration of this particular computation can be used to improve the performance and capacity of wireless channel simulation, which has wide applications in the system level design and performance evaluation of wireless networks. Our results show that our FPGA implementation outperforms the same code offloaded onto GPUs and CPUs by 1.6x and 10x respectively, in performance as measured by the throughput of the accelerated block. The gains in performance per watt versus GPUs and CPUs are 15.6x and 41.5x respectively.
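For concreteness, the core computation being accelerated can be written directly as a matrix-vector product. The NumPy sketch below evaluates X[k] = sum_n x[n] * exp(-2j*pi*f[k]*t[n]) for arbitrary sample times and output frequencies; this is the textbook NUDFT, not the paper's optimized FPGA kernel, and the signal and frequency grid are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1.0, 64))     # non-uniform sample times (s)
x = np.exp(2j * np.pi * 5.0 * t)           # channel samples at those times (5 Hz tone)
f = np.linspace(0.0, 32.0, 128)            # frequencies at which to evaluate (Hz)

# X[k] = sum_n x[n] * exp(-2j*pi * f[k] * t[n]), as a dense matrix-vector product.
F = np.exp(-2j * np.pi * np.outer(f, t))   # shape (len(f), len(t))
X = F @ x

peak = np.abs(X).argmax()
print(f"spectral peak near {f[peak]:.2f} Hz")   # should sit near 5 Hz
```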
In this paper, cyber-physical systems are analyzed from a security perspective. A double closed-loop security control structure and algorithm with defense functions is proposed. Within this structure, the features of several cyber attacks are considered, and models of information disclosure, denial-of-service (DoS), and man-in-the-middle (MITM) attacks are derived. Each kind of attack yields a different model, which is analyzed and then reduced to a unified model. Based on this, system security conditions are obtained, and a defense scenario with a detailed algorithm is designed to illustrate the implementation of the approach.
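To make the DoS case concrete, the toy simulation below treats a DoS attack on the sensor channel as dropped measurements that force the controller to act on stale data; the plant, gains, and attack window are hypothetical and are not taken from the paper's unified models.

```python
import random

a, b, k = 1.05, 1.0, 0.6          # unstable plant x+ = a*x + b*u, state-feedback gain k
x, x_hat = 1.0, 1.0               # true state and last measurement seen by the controller

for step in range(30):
    # During steps 10-19 most sensor packets are dropped by the attacker.
    dos_active = 10 <= step < 20 and random.random() < 0.9
    if not dos_active:
        x_hat = x                  # fresh measurement delivered
    u = -k * x_hat                 # controller acts on the latest available data
    x = a * x + b * u
    print(f"step {step:2d} | DoS={dos_active!s:5} | x={x:+.3f}")
```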
Onion sites on the darkweb operate using the Tor Hidden Service (HS) protocol to shield their locations on the Internet, which (among other features) enables these sites to host malicious and illegal content while being resistant to legal action and seizure. Identifying and monitoring such illicit sites in the darkweb is of high relevance to the Computer Security and Law Enforcement communities. We have developed an automated infrastructure that crawls and indexes content from onion sites into a large-scale data repository, called LIGHTS, with over 100M pages. In this paper we describe Automated Tool for Onion Labeling (ATOL), a novel scalable analysis service developed to conduct a thematic assessment of the content of onion sites in the LIGHTS repository. ATOL has three core components – (a) a novel keyword discovery mechanism (ATOLKeyword) which extends analyst-provided keywords for different categories by suggesting new descriptive and discriminative keywords that are relevant for the categories; (b) a classification framework (ATOLClassify) that uses the discovered keywords to map onion site content to a set of categories when sufficient labeled data is available; (c) a clustering framework (ATOLCluster) that can leverage information from multiple external heterogeneous knowledge sources, ranging from domain expertise to Bitcoin transaction data, to categorize onion content in the absence of sufficient supervised data. The paper presents empirical results of ATOL on onion datasets derived from the LIGHTS repository, and additionally benchmarks ATOL's algorithms on the publicly available 20 Newsgroups dataset to demonstrate the reproducibility of its results. On the LIGHTS dataset, ATOLClassify gives a 12% performance gain over an analyst-provided baseline, while ATOLCluster gives a 7% improvement over state-of-the-art semi-supervised clustering algorithms. We also discuss how ATOL has been deployed and externally evaluated, as part of the LIGHTS system.
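A much-simplified sketch of the keyword-driven classification step, in the spirit of ATOLClassify: pages are scored by how many of each category's keywords they contain. The categories, keyword lists, and page text below are invented and do not come from the LIGHTS repository.

```python
# Hypothetical category-to-keyword map (in ATOL these keywords would come from
# analysts plus the ATOLKeyword discovery step).
category_keywords = {
    "marketplace": {"escrow", "vendor", "listing", "bitcoin"},
    "forum":       {"thread", "reply", "moderator", "member"},
}

def classify(page_text: str) -> str:
    """Assign the category whose keywords overlap most with the page tokens."""
    tokens = set(page_text.lower().split())
    scores = {cat: len(tokens & kws) for cat, kws in category_keywords.items()}
    return max(scores, key=scores.get)

print(classify("verified vendor listing escrow accepted pay in bitcoin"))
```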
Distributed Denial-of-Service (DDoS) attacks are increasing in frequency and volume on the Internet, and there is evidence that cyber-criminals are turning to Internet-of-Things (IoT) devices such as cameras and vending machines as easy launchpads for large-scale attacks. This paper quantifies the capability of consumer IoT devices to participate in reflective DDoS attacks. We first show that household devices can be exposed to Internet reflection even if they are secured behind home gateways. We then evaluate eight household devices available on the market today, including lightbulbs, webcams, and printers, and experimentally profile their reflective capability, amplification factor, duration, and intensity rate for TCP, SNMP, and SSDP based attacks. Lastly, we demonstrate reflection attacks in a real-world setting involving three IoT-equipped smart-homes, emphasising the imminent need to address this problem before it becomes widespread.
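The reflective capability reported per device ultimately comes down to simple arithmetic over measured request and response sizes; the sketch below uses made-up byte counts purely to show the calculation, not figures from the eight profiled devices.

```python
# Hypothetical measurements for one device and one protocol (e.g. SSDP).
request_bytes = 64        # size of the spoofed-source probe sent to the device
response_bytes = 3200     # size of the reply the device reflects to the victim

amplification_factor = response_bytes / request_bytes
print(f"amplification factor: {amplification_factor:.1f}x")

# Reflected intensity toward the victim at a given probe rate.
probes_per_second = 100
print(f"reflected traffic: {probes_per_second * response_bytes * 8 / 1e6:.2f} Mbit/s")
```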
The Bonseyes EU H2020 collaborative project aims to develop a platform consisting of a Data Marketplace, a Deep Learning Toolbox, and Developer Reference Platforms for organizations wanting to adopt Artificial Intelligence. The project focuses on using artificial intelligence in low-power Internet of Things (IoT) devices ("edge computing"), embedded computing systems, and data center servers ("cloud computing"). It will bring about orders-of-magnitude improvements in efficiency, performance, reliability, security, and productivity in the design and programming of systems of artificial intelligence that incorporate Smart Cyber-Physical Systems (CPS). In addition, it will solve a causality problem for organizations that lack access to Data and Models. Its open software architecture will facilitate adoption of the whole concept on a wider scale. To evaluate the effectiveness and technical feasibility of the platform, and to quantify the real-world improvements in efficiency, security, performance, effort, and cost of adding AI to products and services using the Bonseyes platform, four complementary demonstrators will be built. Bonseyes platform capabilities are aimed at being aligned with the European FI-PPP activities and take advantage of its flagship project FIWARE. This paper provides a description of the project motivation, goals, and preliminary work.
In recent years, the emerging Internet-of-Things (IoT) has led to rising concerns about the security of networked embedded devices. In this work, we propose the SIPHON architecture, a Scalable high-Interaction Honeypot platform for IoT devices. Our architecture leverages IoT devices that are physically at one location and are connected to the Internet through so-called "wormholes" distributed around the world. The resulting architecture allows exposing a few physical devices over a large number of geographically distributed IP addresses. We demonstrate the proposed architecture in a large-scale experiment with 39 wormhole instances in 16 cities in 9 countries. Based on this setup, five physical IP cameras, one NVR, and one IP printer are presented as 85 real IoT devices on the Internet, attracting daily traffic of 700 MB for a period of two months. A preliminary analysis of the collected traffic indicates that devices in some cities attracted significantly more traffic than others (ranging from 600,000 incoming TCP connections for the most popular destination to fewer than 50,000 for the least popular). We recorded over 400 brute-force login attempts to the web interface of our devices using a total of 1826 distinct credentials, of which 11 attempts were successful. Moreover, we noted login attempts to Telnet and SSH ports, some of which used credentials found in the recently disclosed Mirai malware.
Driven by CA compromises and the risk of man-in-the-middle attacks, new security features have been added to TLS, HTTPS, and the web PKI over the past five years. These include Certificate Transparency (CT), for making the CA system auditable; HSTS and HPKP headers, to harden the HTTPS posture of a domain; the DNS-based extensions CAA and TLSA, for control over certificate issuance and pinning; and SCSV, for protocol downgrade protection. This paper presents the first large-scale investigation of these improvements to the HTTPS ecosystem, explicitly accounting for their combined usage. In addition to collecting passive measurements at the Internet uplinks of large university networks on three continents, we perform the largest domain-based active Internet scan to date, covering 193M domains. Furthermore, we track the long-term deployment history of new TLS security features by leveraging passive observations dating back to 2012. We find that while deployment of new security features has picked up in general, only SCSV (49M domains) and CT (7M domains) have gained enough momentum to improve the overall security of HTTPS. Features with higher complexity, such as HPKP, are deployed scarcely and often incorrectly. Our empirical findings are placed in the context of risk, deployment effort, and benefit of these new technologies, and actionable steps for improvement are proposed. We cross-correlate the use of features and find some techniques whose deployment is significantly correlated. We support reproducible research and publicly release data and code.
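Of the features discussed, the header-based ones (HSTS, HPKP) can be probed with a single HTTPS request; the sketch below assumes the Python `requests` package and uses an example domain. CAA and TLSA live in DNS and SCSV/CT require TLS-level inspection, so they are out of scope here.

```python
import requests

# Example target; replace with the domain under study.
resp = requests.get("https://example.com", timeout=10)

for header in ("Strict-Transport-Security",   # HSTS
               "Public-Key-Pins"):             # HPKP (since deprecated by browsers)
    value = resp.headers.get(header)
    print(f"{header}: {value if value else 'not set'}")
```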
Among the many research efforts devoted to automated art investigations, the problem of quantifying literary style remains current. Meanwhile, linguists and computer scientists have tried to sort texts according to their type or author. We use the recently introduced p-leader multifractal formalism to analyze a corpus of novels written for adults and young adults, with the goal of assessing whether a difference in style can be found. Our results agree with the interpretation that novels written for young adults largely follow conventions of the genre, whereas novels written for adults are less homogeneous.
Active authentication is the problem of continuously verifying the identity of a person based on behavioral aspects of their interaction with a computing device. In this paper, we collect and analyze behavioral biometrics data from 200 subjects, each using their personal Android mobile device for a period of at least 30 days. This data set is novel in the context of active authentication due to its size, duration, number of modalities, and absence of restrictions on tracked activity. The geographical colocation of the subjects in the study is representative of a large closed-world environment such as an organization where the unauthorized user of a device is likely to be an insider threat: coming from within the organization. We consider four biometric modalities: 1) text entered via soft keyboard, 2) applications used, 3) websites visited, and 4) physical location of the device as determined from GPS (when outdoors) or WiFi (when indoors). We implement and test a classifier for each modality and organize the classifiers as a parallel binary decision fusion architecture. We are able to characterize the performance of the system with respect to intruder detection time and to quantify the contribution of each modality to the overall performance.
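A bare-bones sketch of parallel binary decision fusion: each modality's classifier makes a local accept/reject decision and a fusion rule combines them (majority vote here, though the paper's fusion rule may differ). The scores and threshold below are placeholders, not values from the 200-subject data set.

```python
# Hypothetical per-modality classifier outputs: P(genuine user).
modality_scores = {
    "keystrokes": 0.81,
    "app_usage":  0.64,
    "web_visits": 0.58,
    "location":   0.92,
}

# Each modality makes a local binary decision against a common threshold.
decisions = {m: score >= 0.5 for m, score in modality_scores.items()}

# Fusion step: accept the user if a majority of modalities accept.
accept = sum(decisions.values()) > len(decisions) / 2

print(decisions, "=> accept" if accept else "=> flag as intruder")
```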
Data provenance describes how data came to be in its present form. It includes data sources and the transformations that have been applied to them. Data provenance has many uses, from forensics and security to aiding the reproducibility of scientific experiments. We present CamFlow, a whole-system provenance capture mechanism that integrates easily into a PaaS offering. While there have been several prior whole-system provenance systems that captured a comprehensive, systemic and ubiquitous record of a system's behavior, none have been widely adopted. They either A) impose too much overhead, B) are designed for long-outdated kernel releases and are hard to port to current systems, C) generate too much data, or D) are designed for a single system. CamFlow addresses these shortcomings by: 1) leveraging the latest kernel design advances to achieve efficiency; 2) using a self-contained, easily maintainable implementation relying on a Linux Security Module, NetFilter, and other existing kernel facilities; 3) providing a mechanism to tailor the captured provenance data to the needs of the application; and 4) making it easy to integrate provenance across distributed systems. The provenance we capture is streamed and consumed by tenant-built auditor applications. We illustrate the usability of our implementation by describing three such applications: demonstrating compliance with data regulations; performing fault/intrusion detection; and implementing data loss prevention. We also show how CamFlow can be leveraged to capture meaningful provenance without modifying existing applications.
This paper describes the applications of deep learning-based image recognition in the DARPA Memex program and its repository of 1.4 million weapons-related images collected from the Deep web. We develop a fast, efficient, and easily deployable framework for integrating Google's Tensorflow framework with Apache Tika for automatically performing image forensics on the Memex data. Our framework and its integration are evaluated qualitatively and quantitatively and our work suggests that automated, large-scale, and reliable image classification and forensics can be widely used and deployed in bulk analysis for answering domain-specific questions.
Crypto-ransomware is a challenging threat that ciphers a user's files while hiding the decryption key until a ransom is paid by the victim. This type of malware is a lucrative business for cybercriminals, generating millions of dollars annually. The spread of ransomware is increasing as traditional detection-based protection, such as antivirus and anti-malware, has proven ineffective at preventing attacks. Additionally, this form of malware is incorporating advanced encryption algorithms and expanding the number of file types it targets. Cybercriminals have found a lucrative market and no one is safe from being the next victim. Encrypting ransomware targets businesses small and large as well as the regular home user. This paper discusses ransomware's methods of infection, the technology behind it, and what can be done to help prevent becoming the next victim. The paper investigates the most common types of crypto-ransomware, various payload methods of infection, the typical behavior of crypto-ransomware and its tactics, how an attack is ordinarily carried out, and which files are most commonly targeted on a victim's computer; recommendations for prevention and safeguards are listed as well.
It is difficult to assess the security of modern enterprise networks because they are usually dynamic, with configuration changes (such as changes in topology, firewall rules, etc.). Graphical security models (e.g., Attack Graphs and Attack Trees) and security metrics (e.g., attack cost, shortest attack path) are widely used to systematically analyse the security posture of network systems. However, there are problems using them to assess the security of dynamic networks. First, the existing graphical security models are unable to capture dynamic changes occurring in the networks over time. Second, the existing security metrics are not designed for dynamic networks, so their effectiveness against dynamic changes in the network is still unknown. In this paper, we conduct a comprehensive analysis via simulations to evaluate the effectiveness of security metrics using a Temporal Hierarchical Attack Representation Model. Further, we investigate the varying effects of security metrics when changes are observed in the dynamic networks. Our experimental analysis shows that different security metrics reflect changes in the security posture differently as the network changes.
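As a concrete illustration of two of the metrics named above, the sketch below computes the shortest attack path and the minimum attack cost on a tiny hand-made attack graph using networkx; the hosts, edges, and costs are fictional and unrelated to the paper's simulated networks.

```python
import networkx as nx

# Toy attack graph: edges are exploit steps annotated with an attack cost.
G = nx.DiGraph()
G.add_edge("attacker", "webserver", cost=2)
G.add_edge("webserver", "appserver", cost=3)
G.add_edge("appserver", "database", cost=4)
G.add_edge("attacker", "vpn", cost=6)
G.add_edge("vpn", "database", cost=2)

shortest_hops = nx.shortest_path_length(G, "attacker", "database")           # unweighted
min_cost = nx.shortest_path_length(G, "attacker", "database", weight="cost")  # weighted
min_cost_path = nx.shortest_path(G, "attacker", "database", weight="cost")

print(f"shortest attack path length: {shortest_hops} steps")
print(f"minimum attack cost: {min_cost} via {min_cost_path}")
```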
Software metrics are widely used to measure the quality of software and to give an early indication of the efficiency of the development process in industry. There are many well-established frameworks for measuring the quality of source code through metrics, but limited attention has been paid to the quality of software models. In this article, we evaluate the quality of state machine models specified using the Analytical Software Design (ASD) tooling. We discuss how we applied a number of metrics to ASD models in an industrial setting and report on the results and lessons learned while collecting these metrics. Furthermore, we recommend some quality limits for each metric and validate them on models developed in a number of industrial projects.
The principal mission of Multi-Source Multicast (MSM) is to disseminate all messages from all sources in a network to all destinations. MSM is utilized in numerous applications, and in many of them securing the disseminated messages is critical. A common security model considers a network in which an eavesdropper is able to observe a subset of the network links, and seeks a code that keeps the eavesdropper ignorant of all the messages. While this is solved when all messages are located at a single source, Secure MSM (SMSM) is an open problem, and the rates required are hard to characterize in general. In this paper, we consider Individual Security, which promises that the eavesdropper has zero mutual information with each message individually. We completely characterize the rate region for SMSM under individual security, and show that this level of security is achievable at the full capacity of the network, that is, the cut-set bound is the matching converse, as in non-secure MSM. Moreover, we show that the required field size is similar to non-secure MSM and does not have to be larger due to the security constraint.
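For reference, individual security as commonly formalized, contrasted with joint (perfect) security; here Z denotes the eavesdropper's observation of its link subset and M_1, ..., M_k the source messages. The notation is ours and may differ from the paper's.

```latex
\begin{align}
  \text{Individual security:} \quad & I(M_i ; Z) = 0 \qquad \text{for every } i \in \{1,\dots,k\},\\
  \text{Joint security:}      \quad & I(M_1, \dots, M_k ; Z) = 0.
\end{align}
```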
Several security requirements identification methods have been proposed by researchers in up-front requirements engineering (RE). However, in open source software (OSS) projects, developers use lightweight representations and refine requirements frequently by writing comments. They also tend to discuss security aspects in comments by providing code snippets, attachments, and external resource links. Since most security requirements identification methods in up-front RE are based on textual information retrieval techniques, these methods are not suitable for OSS projects or just-in-time RE. In our study, we propose a new model based on logistic regression to identify security requirements in OSS projects. We used five metrics to build security requirements identification models and tested the performance of these metrics by applying those models to three OSS projects. Our results show that four out of five metrics achieved high performance in intra-project testing.
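A minimal sketch of the modeling step, assuming scikit-learn is available; both the five feature columns and the tiny labeled set are fabricated stand-ins for the metrics actually used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-comment features:
# [security keyword count, has code snippet, has attachment,
#  external link count, comment length in tokens]
X = np.array([
    [4, 1, 0, 2, 120],
    [0, 0, 0, 0,  35],
    [2, 1, 1, 1,  80],
    [0, 1, 0, 0,  60],
    [5, 0, 0, 3, 200],
    [1, 0, 0, 0,  20],
])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = security requirement, 0 = other

clf = LogisticRegression().fit(X, y)

# Score a new, unseen comment.
new_comment = [[3, 1, 0, 1, 90]]
print(clf.predict(new_comment), clf.predict_proba(new_comment))
```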
Dynamic spectrum sharing techniques applied in the UHF TV band have been developed to allow secondary WiFi transmission in areas with active TV users. This technique of dynamically controlling the exclusion zone vastly increases secondary spectrum re-use, compared to the "TV white space" model where TV transmitters determine the exclusion zone and only "idle" channels can be re-purposed. However, in current dynamic spectrum sharing systems, the sensitive operation parameters of both primary TV users (PUs) and secondary users (SUs) need to be shared with the spectrum database controller (SDC) to realize efficient spectrum allocation. Since the SDC server is not necessarily operated by a trusted third party, such systems pose serious threats to the privacy of both PUs and SUs. To address this privacy issue, this paper proposes a privacy-preserving spectrum sharing system between PUs and SUs, which realizes the spectrum allocation decision process using efficient multi-party computation (MPC) techniques. In this design, the SDC only performs secure computation over encrypted inputs from PUs and SUs, such that none of the PU or SU operation parameters are revealed to the SDC. The performance evaluation illustrates that our proposed system, based on efficient MPC techniques, can perform the dynamic spectrum allocation process between PUs and SUs efficiently while preserving users' privacy.
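One common building block of such MPC protocols is additive secret sharing, sketched below in plain Python: a sensitive parameter is split into random shares so that no single server learns it, yet aggregate results can still be reconstructed. The values, modulus, and two-server setup are illustrative and are not the paper's actual protocol.

```python
import random

Q = 2**61 - 1                       # public modulus for the additive shares

def share(secret: int):
    """Split `secret` into two additive shares modulo Q; each share alone is
    uniformly random and reveals nothing about the secret."""
    r = random.randrange(Q)
    return r, (secret - r) % Q

# Hypothetical sensitive parameters (e.g. a PU's and an SU's transmit power).
pu_share_a, pu_share_b = share(37)
su_share_a, su_share_b = share(5)

# Each server aggregates only the shares it holds; neither sees the raw values.
server_a_sum = (pu_share_a + su_share_a) % Q
server_b_sum = (pu_share_b + su_share_b) % Q

# Only the recombined aggregate (here 37 + 5) is revealed.
print((server_a_sum + server_b_sum) % Q)
```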