Biblio
Cross-VM attacks have emerged as a major threat on commercial clouds. These attacks commonly exploit hardware-level leakages on shared physical servers. A virtual machine can readily sense the presence of a co-located instance under heavy computational load through the performance degradation caused by contention on shared resources. Shared cache architectures such as the last-level cache (LLC) have become a popular leakage source for mounting cross-VM attacks. By exploiting LLC leakages, researchers have already shown that it is possible to recover fine-grained information such as cryptographic keys from popular software libraries. This makes it essential to verify implementations that handle sensitive data across their many versions and numerous target platforms, a task too complicated, error-prone, and costly to be handled by human beings. Here we propose a machine learning based technique to classify applications according to their cache access profiles. We show that, with minimal and simple manual processing steps, feature vectors can be used to train support vector machine models that classify the applications with a high degree of success. The profiling and training steps are completely automated and do not require any inspection or study of the code to be classified. In native execution, we achieve a classification success rate as high as 98% (L1 cache) and 78% (LLC) over 40 benchmark applications in the Phoronix suite with mild training. In the cross-VM setting on the noisy Amazon EC2, the success rate drops to 60% for a suite of 25 applications. With this initial study we demonstrate that it is possible to train meaningful models to successfully predict applications running in co-located instances.
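A minimal sketch of the train-and-classify step this abstract describes, assuming the profiling stage has already reduced each run to a fixed-length feature vector (e.g., per-cache-set probe-timing counts); the data, names, and parameters below are placeholders, not the authors' pipeline:

```python
# Sketch: SVM classification of cache-access profiles.
# The synthetic "profiles" stand in for real measurements.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_apps, runs_per_app, n_features = 40, 50, 256
means = rng.normal(size=(n_apps, n_features))               # per-app "profile"
y = np.repeat(np.arange(n_apps), runs_per_app)              # one label per app
X = means[y] + 0.5 * rng.normal(size=(y.size, n_features))  # noisy runs

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
model.fit(X_tr, y_tr)
print(f"classification accuracy: {model.score(X_te, y_te):.2%}")
```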
Map-based services are becoming increasingly important in many applications. These services often need to show geospatial objects (e.g., cities and parks) in Web browsers, and being able to retrieve such objects efficiently is critical to achieving a low response time for user queries. In this demonstration we present a browser-based caching technique to store and load geospatial objects on a map in a Web page. The technique employs a hierarchical structure to store and index polygons, and performs intelligent prefetching and cache replacement by utilizing information about the user's recent browser activities. We demonstrate the usage of the technique in an application called TwitterMap for visualizing more than 1 billion tweets in real time. We show its effectiveness by using different replacement policies. The technique is implemented as a general-purpose JavaScript library, making it suitable for other applications as well.
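The abstract does not spell out its replacement policies; the sketch below illustrates just one plausible candidate, LRU replacement keyed by tile coordinates. The real library is JavaScript running in the browser; this Python version (all names hypothetical) only demonstrates the policy itself:

```python
# Sketch: LRU replacement for cached geospatial objects keyed by tile.
from collections import OrderedDict

class TileCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._store = OrderedDict()          # insertion order == recency order

    def get(self, key):
        if key not in self._store:
            return None
        self._store.move_to_end(key)         # mark as most recently used
        return self._store[key]

    def put(self, key, polygons):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = polygons
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used

cache = TileCache(capacity=2)
cache.put(("z10", 163, 395), ["polygon-a"])
cache.put(("z10", 163, 396), ["polygon-b"])
cache.get(("z10", 163, 395))                 # touch; LRU is now (163, 396)
cache.put(("z10", 164, 395), ["polygon-c"])  # evicts ("z10", 163, 396)
```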
A wireless mesh network (WMN) consists of mesh gateways, mesh routers, and mesh clients. In a hybrid WMN, both the backbone mesh network and the client mesh network are mesh connected. Capacity analysis of multi-hop wireless networks has proven to be an interesting and challenging research topic. The capacity of a hybrid WMN depends on several factors, such as the traffic model, topology, scheduling strategy, and bandwidth allocation strategy. In this paper, the capacity of hybrid WMNs is studied with respect to the traffic model and bandwidth allocation. The traffic of a hybrid WMN is categorized into internal and external traffic, and the capacity of each mesh client is then derived under an appropriate bandwidth allocation. The analytical results show that a hybrid WMN achieves lower capacity than an infrastructure WMN. The results and conclusions can guide the construction of hybrid WMNs.
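The abstract states the capacity result without formulas. As a hedged illustration (our notation, not the paper's derivation), a simple gateway-bottleneck argument already shows why per-client capacity in a hybrid WMN falls as the share of external traffic grows:

```latex
% Hypothetical notation: n mesh clients each generate traffic at rate
% \lambda, a fraction (1-p) of which is external and must traverse one of
% m gateways with aggregate backhaul bandwidth m W_G. Feasibility requires
%   n (1-p) \lambda \le m W_G,
% so the per-client capacity is bounded by
\[
  \lambda \;\le\; \frac{m\, W_G}{(1-p)\, n}.
\]
```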
Cryptography and encryption form a topic so blurred by complexity that it is difficult for the majority of the public to grasp. Our research focuses on SSL technology involving certificate authorities (CAs), a centralized system that manages and issues certificates to web servers and computers for validating identity. We first explain how a certificate provides a secure connection, creating trust between two parties looking to communicate with one another over the internet. The paper then examines what happens when this trust is compromised and how transmitted information could fall into the wrong hands. We propose a browser plugin, Certificate Authority Rescue Engine (CAre), to serve as an added source of security with simplicity and visibility. To see why CAre will benefit both average and technical users of the internet, one must understand what website security entails; therefore, this paper dives deep into website security through the use of public key infrastructure and its core components: certificates, certificate authorities, and their relationship with web browsers.
An operating system kernel written in the Rust language would have extremely fine-grained isolation boundaries, have no memory leaks, and be safe from a wide range of security threats and memory bugs. Previous efforts towards this end concluded that writing a kernel requires changing Rust. This paper reaches a different conclusion: no changes to Rust are needed, and a kernel can be implemented with a very small amount of unsafe code. It describes how three sample kernel mechanisms (DMA, USB, and buffer caches) can be built this way.
Detection and prevention of data breaches in corporate networks is one of the most important security problems today. The techniques and applications proposed as solutions are unsuccessful when attackers steal data using steganography. Steganography is the art of storing data in a file called a cover, such as a picture, sound, or video file, so that the concealed data cannot be directly recognized in the cover. Steganalysis is the process of revealing the presence of messages embedded in such files, and many statistical and signature-based steganalysis algorithms exist. In this work, the detection of steganographic images with steganalysis techniques is reviewed, and a system has been developed that automatically detects steganographic images in network traffic using open source tools.
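As one concrete example of the statistical steganalysis algorithms mentioned above, the sketch below implements the classic chi-square attack on LSB embedding; it is illustrative only (with a deliberately artificial cover so the contrast is visible) and is not the detection system developed in the work:

```python
# Sketch: chi-square attack on LSB embedding. Full LSB replacement tends
# to equalize the histogram counts of each "pair of values" (2k, 2k+1),
# so a p-value near 1.0 is evidence of an embedded message.
import numpy as np
from scipy.stats import chi2

def chi_square_lsb_pvalue(pixels: np.ndarray) -> float:
    hist = np.bincount(pixels.ravel(), minlength=256)
    even, odd = hist[0::2].astype(float), hist[1::2].astype(float)
    expected = (even + odd) / 2.0
    mask = expected > 0                      # ignore empty pairs of values
    stat = np.sum((even[mask] - expected[mask]) ** 2 / expected[mask])
    return float(chi2.sf(stat, df=mask.sum() - 1))

rng = np.random.default_rng(1)
# Toy cover whose LSBs are all 0, so pair counts are maximally unequal.
cover = (2 * rng.integers(0, 128, size=(256, 256))).astype(np.uint8)
# "Embedding" replaces every LSB with a uniform message bit.
stego = (cover & 0xFE) | rng.integers(0, 2, size=cover.shape, dtype=np.uint8)
print(chi_square_lsb_pvalue(cover))   # ~0.0: no embedding suspected
print(chi_square_lsb_pvalue(stego))   # ~1.0: stego-like histogram
```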
Trust in SSL-based communications is provided by Certificate Authorities (CAs) in the form of signed certificates. Checking the validity of a certificate involves three steps: (i) checking its expiration date, (ii) verifying its signature, and (iii) ensuring that it is not revoked. Currently, such certificate revocation checks are done either via Certificate Revocation Lists (CRLs) or Online Certificate Status Protocol (OCSP) servers. Unfortunately, despite the existence of these revocation checks, sophisticated cyber-attackers may trick web browsers into trusting a revoked certificate, believing that it is still valid. Consequently, the web browser will communicate (over TLS) with web servers controlled by cyber-attackers. Although frequently updated, nonced, and timestamped certificates may reduce the frequency and impact of such cyber-attacks, they impose a very large overhead on the CAs and OCSP servers, which now need to timestamp and sign all the responses on a regular basis, for every certificate they have issued. To mitigate this overhead and provide a solution to the described cyber-attacks, we present CCSP: a new approach to provide timely information regarding the status of certificates, which capitalizes on a newly introduced notion called signed collections. In this paper, we present the design, preliminary implementation, and evaluation of CCSP in general, and signed collections in particular. Our preliminary results suggest that CCSP (i) reduces space requirements by more than an order of magnitude, (ii) lowers the number of signatures required by six orders of magnitude compared to OCSP-based methods, and (iii) adds only a few milliseconds of overhead to the overall user latency.
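To make the three validation steps concrete, here is a sketch using the pyca/cryptography package; `cert` and `issuer` are assumed to be already-loaded `x509.Certificate` objects from an RSA chain, and `post_der_to_responder` is a hypothetical callback that handles the HTTP round trip to the OCSP server:

```python
# Sketch: the three certificate-validity checks, RSA chain assumed.
import datetime
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.x509 import ocsp

def check_certificate(cert, issuer, post_der_to_responder):
    # (i) expiration date (not_valid_*_utc in newer cryptography releases)
    now = datetime.datetime.utcnow()
    assert cert.not_valid_before <= now <= cert.not_valid_after

    # (ii) issuer's signature over the to-be-signed certificate body
    issuer.public_key().verify(
        cert.signature,
        cert.tbs_certificate_bytes,
        padding.PKCS1v15(),
        cert.signature_hash_algorithm,
    )

    # (iii) revocation status via OCSP
    request = (
        ocsp.OCSPRequestBuilder()
        .add_certificate(cert, issuer, hashes.SHA1())
        .build()
    )
    der = request.public_bytes(serialization.Encoding.DER)
    response = post_der_to_responder(der)    # HTTP transport not shown
    assert response.certificate_status == ocsp.OCSPCertStatus.GOOD
```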
Digitally signed malware can bypass system protection mechanisms that install or launch only programs with valid signatures. It can also evade anti-virus programs, which often forgo scanning signed binaries. Known from advanced threats such as Stuxnet and Flame, this type of abuse has not been measured systematically in the broader malware landscape. In particular, the methods, effectiveness window, and security implications of code-signing PKI abuse are not well understood. We propose a threat model that highlights three types of weaknesses in the code-signing PKI. We overcome challenges specific to code-signing measurements by introducing techniques for prioritizing the collection of code-signing certificates that are likely abusive. We also introduce an algorithm for distinguishing among different types of threats. These techniques allow us to study threats that breach the trust encoded in the Windows code-signing PKI. The threats include stealing the private keys associated with benign certificates and using them to sign malware, as well as impersonating legitimate companies that do not develop software and, hence, do not own code-signing certificates. Finally, we discuss the actionable implications of our findings and propose concrete steps for improving the security of the code-signing ecosystem.
Emerging nonvolatile memory (NVM) devices are not limited to building nonvolatile memory macros. They can also be used in developing nonvolatile logics (nvLogics) for nonvolatile processors, security circuits for the internet of things (IoT), and computing-in-memory (CIM) for artificial intelligence (AI) chips. This paper explores the challenges in circuit designs of emerging memory devices for application in nonvolatile logics, security circuits, and CIM for deep neural networks (DNN). Several silicon-verified examples of these circuits are reviewed in this paper.
The MgO-based magnetic tunnel junction (MTJ) is the basis of the magnetic read sensors in modern hard disk drives. Within its operating bandwidth, the sensor's performance is significantly affected by nonlinear and oscillatory behavior arising from the MTJ's magnetization dynamics at microwave frequencies. Static I-V curve measurements are commonly used to characterize the sensor's nonlinear effects; unfortunately, these do not sufficiently capture the MTJ's magnetization dynamics. In this paper, we demonstrate the use of the two-tone measurement technique for a full treatment of the sensor's nonlinear effects in conjunction with its dynamic ones. This approach is new in the field of magnetism and magnetic materials, and it has its challenges due to the nature of the device. Nevertheless, the experimental results demonstrate how the two-tone measurement technique can be used to characterize the nonlinear properties of magnetic sensors.
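For context, a two-tone measurement drives the device with two closely spaced tones and observes the intermodulation products that any weak nonlinearity generates; the textbook picture (not the paper's specific model) is:

```latex
% Standard small-signal illustration: expand the sensor response as a
% weak polynomial nonlinearity driven by two tones of equal amplitude A.
\[
  v_{\mathrm{out}} = a_1 v + a_2 v^2 + a_3 v^3, \qquad
  v = A\cos(2\pi f_1 t) + A\cos(2\pi f_2 t).
\]
% The cubic term produces third-order intermodulation products at
% 2 f_1 - f_2 and 2 f_2 - f_1, each of amplitude (3/4) a_3 A^3, which
% fall inside the operating band and are what the two-tone test measures.
```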
Industrial Control Systems (ICS) are widely deployed in mission critical infrastructures such as manufacturing, energy, and transportation. The mission critical nature of ICS devices poses important security challenges for ICS vendors and asset owners. In particular, the patching of ICS devices is usually deferred to scheduled production outages so as to prevent potential operational disruption of critical systems. In this paper, we present the results from our longitudinal measurement and characterization study of ICS patching behavior. Our analysis of more than 100 thousand Internet-exposed ICS devices reveals that fewer than 30% upgrade to newer patched versions within 60 days of a vulnerability disclosure. Based on our measurement and analysis, we further propose a model to forecast the patching behavior of ICS devices.
Nearly all modern software has security flaws, either known or unknown to its users. However, metrics for evaluating software security (or the lack thereof) are noisy at best. Common evaluation methods include counting the past vulnerabilities of the program, or comparing the size of the Trusted Computing Base (TCB), measured in lines of code (LoC) or binary size. Other than deleting large swaths of code from a project, it is difficult to assess whether a code change decreased the likelihood of a future security vulnerability. Developers need a practical, constructive way of evaluating security. This position paper argues that we already have all the tools needed to design a better, empirical method of security evaluation. We discuss related work that estimates the severity and vulnerability of certain attack vectors based on code properties that can be determined via static analysis. This paper proposes a grand, unified model that can predict the risk and severity of vulnerabilities in a program. Our prediction model uses machine learning to correlate these code features of open-source applications with the history of vulnerabilities reported in the CVE (Common Vulnerabilities and Exposures) database. Based on this model, one can incorporate into the standard development cycle an analysis that predicts whether the code is becoming more or less prone to vulnerabilities.
Distinguishing and classifying different types of malware is important to better understand how they can infect computers and devices, the threat level they pose, and how to protect against them. In this paper, a system for classifying malware programs is presented. The paper describes the architecture of the system and assesses its performance on a publicly available database (provided by Microsoft for the Microsoft Malware Classification Challenge BIG2015) to serve as a benchmark for future research efforts. First, the malicious programs are preprocessed so that they are visualized as grayscale images. We then make use of an architecture composed of multiple layers (multiple levels of encoding) to carry out the classification of those images/programs. We compare the performance of this approach against traditional machine learning and pattern recognition algorithms. Our experimental results show that the deep learning architecture yields a boost in performance over those conventional/standard algorithms. A hold-out validation analysis using the superior architecture shows an accuracy on the order of 99.15%.
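A minimal sketch of the visualization step described above, mapping the raw bytes of a binary onto a 2-D grayscale image that an image classifier can then consume (the width and file name are illustrative choices, not the paper's):

```python
# Sketch: one byte of the binary becomes one grayscale pixel (0-255).
import numpy as np

def binary_to_grayscale(path: str, width: int = 256) -> np.ndarray:
    data = np.fromfile(path, dtype=np.uint8)
    height = len(data) // width              # drop the trailing partial row
    return data[: height * width].reshape(height, width)

# img = binary_to_grayscale("sample.bin")   # shape (height, 256)
# img can then be fed to a CNN or classic image-feature classifier.
```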
This tutorial will present a systematic overview of kleptography: stealing information subliminally from black-box cryptographic implementations; and cliptography: defense mechanisms that clip the power of kleptographic attacks via specification re-designs (without altering the underlying algorithms). Despite the laudable history of development of modern cryptography, applying cryptographic tools to reliably provide security and privacy in practice is notoriously difficult. One fundamental practical challenge remains: guaranteeing security and privacy without explicit trust in the algorithms and implementations that underlie basic security infrastructure. While the dangers of entertaining adversarial implementations of cryptographic primitives seem obvious, the ramifications of such attacks are surprisingly dire: it turns out that, in wide generality, adversarial implementations of cryptographic algorithms (both deterministic and randomized) may leak private information while producing output that is statistically indistinguishable from that of a faithful implementation. Such attacks were formally studied in kleptography. The Snowden revelations have shown us how security and privacy can be lost on a very large scale, even when traditional cryptography seems to be used to protect Internet communication, when kleptography is not taken into consideration. We will first explain how the above-mentioned kleptographic attacks can be carried out in various settings. We will then introduce several simple but rigorous immunizing strategies, inspired by folklore practical wisdom, that protect different algorithms from implementation subversion. These strategies can be applied to ensure the security of most fundamental cryptographic primitives, such as PRGs, digital signatures, and public key encryption, against kleptographic attacks when they are implemented accordingly. Our new design principles may suggest new standardization methods that help reduce the threat of subverted implementations. We also hope our tutorial will stimulate a community-wide effort to further tackle the fundamental challenge mentioned at the beginning.
In the past couple of years, cloud computing has become a prominent part of the IT industry. As a result of its economic benefits, more and more people are heading towards cloud adoption. At present there are numerous cloud service providers (CSPs) allowing customers to host their applications and data in the cloud. However, cloud security continues to be the biggest obstacle to cloud adoption and thereby deters customers from using its services. Various techniques have been implemented by providers in order to mitigate risks pertaining to cloud security. In this paper, we present a Hybrid Cryptographic System (HCS) that combines the benefits of both symmetric and asymmetric encryption, resulting in a secure cloud environment. The paper focuses on creating a secure cloud ecosystem in which we make use of multi-factor authentication along with multiple levels of hashing and encryption. The proposed system, along with the algorithm, is simulated using the CloudSim simulator. To this end, we illustrate the working of our proposed system along with the simulated results.
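The paper's exact HCS construction is not given in the abstract; the sketch below shows the generic hybrid pattern it builds on, where a fast symmetric key (AES-256-GCM) encrypts the payload and an asymmetric key (RSA-OAEP) wraps the symmetric key, using the pyca/cryptography package:

```python
# Sketch: generic hybrid encryption (not the paper's exact HCS).
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def hybrid_encrypt(public_key, plaintext: bytes):
    session_key = AESGCM.generate_key(bit_length=256)   # fresh symmetric key
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, plaintext, None)
    wrapped = public_key.encrypt(session_key, OAEP)     # asymmetric key wrap
    return wrapped, nonce, ciphertext

def hybrid_decrypt(private_key, wrapped, nonce, ciphertext):
    session_key = private_key.decrypt(wrapped, OAEP)
    return AESGCM(session_key).decrypt(nonce, ciphertext, None)

recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)
w, n, c = hybrid_encrypt(recipient.public_key(), b"tenant data")
assert hybrid_decrypt(recipient, w, n, c) == b"tenant data"
```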
Data loss is perceived as one of the major threats to cloud storage. Consequently, the security community has developed several challenge-response protocols that allow a user to remotely verify whether an outsourced file is still intact. However, two important practical problems have not yet been considered. First, clients commonly outsource multiple files of different sizes, raising the question of how to formalize such a scheme and, in particular, how to ensure that all files can be simultaneously audited. Second, in case auditing of the files fails, existing schemes do not provide the client with any method to prove whether the original files are still recoverable. We address both problems and describe appropriate solutions. The first problem is tackled by providing a new type of "Proofs of Retrievability" scheme, enabling a client to check all files simultaneously in a compact way. The second problem is solved by defining a novel procedure called "Proofs of Recoverability", enabling a client to obtain assurance of whether a file is recoverable or irreparably damaged. Finally, we present a combination of both schemes, allowing the client to check the recoverability of all her original files.
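As a toy illustration of the challenge-response idea (not the paper's compact construction, which avoids returning whole blocks), a client can keep a MAC key, tag each block before outsourcing, and later spot-check randomly chosen blocks:

```python
# Sketch: toy block-tagging audit in the spirit of a retrievability check.
import hashlib, hmac, secrets

BLOCK = 4096

def split_blocks(data: bytes):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

# Client: before outsourcing, keep only a short secret key.
key = secrets.token_bytes(32)
def tag(index: int, block: bytes) -> bytes:
    return hmac.new(key, index.to_bytes(8, "big") + block,
                    hashlib.sha256).digest()

outsourced = secrets.token_bytes(10 * BLOCK)      # stands in for the file
server = [(b, tag(i, b)) for i, b in enumerate(split_blocks(outsourced))]

# Audit: challenge a random subset of block indices, verify returned tags.
challenge = [secrets.randbelow(len(server)) for _ in range(3)]
for i in challenge:
    block, t = server[i]                          # server's response
    assert hmac.compare_digest(t, tag(i, block))  # client-side verification
```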
Recent architectures for the advanced metering infrastructure (AMI) have incorporated several back-end systems that handle billing and other smart grid control operations. The non-availability of metering data when needed, or the untimely delivery of data needed for control operations, will undermine the activities of these back-end systems. Unfortunately, there are concerns that cyber attacks such as distributed denial of service (DDoS) will manifest with growing magnitude and complexity in smart grid AMI networks. Such attacks range from delays in the availability of end users' metering data to complete denial in the case of a grounded network. This paper proposes a cloud-based (IaaS) firewall for the mitigation of DDoS attacks in a smart grid AMI network. The proposed firewall can not only mitigate the effects of DDoS attacks but also prevent them before they are launched. Our firewall system leverages cloud computing technology, which has the added advantage of reducing the burden of data computation and storage for smart grid AMI back-end systems. The OpenFlow firewall proposed in this study is a better security solution than traditional on-premises DoS solutions, which cannot cope with the wide range of new attacks targeting the smart grid AMI network infrastructure. Simulation results generated from the study show that our model can guarantee the availability of metering/control data and could be used to improve the QoS of the smart grid AMI network under a DDoS attack scenario.
New-generation communication technologies (e.g., 5G) enhance interactions between devices in mobile and wireless communication networks by supporting large-scale data sharing. The vehicle is one kind of device that benefits from these technologies, and vehicles have accordingly become a significant component of vehicular networks. Thus, as a classic application of the Internet of Things (IoT), the vehicular network can provide more information services for its human users, which makes it more socialized. A new concept is then formed, namely "Vehicular Social Networks (VSNs)", which brings both the benefits of data sharing and the challenges of security. Traditional public key infrastructures (PKI) can guarantee user identity authentication in the network; however, PKI cannot distinguish untrustworthy information from authorized users. For this reason, a trust evaluation mechanism is required to guarantee the trustworthiness of information by distinguishing malicious users from the network. Hence, this paper explores a trust evaluation algorithm for VSNs and proposes a cloud-based VSN architecture to implement it. Experiments are conducted to investigate the performance of the trust algorithm in a vehicular network environment by building a three-layer VSN model. Simulation results reveal that the trust algorithm can be efficiently implemented by the proposed three-layer model.
The cloud computing paradigm continues to revolutionize the way business processes are conducted through the provision of massive resources, reliability across networks, and the ability to offer parallel processing. Meanwhile, miniaturization, proliferation, and nanotechnology within devices have enabled the digitization of almost every object, which eventually gave rise to a new technological marvel dubbed the Internet of Things (IoT). IoT enables self-configurable/smart devices to connect intelligently through Radio Frequency Identification (RFID), Wi-Fi, LAN, GPRS, and other methods, further enabling the timely processing of information. Building on these developments, the integration of cloud and IoT infrastructures has led to an explosion in the amount of data being exchanged between devices, which has in turn enabled malicious actors to use this as a platform to launch various cybercrime activities. Consequently, digital forensics provides a significant approach that can deliver an effective post-event response to these malicious attacks in cloud-based IoT infrastructures. The problem being addressed is that, at the time of writing, there still exist no accepted standards or frameworks for conducting digital forensic investigations on cloud-based IoT infrastructures. As a result, the authors propose a cloud-centric framework that is able to isolate big data as forensic evidence from IoT (CFIBD-IoT) infrastructures for proper analysis and examination. It is the authors' opinion that if the CFIBD-IoT framework is implemented fully, it will support cloud-based IoT tool creation as well as future investigative techniques in the cloud with a degree of certainty.
In big data analysis and processing, a key concern blocking users from storing and processing their data in the cloud is their misgivings about the security and performance of cloud services. There is an urgent need for an approach that helps each cloud service provider (CSP) demonstrate that its infrastructure and service behavior can meet users' expectations. However, most prior research focused on validating the process compliance of cloud services without an accurate description of basic service behaviors, and could not measure security capability. In this paper, we propose a novel approach to verifying cloud service security conformance, called CloudSec, which reduces the description gap between the cloud provider and the customer by modeling cloud service behaviors (the CloudBeh model) and the security SLA (the SecSLA model). These models enable a systematic integration of security constraints and service behavior in the cloud, with UPPAAL used to check conformance: the approach can not only check conformance with CloudBeh performance metrics but also verify whether the security constraints meet the SecSLA. The proposed approach is validated through a case study and experiments with an OpenStack-based cloud storage service, which illustrate the effectiveness of CloudSec and show that it can be applied in real cloud scenarios.
Organizations face the issue of how best to allocate their security resources. Thus, they need an accurate method for assessing how many new vulnerabilities will be reported for the operating systems (OSs) they use in a given time period. Our approach consists of clustering vulnerabilities by leveraging the text information within vulnerability records, and then simulating the mean value function of vulnerabilities by relaxing the monotonic intensity function assumption that is prevalent among studies using software reliability models (SRMs) and nonhomogeneous Poisson processes (NHPPs). We applied our approach to the vulnerabilities of four OSs: Windows, Mac, iOS, and Linux. For the OSs analyzed, in terms of both curve fitting and prediction capability, our results are more accurate in all cases than a power-law model without clustering drawn from a family of SRMs.
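A sketch of the power-law NHPP baseline the authors compare against: fit the mean value function m(t) = a·t^b to cumulative vulnerability counts, then extrapolate the expected number of new reports (the data below is synthetic, and the model is the baseline, not the authors' clustered approach):

```python
# Sketch: power-law (Crow/AMSAA-style) NHPP baseline for vulnerability
# forecasting, fit by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def mean_value(t, a, b):
    return a * np.power(t, b)                # m(t) = a * t**b

t = np.arange(1, 49)                         # months since release
counts = np.random.default_rng(2).poisson(3.0, size=t.size)
cumulative = np.cumsum(counts)               # synthetic cumulative reports

(a, b), _ = curve_fit(mean_value, t, cumulative, p0=(1.0, 1.0))
forecast = mean_value(60, a, b) - mean_value(48, a, b)
print(f"a={a:.2f}, b={b:.2f}, expected new vulns in months 49-60: {forecast:.1f}")
```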
With ubiquitous computing providing services and applications anywhere and at any time, cloud computing is the best option, as it offers flexible, pay-per-use services to its customers. Nevertheless, security and privacy are the main challenges to its success due to its dynamic and distributed architecture, which generates big data that should be carefully analyzed for detecting network vulnerabilities. In this paper, we propose a Collaborative Anomaly Detection Framework (CADF) for detecting cyber attacks in cloud computing environments. We describe the technical functions and deployment of the framework to illustrate its implementation and installation methodology. The framework is evaluated on the UNSW-NB15 dataset to check its credibility for deployment in cloud computing environments. The experimental results showed that this framework can easily handle large-scale systems, as its implementation requires only estimating statistical measures from network observations. Moreover, the framework outperforms three state-of-the-art techniques in terms of false positive rate and detection rate.
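In the spirit of a detector that "requires only estimating statistical measures from network observations", the sketch below flags records whose Mahalanobis distance from a benign profile exceeds a threshold; the features, data, and threshold are placeholders, not the CADF internals:

```python
# Sketch: statistical anomaly detection from mean and covariance only.
import numpy as np

rng = np.random.default_rng(3)
benign = rng.normal(0.0, 1.0, size=(5000, 8))     # benign feature vectors

mu = benign.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(benign, rowvar=False))

def anomaly_score(x: np.ndarray) -> float:
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))        # Mahalanobis distance

threshold = 4.0                                   # would be tuned on held-out data
attack = rng.normal(5.0, 1.0, size=8)
print(anomaly_score(benign[0]) > threshold)       # False: looks benign
print(anomaly_score(attack) > threshold)          # True: flagged as anomalous
```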
Conventional program analyses have made great strides by leveraging logical reasoning. However, they cannot handle uncertain knowledge, and they lack the ability to learn and adapt. This in turn hinders the accuracy, scalability, and usability of program analysis tools in practice. We seek to address these limitations by proposing a methodology and framework for incorporating probabilistic reasoning directly into existing program analyses that are based on logical reasoning. We demonstrate that the combined approach can benefit a number of important applications of program analysis and thereby facilitate more widespread adoption of this technology.
In this paper, we propose and implement CommunityGuard, a system comprising intelligent Guardian Nodes that learn about and prevent malicious traffic coming into and going out of a user's personal area network. In the CommunityGuard model, each Guardian Node tells the others about emerging threats, blocking these threats for all users as soon as they begin. Furthermore, Guardian Nodes regularly update themselves with the latest threat models to provide effective security against new and emerging threats. Our evaluation shows that CommunityGuard provides immunity against a range of incoming and outgoing attacks at all points of time with an acceptable impact on network performance. Oftentimes, the sources of DDoS attack traffic are personal devices that have been compromised without the owner's knowledge; we have designed CommunityGuard to prevent such outgoing DDoS traffic on a wide scale, which can hamstring the otherwise very frightening prospect of crippling DDoS attacks.
This paper investigates the possibility of using ensemble learning methods to improve the performance of intrusion detection systems. We compare three ensemble learning methods, boosting, bagging, and stacking, in order to improve the detection rate and reduce the false alarm rate. These ensemble methods use well-known and distinct base classification algorithms: J48 (decision tree), NB (Naïve Bayes), MLP (neural network), and REPTree. The comparison experiments are applied to the UNSW-NB15 data set, a recent public data set for network intrusion detection systems. Results show that boosting and bagging can achieve higher accuracy than a single classifier, while stacking performs better than the other ensemble learning methods.
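A sketch of the three ensemble strategies compared above, using scikit-learn (version 1.2 or later is assumed for the parameter names) with stand-ins for the base learners and synthetic data in place of UNSW-NB15:

```python
# Sketch: boosting vs. bagging vs. stacking with tree / NB / MLP learners.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

models = {
    "boosting": AdaBoostClassifier(
        estimator=DecisionTreeClassifier(max_depth=1)),
    "bagging": BaggingClassifier(estimator=DecisionTreeClassifier()),
    "stacking": StackingClassifier(
        estimators=[("dt", DecisionTreeClassifier()),
                    ("nb", GaussianNB()),
                    ("mlp", MLPClassifier(max_iter=500))],
        final_estimator=LogisticRegression()),
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=3).mean())
```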