Biblio
In vehicular networks, each message is signed by the generating node to ensure accountability for its contents. For privacy reasons, each vehicle uses a collection of certificates, which for accountability reasons are linked at a central authority. One such design is the Security Credential Management System (SCMS) [1], the leading credential management system in the US. The SCMS is composed of multiple logically separated components, each with a different key-management task. The SCMS is designed to ensure privacy against a single insider compromise, as well as against outside adversaries. In this paper, we demonstrate that the current SCMS design fails to achieve this goal, showing that a compromised authority can gain substantial information about certificate linkages. We propose a solution that accommodates threshold-based detection but uses relabeling and noise to limit the information that a single insider adversary can learn. We also analyze our solution using techniques from differential privacy and validate it with traffic-simulator-based experiments. Our results show that the proposed solution prevents leakage of linkage information to the compromised authority, even in collusion with outside attackers.
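The abstract above does not give the mechanism in detail, so the following is only a minimal sketch in the spirit of its noise-based defense: Laplace noise is added to per-pseudonym misbehavior-report counts before a threshold test, so a single insider observes noisy counts rather than exact linkage information. The function name, the threshold semantics, and the choice of epsilon are illustrative assumptions, not the SCMS design itself.

```python
import numpy as np

def noisy_threshold_report(report_counts, threshold, epsilon, rng=None):
    """Add Laplace noise (scale = sensitivity / epsilon, with sensitivity of
    one report) to each pseudonym's misbehavior-report count before the
    threshold test, so an insider only sees noisy counts (illustrative only)."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = {p: c + rng.laplace(scale=1.0 / epsilon) for p, c in report_counts.items()}
    return {p: c for p, c in noisy.items() if c >= threshold}

# Toy usage: three pseudonyms, detection threshold of 5 reports, epsilon = 0.5.
flagged = noisy_threshold_report({"ps1": 2, "ps2": 7, "ps3": 4}, threshold=5, epsilon=0.5)
print(flagged)
```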
Information leakage is one of the most important security issues in the current Internet. In Named Data Networking (NDN), Interest names introduce novel vulnerabilities that can be exploited. By planting malware, an attacker can encode critical information steganographically in Interest names and leak it out of the network by generating anomalous Interest traffic. This security threat based on Interest names does not exist in IP networks, and solving it is essential to securing the NDN architecture. This paper performs a risk analysis of information leakage in NDN. We first describe the vulnerabilities associated with Interest names and, as countermeasures, we propose a name-based filter using search engine information and another filter using a one-class Support Vector Machine (SVM). We collected URLs from the data repository provided by Common Crawl and evaluated the performance of our per-packet filters. We show that our filters can drastically reduce the throughput of information leakage, which makes anomalous Interest traffic easier to detect. It is therefore possible to mitigate information leakage in NDN networks, which is a strong incentive for future deployment of this architecture at Internet scale.
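As a rough sketch of the second countermeasure (the one-class SVM filter), the snippet below trains on character n-grams of legitimate URL-like name components and flags an Interest name that looks like an encoded payload. The feature choice, the tiny training set, and the SVM parameters are my own assumptions for illustration, not the paper's configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import OneClassSVM
from sklearn.pipeline import make_pipeline

# Legitimate name components (e.g., harvested from Common Crawl URLs) versus a
# suspicious Interest name that embeds an encoded payload.
legitimate = ["news/sports/today", "video/movies/trailer", "img/logo/small"]
suspicious = ["dXNlcjpwYXNzd29yZDpzZWNyZXQ/aGVsbG8"]

# Character n-gram features plus a one-class SVM trained only on legitimate names.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    OneClassSVM(kernel="rbf", gamma="scale", nu=0.05),
)
clf.fit(legitimate)
print(clf.predict(suspicious))   # -1 => anomalous name, drop or flag the Interest
```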
Due to the proliferation of reprogrammable hardware, core designs built from modules drawn from a variety of sources execute with direct access to critical system resources. Expressing guarantees that such modules satisfy, in particular the dynamic conditions under which they release information about their unbounded streams of inputs, and automatically proving that they satisfy such guarantees, is an open and critical problem. To address these challenges, we propose a domain-specific language, named STREAMS, for expressing information-flow policies with declassification over unbounded input streams. We also introduce a novel algorithm, named SIMAREL, that, given a core design C and a STREAMS policy P, automatically proves or falsifies that C satisfies P. The key technical insight behind the design of SIMAREL is a novel algorithm for efficiently synthesizing relational invariants over pairs of circuit executions. We expressed the expected behavior of cores designed independently for research and production as STREAMS policies and used SIMAREL to check whether each core satisfies its policy. SIMAREL proved that half of the cores satisfied their expected behavior, but found unexpected information leaks in six open-source designs: an Ethernet controller, a flash memory controller, an SD-card storage manager, a robotics controller, a digital-signal processing (DSP) module, and a debugging interface.
Top-level domains play an important role in the domain name system, and close attention should be paid to their security. In this paper, we found many configuration anomalies of top-level domains by analyzing their resource records. We obtained the resource records of top-level domains from the root name servers and from the authoritative servers of the top-level domains themselves. By comparing these resource records, we observed anomalies in top-level domains. For example, eight servers are shared by more than one hundred top-level domains; some TTL or SERIAL fields of resource records obtained from different NS servers of the same top-level domain were inconsistent; and some authoritative servers of top-level domains were unreachable. These anomalies may affect the availability of top-level domains. We hope that they draw top-level domain administrators' attention to the security of top-level domains.
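The kind of SERIAL-consistency check described above can be sketched with dnspython: query each authoritative NS of a TLD directly for its SOA record and compare serials, treating timeouts as unreachable servers. This is an illustrative probe I wrote for this entry, not the authors' measurement tooling, and it omits IPv6, TCP fallback, and retry handling.

```python
import dns.resolver, dns.message, dns.query

def soa_serials(tld):
    """Query each authoritative NS of a TLD directly and collect SOA serials;
    differing serials hint at the inconsistencies described above."""
    serials = {}
    for ns in dns.resolver.resolve(tld, "NS"):
        ns_name = str(ns.target)
        try:
            addr = str(dns.resolver.resolve(ns_name, "A")[0])
            reply = dns.query.udp(dns.message.make_query(tld, "SOA"), addr, timeout=3)
            serials[ns_name] = reply.answer[0][0].serial if reply.answer else None
        except Exception:
            serials[ns_name] = None        # unreachable authoritative server
    return serials

# Maps each NS name to its SOA serial (or None if unreachable); mismatching
# serials across servers of the same TLD are the anomaly of interest.
print(soa_serials("org."))
```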
Today's major concern is not only maximizing the information rate through linear network coding, the intelligent combination of information symbols at sending nodes, but also the secure transmission of information. Although cryptographic security (computational security) provides secure transmission, it adds system complexity and consequently reduces the efficiency of the communication system. This motivates an alternative way to transmit information both securely and at maximum rate: secure network coding, an information-theoretic approach. Depending on the application, different security measures are needed when transmitting information over a wiretapped network with potential attacks by adversaries. In this work, mathematical models for different security constraints, with upper and lower bounds, were studied as a function of the randomness added to the source message; the security constraints on a linear network code for randomized source messages depend both on the randomness added and on the number of random source symbols. If the source generates a large number of random symbols, fewer random keys can provide higher security for the information, but the information-theoretic security bounds remain the same. Hence, maximizing the randomness at the source is equivalent to raising the security level.
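A minimal GF(2) example (much simpler than the general model studied above) shows how one random symbol added at the source yields information-theoretic secrecy against a wiretapper who observes a single coded symbol: each transmitted symbol is individually independent of the message. The two-edge setup and symbol names are my own illustration.

```python
import secrets

def encode(message_bit):
    """Mix one random key bit with one message bit over GF(2): an eavesdropper
    who sees only one of the two coded symbols learns nothing about the message
    (perfect, information-theoretic secrecy for this toy single-bit case)."""
    r = secrets.randbits(1)           # randomness added at the source
    return (message_bit ^ r, r)       # coded symbols sent on two separate edges

def decode(symbols):
    c, r = symbols
    return c ^ r                      # the sink sees both edges and recovers the bit

m = 1
assert decode(encode(m)) == m
```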
Information-Centric Networking (ICN) is one of the most promising network architectures for handling the rapid increase of data traffic because it allows in-network caching. ICNs with Linear Network Coding (LNC) can greatly improve the performance of content caching and delivery. In this paper, we propose a Secure Content Caching and Routing (SCCR) framework based on Software-Defined Networking (SDN) to find the optimal cache management and routing for secure content delivery, which aims first to minimize the total cost of cache and bandwidth consumption and then to minimize the usage of random chunks needed to guarantee information-theoretic security (ITS). Specifically, we first propose the SCCR problem and introduce the main ideas of the SCCR framework. Next, we formulate the SCCR problem as two Linear Programming (LP) formulations and design the SCCR algorithm based on them to solve the SCCR problem optimally. Finally, extensive simulations are conducted to evaluate the proposed SCCR framework and algorithms.
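The abstract does not state the LP formulations, so the following is only a toy LP in the same spirit: minimize cache cost plus bandwidth cost subject to serving the whole demand for one content item. All coefficients and variable names are hypothetical placeholders.

```python
from scipy.optimize import linprog

# Decision variables: x = [cached_fraction, fetched_fraction] of one content item.
# Minimize cache cost + bandwidth cost subject to serving the whole demand.
cache_cost, bw_cost = 2.0, 5.0
res = linprog(
    c=[cache_cost, bw_cost],              # objective coefficients
    A_ub=[[-1.0, -1.0]], b_ub=[-1.0],     # cached + fetched >= 1 (full demand)
    bounds=[(0, 1), (0, 1)],
    method="highs",
)
print(res.x)   # expected: serve everything from cache, since caching is cheaper here
```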
The following topics are dealt with: feature extraction; data mining; support vector machines; mobile computing; photovoltaic power systems; mean square error methods; fault diagnosis; natural language processing; control system synthesis; and Internet of Things.
Digital fingerprinting refers to a method that assigns each copy of an intellectual property (IP) a distinct fingerprint. It was introduced for the purpose of protecting legal and honest IP users. The unique fingerprint can be used to identify the IP or a chip that contains the IP. However, existing fingerprinting techniques are not practical due to the high cost of creating fingerprints and the lack of effective methods to verify them. In this paper, we study a practical scan chain based fingerprinting method, where the digital fingerprint is generated by selecting the Q-SD or Q'-SD connection during the design of scan chains. This method has two major advantages. First, fingerprints are created as a post-silicon procedure, so there is little fabrication overhead. Second, altering the Q-SD or Q'-SD connection style requires modifying the test vectors for each fingerprinted IP in order to maintain fault coverage. This enables us to verify the fingerprint by inspecting the test vectors without opening up the chip to check the Q-SD or Q'-SD connection styles. We perform experiments on standard benchmarks to demonstrate that our approach has low design overhead. We also conduct a security analysis to show that such fingerprints are robust against various attacks.
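To make the test-vector argument concrete, here is a simplified scan-chain model of my own (not the paper's procedure): a Q'-SD connection inverts the bit passed to the next cell, so the scan-in bits needed to load a given state depend on the connection-style choices, which is why the fingerprint can be read off the test vectors without opening the chip.

```python
def scan_in_bits(target_state, uses_q_bar):
    """Simplified model: uses_q_bar[i] is True if cell i receives the previous
    cell's Q' (inverting) output; entry 0 is unused because cell 0 is fed by the
    scan-in pin.  The bit destined for cell i traverses connections 1..i, so it
    must be pre-inverted once per Q'-SD connection on that path."""
    bits, parity = [], 0
    for i, bit in enumerate(target_state):
        if i > 0 and uses_q_bar[i]:
            parity ^= 1                 # one more inverting connection on the path
        bits.append(bit ^ parity)       # per-cell scan-in bit (scan order not shown)
    return bits

# Same target state, two different fingerprints => two different test vectors,
# which is what allows fingerprint verification from the vectors alone.
print(scan_in_bits([1, 0, 1, 1], [False, False, False, False]))
print(scan_in_bits([1, 0, 1, 1], [False, True, False, True]))
```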
Emerging computing relies heavily on secure back-end storage for the massive volumes of big data originating from Internet of Things (IoT) smart devices and Cloud-hosted web applications. Structured Query Language (SQL) Injection Attack (SQLIA) remains an intruder's exploit of choice for pilfering confidential data from the back-end database, with damaging ramifications. Existing approaches predate this emerging computing context of Internet-scale big data mining and therefore lack the ability to cope with new signatures concealed in a large volume of web requests over time. These approaches were also string-lookup techniques aimed at the on-premise application domain boundary, and they do not apply to roaming Cloud-hosted services spanning the edge Software-Defined Network (SDN) to application endpoints with large numbers of web request hits. A Machine Learning (ML) approach provides scalable big data mining for SQLIA detection and prevention. Unfortunately, the absence of a corpus on which to train a classifier is a well-known issue in SQLIA research applying Artificial Intelligence (AI) techniques. This paper presents an application context pattern-driven corpus to train a supervised learning model. The model is trained with the ML algorithms Two-Class Support Vector Machine (TC SVM) and Two-Class Logistic Regression (TC LR), implemented on Microsoft Azure Machine Learning (MAML) studio, to mitigate SQLIA. The scheme presented here then forms the subject of an empirical evaluation using Receiver Operating Characteristic (ROC) curves.
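As a local stand-in for the MAML pipeline described above, the sketch below trains a two-class SVM and a logistic regression on labeled request strings using scikit-learn. The tiny corpus, character n-gram features, and model settings are placeholders for illustration, not the paper's corpus or Azure configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder corpus: benign requests (label 0) versus SQL-injection payloads (label 1).
requests = ["id=42", "name=alice", "q=books",
            "id=1 OR 1=1", "name=' UNION SELECT password FROM users--"]
labels = [0, 0, 0, 1, 1]

def vec():
    return TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 3))

svm = make_pipeline(vec(), LinearSVC()).fit(requests, labels)
lr = make_pipeline(vec(), LogisticRegression()).fit(requests, labels)

probe = ["id=5; DROP TABLE users--"]
print(svm.predict(probe), lr.predict(probe))   # 1 => flagged as SQLIA
```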
This paper presents our results from identifying anddocumenting false positives generated by static code analysistools. By false positives, we mean a static code analysis toolgenerates a warning message, but the warning message isnot really an error. The goal of our study is to understandthe different kinds of false positives generated so we can (1)automatically determine if an error message is truly indeed a truepositive, and (2) reduce the number of false positives developersand testers must triage. We have used two open-source tools andone commercial tool in our study. The results of our study haveled to 14 core false positive patterns, some of which we haveconfirmed with static code analysis tool developers.
While significant progress has been made separately on analytics systems for scalable stochastic gradient descent (SGD) and private SGD, none of the major scalable analytics frameworks have incorporated differentially private SGD. There are two inter-related issues for this disconnect between research and practice: (1) low model accuracy due to added noise to guarantee privacy, and (2) high development and runtime overhead of the private algorithms. This paper takes a first step to remedy this disconnect and proposes a private SGD algorithm to address both issues in an integrated manner. In contrast to the white-box approach adopted by previous work, we revisit and use the classical technique of output perturbation to devise a novel “bolt-on” approach to private SGD. While our approach trivially addresses (2), it makes (1) even more challenging. We address this challenge by providing a novel analysis of the L2-sensitivity of SGD, which allows, under the same privacy guarantees, better convergence of SGD when only a constant number of passes can be made over the data. We integrate our algorithm, as well as other state-of-the-art differentially private SGD, into Bismarck, a popular scalable SGD-based analytics system on top of an RDBMS. Extensive experiments show that our algorithm can be easily integrated, incurs virtually no overhead, scales well, and most importantly, yields substantially better (up to 4X) test accuracy than the state-of-the-art algorithms on many real datasets.
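The "bolt-on" idea above is output perturbation: train with ordinary SGD, then perturb only the final model. The toy sketch below illustrates that pattern for squared loss; the noise mechanism and the l2_sensitivity value are placeholders, since the paper's actual contribution is a tighter L2-sensitivity analysis and properly calibrated noise, which this sketch does not reproduce.

```python
import numpy as np

def bolt_on_private_sgd(X, y, epsilon, l2_sensitivity, lr=0.1, epochs=5, seed=0):
    """Output-perturbation sketch: run ordinary SGD on squared loss, then add
    noise to the final weights.  l2_sensitivity is assumed given; per-coordinate
    Laplace noise is used here purely for illustration."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):                 # one pass over the data per epoch
            w -= lr * (xi @ w - yi) * xi         # squared-loss gradient step
    noise = rng.laplace(scale=l2_sensitivity / epsilon, size=w.shape)
    return w + noise

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
print(bolt_on_private_sgd(X, y, epsilon=1.0, l2_sensitivity=0.1))
```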
Intrusion detection systems do not perform well when it comes to detecting zero-day attacks, so improving their performance in that regard is an active research topic. In this study, to detect zero-day attacks with high accuracy, we propose two deep learning based anomaly detection models using an autoencoder and a denoising autoencoder, respectively. The key factor that directly affects the accuracy of the proposed models is the threshold value, which was determined using a stochastic approach rather than the approaches available in the current literature. The proposed models were tested using the KDDTest+ dataset contained in NSL-KDD, and we achieved accuracies of 88.28% and 88.65%, respectively. The obtained results show that, as singular models, our proposed anomaly detection models outperform other singular anomaly detection methods and perform almost as well as recently proposed hybrid anomaly detection models.
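A minimal Keras sketch of the autoencoder variant follows: train on normal traffic only, score records by reconstruction error, and flag records whose error exceeds a threshold drawn from the training error distribution. The layer sizes, the random stand-in data, and the mean-plus-two-standard-deviations threshold rule are my assumptions, not the paper's stochastic threshold procedure or its NSL-KDD preprocessing.

```python
import numpy as np
from tensorflow import keras

def build_autoencoder(n_features, code_dim=8):
    """Small dense autoencoder; layer sizes are illustrative only."""
    model = keras.Sequential([
        keras.layers.Input(shape=(n_features,)),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(code_dim, activation="relu"),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(n_features, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

X_train = np.random.rand(1000, 20)        # stand-in for normalized normal-traffic features
ae = build_autoencoder(X_train.shape[1])
ae.fit(X_train, X_train, epochs=5, batch_size=64, verbose=0)

# Reconstruction error per record; threshold picked from the training error
# distribution (mean + 2*std as a simple stand-in for the paper's procedure).
errors = np.mean((ae.predict(X_train, verbose=0) - X_train) ** 2, axis=1)
threshold = errors.mean() + 2 * errors.std()

X_test = np.random.rand(5, 20)
test_errors = np.mean((ae.predict(X_test, verbose=0) - X_test) ** 2, axis=1)
print(test_errors > threshold)            # True => flag the record as anomalous
```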
We present a formal method for computing the best security provisioning for Internet of Things (IoT) scenarios characterized by a high degree of mobility. The security infrastructure is intended as a security resource allocation plan, computed as the solution of an optimization problem that minimizes the risk of having IoT devices not monitored by any resource. We employ the shortfall as a risk measure, a concept mostly used in economics, and adapt it to our scenario. We show how to compute and evaluate an allocation plan, and how such security solutions address the continuous topology changes that affect an IoT environment.
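For readers unfamiliar with the risk measure, the snippet below computes a plain expected shortfall (the mean loss over the worst tail of outcomes) for simulated counts of unmonitored devices under one allocation plan. The Poisson loss model and the 95% level are placeholders; the paper's adaptation of the shortfall to its optimization problem is not reproduced here.

```python
import numpy as np

def expected_shortfall(losses, alpha=0.95):
    """Expected shortfall at level alpha: the mean loss over the worst
    (1 - alpha) fraction of outcomes, used here as the risk of leaving
    IoT devices unmonitored under a given allocation plan."""
    losses = np.sort(np.asarray(losses))
    tail = losses[int(np.ceil(alpha * len(losses))):]
    return tail.mean() if tail.size else float(losses[-1])

# Placeholder: Monte Carlo samples of "number of unmonitored devices" for a plan.
rng = np.random.default_rng(1)
samples = rng.poisson(lam=3.0, size=10_000)
print(expected_shortfall(samples, alpha=0.95))
```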
This study proposes an efficient formulation for solving the stochastic security-constrained generation capacity expansion planning (SC-GCEP) problem. The main idea is to directly compute the line outage distribution factors (LODF), which can be applied to model the N - m post-contingency analysis. In addition, the post-contingency power flows are modeled based on the LODF and the power transfer distribution factors (PTDF). The post-contingency constraints have been reformulated using these linear distribution factors (PTDF and LODF) so that both the pre- and post-contingency constraints are modeled simultaneously in the SC-GCEP problem. In the stochastic formulation, load uncertainty is incorporated using a two-stage multi-period framework, and a K-means clustering technique is implemented to decrease the number of load scenarios. The main advantage of this methodology is the ability to quickly compute the post-contingency factors, especially with multiple-line outages (N - m), which improves the security-constrained analysis by quickly modeling the outage of m transmission lines in the stochastic SC-GCEP problem. Several experiments were carried out using two electrical power systems to validate the performance of the proposed formulation.
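For context on the distribution factors used above, the sketch below applies the standard single-outage relation LODF[l, k] = PTDF[l, k] / (1 - PTDF[k, k]) and the post-contingency flow update f_l' = f_l + LODF[l, k] * f_k. The 3-line PTDF matrix and flows are made-up numbers for illustration; the paper's generalization to simultaneous m-line outages is not shown.

```python
import numpy as np

def lodf_from_ptdf(ptdf):
    """Single-outage LODF from line-to-line PTDFs:
    LODF[l, k] = PTDF[l, k] / (1 - PTDF[k, k]) for l != k, and -1 on the diagonal."""
    lodf = ptdf / (1.0 - np.diag(ptdf))   # divide column k by (1 - PTDF[k, k])
    np.fill_diagonal(lodf, -1.0)
    return lodf

# Illustrative 3-line PTDF matrix: entry [l, k] is the flow change on line l per
# unit transfer between the terminal buses of line k (values are made up).
ptdf = np.array([[0.5, 0.3, 0.2],
                 [0.3, 0.6, 0.1],
                 [0.2, 0.1, 0.7]])
lodf = lodf_from_ptdf(ptdf)

f0 = np.array([100.0, 80.0, 60.0])        # pre-contingency line flows (MW)
k = 1                                     # outage of line k
f_post = f0 + lodf[:, k] * f0[k]          # post-contingency flows on all lines
print(f_post)                             # flow on the outaged line drops to zero
```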