Biblio
Among the many threats to cyber services, the distributed denial-of-service (DDoS) attack is one of the most prevalent today. A DDoS attack makes an online service unavailable by flooding the bandwidth or resources of a targeted system. An insider with legitimate access to the system can circumvent security controls more easily than an outsider, resulting in an insider attack. To mitigate insider-assisted DDoS attacks, this paper proposes a moving target defense mechanism that isolates insiders from innocent clients by using attack proxies. Further, using the concept of load balancing, an effective algorithm to detect and handle insider attacks is developed, with the aim of maximizing attack isolation while minimizing the total number of proxies used.
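The abstract does not give the algorithm itself; the sketch below is only a minimal illustration of the general proxy-shuffling idea behind such moving target defenses, with all names (`shuffle_until_isolated`, `is_attacked`) invented for illustration. Clients are spread across proxies, and when a proxy observes attack traffic its client group is repeatedly split across fresh proxies until the insiders are isolated.

```python
import random

def shuffle_until_isolated(clients, is_attacked, group_size=64):
    """Repeatedly split attacked proxy groups until insiders are isolated.

    clients     : list of client ids
    is_attacked : callback(group) -> True if the proxy serving `group`
                  observes attack traffic (i.e., it hosts an insider)
    Returns suspected insiders, clean groups, and total proxies used.
    """
    random.shuffle(clients)
    groups = [clients[i:i + group_size] for i in range(0, len(clients), group_size)]
    proxies_used = len(groups)
    clean, insiders = [], []
    while groups:
        group = groups.pop()
        if not is_attacked(group):
            clean.append(group)            # innocent clients keep their proxy
        elif len(group) == 1:
            insiders.append(group[0])      # attack persists on a lone client
        else:
            mid = len(group) // 2          # split across two fresh proxies
            groups += [group[:mid], group[mid:]]
            proxies_used += 2
    return insiders, clean, proxies_used
```

The binary split trades proxies for isolation speed, which matches the stated goal of maximizing isolation while bounding the number of proxies used.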
Securing Internet of Things (IoT) systems is a challenge because of their multiple points of vulnerability. A spate of recent hacks and security breaches has unveiled glaring vulnerabilities in the IoT. Due to the computational and memory constraints associated with anomaly detection algorithms in core networks, commercial in-line (part of the direct line of communication) Anomaly Detection Systems (ADSs) rely on sampling-based anomaly detection approaches to achieve line rates and truly in-line, real-time detection accuracy. However, packet sampling is inherently a lossy process that may provide an incomplete and biased approximation of the underlying traffic patterns. Moreover, commercial routers run proprietary software, leaving them closed to outside modification. As a result, detecting malicious packets on a given network path is one of the most challenging problems in the field of network security. We argue that the advent of Software Defined Networking (SDN) provides a unique opportunity to effectively detect and mitigate DDoS attacks. To overcome both the limitations of sampling-based ADSs and the closed nature of proprietary router software, we use the SDN infrastructure to collect the complete traffic flow statistics maintained at each SDN-enabled switch, achieving high detection accuracy. To implement our idea, we discuss how DDoS attacks can be mitigated using the features of the SDN infrastructure.
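The abstract does not name a specific detection metric; a common choice in SDN-based DDoS detection is entropy over flow statistics pulled from the switches. The sketch below, with a hypothetical flow-statistics input in place of a real controller API (e.g., an OpenFlow stats request), flags a drop in destination-IP entropy, a typical signature of many sources converging on one victim.

```python
import math
from collections import Counter

def dst_ip_entropy(flows):
    """Shannon entropy of destination IPs, weighted by packet counts.

    flows: iterable of (dst_ip, packet_count) pairs, as would be read
    from per-flow counters on an SDN-enabled switch.
    """
    counts = Counter()
    for dst_ip, pkts in flows:
        counts[dst_ip] += pkts
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def ddos_suspected(flows, threshold=2.0):
    # Low entropy means traffic is concentrated on few destinations,
    # which is characteristic of a flooding attack on a single victim.
    return dst_ip_entropy(flows) < threshold
```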
Data deduplication provides many benefits, but it also raises security and privacy issues, as users' sensitive data are exposed to both inside and outside attacks. Traditional encryption, which provides data confidentiality, is incompatible with data deduplication: it requires different users to encrypt their data with their own keys, so identical data copies belonging to different users yield different ciphertexts, making deduplication impossible. Convergent encryption has been proposed to enforce data confidentiality while making deduplication feasible. It encrypts/decrypts a data copy with a convergent key, which is obtained by computing the cryptographic hash value of the content of the data copy. After key generation and encryption, the user retains the key and sends the ciphertext to the cloud.
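As a minimal sketch of convergent encryption (not this paper's exact construction), the snippet below derives both the key and a deterministic nonce from the content's SHA-256 hash, so identical plaintexts from different users produce identical ciphertexts and can be deduplicated; it assumes the pyca/cryptography package.

```python
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def convergent_encrypt(data: bytes):
    """Encrypt with a key derived from the content itself.

    Identical plaintexts always map to identical ciphertexts, which is
    what makes server-side deduplication possible. Note: deterministic
    encryption leaks plaintext equality by design.
    """
    key = hashlib.sha256(data).digest()                       # convergent key = H(content)
    nonce = hashlib.sha256(b"nonce" + key).digest()[:12]      # deterministic nonce
    ciphertext = AESGCM(key).encrypt(nonce, data, None)
    return key, ciphertext

def convergent_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    nonce = hashlib.sha256(b"nonce" + key).digest()[:12]
    return AESGCM(key).decrypt(nonce, ciphertext, None)
```

The user keeps `key` locally and uploads only `ciphertext`; the cloud can deduplicate by comparing ciphertexts without ever seeing the plaintext.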
We present DECANTeR, a system that detects anomalous outbound HTTP communication by passively extracting fingerprints for each application running on a monitored host. The goal of our system is to detect unknown malware and backdoor communication, indicated by unknown fingerprints extracted from a host's network traffic. We evaluate a prototype with realistic data from an international organization and with datasets composed of malicious traffic. We show that our system achieves a false positive rate of 0.9% for 441 monitored host machines and an average detection rate of 97.7%, and that it cannot be evaded by malware using simple evasion techniques such as known browser user-agent values. We compare our solution with DUMONT [24], the current state-of-the-art IDS, which detects HTTP covert communication channels by focusing on benign HTTP traffic. The results show that DECANTeR outperforms DUMONT in terms of detection rate, false positive rate, and even evasion resistance. Finally, DECANTeR detects 96.8% of the information stealers in our dataset, which shows its potential to detect data exfiltration.
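DECANTeR's actual feature set is described in the paper; as a rough illustration of passive HTTP fingerprinting in general (not DECANTeR's exact features), the sketch below builds a per-application fingerprint from a few header-derived attributes and flags requests whose fingerprint was never seen during a training window.

```python
def http_fingerprint(request_headers: dict) -> tuple:
    """Coarse application fingerprint from outbound HTTP headers.

    Illustrative features only: user agent, sorted header names, and
    presence of a cookie. Real systems use richer, ordered features.
    """
    names = {h.lower() for h in request_headers}
    return (
        request_headers.get("User-Agent", ""),
        tuple(sorted(names)),
        "cookie" in names,
    )

class PassiveHttpMonitor:
    def __init__(self):
        self.known = set()

    def train(self, request_headers: dict):
        self.known.add(http_fingerprint(request_headers))

    def is_anomalous(self, request_headers: dict) -> bool:
        # Unknown fingerprint => possible malware/backdoor traffic.
        return http_fingerprint(request_headers) not in self.known
```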
In this paper, we improve recent results on the decentralized switched control problem to include the moving horizon case and apply it to a testbed system. Using known derivations for a centralized controller with look-ahead, we extend the decentralized problem with finite memory to include receding-horizon modal information. We then compare, in simulation, the performance of a switched controller with finite memory and a look-ahead horizon to that of a linear time-invariant (LTI) controller. The decentralized controller is further tested on a real-world system comprised of multiple model-scale hovercraft.
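For context (standard notation, not taken from the paper), a discrete-time switched linear system and a mode-dependent controller with an $h$-step look-ahead of the switching signal can be written as

```latex
x_{t+1} = A_{\theta(t)}\,x_t + B_{\theta(t)}\,u_t,
\qquad
u_t = K_{\theta(t),\,\theta(t+1),\,\dots,\,\theta(t+h)}\;x_t,
```

where $\theta(t)$ is the active mode and $h$ is the look-ahead horizon; an LTI controller corresponds to a single fixed gain $K$ used across all modes.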
Smart spammers and telemarketers circumvent standalone spam detection systems by directing low-rate spamming activity at a large number of recipients distributed across many telecommunication operators. Collaboration among multiple telecommunication operators (OPs) would allow operators to get rid of unwanted callers at an early stage of their spamming activity. The challenge in designing a collaborative spam detection system is that OPs are unwilling to share certain information about the behaviour of their users/customers because of privacy concerns. Ideally, operators would agree to share certain aggregated statistical information if the collaboration process ensured complete privacy protection for users and their network data. To address this challenge and convince OPs to collaborate, this paper proposes a decentralized reputation aggregation protocol that enables OPs to take part in a collaboration process without a trusted third-party centralized system and without developing predefined trust relationships with other OPs. To this end, the collaboration among operators is achieved through the exchange of cryptographic reputation scores, thus fully protecting the relationship network and the reputation scores of users even in the presence of colluders. We evaluate the performance of the proposed protocol on simulated data consisting of five collaborators. Experimental results reveal that the proposed approach outperforms standalone systems in terms of true positive rate and false positive rate.
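The paper's protocol exchanges cryptographic reputation scores; as a generic stand-in (not the paper's actual scheme), the sketch below shows additive secret sharing, one standard way to aggregate scores across operators so that no single operator learns another's individual input.

```python
import random

PRIME = 2**61 - 1  # arithmetic is done modulo a public prime

def share(score: int, n_ops: int):
    """Split a reputation score into n additive shares, one per operator."""
    shares = [random.randrange(PRIME) for _ in range(n_ops - 1)]
    shares.append((score - sum(shares)) % PRIME)
    return shares

def aggregate(all_shares):
    """Each operator sums the shares it received; summing those partial
    sums reconstructs only the total score, never any individual input."""
    partial_sums = [sum(col) % PRIME for col in zip(*all_shares)]
    return sum(partial_sums) % PRIME

# Three operators privately aggregate reputation scores 10, 20, 5:
ops_shares = [share(s, 3) for s in (10, 20, 5)]
assert aggregate(ops_shares) == 35
```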
As the Internet of Things (IoT) matures, many concerns are being raised about security, privacy, and interoperability. The Web of Things (WoT) model leverages web technologies to improve interoperability. Thanks to its distributed components, the web has scaled well beyond initial expectations. Still, secure authentication and communication across organization boundaries rely on the Public Key Infrastructure (PKI), which is a non-transparent, centralized single point of failure. We can improve transparency and reduce the chain of trust (thus significantly improving IoT security) by leveraging blockchain technology and web security standards. In this paper, we build a scalable, decentralized, IoT-centric PKI and discuss how to combine it with the emerging web authentication and authorization framework for constrained environments.
We present a novel approach to proving the absence of timing channels. The idea is to partition the program's execution traces in such a way that each partition component is checked for timing attack resilience by a time complexity analysis and that per-component resilience implies the resilience of the whole program. We construct a partition by splitting the program traces at secret-independent branches. This ensures that any pair of traces with the same public input has a component containing both traces. Crucially, the per-component checks can be normal safety properties expressed in terms of a single execution. Our approach is thus in contrast to prior approaches, such as self-composition, that aim to reason about multiple (k ≥ 2) executions at once. We formalize the above as an approach called quotient partitioning, generalized to any k-safety property, and prove it to be sound. A key feature of our approach is a demand-driven partitioning strategy that uses a regex-like notion called trails to identify sets of execution traces, particularly those influenced by tainted (or secret) data. We have applied our technique in a prototype implementation tool called Blazer, based on WALA, PPL, and the brics automaton library. We have proved timing-channel freedom of (or synthesized an attack specification for) 24 programs written in Java bytecode, including 6 classic examples from the literature and 6 examples extracted from the DARPA STAC challenge problems.
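In symbols (our paraphrase, not the paper's formal statement), the soundness of the partitioning argument for timing channels can be summarized as

```latex
\Bigl(\forall i.\;\forall \tau_1,\tau_2 \in P_i:\;
   \mathsf{pub}(\tau_1)=\mathsf{pub}(\tau_2)
   \;\Rightarrow\; \mathsf{time}(\tau_1)=\mathsf{time}(\tau_2)\Bigr)
\;\Longrightarrow\; \text{the whole program is free of timing channels,}
```

provided the partition $\{P_i\}$ is constructed so that any two traces agreeing on public input fall into a common component, which is exactly what splitting at secret-independent branches guarantees.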
In this paper, a method for measuring the monostatic RCS of complex-shaped objects under real conditions is proposed. The basic idea of the method is to measure different parts of the object (fragments) separately in the near-field zone. This technique is called the "decomposition method". After such measurements, all RCS data are summed, yielding the average RCS of the investigated object. This method is much more accessible than natural measurements in the far-field zone. In this paper, the decomposition method is tested numerically: a model of a complex-shaped object (the T-90 tank) is divided into fragments for a given direction of view. It is shown that the sum of the RCS of the fragments is close to the RCS of the full object for the corresponding direction.
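The summation the abstract refers to can be written compactly; under the assumption of incoherent (power-wise) combination of the fragment returns, the decomposition method approximates the average RCS for a viewing direction $\theta$ as

```latex
\bar{\sigma}(\theta) \;\approx\; \sum_{i=1}^{N} \bar{\sigma}_i(\theta),
```

where $\bar{\sigma}_i(\theta)$ is the near-field-measured average RCS of fragment $i$ and $N$ is the number of fragments.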
Source location privacy (SLP) is becoming an important property for a large class of security-critical wireless sensor network applications such as monitoring and tracking. Much of the previous work on SLP has focused on the development of various protocols to enhance the level of SLP imparted to the network, under various attacker models and other conditions. Other works have focused on analysing the level of SLP imparted by a specific protocol. In this paper, we focus on deconstructing routing-based SLP protocols to enable a better understanding of their structure. We argue that SLP-aware routing protocols can be classified into two main categories, namely (i) spatial and (ii) temporal. Based on this, we show that there are three important components, namely (i) decoy selection, (ii) use and routing of control messages, and (iii) use and routing of decoy messages. The decoy selection technique imparts the spatial or temporal property of SLP-aware routing. We show the viability of the framework through the construction of well-known SLP-aware routing protocols from the identified components.
Cross-modal audio-visual perception has been a long-standing topic in psychology and neurology, and various studies have discovered strong correlations in human perception of auditory and visual stimuli. Despite work on computational multimodal modeling, the problem of cross-modal audio-visual generation has not been systematically studied in the literature. In this paper, we make the first attempt to solve this cross-modal generation problem by leveraging the power of deep generative adversarial training. Specifically, we use conditional generative adversarial networks to achieve cross-modal audio-visual generation of musical performances. We explore different encoding methods for audio and visual signals and work on two scenarios: instrument-oriented generation and pose-oriented generation. Being the first to explore this new problem, we compose two new datasets with pairs of images and sounds of musical performances on different instruments. Our experiments using both classification and human evaluation demonstrate that our model can generate one modality (audio or visual) from the other to a good extent. Our experiments on various design choices, along with the datasets, will facilitate future research in this new problem space.
Network traffic classification is an important problem in network traffic analysis. It plays a vital role in many network tasks, including quality of service, firewall enforcement, and security. One of the challenges in classifying network traffic is the imbalanced nature of network data: the amount of traffic in some classes is usually much higher than in others. In this paper, we propose a deep learning approach to address the imbalanced data problem in network traffic classification. We use a recently proposed deep network for unsupervised learning, the Auxiliary Classifier Generative Adversarial Network, to generate synthesized data samples that balance the minority and majority classes. We tested our method on a well-known network traffic dataset, and the results show that our method achieves better performance than a recently proposed method for handling the imbalance problem in network traffic classification.
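The abstract describes using a class-conditional GAN to synthesize minority-class samples; the sketch below (PyTorch, with a hypothetical pre-trained generator `G` assumed to take noise and labels, AC-GAN style) shows only the rebalancing step: sample noise plus minority-class labels, generate, and append to the training set.

```python
import torch

def oversample_minority(G, minority_label, n_needed, z_dim=100, device="cpu"):
    """Generate synthetic samples for an underrepresented traffic class.

    G: a trained class-conditional generator, assumed to map
       (noise, labels) -> feature vectors.
    """
    G.eval()
    with torch.no_grad():
        z = torch.randn(n_needed, z_dim, device=device)
        labels = torch.full((n_needed,), minority_label,
                            dtype=torch.long, device=device)
        synthetic = G(z, labels)
    return synthetic, labels

# Usage sketch: top up class 3 until it matches the majority-class count.
# X_new, y_new = oversample_minority(G, minority_label=3, n_needed=5000)
# X_train = torch.cat([X_train, X_new]); y_train = torch.cat([y_train, y_new])
```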
Convolutional Neural Network (CNN) based methods have shown significant performance gains on the problem of visual tracking in recent years. Due to the many unpredictable changes an object can undergo online, such as abrupt motion, background clutter, and large deformation, visual tracking remains a challenging task. We propose a novel algorithm, namely Deep Location-Specific Tracking, which decomposes the tracking problem into a localization task and a classification task and trains an individual network for each task. The localization network exploits the information in the current frame and provides a specific location to improve the probability of successful tracking, while the classification network finds the target among many examples generated around the target location in the previous frame, as well as the one estimated by the localization network in the current frame. CNN-based trackers often have a massive number of trainable parameters and are prone to over-fitting to particular object states, leading to reduced precision or tracking drift. We address this problem by learning a classification network based on 1 × 1 convolution and global average pooling. Extensive experimental results on popular benchmark datasets show that the proposed tracker achieves competitive results without using additional tracking videos for fine-tuning. The code is available at https://github.com/ZjjConan/DLST
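The parameter-reduction idea from the abstract, replacing a fully connected classification head with a 1 × 1 convolution followed by global average pooling, can be sketched in PyTorch as follows (a generic head of this kind, not the authors' exact network):

```python
import torch
import torch.nn as nn

class ConvGapHead(nn.Module):
    """Classification head with 1x1 convolution + global average pooling.

    Far fewer parameters than a fully connected layer over flattened
    features, which helps limit over-fitting to particular object states.
    """
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, num_classes, kernel_size=1)
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, feature_map: torch.Tensor) -> torch.Tensor:
        scores = self.proj(feature_map)       # (B, num_classes, H, W)
        return self.pool(scores).flatten(1)   # (B, num_classes)

# head = ConvGapHead(in_channels=512, num_classes=2)
# logits = head(torch.randn(8, 512, 7, 7))   # -> shape (8, 2)
```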
Deep Learning has recently become hugely popular in machine learning for its ability to solve end-to-end learning systems, in which the features and the classifiers are learned simultaneously, providing significant improvements in classification accuracy in the presence of highly-structured and large databases. Its success is due to a combination of recent algorithmic breakthroughs, increasingly powerful computers, and access to significant amounts of data. Researchers have also considered privacy implications of deep learning. Models are typically trained in a centralized manner with all the data being processed by the same training algorithm. If the data is a collection of users' private data, including habits, personal pictures, geographical positions, interests, and more, the centralized server will have access to sensitive information that could potentially be mishandled. To tackle this problem, collaborative deep learning models have recently been proposed where parties locally train their deep learning structures and only share a subset of the parameters in the attempt to keep their respective training sets private. Parameters can also be obfuscated via differential privacy (DP) to make information extraction even more challenging, as proposed by Shokri and Shmatikov at CCS'15. Unfortunately, we show that any privacy-preserving collaborative deep learning is susceptible to a powerful attack that we devise in this paper. In particular, we show that a distributed, federated, or decentralized deep learning approach is fundamentally broken and does not protect the training sets of honest participants. The attack we developed exploits the real-time nature of the learning process that allows the adversary to train a Generative Adversarial Network (GAN) that generates prototypical samples of the targeted training set that was meant to be private (the samples generated by the GAN are intended to come from the same distribution as the training data). Interestingly, we show that record-level differential privacy applied to the shared parameters of the model, as suggested in previous work, is ineffective (i.e., record-level DP is not designed to address our attack).
The DPI Management application, which resides on the northbound side of the SDN architecture, analyzes application signature data from the network. The data being read and analyzed are in JSON format for effective data representation, and flows provisioned from the northbound application are also in JSON format. The data analytics engine analyzes the data stored in a non-relational database and provides information about the real-time applications used by network users. It allows the operator to provision flows dynamically, using data from the network, to allow or block flows and to boost bandwidth. The DPI Management application decouples the application from the controller, so it can run in any hypervisor within the network. It can publish SNMP trap notifications to network operators about application thresholds and flow provisioning behavior, and it purges obsolete analyzed data from the non-relational database at frequent intervals.
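The abstract does not give the flow schema; purely as a hypothetical illustration of a JSON-encoded flow provisioned from a northbound application, a payload might look like this (all field names invented):

```python
import json

# Hypothetical northbound flow-provisioning payload (field names invented).
flow = {
    "flow-id": "block-p2p-42",
    "match": {"src-ip": "10.0.0.0/24", "app-signature": "bittorrent"},
    "action": "block",            # or "allow" / "boost-bandwidth"
    "priority": 500,
}
print(json.dumps(flow, indent=2))
```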
Nowadays, Internet Service Providers (ISPs) depend on Deep Packet Inspection (DPI) approaches, which are the most precise techniques for traffic identification and classification. However, constructing high-performance DPI approaches requires a vigilant and in-depth computing system design because of the demands on memory and processing power. Membership-query data structures, specifically the Bloom filter (BF), have been employed as a matching-check tool in DPI approaches: signature fingerprints are stored in the filter in order to test for the presence of these signatures in the incoming network flow. The main issue that arises when employing a Bloom filter in DPI approaches is the need to use k hash functions, which imposes additional computation overhead that degrades performance. Consequently, this paper proposes a new design and implementation for a DPI approach that uses a membership-query data structure called the Cuckoo filter (CF) as its matching-check tool. The CF has many advantages over the BF, including lower memory consumption, a lower false positive rate, higher insert performance, higher lookup throughput, and support for the delete operation. The experiments show that the proposed approach offers better performance than approaches that use a Bloom filter.
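As a minimal sketch of the partial-key cuckoo hashing a Cuckoo filter is built on (a generic textbook version, not the paper's implementation), each item is reduced to a small fingerprint that can live in one of two buckets, and either bucket index can be recovered from the other via XOR with the fingerprint's hash:

```python
import random
import hashlib

class CuckooFilter:
    """Minimal Cuckoo filter: each fingerprint has 2 candidate buckets."""
    def __init__(self, n_buckets=1024, bucket_size=4, max_kicks=500):
        self.n, self.size, self.kicks = n_buckets, bucket_size, max_kicks
        self.buckets = [[] for _ in range(n_buckets)]  # n must be a power of 2

    def _fp(self, item: bytes) -> int:
        return int.from_bytes(hashlib.sha256(item).digest()[:1], "big") or 1

    def _h(self, data: bytes) -> int:
        return int.from_bytes(hashlib.md5(data).digest()[:4], "big") % self.n

    def _indices(self, item: bytes):
        fp = self._fp(item)
        i1 = self._h(item)
        i2 = (i1 ^ self._h(bytes([fp]))) % self.n  # partial-key cuckoo hashing
        return fp, i1, i2

    def insert(self, item: bytes) -> bool:
        fp, i1, i2 = self._indices(item)
        for i in (i1, i2):
            if len(self.buckets[i]) < self.size:
                self.buckets[i].append(fp)
                return True
        i = random.choice((i1, i2))                 # evict and relocate
        for _ in range(self.kicks):
            j = random.randrange(len(self.buckets[i]))
            fp, self.buckets[i][j] = self.buckets[i][j], fp
            i = (i ^ self._h(bytes([fp]))) % self.n  # evicted fp's other bucket
            if len(self.buckets[i]) < self.size:
                self.buckets[i].append(fp)
                return True
        return False                                # filter is too full

    def contains(self, item: bytes) -> bool:
        fp, i1, i2 = self._indices(item)
        return fp in self.buckets[i1] or fp in self.buckets[i2]
```

Lookup touches exactly two buckets regardless of filter size, which is where the throughput advantage over a k-hash Bloom filter comes from; deletion (not shown) simply removes one copy of the fingerprint from either bucket.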
This paper proposes a novel recursive hashing scheme, in contrast to conventional "one-off" hashing algorithms. Inspired by the "non-salient-to-salient" path of human perception, the proposed hashing scheme generates a series of binary codes based on progressively expanded salient regions. Built on a recurrent deep network, i.e., an LSTM structure, the binary codes generated at later output nodes naturally inherit information aggregated from the previously generated codes while exploring novel information from the extended salient region, and therefore the scheme possesses a good scalability property. The proposed deep hashing network is trained by minimizing a triplet ranking loss and is end-to-end trainable. Extensive experimental results on several image retrieval benchmarks demonstrate a good performance gain over state-of-the-art image retrieval methods, as well as the scheme's scalability.
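The triplet ranking loss mentioned here (and in the following entry) has a standard form; a generic PyTorch version over relaxed hash codes, not tied to either paper's exact formulation, is:

```python
import torch
import torch.nn.functional as F

def triplet_ranking_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet ranking loss on (relaxed) hash codes.

    Pushes the anchor at least `margin` closer to the positive
    (similar image) than to the negative (dissimilar image).
    """
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(margin + d_pos - d_neg).mean()

# a, p, n: three batches of k-bit code activations, relaxed to [-1, 1]
# loss = triplet_ranking_loss(torch.tanh(a), torch.tanh(p), torch.tanh(n))
```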
Hashing has been a widely adopted technique for nearest neighbor search in large-scale image retrieval tasks. Recent research has shown that leveraging supervised information can lead to high-quality hashing. However, the cost of annotating data is often an obstacle when applying supervised hashing to a new domain. Moreover, the results can suffer from a robustness problem, as the data at the training and test stages may come from different distributions. This paper explores generating synthetic data through semi-supervised generative adversarial networks (GANs), which leverage largely unlabeled and limited labeled training data to produce highly compelling data with intrinsic invariance and global coherence, for a better understanding of the statistical structure of natural data. We demonstrate that the above two limitations can be well mitigated by applying the synthetic data to hashing. Specifically, a novel deep semantic hashing with GANs (DSH-GANs) is presented, which mainly consists of four components: a deep convolutional neural network (CNN) for learning image representations, an adversary stream to distinguish synthetic images from real ones, a hash stream for encoding image representations into hash codes, and a classification stream. The whole architecture is trained end-to-end by jointly optimizing three losses: an adversarial loss to correctly classify each sample as synthetic or real, a triplet ranking loss to preserve the relative similarity ordering in the input real-synthetic triplets, and a classification loss to classify each sample accurately. Extensive experiments conducted on both the CIFAR-10 and NUS-WIDE image benchmarks validate the capability of exploiting synthetic images for hashing. Our framework also achieves superior results compared to state-of-the-art deep hashing models.
The deep web, a hidden and encrypted network that lies beneath the surface web, has become a social hub for criminals who carry out their crimes through cyberspace, with much of this activity conducted and hosted on the deep web. This research paper is an effort to bring forth various techniques and ways in which internet users can stay safe online and protect their privacy through anonymity, and to understand how users' data and private information are phished and what the risks of sharing personal information on social media are.
Towards advancing the use of big keys as a practical defense against key exfiltration, this paper provides efficiency improvements for cryptographic schemes in the bounded retrieval model (BRM). We identify probe complexity (the number of scheme accesses to the slow storage medium storing the big key) as the dominant cost. Our main technical contribution is what we call the large-alphabet subkey prediction lemma. It gives good bounds on the predictability under leakage of a random sequence of blocks of the big key, as a function of the block size. We use it to significantly reduce the probe complexity required to attain a given level of security. Together with other techniques, this yields security-preserving performance improvements for BRM symmetric encryption schemes and BRM public-key identification schemes.
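To make "probe complexity" concrete (an illustrative model, not the paper's scheme), the sketch below derives a session subkey by probing a few randomly chosen blocks of a big key held on slow storage; the number of block reads is the cost the paper's large-alphabet subkey prediction lemma helps minimize for a target security level.

```python
import os
import hashlib

def derive_subkey(bigkey_path, block_size=4096, probes=32, nonce=b""):
    """Derive a subkey by probing a few random blocks of the big key.

    Only `probes` blocks are read from slow storage, so probe count,
    not total key size, dominates the cost of each derivation.
    """
    n_blocks = max(1, os.path.getsize(bigkey_path) // block_size)
    # Choose probe positions pseudorandomly from the public nonce.
    seed = hashlib.sha256(b"probes" + nonce).digest()
    positions = [
        int.from_bytes(hashlib.sha256(seed + i.to_bytes(4, "big")).digest()[:8],
                       "big") % n_blocks
        for i in range(probes)
    ]
    h = hashlib.sha256(nonce)
    with open(bigkey_path, "rb") as f:
        for pos in positions:
            f.seek(pos * block_size)
            h.update(f.read(block_size))   # one probe = one block read
    return h.digest()
```

The lemma's role, informally, is to bound how predictable the concatenation of probed blocks remains after partial key leakage, as a function of the block size, so fewer probes suffice for the same security.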
The Internet is becoming more and more content-oriented. CDNs (Content Distribution Networks) have become a popular architecture compatible with the current Internet, and new revolutionary paradigms such as ICN (Information Centric Networking) have been studied. One of the main components of both CDNs and ICN is in-network caching. Despite the extensive use of caches in current and future Internet architectures, analysis of the performance of general cache networks is still quite limited due to the complex interplay among various components and the resulting analytical intractability. For mathematical tractability, we consider 'static' caching policies and study their asymptotic delay performance in cache networks, focusing in particular on the impact of heterogeneous content popularities and nodes' geographical 'importances' on caching policies. Furthermore, our simulation results suggest that static policies perform quite similarly to popular 'dynamic' policies such as LFU (Least-Frequently-Used) and LRU (Least-Recently-Used). We believe that our theoretical findings provide useful engineering implications, such as when and how various factors impact caching performance.
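For reference, the dynamic LRU policy that the static policies are compared against can be stated in a few lines of Python (a textbook implementation, included only to make the baseline concrete):

```python
from collections import OrderedDict

class LRUCache:
    """Least-Recently-Used cache: evict the item untouched the longest."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None                       # cache miss
        self.store.move_to_end(key)           # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)    # evict least recently used
```

A static policy, by contrast, fixes each node's cached content up front (e.g., by popularity and node importance) and never updates it on hits or misses, which is what makes the asymptotic delay analysis tractable.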
A significant advance in magnetic field management in a fully assembled superconducting radiofrequency cryomodule has been achieved and is reported here. Demagnetization of the entire cryomodule after assembly is a crucial step toward the goal of average magnetic flux density less than 0.5 μT at the location of the superconducting radio frequency cavities. An explanation of the physics of demagnetization and experimental results are presented.