Biblio
In the wake of diverse service requirements and an increasing push for extreme efficiency, adaptability propelled by machine learning (ML), also known as self-organizing networks (SON), is emerging as an inevitable design feature for future 5G mobile networks. Implementing SON with ML as a foundation requires significant amounts of real labeled sample data for the networks to train on, and there is a strong correlation between the amount of sample data and the effectiveness of the SON algorithm. Because real labeled data is generally scarce, it can become a bottleneck that prevents ML-empowered SON from unleashing its true potential. In this work, we propose a method for expanding these sample data sets using Generative Adversarial Networks (GANs), which are based on two interconnected deep artificial neural networks. This method is an alternative to collecting more data to expand the sample set, preferred in cases where collecting more data is not simple, feasible, or efficient. We demonstrate how the method can generate large amounts of realistic synthetic data that can easily be added to the sample set, exploiting the GAN's abilities of generation and discrimination. As an example, the method is implemented with Call Data Records (CDRs) containing the start hour and the duration, in minutes, of calls taken from a real mobile operator. Results show that the method can work with a relatively small sample set and little information about the statistics of the true CDRs and still produce accurate synthetic ones.
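As a rough illustration of the idea (not the authors' exact architecture), the sketch below trains a minimal GAN over two-feature CDR vectors: the generator maps noise to (start hour, duration) pairs and the discriminator learns to tell real records from synthetic ones. Layer sizes, the Adam settings, and the [0, 1] feature scaling are all assumptions.

```python
import torch
import torch.nn as nn

# Each CDR sample is a 2-D vector (start hour, duration in minutes),
# assumed to be scaled to [0, 1] before training.
LATENT_DIM = 16

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 2), nn.Sigmoid(),  # synthetic (hour, duration)
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, 64), nn.LeakyReLU(0.2),
            nn.Linear(64, 1),  # real/fake logit
        )

    def forward(self, x):
        return self.net(x)

def train(real_cdrs, epochs=200, batch_size=64):
    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()
    loader = torch.utils.data.DataLoader(real_cdrs, batch_size=batch_size,
                                         shuffle=True)
    for _ in range(epochs):
        for real in loader:
            n = real.size(0)
            fake = G(torch.randn(n, LATENT_DIM))
            # Discriminator step: push real records toward 1, fakes toward 0.
            loss_d = (bce(D(real), torch.ones(n, 1)) +
                      bce(D(fake.detach()), torch.zeros(n, 1)))
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()
            # Generator step: fool the discriminator.
            loss_g = bce(D(fake), torch.ones(n, 1))
            opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return G  # sample synthetic CDRs with G(torch.randn(n, LATENT_DIM))
```

Once trained, the generator can expand a small sample set arbitrarily, which is the augmentation step the abstract describes.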
Hyperspectral images (HSIs) offer abundant spectral information but limited labeled samples, which motivates semi-supervised spectral-based classification methods. How the spectral information is exploited is critical to classification accuracy. In this paper, we propose a novel semi-supervised method based on a generative adversarial network (GAN) with a folded spectrum (FS-GAN). Specifically, the original spectral vector is folded into a 2D square spectrum as the input to the GAN, which can generate spectral texture and provides a larger receptive field over both adjacent and non-adjacent spectral bands for deep feature extraction. The generated fake folded spectra and the labeled and unlabeled real folded spectra are then fed to the discriminator for semi-supervised learning. A feature matching strategy is applied to prevent model collapse. Extensive experimental comparisons demonstrate the effectiveness of the proposed method.
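The folding step can be pictured with a few lines of NumPy: a 1-D spectral vector is zero-padded to the next perfect square and reshaped row by row into a 2-D "folded spectrum". The row-major folding order and zero padding are assumptions; the abstract only states that the vector is folded into a square.

```python
import numpy as np

def fold_spectrum(spectrum):
    """Fold a 1-D spectral vector into a 2-D square "folded spectrum"."""
    n = spectrum.shape[0]
    side = int(np.ceil(np.sqrt(n)))
    padded = np.zeros(side * side, dtype=spectrum.dtype)
    padded[:n] = spectrum          # zero-pad up to the next perfect square
    return padded.reshape(side, side)

# A 200-band pixel becomes a 15x15 "image" (with 25 zero-padded entries),
# so a 2-D convolution now covers adjacent bands along rows and
# non-adjacent bands along columns.
pixel = np.random.rand(200).astype(np.float32)
print(fold_spectrum(pixel).shape)  # (15, 15)
```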
Person re-identification (Person Re-ID) means that images of a pedestrian captured by cameras in a surveillance network can be automatically retrieved based on an image of the same pedestrian from another camera. The appearance change of pedestrians under different cameras poses a huge challenge to person re-identification. Person re-identification systems based on deep learning can effectively extract the appearance features of pedestrians. In this paper, a feature enhancement experiment is conducted, and the results show that current person re-identification datasets are relatively small and cannot fully meet the needs of deep training. Therefore, this paper studies the use of a generative adversarial network to extend person re-identification datasets and proposes a label smoothing regularization for outliers with weight (LSROW) algorithm to make full use of the generated data, effectively improving the accuracy of person re-identification.
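The abstract does not spell out the LSROW formula, but it extends LSRO (label smoothing regularization for outliers), which trains on GAN-generated images by assigning them a uniform label distribution. A plausible weighted variant is sketched below; the scalar weight w and the exact form are assumptions.

```python
import torch
import torch.nn.functional as F

def lsrow_loss(logits, labels, is_generated, w=0.2):
    """LSRO-style loss with an assumed weight w on the generated images.

    Real images use standard cross-entropy on their identity labels;
    generated images (outliers) are pushed toward a uniform distribution
    over the K identities, down-weighted by w. Assumes the batch contains
    both real and generated samples.
    """
    log_p = F.log_softmax(logits, dim=1)
    real_loss = F.nll_loss(log_p[~is_generated], labels[~is_generated])
    # Uniform-target cross-entropy: (1/K) * sum_k -log p_k per sample.
    uniform_loss = -log_p[is_generated].mean()
    return real_loss + w * uniform_loss
```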
Deep learning has been successfully applied to ordinary image super-resolution (SR). However, since synthetic aperture radar (SAR) images are often disturbed by multiplicative noise known as speckle and are more blurry than ordinary images, there are few deep learning methods for SAR image SR. In this paper, a deep generative adversarial network (DGAN) is proposed to reconstruct pseudo high-resolution (HR) SAR images. First, a generator network is constructed to remove the noise of the low-resolution SAR image and generate the HR SAR image. Second, a discriminator network is used to differentiate between the pseudo super-resolution images and realistic HR images. An adversarial objective function is introduced to make the pseudo HR SAR images closer to real SAR images. The experimental results show that our method can maintain SAR image content while strongly suppressing noise. Performance evaluation based on the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) shows the superiority of the proposed method over conventional CNN baselines.
Super-resolution (SR) of hyperspectral images (HSIs) aims to enhance the spatial/spectral resolution of hyperspectral imagery, and the super-resolved results will benefit many remote sensing applications. A generative adversarial network for HSI super-resolution (HSRGAN) is proposed in this paper. Specifically, HSRGAN constructs spectral and spatial blocks with residual networks in the generator to effectively learn spectral and spatial features from HSIs. Furthermore, a new loss function that combines pixel-wise loss and adversarial loss is designed to guide the generator to recover images that approximate the original HSIs with finer texture details. Quantitative and qualitative results demonstrate that the proposed HSRGAN is superior to state-of-the-art methods such as SRCNN and SRGAN for HSI spatial SR.
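The combined objective described here follows the standard pixel-plus-adversarial recipe. A minimal sketch of the generator's loss, with an assumed L1 pixel term and weighting lambda_adv (the paper's exact choices are not given in the abstract):

```python
import torch
import torch.nn.functional as F

def generator_loss(sr, hr, d_fake_logits, lambda_adv=1e-3):
    """Pixel-wise loss plus adversarial loss for an SR generator.

    sr: super-resolved HSI batch; hr: reference high-resolution batch;
    d_fake_logits: discriminator logits on sr. The L1 choice and the
    weight lambda_adv are assumptions, not the paper's exact values.
    """
    pixel_loss = F.l1_loss(sr, hr)
    adv_loss = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))  # "look real"
    return pixel_loss + lambda_adv * adv_loss
```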
In this paper, we present a semi-supervised remote sensing change detection method based on a graph model with Generative Adversarial Networks (GANs). First, the multi-temporal remote sensing change detection problem is recast as a semi-supervised learning problem on a graph containing a majority of unlabeled nodes and a few labeled nodes. Then, GANs are adopted to generate samples in a competitive manner and help improve classification accuracy. Finally, a binary change map is produced by assigning the unlabeled nodes to a class with the help of both the labeled and unlabeled nodes on the graph. Experiments carried out on several very high resolution remote sensing image data sets demonstrate the effectiveness of our method.
In today's society, even though technology is highly developed, the coloring of computer images has remained a manual process. As a carrier of human culture and art, film has existed for over a hundred years. With the development of science and technology, movies have progressed from the simple black-and-white film era to the current digital age. Coloring old movies is a very complicated process. Aside from traditional hand-painting techniques, the most common method is to use post-processing software to color movie frames. This kind of operation requires extraordinary skill, patience, and aesthetic judgment, which is a great test for the operator. In recent years, the extensive use of machine learning and neural networks has made it possible for computers to process images intelligently. Since 2016, various generative adversarial network models have been proposed, enabling deep learning to shine in the fields of image style transfer, image coloring, and image style change. In this work, the experiment uses the generative adversarial network principle to process pictures and videos and realize the automatic rendering of old documentary films.
The goal of a content-based recommendation system is to retrieve and rank the list of items that are closest to the query item. Today, almost every e-commerce platform has a recommendation strategy for products that customers may decide to buy. In this paper, we describe our work on creating a Generative Adversarial Network based image retrieval system for e-commerce platforms that retrieves the most similar images for a given product image, specifically for shoes. We compare state-of-the-art solutions and provide results for the proposed deep learning network on a standard data set.
Semi-supervised learning has recently gained increasing attention because it can combine abundant unlabeled data with carefully labeled data to train deep neural networks. However, common semi-supervised methods rely heavily on the quality of pseudo labels. In this paper, we propose a new semi-supervised learning method based on the Generative Adversarial Network (GAN) that uses the discriminator to learn features from both labeled and unlabeled data, instead of generating pseudo labels that cannot all be correct. Our approach, the semi-supervised conditional GAN (SCGAN), builds upon the conditional GAN model, extending it to semi-supervised learning by changing the discriminator's output to a classification output and a real-or-fake output. We evaluate our approach against a basic semi-supervised model on the MNIST dataset. With labeled data comprising 1/600 of all data, our approach achieves a classification accuracy of 84.15%, outperforming the basic semi-supervised model's 72.94%.
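A two-headed discriminator of the kind described can be sketched as follows for 28x28 MNIST inputs; the convolutional backbone is an assumption, and only the two output heads come from the abstract.

```python
import torch
import torch.nn as nn

class SCGANDiscriminator(nn.Module):
    """Discriminator with a K-way classification head and a real/fake head."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
        )
        self.class_head = nn.Linear(64 * 7 * 7, num_classes)  # digit label
        self.adv_head = nn.Linear(64 * 7 * 7, 1)              # real vs. fake

    def forward(self, x):
        h = self.features(x)
        return self.class_head(h), self.adv_head(h)

# Labeled real images train both heads (cross-entropy + BCE); unlabeled
# real images and generated images train only the adversarial head, which
# is how unlabeled data contributes without pseudo labels.
```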
Phishing is typically deployed as an attack vector in the initial stages of a hacking endeavour. Due to its low-risk, high-reward nature, it has seen widespread adoption, and detecting it has become a challenge in recent times. This paper proposes a novel means of detecting phishing websites using a Generative Adversarial Network. Taking into account the internal structure and external metadata of a website, the proposed approach uses a generator network that generates both legitimate and synthetic phishing features to train a discriminator network. The latter then determines whether the features belong to normal or phishing websites, before improving its detection accuracy based on the classification error. The proposed approach is evaluated on two different phishing datasets and is found to achieve a detection accuracy of up to 94%.
Cyber-Physical Systems (CPS) are growing in complexity and functionality. Multidisciplinary interactions with physical systems are key to CPS. However, sensors, actuators, controllers, and wireless communications are prone to attacks that compromise the system. Machine learning models have been utilized in automotive controllers to learn, estimate, and provide the required intelligence in the control process. However, their estimates are also vulnerable to attacks from the physical or cyber domains, and they have shown unreliable predictions against unknown biases resulting from the modeling. In this paper, we propose a novel control design using conditional generative adversarial networks that enables a self-secured controller to capture the normal behavior of the control loop and the physical system, detect anomalies, and recover from them. We evaluated our control design on a self-secured battery management system (BMS) by driving a Nissan Leaf S on standard driving cycles while under various attacks. Compared to the state of the art, the self-secured BMS detected attacks with 83% accuracy and an average recovery estimation error of 21%, improvements of 28% and 8%, respectively.
Classifying hyperspectral images (HSIs) with few training samples is a challenging problem, and generative adversarial networks (GANs) are promising techniques to address it. A GAN constructs an adversarial game between a discriminator and a generator: the generator generates samples that the discriminator cannot distinguish, and the discriminator determines whether or not a sample is composed of real data. In this paper, by introducing multilayer feature fusion in the GAN and a dynamic neighborhood voting mechanism, we propose a novel algorithm for HSI classification based on a 1-D GAN. By extracting and fusing features from multiple layers of the discriminator and using a few labeled samples, we fine-tune a 1-D CNN spectral classifier for HSIs. To further improve classification accuracy, we propose a dynamic neighborhood voting mechanism to classify HSIs with spatial features. The obtained results show that the proposed models provide competitive results compared to state-of-the-art methods.
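The abstract does not define the dynamic rule, so the sketch below shows only the basic neighborhood voting idea with a fixed radius: each pixel's predicted label is replaced by the majority label in its spatial window, injecting spatial context into the per-pixel spectral predictions.

```python
import numpy as np

def neighborhood_vote(pred_map, i, j, radius=1):
    """Majority vote over the window around pixel (i, j).

    pred_map holds integer class labels from the spectral classifier.
    A dynamic variant could grow or shrink the radius per pixel; the
    fixed radius here is an assumption.
    """
    h, w = pred_map.shape
    window = pred_map[max(0, i - radius):min(h, i + radius + 1),
                      max(0, j - radius):min(w, j + radius + 1)]
    return np.bincount(window.ravel()).argmax()
```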
The increasing number of malware variants seen in the wild is causing problems for antivirus software vendors, who are unable to keep up with creating signatures for each. The methods used to develop a signature, static and dynamic analysis, have various limitations. Machine learning has been used by antivirus vendors to detect malware based on the information gathered from the analysis process. However, adversarial examples can cause machine learning algorithms to misclassify new data. In this paper, we describe a method for malware analysis that converts malware binaries to images and then prepares those images for training within a Generative Adversarial Network. These unsupervised deep neural networks are not susceptible to adversarial examples. Converting malware binaries to images should be faster than dynamic analysis, and it would still be possible to link malware families together. Using the Generative Adversarial Network, malware detection could be much more effective and reliable.
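Binary-to-image conversion is typically the byte-plot approach: each byte becomes one grayscale pixel and the byte stream is wrapped at a fixed width. A minimal sketch follows; the width of 256 and the file name are illustrative assumptions, as the abstract does not state the paper's policy.

```python
import numpy as np
from PIL import Image

def binary_to_image(path, width=256):
    """Render a malware binary as a grayscale image, one byte per pixel."""
    data = np.fromfile(path, dtype=np.uint8)
    height = len(data) // width
    pixels = data[: height * width].reshape(height, width)  # drop tail bytes
    return Image.fromarray(pixels, mode="L")

# binary_to_image("sample.exe").save("sample.png")  # hypothetical file
# Variants of the same family tend to produce visually similar textures,
# which is what lets the images link malware families together.
```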
The problem of network representation learning, also known as network embedding, arises in many machine learning tasks assuming that there exist a small number of variabilities in the vertex representations which can capture the "semantics" of the original network structure. Most existing network embedding models, with shallow or deep architectures, learn vertex representations from sampled vertex sequences such that the low-dimensional embeddings preserve the locality property and/or global reconstruction capability. The resultant representations, however, generalize poorly due to the intrinsic sparsity of sequences sampled from the input network. As such, an ideal approach to the problem is to generate vertex representations by learning a probability density function over the sampled sequences. However, in many cases, such a distribution on a low-dimensional manifold may not have an analytic form. In this study, we propose to learn network representations with adversarially regularized autoencoders (NetRA). NetRA learns smoothly regularized vertex representations that capture the network structure well by jointly considering both locality-preserving and global reconstruction constraints. The joint inference is encapsulated in a generative adversarial training process that circumvents the requirement of an explicit prior distribution and thus obtains better generalization performance. We demonstrate empirically how well key properties of the network structure are captured and the effectiveness of NetRA on a variety of tasks, including network reconstruction, link prediction, and multi-label classification.
Studying human brain signals has always attracted great attention from the scientific community. In Brain Computer Interface (BCI) research, for example, changes in brain signals related to specific tasks (e.g., thinking about something) are detected and used to control machines. While extracting spatio-temporal cues from brain signals for classifying the state of the human mind is an explored path, decoding and visualizing brain states is new and futuristic. Following this latter direction, in this paper, we propose an approach that is able not only to read the mind, but also to decode and visualize human thoughts. More specifically, we analyze the brain activity, recorded by an electroencephalogram (EEG), of a subject while thinking about a digit, character, or object, and visually synthesize the thought item. To accomplish this, we leverage recent progress in adversarial learning by devising a conditional Generative Adversarial Network (GAN) that takes encoded EEG signals as input and generates corresponding images. In addition, since collecting large amounts of EEG data is not trivial, our GAN model allows for learning distributions with limited training data. Performance analysis carried out on three different datasets (brain signals of multiple subjects thinking about digits, characters, and objects) shows that our approach is able to effectively generate images from a person's thoughts. It also demonstrates that EEG signals explicitly encode cues from thoughts which can be effectively used for generating semantically relevant visualizations.
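Conditioning a generator on an EEG encoding usually means concatenating the encoded signal with a noise vector before upsampling. A sketch follows; all dimensions, layers, and the hypothetical eeg_encoder are assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class EEGConditionedGenerator(nn.Module):
    """Conditional generator: EEG code + noise -> 32x32 RGB image."""

    def __init__(self, eeg_dim=128, noise_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(eeg_dim + noise_dim, 256 * 4 * 4), nn.ReLU(),
            nn.Unflatten(1, (256, 4, 4)),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, eeg_code, z):
        # The EEG embedding ties image content to the decoded thought;
        # the noise vector adds variation between samples.
        return self.net(torch.cat([eeg_code, z], dim=1))

# g = EEGConditionedGenerator()
# img = g(eeg_encoder(signal), torch.randn(1, 100))  # eeg_encoder: hypothetical
```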
Generating multi-view images with realistic appearance from only a single view input is a challenging problem. In this paper, we attack this problem by proposing a novel image generation model termed VariGANs, which combines the merits of variational inference and Generative Adversarial Networks (GANs). It generates the target image in a coarse-to-fine manner instead of in a single pass, which suffers from severe artifacts. It first performs variational inference to model the global appearance of the object (e.g., shape and color) and produces coarse images of different views. Conditioned on the generated coarse images, it then performs adversarial learning to fill in details consistent with the input and generate the fine images. Extensive experiments conducted on two clothing datasets, MVC and DeepFashion, demonstrate that the images generated by the proposed VariGANs are more plausible than those generated by existing approaches, with more consistent global appearance as well as richer and sharper details.
The security of image steganography is an important basis for evaluating steganography algorithms. Steganography has recently made great progress in its long-term confrontation with steganalysis. To be secure, image steganography must be able to resist detection by steganalysis algorithms. Traditional embedding-based steganography embeds the secret information into the content of an image, which unavoidably leaves a trace of the modification that can be detected by increasingly advanced machine-learning-based steganalysis algorithms. The concept of steganography without embedding (SWE), which does not need to modify the data of the carrier image, emerged to evade detection by machine-learning-based steganalysis algorithms. In this paper, we propose a novel image SWE method based on deep convolutional generative adversarial networks. We map the secret information into a noise vector and use the trained generator neural network model to generate the carrier image from the noise vector. No modification or embedding operations are required during image generation, and the information contained in the image can be extracted successfully by another neural network, called the extractor, after training. The experimental results show that this method offers highly accurate information extraction and a strong ability to resist detection by state-of-the-art image steganalysis algorithms.
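The core trick, mapping secret bits to a noise vector that the extractor can later recover, can be sketched in a few lines. Here each bit fixes the sign of one noise coordinate while the magnitude stays random; the sign scheme and the margin delta are assumptions, since the abstract does not give the exact mapping.

```python
import numpy as np

def bits_to_noise(bits, delta=0.5):
    """Encode secret bits as the signs of a GAN noise vector.

    Magnitudes are sampled away from zero (>= delta) so the extractor can
    still recover the signs despite generation and estimation noise.
    """
    signs = np.where(np.asarray(bits) == 1, 1.0, -1.0)
    mags = np.random.uniform(delta, 1.0, size=len(bits))
    return signs * mags  # feed this vector to the trained generator

def noise_to_bits(recovered_noise):
    """Read the bits back from the extractor's estimate of the noise."""
    return (np.asarray(recovered_noise) > 0).astype(int)
```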
Large-scale mobile traffic analytics is becoming essential to digital infrastructure provisioning, public transportation, event planning, and other domains. Monitoring city-wide mobile traffic is, however, a complex and costly process that relies on dedicated probes. Some of these probes have limited precision or coverage; others gather tens of gigabytes of logs daily, which independently offer limited insights. Extracting fine-grained patterns involves expensive spatial aggregation of measurements, storage, and post-processing. In this paper, we propose a mobile traffic super-resolution technique that overcomes these problems by inferring narrowly localised traffic consumption from coarse measurements. We draw inspiration from image processing and design a deep-learning architecture tailored to mobile networking, which combines Zipper Network (ZipNet) and Generative Adversarial neural Network (GAN) models. This uniquely captures spatio-temporal relations between traffic volume snapshots routinely monitored over broad coverage areas ('low-resolution') and the corresponding consumption at the 0.05 km² level ('high-resolution'), which is usually obtained only after intensive computation. Experiments we conduct with a real-world data set demonstrate that the proposed ZipNet(-GAN) infers traffic consumption with remarkable accuracy and up to 100× higher granularity than standard probing, while outperforming existing data interpolation techniques. To our knowledge, this is the first time super-resolution concepts have been applied to large-scale mobile traffic analysis, and our solution is the first to infer fine-grained urban traffic patterns from coarse aggregates.
Cross-modal audio-visual perception has been a long-lasting topic in psychology and neurology, and various studies have discovered strong correlations in human perception of auditory and visual stimuli. Despite work on computational multimodal modeling, the problem of cross-modal audio-visual generation has not been systematically studied in the literature. In this paper, we make the first attempt to solve this cross-modal generation problem leveraging the power of deep generative adversarial training. Specifically, we use conditional generative adversarial networks to achieve cross-modal audio-visual generation of musical performances. We explore different encoding methods for audio and visual signals, and work on two scenarios: instrument-oriented generation and pose-oriented generation. Being the first to explore this new problem, we compose two new datasets with pairs of images and sounds of musical performances of different instruments. Our experiments using both classification and human evaluation demonstrate that our model has the ability to generate one modality, i.e., audio/visual, from the other modality, i.e., visual/audio, to a good extent. Our experiments on various design choices along with the datasets will facilitate future research in this new problem space.
This work describes how automated data generation integrates into a big data pipeline. A lack of veracity in big data can produce models that are inaccurate or biased by trends in the training data, which can lead to issues that are difficult to overcome as a pipeline matures. This work describes the use of a Generative Adversarial Network to generate sketch data, such as might be used in a human verification task. The generated sketches are verified as recognizable using a crowd-sourcing methodology, which finds that the generated sketches were correctly recognized 43.8% of the time, in contrast to human-drawn sketches, which were recognized 87.7% of the time. This method is scalable and can be used to generate realistic data in many domains and to bootstrap a dataset for training a model prior to deployment.
Many malware families utilize domain generation algorithms (DGAs) to establish command and control (C&C) connections. While there are many methods to pseudorandomly generate domains, in this paper we focus on detecting (and generating) domains on a per-domain basis, which provides a simple and flexible means to detect known DGA families. Recent machine learning approaches to DGA detection have been successful on fairly simplistic DGAs, many of which produce names of fixed length. However, models trained on limited datasets are somewhat blind to new DGA variants. In this paper, we leverage the concept of generative adversarial networks to construct a deep learning based DGA that is designed to intentionally bypass a deep learning based detector. In a series of adversarial rounds, the generator learns to generate domain names that are increasingly more difficult to detect. In turn, a detector model updates its parameters to compensate for the adversarially generated domains. We test the hypothesis that adversarially generated domains can be used to augment training sets and thereby harden other machine learning models against yet-to-be-observed DGAs. We detail solutions to several challenges in training this character-based generative adversarial network. In particular, our deep learning architecture begins as a domain name auto-encoder (encoder + decoder) trained on domains in the Alexa one million. The encoder and decoder are then reassembled competitively in a generative adversarial network (detector + generator), with novel neural architectures and training strategies to improve convergence.
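The reassembly step can be pictured as follows: an LSTM autoencoder is first trained to reconstruct benign domain names, and its halves are then reused adversarially, the decoder as generator and the encoder as the detector's feature extractor. All layer choices and the 38-character vocabulary below are assumptions.

```python
import torch
import torch.nn as nn

VOCAB = 38  # assumed charset: a-z, 0-9, '-', '.'

class Encoder(nn.Module):
    def __init__(self, emb=32, hid=128):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, emb)
        self.rnn = nn.LSTM(emb, hid, batch_first=True)

    def forward(self, char_ids):              # (batch, seq_len) int tensor
        _, (h, _) = self.rnn(self.emb(char_ids))
        return h[-1]                          # fixed-size code per domain

class Decoder(nn.Module):
    def __init__(self, hid=128, max_len=20):
        super().__init__()
        self.max_len = max_len
        self.rnn = nn.LSTM(hid, hid, batch_first=True)
        self.out = nn.Linear(hid, VOCAB)

    def forward(self, code):                  # (batch, hid) domain code
        x = code.unsqueeze(1).repeat(1, self.max_len, 1)
        h, _ = self.rnn(x)
        return self.out(h)                    # per-position char logits

# Phase 1: train Encoder + Decoder to reconstruct Alexa domains.
# Phase 2: reassemble adversarially -- the Decoder (fed random codes)
# becomes the generator; the Encoder plus a linear head becomes the
# detector, and the two are trained in alternating adversarial rounds.
```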