Biblio

Filters: Keyword is Super-resolution
2020-06-12
Gu, Feng, Zhang, Hong, Wang, Chao, Wu, Fan.  2019.  SAR Image Super-Resolution Based on Noise-Free Generative Adversarial Network. IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium. :2575–2578.

Deep learning has been successfully applied to ordinary image super-resolution (SR). However, since synthetic aperture radar (SAR) images are often disturbed by multiplicative noise known as speckle and are blurrier than ordinary images, there are few deep learning methods for SAR image SR. In this paper, a deep generative adversarial network (DGAN) is proposed to reconstruct pseudo high-resolution (HR) SAR images. First, a generator network is constructed to remove the noise of the low-resolution SAR image and generate the HR SAR image. Second, a discriminator network is used to differentiate between the pseudo super-resolution images and the real HR images. The adversarial objective function is introduced to make the pseudo HR SAR images closer to real SAR images. The experimental results show that our method can maintain the SAR image content with high-level noise suppression. The performance evaluation based on the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) shows the superiority of the proposed method over conventional CNN baselines.
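The evaluation above uses the peak signal-to-noise ratio. As a reference point, here is a minimal sketch of the standard PSNR definition (10·log₁₀(MAX²/MSE)); this is not the authors' code, and the flat-list image representation is a simplification for illustration.

```python
import math

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized images,
    represented here as flat lists of pixel intensities."""
    assert len(reference) == len(reconstructed)
    mse = sum((r - x) ** 2 for r, x in zip(reference, reconstructed)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images: no noise at all
    return 10.0 * math.log10(max_val ** 2 / mse)

# Every pixel off by 1 -> MSE = 1, so PSNR = 10*log10(255^2) ≈ 48.13 dB.
print(round(psnr([10, 20, 30, 40], [11, 19, 31, 39]), 2))  # 48.13
```

Higher PSNR means the reconstruction is closer to the reference; speckle suppression in the generator directly lowers the MSE term.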

Jiang, Ruituo, Li, Xu, Gao, Ang, Li, Lixin, Meng, Hongying, Yue, Shigang, Zhang, Lei.  2019.  Learning Spectral and Spatial Features Based on Generative Adversarial Network for Hyperspectral Image Super-Resolution. IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium. :3161–3164.

Super-resolution (SR) of hyperspectral images (HSIs) aims to enhance the spatial/spectral resolution of hyperspectral imagery, and the super-resolved results will benefit many remote sensing applications. A generative adversarial network for HSI super-resolution (HSRGAN) is proposed in this paper. Specifically, HSRGAN constructs spectral and spatial blocks with residual networks in the generator to effectively learn spectral and spatial features from HSIs. Furthermore, a new loss function which combines pixel-wise loss and adversarial loss is designed to guide the generator to recover images approximating the original HSIs with finer texture details. Quantitative and qualitative results demonstrate that the proposed HSRGAN is superior to state-of-the-art methods such as SRCNN and SRGAN for HSI spatial SR.
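The combined objective described above can be sketched as a pixel-wise term plus a weighted adversarial term. The weight `lam` and the non-saturating −log D(G(x)) form below are common GAN conventions, not details taken from this paper:

```python
import math

def generator_loss(pixels_pred, pixels_true, disc_scores, lam=1e-3):
    """Sketch of a combined SR generator objective:
    pixel-wise MSE plus a non-saturating adversarial term -log D(G(x)),
    weighted by a hypothetical coefficient lam."""
    n = len(pixels_pred)
    pixel_loss = sum((p - t) ** 2 for p, t in zip(pixels_pred, pixels_true)) / n
    # disc_scores are the discriminator's probabilities that generated
    # images are real; pushing them toward 1 minimizes this term.
    adv_loss = -sum(math.log(s) for s in disc_scores) / len(disc_scores)
    return pixel_loss + lam * adv_loss
```

The pixel term keeps the output faithful to the original HSI; the adversarial term pushes it toward the manifold of realistic images, which is what yields the finer texture detail.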

2018-11-19
Zhang, Chaoyun, Ouyang, Xi, Patras, Paul.  2017.  ZipNet-GAN: Inferring Fine-Grained Mobile Traffic Patterns via a Generative Adversarial Neural Network. Proceedings of the 13th International Conference on Emerging Networking EXperiments and Technologies. :363–375.

Large-scale mobile traffic analytics is becoming essential to digital infrastructure provisioning, public transportation, events planning, and other domains. Monitoring city-wide mobile traffic is however a complex and costly process that relies on dedicated probes. Some of these probes have limited precision or coverage; others gather tens of gigabytes of logs daily, which independently offer limited insights. Extracting fine-grained patterns involves expensive spatial aggregation of measurements, storage, and post-processing. In this paper, we propose a mobile traffic super-resolution technique that overcomes these problems by inferring narrowly localised traffic consumption from coarse measurements. We draw inspiration from image processing and design a deep-learning architecture tailored to mobile networking, which combines Zipper Network (ZipNet) and Generative Adversarial neural Network (GAN) models. This enables us to uniquely capture spatio-temporal relations between traffic volume snapshots routinely monitored over broad coverage areas ('low-resolution') and the corresponding consumption at 0.05 km² level ('high-resolution') usually obtained after intensive computation. Experiments we conduct with a real-world data set demonstrate that the proposed ZipNet(-GAN) infers traffic consumption with remarkable accuracy and up to 100× higher granularity as compared to standard probing, while outperforming existing data interpolation techniques. To our knowledge, this is the first time super-resolution concepts are applied to large-scale mobile traffic analysis, and our solution is the first to infer fine-grained urban traffic patterns from coarse aggregates.
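The "low-resolution" measurements the paper starts from are spatial aggregates of a fine-grained traffic grid. A toy stand-in for that aggregation step (the forward process the network learns to invert) is block averaging; the function name and the assumption that `factor` evenly divides the grid dimensions are ours, not the paper's:

```python
def downsample(grid, factor):
    """Aggregate a fine-grained traffic snapshot (2-D list of per-cell
    volumes) into coarse cells by averaging factor x factor blocks.
    Assumes factor divides both grid dimensions."""
    h, w = len(grid), len(grid[0])
    coarse = []
    for i in range(0, h, factor):
        row = []
        for j in range(0, w, factor):
            block = [grid[i + di][j + dj]
                     for di in range(factor) for dj in range(factor)]
            row.append(sum(block) / len(block))
        coarse.append(row)
    return coarse

fine = [[1, 1, 2, 2],
        [1, 1, 2, 2],
        [3, 3, 4, 4],
        [3, 3, 4, 4]]
print(downsample(fine, 2))  # [[1.0, 2.0], [3.0, 4.0]]
```

Super-resolution is the ill-posed inverse of this map: many fine grids produce the same coarse grid, which is why ZipNet-GAN must learn spatio-temporal structure rather than interpolate.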

2017-09-27
Zheng, Huanhuan, Qu, Yanyun, Zeng, Kun.  2016.  Coupled Autoencoder Network with Joint Regularizations for Image Super-resolution. Proceedings of the International Conference on Internet Multimedia Computing and Service. :114–117.

This paper aims at building a sparse deep autoencoder network with joint regularizations for image super-resolution. A map is learned from the low-resolution feature space to the high-resolution feature space. In the training stage, two autoencoder networks are built for image representation, one for low-resolution images and one for their high-resolution counterparts. A neural network is constructed to learn a map between the features of low-resolution images and high-resolution images. Furthermore, owing to the local smoothness and the redundancy of an image, joint variation regularizations are unified with the coupled autoencoder network (CAN): steerable kernel variation regularization for local smoothness, and non-local variation regularization for redundancy. The joint regularizations improve the quality of the super-resolution image. Experimental results on Set5 demonstrate the effectiveness of our proposed method.

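The coupled training described above can be read as minimizing three terms: reconstruction error for each autoencoder, plus a term tying the mapped LR code to the HR code. The sketch below is our reading of that structure, not the authors' code; the weight `gamma` and all argument names are hypothetical:

```python
def can_objective(z_lr, z_hr, rec_lr, rec_hr, x_lr, x_hr, map_fn, gamma=0.1):
    """Sketch of a coupled-autoencoder objective:
    - rec_lr/rec_hr: autoencoder reconstructions of inputs x_lr/x_hr
    - z_lr/z_hr: the two latent codes
    - map_fn: the learned LR-code -> HR-code mapping network
    gamma weights the coupling term (illustrative value)."""
    mse = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)
    mapped = map_fn(z_lr)
    return mse(rec_lr, x_lr) + mse(rec_hr, x_hr) + gamma * mse(mapped, z_hr)
```

At a perfect optimum all three terms vanish: each autoencoder reconstructs its input exactly, and the map sends the LR code onto the HR code, which is the property exploited at test time to synthesize HR features from an LR input.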
2017-03-08
Liu, Weijian, Chen, Zeqi, Chen, Yunhua, Yao, Ruohe.  2015.  An ℓ1/2-BTV regularization algorithm for super-resolution. 2015 4th International Conference on Computer Science and Network Technology (ICCSNT). 01:1274–1281.

In this paper, we propose a novel regularization term for super-resolution by combining a bilateral total variation (BTV) regularizer and a sparsity prior model on the image. The term is composed of the weighted least-squares minimization and the bilateral filter proposed by Elad, but adds an ℓ1/2 regularizer. It is referred to as ℓ1/2-BTV. The proposed algorithm restores image details more precisely and eliminates image noise more effectively by introducing the sparsity of the ℓ1/2 regularizer into the traditional bilateral total variation (BTV) regularizer. Experiments were conducted on both simulated and real image sequences. The results show that the proposed algorithm generates high-resolution images of better quality, as measured by both de-noising and edge-preservation metrics, than other methods.
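The classical BTV prior sums weighted ℓ1 norms of differences between the image and its spatial shifts; the ℓ1/2 variant described above replaces each absolute difference with its square root, which is more strongly sparsity-inducing. The sketch below follows the usual BTV convention (shift radius P, decay α); the specific values are illustrative, and boundaries are handled by simply skipping out-of-range pixels:

```python
def l_half_btv(img, P=1, alpha=0.7):
    """Sketch of an l1/2-BTV prior on a 2-D image (list of lists):
    sum over shifts (l, m) within radius P of
    alpha^(|l|+|m|) * sum |x[i][j] - x[i+l][j+m]|^(1/2)."""
    h, w = len(img), len(img[0])
    total = 0.0
    for l in range(-P, P + 1):
        for m in range(-P, P + 1):
            if l == 0 and m == 0:
                continue  # zero shift contributes nothing
            weight = alpha ** (abs(l) + abs(m))
            for i in range(h):
                for j in range(w):
                    ii, jj = i + l, j + m
                    if 0 <= ii < h and 0 <= jj < w:
                        total += weight * abs(img[i][j] - img[ii][jj]) ** 0.5
    return total
```

A constant image scores zero, while edges contribute weighted terms; because |d|^(1/2) grows sub-linearly, many small noise-driven differences are penalized relatively more than a few large edge differences, which is the de-noising/edge-preservation trade-off the paper targets.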