Biblio

Filters: Keyword is residual network
2020-11-09
Zhang, T., Wang, R., Ding, J., Li, X., Li, B.  2018.  Face Recognition Based on Densely Connected Convolutional Networks. 2018 IEEE Fourth International Conference on Multimedia Big Data (BigMM). :1–6.
Face recognition methods based on convolutional neural networks have achieved great success. Existing models usually use the residual network as the core architecture. The residual network is good at reusing features but has difficulty exploring new ones, whereas a densely connected network can be used to explore new features. We propose a face recognition model named Dense Face to explore the performance of densely connected networks in face recognition. The model is based on a densely connected convolutional neural network and is composed of Dense Block layers, transition layers and a classification layer. It is trained with the joint supervision of center loss and softmax loss through feature normalization, which enables the convolutional neural network to learn more discriminative features. The Dense Face model was trained on the publicly available CASIA-WebFace dataset and tested on the LFW and CAS-PEAL-R1 datasets. Experimental results show that the densely connected convolutional neural network achieves higher face verification accuracy and better robustness than other models such as VGG Face and ResNet.
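As a rough illustration of the architecture and training objective this abstract describes, the sketch below (PyTorch, assumed; not the authors' implementation) shows a dense block that concatenates feature maps, in contrast to the addition used in residual blocks, together with a joint softmax plus center-loss criterion applied to L2-normalized features. The embedding size, identity count and loss weight are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DenseBlock(nn.Module):
        # Dense connectivity: each layer receives the concatenation of all earlier
        # feature maps (a residual block would add them instead of concatenating).
        def __init__(self, in_channels, growth_rate, num_layers):
            super().__init__()
            self.layers = nn.ModuleList([
                nn.Sequential(
                    nn.BatchNorm2d(in_channels + i * growth_rate),
                    nn.ReLU(inplace=True),
                    nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                              kernel_size=3, padding=1, bias=False),
                )
                for i in range(num_layers)
            ])

        def forward(self, x):
            for layer in self.layers:
                x = torch.cat([x, layer(x)], dim=1)  # concatenate new features
            return x

    class CenterLoss(nn.Module):
        # Pulls each normalized feature toward a learnable center for its identity.
        def __init__(self, num_classes, feat_dim):
            super().__init__()
            self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

        def forward(self, feats, labels):
            return ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()

    # Assumed sizes: 512-d embedding, 10,575 CASIA-WebFace identities, weight 0.008.
    feat_dim, num_classes, lam = 512, 10575, 0.008
    classifier = nn.Linear(feat_dim, num_classes)
    center_loss = CenterLoss(num_classes, feat_dim)

    def joint_loss(embeddings, labels):
        feats = F.normalize(embeddings, dim=1)        # feature normalization
        softmax_term = F.cross_entropy(classifier(feats), labels)
        return softmax_term + lam * center_loss(feats, labels)

Concatenation is what lets later layers form new features from earlier ones, while the center-loss term tightens each identity's feature cluster on top of the softmax supervision.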
2020-06-12
Jiang, Ruituo, Li, Xu, Gao, Ang, Li, Lixin, Meng, Hongying, Yue, Shigang, Zhang, Lei.  2019.  Learning Spectral and Spatial Features Based on Generative Adversarial Network for Hyperspectral Image Super-Resolution. IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium. :3161–3164.

Super-resolution (SR) of hyperspectral images (HSIs) aims to enhance the spatial/spectral resolution of hyperspectral imagery, and the super-resolved results will benefit many remote sensing applications. A generative adversarial network for HSI super-resolution (HSRGAN) is proposed in this paper. Specifically, HSRGAN constructs spectral and spatial blocks with residual connections in the generator to effectively learn spectral and spatial features from HSIs. Furthermore, a new loss function that combines a pixel-wise loss and an adversarial loss is designed to guide the generator to recover images that approximate the original HSIs with finer texture details. Quantitative and qualitative results demonstrate that the proposed HSRGAN is superior to state-of-the-art methods such as SRCNN and SRGAN for HSI spatial SR.
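As a rough illustration of the combined objective described above, the sketch below (PyTorch, assumed; not the authors' code) adds a pixel-wise term to an adversarial term for the generator. The use of L1 for the pixel-wise loss, the binary cross-entropy adversarial form, and the weight value are assumptions, not details taken from the paper.

    import torch
    import torch.nn.functional as F

    def generator_loss(sr_hsi, ref_hsi, disc_logits_on_sr, adv_weight=1e-3):
        # Pixel-wise term: pushes the super-resolved HSI toward the reference HSI.
        pixel_term = F.l1_loss(sr_hsi, ref_hsi)
        # Adversarial term: rewards outputs the discriminator scores as "real",
        # encouraging finer texture detail than the pixel term alone.
        adv_term = F.binary_cross_entropy_with_logits(
            disc_logits_on_sr, torch.ones_like(disc_logits_on_sr))
        return pixel_term + adv_weight * adv_term

Weighting the adversarial term well below the pixel term keeps the reconstruction faithful to the original HSI while still sharpening texture, which matches the stated goal of the combined loss.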