Biblio

Filters: Keyword is loss function
2022-07-12
Duan, Xiaowei, Han, Yiliang, Wang, Chao, Ni, Huanhuan.  2021.  Optimization of Encrypted Communication Length Based on Generative Adversarial Network. 2021 IEEE 4th International Conference on Big Data and Artificial Intelligence (BDAI). :165–170.
With the development of artificial intelligence and cryptography, intelligent cryptography will be the trend of encrypted communication in the future. Abadi designed an encrypted communication model based on a generative adversarial network that can communicate securely when the adversary knows the ciphertext. The communicating parties and the adversary compete against each other, continuously improving their own capabilities until a state of secure communication is reached. However, this model only performs well at a 16-bit communication length and cannot adapt to the lengths used in modern encrypted communication. This work incorporates the neural network structure of DCGAN to optimize the networks of the original model, adds batch normalization, and optimizes the original loss function. Experiments show that at a maximum communication length of 2048 bits, the decryption success rate reaches about 0.97 while the adversary's guess error rate remains at about 0.95, and the training speed is greatly improved, staying below 5000 steps, ensuring secure and efficient communication.
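
A minimal sketch of the kind of adversarial training objective this abstract describes, assuming PyTorch; it is not the authors' implementation, and the layer sizes, the 2048-bit message length constant, and the simplified Eve penalty are illustrative assumptions. Alice and Bob are trained to minimize Bob's reconstruction error while driving Eve toward random guessing, using DCGAN-style 1-D convolutions with batch normalization in each network.

    import torch
    import torch.nn as nn

    MSG_BITS = 2048  # message/key length in bits (the paper scales up to 2048)

    class CryptoNet(nn.Module):
        """DCGAN-style body shared by Alice, Bob and Eve (hypothetical sizes)."""
        def __init__(self, in_len):
            super().__init__()
            self.fc = nn.Linear(in_len, 2 * MSG_BITS)
            self.body = nn.Sequential(
                nn.Conv1d(1, 4, kernel_size=4, stride=1, padding=2), nn.BatchNorm1d(4), nn.ReLU(),
                nn.Conv1d(4, 4, kernel_size=2, stride=2), nn.BatchNorm1d(4), nn.ReLU(),
                nn.Conv1d(4, 4, kernel_size=1, stride=1), nn.BatchNorm1d(4), nn.ReLU(),
                nn.Conv1d(4, 1, kernel_size=1, stride=1), nn.Tanh(),
            )
        def forward(self, x):
            h = torch.sigmoid(self.fc(x)).unsqueeze(1)  # (batch, 1, 2*MSG_BITS)
            return self.body(h).squeeze(1)              # (batch, MSG_BITS), values in (-1, 1)

    alice = CryptoNet(2 * MSG_BITS)  # input: plaintext || key
    bob   = CryptoNet(2 * MSG_BITS)  # input: ciphertext || key
    eve   = CryptoNet(MSG_BITS)      # input: ciphertext only

    def losses(plain, key):
        """plain, key: (batch, MSG_BITS) tensors with bit values in {-1, +1}."""
        cipher  = alice(torch.cat([plain, key], dim=1))
        bob_out = bob(torch.cat([cipher, key], dim=1))
        eve_out = eve(cipher)
        bob_err = torch.mean(torch.abs(plain - bob_out))  # Bob's reconstruction loss
        eve_err = torch.mean(torch.abs(plain - eve_out))  # Eve's reconstruction loss
        # Alice/Bob loss: reconstruct well AND keep Eve near random guessing (eve_err ~ 1).
        alice_bob_loss = bob_err + (1.0 - eve_err) ** 2
        return alice_bob_loss, eve_err  # eve_err is minimized when training Eve
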
2021-02-23
Liao, D., Huang, S., Tan, Y., Bai, G..  2020.  Network Intrusion Detection Method Based on GAN Model. 2020 International Conference on Computer Communication and Network Security (CCNS). :153—156.

Existing network intrusion detection methods have few labeled samples available for training, and their detection accuracy is not high. In order to solve this problem, this paper designs a network intrusion detection method based on the GAN model, using the adversarial idea contained in the GAN. The model augments the original training set by continuously generating samples, which expands the labeled sample set. In order to realize multi-classification of samples, this paper transforms the original binary classification model of the generative adversarial network into a supervised multi-classification model. The training loss function is redefined, and the corresponding training method and parameter settings are derived. Under the same experimental conditions, several performance indicators are used to compare the detection ability of the proposed method, the original classification model, and other models. The experimental results show that the proposed method is more stable and robust, achieves a more accurate detection rate, has good generalization ability, and can effectively realize network intrusion detection.
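
A minimal sketch of the multi-class discriminator loss this abstract describes, assuming PyTorch; the network sizes, number of traffic classes, and feature dimension are illustrative assumptions rather than the paper's settings. Real traffic is classified into one of K labeled classes, and generated samples are assigned an extra (K+1)-th "generated" class, so the discriminator doubles as a supervised multi-class classifier.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    K = 5            # e.g. normal traffic + 4 attack categories (illustrative)
    FEATURES = 41    # length of a flow feature vector (illustrative)
    NOISE = 64       # generator noise dimension (illustrative)

    discriminator = nn.Sequential(
        nn.Linear(FEATURES, 128), nn.LeakyReLU(0.2),
        nn.Linear(128, K + 1),            # K real traffic classes + 1 "generated" class
    )
    generator = nn.Sequential(
        nn.Linear(NOISE, 128), nn.ReLU(),
        nn.Linear(128, FEATURES),
    )

    def discriminator_loss(real_x, real_y):
        """Cross-entropy over K labeled classes plus the extra class for generated samples."""
        fake_x = generator(torch.randn(real_x.size(0), NOISE)).detach()
        fake_y = torch.full((real_x.size(0),), K, dtype=torch.long)  # index of "generated" class
        logits = discriminator(torch.cat([real_x, fake_x]))
        labels = torch.cat([real_y, fake_y])
        return F.cross_entropy(logits, labels)

    def generator_loss(batch_size):
        """The generator tries to keep its samples from being recognized as generated."""
        logits = discriminator(generator(torch.randn(batch_size, NOISE)))
        p_fake = F.softmax(logits, dim=1)[:, K]        # probability of the "generated" class
        return -torch.log(1.0 - p_fake + 1e-8).mean()  # minimized when p_fake -> 0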

2020-12-07
Wang, C., He, M..  2018.  Image Style Transfer with Multi-target Loss for IoT Applications. 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN). :296–299.

Transferring the style of an image is a fundamental problem in computer vision: the features of a content image and a style image are extracted and then recombined to produce a new image that carries features of both inputs. In this paper, we introduce an artificial system to separate and recombine the content and style of arbitrary images, providing a neural algorithm for the creation of artistic images. We use the pre-trained deep convolutional neural network VGG19 to extract feature maps of the input style image and content image. Then we define a loss function that captures the difference between the output image and the two input images. We use the gradient descent algorithm to update the output image so as to minimize this loss function. Experimental results show the feasibility of the method.
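
A minimal sketch of the loss and image-update loop this abstract describes, assuming PyTorch and a recent torchvision; the chosen VGG19 layers, the Gram-matrix style term, and the weighting factor are common neural style transfer choices and are assumptions here, not details confirmed by the paper.

    import torch
    import torch.nn.functional as F
    from torchvision.models import vgg19

    features = vgg19(weights="IMAGENET1K_V1").features.eval()
    for p in features.parameters():
        p.requires_grad_(False)

    CONTENT_LAYER = 21                   # conv4_2 in VGG19's feature stack (common choice)
    STYLE_LAYERS = (0, 5, 10, 19, 28)    # conv1_1 ... conv5_1 (common choice)

    def layer_outputs(x, layers):
        outs, h = {}, x
        for i, module in enumerate(features):
            h = module(h)
            if i in layers:
                outs[i] = h
        return outs

    def gram(f):
        """Gram matrix of a feature map, used as the style representation."""
        b, c, hh, ww = f.shape
        f = f.view(b, c, hh * ww)
        return f @ f.transpose(1, 2) / (c * hh * ww)

    def total_loss(output, content_img, style_img, style_weight=1e6):
        outs = layer_outputs(output, set(STYLE_LAYERS) | {CONTENT_LAYER})
        c_target = layer_outputs(content_img, {CONTENT_LAYER})[CONTENT_LAYER]
        s_targets = layer_outputs(style_img, set(STYLE_LAYERS))
        content_loss = F.mse_loss(outs[CONTENT_LAYER], c_target)
        style_loss = sum(F.mse_loss(gram(outs[i]), gram(s_targets[i])) for i in STYLE_LAYERS)
        return content_loss + style_weight * style_loss

    # Gradient descent is applied to the output image itself, initialized from the content image:
    #   output = content_img.clone().requires_grad_(True)
    #   optimizer = torch.optim.Adam([output], lr=0.02)
    #   for _ in range(300):
    #       optimizer.zero_grad()
    #       total_loss(output, content_img, style_img).backward()
    #       optimizer.step()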

2020-06-12
Jiang, Ruituo, Li, Xu, Gao, Ang, Li, Lixin, Meng, Hongying, Yue, Shigang, Zhang, Lei.  2019.  Learning Spectral and Spatial Features Based on Generative Adversarial Network for Hyperspectral Image Super-Resolution. IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium. :3161–3164.

Super-resolution (SR) of hyperspectral images (HSIs) aims to enhance the spatial/spectral resolution of hyperspectral imagery, and the super-resolved results will benefit many remote sensing applications. A generative adversarial network for HSI super-resolution (HSRGAN) is proposed in this paper. Specifically, HSRGAN constructs spectral and spatial blocks with residual networks in the generator to effectively learn spectral and spatial features from HSIs. Furthermore, a new loss function which combines the pixel-wise loss and the adversarial loss is designed to guide the generator to recover images approximating the original HSIs and having finer texture details. Quantitative and qualitative results demonstrate that the proposed HSRGAN is superior to state-of-the-art methods such as SRCNN and SRGAN for HSI spatial SR.
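
A minimal sketch of a generator loss combining a pixel-wise term with an adversarial term, as this abstract describes, assuming PyTorch; the L1 pixel term, the binary-cross-entropy adversarial formulation, and the weighting factor are assumptions, not the paper's exact definitions.

    import torch
    import torch.nn.functional as F

    def generator_loss(sr, hr, disc_logits_on_sr, adv_weight=1e-3):
        """sr: super-resolved HSI cube, hr: reference high-resolution HSI cube,
        disc_logits_on_sr: discriminator logits for the super-resolved cube."""
        pixel_loss = F.l1_loss(sr, hr)  # pixel-wise fidelity to the reference image
        # Adversarial term: push the discriminator toward labeling the SR output as real.
        adv_loss = F.binary_cross_entropy_with_logits(
            disc_logits_on_sr, torch.ones_like(disc_logits_on_sr))
        return pixel_loss + adv_weight * adv_loss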

2015-05-06
Jingkuan Song, Yi Yang, Xuelong Li, Zi Huang, Yang Yang.  2014.  Robust Hashing With Local Models for Approximate Similarity Search. IEEE Transactions on Cybernetics. 44:1225–1236.

Similarity search plays an important role in many applications involving high-dimensional data. Due to the well-known curse of dimensionality, the performance of most existing indexing structures degrades quickly as the feature dimensionality increases. Hashing methods, such as locality sensitive hashing (LSH) and its variants, have been widely used to achieve fast approximate similarity search by trading search quality for efficiency. However, most existing hashing methods make use of randomized algorithms to generate hash codes without considering the specific structural information in the data. In this paper, we propose a novel hashing method, namely, robust hashing with local models (RHLM), which learns a set of robust hash functions to map the high-dimensional data points into binary hash codes by effectively utilizing local structural information. In RHLM, for each individual data point in the training dataset, a local hashing model is learned and used to predict the hash codes of its neighboring data points. The local models from all the data points are globally aligned so that an optimal hash code can be assigned to each data point. After obtaining the hash codes of all the training data points, we design a robust method by employing ℓ2,1-norm minimization on the loss function to learn effective hash functions, which are then used to map each database point into its hash code. Given a query data point, the search process first maps it into a query hash code using the hash functions and then explores the buckets whose hash codes are similar to the query hash code. Extensive experimental results conducted on real-life datasets show that the proposed RHLM outperforms the state-of-the-art methods in terms of search quality and efficiency.
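
A minimal sketch of the ℓ2,1-norm that the abstract's loss minimizes, assuming NumPy; the linear hash projection and the regularization term in the example are illustrative assumptions, not the paper's exact formulation. Summing per-point ℓ2 norms, rather than a squared Frobenius error, reduces the influence of outlying training points, which is what makes the learned hash functions robust.

    import numpy as np

    def l21_norm(residual):
        """l2,1-norm of a residual matrix: the sum over rows of their l2 norms.
        residual: (n_points, n_bits), one row per training point."""
        return np.sqrt((residual ** 2).sum(axis=1)).sum()

    # Illustrative robust objective for a linear hash projection W (hypothetical form):
    #   minimize ||X W - Y||_{2,1} + lam * ||W||_F^2
    # where X holds training features and Y the target hash codes from the aligned local models.
    def hash_loss(W, X, Y, lam=0.1):
        return l21_norm(X @ W - Y) + lam * np.sum(W ** 2)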