Biblio

Filters: Author is Jiang, W.
Hu, D., Wang, L., Jiang, W., Zheng, S., Li, B. 2018. A Novel Image Steganography Method via Deep Convolutional Generative Adversarial Networks. IEEE Access. 6:38303–38314.

The security of image steganography is an important basis for evaluating steganography algorithms. Steganography has recently made great progress in its long-term confrontation with steganalysis. To improve its security, image steganography must be able to resist detection by steganalysis algorithms. Traditional embedding-based steganography embeds the secret information into the content of an image, which unavoidably leaves a trace of the modification that can be detected by increasingly advanced machine-learning-based steganalysis algorithms. The concept of steganography without embedding (SWE), which does not modify the data of the carrier image, emerged to evade detection by such algorithms. In this paper, we propose a novel image SWE method based on deep convolutional generative adversarial networks. We map the secret information onto a noise vector and use the trained generator network to produce the carrier image from that vector. No modification or embedding operation is required during image generation, and the information contained in the image can be extracted successfully by another neural network, called the extractor, after training. Experimental results show that the method achieves highly accurate information extraction and strongly resists detection by state-of-the-art image steganalysis algorithms.
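The pipeline the abstract describes (secret bits, mapped to a noise vector, fed to a generator that produces the carrier image, recovered by an extractor) can be illustrated with a minimal PyTorch sketch. The layer sizes, the bit-to-noise mapping, and all names below are illustrative assumptions, not the paper's actual DCGAN architecture or training procedure:

import torch
import torch.nn as nn

NOISE_DIM = 100   # length of the noise vector (assumed)
BITS = 100        # secret bits per image: one bit per noise entry (assumed)

def bits_to_noise(bits: torch.Tensor) -> torch.Tensor:
    # Map each secret bit {0, 1} to a noise entry {-1.0, +1.0}.
    return bits.float() * 2.0 - 1.0

class Generator(nn.Module):
    # DCGAN-style generator: noise vector -> 64x64 RGB carrier image.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(NOISE_DIM, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 4, 0), nn.Tanh(),  # 16x16 -> 64x64
        )
    def forward(self, z):
        return self.net(z.view(-1, NOISE_DIM, 1, 1))

class Extractor(nn.Module):
    # CNN trained to recover the noise vector (hence the bits) from the image.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, BITS), nn.Tanh(),
        )
    def forward(self, img):
        return self.net(img)

# Round trip: the sender generates the carrier image from the secret, so no
# pixel of a pre-existing image is ever modified. The receiver thresholds the
# extractor output to recover the bits.
secret = torch.randint(0, 2, (1, BITS))
carrier = Generator()(bits_to_noise(secret))
recovered = (Extractor()(carrier) > 0).long()

The untrained models above only demonstrate the data flow; it is the joint training of generator and extractor that makes the thresholded extractor output match the secret bits with high probability.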

Fei, Y., Ning, J., Jiang, W. 2018. A quantifiable Attack-Defense Trees model for APT attack. 2018 IEEE 3rd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC). :2303–2306.
To deal with APT (Advanced Persistent Threat) attacks, this paper proposes a quantifiable Attack-Defense Tree model. First, the model assigns a variety of security attributes to both attack and defense leaf nodes, then quantifies the nodes through the Analytic Hierarchy Process (AHP). Finally, it analyzes the impact of defense measures on attack behavior. Application of the model shows that the quantifiable Attack-Defense Tree approach describes this impact well.
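The paper defines its own attribute set and quantification details; the sketch below only illustrates the general mechanics under assumed choices: leaf nodes carry hypothetical attributes (cost, difficulty, detectability), attribute weights come from a standard AHP pairwise-comparison matrix, and a defense node discounts the score of the attack node it counters.

import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    # Standard AHP: the priority vector is the normalized principal
    # eigenvector of the pairwise-comparison matrix.
    vals, vecs = np.linalg.eig(pairwise)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

# Hypothetical attributes compared pairwise on Saaty's 1-9 scale:
# cost vs. difficulty vs. detectability.
pairwise = np.array([[1.0, 3.0, 5.0],
                     [1/3, 1.0, 2.0],
                     [1/5, 1/2, 1.0]])
w = ahp_weights(pairwise)

def leaf_score(attrs):
    # Weighted sum of attribute utilities normalized to [0, 1].
    return float(np.dot(w, attrs))

def or_node(children):   # the attacker needs any one child to succeed
    return max(children)

def and_node(children):  # the attacker needs every child to succeed
    return min(children)

def countered(attack_score, defense_effectiveness):
    # A defense node attached to an attack node reduces its score.
    return attack_score * (1.0 - defense_effectiveness)

# Tiny APT example: (spear phishing OR watering hole) AND privilege
# escalation, with an email-filtering defense against spear phishing.
phishing = countered(leaf_score([0.8, 0.6, 0.7]), defense_effectiveness=0.5)
watering = leaf_score([0.4, 0.3, 0.5])
priv_esc = leaf_score([0.6, 0.5, 0.4])
root = and_node([or_node([phishing, watering]), priv_esc])
print(f"attack feasibility at the root: {root:.3f}")

Raising defense_effectiveness for the email filter lowers the root score, which is exactly the defense-on-attack impact such a model is meant to expose.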
Huang, H., Wang, H., Luo, W., Ma, L., Jiang, W., Zhu, X., Li, Z., Liu, W. 2017. Real-Time Neural Style Transfer for Videos. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). :7044–7052.

Recent research has shown the potential of feed-forward convolutional neural networks for fast style transfer on images. In this work, we take one step further and explore using a feed-forward network to perform style transfer for videos while maintaining temporal consistency among the stylized frames. Our feed-forward network is trained by enforcing the outputs of consecutive frames to be both well stylized and temporally consistent. More specifically, we propose a hybrid loss that capitalizes on the content information of the input frames, the style information of a given style image, and the temporal information of consecutive frames. To compute the temporal loss during training, we propose a novel two-frame synergic training mechanism. Compared with directly applying an existing image style transfer method to videos, our method uses the trained network to yield temporally consistent stylized videos that are much more visually pleasing. In contrast to the prior video style transfer method, which relies on time-consuming optimization on the fly, our method runs in real time while generating competitive visual results.
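The hybrid loss can be sketched in PyTorch as below. The loss weights are assumptions, and the optical-flow warp and occlusion mask are supplied by the caller; the paper's two-frame synergic training provides these during training, with its own flow and occlusion handling:

import torch.nn.functional as F

def gram(feat):
    # Gram matrix of a feature map; channel correlations capture style,
    # independent of spatial layout.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def hybrid_loss(feat_out_t, feat_out_t1,  # features of the two stylized frames
                feat_in_t, feat_in_t1,    # features of the two input frames
                feat_style,               # features of the style image
                out_t_warped, out_t1,     # frame t warped to t+1 by flow; frame t+1
                mask,                     # 1 where flow is valid, 0 where occluded
                alpha=1.0, beta=10.0, gamma=100.0):  # assumed weights
    # Content: each stylized frame keeps its input frame's high-level features.
    l_content = F.mse_loss(feat_out_t, feat_in_t) + F.mse_loss(feat_out_t1, feat_in_t1)
    # Style: Gram matrices of both outputs match the style image's.
    l_style = (F.mse_loss(gram(feat_out_t), gram(feat_style))
               + F.mse_loss(gram(feat_out_t1), gram(feat_style)))
    # Temporal: frame t, warped forward, agrees with frame t+1 where visible.
    l_temporal = F.mse_loss(mask * out_t_warped, mask * out_t1)
    return alpha * l_content + beta * l_style + gamma * l_temporal

Training consecutive frame pairs through the same feed-forward network against this single objective is what lets the stylizer run frame-by-frame in real time at inference, with no on-the-fly optimization, while staying temporally stable.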