Sultan, Bisma, Wani, M. Arif.
2022.
Multi-data Image Steganography using Generative Adversarial Networks. 2022 9th International Conference on Computing for Sustainable Global Development (INDIACom). :454–459.
The success of deep learning based steganography has shifted the focus of researchers from traditional steganography approaches to deep learning based steganography. Various deep steganographic models have been developed for improved security, capacity and invisibility. In this work, a multi-data deep learning steganography model has been developed using a well-known deep learning model, the Generative Adversarial Network (GAN), more specifically the Deep Convolutional Generative Adversarial Network (DCGAN). The model is capable of hiding two different messages, meant for two different receivers, inside a single cover image. The proposed model consists of four networks, namely the Generator, Steganalyzer, Extractor1 and Extractor2 networks. The Generator hides two secret messages inside one cover image, and the messages are extracted using two different extractors. The Steganalyzer network differentiates between the cover images and the stego images generated by the generator network. The experiment has been carried out on the CelebA dataset. Two commonly used distortion metrics, Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Metric (SSIM), are used for measuring the distortion in the stego image. The results of the experiments show that the generated stego images have good imperceptibility and high extraction rates.
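A minimal PyTorch sketch of the four-network arrangement this abstract describes: one generator embeds two secret bit vectors into a single cover image, two separate extractors each recover their own message, and a steganalyzer discriminates cover from stego images. The layer sizes, 64x64 resolution, message length, and loss weighting are illustrative assumptions, not the paper's exact DCGAN configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

IMG = 64        # assumed cover image resolution (CelebA-style crops)
MSG_BITS = 100  # assumed secret message length per receiver

class Generator(nn.Module):
    """Embeds two secret bit vectors into one cover image."""
    def __init__(self):
        super().__init__()
        # Project each message to a full-resolution plane before concatenation.
        self.embed1 = nn.Linear(MSG_BITS, IMG * IMG)
        self.embed2 = nn.Linear(MSG_BITS, IMG * IMG)
        self.net = nn.Sequential(
            nn.Conv2d(3 + 2, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, cover, msg1, msg2):
        b = cover.size(0)
        p1 = self.embed1(msg1).view(b, 1, IMG, IMG)
        p2 = self.embed2(msg2).view(b, 1, IMG, IMG)
        return self.net(torch.cat([cover, p1, p2], dim=1))  # stego image

class Extractor(nn.Module):
    """Recovers one receiver's message from the stego image."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(64 * (IMG // 4) ** 2, MSG_BITS)

    def forward(self, stego):
        return torch.sigmoid(self.fc(self.conv(stego).flatten(1)))

class Steganalyzer(nn.Module):
    """Discriminates cover images from stego images."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.fc = nn.Linear(64 * (IMG // 4) ** 2, 1)

    def forward(self, img):
        return self.fc(self.conv(img).flatten(1))  # real-vs-stego logit

# Tiny usage example with random tensors standing in for CelebA images.
gen, ext1, ext2 = Generator(), Extractor(), Extractor()
cover = torch.rand(4, 3, IMG, IMG) * 2 - 1
m1 = torch.randint(0, 2, (4, MSG_BITS)).float()
m2 = torch.randint(0, 2, (4, MSG_BITS)).float()
stego = gen(cover, m1, m2)
bce = nn.BCELoss()
loss = (bce(ext1(stego), m1) + bce(ext2(stego), m2)
        + F.mse_loss(stego, cover))  # imperceptibility term
loss.backward()
```

In a full training loop the generator loss would also include an adversarial term from the Steganalyzer, which is in turn trained to separate cover from stego batches.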
Chai, Heyan, Su, Weijun, Tang, Siyu, Ding, Ye, Fang, Binxing, Liao, Qing.
2022.
Improving Anomaly Detection with a Self-Supervised Task Based on Generative Adversarial Network. ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :3563–3567.
Existing anomaly detection models have shown success in detecting abnormal images with generative adversarial networks despite insufficient annotation of anomalous samples. However, existing models cannot accurately identify anomalous samples that are close to the normal samples. We assume that the main reason is that these methods ignore the diversity of patterns in normal samples. To alleviate this issue, this paper proposes a novel anomaly detection framework based on a generative adversarial network, called ADe-GAN. More concretely, we construct a self-supervised learning task to fully explore the pattern information and latent representations of input images. In the model inference stage, we design a new abnormality scoring approach that jointly considers the pattern information and the reconstruction errors to improve the performance of anomaly detection. Extensive experiments show that ADe-GAN outperforms state-of-the-art methods on several real-world datasets.
ISSN: 2379-190X
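A hedged sketch of the kind of combined abnormality score the ADe-GAN abstract describes: a GAN-style reconstruction error plus a term from a self-supervised pattern task. Rotation prediction is used here purely as a stand-in self-supervised task, and the weighting lambda is an assumption; the paper's actual task and score definition may differ.

```python
import torch
import torch.nn.functional as F

def abnormality_score(x, reconstruction, pattern_logits, pattern_labels, lam=0.5):
    """Higher score = more likely anomalous.

    x, reconstruction : (B, C, H, W) input batch and its GAN reconstruction
    pattern_logits    : (B, K) predictions of the self-supervised task head
    pattern_labels    : (B,)   ground-truth labels of that task (e.g. rotation id)
    """
    recon_err = F.l1_loss(reconstruction, x, reduction="none").flatten(1).mean(dim=1)
    pattern_err = F.cross_entropy(pattern_logits, pattern_labels, reduction="none")
    return lam * recon_err + (1.0 - lam) * pattern_err

# Usage with dummy tensors: a batch of 4 images, 4 rotation classes.
x = torch.rand(4, 3, 32, 32)
recon = torch.rand(4, 3, 32, 32)
logits = torch.randn(4, 4)
labels = torch.randint(0, 4, (4,))
print(abnormality_score(x, recon, logits, labels).shape)  # torch.Size([4])
```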
Zhang, Yuhang, Zhang, Qian, Jiang, Man, Su, Jiangtao.
2022.
SCGAN: Generative Adversarial Networks of Skip Connection for Face Image Inpainting. 2022 Ninth International Conference on Social Networks Analysis, Management and Security (SNAMS). :1–6.
Deep learning has been widely applied to face inpainting tasks; however, problems such as incoherent inpainting edges and a lack of diversity in the generated images usually remain. In order to obtain more feature information and improve the inpainting effect, we therefore propose a Generative Adversarial Network of Skip Connection (SCGAN), which connects the encoder layers and the decoder layers of the generator through skip connections. The coherence and consistency of the inpainted edges are improved and finer image features are recovered, while simultaneously using a double-discriminator model with local and global discriminators. We also employ the WGAN-GP loss to enhance model stability during training, prevent model collapse, and increase the variety of the inpainted face images. Finally, experiments are performed on the CelebA and LFW datasets, and the model's performance is assessed using the PSNR and SSIM indices. Our model's face image inpainting is more realistic and coherent than that of other models, and the model training is more stable.
ISSN: 2831-7343
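A minimal PyTorch sketch of the two ingredients the SCGAN abstract highlights: a generator whose decoder layers receive skip connections from the matching encoder layers, and a WGAN-GP gradient penalty for stable training. The channel counts, the single skip level, and the toy global critic are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SkipGenerator(nn.Module):
    """Encoder-decoder generator with a skip connection from enc1 to dec1."""
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2))
        self.enc2 = nn.Sequential(nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2))
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU())
        # Decoder input channels are doubled by concatenating the encoder feature map.
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(64 + 64, 3, 4, stride=2, padding=1), nn.Tanh())

    def forward(self, masked_img):
        e1 = self.enc1(masked_img)
        e2 = self.enc2(e1)
        d2 = self.dec2(e2)
        return self.dec1(torch.cat([d2, e1], dim=1))  # skip connection

def gradient_penalty(critic, real, fake, lam=10.0):
    """WGAN-GP penalty on interpolates between real and generated images."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mix = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(critic(mix).sum(), mix, create_graph=True)[0]
    return lam * ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

# Usage with dummy 64x64 data and a tiny global critic.
critic = nn.Sequential(nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                       nn.Flatten(), nn.Linear(16 * 32 * 32, 1))
gen = SkipGenerator()
real = torch.rand(2, 3, 64, 64) * 2 - 1
fake = gen(real * torch.bernoulli(torch.full_like(real, 0.8)))  # crude random mask
d_loss = (critic(fake.detach()).mean() - critic(real).mean()
          + gradient_penalty(critic, real, fake.detach()))
```

A local discriminator, as mentioned in the abstract, would apply the same kind of critic to a crop around the inpainted region only.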
Pardede, Hilman, Zilvan, Vicky, Ramdan, Ade, Yuliani, Asri R., Suryawati, Endang, Kusumowardani, Renni.
2022.
Adversarial Networks-Based Speech Enhancement with Deep Regret Loss. 2022 5th International Conference on Networking, Information Systems and Security: Envisage Intelligent Systems in 5g//6G-based Interconnected Digital Worlds (NISS). :1–6.
Speech enhancement is often applied in speech-based systems because speech signals are prone to additive background noise. While speech-processing methods have traditionally been used for speech enhancement, advances in deep learning have led to many efforts to apply it to speech enhancement as well. Using deep learning, the networks learn mapping functions from noisy data to clean data and thereby learn to reconstruct clean speech signals. As a consequence, deep learning methods can reduce the so-called musical noise that is often introduced by traditional speech enhancement methods. Currently, one popular deep learning architecture for speech enhancement is the generative adversarial network (GAN). However, the cross-entropy loss employed in GANs often causes the training to be unstable, so in many implementations the cross-entropy loss is replaced with the least-square loss. In this paper, to improve the training stability of GAN with the cross-entropy loss, we propose to use Deep Regret Analytic Generative Adversarial Networks (DRAGAN) for speech enhancement, which applies a gradient penalty to the cross-entropy loss. We also employ relativistic rules to stabilize the training of GAN, and apply them to both the least-square and DRAGAN losses. Our experiments suggest that the proposed method improves the quality of speech more than the least-square loss on several objective quality metrics.
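A hedged PyTorch sketch of the two training tricks the abstract combines: a DRAGAN-style gradient penalty applied around perturbed real samples on top of the cross-entropy GAN loss, and a relativistic discriminator objective. The 1-D convolutional discriminator over raw waveform frames and all hyperparameters are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

disc = nn.Sequential(  # toy discriminator for 16384-sample speech frames
    nn.Conv1d(1, 16, 31, stride=4, padding=15), nn.LeakyReLU(0.2),
    nn.Conv1d(16, 32, 31, stride=4, padding=15), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(32 * 1024, 1),
)

def dragan_penalty(d, real, lam=10.0, k=0.5):
    """Gradient penalty at points perturbed around the real-data manifold."""
    noise = k * real.std() * torch.rand_like(real)
    perturbed = (real + noise).requires_grad_(True)
    grad = torch.autograd.grad(d(perturbed).sum(), perturbed, create_graph=True)[0]
    return lam * ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

def relativistic_d_loss(d_real, d_fake):
    """Relativistic standard-GAN discriminator loss (cross-entropy based)."""
    return F.binary_cross_entropy_with_logits(d_real - d_fake, torch.ones_like(d_real))

# Usage with dummy clean / enhanced speech frames.
clean = torch.randn(4, 1, 16384)
enhanced = torch.randn(4, 1, 16384)  # would come from the enhancement generator
d_loss = relativistic_d_loss(disc(clean), disc(enhanced)) + dragan_penalty(disc, clean)
d_loss.backward()
```

The generator side would mirror this with the relativistic loss in the opposite direction, typically plus an L1 term between the enhanced and clean waveforms.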
Brian, Gianluca, Faonio, Antonio, Obremski, Maciej, Ribeiro, João, Simkin, Mark, Skórski, Maciej, Venturi, Daniele.
2022.
The Mother of All Leakages: How to Simulate Noisy Leakages via Bounded Leakage (Almost) for Free. IEEE Transactions on Information Theory. 68:8197–8227.
We show that the most common flavors of noisy leakage can be simulated in the information-theoretic setting using a single query of bounded leakage, up to a small statistical simulation error and a slight loss in the leakage parameter. The latter holds true in particular for one of the most used noisy-leakage models, where the noisiness is measured using the conditional average min-entropy (Naor and Segev, CRYPTO’09 and SICOMP’12). Our reductions between noisy and bounded leakage are achieved in two steps. First, we put forward a new leakage model (dubbed the dense leakage model) and prove that dense leakage can be simulated in the information-theoretic setting using a single query of bounded leakage, up to small statistical distance. Second, we show that the most common noisy-leakage models fall within the class of dense leakage, with good parameters. Third, we prove lower bounds on the amount of bounded leakage required for simulation with sub-constant error, showing that our reductions are nearly optimal. In particular, our results imply that useful general simulation of noisy leakage based on statistical distance and mutual information is impossible. We also provide a complete picture of the relationships between different noisy-leakage models. Our result finds applications to leakage-resilient cryptography, where we are often able to lift security in the presence of bounded leakage to security in the presence of noisy leakage, both in the information-theoretic and in the computational setting. Remarkably, this lifting procedure makes only black-box use of the underlying schemes. Additionally, we show how to use lower bounds in communication complexity to prove that bounded-collusion protocols (Kumar, Meka, and Sahai, FOCS’19) for certain functions do not only require long transcripts, but also necessarily need to reveal enough information about the inputs.
Journal Name: IEEE Transactions on Information Theory
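For reference, the conditional average min-entropy underlying the Naor-Segev noisy-leakage model mentioned in the abstract is standardly defined as below (notation follows the usual definition of Dodis et al.; the paper's exact parameterization may differ).

```latex
% Conditional average min-entropy of a secret X given leakage Z:
\widetilde{H}_\infty(X \mid Z)
  = -\log \mathbb{E}_{z \leftarrow Z}\!\left[ 2^{-H_\infty(X \mid Z = z)} \right]
  = -\log \mathbb{E}_{z \leftarrow Z}\!\left[ \max_{x} \Pr[X = x \mid Z = z] \right].
```

In this terminology, a leakage function $f$ is $\ell$-noisy when $\widetilde{H}_\infty(X \mid f(X)) \ge H_\infty(X) - \ell$, whereas bounded leakage instead restricts $f$ to output at most $\ell$ bits; the paper shows how the former can be simulated using a single query of the latter.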