Biblio

Filters: Keyword is deep neural network (dnn)
2023-01-05
Garcia, Carla E., Camana, Mario R., Koo, Insoo.  2022.  DNN aided PSO based-scheme for a Secure Energy Efficiency Maximization in a cooperative NOMA system with a non-linear EH. 2022 Thirteenth International Conference on Ubiquitous and Future Networks (ICUFN). :155–160.
Physical layer security is an emerging area that tackles wireless communication security issues and complements conventional encryption-based techniques. We therefore propose a novel scheme, based on a swarm intelligence optimization technique and a deep neural network (DNN), for maximizing the secrecy energy efficiency (SEE) in a cooperative relaying underlay cognitive radio and non-orthogonal multiple access (NOMA) system with a non-linear energy harvesting user exposed to multiple eavesdroppers. Simulation results show that the proposed particle swarm optimization (PSO)-DNN framework achieves performance close to that of the optimal solution, with a meaningful reduction in computational complexity.
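The abstract gives only the shape of the method, so the sketch below illustrates the generic PSO loop such a scheme builds on. It is a minimal NumPy sketch under stated assumptions: secrecy_energy_efficiency is a hypothetical placeholder objective (the paper's actual SEE expression, involving secrecy rate and a non-linear EH model, is not given in the abstract), and the box-constrained search space stands in for power-allocation variables.

```python
import numpy as np

def secrecy_energy_efficiency(x):
    """Hypothetical placeholder objective: the paper's actual SEE
    expression is not given in the abstract, so a toy concave
    function stands in for it here."""
    return -np.sum((x - 0.5) ** 2, axis=-1)

def pso_maximize(objective, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(0.0, 1.0)):
    """Canonical particle swarm optimization over a box-constrained
    search space (e.g., power-allocation coefficients)."""
    lo, hi = bounds
    rng = np.random.default_rng(0)
    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), objective(pos)
    gbest = pbest[np.argmax(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # Velocity blends inertia, pull toward each particle's best,
        # and pull toward the swarm's global best.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = objective(pos)
        improved = val > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[np.argmax(pbest_val)]
    return gbest, pbest_val.max()

best_x, best_val = pso_maximize(secrecy_energy_efficiency, dim=3)
print(best_x, best_val)
```

The abstract's claim of near-optimal performance at reduced complexity suggests the common pattern in DNN-aided optimization: train the DNN offline on (channel state, PSO solution) pairs so that deployment needs only a forward pass rather than a full swarm search.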
2018-12-10
Kwon, Hyun, Yoon, Hyunsoo, Choi, Daeseon.  2018.  POSTER: Zero-Day Evasion Attack Analysis on Race Between Attack and Defense. Proceedings of the 2018 on Asia Conference on Computer and Communications Security. :805–807.
Deep neural networks (DNNs) exhibit excellent performance in machine learning tasks such as image recognition, pattern recognition, speech recognition, and intrusion detection. However, adversarial examples, inputs intentionally corrupted by noise, can cause them to misclassify. Because adversarial examples are a serious threat to DNNs, both adversarial attacks and defenses against them have been studied continuously. Zero-day adversarial examples are created from new test data and are unknown to the classifier, so they pose an even greater threat. To the best of our knowledge, no study in the literature has experimentally analyzed zero-day adversarial examples with a focus on attack and defense methods across several scenarios. In this study, we therefore analyze zero-day adversarial examples in practice, emphasizing attack and defense methods through experiments on scenarios composed of a fixed target model and an adaptive target model. The Carlini method was used as a state-of-the-art attack, and adversarial training was used as a representative defense. Using the MNIST dataset, we analyzed the success rates of zero-day adversarial examples, their average distortions, and the recognition of original samples across the fixed and adaptive target model scenarios. The experimental results demonstrate that changing the parameters of the target model in real time confers resistance to adversarial examples in both the fixed and adaptive target models.
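To make the attack/defense pipeline concrete, here is a minimal PyTorch sketch of adversarial-example generation and adversarial training on MNIST-shaped inputs. Note the swap: the paper uses the Carlini attack, which is comparatively involved, so the simpler fast gradient sign method (FGSM) stands in for the attack step here; the architecture, epsilon, and loss weighting are illustrative assumptions, not the authors' setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Small illustrative MNIST classifier (not the architecture from the paper).
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(),
                      nn.Linear(128, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def fgsm(model, x, y, eps=0.1):
    """Fast gradient sign method: a simpler stand-in for the Carlini
    attack used in the paper. Shifts each pixel by eps in the
    direction that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(x, y):
    """One step of adversarial training: the defense trains on a mix
    of clean and adversarial examples so both are classified
    correctly."""
    x_adv = fgsm(model, x, y)
    opt.zero_grad()  # clear gradients accumulated by the attack
    loss = 0.5 * (F.cross_entropy(model(x), y) +
                  F.cross_entropy(model(x_adv), y))
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random tensors standing in for MNIST batches.
x = torch.rand(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
print(adversarial_training_step(x, y))
```

The paper's "adaptive target model" scenario corresponds to re-running steps like the one above while the attacker's zero-day examples are held fixed, which is what makes real-time parameter changes an effective source of resistance.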