Biblio

Filters: Author is Sharma, Yash
2020-06-22
Singh, Shradhanjali, Sharma, Yash.  2019.  A Review on DNA based Cryptography for Data hiding. 2019 International Conference on Intelligent Sustainable Systems (ICISS). :282–285.
In today's world, securing data is becoming one of the main issues, and the fusion of cryptography and steganography is regarded as an active sphere of ongoing research. Security can be achieved by cryptography, by steganography, or by a fusion of the two, in which the message is first encoded using a cryptographic technique and then concealed in a cover medium using a steganographic technique. The biological structure of DNA is used as the cover medium because of its high storage capacity, simple encoding method, massive parallelism, and randomness; DNA cryptography can be applied to identification cards and tickets. Work in this field is still at the developmental stage, and considerable investigation is required before it reaches a fully-fledged state. This paper provides a review of the existing methods of DNA-based cryptography.
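As a rough illustration of the encrypt-then-hide pipeline this review surveys, the sketch below first applies a placeholder XOR cipher (standing in for a proper cipher such as AES) and then encodes the ciphertext as a DNA strand using a common 2-bits-per-nucleotide table. The function names, the XOR cipher, and the specific base mapping are illustrative assumptions, not a scheme taken from the paper.

```python
# Minimal sketch of DNA-based data hiding: encrypt, then encode as nucleotides.
# The mapping below is one common 2-bit-per-base convention; real schemes vary.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {v: k for k, v in BITS_TO_BASE.items()}

def encrypt_xor(message: bytes, key: bytes) -> bytes:
    """Placeholder cipher: XOR with a repeating key (a real scheme would use AES)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(message))

def bytes_to_dna(data: bytes) -> str:
    """Encode each byte as four nucleotides (2 bits per base)."""
    bits = "".join(f"{b:08b}" for b in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bytes(seq: str) -> bytes:
    """Invert bytes_to_dna: four bases back into one byte."""
    bits = "".join(BASE_TO_BITS[base] for base in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

if __name__ == "__main__":
    key = b"secret"
    cipher = encrypt_xor(b"hello", key)
    strand = bytes_to_dna(cipher)                 # e.g. "GCAT..." strand to embed in a cover sequence
    recovered = encrypt_xor(dna_to_bytes(strand), key)
    print(strand, recovered)                      # ... b'hello'
```

In a full system the resulting strand would additionally be embedded into a cover DNA sequence (the steganographic step), which is omitted here.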
2018-06-07
Chen, Pin-Yu, Zhang, Huan, Sharma, Yash, Yi, Jinfeng, Hsieh, Cho-Jui.  2017.  ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks Without Training Substitute Models. Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. :15–26.
Deep neural networks (DNNs) are one of the most prominent technologies of our time, as they achieve state-of-the-art performance in many machine learning tasks, including but not limited to image classification, text mining, and speech processing. However, recent research on DNNs has indicated ever-increasing concern about their robustness to adversarial examples, especially for security-critical tasks such as traffic sign identification for autonomous driving. Studies have unveiled the vulnerability of a well-trained DNN by demonstrating the ability to generate barely noticeable (to both humans and machines) adversarial images that lead to misclassification. Furthermore, researchers have shown that these adversarial images are highly transferable by simply training and attacking a substitute model built upon the target model, known as a black-box attack on DNNs. Similar to the setting of training substitute models, in this paper we propose an effective black-box attack that also only has access to the input (images) and the output (confidence scores) of a targeted DNN. However, rather than leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks to directly estimate the gradients of the targeted DNN for generating adversarial examples. We use zeroth order stochastic coordinate descent along with dimension reduction, hierarchical attack and importance sampling techniques to efficiently attack black-box models. By exploiting zeroth order optimization, improved attacks on the targeted DNN can be accomplished, sparing the need for training substitute models and avoiding the loss in attack transferability. Experimental results on MNIST, CIFAR10 and ImageNet show that the proposed ZOO attack is as effective as the state-of-the-art white-box attack (e.g., Carlini and Wagner's attack) and significantly outperforms existing black-box attacks via substitute models.
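For intuition, the sketch below shows the core idea behind a zeroth-order, coordinate-wise gradient estimate (a symmetric difference quotient) combined with a stochastic coordinate descent step, using only loss-value queries to the black-box model. The toy quadratic `loss_fn`, the step sizes, and the helper names are illustrative assumptions; the paper's dimension reduction, hierarchical attack, and importance sampling techniques are omitted.

```python
import numpy as np

def zoo_coordinate_grad(loss_fn, x, idx, h=1e-4):
    """Estimate the partial derivative of loss_fn at x along coordinate idx
    using the symmetric difference quotient (a zeroth-order estimate)."""
    e = np.zeros_like(x)
    e.flat[idx] = h
    return (loss_fn(x + e) - loss_fn(x - e)) / (2 * h)

def zoo_attack_step(loss_fn, x, n_coords=16, h=1e-4, lr=0.01):
    """One zeroth-order stochastic coordinate descent step: sample a batch of
    coordinates, estimate their gradients from loss queries only, and update."""
    x_new = x.copy()
    coords = np.random.choice(x.size, size=n_coords, replace=False)
    for idx in coords:
        g = zoo_coordinate_grad(loss_fn, x, idx, h)
        x_new.flat[idx] -= lr * g
    return x_new

if __name__ == "__main__":
    # Toy stand-in for the attack objective built from the target DNN's
    # confidence scores; here just a quadratic pulling x toward a target image.
    target = np.ones((8, 8))
    loss_fn = lambda x: float(np.sum((x - target) ** 2))  # placeholder objective
    x = np.zeros((8, 8))
    for _ in range(200):
        x = zoo_attack_step(loss_fn, x)
    print("final loss:", loss_fn(x))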