
Filters: Keyword is frequency analysis
2022-01-10
Padma, Bh, Chandravathi, D, Pratibha, Lanka.  2021.  Defense Against Frequency Analysis In Elliptic Curve Cryptography Using K-Means Clustering. 2021 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS). :64–69.
Elliptic Curve Cryptography (ECC) is a major advance in asymmetric key cryptography based on the hardness of the discrete logarithm problem. ECC offers lightweight encryption, as it provides equivalent security with smaller keys and reduces processing overhead. Asymmetric schemes, however, are vulnerable to several cryptographic attacks, such as plaintext attacks and known ciphertext attacks. Frequency analysis is a type of ciphertext attack, a passive traffic analysis scenario in which an opponent studies the frequency of single letters or groups of letters in a ciphertext to predict parts of the plaintext. Block cipher modes are not used in asymmetric key encryption because encrypting many blocks with an asymmetric scheme is prohibitively slow, and CBC propagates transmission errors. In this research, we therefore present a new defense against frequency analysis in ECC using K-Means clustering. In the proposed methodology, security of ECC against frequency analysis is achieved by clustering the points of the curve and selecting a different cluster for encoding a text each time it is encrypted. This technique destroys the regularities in the ciphertext and thereby guards against ciphertext attacks.
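The core idea — breaking the fixed one-to-one mapping between a plaintext symbol and its ciphertext representation so repetition statistics vanish — can be illustrated with a minimal homophonic-encoding sketch. The code table, message, and helper names below are illustrative assumptions, not the paper's actual curve-point clusters:

```python
from collections import Counter
import random

def letter_frequencies(text):
    """Relative letter frequencies -- the statistic a frequency-analysis
    attacker studies to match ciphertext symbols to plaintext letters."""
    letters = [c for c in text.lower() if c.isalpha()]
    return {c: n / len(letters) for c, n in Counter(letters).items()}

def homophonic_encrypt(text, table, rng):
    """Encode each letter as one of several code symbols chosen at random,
    loosely analogous to picking a different cluster of curve points on
    each encryption, so repeated letters leave no frequency signature."""
    return [rng.choice(table[c]) for c in text.lower() if c.isalpha()]

# Hypothetical code table: every letter maps to three distinct symbols.
table = {chr(97 + i): [i, i + 26, i + 52] for i in range(26)}
rng = random.Random(0)
msg = "attack at dawn attack at dusk"
c1 = homophonic_encrypt(msg, table, rng)
c2 = homophonic_encrypt(msg, table, rng)
# Encrypting the same message twice yields different symbol sequences,
# while each symbol still decodes unambiguously (symbol mod 26).
```

A single-cluster (deterministic) encoding would instead expose the plaintext's letter frequencies directly in the ciphertext, which is exactly what the cluster-switching scheme is meant to prevent.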
2021-02-01
Bai, Y., Guo, Y., Wei, J., Lu, L., Wang, R., Wang, Y..  2020.  Fake Generated Painting Detection Via Frequency Analysis. 2020 IEEE International Conference on Image Processing (ICIP). :1256–1260.
With the development of deep neural networks, digital fake paintings can be generated by various style transfer algorithms. To detect the fake generated paintings, we analyze the fake generated and real paintings in the Fourier frequency domain and observe statistical differences and artifacts. Based on our observations, we propose Fake Generated Painting Detection via Frequency Analysis (FGPD-FA), which extracts three types of features in the frequency domain. In addition, we propose a digital fake painting detection database for assessing the proposed method. Experimental results demonstrate the effectiveness of the proposed method under different testing conditions.
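The kind of frequency-domain statistic such detectors build on can be sketched as an azimuthally averaged Fourier magnitude spectrum, a common hand-crafted forensic feature. The toy images and function below are illustrative assumptions, not the paper's FGPD-FA features:

```python
import numpy as np

def radial_spectrum(img):
    """Azimuthally averaged Fourier magnitude spectrum: a 1-D frequency
    signature often used as a hand-crafted feature for image forensics."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    y, x = np.indices(img.shape)
    r = np.sqrt((y - h // 2) ** 2 + (x - w // 2) ** 2).astype(int)
    counts = np.maximum(np.bincount(r.ravel()), 1)  # avoid division by zero
    return np.bincount(r.ravel(), weights=mag.ravel()) / counts

# Toy images: a smooth window versus the same window plus broadband noise.
rng = np.random.default_rng(0)
smooth = np.outer(np.hanning(64), np.hanning(64))
noisy = smooth + 0.5 * rng.standard_normal((64, 64))
s1, s2 = radial_spectrum(smooth), radial_spectrum(noisy)
# The smooth image concentrates energy at low radii; the noisy one
# carries extra energy in the high-frequency tail of the spectrum.
```

A detector can then feed such 1-D spectra (or statistics derived from them) into a classifier to separate generated images, whose spectra often show systematic artifacts, from real ones.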
2020-12-28
Raju, R. S., Lipasti, M..  2020.  BlurNet: Defense by Filtering the Feature Maps. 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :38–46.
Recently, the field of adversarial machine learning has been garnering attention by showing that state-of-the-art deep neural networks are vulnerable to adversarial examples, which stem from small perturbations added to the input image. Adversarial examples are generated either by a malicious adversary with access to the model parameters, such as gradient information, who alters the input directly, or by attacking a substitute model and transferring those malicious examples over to attack the victim model. Specifically, one of these attack algorithms, Robust Physical Perturbations (RP2), generates adversarial images of stop signs with black and white stickers to achieve high targeted misclassification rates against standard-architecture traffic sign classifiers. In this paper, we propose BlurNet, a defense against the RP2 attack. First, we motivate the defense with a frequency analysis of the first-layer feature maps of the network on the LISA dataset, which shows that the RP2 algorithm introduces high-frequency noise into the input image. To remove this high-frequency noise, we introduce a depthwise convolution layer of standard blur kernels after the first layer. We perform a black-box transfer attack to show that low-pass filtering the feature maps is more beneficial than filtering the input. We then present various regularization schemes to incorporate this low-pass filtering behavior into the training regime of the network and perform white-box attacks. We conclude with an adaptive attack evaluation to show that the success rate of the attack drops from 90% to 20% with total variation regularization, one of the proposed defenses.
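The filtering idea — a fixed low-pass kernel applied to each channel independently, i.e. a depthwise convolution with a blur kernel — can be sketched in NumPy. The mean-blur kernel and toy feature maps below are illustrative assumptions, not BlurNet's trained layers:

```python
import numpy as np

def depthwise_blur(fmaps, k=3):
    """Apply a k-by-k mean-blur kernel to each channel independently:
    a depthwise convolution with a fixed low-pass kernel."""
    c, h, w = fmaps.shape
    pad = k // 2
    padded = np.pad(fmaps, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    out = np.zeros_like(fmaps, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[:, dy:dy + h, dx:dx + w]
    return out / (k * k)

# Toy feature maps: constant activations corrupted by high-frequency noise,
# standing in for the perturbation a sticker attack injects.
rng = np.random.default_rng(0)
clean = np.ones((4, 8, 8))
noisy = clean + 0.3 * rng.standard_normal((4, 8, 8))
blurred = depthwise_blur(noisy)
# Averaging neighboring activations attenuates the noise while leaving
# the smooth (constant) component of each channel unchanged.
```

Placing such a low-pass layer after the first convolution, rather than on the raw input, is the design choice the abstract argues for: the noise is removed where it shows up most clearly, in the early feature maps.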