Biblio

Filters: Keyword is gray-box attack
2020-07-20
Lee, Seungkwang, Kim, Taesung, Kang, Yousung.  2018.  A Masked White-Box Cryptographic Implementation for Protecting Against Differential Computation Analysis. IEEE Transactions on Information Forensics and Security. 13:2602–2615.
Recently, gray-box attacks on white-box cryptographic implementations have succeeded. These attacks are more efficient than white-box attacks because they can be performed without detailed knowledge of the target implementation. The success of the gray-box attack is reportedly due to the unbalanced encodings used to generate the white-box lookup table. In this paper, we propose a method to protect white-box implementations against the gray-box attack. The basic idea is to apply a masking technique before encoding intermediate values during white-box lookup table generation. Because no random source is required at runtime, our method allows efficient encryption and decryption. The security and performance analysis shows that the proposed method can be a reliable and efficient countermeasure.
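A minimal sketch of the masking idea summarized in the abstract, assuming byte-level input/output encodings, a placeholder S-box, and a single fixed Boolean mask chosen at table-generation time; the paper's full construction and mask compensation across the table network are not reproduced here:

```python
# Sketch (not the authors' implementation): mask the S-box output before
# applying the output encoding when the white-box lookup table is generated,
# so no randomness is needed at run time. SBOX, the encodings, and MASK are
# illustrative placeholders.
import secrets

SBOX = list(range(256))              # placeholder; a real table would use the AES S-box
secrets.SystemRandom().shuffle(SBOX)

def random_byte_encoding():
    """Return a random bijection on bytes and its inverse (an encoding)."""
    enc = list(range(256))
    secrets.SystemRandom().shuffle(enc)
    inv = [0] * 256
    for x, y in enumerate(enc):
        inv[y] = x
    return enc, inv

in_enc, in_dec = random_byte_encoding()
out_enc, out_dec = random_byte_encoding()
MASK = secrets.randbelow(256)        # fixed mask fixed at table-generation time

# Table generation: decode the input, apply the S-box, XOR the mask, re-encode.
masked_table = [out_enc[SBOX[in_dec[x]] ^ MASK] for x in range(256)]

def masked_lookup(encoded_x):
    """Run-time lookup: returns the encoded, masked S-box output."""
    return masked_table[encoded_x]

# The mask would be compensated elsewhere in the table network; here we only
# check that decoding and unmasking recovers the plain S-box output.
x = 0x3A
assert out_dec[masked_lookup(in_enc[x])] ^ MASK == SBOX[x]
```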
2019-01-16
Hashemi, Mohammad, Cusack, Greg, Keller, Eric.  2018.  Stochastic Substitute Training: A Gray-box Approach to Craft Adversarial Examples Against Gradient Obfuscation Defenses. Proceedings of the 11th ACM Workshop on Artificial Intelligence and Security. :25–36.
It has been shown that adversaries can craft inputs that resemble legitimate inputs but are designed to cause a neural network to misclassify them. These adversarial examples are crafted, for example, by computing gradients of a carefully defined loss function with respect to the input. As a countermeasure, some researchers have tried to design robust models by blocking or obfuscating gradients, even in white-box settings. Another line of research introduces a separate detector that attempts to identify adversarial examples; this approach also uses gradient obfuscation, for example, to prevent the adversary from fooling the detector. In this paper, we introduce stochastic substitute training, a gray-box approach that can craft adversarial examples against defenses that obfuscate gradients. For defenses that aim to make models more robust, our technique lets an adversary craft adversarial examples with no knowledge of the defense. For defenses that attempt to detect adversarial examples, our technique requires only very limited information about the defense. We demonstrate the technique by applying it against two defenses that make models more robust and two defenses that detect adversarial examples.
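A minimal PyTorch sketch, not the authors' code, of the gray-box workflow the abstract describes: query the defended model for soft labels on noisy (stochastic) copies of the data, fit a smooth substitute to those labels, then craft adversarial examples from the substitute's gradients (FGSM here) and transfer them. The defended model, the MNIST-shaped substitute architecture, the noise level, and the attack step size are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_substitute(defended_model, data_loader, epochs=5, noise_std=0.1, device="cpu"):
    # Assumed substitute architecture for 28x28 inputs and 10 classes.
    substitute = nn.Sequential(
        nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(), nn.Linear(256, 10)
    ).to(device)
    opt = torch.optim.Adam(substitute.parameters(), lr=1e-3)
    defended_model.eval()
    for _ in range(epochs):
        for x, _ in data_loader:
            x = x.to(device)
            # Stochastic queries: label noisy copies of each input with the
            # defended model's soft output so the substitute mimics it.
            x_noisy = x + noise_std * torch.randn_like(x)
            with torch.no_grad():
                soft_labels = F.softmax(defended_model(x_noisy), dim=1)
            loss = F.kl_div(F.log_softmax(substitute(x_noisy), dim=1),
                            soft_labels, reduction="batchmean")
            opt.zero_grad()
            loss.backward()
            opt.step()
    return substitute

def fgsm_from_substitute(substitute, x, y, eps=0.1):
    # Gradients come from the smooth substitute, sidestepping the defended
    # model's obfuscated gradients; the resulting example is then transferred.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(substitute(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
```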