Cheng, Leixiao, Meng, Fei.
2022.
An Improvement on “CryptCloud+: Secure and Expressive Data Access Control for Cloud Storage”. IEEE Transactions on Services Computing. :1–2.
Recently, Ning et al. proposed “CryptCloud+: Secure and Expressive Data Access Control for Cloud Storage” in IEEE Transactions on Services Computing. That work provided two versatile ciphertext-policy attribute-based encryption (CP-ABE) schemes, ATER-CP-ABE and ATIR-CP-ABE, to achieve flexible access control on encrypted data; both have attractive properties such as white-box malicious-user traceability, semi-honest authority accountability, public auditing, and user revocation. However, we find an access-control bug in both schemes: a non-revoked user with attribute set S can decrypt a ciphertext ct encrypted under any access policy (A, ρ), regardless of whether S satisfies (A, ρ). This paper carefully analyzes the bug and improves on Ning et al.'s pioneering work so as to fix it.
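To make the reported flaw concrete, the following minimal Python sketch models the policy check that decryption should enforce. In a real CP-ABE scheme this check is enforced cryptographically via pairings, not with an explicit guard; the names (satisfies, the set-of-sets policy encoding) are illustrative, not from the paper.

    # Conceptual sketch only: real CP-ABE enforces the policy via pairings.
    def satisfies(attributes, policy):
        """A monotone access policy modeled as a list of minimal
        authorized attribute sets; S satisfies it if S covers one."""
        return any(auth <= attributes for auth in policy)

    policy = [{"doctor", "cardiology"}, {"admin"}]   # analogue of (A, rho)
    S = {"nurse", "cardiology"}

    # The flaw reported above is equivalent to decryption succeeding
    # even when the following guard would fail:
    print(satisfies(S, policy))  # False: S should NOT be able to decrypt ct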
Rakin, Adnan Siraj, Chowdhuryy, Md Hafizul Islam, Yao, Fan, Fan, Deliang.
2022.
DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories. 2022 IEEE Symposium on Security and Privacy (SP). :1157–1174.
Recent advancements in Deep Neural Networks (DNNs) have enabled widespread deployment in multiple security-sensitive domains. The need for resource-intensive training and the use of valuable domain-specific training data have made these models the top intellectual property (IP) for model owners. One of the major threats to DNN privacy is model extraction attacks, where adversaries attempt to steal sensitive information in DNN models. In this work, we propose an advanced model extraction framework, DeepSteal, that for the first time steals DNN weights remotely with the aid of a memory side-channel attack. DeepSteal comprises two key stages. First, we develop a new weight bit information extraction method, called HammerLeak, by adopting the rowhammer-based fault technique as the information leakage vector. HammerLeak leverages several novel system-level techniques tailored for DNN applications to enable fast and efficient weight stealing. Second, we propose a novel substitute model training algorithm with a Mean Clustering weight penalty, which leverages the partially leaked bit information effectively and generates a substitute prototype of the target victim model. We evaluate the proposed model extraction framework on three popular image datasets (CIFAR-10/100/GTSRB) and four DNN architectures (ResNet-18/34, Wide-ResNet, and VGG-11). The extracted substitute model achieves more than 90% test accuracy on deep residual networks for the CIFAR-10 dataset. Moreover, the extracted substitute model can also generate effective adversarial input samples to fool the victim model. Notably, it achieves performance (i.e., 1–2% test accuracy under attack) similar to that of white-box adversarial input attacks (e.g., PGD/Trades).
ISSN: 2375-1207
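As a rough illustration of how partially leaked weight bits can steer substitute training, below is a hedged numpy sketch of a mean-clustering-style penalty: each leaked most-significant-bit pattern defines an interval of feasible quantized weight values, and the penalty pulls substitute weights toward the interval's mean. The bit layout, quantization, and names (mean_cluster_penalty, n_msb) are assumptions for illustration, not the paper's exact formulation.

    import numpy as np

    def mean_cluster_penalty(w_sub, leaked_msb, n_msb=4, n_bits=8, w_max=1.0):
        """Penalize substitute weights w_sub for leaving the interval of
        values whose top n_msb bits match the leaked bits leaked_msb."""
        step = 2 * w_max / (1 << n_bits)             # quantization step
        lo = -w_max + leaked_msb * (1 << (n_bits - n_msb)) * step
        hi = lo + (1 << (n_bits - n_msb)) * step     # feasible interval
        center = (lo + hi) / 2                       # cluster mean
        return np.mean((w_sub - center) ** 2)        # L2 pull to the mean

    rng = np.random.default_rng(0)
    w_victim = rng.uniform(-1, 1, 1000)
    codes = np.floor((w_victim + 1.0) / (2.0 / 256)).astype(int)
    leaked = codes >> 4                              # attacker sees top 4 bits
    w_sub = rng.uniform(-1, 1, 1000)                 # substitute init
    print(mean_cluster_penalty(w_sub, leaked))       # add to the task loss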
Singh, Inderjeet, Araki, Toshinori, Kakizaki, Kazuya.
2022.
Powerful Physical Adversarial Examples Against Practical Face Recognition Systems. 2022 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW). :301–310.
It is well known that most existing machine learning (ML)-based safety-critical applications are vulnerable to carefully crafted input instances called adversarial examples (AXs). An adversary can conveniently attack these target systems from the digital as well as the physical world. This paper addresses the generation of robust physical AXs against face recognition systems. We present a novel smoothness loss function and a patch-noise combo attack for realizing powerful physical AXs. The smoothness loss interjects the concept of delayed constraints during the attack generation process, thereby handling the optimization complexity better and producing smoother AXs for the physical domain. The patch-noise combo attack combines patch noise and imperceptibly small noise from different distributions to generate powerful registration-based physical AXs. An extensive experimental analysis found that our smoothness loss results in more robust and more transferable digital and physical AXs than conventional techniques. Notably, our smoothness loss results in a 1.17 and 1.97 times better mean attack success rate (ASR) in physical white-box and black-box attacks, respectively. Our patch-noise combo attack furthers the performance gains and results in 2.39 and 4.74 times higher mean ASR than the conventional technique in physical-world white-box and black-box attacks, respectively.
ISSN: 2690-621X
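For context, a conventional smoothness term that physical-AX pipelines commonly minimize is total variation over the patch; the paper's smoothness loss adds delayed constraints on top of this idea, which the hedged numpy sketch below does not reproduce.

    import numpy as np

    def tv_smoothness(patch):
        """Conventional total-variation smoothness penalty on an HxWxC
        adversarial patch: sums absolute differences between neighboring
        pixels so the optimizer favors smooth, printable patterns."""
        dh = np.abs(np.diff(patch, axis=0)).sum()  # vertical neighbor diffs
        dw = np.abs(np.diff(patch, axis=1)).sum()  # horizontal neighbor diffs
        return dh + dw

    patch = np.random.rand(64, 64, 3)
    print(tv_smoothness(patch))  # add (weighted) to the attack objective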
Lin, Xuanwei, Dong, Chen, Liu, Ximeng, Zhang, Yuanyuan.
2022.
SPA: An Efficient Adversarial Attack on Spiking Neural Networks using Spike Probabilistic. 2022 22nd IEEE International Symposium on Cluster, Cloud and Internet Computing (CCGrid). :366–375.
In the coming 6G era, spiking neural networks (SNNs) can be powerful processing tools in various areas, such as biometric recognition, AI robotics, autonomous driving, and healthcare, due to their strong artificial intelligence (AI) processing capabilities. However, within Cyber-Physical Systems (CPS), SNNs are surprisingly vulnerable to adversarial examples generated from benign samples with human-imperceptible noise, which can lead to serious consequences such as face recognition anomalies, loss of control in autonomous driving, and wrong medical diagnoses. Only by fully understanding the principles of adversarial attacks with adversarial samples can we defend against them. Most existing adversarial attacks cause severe accuracy degradation to trained SNNs, but the critical issue is that they generate adversarial samples only by randomly adding, deleting, and flipping spike trains, making them easy to identify by filters, or even by human eyes; moreover, attack performance and speed can be improved further. Hence, the Spike Probabilistic Attack (SPA) is presented in this paper, aiming to generate adversarial samples with smaller perturbations, greater model accuracy degradation, and faster iteration. SPA uses Poisson coding to generate spikes from probabilities, directly converting input data into spikes for faster speed, and generates uniformly distributed perturbations for better attack performance. Moreover, an objective function is constructed to minimize perturbations while keeping the attack success rate, which speeds up convergence by adjusting parameters. Both white-box and black-box settings are used to evaluate the merits of SPA. Experimental results show that the model's accuracy under white-box attack decreases by 9.25%–31.15%, better than other attacks, and the average success rate is 74.87% in the black-box setting. These results indicate that SPA has better attack performance than other existing attacks in the white-box setting and better transferability in the black-box setting.
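The Poisson rate coding that SPA builds on can be sketched in a few lines of numpy: at each timestep a pixel fires with probability equal to its intensity, so firing rates approximate the input. Function names and parameters below are illustrative.

    import numpy as np

    def poisson_encode(image, timesteps=100, rng=None):
        """Rate-code pixel intensities in [0, 1] into spike trains:
        at each timestep a pixel fires with probability equal to
        its intensity."""
        if rng is None:
            rng = np.random.default_rng(0)
        probs = np.clip(image, 0.0, 1.0)
        return (rng.random((timesteps, *image.shape)) < probs).astype(np.uint8)

    img = np.random.rand(28, 28)            # stand-in for an input sample
    spikes = poisson_encode(img)            # shape: (100, 28, 28)
    print(spikes.mean(axis=0)[0, :5], img[0, :5])  # rates track intensities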
Sweigert, Devin, Chowdhury, Md Minhaz, Rifat, Nafiz.
2022.
Exploit Security Vulnerabilities by Penetration Testing. 2022 IEEE International Conference on Electro Information Technology (eIT). :527–532.
When we set up a computer network, we need to know whether an attacker can get into the system. We need to run a series of tests that reveal the vulnerabilities of the network setup. This series of tests is commonly known as a penetration test. The need for penetration testing was not well known before. This paper highlights how penetration testing started and how it became as popular as it is today. The internet played a big part in the push to get the idea of penetration testing started. Penetration testing styles can vary from physical to network- or virtual-based testing, any of which can benefit how a company becomes more secure. This paper presents the steps of penetration testing that a company or organization needs to carry out to find its own security flaws.
Li, Fang-Qi, Wang, Shi-Lin, Zhu, Yun.
2022.
Fostering The Robustness Of White-Box Deep Neural Network Watermarks By Neuron Alignment. ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :3049–3053.
The wide application of deep learning techniques is boosting the regulation of deep learning models, especially deep neural networks (DNNs), as commercial products. A necessary prerequisite for such regulation is identifying the owner of a deep neural network, which is usually done through a watermark. Current DNN watermarking schemes, particularly white-box ones, are uniformly fragile against a family of functionality-equivalence attacks, especially neuron permutation. This operation can effortlessly invalidate the ownership proof and escape copyright regulation. To enhance the robustness of white-box DNN watermarking schemes, this paper presents a procedure that aligns neurons into the same order as when the watermark was embedded, so the watermark can be correctly recognized. This neuron alignment process significantly facilitates the functionality of established deep neural network watermarking schemes.
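The functionality-equivalence attack and the alignment countermeasure can be illustrated on a toy two-layer network: permuting hidden neurons (rows of the first weight matrix and the matching columns of the next) leaves outputs unchanged yet scrambles any neuron-order-dependent watermark, and alignment recovers the original order by matching rows against a reference. A hedged numpy sketch, not the paper's algorithm:

    import numpy as np

    rng = np.random.default_rng(0)
    W1, W2 = rng.normal(size=(16, 8)), rng.normal(size=(4, 16))
    x = rng.normal(size=8)
    relu = lambda z: np.maximum(z, 0)

    perm = rng.permutation(16)             # attacker permutes hidden neurons
    W1_p, W2_p = W1[perm], W2[:, perm]     # rows of W1, matching cols of W2

    y = W2 @ relu(W1 @ x)
    y_p = W2_p @ relu(W1_p @ x)
    print(np.allclose(y, y_p))             # True: outputs are identical

    # Alignment: recover the order the watermark was embedded in, here by
    # exactly matching rows of W1_p against the reference rows of W1.
    inv = np.argmax(np.isclose(W1_p[:, None], W1[None]).all(-1), axis=1)
    print(np.array_equal(W1_p[np.argsort(inv)], W1))  # order restored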
Liu, Xiaolei, Li, Xiaoyu, Zheng, Desheng, Bai, Jiayu, Peng, Yu, Zhang, Shibin.
2022.
Automatic Selection Attacks Framework for Hard Label Black-Box Models. IEEE INFOCOM 2022 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :1–7.
Current adversarial attacks against machine learning models can be divided into white-box attacks and black-box attacks, and black-box attacks can be further subdivided into soft-label and hard-label attacks; the latter return only the class with the highest prediction probability, which makes gradient estimation difficult. Because of their wide applicability, hard-label black-box attacks are of great research significance and application value. This paper proposes an Automatic Selection Attacks Framework (ASAF) for hard-label black-box models, which builds on existing attack methods in two respects. First, ASAF applies model equivalence to select substitute models automatically, generates adversarial examples on them, and then completes black-box attacks based on their transferability. Second, specified feature selection and a parallel attack method are proposed to shorten the attack time and improve the attack success rate. Experimental results show that ASAF can achieve a success rate of more than 90% for non-targeted attacks on the common models ResNet-101 (CIFAR-10) and InceptionV4 (ImageNet). Meanwhile, compared with FGSM and other attack algorithms, the attack time is reduced by at least 89.7% and 87.8%, respectively, on the two datasets. Besides, ASAF achieves a 90% attack success rate on the online model BaiduAI digital recognition. In conclusion, ASAF is the first automatic selection attacks framework for hard-label black-box models, in which specified feature selection and parallel attack methods speed up automatic attacks.
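The transfer step that ASAF's first aspect relies on can be sketched as follows: craft an adversarial example with a gradient attack (here one FGSM step) on a white-box substitute, then submit it to the hard-label oracle, which returns only the top-1 class. The toy softmax substitute and oracle below are assumptions for illustration, not ASAF itself.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(10, 784))                    # substitute's weights

    def substitute_logits(x):
        return W @ x

    def black_box_label(x):                           # hypothetical oracle:
        return int(np.argmax(W @ x + 0.1 * rng.normal(size=10)))  # top-1 only

    x = rng.random(784)
    y = int(np.argmax(substitute_logits(x)))

    # FGSM on the substitute: gradient of cross-entropy loss w.r.t. x.
    p = np.exp(substitute_logits(x)); p /= p.sum()
    grad = W.T @ (p - np.eye(10)[y])                  # d(cross-entropy)/dx
    x_adv = np.clip(x + 0.1 * np.sign(grad), 0, 1)    # one FGSM step

    print(black_box_label(x), black_box_label(x_adv)) # label may flip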
Speith, Julian, Schweins, Florian, Ender, Maik, Fyrbiak, Marc, May, Alexander, Paar, Christof.
2022.
How Not to Protect Your IP – An Industry-Wide Break of IEEE 1735 Implementations. 2022 IEEE Symposium on Security and Privacy (SP). :1656–1671.
Modern hardware systems are composed of a variety of third-party Intellectual Property (IP) cores that implement their overall functionality. Since hardware design is a globalized process involving various (untrusted) stakeholders, secure management of valuable IP between authors and users is indispensable to protect it from unauthorized access and modification. To this end, the widely adopted IEEE standard 1735-2014 was created to ensure confidentiality and integrity. In this paper, we outline structural weaknesses in IEEE 1735 that cannot be fixed with cryptographic solutions (given the contemporary hardware design process) and thus render the standard inherently insecure. We practically demonstrate the weaknesses by recovering the private keys of IEEE 1735 implementations from major Electronic Design Automation (EDA) tool vendors, namely Intel, Xilinx, Cadence, Siemens, Microsemi, and Lattice, while results on a seventh case study are withheld. As a consequence, we can decrypt, modify, and re-encrypt all allegedly protected IP cores designed for the respective tools, leading to an industry-wide break. As part of this analysis, we are the first to publicly disclose three RSA-based white-box schemes that are used in real-world products and present cryptanalytic attacks for all of them, finally resulting in key recovery.
Zhan, Yike, Zheng, Baolin, Wang, Qian, Mou, Ningping, Guo, Binqing, Li, Qi, Shen, Chao, Wang, Cong.
2022.
Towards Black-Box Adversarial Attacks on Interpretable Deep Learning Systems. 2022 IEEE International Conference on Multimedia and Expo (ICME). :1–6.
Recent works have empirically shown that neural network interpretability is susceptible to malicious manipulations. However, existing attacks against Interpretable Deep Learning Systems (IDLSes) all focus on the white-box setting, which is impractical in real-world scenarios. In this paper, we make the first attempt to attack IDLSes in the decision-based black-box setting. We propose a new framework called Dual Black-box Adversarial Attack (DBAA), which can generate adversarial examples that are misclassified as the target class yet have very similar interpretations to their benign counterparts. We conduct comprehensive experiments on different combinations of classifiers and interpreters to illustrate the effectiveness of DBAA. Empirical results show that in all cases, DBAA achieves high attack success rates and Intersection over Union (IoU) scores.
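The IoU score used to compare interpretations can be computed, for example, by binarizing two saliency maps at their top-k pixels and taking the ratio of intersection to union; the sketch below assumes this common protocol, which may differ from the paper's exact setup.

    import numpy as np

    def interpretation_iou(map_a, map_b, keep=0.1):
        """IoU between two saliency maps after keeping the top `keep`
        fraction of pixels in each."""
        k = int(map_a.size * keep)
        top = lambda m: np.argsort(m, axis=None)[-k:]
        a, b = set(top(map_a)), set(top(map_b))
        return len(a & b) / len(a | b)

    benign = np.random.rand(56, 56)
    adv = benign + 0.05 * np.random.rand(56, 56)  # similar interpretation
    print(interpretation_iou(benign, adv))        # close to 1.0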
Miao, Weiwei, Jin, Chao, Zeng, Zeng, Bao, Zhejing, Wei, Xiaogang, Zhang, Rui.
2022.
A White-Box SM4 Implementation by Introducing Pseudo States Applied to Edge IoT Agents. 2022 4th Asia Energy and Electrical Engineering Symposium (AEEES). :154–160.
With the widespread application of the power Internet of Things (IoT), edge IoT agents are often threatened by various attacks, among which the white-box attack is the most serious. A white-box implementation of a cryptographic algorithm can hide key information even in the white-box attack context by means of obfuscation. However, under specially designed attacks, there is still a risk of the information being recovered within a certain time complexity. In this paper, a new white-box implementation of the SM4 algorithm is proposed that introduces pseudo states. The encryption and decryption processes are implemented in the form of matrices and lookup tables, which are obfuscated by scrambling encodings. The introduction of pseudo states complicates the obfuscation, leading to a great improvement in security, and the number of pseudo states can be varied according to the security requirements. Through several quantitative indicators, including diversity, ambiguity, the time complexity required to extract the key, and the value space of the key and external encodings, it is shown that the security of the proposed implementation is enhanced significantly compared with existing schemes under similar memory occupation.
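The scrambling-encoding idea behind such lookup-table implementations can be sketched generically: an inner byte operation T is published only as T' = g ∘ T ∘ f⁻¹ for secret random bijections f and g, so the exposed table reveals neither T nor the key material folded into it. The sketch below omits the pseudo states (which would add dummy entries) and uses an arbitrary byte map as a stand-in for an SM4 sub-operation.

    import numpy as np

    rng = np.random.default_rng(0)
    T = rng.permutation(256)                  # stand-in inner operation
    f, g = rng.permutation(256), rng.permutation(256)
    f_inv = np.argsort(f)                     # inverse of the bijection f

    T_enc = g[T[f_inv]]                       # the published lookup table

    x = 0x3A
    assert T_enc[f[x]] == g[T[x]]             # encodings cancel in a chain
    print(hex(int(T[x])), hex(int(T_enc[f[x]])))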
Xie, Nanjiang, Gong, Zheng, Tang, Yufeng, Wang, Lei, Wen, Yamin.
2022.
Protecting White-Box Block Ciphers with Galois/Counter Mode. 2022 IEEE Conference on Dependable and Secure Computing (DSC). :1–7.
White-box cryptography researchers have long focused on the design and implementation of particular primitives but paid less attention to the practice of cipher working modes. For example, the Galois/Counter Mode (GCM) requires block ciphers to perform only encryption operations, which inevitably invites code-lifting attacks under the white-box security model. In this paper, a code-lifting-resistant GCM (named WBGCM) is proposed to mitigate this security drawback in the white-box context. The basic idea is to combine external encodings with the exclusive-or operations in GCM, and accordingly two schemes are designed, based on external encodings (WBGCM-EE) and on maskings (WBGCM-Masking), respectively. Furthermore, WBGCM is instantiated with Chow et al.'s white-box AES, and experiments show that the processing speeds of WBGCM-EE and WBGCM-Masking reach about 5 MBytes/second with a marginal storage overhead.
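The masking variant's intuition can be sketched as follows: the white-box module never emits the raw CTR keystream, only keystream XOR mask, with the mask held by a separate component, so code-lifting the module alone yields unusable output. The keystream function below is a stand-in (assumption), not white-box AES, and the key schedule is elided.

    import os

    def fake_keystream(counter):              # stand-in for E_K(ctr)
        return bytes((counter * 97 + i * 131) % 256 for i in range(16))

    mask = os.urandom(16)                      # secret share held externally

    def whitebox_module(counter):              # what the lifted binary exposes
        ks = fake_keystream(counter)
        return bytes(a ^ b for a, b in zip(ks, mask))  # masked keystream only

    def encrypt_block(pt, counter):
        masked = whitebox_module(counter)      # a code-lifter stops here
        # the external component removes the mask during the XOR with pt:
        return bytes(p ^ m ^ k for p, m, k in zip(pt, masked, mask))

    pt = b"sixteen byte msg"
    ct = encrypt_block(pt, 1)
    assert bytes(c ^ k for c, k in zip(ct, fake_keystream(1))) == pt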
Xu, Zheng.
2022.
The application of white-box encryption algorithms for distributed devices on the Internet of Things. 2022 3rd International Conference on Computer Vision, Image and Deep Learning & International Conference on Computer Engineering and Applications (CVIDL & ICCEA). :298–301.
With the rapid development of the Internet of Things (IoT) and the exploration of its application scenarios, embedded devices are deployed in various environments to collect information and data. In such environments, the security of embedded devices cannot be guaranteed; they are vulnerable to various attacks, even device-capture attacks. When an embedded device is attacked, the attacker can obtain the information transmitted over the channel during the encryption process as well as the internal operation of the encryption. In this paper, we analyze various existing white-box schemes and assess whether they are suitable for application in the IoT. We propose an application of white-box encryption algorithms (WBEAs) for distributed devices in IoT scenarios and conduct experiments on several devices in such scenarios.
Levina, Alla, Kamnev, Ivan.
2022.
Protection Metric Model of White-Box Algorithms. 2022 11th Mediterranean Conference on Embedded Computing (MECO). :1–3.
Systems based on white-box (WB) protection have a limited lifetime, measured in months and sometimes days. Unfortunately, how long an application will remain uncompromised can be determined, if at all, only empirically. However, it is possible to make a preliminary assessment of the security of a particular implementation based on the protection methods used and their number; such an assessment allows resources to be reallocated to more effective means of protection.