Biblio

Filters: Keyword is White Box Security
2022-12-20
Lin, Xuanwei, Dong, Chen, Liu, Ximeng, Zhang, Yuanyuan.  2022.  SPA: An Efficient Adversarial Attack on Spiking Neural Networks using Spike Probabilistic. 2022 22nd IEEE International Symposium on Cluster, Cloud and Internet Computing (CCGrid). :366–375.
With the coming 6G era, spiking neural networks (SNNs) can serve as powerful processing tools in areas such as biometric recognition, AI robotics, autonomous driving, and healthcare, owing to their strong artificial intelligence (AI) processing capabilities. However, within Cyber-Physical Systems (CPS), SNNs are surprisingly vulnerable to adversarial examples generated from benign samples with human-imperceptible noise, which can lead to serious consequences such as face-recognition anomalies, loss of control in autonomous driving, and wrong medical diagnoses. Only by fully understanding the principles of adversarial attacks can we defend against them. Most existing adversarial attacks cause severe accuracy degradation in trained SNNs, but they generate adversarial samples only by randomly adding, deleting, and flipping spike trains, making them easy to identify by filters and even by the human eye; their attack performance and speed also leave room for improvement. Hence, this paper presents the Spike Probabilistic Attack (SPA), which aims to generate adversarial samples with smaller perturbations, greater model-accuracy degradation, and faster iteration. SPA uses Poisson coding to generate spikes as probabilities, directly converting input data into spikes for faster speed, and generates uniformly distributed perturbations for better attack performance. Moreover, an objective function is constructed to keep perturbations small while maintaining the attack success rate, and convergence is sped up by adjusting its parameters. Both white-box and black-box settings are used to evaluate the merits of SPA. Experimental results show that under white-box attack the model's accuracy decreases by 9.2%–31.1% more than with other attacks, and the average success rate is 74.87% under the black-box setting. The results indicate that SPA achieves better attack performance than existing attacks in the white-box setting and better transferability in the black-box setting.
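
A minimal illustration of the Poisson (rate) coding step the abstract describes, in which pixel intensities become per-timestep firing probabilities. This is a generic NumPy sketch, not the authors' SPA code; the array shapes are assumptions for the example.

```python
import numpy as np

def poisson_encode(image, timesteps, rng=None):
    """Encode pixel intensities in [0, 1] as Bernoulli/Poisson spike trains.

    At each timestep a pixel fires with probability equal to its intensity,
    so the expected spike count over the window is proportional to brightness.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    image = np.clip(image, 0.0, 1.0)
    # shape: (timesteps, *image.shape), entries in {0, 1}
    return (rng.random((timesteps,) + image.shape) < image).astype(np.uint8)

# Toy usage: a 4x4 "image" encoded over 20 timesteps.
img = np.random.default_rng(1).random((4, 4))
spikes = poisson_encode(img, timesteps=20)
print(spikes.shape, spikes.mean(axis=0).round(2))  # mean rate approximates img
```
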
Sweigert, Devin, Chowdhury, Md Minhaz, Rifat, Nafiz.  2022.  Exploit Security Vulnerabilities by Penetration Testing. 2022 IEEE International Conference on Electro Information Technology (eIT). :527–532.
When we set up a computer network, we need to know whether an attacker can get into the system. We need to run a series of tests that expose the vulnerabilities of the network setup; this series of tests is commonly known as a penetration test. The need for penetration testing was not always well understood. This paper highlights how penetration testing started and how it became as popular as it is today; the growth of the internet played a big part in getting the idea of penetration testing started. Penetration-testing styles vary from physical to network- or virtualization-based testing, any of which can help a company become more secure. This paper presents the steps of penetration testing that a company or organization needs to carry out to find its own security flaws.
Li, Fang-Qi, Wang, Shi-Lin, Zhu, Yun.  2022.  Fostering The Robustness Of White-Box Deep Neural Network Watermarks By Neuron Alignment. ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :3049–3053.
The wide application of deep learning techniques is boosting the regulation of deep learning models, especially deep neural networks (DNNs), as commercial products. A necessary prerequisite for such regulation is identifying the owner of a deep neural network, which is usually done through watermarking. Current DNN watermarking schemes, particularly white-box ones, are uniformly fragile against a family of functionality-equivalence attacks, especially neuron permutation, which can effortlessly invalidate the ownership proof and escape copyright regulation. To enhance the robustness of white-box DNN watermarking schemes, this paper presents a procedure that aligns neurons into the same order as when the watermark was embedded, so the watermark can be correctly recognized. This neuron-alignment process significantly facilitates the functionality of established deep neural network watermarking schemes.
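
To see why neuron permutation is a functionality-equivalence attack, consider a toy two-layer network: reordering the hidden neurons (rows of the first weight matrix, columns of the second) leaves every output unchanged while scrambling the weight layout that a white-box watermark reads. A hedged NumPy sketch with made-up layer sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)   # hidden layer
W2, b2 = rng.normal(size=(3, 16)), rng.normal(size=3)    # output layer
relu = lambda z: np.maximum(z, 0)
forward = lambda x, W1, b1, W2, b2: W2 @ relu(W1 @ x + b1) + b2

# Permute the hidden neurons: reorder rows of (W1, b1) and columns of W2.
perm = rng.permutation(16)
W1p, b1p, W2p = W1[perm], b1[perm], W2[:, perm]

x = rng.normal(size=8)
assert np.allclose(forward(x, W1, b1, W2, b2), forward(x, W1p, b1p, W2p, b2))
# Outputs match, yet any watermark read directly from the weight order is
# broken; a neuron-alignment defense must recover `perm` before verification.
```
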
Liu, Xiaolei, Li, Xiaoyu, Zheng, Desheng, Bai, Jiayu, Peng, Yu, Zhang, Shibin.  2022.  Automatic Selection Attacks Framework for Hard Label Black-Box Models. IEEE INFOCOM 2022 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :1–7.

The current adversarial attacks against machine learning models can be divided into white-box and black-box attacks, and black-box attacks can be further subdivided into soft-label and hard-label variants. The hard-label variant returns only the class with the highest prediction probability, which makes gradient estimation difficult; nevertheless, because of its wide applicability, exploring hard-label black-box attacks is of great research significance and application value. This paper proposes an Automatic Selection Attacks Framework (ASAF) for hard-label black-box models, which extends existing attack methods in two ways. First, ASAF applies model equivalence to select substitute models automatically, generates adversarial examples on them, and then completes black-box attacks based on their transferability. Second, specified feature selection and a parallel attack method are proposed to shorten the attack time and improve the attack success rate. The experimental results show that ASAF achieves a success rate of more than 90% for non-targeted attacks on common models over traditional datasets, ResNet-101 (CIFAR-10) and InceptionV4 (ImageNet). Meanwhile, compared with FGSM and other attack algorithms, the attack time is reduced by at least 89.7% and 87.8% on the two datasets, respectively. Besides, ASAF achieves a 90% attack success rate against an online model, Baidu AI digital recognition. In conclusion, ASAF is the first automatic-selection attacks framework for hard-label black-box models, in which specified feature selection and parallel attack methods speed up automatic attacks.

Speith, Julian, Schweins, Florian, Ender, Maik, Fyrbiak, Marc, May, Alexander, Paar, Christof.  2022.  How Not to Protect Your IP – An Industry-Wide Break of IEEE 1735 Implementations. 2022 IEEE Symposium on Security and Privacy (SP). :1656–1671.
Modern hardware systems are composed of a variety of third-party Intellectual Property (IP) cores that implement their overall functionality. Since hardware design is a globalized process involving various (untrusted) stakeholders, secure management of the valuable IP between authors and users is indispensable to protect it from unauthorized access and modification. To this end, the widely adopted IEEE standard 1735-2014 was created to ensure confidentiality and integrity. In this paper, we outline structural weaknesses in IEEE 1735 that cannot be fixed with cryptographic solutions (given the contemporary hardware design process) and thus render the standard inherently insecure. We practically demonstrate the weaknesses by recovering the private keys of IEEE 1735 implementations from major Electronic Design Automation (EDA) tool vendors, namely Intel, Xilinx, Cadence, Siemens, Microsemi, and Lattice, while results on a seventh case study are withheld. As a consequence, we can decrypt, modify, and re-encrypt all allegedly protected IP cores designed for the respective tools, thus leading to an industry-wide break. As part of this analysis, we are the first to publicly disclose three RSA-based white-box schemes that are used in real-world products and present cryptanalytic attacks for all of them, finally resulting in key recovery.
Zhan, Yike, Zheng, Baolin, Wang, Qian, Mou, Ningping, Guo, Binqing, Li, Qi, Shen, Chao, Wang, Cong.  2022.  Towards Black-Box Adversarial Attacks on Interpretable Deep Learning Systems. 2022 IEEE International Conference on Multimedia and Expo (ICME). :1–6.
Recent works have empirically shown that neural network interpretability is susceptible to malicious manipulation. However, existing attacks against Interpretable Deep Learning Systems (IDLSes) all focus on the white-box setting, which is impractical in real-world scenarios. In this paper, we make the first attempt to attack IDLSes in the decision-based black-box setting. We propose a new framework called Dual Black-box Adversarial Attack (DBAA), which generates adversarial examples that are misclassified as the target class yet have interpretations very similar to those of their benign counterparts. We conduct comprehensive experiments on different combinations of classifiers and interpreters to illustrate the effectiveness of DBAA. Empirical results show that in all cases DBAA achieves high attack success rates and Intersection over Union (IoU) scores.
Miao, Weiwei, Jin, Chao, Zeng, Zeng, Bao, Zhejing, Wei, Xiaogang, Zhang, Rui.  2022.  A White-Box SM4 Implementation by Introducing Pseudo States Applied to Edge IoT Agents. 2022 4th Asia Energy and Electrical Engineering Symposium (AEEES). :154–160.
With the widespread application of the power Internet of Things (IoT), edge IoT agents face various attacks, of which white-box attacks are the most serious. A white-box implementation of a cryptographic algorithm can hide key information even in the white-box attack context by means of obfuscation. However, under specially designed attacks, there is still a risk of the information being recovered within a certain time complexity. In this paper, a new white-box implementation of the SM4 algorithm is proposed by introducing pseudo states. The encryption and decryption processes are implemented as matrices and lookup tables, which are obfuscated by scrambling encodings. The introduction of pseudo states complicates the obfuscation, leading to a great improvement in security, and the number of pseudo states can be varied according to the security requirements. Through several quantitative indicators, including diversity, ambiguity, the time complexity required to extract the key, and the value space of the key and external encodings, it is shown that the security of the proposed implementation is significantly enhanced compared with existing schemes under similar memory occupation.
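
The scrambling-encoding idea can be shown with a toy byte-wide table: publish T' = g ∘ S ∘ f⁻¹ so that only encoded values ever appear in memory, with f and g cancelled by neighboring tables. This is a generic white-box device, not the paper's SM4 construction with pseudo states; the toy S step is invented for illustration.

```python
import random

random.seed(42)

def random_encoding(n=256):
    """A secret random byte permutation and its inverse."""
    p = list(range(n))
    random.shuffle(p)
    inv = [0] * n
    for i, v in enumerate(p):
        inv[v] = i
    return p, inv

S = [((x * 7) + 3) % 256 for x in range(256)]  # toy stand-in for a cipher step
f, f_inv = random_encoding()                   # secret input encoding
g, g_inv = random_encoding()                   # secret output encoding

# Published table: T' = g ∘ S ∘ f⁻¹. Adjacent tables would absorb f and g,
# so only encoded intermediate values are ever observable.
T_enc = [g[S[f_inv[x]]] for x in range(256)]

x = 0x3A
assert g_inv[T_enc[f[x]]] == S[x]  # the encodings cancel across the composition
```
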
Xie, Nanjiang, Gong, Zheng, Tang, Yufeng, Wang, Lei, Wen, Yamin.  2022.  Protecting White-Box Block Ciphers with Galois/Counter Mode. 2022 IEEE Conference on Dependable and Secure Computing (DSC). :1–7.
White-box cryptography research has long focused on the design and implementation of particular primitives and less on how ciphers are used in practice within modes of operation. For example, the Galois/Counter Mode (GCM) requires the block cipher to perform only encryption operations, which inevitably invites code-lifting attacks under the white-box security model. In this paper, a code-lifting-resistant GCM (named WBGCM) is proposed to mitigate this drawback in the white-box context. The basic idea is to combine external encodings with the exclusive-or operations in GCM, and two schemes are accordingly designed, one based on external encodings (WBGCM-EE) and one based on maskings (WBGCM-Masking). Furthermore, WBGCM is instantiated with Chow et al.'s white-box AES, and experiments show that WBGCM-EE and WBGCM-Masking achieve processing speeds of about 5 MB/s with a marginal storage overhead.
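
The code-lifting concern stems from GCM using the block cipher in the forward direction only: the bulk payload is CTR keystream, and the GHASH key is E_K(0^128), so a lifted white-box encryption table decrypts traffic as readily as it encrypts it. A sketch of that CTR core using the `cryptography` package; the key/nonce handling is simplified and the authentication tag is omitted.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def ctr_keystream_encrypt(key, nonce12, data):
    """GCM's bulk encryption is CTR mode: it only ever calls E_K forward.
    A lifted white-box *encryption* table therefore suffices to both encrypt
    and decrypt GCM traffic -- the risk WBGCM targets."""
    ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    out = bytearray()
    for i in range(0, len(data), 16):
        counter = nonce12 + (i // 16 + 2).to_bytes(4, "big")  # GCM starts at 2
        block = ecb.update(counter)          # forward direction only
        out.extend(a ^ b for a, b in zip(data[i:i + 16], block))
    return bytes(out)

key, nonce = os.urandom(16), os.urandom(12)
ct = ctr_keystream_encrypt(key, nonce, b"attack at dawn!!")
# Applying the same keystream again recovers the plaintext:
assert ctr_keystream_encrypt(key, nonce, ct) == b"attack at dawn!!"
```
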
Xu, Zheng.  2022.  The application of white-box encryption algorithms for distributed devices on the Internet of Things. 2022 3rd International Conference on Computer Vision, Image and Deep Learning & International Conference on Computer Engineering and Applications (CVIDL & ICCEA). :298–301.
With the rapid development of the Internet of Things (IoT) and the exploration of its application scenarios, embedded devices are deployed in various environments to collect information and data. In such environments, the security of embedded devices cannot be guaranteed, and the devices are vulnerable to various attacks, even device-capture attacks. When an embedded device is captured, the attacker can obtain the information transmitted over the channel during encryption as well as the internal operation of the encryption itself. In this paper, we analyze various existing white-box schemes and assess whether they are suitable for application in the IoT. We then propose an application of white-box encryption algorithms (WBEAs) for distributed devices and conduct experiments on several devices in IoT scenarios.
Levina, Alla, Kamnev, Ivan.  2022.  Protection Metric Model of White-Box Algorithms. 2022 11th Mediterranean Conference on Embedded Computing (MECO). :1–3.
Systems based on white-box (WB) protection have a limited lifetime, measured in months and sometimes days. Unfortunately, how long an application will remain uncompromised can be determined, if at all, only empirically. However, a preliminary assessment of the security of a particular implementation can be made based on the protection methods used and their number; such an assessment allows resources to be reallocated to more effective means of protection.
2022-01-31
El-Allami, Rida, Marchisio, Alberto, Shafique, Muhammad, Alouani, Ihsen.  2021.  Securing Deep Spiking Neural Networks against Adversarial Attacks through Inherent Structural Parameters. 2021 Design, Automation Test in Europe Conference Exhibition (DATE). :774–779.
Deep Learning (DL) algorithms have gained popularity owing to their practical problem-solving capacity. However, they suffer from a serious integrity threat, namely their vulnerability to adversarial attacks. In the quest for DL trustworthiness, recent works have claimed an inherent robustness of Spiking Neural Networks (SNNs) to these attacks without considering the variability of their structural spiking parameters. This paper explores the security enhancement of SNNs through internal structural parameters: specifically, we investigate the robustness of SNNs to adversarial attacks under different values of the neurons' firing voltage thresholds and time-window boundaries. We thoroughly study SNN security under different adversarial attacks in the strong white-box setting, with different noise budgets and variable spiking parameters. Our results show a significant impact of the structural parameters on SNN security, and promising sweet spots can be reached to design trustworthy SNNs with 85% higher robustness than a traditional non-spiking DL system. To the best of our knowledge, this is the first work to investigate the impact of structural parameters on SNN robustness to adversarial attacks. The proposed contributions and the experimental framework are available to the community at https://github.com/rda-ela/SNN-Adversarial-Attacks for reproducible research.
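
The structural parameters the paper varies can be seen in a plain leaky integrate-and-fire (LIF) neuron, where the firing threshold and the simulation time window directly shape how input (and any adversarial perturbation) is quantized into spikes. A hedged sketch with illustrative constants:

```python
import numpy as np

def lif_spikes(input_current, v_thresh=1.0, leak=0.9, timesteps=100):
    """Leaky integrate-and-fire neuron: the firing threshold `v_thresh` and
    the simulation window `timesteps` are the kind of structural parameters
    whose choice the paper links to adversarial robustness."""
    v, spikes = 0.0, []
    for t in range(timesteps):
        v = leak * v + input_current[t]  # leaky integration of the input
        if v >= v_thresh:
            spikes.append(t)             # emit a spike ...
            v = 0.0                      # ... and reset the membrane potential
    return spikes

rng = np.random.default_rng(0)
current = rng.random(100) * 0.3
# A higher threshold quantizes the input more coarsely, which can attenuate
# small adversarial perturbations at the cost of some clean accuracy.
print(len(lif_spikes(current, v_thresh=0.5)), len(lif_spikes(current, v_thresh=1.5)))
```
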
Kwon, Sujin, Kang, Ju-Sung, Yeom, Yongjin.  2021.  Analysis of public-key cryptography using a 3-regular graph with a perfect dominating set. 2021 IEEE Region 10 Symposium (TENSYMP). :1–6.

Research on post-quantum cryptography (PQC), aimed at improving security against quantum computers, has been actively conducted. In 2020, NIST announced the final PQC candidates, whose design rationales rely on NP-hard or NP-complete problems; it is believed that cryptography based on NP-hard problems may be secure against attacks using quantum computers. N. Koblitz introduced the concept of public-key cryptography using a 3-regular graph with a perfect dominating set in the 1990s. The proposed cryptosystem is based on the NP-complete problem of finding a perfect dominating set in a given graph. Later, S. Yoon proposed a variant scheme using a perfect minus dominating function. However, these works have not received much attention, since the schemes produce huge ciphertexts and are hard to implement efficiently; moreover, security parameters such as key size and plaintext-ciphertext size have not yet been proposed. We conduct security and performance analyses of these schemes and discuss practical ranges for the security parameters. As an application, a scheme with the one-wayness property can be used as an encoding method in white-box cryptography (WBC).
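
For intuition, verifying a perfect dominating set is easy while finding one is NP-complete, which is the asymmetry the Koblitz-style construction leans on. A small sketch using one common definition (every vertex outside the set has exactly one neighbor inside it) on the 3-regular cube graph; the definition and example are ours, not the paper's parameters.

```python
def is_perfect_dominating_set(adj, D):
    """Check D against one common definition of a perfect dominating set:
    every vertex outside D has exactly one neighbor inside D. Deciding
    whether such a set exists is the NP-complete problem underlying the
    Koblitz-style cryptosystem discussed above."""
    D = set(D)
    return all(sum(u in D for u in adj[v]) == 1 for v in adj if v not in D)

# A 3-regular example: the cube graph Q3 on vertices 0..7, with D = {0, 7}.
cube = {
    0: [1, 2, 4], 1: [0, 3, 5], 2: [0, 3, 6], 3: [1, 2, 7],
    4: [0, 5, 6], 5: [1, 4, 7], 6: [2, 4, 7], 7: [3, 5, 6],
}
print(is_perfect_dominating_set(cube, {0, 7}))  # True: each outside vertex
# touches exactly one of {0, 7}; verifying is easy, finding is hard.
```
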

Zhang, Yun, Li, Hongwei, Xu, Guowen, Luo, Xizhao, Dong, Guishan.  2021.  Generating Audio Adversarial Examples with Ensemble Substituted Models. ICC 2021 - IEEE International Conference on Communications. :1–6.
The rapid development of machine learning technology has promoted applications of Automatic Speech Recognition (ASR). However, studies have shown that state-of-the-art ASR technologies are still vulnerable to various attacks, which destructively undermines their stability. In general, most existing attack techniques for ASR models are based on white-box scenarios, in which the adversary uses adversarial samples to generate a substituted model corresponding to the target model; by contrast, there are fewer attack schemes for the black-box scenario. Moreover, no existing scheme considers how to construct the architecture of the substituted models. In this paper, we point out that constructing a good substituted-model architecture is crucial to the effectiveness of the attack, as it helps to generate a more sophisticated set of adversarial examples. We evaluate the performance of different substituted models through comprehensive experiments and find that an ensemble of substituted models achieves the optimal attack effect. The experiments show that our approach attains an attack success rate of over 80% (a 2% improvement over the latest work) while maintaining the authenticity of the original samples well.
Shivaie, Mojtaba, Mokhayeri, Mohammad, Narooie, Mohammadali, Ansari, Meisam.  2021.  A White-Box Decision Tree-Based Preventive Strategy for Real-Time Islanding Detection Using Wide-Area Phasor Measurement. 2021 IEEE Texas Power and Energy Conference (TPEC). :1–6.
With ever-increasing energy demand and the enormous development of generation capacity, modern bulk power systems are mostly pushed to operate with narrower security boundaries. Timely and reliable assessment of power-system security is therefore an inevitable necessity to prevent widespread blackouts and cascading outages. In this paper, a new white-box decision-tree-based preventive strategy is presented to evaluate and enhance power-system dynamic security against credible N-K contingencies originating from transient instabilities. In addition, a competent operating measure is defined to detect and identify islanding and non-islanding conditions with the aid of a wide-area phasor measurement system. The newly developed strategy is organized as a three-level simulation aimed at guaranteeing power-system dynamic security. In the first level, six hundred islanding and non-islanding scenarios are generated using the C4.5 algorithm, an enhanced version of the ID3 algorithm. In the second level, optimal C4.5 decision trees are trained offline based on operating parameters obtained through the reduced-error pruning method. In the third level, all trained decision trees are rigorously investigated offline and online, and the most accurate and reliable decision tree is selected. The newly developed strategy is examined on the IEEE New England 39-bus test system, and its effectiveness is confirmed by simulation studies.
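
As a rough analogue of the training level, a decision tree with entropy splits (scikit-learn's nearest equivalent to C4.5's information-gain criterion; it offers cost-complexity rather than reduced-error pruning) can be trained on synthetic stand-ins for phasor features. The feature names and data below are invented for illustration, not the paper's measurements.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Synthetic stand-ins for wide-area phasor features (e.g. frequency deviation,
# rate of change of frequency, voltage angle difference).
X = rng.normal(size=(600, 3))
y = (0.8 * X[:, 0] + 1.2 * X[:, 1] - X[:, 2] > 0).astype(int)  # 1 = islanding

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# criterion="entropy" mirrors C4.5's information-gain splits; scikit-learn
# prunes via cost-complexity (ccp_alpha) rather than reduced-error pruning.
tree = DecisionTreeClassifier(criterion="entropy", ccp_alpha=0.01, random_state=0)
tree.fit(X_tr, y_tr)
print(f"test accuracy: {tree.score(X_te, y_te):.2f}")
# A "white-box" model in the paper's sense: the learned rules are readable.
print(export_text(tree, feature_names=["d_freq", "rocof", "d_angle"]))
```
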
Kumová, Věra, Pilát, Martin.  2021.  Beating White-Box Defenses with Black-Box Attacks. 2021 International Joint Conference on Neural Networks (IJCNN). :1–8.
Deep learning has achieved great results in the last decade; however, it is sensitive to so-called adversarial attacks, small perturbations of the input that cause the network to classify incorrectly. In recent years, a number of attacks and defenses against them have been described. Most of the defenses, however, focus on defending against gradient-based attacks. In this paper, we describe an evolutionary attack and show that the adversarial examples it produces have different features than those from gradient-based attacks. We also show that, because of these features, one of the state-of-the-art defenses fails to detect such attacks.
Velez, Miguel, Jamshidi, Pooyan, Siegmund, Norbert, Apel, Sven, Kästner, Christian.  2021.  White-Box Analysis over Machine Learning: Modeling Performance of Configurable Systems. 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE). :1072–1084.

Performance-influence models can help stakeholders understand how and where configuration options and their interactions influence the performance of a system. With this understanding, stakeholders can debug performance behavior and make deliberate configuration decisions. Current black-box techniques to build such models combine various sampling and learning strategies, resulting in tradeoffs between measurement effort, accuracy, and interpretability. We present Comprex, a white-box approach to build performance-influence models for configurable systems, combining insights of local measurements, dynamic taint analysis to track options in the implementation, compositionality, and compression of the configuration space, without relying on machine learning to extrapolate incomplete samples. Our evaluation on 4 widely-used, open-source projects demonstrates that Comprex builds similarly accurate performance-influence models to the most accurate and expensive black-box approach, but at a reduced cost and with additional benefits from interpretable and local models.
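
A performance-influence model in the sense used here is a sum of per-option and interaction terms. The sketch below fits such a model by least squares on synthetic measurements to show the model's form; Comprex itself derives the terms from white-box analysis rather than regression, and the option names and costs are made up.

```python
import numpy as np
from itertools import combinations

options = ["cache", "compress", "encrypt"]

def features(cfg):
    # individual options plus all pairwise interaction terms
    pairs = [cfg[i] * cfg[j] for i, j in combinations(range(len(cfg)), 2)]
    return np.array([1.0, *cfg, *pairs])

# Synthetic "measurements": compress costs 5s, encrypt 3s, they interact
# (+2s when both are on), and cache saves 1s.
true_time = lambda c: 10 - 1 * c[0] + 5 * c[1] + 3 * c[2] + 2 * c[1] * c[2]
configs = [[(i >> b) & 1 for b in range(3)] for i in range(8)]
X = np.stack([features(c) for c in configs])
y = np.array([true_time(c) for c in configs])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
terms = ["base", *options, *[f"{a}*{b}" for a, b in combinations(options, 2)]]
for name, coef in zip(terms, beta.round(2)):
    print(f"{name:18s} {coef:+.2f}")  # recovers each option's influence
```
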

Zhao, Rui.  2021.  The Vulnerability of the Neural Networks Against Adversarial Examples in Deep Learning Algorithms. 2021 2nd International Conference on Computing and Data Science (CDS). :287–295.
With further development in fields such as computer vision, network security, and natural language processing, deep learning technology has gradually exposed certain security risks. Existing deep learning algorithms cannot effectively capture the essential characteristics of the data, so they may fail to give correct results in the face of malicious input. Starting from the current security threats faced by deep learning, this paper introduces the problem of adversarial examples, surveys and classifies the existing black-box and white-box attack and defense methods, briefly describes applications of adversarial examples in different scenarios in recent years, compares several defense techniques against adversarial examples, and finally summarizes the open problems in this research field and its prospects. The paper introduces the common white-box attack methods in detail and compares the similarities and differences between black-box and white-box attacks. Correspondingly, it also introduces the defense methods and analyzes their performance against black-box and white-box attacks.
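
The canonical white-box attack such surveys cover is the Fast Gradient Sign Method (FGSM): one signed step along the input gradient of the loss. For a logistic model the gradient is analytic, so the whole attack fits in a few lines; the toy weights below are random stand-ins for a trained classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=16), 0.1          # a "trained" binary classifier (toy)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

def fgsm(x, y, eps):
    """FGSM for logistic regression: d(cross-entropy)/dx = (p - y) * w,
    so a single signed gradient step suffices."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)     # step against the true label

x, y = rng.normal(size=16), 1.0
x_adv = fgsm(x, y, eps=0.3)
print("clean score:", sigmoid(w @ x + b).round(3),
      "adversarial:", sigmoid(w @ x_adv + b).round(3))  # pushed toward class 0
```
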
Yim, Hyoungshin, Kang, Ju-Sung, Yeom, Yongjin.  2021.  An Efficient Structural Analysis of SAS and its Application to White-Box Cryptography. 2021 IEEE Region 10 Symposium (TENSYMP). :1–6.

Structural analysis is the study of finding the component functions of a given function. In this paper, we carry out a structural analysis of structures consisting of an S (nonlinear substitution) layer and an A (affine or linear) layer. Our main interest is the S1AS2 structure, with two different substitution layers and large input/output sizes. The purpose of our structural analysis is to find a functionally equivalent oracle F* and its component functions for a given encryption oracle F = S2 ∘ A ∘ S1. As a result, we can construct the decryption oracle (F*)⁻¹ explicitly and break the one-wayness of the building blocks used in a white-box implementation. Our attack consists of two steps: S-layer recovery using multiset properties and A-layer recovery using differential properties. We present the attack algorithm for each step and estimate its time complexity. Finally, we discuss the applicability of S1AS2 structural analysis in a white-box cryptography environment.
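
The object of study can be made concrete with a toy oracle: two secret random S-box layers around a secret invertible GF(2) linear layer, with the attacker limited to querying F. The recovery steps themselves (multiset and differential analysis) are beyond a snippet; the 8-bit width below is an illustrative assumption, far smaller than the sizes the paper targets.

```python
import random

random.seed(1)
N = 8  # toy width in bits

def random_sbox_layer():
    p = list(range(1 << N))
    random.shuffle(p)
    return p

def gf2_rank(rows):
    """Rank of an N x N bit matrix via Gaussian elimination over GF(2)."""
    rank = 0
    for col in range(N):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank

def random_invertible_A():
    while True:  # rejection sampling; a constant fraction of matrices work
        M = [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]
        if gf2_rank([row[:] for row in M]) == N:
            return M

def apply_A(M, x):
    bits = [(x >> j) & 1 for j in range(N)]
    return sum(((sum(M[i][j] & bits[j] for j in range(N)) & 1) << i) for i in range(N))

S1, S2, A = random_sbox_layer(), random_sbox_layer(), random_invertible_A()
F = lambda x: S2[apply_A(A, S1[x])]  # the attacker only gets oracle access to F
print([F(x) for x in range(4)])
```
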

Wang, Xiying, Ni, Rongrong, Li, Wenjie, Zhao, Yao.  2021.  Adversarial Attack on Fake-Faces Detectors Under White and Black Box Scenarios. 2021 IEEE International Conference on Image Processing (ICIP). :3627–3631.
Generative Adversarial Network (GAN) models have been widely used in various fields. More recently, StyleGAN and StyleGAN2 have been developed to synthesize faces that are indistinguishable to the human eye, which could pose a threat to public security. Recent work has shown that it is possible to identify such fakes using powerful CNN-based classifiers, but the reliability of these techniques is unknown. In this paper, we therefore focus on generating content-preserving images from fake faces to spoof such classifiers. Two GAN-based frameworks are proposed to achieve this goal in the white-box and black-box settings. For the white-box setting, a network without up/down sampling is proposed to generate face images that confuse the classifier. In the black-box setting (where the classifier is unknown), real data is introduced to guide the GAN structure and make it adversarial, and a Real Extractor is added as an auxiliary network to constrain the feature distance between the generated images and the real data, enhancing the adversarial capability. Experimental results show that the proposed method effectively reduces the detection accuracy of forensic models, with good transferability.
Levina, Alla, Kamnev, Ivan, Zikratov, Igor.  2021.  Implementation White-Box Cryptography for Elliptic Curve Cryptography. 2021 10th Mediterranean Conference on Embedded Computing (MECO). :1–4.

Advances in technology make it possible to increase the power of information-processing systems, but the modernization of processors brings not only better performance but also more errors and vulnerabilities that can allow an attacker to attack the system and gain access to confidential information. White-box cryptography, owing to its structure, not only allows possible changes to be monitored but also protects the processed data even when the attacker has full access to the execution environment. Elliptic Curve Cryptography (ECC), thanks to its properties, is becoming ever more widespread, since it provides strong encryption at a lower computational cost; this reduces the load on the system and increases its performance.

2021-03-04
Afreen, A., Aslam, M., Ahmed, S..  2020.  Analysis of Fileless Malware and its Evasive Behavior. 2020 International Conference on Cyber Warfare and Security (ICCWS). :1–8.

Malware is any software that causes harm to user information, computer systems, or networks. Modern computing and internet systems face an increasing number of malware threats from the internet, and it is observed that different malware often follows the same structural patterns with minimal alterations. The nature of the threat has also evolved from file-based malware to fileless malware, also known as Advanced Volatile Threats (AVTs). Fileless malware is complex and evasive, exploiting pre-installed trusted programs to infiltrate information with malicious intent. It is designed to run in system memory with a very small footprint, leaving no artifacts on physical hard drives; traditional antivirus signatures and heuristic analysis are unable to detect this kind of malware due to its sophisticated and evasive nature. This paper provides information on the detection, mitigation, and analysis of this kind of threat.

Sun, H., Liu, L., Feng, L., Gu, Y. X..  2014.  Introducing Code Assets of a New White-Box Security Modeling Language. 2014 IEEE 38th International Computer Software and Applications Conference Workshops. :116–121.

This paper proposes a new conceptual modeling language for White-Box (WB) security analysis. In the WB security domain, an attacker may have access to the inner structure of an application or even its entire binary code, making it easy for attackers to inspect, reverse engineer, and tamper with the application using the information they steal. The basis of this paper is a set of 14 patterns developed by a leading provider of software-protection technologies and solutions. We present part of a new modeling language, named i-WBS (White-Box Security), to better describe WB security problems. The essence of the WB security problem is code security, so the new modeling language focuses on code more than ever before; in this way, developers who are not security experts can easily understand what they really need to protect.

Carlini, N., Farid, H..  2020.  Evading Deepfake-Image Detectors with White- and Black-Box Attacks. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :2804–2813.

It is now possible to synthesize highly realistic images of people who do not exist. Such content has, for example, been implicated in the creation of fraudulent social-media profiles responsible for disinformation campaigns. Significant efforts are therefore being deployed to detect synthetically generated content. One popular forensic approach trains a neural network to distinguish real from synthetic content. We show that such forensic classifiers are vulnerable to a range of attacks that reduce the classifier to near-0% accuracy. We develop five attack case studies on a state-of-the-art classifier that achieves an area under the ROC curve (AUC) of 0.95 on almost all existing image generators when trained on only one generator. With full access to the classifier, we can flip the lowest bit of each pixel in an image to reduce the classifier's AUC to 0.0005; perturb 1% of the image area to reduce the classifier's AUC to 0.08; or add a single noise pattern in the synthesizer's latent space to reduce the classifier's AUC to 0.17. We also develop a black-box attack that, with no access to the target classifier, reduces the AUC to 0.22. These attacks reveal significant vulnerabilities in certain image-forensic classifiers.
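
The first white-box attack above perturbs only the least-significant bit of each pixel, a change of at most 1/255 per channel. The sketch below shows the size of that perturbation budget by flipping every LSB unconditionally; the actual attack uses gradient access to choose the flips adversarially.

```python
import numpy as np

def flip_lowest_bits(image_uint8):
    """Flip the least-significant bit of every pixel: an imperceptible change
    (max error 1/255 per pixel). The paper's white-box attack chooses such
    flips with gradient guidance to drive the detector's AUC to 0.0005."""
    return image_uint8 ^ 1  # XOR toggles bit 0 of each uint8 value

img = np.random.default_rng(0).integers(0, 256, size=(8, 8), dtype=np.uint8)
adv = flip_lowest_bits(img)
assert np.max(np.abs(adv.astype(int) - img.astype(int))) == 1
```
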

Wang, Y., Wang, Z., Xie, Z., Zhao, N., Chen, J., Zhang, W., Sui, K., Pei, D..  2020.  Practical and White-Box Anomaly Detection through Unsupervised and Active Learning. 2020 29th International Conference on Computer Communications and Networks (ICCCN). :1–9.

To ensure quality of service and user experience, large Internet companies often monitor various Key Performance Indicators (KPIs) of their systems so that they can detect anomalies and identify failures in real time. However, due to the large number and variety of KPIs and the lack of high-quality labels, existing KPI anomaly-detection approaches either perform well only on certain types of KPIs or consume excessive resources. To realize generic and practical KPI anomaly detection in the real world, we therefore propose a KPI anomaly-detection framework named iRRCF-Active, which combines an unsupervised, white-box anomaly detector based on the Robust Random Cut Forest (RRCF) with an active-learning component. Specifically, we propose an improved RRCF (iRRCF) algorithm to overcome the drawbacks of applying the original RRCF to KPI anomaly detection, and we incorporate active learning so that our model benefits from high-quality labels given by experienced operators. We conduct extensive experiments on a large-scale public dataset and a private dataset collected from a large commercial bank. The experimental results demonstrate that iRRCF-Active performs better than existing traditional statistical methods, unsupervised learning methods, and supervised learning methods, and each component of iRRCF-Active is shown to be effective and indispensable.
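
A hedged sketch of the RRCF core that iRRCF builds on, using the open-source `rrcf` Python package (assumed installed; not the authors' implementation): stream a KPI through a forest of random cut trees with shingling and a bounded sliding window, and score each point by its average collusive displacement (CoDisp).

```python
import numpy as np
import rrcf  # pip install rrcf -- open-source RRCF implementation

rng = np.random.default_rng(0)
kpi = np.sin(np.linspace(0, 20, 400)) + rng.normal(0, 0.05, 400)
kpi[250] += 3.0  # inject a spike anomaly into the synthetic KPI

num_trees, window = 40, 20
forest = [rrcf.RCTree() for _ in range(num_trees)]
scores = {}
for i in range(len(kpi) - window + 1):
    point = kpi[i:i + window]                # shingle the stream
    for tree in forest:
        if len(tree.leaves) >= 128:          # bounded-memory sliding window
            tree.forget_point(i - 128)
        tree.insert_point(point, index=i)
    # Collusive displacement: how strongly the point "displaces" the rest.
    scores[i] = np.mean([tree.codisp(i) for tree in forest])

# The top-scoring shingle should start near the injected spike at index 250.
print(max(scores, key=scores.get))
```
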

Amadori, A., Michiels, W., Roelse, P..  2020.  Automating the BGE Attack on White-Box Implementations of AES with External Encodings. 2020 IEEE 10th International Conference on Consumer Electronics (ICCE-Berlin). :1–6.

Cloud-based payments, virtual car keys, and digital rights management are examples of consumer electronics applications that use secure software. White-box implementations of the Advanced Encryption Standard (AES) are important building blocks of secure software systems, and the attack of Billet, Gilbert, and Ech-Chatbi (BGE) is a well-known attack on such implementations. A drawback from the adversary’s or security tester’s perspective is that manual reverse engineering of the implementation is required before the BGE attack can be applied. This paper presents a method to automate the BGE attack on a class of white-box AES implementations with a specific type of external encoding. The new method was implemented and applied successfully to a CHES 2016 capture the flag challenge.