Biblio
Filters: Keyword is Training
Strong PUF Security Metrics: Response Sensitivity to Small Challenge Perturbations. 2022 23rd International Symposium on Quality Electronic Design (ISQED). :1–10.
2022. This paper belongs to a sequence of manuscripts that discuss generic and easy-to-apply security metrics for Strong PUFs. These metrics cannot and shall not fully replace in-depth machine learning (ML) studies in the security assessment of Strong PUF candidates. But they can complement the latter, serve in initial PUF complexity analyses, and are much easier and more efficient to apply: they do not require detailed knowledge of various ML methods, substantial computation times, or the availability of an internal parametric model of the studied PUF. Our metrics can also be standardized particularly easily. This avoids the sometimes inconclusive or contradictory findings of existing ML-based security tests, which may result from the usage of different or non-optimized ML algorithms and hyperparameters, differing hardware resources, or varying numbers of challenge-response pairs in the training phase. This first manuscript within the above-mentioned sequence treats one of the conceptually most straightforward security metrics on that path: it investigates the effects that small perturbations in the PUF challenges have on the resulting PUF responses. We first develop and implement several sub-metrics that realize this approach in practice. We then empirically show that these metrics have surprising predictive power, and compare the obtained test scores with the known real-world security of several popular Strong PUF designs. The latter include (XOR) Arbiter PUFs, Feed-Forward Arbiter PUFs, and (XOR) Bistable Ring PUFs. Along the way, our manuscript also suggests techniques for representing the results of our metrics graphically, and for interpreting them in a meaningful manner.
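To make the metric concrete, here is a minimal sketch of the response-sensitivity idea, assuming a standard additive-delay Arbiter PUF simulation (the model, challenge counts, and scoring are illustrative, not the authors' implementation): flip one challenge bit at a time and measure how often the response changes.

```python
# Hedged sketch (not the authors' code): estimate response sensitivity to
# single-bit challenge perturbations on a simulated additive-delay Arbiter PUF.
import numpy as np

rng = np.random.default_rng(0)
n_stages = 64
weights = rng.normal(size=n_stages + 1)  # toy Arbiter PUF delay model

def response(challenge):
    # Standard feature transform of the additive delay model.
    phi = np.cumprod(1 - 2 * challenge[::-1])[::-1]
    phi = np.append(phi, 1.0)
    return int(weights @ phi > 0)

def sensitivity(n_challenges=2000):
    flips = trials = 0
    for _ in range(n_challenges):
        c = rng.integers(0, 2, n_stages)
        r = response(c)
        for i in range(n_stages):        # perturb one challenge bit at a time
            c2 = c.copy()
            c2[i] ^= 1
            flips += (response(c2) != r)
            trials += 1
    return flips / trials  # values near 0.5 indicate avalanche-like behavior

print(f"single-bit response sensitivity: {sensitivity():.3f}")
```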
Automation of the Information Collection Process by Osint Methods for Penetration Testing During Information Security Audit. 2022 Conference of Russian Young Researchers in Electrical and Electronic Engineering (ElConRus). :242–246.
2022. The purpose of this article is to consider one of the options for automating the process of collecting information from open sources when conducting penetration testing in an organization's information security audit using the capabilities of the Python programming language. Possible primary vectors for collecting information about the organization, personnel, software, and hardware are shown. The basic principles of operation of the software product are presented in a visual form, which allows automated analysis of information from open sources about the object under study.
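As a flavor of what such a collector automates, here is a minimal, hedged sketch of one information-gathering primitive in pure standard-library Python (the wordlist and target domain are placeholders; the paper's actual tool and data sources are not public):

```python
# Illustrative OSINT primitive (assumed, not the authors' tool): enumerate
# candidate subdomains of a target domain via DNS resolution.
import socket

def resolve_subdomains(domain, wordlist=("www", "mail", "vpn", "dev", "test")):
    found = {}
    for sub in wordlist:
        host = f"{sub}.{domain}"
        try:
            found[host] = socket.gethostbyname(host)  # A-record lookup
        except socket.gaierror:
            pass  # host does not resolve; skip it
    return found

if __name__ == "__main__":
    for host, ip in resolve_subdomains("example.com").items():
        print(host, "->", ip)
```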
HyBP: Hybrid Isolation-Randomization Secure Branch Predictor. 2022 IEEE International Symposium on High-Performance Computer Architecture (HPCA). :346–359.
2022. Recently exposed vulnerabilities reveal the necessity of improving the security of branch predictors. Branch predictors record history about the execution of different processes, and such information is stored in the same structure and is thus accessible across processes. This leaves attackers with opportunities for malicious training and malicious perception. Physical or logical isolation mechanisms, such as using dedicated tables and flushing during context switches, can provide security but incur non-trivial costs in space and/or execution time. Randomization mechanisms incur a performance cost in a different way: those with stronger security add latency to the critical path of the pipeline, while the simpler alternatives leave vulnerabilities to more sophisticated attacks. This paper proposes HyBP, a practical and effective hybrid protection mechanism for building secure branch predictors. The design applies physical isolation and randomization to the right components to achieve the best of both worlds. We propose to protect the smaller tables with physical isolation based on the (thread, privilege) combination, and to protect the large tables with randomization. Surprisingly, the physical isolation also significantly enhances the security of the last-level tables by naturally filtering out accesses, reducing the information flow to these bigger tables. As a result, key changes can happen less frequently and be performed conveniently at context switches. Moreover, we propose a latency-hiding design for a strong cipher by precomputing the "code book" with a validated, cryptographically strong cipher. Overall, our design incurs a performance penalty of 0.5% compared to 5.1% for physical isolation under the default context-switching interval in Linux.
Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :588–599.
2022. Trojan attacks threaten deep neural networks (DNNs) by poisoning them to behave normally on most samples, yet to produce manipulated results for inputs attached with a particular trigger. Several works attempt to detect whether a given DNN has been injected with a specific trigger during training. In a parallel line of research, the lottery ticket hypothesis reveals the existence of sparse sub-networks which are capable of reaching performance competitive with the dense network after independent training. Connecting these two dots, we investigate the problem of Trojan DNN detection through the brand-new lens of sparsity, even when no clean training data is available. Our crucial observation is that Trojan features are significantly more stable to network pruning than benign features. Leveraging this, we propose a novel Trojan network detection regime: first locating a "winning Trojan lottery ticket" which preserves nearly full Trojan information yet only chance-level performance on clean inputs; then recovering the trigger embedded in this already isolated sub-network. Extensive experiments on various datasets, i.e., CIFAR-10, CIFAR-100, and ImageNet, with different network architectures, i.e., VGG-16, ResNet-18, ResNet-20s, and DenseNet-100, demonstrate the effectiveness of our proposal. Code is available at https://github.com/VITA-Group/Backdoor-LTH.
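A hedged sketch of the core observation (the model and data loaders are placeholders, and the pruning ratio is an assumption, not the paper's exact recipe): aggressively magnitude-prune a suspect network, then compare clean accuracy against triggered accuracy; per the paper's finding, the Trojan behavior tends to survive pruning while clean accuracy collapses toward chance.

```python
# Sketch only: global magnitude pruning as a Trojan stability probe.
import torch
import torch.nn.utils.prune as prune

def global_magnitude_prune(model, amount=0.95):
    params = [(m, "weight") for m in model.modules()
              if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear))]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured,
                              amount=amount)
    return model

@torch.no_grad()
def accuracy(model, loader, device="cpu"):
    model.eval()
    correct = total = 0
    for x, y in loader:
        pred = model(x.to(device)).argmax(dim=1)
        correct += (pred == y.to(device)).sum().item()
        total += y.numel()
    return correct / total

# Usage (suspect_model, clean_loader, triggered_loader are assumed to exist):
# sparse = global_magnitude_prune(suspect_model, amount=0.95)
# print("clean acc:", accuracy(sparse, clean_loader))       # near chance level
# print("trigger acc:", accuracy(sparse, triggered_loader)) # stays high if Trojaned
```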
Robust and Resilient Federated Learning for Securing Future Networks. 2022 Joint European Conference on Networks and Communications & 6G Summit (EuCNC/6G Summit). :351–356.
2022. Machine Learning (ML) and Artificial Intelligence (AI) techniques are widely adopted in the telecommunication industry, especially to automate beyond-5G networks. Federated Learning (FL) recently emerged as a distributed ML approach that enables localized model training and keeps data decentralized to ensure data privacy. In this paper, we identify the applicability of FL for securing future networks and its limitations due to its vulnerability to poisoning attacks. First, we investigate the shortcomings of state-of-the-art security algorithms for FL and perform an attack that circumvents the FoolsGold algorithm, one of the most promising defense techniques currently available. The attack is launched by adding intelligent noise to the poisoned model updates. Then we propose a more sophisticated defense strategy, a threshold-based clustering mechanism, to complement FoolsGold. Moreover, we provide a comprehensive analysis of the impact of the attack scenario and the performance of the defense mechanism.
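As a rough illustration of what a threshold-based clustering complement might look like (the threshold and the similarity statistic are assumptions; this is not the paper's exact algorithm), one can drop clients whose updates are suspiciously aligned with some peer, since sybil-style poisoned updates tend to correlate:

```python
# Illustrative filter over client updates before aggregation.
import numpy as np

def filter_updates(updates, threshold=0.9):
    """updates: (n_clients, n_params) array of flattened model deltas."""
    normed = updates / (np.linalg.norm(updates, axis=1, keepdims=True) + 1e-12)
    sim = normed @ normed.T          # pairwise cosine similarities
    np.fill_diagonal(sim, -1.0)      # ignore self-similarity
    max_sim = sim.max(axis=1)        # strongest alignment with any peer
    keep = max_sim < threshold
    return updates[keep], keep

# Aggregate only the surviving updates with plain FedAvg:
# kept, mask = filter_updates(np.stack(client_deltas))
# global_delta = kept.mean(axis=0)
```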
Detection and Mitigation of Targeted Data Poisoning Attacks in Federated Learning. 2022 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :1–8.
2022. Federated learning (FL) has emerged as a promising paradigm for distributed training of machine learning models. In FL, several participants train a global model collaboratively by only sharing model parameter updates while keeping their training data local. However, FL was recently shown to be vulnerable to data poisoning attacks, in which malicious participants send parameter updates derived from poisoned training data. In this paper, we focus on defending against targeted data poisoning attacks, where the attacker’s goal is to make the model misbehave for a small subset of classes while the rest of the model is relatively unaffected. To defend against such attacks, we first propose a method called MAPPS for separating malicious updates from benign ones. Using MAPPS, we propose three methods for attack detection: MAPPS + X-Means, MAPPS + VAT, and their Ensemble. Then, we propose an attack mitigation approach in which a "clean" model (i.e., a model that is not negatively impacted by an attack) can be trained despite the existence of a poisoning attempt. We empirically evaluate all of our methods using popular image classification datasets. Results show that we can achieve >95% true positive rates while incurring only a <2% false positive rate. Furthermore, the clean models that are trained using our proposed methods have accuracy comparable to models trained in an attack-free scenario.
FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :20844–20853.
2022. In recent years, the security of AI systems has drawn increasing research attention, especially in the medical imaging realm. To develop a secure medical image analysis (MIA) system, it is essential to study possible backdoor attacks (BAs), which can embed hidden malicious behaviors into the system. However, designing a unified BA method that can be applied to various MIA systems is challenging due to the diversity of imaging modalities (e.g., X-ray, CT, and MRI) and analysis tasks (e.g., classification, detection, and segmentation). Most existing BA methods are designed to attack natural image classification models; they apply spatial triggers to training images and inevitably corrupt the semantics of poisoned pixels, leading to failures when attacking dense prediction models. To address this issue, we propose a novel Frequency-Injection based Backdoor Attack method (FIBA) that is capable of delivering attacks in various MIA tasks. Specifically, FIBA leverages a trigger function in the frequency domain that injects the low-frequency information of a trigger image into the poisoned image by linearly combining the spectral amplitudes of both images. Since it preserves the semantics of the poisoned image pixels, FIBA can perform attacks on both classification and dense prediction models. Experiments on three benchmarks in MIA (i.e., ISIC-2019 [4] for skin lesion classification, KiTS-19 [17] for kidney tumor segmentation, and EAD-2019 [1] for endoscopic artifact detection) validate the effectiveness of FIBA and its superiority over state-of-the-art methods in attacking MIA models and bypassing backdoor defenses. Source code will be made available.
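The frequency-injection trigger described in the abstract can be sketched directly with a Fourier transform; the blend ratio and the size of the low-frequency band below are illustrative assumptions, not the paper's settings:

```python
# Hedged sketch of frequency-injection poisoning: blend the trigger image's
# low-frequency spectral amplitude into the target while keeping the target's
# phase, so pixel semantics are largely preserved.
import numpy as np

def inject_low_freq(target, trigger, alpha=0.15, beta=0.1):
    """target, trigger: float arrays in [0, 1] of equal shape (H, W)."""
    Ft, Fg = np.fft.fft2(target), np.fft.fft2(trigger)
    amp_t, phase_t = np.abs(Ft), np.angle(Ft)
    amp_g = np.abs(Fg)
    # Work on centered spectra so the low-frequency band is a central square.
    amp_t, amp_g = np.fft.fftshift(amp_t), np.fft.fftshift(amp_g)
    h, w = target.shape
    ch, cw, r = h // 2, w // 2, int(beta * min(h, w))
    region = (slice(ch - r, ch + r), slice(cw - r, cw + r))
    amp_t[region] = (1 - alpha) * amp_t[region] + alpha * amp_g[region]
    amp_t = np.fft.ifftshift(amp_t)
    poisoned = np.fft.ifft2(amp_t * np.exp(1j * phase_t)).real
    return np.clip(poisoned, 0.0, 1.0)
```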
A Survey on Data Poisoning Attacks and Defenses. 2022 7th IEEE International Conference on Data Science in Cyberspace (DSC). :48–55.
2022. With the widespread deployment of data-driven services, the demand for data volumes continues to grow. At present, many applications lack reliable human supervision during data collection, so the collected data may contain low-quality or even malicious data. Such low-quality or malicious data exposes AI systems to serious security challenges. One of the main security threats in the training phase of machine learning is the data poisoning attack, which compromises model integrity by contaminating training data to make the resulting model skewed or unusable. This paper reviews the relevant research on data poisoning attacks in various task environments: first, the classification of attacks is summarized; then, the defense methods against data poisoning attacks are sorted out; and finally, possible future research directions are discussed.
Influence-Driven Data Poisoning in Graph-Based Semi-Supervised Classifiers. 2022 IEEE/ACM 1st International Conference on AI Engineering – Software Engineering for AI (CAIN). :77–87.
2022. Graph-based Semi-Supervised Learning (GSSL) is a practical solution for learning from a limited amount of labelled data together with a vast amount of unlabelled data. However, due to their reliance on the known labels to infer the unknown labels, these algorithms are sensitive to data quality. It is therefore essential to study the potential threats related to the labelled data, more specifically, label poisoning. In this paper, we propose a novel data poisoning method which efficiently approximates the result of label inference to identify the inputs which, if poisoned, would produce the highest number of incorrectly inferred labels. We extensively evaluate our approach on three classification problems under 24 different experimental settings each. Compared to the state of the art, our influence-driven attack produces an average error-rate increase that is 50% higher, while being faster by multiple orders of magnitude. Moreover, our method can inform engineers of inputs that deserve investigation (relabelling them) before training the learning model. We show that relabelling one-third of the poisoned inputs (selected based on their influence) reduces the poisoning effect by 50%.
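A brute-force stand-in for the paper's far more efficient influence approximation can illustrate the idea: flip each known label in turn, rerun graph-based label inference, and rank inputs by how many inferred labels change. The binary-label assumption and function names below are illustrative:

```python
# Hedged sketch, not the paper's algorithm: exhaustive influence ranking of
# labelled points in graph-based semi-supervised classification.
import numpy as np
from sklearn.semi_supervised import LabelPropagation

def influence_ranking(X, y_partial):
    """y_partial: labels with -1 marking unlabeled points (binary task)."""
    base_pred = LabelPropagation().fit(X, y_partial).transduction_
    labeled = np.where(y_partial != -1)[0]
    scores = {}
    for i in labeled:
        y_flip = y_partial.copy()
        y_flip[i] = 1 - y_flip[i]                    # flip one known label
        pred = LabelPropagation().fit(X, y_flip).transduction_
        scores[i] = int((pred != base_pred).sum())   # changed inferences
    # Highest-impact labels first: the best poisoning (or relabelling) targets.
    return sorted(scores, key=scores.get, reverse=True)
```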
Enhancing Cyber Security in IoT Systems using FL-based IDS with Differential Privacy. 2022 Global Information Infrastructure and Networking Symposium (GIIS). :30–34.
2022. Nowadays, IoT networks and devices are part of our everyday life, capturing and carrying unlimited data. However, the increasing penetration of connected systems and devices implies rising cybersecurity threats, with IoT systems suffering from network attacks. Artificial Intelligence (AI) and Machine Learning take advantage of huge volumes of IoT network logs to enhance cybersecurity in IoT. However, these data are often desired to remain private. Federated Learning (FL) provides a potential solution: it enables collaborative training of an attack detection model among a set of federated nodes while preserving privacy, as data remain local and are never disclosed or processed on central servers. While FL is resilient and resolves, up to a point, data governance and ownership issues, it does not guarantee security and privacy by design. Adversaries could interfere with the communication process, expose network vulnerabilities, and manipulate the training process, thus affecting the performance of the trained model. In this paper, we present a federated learning model which can successfully detect network attacks in IoT systems. Moreover, we evaluate its performance under various settings of differential privacy as a privacy-preserving technique and various configurations of the participating nodes. We show that the proposed model protects privacy without actually compromising performance, incurring a limited impact of only ~7% lower testing accuracy compared to the baseline while simultaneously guaranteeing security and applicability.
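A minimal sketch of the privacy-preserving step, assuming client-side differential privacy via the Gaussian mechanism (the clip bound and noise multiplier are placeholder settings, not the paper's configuration):

```python
# Illustrative client-side DP step for federated IDS training: clip each model
# update to a fixed L2 norm and add calibrated Gaussian noise before upload.
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1,
                     rng=np.random.default_rng()):
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))  # bound sensitivity
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# The server then averages the noisy updates as usual (FedAvg).
```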
PPIoV: A Privacy Preserving-Based Framework for IoV-Fog Environment Using Federated Learning and Blockchain. 2022 IEEE World AI IoT Congress (AIIoT). :597–603.
2022. The integration of the Internet-of-Vehicles (IoV) and fog computing benefits from cooperative computing and analysis of environmental data while avoiding network congestion and latency. However, when private data is shared across fog nodes or the cloud, there exist privacy issues that limit the effectiveness of IoV systems, putting drivers' safety at risk. To address this problem, we propose a framework called PPIoV, which is based on Federated Learning (FL) and blockchain technologies to preserve the privacy of vehicles in IoV. Typical machine learning methods are not well suited for distributed and highly dynamic systems like IoV since they train on data with local features. Therefore, we use FL to train the global model while preserving privacy. Also, our approach is built on a scheme that evaluates the reliability of the vehicles participating in the FL training process. Moreover, PPIoV is built on blockchain to establish trust across multiple communication nodes. For example, when the local model updates from the vehicles and fog nodes are communicated to the cloud to update the global model, all transactions take place on the blockchain. The outcome of our experimental study shows that the proposed method improves the global model's accuracy by allowing reputable vehicles to update the global model.
PrivPAS: A real time Privacy-Preserving AI System and applied ethics. 2022 IEEE 16th International Conference on Semantic Computing (ICSC). :9–16.
2022. With 3.78 billion social media users worldwide in 2021 (48% of the human population), almost 3 billion images are shared daily. At the same time, the steady evolution of smartphone cameras has led to a photography explosion, with 85% of all new pictures being captured using smartphones. Lately, however, there has been increased discussion of privacy concerns when a person being photographed is unaware of the picture being taken or has reservations about it being shared. These privacy violations are amplified for people with disabilities, who may find it challenging to raise dissent even if they are aware. Such unauthorized image captures may also be misused by third-party organizations to gain sympathy, leading to a privacy breach. Privacy for people with disabilities has so far received comparatively little attention from the AI community. This motivates us to work towards a solution that generates privacy-conscious cues to raise awareness in smartphone users of any sensitivity in their viewfinder content. To this end, we introduce PrivPAS (a real-time Privacy-Preserving AI System), a novel framework to identify sensitive content. Additionally, we curate and annotate a dataset to identify and localize accessibility markers and to classify whether an image is sensitive to a featured subject with a disability. We demonstrate that the proposed lightweight architecture, with a memory footprint of a mere 8.49 MB, achieves a high mAP of 89.52% on resource-constrained devices. Furthermore, our pipeline, trained on face-anonymized data, achieves an F1-score of 73.1%.
Mixed Differential Privacy in Computer Vision. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :8366–8376.
2022. We introduce AdaMix, an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data. While pre-training language models on large public datasets has enabled strong differential privacy (DP) guarantees with minor loss of accuracy, a similar practice yields punishing trade-offs in vision tasks. A few-shot or even zero-shot learning baseline that ignores private data can outperform fine-tuning on a large private dataset. AdaMix incorporates few-shot training, or cross-modal zero-shot learning, on public data prior to private fine-tuning, to improve the trade-off. AdaMix reduces the error increase over the non-private upper bound from the baseline's 167–311%, on average across 6 datasets, to 68–92%, depending on the desired privacy level selected by the user. AdaMix tackles the trade-off arising in visual classification, whereby the most privacy-sensitive data, corresponding to isolated points in representation space, are also critical for high classification accuracy. In addition, AdaMix comes with strong theoretical privacy guarantees and convergence analysis.
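AdaMix's full algorithm is more involved, but the private fine-tuning primitive it builds on can be sketched as a DP-SGD step with per-example clipping and Gaussian noise (all names and constants below are illustrative assumptions, not the paper's settings):

```python
# Hedged sketch of one DP-SGD update, the standard primitive behind private
# fine-tuning: clip each per-example gradient, sum, and add Gaussian noise.
import numpy as np

def dp_sgd_step(w, per_example_grads, lr=0.1, clip=1.0, sigma=1.0,
                rng=np.random.default_rng()):
    """per_example_grads: (batch, dim) array of individual gradients."""
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip / (norms + 1e-12))
    noisy_sum = clipped.sum(axis=0) + rng.normal(0.0, sigma * clip, w.shape)
    return w - lr * noisy_sum / len(per_example_grads)
```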
A Generative Adversarial Approach for Sybil Attacks Recognition for Vehicular Crowdsensing. 2022 International Conference on Connected Vehicle and Expo (ICCVE). :1–7.
2022. Vehicular crowdsensing (VCS) is a subset of crowdsensing in which data collection is outsourced to a group of vehicles. Here, an entity interested in collecting data from a set of Places of Sensing Interest (PsI) advertises a set of sensing tasks and the associated rewards. Vehicles attracted by the offered rewards deviate from their ongoing trajectories to visit and collect data from one or more PsI. In this win-win scenario, vehicles reach their final destination with an extra reward, and the entity obtains the desired samples. Unfortunately, the efficiency of VCS can be undermined by the Sybil attack, in which an attacker can benefit from the injection of false vehicle identities. In this paper, we present a case study and analyze the effects of such an attack. We also propose a defense mechanism based on generative adversarial networks (GANs). We discuss GANs' advantages and drawbacks in the context of VCS, as well as new trends in GAN training that make them suitable for VCS.
Deep Learning Technique Based Intrusion Detection in Cyber-Security Networks. 2022 IEEE 2nd Mysore Sub Section International Conference (MysuruCon). :1–7.
2022. As a result of the inherent weaknesses of the wireless medium, ad hoc networks are susceptible to a broad variety of threats and attacks. As a direct consequence, intrusion detection, as well as security, privacy, and authentication in ad hoc networks, has become a primary focus of current research. This work aims to identify the dangers posed by a variety of attacks that are often seen in wireless ad hoc networks and to provide strategies to counteract them. The specific topics covered are the black hole attack, wormhole attack, selective forwarding attack, Sybil attack, and denial-of-service attack. First, we describe a trust-based secure routing protocol with the goal of mitigating the interference of black hole nodes during routing in mobile ad hoc networks. The overall performance of the network is negatively impacted when black hole nodes lie on the route taken by traffic, so we have developed a routing protocol that reduces the likelihood that packets will be lost to black hole nodes; it has been tested experimentally to guarantee that the most secure path is selected for delivering packets between a source and a destination. The invasion of wormholes into a wireless network results in segmentation of the network and disorder in routing. We therefore provide an effective approach for locating wormholes by using ordinal multi-dimensional scaling and round-trip duration in wireless ad hoc networks with either sparse or dense topologies. The approach can find wormholes linked by both short-path and long-path wormhole links, and it is tested experimentally to guarantee that no wormholes go unnoticed. To fight selective forwarding attacks in wireless ad hoc networks, we have developed three techniques. The first is an incentive-based algorithm that uses a reward-punishment scheme to drive cooperation among nodes for forwarding messages in crowded ad hoc networks. We have developed a unique adversarial model that specifies three distinct types of nodes and the activities they participate in, and we show that the proposed incentive-based strategy prevents nodes from adopting individualistic behaviour, ensuring collaboration in packet forwarding. The second approach proposes a non-cooperative game-theoretic model to guarantee that intermediate nodes in resource-constrained ad hoc networks accurately convey packets; this game reaches a desired equilibrium state which assures that cooperation in multi-hop communication is viable. In the third algorithm, we present a detection approach that locates malicious nodes in multi-hop hierarchical ad hoc networks by employing binary search and control packets, and we show that the cluster head can accurately identify the malicious node by analysing the sequences of packets dropped along the path from a source node to the cluster head. Finally, a lightweight symmetric encryption technique based on Binary Playfair is presented as a means of safeguarding data transport. We demonstrate experimentally that the suggested encryption method is efficient with regard to energy consumption, encryption time, and memory overhead. This lightweight encryption technique is applied in clustered wireless ad hoc networks to reduce the likelihood of a Sybil attack in such networks.
FedMix: A Sybil Attack Detection System Considering Cross-layer Information Fusion and Privacy Protection. 2022 19th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON). :199–207.
2022. The Sybil attack is one of the most dangerous internal attacks in Vehicular Ad Hoc Networks (VANETs). It disrupts network functions by maliciously claiming or stealing multiple identities and propagating erroneous messages. Many solutions have been proposed to protect VANETs from Sybil attacks. However, existing solutions rely on single-level data from either the physical or the application layer and lack research on cross-layer information-fusion detection. Moreover, these schemes involve accessing and transmitting large amounts of sensitive data, do not consider users' privacy, and can impose a severe communication burden, which prevents them from being deployed in practice. In this context, this paper introduces FedMix, the first federated Sybil attack detection system that considers cross-layer information fusion and provides privacy protection. The system can integrate VANET physical-layer and application-layer data for joint analysis. The data reside locally in the vehicle for local training; the central agency then aggregates only the generated models and finally distributes the global model to the vehicles for attack detection. This process does not involve transmitting or accessing any vehicle's original data. Meanwhile, we also designed a new model aggregation algorithm called SFedAvg to address unbalanced vehicle data quality and low aggregation efficiency. Experiments show that FedMix can provide an intelligent model with equivalent performance under the premise of privacy protection and significantly reduce communication overhead compared with traditional centralized training of attack detection models. In addition, the SFedAvg algorithm and cross-layer information fusion bring better aggregation efficiency and detection performance, respectively.
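The abstract does not spell out SFedAvg, so the following is only a generic sketch in its spirit: federated averaging in which each vehicle's contribution is weighted by a data-quality or reputation score (how that score is computed is assumed given):

```python
# Illustrative quality-weighted federated averaging (not the SFedAvg algorithm).
import numpy as np

def weighted_fedavg(client_models, quality_scores):
    """client_models: list of (dim,) parameter vectors; quality_scores >= 0."""
    w = np.asarray(quality_scores, dtype=float)
    w = w / w.sum()                    # normalize to a convex combination
    return np.tensordot(w, np.stack(client_models), axes=1)

# global_params = weighted_fedavg(vehicle_models, vehicle_quality_scores)
```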
Feature-based Intrusion Detection System with Support Vector Machine. 2022 IEEE International Conference on Blockchain and Distributed Systems Security (ICBDS). :1–7.
2022. Today billions of people access the internet around the world. There is a need for new technology that provides security against malicious activities and can take preventive/defensive actions against constantly evolving attacks. A new generation of technology that keeps an eye on such activities and responds intelligently to them is the intrusion detection system employing machine learning. It is difficult for traditional techniques to analyze network-generated data due to the nature, volume, and speed with which the data is generated. The evolution of advanced cyber threats makes it difficult for existing IDSs to perform up to the mark. In addition, managing large volumes of data is beyond the capabilities of computer hardware and software: this data is not only vast in scope, it is also moving quickly. The system architecture suggested in this study uses an SVM to train the model, together with feature selection based on an information-gain-ratio ranking approach, to boost the overall system's efficiency and increase the attack detection rate. This work also addresses the issue of false alarms and attempts to reduce them. In the proposed framework, the UNSW-NB15 dataset is used; for analysis, the UNSW-NB15 and NSL-KDD datasets are used. Along with SVM, we have also trained models using Naive Bayes, ANN, RF, etc., and compared their results. These trained models can further be extended into an ensemble approach to improve the performance of the IDS.
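A sketch of the described pipeline using scikit-learn (dataset loading is omitted, and mutual information stands in for the information-gain-ratio ranking, which scikit-learn does not provide directly):

```python
# Illustrative IDS pipeline: scale features, keep the top-k by an information
# score, then train an SVM classifier. k is an assumption.
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def build_ids_model(k=20):
    return make_pipeline(
        StandardScaler(),
        SelectKBest(score_func=mutual_info_classif, k=k),
        SVC(kernel="rbf"),
    )

# model = build_ids_model().fit(X_train, y_train)  # e.g. UNSW-NB15 features
# print("detection accuracy:", model.score(X_test, y_test))
```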
Comparative Study of Machine Learning Techniques for Intrusion Detection Systems. 2022 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COM-IT-CON). 1:274–283.
2022. As part of today's technical world, we are connected through a vast network, and the more we rely on these modern techniques, the more security we need. A network security system must be reliable and capable of monitoring an organization's entire network so that unauthorized users and intruders cannot cause security breaches. Firewalls secure the internal network from unauthorized outsiders, but attacks remain possible: according to one survey, 60% of attacks were internal to the network, so internal systems need the same high level of security as external ones. Recognizing the value of security measures that are accurate, efficient, and fast, we focus on implementing and comparing an improved intrusion detection system. A comprehensive literature review found that some feature selection techniques, combined with standard scaling and machine learning techniques, can give better results than existing ML techniques alone. In this survey paper, a univariate feature selection method is used to select 14 essential features out of 41, which are then used in a comparative analysis. We implemented and compared both binary-class and multi-class classification-based intrusion detection systems (IDS) for two supervised machine learning techniques: Support Vector Machines and Classification and Regression Techniques.
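A hedged sketch of this experimental setup (the univariate scoring function and data handling are assumptions): select 14 of the 41 features univariately, then compare an SVM against a classification-and-regression-tree model for either binary or multi-class labels.

```python
# Illustrative comparison harness for the two classifier families.
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def selector():
    # Univariate selection of the 14 most informative of the 41 features.
    return SelectKBest(score_func=f_classif, k=14)

models = {
    "SVM": make_pipeline(StandardScaler(), selector(), SVC()),
    "CART": make_pipeline(selector(), DecisionTreeClassifier()),
}
# for name, m in models.items():
#     m.fit(X_train, y_train)   # works for binary or multi-class labels
#     print(name, m.score(X_test, y_test))
```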
Predicting Distributed Denial of Service Attacks in Machine Learning Field. 2022 2nd International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE). :594–597.
2022. Large-scale distributed denial-of-service (DDoS) attacks are a persistent and serious danger to the Internet. New attacks that use genuine hypertext transfer protocol requests to overload target resources are harder to trace than application-layer cyberattacks because they originate at the lower layers. Using network flow traces to construct an access matrix, this research presents a machine learning method for detecting distributed denial-of-service attacks. Because the data is multidimensional, independent component analysis is used to decrease the number of attributes utilized in detection: it can translate features into high dimensions and then locate feature subsets. Furthermore, during the training and testing phases of the modified source support vector machine classifier, the detection rate and false alarms can be tracked. The modified source support vector machine is popular for pattern classification because it produces good results compared to other approaches and outperforms other methods in testing, even when given less information about the dataset. To increase the classification rate, the modified source support vector machine is optimized using the BAT and modified Cuckoo Search methods. The obtained findings indicate better performance when compared to standard classifiers.
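A simplified sketch of the detection pipeline (the BAT/Cuckoo Search optimization and the "modified source" SVM variant are omitted, and the component count is an assumption): reduce the flow features with independent component analysis, then classify with a support vector machine.

```python
# Illustrative ICA + SVM detector for DDoS vs. benign flow records.
from sklearn.decomposition import FastICA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

ddos_detector = make_pipeline(
    StandardScaler(),
    FastICA(n_components=10, random_state=0),  # component count is an assumption
    SVC(kernel="rbf"),
)
# ddos_detector.fit(X_flows_train, y_train)
# print(ddos_detector.score(X_flows_test, y_test))
```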
Research on GIS Isolating Switch Mechanical Fault Diagnosis based on Cross-Validation Parameter Optimization Support Vector Machine. 2022 IEEE International Conference on High Voltage Engineering and Applications (ICHVE). :1–4.
2022. GIS equipment is an important component of the power system, and mechanical failures often occur during equipment operation. To enable intelligent detection of mechanical faults in GIS equipment, this paper presents a mechanical fault diagnosis model based on a cross-validation parameter-optimized support vector machine (CV-SVM). First, a vibration experiment on an isolating switch was carried out on a real 110 kV GIS vibration simulation platform. Vibration signals were sampled under three conditions: normal, plum-finger angle-change fault, and plum-finger abrasion fault. Then, the C and g parameters of the SVM were optimized by cross-validation and grid search, and a CV-SVM model for mechanical fault diagnosis was established. Finally, training and verification were carried out using the training and test sets for the different states. The results show that cross-validation parameter optimization can effectively improve the accuracy of the SVM classification model and realize accurate identification of GIS equipment mechanical faults. This method has higher diagnostic efficiency and performance stability than traditional machine learning. This study can provide a reference for on-line monitoring and intelligent fault diagnosis of GIS equipment mechanical vibration.
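The CV-SVM idea maps directly onto a grid search with cross-validation; the grid values below are assumptions, not the paper's settings:

```python
# Illustrative CV-SVM: optimize C and gamma by cross-validated grid search on
# labeled vibration features, then evaluate on held-out switch-state samples.
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]}
cv_svm = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
# cv_svm.fit(X_vibration_train, y_states_train)   # normal / angle / abrasion
# print(cv_svm.best_params_, cv_svm.score(X_vibration_test, y_states_test))
```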
Research and Design of Network Information Security Attack and Defense Practical Training Platform based on ThinkPHP Framework. 2022 2nd Asia-Pacific Conference on Communications Technology and Computer Science (ACCTCS). :27–31.
2022. To address the current scarcity of information security talent, this paper proposes a network information security attack and defense practical training platform based on the ThinkPHP framework. It provides help for areas with limited resources and also offers a communication platform for information security enthusiasts and students. The platform is deployed using ThinkPHP, and, to meet the personalized needs of users, support vector machine algorithms are added to the platform to provide a more convenient service.
DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories. 2022 IEEE Symposium on Security and Privacy (SP). :1157–1174.
2022. Recent advancements in Deep Neural Networks (DNNs) have enabled widespread deployment in multiple security-sensitive domains. The need for resource-intensive training and the use of valuable domain-specific training data have made these models the top intellectual property (IP) for model owners. One of the major threats to DNN privacy is the model extraction attack, in which adversaries attempt to steal sensitive information in DNN models. In this work, we propose DeepSteal, an advanced model extraction framework that, for the first time, steals DNN weights remotely with the aid of a memory side-channel attack. DeepSteal comprises two key stages. First, we develop a new weight-bit information extraction method, called HammerLeak, adopting the rowhammer-based fault technique as the information leakage vector. HammerLeak leverages several novel system-level techniques tailored for DNN applications to enable fast and efficient weight stealing. Second, we propose a novel substitute model training algorithm with a Mean Clustering weight penalty, which effectively leverages the partially leaked bit information and generates a substitute prototype of the target victim model. We evaluate the proposed model extraction framework on three popular image datasets (e.g., CIFAR-10/100/GTSRB) and four DNN architectures (e.g., ResNet-18/34/Wide-ResNet/VGG-11). The extracted substitute model achieves more than 90% test accuracy on deep residual networks for the CIFAR-10 dataset. Moreover, the extracted substitute model can generate effective adversarial input samples to fool the victim model. Notably, it achieves similar performance (i.e., 1–2% test accuracy under attack) to a white-box adversarial input attack (e.g., PGD/Trades).
A Heuristic for an Online Applicability of Anomaly Detection Techniques. 2022 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C). :107–112.
2022. OHODIN is an online extension of the kNN-based ODIN anomaly detection approach for data streams. It provides a detection-threshold heuristic based on extreme value theory. In contrast to sophisticated anomaly and novelty detection approaches, the decision-making process of ODIN is interpretable by humans, making it interesting for certain applications. However, it is limited in terms of the underlying detection method. In this article, we extend OHODIN to further detection techniques to reinforce its capability for online anomaly detection in data streams. We present the algorithm modifications and an experimental evaluation against competing state-of-the-art anomaly detection approaches.
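For context, the offline ODIN scheme that OHODIN extends can be sketched as follows (k and the threshold are illustrative; the online and extreme-value-theory parts are omitted): a point's in-degree in the kNN graph serves as its outlier score, and a low in-degree flags an anomaly.

```python
# Illustrative offline ODIN: count how often each point appears among the
# k nearest neighbors of other points; rarely chosen points are outliers.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def odin_indegree(X, k=10):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)            # idx[:, 0] is the point itself
    indegree = np.zeros(len(X), dtype=int)
    for neighbors in idx[:, 1:]:
        indegree[neighbors] += 1         # count incoming kNN edges
    return indegree                      # low in-degree => likely anomaly

# anomalies = np.where(odin_indegree(X) <= threshold)[0]
```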
Intelligent fault diagnosis technology of power transformer based on Artificial Intelligence. 2022 IEEE 6th Information Technology and Mechatronics Engineering Conference (ITOEC). 6:1968–1971.
2022. The transformer is a key piece of power system equipment, and its stable operation is very important to the security of the power system. In practical application, as technology progresses, transformer performance becomes ever more important, but faults still occur from time to time, and traditional manual fault diagnosis consumes a great deal of time and energy. The rapid development of artificial intelligence technology now provides a new research direction for timely and accurate detection and treatment of transformer faults. This paper proposes a method of transformer fault diagnosis using an artificial neural network. The neural network algorithm is used for offline learning and training on operating-state data from normal and fault states. By adjusting the relationships between neuron nodes, the mapping between fault characteristics and fault location is established through layer-wise learning. Finally, the reasoning process from fault features to fault location is realized, achieving intelligent fault diagnosis.
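A minimal sketch of the described offline training, assuming a generic feed-forward network and placeholder features and labels (the paper does not specify the architecture):

```python
# Illustrative offline training of a feed-forward classifier that maps
# monitored operating-state features to a fault class/location.
from sklearn.neural_network import MLPClassifier

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
# X: rows of monitored quantities (e.g., dissolved-gas or vibration features)
# y: fault class/location labels, including a "normal" class
# clf.fit(X_train, y_train)
# print("diagnosis accuracy:", clf.score(X_test, y_test))
```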
XAI based model evaluation by applying domain knowledge. 2022 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT). :1–6.
2022. Artificial intelligence (AI) is used in decision support systems, which learn and perceive features as a function of the number of layers and the weights computed during training. Due to their inherent black-box nature, it is insufficient to consider accuracy, precision, and recall as the only metrics for evaluating a model's performance. Domain knowledge is also essential to identify the features the model considers significant in arriving at its decision. In this paper, we consider a face mask recognition use case to explain the application and benefits of XAI. Eight models used to solve the face mask recognition problem were selected, and Grad-CAM explainable AI (XAI) is used to explain these state-of-the-art models. Models that selected incorrect features were eliminated even though they had high accuracy. Domain knowledge relevant to face mask recognition, viz. facial feature importance, is applied to identify the model that picked the most appropriate features to arrive at its decision. We demonstrate that models with high accuracy do not necessarily select the right features. In applications requiring rapid deployment, this method can act as a deciding factor in shortlisting models, with a guarantee that the models are looking at the right features for arriving at the classification. Furthermore, the outcomes of the model can be explained to the user, enhancing their confidence in the AI model being deployed in the field.
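A hedged sketch of the Grad-CAM computation used for such audits (the model and layer names are placeholders): weight the chosen convolutional layer's activations by the spatially pooled gradients of the predicted class, then inspect where the resulting heatmap lands on the face.

```python
# Illustrative Grad-CAM via forward/backward hooks in PyTorch.
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_layer):
    acts, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(v=go[0]))
    logits = model(x)                         # x: (1, C, H, W)
    logits[0, logits[0].argmax()].backward()  # gradient of the predicted class
    h1.remove(); h2.remove()
    w = grads["v"].mean(dim=(2, 3), keepdim=True)   # pooled gradients
    cam = F.relu((w * acts["v"]).sum(dim=1))        # weighted activations
    return cam / (cam.max() + 1e-12)  # upsample and overlay on x to inspect

# cam = grad_cam(model, image_batch, model.layer4[-1])  # e.g., a ResNet layer
```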