Biblio

Found 765 results

Filters: Keyword is Training
2022-07-12
Vekaria, Komal Bhupendra, Calyam, Prasad, Wang, Songjie, Payyavula, Ramya, Rockey, Matthew, Ahmed, Nafis.  2021.  Cyber Range for Research-Inspired Learning of “Attack Defense by Pretense” Principle and Practice. IEEE Transactions on Learning Technologies. 14:322–337.
There is an increasing trend in cloud adoption of enterprise applications in, for example, manufacturing, healthcare, and finance. Such applications are routinely subject to targeted cyberattacks, which result in significant loss of sensitive data (e.g., due to data exfiltration in advanced persistent threats) or valuable resources (e.g., due to the exfiltration of computing power in cryptojacking). There is a critical need to train highly skilled cybersecurity professionals who are capable of defending against such targeted attacks. In this article, we present the design, development, and evaluation of the Mizzou Cyber Range, an online platform to learn basic/advanced cyber defense concepts and perform training exercises to engender the next-generation cybersecurity workforce. Mizzou Cyber Range features flexibility, scalability, portability, and extendability in delivering cyberattack/defense learning modules to students. We detail our “research-inspired learning” and “learn-apply-create” three-phase pedagogy methodologies in the development of four learning modules that include laboratory exercises and self-study activities using realistic cloud-based application testbeds. The learning modules allow students to gain skills in using the latest technologies (e.g., elastic capacity provisioning, software-defined everything infrastructure) to implement sophisticated “attack defense by pretense” techniques. Students can also use the learning modules to understand the attacker-defender game in order to create disincentives (i.e., pretense initiation) that make the attacker's tasks more difficult, costly, time-consuming, and uncertain. Lastly, we show the benefits of our Mizzou Cyber Range through the evaluation of student learning using auto-grading, rank assessments with peer standing, and monitoring of students' performance via feedback from prelab evaluation surveys and postlab technical assessments.
2022-07-05
Schoneveld, Liam, Othmani, Alice.  2021.  Towards a General Deep Feature Extractor for Facial Expression Recognition. 2021 IEEE International Conference on Image Processing (ICIP). :2339–2342.
The human face conveys a significant amount of information. Through facial expressions, the face is able to communicate numerous sentiments without the need for verbalisation. Visual emotion recognition has been extensively studied. Recently several end-to-end trained deep neural networks have been proposed for this task. However, such models often lack generalisation ability across datasets. In this paper, we propose the Deep Facial Expression Vector ExtractoR (DeepFEVER), a new deep learning-based approach that learns a visual feature extractor general enough to be applied to any other facial emotion recognition task or dataset. DeepFEVER outperforms state-of-the-art results on the AffectNet and Google Facial Expression Comparison datasets. DeepFEVER’s extracted features also generalise extremely well to other datasets – even those unseen during training – namely, the Real-World Affective Faces (RAF) dataset.
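As a rough illustration of how a general, frozen feature extractor like DeepFEVER can be reused on a new dataset, the sketch below classifies precomputed embedding vectors with a nearest-centroid rule under cosine similarity. The 3-D vectors and labels are made-up stand-ins, not the paper's actual features or method.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nearest_centroid(train, query):
    """train: {label: [feature vectors]}; classify query by closest class centroid."""
    best_label, best_sim = None, -2.0
    for label, vecs in train.items():
        dim = len(vecs[0])
        centroid = [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
        sim = cosine(centroid, query)
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label

# Toy 3-D "embeddings" standing in for extracted deep features (hypothetical values).
train = {
    "happy": [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]],
    "sad":   [[0.1, 0.9, 0.2], [0.0, 0.8, 0.3]],
}
print(nearest_centroid(train, [0.85, 0.15, 0.05]))  # -> happy
```

The point of such a lightweight downstream classifier is that no retraining of the extractor is needed when moving to an unseen dataset.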
Arabian, H., Wagner-Hartl, V., Geoffrey Chase, J., Möller, K..  2021.  Facial Emotion Recognition Focused on Descriptive Region Segmentation. 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). :3415–3418.
Facial emotion recognition (FER) is useful in many different applications and could offer significant benefit as part of feedback systems to train children with Autism Spectrum Disorder (ASD) who struggle to recognize facial expressions and emotions. This project explores the potential of real-time FER based on the use of local regions of interest combined with a machine learning approach. Histogram of Oriented Gradients (HOG) was implemented for feature extraction, along with 3 different classifiers, 2 based on k-Nearest Neighbor and 1 using Support Vector Machine (SVM) classification. Model performance was compared using the accuracy of randomly selected validation sets after training on random training sets of the Oulu-CASIA database. Image classes were distributed evenly, and accuracies of up to 98.44% were observed, with small variation depending on data distributions. The region selection methodology provided a compromise between accuracy and number of extracted features, and validated the hypothesis that a focus on smaller informative regions performs just as well as the entire image.
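The HOG + k-NN pipeline described above can be sketched in miniature: a single global histogram of gradient orientations (the real HOG uses per-cell histograms with block normalisation) followed by a Euclidean k-NN vote. The toy images and labels below are illustrative, not from Oulu-CASIA.

```python
import math
from collections import Counter

def hog_histogram(img, bins=8):
    """Toy HOG descriptor: one global, magnitude-weighted histogram of
    unsigned gradient orientations over a 2-D grayscale image."""
    h, w = len(img), len(img[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]      # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % math.pi      # fold to [0, pi)
            hist[min(int(ang / math.pi * bins), bins - 1)] += mag
    total = sum(hist) or 1.0
    return [v / total for v in hist]                # L1-normalised

def knn_predict(samples, query, k=1):
    """samples: [(histogram, label)]; majority vote over the k nearest."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(h, query)), lbl) for h, lbl in samples
    )
    votes = Counter(lbl for _, lbl in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 7x7 images: a left-to-right ramp (vertical edges) vs a top-to-bottom ramp.
vert = [[x for x in range(7)] for _ in range(7)]
horiz = [[y for _ in range(7)] for y in range(7)]
samples = [(hog_histogram(vert), "vertical-edges"),
           (hog_histogram(horiz), "horizontal-edges")]
print(knn_predict(samples, hog_histogram(vert)))  # -> vertical-edges
```

Restricting the descriptor to selected facial regions, as the paper does, simply means running the same histogram computation over sub-images instead of the whole frame.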
Siyaka, Hassan Opotu, Owolabi, Olumide, Bisallah, I. Hashim.  2021.  A New Facial Image Deviation Estimation and Image Selection Algorithm (Fide-Isa) for Facial Image Recognition Systems: The Mathematical Models. 2021 1st International Conference on Multidisciplinary Engineering and Applied Science (ICMEAS). :1–7.
Deep learning models have been successful and shown to perform better in terms of accuracy and efficiency for facial recognition applications. However, they require huge amounts of well-annotated data samples to be successful. These data requirements lead to complications, including increased processing demands on the systems where such models are deployed. Reducing the training sample sizes of deep learning models is still an open problem. This paper proposes reducing the number of samples required by the convolutional neural network used in training a facial recognition system, using a new Facial Image Deviation Estimation and Image Selection Algorithm (FIDE-ISA). The algorithm was used to select appropriate facial image training samples incrementally based on their facial deviation. This will reduce the need for huge datasets in training deep learning models. Preliminary results indicated 100% accuracy for models trained with 54 images (at least 3 images per individual) and above.
Mukherjee, Debottam, Chakraborty, Samrat, Banerjee, Ramashis, Bhunia, Joydeep.  2021.  A Novel Real-Time False Data Detection Strategy for Smart Grid. 2021 IEEE 9th Region 10 Humanitarian Technology Conference (R10-HTC). :1–6.
State estimation algorithms ensure effective real-time monitoring of the modern smart grid, leading to accurate determination of the current operating states. Recently, a new genre of data integrity attack, namely the false data injection attack (FDIA), has shown its deleterious effects by bypassing traditional bad data detection techniques. Modern grid operators must detect the presence of such attacks in the raw field measurements to guarantee safe and reliable operation of the grid. State-forecasting-based FDIA identification schemes have recently shown their efficacy by determining the deviation of the estimated states due to an attack. This work emphasizes a scalable deep learning state forecasting model which can accurately determine the presence of FDIA in real time. An optimal set of hyper-parameters of the proposed architecture leads to effective forecasting of the operating states with minimal error. A diligent comparison with other state-of-the-art forecasting strategies demonstrates the effectiveness of the proposed neural network. A comprehensive analysis on the IEEE 14-bus test bench validates the proposed real-time attack identification strategy.
Parizad, Ali, Hatziadoniu, Constantine.  2021.  Semi-Supervised False Data Detection Using Gated Recurrent Units and Threshold Scoring Algorithm. 2021 IEEE Power & Energy Society General Meeting (PESGM). :01–05.
In recent years, cyber attackers have targeted the power system, imposing various damages on the national economy and public safety. The False Data Injection Attack (FDIA) is one of the main types of cyber-physical attack, in which adversaries manipulate power system measurements and modify system data. Consequently, it may result in incorrect decision-making and control operations and lead to devastating effects. In this paper, we propose a two-stage detection method. In the first stage, a Gated Recurrent Unit (GRU), as a deep learning algorithm, is employed to forecast the data for the future horizon, while hyperparameter optimization finds the optimum parameters (i.e., number of layers, epochs, batch size, β1, β2, etc.) in the supervised learning process. In the second stage, an unsupervised scoring algorithm is employed to find sequences of false data. Furthermore, two penalty factors are defined to prevent the objective function from greedy behavior. We assess the capability of the proposed false data detection method through simulation studies on a real-world data set (the ComEd dataset, Northern Illinois, USA). The results demonstrate that the proposed method can detect different types of attacks, i.e., scaling, simple ramp, professional ramp, and random attacks, with good performance metrics (i.e., recall, precision, F1 score). Furthermore, the proposed deep learning method can mitigate false data with the estimated true values.
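The second stage of such forecast-then-score schemes can be sketched simply: compare measurements against the forecast, and flag only contiguous runs of large residuals, where the run-length requirement plays the role of a penalty discouraging greedy single-point detections. The threshold and run-length values below are illustrative, not the paper's.

```python
def flag_false_data(measured, forecast, tau=3.0, min_run=2):
    """Return (start, end) index pairs of runs of at least `min_run`
    consecutive samples whose residual |measured - forecast| exceeds tau."""
    flags = [abs(m - f) > tau for m, f in zip(measured, forecast)]
    runs, start = [], None
    for i, f in enumerate(flags + [False]):   # sentinel closes a trailing run
        if f and start is None:
            start = i
        elif not f and start is not None:
            if i - start >= min_run:
                runs.append((start, i - 1))
            start = None
    return runs

forecast = [100, 101, 102, 103, 104, 105]
measured = [100, 101, 110, 112, 104, 120]    # a ramp-like injection on samples 2-3
print(flag_false_data(measured, forecast))   # -> [(2, 3)]
```

The isolated spike at index 5 is deliberately not flagged: a single large residual is more likely forecast error than a sustained injection.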
Tufail, Shahid, Batool, Shanzeh, Sarwat, Arif I..  2021.  False Data Injection Impact Analysis In AI-Based Smart Grid. SoutheastCon 2021. :01–07.
As traditional grids transition to the smart grid, they become more prone to cyber-attacks. Among these, one of the most dangerous is the false data injection attack. When this attack is performed using historical information about the data packets, it goes undetected. As the false data is included in training and testing the model, accuracy decreases and decision-making is affected. In this paper we analyzed the impact of the false data injection attack (FDIA) on an AI-based smart grid. These analyses were performed using two different multi-layer perceptron architectures, with one of the independent variables being compared and modified by the attacker. The root-mean-squared error values were compared across the different models.
2022-07-01
Cody, Tyler, Beling, Peter A..  2021.  Heterogeneous Transfer in Deep Learning for Spectrogram Classification in Cognitive Communications. 2021 IEEE Cognitive Communications for Aerospace Applications Workshop (CCAAW). :1–5.
Machine learning offers performance improvements and novel functionality, but its life cycle performance is understudied. In areas like cognitive communications, where systems are long-lived, life cycle trade-offs are key to system design. Herein, we consider the use of deep learning to classify spectrograms. We vary the label-space over which the network makes classifications, as may emerge with changes in use over a system’s life cycle, and compare heterogeneous transfer learning performance across label-spaces between model architectures. Our results offer an empirical example of life cycle challenges to using machine learning for cognitive communications. They evidence important trade-offs among performance, training time, and sensitivity to the order in which the label-space is changed. And they show that fine-tuning can be used in the heterogeneous transfer of spectrogram classifiers.
Hashim, Aya, Medani, Razan, Attia, Tahani Abdalla.  2021.  Defences Against web Application Attacks and Detecting Phishing Links Using Machine Learning. 2020 International Conference on Computer, Control, Electrical, and Electronics Engineering (ICCCEEE). :1–6.
In recent years, an estimated 30,000 web applications are hacked every day, and in most cases web developers or website owners do not even have enough knowledge about what is happening on their sites. Web hackers can use many attacks to gain entry to or compromise legitimate web applications, and they can also deceive people by using phishing sites to collect sensitive and private information. In response to this, proper measures must be taken to understand the risks and be aware of the vulnerabilities that may affect the website and hence the normal business flow. In the scope of this study, mitigations against the most common web application attacks are set out, and the web administrator is provided with ways to detect phishing links, which constitute a social engineering attack. The study also demonstrates the generation of web application logs that simplify the process of analyzing the actions of abnormal users, showing when behavior is out of bounds, out of scope, or against the rules. The mitigations are accomplished through secure coding techniques, and phishing link detection is performed by various machine learning algorithms and deep learning techniques. The developed application has been tested and evaluated against various attack scenarios; the outcomes showed that the website successfully mitigated these dangerous web application attacks, and for the phishing link detection part a comparison was made between different algorithms to find the best one, with the best model achieving 98% accuracy.
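Phishing-link classifiers are typically fed lexical URL features of the kind sketched below. The hand-weighted score is only a stand-in for a trained model's decision function, and the feature set and example URL are illustrative, not the study's.

```python
from urllib.parse import urlparse

def url_features(url):
    """Lexical features commonly used by phishing-URL classifiers."""
    p = urlparse(url)
    host = p.netloc.split(":")[0]
    return {
        "length": len(url),
        "has_at": "@" in url,                         # '@' can hide the real host
        "num_hyphens": host.count("-"),
        "num_subdomains": max(host.count(".") - 1, 0),
        "uses_https": p.scheme == "https",
        "has_ip_host": host.replace(".", "").isdigit(),
    }

def suspicion_score(f):
    """Hand-weighted stand-in for a trained classifier: higher = more phishing-like."""
    score = 0
    score += f["length"] > 75
    score += f["has_at"]
    score += f["num_hyphens"] > 2
    score += f["num_subdomains"] > 2
    score += not f["uses_https"]
    score += f["has_ip_host"]
    return score

f = url_features("http://192.168.10.5/secure-login-update-account@example.com")
print(suspicion_score(f))  # -> 3 (has '@', no HTTPS, raw IP host)
```

In practice the same feature dictionary would be vectorised and passed to a learned model rather than summed with fixed weights.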
Soltani, Sanaz, Shojafar, Mohammad, Mostafaei, Habib, Pooranian, Zahra, Tafazolli, Rahim.  2021.  Link Latency Attack in Software-Defined Networks. 2021 17th International Conference on Network and Service Management (CNSM). :187–193.
Software-Defined Networking (SDN) has found applications in different domains, including wired and wireless networks. The SDN controller has a global view of the network topology, which is vulnerable to topology poisoning attacks, e.g., link fabrication and host-location hijacking. Adversaries can leverage these attacks to monitor flows or drop them. Current defence systems such as TopoGuard and TopoGuard+ can detect such attacks. In this paper, we introduce the Link Latency Attack (LLA), which can successfully bypass the defence mechanisms of these systems. In LLA, the adversary can add a fake link to the network and corrupt the controller's view of the network topology. This can be accomplished by compromising the end hosts, without the need to attack the SDN-enabled switches. We develop a Machine Learning-based Link Guard (MLLG) system to provide the required defence against LLA. We test the performance of our system using an emulated network on Mininet, and the obtained results show an accuracy of 98.22% in detecting the attack. Interestingly, MLLG improves the detection accuracy of TopoGuard+ by 16%.
2022-06-30
Cao, Yu.  2021.  Digital Character CAPTCHA Recognition Using Convolution Network. 2021 2nd International Conference on Computing and Data Science (CDS). :130–135.
Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is a type of automatic program used to determine whether a user is human. The most common type of CAPTCHA distorts letters and adds slight background noise to an image so that the text serves as a verification code. In this paper, we first introduce the basics of the Convolutional Neural Network (CNN). Then, building on handwritten digit recognition with CNNs, we develop a network for CAPTCHA image recognition.
2022-06-14
Singh, A K, Goyal, Navneet.  2021.  Detection of Malicious Webpages Using Deep Learning. 2021 IEEE International Conference on Big Data (Big Data). :3370–3379.
Malicious webpages have been a serious threat on the Internet for the past few years. As per the latest Google Transparency reports, they continue to be top ranked amongst online threats. Various techniques have been used to date to identify malicious sites, including static heuristics, honey clients, and machine learning. Recently, with the rapid rise of deep learning, interest has arisen in exploring deep learning techniques for detecting malicious webpages. In this paper, deep learning is utilized for such classification. The model proposed in this research uses a Deep Neural Network (DNN) with two hidden layers to distinguish between malicious and benign webpages. This DNN model gave a high accuracy of 99.81% with very low False Positives (FP) and False Negatives (FN), and near real-time response on test samples. The model outperformed earlier machine learning solutions in accuracy, precision, recall, and time performance metrics.
Schneider, Madeleine, Aspinall, David, Bastian, Nathaniel D..  2021.  Evaluating Model Robustness to Adversarial Samples in Network Intrusion Detection. 2021 IEEE International Conference on Big Data (Big Data). :3343–3352.
Adversarial machine learning, a technique which seeks to deceive machine learning (ML) models, threatens the utility and reliability of ML systems. This is particularly relevant in critical ML implementations such as those found in Network Intrusion Detection Systems (NIDS). This paper considers the impact of adversarial influence on NIDS and proposes ways to improve ML based systems. Specifically, we consider five feature robustness metrics to determine which features in a model are most vulnerable, and four defense methods. These methods are tested on six ML models with four adversarial sample generation techniques. Our results show that across different models and adversarial generation techniques, there is limited consistency in vulnerable features or in effectiveness of defense method.
2022-06-13
Gupta, B. B., Gaurav, Akshat, Peraković, Dragan.  2021.  A Big Data and Deep Learning based Approach for DDoS Detection in Cloud Computing Environment. 2021 IEEE 10th Global Conference on Consumer Electronics (GCCE). :287–290.
Recently, as a result of the COVID-19 pandemic, internet services have seen an upsurge in use. As a result, the usage of cloud computing apps, which offer services to end users on a subscription basis, rises in this situation. However, the availability and efficiency of cloud computing resources are impacted by DDoS attacks, which are designed to disrupt the availability and processing power of cloud computing services. Because there is no effective way of detecting or filtering DDoS attacks, they are a dependable weapon for cyber-attackers. Recently, researchers have been experimenting with machine learning (ML) methods in order to create efficient ML-based strategies for detecting DDoS assaults. In this context, we propose a technique for detecting DDoS attacks in a cloud computing environment using big data and deep learning algorithms. The proposed technique utilises big data Spark technology to analyse a large number of incoming packets and a deep learning algorithm to filter malicious packets. The KDDCUP99 dataset was used for training and testing, and an accuracy of 99.73% was achieved.
2022-06-09
Cobb, Adam D., Jalaian, Brian A., Bastian, Nathaniel D., Russell, Stephen.  2021.  Robust Decision-Making in the Internet of Battlefield Things Using Bayesian Neural Networks. 2021 Winter Simulation Conference (WSC). :1–12.
The Internet of Battlefield Things (IoBT) is a dynamically composed network of intelligent sensors and actuators that operate as a command and control, communications, computers, and intelligence complex-system with the aim to enable multi-domain operations. The use of artificial intelligence can help transform the IoBT data into actionable insight to create information and decision advantage on the battlefield. In this work, we focus on how accounting for uncertainty in IoBT systems can result in more robust and safer systems. Human trust in these systems requires the ability to understand and interpret how machines make decisions. Most real-world applications currently use deterministic machine learning techniques that cannot incorporate uncertainty. In this work, we focus on the machine learning task of classifying vehicles from their audio recordings, comparing deterministic convolutional neural networks (CNNs) with Bayesian CNNs to show that correctly estimating the uncertainty can help lead to robust decision-making in IoBT.
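The kind of uncertainty a Bayesian CNN provides can be illustrated without the network itself: given several stochastic forward passes (e.g., Monte Carlo dropout samples), compute the mean prediction and its predictive entropy, and treat high entropy as a signal to defer rather than act. The softmax samples below are hypothetical, not from the paper's vehicle-audio experiments.

```python
import math

def predictive_stats(mc_probs):
    """mc_probs: list of softmax vectors from stochastic forward passes.
    Returns the mean prediction and its predictive entropy (nats)."""
    n, k = len(mc_probs), len(mc_probs[0])
    mean = [sum(s[i] for s in mc_probs) / n for i in range(k)]
    entropy = -sum(p * math.log(p) for p in mean if p > 0)
    return mean, entropy

# Hypothetical samples for a 3-class vehicle-audio classifier.
confident = [[0.90, 0.05, 0.05], [0.92, 0.04, 0.04], [0.88, 0.07, 0.05]]
uncertain = [[0.60, 0.30, 0.10], [0.20, 0.50, 0.30], [0.30, 0.20, 0.50]]

_, h1 = predictive_stats(confident)
_, h2 = predictive_stats(uncertain)
print(h1 < h2)  # -> True: the uncertain input has higher predictive entropy
```

A simple robust policy then acts on the mean prediction only when the entropy is below a threshold, otherwise escalating to a human operator.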
Karim, Hassan, Rawat, Danda B..  2021.  Evaluating Machine Learning Classifiers for Data Sharing in Internet of Battlefield Things. 2021 IEEE Symposium Series on Computational Intelligence (SSCI). :01–07.
The most widely used method to prevent adversaries from eavesdropping on sensitive sensor, robot, and war fighter communications is mathematically strong cryptographic algorithms. However, prevailing cryptographic protocol mandates are often made without consideration of the resource constraints of devices in the Internet of Battlefield Things (IoBT). In this article, we address the challenges of IoBT sensor data exchange in contested environments. Battlefield IoT (Internet of Things) devices need to exchange data and receive feedback from other devices, such as tanks and command-and-control infrastructure, for analysis, tracking, and real-time engagement. Since data in IoBT systems may be massive or sparse, we introduced a machine learning classifier to determine what type of data to transmit under what conditions. We compared Support Vector Machine, Bayes Point Machine, Boosted Decision Trees, Decision Forests, and Decision Jungles on their abilities to recommend the optimal confidentiality-preserving data and transmission path considering dynamic threats. We created a synthesized dataset that simulates platoon maneuvers and IED detection components. We found Decision Jungles to produce the most accurate results while requiring the fewest resources during training. We also introduced the JointField blockchain network for joint and allied force data sharing. With our classifier, strategists and system designers will be able to enable adaptive responses to threats while engaged in real-time field conflict.
Deshmukh, Monika S., Bhaladhare, Pavan Ravikesh.  2021.  Intrusion Detection System (DBN-IDS) for IoT using Optimization Enabled Deep Belief Neural Network. 2021 5th International Conference on Information Systems and Computer Networks (ISCON). :1–4.
In the era of the Internet of Things (IoT), connection links are established easily between devices, which makes them vulnerable to attacks from intruders; hence an intrusion detection system for IoT is the need of the hour. One of the important tasks for any organization is securing confidential information and data against outside attacks as well as unauthorized access. Many attempts have been made by researchers to develop strong intrusion detection systems with high accuracy. These systems suffer from many disadvantages, such as unacceptable accuracy rates, including high False Positive Rates (FPR) and high False Negative Rates (FNR), long execution times, and high failure rates. Most of these system models are developed using traditional machine learning techniques, which have performance limitations in terms of both accuracy and timeliness. These limitations can be overcome by using deep learning techniques, which have the capability to generate highly accurate results and are fault tolerant. Here, the intrusion detection model for IoT is designed using Taylor-Spider Monkey Optimization (Taylor-SMO), developed to train a Deep Belief Neural Network (DBN) towards achieving an accurate intrusion detection model. Deep learning accuracy increases with increasing numbers of training and testing data samples. The optimization-based algorithm for training the DBN helps to reduce the FPR and FNR in intrusion detection. The system is implemented using the NSL-KDD dataset; the model is trained using samples from this dataset, after feature extraction is applied and only a relevant set of attributes is selected for model development. This approach can lead to better and more satisfactory results in intrusion detection.
Hoarau, Kevin, Tournoux, Pierre Ugo, Razafindralambo, Tahiry.  2021.  Suitability of Graph Representation for BGP Anomaly Detection. 2021 IEEE 46th Conference on Local Computer Networks (LCN). :305–310.
The Border Gateway Protocol (BGP) is in charge of route exchange at the Internet scale. Anomalies in BGP can have several causes (misconfiguration, outages, and attacks). These anomalies are classified into large- or small-scale anomalies. Machine learning models are used to analyze and detect anomalies from the complex data extracted from BGP behavior. Two types of data representation can be used inside the machine learning models: a graph representation of the network (graph features) or a statistical computation on the data (statistical features). In this paper, we evaluate and compare the accuracy of machine learning models using graph features and statistical features on both large- and small-scale BGP anomalies. We show that statistical features have better accuracy for large-scale anomalies, while graph features increase the detection accuracy by 15% for small-scale anomalies and are well suited for BGP small-scale anomaly detection.
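The two feature families compared above can be sketched side by side: volume-style statistics computed over BGP update messages versus structural features computed from the AS-level graph. The update records, field names, and AS numbers below are made-up stand-ins, not the paper's feature set.

```python
from collections import defaultdict

def statistical_features(updates):
    """Per-snapshot statistics over BGP update messages (volume-style features)."""
    return {
        "num_announcements": sum(1 for u in updates if u["type"] == "A"),
        "num_withdrawals": sum(1 for u in updates if u["type"] == "W"),
        "num_origins": len({u["origin"] for u in updates}),
    }

def graph_features(as_links):
    """Structural features from the AS-level graph: edge count and max degree."""
    deg = defaultdict(int)
    for a, b in as_links:
        deg[a] += 1
        deg[b] += 1
    return {"num_edges": len(as_links), "max_degree": max(deg.values(), default=0)}

updates = [
    {"type": "A", "origin": 64500},
    {"type": "A", "origin": 64501},
    {"type": "W", "origin": 64500},
]
links = [(64500, 64501), (64501, 64502), (64501, 64503)]
print(statistical_features(updates))  # 2 announcements, 1 withdrawal, 2 origins
print(graph_features(links))          # 3 edges, max degree 3 (AS 64501)
```

The intuition behind the paper's result is visible even here: a small-scale anomaly may barely move the volume counters while still reshaping local graph structure around a few ASes.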
Alsyaibani, Omar Muhammad Altoumi, Utami, Ema, Hartanto, Anggit Dwi.  2021.  An Intrusion Detection System Model Based on Bidirectional LSTM. 2021 3rd International Conference on Cybernetics and Intelligent System (ICORIS). :1–6.
An Intrusion Detection System (IDS) is used to identify malicious traffic on the network. Apart from rule-based IDS, machine learning- and deep learning-based IDS are also being developed to improve detection accuracy. In this study, the public dataset CIC IDS 2017 was used in developing a deep learning-based IDS, because this dataset contains new types of attacks and meets the criteria for an intrusion detection dataset. The dataset was split into training data, validation data, and test data. We propose a Bidirectional Long Short-Term Memory (LSTM) network. We created 24 scenarios with various changes in training parameters, each trained for 100 epochs. The training parameters used as research variables are the optimizer, activation function, and learning rate. In addition, a Dropout layer and L2-regularizer were implemented in every scenario. The results show that the model using the Adam optimizer, Tanh activation function, and a learning rate of 0.0001 produced the highest accuracy compared to the other scenarios; accuracy and F1 score reached 97.7264% and 97.7516%. The best model was trained again for up to 1000 iterations, and performance increased to 98.3448% accuracy and 98.3793% F1 score. These results exceed several previous works on the same dataset.
Iashvili, Giorgi, Iavich, Maksim, Bocu, Razvan, Odarchenko, Roman, Gnatyuk, Sergiy.  2021.  Intrusion Detection System for 5G with a Focus on DOS/DDOS Attacks. 2021 11th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS). 2:861–864.
The telecommunications industry is being transformed towards 5G technology, because it has to deal with emerging and existing use cases. 5G wireless networks need rather large data rates, much higher coverage with dense base station deployment and bigger capacity, much better Quality of Service (QoS), and very low latency [1–3]. The provision of the services envisioned by 5G requires new service deployment models, networking architectures, processing technologies, and storage to be defined. These technologies will cause new problems for the cybersecurity of 5G systems and the security of their functionality. Developers and researchers working in this field are doing their best to secure 5G systems. Researchers have shown that 5G systems have security challenges, and have found vulnerabilities that allow attackers to integrate malicious code into the system and perform different types of illegitimate actions. MNmap, battery drain attacks, and MiTM can be successfully implemented on 5G. This paper analyses the existing cybersecurity problems in 5G technology. Based on the analysis, we suggest a novel Intrusion Detection System (IDS) based on machine-learning algorithms. In related papers, scientists propose using NSL-KDD to train the IDS. In our paper, we propose training the IDS using large datasets of DOS/DDOS attacks, in addition to training on NSL-KDD. The research also offers a methodology for integrating the proposed intrusion detection system into a standard 5G architecture. The paper also provides the pseudo-code of the designed system.
2022-06-08
Chen, Lin, Qiu, Huijun, Kuang, Xiaoyun, Xu, Aidong, Yang, Yiwei.  2021.  Intelligent Data Security Threat Discovery Model Based on Grid Data. 2021 6th International Conference on Image, Vision and Computing (ICIVC). :458–463.
With the rapid construction and popularization of the smart grid, the security of its data has become the basis for its safe and stable operation. This paper proposes a data security threat discovery model for the smart grid. Based on a predictive data analysis method combined with transfer learning, it analyzes different data, uses a data matching process to classify losses, accurately predicts the analysis results, finds security risks in the data, and prevents illegal acquisition of data. The reinforcement learning and training process of this method distinguishes legitimate authentication from illegal access to data.
Imtiaz, Sayem Mohammad, Sultana, Kazi Zakia, Varde, Aparna S..  2021.  Mining Learner-friendly Security Patterns from Huge Published Histories of Software Applications for an Intelligent Tutoring System in Secure Coding. 2021 IEEE International Conference on Big Data (Big Data). :4869–4876.

Security patterns are proven solutions to recurring problems in software development. The growing importance of secure software development has motivated diverse research efforts on security patterns, mostly focused on classification schemes, evolution, and evaluation of the patterns. Despite a long, mature history of research and popularity among researchers, security patterns have not fully penetrated software development practices. Moreover, software security education has not benefited from these patterns, even though a commonly stated motivation is the dissemination of expert knowledge and experience. This is because the patterns lack a simple embodiment to help students learn about vulnerable code and to guide new developers on secure coding. In order to address this problem, we propose to conduct intelligent data mining in the context of software engineering to discover learner-friendly software security patterns. Our proposed model entails knowledge discovery from large-scale published real-world vulnerability histories in software applications. We harness association rule mining for frequent pattern discovery to mine easily comprehensible and explainable learner-friendly rules, mainly of the type "flaw implies fix" and "attack type implies flaw", so as to enhance training in secure coding, which in turn would augment secure software development. We propose to build a learner-friendly intelligent tutoring system (ITS) based on the newly discovered security patterns and rules. We present our proposed model, based on association rule mining in secure software development, with the goal of building this ITS. Our proposed model and prototype experiments are discussed in this paper along with challenges and ongoing work.
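Rules of the "flaw implies fix" type can be mined with a simplified, single-antecedent form of association rule mining: count (antecedent, consequent) pairs across records and keep rules with sufficient support and confidence. The (flaw, fix) records and thresholds below are hypothetical examples, not data from the study.

```python
from collections import Counter

def mine_rules(records, min_support=2, min_conf=0.6):
    """Mine 'antecedent implies consequent' rules from (antecedent, consequent)
    pairs, keeping those meeting the support and confidence thresholds."""
    pair_count = Counter(records)
    ante_count = Counter(a for a, _ in records)
    rules = []
    for (a, c), n in pair_count.items():
        conf = n / ante_count[a]                 # confidence = P(c | a)
        if n >= min_support and conf >= min_conf:
            rules.append((a, c, n, round(conf, 2)))
    return sorted(rules)

# Hypothetical (flaw, fix) pairs extracted from vulnerability histories.
records = [
    ("sql-injection", "parameterized-query"),
    ("sql-injection", "parameterized-query"),
    ("sql-injection", "input-validation"),
    ("xss", "output-encoding"),
    ("xss", "output-encoding"),
]
for rule in mine_rules(records):
    print(rule)   # e.g. ('xss', 'output-encoding', 2, 1.0)
```

The same counting generalises to "attack type implies flaw" rules by swapping what goes into the antecedent and consequent positions.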

Guo, Jiansheng, Qi, Liang, Suo, Jiao.  2021.  Research on Data Classification of Intelligent Connected Vehicles Based on Scenarios. 2021 International Conference on E-Commerce and E-Management (ICECEM). :153–158.
The intelligent connected vehicle industry has entered a period of opportunity: industry data is accumulating rapidly, and the formulation of industry standards to regulate big data management and application is imminent. As the basis of data security, data classification has received unprecedented attention. By reviewing the research and development status of data classification in various industries, this article combines industry characteristics, re-examines the framework of industry data classification from the two aspects of information security and data assetization, and tries to find the balance point between data security and data value. Taking into account the characteristics of the connected vehicle industry, it eventually proposes a scene-based hierarchical framework. The framework includes a complete classification process, model, and quantifiable parameters, which provides a solution and theoretical endorsement for the construction of a big data automatic classification system for the intelligent connected vehicle industry and for safe, open data applications.
Wang, Runhao, Kang, Jiexiang, Yin, Wei, Wang, Hui, Sun, Haiying, Chen, Xiaohong, Gao, Zhongjie, Wang, Shuning, Liu, Jing.  2021.  DeepTrace: A Secure Fingerprinting Framework for Intellectual Property Protection of Deep Neural Networks. 2021 IEEE 20th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :188–195.

Deep Neural Networks (DNNs) have achieved great success in solving several challenging problems in recent years. It is well known that training a DNN model from scratch requires a lot of data and computational resources. However, using a pre-trained model directly, or using it to initialize weights, costs less time and often gets better results. Therefore, well pre-trained DNN models are valuable intellectual property that we should protect. In this work, we propose DeepTrace, a framework for model owners to secretly fingerprint the target DNN model using a special trigger set and verify ownership from its outputs. An embedded fingerprint can be extracted to uniquely identify the model owner and authorized users. Our framework benefits from both white-box and black-box verification, which makes it useful whether or not we know the model details. We evaluate the performance of DeepTrace on two different datasets, with different DNN architectures. Our experiments show that, with the advantages of combining white-box and black-box verification, our framework has very little effect on model accuracy and is robust against different model modifications. It also consumes very little computing resource when extracting the fingerprint.
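The black-box side of trigger-set fingerprinting can be sketched in miniature: hash the model's predictions on a secret trigger set, and claim ownership when a suspect model reproduces the registered hash. The threshold "models", trigger inputs, and registration step below are toy stand-ins, not DeepTrace's actual scheme.

```python
import hashlib

def fingerprint(model, trigger_set):
    """Black-box fingerprint: hash of the model's predicted labels on a
    secret trigger set. Only the hash and triggers need to be stored."""
    labels = ",".join(str(model(x)) for x in trigger_set)
    return hashlib.sha256(labels.encode()).hexdigest()

# Toy "models": threshold classifiers standing in for DNNs (hypothetical).
owner_model  = lambda x: int(x > 0.5)
stolen_model = lambda x: int(x > 0.5)   # behaves identically on the triggers
clean_model  = lambda x: int(x > 0.9)   # independently trained, differs

triggers = [0.2, 0.6, 0.8, 0.95]        # secret inputs kept by the owner
registered = fingerprint(owner_model, triggers)

print(fingerprint(stolen_model, triggers) == registered)  # -> True
print(fingerprint(clean_model, triggers) == registered)   # -> False
```

A real scheme must additionally tolerate model modifications (fine-tuning, pruning), which is why matching is done on carefully chosen trigger behaviour rather than on raw weights.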

2022-06-06
Jobst, Matthias, Liu, Chen, Partzsch, Johannes, Yan, Yexin, Kappel, David, Gonzalez, Hector A., Ji, Yue, Vogginger, Bernhard, Mayr, Christian.  2020.  Event-based Neural Network for ECG Classification with Delta Encoding and Early Stopping. 2020 6th International Conference on Event-Based Control, Communication, and Signal Processing (EBCCSP). :1–4.
We present a scalable architecture based on a trained filter bank for input pre-processing and a recurrent neural network (RNN) for the detection of atrial fibrillation in electrocardiogram (ECG) signals, with a focus on enabling a very efficient hardware implementation as an application-specific integrated circuit (ASIC). Our already very efficient base architecture is further improved by replacing the RNN with a delta-encoded gated recurrent unit (GRU) and adding a confidence measure (CM) for terminating the computation as early as possible. With these optimizations, we demonstrate a reduction of the processing load by 58% on an internal dataset while still achieving near state-of-the-art classification results on the PhysioNet ECG dataset with only 1202 parameters.
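The delta-encoding idea behind a delta-GRU can be shown on a plain signal: emit an event only when the input changes by more than a threshold since the last emitted value, so recurrent updates are computed only for those events. The signal values and threshold below are illustrative, not from the paper.

```python
def delta_encode(samples, threshold=0.1):
    """Return (index, value) events for samples that differ from the last
    emitted value by more than `threshold`; all other steps are skipped."""
    events, last = [], None
    for i, x in enumerate(samples):
        if last is None or abs(x - last) > threshold:
            events.append((i, x))
            last = x
    return events

ecg = [0.00, 0.02, 0.03, 0.50, 0.52, 0.10, 0.11]   # toy sample stream
events = delta_encode(ecg)
print(events)                        # -> [(0, 0.0), (3, 0.5), (5, 0.1)]
print(1 - len(events) / len(ecg))    # fraction of update steps skipped
```

In hardware, each skipped step is a GRU update that never has to be computed, which is the source of the processing-load reduction the paper reports (together with early stopping via the confidence measure).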