Biblio
Filters: Keyword is machine learning
Comparison Of Different Machine Learning Methods Applied To Obesity Classification. 2022 International Conference on Machine Learning and Intelligent Systems Engineering (MLISE). :467—472.
2022. Estimation of obesity levels has always been an important topic in the medical field, since it can provide useful guidance for people who would like to lose weight or keep fit. This article tries to find a model that can predict obesity and provide people with information on how to avoid becoming overweight. More specifically, the article applied dimension reduction to the dataset to simplify the data and tried to figure out the most decisive feature of obesity through Principal Component Analysis (PCA). The article also used machine learning methods such as Support Vector Machine (SVM) and Decision Tree to predict obesity and to find its major cause. In addition, the article used an Artificial Neural Network (ANN), which has more powerful feature-extraction ability, for prediction. Finally, the article found that family history of obesity is the most decisive feature, possibly because obesity is strongly affected by genes or because family eating habits have a great influence. Both the ANN and the Decision Tree achieved prediction accuracies higher than 90%.
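For illustration, a minimal scikit-learn sketch of the PCA step this abstract describes, inspecting which feature carries the most weight on the first principal component. The file name obesity.csv and the label column are placeholders, not details from the paper:

```python
# Sketch: PCA on an obesity dataset to see which feature dominates the
# first principal component. File and column names are hypothetical,
# and all features are assumed numeric.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

df = pd.read_csv("obesity.csv")            # hypothetical dataset
X = df.drop(columns=["obesity_level"])     # hypothetical label column
X_scaled = StandardScaler().fit_transform(X)

pca = PCA(n_components=2)
pca.fit(X_scaled)

# The loading with the largest absolute weight on PC1 hints at the most
# decisive feature (the paper reports family history of obesity).
loadings = pd.Series(pca.components_[0], index=X.columns)
print(loadings.abs().sort_values(ascending=False).head())
print("explained variance:", pca.explained_variance_ratio_)
```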
Fashion Images Classification using Machine Learning, Deep Learning and Transfer Learning Models. 2022 7th International Conference on Image and Signal Processing and their Applications (ISPA). :1—5.
2022. Fashion, the way we present ourselves, mainly focuses on vision and has attracted great interest from computer vision researchers. It is commonly used to search fashion products in online shopping malls and to obtain descriptive information about products. The main objective of our paper is to use deep learning (DL) and machine learning (ML) methods to correctly identify and categorize clothing images. In this work, we used ML algorithms (Support Vector Machines (SVM), K-Nearest Neighbors (KNN), Decision Tree (DT), Random Forest (RF)), DL algorithms (Convolutional Neural Networks (CNN), AlexNet, GoogleNet, LeNet, LeNet5) and transfer learning using pretrained models (VGG16, MobileNet and ResNet50). We trained and tested our models online using Google Colaboratory with the TensorFlow/Keras and Scikit-Learn libraries that support deep learning and machine learning in Python. The main metrics used in our study to evaluate the performance of the ML and DL algorithms are accuracy and the confusion matrix. The best result among the ML models is obtained with the ANN (88.71%) and among the DL models with the GoogleNet architecture (93.75%). The results show that the number of epochs and the depth of the network affect the quality of the results.
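As a rough sketch of the transfer-learning setup mentioned here, the following Keras snippet freezes a pretrained MobileNet backbone and adds a small classification head; the image size and class count are assumptions, not values from the paper:

```python
# Sketch: transfer learning with a pretrained MobileNet backbone.
# Input shape and the 10-class head are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, input_shape=(128, 128, 3))
base.trainable = False                      # freeze pretrained weights

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"), # e.g. 10 clothing classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```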
Empirical Research on Multifactor Quantitative Stock Selection Strategy Based on Machine Learning. 2022 3rd International Conference on Pattern Recognition and Machine Learning (PRML). :380—383.
2022. Stock selection strategy design based on machine learning and multi-factor analysis is a research hotspot in the quantitative investment field. In this paper, four machine learning algorithms, including support vector machine, gradient boosting regression, random forest and linear regression, are used to predict the rise and fall of stocks, taking stock fundamentals as input variables, and a portfolio strategy is constructed on this basis. Finally, the stock selection strategy is further optimized. The empirical results show that the multifactor quantitative stock selection strategy selects stocks well, and yield performance is best under the support vector machine algorithm. As the number of factors increases, there is an inverse relationship between the goodness of fit and the yield under the various algorithms.
Comparison of Different Machine Learning Algorithms Based on Intrusion Detection System. 2022 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COM-IT-CON). 1:667—672.
2022. An IDS is a system that helps detect any kind of suspicious activity on a computer network. It can identify suspicious activity at both levels, i.e., locally at the system level and in transit at the network level. Since the system does not have its own dataset, it is inefficient at identifying unknown attacks. To overcome this inefficiency, we make use of ML, which assists in analysing and categorizing attacks on diverse datasets. In this study, the efficacy of eight machine learning algorithms is assessed on KDD Cup 99. Based on our implementation and analysis, among the eight algorithms considered, Support Vector Machine (SVM), Random Forest (RF) and Decision Tree (DT) have the highest testing accuracy, with SVM having the highest accuracy of all.
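The side-by-side accuracy comparison this abstract describes boils down to a loop like the following sketch; synthetic data stands in for KDD Cup 99 here, and the hyperparameters are defaults, not the paper's:

```python
# Sketch: compare classifier test accuracy, as in the paper's study.
# make_classification is a stand-in for the real intrusion dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("SVM", SVC()),
                  ("Random Forest", RandomForestClassifier()),
                  ("Decision Tree", DecisionTreeClassifier())]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```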
Sentiment Analysis of Covid19 Vaccines Tweets Using NLP and Machine Learning Classifiers. 2022 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COM-IT-CON). 1:225—230.
2022. Sentiment Analysis (SA) is an approach for detecting subjective information such as thoughts, outlooks, reactions, and emotional state. The majority of previous SA work treats it as a text-classification problem that requires labelled input to train the model. However, obtaining a tagged dataset is difficult, and most of the time it must be done by hand. Another concern is that the lack of cross-domain portability makes it challenging to reuse the same labelled data across applications, so data must be manually classified for each domain. This research work applies sentiment analysis to an entire dataset of vaccine tweets. The work involves lexicon analysis using NLP libraries such as NeatText and TextBlob, and multi-class classification using BERT. This work evaluates and compares the results of the machine learning algorithms.
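As a small illustration of the lexicon-analysis step mentioned above, TextBlob assigns each text a polarity score without any labelled training data; the tweets below are invented examples, not from the paper's dataset:

```python
# Sketch: lexicon-based polarity scoring with TextBlob.
from textblob import TextBlob

tweets = ["Got my second dose today, feeling great!",
          "Worried about the side effects of this vaccine."]
for t in tweets:
    polarity = TextBlob(t).sentiment.polarity  # -1 (negative) .. +1 (positive)
    label = ("positive" if polarity > 0
             else "negative" if polarity < 0 else "neutral")
    print(f"{polarity:+.2f} {label}: {t}")
```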
Machine Learning-Based Heart Disease Prediction: A Study for Home Personalized Care. 2022 IEEE 32nd International Workshop on Machine Learning for Signal Processing (MLSP). :01—06.
2022. This study develops a framework for personalized care to tackle heart disease risk using an at-home system. The machine learning models used to predict heart disease are Logistic Regression, K-Nearest Neighbor, Support Vector Machine, Naive Bayes, Decision Tree, Random Forest and XGBoost. Timely and efficient detection of heart disease plays an important role in health care. It is essential to detect cardiovascular disease (CVD) as early as possible, consult a specialist before the disease becomes severe, and start medication. The performance of the proposed model was assessed using the Cleveland Heart Disease dataset from the UCI Machine Learning Repository. Compared with all the other machine learning algorithms, the Random Forest algorithm shows the best performance, with an accuracy score of 90.16%. The best model may evaluate patient fitness rather than requiring routine hospital visits. The proposed work will reduce the burden on hospitals and help hospitals attend only to critical patients.
A Machine Learning Study on the Model Performance of Human Resources Predictive Algorithms. 2022 4th International Conference on Applied Machine Learning (ICAML). :405—409.
2022. A good ecological environment is crucial to attracting, cultivating and retaining talent and to making talent fully effective. This study provides a solution to the current mainstream problem of how to deal with the turnover of excellent employees in advance, so as to promote a sustainable and harmonious human resources ecology in enterprises facing a shortage of talent. This study obtains open datasets and conducts data preprocessing, model construction and model optimization, and describes a set of enterprise employee turnover prediction models based on RapidMiner workflows. The data preprocessing is completed with the help of the statistical analysis software IBM SPSS Statistics and RapidMiner. Statistical charts, scatter plots and boxplots are generated for visual analysis of the data. Machine learning, model application, performance vectors and cross-validation are carried out through RapidMiner's operators and workflows. The model design algorithms include support vector machines, naive Bayes, decision trees and neural networks. The performance of the algorithm models is compared in four respects: accuracy, precision, recall and F1-score. It is concluded that the decision tree algorithm model performs best. The performance evaluation results confirm the effectiveness of this model for sustainable exploration of enterprise employee turnover prediction in human resource management.
Classification of Mobile Phone Price Dataset Using Machine Learning Algorithms. 2022 3rd International Conference on Pattern Recognition and Machine Learning (PRML). :438—443.
2022. With the development of technology, mobile phones have become an indispensable part of human life. Factors such as brand, internal memory, Wi-Fi, battery power, camera and availability of 4G now shape consumers' decisions when buying mobile phones. But people often fail to link those factors with the price of the phone; this paper therefore aims to address that problem by using machine learning algorithms, namely Support Vector Machine, Decision Tree, K-Nearest Neighbors and Naive Bayes, to train on the mobile phone dataset before predicting the price level. We evaluated the algorithms on accuracy, precision, recall and F1-score. This not only helps customers make a better choice of mobile phone but also advises businesses selling mobile phones on how to set reasonable prices for the different features they offer. This idea of predicting price levels will help customers choose mobile phones wisely in the future. The results show that among the four classifiers, SVM delivers the best performance, with 94.8% accuracy and 97.3% F1-score without feature selection, and 95.5% accuracy and 97.7% F1-score with feature selection.
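The with/without feature selection comparison reported here can be sketched as two pipelines; the synthetic data and the choice of k are placeholders, not the paper's setup:

```python
# Sketch: SVM accuracy with and without univariate feature selection.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=8, n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

plain = make_pipeline(StandardScaler(), SVC()).fit(X_tr, y_tr)
selected = make_pipeline(StandardScaler(),
                         SelectKBest(f_classif, k=8),   # keep 8 best features
                         SVC()).fit(X_tr, y_tr)
print("without selection:", plain.score(X_te, y_te))
print("with selection:   ", selected.score(X_te, y_te))
```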
A machine learning approach to predict the result of League of Legends. 2022 International Conference on Machine Learning and Knowledge Engineering (MLKE). :38—45.
2022. Nowadays, the MOBA game is the game genre with the most spectators and players around the world. Recently, League of Legends became an official e-sport among the 37 events of the 2022 Asian Games held in Hangzhou. As e-sports develop, analytical skills are increasingly involved in the field. The topic of this research is to use a machine learning approach to analyze League of Legends data and predict the result of a game. In this research, machine learning methods are applied to a dataset that records the first 10 minutes of diamond-ranked games. Several popular machine learning algorithms (AdaBoost, GradientBoost, RandomForest, ExtraTree, SVM, Naïve Bayes, KNN, LogisticRegression, and DecisionTree) are evaluated by cross-validation. The algorithms that outperform the others are then selected to form a voting classifier that predicts the game result. The accuracy of the voting classifier is 72.68%.
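The select-then-vote procedure described here might look like the following sketch; the candidate set is trimmed for brevity and synthetic data replaces the match records:

```python
# Sketch: cross-validate candidates, then combine the best performers
# in a soft-voting ensemble, as the paper does for match prediction.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=3000, n_features=38, random_state=0)

candidates = {"ada": AdaBoostClassifier(),
              "gb": GradientBoostingClassifier(),
              "rf": RandomForestClassifier(),
              "lr": LogisticRegression(max_iter=1000)}
scores = {n: cross_val_score(c, X, y, cv=5).mean()
          for n, c in candidates.items()}
print(scores)

top = sorted(scores, key=scores.get, reverse=True)[:3]  # keep top three
vote = VotingClassifier([(n, candidates[n]) for n in top], voting="soft")
print("voting:", cross_val_score(vote, X, y, cv=5).mean())
```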
Predicting Creditworthiness of Smartphone Users in Indonesia during the COVID-19 pandemic using Machine Learning. 2021 International Seminar on Machine Learning, Optimization, and Data Science (ISMODE). :223—227.
2022. In this research work, we attempted to predict the creditworthiness of smartphone users in Indonesia during the COVID-19 pandemic using machine learning. Principal Component Analysis (PCA) and K-means algorithms are used for the prediction of creditworthiness, with a dataset of 1050 respondents answering twelve questions posed to smartphone users in Indonesia during the COVID-19 pandemic. Four different classification algorithms (Logistic Regression, Support Vector Machine, Decision Tree, and Naive Bayes) were tested to classify the creditworthiness of smartphone users. The tests carried out included assessment of accuracy, precision, recall, F1-score, and Area Under the Receiver Operating Characteristic Curve (AUC-ROC). The Logistic Regression algorithm shows the best performance, whereas Naïve Bayes (NB) shows the worst. The results of this research also provide new knowledge about the influential and non-influential variables among the twelve questions posed to the respondents.
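A minimal sketch of the PCA + K-means step under stated assumptions: random Likert-style answers stand in for the survey data, and the two-cluster choice is illustrative:

```python
# Sketch: reduce 12-question survey answers with PCA, then cluster
# respondents with K-means. Data here is a synthetic stand-in.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
answers = rng.integers(1, 6, size=(1050, 12)).astype(float)  # 1050 respondents

X = StandardScaler().fit_transform(answers)
X_2d = PCA(n_components=2).fit_transform(X)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_2d)

# The cluster labels could then serve as creditworthy / not-creditworthy
# targets for the downstream classifiers the paper compares.
print(np.bincount(clusters))
```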
Malware Detection Approach Based on the Swarm-Based Behavioural Analysis over API Calling Sequence. 2022 2nd International Mobile, Intelligent, and Ubiquitous Computing Conference (MIUCC). :27—32.
2022. Rapidly increasing malware threats must be countered with new, effective malware detection methodologies. Current malware threats are not limited to daily personal transactions but are embedded deeply within large enterprises and organizations. This paper introduces a new methodology for detecting and discriminating malicious versus normal applications. We employed ant colony optimization to generate two behavioural graphs that characterize the difference in execution behavior between malware and normal applications. Our proposed approach relies on the API call sequence generated when an application is executed; API calls are among the most widely used features for dynamic malware analysis. Our proposed method showed distinctive behavioral differences between malicious and non-malicious applications, and our experimental results showed performance comparable to other machine learning methods. We can therefore employ our method as an efficient technique for capturing malicious applications.
Detection of Botnets in IoT Networks using Graph Theory and Machine Learning. 2022 6th International Conference on Trends in Electronics and Informatics (ICOEI). :590—597.
2022. The Internet of Things (IoT) is proving to be a boon in granting internet access to regularly used objects and devices. Sensors, programs, and other components interact and exchange information with different gadgets and systems over the web. Even today, IoT devices suffer from fundamental security threats that expose them to many dangers and malware, among them IoT botnets. Botnets carry out attacks by serving as an attack vector, and this has become one of the significant dangers on the Internet. These vectors act against organizations and carry out cybercrimes: they are used to produce spam, launch DDoS attacks, commit click fraud, and steal confidential data. IoT devices pose challenges unlike the common malware on PCs and Android devices because IoT devices have heterogeneous processor architectures. Numerous studies use static or dynamic analysis for the detection and classification of botnets on IoT devices, but most have not addressed the multi-architecture issue and consume a lot of computing resources for analysis. This approach therefore attempts to classify botnets in IoT by using PSI-Graphs, which effectively address the problem of encryption in IoT botnet detection, tackle the multi-architecture problem, and reduce computation time. It proposes a methodology for describing and recognizing botnets using graph-based machine learning techniques, and applies exploratory data analysis to examine how separable the data is, so that bots can be recognized at an earlier stage and IoT devices can be protected from attack.
A Survey of Explainable Graph Neural Networks for Cyber Malware Analysis. 2022 IEEE International Conference on Big Data (Big Data). :2932—2939.
2022. Malicious cybersecurity activities have become increasingly worrisome for individuals and companies alike. While machine learning methods like Graph Neural Networks (GNNs) have proven successful on the malware detection task, their output is often difficult to understand. Explainable malware detection methods are needed to automatically identify malicious programs and present the results to malware analysts in a human-interpretable way. In this survey, we outline a number of GNN explainability methods and compare their performance on a real-world malware detection dataset. Specifically, we formulate detection as a graph classification problem over malware Control Flow Graphs (CFGs). We find that gradient-based methods outperform perturbation-based methods in terms of computational expense and performance on explainer-specific metrics (e.g., Fidelity and Sparsity). Our results provide insights for designing new GNN-based models for cyber malware detection and attribution.
Research on Intellectual Property Protection of Artificial Intelligence Creation in China Based on SVM Kernel Methods. 2022 International Conference on Blockchain Technology and Information Security (ICBCTIS). :230–236.
2022. Artificial intelligence creation has come into fashion and has brought unprecedented challenges to intellectual property law. To study the viewpoints on copyright ownership of AI creations held by professionals in different institutions, we took the papers on AI creation indexed in CNKI from 2016 to 2021 and applied orthogonal design and analysis of variance to construct a dataset. A kernel-SVM classifier with different kernel methods, in addition to several shallow machine learning classifiers, was selected to analyze and predict the copyright ownership of AI creations. The support vector machine (SVM) is widely used in statistics, and the performance of an SVM is closely related to the choice of kernel function. The SVM with RBF kernel surpassed the other seven kernel-SVM classifiers and five shallow classifiers, although the accuracy achieved by all of them was not satisfactory. Performance metrics such as accuracy and F1-score are used to evaluate the KSVM and the other classifiers. The purpose of this study is to explore the overall viewpoints on copyright ownership of AI creations, investigate the influence of different features on the final copyright ownership, and predict the most likely viewpoint in the future. This should encourage investors and researchers and promote intellectual property protection in China.
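A kernel comparison of the kind this abstract reports can be sketched in a few lines; synthetic features stand in for the paper's constructed dataset, and the kernel list is a subset of common choices rather than the paper's eight:

```python
# Sketch: cross-validated comparison of SVM kernels.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=10, random_state=0)

for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    print(kernel, cross_val_score(clf, X, y, cv=5).mean())
```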
Automatic labeling of the elements of a vulnerability report CVE with NLP. 2022 IEEE 23rd International Conference on Information Reuse and Integration for Data Science (IRI). :164—165.
2022. Common Vulnerabilities and Exposures (CVE) databases contain information about vulnerabilities in software products and source code. If the individual elements of CVE descriptions can be extracted and structured, the data can be used to search and analyze CVE descriptions. Herein we propose a method to label each element in CVE descriptions by applying Named Entity Recognition (NER). For NER, we used BERT, a transformer-based natural language processing model. NER with machine learning can label information from CVE descriptions even when there are some distortions in the data. An experiment involving manually prepared label information for 1000 CVE descriptions shows that the labeling accuracy of the proposed method is about 0.81 precision and about 0.89 recall. In addition, we devise a way to train on the data by dividing it by label. Our proposed method can be used to automatically label each element of CVE descriptions.
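The mechanics of running BERT-based NER over a CVE description can be illustrated with the Hugging Face pipeline below; the checkpoint shown is a generic English NER model with standard entity types, not the CVE-specific label set the paper trains, and the CVE text is invented:

```python
# Sketch: token-level NER over a CVE-style sentence with a generic
# BERT NER checkpoint (illustrative only; not the paper's model).
from transformers import pipeline

ner = pipeline("token-classification",
               model="dslim/bert-base-NER",
               aggregation_strategy="simple")

text = ("Buffer overflow in the web interface of ExampleServer 2.1 "
        "allows remote attackers to execute arbitrary code.")
for ent in ner(text):
    print(ent["entity_group"], ent["word"], round(ent["score"], 2))
```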
An Insider Threat Detection Method Based on Heterogeneous Graph Embedding. 2022 IEEE 8th Intl Conference on Big Data Security on Cloud (BigDataSecurity), IEEE Intl Conference on High Performance and Smart Computing, (HPSC) and IEEE Intl Conference on Intelligent Data and Security (IDS). :11—16.
2022. Insider threats are high-risk and well concealed, which makes traditional anomaly detection methods less effective for insider threat detection. Existing detection methods ignore the logical relationships between user behaviors and the consistency of behavior sequences among homogeneous users, resulting in poor model performance. We propose an insider threat detection method based on heterogeneous graph embedding of internal users. First, according to the characteristics of the CERT data, we comprehensively consider the relationships between users, the time sequence, and the logical relationships among behaviors, and construct a heterogeneous graph. Second, according to the characteristics of heterogeneous graphs, node embeddings are learned through random walks and Word2vec. Finally, we propose an Insider Threat Detection Design (ITDD) model which maps the user behavior sequence information into a high-dimensional feature space. On the CERT r5.2 dataset, our method performs significantly better than a variety of traditional machine learning methods.
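The random-walk + Word2vec embedding step generalizes beyond this paper; a minimal sketch on a toy graph, where the nodes and edges are invented placeholders for users, hosts and files:

```python
# Sketch: embed graph nodes by feeding random walks to Word2vec
# (DeepWalk-style); the graph here is a tiny invented example.
import random
import networkx as nx
from gensim.models import Word2Vec

G = nx.Graph()
G.add_edges_from([("user1", "pc7"), ("user1", "file3"),
                  ("user2", "pc7"), ("user2", "mail9")])

def random_walk(graph, start, length=10):
    walk = [start]
    for _ in range(length - 1):
        walk.append(random.choice(list(graph.neighbors(walk[-1]))))
    return walk

walks = [random_walk(G, n) for n in G.nodes for _ in range(20)]
model = Word2Vec(walks, vector_size=32, window=3, min_count=1, sg=1)
print(model.wv["user1"][:5])   # embedding vector for a user node
```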
An Analysis of Insider Attack Detection Using Machine Learning Algorithms. 2022 IEEE 2nd International Conference on Mobile Networks and Wireless Communications (ICMNWC). :1—7.
2022. Among the greatest obstacles in cybersecurity is the insider threat, a well-known and massive issue. This anomaly calls for specialized detection techniques and resources that can help with the accurate and quick detection of a harmful insider. Numerous studies on identifying insider threats and related topics have been conducted to tackle this problem, and various studies have sought to improve the conceptual understanding of insider risks. Nevertheless, there remain numerous gaps, including a dearth of real cases, bias in decision making, a lack of self-optimization in learning (which is a major concern and still poorly understood), and the absence of an investigation that addresses the conceptual, technological, and numerical facets of insider threats and their identification from a wide range of perspectives. The intention of this paper is to provide a thorough exploration of the categories, levels, and methodologies of modern insider threat detection based on machine learning techniques. Further, the approaches and evaluation metrics for machine-learning-based predictive models are discussed. The paper concludes by outlining the difficulties encountered and offering some suggestions for efficient threat identification using machine learning.
A Framework to Detect the Malicious Insider Threat in Cloud Environment using Supervised Learning Methods. 2022 9th International Conference on Computing for Sustainable Global Development (INDIACom). :354—358.
2022. A malicious insider threat is one of the most serious risks to an organization, and detecting malicious insiders is necessary because of their huge impact. Malicious insider threats occur rarely but are quite destructive, so the major focus of this paper is detecting malicious insiders in an organization. Traditional insider threat detection algorithms are not suitable for real-time detection. A supervised-learning-based anomaly detection technique is used to classify, predict and detect malicious and non-malicious activity based on the highest anomaly score. In this paper, a framework is proposed to detect malicious insider threats using supervised-learning-based anomaly detection with a One-Class Support Vector Machine (OCSVM). The experimental results show that the proposed framework performs well and detects malicious insiders, who obtain much higher anomaly scores than normal users.
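A minimal sketch of OCSVM anomaly scoring in the spirit of this framework, assuming synthetic stand-ins for benign and malicious activity features:

```python
# Sketch: fit a one-class SVM on "normal" activity only, then score
# unseen activity; higher score = more anomalous. Data is synthetic.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(500, 6))          # benign user activity
test = np.vstack([rng.normal(0, 1, size=(5, 6)),   # benign samples
                  rng.normal(5, 1, size=(5, 6))])  # anomalous samples

ocsvm = OneClassSVM(kernel="rbf", nu=0.05).fit(normal)
scores = -ocsvm.decision_function(test)            # flip sign: high = anomalous
for s, pred in zip(scores, ocsvm.predict(test)):
    print(f"anomaly score {s:+.2f} -> "
          f"{'malicious' if pred == -1 else 'normal'}")
```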
Data-Driven Digital Twins in Surgery utilizing Augmented Reality and Machine Learning. 2022 IEEE International Conference on Communications Workshops (ICC Workshops). :580–585.
2022. On the one hand, laparoscopic surgery, as a state-of-the-art medical method, is minimally invasive and thus less stressful for patients. On the other hand, laparoscopy places higher demands on physicians, such as mental load or preparation time, so appropriate technical support is essential for quality and success. Medical Digital Twins provide an integrated and virtual representation of patients' and organs' data, and thus a generic concept for making complex information accessible to surgeons. In this way, minimally invasive surgery could be improved significantly, but this requires a much more complex software system to meet the various resulting requirements. The biggest challenges for such systems are the safe and precise mapping of the digital twin to reality, i.e., dealing with deformations, movement and distortions, as well as balancing the competing requirements of intuitive, immersive user access and security. The case study ARAILIS is presented as a proof of concept for such a system and provides a starting point for further research. Based on the insights delivered by this prototype, a vision for future Medical Digital Twins in surgery is derived and discussed.
ISSN: 2694-2941
Cybersecurity Education in the Age of Artificial Intelligence: A Novel Proactive and Collaborative Learning Paradigm. 2022 IEEE Frontiers in Education Conference (FIE). :1–5.
2022. This Innovative Practice Work-in-Progress paper presents a virtual, proactive, and collaborative learning paradigm that can engage learners with different backgrounds and enable effective retention and transfer of multidisciplinary AI-cybersecurity knowledge. While progress has been made in understanding the trustworthiness and security of artificial intelligence (AI) techniques, little has been done to translate this knowledge into education and training. There is a critical need to foster a qualified cybersecurity workforce that understands the usefulness, limitations, and best practices of AI technologies in the cybersecurity domain. To address this important issue, our proposed learning paradigm leverages multidisciplinary expertise in cybersecurity, AI, and statistics to systematically investigate two cohesive research and education goals. First, we develop an immersive learning environment that motivates students to explore AI/machine learning (ML) development in the context of real-world cybersecurity scenarios by constructing learning models with tangible objects. Second, we design a proactive education paradigm using hackathon activities based on game-based learning, lifelong learning, and social constructivism. The proposed paradigm will benefit a wide range of learners, especially underrepresented students, and will help the general public understand the security implications of AI. In this paper, we describe our proposed learning paradigm and present the current progress of this ongoing research work. At the current stage, we focus on the first research and education goal and have been leveraging the cost-effective Minecraft platform to develop an immersive learning environment where learners can investigate emerging AI/ML concepts by constructing related learning modules from tangible AI/ML building blocks.
ISSN: 2377-634X
Semi-supervised Trojan Nets Classification Using Anomaly Detection Based on SCOAP Features. 2022 IEEE International Symposium on Circuits and Systems (ISCAS). :2423—2427.
2022. Recently, hardware Trojans have become a serious security concern in the integrated circuit (IC) industry. Due to the globalization of semiconductor design and fabrication processes, ICs are highly vulnerable to hardware Trojan insertion by malicious third-party vendors, so the development of effective hardware Trojan detection techniques is necessary. Testability measures have been proven to be efficient features for Trojan net classification. However, most existing machine-learning-based techniques use supervised learning methods, which involve time-consuming training processes, must deal with the class-imbalance problem, and are not pragmatic in real-world situations. Furthermore, no prior work has explored the use of anomaly detection for hardware Trojan detection. This paper proposes a semi-supervised hardware Trojan detection method at the gate level using anomaly detection. We improve the existing computation of Sandia Controllability/Observability Analysis Program (SCOAP) values by considering all types of D flip-flops, and adopt semi-supervised anomaly detection techniques to detect Trojan nets. Finally, a novel topology-based location analysis is used to improve detection performance. Tested on 17 Trust-Hub Trojan benchmarks, the proposed method achieves an overall 99.47% true positive rate (TPR), 99.99% true negative rate (TNR), and 99.99% accuracy.
Anomaly Detection based on Robust Spatial-temporal Modeling for Industrial Control Systems. 2022 IEEE 19th International Conference on Mobile Ad Hoc and Smart Systems (MASS). :355—363.
2022. Industrial Control Systems (ICS) are increasingly facing the threat of False Data Injection (FDI) attacks. As an emerging intrusion detection scheme for ICS, process-based Intrusion Detection Systems (IDS) can effectively detect the anomalies caused by FDI attacks. Specifically, such an IDS establishes an anomaly detection model that describes the normal pattern of industrial processes and then performs real-time anomaly detection on industrial process data. However, this method suffers from low detection accuracy due to the complexity and instability of industrial processes: the process data inherently contain sophisticated nonlinear spatial-temporal correlations that are hard to describe explicitly in an anomaly detection model, and the noise and disturbances in process data prevent the IDS from distinguishing real anomalous events. In this paper, we propose an Anomaly Detection approach based on Robust Spatial-temporal Modeling (AD-RoSM). To explicitly describe the spatial-temporal correlations within the process data, a neural state estimation model is proposed that utilizes a 1D CNN for temporal modeling and a multi-head self-attention mechanism for spatial modeling. To perform robust anomaly detection in the presence of noise and disturbance, a composite anomaly discrimination model is designed so that the outputs of the state estimation model can be analyzed with a combination of a threshold strategy and an entropy-based strategy. We conducted extensive experiments on two benchmark ICS security datasets to demonstrate the effectiveness of our approach.
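A Keras sketch, loosely following the 1D CNN + self-attention description above; the window length, layer sizes and sensor count are assumptions, not the paper's configuration:

```python
# Sketch: state-estimation model with Conv1D (temporal) and
# multi-head self-attention (spatial); sizes are illustrative.
import tensorflow as tf
from tensorflow.keras import layers

window, sensors = 30, 51        # e.g. 51 process variables per time step
inp = layers.Input(shape=(window, sensors))

x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(inp)
x = layers.MultiHeadAttention(num_heads=4, key_dim=16)(x, x)  # self-attention
x = layers.GlobalAveragePooling1D()(x)
out = layers.Dense(sensors)(x)  # estimate the next state vector

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse")
# At test time the anomaly score would be the residual between the
# estimated and the observed state, fed to the discrimination model.
model.summary()
```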
A Study on a DDH-Based Keyed Homomorphic Encryption Suitable to Machine Learning in the Cloud. 2022 IEEE International Conference on Consumer Electronics – Taiwan. :167—168.
2022. Homomorphic encryption is suitable for machine learning in the cloud, such as privacy-preserving machine learning. However, ordinary homomorphic public-key encryption has the problem that any public-key holder can generate ciphertexts and anyone can execute homomorphic operations. In this paper, we propose a solution based on the Keyed Homomorphic Public-Key Encryption proposed by Emura et al.
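To see the problem the abstract points at, a toy sketch of plain (unkeyed) ElGamal, whose security rests on DDH: anyone with the public key can encrypt and homomorphically combine ciphertexts. This is not Emura et al.'s keyed scheme, and the parameters are insecure toy values:

```python
# Sketch: multiplicative homomorphism of plain ElGamal over Z_p*.
import random

p, g = 467, 2                      # toy prime and base (insecure sizes)
x = random.randrange(2, p - 1)     # secret key
h = pow(g, x, p)                   # public key

def enc(m):
    r = random.randrange(2, p - 1)
    return (pow(g, r, p), (m * pow(h, r, p)) % p)

def dec(c1, c2):
    return (c2 * pow(c1, p - 1 - x, p)) % p   # c2 / c1^x mod p

a, b = enc(3), enc(5)
product = (a[0] * b[0] % p, a[1] * b[1] % p)  # anyone can do this step
print(dec(*product))                           # prints 15 = 3 * 5
```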
Test Case Filtering based on Generative Adversarial Networks. 2022 IEEE 23rd International Conference on High Performance Switching and Routing (HPSR). :65–69.
2022. Fuzzing is a popular technique for finding software vulnerabilities. Despite their success, state-of-the-art fuzzers inevitably produce a large number of low-quality inputs. In recent years, machine learning (ML) based selection strategies have reported promising results. However, existing ML-based fuzzers are limited by the lack of training data: because the mutation strategy of fuzzing cannot effectively generate useful inputs, it is prohibitively expensive to collect enough inputs to train models. In this paper, we propose a generative adversarial network based solution that generates a large number of inputs to solve the problem of insufficient data. We implement the proposal in the American Fuzzy Lop (AFL), and the experimental results show that it finds more crashes in the same amount of time compared with the original AFL.
ISSN: 2325-5609
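The GAN-based input generation idea can be sketched minimally as below; the architecture, sizes and fixed-length byte encoding are assumptions for illustration, not the paper's implementation, and the adversarial training loop is omitted:

```python
# Sketch: a minimal GAN pair for emitting fixed-length byte sequences
# usable as fuzzing seeds (illustrative shapes only).
import tensorflow as tf
from tensorflow.keras import layers, models

LATENT, SEQ_LEN = 32, 64

generator = models.Sequential([
    layers.Input(shape=(LATENT,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(SEQ_LEN, activation="sigmoid"),  # bytes scaled to [0, 1]
])
discriminator = models.Sequential([
    layers.Input(shape=(SEQ_LEN,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),        # real vs. generated
])

# After adversarial training on a real input corpus, new candidate
# fuzzing seeds come from sampling the generator:
noise = tf.random.normal([8, LATENT])
seeds = (generator(noise).numpy() * 255).astype("uint8")
print(seeds.shape)   # 8 candidate inputs of SEQ_LEN bytes
```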
Facial Expression Intensity Estimation Considering Change Characteristic of Facial Feature Values for Each Facial Expression. 2022 23rd ACIS International Summer Virtual Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD-Summer). :15—21.
2022. Facial expression intensity, which quantifies the degree of a facial expression, has been proposed. It is calculated from how much facial feature values change compared to an expressionless face. The estimation has two aspects: one is to classify facial expressions, and the other is to estimate their intensity, but it is difficult to do both at the same time. Therefore, in this work, the estimation of intensity and the classification of expression are separated. We suggest an explicit method and an implicit method. In the explicit method, a classifier determines which type of expression the input is, and a per-expression regressor determines its intensity. In the implicit method, we give zero or non-zero values as ground truth to the regressors for each type of facial expression, depending on whether or not an input image shows that facial expression. We evaluated the two methods and found that both are effective for facial expression recognition.