Biblio

Found 433 results

Filters: Keyword is Neural networks
2020-12-15
Prakash, A., Walambe, R..  2018.  Military Surveillance Robot Implementation Using Robot Operating System. 2018 IEEE Punecon. :1–5.

Robots are becoming more and more prevalent in many real-world scenarios. Housekeeping, medical aid, and human assistance are a few common applications of robots. Military and security are also major areas where robotics is being researched and implemented. Robots intended for surveillance in war zones and terrorist scenarios need specific functionalities to perform their tasks with precision and efficiency. In this paper, we present a model of a military surveillance robot developed using the Robot Operating System. Map generation based on the Kinect sensor is presented, and some test-case scenarios are discussed with results.

2019-01-21
Isakov, M., Bu, L., Cheng, H., Kinsy, M. A..  2018.  Preventing Neural Network Model Exfiltration in Machine Learning Hardware Accelerators. 2018 Asian Hardware Oriented Security and Trust Symposium (AsianHOST). :62–67.

Machine learning (ML) models are often trained on private datasets that are very expensive to collect or highly sensitive, using large amounts of computing power. The models are commonly exposed either through online APIs or in hardware devices deployed in the field or given to end users. This provides an incentive for adversaries to steal these ML models as a proxy for gathering datasets. While API-based model exfiltration has been studied before, the theft and protection of machine learning models on hardware devices have not yet been explored. In this work, we examine this important aspect of the design and deployment of ML models. We illustrate how an attacker may acquire either the model or the model architecture through memory probing, side channels, or crafted input attacks, and propose (1) power-efficient obfuscation as an alternative to encryption, and (2) timing side-channel countermeasures.

2018-12-10
Ndichu, S., Ozawa, S., Misu, T., Okada, K..  2018.  A Machine Learning Approach to Malicious JavaScript Detection using Fixed Length Vector Representation. 2018 International Joint Conference on Neural Networks (IJCNN). :1–8.

To add functionality and enhance the usability of web applications, JavaScript (JS) is frequently used. Despite the many advantages and uses of JS, an annoying fact is that many recent cyberattacks, such as drive-by-download attacks, exploit vulnerabilities in JS code. In general, malicious JS code is not easy to detect, because it sneakily exploits vulnerabilities in browsers and plugin software and attacks visitors of a web site without their knowledge. To protect users from such threats, an accurate detection system for malicious JS is urgently needed. Conventional approaches often employ signature- and heuristic-based methods, which are prone to zero-day attacks, i.e., they cause many false negatives and/or false positives. To address this problem, this paper adopts a machine-learning approach to feature learning called Doc2Vec, a neural network model that can learn the context information of texts. The extracted features are given to a classifier model (e.g., an SVM or a neural network), which judges the maliciousness of a JS code. In the performance evaluation, we use the D3M dataset (Drive-by-Download Data by Marionette) for malicious JS code and JSUPACK for benign code, for both training and testing, and we compare the performance to other feature-learning methods. Our experimental results show that the proposed Doc2Vec features provide better accuracy and faster classification in malicious JS code detection than conventional approaches.
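
The pipeline lends itself to a compact sketch. The following Python is an illustrative stand-in, not the authors' code: gensim's Doc2Vec supplies the fixed-length vectors and a linear SVM does the judging; the tiny token lists replace real tokenized JS from D3M/JSUPACK.

```python
# Minimal sketch of a Doc2Vec + classifier pipeline (illustrative stand-in).
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.svm import LinearSVC

corpus = [
    (["var", "x", "=", "1", ";"], 0),                          # benign (toy)
    (["eval", "(", "unescape", "(", "payload", ")", ")"], 1),  # malicious (toy)
]
docs = [TaggedDocument(words=toks, tags=[i]) for i, (toks, _) in enumerate(corpus)]

# Learn fixed-length vectors for variable-length JS token streams.
d2v = Doc2Vec(docs, vector_size=50, min_count=1, epochs=40)

X = [d2v.infer_vector(toks) for toks, _ in corpus]
y = [label for _, label in corpus]

clf = LinearSVC().fit(X, y)   # the paper also tries neural classifiers
print(clf.predict([d2v.infer_vector(["eval", "(", "code", ")"])]))
```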

2019-01-16
Liao, F., Liang, M., Dong, Y., Pang, T., Hu, X., Zhu, J..  2018.  Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. :1778–1787.
Neural networks are vulnerable to adversarial examples, which poses a threat to their application in security-sensitive systems. We propose the high-level representation guided denoiser (HGD) as a defense for image classification. A standard denoiser suffers from the error amplification effect, in which small residual adversarial noise is progressively amplified and leads to incorrect classifications. HGD overcomes this problem by using a loss function defined as the difference between the target model's outputs activated by the clean image and by the denoised image. Compared with ensemble adversarial training, the state-of-the-art defense method on large images, HGD has three advantages. First, with HGD as a defense, the target model is more robust to both white-box and black-box adversarial attacks. Second, HGD can be trained on a small subset of the images and generalizes well to other images and unseen classes. Third, HGD can be transferred to defend models other than the one guiding it. In the NIPS competition on defense against adversarial attacks, our HGD solution won first place and outperformed other models by a large margin.
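
The guided loss is the core idea and is easy to state in code. Below is a minimal PyTorch sketch, not the paper's implementation: `denoiser` and `target_model` are toy stand-ins, and the guide signal is taken from the fixed target model's outputs as described above.

```python
# Sketch of the HGD training objective (stand-in networks, toy data).
import torch
import torch.nn as nn

denoiser = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))       # stand-in denoiser
target_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
for p in target_model.parameters():
    p.requires_grad_(False)                                   # guide model is fixed

opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
x_clean = torch.rand(8, 3, 32, 32)
x_adv = (x_clean + 0.03 * torch.randn_like(x_clean)).clamp(0, 1)

# High-level representation guided loss: match the target model's responses
# to the denoised adversarial image and to the clean image, not the pixels.
x_den = x_adv - denoiser(x_adv)          # denoiser predicts the noise
loss = (target_model(x_den) - target_model(x_clean)).abs().mean()
opt.zero_grad()
loss.backward()
opt.step()
```
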
2020-10-05
Cruz, Rodrigo Santa, Fernando, Basura, Cherian, Anoop, Gould, Stephen.  2018.  Neural Algebra of Classifiers. 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). :729–737.

The world is fundamentally compositional, so it is natural to think of visual recognition as the recognition of basic visual primitives that are composed according to well-defined rules. This strategy allows us to recognize unseen complex concepts from simple visual primitives. However, the current trend in visual recognition follows a data-greedy approach, where huge amounts of data are required to learn a model for any desired visual concept. In this paper, we build on the compositionality principle and develop an "algebra" for composing classifiers for complex visual concepts. To this end, we learn neural network modules that perform boolean algebra operations on simple visual classifiers. Since these modules form a functionally complete set, a classifier for any complex visual concept defined as a boolean expression of primitives can be obtained by recursively applying the learned modules, even if we do not have a single training sample. As our experiments show, using such a framework we can compose classifiers for complex visual concepts that outperform standard baselines on two well-known visual recognition benchmarks. Finally, we present a qualitative analysis of our method and its properties.

2019-01-16
Carlini, N., Wagner, D..  2018.  Audio Adversarial Examples: Targeted Attacks on Speech-to-Text. 2018 IEEE Security and Privacy Workshops (SPW). :1–7.
We construct targeted audio adversarial examples on automatic speech recognition. Given any audio waveform, we can produce another that is over 99.9% similar but transcribes as any phrase we choose (recognizing up to 50 characters per second of audio). We apply our white-box iterative optimization-based attack to Mozilla's implementation of DeepSpeech end-to-end and show it has a 100% success rate. The feasibility of this attack introduces a new domain for the study of adversarial examples.
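
The attack itself is a plain optimization loop. The sketch below, in PyTorch, is a generic stand-in rather than the authors' DeepSpeech attack: `asr_model` is a dummy network, and a cross-entropy loss stands in for the CTC loss used in the paper.

```python
# Generic sketch of an iterative, optimization-based targeted audio attack.
import torch

asr_model = torch.nn.Linear(16000, 29 * 50)     # stand-in for an ASR network
waveform = torch.rand(1, 16000)                 # one second of toy audio
target = torch.randint(0, 29, (1, 50))          # target phrase (toy char ids)

delta = torch.zeros_like(waveform, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-3)
ctc_like = torch.nn.CrossEntropyLoss()          # stand-in for the CTC loss

for _ in range(100):
    logits = asr_model(waveform + delta).view(1, 50, 29)
    # Minimize (loss toward target transcription) + (perturbation size).
    loss = ctc_like(logits.transpose(1, 2), target) + 1e-2 * delta.abs().max()
    opt.zero_grad()
    loss.backward()
    opt.step()
    delta.data.clamp_(-0.05, 0.05)              # keep the audio similar
```
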
2020-11-23
Wang, X., Li, J..  2018.  Design of Intelligent Home Security Monitoring System Based on Android. 2018 2nd IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC). :2621–2624.
To address the problem that health-status and safety monitoring in the traditional smart home depends mainly on manual inspection, this paper introduces a remote monitoring system that brings Internet of Things technology into smart-home condition monitoring and safety assessment. The system's Android remote-operation application is developed following the MVP model; neural networks process the users' daily operational data to build a network data model; and an S3C2440A microcontroller running embedded Linux in the gateway facilitates the development of drivers for different smart-home devices. Finally, a power-line communication network is used to connect the intelligent appliances to the gateway. Calculated over the routing nodes, the network success rate for 15 intelligent devices is 98.33%. The system can monitor many home appliances at the same time, solving the data and network congestion problems that previously prevented effective security monitoring.
2019-12-05
Yu, Yiding, Wang, Taotao, Liew, Soung Chang.  2018.  Deep-Reinforcement Learning Multiple Access for Heterogeneous Wireless Networks. 2018 IEEE International Conference on Communications (ICC). :1–7.

This paper investigates the use of deep reinforcement learning (DRL) in the design of a "universal" MAC protocol referred to as Deep-reinforcement Learning Multiple Access (DLMA). The design framework is partially inspired by the vision of DARPA SC2, a three-year competition whereby competitors are to come up with a clean-slate design that "best share spectrum with any network(s), in any environment, without prior knowledge, leveraging on machine-learning technique". While the scope of DARPA SC2 is broad and involves the redesign of the PHY, MAC, and network layers, this paper's focus is narrower and involves only the MAC design. In particular, we consider the problem of sharing time slots among a number of time-slotted networks that adopt different MAC protocols. One of the MAC protocols is DLMA; the other two are TDMA and ALOHA. The DRL agents of DLMA do not know that the other two MAC protocols are TDMA and ALOHA. Yet, through a series of observations of the environment, its own actions, and the rewards, in accordance with the DRL algorithmic framework, a DRL agent can learn the optimal MAC strategy for harmonious co-existence with TDMA and ALOHA nodes. In particular, the use of neural networks in DRL (as opposed to traditional reinforcement learning) allows for fast convergence to optimal solutions and robustness against perturbation in hyperparameter settings, two essential properties for practical deployment of DLMA in real wireless networks.
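
The agent's decision loop can be sketched compactly. The PyTorch toy below is a stand-in, not the DLMA implementation: the two-action slotted setting matches the abstract, but the environment is random, and the replay buffer and target network of a full DQN are omitted.

```python
# Toy sketch of a DRL agent choosing between WAIT and TRANSMIT each slot.
import random
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, eps = 0.95, 0.1

state = torch.zeros(20)                      # history of (action, observation)
for t in range(1000):
    # Epsilon-greedy over {0: WAIT, 1: TRANSMIT}.
    if random.random() < eps:
        action = random.randrange(2)
    else:
        action = int(q_net(state).argmax())
    reward = float(action == 1 and random.random() < 0.5)   # toy channel
    next_state = torch.cat([state[2:], torch.tensor([float(action), reward])])

    # One-step Q-learning update; DLMA adds experience replay and a target net.
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max()
    loss = (q_net(state)[action] - target) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()
    state = next_state
```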

2020-06-12
Chiba, Zouhair, Abghour, Noreddine, Moussaid, Khalid, Omri, Amina El, Rida, Mohamed.  2018.  A Hybrid Optimization Framework Based on Genetic Algorithm and Simulated Annealing Algorithm to Enhance Performance of Anomaly Network Intrusion Detection System Based on BP Neural Network. 2018 International Symposium on Advanced Electrical and Communication Technologies (ISAECT). :1–6.

Today, network security is a hot topic in computer security and defense worldwide. Intrusions and attacks on network infrastructures mostly lead to huge financial losses and massive leaks of sensitive data, decreasing the efficiency, competitiveness, and productivity of an organization. A Network Intrusion Detection System (NIDS) is a valuable tool for the defense-in-depth of computer networks. It is widely deployed in network architectures to monitor, detect, and eventually respond to any anomalous behavior and misuse that can threaten the confidentiality, integrity, and availability of network resources and services. Thus, the presence of a NIDS plays a vital part in attack mitigation, and it has become an integral part of a secure organization. In this paper, we propose to optimize a very popular soft-computing tool widely used for intrusion detection, the Back-Propagation Neural Network (BPNN), using a novel hybrid framework (GASAA) based on an improved Genetic Algorithm (GA) and the Simulated Annealing Algorithm (SAA). The GA is improved through an optimization strategy, Fitness Value Hashing (FVH), which reduces execution time and convergence time and saves processing power. Experimental results on the KDD CUP '99 dataset show that our optimized BPNN-based ANIDS (Anomaly NIDS), called "ANIDS BPNN-GASAA", outperforms several state-of-the-art approaches in terms of detection rate and false positive rate. In addition, the improvement of the GA through FVH saves processing power and execution time. Our proposed IDS is therefore very well suited for network anomaly detection.
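
Fitness Value Hashing is the most self-contained piece and reduces to memoizing the fitness function. A minimal Python sketch, assuming a toy objective in place of the expensive BPNN evaluation:

```python
# Sketch of the Fitness Value Hashing idea: hash each chromosome and cache
# its fitness so repeated individuals are never re-evaluated.
fitness_cache = {}

def fitness(chromosome):
    key = hash(tuple(chromosome))          # chromosome -> stable hash key
    if key not in fitness_cache:
        # Expensive evaluation happens at most once per distinct chromosome;
        # in the paper it would involve training/evaluating the BPNN.
        fitness_cache[key] = sum(g * g for g in chromosome)  # toy objective
    return fitness_cache[key]

population = [[0, 1, 1, 0], [1, 1, 0, 0], [0, 1, 1, 0]]      # duplicate genome
scores = [fitness(c) for c in population]  # third call is a cache hit
print(scores, len(fitness_cache))          # -> [2, 2, 2] 2
```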

2019-12-16
Xue, Zijun, Ko, Ting-Yu, Yuchen, Neo, Wu, Ming-Kuang Daniel, Hsieh, Chu-Cheng.  2018.  Isa: Intuit Smart Agent, A Neural-Based Agent-Assist Chatbot. 2018 IEEE International Conference on Data Mining Workshops (ICDMW). :1423–1428.
Hiring seasonal workers in call centers to provide customer service is a common practice in B2C companies. The quality of service delivered by both contract and permanent customer service agents depends heavily on the domain knowledge available to them. When observing the internal group-messaging channels used by agents, we found that similar questions are often asked repetitively by different agents, especially by less experienced ones. The goal of our work is to leverage the promising advances in conversational AI to provide a chatbot-like mechanism for assisting agents in promptly resolving a customer's issue. In this paper, we develop a neural conversational solution that employs a BiLSTM with an attention mechanism and demonstrate how our system boosts the effectiveness of customer support agents. In addition, we discuss the design principles and the necessary considerations for our system. We then demonstrate how our system, named "Isa" (Intuit Smart Agent), can help customer service agents provide a high-quality customer experience by reducing customer wait time and by applying the knowledge accumulated from customer interactions in future applications.
2019-03-06
Lin, Y., Liu, H., Xie, G., Zhang, Y..  2018.  Time Series Forecasting by Evolving Deep Belief Network with Negative Correlation Search. 2018 Chinese Automation Congress (CAC). :3839–3843.

The recently developed deep belief network (DBN) has been shown to be an effective methodology for solving time series forecasting problems. However, the performance of a DBN depends heavily on a reasonable setting of its hyperparameters. At present, random search, grid search, and Bayesian optimization are the most common methods of hyperparameter optimization. As an alternative, a state-of-the-art derivative-free optimizer, negative correlation search (NCS), is adopted in this paper to decide the sizes of the DBN and the learning rates during the training process. A comparative analysis is performed between the proposed method and other popular techniques in a time series forecasting experiment based on two types of time series datasets. The experimental results statistically confirm the ability of the proposed model to obtain better predictions than conventional neural network models.

2019-02-21
Feng, W., Chen, Z., Fu, Y..  2018.  Autoencoder Classification Algorithm Based on Swam Intelligence Optimization. 2018 17th International Symposium on Distributed Computing and Applications for Business Engineering and Science (DCABES). :238–241.
Autoencoder classification algorithms rely on the BP (backpropagation) algorithm, but BP is not only complicated and inefficient, it also sometimes falls into local optima, which limits the performance of autoencoder classifiers. In this paper we therefore combine Quantum Particle Swarm Optimization (QPSO) with the autoencoder classification algorithm: QPSO is used to optimize the weights of the autoencoder neural network and the parameters of the softmax layer. This method has been tested on several databases, and the experimental results show that it achieves good results.
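
For reference, the standard QPSO position update can be sketched in a few lines of NumPy. This is a generic QPSO loop on a toy objective, not the paper's training procedure; in the paper the vector being optimized would encode the autoencoder weights and softmax parameters.

```python
# Much-simplified sketch of QPSO on a toy objective.
import numpy as np

rng = np.random.default_rng(0)
n, dim = 20, 10
x = rng.uniform(-1, 1, (n, dim))            # particle positions
pbest = x.copy()                            # personal bests
f = lambda w: np.sum(w ** 2)                # toy loss; really: network error
gbest = pbest[np.argmin([f(p) for p in pbest])]

for it in range(100):
    beta = 0.9 - 0.5 * it / 100             # contraction-expansion coefficient
    mbest = pbest.mean(axis=0)              # mean of the personal bests
    phi = rng.uniform(size=(n, dim))
    u = rng.uniform(size=(n, dim))
    p = phi * pbest + (1 - phi) * gbest     # local attractor per particle
    sign = np.where(rng.uniform(size=(n, dim)) < 0.5, 1, -1)
    x = p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
    for i in range(n):                      # update personal bests
        if f(x[i]) < f(pbest[i]):
            pbest[i] = x[i]
    gbest = pbest[np.argmin([f(q) for q in pbest])]

print(f(gbest))
```
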
2020-10-05
Rafati, Jacob, DeGuchy, Omar, Marcia, Roummel F..  2018.  Trust-Region Minimization Algorithm for Training Responses (TRMinATR): The Rise of Machine Learning Techniques. 2018 26th European Signal Processing Conference (EUSIPCO). :2015–2019.

Deep learning is a highly effective machine learning technique for large-scale problems. The optimization of nonconvex functions in the deep learning literature is typically restricted to the class of first-order algorithms. These methods rely on gradient information because of the computational complexity associated with inverting the second-derivative Hessian matrix and the memory storage required in large-scale data problems. The reward for using second-derivative information is that such methods can have improved convergence properties for the problems typically found in non-convex settings, such as saddle points and local minima. In this paper we introduce TRMinATR, an algorithm based on the limited-memory BFGS quasi-Newton method with a trust region, as an alternative to gradient descent methods. TRMinATR bridges the disparity between first-order and second-order methods by continuing to use gradient information to calculate Hessian approximations. We provide empirical results on the classification task of the MNIST dataset and show robust convergence with preferred generalization characteristics.
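
The building block such methods reuse is the classic L-BFGS two-loop recursion, which forms an approximate Hessian-inverse-times-gradient product from stored curvature pairs without ever forming the Hessian. A NumPy sketch of that recursion alone, with the paper's trust-region machinery omitted:

```python
# L-BFGS two-loop recursion (standard algorithm; trust region not shown).
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Return H*grad using stored pairs (s_i, y_i) = (x_{i+1}-x_i, g_{i+1}-g_i)."""
    q = grad.copy()
    alphas = []
    for s, y in reversed(list(zip(s_list, y_list))):    # first loop, newest first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        alphas.append((a, rho, s, y))
    if y_list:                                          # initial scaling H0
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for a, rho, s, y in reversed(alphas):               # second loop, oldest first
        b = rho * (y @ q)
        q += (a - b) * s
    return q                                            # approximates H_k * grad

g = np.array([1.0, -2.0, 0.5])
step = -lbfgs_direction(g, [np.ones(3)], [np.ones(3) * 2])
print(step)
```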

2018-06-11
Moskewicz, Matthew W., Jannesari, Ali, Keutzer, Kurt.  2017.  Boda: A Holistic Approach for Implementing Neural Network Computations. Proceedings of the Computing Frontiers Conference. :53–62.
Neural networks (NNs) are currently a very popular topic in machine learning for both research and practice. GPUs are the dominant computing platform for research efforts and are also gaining popularity as a deployment platform for applications such as autonomous vehicles. As a result, GPU vendors such as NVIDIA have spent enormous effort to write special-purpose NN libraries. On other hardware targets, especially mobile GPUs, such vendor libraries are not generally available. Thus, the development of portable, open, high-performance, energy-efficient GPU code for NN operations would enable broader deployment of NN-based algorithms. A root problem is that high-efficiency GPU programming suffers from high complexity, low productivity, and low portability. To address this, this work presents a framework to enable productive, high-efficiency GPU programming for NN computations across hardware platforms and programming models. In particular, the framework provides specific support for metaprogramming and autotuning of operations over ND-arrays. To show the correctness and value of our framework and approach, we implement a selection of NN operations, covering the core operations needed to deploy three common image-processing neural networks. We target three different hardware platforms: NVIDIA, AMD, and Qualcomm GPUs. On NVIDIA GPUs, we show both portability between OpenCL and CUDA as well as competitive performance compared to the vendor library. On Qualcomm GPUs, we show that our framework enables productive development of target-specific optimizations and achieves reasonable absolute performance. Finally, on AMD GPUs, we show initial results indicating that our framework can yield reasonable performance on a new platform with minimal effort.
2017-12-20
Wang, Y., Huang, Y., Zheng, W., Zhou, Z., Liu, D., Lu, M..  2017.  Combining convolutional neural network and self-adaptive algorithm to defeat synthetic multi-digit text-based CAPTCHA. 2017 IEEE International Conference on Industrial Technology (ICIT). :980–985.
We commonly use CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart) to prevent automated bots from performing data entry. Although there are various kinds of CAPTCHAs, text-based schemes are still the most widely applied, because they are among the most convenient and user-friendly for everyday users [1]. The segmentation of different types of CAPTCHAs is not always the same, which means that segmentation is one of a CAPTCHA's bottlenecks: once we can accurately split the characters, the problem can be solved much more easily. Unfortunately, the best way to divide them is still case by case, which is to say there is no universal way to achieve it. In this paper, we present a novel algorithm that achieves state-of-the-art performance; furthermore, we also construct a new convolutional neural network as an add-on recognition part to stabilize the state-of-the-art performance of the whole CAPTCHA system. The CAPTCHA dataset we use is from the State Administration for Industry & Commerce of the People's Republic of China. In this dataset, there are 33 CAPTCHA entrances in total, and in these experiments we assume that each entrance is known. Results are provided showing how well our algorithms work on these CAPTCHAs.
2018-06-11
Moons, B., Goetschalckx, K., Berckelaer, N. Van, Verhelst, M..  2017.  Minimum energy quantized neural networks. 2017 51st Asilomar Conference on Signals, Systems, and Computers. :1921–1925.
This work targets the automated minimum-energy optimization of Quantized Neural Networks (QNNs), networks using low-precision weights and activations. These networks are trained from scratch at an arbitrary fixed-point precision. At iso-accuracy, QNNs using fewer bits require deeper and wider network architectures than networks using higher-precision operators, while requiring less complex arithmetic and fewer bits per weight. This fundamental trade-off is analyzed and quantified to find the minimum-energy QNN for any benchmark and hence optimize energy efficiency. To this end, the energy consumption of inference is modeled for a generic hardware platform. This allows several conclusions to be drawn across different benchmarks. First, energy consumption varies by orders of magnitude at iso-accuracy depending on the number of bits used in the QNN. Second, in a typical system, BinaryNet or int4 implementations lead to the minimum-energy solution, outperforming int8 networks by up to 2–10× at iso-accuracy. All code used for QNN training is available from https://github.com/BertMoons/.
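
The basic fixed-point quantization that a QNN sweeps over is straightforward to illustrate. A hedged NumPy sketch, assuming simple b-bit signed fixed-point weights with f fractional bits (the paper's training procedure is not reproduced):

```python
# Fixed-point weight quantization: map reals to a b-bit signed integer grid.
import numpy as np

def quantize(w, bits=4, frac_bits=2):
    scale = 2.0 ** frac_bits
    qmin, qmax = -2 ** (bits - 1), 2 ** (bits - 1) - 1
    q = np.clip(np.round(w * scale), qmin, qmax)   # b-bit integer grid
    return q / scale                               # back to real values

w = np.array([-1.3, -0.4, 0.1, 0.77, 2.5])
print(quantize(w, bits=4, frac_bits=2))   # int4-style weights
print(quantize(w, bits=2, frac_bits=0))   # near-binary/ternary weights
```
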
2018-04-04
Wu, F., Wang, J., Liu, J., Wang, W..  2017.  Vulnerability detection with deep learning. 2017 3rd IEEE International Conference on Computer and Communications (ICCC). :1298–1302.
Vulnerability detection is an important issue in information system security. In this work, we propose deep learning methods for vulnerability detection. We present three deep learning models: a convolutional neural network (CNN), long short-term memory (LSTM), and a combined convolutional neural network and long short-term memory model (CNN-LSTM). To test the performance of our approach, we collected 9872 sequences of function calls as features to represent the patterns of binary programs during their execution. We apply our deep learning models to predict the vulnerabilities of these binary programs based on the collected data. The experimental results show that the prediction accuracy of our proposed method reaches 83.6%, which is superior to that of traditional methods such as the multi-layer perceptron (MLP).
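
Of the three models, the LSTM variant is the simplest to sketch: call-ID sequences are embedded, encoded, and classified. The PyTorch toy below assumes a 500-call vocabulary and random data; it illustrates the architecture class, not the authors' exact model.

```python
# Sketch of an LSTM classifier over sequences of function-call IDs.
import torch
import torch.nn as nn

class CallSeqLSTM(nn.Module):
    def __init__(self, n_calls=500, emb=32, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(n_calls, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)          # {benign, vulnerable}

    def forward(self, calls):                    # calls: (batch, seq_len)
        _, (h, _) = self.lstm(self.emb(calls))
        return self.out(h[-1])                   # logits from final state

model = CallSeqLSTM()
batch = torch.randint(0, 500, (8, 40))           # 8 toy call traces
labels = torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(batch), labels)
loss.backward()
```
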
2018-05-02
Shanthi, D., Mohanty, R. K., Narsimha, G., Aruna, V..  2017.  Application of partical swarm intelligence technique to predict software reliability. 2017 International Conference on Intelligent Computing and Control Systems (ICICCS). :629–635.

Predicting software reliability has become a very significant problem today. Many new software products are introduced into the marketplace every day, and some of them suffer failures because they are very hard to use or manage; many intelligent techniques are already used to predict software reliability. In this paper we give a practical overview of these techniques and contrast them with our new method. Prediction models built on one dataset show a significant decrease in accuracy when they are used with new data. The aim of this assessment covers SE problems of practical importance, such as software development/cost estimation and software reliability prediction, as well as the development of computational tools with enhanced power, scalability, and flexibility that can interact more effectively with human beings.

2018-11-19
Lal, Shamit, Garg, Vineet, Verma, Om Prakash.  2017.  Automatic Image Colorization Using Adversarial Training. Proceedings of the 9th International Conference on Signal Processing Systems. :84–88.

The paper presents a fully automatic, end-to-end trainable system to colorize grayscale images. Colorization is a highly under-constrained problem. To produce realistic outputs, the proposed approach takes advantage of recent advances in deep learning and generative networks. To achieve plausible colorization, the paper investigates conditional Wasserstein Generative Adversarial Networks (WGANs) [3] as a solution to this problem. Additionally, a loss function is proposed consisting of two classification loss components in addition to the adversarial loss learned by the WGAN. The first classification loss provides a measure of how much the predicted colored images differ from the ground truth. The second classification loss component makes use of ground-truth semantic classification labels to learn meaningful intermediate features. Finally, the WGAN training procedure pushes the predictions toward the manifold of natural images. The system is validated using a user study and a semantic interpretability test, and achieves results comparable to [1] on the ImageNet dataset [10].
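
The composite generator objective can be illustrated in a few lines. In the PyTorch sketch below, all networks are tiny stand-ins, and for brevity the paper's first classification-loss component is replaced by a plain pixel loss; only the overall adversarial-plus-auxiliary-losses structure is meant to carry over.

```python
# Sketch of a combined generator objective: adversarial + auxiliary losses.
import torch
import torch.nn as nn

G = nn.Conv2d(1, 2, 3, padding=1)                 # gray -> 2 color channels
D = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1))  # WGAN critic (stand-in)
C = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # semantic head

gray = torch.rand(4, 1, 32, 32)
true_color = torch.rand(4, 2, 32, 32)
sem_labels = torch.randint(0, 10, (4,))

fake = torch.cat([gray, G(gray)], dim=1)          # conditioned on grayscale
adv_loss = -D(fake).mean()                        # WGAN generator loss
pix_loss = (G(gray) - true_color).abs().mean()    # stand-in "match ground truth"
sem_loss = nn.CrossEntropyLoss()(C(fake), sem_labels)  # semantic consistency

total = adv_loss + 1.0 * pix_loss + 1.0 * sem_loss     # weights are toy values
total.backward()
```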

2018-09-28
Yang, Y., Wunsch, D., Yin, Y..  2017.  Hamiltonian-driven adaptive dynamic programming for nonlinear discrete-time dynamic systems. 2017 International Joint Conference on Neural Networks (IJCNN). :1339–1346.

In this paper, based on the Hamiltonian, an alternative interpretation of the iterative adaptive dynamic programming (ADP) approach is developed from the perspective of optimization for discrete-time nonlinear dynamic systems. The role of the Hamiltonian in iterative ADP is explained. The resulting Hamiltonian-driven ADP is able to evaluate the performance with respect to arbitrary admissible policies, compare two different admissible policies, and further improve a given admissible policy. The convergence of the Hamiltonian-driven ADP to the optimal policy is proven. Implementation of the Hamiltonian-driven ADP by neural networks is discussed, based on the assumption that each iterative policy and value function can be updated exactly. Finally, a simulation is conducted to verify the effectiveness of the presented Hamiltonian-driven ADP.
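
For orientation, the standard discrete-time formulation that such a Hamiltonian-driven scheme builds on can be written as follows; the notation here is a common convention and may differ from the paper's.

```latex
% Assumed setting: system x_{k+1} = F(x_k, u_k) with stage cost U(x_k, u_k).
\begin{align}
  H(x_k, u_k, V) &= U(x_k, u_k) + V\bigl(F(x_k, u_k)\bigr) - V(x_k)
      && \text{(discrete-time Hamiltonian)} \\
  V_{i+1}(x_k) &= \min_{u_k}\ \bigl[\, U(x_k, u_k) + V_i\bigl(F(x_k, u_k)\bigr) \bigr]
      && \text{(value update)} \\
  \pi_{i+1}(x_k) &= \arg\min_{u_k}\ H\bigl(x_k, u_k, V_{i+1}\bigr)
      && \text{(policy improvement)}
\end{align}
```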

2018-02-21
Conti, F., Schilling, R., Schiavone, P. D., Pullini, A., Rossi, D., Gürkaynak, F. K., Muehlberghuber, M., Gautschi, M., Loi, I., Haugou, G. et al..  2017.  An IoT Endpoint System-on-Chip for Secure and Energy-Efficient Near-Sensor Analytics. IEEE Transactions on Circuits and Systems I: Regular Papers. 64:2481–2494.

Near-sensor data analytics is a promising direction for internet-of-things endpoints, as it minimizes energy spent on communication and reduces network load, but it also poses security concerns, as valuable data are stored or sent over the network at various stages of the analytics pipeline. Using encryption to protect sensitive data at the boundary of the on-chip analytics engine is a way to address data security issues. To cope with the combined workload of analytics and encryption in a tight power envelope, we propose Fulmine, a system-on-chip (SoC) based on a tightly-coupled multi-core cluster augmented with specialized blocks for compute-intensive data processing and encryption functions, supporting software programmability for regular computing tasks. The Fulmine SoC, fabricated in 65-nm technology, consumes less than 20mW on average at 0.8V, achieving an efficiency of up to 70pJ/B in encryption, 50pJ/px in convolution, or up to 25MIPS/mW in software. As a strong argument for real-life flexible application of our platform, we show experimental results for three secure analytics use cases: secure autonomous aerial surveillance with a state-of-the-art deep convolutional neural network (CNN) consuming 3.16pJ per equivalent reduced instruction set computer operation, local CNN-based face detection with secured remote recognition in 5.74pJ/op, and seizure detection with encrypted data collection from electroencephalogram within 12.7pJ/op.

2018-05-01
Al-Salhi, Y. E. A., Lu, S..  2017.  New Steganography Scheme to Conceal a Large Amount of Secret Messages Using an Improved-AMBTC Algorithm Based on Hybrid Adaptive Neural Networks. 2017 IEEE 3rd International Conference on Big Data Security on Cloud (BigDataSecurity), IEEE International Conference on High Performance and Smart Computing (HPSC), and IEEE International Conference on Intelligent Data and Security (IDS). :112–121.

The term steganography refers to concealing a secret message within another media file. In this paper, a novel image steganography is proposed, based on adaptive neural networks combined with the Improved Absolute Moment Block Truncation Coding (I-AMBTC) algorithm and employing five enhanced edge-detection operators with an optimal target for the ANNs. We propose a new image-concealing scheme using hybrid adaptive neural networks based on the I-AMBTC method, with the help of two approaches: the relevant edge-detection operators and image-compression methods. Although many processing steps are used in our scheme, the quality of the concealed image still looks good according to the HVS and PVD systems. The final simulation results are discussed and compared with other related research on image steganography systems.
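
Classic AMBTC, the compression scheme the method improves on, reduces each block to a bitmap plus two quantization levels. A NumPy sketch of that baseline (the paper's improved, edge-guided embedding variant is not reproduced):

```python
# Classic AMBTC on one block: bitmap + high/low means.
import numpy as np

def ambtc_block(block):
    m = block.mean()
    bitmap = block >= m                       # 1 bit per pixel
    high = block[bitmap].mean()               # mean of pixels >= block mean
    low = block[~bitmap].mean() if (~bitmap).any() else high
    return bitmap, high, low

def ambtc_reconstruct(bitmap, high, low):
    return np.where(bitmap, high, low)

block = np.array([[12, 200, 40, 38],
                  [90, 210, 35, 30],
                  [95, 205, 45, 33],
                  [88, 190, 50, 36]], dtype=float)
bm, hi, lo = ambtc_block(block)
print(ambtc_reconstruct(bm, hi, lo))
```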

2017-12-20
Gayathri, S..  2017.  Phishing websites classifier using polynomial neural networks in genetic algorithm. 2017 Fourth International Conference on Signal Processing, Communication and Networking (ICSCN). :1–4.

Genetic algorithms are a group of mathematical models in computational science, inspired by evolution, that are stimulating AI techniques nowadays. These algorithms preserve critical information by encoding a solution to a specific problem in a data structure manipulated with simple chromosome-recombination operators. Genetic algorithms are optimizers, and the range of problems they can be applied to is quite broad. A genetic algorithm's global search rests on the basic principles of selection, crossover, and mutation. Neural networks, which draw on data structures, algorithms, and inspiration from the human brain, are used for classifying data and for learning. Artificial intelligence (AI) is a field concerned with the many tasks performed naturally by a human; reproducing them with conventional methods on a computer has proved to be a complicated task. Applying neural network techniques creates an internal structure of rules by which a program can learn from examples to classify different inputs, unlike mining techniques. This paper proposes a phishing-website classifier using improved polynomial neural networks in a genetic algorithm.

2018-01-10
Thaler, S., Menkovski, V., Petkovic, M..  2017.  Towards a neural language model for signature extraction from forensic logs. 2017 5th International Symposium on Digital Forensic and Security (ISDFS). :1–6.
Signature extraction is a critical preprocessing step in forensic log analysis because it enables sophisticated analysis techniques to be applied to logs. Currently, most signature extraction frameworks use either rule-based approaches or handcrafted algorithms. Rule-based systems are error-prone and require high maintenance effort; handcrafted algorithms use heuristics and tend to work well only for specialized use cases. In this paper we present a novel approach to extracting signatures from forensic logs that is based on a neural language model. This language model learns to identify mutable and non-mutable parts in a log message, and we use this information to extract signatures. Neural language models have been shown to work extremely well for learning complex relationships in natural language text. We experimentally demonstrate that our model can detect which parts are mutable with an accuracy of 86.4%. We also show how extracted signatures can be used for clustering log lines.
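
The final assembly step is easy to picture: once the model scores each token's mutability, mutable tokens are replaced by a wildcard. A small Python sketch with hard-coded toy scores standing in for the language model's outputs:

```python
# Turn per-token mutability scores into a log signature.
def to_signature(tokens, mutable_probs, threshold=0.5):
    return " ".join(
        "*" if p >= threshold else tok
        for tok, p in zip(tokens, mutable_probs)
    )

log_line = "Failed password for root from 10.0.0.5 port 2222".split()
probs = [0.0, 0.0, 0.0, 0.9, 0.0, 0.95, 0.0, 0.9]   # toy model outputs
print(to_signature(log_line, probs))
# -> Failed password for * from * port *
```
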
2017-12-20
Azakami, T., Shibata, C., Uda, R..  2017.  Challenge to Impede Deep Learning against CAPTCHA with Ergonomic Design. 2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC). 1:637–642.

We previously attempted to propose an unbreakable CAPTCHA and found that a time limit is effective in preventing computers from recognizing characters accurately, whereas computers can eventually recognize all text-based CAPTCHAs given unlimited time. One of the usual existing ways to prevent computers from recognizing characters is distortion, and adding noise is also effective. However, these kinds of prevention also make it difficult for human beings to recognize the characters. As a solution to these problems, our team proposed an effective text-based CAPTCHA algorithm with amodal completion. Our CAPTCHA imposes large computational costs on computers, while amodal completion helps human beings recognize the characters momentarily. Our CAPTCHA has evolved with aftereffects and combinations of complementary colors. We evaluated our CAPTCHA against deep learning, which is attracting the most attention since it is faster and more accurate than existing methods of computer recognition. In this paper, we add jagged lines to the edges of characters, since edges are among the most important parts for recognition in deep learning. We also evaluate how much the jagged lines reduce recognition by human beings and how much they prevent recognition by computers, and we confirm the effects of our method on deep learning.