Biblio
This paper proposes AERFAD, an anomaly detection method based on an autoencoder and a random forest, for the credit card fraud detection problem. AERFAD first uses the autoencoder to reduce the dimensionality of the data and then uses the random forest to classify each transaction as anomalous or normal. For performance evaluation, a large set of credit card transactions from European cardholders is applied to AERFAD to detect possible fraud. Compared with related methods, AERFAD performs well in terms of accuracy, true positive rate, true negative rate, and Matthews correlation coefficient.
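A minimal sketch of the two-stage pipeline this abstract describes, assuming a Keras autoencoder for the dimensionality-reduction stage and a scikit-learn random forest as the classifier; the synthetic data, feature count, and layer sizes are illustrative assumptions rather than the paper's actual configuration:

import numpy as np
from tensorflow import keras
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 30)).astype("float32")   # stand-in for transaction features
y = (rng.random(1000) < 0.05).astype(int)           # stand-in rare "fraud" labels

# Stage 1: train an autoencoder and keep its bottleneck as the reduced representation.
inp = keras.Input(shape=(30,))
code = keras.layers.Dense(8, activation="relu")(inp)        # bottleneck code
out = keras.layers.Dense(30, activation="linear")(code)
autoencoder = keras.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=5, batch_size=64, verbose=0)   # learn to reconstruct inputs

encoder = keras.Model(inp, code)        # dimensionality-reduction stage
Z = encoder.predict(X, verbose=0)

# Stage 2: classify the encoded transactions as anomalous or normal.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(Z, y)
print(clf.predict(Z[:5]))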
In machine learning, white-box adversarial attacks rely on knowledge of the model's attributes. This work focuses on discovering two distinct pieces of model information: the underlying architecture and the primary training dataset. With the process in this paper, a structured set of input probes and the corresponding model outputs become the training data for a deep classifier. Two subdomains of machine learning are explored: image-based classifiers and text transformers based on GPT-2. For image classification, the focus is on commonly deployed architectures and datasets available in popular public libraries. For text generation, a single transformer architecture with multiple parameter scales is fine-tuned on different datasets. Every dataset explored, in both the image and text domains, is distinguishable from the others. The diversity of text transformer outputs implies that further research is needed to successfully attribute architectures in the text domain.
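A hedged sketch of the probing process described above: a fixed set of probe inputs is sent to several candidate models, and the collected outputs become training data for an attribution classifier. The toy query_model function and the logistic-regression classifier below are stand-ins for illustration; the paper's probes, models, and deep classifier are not reproduced here:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
probes = rng.normal(size=(32, 16))                # fixed structured probe set

def query_model(model_id, x):
    # stand-in for a black-box model: deterministic per "architecture" id
    w = np.sin(np.arange(16) * (model_id + 1))
    logits = x @ w[:, None] + model_id
    return logits.ravel()                         # one output value per probe

X, y = [], []
for model_id in range(3):                         # three candidate architectures
    for _ in range(50):                           # repeated, slightly noisy queries
        noisy = probes + rng.normal(scale=0.1, size=probes.shape)
        X.append(query_model(model_id, noisy))
        y.append(model_id)

# attribution classifier trained on probe-response vectors
clf = LogisticRegression(max_iter=1000).fit(np.array(X), y)
print("attribution accuracy on training probes:", clf.score(np.array(X), y))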
One of the latest emerging technologies is artificial intelligence, which makes machines mimic human behavior. The most important component used to detect cyber attacks or malicious activities is the Intrusion Detection System (IDS). Artificial intelligence plays a vital role in detecting intrusions and is widely considered the better approach for adapting and building an IDS. Neural network algorithms, in particular, are emerging as an artificial intelligence technique that can be applied to real-time problems. The proposed system detects and classifies botnet attacks, which pose a serious threat to financial sectors and banking services. It is built by applying artificial intelligence to a realistic cyber defense dataset (CSE-CIC-IDS2018), the latest intrusion detection dataset, created in 2018 by the Canadian Institute for Cybersecurity (CIC) on AWS (Amazon Web Services). The proposed artificial neural network achieves an outstanding accuracy of 99.97%, an average area under the ROC (Receiver Operating Characteristic) curve of 0.999, and an average false positive rate of only 0.001. The proposed botnet attack detection system is powerful, accurate, and precise, and can be deployed on any number of machines for conventional network traffic analysis, cyber-physical system traffic data, and real-time network traffic analysis.
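A minimal sketch of an artificial neural network classifier of the kind the abstract evaluates, assuming a small Keras feed-forward network; the synthetic features stand in for CSE-CIC-IDS2018 flow records, and the feature count and layer sizes are assumptions:

import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20)).astype("float32")       # stand-in flow features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype("float32")     # synthetic benign/botnet label

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),        # botnet probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", keras.metrics.AUC()])
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
print(model.evaluate(X, y, verbose=0))                  # [loss, accuracy, auc]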
Untethered microrobots actuated by external magnetic fields have drawn extensive attention recently due to their potential advantages in real-time tracking and targeted delivery in vivo. Controlling a swarm of microrobots with external fields, however, remains one of the major challenges in this field. In this work, we present new methods to generate ribbon-like and vortex-like microrobotic swarms using oscillating and rotating magnetic fields, respectively. Paramagnetic nanoparticles with a diameter of 400 nm serve as the agents. Both types of swarms exhibit out-of-equilibrium structures in which the nanoparticles perform synchronised motions. By tuning the magnetic fields, the swarming patterns can be reversibly transformed. Moreover, by increasing the pitch angle of the applied fields, the swarms are capable of navigated locomotion with controlled velocity. This work contributes to a better understanding of microrobotic swarm behaviours and paves the way for potential biomedical applications.
Reliability and robustness of Internet of Things (IoT)-to-cloud communication is an important issue for the prospective development of the IoT concept. In this regard, a robust and unique client-to-cloud communication physical layer is required. A Physical Unclonable Function (PUF) is regarded as suitable physics-based random identification hardware, but suffers from reliability problems. In this paper, we propose novel hardware concepts and an analysis method in CMOS technology to improve the hardware-based robustness of the generated PUF word from its first point of generation to the last cloud-interfacing point in a client. Moreover, we present a spectral analysis for an inexpensive, high-yield implementation in a 65 nm technology generation. We also offer robust monitoring concepts for the PUF-interfacing communication physical layer hardware.
Modeling and simulation of real-world environments has recently become widely used. Environments that are difficult to examine directly become easier to examine through a model. The modeled systems have many parameters with large value ranges, and manual tuning is tedious, requires considerable effort, and often fails to produce the desired results. For this reason, the parameter space needs to be tuned automatically. A review of studies from recent years shows that little work addresses the parameter tuning problem in modeling and simulation. In this study, the parameter tuning problem is addressed with two swarm intelligence optimization algorithms: Particle Swarm Optimization and the Firefly algorithm. The performance of these algorithms in the parameter tuning process has been tested on two different agent-based models, and verified by manually entering the parameters they found back into the models. According to the results, the Firefly algorithm found better parameter values, while Particle Swarm Optimization ran faster. This study thereby addresses the parameter tuning problem for models from different fields.
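A minimal Particle Swarm Optimization sketch of the kind of parameter tuning the study performs; the objective function below is a hypothetical stand-in for an agent-based model's error measure, and the bounds and coefficients are assumptions:

import numpy as np

def objective(params):
    # placeholder: distance from an assumed "ideal" parameter vector
    return np.sum((params - np.array([0.3, 0.7])) ** 2)

rng = np.random.default_rng(0)
n_particles, n_dims, n_iters = 20, 2, 100
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and attraction weights

pos = rng.uniform(0.0, 1.0, (n_particles, n_dims))
vel = np.zeros_like(pos)
pbest = pos.copy()                             # each particle's best position
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]            # swarm-wide best position

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, n_dims))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)         # keep particles inside the bounds
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("tuned parameters:", gbest)

The Firefly algorithm follows the same evaluate-and-move loop, differing mainly in how candidate positions attract one another.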
Due to the growing advancement of crimeware services, computer and network security has become a crucial issue. Detecting sensitive data exfiltration is a principal component of any information protection strategy. In this research, a Multi-Level Data Exfiltration Detection (MLDED) system that can handle different types of insider data leakage threats, with staircase difficulty levels and their implications for the organizational environment, has been proposed, implemented, and tested. The proposed system detects exfiltration of data outside an organization's information system, where the main goal is to use the detection results of the MLDED system for digital forensic purposes. The MLDED system consists of three major levels: hashing, keyword extraction, and labeling. However, it considers only certain types of documents, such as plain ASCII text and PDF files. In response to the challenging issue of identifying insider threats, a forensic-readiness data exfiltration system is designed that is capable of detecting and identifying sensitive information leaks. The results show that the proposed system has an overall detection accuracy of 98.93%.
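A hypothetical sketch of the three MLDED levels named above (hashing, keyword extraction, labeling) applied to an outbound plain-text document; the keyword list, threshold, and verdict strings are illustrative assumptions, not the paper's rules:

import hashlib
import re

SENSITIVE_HASHES = {hashlib.sha256(b"confidential payroll 2018").hexdigest()}
SENSITIVE_KEYWORDS = {"confidential", "payroll", "ssn"}

def classify_outbound(text: str) -> str:
    # Level 1: exact-content hashing against known sensitive documents
    if hashlib.sha256(text.encode()).hexdigest() in SENSITIVE_HASHES:
        return "BLOCK: exact sensitive file"
    # Level 2: keyword extraction from the document body
    words = set(re.findall(r"[a-z]+", text.lower()))
    hits = words & SENSITIVE_KEYWORDS
    # Level 3: labeling based on how many sensitive terms appear
    if len(hits) >= 2:
        return "ALERT: likely exfiltration (" + ", ".join(sorted(hits)) + ")"
    return "ALLOW"

print(classify_outbound("confidential payroll 2018"))
print(classify_outbound("quarterly report with payroll and ssn columns"))
print(classify_outbound("lunch menu"))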
In this paper we propose an architecture for a fully reconfigurable, plug-and-play wireless sensor network testbed. The proposed architecture can be reconfigured to support easy experimentation and testing of standard protocol stacks (i.e., uIPv4 and uIPv6) as well as non-standardized clean-slate protocol stacks (e.g., configured using RIME). The parameters of the protocol stacks can be remotely reconfigured through an easy-to-use RESTful API. Additionally, clean-slate protocol stacks can be fully reconfigured at run-time. The architecture enables easy set-up of the network (plug) by using a protocol that automatically sets up a multi-hop network (i.e., the RPL protocol), and it enables reconfiguration and experimentation (play) through simple RESTful interaction with each node individually. The reference implementation of the architecture uses a dual-stack Contiki OS with the ProtoStack tool for dynamic composition of services.
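A hedged sketch of the "play" step: remotely reading and retuning one node's protocol-stack parameters over a RESTful API. The host name, endpoint path, and parameter names below are assumptions for illustration, not the testbed's documented interface:

import requests

NODE = "http://testbed-node-7.local"   # hypothetical address of one sensor node

# read the node's current protocol-stack configuration
cfg = requests.get(f"{NODE}/config/stack").json()
print(cfg)

# remotely retune protocol parameters at run-time (names are assumptions)
resp = requests.put(f"{NODE}/config/stack",
                    json={"rdc": "contikimac", "channel": 26})
resp.raise_for_status()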
Existing main-memory hash join algorithms for multi-core can be classified into two camps. Hardware-oblivious hash join variants do not depend on hardware-specific parameters. Rather, they consider qualitative characteristics of modern hardware and are expected to achieve good performance on any technologically similar platform. The assumption behind these algorithms is that hardware is now good enough at hiding its own limitations, through automatic hardware prefetching, out-of-order execution, or simultaneous multi-threading (SMT), to make hardware-oblivious algorithms competitive without the overhead of carefully tuning to the underlying hardware. Hardware-conscious implementations, such as (parallel) radix join, aim to maximally exploit a given architecture by tuning the algorithm parameters (e.g., hash table sizes) to the particular features of the architecture. The assumption here is that explicit parameter tuning yields enough performance advantages to warrant the effort required. This paper compares the two approaches under a wide range of workloads (relative table sizes, tuple sizes, effects of sorted data, etc.) and configuration parameters (VM page sizes, number of threads, number of cores, SMT, SIMD, prefetching, etc.). The results show that hardware-conscious algorithms generally outperform hardware-oblivious ones. However, on specific workloads and special architectures with aggressive simultaneous multi-threading, hardware-oblivious algorithms are competitive. The main conclusion of the paper is that, in existing multi-core architectures, it is still important to carefully tailor algorithms to the underlying hardware to get the necessary performance. But processor developments may require revisiting this conclusion in the future.
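A minimal sketch contrasting the two families on toy data: a hardware-oblivious hash join that builds one shared hash table, and a radix-style join that first partitions both inputs by the low bits of the key so each partition's hash table stays small; the partition count stands in for the tuning parameters of the hardware-conscious variant:

from collections import defaultdict

def hash_join(R, S):
    table = defaultdict(list)            # build a hash table on relation R
    for key, payload in R:
        table[key].append(payload)
    # probe with S and emit matching (key, r_payload, s_payload) triples
    return [(key, rp, sp) for key, sp in S for rp in table.get(key, ())]

def radix_join(R, S, bits=4):
    n = 1 << bits                        # number of partitions (tuning knob)
    r_parts = [[] for _ in range(n)]
    s_parts = [[] for _ in range(n)]
    for t in R: r_parts[t[0] & (n - 1)].append(t)   # partition pass on R
    for t in S: s_parts[t[0] & (n - 1)].append(t)   # partition pass on S
    out = []
    for rp, sp in zip(r_parts, s_parts):            # join each small partition
        out.extend(hash_join(rp, sp))
    return out

R = [(i % 100, f"r{i}") for i in range(1000)]
S = [(i % 100, f"s{i}") for i in range(1000)]
assert sorted(hash_join(R, S)) == sorted(radix_join(R, S))

In the real algorithms the payoff of the partitioning pass depends on cache and TLB characteristics, which is exactly the hardware-specific tuning the paper evaluates.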