Bibliography
This paper proposes AERFAD, an anomaly detection method based on an autoencoder and a random forest, for solving the credit card fraud detection problem. The proposed AERFAD first utilizes the autoencoder to reduce the dimensionality of the data and then uses the random forest to classify the data as anomalous or normal. Large numbers of credit card transactions from European cardholders are applied to AERFAD to detect possible fraud for performance evaluation. When compared with related methods, AERFAD has relatively excellent performance in terms of accuracy, true positive rate, true negative rate, and Matthews correlation coefficient.
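The two-stage pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: a PCA transform stands in for the autoencoder's encoder (the abstract does not give the network architecture), and the data is synthetic rather than the European cardholder dataset.

```python
# Illustrative two-stage sketch: dimensionality reduction, then a
# random forest classifying the reduced representation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for transaction features (30 dims, rare positive class).
X = rng.normal(size=(2000, 30))
X[:, :2] *= 3.0  # give the informative features dominant variance
y = (X[:, 0] + 0.5 * X[:, 1] > 2.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: reduce dimensionality (an autoencoder in the paper; PCA here).
encoder = PCA(n_components=10).fit(X_tr)
# Stage 2: classify the encoded data as anomalous (1) or normal (0).
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(encoder.transform(X_tr), y_tr)
acc = clf.score(encoder.transform(X_te), y_te)
```

Swapping the PCA step for a trained autoencoder's encoder output leaves the rest of the pipeline unchanged.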
Opportunistic spectrum access is one of the emerging techniques for maximizing throughput in congested bands and is enabled by predicting idle slots in the spectrum. We propose a kernel-based reinforcement learning approach coupled with a novel budget-constrained sparsification technique that efficiently captures the environment to find the best channel-access actions. This approach allows learning and planning over the intrinsic state-action space and extends well to large state spaces. We apply our methods to evaluate the coexistence of a reinforcement-learning-based radio with a multi-channel adversarial radio and a single-channel carrier-sense multiple access with collision avoidance (CSMA-CA) radio. Numerical experiments show performance gains over carrier-sense systems.
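The channel-access decision loop can be illustrated with a toy example. This sketch uses plain tabular Q-learning against a hypothetical periodic interferer; the paper's kernel-based method and budget-constrained sparsification are not reproduced, only the idea of learning which channel is idle in each slot.

```python
# Toy channel-access learner: epsilon-greedy tabular Q-learning over a
# two-channel environment with a periodic (adversarial-style) occupant.
import random

random.seed(1)
N_CHANNELS = 2

def busy(slot, ch):
    # Interferer occupies channel 0 on even slots, channel 1 on odd slots.
    return (slot + ch) % 2 == 0

Q = [[0.0] * N_CHANNELS for _ in range(2)]  # state = slot parity
alpha, eps = 0.2, 0.1
for slot in range(5000):
    state = slot % 2
    if random.random() < eps:                         # explore
        action = random.randrange(N_CHANNELS)
    else:                                             # exploit
        action = max(range(N_CHANNELS), key=lambda a: Q[state][a])
    reward = 0.0 if busy(slot, action) else 1.0       # success = idle channel
    Q[state][action] += alpha * (reward - Q[state][action])

# Greedy policy after learning: pick the idle channel for each slot parity.
policy = [max(range(N_CHANNELS), key=lambda a: Q[s][a]) for s in range(2)]
```

Against this periodic occupant the learned policy alternates channels, which is the behavior a carrier-sense radio cannot achieve without sensing overhead.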
The decisions made by machines are increasingly comparable in predictive performance to those made by humans, but these decision-making processes are often concealed as black boxes. Additional techniques are required to extract understanding, and one such category is explanation methods. This research compares the explanations of two popular forms of artificial intelligence: neural networks and random forests. Researchers in either field often have divided opinions on transparency, and comparing explanations may reveal ground truths shared between models. Similarity can help to encourage trust in predictive accuracy alongside transparent structure and unite the respective research fields. This research explores a variety of simulated and real-world datasets chosen to ensure fair applicability to both learning algorithms. A new heuristic explanation method that extends an existing technique is introduced, and our results show that it is somewhat similar to the other methods examined while also offering an alternative perspective on the least important features.
The presence of robots is becoming more apparent as technology progresses and the market focus transitions from smartphones to robotic personal assistants such as those provided by Amazon and Google. The integration of robots into our societies is an inevitable tendency in which robots in many forms and with many functionalities will provide services to humans. This calls for an understanding of how humans are affected by both the presence of robots and the reliance on them to perform services. In this paper we explore the effects that robots have on humans when a service is performed on request. We expose three groups of human participants to three levels of service completion performed by robots. We record and analyse human perceptions such as propensity to trust, competency, responsiveness, sociability, and teamwork ability. Our results demonstrate that humans tend to trust robots and are more willing to interact with them when the robots autonomously recover from failure by requesting help from other robots to fulfil their service. This supports the view that autonomy and teamworking capabilities must be built into robots in an effort to strengthen trust in robots performing a service.
Severe class imbalance between the majority and minority classes in large datasets can bias Machine Learning classifiers toward the majority class. Our work uniquely consolidates two case studies, each utilizing three learners implemented within an Apache Spark framework, six sampling methods, and five sampling distribution ratios to analyze the effect of severe class imbalance on big data analytics. We evaluate this study with three performance metrics: Area Under the Receiver Operating Characteristic Curve, Area Under the Precision-Recall Curve, and Geometric Mean. In the first case study, models were trained on one dataset (POST) and tested on another (SlowlorisBig). In the second case study, the training and testing dataset roles were switched. Our comparison of performance metrics shows that Area Under the Precision-Recall Curve and Geometric Mean are sensitive to changes in the sampling distribution ratio, whereas Area Under the Receiver Operating Characteristic Curve is relatively unaffected. In addition, we demonstrate that when comparing sampling methods, borderline-SMOTE2 outperforms the other methods in the first case study, and Random Undersampling is the top performer in the second case study.
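The Geometric Mean metric used above is worth making concrete, since it is the one that reveals imbalance problems that plain accuracy hides. A minimal sketch, with illustrative confusion-matrix counts:

```python
# G-mean = sqrt(TPR * TNR): the geometric mean of the true positive
# rate and true negative rate, so both classes weigh equally.
import math

def geometric_mean(tp, fn, tn, fp):
    tpr = tp / (tp + fn)  # sensitivity on the minority class
    tnr = tn / (tn + fp)  # specificity on the majority class
    return math.sqrt(tpr * tnr)

# A classifier that is strong on the majority class but weak on the
# minority class scores high accuracy yet a low G-mean:
tp, fn, tn, fp = 10, 90, 980, 20          # 10% TPR, 98% TNR
gmean = geometric_mean(tp, fn, tn, fp)    # ~0.31
accuracy = (tp + tn) / (tp + fn + tn + fp)  # 0.90
```

This is why G-mean (like the Precision-Recall curve area) reacts to sampling-ratio changes that leave threshold-free majority-dominated metrics largely untouched.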
This article presents a practical approach for secure key exchange exploiting reciprocity in wireless transmission. The method relies on the reciprocal channel phase to mask points of a Phase Shift Keying (PSK) constellation. Masking is achieved by adding (modulo 2π) the measured reciprocal channel phase to the PSK constellation points carrying some of the key bits. As the channel phase is uniformly distributed in [0, 2π], knowing the sum of the two phases does not disclose any information about either of its two components. To enlarge the key size over a static or slow fading channel, the Radio Frequency (RF) propagation path is perturbed to create independent realizations of multi-path fading. Prior techniques have relied on quantizing the reciprocal channel state measured at the two ends and thereby suffer from information leakage in the process of key consolidation (ensuring the two ends have access to the same key). The proposed method does not suffer from such shortcomings, as raw key bits can be equipped with Forward Error Correction (FEC) without affecting the masking (zero information leakage) property. To eavesdrop on a phase value shared in this manner, the Eavesdropper (Eve) would need to solve a system of linear equations defined over angles, each equation corresponding to a possible measurement by Eve. Channel perturbation is performed such that each new channel state creates an independent channel realization for the legitimate nodes, as well as for each of Eve's antennas. As a result, regardless of Eve's Signal-to-Noise Ratio (SNR) and number of antennas, Eve will always face an under-determined system of equations. On the other hand, trying to solve any such under-determined system of linear equations in terms of an unknown phase will not reveal any useful information about the actual answer, meaning that the distribution of the answer remains uniform in [0, 2π].
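The core masking operation described above reduces to modular phase arithmetic and can be sketched directly. The constellation point and channel draw below are illustrative values, not measurements from the paper:

```python
# Phase masking: add the reciprocal channel phase (mod 2*pi) to a PSK
# constellation phase; the legitimate receiver, which measures the same
# reciprocal phase, subtracts it back out. Eve sees only the sum, which
# is uniform on [0, 2*pi) and so leaks nothing about either component.
import math
import random

random.seed(0)
TWO_PI = 2 * math.pi

def mask(psk_phase, channel_phase):
    return (psk_phase + channel_phase) % TWO_PI

def unmask(masked_phase, channel_phase):
    return (masked_phase - channel_phase) % TWO_PI

psk = math.pi / 4                    # example QPSK point carrying key bits
ch = random.uniform(0.0, TWO_PI)     # reciprocal channel phase (uniform)
observed = mask(psk, ch)             # what goes over the air
recovered = unmask(observed, ch)     # legitimate receiver's view
```

Because the sum of an unknown uniform phase and any fixed phase is again uniform, each such observation gives Eve one equation in two unknowns, which is the under-determined-system argument made in the abstract.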
Extended interaction oscillators (EIOs) are high-frequency vacuum-electronic sources capable of generating millimeter-wave to terahertz (THz) radiation. They are considered potential sources of high-power submillimeter-wave radiation. Different slow-wave structures and beam geometries are used for EIOs. This paper presents a quantitative figure of merit, the critical unloaded oscillating frequency (fcr), for any specific EIO geometry. This figure is calculated and tested for 2π standing-wave modes (a common mode for EIOs) of two different slow-wave structures (SWSs): one double-ridge SWS driven by a sheet electron beam and one ring-loaded waveguide driven by a cylindrical beam. The calculated fcrs are compared with particle-in-cell (PIC) results, showing acceptable agreement. The derived fcr is calculated three to four orders of magnitude faster than with the PIC solver. The generality of the method, its clear physical interpretation, and its computational speed make it a convenient approach to evaluate the high-frequency behavior of any specified EIO geometry. This makes it possible to investigate changes in geometry to attain higher frequencies in the THz spectrum.
Security attacks against the Internet of Things (IoT) are on the rise, and they can lead to drastic consequences. Data confidentiality is typically based on a strong symmetric-key algorithm to guard against confidentiality attacks. However, there is a need to design an efficient lightweight cipher scheme for a number of IoT applications. Recently, a set of lightweight cryptographic algorithms have been presented that are based on the dynamic key approach, requiring a small number of rounds to minimize the computation and resource overhead without degrading the security level. This paper follows this logic and provides a new flexible lightweight cipher, with or without a chaining operation mode, with a simple round function and a dynamic key for each input message. Consequently, the proposed cipher scheme can be utilized for real-time applications and/or devices with limited resources such as Multimedia Internet of Things (MIoT) systems. The importance of the proposed solution is that it produces dynamic cryptographic primitives and performs the mixing of selected blocks in a dynamic pseudo-random manner. Accordingly, different plaintext messages are encrypted differently, and the avalanche effect is also preserved. Finally, security and performance analyses are presented to validate the efficiency and robustness of the proposed cipher variants.
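The dynamic-key idea, deriving a fresh key per message so that identical plaintexts encrypt differently, can be illustrated with the standard library. This toy XOR construction is emphatically not the authors' cipher (their round function and block mixing are not reproduced); it only shows per-message key derivation from a master secret and a nonce:

```python
# Toy per-message dynamic key: key = SHA-256(master || nonce).
# The XOR "encryption" below is for illustration only, not a secure cipher.
import hashlib

def derive_key(master: bytes, nonce: bytes) -> bytes:
    return hashlib.sha256(master + nonce).digest()

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

master = b"shared-master-secret"
msg = b"same plaintext"
# The same plaintext under two nonces yields two different ciphertexts:
c1 = xor_bytes(msg, derive_key(master, b"nonce-1"))
c2 = xor_bytes(msg, derive_key(master, b"nonce-2"))
# Decryption re-derives the same per-message key:
p1 = xor_bytes(c1, derive_key(master, b"nonce-1"))
```

In the paper's scheme the dynamic key additionally selects the cryptographic primitives and the pseudo-random block mixing, which is where the avalanche property comes from.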
An intrusion detection system (IDS) monitors data and network activity to provide advance warning of possible vulnerabilities and attacks. One of the main limitations of present intrusion detection technology is the volume of false alarms, which can confuse the user. This paper deals with the different types of IDS, their behaviour, response time, and other important factors. It also brings out the advantages and disadvantages of six recent intrusion detection techniques and gives a clear picture of the latest advancements in the field of IDS in terms of detection rate, accuracy, average running time, and false alarm rate.
We introduce a new defense mechanism for stochastic control systems with control objectives that enhances their resilience before any attack is detected. To this end, we cautiously design the outputs of the sensors that monitor the state of the system, since attackers need the sensor outputs for their malicious objectives in stochastic control scenarios. Unlike defense mechanisms that seek to detect infiltration or to improve the detectability of attacks, the proposed approach seeks to minimize the damage of possible attacks before they have even been detected. Specifically, we consider a controlled Gauss-Markov process, where the controller could have been infiltrated at any time during the system's operation. Within the framework of game-theoretic hierarchical equilibrium, we provide a semi-definite-programming-based algorithm to compute the optimal linear secure sensor outputs that enhance the resilience of control systems prior to attack detection.
Behavioral malware detection aims to improve on the performance of static signature-based techniques used by anti-virus systems, which are less effective against modern polymorphic and metamorphic malware. Behavioral malware classification aims to go beyond detection by also identifying a malware's family according to a naming scheme such as those used by anti-virus vendors. Behavioral malware classification techniques use run-time features, such as file system or network activities, to capture the behavioral characteristics of running processes. The increasing volume of malware samples, the diversity of malware families, and the variety of naming schemes given to malware samples by anti-virus vendors present challenges to behavioral malware classifiers. We describe a behavioral classifier that uses a Convolutional Recurrent Neural Network and data from Microsoft Windows Prefetch files. We demonstrate the model's improvement over the state of the art using a large dataset of malware families and four major anti-virus vendor naming schemes. The model is effective in classifying malware samples that belong to common and rare malware families and can incrementally accommodate the introduction of new malware samples and families.
The paper examines the use of stylometry techniques to determine the style of an author's publications. Statistical linguistic analysis of the author's text takes advantage of text-content monitoring based on the Porter stemmer and NLP methods to determine a set of stop words. The latter is used in stylometry methods to attribute the analyzed text to a specific author as a percentage score. The article proposes a formal approach to defining the author's style for Ukrainian texts. Experimental results of the proposed method for attributing an analyzed text to a particular author, given the availability of a reference text fragment, are obtained. The study was conducted on Ukrainian scientific texts from a technical area.
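The stop-word attribution idea can be sketched as comparing function-word frequency profiles. This is a generic illustration under assumptions: the tiny English stop-word list and sample sentences below are placeholders, not the paper's Ukrainian resources, and the Porter-stemmer preprocessing step is omitted:

```python
# Stylometry sketch: reduce each text to relative frequencies of stop
# words, then compare profiles by cosine similarity (a percentage-like
# score in [0, 1]).
import math
from collections import Counter

STOP_WORDS = ["the", "of", "and", "to", "in", "that", "is", "it"]

def profile(text):
    words = text.lower().split()
    counts = Counter(w for w in words if w in STOP_WORDS)
    total = len(words)
    return [counts[w] / total for w in STOP_WORDS]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

reference = profile("the method is applied to the text and it is tested")
candidate = profile("the approach is used in the study and it is checked")
similarity = cosine(reference, candidate)
```

Attribution then reduces to comparing a candidate text's profile against reference profiles of known authors and reporting the best-scoring match.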
Conducted emission of motors is a domain of interest for EMC, as it may introduce disturbances into the systems in which they are integrated. Nevertheless, few publications deal with the susceptibility of motors, and especially servomotors, even though these devices are increasingly used in automated production lines as well as in robotics. Recent papers have been devoted to the possibility of compromising such systems through cyber-attacks. One could imagine the use of smart intentional electromagnetic interference to modify their behavior or damage them, leading to modification of the industrial process. This paper aims to identify the disturbances that may affect the behavior of a Commercial Off-The-Shelf servomotor when exposed to an electromagnetic field, and the criticality of the effects with regard to its application. Experiments have shown that a train of radio-frequency pulses may induce an erroneous reading of the servomotor's position value and modify the movement of the motor's axis in an unpredictable way.