Bibliography
In this work, an asymmetric cryptography method for information security was developed, inspired by the fact that the human body generates chaotic signals and that these signals can be used to create sequences of random numbers. The encryption circuit was implemented in reconfigurable hardware (FPGA). To encode and decode an image, chaotic synchronization between two dynamic systems, in this case Hopfield neural networks (HNNs), was used to simulate chaotic signals. The notion of homotopy, an argument of topological nature, was used for the synchronization. The results show efficiency compared to the state of the art in terms of image correlation, histogram analysis and hardware implementation.
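A minimal sketch of the encryption side of such a scheme: a chaotic sequence is quantized into a byte keystream and XOR-ed with the image pixels, and XOR-ing again with the same synchronized keystream decrypts. A logistic map stands in for the Hopfield-network chaos used in the paper, whose network parameters are not reproduced here.

import numpy as np

def chaotic_keystream(length, x0=0.31, r=3.99):
    x, stream = x0, np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)                  # chaotic logistic map iteration
        stream[i] = int(x * 1e6) % 256         # quantize the state into a key byte
    return stream

img = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
ks = chaotic_keystream(img.size)
enc = np.bitwise_xor(img.ravel(), ks).reshape(img.shape)      # encrypt
dec = np.bitwise_xor(enc.ravel(), ks).reshape(img.shape)      # a synchronized receiver keystream
assert np.array_equal(dec, img)                               # recovers the original image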
Spam messages are unsolicited and unnecessary messages that may contain harmful code or links that activate malicious viruses and spyware. The increasing popularity of social networks attracts spammers to perform malicious activities on them, so an efficient spam detection method for social networks is necessary. In this paper, a spam detection model based on a feed-forward neural network with backpropagation is proposed. The quality of the learning process is improved by tuning the initial weights of the feed-forward neural network with the proposed enhanced step-size firefly algorithm, which reduces the time needed to find optimal weights during learning. The model is applied to a Twitter dataset, and the experimental results show that the proposed model performs well in terms of accuracy and detection rate and has a lower false positive rate.
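A minimal sketch of firefly-style weight initialization as the abstract describes it: candidate weight vectors attract each other toward brighter (lower-loss) solutions. The paper's "enhanced step size" schedule is not specified, so a simple decaying alpha is assumed here, and a quadratic stands in for the network's training loss.

import numpy as np

def firefly_optimize(fitness, dim, n_fireflies=10, iters=50,
                     beta0=1.0, gamma=1.0, alpha=0.5, alpha_decay=0.97):
    pos = np.random.uniform(-1, 1, size=(n_fireflies, dim))    # candidate weight vectors
    light = np.array([fitness(p) for p in pos])                # lower loss = brighter firefly
    for _ in range(iters):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if light[j] < light[i]:                        # move i toward the brighter j
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    pos[i] += beta * (pos[j] - pos[i]) + alpha * (np.random.rand(dim) - 0.5)
                    light[i] = fitness(pos[i])
        alpha *= alpha_decay                                   # shrinking random step size
    return pos[np.argmin(light)]

# In the paper's setting, fitness would be the training loss of the feed-forward
# network with weights unpacked from the flat vector; a quadratic is used here.
best_initial_weights = firefly_optimize(lambda w: np.sum(w ** 2), dim=20)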
In this paper, we consider an approach to studying the characteristics of an information system that is under the influence of various factors, and to managing them using neural networks and wavelet transforms, based on determining the relationship between the modified state of the information system and the possibility of dynamic analysis of the effects. The process of influencing the information system includes the following components: impact on the components providing the functions of the information system; determination of the result of the exposure; analysis of that result; and response to it. The characteristics of the affecting means are taken as the input signal. The system includes an adaptive response unit whose input receives signals about the prerequisites for changes and whose output generates signals to activate appropriate means to eliminate or compensate for these prerequisites, or for the resulting changes in the information system.
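A minimal sketch of the signal path implied above: the input signal is decomposed with a wavelet transform and the sub-band energies are fed to a small neural network that classifies the kind of response required. The wavelet family, decomposition level and the classifier head are illustrative assumptions, not the paper's configuration.

import numpy as np
import pywt
import torch
import torch.nn as nn

def wavelet_features(signal, wavelet="db4", level=3):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Summarize each sub-band by its energy so the feature vector has fixed size.
    return np.array([np.sum(c ** 2) for c in coeffs], dtype=np.float32)

classifier = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))   # 3 response classes
features = torch.from_numpy(wavelet_features(np.random.randn(256))).unsqueeze(0)
response_logits = classifier(features)    # could drive the adaptive response unit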
Advances in our understanding of the nature of cognition in its myriad forms (Embodied, Embedded, Extended, and Enactive) displayed in all living beings (cellular organisms, animals, plants, and humans), together with new theories of information, info-computation and knowledge, are throwing light on how we should build software systems in the digital universe that mimic and interact with intelligent, sentient and resilient beings in the physical universe. Recent attempts to infuse cognition into computing systems to push the boundaries of the Church-Turing thesis have led to new computing models that mimic biological systems, encoding knowledge structures using both algorithms executed in stored-program control machines and neural networks. This paper presents a new model and implements an application as a hierarchical named network composed of microservices, creating a managed process workflow by enabling dynamic configuration and reconfiguration of the microservice network. We demonstrate the resiliency, efficiency and scaling of the named microservice network using a novel edge cloud platform by Platina Systems. The platform eliminates the need for a Virtual Machine overlay and provides high performance and low latency with an L3-based 100 GbE network and SSD support with RDMA and NVMeoE. The hierarchical named microservice network, provisioned with the Kubernetes stack, provides cloud features such as elasticity, autoscaling, self-repair and live migration without reboot. The model is derived from a recent theoretical framework for the unification of different models of computation using "Structural Machines," which are shown to simulate Turing machines and inductive Turing machines and are proved to be more efficient than Turing machines. The structural machine framework, with a hierarchy of controllers managing the named service connections, provides dynamic reconfiguration of the service network, from browsers to databases, to address rapid fluctuations in the demand for or availability of resources without having to reconfigure IP-address-based networks.
In recent trends, privacy preservation is a predominant concern in big data analytics and cloud computing. Every organization collects personal data from users, actively or passively. Publishing this data for research and other analytics without removing Personally Identifiable Information (PII) leads to privacy breaches. Existing anonymization techniques fail to maintain the balance between data privacy and data utility. To provide a trade-off between user privacy and data utility, a Mondrian-based k-anonymity approach is proposed. To protect the privacy of high-dimensional data, a Deep Neural Network (DNN) based framework is proposed. The experimental results show that the proposed approach mitigates the information loss of the data without compromising privacy.
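A minimal sketch of Mondrian-style k-anonymity on numeric quasi-identifiers, as a point of reference for the approach named above; the paper's DNN component for high-dimensional data is omitted, and the stopping rule and generalization format are standard Mondrian choices rather than the authors' exact settings.

import numpy as np

def mondrian(records, k):
    """Recursively split on the widest quasi-identifier; stop when a further
    split would leave a side with fewer than k records."""
    partitions = []

    def split(part):
        spans = part.max(axis=0) - part.min(axis=0)
        for dim in np.argsort(spans)[::-1]:           # try the widest attribute first
            median = np.median(part[:, dim])
            left, right = part[part[:, dim] <= median], part[part[:, dim] > median]
            if len(left) >= k and len(right) >= k:
                split(left)
                split(right)
                return
        partitions.append(part)                       # no allowable cut: keep as one group

    split(records)
    # Generalize each group to the [min, max] range of every quasi-identifier.
    return [(p.min(axis=0), p.max(axis=0), len(p)) for p in partitions]

data = np.random.randint(18, 90, size=(100, 3))       # e.g. age, zip prefix, income bucket
for lo, hi, n in mondrian(data, k=5):
    print(f"{n} records generalized to ranges {list(zip(lo, hi))}")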
Deep neural networks (DNNs) provide good performance for image recognition, speech recognition, and pattern recognition. However, poisoning attacks are a serious threat to DNN security. A poisoning attack reduces the accuracy of a DNN by adding malicious training data during the training process. In some situations, such as military applications, it may be necessary to degrade the accuracy of only a chosen class in the model. For example, an attacker may wish to selectively prevent a UAV from correctly recognizing nuclear-related facilities. In this paper, we propose a selective poisoning attack that reduces the accuracy of only the chosen class in the model. The proposed method trains on malicious data corresponding to the chosen class, lowering that class's accuracy while maintaining the accuracy of the remaining classes. For the experiments, we used TensorFlow as the machine learning library and MNIST and CIFAR10 as datasets. Experimental results show that the proposed method reduces the accuracy of the chosen class to 43.2% on MNIST and 55.3% on CIFAR10 while maintaining the accuracy of the remaining classes.
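A minimal sketch of the selective-poisoning idea: training samples of one chosen class are injected with deliberately wrong labels so that only that class degrades. The poison ratio and the label-flipping rule below are illustrative assumptions, not the paper's exact recipe.

import numpy as np

def selective_poison(x_train, y_train, chosen_class, poison_ratio=0.3, n_classes=10):
    idx = np.where(y_train == chosen_class)[0]
    n_poison = int(len(idx) * poison_ratio)
    poison_idx = np.random.choice(idx, n_poison, replace=False)
    x_poison = x_train[poison_idx].copy()
    # Assign random wrong labels so the model's notion of the chosen class blurs,
    # while all other classes keep their clean training data.
    y_poison = np.random.choice(
        [c for c in range(n_classes) if c != chosen_class], size=n_poison)
    return np.concatenate([x_train, x_poison]), np.concatenate([y_train, y_poison])

# Usage: x_p, y_p = selective_poison(x_mnist, y_mnist, chosen_class=3)
# then train the victim model on (x_p, y_p) as usual.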
Internet application providers now have more incentive than ever to collect user data, which greatly increases the risk of user privacy violations due to the emergence of deep neural networks. In this paper, we propose TensorClog, a poisoning attack technique designed for privacy protection against deep neural networks. TensorClog has three properties, each serving a privacy protection purpose: 1) training on TensorClog-poisoned data results in lower inference accuracy, reducing the incentive for abusive data collection; 2) training on TensorClog-poisoned data converges to a larger loss, which prevents the neural network from learning private information; and 3) TensorClog regularizes the perturbation to retain high structural similarity, so that the poisoning does not affect the actual content of the data. Applying our TensorClog poisoning technique to the CIFAR-10 dataset increases the converged training loss and the test error by 300% and 272%, respectively, while maintaining human perception of the data with a high SSIM index of 0.9905. Further experiments, including different limited-information attack scenarios and a real-world application transferred from pre-trained ImageNet models, evaluate TensorClog's effectiveness in more complex situations.
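A simplified sketch of the flavor of such input poisoning: each image is perturbed within a small L-infinity budget so that a network trains poorly on it, while the bound keeps structural similarity high. This is a surrogate objective for illustration, not the authors' exact TensorClog formulation.

import torch
import torch.nn.functional as F

def poison_batch(model, x, y, epsilon=4/255, steps=5, lr=1/255):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        # Ascend the training loss so learning on the poisoned data converges badly,
        # then project back into the epsilon-ball to preserve perceptual similarity.
        delta = (delta + lr * grad.sign()).clamp(-epsilon, epsilon).detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()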
Style transfer is a research hotspot in computer vision, and high-quality style transfer remains a challenge despite extensive research. In this work, we propose ASTCNN, a real-time Arbitrary Style Transfer Convolutional Neural Network. ASTCNN consists of two independent encoders and a decoder: the encoders extract style and content features from the style and content images, respectively, and the decoder generates the style-transferred image. Experimental results show that ASTCNN achieves higher-quality output images than state-of-the-art style transfer algorithms while requiring 23.3% less floating-point computation.
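A minimal sketch of the two-encoder, one-decoder layout described for ASTCNN; the layer sizes and the feature-fusion step (simple concatenation here) are illustrative assumptions, not the published architecture.

import torch
import torch.nn as nn

def conv_block(cin, cout, stride=1):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride, 1), nn.ReLU(inplace=True))

class ASTCNNSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.content_enc = nn.Sequential(conv_block(3, 32), conv_block(32, 64, 2))
        self.style_enc = nn.Sequential(conv_block(3, 32), conv_block(32, 64, 2))
        self.decoder = nn.Sequential(
            conv_block(128, 64),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(64, 3, 3, 1, 1))

    def forward(self, content, style):
        c = self.content_enc(content)
        s = self.style_enc(style)
        # Fuse content and style features and decode the stylized image.
        return self.decoder(torch.cat([c, s], dim=1))

stylized = ASTCNNSketch()(torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128))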
Images perturbed subtly to be misclassified by neural networks, called adversarial examples, have emerged as a technically deep challenge and an important concern for several application domains. Most research on adversarial examples takes as its only constraint that the perturbed images are similar to the originals. However, real-world application of these ideas often requires the examples to satisfy additional objectives, which are typically enforced through custom modifications of the perturbation process. In this article, we propose adversarial generative nets (AGNs), a general methodology to train a generator neural network to emit adversarial examples satisfying desired objectives. We demonstrate the ability of AGNs to accommodate a wide range of objectives, including imprecise ones difficult to model, in two application domains. In particular, we demonstrate physical adversarial examples—eyeglass frames designed to fool face recognition—with better robustness, inconspicuousness, and scalability than previous approaches, as well as a new attack to fool a handwritten-digit classifier.
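A minimal sketch of the kind of training signal an AGN generator might receive: the generator emits a perturbation artifact (for example an eyeglass-frame texture), and its loss combines a term for fooling the target classifier with a discriminator term keeping the artifact inconspicuous. The model interfaces and the loss weighting below are assumptions for illustration, not the paper's exact objective.

import torch
import torch.nn.functional as F

def agn_generator_loss(generator, discriminator, target_model, z, face, target_class, lam=1.0):
    frames = generator(z)                       # adversarial artifact, e.g. a glasses texture
    attacked = (face + frames).clamp(0, 1)      # composite the artifact onto the face image
    fool_loss = F.cross_entropy(
        target_model(attacked),
        torch.full((face.size(0),), target_class, dtype=torch.long))
    realism_loss = F.binary_cross_entropy_with_logits(
        discriminator(frames), torch.ones(face.size(0), 1))   # look like plausible eyeglasses
    return fool_loss + lam * realism_loss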
Gatys et al. found that neural networks used to extract information from images can separate the content and style of an image and recombine them into a new image, a process called style transfer. Many feed-forward networks have since been proposed to speed up the original method and make style transfer practical. However, this comes at a price: such feed-forward networks have fixed parameters, so they can transfer only a single style in real time rather than arbitrary styles. Several approaches have been offered to relieve this dilemma, such as a style-swap layer and an adaptive instance normalization (AdaIN) layer. We observe that the AdaIN layer aligns only the means and variances of the content feature maps with those of the style feature maps. Our method is an operational approach that enables arbitrary style transfer in real time, preserves more statistical information through histogram matching, and provides more reliable texture clarity and more intuitive user control. We achieve better results than existing approaches without adding computational complexity, at a speed comparable to the fastest style transfer methods, while providing more flexible user control and reliable quality and stability.
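The AdaIN alignment referred to above, written out: the content features are normalized and re-scaled to the style features' channel-wise statistics, i.e. AdaIN(x, y) = sigma(y) * (x - mu(x)) / sigma(x) + mu(y). The histogram-matching refinement proposed in the abstract would replace this mean/variance alignment with a fuller distribution match and is not shown here.

import torch

def adain(content_feat, style_feat, eps=1e-5):
    # Features are (N, C, H, W); statistics are computed per sample and channel.
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True)
    return s_std * (content_feat - c_mean) / c_std + s_mean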
These days deep learning is the fastest-growing area in the field of machine learning, and convolutional neural networks are currently the main tool used for image analysis and classification. Despite great achievements and promising perspectives, deep neural networks and their learning algorithms still face relevant challenges. In this paper, we focus on one of the most frequently mentioned problems in machine learning: relatively poor generalization ability. Partial remedies are regularization techniques such as dropout, batch normalization, weight decay, transfer learning, early stopping and data augmentation. Here we focus on data augmentation and propose a method based on neural style transfer, which generates new unlabeled images of high perceptual quality that combine the content of a base image with the appearance of another one. In the proposed approach, the newly created images are assigned pseudo-labels and then used as a training dataset, while the real, labeled images are divided into validation and test sets. We validated the proposed method on a challenging skin lesion classification case study, examining four representative neural architectures. The obtained results show the strong potential of the proposed approach.
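A minimal sketch of the augmentation loop described above: style-transferred images are generated from pairs of training images and given pseudo-labels, here taken from a teacher model's prediction (inheriting the content image's label would be another option). The style_transfer() function is assumed to be any fast neural style transfer implementation.

import torch

def build_pseudo_labeled_set(content_imgs, style_imgs, style_transfer, teacher):
    augmented, pseudo_labels = [], []
    teacher.eval()
    with torch.no_grad():
        for c, s in zip(content_imgs, style_imgs):
            x = style_transfer(c.unsqueeze(0), s.unsqueeze(0))   # new unlabeled image
            pseudo_labels.append(teacher(x).argmax(dim=1))       # assign a pseudo-label
            augmented.append(x)
    return torch.cat(augmented), torch.cat(pseudo_labels)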
This paper describes a machine-assistance approach to grading decisions for values that might be missing or need validation, using a mathematical, algebraic form of an expert system instead of the traditional textual or logic forms, built as a neural-network-style computational graph. The expert system is structured into a neural-network-like format of input, hidden and output layers, which provides a structured approach to knowledge-base organization and a useful abstraction for reuse in data migration applications in big data, cyber and relational databases. The approach is further enhanced with a Bayesian probability tree that grades the confidences of value probabilities, instead of the traditional grading of rule probabilities, and estimates the most probable value in light of all evidence presented. This is groundwork for a machine learning (ML) expert system approach in a form that is closer to a neural network node structure.
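A minimal sketch of grading candidate values with Bayes' rule, in the spirit of the value-confidence grading described above: each piece of evidence supplies a likelihood for every candidate value, and the posterior picks the most probable value. The priors and likelihoods below are illustrative, not taken from the paper.

import numpy as np

def most_probable_value(candidates, prior, likelihoods):
    """likelihoods: one array per piece of evidence, aligned with candidates."""
    posterior = np.array(prior, dtype=float)
    for lik in likelihoods:
        posterior *= lik                      # accumulate evidence
        posterior /= posterior.sum()          # renormalize at each step
    best = int(np.argmax(posterior))
    return candidates[best], posterior[best]

# Example: grading a missing "grade" field from two independent evidence sources.
value, confidence = most_probable_value(
    candidates=["A", "B", "C"],
    prior=[1/3, 1/3, 1/3],
    likelihoods=[np.array([0.7, 0.2, 0.1]), np.array([0.6, 0.3, 0.1])])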
Deep learning is a popular and powerful machine learning solution for computer vision tasks. Its most criticized vulnerability is its poor tolerance of adversarial images, obtained by deliberately adding imperceptibly small perturbations to clean inputs, which can mislead a classifier into wrong decisions. Previous defensive techniques mostly focused on refining the model or transforming the input; they have either been tested only on small datasets or shown limited success. Furthermore, they are rarely scrutinized from the hardware perspective, even though Artificial Intelligence (AI) on a chip is a roadmap for embedded intelligence everywhere. In this paper we propose a new discriminative noise-injection strategy that adaptively selects a few dominant layers and progressively discriminates adversarial from benign inputs. This is made possible by evaluating the differences in label change rate between adversarial and natural images when different amounts of noise are injected into the weights of individual layers of the model. The approach is evaluated on the ImageNet dataset with 8-bit truncated models for state-of-the-art DNN architectures. The results show a detection rate of up to 88.00% with only approximately 5% false positives for MobileNet. Both the detection rate and the false positive rate improve well beyond existing advanced defenses against the most practical non-invasive universal perturbation attack on deep-learning-based AI chips.
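A minimal sketch of the detection signal used above: Gaussian noise is injected into the weights of a few chosen layers and the label change rate over several noisy passes is compared against a threshold, since adversarial inputs tend to flip labels more often. The layer selection, noise scale and threshold are assumptions for illustration.

import copy
import torch

@torch.no_grad()
def label_change_rate(model, x, noisy_layers, sigma=0.02, trials=10):
    base_label = model(x).argmax(dim=1)
    changes = torch.zeros_like(base_label, dtype=torch.float)
    for _ in range(trials):
        noisy = copy.deepcopy(model)
        for name, param in noisy.named_parameters():
            if any(name.startswith(layer) for layer in noisy_layers):
                param.add_(sigma * torch.randn_like(param))    # perturb selected layer weights
        changes += (noisy(x).argmax(dim=1) != base_label).float()
    return changes / trials

# Usage: flag inputs whose label_change_rate(...) exceeds a calibrated threshold.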
With the growing number of streaming services, internet providers increasingly need to identify the types of data and content providers being used on their networks. Traditional methods, such as IP and port scanning, are not always available for clients using VPNs or for providers using varying IP addresses. In this paper we therefore explore a potential method using neural networks and a Markov Decision Process to augment deep packet inspection techniques in identifying the source and class of video streaming services.
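A minimal sketch of the classification half of such an approach: a small neural network labels a flow from packet-level statistics that remain visible even under a VPN. The feature set and classes are assumptions, and the MDP-driven inspection policy is omitted.

import torch
import torch.nn as nn

class FlowClassifier(nn.Module):
    def __init__(self, n_features=8, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, n_classes))

    def forward(self, flow_stats):            # flow_stats: (batch, n_features)
        return self.net(flow_stats)

# Per-flow features might include mean/std packet size, mean inter-arrival time,
# bytes up/down, and burst counts, aggregated over a short observation window.
logits = FlowClassifier()(torch.rand(16, 8))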
Writing style is a combination of consistent decisions associated with a specific author at different levels of language production, including lexical, syntactic, and structural. In this paper, we introduce a style-aware neural model to encode document information from three stylistic levels and evaluate it in the domain of authorship attribution. First, we propose a simple way to jointly encode syntactic and lexical representations of sentences. Subsequently, we employ an attention-based hierarchical neural network to encode the syntactic and semantic structure of sentences in documents while rewarding the sentences which contribute more to capturing the writing style. Our experimental results, based on four benchmark datasets, reveal the benefits of encoding document information from all three stylistic levels when compared to the baseline methods in the literature.
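A minimal sketch of jointly encoding lexical and syntactic signals as described above: word and POS-tag embeddings are concatenated per token and passed through a recurrent encoder with attention over tokens, so stylistically salient tokens are rewarded. The dimensions and the use of a GRU are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class StyleAwareSentenceEncoder(nn.Module):
    def __init__(self, vocab_size, pos_size, word_dim=100, pos_dim=20, hidden=64):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.pos_emb = nn.Embedding(pos_size, pos_dim)
        self.gru = nn.GRU(word_dim + pos_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)

    def forward(self, word_ids, pos_ids):
        x = torch.cat([self.word_emb(word_ids), self.pos_emb(pos_ids)], dim=-1)
        h, _ = self.gru(x)                                    # (batch, seq, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)          # attend to salient tokens
        return (weights * h).sum(dim=1)                       # sentence representation

sentence_vec = StyleAwareSentenceEncoder(5000, 45)(
    torch.randint(0, 5000, (2, 12)), torch.randint(0, 45, (2, 12)))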
Conventional methods for anomaly detection include techniques based on clustering, proximity or classification. In rapidly growing social networks, outliers and anomalies find ingenious ways to obscure themselves, making conventional techniques inefficient. In this paper, we utilize the ability of deep learning to exploit the topological characteristics of a social network to detect anomalies in an email network and a Twitter network. We present a Graph Neural Network model that is applied to social connection graphs to detect anomalies. Combinations of various social network statistical measures are taken into account to study the graph structure and the behavior of anomalous nodes by employing deep neural networks on them. The hidden layer of the neural network plays an important role in finding the impact of each combination of statistical measures on anomaly detection.
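A minimal sketch of the feature pipeline implied above: per-node statistical measures are computed from the connection graph and can then be fed to a neural network that scores each node. The chosen measures and normalization are illustrative assumptions; the paper's specific measure combinations and scoring head are not reproduced.

import networkx as nx
import numpy as np

def node_feature_matrix(G):
    degree = dict(G.degree())
    clustering = nx.clustering(G)
    betweenness = nx.betweenness_centrality(G)
    nodes = list(G.nodes())
    feats = np.array([[degree[n], clustering[n], betweenness[n]] for n in nodes],
                     dtype=np.float32)
    # Normalize each measure so the downstream network sees comparable scales.
    feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-8)
    return nodes, feats

# feats can then be passed to a feed-forward scorer whose hidden layer mixes
# the statistical-measure combinations referred to in the abstract.
nodes, feats = node_feature_matrix(nx.karate_club_graph())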
The greatest threat to an organization and its assets is no longer attackers operating beyond the organization's network walls but insiders with malicious intent. Existing approaches help to monitor, detect and prevent malicious activities within an organization's network while ignoring the impact of human behavior on security. In this paper we focus on a user behavior profiling approach that monitors and analyzes user action sequences to detect insider threats. We present an ensemble hybrid machine learning approach using Multi-State Long Short-Term Memory (MSLSTM) and Convolutional Neural Network (CNN) based time-series anomaly detection to detect additive outliers in behavior patterns based on their spatial-temporal behavior features. We find that a multi-state LSTM performs better than a basic single-state LSTM. The proposed method successfully detects insider threats, providing an AUC of 0.9042 on training data and 0.9047 on test data when trained on a publicly available insider threat dataset.
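A minimal sketch of a hybrid detector of this shape: a 1-D convolution extracts local patterns from a user's action sequence and a stacked (multi-state) LSTM models the temporal behavior before an anomaly score is produced. The layer sizes and the single-score head are illustrative assumptions, not the MSLSTM/CNN ensemble's exact configuration.

import torch
import torch.nn as nn

class HybridBehaviorDetector(nn.Module):
    def __init__(self, n_features=12, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU())
        self.lstm = nn.LSTM(32, hidden, num_layers=2, batch_first=True)   # stacked LSTM states
        self.head = nn.Linear(hidden, 1)

    def forward(self, seq):                                   # seq: (batch, time, n_features)
        x = self.conv(seq.transpose(1, 2)).transpose(1, 2)    # back to (batch, time, channels)
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out[:, -1]))           # anomaly probability per sequence

score = HybridBehaviorDetector()(torch.rand(4, 50, 12))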
Widely applied in various fields, deep learning (DL) is becoming a key driving force in industry. Although it has achieved great success in artificial intelligence tasks, like traditional software it has defects, and when it fails, unpredictable accidents and losses can be caused. In this paper, we propose a test-case generation technique based on an adversarial-sample generation algorithm for image classification deep neural networks (DNNs), which can generate a large number of good test cases for the testing of DNNs, especially when existing test cases are insufficient. We briefly introduce our method and implement the framework, conduct experiments on some classic DNN models and datasets, and further evaluate the generated test set using a coverage metric based on the states of the DNN.
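A minimal sketch of the two ingredients mentioned above: adversarial test inputs generated with a gradient-sign step, and a simple neuron-coverage measure over hidden activations. The FGSM step and the coverage threshold stand in for the paper's unspecified generation algorithm and state-based coverage metric.

import torch
import torch.nn.functional as F

def fgsm_test_case(model, x, y, epsilon=0.05):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()   # candidate test input

def neuron_coverage(activations, threshold=0.1):
    """activations: list of (batch, units) hidden-layer outputs collected over the test set."""
    covered = total = 0
    for act in activations:
        fired = (act > threshold).any(dim=0)     # a neuron is covered if any test activates it
        covered += int(fired.sum())
        total += fired.numel()
    return covered / total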