Bibliography
Coherent rendering in augmented reality deals with synthesizing virtual content that blends seamlessly with the real scene. Unfortunately, capturing or modeling every real aspect in the virtual rendering process is often infeasible or too expensive. We present a post-processing method that improves the look of rendered overlays in a dental virtual try-on application. We combine the original frame and the default rendered frame in an autoencoder neural network to obtain a more natural output, inspired by artistic style transfer research. Specifically, we apply the original frame as style to the rendered frame as content, repeating the process with each new pair of frames. Our method requires only a single forward pass, our shallow architecture ensures fast execution, and our internal feedback loop inherently enforces temporal consistency.
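As a rough illustration of the kind of network described, here is a minimal PyTorch sketch of a shallow autoencoder that fuses the original and rendered frames in a single forward pass; the layer sizes and the channel-wise fusion are assumptions, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class RefinementAutoencoder(nn.Module):
    """Shallow autoencoder: original + rendered frame in, refined frame out.
    Illustrative sketch only; channel counts and depth are assumptions."""
    def __init__(self, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, feat, 3, stride=2, padding=1), nn.ReLU(),  # two RGB frames stacked
            nn.Conv2d(feat, feat * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat * 2, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, original, rendered):
        x = torch.cat([original, rendered], dim=1)  # channel-wise fusion of style/content
        return self.decoder(self.encoder(x))

# One forward pass per frame pair; feeding the previous output back in as the
# next frame's "rendered" input approximates the feedback loop the abstract mentions.
model = RefinementAutoencoder()
out = model(torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128))
print(out.shape)  # torch.Size([1, 3, 128, 128])
```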
We redefine multimodality and introduce a simple approach to multimodal and arbitrary style transfer. Conventionally, style transfer methods are limited to synthesizing a deterministic output from a single style, and no prior work can generate multiple images with varied details, i.e., multimodality, given a single style. In this work, we explore a way to achieve multimodal and arbitrary style transfer by injecting noise into a unimodal method. This novel approach requires no trainable parameters and can be readily applied to any unimodal style transfer method in the literature that has a separate style-encoding sub-network. Experimental results show that, while the method can transfer an image to multiple domains in varied ways, its image quality remains highly competitive with contemporary style transfer models.
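A minimal sketch of the noise-injection idea, applied here to a parameter-free AdaIN-style feature transform; the choice of transform and the noise scale `sigma` are assumptions, and the paper's method may differ:

```python
import torch

def adain_with_noise(content_feat, style_feat, sigma=0.1, eps=1e-5):
    """AdaIN-style transform with Gaussian noise injected into the style statistics.

    Perturbing the (parameter-free) style mean/std yields a different plausible
    stylization on every call -- one way to make a unimodal method multimodal.
    Illustrative sketch, not the paper's exact formulation.
    """
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps

    # sigma controls the diversity of the outputs.
    s_mean = s_mean + sigma * torch.randn_like(s_mean)
    s_std = s_std + sigma * torch.randn_like(s_std)

    normalized = (content_feat - c_mean) / c_std
    return normalized * s_std + s_mean

# Two calls with the same inputs give two distinct stylizations.
c, s = torch.rand(1, 64, 32, 32), torch.rand(1, 64, 32, 32)
print(torch.allclose(adain_with_noise(c, s), adain_with_noise(c, s)))  # False
```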
Most of the notable artworks of all time were hand drawn by great artists. Now, with advances in image processing and vast computational power, very sophisticated synthesised artworks are being produced. Since the mid-1990s, computer graphics engineers have devised algorithms to produce digital paintings, but the results were not visually appealing. Recently, neural networks have been used for this task, with results unlike anything seen before. One such algorithm is the neural style transfer algorithm, which imparts the pattern of one image onto another, producing marvellous pieces of art. This paper focuses on the roles of the various parameters involved in the neural style transfer algorithm. It also presents an extensive analysis of how these parameters influence the output in terms of time, performance and the quality of the style-transferred image. A concrete comparison is drawn on the basis of different time and performance metrics. Finally, optimal values for the discussed parameters are suggested.
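The parameters studied in work of this kind typically include the content and style weights of the Gatys-style objective; a minimal PyTorch sketch of that weighted loss follows, where the values of `alpha` and `beta` are illustrative assumptions rather than the paper's recommended settings:

```python
import torch

def gram(feat):
    """Gram matrix of a feature map, used for the style term."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_transfer_loss(gen_feats, content_feats, style_feats, alpha=1.0, beta=1e3):
    """Weighted Gatys-style objective over lists of feature maps.

    alpha (content weight) and beta (style weight) are exactly the kind of
    knobs such a parameter study analyzes: their ratio trades faithfulness
    to the content image against the strength of the transferred style.
    """
    content_loss = sum(torch.mean((g - c) ** 2)
                       for g, c in zip(gen_feats, content_feats))
    style_loss = sum(torch.mean((gram(g) - gram(s)) ** 2)
                     for g, s in zip(gen_feats, style_feats))
    return alpha * content_loss + beta * style_loss

feats = [torch.rand(1, 64, 32, 32) for _ in range(3)]
print(style_transfer_loss(feats, feats, feats))  # 0 when generated == content == style
```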
As an emerging paradigm for energy-efficient design, approximate computing can reduce power consumption by simplifying logic circuits. Although approximate computing introduces calculation errors, their impact on the final results can be negligible in error-resilient applications such as Convolutional Neural Networks (CNNs). Approximate computing has therefore been applied to CNNs to reduce their high demand for computing resources and energy. Going beyond traditional methods such as reduced data precision, this paper investigates the effect of approximate computing on the accuracy and power consumption of CNNs. To optimize the approximate computing applied to CNNs, we propose a method for quantifying the error resilience of each neuron through theoretical analysis, and we observe that error resilience varies widely across neurons. On the basis of this quantified error resilience, we propose dynamic adaptation of the approximate bit-width together with a corresponding configurable adder to fully exploit the error resilience of CNNs. Experimental results show that the proposed method further reduces power consumption while maintaining high accuracy. By adopting the optimal approximate bit-width for each layer, found by our proposed algorithm, dynamic bit-width adaptation reduces power consumption by more than 30% with less than 1% accuracy loss for LeNet-5.
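As one concrete instance of a configurable approximate adder, here is a Python sketch of a lower-part OR adder, a well-known approximate-adder design in which the approximate bit-width `k` is the tunable knob; this is a generic illustration, not necessarily the circuit proposed in the paper:

```python
def approx_add(a, b, k):
    """Approximate integer adder: exact on the high bits, cheap on the low k bits.

    The low k bits are combined with bitwise OR (no carry chain), the classic
    lower-part OR adder simplification; k is the configurable approximate
    bit-width that would be tuned per layer from its error resilience.
    Illustrative sketch, not the paper's proposed adder.
    """
    mask = (1 << k) - 1
    high = (a & ~mask) + (b & ~mask)   # exact addition of the upper bits
    low = (a | b) & mask               # carry-free approximation of the lower bits
    return high + low

# Larger k removes more adder logic but introduces more error.
for k in (0, 2, 4, 8):
    print(k, approx_add(1234, 5678, k), 1234 + 5678)
```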
To be prepared against cyberattacks, most organizations resort to security information and event management systems to monitor their infrastructures. These systems depend on the timeliness and relevance of the latest updates, patches and threats provided by cyberthreat intelligence feeds. Open-source intelligence platforms, notably social media networks such as Twitter, can aggregate a vast number of cybersecurity-related sources. To process such information streams, we require scalable and efficient tools capable of identifying and summarizing relevant information for specified assets. This paper presents the processing pipeline of a novel tool that uses deep neural networks to process cybersecurity information received from Twitter. A convolutional neural network identifies tweets containing security-related information relevant to assets in an IT infrastructure. A bidirectional long short-term memory network then extracts named entities from these tweets to form a security alert or to fill an indicator of compromise. The proposed pipeline achieves an average 94% true positive rate and 91% true negative rate for the classification task, and an average F1-score of 92% for the named entity recognition task, across three case-study infrastructures.
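A minimal PyTorch sketch of the two-stage pipeline shape (a CNN filter followed by a BiLSTM tagger); vocabulary size, dimensions and the tag set are assumptions, not the authors' configuration:

```python
import torch
import torch.nn as nn

class TweetClassifier(nn.Module):
    """Stage 1 sketch: 1D CNN over word embeddings -- is this tweet security-relevant?"""
    def __init__(self, vocab=20000, emb=128, filters=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, filters, kernel_size=3, padding=1)
        self.fc = nn.Linear(filters, 2)

    def forward(self, tokens):                          # tokens: (batch, seq_len)
        x = self.emb(tokens).transpose(1, 2)            # (batch, emb, seq_len)
        x = torch.relu(self.conv(x)).max(dim=2).values  # global max pooling
        return self.fc(x)

class EntityTagger(nn.Module):
    """Stage 2 sketch: BiLSTM tags each token (tag set is an assumption)."""
    def __init__(self, vocab=20000, emb=128, hidden=64, n_tags=9):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, n_tags)

    def forward(self, tokens):
        out, _ = self.lstm(self.emb(tokens))
        return self.fc(out)                             # per-token tag scores

tokens = torch.randint(0, 20000, (4, 30))
relevant = TweetClassifier()(tokens).argmax(dim=1)  # stage 1: filter tweets
tags = EntityTagger()(tokens).argmax(dim=2)         # stage 2: extract entities
```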
Network attack is a significant security issue for modern society. From small mobile devices to large cloud platforms, almost all the computing products used in our daily life are networked and potentially under the threat of network intrusion. With the fast-growing number of network users, network intrusions are becoming more frequent, volatile and advanced. Capturing intrusions in time across such large-scale networks is critical and very challenging. To this end, machine learning (AI) based network intrusion detection (NID) has drawn increasing attention in recent years for its intelligent capability. Compared with traditional signature-based approaches, AI-based solutions are more capable of detecting variants of advanced network attacks. However, the high detection rate achieved by existing designs is usually accompanied by a high rate of false alarms, which may significantly discount the overall effectiveness of the intrusion detection system. In this paper, we consider the existence of spatial and temporal features in network traffic data and propose a hierarchical CNN+RNN neural network, LuNet. In LuNet, the convolutional neural network (CNN) and the recurrent neural network (RNN) learn the input traffic data in sync at gradually increasing granularity, so that both spatial and temporal features of the data can be effectively extracted. Our experiments on two network traffic datasets show that, compared with state-of-the-art network intrusion detection techniques, LuNet not only offers a high level of detection capability but also has a much lower rate of false alarms.
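A minimal PyTorch sketch of the hierarchical CNN+RNN layering with gradually increasing width; the level widths, the GRU choice and the pooling are assumptions rather than the published LuNet configuration:

```python
import torch
import torch.nn as nn

class CnnRnnLevel(nn.Module):
    """One level sketch: a CNN extracts spatial features, a GRU temporal ones."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1)
        self.gru = nn.GRU(out_ch, out_ch, batch_first=True)

    def forward(self, x):                    # x: (batch, channels, features)
        h = torch.relu(self.conv(x)).transpose(1, 2)
        out, _ = self.gru(h)                 # (batch, features, out_ch)
        return out.transpose(1, 2)

class LuNetSketch(nn.Module):
    """Stacked levels with growing width, i.e. gradually finer granularity."""
    def __init__(self, widths=(64, 128, 256), n_classes=5):
        super().__init__()
        dims = (1,) + widths
        self.levels = nn.Sequential(*[CnnRnnLevel(dims[i], dims[i + 1])
                                      for i in range(len(widths))])
        self.head = nn.Linear(widths[-1], n_classes)

    def forward(self, x):                    # x: (batch, 1, n_features)
        h = self.levels(x).mean(dim=2)       # pool over the feature axis
        return self.head(h)

logits = LuNetSketch()(torch.rand(8, 1, 41))  # e.g. 41 NSL-KDD-style features
```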
Supervisory Control and Data Acquisition (SCADA) networks are widely deployed in modern industrial control systems (ICSs) such as energy-delivery systems. As an increasing number of field devices and computing nodes get interconnected, network-based cyber attacks have become major threats to ICS network infrastructure. Field devices and computing nodes in ICSs are subject to both conventional network attacks and specialized attacks purposely crafted for SCADA network protocols. In this paper, we propose a deep-learning-based network intrusion detection system for SCADA networks to protect ICSs from both conventional and SCADA-specific network-based attacks. Instead of relying on hand-crafted features for individual network packets or flows, our approach employs a convolutional neural network (CNN) to characterize salient temporal patterns of SCADA traffic and identify time windows where network attacks are present. In addition, we design a re-training scheme to handle previously unseen network attack instances, enabling SCADA system operators to extend our neural network models with site-specific network attack traces. Our results on realistic SCADA traffic data sets show that the proposed deep-learning-based approach is well suited for network intrusion detection in SCADA systems, achieving high detection accuracy and providing the capability to handle newly emerged threats.
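A minimal PyTorch sketch of the window-level CNN and the re-training idea; feature counts, window length and hyperparameters are assumptions, and `loader` stands in for a dataset of site-specific attack traces:

```python
import torch
import torch.nn as nn

class ScadaWindowCNN(nn.Module):
    """Sketch: 1D CNN that classifies a time window of per-packet features
    as benign or attack (sizes here are illustrative assumptions)."""
    def __init__(self, n_feats=16, n_classes=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(n_feats, 32, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                     # x: (batch, n_feats, window_len)
        return self.head(self.body(x).squeeze(2))

def retrain_on_new_attacks(model, loader, epochs=3, lr=1e-4):
    """Sketch of the re-training idea: fine-tune the deployed model on
    site-specific traces of previously unseen attacks."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for windows, labels in loader:
            opt.zero_grad()
            loss_fn(model(windows), labels).backward()
            opt.step()
    return model

logits = ScadaWindowCNN()(torch.rand(4, 16, 100))  # 100-packet window, 16 features
```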
This article shows the possibility of detecting hidden information in images. It takes a blind approach to steganalysis, in which both the basic data about the image and the method used to hide the information are unknown. The architecture of the convolutional neural network makes it possible to detect small changes in the image with high probability.
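Steganalysis CNNs commonly begin with a fixed high-pass filter that suppresses image content so the faint embedding noise dominates; here is a minimal PyTorch sketch along those lines, where the KV-kernel front end and the layer sizes are assumptions, not necessarily this article's architecture:

```python
import torch
import torch.nn as nn

# Fixed high-pass "KV" kernel, a common steganalysis front end that
# suppresses image content and exposes the faint embedding noise.
KV = torch.tensor([[-1,  2,  -2,  2, -1],
                   [ 2, -6,   8, -6,  2],
                   [-2,  8, -12,  8, -2],
                   [ 2, -6,   8, -6,  2],
                   [-1,  2,  -2,  2, -1]], dtype=torch.float32) / 12.0

class SteganalysisCNN(nn.Module):
    """Sketch of a cover-vs-stego classifier; depths/widths are assumptions."""
    def __init__(self):
        super().__init__()
        self.hpf = nn.Conv2d(1, 1, 5, padding=2, bias=False)
        self.hpf.weight.data = KV.view(1, 1, 5, 5)
        self.hpf.weight.requires_grad = False     # keep the filter fixed
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(), nn.AvgPool2d(4),
            nn.Conv2d(8, 16, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 2)              # cover vs. stego

    def forward(self, x):                         # x: (batch, 1, H, W) grayscale
        h = self.features(self.hpf(x))
        return self.head(h.flatten(1))

logits = SteganalysisCNN()(torch.rand(2, 1, 256, 256))
```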