Bibliography
In this paper, inspired by Gatys's recent work, we propose a novel approach that transforms photos into comics using deep convolutional neural networks (CNNs). While Gatys's method, which uses a pre-trained VGG network, generally works well for transferring artistic styles such as paintings from a style image to a content image, it often fails to produce satisfactory results for more minimalist styles such as comics. To address this, we introduce a dedicated comic style CNN, which is trained to classify comic images and photos. This new network is effective in capturing various comic styles and thus helps to produce better comic stylization results. Moreover, even with a grayscale style image, Gatys's method can still produce colored output, which is undesirable for comics. We therefore develop a modified optimization framework that guarantees a grayscale image is synthesized. To avoid converging to poor local minima, we further initialize the output image with the grayscale version of the content image. Various examples show that our method synthesizes better comic images than the state-of-the-art method.
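As a rough illustration of the grayscale-guaranteed synthesis described above, the following sketch (assuming a PyTorch setting, with a toy feature extractor standing in for the pre-trained VGG network and its content/style losses) optimizes a single-channel image that is replicated to RGB only when fed to the feature extractor, so the output cannot acquire color; it is likewise initialized from the grayscale content image.

    import torch
    import torch.nn.functional as F

    def features(x):
        # Stand-in for pre-trained VGG feature extraction (assumption).
        return F.avg_pool2d(x, 4)

    content = torch.rand(1, 3, 64, 64)             # toy content image
    gray = content.mean(dim=1, keepdim=True)       # grayscale initialization
    canvas = gray.clone().requires_grad_(True)     # single-channel variable
    target = features(content).detach()
    opt = torch.optim.Adam([canvas], lr=0.05)

    for step in range(100):
        opt.zero_grad()
        rgb = canvas.repeat(1, 3, 1, 1)            # replicate: stays grayscale
        loss = F.mse_loss(features(rgb), target)   # placeholder for the losses
        loss.backward()
        opt.step()

Because only the one-channel canvas is optimized, the grayscale constraint holds by construction rather than as a soft penalty.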
In this paper, we propose to impose a multiscale contextual loss for image style transfer based on convolutional neural networks (CNNs). In the traditional optimization framework, a new stylized image is synthesized by constraining its high-level CNN features to be similar to those of a content image and its lower-level CNN features to be similar to those of a style image; this, however, tends to lose many details of the content image, producing unpleasant and inconsistent distortions or artifacts. The proposed multiscale contextual loss, named the Haar loss, preserves these lost details by matching features derived from the content image and the synthesized image via the wavelet transform. It enables the synthesized image to better retain the semantic information of the content image. More specifically, the unpleasant distortions can be effectively alleviated while the style is well preserved. In the experiments, we show that incorporating the multiscale contextual loss generates visually more consistent and simultaneously well-stylized images.
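The following is a minimal sketch of a Haar-style multiscale matching loss, assuming 2 × 2 orthonormal Haar filters and a recursive decomposition of the low-pass band; the number of levels and the plain MSE over subband coefficients are our illustrative assumptions, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    # Four 2x2 Haar analysis filters: one low-pass (average) and three details.
    _haar = torch.tensor([[[[0.5, 0.5], [0.5, 0.5]]],
                          [[[0.5, 0.5], [-0.5, -0.5]]],
                          [[[0.5, -0.5], [0.5, -0.5]]],
                          [[[0.5, -0.5], [-0.5, 0.5]]]])

    def haar_loss(x, y, levels=3):
        loss = 0.0
        for _ in range(levels):
            fx = F.conv2d(x, _haar, stride=2)   # four subbands per level
            fy = F.conv2d(y, _haar, stride=2)
            loss = loss + F.mse_loss(fx, fy)    # match coefficients per scale
            x, y = fx[:, :1], fy[:, :1]         # recurse on the low-pass band
        return loss

    content = torch.rand(1, 1, 64, 64)
    synth = torch.rand(1, 1, 64, 64, requires_grad=True)
    print(haar_loss(synth, content))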
We propose a method for transferring an arbitrary style to only a specific object in an image. Style transfer is the process of combining the content of one image and the style of another into a new image. Our results show that the proposed method can realize style transfer to a specific object.
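A minimal sketch of the idea, assuming a segmentation mask for the target object is available and using a placeholder stylization routine: the stylized image is composited back so that only the masked object receives the new style.

    import torch

    def stylize(image):
        # Placeholder for any full-image style transfer routine (assumption).
        return image.flip(-1)

    image = torch.rand(3, 64, 64)
    mask = torch.zeros(1, 64, 64)
    mask[:, 16:48, 16:48] = 1.0                   # hypothetical object region

    styled = stylize(image)
    result = mask * styled + (1 - mask) * image   # style only inside the mask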
Convolutional neural network (CNN) based methods have shown significant performance gains on the problem of visual tracking in recent years. Owing to the many uncertain changes objects undergo online, such as abrupt motion, background clutter and large deformation, visual tracking remains a challenging task. We propose a novel algorithm, namely Deep Location-Specific Tracking, which decomposes the tracking problem into a localization task and a classification task and trains an individual network for each. The localization network exploits the information in the current frame and provides a specific location to improve the probability of successful tracking, while the classification network finds the target among many examples generated around the target location in the previous frame, as well as the one estimated by the localization network in the current frame. CNN-based trackers often have a massive number of trainable parameters and are prone to over-fitting to particular object states, leading to lower precision or tracking drift. We address this problem by learning a classification network based on 1 × 1 convolution and global average pooling. Extensive experimental results on popular benchmark datasets show that the proposed tracker achieves competitive results without using additional tracking videos for fine-tuning. The code is available at https://github.com/ZjjConan/DLST
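The parameter-light classification head based on 1 × 1 convolution and global average pooling might look like the following sketch; the channel sizes are illustrative, not taken from the paper.

    import torch
    import torch.nn as nn

    head = nn.Sequential(
        nn.Conv2d(256, 64, kernel_size=1),   # 1x1 conv: mixes channels only
        nn.ReLU(inplace=True),
        nn.Conv2d(64, 2, kernel_size=1),     # 2 classes: target vs background
        nn.AdaptiveAvgPool2d(1),             # global average pooling
        nn.Flatten(),
    )

    feature_map = torch.rand(8, 256, 7, 7)   # features of 8 candidate regions
    scores = head(feature_map)               # (8, 2) target/background scores

Replacing fully connected layers with 1 × 1 convolutions and global average pooling sharply reduces the number of trainable parameters, which is the over-fitting remedy the abstract describes.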
This paper is the first work to perform spatio-temporal mapping of human activity using the visual content of geo-tagged videos. We utilize a recent deep-learning-based video analysis framework, termed hidden two-stream networks, to recognize a range of activities in YouTube videos. This framework is efficient and can run in real time or faster, which is important for recognizing events as they occur in streaming video and for reducing latency when analyzing already captured video. This is, in turn, important for using video in smart-city applications. We perform a series of experiments to show that our approach is able to map activities both spatially and temporally.
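As a toy illustration of the mapping step: once each geo-tagged video has a recognized activity label, the records can be binned by spatial cell and time. The records and bin sizes below are made-up placeholders.

    from collections import Counter

    records = [  # (latitude, longitude, hour-of-day, recognized activity)
        (40.75, -73.99, 9, "walking"),
        (40.75, -73.98, 9, "cycling"),
        (40.76, -73.99, 18, "walking"),
    ]

    grid = Counter()
    for lat, lon, hour, activity in records:
        cell = (round(lat, 2), round(lon, 2))   # coarse spatial bin
        grid[(cell, hour, activity)] += 1       # spatio-temporal histogram

    for key, count in grid.items():
        print(key, count)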
To overcome the limitations of single-mode biometric identification technology, a bimodal biometric verification system based on deep learning is proposed in this paper. A modified CNN architecture is used to generate better facial features for bimodal fusion. The obtained facial features and the acoustic features extracted by the acoustic feature extraction model are fused at the feature layer to form the fusion feature. The fusion feature obtained in this way is used to train a neural network that identifies the target person possessing these features. Experimental results, obtained on a bimodal database assembled from TED-LIUM and CASIA-WebFace, demonstrate the superiority and high performance of our bimodal biometric system compared with single-mode biometrics for identity authentication. Compared with using facial or acoustic features alone, the classification accuracy of the fusion feature obtained by our method is noticeably higher.
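A minimal sketch of the feature-layer fusion, assuming the facial and acoustic embeddings have already been extracted; the dimensions, classifier shape, and number of identities are illustrative assumptions.

    import torch
    import torch.nn as nn

    face_feat = torch.rand(1, 512)            # from the modified face CNN
    voice_feat = torch.rand(1, 128)           # from the acoustic model

    fusion = torch.cat([face_feat, voice_feat], dim=1)   # feature-layer fusion

    classifier = nn.Sequential(               # identifies the target person
        nn.Linear(512 + 128, 256),
        nn.ReLU(),
        nn.Linear(256, 100),                  # e.g. 100 enrolled identities
    )
    logits = classifier(fusion)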
Stochastic computing (SC) is an alternative design paradigm particularly useful for applications where cost is critical. SC has been applied to neural networks, as neural networks are known for their high computational complexity. However, previous work in this area has critical limitations, such as the assumption of a fully parallel architecture, which prevent it from being applicable to recent networks such as convolutional neural networks, or ConvNets. This paper presents the first SC architecture for ConvNets and shows its feasibility, with detailed analyses of implementation overheads. Our SC-ConvNet is a hybrid between SC and conventional binary design, a marked difference from earlier SC-based neural networks. Though this might seem like a compromise, it is a novel feature driven by the need to support modern ConvNets at scale, which commonly have many large layers. Our proposed architecture also features hybrid layer composition, which helps achieve very high recognition accuracy. Our detailed evaluation results, involving functional simulation and RTL synthesis, suggest that SC-ConvNets are indeed competitive with conventional binary designs, even without considering the inherent error resilience of SC.
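The core SC primitive can be illustrated in a few lines: under unipolar encoding, a value in [0, 1] becomes a random bitstream whose fraction of ones equals the value, and multiplication reduces to a bitwise AND. This toy sketch is a didactic stand-in, not the paper's hybrid architecture; stream length trades accuracy for hardware cost.

    import random

    def to_stream(value, length=4096):
        # Unipolar encoding: P(bit = 1) equals the represented value.
        return [1 if random.random() < value else 0 for _ in range(length)]

    def sc_multiply(a, b, length=4096):
        sa, sb = to_stream(a, length), to_stream(b, length)
        product_bits = [x & y for x, y in zip(sa, sb)]   # one AND gate per bit
        return sum(product_bits) / length

    print(sc_multiply(0.5, 0.6))   # ~0.30, with stochastic error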
State-of-the-art convolutional neural networks (ConvNets) are now able to achieve near-human performance on a wide range of classification tasks. Unfortunately, current hardware implementations of ConvNets consume substantial memory power, prohibiting deployment in low-power embedded systems and IoE platforms. One method of reducing memory power is to exploit the error resilience of ConvNets and accept bit errors under reduced supply voltages. In this paper, we extensively study the effectiveness of this idea and show that further savings are possible by injecting bit errors during ConvNet training. Measurements on an 8KB SRAM in 28nm UTBB FD-SOI CMOS demonstrate a supply voltage reduction of 310mV, which results in up to 5.4× leakage power reduction and up to 2.9× memory access power reduction at 99% of floating-point classification accuracy, with no additional hardware cost. To our knowledge, this is the first silicon-validated study on the effect of bit errors in ConvNets.
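A hypothetical sketch of training-time bit-error injection, assuming 8-bit quantized weights stored in SRAM and a per-bit flip probability corresponding to the reduced supply voltage; both the quantization format and the error rate are our illustrative assumptions.

    import numpy as np

    def inject_bit_errors(weights_q, error_rate, rng=np.random.default_rng(0)):
        # weights_q: uint8-quantized weights as they would sit in SRAM.
        flips = rng.random((weights_q.size, 8)) < error_rate  # per-bit errors
        masks = (flips * (1 << np.arange(8))).sum(axis=1).astype(np.uint8)
        return (weights_q.flatten() ^ masks).reshape(weights_q.shape)

    w = np.array([[200, 15], [128, 7]], dtype=np.uint8)
    print(inject_bit_errors(w, error_rate=1e-2))

Applying such corruption to the weights each training step lets the network adapt to the error statistics it will see at the lowered voltage.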
Hacker forums and other social platforms may contain vital information about cyber security threats, but manually extracting relevant threat information from these sources is a time-consuming and error-prone process that requires a significant allocation of resources. In this paper, we explore the potential of machine learning methods to rapidly sift through hacker forums for relevant threat intelligence. Using text data from a real hacker forum, we compared the text classification performance of convolutional neural network methods against more traditional machine learning approaches. We found that traditional machine learning methods, such as support vector machines, can yield performance on par with convolutional neural network algorithms.
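A sketch of the kind of traditional baseline involved in the comparison, using TF-IDF features and a linear SVM; the toy posts and labels below are made up, whereas the study uses posts from a real hacker forum.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    posts = ["selling fresh credit card dumps",
             "how do I configure my router firewall",
             "new exploit kit for sale, PM me",
             "best linux distro for beginners?"]
    labels = [1, 0, 1, 0]                    # 1 = threat-relevant, 0 = benign

    # TF-IDF word/bigram features feeding a linear SVM classifier.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    model.fit(posts, labels)
    print(model.predict(["zero-day exploit for sale"]))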
Although connecting a microgrid to modern power systems can alleviate issues arising from a large penetration of distributed generation, it can also cause severe voltage instability problems. This paper presents an online method for analyzing voltage security in a microgrid using convolutional neural networks. To transform the traditional voltage stability problem into a classification problem, three steps are taken: 1) creating data sets from offline simulation results; 2) training the model with dimensionality reduction and convolutional neural networks; 3) testing on the online data set and evaluating performance. A case study on the modified IEEE 14-bus system shows that the accuracy of the proposed method is 6% higher than that of a back-propagation neural network, and that it outperforms decision trees and support vector machines. The proposed algorithm has great potential for future applications.
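The three-step pipeline might be prototyped as in the following sketch, with random data standing in for the offline simulation results and an off-the-shelf MLP standing in for the paper's CNN; all components here are our assumptions.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X = np.random.rand(500, 28)              # step 1: offline results (toy)
    y = (X.mean(axis=1) > 0.5).astype(int)   # 1 = voltage-secure (toy label)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    pca = PCA(n_components=10).fit(X_train)  # step 2: dimensionality reduction
    clf = MLPClassifier(max_iter=500).fit(pca.transform(X_train), y_train)
    print(clf.score(pca.transform(X_test), y_test))  # step 3: online testing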
Recognizing Families In the Wild (RFIW) is a large-scale, multi-track automatic kinship recognition evaluation, supporting both kinship verification and family classification on scales much larger than ever before. It was organized as a Data Challenge Workshop hosted in conjunction with ACM Multimedia 2017 and was made possible by the largest image collection supporting kin-based vision tasks to date. In this manuscript, we summarize the evaluation protocols, the progress made, the technical background and performance ratings of the algorithms used, and promising directions for both researchers and engineers to pursue next in this line of work.
There has been growing interest in using convolutional neural networks (CNNs) in the fields of image forensics and steganalysis, and some promising results have been reported recently. These works mainly focus on the architectural design of CNNs; usually, a single CNN model is trained and then tested in experiments. It is known that neural networks, including CNNs, are well suited to forming ensembles. From this perspective, in this paper we employ CNNs as base learners and test several different ensemble strategies. In our study, a recently proposed CNN architecture is first adopted to build a group of CNNs, each trained on a random subsample of the training dataset. The output probabilities, or some intermediate feature representations, of each CNN are then extracted from the original data and pooled together to form new features ready for the second level of classification. To make the best use of the trained CNN models, we partially recover the information lost to spatial subsampling in the pooling layers when forming feature vectors. Performance of the ensemble methods is evaluated on BOSSbase by detecting S-UNIWARD at a 0.4 bpp embedding rate. The results indicate that both recovering the lost information and learning from intermediate representations in CNNs, instead of output probabilities, lead to performance improvements.
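Schematically, the ensemble strategy can be sketched as follows, with random arrays standing in for the base CNNs' output probabilities and a logistic regression as the second-level classifier; both stand-ins are our assumptions, not the paper's exact choices.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    n_images, n_cnns = 200, 5
    rng = np.random.default_rng(0)
    # Stand-in for each base CNN's probability of "stego" per image; in the
    # paper, each base CNN is trained on a random subsample of the data.
    base_probs = rng.random((n_images, n_cnns))
    labels = rng.integers(0, 2, n_images)    # toy cover/stego labels

    # Pool the base learners' outputs into second-level feature vectors.
    features = np.column_stack([base_probs,
                                base_probs.mean(axis=1),
                                base_probs.max(axis=1)])
    meta = LogisticRegression().fit(features, labels)
    print(meta.score(features, labels))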