Biblio
Filters: Author is Whiteside, D.
An Approach to Embedding a Style Transfer Model into a Mobile APP. 2020 International Conference on Big Data, Artificial Intelligence and Internet of Things Engineering (ICBAIE), pp. 307–316.
The prevalence of photo-processing apps reflects the demand for picture editing. Style transfer, an application of convolutional neural networks, has been investigated in depth, and ample material exists for implementing it on the PC platform. However, few approaches address deploying a style transfer model on mobile devices while meeting the requirements of mobile users. The traditional style transfer model takes hours to process an image; therefore, based on the Perceptual Losses algorithm [1], we created a feed-forward neural network for each style, reducing the processing time to a few seconds. The training data were generated from a pre-trained convolutional neural network, VGG-19. The resulting algorithm runs in roughly a thousandth of the time and generates output similar to the original method. Furthermore, we optimized the model and deployed it with the TensorFlow Mobile library. We froze the model and used a bitmap to scale the inputs to 720×720, then reverted to the original resolution. The reverting step may introduce some blur, but this can be regarded as an artistic feature. The generated images are of reliable quality, and the waiting time is independent of the content and pattern of the input images; the main factor influencing processing time is the input resolution. The average waiting time of our model on a mobile phone, the HUAWEI P20 Pro, is under 2 seconds for 720p images and around 2.8 seconds for 1080p images, roughly ten times slower than on a PC GPU, the Tesla T40. The performance difference depends on the architecture of the model.
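The deployment path the abstract describes (freeze the trained per-style network, downscale the input to 720×720, stylize, and scale the result back up) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: the frozen-graph filename, the tensor names input_image:0 and stylized_output:0, and the assumption that the network consumes and produces float32 RGB values in [0, 255] are all hypothetical placeholders.

import numpy as np
import tensorflow.compat.v1 as tf
from PIL import Image

tf.disable_eager_execution()

GRAPH_PATH = "style_transfer_frozen.pb"  # hypothetical frozen per-style model
INPUT_TENSOR = "input_image:0"           # hypothetical tensor names
OUTPUT_TENSOR = "stylized_output:0"
MODEL_SIZE = (720, 720)                  # fixed input size from the abstract

def load_frozen_graph(path):
    # Load a frozen (variables-converted-to-constants) GraphDef for inference.
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(path, "rb") as f:
        graph_def.ParseFromString(f.read())
    graph = tf.Graph()
    with graph.as_default():
        tf.import_graph_def(graph_def, name="")
    return graph

def stylize(image_path, out_path):
    original = Image.open(image_path).convert("RGB")
    orig_size = original.size
    # Downscale to the model's fixed 720x720 input, as the abstract describes.
    small = original.resize(MODEL_SIZE, Image.BILINEAR)
    batch = np.expand_dims(np.asarray(small, dtype=np.float32), axis=0)

    graph = load_frozen_graph(GRAPH_PATH)
    with tf.Session(graph=graph) as sess:
        out = sess.run(OUTPUT_TENSOR, feed_dict={INPUT_TENSOR: batch})

    stylized = Image.fromarray(np.clip(out[0], 0, 255).astype(np.uint8))
    # Revert to the original resolution; the upscale may blur slightly,
    # which the authors treat as an artistic feature.
    stylized.resize(orig_size, Image.BILINEAR).save(out_path)

stylize("content.jpg", "stylized.jpg")

Feeding the network a fixed 720×720 input keeps the stylization cost itself constant, which is consistent with the abstract's observation that the waiting time depends on input resolution rather than on the content or pattern of the image.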