Biblio

Filters: Keyword is overfitting problem
2021-02-01
Wu, L., Chen, X., Meng, L., Meng, X.  2020.  Multitask Adversarial Learning for Chinese Font Style Transfer. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.
Style transfer between Chinese fonts is challenging due to both the complexity of Chinese characters and the significant differences between fonts. Existing algorithms for this task typically learn a mapping between the reference and target fonts for each character; this mapping is then used to generate the characters that do not exist in the target font. However, the characters available for training are unlikely to cover all fine-grained parts of the missing characters, leading to the overfitting problem. As a result, the generated characters of the target font may suffer from incomplete or even missing radicals and dirty dots. To address this problem, this paper presents a multi-task adversarial learning approach, termed MTfontGAN, to generate more vivid Chinese characters. MTfontGAN learns to transfer a reference font to multiple target fonts simultaneously. An alignment is imposed on the encoders of the different tasks to make them focus on the important parts of the characters for general style transfer. Such cross-task interactions at the feature level effectively improve the generalization capability of MTfontGAN. The performance of MTfontGAN is evaluated on three Chinese font datasets. Experimental results show that MTfontGAN outperforms state-of-the-art algorithms in a single-task setting. More importantly, increasing the number of tasks leads to better performance on all of them.
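A minimal sketch of the multi-task idea described above, assuming PyTorch, toy 64x64 glyph tensors, and two target fonts. The names (TaskEncoder, Decoder, alignment_loss), layer sizes, and loss weights are illustrative assumptions, not the authors' MTfontGAN architecture, and the adversarial (discriminator) terms are omitted for brevity.

```python
# Illustrative sketch only: per-task encoders whose features are aligned,
# plus per-task decoders that render the glyph in each target font.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskEncoder(nn.Module):
    """Per-task encoder mapping a reference-font glyph to a feature map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-task decoder generating the glyph in one target font."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh(),   # 32 -> 64
        )
    def forward(self, f):
        return self.net(f)

def alignment_loss(features):
    """Pull the encoders of different tasks toward shared features (L2 between feature maps)."""
    anchor = features[0]
    return sum(F.mse_loss(f, anchor) for f in features[1:]) / max(len(features) - 1, 1)

# Two target fonts -> two tasks sharing the same reference-font input.
encoders = nn.ModuleList([TaskEncoder() for _ in range(2)])
decoders = nn.ModuleList([Decoder() for _ in range(2)])
ref = torch.randn(8, 1, 64, 64)                           # batch of reference-font glyphs
targets = [torch.randn(8, 1, 64, 64) for _ in range(2)]   # same glyphs in each target font

feats = [enc(ref) for enc in encoders]
recon = sum(F.l1_loss(dec(f), t) for dec, f, t in zip(decoders, feats, targets))
loss = recon + 0.1 * alignment_loss(feats)                # adversarial terms omitted
loss.backward()
print(float(loss))
```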
2019-12-30
Taha, Bilal, Hatzinakos, Dimitrios.  2019.  Emotion Recognition from 2D Facial Expressions. 2019 IEEE Canadian Conference of Electrical and Computer Engineering (CCECE). :1–4.
This work proposes an approach to find and learn informative representations from 2-dimensional gray-level images for facial expression recognition. The learned features are obtained from a designed convolutional neural network (CNN). The developed CNN learns features from the images in a highly efficient manner by cascading different layers together. The developed model is computationally efficient, since it does not comprise a large number of layers, and at the same time it takes the overfitting problem into consideration. The outcomes from the developed CNN are compared to handcrafted features spanning texture and shape. The experiments conducted on the Bosphorus database show that the developed CNN model outperforms the handcrafted features when coupled with a Support Vector Machine (SVM) classifier.
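A minimal sketch of a compact CNN in the spirit of the abstract, assuming PyTorch, 64x64 gray-level face crops, and 7 expression classes. The ExpressionCNN name, the layer sizes, and the use of dropout as the overfitting countermeasure are illustrative assumptions, not the paper's exact architecture.

```python
# Illustrative sketch only: a small CNN with dropout to limit overfitting.
import torch
import torch.nn as nn

class ExpressionCNN(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),                     # regularization against overfitting
            nn.Linear(32 * 16 * 16, num_classes),
        )
    def forward(self, x):
        return self.classifier(self.features(x))

model = ExpressionCNN()
x = torch.randn(4, 1, 64, 64)                    # batch of gray-level face crops
logits = model(x)
print(logits.shape)                              # torch.Size([4, 7])
```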