Filters: Keyword is multi-view learning
2022-09-09
Gonçalves, Luís, Vimieiro, Renato.  2021.  Approaching authorship attribution as a multi-view supervised learning task. 2021 International Joint Conference on Neural Networks (IJCNN). :1–8.
Authorship attribution is the problem of identifying the author of a text based on the author's writing style. It is usually assumed that writing style contains traits inaccessible to conscious manipulation and can thus be safely used to identify the author of a text. Several style markers have been proposed in the literature; nevertheless, there is still no consensus on which best represent the choices of authors. Here we take an agnostic viewpoint in the dispute over the best set of features to represent an author's writing style. We instead investigate how different sources of information may unveil different aspects of an author's style, complementing each other to improve the overall process of authorship attribution. For this, we model authorship attribution as a multi-view learning task. We assess the effectiveness of our proposal by applying it to a set of well-studied corpora. We compare the performance of our proposal to state-of-the-art approaches for authorship attribution. We thoroughly analyze how the multi-view approach improves on methods that use a single data source. We confirm that our approach improves both the accuracy and the consistency of the methods and discuss how these improvements are beneficial for linguists and domain specialists.
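The core idea of multi-view learning described in the abstract can be illustrated with a minimal late-fusion sketch: train one simple classifier per feature view (e.g., character n-gram counts versus function-word frequencies) and combine their predictions by majority vote. This is not the paper's method; the nearest-centroid classifier, the feature views, and the toy data below are all illustrative assumptions.

```python
# Hedged sketch of multi-view late fusion (not the paper's algorithm).
# Each "view" is a different feature representation of the same documents;
# the feature names in comments are hypothetical examples.

def nearest_centroid_fit(X, y):
    """Compute one centroid per class for a single view's feature vectors."""
    sums, counts = {}, {}
    for x, label in zip(X, y):
        acc = sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {c: [v / counts[c] for v in acc] for c, acc in sums.items()}

def nearest_centroid_predict(centroids, x):
    """Assign x to the class whose centroid is closest (squared distance)."""
    def dist(a, b):
        return sum((u - w) ** 2 for u, w in zip(a, b))
    return min(centroids, key=lambda c: dist(centroids[c], x))

def multiview_predict(models, views_of_sample):
    """Late fusion: majority vote over per-view predictions."""
    votes = [nearest_centroid_predict(m, x) for m, x in zip(models, views_of_sample)]
    return max(set(votes), key=votes.count)

# Toy corpus: two authors "A" and "B", two views per document.
view1 = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]  # e.g., char n-grams
view2 = [[0.8, 0.2], [1.0, 0.0], [0.2, 0.8], [0.0, 1.0]]  # e.g., function words
labels = ["A", "A", "B", "B"]

models = [nearest_centroid_fit(view1, labels), nearest_centroid_fit(view2, labels)]
print(multiview_predict(models, [[0.95, 0.05], [0.9, 0.1]]))  # prints "A"
```

The complementarity argument from the abstract shows up here when the views disagree: a document ambiguous under one representation can still be attributed correctly if the other views vote consistently.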
2018-12-10
Cui, Limeng, Chen, Zhensong, Zhang, Jiawei, He, Lifang, Shi, Yong, Yu, Philip S..  2018.  Multi-view Collective Tensor Decomposition for Cross-modal Hashing. Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval. :73–81.

Multimedia data available in various disciplines are usually heterogeneous, containing representations in multiple views, which makes cross-modal search techniques necessary and useful. It is a challenging problem due to the heterogeneity of the data: multiple modalities, multiple views in each modality, and diverse data categories. In this paper, we propose a novel multi-view cross-modal hashing method named Multi-view Collective Tensor Decomposition (MCTD) to fuse these data effectively, which can exploit the complementary features extracted from the multiple views of each modality while simultaneously discovering multiple separated subspaces by leveraging the data categories as supervision information. Our contributions are summarized as follows: 1) we exploit tensor modeling to obtain a better representation of the complementary features and define a latent representation space; 2) a block-diagonal loss is proposed to explicitly pursue a more discriminative latent tensor space by exploiting supervision information; 3) we propose a new feature projection method to characterize the data and to generate the latent representation for incoming new queries. An optimization algorithm based on an iterative updating procedure is proposed to solve the objective function designed for MCTD. Experimental results demonstrate the state-of-the-art precision of MCTD compared with competing methods.
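The hashing step at the heart of cross-modal retrieval can be sketched independently of the paper's tensor machinery: project each modality's feature vector into a shared latent space with a modality-specific matrix, then binarize with a sign function so that semantically matching items from different modalities land on nearby binary codes. The projection matrices and toy features below are hypothetical placeholders; in MCTD they would be learned via the collective tensor decomposition, which this sketch does not implement.

```python
# Hedged sketch of the binarization step in cross-modal hashing
# (not the MCTD method itself). W is a hypothetical, hand-picked
# projection; MCTD learns such projections from supervised data.

def hash_code(x, W):
    """Binary code: sign pattern of the latent projection W @ x."""
    latent = [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in W]
    return [1 if v >= 0 else 0 for v in latent]

def hamming(a, b):
    """Hamming distance between two binary codes (retrieval metric)."""
    return sum(u != v for u, v in zip(a, b))

# Toy example: an image-view and a text-view feature of the *same* item.
# The per-modality projections are chosen so the two views align.
W_img = [[1.0, -1.0], [0.5, 0.5]]
W_txt = [[-1.0, 1.0], [0.5, 0.5]]
img_feat = [0.9, 0.1]
txt_feat = [0.1, 0.9]  # same semantic content, different modality

print(hamming(hash_code(img_feat, W_img), hash_code(txt_feat, W_txt)))  # prints 0
```

Retrieval then reduces to ranking database items by Hamming distance to the query's code, which is why learning projections that align the modalities (the hard part, handled in MCTD by tensor decomposition with a block-diagonal loss) is the crux of the method.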