Biblio

Filters: Author is Ji, Yi
2022-08-12
Ji, Yi, Ohsawa, Yukio.  2021.  Mining Frequent and Rare Itemsets With Weighted Supports Using Additive Neural Itemset Embedding. 2021 International Joint Conference on Neural Networks (IJCNN). :1–8.
Over the past two decades, itemset mining techniques have become an integral part of pattern mining in large databases. We present a novel system for simultaneously mining frequent and rare itemsets, with supports weighted by cardinality, in transactional datasets. Based on our neural item embedding with additive compositionality, the original mining problems are approximately reduced to polynomial-time convex optimization, namely a series of vector subset selection problems in Euclidean space. Under this reduction, the numbers of transactions and items are no longer exponential factors in the time complexity; only the Euclidean space dimension remains, and it can be chosen freely to trade mining speed against result quality. The efficacy of our method shows that additive compositionality can be represented by linear translation in the itemset vector space, resembling the linguistic regularities found in word embeddings produced by similar neural models. Experiments show that our learned embedding recovers pattern itemsets with higher accuracy than sampling-based lossy mining techniques in most cases, and that the scalability of our mining approach surpasses several state-of-the-art distributed mining algorithms.
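As a rough illustration of the additive compositionality the abstract describes, the sketch below composes an itemset's vector by summing its items' learned vectors and ranks candidate itemsets by Euclidean distance to a query vector, standing in for the paper's vector subset selection. The embedding values, the scoring rule, and all names here are hypothetical; the actual training objective and selection procedure are not given in the abstract.

```python
import numpy as np

# Minimal sketch (not the authors' code) of additive itemset embedding.
# Random vectors stand in for a learned embedding; the distance-based
# ranking is an illustrative assumption, not the paper's algorithm.

rng = np.random.default_rng(0)
items = ["bread", "milk", "butter", "beer"]
dim = 8  # Euclidean dimension: the abstract's speed/quality trade-off knob
embedding = {item: rng.normal(size=dim) for item in items}

def itemset_vector(itemset):
    """Additive compositionality: sum the member items' vectors."""
    return np.sum([embedding[i] for i in itemset], axis=0)

def distance(itemset, query):
    """Hypothetical score: proximity of the composed vector to a query."""
    return np.linalg.norm(itemset_vector(itemset) - query)

query = itemset_vector(["bread", "milk"])
candidates = [["bread", "milk"], ["bread", "butter"], ["beer", "milk"]]
print(sorted(candidates, key=lambda s: distance(s, query)))
```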
2022-03-09
Shi, Di-Bo, Xie, Huan, Ji, Yi, Li, Ying, Liu, Chun-Ping.  2021.  Deep Content Guidance Network for Arbitrary Style Transfer. 2021 International Joint Conference on Neural Networks (IJCNN). :1–8.
Arbitrary style transfer refers to generating a new image from any pair of existing images, such that the generated image retains the content structure of one and the style pattern of the other. Recent arbitrary style transfer algorithms typically perform well at either content retention or style transfer, but struggle to strike a trade-off between the two. In this paper, we propose the Deep Content Guidance Network (DCGN), built by stacking content guidance (CG) layers. Each CG layer comprises a position self-attention (pSA) module, a channel self-attention (cSA) module, and a content guidance attention (cGA) module. Specifically, the pSA module extracts more effective content information from the spatial layout of content images, while the cSA module enriches the style representation of style images along the channel dimension. From a non-local view, the cGA module uses content information to guide the distribution of style features, yielding a more detailed style expression. Moreover, we introduce a new permutation loss to generalize feature expression, obtaining rich feature expressions while maintaining content structure. Qualitative and quantitative experiments verify that our approach produces better stylized images than state-of-the-art methods.
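The module names in the abstract suggest standard non-local attention blocks; the PyTorch sketch below shows one plausible reading of pSA (attention over spatial positions), cSA (attention over channels), and cGA (cross-attention in which content queries redistribute style features). The 1×1 convolutions, the c//8 channel reduction, and the residual connections are assumptions for illustration, not the authors' published DCGN architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionSelfAttention(nn.Module):
    """pSA: self-attention over spatial positions of the content features."""
    def __init__(self, c):
        super().__init__()
        self.q = nn.Conv2d(c, c // 8, 1)
        self.k = nn.Conv2d(c, c // 8, 1)
        self.v = nn.Conv2d(c, c, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)          # (b, hw, c//8)
        k = self.k(x).flatten(2)                          # (b, c//8, hw)
        attn = F.softmax(q @ k, dim=-1)                   # (b, hw, hw)
        v = self.v(x).flatten(2)                          # (b, c, hw)
        return (v @ attn.transpose(1, 2)).view(b, c, h, w) + x

class ChannelSelfAttention(nn.Module):
    """cSA: self-attention over channels of the style features."""
    def forward(self, x):
        b, c, h, w = x.shape
        f = x.flatten(2)                                  # (b, c, hw)
        attn = F.softmax(f @ f.transpose(1, 2), dim=-1)   # (b, c, c)
        return (attn @ f).view(b, c, h, w) + x

class ContentGuidanceAttention(nn.Module):
    """cGA: content queries attend over style keys/values (cross-attention)."""
    def __init__(self, c):
        super().__init__()
        self.q = nn.Conv2d(c, c // 8, 1)  # queries from content
        self.k = nn.Conv2d(c, c // 8, 1)  # keys from style
        self.v = nn.Conv2d(c, c, 1)       # values from style

    def forward(self, content, style):
        b, c, h, w = content.shape
        q = self.q(content).flatten(2).transpose(1, 2)    # (b, hw_c, c//8)
        k = self.k(style).flatten(2)                      # (b, c//8, hw_s)
        attn = F.softmax(q @ k, dim=-1)                   # (b, hw_c, hw_s)
        v = self.v(style).flatten(2).transpose(1, 2)      # (b, hw_s, c)
        return (attn @ v).transpose(1, 2).reshape(b, c, h, w)

content, style = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
out = ContentGuidanceAttention(64)(PositionSelfAttention(64)(content),
                                   ChannelSelfAttention()(style))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```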