Biblio

Filters: Author is Wu, Xiao
Li, Gaochao, Jin, Xin, Wang, Zhonghua, Chen, Xunxun, Wu, Xiao.  2018.  Expert Recommendation Based on Collaborative Filtering in Subject Research. Proceedings of the 2018 International Conference on Information Science and System. :291–298.

This article implements an expert-recommendation method based on collaborative filtering. The recommendation model extracts potential evaluation experts from historical data, computes the relevance between past subjects and the current subject, derives each expert's evaluation-experience index and personal-ability index, calculates the relevance of research direction between experts and subjects, and finally recommends the most suitable experts.
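The scoring scheme in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's actual method: the similarity measure (cosine over keyword weights), the multiplicative combination of indices, and all field names (`evaluated`, `experience`, `ability`) are assumptions for the sketch.

```python
import math

def cosine(a, b):
    """Cosine similarity between two keyword-weight dicts (assumed relevance measure)."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend_experts(current_subject, past_subjects, experts, top_k=3):
    """Score each expert: sum the relevance of the current subject to the
    past subjects they evaluated, weighted by their experience and ability
    indices, then return the top-k expert names."""
    scores = {}
    for e in experts:
        relevance = sum(
            cosine(current_subject, past_subjects[s])
            for s in e["evaluated"] if s in past_subjects
        )
        scores[e["name"]] = relevance * e["experience"] * e["ability"]
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```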

Zhao, Bo, Wu, Xiao, Cheng, Zhi-Qi, Liu, Hao, Jie, Zequn, Feng, Jiashi.  2018.  Multi-View Image Generation from a Single-View. Proceedings of the 26th ACM International Conference on Multimedia. :383–391.

Generating multi-view images with realistic-looking appearance from only a single input view is a challenging problem. In this paper, we attack this problem by proposing a novel image generation model termed VariGANs, which combines the merits of variational inference and Generative Adversarial Networks (GANs). It generates the target image in a coarse-to-fine manner rather than in a single pass, which suffers from severe artifacts. It first performs variational inference to model the global appearance of the object (e.g., shape and color) and produces coarse images of different views. Conditioned on the generated coarse images, it then performs adversarial learning to fill in details consistent with the input and generate the fine images. Extensive experiments conducted on two clothing datasets, MVC and DeepFashion, demonstrate that the images generated by the proposed VariGANs are more plausible than those generated by existing approaches, providing more consistent global appearance as well as richer and sharper details.
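The coarse-to-fine pipeline described above can be sketched structurally. This is not the paper's network: both stages are stand-in linear/noise operations, and all shapes, names, and the latent dimension are illustrative assumptions; only the two-stage flow (variational coarse generation, then conditioned refinement) mirrors the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

def coarse_generator(z, condition):
    """Stand-in for the variational coarse stage: maps latent code z plus
    the conditioning vector to a low-resolution (8x8) image capturing
    only global appearance."""
    w = np.ones((64, z.size + condition.size)) / (z.size + condition.size)
    return (w @ np.concatenate([z, condition])).reshape(8, 8)

def fine_generator(coarse, condition):
    """Stand-in for the adversarial fine stage: upsamples the coarse image
    4x and adds high-frequency detail (here, plain noise)."""
    up = np.kron(coarse, np.ones((4, 4)))  # 8x8 -> 32x32 nearest-neighbour upsample
    return up + 0.01 * rng.standard_normal(up.shape)

def generate_view(input_view, target_pose, latent_dim=16):
    """Coarse-to-fine generation: sample a latent code (as variational
    inference would), produce a coarse target view, then refine it
    conditioned on the input view and target pose."""
    condition = np.concatenate([input_view.ravel(), target_pose])
    z = rng.standard_normal(latent_dim)  # z ~ q(z | input, pose) in the real model
    coarse = coarse_generator(z, condition)
    return fine_generator(coarse, condition)
```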

Zhang, Kailong, Li, Jiwei, Lu, Zhou, Luo, Mei, Wu, Xiao.  2013.  A Scene-Driven Modeling Reconfigurable Hardware-in-Loop Simulation Environment for the Verification of an Autonomous CPS. 2013 5th International Conference on Intelligent Human-Machine Systems and Cybernetics. 1:446–451.

Cyber-Physical Systems (CPS) are a new evolutionary form of embedded systems. Because they merge computation and physical processes, they have recently challenged traditional verification and simulation methods. After analyzing the state of the art of related research, a new simulation environment is designed for a specific autonomous cyber-physical system, the Unmanned Aerial Vehicle, and is made scene-driven, model-based, and reconfigurable. In this environment, a novel CPS-in-loop architecture, which supports simulations under different customized scenes, is first studied to ensure openness and flexibility. As a further foundation, dynamics models of the CPS and atmospheric models of the relevant sensors are introduced to simulate the motion of the CPS and changes in its posture. On this basis, reconfigurable scene-driven mechanisms based on hybrid events are devised. Different scenes can then be configured according to specific verification requirements; each scene is decomposed into a spatio-temporal event sequence and scheduled by a scene executor. With this environment, not only the posture of the CPS but also the autonomy of its behavior can be verified and observed, which is meaningful for the design of such autonomous CPS.
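The scene executor described in the abstract, which decomposes a scene into a time-ordered event sequence and dispatches each event, might be sketched as follows. The event names, handler signatures, and time-ordered dispatch via a heap are all illustrative assumptions, not details from the paper.

```python
import heapq

def run_scene(events, handlers):
    """Scene-executor sketch: order a scene's (time, name, payload) events
    by timestamp and dispatch each to its handler, returning a log of
    (time, handler result) pairs in execution order."""
    heap = list(events)
    heapq.heapify(heap)  # min-heap on timestamp: earliest event first
    log = []
    while heap:
        t, name, payload = heapq.heappop(heap)
        log.append((t, handlers[name](payload)))
    return log
```

A scene configured for a particular verification requirement would supply its own event list and handlers (e.g. wind gusts or waypoint changes for a UAV), and the log makes the vehicle's scheduled stimuli observable.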