Biblio

Filters: Keyword is 3D reconstruction
2022-03-14
Wang, Xindan, Chen, Qu, Li, Zhi.  2021.  A 3D Reconstruction Method for Augmented Reality Sandbox Based on Depth Sensor. 2021 IEEE 2nd International Conference on Information Technology, Big Data and Artificial Intelligence (ICIBA). 2:844–849.
This paper builds an Augmented Reality Sandbox (AR Sandbox) system based on augmented reality technology and performs a 3D reconstruction of the sandbox terrain using the Microsoft Kinect depth sensor, as an entry point to pave the way for later development of related metaverse applications, such as metaverse architecting and visual interactive modeling. The innovation of this paper is a 3D reconstruction method for the AR Sandbox scene based on a depth sensor, which automatically cuts off the edge of the sandbox table in the Kinect field of view and accurately and completely reconstructs the sandbox terrain in Matlab.
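The core step such a Kinect-based pipeline relies on is back-projecting the depth image to a 3D point cloud while cropping away everything outside the sandbox table. The sketch below is a minimal illustration of that step, not the paper's Matlab implementation; the function name, the rectangular `crop` parameter, and the intrinsics are all assumptions for illustration.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy, crop=None):
    """Back-project a depth image (meters) into 3D points with a pinhole model.

    crop: optional (u0, v0, u1, v1) pixel rectangle that keeps only the
    sandbox-table region and discards the rest (hypothetical parameter,
    standing in for the paper's automatic table-edge cutoff).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                      # Kinect reports 0 where depth is unknown
    if crop is not None:
        u0, v0, u1, v1 = crop
        valid &= (u >= u0) & (u < u1) & (v >= v0) & (v < v1)
    z = depth[valid]
    x = (u[valid] - cx) * z / fx           # standard pinhole back-projection
    y = (v[valid] - cy) * z / fy
    return np.column_stack((x, y, z))      # (n, 3) terrain point cloud
```

The returned point cloud is what a surface-reconstruction or heightmap step would then consume.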
2019-02-25
Pan, Zhiying, Di, Make, Zhang, Jianhua, Ravi, Suraj.  2018.  Automatic Re-Topology and UV Remapping for 3D Scanned Objects Based on Neural Network. Proceedings of the 31st International Conference on Computer Animation and Social Agents. :48–52.
Producing an editable model texture can be a challenging problem if the model is scanned from the real world or generated by a multi-view reconstruction algorithm. To solve this problem, we present a novel re-topology and UV remapping method based on a neural network, which transforms arbitrary models with texture coordinates into semi-regular meshes, keeping the model's texture while removing the influence of lighting information. The main innovation of this paper is to use a neural network to find the appropriate locations of the starting and ending points for models in the UV maps. Each fragmented mesh is then projected to the 2D planar domain. After calculating and optimizing the orientation field, a semi-regular mesh is generated for each patch. These patches can be projected back to three-dimensional space and spliced into a complete mesh. Experiments show that our method achieves satisfactory performance.
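The per-patch "projection to the 2D planar domain" mentioned above can be illustrated with a simple PCA plane fit: flatten each patch onto its two dominant principal axes. This is a hedged stand-in for whatever parameterization the authors actually use, and `project_patch_to_plane` is a hypothetical name.

```python
import numpy as np

def project_patch_to_plane(vertices):
    """Flatten a 3D mesh patch onto its best-fit plane via PCA,
    yielding 2D parameter-domain coordinates.

    vertices: (n, 3) array of patch vertex positions.
    Returns an (n, 2) array of planar coordinates.
    """
    centered = vertices - vertices.mean(axis=0)
    # Right-singular vectors are the principal axes of the patch;
    # the two largest span the best-fit plane.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T
```

For an exactly planar patch this projection is an isometry (distances within the patch are preserved); for curved patches it is only an approximation, which is why real parameterization methods add distortion-minimizing optimization.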
2018-11-19
Sobue, Hideaki, Fukushima, Yuki, Kashiyama, Takahiro, Sekimoto, Yoshihide.  2017.  Flying Object Detection and Classification by Monitoring Using Video Images. Proceedings of the 25th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems. :57:1–57:4.
In recent years, there has been remarkable development in unmanned aerial vehicles (UAVs); certain companies are even trying to use UAVs to deliver goods. It is therefore predicted that many such objects will fly over cities in the near future. This study proposes a system for monitoring objects flying over a city. We use multiple 4K video cameras to capture videos of the flying objects. In this research, we combine background subtraction with a state-of-the-art tracking method, the Kernelized Correlation Filter (KCF), for detection and tracking. We use deep learning for classification and Structure from Motion (SfM) to calculate the 3-dimensional trajectory. A UAV was flown over the inner-city area of Tokyo and videos were captured. The accuracy of each processing step is verified using the videos of objects flying over the city. Each step achieves a certain measure of accuracy; thus, there is a good prospect of creating a system to monitor objects flying over a city.
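The background-subtraction detection stage described above can be sketched with a running-average background model; this is an illustrative assumption rather than the paper's detector (the KCF tracking, CNN classification, and SfM stages are not reproduced), and `detect_moving` with its `alpha` and `thresh` parameters is hypothetical.

```python
import numpy as np

def detect_moving(frames, alpha=0.05, thresh=0.25):
    """Flag moving-object pixels per frame using a running-average
    background model.

    frames: sequence of 2D grayscale arrays with values in [0, 1].
    Returns one boolean foreground mask per frame.
    """
    bg = frames[0].astype(float)
    masks = []
    for f in frames:
        f = f.astype(float)
        masks.append(np.abs(f - bg) > thresh)  # foreground where frame departs from model
        bg = (1 - alpha) * bg + alpha * f      # slowly absorb the scene into the background
    return masks
```

In a full pipeline, connected foreground regions from such masks would seed the tracker, which then follows each candidate object across frames.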