Biblio

Filters: Keyword is Microsoft Kinect
2022-03-14
Wang, Xindan, Chen, Qu, Li, Zhi.  2021.  A 3D Reconstruction Method for Augmented Reality Sandbox Based on Depth Sensor. 2021 IEEE 2nd International Conference on Information Technology, Big Data and Artificial Intelligence (ICIBA). 2:844-849.
This paper builds an Augmented Reality Sandbox (AR Sandbox) system based on augmented reality technology and performs 3D reconstruction of the sandbox terrain using the Microsoft Kinect depth sensor in the AR Sandbox, as an entry point that paves the way for later development of related metaverse applications, such as metaverse architecting and visual interactive modeling. The contribution of this paper is a depth-sensor-based 3D reconstruction method for the AR Sandbox scene that automatically cuts off the edge of the sandbox table in the Kinect's field of view and accurately and completely reconstructs the sandbox terrain in Matlab.
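
A minimal Python sketch of the kind of pipeline the abstract describes (the paper itself works in Matlab): crop a depth frame to the sandbox table, then convert the cropped depths to a terrain height map. The depth-range thresholds, table depth, and bounding-box cropping heuristic below are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch, not the paper's algorithm: crop a Kinect depth frame
# to the sandbox table and convert it to a terrain height map.
import numpy as np

def crop_to_sandbox(depth_mm, near=800, far=1200):
    """Keep only pixels whose depth falls in the expected table range, then
    crop to the bounding box of that region (a stand-in for the paper's
    automatic edge cut-off). Thresholds are assumed values in millimetres."""
    valid = (depth_mm > near) & (depth_mm < far)
    rows, cols = np.any(valid, axis=1), np.any(valid, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return depth_mm[r0:r1 + 1, c0:c1 + 1], valid[r0:r1 + 1, c0:c1 + 1]

def depth_to_heights(depth_mm, mask, table_depth=1150):
    """Convert depth (sensor looking down at the table) into terrain height
    above the table plane; pixels outside the table become NaN."""
    return np.where(mask, table_depth - depth_mm, np.nan).astype(np.float32)

# Demo with a synthetic frame: a flat table carrying a central sand mound.
depth = np.full((480, 640), 4000, dtype=np.float32)   # background, out of range
yy, xx = np.mgrid[0:480, 0:640]
mound = 150 * np.exp(-((yy - 240) ** 2 + (xx - 320) ** 2) / 8000.0)
depth[100:380, 150:500] = 1150 - mound[100:380, 150:500]

cropped, mask = crop_to_sandbox(depth)
terrain = depth_to_heights(cropped, mask)
print(terrain.shape, np.nanmax(terrain))  # height map ready for surface plotting
```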
2018-11-14
Wu, Q., Zhao, W.  2018.  Machine Learning Based Human Activity Detection in a Privacy-Aware Compliance Tracking System. 2018 IEEE International Conference on Electro/Information Technology (EIT). :0673–0676.

In this paper, we report our work on using machine learning techniques to predict back-bending activity based on field data acquired in a local nursing home. The data are recorded by a privacy-aware compliance tracking system (PACTS). The objective of PACTS is to detect back-bending activities and issue real-time alerts when a participant bends her back excessively, which we hope will help participants form good habits of using proper body mechanics when performing lifting/pulling tasks. We show that our algorithms can differentiate nursing staff's baseline and high-level bending activities using human skeleton data without any expert rules.
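As a rough illustration of how skeleton data can separate baseline from high-level bending (the paper's actual features and machine-learning models are not detailed in the abstract), the sketch below computes a trunk-flexion angle from two Kinect joints and applies an assumed threshold; in practice a trained classifier would replace the threshold.

```python
# Illustrative sketch only: a plausible back-bending feature from Kinect
# skeleton joints. Joint choice and the 40-degree threshold are assumptions.
import numpy as np

def trunk_flexion_deg(shoulder_center, hip_center):
    """Angle between the hip->shoulder trunk vector and the vertical axis.
    0 deg = upright; larger values = deeper forward bend."""
    trunk = np.asarray(shoulder_center) - np.asarray(hip_center)
    vertical = np.array([0.0, 1.0, 0.0])               # Kinect y-axis points up
    cos_a = trunk @ vertical / (np.linalg.norm(trunk) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

def label_frame(angle_deg, threshold=40.0):
    """Binary label: 'high' bending vs. 'baseline' posture."""
    return "high" if angle_deg > threshold else "baseline"

# Upright pose vs. a forward bend (joint coordinates in metres, made up).
print(label_frame(trunk_flexion_deg([0.0, 1.4, 2.0], [0.0, 0.9, 2.0])))  # baseline
print(label_frame(trunk_flexion_deg([0.0, 1.3, 1.6], [0.0, 0.9, 2.0])))  # high
```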

2015-05-05
Coatsworth, M., Tran, J., Ferworn, A.  2014.  A hybrid lossless and lossy compression scheme for streaming RGB-D data in real time. 2014 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). :1-6.

Mobile and aerial robots used in urban search and rescue (USAR) operations have shown the potential for allowing us to explore, survey and assess collapsed structures effectively at a safe distance. RGB-D cameras, such as the Microsoft Kinect, allow us to capture 3D depth data in addition to RGB images, providing a significantly richer user experience than flat video and potentially improving situational awareness for first responders. However, the richer data comes at a higher cost in terms of data throughput and computing power requirements. In this paper we consider the problem of live streaming RGB-D data over wired and wireless communication channels, using low-power, embedded computing equipment. When assessing a disaster environment, a range camera is typically mounted on a ground or aerial robot along with the onboard computer system. Ground robots can use both wireless radio and tethers for communications, whereas aerial robots can only use wireless communication. We propose a hybrid lossless and lossy streaming compression format designed specifically for RGB-D data and investigate the feasibility and usefulness of live-streaming this data in disaster situations.
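
The general idea of a hybrid scheme can be sketched as follows (this is not the authors' format): compress the RGB channel lossily with JPEG and the 16-bit depth channel losslessly with zlib, since depth artefacts corrupt geometry while colour artefacts are more tolerable. The frame layout, JPEG quality setting, and resolution below are assumptions.

```python
# Minimal sketch of a hybrid lossless/lossy RGB-D codec, not the paper's
# format: lossy JPEG for colour, lossless zlib for 16-bit depth.
import zlib
import numpy as np
import cv2

def encode_frame(rgb, depth_mm, jpeg_quality=75):
    ok, rgb_bytes = cv2.imencode(".jpg", rgb,
                                 [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
    assert ok, "JPEG encoding failed"
    depth_bytes = zlib.compress(depth_mm.tobytes(), level=6)
    # Length-prefix each payload so both streams can share one channel.
    header = np.array([len(rgb_bytes), len(depth_bytes)], dtype=np.uint32)
    return header.tobytes() + rgb_bytes.tobytes() + depth_bytes

def decode_frame(blob, depth_shape=(480, 640)):
    n_rgb, n_depth = np.frombuffer(blob[:8], dtype=np.uint32)
    rgb = cv2.imdecode(np.frombuffer(blob[8:8 + n_rgb], np.uint8),
                       cv2.IMREAD_COLOR)
    depth = np.frombuffer(zlib.decompress(blob[8 + n_rgb:8 + n_rgb + n_depth]),
                          dtype=np.uint16).reshape(depth_shape)
    return rgb, depth

# Round-trip a compressible synthetic frame and report the size reduction.
rgb = np.zeros((480, 640, 3), dtype=np.uint8)
rgb[:, :, 1] = 128                                        # flat green image
depth = np.tile(np.arange(640, dtype=np.uint16) + 800, (480, 1))  # smooth ramp
blob = encode_frame(rgb, depth)
rgb2, depth2 = decode_frame(blob)
assert np.array_equal(depth, depth2)        # depth is bit-exact (lossless path)
print(f"{(rgb.nbytes + depth.nbytes) / len(blob):.1f}x smaller")
```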