Biblio

Filters: Keyword is SLAM (robots)
2020-12-15
Shanavas, H., Ahmed, S. A., Hussain, M. H. Safwat.  2018.  Design of an Autonomous Surveillance Robot Using Simultaneous Localization and Mapping. 2018 International Conference on Design Innovations for 3Cs Compute Communicate Control (ICDI3C). :64–68.

This paper presents the design and complete implementation of a robot that can be autonomously controlled for surveillance, and that can be seamlessly integrated into an existing security system. The robot can map the interior of an unexplored building and navigate autonomously using its self-piloting feature. It uses a 2D LIDAR to map its environment in real time and an HD camera to record suspicious activity. It also features a built-in display with touch-based commands and voice recognition that lets people interact with the robot in any situation.
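To make the mapping component of such a system concrete, here is a minimal sketch of occupancy-grid mapping from 2D LIDAR scans in log-odds form, a standard building block of LIDAR SLAM pipelines. The grid size, resolution, update weights, and function names are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of the mapping half of a 2D-LIDAR SLAM pipeline:
# an occupancy grid updated in log-odds from range/bearing scans.
# All parameters below are hypothetical tuning values.
import numpy as np

GRID = 200                   # grid is GRID x GRID cells
RES = 0.05                   # metres per cell
L_OCC, L_FREE = 0.9, -0.4    # log-odds increments (assumed values)

log_odds = np.zeros((GRID, GRID))

def to_cell(x, y):
    """World coordinates (m) -> grid indices, origin at grid centre."""
    return int(x / RES) + GRID // 2, int(y / RES) + GRID // 2

def integrate_scan(pose, ranges, angles, max_range=8.0):
    """Fuse one scan taken at pose = (x, y, heading) into the grid."""
    px, py, th = pose
    for r, a in zip(ranges, angles):
        if r >= max_range:   # no return on this beam: skip it
            continue
        # Beam endpoint in world coordinates.
        ex, ey = px + r * np.cos(th + a), py + r * np.sin(th + a)
        # Mark cells along the beam as free.
        for t in np.linspace(0.0, 1.0, int(r / RES)):
            i, j = to_cell(px + t * (ex - px), py + t * (ey - py))
            if 0 <= i < GRID and 0 <= j < GRID:
                log_odds[i, j] += L_FREE
        # Mark the endpoint cell as occupied (undoing its free update).
        i, j = to_cell(ex, ey)
        if 0 <= i < GRID and 0 <= j < GRID:
            log_odds[i, j] += L_OCC - L_FREE

# Example: one synthetic scan from the origin, every beam hitting a wall at 2 m.
angles = np.linspace(-np.pi / 2, np.pi / 2, 181)
integrate_scan((0.0, 0.0, 0.0), np.full_like(angles, 2.0), angles)
```

In a full SLAM system the pose passed to integrate_scan would itself come from scan matching or pose-graph optimization rather than being known in advance.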

2020-12-11
Fujiwara, N., Shimasaki, K., Jiang, M., Takaki, T., Ishii, I.  2019.  A Real-time Drone Surveillance System Using Pixel-level Short-time Fourier Transform. 2019 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). :303–308.

In this study, we propose a novel method for drone surveillance that can simultaneously analyze time-frequency responses in all pixels of a high-frame-rate video. The propellers of flying drones rotate at hundreds of Hz, and their principal vibration frequency components are much higher than those of their background objects. To separate the pixels around a drone's propellers from its background, we exploit these time-series features for vibration source localization with a pixel-level short-time Fourier transform (STFT). We verify the relationship between the number of taps in the STFT computation and the performance of our algorithm, including the execution time and the localization accuracy, by conducting experiments under various conditions, such as degraded appearance, weather, and defocus blur. The robustness of the proposed algorithm is also verified by localizing a flying multi-copter in real time in an outdoor scenario.
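As an illustration of the pixel-level STFT idea, the following minimal sketch flags pixels whose temporal spectrum concentrates energy in a propeller-like frequency band. The frame rate, tap count, band limits, and threshold are assumed values for the example, not the paper's settings.

```python
# Sketch of pixel-level short-time Fourier analysis for vibration source
# localization: an FFT over the time axis of a high-frame-rate clip, with a
# band-energy test per pixel. All parameters are illustrative assumptions.
import numpy as np

def propeller_mask(frames, fps=1000, taps=64, band=(100, 450), thresh=5.0):
    """frames: (T, H, W) grayscale high-frame-rate clip.
    Returns a boolean (H, W) mask of pixels with strong energy in `band` Hz,
    using the most recent `taps` frames as the STFT window."""
    window = frames[-taps:] * np.hanning(taps)[:, None, None]
    spectrum = np.abs(np.fft.rfft(window, axis=0))       # (taps//2+1, H, W)
    freqs = np.fft.rfftfreq(taps, d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    band_energy = spectrum[in_band].sum(axis=0)
    baseline = spectrum[~in_band][1:].sum(axis=0) + 1e-6  # skip the DC bin
    return band_energy / baseline > thresh

# Synthetic check: one pixel flickers at 200 Hz, the rest are static.
T, H, W = 64, 32, 32
t = np.arange(T) / 1000.0
clip = np.zeros((T, H, W))
clip[:, 16, 16] = 0.5 + 0.5 * np.sin(2 * np.pi * 200 * t)
print(propeller_mask(clip).sum())   # expect 1: only the flickering pixel
```

On this synthetic input the mask picks out the flickering pixel, while static pixels, whose energy sits entirely at DC, are rejected; this is the same separation principle the abstract describes for propellers against slower-varying backgrounds.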

2017-03-08
Marburg, A., Hayes, M. P.  2015.  SMARTPIG: Simultaneous mosaicking and resectioning through planar image graphs. 2015 IEEE International Conference on Robotics and Automation (ICRA). :5767–5774.

This paper describes Smartpig, an algorithm for the iterative mosaicking of images of a planar surface using a unique parameterization which decomposes inter-image projective warps into camera intrinsics, fronto-parallel projections, and inter-image similarities. The constraints resulting from the inter-image alignments within an image set are stored in an undirected graph structure, allowing efficient optimization of image projections on the plane. Camera pose is also directly recoverable from the graph, making Smartpig a feasible solution to the problem of simultaneous localization and mapping (SLAM). Smartpig is demonstrated on a set of 144 high-resolution aerial images and evaluated with a number of metrics against ground control.
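A minimal sketch of the image-graph idea follows: pairwise alignment constraints sit on the edges of an undirected graph, and each image's placement on the mosaic plane is recovered by composing transforms along a spanning tree from a reference image. The 3x3 matrices and breadth-first traversal here are illustrative stand-ins; Smartpig's decomposed warps and joint graph optimization are richer than this.

```python
# Sketch of a planar image graph: edges carry pairwise plane transforms,
# nodes (images) get placed by chaining transforms from a root image.
# Edge values here are made up for the example.
import numpy as np
from collections import deque

# edges[(i, j)] = T_ij, mapping image j's plane coordinates into image i's.
edges = {
    (0, 1): np.array([[1.0, 0.0, 100.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]),
    (1, 2): np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 80.0], [0.0, 0.0, 1.0]]),
}

def place_images(edges, root=0):
    """BFS from the root image, composing edge transforms to get each
    image's placement on the shared plane."""
    adj = {}
    for (i, j), T in edges.items():
        adj.setdefault(i, []).append((j, T))
        adj.setdefault(j, []).append((i, np.linalg.inv(T)))  # undirected
    poses = {root: np.eye(3)}
    queue = deque([root])
    while queue:
        i = queue.popleft()
        for j, T in adj.get(i, []):
            if j not in poses:
                poses[j] = poses[i] @ T
                queue.append(j)
    return poses

for i, P in sorted(place_images(edges).items()):
    print(i, P[:2, 2])   # each image's translation on the mosaic plane
```

Keeping constraints on graph edges rather than baking them into absolute placements is what lets later loop-closing observations be added and re-optimized without rebuilding the mosaic from scratch.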

Kerl, C., Stückler, J., Cremers, D.  2015.  Dense Continuous-Time Tracking and Mapping with Rolling Shutter RGB-D Cameras. 2015 IEEE International Conference on Computer Vision (ICCV). :2264–2272.

We propose a dense continuous-time tracking and mapping method for RGB-D cameras. We parametrize the camera trajectory using continuous B-splines and optimize the trajectory through dense, direct image alignment. Our method also directly models rolling shutter in both RGB and depth images within the optimization, which improves tracking and reconstruction quality for low-cost CMOS sensors. Using a continuous trajectory representation has a number of advantages over a discrete-time representation (e.g., camera poses at the frame interval). With splines, fewer variables need to be optimized than with a discrete representation, since the trajectory can be represented with fewer control points than frames. Splines also naturally impose smoothness constraints on derivatives of the trajectory estimate. Finally, the continuous trajectory representation makes it possible to compensate for rolling-shutter effects, since a pose estimate is available at any exposure time within an image. Our approach demonstrates superior tracking and reconstruction quality compared to approaches that assume discrete-time poses or a global shutter.
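To illustrate what evaluating a continuous B-spline trajectory at an arbitrary time looks like, here is a minimal sketch using a uniform cubic B-spline over camera translation only. Full SE(3) splines, as needed for rolling-shutter pose modeling, require the cumulative formulation for rotations and are omitted; all values below are illustrative.

```python
# Sketch of continuous-time trajectory evaluation with a uniform cubic
# B-spline over translation. Control points are hypothetical.
import numpy as np

# Uniform cubic B-spline basis matrix (acts on [u^3, u^2, u, 1]).
M = np.array([[-1,  3, -3, 1],
              [ 3, -6,  3, 0],
              [-3,  0,  3, 0],
              [ 1,  4,  1, 0]]) / 6.0

def spline_position(ctrl, t):
    """Position at continuous time t (in knot units) from control points
    ctrl (N, D). Each segment blends four consecutive control points."""
    i = int(np.floor(t))
    u = t - i
    c = ctrl[i:i + 4]                          # the segment's 4 control points
    weights = np.array([u**3, u**2, u, 1.0]) @ M
    return weights @ c

# Example: camera translation sampled at an arbitrary time between knots,
# which is what lets each rolling-shutter row get its own pose estimate.
ctrl = np.array([[0, 0], [1, 0], [2, 1], [3, 3], [4, 6]], dtype=float)
print(spline_position(ctrl, 0.5))   # pose between frames, not at a frame
```

Because the spline yields a pose at any continuous time, each rolling-shutter row can be assigned the pose at its own exposure instant rather than a single per-frame pose, which is the compensation the abstract refers to.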