Biblio

2017-11-20
Saito, Susumu, Nakano, Teppei, Akabane, Makoto, Kobayashi, Tetsunori.  2016.  Evaluation of Collaborative Video Surveillance Platform: Prototype Development of Abandoned Object Detection. Proceedings of the 10th International Conference on Distributed Smart Camera. :172–177.

This paper evaluates, through an abandoned object detection task, a new video surveillance platform presented in a previous study. The platform offers automated detection and alerting, which remains a major challenge for machine algorithms because of the recall-precision tradeoff. To achieve both high recall and high precision simultaneously, a hybrid approach that applies crowdsourcing after image analysis is proposed. It has been unclear, however, to what extent this approach can improve detection accuracy and how quickly it can raise alerts. In this paper, an experiment is conducted on abandoned object detection, one of the most common surveillance tasks. The results show that detection accuracy improved from 50% (without crowdsourcing) to a stable 95-100% (with crowdsourcing) using a majority vote of 7 crowdworkers per task. In contrast, alert latency remains an open issue, since at least 7 minutes are required to reach the best performance.
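The crowd-confirmation step described in the abstract can be sketched as a simple majority vote over worker judgments. The sketch below is an illustrative assumption, not the authors' implementation; the function name `confirm_alert` and the strict-majority threshold are hypothetical:

```python
def confirm_alert(crowd_votes, threshold=None):
    """Confirm a machine-flagged candidate object via crowd majority vote.

    crowd_votes: list of booleans, one per crowdworker
                 (True = worker judged the object as abandoned).
    threshold:   minimum number of True votes needed to confirm;
                 defaults to a strict majority (e.g. 4 of 7 workers).
    """
    if threshold is None:
        threshold = len(crowd_votes) // 2 + 1  # strict majority
    return sum(crowd_votes) >= threshold
```

With 7 workers, a strict majority means at least 4 "abandoned" votes confirm the alert; in this hybrid scheme the image-analysis stage keeps recall high by over-flagging candidates, while the crowd vote filters false positives to raise precision.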

2017-09-26
Papadopoulos, Georgios Z., Gallais, Antoine, Schreiner, Guillaume, Noël, Thomas.  2016.  Importance of Repeatable Setups for Reproducible Experimental Results in IoT. Proceedings of the 13th ACM Symposium on Performance Evaluation of Wireless Ad Hoc, Sensor, & Ubiquitous Networks. :51–59.

Performance analysis of newly designed solutions is essential for efficient Internet of Things (IoT) and Wireless Sensor Network (WSN) deployments. Simulation and experimental evaluation are vital steps in the development of protocols and applications for wireless technologies. Nowadays, new solutions can be tested at very large scale on both simulators and testbeds. In this paper, we first discuss the importance of repeatable experimental setups for reproducible performance evaluation results. To this end, we present FIT IoT-LAB, a very large-scale experimental testbed consisting of 2769 low-power wireless devices and 127 mobile robots. We then demonstrate, through a number of experiments conducted on the FIT IoT-LAB testbed, how to conduct meaningful experiments under real-world conditions. Finally, we discuss to what extent results obtained from such experiments can be considered scientific, i.e., reproducible by the community.
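One minimal building block for the repeatable setups the abstract argues for is recording every parameter of a run and deriving a deterministic identifier from it, so later runs can be checked against the original configuration. The sketch below is a generic illustration under that assumption, not part of FIT IoT-LAB's tooling; the function name and configuration fields are hypothetical:

```python
import hashlib
import json

def experiment_fingerprint(config):
    """Derive a short, deterministic fingerprint of an experiment setup.

    config: plain dict of run parameters (seed, node count, firmware, ...).
    Serializing with sorted keys makes the fingerprint independent of
    dict insertion order, so identical setups always hash identically.
    """
    canonical = json.dumps(config, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:12]
```

Publishing such a fingerprint alongside results lets other researchers verify they are rerunning the same setup: any change to a parameter, however small, produces a different identifier.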