Bibliography
To address the problem of garbage cleaning in small water areas, an intelligent miniature water-surface garbage cleaning robot featuring unmanned driving and convenient operation is designed. With the STC12C5A60S2 as the main controller, the design coordinates the power module, transmission module, and cleaning module to collect and transport garbage, while a WiFi module provides intelligent remote control of the robot. A prototype is then developed and tested to verify the rationality of the design. Compared with traditional manually driven water-surface cleaning devices, the designed robot achieves intelligent unmanned control, saving human resources and reducing labor intensity; the system operates securely and stably and has practical value.
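As an illustration of the remote-control architecture summarized above, the following is a minimal Python sketch of a WiFi command-dispatch loop; the command protocol, port number, and module actions are hypothetical stand-ins, since the paper's actual firmware runs on the STC12C5A60S2 itself.

# Minimal sketch of the remote-control loop described above.
# The command protocol and module actions are hypothetical; the
# paper's firmware targets the STC12C5A60S2 microcontroller.

import socket

# Hypothetical single-character commands sent over the WiFi link.
COMMANDS = {
    b"F": "forward",          # drive both propellers forward
    b"B": "backward",
    b"L": "turn_left",
    b"R": "turn_right",
    b"C": "toggle_cleaning",  # start/stop the conveyor cleaning module
    b"S": "stop",
}

def control_loop(host="0.0.0.0", port=8080):
    """Listen for remote commands and dispatch them to the drive and cleaning modules."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            while True:
                cmd = conn.recv(1)
                if not cmd:
                    break
                action = COMMANDS.get(cmd)
                if action:
                    print("executing:", action)  # replace with motor/relay control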
In mobile wireless sensor networks (MWSNs), data imprecision is a common problem, and even a minor error can greatly affect decision making in real-time applications. Although many existing techniques exploit the spatio-temporal characteristics of mobile environments, few measure the trustworthiness of sensor data. We propose a unique online context-aware data cleaning method that measures trustworthiness through an initial candidate reduction based on trust parameters drawn from financial markets theory. Sensors with similar trajectory behaviors are assigned trust scores estimated by computing “betas” to find the most accurate data to trust. Instead of placing all trust in a single candidate sensor's data to perform the cleaning, a Diversified Trust Portfolio (DTP) is generated from the selected set of spatially autocorrelated candidate sensors. Our results show that samples cleaned by the proposed method exhibit lower percent error than two well-known and effective data cleaning algorithms in both outdoor and indoor test scenarios.
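The following minimal Python sketch illustrates the beta-based trust weighting described above; the trust-score mapping and the portfolio weighting scheme are assumptions made for illustration, not the paper's exact formulation.

# Minimal sketch of the Diversified Trust Portfolio (DTP) idea:
# candidates whose trajectory "beta" relative to the reference sensor
# is closest to 1 are trusted most, and the cleaned value is a
# trust-weighted average rather than a single candidate's reading.

import numpy as np

def beta(candidate, reference):
    """Financial-style beta: covariance of the candidate series with the
    reference series divided by the variance of the reference series."""
    cov = np.cov(candidate, reference)[0, 1]
    return cov / np.var(reference, ddof=1)

def clean_sample(reference_history, candidate_histories, candidate_readings):
    """Replace a suspect reading with a trust-weighted portfolio of
    spatially autocorrelated candidates' current readings."""
    betas = np.array([beta(c, reference_history) for c in candidate_histories])
    trust = 1.0 / (1.0 + np.abs(betas - 1.0))   # assumed trust mapping
    weights = trust / trust.sum()               # diversify across candidates
    return float(np.dot(weights, candidate_readings))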
In recent years, many researchers have focused on log-structured file systems (LFS) because they gracefully enhance random-write performance and efficiently resolve the consistency issue. However, the write policy of LFS can cause file fragmentation, which degrades the sequential read performance of the file system. In this paper, we analyze the relationship between file fragmentation and sequential read performance, considering the characteristics of the underlying storage devices. We also propose a novel file defragmentation scheme for LFS that effectively addresses the file fragmentation problem. Our scheme reorders the valid data blocks belonging to a victim segment based on their inode numbers during the LFS cleaning process. In our experiments, the scheme reduces file fragmentation by up to 98.5% compared with a traditional LFS.
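The following minimal Python sketch illustrates the block-reordering idea described above; the data structures are simplified assumptions, since the actual scheme operates inside the LFS segment cleaner.

# Minimal sketch of the defragmentation scheme: during segment cleaning,
# the victim segment's live blocks are written out grouped by inode (and
# by in-file offset), so blocks of the same file become contiguous again.

from dataclasses import dataclass

@dataclass
class Block:
    inode: int    # owning file's inode number
    offset: int   # block offset within the file
    data: bytes

def clean_segment(victim_blocks, is_valid):
    """Return the victim segment's live blocks in the order they should
    be appended to the new log segment."""
    live = [b for b in victim_blocks if is_valid(b)]
    # Sorting by (inode, offset) places each file's blocks contiguously,
    # restoring the sequential-read locality lost to log-structured writes.
    live.sort(key=lambda b: (b.inode, b.offset))
    return live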