Because sensor nodes operate autonomously in varied and often harsh environments, fault-tolerance is one of the most important requirements of WSN applications. Failed sensor nodes, damaged communication links, and missing data are unavoidable in wireless sensor networks, which makes fault-tolerance a key issue. Common causes of such failures include environmental conditions, battery exhaustion, damaged communication links, data collisions, wear-out of memory and storage units, and overloaded sensors. WSNs serve a wide variety of purposes, and the degree of fault-tolerance required depends largely on the application type. Scientific research, for example, tends to rely on large amounts of accurate and precise sensed data, and therefore demands WSNs that support a high data-sampling rate.

The data storage capacity of the sensors is crucial: while some applications require immediate transmission to another node or directly to the base station, others call for periodic or intermittent transmissions. Thus, if the amount of data is large, as dictated by the data precision the application needs, WSN nodes must store that data quickly and effectively until the transmission stage. However, since these requirements depend mostly on the hardware and the wireless settings, WSNs frequently suffer considerable data loss, which compromises data integrity. Sensor nodes are inherently cheap hardware, because many of them must typically be deployed over a large area, sometimes in a non-retrievable environment. This constraint rules out the use of expensive tamper- or overflow-resistant hardware (which is not always reliable either), and a damaged or overflowed sensor can harm data integrity or even reject incoming messages entirely. The problem worsens when high-rate sampling is needed or when data must be received from many nodes, since missing data becomes more common as deployed WSNs grow in scale. Therefore, high-rate-sampling WSN applications require fault-tolerant data storage, a requirement that is difficult to satisfy in practice.

In cases of an overflow, our Distributed Adaptive Clustering algorithm (D-ACR) [1] reconfigures the network by adaptively and hierarchically re-clustering parts of it, based on the rate of incoming data packets, in order to minimize energy consumption and prevent premature death of nodes. However, re-clustering cannot prevent data loss caused by the nature of the sensors themselves. We propose to address this problem with an efficient distributed backup-placement algorithm, named DBP-ACR, performed on the D-ACR refined clusters. The DBP-ACR algorithm redirects packets from overloaded sensors to more suitable placements outside the overloaded areas of the WSN cluster, thereby increasing the fault-tolerance of the network and reducing data loss.
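To make the backup-placement idea more concrete, the following is a minimal sketch of how an overloaded node might redirect a packet to an in-cluster neighbor with spare storage. It is not the DBP-ACR algorithm itself; the node structure, the storage-capacity model, and the rule of choosing the neighbor with the most free storage are illustrative assumptions for this example only.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    node_id: int
    capacity: int                                   # storage slots on this node (assumed unit)
    stored: list = field(default_factory=list)      # packets currently held
    neighbors: list = field(default_factory=list)   # in-cluster neighbors

    def free_storage(self) -> int:
        return self.capacity - len(self.stored)

    def is_overloaded(self) -> bool:
        return self.free_storage() == 0


def backup_placement(node: Node, packet) -> Optional[Node]:
    """Sketch of one backup-placement step: if `node` cannot store `packet`,
    redirect it to the in-cluster neighbor with the most free storage.
    Returns the node that stored the packet, or None if the whole
    neighborhood is overloaded (the packet would be lost)."""
    if not node.is_overloaded():
        node.stored.append(packet)
        return node
    # Candidate placements outside the overloaded node.
    candidates = [n for n in node.neighbors if not n.is_overloaded()]
    if not candidates:
        return None  # no backup placement available in this cluster
    target = max(candidates, key=lambda n: n.free_storage())
    target.stored.append(packet)
    return target


if __name__ == "__main__":
    a = Node(1, capacity=2)
    b = Node(2, capacity=4)
    a.neighbors = [b]
    for i in range(4):                              # more packets than node `a` can hold
        holder = backup_placement(a, f"pkt-{i}")
        print(f"pkt-{i} stored on node {holder.node_id if holder else None}")
```

In this toy run, the first two packets stay on the overloaded node and the remaining ones are redirected to its neighbor, which is the kind of load redistribution the backup-placement step aims for; the actual DBP-ACR placement criteria operate on the D-ACR refined clusters and are defined in the algorithm itself.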