2018-03-19
Keerthana, S., Monisha, C., Priyanka, S., Veena, S..  2017.  De Duplication Scalable Secure File Sharing on Untrusted Storage in Big Data. 2017 International Conference on Information Communication and Embedded Systems (ICICES). :1–6.

Data deduplication provides many benefits, but it puts users' sensitive data at risk of both inside and outside attacks. Traditional encryption, which provides data confidentiality, is incompatible with data deduplication: it requires different users to encrypt their data with their own keys, so identical data copies belonging to different users produce different ciphertexts, making deduplication impossible. Convergent encryption has been proposed to enforce data confidentiality while keeping deduplication feasible. It encrypts/decrypts a data copy with a convergent key, which is obtained by computing the cryptographic hash of the content of the data copy. After key generation and encryption, the user retains the key and sends the ciphertext to the cloud.
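The convergent-encryption idea the abstract describes can be illustrated with a minimal Python sketch. It shows only the principle: the key is the SHA-256 hash of the plaintext, and a hash-derived XOR keystream stands in for a real block cipher; none of the function names come from the paper.

```python
import hashlib

def convergent_key(data: bytes) -> bytes:
    # The convergent key is the cryptographic hash of the plaintext itself,
    # so identical plaintexts always derive identical keys.
    return hashlib.sha256(data).digest()

def keystream(key: bytes, n: int) -> bytes:
    # Illustrative deterministic keystream derived from the key (not AES).
    return hashlib.shake_256(key).digest(n)

def encrypt(data: bytes):
    key = convergent_key(data)
    ct = bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))
    return key, ct

def decrypt(key: bytes, ct: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(ct, keystream(key, len(ct))))
```

Because encryption is fully determined by the content, two users who upload the same file produce the same ciphertext, which is exactly what makes server-side deduplication possible again.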

Rawal, B. S., Vivek, S. S..  2017.  Secure Cloud Storage and File Sharing. 2017 IEEE International Conference on Smart Cloud (SmartCloud). :78–83.
Internet-based online cloud services provide enormous volumes of storage space, tailor made computing resources and eradicates the obligation of native machines for data maintenance as well. Cloud storage service providers claim to offer the ability of secure and elastic data-storage services that can adapt to various storage necessities. Most of the security tools have a finite rate of failure, and intrusion comes with more complex and sophisticated techniques; the security failure rates are skyrocketing. Once we upload our data into the cloud, we lose control of our data, which certainly brings new security risks toward integrity and confidentiality of our data. In this paper, we discuss a secure file sharing mechanism for the cloud with the disintegration protocol (DIP). The paper also introduces new contribution of seamless file sharing technique among different clouds without sharing an encryption key.
Al-Aaridhi, R., Yueksektepe, A., Graffi, K..  2017.  Access Control for Secure Distributed Data Structures in Distributed Hash Tables. 2017 IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN). :1–3.
Peer-To-Peer (P2P) networks open up great possibilities for intercommunication, collaborative and social projects like file sharing, communication protocols or social networks, while offering advantages over the conventional client-server computing model. Such networks counter the problems of centralized servers; for example, P2P networks can scale to millions of users without additional costs. In previous work, we presented a Distributed Data Structure (DDS) scheme that offers a middleware layer for distributed applications. This scheme builds on top of DHT (Distributed Hash Table) based P2P overlays and offers distributed data storage services as a middleware, but it still needs to address security issues. The main objective of this paper is to investigate possible ways to handle the security problem for DDS and to develop a possibly reusable security architecture for access control for secure distributed data structures in P2P networks without depending on trusted third parties.
Jemel, M., Msahli, M., Serhrouchni, A..  2017.  Towards an Efficient File Synchronization between Digital Safes. 2017 IEEE 31st International Conference on Advanced Information Networking and Applications (AINA). :136–143.
One of the main concerns of Cloud storage solutions is to offer availability to the end user. Thus, addressing mobility needs and device variety has emerged as a major challenge. First, data should be synchronized automatically and continuously when the user moves from one device to another. Second, the Cloud service should offer the owner the possibility to share data with specific users. The paper's goal is to develop a secure framework that ensures file synchronization with high quality and minimal resource consumption. As a first step towards this goal, we propose the SyncDS protocol with its associated architecture. The synchronization protocol's efficiency is raised through the choice of the networking protocol as well as the strategy for detecting changes between two versions of a file system located on different devices. Our experimental results show that adopting a Hierarchical Hash Tree to detect the changes between two file systems and adopting the WebSocket protocol for the data exchanges improve the efficiency of the synchronization protocol.
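The hash-tree change detection the authors credit for their efficiency gain can be sketched as follows. This is a hypothetical two-level version, not the SyncDS implementation: compare root hashes first, and descend to per-file hashes only when the roots differ.

```python
import hashlib

def h(b: bytes) -> str:
    return hashlib.sha256(b).hexdigest()

def build_tree(files: dict) -> dict:
    # Leaf level: one hash per file path; root: hash over all sorted leaves.
    leaves = {path: h(data) for path, data in files.items()}
    root = h("".join(f"{p}:{d}" for p, d in sorted(leaves.items())).encode())
    return {"root": root, "leaves": leaves}

def changed_paths(t1: dict, t2: dict) -> set:
    # Identical roots mean nothing to synchronize; otherwise descend to leaves
    # and report only the paths whose hashes disagree.
    if t1["root"] == t2["root"]:
        return set()
    paths = set(t1["leaves"]) | set(t2["leaves"])
    return {p for p in paths if t1["leaves"].get(p) != t2["leaves"].get(p)}
```

In the common case where nothing changed, the two devices exchange a single root hash instead of a full file listing, which is the source of the bandwidth saving.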
Ukwandu, E., Buchanan, W. J., Russell, G..  2017.  Performance Evaluation of a Fragmented Secret Share System. 2017 International Conference On Cyber Situational Awareness, Data Analytics And Assessment (Cyber SA). :1–6.
There are many risks in moving data into public storage environments, along with an increasing threat of large-scale data leakage. Secret sharing schemes have been proposed as a keyless and resilient mechanism to mitigate this, but scaling across large data infrastructures has remained the main obstacle to using secret sharing in big data storage and retrieval. This work applies secret sharing methods, as used in cryptography, in conjunction with data fragmentation to create robust and secure data storage and retrieval. It outlines two different methods of distributing data equally to storage locations, as well as recovering them in a manner that ensures consistent data availability irrespective of file size and type. Our experiments consist of two different methods: data shares and key shares. Using our experimental results, we were able to validate previous work on the effects of the threshold on file recovery. The results also reveal the varying effects of writing shares to, and retrieving them from, storage locations other than computer memory. The implication is that increasing the fragment size at varying file and threshold sizes adds overhead to share creation rather than to file recovery, underscoring the importance of choosing a varying fragment size as file size increases.
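The keyless mechanism referenced here is typically Shamir's (k, n) threshold scheme; a compact sketch over a prime field, illustrative rather than the authors' fragmentation pipeline:

```python
import random

P = 2**127 - 1  # a Mersenne prime defining the finite field

def split(secret: int, n: int, k: int, rng=random.Random(7)):
    # Random polynomial of degree k-1 whose constant term is the secret;
    # each share is a point (x, f(x)) on that polynomial.
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term;
    # any k shares suffice, fewer reveal nothing.
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret
```

The threshold effect the experiments measure falls out directly: recovery succeeds from any k of the n storage locations, so availability survives the loss of n - k of them.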
Kabir, T., Adnan, M. A..  2017.  A Dynamic Searchable Encryption Scheme for Secure Cloud Server Operation Reserving Multi-Keyword Ranked Search. 2017 4th International Conference on Networking, Systems and Security (NSysS). :1–9.
Cloud computing is becoming more and more popular day by day due to its maintenance, multitenancy and performance. Data owners are motivated to outsource their data to cloud servers for resource pooling and productivity, where multiple users can work on the same data concurrently. These servers offer great convenience and reduced cost for the computation, storage and management of data. But concerns can persist over loss of control of certain sensitive information. The complexity of security is largely intensified when data is distributed over a greater number of devices and shared among unrelated users. So this sensitive data should be encrypted to solve security issues that many consumers cannot afford to tackle. In this paper, we present a dynamic searchable encryption scheme whose update operation can be completed by the cloud server while preserving the ability to support multi-keyword ranked search. We have designed a scheme where dynamic operations on data, such as insert, update and delete, are performed by the cloud server without decrypting the data. Thus this scheme not only supports dynamic operations on data but also provides a secure technique by performing those tasks without decryption. State-of-the-art methods let the data users retrieve the data, re-encrypt it under the new policy and then send it again to the cloud. Our proposed method avoids this high computational overhead by relieving the data owners of the burden of performing dynamic operations. The secure and widely used TF × IDF model is used along with the kNN algorithm for construction of the index and generation of the query. We use a tree-based index structure, so our proposed scheme can achieve sub-linear search time. We have conducted experiments on an Amazon EC2 cloud server with three datasets by updating a file, appending a file and deleting a file from the document collection, and compared our results with the state-of-the-art method. Results show that our scheme has an average running time of 42 ms, which is 75% less than the existing method.
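The TF × IDF scoring at the core of such ranked search can be sketched in plaintext form (with no encryption or tree index, both of which the paper adds on top):

```python
import math
from collections import Counter

def tfidf_index(docs: dict) -> dict:
    # One normalized TF-IDF vector per document.
    n = len(docs)
    df = Counter(t for text in docs.values() for t in set(text.lower().split()))
    index = {}
    for doc_id, text in docs.items():
        tf = Counter(text.lower().split())
        vec = {t: (1 + math.log(c)) * math.log(n / df[t]) for t, c in tf.items()}
        norm = math.sqrt(sum(w * w for w in vec.values())) or 1.0
        index[doc_id] = {t: w / norm for t, w in vec.items()}
    return index

def ranked_search(index: dict, query: str, top_k: int = 3):
    # Score every document against the query terms and return the best matches.
    q = Counter(query.lower().split())
    scores = {d: sum(vec.get(t, 0.0) * c for t, c in q.items())
              for d, vec in index.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

In the encrypted variant both the index vectors and the query vector would be transformed (e.g. via the secure kNN construction) so the server can compute these inner products without learning the terms themselves.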
Jacob, C., Rekha, V. R..  2017.  Secured and Reliable File Sharing System with De-Duplication Using Erasure Correction Code. 2017 International Conference on Networks Advances in Computational Technologies (NetACT). :221–228.
Effective storage and management of file systems is essential nowadays to avoid wasting the storage space provided by cloud providers. Data deduplication has been widely used; it stores only a single copy of a file and thus avoids duplicates in cloud storage servers. It helps reduce storage space and save the bandwidth of the cloud service, resulting in high cost savings for cloud service subscribers. Today, the data we need to store is in encrypted form to ensure security. Encryption by data owners with their own keys, however, makes deduplication impossible, since encryption with a key converts data into an unidentifiable format called ciphertext; encrypting even identical data with different keys may result in different ciphertexts. But deduplication and encryption need to work hand in hand to ensure secure, authorized and optimized storage. In this paper, we propose a scheme for file-level deduplication on encrypted files such as text, images and even video files stored in the cloud, based on the user's privilege set and the file's privilege set. The proposed deduplication system distributes files across different servers and uses an erasure correcting code technique to reconstruct files even if parts of them are lost through an attack on any server. Thus the proposed system can ensure both the security and reliability of encrypted files.
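Erasure correcting codes let such a system reconstruct a file when one server's fragment is lost; a minimal single-erasure XOR parity sketch (a real deployment like the one described would use a stronger code such as Reed-Solomon to survive multiple losses):

```python
def encode(parts):
    # Parity block: XOR of all equal-length data parts (single-erasure code).
    parity = bytearray(len(parts[0]))
    for part in parts:
        for i, b in enumerate(part):
            parity[i] ^= b
    return bytes(parity)

def reconstruct(parts, parity: bytes):
    # Recover the one missing part (marked None) as the XOR of the parity
    # block with every surviving part.
    missing = parts.index(None)
    rec = bytearray(parity)
    for j, part in enumerate(parts):
        if j != missing:
            for i, b in enumerate(part):
                rec[i] ^= b
    out = list(parts)
    out[missing] = bytes(rec)
    return out
```

Storing the three fragments and the parity block on four different servers means an attacker destroying any single server costs the subscriber nothing but a reconstruction pass.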
Heckman, M. R., Schell, R. R., Reed, E. E..  2015.  A Multi-Level Secure File Sharing Server and Its Application to a Multi-Level Secure Cloud. MILCOM 2015 - 2015 IEEE Military Communications Conference. :1224–1229.
Contemporary cloud environments are built on low-assurance components, so they cannot provide a high level of assurance about the isolation and protection of information. A "multi-level" secure cloud environment thus typically consists of multiple, isolated clouds, each of which handles data of only one security level. Not only are such environments duplicative and costly, data "sharing" must be implemented by massive, wasteful copying of data from low-level domains to high-level domains. The requirements for certifiable, scalable, multi-level cloud security are threefold: 1) to have trusted, high-assurance components available for use in creating a multi-level secure cloud environment; 2) to design a cloud architecture that efficiently uses the high-assurance components in a scalable way; and 3) to compose the secure components within the scalable architecture while still verifiably maintaining the system security properties. This paper introduces a trusted, high-assurance file server and architecture that satisfies all three requirements. The file server is built on mature technology that was previously certified and deployed across domains from TS/SCI to Unclassified and that supports high-performance, low-to-high and high-to-low file sharing with verifiable security.
2018-03-05
Ali, Muhammad Salek, Dolui, Koustabh, Antonelli, Fabio.  2017.  IoT Data Privacy via Blockchains and IPFS. Proceedings of the Seventh International Conference on the Internet of Things. :14:1–14:7.

Blockchain, the underlying technology of cryptocurrency networks like Bitcoin, can prove to be essential towards realizing the vision of a decentralized, secure, and open Internet of Things (IoT) revolution. There is a growing interest in many research groups towards leveraging blockchains to provide IoT data privacy without the need for a centralized data access model. This paper aims to propose a decentralized access model for IoT data, using a network architecture that we call a modular consortium architecture for IoT and blockchains. The proposed architecture facilitates IoT communications on top of a software stack of blockchains and peer-to-peer data storage mechanisms. The architecture is aimed to have privacy built into it, and to be adaptable for various IoT use cases. To understand the feasibility and deployment considerations for implementing the proposed architecture, we conduct performance analysis of existing blockchain development platforms, Ethereum and Monax.

Toulouse, Michel, Nguyen, Phuong Khanh.  2017.  Protecting Consensus Seeking NIDS Modules Against Multiple Attackers. Proceedings of the Eighth International Symposium on Information and Communication Technology. :226–233.

This work concerns distributed consensus algorithms and their application to a network intrusion detection system (NIDS) [21]. We consider the problem of defending the system against multiple data falsification attacks (Byzantine attacks), a vulnerability of distributed peer-to-peer consensus algorithms that has not been widely addressed in practice. We consider both naive (independent) and colluding attackers. We test three defense strategy implementations: two classified as outlier detection methods and one reputation-based method. We have narrowed our attention to outlier and reputation-based methods because they are computationally lightweight. We have left out control-theoretic methods, which are likely the most effective, but whose computational cost increases rapidly with the number of attackers. We compare the efficiency of these three implementations in terms of computational cost, detection performance, convergence behavior and possible impact on the intrusion detection accuracy of the NIDS. Tests are performed on simulations of distributed denial of service attacks using the NSL-KDD data set.

Ameri, Aidin, Johnson, Daryl.  2017.  Covert Channel over Network Time Protocol. Proceedings of the 2017 International Conference on Cryptography, Security and Privacy. :62–65.

In this paper, we scrutinize a way through which covert messages are sent and received using the Network Time Protocol (NTP). The channel is not easily detected, since NTP is present in most environments to synchronize clocks between clients and servers using at least one time server. We also present a proof of concept and investigate the throughput and robustness of this covert channel. The channel uses the 32 bits of the fraction-of-second field in the timestamp to send the covert message. It also uses the "Peer Clock Precision" field to track the messages between sender and receiver.
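The encoding the authors describe, hiding payload bytes in the 32-bit fraction-of-second field of an NTP timestamp, can be sketched as below. This only builds and parses 64-bit NTP-style timestamps; a working channel would have to carry them in live NTP packets, and the "Peer Clock Precision" tracking is omitted. The zero-padding used here is deliberately naive.

```python
import struct
import time

NTP_ERA_OFFSET = 2208988800  # seconds between the 1900 NTP epoch and Unix epoch

def embed(message: bytes):
    # Pack 4 covert bytes per timestamp into the 32-bit fraction field.
    padded = message + b"\x00" * (-len(message) % 4)
    stamps = []
    for i in range(0, len(padded), 4):
        seconds = int(time.time()) + NTP_ERA_OFFSET
        fraction = struct.unpack(">I", padded[i:i + 4])[0]
        stamps.append(struct.pack(">II", seconds, fraction))
    return stamps

def extract(stamps) -> bytes:
    # Recover the fraction fields in order and strip the padding.
    out = b"".join(struct.pack(">I", struct.unpack(">II", s)[1]) for s in stamps)
    return out.rstrip(b"\x00")
```

Because the fraction field legitimately looks like sub-second noise, each 64-bit timestamp smuggles 4 payload bytes without changing the packet's apparent semantics.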

Jin, Hongyu, Papadimitratos, Panos.  2017.  Resilient Privacy Protection for Location-Based Services Through Decentralization. Proceedings of the 10th ACM Conference on Security and Privacy in Wireless and Mobile Networks. :253–258.

Location-based Services (LBSs) provide valuable features but can also reveal sensitive user information. Decentralized privacy protection removes the need for a so-called anonymizer, but relying on peers is a double-edged sword: adversaries could mislead with fictitious responses or even collude to compromise their peers' privacy. We address here exactly this problem: we strengthen the decentralized LBS privacy approach by securing peer-to-peer (P2P) interactions. Our scheme can provide precise, timely P2P responses by passing proactively cached Point of Interest (POI) information. It reduces the exposure both to honest-but-curious LBS servers and to peer nodes. Our scheme allows P2P responses to be validated, with only a very low fraction of queries affected even if a significant fraction of nodes are compromised. The exposure can be kept very low even if the LBS server or a large set of curious nodes collude with curious identity management entities.

Harrington, Joshua, Lacroix, Jesse, El-Khatib, Khalil, Lobo, Felipe Leite, Oliveira, Horácio A.B.F..  2017.  Proactive Certificate Distribution for PKI in VANET. Proceedings of the 13th ACM Symposium on QoS and Security for Wireless and Mobile Networks. :9–13.

Vehicular Ad-Hoc Networks (VANET) are formed by several vehicles communicating with each other in order to create a network capable of communication and data exchange. One of the most promising methods for security and trust amongst vehicular networks is the usage of Public Key Infrastructure (PKI). However, current implementations of PKI as a security solution for determining the validity and authenticity of vehicles in a VANET are not efficient, due to the large delay and computational overhead they introduce. In this paper, we investigate the potential of PKI when predictively and preemptively passing certificates along to roadside units (RSUs) in an effort to lower delay and computational overhead in a dynamic environment. We look to accomplish this by utilizing fog computing and propose a new protocol to pass certificates along the projected path.

Dmitrienko, Alexandra, Noack, David, Yung, Moti.  2017.  Secure Wallet-Assisted Offline Bitcoin Payments with Double-Spender Revocation. Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. :520–531.

Bitcoin seems to be the most successful cryptocurrency so far, given its growing real-life deployment and popularity. While Bitcoin requires clients to be online to perform transactions, and a certain amount of time to verify them, there are many real-life scenarios that demand offline and immediate payments (e.g., mobile ticketing, vending machines, etc.). However, offline payments in Bitcoin raise non-trivial security challenges, as the payee has no means to verify the received coins without having access to the Bitcoin network. Moreover, even online immediate payments are shown to be vulnerable to double-spending attacks. In this paper, we propose the first solution for Bitcoin payments which enables secure payments with Bitcoin in offline settings and in scenarios where payments need to be immediately accepted. Our approach relies on an offline wallet and deploys several novel security mechanisms to prevent double-spending and to verify the coin validity in an offline setting. These mechanisms achieve probabilistic security to guarantee that the attack probability is lower than the desired threshold. We provide a security and risk analysis as well as model security parameters for various adversaries. We further eliminate remaining risks by detecting misbehaving wallets and revoking them. We implemented our solution for mobile Android clients and instantiated an offline wallet using a microSD security card. Our implementation demonstrates that smooth integration over a very prevalent platform (Android) is possible, and that offline and online payments can practically co-exist. We also discuss an alternative deployment approach for the offline wallet which does not leverage secure hardware, but instead relies on a deposit system managed by the Bitcoin network.

van der Heijden, Rens W., Engelmann, Felix, Mödinger, David, Schönig, Franziska, Kargl, Frank.  2017.  Blackchain: Scalability for Resource-Constrained Accountable Vehicle-to-x Communication. Proceedings of the 1st Workshop on Scalable and Resilient Infrastructures for Distributed Ledgers. :4:1–4:5.

In this paper, we propose a new Blockchain-based message and revocation accountability system called Blackchain. Combining a distributed ledger with existing mechanisms for security in V2X communication systems, we design a distributed event data recorder (EDR) that satisfies traditional accountability requirements by providing a compressed global state. Unlike previous approaches, our distributed ledger solution provides an accountable revocation mechanism without requiring trust in a single misbehavior authority, instead allowing a collaborative and transparent decision making process through Blackchain. This makes Blackchain an attractive alternative to existing solutions for revocation in a Security Credential Management System (SCMS), which suffer from the traditional disadvantages of PKIs, notably including centralized trust. Our proposal becomes scalable through the use of hierarchical consensus: individual vehicles dynamically create clusters, which then provide their consensus decisions as input for road-side units (RSUs), which in turn publish their results to misbehavior authorities. This authority, which is traditionally a single entity in the SCMS, responsible for the integrity of the entire V2X network, is now a set of authorities that transparently perform a revocation, whose result is then published in a global Blackchain state. This state can be used to prevent the issuance of certificates to previously malicious users, and also prevents the authority from misbehaving through the transparency implied by a global system state.

Rüsch, Signe, Schürmann, Dominik, Kapitza, Rüdiger, Wolf, Lars.  2017.  Forward Secure Delay-Tolerant Networking. Proceedings of the 12th Workshop on Challenged Networks. :7–12.

Delay-Tolerant Networks exhibit highly asynchronous connections, often routed over many mobile hops before reaching their intended destination. The Bundle Security Protocol has been standardized, providing properties such as authenticity, integrity, and confidentiality of bundles using traditional public-key cryptography. Other protocols based on Identity-Based Cryptography have been proposed to reduce the key distribution overhead. However, in both schemes, secret keys are usually valid for several months. Thus, a secret key extracted from a compromised node allows for decryption of all past communications since its creation. We solve this problem and propose the first forward secure protocol for Delay-Tolerant Networking. For this, we apply the Puncturable Encryption construction designed by Green and Miers, integrate it into the Bundle Security Protocol, and adapt its parameters for different highly asynchronous scenarios. Finally, we provide performance measurements and discuss their impact.

Das, A., Shen, M. Y., Wang, J..  2017.  Modeling User Communities for Identifying Security Risks in an Organization. 2017 IEEE International Conference on Big Data (Big Data). :4481–4486.

In this paper, we address the problem of peer-grouping employees in an organization for identifying security risks. Our motivation for studying peer grouping is its importance for a clear understanding of user and entity behavior analytics (UEBA), the primary tool for identifying insider threats through detecting anomalies in network traffic. We show that, using the Louvain method of community detection, it is possible to automate peer group creation with feature-based weight assignments. Depending on the number of employees and their features, we show that it is also possible to give each group a meaningful description. We present three new algorithms: one that allows the addition of new employees to already generated peer groups, another that allows for incorporating user feedback, and lastly one that provides the user with recommended nodes to be reassigned. We use Niara's data to validate our claims. The novelty of our method is its robustness, simplicity, scalability, and ease of deployment in a production environment.
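Louvain community detection itself needs a graph library, but the feature-based grouping the abstract describes can be approximated with a stand-in sketch: weight employee pairs by the Jaccard similarity of their feature sets and take connected components above a threshold as peer groups. All names, features, and the threshold below are hypothetical.

```python
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

def peer_groups(employees: dict, threshold: float = 0.5):
    # Build an edge between employees whose feature overlap clears the
    # threshold, then return connected components as peer groups.
    names = list(employees)
    adj = {n: set() for n in names}
    for i, u in enumerate(names):
        for v in names[i + 1:]:
            if jaccard(employees[u], employees[v]) >= threshold:
                adj[u].add(v)
                adj[v].add(u)
    groups, seen = [], set()
    for n in names:
        if n not in seen:
            comp, stack = set(), [n]
            while stack:
                x = stack.pop()
                if x not in comp:
                    comp.add(x)
                    stack.extend(adj[x] - comp)
            seen |= comp
            groups.append(comp)
    return groups
```

A UEBA pipeline would then flag an employee whose network behavior diverges from the baseline of their own group rather than from the whole organization.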

Xu, Y., Wang, H. M., Yang, Q., Huang, K. W., Zheng, T. X..  2017.  Cooperative Transmission for Physical Layer Security by Exploring Social Awareness. 2017 IEEE Globecom Workshops (GC Wkshps). :1–6.

Social awareness and social ties are becoming increasingly prominent with emerging mobile and handheld devices. The social trust degree, describing the strength of social ties, has drawn a lot of research interest in many fields, including secure cooperative communications. The trust degree reflects users' willingness to cooperate, which impacts the selection of cooperative users in practical networks. In this paper, we propose a cooperative relay and jamming selection scheme to secure communication based on the social trust degree under a stochastic geometry framework. We analyze the secrecy outage probability (SOP) of the system. To achieve this, we propose a double Gamma ratio (DGR) approach through Gamma approximation, from which the SOP is tractably obtained in closed form. The simulation results verify our theoretical findings and validate that the social trust degree has a dramatic influence on the network's secrecy performance.

Messai, M. L., Seba, H..  2017.  A Self-Healing Key Pre-Distribution Scheme for Multi-Phase Wireless Sensor Networks. 2017 IEEE Trustcom/BigDataSE/ICESS. :144–151.

Node compromise is still the hardest attack to counter in Wireless Sensor Networks (WSNs). It affects key distribution, which is a building block for securing communications in any network. The weak point of several proposed key distribution schemes for WSNs is their lack of resilience to node compromise attacks. When a node is compromised, all its key material is revealed, leading to insecure communication links throughout the network. This drawback is more harmful for long-lived WSNs that are deployed in multiple phases, i.e., multi-phase WSNs (MPWSNs). In the last few years, many key management schemes were proposed to ensure security in WSNs. However, these schemes are conceived for single-phase WSNs, and their security degrades over time as an attacker captures nodes. To deal with this drawback and enhance resilience to node compromise over the whole lifetime of the network, we propose in this paper a new key pre-distribution scheme adapted to MPWSNs. Our scheme takes advantage of the resilience improvement of the Q-composite key scheme and adds self-healing, which is the ability of the scheme to decrease the effect of node compromise over time. Self-healing is achieved by pre-distributing fresh keys to each generation. The evaluation of our scheme proves that it has good key connectivity and high resilience to node compromise attacks compared to existing key management schemes.
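The Q-composite idea this scheme builds on can be sketched in a few lines: every node draws a random key ring from a shared pool, and two nodes form a link key, hashing all their shared keys together, only when they share at least q of them. The parameters below are toy values, not the paper's.

```python
import hashlib
import random

def assign_rings(pool_size: int, ring_size: int, nodes, seed: int = 1):
    # Each node receives a random subset (its key ring) of the global pool.
    rng = random.Random(seed)
    return {n: set(rng.sample(range(pool_size), ring_size)) for n in nodes}

def link_key(rings: dict, a, b, q: int):
    # A secure link exists only if the two rings share at least q pool keys;
    # the pairwise key is a hash over ALL shared key identifiers, so an
    # attacker must recover every one of them to break the link.
    shared = sorted(rings[a] & rings[b])
    if len(shared) < q:
        return None
    return hashlib.sha256(",".join(map(str, shared)).encode()).hexdigest()
```

The self-healing refinement in the paper then refreshes the pool per deployment generation, so keys captured in one phase stop compromising links formed in later phases.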

Mfula, H., Nurminen, J. K..  2017.  Adaptive Root Cause Analysis for Self-Healing in 5G Networks. 2017 International Conference on High Performance Computing Simulation (HPCS). :136–143.

Root cause analysis (RCA) is a common and recurring task performed by operators of cellular networks. It is done mainly to keep customers satisfied with the quality of offered services and to maximize return on investment (ROI) by minimizing and where possible eliminating the root causes of faults in cellular networks. Currently, the actual detection and diagnosis of faults or potential faults is still a manual and slow process often carried out by network experts who manually analyze and correlate various pieces of network data such as, alarms, call traces, configuration management (CM) and key performance indicator (KPI) data in order to come up with the most probable root cause of a given network fault. In this paper, we propose an automated fault detection and diagnosis solution called adaptive root cause analysis (ARCA). The solution uses measurements and other network data together with Bayesian network theory to perform automated evidence based RCA. Compared to the current common practice, our solution is faster due to automation of the entire RCA process. The solution is also cheaper because it needs fewer or no personnel in order to operate and it improves efficiency through domain knowledge reuse during adaptive learning. As it uses a probabilistic Bayesian classifier, it can work with incomplete data and it can handle large datasets with complex probability combinations. Experimental results from stratified synthesized data affirmatively validate the feasibility of using such a solution as a key part of self-healing (SH) especially in emerging self-organizing network (SON) based solutions in LTE Advanced (LTE-A) and 5G.
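The Bayesian-classifier core of such an RCA engine can be sketched with a tiny naive Bayes model mapping observed symptoms (alarms, KPI flags) to the most probable root cause; the fault and symptom names below are invented for illustration and are not from ARCA.

```python
from collections import defaultdict

def train(cases):
    # cases: list of (symptom_set, root_cause) observations from past faults.
    prior = defaultdict(int)
    cond = defaultdict(lambda: defaultdict(int))
    for symptoms, cause in cases:
        prior[cause] += 1
        for s in symptoms:
            cond[cause][s] += 1
    return prior, cond

def diagnose(model, symptoms, alpha: float = 1.0):
    # Pick the cause maximizing P(cause) * prod P(symptom | cause).
    prior, cond = model
    total = sum(prior.values())
    best, best_p = None, 0.0
    for cause, n in prior.items():
        p = n / total
        for s in symptoms:
            # Laplace smoothing keeps unseen symptoms from zeroing the score,
            # which is what lets the classifier cope with incomplete data.
            p *= (cond[cause][s] + alpha) / (n + 2 * alpha)
        if p > best_p:
            best, best_p = cause, p
    return best
```

Adaptive learning in this setting amounts to appending each confirmed diagnosis back into the case list and retraining, which is cheap since the model is just counts.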

Fan, Z., Wu, H., Xu, J., Tang, Y..  2017.  An Optimization Algorithm for Spatial Information Network Self-Healing Based on Software Defined Network. 2017 12th International Conference on Computer Science and Education (ICCSE). :369–374.

The spatial information network is an important part of the integrated space-terrestrial information network; the services it bears are becoming increasingly complex, and real-time requirements are rising. The structural vulnerability of the spatial information network and the dynamics of the network pose a serious challenge to ensuring reliable and stable data transmission. Software Defined Networking (SDN), as a new network architecture, can not only adapt quickly to new services but also make network reconfiguration more intelligent. In this paper, SDN is used to design the spatial information network architecture, and an optimization algorithm for network self-healing based on SDN is proposed to handle the failure of a switching node. While guaranteeing Quality of Service (QoS) requirements, routes are updated with the fewest link changes to realize fast network reconfiguration and recovery. The simulation results show that the algorithm proposed in this paper can effectively reduce the delay caused by fault recovery.
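The recovery step, recomputing a delay-aware path that bypasses a failed switching node, can be sketched with plain Dijkstra over link delays. The topology and delays below are made up, and the paper's algorithm additionally minimizes the number of link changes, which this sketch does not attempt.

```python
import heapq

def shortest_path(links: dict, src, dst, failed=frozenset()):
    # Dijkstra over per-link delays, skipping any failed switch nodes.
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in links.get(u, {}).items():
            if v in failed:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None
    path, x = [dst], dst
    while x != src:
        x = prev[x]
        path.append(x)
    return path[::-1]
```

An SDN controller holding the global topology can run this on a failure event and push only the flow-table updates along the replacement path.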

Chang, C. H., Hu, C. H., Tsai, C. H., Hsieh, C. Y..  2017.  Three-Layer Ring Optical Fiber Sensing Network with Self-Healing Functionality. 2017 Conference on Lasers and Electro-Optics Pacific Rim (CLEO-PR). :1–2.

A novel optical fiber sensing network is proposed to eliminate the effect of multiple fiber failures. Simulation results show that if the number of breakpoints in each subnet is less than four, the optical routing paths can be reset to avoid those breakpoints by changing the status of optical switches in the remote nodes.

Shen, Y., Chen, W., Wang, J..  2017.  Distributed Self-Healing for Mobile Robot Networks with Multiple Robot Failures. 2017 Chinese Automation Congress (CAC). :5939–5944.

In multi-robot applications, the maintained and desired network may be destroyed by failed robots. Existing self-healing algorithms only handle the case of a single robot failure; multiple robot failures, however, introduce several challenges, such as a disconnected network and conflicts among repair paths. This paper presents a distributed self-healing algorithm based on 2-hop neighbor information to resolve the problems caused by multiple robot failures. Simulations and experiments show that the proposed algorithm manages to restore connectivity of the mobile robot network and improves the global synchronization of the network, which validates its effectiveness in resolving multiple robot failures.
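A toy version of 2-hop-based healing: each surviving neighbor of a failed robot already knows that robot's other neighbors (its 2-hop information), so the survivors can reconnect among themselves without a coordinator. This sketch handles non-adjacent failures; resolving conflicting repair paths for adjacent failures needs the paper's full algorithm.

```python
def heal(adjacency: dict, failed: set) -> dict:
    # Drop failed robots, then let each failed robot's surviving neighbors
    # (known to one another via 2-hop neighbor lists) link up in a chain.
    adj = {n: set(v) - failed for n, v in adjacency.items() if n not in failed}
    for f in failed:
        survivors = sorted(adjacency[f] - failed)
        for a, b in zip(survivors, survivors[1:]):
            adj[a].add(b)
            adj[b].add(a)
    return adj

def connected(adj: dict) -> bool:
    # Simple DFS connectivity check over the healed topology.
    if not adj:
        return True
    seen, stack = set(), [next(iter(adj))]
    while stack:
        x = stack.pop()
        if x not in seen:
            seen.add(x)
            stack.extend(adj[x] - seen)
    return seen == set(adj)
```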

Khalil, K., Eldash, O., Bayoumi, M..  2017.  Self-Healing Router Architecture for Reliable Network-on-Chips. 2017 24th IEEE International Conference on Electronics, Circuits and Systems (ICECS). :330–333.

NoCs are a well-established research topic, and several implementations have been proposed for self-healing. Self-healing refers to the ability of a system to detect faults or failures and fix them through healing or repair. The main problems in current self-healing approaches are area overhead and poor scalability for complex structures, since they are based on redundancy and spare blocks. Also, a faulty router can isolate its PE from the other router nodes, which reduces the overall performance of the system. This paper presents a self-healing router architecture that prevents a faulty router from denying its PE's function and isolating the PE from the other nodes. In the proposed design, neighboring routers receive a signal from a faulty router telling them to send to it only those data packets whose destination is the faulty router itself. A control unit turns on switches to connect the four input ports to the local port successively, so that incoming packets are delivered to the PE. The reliability of the proposed technique is studied and compared to a conventional system with different failure rates. The approach is capable of healing 50% of the router, with an area overhead of 14%, which is much lower than other approaches using redundancy.

Mahfood Haddad, Yara, Ali, Hesham H..  2017.  An Evolutionary Graph-Based Approach for Managing Self-Organized IoT Networks. Proceedings of the 15th ACM International Symposium on Mobility Management and Wireless Access. :113–119.

Wireless sensor networks (WSNs) are one of the most rapidly developing information technologies and promise to have a variety of applications in Next Generation Networks (NGNs), including the IoT. In this paper, the focus is on developing new methods for efficiently managing large-scale networks composed of homogeneous wireless sensors/devices in urban environments such as homes, hospitals, stores and industrial compounds; heterogeneous networks are also considered in comparison with homogeneous ones. The efficiency of these networks depends on several optimization parameters, such as redundancy and the percentages of coverage and energy saved. We tested the algorithm using different densities of sensors in the network and different values of tuning parameters for the optimization parameters. The obtained results show that our proposed algorithm performs better than the greedy algorithm. Moreover, networks with more sensors maintain more redundancy and a better percentage of coverage, though they use more energy. The same method will be used for heterogeneous wireless sensor networks, where devices have different characteristics and the network behaves more efficiently.