Bibliography
Over the past decade, distributed CSMA, which forms the basis for WiFi, has been deployed ubiquitously to provide seamless and high-speed mobile internet access. However, distributed CSMA might not be ideal for future IoT/M2M applications, where the density of connected devices/sensors/controllers is expected to be orders of magnitude higher than in present wireless networks. In such high-density networks, the overhead associated with completely distributed MAC protocols will become a bottleneck. Moreover, IoT communications are likely to have strict QoS requirements, for which the "best-effort" scheduling of present WiFi networks may be unsuitable. This calls for a clean-slate redesign of the wireless MAC taking into account the requirements of future IoT/M2M networks. In this paper, we propose a reservation-based (for minimal overhead) wireless MAC designed specifically with IoT/M2M applications in mind.
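As a rough sketch of the reservation idea only (the coordinator-grants-fixed-slots scheme below is an assumption for illustration, not the paper's actual protocol), a reservation-based MAC replaces per-packet contention with a one-time slot grant:

```python
# Minimal sketch of a reservation-based MAC frame (illustrative only; not
# the protocol proposed in the paper). A coordinator grants each device a
# fixed slot, so subsequent data transmissions are contention-free.

class ReservationMAC:
    def __init__(self, slots_per_frame):
        self.slots_per_frame = slots_per_frame
        self.schedule = {}  # slot index -> device id

    def reserve(self, device_id):
        """Grant the next free slot, or None if the frame is full."""
        for slot in range(self.slots_per_frame):
            if slot not in self.schedule:
                self.schedule[slot] = device_id
                return slot
        return None

    def owner(self, slot):
        return self.schedule.get(slot)

mac = ReservationMAC(slots_per_frame=4)
print(mac.reserve("sensor-1"))  # -> 0
print(mac.reserve("sensor-2"))  # -> 1
print(mac.owner(1))             # -> 'sensor-2'
```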
The current trend among IoT users is toward consuming services and data externally, since voluminous processing demands resource-rich machines. Instead of relying on a cloud with poor connectivity or limited bandwidth, the IoT user prefers cloudlet-based fog computing. However, the choice of cloudlet depends entirely on its trust and reliability. In practice, even though a cloudlet possesses the required trusted platform module (TPM), we argue that the presence of a TPM is not enough to make the cloudlet trustworthy, as the TPM supports only the primitive security of the bootstrap. Besides uncertainty in security, other uncertain network conditions (e.g., bandwidth, latency, and the expected time to complete a service request for cloud-based services) may also prevail for the cloudlets. Therefore, in order to evaluate the trust value of multiple cloudlets under uncertainty, this paper proposes an empirical process for trust evaluation, followed by a measure of trust-based reputation of cloudlets through computational intelligence techniques such as fuzzy logic and ant colony optimization (ACO). In the process, fuzzy-logic-based inference and membership evaluation of trust are presented. In addition, ACO and its pheromone communication across different colonies are modeled over multiple cloudlets. Finally, a measure of affinity, or popular trust and reputation, of the cloudlets is also proposed. The computationally intelligent approaches are investigated in terms of performance in the context of applications spanning multiple cloudlets. The contribution is thus directed towards building a trusted cloudlet-based fog platform.
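To make the two ingredients concrete, the sketch below shows a standard triangular fuzzy membership function and a textbook ACO pheromone update; the breakpoints, rates, and cloudlet names are invented for illustration and do not reproduce the paper's model:

```python
# Illustrative sketch (not the paper's exact model): a triangular fuzzy
# membership for "trusted", plus a standard ACO pheromone update in which
# colonies reinforce cloudlets that served requests well.

def triangular(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Degree to which a cloudlet with trust score 0.7 is "trusted"
# (the breakpoints 0.4/0.7/1.0 are arbitrary assumptions).
print(triangular(0.7, 0.4, 0.7, 1.0))  # -> 1.0

def update_pheromone(pheromone, served_well, rho=0.1, deposit=1.0):
    """Evaporate all trails, then deposit on cloudlets that served well."""
    for cloudlet in pheromone:
        pheromone[cloudlet] *= (1.0 - rho)      # evaporation
        if cloudlet in served_well:
            pheromone[cloudlet] += deposit      # reinforcement

pheromone = {"cloudlet-A": 1.0, "cloudlet-B": 1.0}
update_pheromone(pheromone, served_well={"cloudlet-A"})
print(pheromone)  # cloudlet-A now carries more pheromone (reputation)
```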
A Wireless Sensor Network (WSN) is a heterogeneous network consisting of scattered sensor nodes that work together for data collection, processing, and transmission [1], [2]. Because WSNs are widely used in critical applications, their security must also be considered. Many types of attacks can be carried out to disrupt WSN networks, including jamming, tampering, Sybil attacks, wormhole attacks, hello flood attacks, and black hole attacks [3]. Black hole attacks are among the most dangerous attacks on WSNs. The Enhanced Check Agent method is designed to detect black hole attacks by sending a checking agent to record nodes suspected of being black holes. The implementation is tested on a wireless sensor network using ZigBee technology. The network uses a mesh topology in which each node can maintain more than one routing table [4]. The Enhanced Check Agent method can increase throughput to 100 percent.
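The core of the check-agent idea can be sketched as a drop-rate audit along a route; the node names, counters, and the 90% threshold below are assumptions for illustration, not the paper's parameters:

```python
# Hedged sketch of the check-agent idea: an agent walks a route and
# compares packets forwarded against packets received; a node that
# silently drops nearly everything is flagged as a suspected black hole.

def check_route(route, stats, drop_threshold=0.9):
    """Return nodes on `route` whose drop rate exceeds the threshold."""
    suspects = []
    for node in route:
        received, forwarded = stats[node]
        if received > 0 and (received - forwarded) / received >= drop_threshold:
            suspects.append(node)
    return suspects

# (received, forwarded) counters observed by the checking agent
stats = {"n1": (100, 98), "n2": (100, 2), "n3": (80, 79)}
print(check_route(["n1", "n2", "n3"], stats))  # -> ['n2']
```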
Adversarial models are well established for cryptographic protocols, but distributed real-time protocols have requirements that these abstractions are not intended to cover. The IEEE/IEC 61850 standard for communication networks and systems for power utility automation in particular not only requires distributed processing, but in the case of the generic object oriented substation events and sampled value (GOOSE/SV) protocols also hard real-time characteristics. This motivates the desire to include both quality of service (QoS) and explicit network topology in an adversary model expressed in a π-calculus process-algebraic formalism that builds on earlier work. This allows reasoning over process states, placement of adversarial entities, and communication behaviour. We demonstrate the use of our model for the simple case of a replay attack against the publish/subscribe GOOSE/SV subprotocol, showing bounds for the non-detectability of such an attack.
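For intuition about what a replay attack against GOOSE looks like at the subscriber (separate from the paper's π-calculus treatment): GOOSE frames carry a state number (stNum) and a sequence number (sqNum), and a subscriber can reject any frame whose counter pair is not strictly newer than the last accepted one. The check below is a minimal sketch of that idea only:

```python
# Simple sketch of replay rejection for a GOOSE-like publish/subscribe
# stream, based on the (stNum, sqNum) counters real GOOSE frames carry.
# This illustrates the attack surface, not the paper's formal model.

class GooseSubscriber:
    def __init__(self):
        self.last = (-1, -1)  # (stNum, sqNum) of last accepted message

    def accept(self, st_num, sq_num):
        """Accept only messages with strictly increasing counters."""
        if (st_num, sq_num) <= self.last:
            return False  # stale or replayed frame
        self.last = (st_num, sq_num)
        return True

sub = GooseSubscriber()
print(sub.accept(5, 0))  # True: fresh message
print(sub.accept(5, 1))  # True: next retransmission
print(sub.accept(5, 0))  # False: replayed frame is rejected
```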
A Mobile Ad-hoc Network (MANET) is a self-configuring network that is created dynamically and can take on different configurations. The primary task in this type of network is to develop a routing mechanism that provides high QoS despite the ad-hoc nature of the network. The Ad-hoc On-Demand Distance Vector (AODV) protocol used here is the on-demand routing mechanism over which trust is computed. The proposed approach uses an Artificial Neural Network (ANN) and a Support Vector Machine (SVM) to discover black hole attacks in the network. The results compare the black hole AODV against our proposed security mechanism, Secure AODV (SAODV). The approach was tested with different numbers of nodes; for 100 nodes, it provides an improvement in energy consumption of 54.72%, a throughput of 88.68 kbps, a packet delivery ratio of 92.91%, and an end-to-end delay of about 37.27 ms.
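The classification step might look like the sketch below; the feature set (forwarding ratio, route-reply rate) and the toy data are assumptions, not the paper's, but they capture the typical black-hole signature of advertising routes aggressively while forwarding almost nothing:

```python
# Sketch of SVM-based black hole classification over per-node behaviour
# features (invented for illustration; not the paper's feature set).
from sklearn.svm import SVC

# [forwarding_ratio, route_reply_rate] per observed node
X = [[0.95, 0.10], [0.90, 0.15], [0.02, 0.95], [0.05, 0.90]]
y = [0, 0, 1, 1]  # 0 = benign, 1 = black hole

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[0.03, 0.92]]))  # -> [1], flagged as black hole
```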
Resource scheduling in a computing system addresses the problem of packing tasks with multi-dimensional resource requirements and non-functional constraints. The heterogeneity of workload and server characteristics exhibited in Cloud-scale or Internet-scale systems adds further complexity and new challenges to the problem. Compared with existing solutions based on ad-hoc heuristics, Machine Learning (ML) has the potential to further improve the efficiency of resource management in large-scale systems. In this paper we describe and discuss how ML can be used to automatically understand both workloads and environments, and to help cope with scheduling-related challenges such as consolidating co-located workloads, handling resource requests, guaranteeing applications' QoS, and mitigating tail-latency stragglers. We introduce a generalized ML-based solution to large-scale resource scheduling and demonstrate its effectiveness through a case study on performance-centric node classification and straggler mitigation. We believe that an ML-based method will help to achieve architectural optimization and efficiency improvement.
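As a toy illustration of performance-centric node classification feeding straggler mitigation (the thresholds, runtimes, and node names below are assumptions, not the paper's method), nodes with consistently high task runtimes are labeled weak, and tasks lingering on weak nodes are speculatively re-executed:

```python
# Toy sketch: classify nodes by historical task runtime, then use the
# "weak" label to trigger speculative re-execution of slow tasks.
from statistics import mean

history = {  # node -> recent task runtimes (seconds)
    "node-1": [10, 11, 9],
    "node-2": [30, 28, 35],   # consistently slow
    "node-3": [12, 10, 11],
}

cluster_avg = mean(t for ts in history.values() for t in ts)
weak_nodes = {n for n, ts in history.items() if mean(ts) > 1.5 * cluster_avg}
print(weak_nodes)  # -> {'node-2'}

def should_speculate(node, elapsed, deadline=20):
    """Re-launch a task if it runs on a weak node past the deadline."""
    return node in weak_nodes and elapsed > deadline

print(should_speculate("node-2", elapsed=25))  # -> True
```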
An Intrusion Detection System (IDS) is an application that monitors network or system activity and can detect dangerous operations. Implementing an IDS on a Software Defined Network (SDN) architecture has drawbacks: the IDS may degrade network Quality of Service (QoS), so the network can no longer serve existing traffic adequately. Throughput, delay, and packet loss are important parameters of QoS measurement. Snort IDS and Bro IDS are common tools for deploying an IDS on a network. They differ, among other things, in their detection methods: Snort uses signature-based detection, while Bro uses anomaly-based detection. This difference affects how each handles the network traffic passing through it. In this research, we compare both tools using parameters such as throughput, delay, packet loss, CPU usage, and memory usage. The tests show that Bro outperforms Snort on throughput, delay, and packet loss, but requires more CPU and memory than Snort.
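For reference, the three QoS parameters could be computed from a packet trace roughly as sketched below; the record layout and sample numbers are assumptions for illustration, not the paper's measurement setup:

```python
# Minimal sketch of computing throughput, average delay, and packet loss
# from matched send/receive logs (field layout is an assumption).

def qos_metrics(sent, received, duration_s):
    """sent/received: lists of (packet_id, timestamp_s, size_bytes)."""
    recv_ids = {pid for pid, _, _ in received}
    delays = [rt - st for (pid, st, _) in sent
              for (rid, rt, _) in received if rid == pid]
    return {
        "throughput_kbps": sum(sz for _, _, sz in received) * 8 / duration_s / 1000,
        "avg_delay_ms": 1000 * sum(delays) / len(delays) if delays else None,
        "packet_loss_pct": 100 * (1 - len(recv_ids) / len(sent)),
    }

sent = [(1, 0.000, 1500), (2, 0.010, 1500), (3, 0.020, 1500)]
received = [(1, 0.004, 1500), (2, 0.015, 1500)]  # packet 3 was lost
print(qos_metrics(sent, received, duration_s=0.03))
# -> ~800 kbps throughput, 4.5 ms average delay, 33.3% packet loss
```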