Bibliography
The “Internet of Things” (IoT) is the internetworking of physical devices, or 'things', together with the algorithms, equipment and techniques that allow them to communicate with other devices, equipment and software over the network. With the advancement of data communication, ever more devices are connected via the Internet, and resource-constrained sensor nodes are used to collect data from homes, offices, hospitals, industries and data centers. Various vulnerabilities, however, can disrupt the functioning of these sensor nodes. The Routing Protocol for Low-Power and Lossy Networks (RPL) is a standardized routing protocol designed for 6LoWPAN-based IoT networks; it is a proactive protocol that builds a destination-oriented topology (a DODAG) over which routing is performed. A sinkhole is a networking attack that undermines the RPL topology: the attacker node advertises itself as an attractive next hop and thereby redirects traffic in the IoT network through itself. This paper surveys sinkhole attacks in the IoT and proposes different methods for preventing and detecting these attacks in low-power IoT networks.
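As a rough illustration of the attack mechanism this survey covers (not code from the paper itself): a sinkhole node attracts traffic by advertising a better RPL rank than its position in the DODAG allows, so a simple rank-consistency check over observed DIO messages can flag likely candidates. The sketch below uses hypothetical node identifiers and the RPL default MinHopRankIncrease of 256.

    # Minimal sketch (not from the surveyed paper): a sinkhole node in RPL
    # attracts traffic by advertising a rank far better (lower) than its
    # parent's rank would allow. A simple consistency check can flag this.

    MIN_HOP_RANK_INCREASE = 256  # RPL default minimum rank increase per hop

    def advertised_rank_is_suspicious(parent_rank, advertised_rank):
        """Flag a node whose advertised rank is lower than possible given its
        preferred parent's rank (a classic sinkhole symptom)."""
        return advertised_rank < parent_rank + MIN_HOP_RANK_INCREASE

    # Hypothetical DIO observations: (node_id, parent_rank, advertised_rank)
    observations = [
        ("node_3", 512, 768),   # consistent: 512 + 256 <= 768
        ("node_7", 1024, 300),  # suspicious: better rank than its parent allows
    ]

    for node, parent_rank, rank in observations:
        if advertised_rank_is_suspicious(parent_rank, rank):
            print(f"{node}: possible sinkhole (rank {rank} vs parent {parent_rank})")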
Due to the widespread use of the Internet of Things (IoT) in recent years, the need for IoT technologies that can handle communication with the rest of the globe has grown dramatically. Wireless sensor networks (WSNs) play a vital role in the operation of the IoT. The creation of IoT operating systems (OS) that can manage newly built IoT hardware, together with new protocols and procedures for all communication layers, will pave the way forward. Compared with other devices, these gadgets require comparatively little power, memory, and other resources, which has made the scientific community increasingly aware of the relevance of IoT device operating systems. Such devices can be made more versatile and powerful by including an operating system that provides real-time capabilities, a kernel, networking, and other features. IEEE 802.15.4 networks are linked together using IPv6, whose wide address space enables more devices to connect to the Internet through the 6LoWPAN protocol. Notwithstanding the great benefits of the widespread use of the Internet, the privacy and security issues that have arisen from it must be addressed. This research provides a network security architecture for IoT operating systems that ensures secure communication, using the Cooja network simulator in combination with the Contiki operating system, and demonstrates and explains how nodes can be protected from network layer and physical layer attacks. The research also presents the energy consumption of each designated node type during the authentication and communication process. Finally, it proposes further improvements to the architecture that would enhance network layer protection.
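The abstract reports per-node energy consumption during authentication and communication; a minimal sketch of the kind of post-processing involved, assuming Contiki Energest-style tick counts and placeholder current-draw values (neither taken from the paper), is:

    # Illustrative sketch (not the paper's code): converting Contiki Energest-style
    # tick counts into per-node energy estimates. Current draws and voltage below
    # are assumed placeholder values for a typical low-power mote.

    RTIMER_SECOND = 32768          # Contiki rtimer ticks per second (typical value)
    VOLTAGE = 3.0                  # volts (assumed)
    CURRENT_MA = {                 # assumed current draw per hardware state, in mA
        "cpu": 1.8, "lpm": 0.005, "tx": 17.4, "rx": 18.8,
    }

    def energy_mj(ticks_by_state):
        """Energy in millijoules: sum over states of time (s) * current (A) * voltage (V) * 1000."""
        return sum(
            (ticks / RTIMER_SECOND) * (CURRENT_MA[state] / 1000.0) * VOLTAGE * 1000.0
            for state, ticks in ticks_by_state.items()
        )

    # Hypothetical Energest readings collected during an authentication round
    sensor_node = {"cpu": 120000, "lpm": 900000, "tx": 15000, "rx": 40000}
    print(f"sensor node: {energy_mj(sensor_node):.2f} mJ")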
The “Internet of Things (IoT)” is a term that describes physical devices equipped with sensors, processing software, and other technologies that connect and exchange information with other systems and devices through the Internet and other forms of communication. The RPL protocol can efficiently establish network routes, communicate routing information, and adjust the topology. The 6LoWPAN concept was born out of the belief that IP should reach even the tiniest devices, so that low-power devices with minimal computational capabilities can join the IoT. The DIS-flooding attack against RPL-based IoT, along with its mitigation techniques, is discussed in this paper.
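For context on the attack named above: a DIS flood works by repeatedly soliciting DIO transmissions, which resets neighbors' Trickle timers and drains their energy. One commonly discussed mitigation idea is to bound how many DIS messages from a single neighbor are honored per time window. The following sketch illustrates that idea with assumed window and threshold values; it is not necessarily the mitigation technique proposed in the paper.

    # Illustrative sketch (not the paper's mechanism): rate-limiting DIS messages
    # per neighbor so that a flooding node cannot keep resetting the Trickle timer.
    # The window length and threshold are assumed example values.
    from collections import defaultdict, deque
    import time

    DIS_WINDOW_SECONDS = 60     # observation window (assumed)
    DIS_THRESHOLD = 5           # max DIS messages honored per neighbor per window (assumed)

    dis_history = defaultdict(deque)  # neighbor id -> timestamps of recent DIS messages

    def should_honor_dis(neighbor_id, now=None):
        """Return True if a DIS from this neighbor should trigger a DIO / Trickle reset."""
        now = time.time() if now is None else now
        history = dis_history[neighbor_id]
        while history and now - history[0] > DIS_WINDOW_SECONDS:
            history.popleft()                  # drop timestamps outside the window
        history.append(now)
        return len(history) <= DIS_THRESHOLD   # beyond the threshold, ignore (likely flooding)

    # Example: the 6th DIS within one minute from the same neighbor is ignored
    for i in range(7):
        print(i + 1, should_honor_dis("neighbor_A", now=100.0 + i))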
Low-power and lossy networks (LLNs) consist of a large number of sensor nodes with limited resources such as energy, memory, computing power, and bandwidth, connected by lossy links. In early 2008, an IETF working group looked into using existing routing protocols for LLNs, and because of the importance of LLNs to the IoT, the Routing Over Low power and Lossy networks (ROLL) working group standardized an IPv6 routing solution for them. The resulting IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL) builds on the 6LoWPAN standard. RPL has matured significantly, and the research community is becoming increasingly interested in it. The topology of RPL can be built in a variety of ways, and it is constructed proactively, in advance of data traffic. Because a complete review of RPL under mobility is lacking, this paper proposes a mobility management framework together with an experimental evaluation using parameters such as packet delivery ratio, throughput, end-to-end delay, and consumed energy, with a careful analysis of the results. Finally, this paper can help academics better understand RPL and engage in future research projects to improve it.
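For context on the evaluation parameters named above, a minimal sketch of how packet delivery ratio, average end-to-end delay, and throughput can be computed from simulation send/receive records follows; the log format and values are invented for illustration and are not the paper's tooling.

    # Illustrative sketch (assumed log format): computing packet delivery ratio,
    # average end-to-end delay, and throughput from (packet_id -> send_time) and
    # (packet_id -> (recv_time, size_bytes)) records.

    sent = {1: 0.00, 2: 0.50, 3: 1.00, 4: 1.50}                 # packet_id -> send time (s)
    received = {1: (0.12, 80), 2: (0.75, 80), 4: (1.95, 80)}    # packet_id -> (recv time, bytes)

    pdr = len(received) / len(sent)                              # delivered / sent

    delays = [recv_t - sent[pid] for pid, (recv_t, _) in received.items()]
    avg_delay = sum(delays) / len(delays)                        # mean end-to-end delay (s)

    duration = max(t for t, _ in received.values()) - min(sent.values())
    throughput_bps = sum(size for _, size in received.values()) * 8 / duration

    print(f"PDR: {pdr:.2%}, avg delay: {avg_delay*1000:.0f} ms, throughput: {throughput_bps:.0f} bit/s")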
How can high-level directives concerning risk, cybersecurity and compliance be operationalized in the central nervous system of any organization above a certain complexity? How can the effectiveness of technological solutions for security be proven and measured, and how can this technology be aligned with the governance and financial goals at the board level? These are the essential questions for any CEO, CIO or CISO concerned with the wellbeing of the firm. The concept of Zero Trust (ZT) approaches information and cybersecurity from the perspective of the asset to be protected and the value that asset represents. Zero Trust has been around for quite some time, and most professionals associate it with a particular architectural approach to cybersecurity, involving concepts such as segments, resources that are accessed in a secure manner, and the maxim “never trust, always verify”. This paper describes the current state of the art in Zero Trust usage. We investigate the limitations of current approaches and how these are addressed in the form of Critical Success Factors in the Zero Trust Framework developed by ON2IT ‘Zero Trust Innovators’ (1). Furthermore, this paper describes the design and engineering of a Zero Trust artefact that addresses the problems at hand (2), following Design Science Research (DSR). The last part of this paper outlines the setup of an empirical validation through practitioner-oriented research, in order to gain broader acceptance and implementation of Zero Trust strategies (3). The final result is a proposed framework and associated technology which, via Zero Trust principles, addresses multiple layers of the organization to grasp and align cybersecurity risks and to understand the readiness and fitness of the organization and its measures to counter cybersecurity risks.
Objective measures are ubiquitous in the formulation, design and implementation of deep space missions. Tour durations, flyby altitudes, propellant budgets, power consumption, and other metrics are essential to developing and managing NASA missions. But beyond the simple metrics of cost and workforce, it has been difficult to identify objective, quantitative measures that assist in evaluating choices made during formulation or implementation phases in terms of their impact on flight operations. As part of the development of the Europa Clipper Mission system, a set of operations metrics has been defined, along with the design information and software tooling needed to calculate them. We have applied these methods and metrics to help assess the impact on the flight team of the six options for the Clipper Tour that are currently being vetted for selection in the fall of 2021. To generate these metrics, the Clipper MOS team first designed the set of essential processes by which flight operations will be conducted, using a standard approach and template to identify (among other aspects) timelines for each process, along with their time constraints (e.g., uplinks for sequence execution). Each of the resulting 50 processes is documented in a common format and concurred on by stakeholders. Process timelines were converted into generic schedules and workforce-loaded using COTS scheduling software, based on the inputs of the process authors and domain experts. Custom code was written to create an operations schedule for a specific portion of Clipper's prime mission, with instances of a given process scheduled based on specific timing rules (e.g., process X starts once per week on Thursdays) or relative to mission events (e.g., the sequence generation process begins on a Monday, at least three weeks before each Europa closest approach). Over a 5-month period, and for each of six Clipper candidate tours, the result was a 20,000+ line, workforce-loaded schedule that documents all of the process-driven work effort at the level of individual roles, along with a significant portion of the level-of-effort work. Post-processing code calculated the absolute and relative number of work hours during a nominal 5 day / 40 hour work week, the work effort during 2nd and 3rd shift, and 1st shift on weekends. The resultant schedules and shift tables were used to generate objective measures relating to both human factors and operational risk; these measures showed that Clipper tours which utilize 6:1 resonant (21.25 day) orbits instead of 4:1 resonant (14.17 day) orbits during the first dozen or so Europa flybys are advantageous to flight operations. A similar approach can be extended to assist missions in more objective assessments of a number of mission issues and trades, including tour selection and spacecraft design for operability.
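As a hedged illustration of the shift-accounting step described above (the actual Clipper post-processing code and shift conventions are not reproduced here), one could bucket workforce-loaded schedule entries into weekday 1st shift, 2nd/3rd shift, and weekend 1st shift roughly as follows, with the shift boundaries assumed:

    # Illustrative sketch (assumed shift boundaries and record format, not the
    # Clipper MOS tooling): tallying workforce-loaded schedule entries into
    # prime-shift, 2nd/3rd-shift, and weekend-1st-shift hour totals.
    from datetime import datetime

    def classify(start):
        """Bucket a work hour by an assumed shift convention:
        1st shift 07:00-15:00, 2nd 15:00-23:00, 3rd 23:00-07:00."""
        hour, weekend = start.hour, start.weekday() >= 5
        if 7 <= hour < 15:
            return "weekend 1st shift" if weekend else "1st shift (weekday)"
        return "2nd shift" if 15 <= hour < 23 else "3rd shift"

    # Hypothetical schedule entries: (role, start of one work hour)
    entries = [
        ("sequence engineer", datetime(2021, 9, 2, 9)),    # Thursday morning
        ("sequence engineer", datetime(2021, 9, 2, 16)),   # Thursday evening
        ("ace",               datetime(2021, 9, 4, 10)),   # Saturday morning
    ]

    totals = {}
    for _, start in entries:
        bucket = classify(start)
        totals[bucket] = totals.get(bucket, 0) + 1
    print(totals)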
With the increasing number of catastrophic weather events and the resulting disruption of the energy supply to essential loads, distribution grid operators’ focus has shifted from reliability to resiliency against high-impact, low-frequency events. Given the enhanced automation that enables the smarter grid, electric utilities have several assets and resources at their disposal to enhance resiliency. However, lacking comprehensive resilience tools for informed operational decisions and planning, utilities face a challenge in prioritizing investments and operational control actions for resiliency. Distribution system resilience is also highly dependent on system attributes, including the network, controls, generating resources, the location of loads and resources, and the progression of an extreme event. In this work, we present a novel set of multi-stage resilience metrics called the Anticipate-Withstand-Recover (AWR) metrics. The AWR metrics are based on integrating relevant system-characteristics-based factors before, during, and after the extreme event. The developed methodology takes a pragmatic and flexible approach by adopting concepts from the national emergency preparedness paradigm, proactive and reactive controls of grid assets, graph theory with system and component constraints, and a multi-criteria decision-making process. The proposed metrics are applied to provide decision support for a) operational resilience and b) planning investments, and are validated for a real system in Alaska during the entirety of an event progression.
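To make the aggregation idea concrete (the weights, factor names, and scores below are placeholders, not the paper's model), a multi-criteria AWR-style index could be formed as a weighted sum of per-stage factor averages:

    # Illustrative sketch (placeholder weights and scores, not the paper's model):
    # a simple weighted aggregation of Anticipate / Withstand / Recover factor
    # scores into a single resilience index, in the spirit of multi-criteria
    # decision-making.

    stage_weights = {"anticipate": 0.3, "withstand": 0.4, "recover": 0.3}  # assumed

    # Assumed factor scores in [0, 1] for each stage of the event progression
    factor_scores = {
        "anticipate": {"forecast_readiness": 0.8, "crew_prepositioning": 0.6},
        "withstand":  {"network_redundancy": 0.7, "der_availability": 0.5},
        "recover":    {"restoration_speed": 0.9, "critical_load_served": 0.75},
    }

    def awr_index(weights, scores):
        """Weighted sum of per-stage averages; higher means more resilient."""
        return sum(
            weights[stage] * (sum(vals.values()) / len(vals))
            for stage, vals in scores.items()
        )

    print(f"AWR resilience index: {awr_index(stage_weights, factor_scores):.3f}")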