Bibliography
While the introduction of softwarization technologies such as SDN and NFV shifts the main focus of network management from hardware to software, network operators still have to care for a large amount of network and computing equipment located in the network center. Toward fully automated network management, we believe that a robotic approach will be significant, meaning that a robot cares for the physical equipment on behalf of humans. This paper explains our experience and the insights gained throughout the development of a network management robot. We utilize ROS (Robot Operating System), a powerful platform for robot development that secures ease of development and expandability. We also present our roadmap for the network management robot, along with three use cases: environmental monitoring, operator assistance, and autonomous maintenance of equipment. Finally, the paper briefly reports on experiments conducted in a commercial network center.
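As a rough illustration of how ROS keeps such a robot easy to extend, the following minimal rospy node publishes a simulated temperature reading for the environmental-monitoring use case; the topic name and the sensor stub are assumptions made for this sketch, not details from the paper.

```python
#!/usr/bin/env python
# Minimal ROS node sketch: publishes a (simulated) temperature reading
# for the environmental-monitoring use case. The topic name and the
# read_temperature() stub are hypothetical, not from the paper.
import random
import rospy
from std_msgs.msg import Float32

def read_temperature():
    # Stand-in for a real sensor driver on the robot.
    return 22.0 + random.uniform(-0.5, 0.5)

def main():
    rospy.init_node('env_monitor')
    pub = rospy.Publisher('/env/temperature', Float32, queue_size=10)
    rate = rospy.Rate(1)  # publish at 1 Hz
    while not rospy.is_shutdown():
        pub.publish(Float32(data=read_temperature()))
        rate.sleep()

if __name__ == '__main__':
    main()
```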
The Internet of Things (IoT) is an emerging technology, an extension of the traditional Internet that connects everything based on Radio Frequency Identification (RFID), sensors, GPS, Machine-to-Machine technologies, and the like. The security issues surrounding the IoT have hindered its development and have consequently attracted research interest. However, there are very few approaches that assess the security of the IoT from the perspective of an attacker. Penetration testing is widely used to evaluate the security of traditional Internet systems, but it normally consumes considerable cost and time. In this paper, we analyze the security problems of the IoT and propose a penetration testing approach, and its automation, based on the belief-desire-intention (BDI) model to evaluate the security of the IoT.
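To fix the idea of BDI-driven automation, here is a minimal sketch of a belief-desire-intention loop; the beliefs, desires, and stubbed plans are illustrative placeholders rather than the paper's actual model.

```python
# Minimal BDI-style agent loop sketch for automated penetration testing.
# Belief/desire/plan contents are made-up placeholders.
beliefs = {"open_ports": [], "default_creds": None}
desires = ["enumerate_services", "test_default_credentials"]

plans = {
    "enumerate_services": lambda b: b.update(open_ports=[80, 1883]),    # stub scan
    "test_default_credentials": lambda b: b.update(default_creds=True)  # stub check
}

def deliberate(beliefs, desires):
    # Pick the first desire whose goal is not yet satisfied (the intention).
    for d in desires:
        if d == "enumerate_services" and not beliefs["open_ports"]:
            return d
        if d == "test_default_credentials" and beliefs["default_creds"] is None:
            return d
    return None

intention = deliberate(beliefs, desires)
while intention is not None:
    plans[intention](beliefs)   # execute the plan bound to the intention
    intention = deliberate(beliefs, desires)

print(beliefs)  # beliefs updated by the executed plans
```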
The aim of this paper is to present a fresh methodology of improved evidence synthesis for assessing software trustworthiness, one that can resolve conflicts among pieces of evidence and their inherent uncertainties. To this end, the paper, on the ground of ISO/IEC 9126 and web software attributes, models the indicator framework by factor analysis. The paper then calculates the weight for each indicator via the structure entropy technique and builds a fuzzy judgment matrix from specialists' comments. The study computes the score and grade of software trustworthiness using the criterion of confidence degree discernment and proposes countermeasures to promote trustworthiness. Taking online accounting software as a case, the study performs an empirical analysis to further confirm the method's validity and robustness. The paper concludes by pointing out its limitations.
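For intuition, the sketch below computes indicator weights with the classical entropy weight method; the paper's structure entropy technique additionally folds in expert rankings, which is not reproduced here, and the data matrix is made up.

```python
# Classical entropy weight method: indicators whose values are more
# dispersed across samples carry more information and get more weight.
import numpy as np

# rows: software samples, columns: trustworthiness indicators (made up)
X = np.array([[0.8, 0.6, 0.9],
              [0.5, 0.7, 0.4],
              [0.9, 0.5, 0.7]])

P = X / X.sum(axis=0)                                # column-wise proportions
n = X.shape[0]
entropy = -(P * np.log(P)).sum(axis=0) / np.log(n)   # e_j in [0, 1]
weights = (1 - entropy) / (1 - entropy).sum()        # more dispersion -> more weight
print(weights)
```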
The recently enacted General Data Protection Regulation (GDPR) aims to protect all EU citizens from privacy and data breaches in an increasingly data-driven world. Consequently, it deeply affects the factory domain and its human-centric automation paradigm. In particular, collaboration between humans and machines, as well as individualized support, is enabled and enhanced by processing audio and video data, e.g., by algorithms that re-identify humans or analyse human behaviour. We give an at-a-glance overview of the most significant impacts of this recent legal change on the automation domain. Furthermore, we introduce a representative scenario from production, deduce its legal implications under the GDPR, and derive a privacy-aware software architecture from them. This architecture combines modern virtualization techniques with authorization and end-to-end encryption to ensure secure communication between distributed services and databases for distinct purposes.
Traceability has grown from being a specialized need for certain safety-critical segments of the industry to being a recognized value-add tool for the industry as a whole, one that can be applied to manual through automated processes end to end throughout the supply chain. The perception persists that traceability data collection is a burden that provides value only when the rarest and most disastrous of events take place. Disparate standards, mainly dictated by large OEM companies in the market, have evolved in the industry and create confusion as a multitude of requirements and definitions proliferate. The intent of the IPC-1782 project is to bring the whole principle of traceability up to date and enable business to move faster, increase revenue, increase productivity, and decrease costs as a result of increased trust. Traceability as defined in this standard will represent the most effective quality tool available, becoming an intrinsic part of best-practice operations. It encourages automated data collection from existing manufacturing systems, which aligns well with Industry 4.0, and integrates quality, reliability, product safety, predictive (routine, preventative, and corrective) maintenance, throughput, manufacturing, engineering, and supply-chain data. This reduces cost of ownership and ensures timeliness and accuracy all the way from a finished product back through to the initial materials, with granular attributes about the processes along the way. The goal of this standard is to create a single expandable and extendable data structure that can be adopted for all levels of traceability and enables information to be exchanged easily, as appropriate, across many industries. The scope includes support for the most demanding cases of detail and integrity, such as those required by critical safety systems, all the way through to situations where only basic traceability is required, such as for simple consumer products. A key driver for adoption of the standard is the ability to find a relevant and achievable level of traceability that exactly meets the requirement following a risk assessment of the business. The wealth of data accessible from traceability for analysis (e.g., Big Data) can easily and quickly yield information that can raise expectations of very significant quality and performance improvements, provide the necessary protection against the costs of issues in the market, and deliver very timely information to regulatory bodies, as well as to consumers and customers as appropriate. This information can also be used to quickly raise yields, drive product innovation that resonates with consumers, and help shape development tests and design requirements that are meaningful to the marketplace. The paper closes with how to leverage IPC-1782 to create the best value of component traceability for a business.
Cyber risk assessment of a Cyber-Physical System (CPS) without damaging it and without contaminating it with malware is an important and hard problem. Previous work developed a solution to this problem using a control component for simulating cyber effects in a CPS model to mimic a cyber attack. This paper extends that work by presenting an algorithm for semi-automated insertion of control components into a CPS model based on the Discrete Event Systems (DEVS) formalism. We also describe how to use this algorithm to insert a control component into Live, Virtual, Constructive (LVC) environments that may contain non-DEVS models, thereby extending our solution to other systems in general.
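As a toy illustration of the idea, the sketch below shows a DEVS-style control component that, when armed, mimics a cyber effect by dropping messages in transit; the simplified interface (delta_ext, delta_int, ta, output) follows the DEVS formalism in spirit but is not the paper's insertion algorithm.

```python
# Toy DEVS-style "control component" that sits between two model ports
# and, when armed, mimics a cyber attack by dropping messages.
class ControlComponent:
    def __init__(self):
        self.buffer = None
        self.attack_active = False   # toggled by the experiment script

    def delta_ext(self, elapsed, message):
        # External transition: receive a message from the upstream model;
        # drop it if the simulated attack is active.
        self.buffer = None if self.attack_active else message

    def ta(self):
        # Time advance: fire immediately if holding a message, else wait.
        return 0.0 if self.buffer is not None else float('inf')

    def output(self):
        # Output function (lambda): forward the buffered message downstream.
        return self.buffer

    def delta_int(self):
        # Internal transition: clear the buffer after output.
        self.buffer = None
```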
In this paper, we propose a technique to detect phishing attacks based on the behavior of human users when exposed to a fake website. Some online users submit fake credentials to a login page before submitting their actual credentials and observe the login status of the resulting page to check whether the website is fake or legitimate. We automate this same behavior with our application (FeedPhish), which feeds fake values into the login page. If the web page logs in successfully, it is classified as phishing; otherwise it undergoes further heuristic filtering. If the suspicious site passes through all heuristic filters, the website is classified as legitimate. As per the experimental results, our application has achieved a true positive rate of 97.61%, a true negative rate of 94.37%, and an overall accuracy of 96.38%. Our application demands neither third-party services nor prior knowledge such as web history or whitelists and blacklists of URLs. It is able to detect not only zero-day phishing attacks but also phishing sites hosted on compromised domains.
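The core probe can be sketched as follows, assuming hypothetical form field names and a naive success heuristic; a real implementation such as FeedPhish layers further heuristic filters on top of this check.

```python
# Simplified sketch of the fake-credential probe: submit bogus
# credentials and treat an apparent "login success" as a phishing signal.
# Form field names and the failure markers are assumptions.
import uuid
import requests

def looks_phishy(login_url):
    fake_user = "u_" + uuid.uuid4().hex[:8]
    fake_pass = uuid.uuid4().hex
    resp = requests.post(login_url,
                         data={"username": fake_user, "password": fake_pass},
                         timeout=10, allow_redirects=True)
    body = resp.text.lower()
    # A legitimate site should reject random credentials; if no failure
    # indicators appear, flag the page for further heuristic filtering.
    failure_markers = ("invalid", "incorrect", "failed", "try again")
    return not any(m in body for m in failure_markers)
```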
With the rapid adoption of network-based communication in industries, security-related problems appear to be inevitable for automation networks. The integration of the Internet into the automation plant has benefited companies and engineers greatly but has also paved the way for a number of threats. An attack on such control-critical infrastructure may endanger people's health and safety, damage industrial facilities, and produce financial loss. One approach to securing automation networks is the development of an efficient Network-based Intrusion Detection System (NIDS). Despite the several techniques available for intrusion detection, they still lag in identifying possible or novel attacks on the network efficiently. In this paper, we evaluate the performance of a detection mechanism that combines deep learning techniques with machine learning techniques for the development of an Intrusion Detection System (IDS). Performance metrics such as precision, recall, and F-measure were measured.
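For reference, the reported metrics can be computed as below, shown with scikit-learn on placeholder labels (1 = attack, 0 = normal).

```python
# Standard IDS evaluation metrics on illustrative labels.
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # ground truth (made up)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # detector output (made up)

print("precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("recall:   ", recall_score(y_true, y_pred))     # TP / (TP + FN)
print("F-measure:", f1_score(y_true, y_pred))         # harmonic mean of the two
```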
The terms smart grid, IntelliGrid, and secure smart grid are used today to describe technologies that automatically and rapidly isolate faults, restore power, monitor demand, and maintain and recover stability for more reliable generation, transmission, and distribution of electric power. In general, the terms describe the use of microprocessor-based intelligent electronic devices (IEDs) communicating with one another to accomplish tasks previously done by humans or left undone. These IEDs observe the state of the power system, make informed decisions, and then take action to preserve the stability and performance of the grid. Technology and services in the home will allow end users to manage their consumption based on their own preferences. In order to manage their consumption or the demand placed on the grid, consumers need information and an adaptable power distribution system. The smart grid is a collection of information sources and the automatic control system that manages the distribution of power, understands changes in demand, and reacts to them by managing demand response. Different billing strategies for variable time and type of use, as well as conservation and the use or sale of distributed resources, will become part of smart solutions. The traditional electrical power grid is currently evolving into the smart grid. The smart grid integrates the traditional electrical power grid with information and communication technologies (ICT). Such integration empowers electrical utility providers and consumers and improves the efficiency and availability of the power system while continuously monitoring, controlling, and managing the demands of customers. A smart grid is an enormously complex network composed of millions of devices and entities connected to each other. Such a massive network comes with many security concerns and vulnerabilities. In this paper, we survey the latest work on smart grid security. We highlight the complexity of the smart grid network and discuss the vulnerabilities specific to this large heterogeneous network. We then discuss the challenges that exist in securing the smart grid network and how the current security solutions applied to IT networks are not sufficient to secure smart grid networks. We conclude by overviewing the current and needed security solutions for the smart grid.
Distribution system security region (DSSR) analysis has been widely used to study distribution system operation security. This paper defines the scale of the DSSR, namely the number of boundary constraints and variables across all operational constraints, and puts forward a corresponding evaluation method. Firstly, the influence of the number of security boundary constraints and variables on the scale of the DSSR is analyzed. The factors that mainly influence the scale are identified, such as the number of transformers, feeders, and sectionalizing switches, and the feeder contact modes between transformers. Secondly, a matrix representing the relations among transformers in the distribution system is defined to reflect the characteristics of the network's structure, and an algorithm for the scale of the DSSR based on the transformer connection relationship matrix is proposed, which avoids having to enumerate the security region constraints. Finally, the proposed method is applied to a test system to confirm the effectiveness of the concepts and methods. This provides a necessary foundation for DSSR theory as well as security analysis.
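A toy example of reading scale information off such a matrix is sketched below; the matrix values and the counting rule are made up for illustration and do not reproduce the paper's algorithm.

```python
# Toy transformer connection-relationship matrix: entry [i][j] holds the
# number of feeder ties between transformer i and transformer j. The
# counting rule is a made-up stand-in for the paper's scale algorithm.
import numpy as np

T = np.array([[0, 2, 1],
              [2, 0, 0],
              [1, 0, 0]])   # 3 transformers, feeder ties between them

inter_transformer_ties = T[np.triu_indices_from(T, k=1)].sum()
print("feeder ties contributing to boundary constraints:",
      inter_transformer_ties)
```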
Cryptography is the fascinating science that deals with constructing and breaking secret codes. The ongoing digitization of this modern era relies on cryptography as one of its backbones to perform transactions with confidentiality and security wherever authentication is required. With the evolution of modern technology, the use of codes has exploded, enriching cryptology and empowering citizens. One of the most important things that encryption provides anyone using any kind of computing device is privacy. Since there is no true privacy without strong security, the approach taken here is to make the ciphertext more robust against being bypassed. In the current work, the well-known Caesar cipher and rail fence cipher techniques are combined with a cross-language cipher technique, and a detailed comparative analysis among them is carried out. The simulations have been carried out in the Eclipse Juno IDE, and Java, an open-source language, has been used to implement the said techniques.
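For reference, the two classical stages can be sketched compactly; note that the paper's implementation is in Java and also adds a cross-language stage, while this Python sketch covers only the Caesar and rail fence steps.

```python
def caesar(text, shift):
    # Shift alphabetic characters by `shift`, leaving others untouched.
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

def rail_fence(text, rails):
    # Write the text in a zigzag over `rails` rows, then read row by row.
    rows = [[] for _ in range(rails)]
    row, step = 0, 1
    for ch in text:
        rows[row].append(ch)
        if row == 0:
            step = 1
        elif row == rails - 1:
            step = -1
        row += step
    return ''.join(''.join(r) for r in rows)

# Combined pipeline: Caesar substitution first, then rail fence transposition.
print(rail_fence(caesar("attack at dawn", 3), 3))
```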
Unmanned systems are increasing in number, while their manning requirements remain the same. To decrease manpower demands, machine learning techniques and autonomy are gaining traction and visibility. One barrier is human perception and understanding of autonomy. Machine learning techniques can result in "black box" algorithms that may yield high fitness but poor comprehension by operators. However, Interactive Machine Learning (IML), a method for incorporating human input over the course of algorithm development using neuro-evolutionary machine learning techniques, may offer a solution. IML is evaluated here for its impact on developing autonomous team behaviors in an area search task. Initial findings show that IML-generated search plans were chosen over plans generated using a non-interactive ML technique, even though participants trusted them slightly less. Further, participants discriminated between the two types of plans with a high degree of accuracy, suggesting that the IML approach imparts behavioral characteristics into algorithms that make them more recognizable. Together, the results lay the foundation for exploring how to successfully team humans with ML-generated behaviors.
Establishing and operating an Information Security Management System (ISMS) to protect information assets and information systems is in itself a challenge for larger enterprises and small and medium-sized businesses alike. A high level of automation is required to reduce operational effort to an acceptable level when implementing an ISMS. In this paper we present the ADAMANT framework, which increases automation in information security management as a whole by establishing a continuous, risk-driven, and context-aware ISMS that not only automates security controls but considers all of the highly interconnected information security management tasks. We further illustrate how ADAMANT is suited to establishing an ISO 27001 compliant ISMS for small and medium-sized enterprises, and how not only the monitoring of security controls but a majority of ISMS-related activities can be supported through automated process execution and workflow enactment.
Customer Edge Switching (CES) is an experimental Internet architecture that provides reliable and resilient multi-domain communications. It provides resilience against security threats because domains negotiate inbound and outbound policies before admitting new traffic. As CES and its signalling protocols are being prototyped, there is a need for independent testing of the CES architecture. Hence, our research goal is to develop an automated test framework that CES protocol designers and early adopters can use to improve the architecture. The test framework includes security, functional, and performance tests. Using the Robot Framework and STRIDE analysis, in this paper we present this automated security test framework. By evaluating sample test scenarios, we show that the Robot Framework and our CES test suite have prompted productive discussions about this new architecture, in addition to serving as clear, easy-to-read documentation. Our research also confirms that test automation can be useful for improving new protocol architectures and validating their implementation.
Among the several threats to cyber services, the Distributed Denial-of-Service (DDoS) attack is the most prevalent nowadays. A DDoS attack makes an online service unavailable by flooding the bandwidth or resources of a targeted system. It is easier for an insider with legitimate access to the system to circumvent security controls, resulting in an insider attack. To mitigate insider-assisted DDoS attacks, this paper proposes a moving target defense mechanism that isolates insiders from innocent clients by using attack proxies. Further, using the concept of load balancing, an effective algorithm to detect and handle insider attacks is developed, with the aim of maximizing attack isolation while minimizing the total number of proxies used.
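The bisection flavor of the isolation step can be sketched as follows; the data structures and the innocent/suspect bookkeeping are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch of proxy shuffling for insider isolation: clients on an attacked
# proxy are split across fresh proxies, so repeated attack observations
# progressively narrow down the insider set.
def shuffle_round(groups, attacked_idx):
    """groups: per-proxy client sets; attacked_idx: proxies under attack.
    Returns (provisionally innocent clients, next round's proxy groups)."""
    innocent, next_groups = set(), []
    for i, clients in enumerate(groups):
        if i not in attacked_idx:
            innocent |= clients                 # unattacked proxy: clients look clean
        elif len(clients) <= 1:
            next_groups.append(clients)         # isolated suspect: likely an insider
        else:
            c = sorted(clients)
            mid = len(c) // 2
            # Bisect suspects across two fresh proxies to narrow the insider set.
            next_groups += [set(c[:mid]), set(c[mid:])]
    return innocent, next_groups

# Example round: clients 1-4 on proxy 0 (attacked), 5-8 on proxy 1 (clean).
print(shuffle_round([{1, 2, 3, 4}, {5, 6, 7, 8}], attacked_idx={0}))
```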
The complexity, multiplicity, and impact of cyber-attacks have been increasing at an alarming rate despite the significant research and development investment in cyber security products and tools. The current techniques to detect and protect cyber infrastructures from these smart and sophisticated attacks are mainly characterized as ad hoc, manually intensive, and too slow. We present in this paper AIM-PSC, developed jointly by researchers at AVIRTEK and The University of Arizona Center for Cloud and Autonomic Computing, which is inspired by biological systems that can efficiently handle complexity, dynamism, and uncertainty. In the AIM-PSC system, online monitoring and multi-level analysis are used to analyze the anomalous behaviors of networks, software systems, and applications. By combining the results of different types of analysis using a statistical decision fusion approach, we can accurately detect many types of cyber-attacks with high detection and low false alarm rates, and proactively respond with corrective actions to mitigate their impact and stop their propagation.
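A minimal sketch of the fusion step, assuming placeholder scores, weights, and an alert threshold (none of which come from AIM-PSC):

```python
# Illustrative statistical decision fusion: combine per-domain anomaly
# scores (network, software, application) into one alert decision.
scores = {"network": 0.82, "software": 0.35, "application": 0.60}
weights = {"network": 0.5, "software": 0.2, "application": 0.3}

fused = sum(weights[k] * scores[k] for k in scores)  # weighted-sum fusion
ALERT_THRESHOLD = 0.55
print("fused score: %.3f -> %s" % (
    fused, "raise alert" if fused > ALERT_THRESHOLD else "no action"))
```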
In the last few years, a shift from mass production to mass customisation has been observed in industry. Easily reprogrammable robots that can perform a wide variety of tasks are desired to keep up with the trend of mass customisation while saving costs and development time. Learning by Demonstration (LfD) is an easy way to program robots in an intuitive manner and provides a solution to this problem. In this work, we discuss and evaluate LAP, a three-stage LfD method that conforms to the criteria for high-mix-low-volume (HMLV) industrial settings. The algorithm learns a trajectory in the task space, after which small segments can be adapted on the fly using a human-in-the-loop approach. The human operator acts as a high-level adaptation, correction, and evaluation mechanism to guide the robot. This way, no sensors or complex feedback algorithms are needed to improve robot behaviour, so errors and inaccuracies induced by these subsystems are avoided. Once the system performs at a satisfactory level after adaptation, the operator is removed from the loop, and the robot proceeds in a feed-forward fashion to optimise for speed. We demonstrate this method by simulating an industrial painting application, in which a KUKA LBR iiwa is taught how to draw a figure eight that is then reshaped by the operator during adaptation.
The design of systems with independent agency to act on the environment or which can act as persuasive agents requires consideration of not only the technical aspects of design, but of the psychological, sociological, and philosophical aspects as well. Creating usable, safe, and ethical systems will require research into human-computer communication, in order to design systems that can create and maintain a relationship with users, explain their workings, and act in the best interests of both users and of the larger society.
As the frequency, severity, and sophistication of cyber attacks increase, along with our dependence on reliable computing infrastructure, the role of Intrusion Detection Systems (IDS) is gaining importance. One of the challenges in deploying an IDS stems from selecting a combination of detectors that are relevant and accurate for the environment where security is being considered. In this work, we propose a new measurement approach to address two key obstacles: the base-rate fallacy and the unit-of-analysis problem. Our key contribution is to use the notion of a 'signal', an indicator of an event that is observable to an IDS, as the measurement target, and to apply the multiple-instance paradigm (from machine learning) to enable cross-comparable measures regardless of the unit of analysis. To support our approach, we present a detailed case study and provide empirical examples of the effectiveness of both the model and the measure by demonstrating the automated construction, optimization, and correlation of signals from different domains of observation (e.g., network-based, host-based, application-based) and using different IDS techniques (signature-based, anomaly-based).
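A small sketch of the multiple-instance view, with made-up signal tuples and the standard MIL labeling rule (a bag is positive iff any instance in it is positive):

```python
# Multiple-instance view of IDS output: each candidate event is a "bag"
# of signals from different observation domains. Signal tuples and the
# scoring rule are illustrative, not the paper's construction.
bags = {
    "event-1": [("network", 0.9), ("host", 0.2), ("application", 0.7)],
    "event-2": [("network", 0.1), ("host", 0.3)],
}

def bag_label(bag, threshold=0.6):
    # Standard MIL assumption: a bag is positive iff at least one of its
    # instances is positive.
    return any(score >= threshold for _domain, score in bag)

for name, bag in bags.items():
    print(name, "->", "malicious" if bag_label(bag) else "benign")
```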
Humans can easily find themselves in high-cost situations where they must choose between suggestions made by an automated decision aid and a conflicting human decision aid. Previous research indicates that humans often rely on automation or on other humans, but not on both simultaneously. Expanding on previous work by Lyons and Stokes (2012), the current experiment measures how trust in automated or human decision aids varies with perceived risk and workload. The simulated task required 126 participants to choose the safest route for a military convoy; they were presented with conflicting information from an automated tool and a human. Results demonstrated that as workload increased, trust in automation decreased. As perceived risk increased, trust in the human decision aid increased. Individual differences in dispositional trust correlated with increased trust in both decision aids. These findings can be used to inform training programs for operators who may receive information from human and automated sources. Examples of this context include air traffic control, aviation, and signals intelligence.