Bibliography
Cryptography is the fascinating science of constructing and breaking secret codes. The ongoing digitization of the modern era relies on cryptography as one of its backbones to perform transactions with confidentiality and security wherever authentication is required. As technology has evolved, the use of codes has exploded, enriching cryptology and empowering citizens. One of the most important things encryption provides anyone using any kind of computing device is privacy. Since there is no true privacy without strong security, the approach taken here is to make the ciphertext more robust against being bypassed. In the current work, the well-known Caesar cipher and Rail Fence cipher techniques are combined with a cross-language cipher technique, and a detailed comparative analysis among them is carried out. The simulations have been carried out in the Eclipse Juno IDE, and the techniques have been implemented in Java, an open-source language.
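The two classical components of the combination described above can be sketched as follows. This is a minimal, dependency-free illustration in Python rather than the paper's Java, not the authors' implementation; the cross-language stage and the paper's parameter choices are omitted.

```python
def caesar(text, shift):
    """Shift each letter by `shift` positions, preserving case;
    non-letters pass through unchanged."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

def rail_fence_encrypt(text, rails):
    """Write characters in a zigzag across `rails` rows, then read row by row."""
    rows = [[] for _ in range(rails)]
    row, step = 0, 1
    for ch in text:
        rows[row].append(ch)
        if row == 0:
            step = 1
        elif row == rails - 1:
            step = -1
        row += step
    return ''.join(''.join(r) for r in rows)

def rail_fence_decrypt(cipher, rails):
    """Rebuild the zigzag row pattern, refill the rows from the
    ciphertext, then read the characters back in zigzag order."""
    pattern = []
    row, step = 0, 1
    for _ in cipher:
        pattern.append(row)
        if row == 0:
            step = 1
        elif row == rails - 1:
            step = -1
        row += step
    counts = [pattern.count(r) for r in range(rails)]
    rows, i = [], 0
    for c in counts:
        rows.append(list(cipher[i:i + c]))
        i += c
    return ''.join(rows[r].pop(0) for r in pattern)

def encrypt(plain, shift=3, rails=3):
    # Chain the two ciphers: Caesar substitution, then Rail Fence transposition.
    return rail_fence_encrypt(caesar(plain, shift), rails)

def decrypt(cipher, shift=3, rails=3):
    return caesar(rail_fence_decrypt(cipher, rails), -shift)
```

Chaining a substitution cipher with a transposition cipher in this way hides both letter identities and letter positions, which is the intuition behind combining the two techniques.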
In the past couple of years, Cloud Computing has become an eminent part of the IT industry. As a result of its economic benefits, more and more people are heading towards Cloud adoption. At present there are numerous Cloud Service Providers (CSPs) allowing customers to host their applications and data on the Cloud. However, Cloud security continues to be the biggest obstacle to Cloud adoption and thereby deters customers from accessing its services. Various techniques have been implemented by providers in order to mitigate risks pertaining to Cloud security. In this paper, we present a Hybrid Cryptographic System (HCS) that combines the benefits of both symmetric and asymmetric encryption, thus resulting in a secure Cloud environment. The paper focuses on creating a secure Cloud ecosystem wherein we make use of multi-factor authentication along with multiple levels of hashing and encryption. The proposed system and algorithm are simulated using the CloudSim simulator. To this end, we illustrate the working of our proposed system along with the simulated results.
Choosing how to write natural language scenarios is challenging, because stakeholders may over-generalize their descriptions or overlook or be unaware of alternate scenarios. In security, for example, this can result in weak security constraints that are too general, or in missing constraints. Another challenge is that analysts are unclear on where to stop generating new scenarios. In this paper, we introduce the Multifactor Quality Method (MQM) to help requirements analysts empirically collect system constraints in scenarios based on elicited expert preferences. The method combines quantitative statistical analysis to measure system quality with qualitative coding to extract new requirements. The method is bootstrapped with minimal analyst expertise in the domain affected by the quality area, and then guides an analyst toward selecting expert-recommended requirements to monotonically increase system quality. We report the results of applying the method to security. These include 550 requirements elicited from 69 security experts during a bootstrapping stage, and a subsequent evaluation of these results in a verification stage with 45 security experts to measure the overall improvement of the new requirements. Security experts in our studies have an average of 10 years of experience. Our results show that, using our method, we detect an increase in the security quality ratings collected in the verification stage. Finally, we discuss how the proposed method helps to improve security requirements elicitation, analysis, and measurement.
This paper describes biometric-based cryptographic techniques for providing confidential communications and strong, mutual, multifactor authentication on the Internet of Things. The described security techniques support the goals of universal access, as users are allowed to select from multiple alternatives to authenticate their identities. By using a Biometric Authenticated Key Exchange (BAKE) protocol, user credentials are protected against phishing and man-in-the-middle attacks. Forward secrecy is achieved using a Diffie-Hellman key establishment scheme with fresh random values each time the BAKE protocol is operated. Confidentiality is achieved using lightweight cryptographic algorithms that are well suited for implementation in resource-constrained environments, i.e., those limited in processing speed, memory, and power availability. Lightweight cryptography can offer strong confidentiality solutions that are practical to implement in Internet of Things systems, where efficient execution, a small memory footprint, and compact code size are required.
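The forward-secrecy mechanism mentioned above, ephemeral Diffie-Hellman with fresh randomness per session, can be sketched as follows. This is not the paper's BAKE protocol, only the key-establishment idea it builds on; the small prime modulus is a toy value chosen for illustration, not a secure group.

```python
import secrets
import hashlib

# Toy parameters for illustration only; a real deployment would use a
# standardized large group (e.g. an RFC 3526 MODP group).
P = 0xFFFFFFFB  # 2**32 - 5, a small prime modulus
G = 5           # generator

def dh_keypair():
    """Draw a fresh ephemeral secret each session; this per-session
    randomness is what provides forward secrecy."""
    priv = secrets.randbelow(P - 2) + 1
    pub = pow(G, priv, P)
    return priv, pub

def dh_shared_key(priv, other_pub):
    """Derive a symmetric session key from the shared DH value."""
    shared = pow(other_pub, priv, P)
    return hashlib.sha256(shared.to_bytes(8, 'big')).hexdigest()

# One protocol run: both parties compute the same session key.
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()
assert dh_shared_key(a_priv, b_pub) == dh_shared_key(b_priv, a_pub)
```

Because the exponents are discarded after each run, compromising a device later reveals nothing about previously established session keys.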
Modern security protocols may involve humans in order to compare or copy short strings between different devices. Multi-factor authentication protocols, such as Google 2-factor or 3D-secure are typical examples of such protocols. However, such short strings may be subject to brute force attacks. In this paper we propose a symbolic model which includes attacker capabilities for both guessing short strings, and producing collisions when short strings result from an application of weak hash functions. We propose a new decision procedure for analysing (a bounded number of sessions of) protocols that rely on short strings. The procedure has been integrated in the AKISS tool and tested on protocols from the ISO/IEC 9798-6:2010 standard.
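The attacker capability modeled above, exhaustively guessing a short string, can be illustrated with a toy example. This is not the paper's symbolic model or the AKISS tool; it simply shows why a short code protected by a (here deliberately truncated, hence weak and collision-prone) hash falls to brute force.

```python
import hashlib
import itertools
import string

def weak_commit(code: str) -> str:
    """A truncated digest stands in for a weak hash function: the short
    output makes both guessing and collisions feasible."""
    return hashlib.sha256(code.encode()).hexdigest()[:6]

def brute_force(commitment: str, alphabet=string.digits, length=4):
    """Enumerate every candidate short string until one matches."""
    for guess in itertools.product(alphabet, repeat=length):
        candidate = ''.join(guess)
        if weak_commit(candidate) == commitment:
            return candidate
    return None

secret = "4831"            # a 4-digit code: only 10**4 candidates
recovered = brute_force(weak_commit(secret))
```

Note that the attacker may recover either the secret itself or a colliding string; both suffice to pass a check against the weak hash, which is exactly why the decision procedure must account for guessing and collision capabilities together.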
Embedded and mobile devices forming part of the Internet of Things (IoT) need new authentication technologies and techniques. This requirement is due to the increased effort and time attackers will invest to compromise a device, often a remote one, motivated by the possibility of a significant monetary return. This paper proposes exploiting the built-in functionality of a device's accelerometers to implement multi-factor authentication. An experimental embedded system designed to emulate a typical mobile device is used to implement the ideas and is investigated as a proof of concept.
Authentication is one of the key aspects of securing applications and systems alike. While most existing systems achieve this using usernames and passwords, it has been continuously shown that this authentication method is not secure. Studies have shown that these systems have vulnerabilities which lead to cases of impersonation and identity theft, so there is a need to improve such systems to protect sensitive data. In this research, we explore combining the user's location with traditional usernames and passwords in a multi-factor authentication system to make authentication more secure. The idea involves comparing a user's mobile device location with that of the browser, and comparing the device's Bluetooth key with the key used during registration. We believe that by leveraging existing technologies such as Bluetooth and GPS we can reduce implementation costs whilst improving security.
Our project, NFC Unlock, implements a secure multifactor authentication system for computers using Near Field Communication technology. The application is written in C# with pGina. It implements NFC authentication which replaces the standard Windows credentials, allowing the use of an NFC tag and a passcode to authenticate the user. Unlike the most prevalent multifactor authentication methods, NFC authentication does not require the user to wait for an SMS code to type into the computer: the user enters a passcode and scans the NFC tag to log in. To protect this data from attackers, the system encrypts the NFC tag ID and the passcode with the Advanced Encryption Standard. Users can easily register an NFC tag and link it to their computer account. The program also has several extra features, including text alerts, record keeping of all logins and login attempts, and a user-friendly configuration menu. Initial tests show that the NFC-based multifactor authentication system has the advantage of improved security with a simplified login process.
Because username-password-based authentication suffers from easy disclosure and low reliability, and excessive password management degrades the user experience tremendously, users are eager to shed the bonds of passwords and seek new ways of authenticating. Multifactor biometrics-based user authentication has therefore won favor, with its advantages of simplicity, convenience, and high reliability, especially in the mobile payment environment. Unfortunately, in existing schemes, biometric information is stored on the server side; thus, once the server is hacked and the fingerprint information leaks, user privacy is under deadly threat. Aiming at this security problem of fingerprint information in the mobile payment environment, we propose a novel multifactor two-server authentication scheme under mobile computing (MTSAS). MTSAS separates the authentication method from the authentication means, while the user's biometric characteristics never leave the user's device. MTSAS also chooses different authentication factors depending on the privacy level of the authentication, and then provides authentication at different security levels. Analysis in BAN logic proves that MTSAS achieves the purpose of authentication and meets the security requirements. In comparison with other schemes, the analysis shows that MTSAS not only has reasonable computational efficiency but also maintains a superior communication cost.
Internet of Things (IoT) is an emerging trend that is changing the way devices connect and communicate. The integration of cloud computing with IoT, i.e., the Cloud of Things (CoT), provides scalability, virtualized control, and access to the services provided by IoT. Security issues are a major obstacle to the widespread deployment and application of CoT. Among these issues, authentication and identification of the user are crucial. In this survey paper, various authentication schemes are reviewed. The aim of this paper is to study in detail a multifactor authentication system that uses secret splitting. The system uses exclusive-or operations, encryption algorithms, and the Diffie-Hellman key exchange algorithm to share keys over the network. Security analysis shows the resistance of the system against different types of attacks.
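The exclusive-or-based secret splitting mentioned above can be sketched in a few lines. This is an illustrative textbook construction, not the surveyed scheme itself: a key is split into n shares such that all n are needed to reconstruct it, so no single party holds the full secret.

```python
import secrets

def split_secret(secret: bytes, n: int) -> list:
    """Split `secret` into n XOR shares; every share is required to rebuild it."""
    # n-1 shares are uniformly random pads...
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    # ...and the last share is the secret XORed with all of them.
    last = secret
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def combine_shares(shares: list) -> bytes:
    """XOR all shares together; the random pads cancel, leaving the secret."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out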
The modular multilevel converter with series and parallel connectivity has been shown to provide advantages in several industrial applications. Its reliability largely depends on the absence of failures in the power semiconductors. We propose and analyze a fault-diagnosis technique to identify shorted switches based on features generated through a wavelet transform of the converter output and subsequent classification by support vector machines. The multi-class support vector machine is trained with multiple recordings of the output under each fault condition as well as of the converter under normal operation. Simulation results reveal that the proposed method has high classification accuracy and high robustness. Apart from monitoring of the output, which is required for converter control in any case, this method does not require additional module sensors.
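The feature-generation step described above can be sketched with a dependency-free single-filter wavelet decomposition. This uses the Haar wavelet as a stand-in (the paper does not necessarily use Haar), and the SVM classification stage is not reproduced; the per-level detail energies below are the kind of features such a classifier would be trained on.

```python
import math

def haar_step(signal):
    """One Haar DWT level: pairwise sums (approximation) and differences
    (detail), each scaled by 1/sqrt(2) to preserve signal energy."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / math.sqrt(2))
        detail.append((a - b) / math.sqrt(2))
    return approx, detail

def haar_features(signal, levels=3):
    """Feature vector: detail-coefficient energy at each level, plus the
    energy of the final approximation."""
    feats = []
    cur = signal
    for _ in range(levels):
        cur, detail = haar_step(cur)
        feats.append(sum(d * d for d in detail))
    feats.append(sum(a * a for a in cur))
    return feats
```

A shorted switch distorts the converter output waveform, which shifts energy between decomposition levels; the classifier then separates these energy signatures per fault condition.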
This paper presents a novel sensor parameter fault diagnosis method for general multiple-input multiple-output (MIMO) affine nonlinear systems based on an adaptive observer. Firstly, the affine nonlinear systems are transformed into a particular form via a diffeomorphic transformation using Lie derivatives. Then, based on high-gain observer and adaptive estimation techniques, an adaptive observer structure is designed with a simple method for jointly estimating the states and the unknown parameters in the output equation of the nonlinear systems, and an algorithm for fault estimation is derived. The global exponential convergence of the proposed observer is proved succinctly. The proposed method can also be applied directly to the fault diagnosis of general affine nonlinear systems by the reversibility of the aforementioned coordinate transformation. Finally, a numerical example is presented to illustrate the efficiency of the proposed fault diagnosis scheme.
Complex systems are prevalent in many fields such as finance, security and industry. A fundamental problem in system management is to perform diagnosis in case of system failure such that the causal anomalies, i.e., root causes, can be identified for system debugging and repair. Recently, the invariant network has proven to be a powerful tool in characterizing complex system behaviors. In an invariant network, a node represents a system component, and an edge indicates a stable interaction between two components. Recent approaches have shown that by modeling fault propagation in the invariant network, causal anomalies can be effectively discovered. Despite their success, the existing methods have a major limitation: they typically assume there is only a single, global fault propagation in the entire network. However, in real-world large-scale complex systems, it is more common for multiple fault propagations to grow simultaneously and locally within different node clusters and jointly define the system failure status. Inspired by this key observation, we propose a two-phase framework to identify and rank causal anomalies. In the first phase, a probabilistic clustering is performed to uncover impaired node clusters in the invariant network. Then, in the second phase, a low-rank network diffusion model is designed to backtrack causal anomalies in different impaired clusters. Extensive experimental results on real-life datasets demonstrate the effectiveness of our method.
As a consequence of the recent development of situational awareness technologies for smart grids, the gathering and analysis of data from multiple sources offer a significant opportunity for enhanced fault diagnosis. In order to achieve improved accuracy for both fault detection and classification, a novel combined data analytics technique is presented and demonstrated in this paper. The proposed technique is based on a segmented approach to Bayesian modelling that provides probabilistic graphical representations of both electrical power and data communication networks. In this manner, the reliability of both the data communication and electrical power networks is considered in order to improve overall power system transmission line fault diagnosis.
In the Energy Internet mode, a large volume of alarm information is generated when equipment exceptions and multiple faults occur in a large power grid, which seriously hinders information collection and fault analysis and delays accident handling for the monitors. To address this, this paper proposes a method for power grid monitoring to monitor and diagnose faults in real time: it constructs an equipment fault logical model based on five-section alarm information, builds a standard fault information set, and realizes fault information optimization, faulty equipment location, fault type diagnosis, and analysis of false-report and missing-report messages using a matching algorithm. The validity and practicality of the proposed method were verified on an actual case; the method can shorten the time needed to obtain and analyze fault information, accelerate accident handling, and help ensure the safe and stable operation of the power grid.
Transmission line monitoring systems produce a large amount of data that hinders fault diagnosis. For this reason, approaches are needed that can acquire and automatically interpret the information coming from line monitoring. Furthermore, human errors stemming from operator-dependent real-time decisions need to be reduced. In this paper, a multiple-fault diagnosis method to determine transmission lines' operating conditions is proposed. Different scenarios, including insulator chain contamination with different types and concentrations of pollutants, were modeled by equivalent circuits. Their performance was characterized by leakage current (LC) measurements and related to specific fault modes. A feature extraction algorithm relying on the difference between normal and faulty conditions was used to define qualitative trends for the diagnosis of the various fault modes.
This paper presents a solution to a multiple-model based stochastic active fault diagnosis problem over the infinite-time horizon. A general additive detection cost criterion is considered to reflect the objectives. Since the system state is unknown, the design consists of a perfect state information reformulation and optimization problem solution by approximate dynamic programming. An adaptive particle filter state estimation algorithm based on the efficient sample size is proposed to maintain the estimate quality while reducing computational costs. A reduction of information statistics of the state is carried out using non-resampled particles to make the solution feasible. Simulation results illustrate the effectiveness of the proposed design.
An electro-hydraulic servo actuation system is a complex system mixing mechanical, electrical, and hydraulic components. If it cannot be repaired for a long time, the possibility of multiple faults occurring must be considered. With this possibility in mind, this paper presents an extended Kalman filter (EKF) based method for multiple-fault diagnosis. By analysing the failure modes and mechanisms of the electro-hydraulic servo actuation system and modelling selected typical failure modes, the relationship between the key parameters of the system and the faults is obtained. The extended Kalman filter, a commonly used algorithm for parameter estimation, is then used for on-line diagnosis of potential faults. The simulation results show that the EKF-based method is effective for multi-fault diagnosis of the electro-hydraulic servo actuation system.
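The parameter-estimation idea above can be sketched with a toy model rather than the paper's electro-hydraulic one: an unknown system gain k, whose drift from its nominal value would indicate a fault, is estimated recursively from input/output samples. The toy measurement y = k*u is linear in k, so the EKF update reduces to the scalar Kalman recursion shown here; the paper's EKF additionally linearizes a nonlinear model with a Jacobian at each step.

```python
def estimate_gain(samples, k0=0.0, p0=1.0, q=1e-8, r=1e-4):
    """Recursively estimate k from (u, y) pairs with y ~= k * u."""
    k, p = k0, p0
    for u, y in samples:
        p += q                   # predict: k is modeled as (nearly) constant
        s = u * u * p + r        # innovation variance
        gain = p * u / s         # Kalman gain
        k += gain * (y - k * u)  # correct the estimate with the innovation
        p *= (1.0 - gain * u)    # covariance update
    return k

# Noiseless toy data generated with a true gain of 2.0; a fault monitor
# would compare the converged estimate against the nominal gain.
data = [(u, 2.0 * u) for u in (1.0, 0.5, -1.2, 0.8, 1.5)]
k_hat = estimate_gain(data)
```

After a handful of samples the estimate converges to the true gain; a persistent deviation of the estimated parameter from its nominal value is then flagged as a fault.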
This paper proposes a design method for a support tool for the detection and diagnosis of failures in discrete event systems (DES). The design of this diagnoser goes through three phases: an identification phase, which finds the paths and temporal parameters of the model describing the normal and faulty modes of operation; a detection phase, provided by comparison and monitoring of operation times; and a location phase based on combining the temporal evolution of the parameters with a threshold-exceedance technique. Our contribution lies in the application of this technique in the presence of simultaneous sensor and actuator faults. The proposed approach is validated on a filling system through simulation.
A novel method, consisting of fault detection, rough set generation, element isolation, and parameter estimation, is presented for multiple-fault diagnosis of analog circuits with tolerance. Firstly, a linear-programming concept is developed to transform fault detection of a circuit with limited accessible terminals into checking whether a feasible solution exists under tolerance constraints on the measurements. Secondly, a fault characteristic equation is deduced to generate a fault rough set; it is proved that the node voltages of the nominal circuit can be used in the fault characteristic equation under fault tolerance. Lastly, fault detection with revised deviation restrictions for the suspected faulty elements is carried out to locate the faulty elements and estimate their parameters. The diagnosis accuracy and parameter identification precision of the method are verified by simulation results.
A method for multiple-fault diagnosis in linear analog circuits is presented in this paper. The proposed approach is based upon the indirect compensation theorem, which reduces the procedure of fault diagnosis in an analog circuit to a symbolic analysis process. An extension of the indirect compensation theorem to linear subcircuits is proposed. The indirect compensation provides an equivalent replacement of an n-port subcircuit by n norators and n fixators of voltages and currents. The proposed multiple-fault diagnosis technique can be used for evaluation of any kind of terminal characteristic of a two-port network. For calculation of the circuit determinant expressions, the Generalized Parameter Extraction Method is implemented. The main advantage of the analysis method is that it is cancellation-free: it requires neither a matrix nor an ordinary graph description of the circuit. The process of symbolic circuit analysis is automated by the freeware computer program Cirsym, which can be used online. Experimental results are presented to show the efficiency and reliability of the proposed technique.
When supporting commercial or defense systems, a perennial challenge is providing effective test and diagnosis strategies to minimize downtime, thereby maximizing system availability. Potentially one of the most effective ways to minimize downtime is to detect and isolate as many faults in a system at one time as possible. This is referred to as the "multiple-fault diagnosis" problem. While several tools have been developed over the years to assist in performing multiple-fault diagnosis, considerable work remains to provide the best diagnosis possible. Recently, a new model for evolutionary computation has been developed called the "Factored Evolutionary Algorithm" (FEA). In this paper, we combine our prior work in deriving diagnostic Bayesian networks from static fault isolation manuals and fault trees with the FEA strategy to perform abductive inference as a way of addressing the multiple-fault diagnosis problem. We demonstrate the effectiveness of this approach on several networks derived from existing, real-world FIMs.
The Web today is a growing universe of pages and applications teeming with interactive content. The security of such applications is of the utmost importance, as exploits can have a devastating impact on personal and economic levels. The number one programming language in Web applications is PHP, powering more than 80% of the top ten million websites. Yet it was not designed with security in mind and, today, bears a patchwork of fixes and inconsistently designed functions with often unexpected and hardly predictable behavior that typically yield a large attack surface. Consequently, it is prone to different types of vulnerabilities, such as SQL Injection or Cross-Site Scripting. In this paper, we present an interprocedural analysis technique for PHP applications based on code property graphs that scales well to large amounts of code and is highly adaptable in its nature. We implement our prototype using the latest features of PHP 7, leverage an efficient graph database to store code property graphs for PHP, and subsequently identify different types of Web application vulnerabilities by means of programmable graph traversals. We show the efficacy and the scalability of our approach by reporting on an analysis of 1,854 popular open-source projects, comprising almost 80 million lines of code.
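The core idea of the graph-based analysis above can be illustrated with a toy example. This is not the paper's PHP analyzer, its code property graphs, or its graph database: here the graph is a plain dict of data-flow edges over hypothetical node names, and the "programmable traversal" flags flows from an attacker-controlled source into a security-sensitive sink that bypass sanitization.

```python
# Toy data-flow slice of a PHP-like program: an attacker-controlled value
# flows into a SQL sink directly, but only reaches the output sink after
# passing through a sanitizer.
GRAPH = {
    "$_GET['id']":      ["$id"],
    "$id":              ["mysql_query", "htmlspecialchars"],
    "htmlspecialchars": ["echo"],
}
SOURCES = {"$_GET['id']"}
SINKS = {"mysql_query", "echo"}
SANITIZERS = {"htmlspecialchars"}

def unsanitized_flows(graph, sources, sinks, sanitizers):
    """Return (source, sink) pairs reachable without crossing a sanitizer."""
    findings = []
    for src in sources:
        stack, seen = [src], set()
        while stack:
            node = stack.pop()
            if node in seen or node in sanitizers:
                continue  # sanitizers cut the tainted path
            seen.add(node)
            if node in sinks:
                findings.append((src, node))
            stack.extend(graph.get(node, []))
    return findings
```

In this slice only the flow into `mysql_query` is reported (a potential SQL injection), while the `echo` sink is reached only through `htmlspecialchars` and is therefore not flagged; vulnerability-specific traversals of this shape are what make the graph-based approach adaptable to different bug classes.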