Biblio
With the rapid development of DC transmission technology and High Voltage Direct Current (HVDC) programs, the reliability of AC/DC hybrid power grids is attracting increasing attention. This paper takes both the static and dynamic characteristics of the system into account and proposes a novel reliability evaluation method for AC/DC hybrid systems that considers transient security constraints, based on the Monte Carlo method and transient stability analysis. The interaction of the AC and DC systems after a fault is considered in the evaluation process. When a fault occurs in the system, transient stability analysis is performed first, using BPA software to improve computational accuracy and speed. A new system state is then generated from the transient analysis results, a minimum load shedding model for the AC/DC hybrid system with HVDC is applied, and an adequacy analysis is carried out on the new state. The proposed method evaluates the reliability of AC/DC hybrid grids more comprehensively while reducing problem complexity, as demonstrated on the IEEE-RTS 96 system and an actual large-scale system.
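The Monte Carlo state-sampling core of such an adequacy evaluation can be sketched as below. This is a minimal illustration only: it samples component up/down states from hypothetical forced outage rates and counts load-shedding events, and it deliberately omits the paper's transient stability analysis, BPA coupling, and HVDC load shedding model. All component names, rates, and capacities are invented for the example.

```python
import random

def sample_state(components, rng):
    """Sample an up/down state for each component from its forced outage rate."""
    return {name: rng.random() >= for_rate for name, for_rate in components.items()}

def shed_load(state, capacities, demand):
    """Toy adequacy check: shed whatever demand the surviving capacity cannot cover."""
    available = sum(capacities[c] for c, up in state.items() if up)
    return max(0.0, demand - available)

def monte_carlo_adequacy(components, capacities, demand, n_samples=20000, seed=1):
    """Estimate loss-of-load probability (LOLP) and mean load shed per sample."""
    rng = random.Random(seed)
    loss_events, total_shed = 0, 0.0
    for _ in range(n_samples):
        state = sample_state(components, rng)
        shed = shed_load(state, capacities, demand)
        if shed > 0:
            loss_events += 1
            total_shed += shed
    return loss_events / n_samples, total_shed / n_samples

# Hypothetical 3-unit system: forced outage rates and capacities (MW).
components = {"G1": 0.02, "G2": 0.05, "G3": 0.04}
capacities = {"G1": 100, "G2": 80, "G3": 60}
lolp, mean_shed = monte_carlo_adequacy(components, capacities, demand=150)
```

In the paper's method, each sampled post-fault state would additionally be screened by transient stability analysis before the adequacy (load shedding) step.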
Nowadays, security is a challenging task in different types of networks, such as mobile networks, Wireless Sensor Networks (WSNs), and Radio Frequency Identification (RFID) systems; to overcome these challenges we use signcryption. Signcryption is a public key cryptographic primitive that performs the functions of digital signature and encryption in a single logical step. The main contribution of signcryption schemes is that they are well suited to resource-constrained environments. Several signcryption schemes are based on RSA, Elliptic Curves (EC), and Hyper Elliptic Curves (HEC). This paper presents a critical review of signcryption schemes based on hyper elliptic curves, since they reduce communication and computational costs for resource-constrained devices, and explores the advantages and disadvantages of different HEC-based signcryption schemes.
Data provenance provides a way for scientists to observe how experimental data originates, conveys process history, and explains influential factors such as experimental rationale and associated environmental factors from system metrics measured at runtime. The US Department of Energy Office of Science Integrated end-to-end Performance Prediction and Diagnosis for Extreme Scientific Workflows (IPPD) project has developed a provenance harvester that is capable of collecting observations from file-based evidence typically produced by distributed applications. To achieve this, file-based evidence is extracted and transformed into an intermediate data format inspired in part by the W3C CSV on the Web recommendations, called the Harvester Provenance Application Interface (HAPI) syntax. This syntax provides a general means to pre-stage provenance into messages that are both human readable and capable of being written to a provenance store, Provenance Environment (ProvEn). HAPI is being applied to harvest provenance from climate ensemble runs for the Accelerated Climate Modeling for Energy (ACME) project funded under the U.S. Department of Energy's Office of Biological and Environmental Research (BER) Earth System Modeling (ESM) program. ACME informally provides provenance in a native form through configuration files, directory structures, and log files that contain success/failure indicators, code traces, and performance measurements. Because of its generic format, HAPI is also being applied to harvest tabular job management provenance from Belle II DIRAC scheduler relational database tables as well as from other scientific applications that log provenance-related information.
The ubiquity of the Internet and email has provided a mostly insecure communication medium for the consumer. During the last few decades, we have seen the development of several ways to secure email messages. However, these solutions are inflexible and difficult to use for encrypting email messages to protect security and privacy while communicating or collaborating via email. Under the current paradigm, the arduous process of setting up email encryption is non-intuitive for the average user. The complexity of current practices has also led to developers' incorrect interpretations of the architecture, which has resulted in interoperability issues. As a result, the lack of a simple and easy-to-use infrastructure means that consumers still send plain-text emails over insecure networks. In this paper, we introduce and describe a novel, holistic model with new techniques for protecting email messages. The architecture of our model is simpler and easier to use than those currently employed. We use a simplified trust model, which relieves users from having to perform many complex steps to achieve email security. Utilizing the new techniques presented in this paper can safeguard users' email from unauthorized access and protect their privacy. In addition, a simplified infrastructure enables developers to understand the architecture more readily, eliminating interoperability issues.
Smart IoT applications require connecting multiple IoT devices and networks with multiple services running in fog and cloud computing platforms. One approach to connecting IoT devices with cloud and fog services is to create a federated virtual network. The main benefit of this approach is that IoT devices can then interact with multiple remote services over an application-specific federated network that carries no traffic from other applications. This federated network spans multiple cloud platforms and IoT networks but can be managed as a single entity. From the point of view of security, federated virtual networks can be managed centrally and secured with a coherent global network security policy. This does not mean that the same security policy applies everywhere, but that the different security policies are specified in a single coherent security policy. In this paper we propose to extend a federated cloud networking security architecture so that it can secure IoT devices and networks. The federated network is extended to the edge of IoT networks by integrating a federation agent in an IoT gateway or network controller (CAN bus, 6LoWPAN, LoRa, etc.). This allows communication between the federated cloud network and the IoT network. The security architecture is based on the concepts of network function virtualisation (NFV) and service function chaining (SFC) for composing security services. The IoT network and devices can then be protected by security virtual network functions (VNFs) running at the edge of the IoT network.
In the near future, the vehicular cloud will help to improve traffic safety and efficiency. Unfortunately, vehicular cloud and fog computing face a set of challenges in security, authentication, privacy, confidentiality, and detection of misbehaving vehicles. In addition, there is a need to distinguish false messages from genuine received messages in VANETs while vehicles are moving on the road. In this work, the security issues and challenges of computing in the vehicular cloud are studied.
In the Internet of Things (IoT), smart devices are connected using various communication protocols, such as Wi-Fi and ZigBee, and some IoT devices have multiple built-in communication modules. If an IoT device equipped with multiple communication protocols is compromised by an attacker using one protocol (e.g., Wi-Fi), it can be exploited as an entry point to the IoT network: another protocol (e.g., ZigBee) of this device can then be used to exploit vulnerabilities of other IoT devices sharing that protocol. In order to find potential attacks enabled by such cross-protocol devices, we group IoT devices based on their communication protocols and construct a graphical security model for each group of devices using the same protocol. We combine the security models via the cross-protocol devices and compute hidden attack paths traversing different groups of devices. We use two use cases in a smart home scenario to demonstrate our approach and discuss some feasible countermeasures.
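The merging step can be illustrated with a toy graph model. The sketch below (not the paper's formalism) builds one reachability graph per protocol for hypothetical smart-home devices, merges them at the dual-protocol hub, and enumerates attack paths that only exist in the combined model.

```python
from collections import deque

# Hypothetical smart-home devices grouped by protocol; an edge means
# "can reach/exploit over this protocol".
wifi_edges = {"internet": ["router"], "router": ["camera", "hub"], "camera": [], "hub": []}
zigbee_edges = {"hub": ["smart_lock", "bulb"], "smart_lock": [], "bulb": []}

def merge_models(*models):
    """Combine per-protocol security models: cross-protocol devices (here the
    'hub', which speaks both Wi-Fi and ZigBee) appear in several models and
    act as bridge nodes in the merged graph."""
    merged = {}
    for model in models:
        for node, nbrs in model.items():
            merged.setdefault(node, set()).update(nbrs)
    return merged

def attack_paths(graph, src, dst):
    """Enumerate simple attack paths from src to dst by BFS over a model."""
    paths, queue = [], deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            paths.append(path)
            continue
        for nbr in graph.get(path[-1], ()):
            if nbr not in path:
                queue.append(path + [nbr])
    return paths

combined = merge_models(wifi_edges, zigbee_edges)
hidden = attack_paths(combined, "internet", "smart_lock")
# The path exists only because the hub bridges Wi-Fi and ZigBee:
# no path reaches the smart lock in the Wi-Fi model alone.
```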
Node compromise is still the hardest attack to counter in Wireless Sensor Networks (WSNs). It affects key distribution, which is a building block for securing communications in any network. The weak point of several proposed key distribution schemes in WSNs is their lack of resilience to node compromise: when a node is compromised, all its key material is revealed, leading to insecure communication links throughout the network. This drawback is more harmful for long-lived WSNs that are deployed in multiple phases, i.e., multi-phase WSNs (MPWSNs). In the last few years, many key management schemes have been proposed to ensure security in WSNs. However, these schemes were conceived for single-phase WSNs, and their security degrades over time as an attacker captures nodes. To deal with this drawback and enhance resilience to node compromise over the whole lifetime of the network, we propose in this paper a new key pre-distribution scheme adapted to MPWSNs. Our scheme takes advantage of the resilience improvement of the Q-composite key scheme and adds self-healing, the ability of the scheme to decrease the effect of node compromise over time. Self-healing is achieved by pre-distributing fresh keys to each generation. The evaluation of our scheme shows that it has good key connectivity and high resilience to node compromise compared to existing key management schemes.
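The Q-composite building block the scheme starts from can be sketched as follows. This is a minimal illustration of the standard Q-composite idea (random key rings plus a link key hashed from all shared keys), with invented pool sizes; the paper's per-generation fresh-pool self-healing would correspond to calling `predistribute` anew for each deployment phase.

```python
import hashlib
import random

def predistribute(pool_size, ring_size, n_nodes, seed=0):
    """Give each node a random key ring drawn from a global key pool."""
    rng = random.Random(seed)
    pool = {i: hashlib.sha256(f"pool-key-{i}".encode()).digest()
            for i in range(pool_size)}
    rings = [set(rng.sample(range(pool_size), ring_size)) for _ in range(n_nodes)]
    return pool, rings

def link_key(pool, ring_a, ring_b, q):
    """Q-composite rule: a link key exists only if the two nodes share at
    least q pool keys; it is derived by hashing *all* shared keys, so an
    attacker must compromise every one of them to break the link."""
    shared = sorted(ring_a & ring_b)
    if len(shared) < q:
        return None  # not enough overlap; no secure link
    h = hashlib.sha256()
    for idx in shared:
        h.update(pool[idx])
    return h.digest()

# Tiny worked example with hand-picked rings for determinism.
pool, _ = predistribute(pool_size=16, ring_size=4, n_nodes=2)
key_ab = link_key(pool, {1, 2, 3, 5}, {2, 3, 8, 9}, q=2)
```

Raising q strengthens resilience (more keys must be captured per link) at the cost of connectivity, which is exactly the trade-off the Q-composite scheme exposes.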
Cloud computing has become a part of people's lives. However, there are many unresolved problems with the security of this technology. According to the assessment of international experts in the field of security, there are risks of cloud collusion under uncertain conditions. To mitigate this type of uncertainty, and to minimize the data redundancy of encryption together with the harm caused by cloud collusion, modified threshold Asmuth-Bloom and weighted Mignotte secret sharing schemes are used. We show that if the attackers hold secret shares but do not know the secret key, they cannot recover the secret. If the attackers do not hold the required number of secret shares but know the secret key, the probability that they obtain the secret depends on the machine word size l in bits and is less than 1/2^(l-1). We demonstrate that the proposed scheme ensures security under several types of attacks. We propose four approaches to selecting weights for the secret sharing schemes to optimize system behavior: pessimistic, balanced, and optimistic approaches based on data access speed, and one based on the speed-to-price ratio. We use an approximate method to improve the detection, localization, and error correction accuracy under uncertainty of cloud parameters.
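The classical (unmodified, unweighted) Asmuth-Bloom threshold scheme underlying this construction can be sketched as below, with toy parameters chosen by hand; the paper's modified/weighted variants and key handling are not reproduced here.

```python
import random
from functools import reduce

def crt(residues, moduli):
    """Chinese Remainder Theorem: recover y mod prod(moduli)."""
    M = reduce(lambda a, b: a * b, moduli)
    y = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        y += r * Mi * pow(Mi, -1, m)  # pow(..., -1, m): modular inverse
    return y % M

def split(secret, m0, moduli, k, rng=random):
    """Asmuth-Bloom (k, n) sharing: requires pairwise-coprime moduli with
    prod(smallest k) > m0 * prod(largest k-1), and secret < m0.  Blind the
    secret as y = secret + a*m0 below prod(smallest k), then share y mod m_i."""
    limit = reduce(lambda a, b: a * b, sorted(moduli)[:k])
    y = secret + m0 * rng.randrange((limit - secret) // m0)
    return [(y % m, m) for m in moduli]

def combine(shares, m0):
    """Any k shares recover y via CRT; reducing mod m0 strips the blinding."""
    residues, moduli = zip(*shares)
    return crt(residues, moduli) % m0

# Toy (2, 3) instance: m0 = 7, moduli 11, 13, 17 (11*13 > 7*17 holds).
m0, moduli, k = 7, [11, 13, 17], 2
shares = split(5, m0, moduli, k)
```

Fewer than k shares constrain y only modulo a product smaller than the secret range, which is where the information-theoretic hiding comes from.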
There are vast amounts of information in our world, and accessing the most accurate information quickly is becoming more difficult and complicated. A lot of relevant information is ignored, which leads to much duplication of work and effort; the focus therefore tends to be on rapid and intelligent retrieval systems. Information retrieval (IR) is the process of searching for information related to some topic of interest. Because of massive search results, users normally have difficulty identifying the relevant ones. To alleviate this problem, a recommendation system is used. A recommendation system is a kind of information filtering system that predicts the relevance of retrieved information to the user's needs according to some criteria, and can therefore provide the user with the results that best fit their needs. Services provided through the web normally return massive amounts of information about any requested item or service, and an efficient recommendation system is required to classify these results. A recommendation system can be further improved if augmented with trust information, so that recommendations are ranked according to their level of trust. In our research, we built a recommendation system combined with an efficient level-of-trust system to guarantee that posts, comments, and feedback from users are trusted. We adapted the concept of LoT (Level of Trust) [1], since it can cover medical, shopping, and learning applications in social media. The proposed system, TRS_LoT, provides trusted recommendations to users with a high percentage of accuracy. A dataset of 300 posts with more than 5,000 comments from "Amazon" was selected, and the experiment was conducted on this dataset based on post ratings.
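The core idea of trust-augmented ranking can be illustrated with a tiny sketch. This is not the TRS_LoT algorithm, just the general principle under a simple assumption: each rating is weighted by the trust score of the user who gave it, so praise from low-trust users carries little weight. All post names, ratings, and trust values are invented.

```python
def trust_weighted_score(ratings):
    """Each (rating, trust) pair contributes rating * trust; the score is
    the mean contribution over all ratings of the post."""
    if not ratings:
        return 0.0
    return sum(r * t for r, t in ratings) / len(ratings)

def rank_posts(posts):
    """Rank posts by trust-weighted rating, highest first."""
    return sorted(posts, key=lambda p: trust_weighted_score(posts[p]), reverse=True)

# Hypothetical posts: lists of (star rating 1-5, reviewer trust 0-1) pairs.
posts = {
    "post_a": [(5, 0.1), (5, 0.1)],            # praised only by low-trust users
    "post_b": [(4, 0.9), (4, 0.8), (3, 0.7)],  # solid ratings from trusted users
}
```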
With the development of cloud computing and its economic benefits, more and more companies and individuals outsource their data and computation to clouds. Meanwhile, this way of outsourcing resources takes the data out of its owner's control and results in many security issues. Existing secure keyword search methods assume that cloud servers are curious-but-honest or partially honest, which leaves them powerless against deliberately falsified or fabricated results from insider attacks. In this paper, we propose a verifiable single-keyword top-k search scheme against insider attacks which can verify the integrity of search results. Data owners generate verification codes (VCs) for the corresponding files, which embed the ordered sequence of the relevance scores between files and keywords. The files and corresponding VCs are then outsourced to cloud servers. When a data user performs a keyword search, the qualified result files are determined according to the relevance scores between the files and the keyword of interest, and returned to the data user together with a VC. The data user verifies the integrity of the result files by reconstructing a new VC over the received files and comparing it with the received one. Performance evaluations have been conducted to demonstrate the efficiency and result redundancy of the proposed scheme.
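The VC generate-then-recompute flow can be sketched as follows. This is a simplified stand-in for the paper's construction: here the VC is just a keyed hash over the file IDs in relevance order, so dropping, swapping, or reordering results breaks verification. File names, scores, and the key are illustrative.

```python
import hashlib

def make_vc(scored_files, k, key=b"owner-mac-key"):
    """Data owner: embed the order of the top-k relevance scores in a
    verification code (a keyed hash over the ranked file IDs)."""
    ranked = sorted(scored_files, key=lambda fs: fs[1], reverse=True)[:k]
    h = hashlib.sha256(key)
    for file_id, _score in ranked:
        h.update(file_id.encode())
    return h.hexdigest()

def verify(returned_ids, k, vc, key=b"owner-mac-key"):
    """Data user: rebuild the VC from the files the server returned and
    compare; a malicious insider altering, reordering, or dropping results
    produces a mismatch."""
    h = hashlib.sha256(key)
    for file_id in returned_ids[:k]:
        h.update(file_id.encode())
    return h.hexdigest() == vc

# Owner side: relevance scores for one keyword (hypothetical values).
scored = [("f1", 0.9), ("f2", 0.7), ("f3", 0.2)]
vc = make_vc(scored, k=2)
```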
The rise of the big data age on the Internet has led to explosive growth in data volume. However, trust has become the biggest problem of big data, hindering safe data circulation and industry development. Blockchain technology provides a new solution to this problem by combining tamper-resistance and traceability with smart contracts that automatically execute default instructions. In this paper, we present a credible big data sharing model based on blockchain technology and smart contracts to ensure the safe circulation of data resources.
State-of-the-art Android malware often encrypts or encodes malicious code snippets to evade malware detection; in this paper, such undetectable codes are called Mysterious Codes. To make such codes detectable, we design a system called Droidrevealer to automatically identify Mysterious Codes and then decode or decrypt them. The prototype of Droidrevealer is implemented and evaluated with 5,600 malware samples. The results show that 257 samples contain Mysterious Codes and 11,367 items are exposed. Furthermore, several sensitive behaviors hidden in the Mysterious Codes are disclosed by Droidrevealer.
Hackers create different types of malware, such as Trojans, which they use to steal user-confidential information (e.g., credit card details) with a few simple commands. Recent malware, however, has been created intelligently and at an uncontrolled scale, which makes malware analysis one of the most important topics in information security. This paper proposes an efficient dynamic malware-detection method based on API similarity. The proposed method outperforms the traditional signature-based detection method. The experiment evaluated 197 malware samples, and the proposed method showed promising results in correctly identifying malware.
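One common way to realize API similarity, shown here as an illustrative sketch rather than the paper's exact method, is to compare the set of API calls observed in a sandbox run against known malicious traces using Jaccard similarity. The API names and threshold below are invented for the example.

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of observed API calls."""
    return len(a & b) / len(a | b) if a | b else 0.0

def classify(trace, known_malware, threshold=0.6):
    """Flag a run as malicious if its API-call set is close enough to any
    known malicious trace; returns (is_malicious, best_similarity)."""
    best = max((jaccard(set(trace), set(m)) for m in known_malware), default=0.0)
    return best >= threshold, best

# Hypothetical API traces captured dynamically (process-injection pattern).
known = [["CreateRemoteThread", "WriteProcessMemory", "OpenProcess", "VirtualAllocEx"]]
sample = ["OpenProcess", "VirtualAllocEx", "WriteProcessMemory",
          "CreateRemoteThread", "CloseHandle"]
is_mal, score = classify(sample, known)
```

Unlike a signature match on bytes, the set comparison tolerates reordered or padded call sequences, which is the intuition behind behavior-based detection outperforming signatures.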
We set up a framework for the formal proofs of RFID protocols in the computational model. We rely on the so-called computationally complete symbolic attacker model. Our contributions are: (1) to design (and prove sound) axioms reflecting the properties of hash functions (collision resistance, PRF); (2) to formalize computational unlinkability in the model; and (3) to illustrate the method, providing the first formal proofs of unlinkability of RFID protocols in the computational model.
With the development of cyber threats on the Internet, the number of malware samples, especially unknown malware, is increasing dramatically. Since not all malware can be analyzed by analysts, it is very important to single out the new malware that should be. Existing approaches to this issue focus on malware classification using static or dynamic analysis results. However, static and dynamic analyses are themselves costly, and it is not easy to build isolated, secure, and Internet-like analysis environments such as sandboxes. In this paper, we propose a lightweight malware classification method based on the detection results of anti-virus software. Since the proposed method reduces the volume of malware that analysts must examine, it can be used as a preprocessing step for in-depth malware analysis. The experiments showed that the proposed method succeeded in classifying 1,000 malware samples into 187 unique groups, which means that 81% of the original samples do not need to be analyzed by analysts.
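A lightweight grouping of this kind can be sketched by normalizing AV detection labels into family tokens and clustering samples by token. This is only an illustration of the idea; the label formats, noise list, and samples below are hypothetical, not the paper's actual procedure.

```python
import re
from collections import defaultdict

def family_of(label):
    """Extract a coarse family name from an AV detection label such as
    'Trojan.Win32.Agent.abc' or 'Worm:Win32/Conficker.A'."""
    tokens = re.split(r"[.:/\-]", label)
    noise = {"win32", "win64", "trojan", "worm", "virus", "generic", ""}
    for tok in tokens:
        if tok.lower() not in noise:
            return tok.lower()
    return "unknown"

def group_samples(detections):
    """Group malware samples by the family token of their AV label; each
    group then needs only one representative in-depth analysis."""
    groups = defaultdict(list)
    for sample, label in detections.items():
        groups[family_of(label)].append(sample)
    return dict(groups)

# Hypothetical detection results from anti-virus software.
detections = {
    "s1": "Trojan.Win32.Agent.abc",
    "s2": "Trojan:Win32/Agent.B",
    "s3": "Worm.Win32.Conficker.A",
}
groups = group_samples(detections)
```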
This paper proposes the implementation of a chaotic cryptosystem with a new key generator using chaos in a digital filter for data encryption and decryption. The chaos in the second-order digital filter is produced by coefficients that are initialized in the key generator to produce new coefficients. The private key system uses the initial coefficient values and a dynamic input, a 16-character password, to generate the coefficients for the cryptosystem. In addition, we specifically aim to provide a solution for data security in lightweight cryptography based on external and internal keys, with appropriate key sensitivity and high performance. The chaos in the digital filter serves as the core of the system. The experimental results illustrate that the proposed data encryption system with the new key generator is highly sensitive, achieving a key sensitivity test accuracy of 99%, and can make data more secure with high performance.
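Chaos in a second-order digital filter classically arises from the two's-complement overflow nonlinearity. The sketch below illustrates that mechanism as a toy XOR keystream cipher; the coefficients, initial state, and byte extraction are illustrative assumptions, not the paper's key generator or password schedule.

```python
def overflow(v):
    """Two's-complement overflow nonlinearity: wrap v into [-1, 1)."""
    return ((v + 1.0) % 2.0) - 1.0

def filter_keystream(a1, a2, w0, w1, n):
    """Keystream bytes from a second-order digital filter with overflow:
    w[k] = overflow(a1*w[k-1] + a2*w[k-2]).  The coefficients and the
    initial state act as the key in this toy model."""
    out = bytearray()
    for _ in range(n):
        w0, w1 = w1, overflow(a1 * w1 + a2 * w0)
        out.append(int((w1 + 1.0) * 127.5) & 0xFF)  # map [-1, 1) to a byte
    return bytes(out)

def xor_crypt(data, a1=1.7, a2=-1.0, w0=0.123, w1=0.456):
    """Encrypt/decrypt by XOR with the filter keystream (symmetric)."""
    ks = filter_keystream(a1, a2, w0, w1, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))
```

With a2 = -1 (the lossless case) and |a1| < 2, the overflow map is known to exhibit chaotic trajectories, which is what makes the output key-sensitive.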
Under the Internet of Things (IoT), proof of the coexistence of multiple RFID-tagged objects becomes a very useful mechanism in many application areas, such as health care, evidence in court, and stores. The yoking-proof scheme addresses this issue. However, all existing yoking-proof schemes require two or more rounds of communication to generate the proof. In this paper, we investigate the design of one-round yoking-proof schemes. Our contributions are threefold: (1) to confirm the coexistence of an RFID tag pair, we propose a one-round offline yoking-proof scheme with privacy protection; (2) we define a privacy model for the yoking-proof scheme and enhance Moriyama's security model for it, and prove the security and privacy of the proposed scheme under our models; (3) we further extend the scheme to the coexistence of m RFID tags, where m > 2, while remaining one-round. In addition, the proposed technique has an efficiency advantage compared with previous work.
In this paper, a random number generation method based on a piecewise-linear one-dimensional (PL1D) discrete-time chaotic map is proposed for applications in cryptography and steganography. Appropriate parameters are determined by examining the distribution of the underlying chaotic signal, and the random number generator (RNG) is numerically verified by the four fundamental statistical tests of FIPS 140-2. The proposed design is practically realized on field programmable analog and digital arrays (FPAA-FPGA). Finally, it is experimentally verified that the presented RNG fulfills the NIST 800-22 randomness tests without post-processing.
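A representative PL1D map is the skew tent map, sketched below with a crude monobit check in the spirit of FIPS 140-2. The parameter, seed, and threshold are illustrative assumptions; the paper tunes its parameters against the full FIPS 140-2 and NIST 800-22 suites and realizes the design in FPAA-FPGA hardware.

```python
def skew_tent_rng(x, p, n):
    """Bits from a piecewise-linear one-dimensional (PL1D) skew tent map:
    x -> x/p on [0, p), and (1-x)/(1-p) on [p, 1].  The map preserves the
    uniform density, so thresholding at 0.5 gives roughly balanced bits."""
    bits = []
    for _ in range(n):
        x = x / p if x < p else (1.0 - x) / (1.0 - p)
        bits.append(1 if x >= 0.5 else 0)
    return bits

def monobit_ok(bits, tol=0.05):
    """Crude monobit check: the fraction of ones should be near 1/2."""
    ones = sum(bits) / len(bits)
    return abs(ones - 0.5) <= tol

# Illustrative parameters: near-symmetric skew tent, arbitrary seed.
bits = skew_tent_rng(x=0.3141592653, p=0.4999, n=2000)
```

Using p slightly off 0.5 avoids the exact tent map's binary floating-point degeneration (repeated multiplication by exactly 2 eventually collapses the orbit to 0).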
In today's information age, with the rapid development and wide application of communication and network technology, more and more information is transmitted through networks, and information security and protection are becoming ever more important; cryptographic theory and technology have thus become an important research field in information science and technology. In recent years, many researchers have found a close relationship between chaos and cryptography. A chaotic system is extremely sensitive to its initial conditions and can produce a large number of sequences with good cryptographic properties, namely randomness, low correlation, complexity, and a wide spectrum, providing a new and effective means for data encryption. But chaotic cryptography, as a new interdisciplinary field, is still in the initial stage of its development: although many chaotic encryption schemes have been proposed, the methods of chaotic cryptography are not yet fully mature. This research is carried out against such a background, discussing key problems such as the chaotic maps used in chaotic cipher systems, chaotic sequence ciphers, and the chaotic random number generators used for key generation. Since one-dimensional chaotic encryption algorithms suffer from the defects of a small key space and low security, this paper selects a two-dimensional hyperchaotic system generated by coupled logistic maps as the research object. The research focuses on the application of hyperchaotic sequences to data encryption, makes some beneficial attempts at chaotic data encryption algorithms, and, at the same time, explores further applications of chaos in data encryption.
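A toy stand-in for the coupled-logistic-map construction is sketched below as a symmetric XOR stream cipher. The coupling form, parameters r and eps, the transient length, and the byte extraction are all illustrative assumptions, not the paper's actual hyperchaotic system or cipher.

```python
def coupled_logistic_keystream(x, y, n, r=3.99, eps=0.1):
    """Keystream bytes from two cross-coupled logistic maps, a toy
    two-dimensional system:
        x' = (1-eps)*r*x*(1-x) + eps*r*y*(1-y)
        y' = (1-eps)*r*y*(1-y) + eps*r*x*(1-x)
    With r = 3.99 both branches stay inside (0, 1)."""
    out = bytearray()
    for i in range(n + 100):  # discard a transient of 100 steps
        fx, fy = r * x * (1 - x), r * y * (1 - y)
        x, y = (1 - eps) * fx + eps * fy, (1 - eps) * fy + eps * fx
        if i >= 100:
            out.append(int((x + y) * 1e6) & 0xFF)
    return bytes(out)

def chaotic_xor(data, x0=0.3, y0=0.7):
    """Symmetric XOR cipher keyed by the maps' initial conditions; the
    sensitivity to (x0, y0) is what gives the scheme its key space."""
    ks = coupled_logistic_keystream(x0, y0, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))
```

Coupling two maps enlarges the key space (two seeds plus coupling strength) relative to a single one-dimensional map, which is the motivation stated above.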
Computing innovations and the growth of the web reduce the effort required for various processes. Among the industries that benefit most are electronic systems, banking, marketing, e-commerce, and so on. These systems mainly involve continuous information exchange between hosts. During this transfer there are many points where the confidentiality of the data and of the user can be lost. Areas with a higher probability of attack occurrence are known as vulnerable zones. Web-based system interaction is one such place, where many users perform their tasks according to the privileges allotted to them by the administrator. Here the attacker makes use of open areas, such as login pages or other entry points, to inject malicious scripts into the system. These scripts aim at compromising the security constraints designed into the system. Among the attacks related to user-injected scripts in web communications are SQL injection and cross-site scripting (XSS). Such attacks must be detected and removed before they affect the security and confidentiality of the data. During the last few years, various solutions have been integrated into systems to resolve such security issues in time. Input validation is one well-known approach, but it suffers from performance drops and limited matching. Other mechanisms, such as sanitization and tainting, generate high false-positive rates, reporting misclassified patterns. At their core, both involve string evaluation and change analysis of untrusted sources to fully determine the impact and depth of an attack. This work proposes an improved rule-based attack detection scheme with selected message fields for effectively identifying malicious scripts. The work blocks ordinary access for malicious sources using robust rule matching against a centralized repository that is regularly updated. At the initial level of evaluation, the work appears to provide a solid base for further research.
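Field-specific rule matching of the kind described can be sketched as follows. The rule set below is a small illustrative sample (far from a complete or production-grade filter), and the field names and patterns are assumptions, not the work's actual repository.

```python
import re

# Illustrative per-field rule set for common SQL-injection and XSS payloads.
RULES = {
    "username": [
        re.compile(r"('|--|;)\s*(or|and)\b", re.I),  # classic SQLi tautologies
        re.compile(r"\bunion\b.+\bselect\b", re.I),  # UNION-based extraction
    ],
    "comment": [
        re.compile(r"<\s*script\b", re.I),           # inline <script> XSS
        re.compile(r"\bon\w+\s*=", re.I),            # event-handler XSS
        re.compile(r"javascript\s*:", re.I),         # javascript: URI XSS
    ],
}

def inspect(field, value):
    """Check only the rules registered for this message field; return the
    matching rule pattern, or None if the value looks clean."""
    for rule in RULES.get(field, ()):
        if rule.search(value):
            return rule.pattern
    return None
```

Scoping rules to selected message fields, as the work proposes, keeps matching cheap and reduces the false positives that whole-request scanning tends to produce.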
Efficient management and control of modern and next-gen networks is of paramount importance, as networks have to maintain highly reliable service quality whilst supporting rapid growth in traffic demand and new application services. Rapid mitigation of network service degradations is a key factor in delivering high service quality. Automation is vital to achieving rapid mitigation of issues, particularly at the network edge where the scale and diversity are greatest. This automation involves the rapid detection, localization and (where possible) repair of service-impacting faults and performance impairments. However, the most significant challenge here is knowing what events to detect, how to correlate events to localize an issue, and what mitigation actions should be performed in response to the identified issues. These are defined as policies in systems such as ECOMP. In this paper, we present AESOP, a data-driven intelligent system to facilitate automatic learning of policies and rules for triggering remedial actions in networks. AESOP combines best operational practices (domain knowledge) with a variety of measurement data to learn and validate operational policies to mitigate service issues in networks. AESOP's design addresses the following key challenges: (i) learning from high-dimensional noisy data, (ii) capturing multiple fault models, (iii) modeling the high service cost of false positives, and (iv) accounting for the evolving network infrastructure. We present the design of our system and show results from our ongoing experiments that demonstrate the effectiveness of our policy learning framework.