Biblio
Internet Protocol version 6 (IPv6) over Low-Power Wireless Personal Area Networks (6LoWPAN) is extensively used in wireless sensor networks (WSNs) due to its ability to transmit IPv6 packets with low bandwidth and limited resources. 6LoWPAN performs several operations at each layer. Most existing security challenges concern the network layer, which is represented by the Routing Protocol for Low-Power and Lossy Networks (RPL). RPL components include WSN nodes with constrained resources; the exposure of RPL to various attacks may therefore lead to network damage. A sinkhole attack is a routing attack that can alter the network topology. This paper investigates the existing mechanisms for detecting sinkhole attacks on RPL-based networks. It categorizes and presents each mechanism according to certain aspects, and then discusses and compares their advantages and drawbacks with regard to resource consumption and false positive rate.
Security has always been a concern when it comes to data sharing in cloud computing. Cloud computing provides high computation power and memory and is a convenient way to share data. However, users may need to outsource shared data to a cloud server even though it contains valuable and sensitive information, so it is necessary to provide cryptographically enforced access control for data sharing systems. This paper discusses a promising access control approach for data sharing in the cloud: identity-based encryption. We introduce an efficient revocation scheme for the system, the revocable-storage identity-based encryption scheme, which provides both forward and backward security of ciphertexts. We then review the architecture and the steps involved in identity-based encryption. Finally, we propose a system that provides secure file sharing using the identity-based encryption scheme.
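A schematic sketch of the classic IBE workflow this abstract refers to (Setup, Extract, Encrypt, Decrypt), plus the time-period binding that revocable-storage schemes exploit. The hash/XOR "crypto" is a toy stand-in with no security, and, unlike real IBE, it derives the encryption pad from the master secret rather than from public parameters alone; it only illustrates the data flow.

```python
# Toy IBE-style data flow: Setup -> Extract -> Encrypt -> Decrypt.
# NOT real IBE: real schemes let a sender encrypt from public parameters,
# while this toy conflates that step via the master secret.
import hashlib, os

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

master_secret = os.urandom(32)                       # Setup (run by the PKG)

def extract(identity: str) -> bytes:                 # Extract: per-user key
    return H(master_secret, identity.encode())

def encrypt(identity: str, period: bytes, msg: bytes) -> bytes:
    pad = H(extract(identity), period)               # bind ciphertext to a period
    return bytes(m ^ p for m, p in zip(msg, pad))

def decrypt(user_key: bytes, period: bytes, ct: bytes) -> bytes:
    return bytes(c ^ p for c, p in zip(ct, H(user_key, period)))

# A revoked user whose key is never updated for a later period cannot decrypt
# ciphertexts of that period: the forward/backward-security idea in miniature.
ct = encrypt("alice@example.com", b"2024-Q1", b"hello")
print(decrypt(extract("alice@example.com"), b"2024-Q1", ct))
```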
Location-based Services (LBSs) provide valuable features but can also reveal sensitive user information. Decentralized privacy protection removes the need for a so-called anonymizer, but relying on peers is a double-edged sword: adversaries could mislead with fictitious responses or even collude to compromise their peers' privacy. We address exactly this problem: we strengthen the decentralized LBS privacy approach by securing peer-to-peer (P2P) interactions. Our scheme can provide precise, timely P2P responses by passing proactively cached Point of Interest (POI) information, and it reduces exposure to both the honest-but-curious LBS servers and peer nodes. Our scheme allows P2P responses to be validated with a very low fraction of queries affected even if a significant fraction of nodes is compromised. The exposure can be kept very low even if the LBS server or a large set of colluding curious nodes colludes with curious identity management entities.
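A minimal sketch of one simplified validation idea consistent with this setting: query several peers for the same cached POI data and accept only a strict-majority answer, tolerating a minority of fictitious responses. The scheme's actual validation mechanism is more involved; this is illustrative only.

```python
# Majority-vote validation of redundant peer responses (toy model).
from collections import Counter

def validated_response(peer_answers):
    counts = Counter(peer_answers)
    answer, votes = counts.most_common(1)[0]
    # Accept only with a strict majority of the queried peers.
    return answer if votes > len(peer_answers) // 2 else None

# Two honest peers outvote one fabricated response (hypothetical POI strings).
print(validated_response(["cafe@(48.1,11.5)", "cafe@(48.1,11.5)", "fake@(0,0)"]))
```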
The restoration of power distribution systems has a crucial role in the electric utility environment, considering both the pressure experienced by operators who must choose the corrective actions to follow in emergency restoration plans and the goals imposed by regulatory agencies. In this sense, decision-aiding systems and self-healing networks may be good alternatives, since they either perform an automated analysis of the situation, providing consistent, high-quality restoration plans, or even directly perform the restoration quickly and automatically, in both cases reducing the impacts caused by network disturbances. This work proposes a new restoration strategy that is novel in that it deals with the problem from the operator's viewpoint, without the simplifications used in most works in the literature. In this proposal, a permutation-based genetic algorithm is employed to restore the maximum amount of load, in real time, without depending on a priori knowledge of the location of the fault. To validate the proposed methodology, two large real systems were tested: one with 2 substations, 5 feeders, 703 buses, and 132 switches, and the other with 3 substations, 7 feeders, 21,633 buses, and 2,808 switches. These networks were tested under single and multiple failure situations. The results were achieved with very low processing time (on the order of ten seconds) while ensuring compliance with all operational requirements.
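A minimal sketch of a permutation-based genetic algorithm of the kind described: an individual is an ordering of switch operations, and fitness rewards restoring more load earlier. The switch set, load values, and fitness function are illustrative placeholders, not the authors' actual model.

```python
# Permutation GA with order crossover (OX) and swap mutation (toy problem).
import random

SWITCHES = list(range(8))                          # candidate switches (hypothetical)
LOAD_RESTORED = [5, 9, 3, 7, 2, 8, 4, 6]           # kW restored per switch (hypothetical)

def fitness(perm):
    # Earlier operations count more, mimicking "restore maximum load fast".
    return sum(LOAD_RESTORED[s] / (i + 1) for i, s in enumerate(perm))

def crossover(a, b):
    # Order crossover: copy a slice of parent a, fill the rest in b's order.
    i, j = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]
    rest = [g for g in b if g not in child]
    for k in range(len(a)):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def mutate(p, rate=0.2):
    if random.random() < rate:
        i, j = random.sample(range(len(p)), 2)
        p[i], p[j] = p[j], p[i]
    return p

pop = [random.sample(SWITCHES, len(SWITCHES)) for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                               # elitist selection
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(20)]

print("best switching order:", pop[0], "fitness:", round(fitness(pop[0]), 2))
```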
As modern attacks become more stealthy and persistent, detecting or preventing them at their early stages becomes virtually impossible. Instead, an attack investigation or provenance system aims to continuously monitor and log interesting system events with minimal overhead. Later, if the system observes any anomalous behavior, it analyzes the log to identify who initiated the attack and which resources were affected by the attack, and then assesses and recovers from any damage incurred. However, because of a fundamental tradeoff between log granularity and system performance, existing systems typically record system-call events without detailed program-level activities (e.g., memory operations) required for accurately reconstructing attack causality, or demand that every monitored program be instrumented to provide program-level information. To address this issue, we propose RAIN, a Refinable Attack INvestigation system based on a record-replay technology that records system-call events during runtime and performs instruction-level dynamic information flow tracking (DIFT) during on-demand process replay. Instead of replaying every process with DIFT, RAIN conducts system-call-level reachability analysis to filter out unrelated processes and to minimize the number of processes to be replayed, making inter-process DIFT feasible. Evaluation results show that RAIN effectively prunes out unrelated processes and determines attack causality with negligible false positive rates. In addition, the runtime overhead of RAIN is similar to existing system-call-level provenance systems, and its analysis overhead is much smaller than full-system DIFT.
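A minimal sketch of system-call-level reachability analysis in the spirit of RAIN: starting from a detection point, walk the provenance graph backward and forward to keep only entities causally connected to it, so only those processes need DIFT replay. The event log and graph format here are hypothetical simplifications.

```python
# Backward/forward reachability over a toy provenance graph.
from collections import defaultdict, deque

# (subject, object) edges from logged syscalls, e.g. process -> file written,
# file -> process that read it (hypothetical sample log).
edges = [("bash", "dropper"), ("dropper", "/tmp/payload"),
         ("/tmp/payload", "malware"), ("malware", "/etc/passwd"),
         ("editor", "notes.txt")]

fwd, rev = defaultdict(list), defaultdict(list)
for src, dst in edges:
    fwd[src].append(dst)
    rev[dst].append(src)

def reach(start, graph):
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

detection_point = "/etc/passwd"
ancestors = reach(detection_point, rev)      # what could have caused it
descendants = reach(detection_point, fwd)    # what it could have affected
print("replay candidates:", (ancestors | descendants) - {detection_point})
# "editor" and "notes.txt" are pruned: they are causally unrelated.
```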
Ransomware techniques have evolved over time, with the most resilient attacks making data recovery practically impossible. This has driven countermeasures to shift from prevention towards recovery, but in this paper we model ransomware attacks from an infection-vector point of view. We follow the basic infection chain of crypto ransomware and use Bayesian network statistics to infer some of the most common ransomware infection vectors. We also employ attack and sensor nodes to capture uncertainty in the Bayesian network.
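A minimal sketch of the inference step with a toy two-node Bayesian network: given an observed infection, compute the posterior over candidate infection vectors by Bayes' rule. The structure and all probabilities are invented for illustration, not the paper's network.

```python
# Posterior over infection vectors given an observed infection (toy BN).

# P(vector) priors and P(infection | vector) likelihoods (hypothetical).
prior = {"phishing": 0.5, "drive_by": 0.3, "rdp_brute": 0.2}
p_infect = {"phishing": 0.4, "drive_by": 0.25, "rdp_brute": 0.6}

# Bayes' rule: P(v | infected) = P(infected | v) P(v) / sum over vectors.
evidence = sum(prior[v] * p_infect[v] for v in prior)
posterior = {v: prior[v] * p_infect[v] / evidence for v in prior}

for vector, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"P({vector} | infected) = {p:.3f}")
```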
Ransomware attacks are becoming prevalent nowadays with the flourishing of crypto-currencies. As the most harmful variant of ransomware, crypto-ransomware encrypts the victim's valuable data and asks for ransom money. Paying the ransom, however, may not guarantee recovery of the encrypted data. Most existing work on ransomware defense focuses purely on detection. A few works consider data recovery from ransomware attacks, but they cannot defend against ransomware that obtains high system privileges. In this work, we design RDS3, a novel Ransomware Defense Strategy, in which we Stealthily back up data in the Spare space of a computing device so that data encrypted by ransomware can be restored. Our key idea is that the spare space storing the backup data is fully isolated from the ransomware; in this way, the ransomware is not able to ``touch'' the backup data regardless of the privilege it obtains. Security analysis and experimental evaluation show that RDS3 can mitigate ransomware attacks with an acceptable overhead.
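A minimal user-space sketch of the backup-on-write idea behind this strategy: before any file is overwritten, its current contents are copied into a store the writer cannot later modify. RDS3 realizes that isolation below the OS, in a device's spare space; a dictionary stands in for it here.

```python
# Keep the pre-overwrite version of every file in an isolated store (toy).
backup_store = {}     # stand-in for the isolated spare space (append-only)

def protected_write(files: dict, name: str, new_data: bytes):
    if name in files:
        # Snapshot the old version before it is overwritten.
        backup_store.setdefault(name, []).append(files[name])
    files[name] = new_data

files = {"report.docx": b"original contents"}
protected_write(files, "report.docx", b"ENCRYPTED-BY-RANSOMWARE")
print("restored:", backup_store["report.docx"][-1])   # original survives
```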
There are several security requirements identification methods proposed by researchers in up-front requirements engineering (RE). However, in open source software (OSS) projects, developers use lightweight representations and refine requirements frequently by writing comments. They also tend to discuss security aspects in comments by providing code snippets, attachments, and external resource links. Since most security requirements identification methods in up-front RE are based on textual information retrieval techniques, they are not suitable for OSS projects or just-in-time RE. In our study, we propose a new model based on logistic regression to identify security requirements in OSS projects. We used five metrics to build security requirements identification models and tested their performance by applying the models to three OSS projects. Our results show that four of the five metrics achieved high performance in intra-project testing.
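A minimal sketch of a logistic-regression identification model of this shape: five numeric metrics per comment, a binary "is a security requirement" label, and standard train/test evaluation. The feature names and synthetic data are placeholders; the paper's actual metrics are not reproduced here.

```python
# Five-metric logistic regression classifier on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
# Five hypothetical metrics per comment (e.g., security-keyword count,
# code-snippet presence, link count, comment length, attachment count).
X = rng.random((200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)   # synthetic label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```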
We present a framework for learning to describe fine-grained visual differences between instances using attribute phrases. Attribute phrases capture distinguishing aspects of an object (e.g., “propeller on the nose” or “door near the wing” for airplanes) in a compositional manner. Instances within a category can be described by a set of these phrases, and collectively they span the space of semantic attributes for a category. We collect a large dataset of such phrases by asking annotators to describe several visual differences between a pair of instances within a category. We then learn to describe and ground these phrases to images in the context of a reference game between a speaker and a listener. The goal of the speaker is to describe attributes of an image that allow the listener to correctly identify it within a pair. Data collected in a pairwise manner improves the ability of the speaker to generate, and the ability of the listener to interpret, visual descriptions. Moreover, due to the compositionality of attribute phrases, the trained listeners can interpret descriptions not seen during training for image retrieval, and the speakers can generate attribute-based explanations for differences between previously unseen categories. We also show that embedding an image into the semantic space of attribute phrases derived from listeners offers a 20% improvement in accuracy over existing attribute-based representations on the FGVC-aircraft dataset.
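A toy sketch of the listener's role in such a reference game: given a phrase embedding and two candidate image embeddings, pick the image the phrase most likely describes. The random vectors stand in for learned embeddings and carry no relation to the paper's models.

```python
# Listener as nearest-embedding selection over an image pair (toy).
import numpy as np

rng = np.random.default_rng(1)
phrase = rng.normal(size=64)                    # e.g., "propeller on the nose"
image_a = rng.normal(size=64)                   # distractor
image_b = phrase + 0.1 * rng.normal(size=64)    # the described image

def listener_pick(phrase, candidates):
    # Cosine similarity between the phrase and each candidate image.
    sims = [c @ phrase / (np.linalg.norm(c) * np.linalg.norm(phrase))
            for c in candidates]
    return int(np.argmax(sims))

print("listener picks image", listener_pick(phrase, [image_a, image_b]))  # -> 1
```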
Generative policies enable devices to generate their own policies that are validated, consistent, and conflict-free. This autonomy is required for security policy generation to deal with the large number of smart devices per person that will soon become reality. In this paper, we discuss the research issues that have to be addressed in order for devices involved in security enforcement to automatically generate their security policies, enabling policy-based autonomous security management. We discuss the challenges involved in the task of automatic security policy generation and outline some approaches based on machine learning that may potentially provide a solution to this problem.
The vision of cyber-physical systems (CPSs) considers the Internet as the future communication network for such systems. A challenge in this regard is to provide high communication reliability, especially for CPS applications in critical infrastructures. Examples include smart grid applications with reliability requirements between 99% and 99.9999% [2]. Even though the Internet is a cost-effective solution for such applications, the reliability of its end-to-end (e2e) paths is inadequate (often less than 99%). In this paper, we propose a Reliable Multipath Communication Approach for Internet-based CPSs (RC4CPS). RC4CPS is an e2e approach that utilizes the inherent redundancy of the Internet and the concept of multipath (MP) transport protocols to improve reliability measured in terms of availability. It provides online monitoring and MP selection in order to fulfill application-specific reliability requirements. In addition, our MP selection considers e2e path dependency and unavailability prediction to maximize the reliability gains of MP communication. Our results show that RC4CPS dynamic MP selection satisfies the reliability requirement while selecting e2e paths with low dependency and unavailability probability.
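A minimal sketch of availability-driven multipath selection in this spirit: pick the pair of e2e paths maximizing combined availability, discounting the gain when paths are dependent (share links). The availabilities, dependency factors, and discount model are all hypothetical simplifications.

```python
# Choose the e2e path pair with the best dependency-adjusted availability.
from itertools import combinations

paths = {"A": 0.985, "B": 0.99, "C": 0.97}          # measured availabilities
shared_links = {("A", "B"): 0.6, ("A", "C"): 0.1,   # dependency in [0, 1]
                ("B", "C"): 0.2}

def combined_availability(p, q, dep):
    base = max(paths[p], paths[q])
    ideal = 1 - (1 - paths[p]) * (1 - paths[q])     # independent-paths bound
    return base + (ideal - base) * (1 - dep)        # dependency discounts gain

best = max(combinations(sorted(paths), 2),
           key=lambda pq: combined_availability(*pq, shared_links[pq]))
print("selected path pair:", best)   # "A"+"C": weaker paths, but independent
```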
In this paper, we present a decentralized nonlinear robust controller to enhance the transient stability margin of synchronous generators. Although the trend in power system control is shifting towards centralized or distributed controller approaches, the remote data dependency of these schemes fuels cyber-physical security issues. Since excessive delays or loss of remote data severely affect the operation of such controllers, the designed controller emerges as an alternative for stabilizing Smart Grids when remote data are unavailable and in the presence of plant parametric uncertainties. The proposed controller actuates distributed storage systems such as flywheels in order to reduce stabilization time, and it implements a novel input time delay compensation technique. Lyapunov stability analysis proves that all tracking error signals are globally uniformly ultimately bounded. Furthermore, simulation results demonstrate that the proposed controller outperforms traditional local power system controllers such as Power System Stabilizers.
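For readers unfamiliar with the boundedness notion invoked here, the generic Lyapunov argument behind a GUUB claim (not the paper's specific proof) runs as follows:

```latex
% If a Lyapunov function V of the tracking error e satisfies
\dot{V}(e) \le -\alpha V(e) + \beta, \qquad \alpha, \beta > 0,
% then the comparison lemma gives
V(e(t)) \le V(e(0))\, e^{-\alpha t} + \frac{\beta}{\alpha},
% so every trajectory converges exponentially to, and remains inside, the
% residual set where V <= beta/alpha: globally uniformly ultimately bounded.
```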
With the rapid development of DC transmission technology and High Voltage Direct Current (HVDC) programs, the reliability of AC/DC hybrid power grids is drawing more and more attention. This paper takes both static and dynamic system characteristics into account and proposes a novel AC/DC hybrid system reliability evaluation method considering transient security constraints, based on the Monte-Carlo method and transient stability analysis. The interaction of the AC and DC systems after a fault is considered in the evaluation process. Transient stability analysis is performed first when a fault occurs in the system, with BPA software applied to improve computational accuracy and speed. The new system state is then generated according to the transient analysis results, a minimum load shedding model of the AC/DC hybrid system with HVDC is proposed, and adequacy analysis is applied to the new state. The proposed method can evaluate the reliability of an AC/DC hybrid grid more comprehensively and reduce the complexity of the problem, as tested on the IEEE-RTS 96 system and an actual large-scale system.
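A minimal sketch of the Monte-Carlo backbone of such an evaluation: sample component states, then charge load shedding for states that fail the security check. The transient-stability step via BPA and the minimum load-shedding optimization are replaced by trivial placeholders, and all numbers are hypothetical.

```python
# Monte-Carlo reliability estimate with a placeholder load-shedding model.
import random

components = {"line1": 0.01, "line2": 0.02, "hvdc": 0.005}  # outage probabilities
LOAD = 100.0  # MW, hypothetical system load

def sample_state():
    return {c: random.random() > q for c, q in components.items()}  # True = up

def shed_load(state):
    # Placeholder for transient analysis + minimum load-shedding model:
    # here, each failed component simply sheds a fixed share of load.
    return LOAD * 0.2 * sum(1 for up in state.values() if not up)

N = 100_000
expected_shed = sum(shed_load(sample_state()) for _ in range(N)) / N
print(f"expected load not served: {expected_shed:.3f} MW")
```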
This paper proposes a feedback shift register structure that can be split. It is based on a study of the operating characteristics of about 70 cryptographic algorithms, which shows that a “different operations, similar structure” reconfigurable design is feasible. Under the configuration information, the proposed structure can implement multiplication in the finite field GF(2^n), the multiply/divide linear feedback shift register, and other operations. Finally, we performed logic synthesis based on a 55nm CMOS standard-cell library; the results show that the proposed structure achieves a hardware resource saving of nearly 32% and an average power saving of nearly 55% without significantly increasing the critical path delay. Therefore, the “different operations, similar structure” reconfigurable design is a new design method, and the proposed feedback shift register structure can serve as an important processing unit for a coarse-grained reconfigurable cryptologic array.
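A minimal sketch of the kind of operation such a structure maps onto hardware: GF(2^n) multiplication via an LFSR-style shift-and-reduce loop, shown here in software for GF(2^8) with the AES polynomial x^8+x^4+x^3+x+1 (0x11B). The paper's contribution is, of course, a hardware datapath, not this code.

```python
# Shift-and-reduce (LFSR-style) multiplication in GF(2^8).
def gf_mul(a: int, b: int, poly: int = 0x11B, n: int = 8) -> int:
    result = 0
    for _ in range(n):
        if b & 1:                 # add (XOR) the current shifted copy of a
            result ^= a
        b >>= 1
        a <<= 1                   # shift-register step
        if a & (1 << n):          # reduce modulo the field polynomial
            a ^= poly
    return result

assert gf_mul(0x57, 0x83) == 0xC1   # known AES (FIPS-197) test vector
print(hex(gf_mul(0x57, 0x83)))
```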
The implementation of RFID technology in computer systems gives access to quality information on location and object tracking in real time, thereby improving workflow and leading to safer, faster, and better business decisions. This paper discusses quantitative indicators of the quality of a computer system supported by RFID technology applied to monitoring objects (pallets, packages, and people) marked with RFID tags. The results of the analysis of these quantitative quality indicators are presented in tables.
Choosing how to write natural language scenarios is challenging, because stakeholders may over-generalize their descriptions or overlook or be unaware of alternate scenarios. In security, for example, this can result in weak security constraints that are too general, or in missing constraints. Another challenge is that analysts are unclear on where to stop generating new scenarios. In this paper, we introduce the Multifactor Quality Method (MQM) to help requirements analysts empirically collect system constraints in scenarios based on elicited expert preferences. The method combines quantitative statistical analysis to measure system quality with qualitative coding to extract new requirements. The method is bootstrapped with minimal analyst expertise in the domain affected by the quality area, and then guides an analyst toward selecting expert-recommended requirements to monotonically increase system quality. We report the results of applying the method to security. These include 550 requirements elicited from 69 security experts during a bootstrapping stage, and a subsequent evaluation of these results in a verification stage with 45 security experts to measure the overall improvement of the new requirements. Security experts in our studies have an average of 10 years of experience. Our results show that, using our method, we detect an increase in the security quality ratings collected in the verification stage. Finally, we discuss how our proposed method helps to improve security requirements elicitation, analysis, and measurement.
Complex systems are prevalent in many fields such as finance, security, and industry. A fundamental problem in system management is to perform diagnosis in case of system failure such that the causal anomalies, i.e., root causes, can be identified for system debugging and repair. Recently, the invariant network has proven to be a powerful tool in characterizing complex system behaviors. In an invariant network, a node represents a system component, and an edge indicates a stable interaction between two components. Recent approaches have shown that by modeling fault propagation in the invariant network, causal anomalies can be effectively discovered. Despite their success, the existing methods have a major limitation: they typically assume there is only a single, global fault propagation in the entire network. However, in real-world large-scale complex systems, it is more common for multiple fault propagations to grow simultaneously and locally within different node clusters and jointly define the system failure status. Inspired by this key observation, we propose a two-phase framework to identify and rank causal anomalies. In the first phase, a probabilistic clustering is performed to uncover impaired node clusters in the invariant network. Then, in the second phase, a low-rank network diffusion model is designed to backtrack causal anomalies in different impaired clusters. Extensive experimental results on real-life datasets demonstrate the effectiveness of our method.
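A minimal sketch of backtracking causal anomalies with a diffusion model on an invariant network (the clustering phase is omitted): if observed broken-invariant scores are modeled as root-cause scores smoothed by a random walk, inverting the diffusion recovers and ranks the causes. The graph, scores, and single-propagation simplification are all illustrative.

```python
# Invert a random-walk diffusion to rank root-cause scores (toy network).
import numpy as np

A = np.array([[0, 1, 1, 0],          # adjacency of a small invariant network
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = A / A.sum(axis=0, keepdims=True)  # column-normalized propagation matrix

broken = np.array([0.9, 0.8, 0.3, 0.0])   # observed broken-invariant scores

# Model: broken = (1 - c) (I - c W)^{-1} r, so r = (I - c W) broken / (1 - c).
c = 0.5
r = (np.eye(4) - c * W) @ broken / (1 - c)
print("causal anomaly ranking:", np.argsort(-r))   # nodes, most causal first
```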
In the Energy Internet mode, a large volume of alarm information is generated when equipment exceptions and multiple faults occur in a large power grid, which seriously hampers information collection and fault analysis and delays accident handling by monitoring staff. To address this, this paper proposes a method for power grid monitoring that monitors and diagnoses faults in real time: it constructs an equipment fault logic model based on five-section alarm information, builds a standard fault information set, and uses a matching algorithm to realize fault information optimization, fault equipment location, fault type diagnosis, and analysis of false-report and missing-report messages. The validity and practicality of the proposed method were verified on an actual case; the method can shorten the time needed to obtain and analyze fault information, accelerate accident handling, and help ensure the safe and stable operation of the power grid.
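A minimal sketch of matching incoming alarms against a standard fault information set, covering fault location plus detection of missing and false reports as described. The fault signatures, alarm strings, and Jaccard scoring are hypothetical stand-ins for the paper's matching algorithm.

```python
# Match an alarm set to the best-fitting standard fault signature.
STANDARD_FAULTS = {
    "bus_fault_B1":  {"B1 protection trip", "CB101 open", "B1 voltage loss"},
    "line_fault_L7": {"L7 main protection trip", "CB702 open", "L7 reclose fail"},
}

def diagnose(alarms: set):
    best, best_score = None, 0.0
    for fault, signature in STANDARD_FAULTS.items():
        score = len(alarms & signature) / len(alarms | signature)  # Jaccard
        if score > best_score:
            best, best_score = fault, score
    missing = STANDARD_FAULTS[best] - alarms if best else set()    # missing reports
    extra = alarms - STANDARD_FAULTS[best] if best else alarms     # false reports
    return best, best_score, missing, extra

fault, score, missing, extra = diagnose({"B1 protection trip", "CB101 open"})
print(fault, round(score, 2), "missing:", missing, "false:", extra)
```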
Wireless sensor networks are responsible for sensing, gathering, and processing information about objects in the network coverage area. Basic data fusion technology generally does not provide a data privacy protection mechanism, yet privacy protection is usually indispensable in application areas such as health care, military reconnaissance, and smart homes. In this paper, we consider privacy, confidentiality, and the accuracy of fusion results, and propose a privacy-preserving data fusion algorithm (PPND). The algorithm relies on the characteristics of data fusion and uses random numbers pre-distributed to the nodes to meet the privacy protection requirements for the original data. Theoretical analysis shows that it is difficult for a malicious attacker to steal node privacy under the PPND algorithm. TOSSIM simulation results also show that, compared with the TAG and SMART algorithms, PPND performs well in terms of data traffic and convergence accuracy.
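A minimal sketch of the pre-distributed-random-number idea: paired nodes hold cancelling random masks, so the sink recovers the exact fused sum while no single reading is exposed in transit. This is a simplification for illustration, not the PPND protocol itself.

```python
# Pairwise cancelling masks: exact aggregate, hidden individual readings.
import random

readings = {"n1": 21.0, "n2": 19.5, "n3": 20.2, "n4": 20.9}

# Pre-distribution phase: each pair shares a mask; one node adds it, the
# other subtracts it, so the masks cancel in the fused sum.
pairs = [("n1", "n2"), ("n3", "n4")]
masks = {}
for a, b in pairs:
    m = random.uniform(-100, 100)
    masks[a], masks[b] = +m, -m

masked = {n: v + masks[n] for n, v in readings.items()}   # what is transmitted
print("masked values:", {n: round(v, 2) for n, v in masked.items()})
print("fused sum:", round(sum(masked.values()), 2))        # equals the true sum
assert abs(sum(masked.values()) - sum(readings.values())) < 1e-9
```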
While we have long had principles describing how access control enforcement should be implemented, such as the reference monitor concept, imprecision in access control mechanisms and policies leads to risks that may enable exploitation. In practice, least-privilege access control policies often allow information flows that may enable exploits. In addition, implementations of access control mechanisms often balance security with ease of use implicitly (e.g., with respect to determining where to place authorization hooks), and approaches to tighten access control, such as accounting for program context, are ad hoc. In this paper, we define four types of risks in access control enforcement and explore possible approaches and challenges in tracking those types of risks. In principle, we advocate runtime tracking to produce risk estimates for each of these types of risk. To better understand the potential of risk estimation for authorization, we propose risk estimate functions for each of the four types of risk, finding that benign program deployments accumulate risks in each of the four areas for the ten Android programs examined. As a result, we find that tracking relative risk may be useful for guiding changes to security choices, such as authorized unsafe operations or placement of authorization checks, when risk differs from that expected.
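A toy sketch of runtime risk accumulation in this spirit: each authorization decision adds weight to a per-category estimate, and a deployment's relative risk is compared against an expected baseline. The category names and weights are illustrative, not the paper's risk estimate functions.

```python
# Accumulate per-category risk at runtime and compare to a baseline profile.
from collections import defaultdict

RISK_TYPES = ("policy_flows", "unsafe_ops", "hook_placement", "context_loss")

class RiskTracker:
    def __init__(self):
        self.scores = defaultdict(float)

    def record(self, risk_type: str, weight: float = 1.0):
        assert risk_type in RISK_TYPES
        self.scores[risk_type] += weight

    def relative_risk(self, baseline: dict) -> dict:
        # Ratio > 1 flags categories exceeding the expected risk profile.
        return {t: self.scores[t] / max(baseline.get(t, 1e-9), 1e-9)
                for t in RISK_TYPES}

t = RiskTracker()
t.record("unsafe_ops", 2.0)   # e.g., an authorized-but-unsafe operation
t.record("policy_flows")      # e.g., a permitted risky information flow
print(t.relative_risk({"unsafe_ops": 1.0, "policy_flows": 2.0}))
```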
We report on our discovery of an algorithmic flaw in the construction of primes for RSA key generation in a widely-used library of a major manufacturer of cryptographic hardware. The primes generated by the library suffer from a significant loss of entropy. We propose a practical factorization method for various key lengths including 1024 and 2048 bits. Our method requires no additional information except for the value of the public modulus and does not depend on a weak or a faulty random number generator. We devised an extension of Coppersmith's factorization attack utilizing an alternative form of the primes in question. The library in question is found in NIST FIPS 140-2 and CC EAL 5+ certified devices used for a wide range of real-world applications, including identity cards, passports, Trusted Platform Modules, PGP and tokens for authentication or software signing. As the relevant library code was introduced in 2012 at the latest (and probably earlier), the impacted devices are now widespread. Tens of thousands of such keys were directly identified, many with significant impacts, especially for electronic identity documents, software signing, Trusted Computing and PGP. We estimate the number of affected devices to be on the order of at least tens of millions. The worst cases for the factorization of 1024 and 2048-bit keys are less than 3 CPU-months and 100 CPU-years on a single core of common recent CPUs, respectively, while the expected time is half that of the worst case. The attack can be parallelized on multiple CPUs. Worse still, all susceptible keys contain a strong fingerprint that is verifiable in microseconds on an ordinary laptop, meaning that all vulnerable keys can be quickly identified, even in very large datasets.
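A minimal sketch of the style of fast fingerprint test the abstract alludes to, assuming the known property of affected keys that N mod p lies in the multiplicative subgroup generated by 65537 for each small prime p. The published ROCA detectors use a longer prime list and precomputed masks; this trimmed version is illustrative only.

```python
# Screen an RSA modulus for the ROCA-style fingerprint (trimmed prime list).
SMALL_PRIMES = [11, 13, 17, 19, 37, 53, 61, 71, 73, 79, 97, 103, 107, 109]

def in_generator_subgroup(n: int, p: int, g: int = 65537) -> bool:
    # Enumerate the subgroup <g> modulo p and test membership of n mod p.
    residues, x = set(), 1
    while True:
        residues.add(x)
        x = x * g % p
        if x == 1:
            break
    return n % p in residues

def looks_vulnerable(n: int) -> bool:
    # A vulnerable modulus passes the test for every small prime; an ordinary
    # modulus fails with overwhelming probability, so this is a fast screen.
    return all(in_generator_subgroup(n, p) for p in SMALL_PRIMES)

# In real use, pass an actual RSA public modulus; a toy composite shown here.
print(looks_vulnerable(3233))
```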
Cloud computing presents unlimited prospects for the Information Technology (IT) industry and business enterprises alike. Rapid advancement brings a dark underbelly of new vulnerabilities and challenges unfolding with alarming regularity. Although cloud technology provides a ubiquitous environment facilitating business across disparate locations, its security effectiveness is questionable, as the platform is interspersed with threats that can bring everything that subscribes to the cloud to a halt. However, the advantages of cloud platforms far outweigh the drawbacks, and the study of new challenges helps overcome the drawbacks of this technology. One such emerging security threat is a ransomware attack on the cloud, which threatens to hold systems and data on the cloud network to ransom, with widespread damaging implications. This provides huge scope for IT security specialists to sharpen their skill set to overcome this new challenge. This paper covers the broad cloud architecture, current inherent cloud threat mechanisms, the ransomware vulnerabilities posed, and suggested methods to mitigate them.
Recognizing Families In the Wild (RFIW) is a large-scale, multi-track automatic kinship recognition evaluation, supporting both kinship verification and family classification on scales much larger than ever before. It was organized as a Data Challenge Workshop hosted in conjunction with ACM Multimedia 2017 and was achieved with the largest image collection that supports kin-based vision tasks. In this manuscript, we summarize the evaluation protocols, the progress made, some technical background and performance ratings of the algorithms used, and a discussion of promising directions for both researchers and engineers to take next in this line of work.