Biblio
In recent years, Moving Target Defense (MTD) has emerged as a potential game changer in the security landscape, owing to its ability to create asymmetric uncertainty that favors the defender. Many different MTD techniques have since been proposed, each addressing an often very specific set of attack vectors. Despite the substantial progress made in this area, critical gaps remain in the analysis and quantification of the costs and benefits of deploying MTD techniques. Common metrics for assessing the performance of these techniques are still lacking, and existing proposals tend to assess performance in different and often incompatible ways. This paper addresses these gaps by proposing a quantitative analytic model for assessing the resource availability and performance of MTDs, together with a method for determining the highest possible reconfiguration rate, and thus the smallest probability of attacker success, that meets performance and stability constraints. Finally, we present an experimental validation of the proposed approach.
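To make the trade-off concrete, here is a minimal sketch, assuming a deliberately simple model rather than the paper's analytic one: each reconfiguration costs a fixed downtime, availability falls linearly with the reconfiguration rate, and the attacker succeeds only if no reconfiguration occurs during an attack window of known length (Poisson reconfigurations). All parameter values are hypothetical.

```python
# Illustrative only: a toy availability model, not the paper's analytic model.
# Assumptions: each reconfiguration causes a fixed downtime `downtime_s`,
# reconfigurations occur at rate `r` (per second), and the attacker needs
# `attack_time_s` of uninterrupted stability to succeed.
import math

def availability(r, downtime_s):
    """Fraction of time the service is up under rate-r reconfigurations."""
    return max(0.0, 1.0 - r * downtime_s)

def attacker_success_prob(r, attack_time_s):
    """P(no reconfiguration during the attack window), assuming Poisson reconfigurations."""
    return math.exp(-r * attack_time_s)

def max_reconfig_rate(downtime_s, min_availability):
    """Largest rate that still satisfies the availability constraint."""
    return (1.0 - min_availability) / downtime_s

if __name__ == "__main__":
    d, a_min, attack = 2.0, 0.99, 300.0      # hypothetical parameters
    r_star = max_reconfig_rate(d, a_min)
    print(f"max rate: {r_star:.4f}/s, availability: {availability(r_star, d):.3f}, "
          f"attacker success: {attacker_success_prob(r_star, attack):.2e}")
```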
NoSQL databases have become popular with enterprises due to their scalable and flexible storage management of big data. Nevertheless, their popularity also raises security concerns. Most NoSQL databases lack built-in data encryption, relying on developers to implement cryptographic methods at the application level or in a middleware layer wrapped around the database. While this approach protects the confidentiality of data, it makes executing queries considerably harder. We were motivated to design a system that not only provides NoSQL databases with the necessary data security but also supports executing queries over encrypted data. A further challenge is how to exploit the distributed nature of NoSQL databases to deliver high performance and scalability under massive client access. In this research, we introduce Crypt-NoSQL, the first prototype to support query execution over encrypted data on NoSQL databases with high performance. Three different models of Crypt-NoSQL are proposed, and their performance is evaluated with the Yahoo! Cloud Serving Benchmark (YCSB) under very large numbers of clients. Our experimental results show that Crypt-NoSQL can process queries over encrypted data with high performance and scalability. We also propose guidance on establishing service-level agreements (SLAs) for Crypt-NoSQL offered as a cloud service.
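As a rough illustration of querying encrypted data (not Crypt-NoSQL's actual design), the sketch below uses an HMAC-derived deterministic token so the store can answer equality lookups without seeing plaintext keys, and symmetric encryption (Fernet from the `cryptography` package) for the stored values; a Python dict stands in for the NoSQL store, and all key names are hypothetical.

```python
# Illustrative sketch, not Crypt-NoSQL itself: equality queries over encrypted
# key-value data. HMAC gives a deterministic lookup token; Fernet encrypts values.
import hmac, hashlib
from cryptography.fernet import Fernet   # pip install cryptography

index_key = b"secret-index-key"          # hypothetical client-side keys
value_key = Fernet.generate_key()
fernet = Fernet(value_key)
store = {}                               # stands in for a NoSQL collection

def token(field_value: str) -> str:
    """Deterministic token so the server can match equality without plaintext."""
    return hmac.new(index_key, field_value.encode(), hashlib.sha256).hexdigest()

def put(user_id: str, record: str) -> None:
    store[token(user_id)] = fernet.encrypt(record.encode())

def get(user_id: str) -> str:
    return fernet.decrypt(store[token(user_id)]).decode()

put("alice", '{"balance": 42}')
print(get("alice"))                      # client decrypts; server never sees plaintext
```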
Reliable and scalable storage systems are key to cloud-based applications. In cloud storage, users store their data on remote servers rather than on their local computers, and secure storage is used to ensure the safety of that data. As more and more users rely on third-party cloud vendors to store their data, concerns have arisen among both users and cloud providers. Encryption-based approaches are commonly used in secure storage systems: data are encrypted and stored on persistent storage such as disks and flash memories, and decrypted when users need to access them. This way of managing data hurts the scalability and throughput of cloud systems. At the same time, cloud systems have to apply fault-tolerance strategies to the data, which further degrades performance. Together, these issues make data security in cloud systems costly. Aware of these issues, we propose methods to reduce the overhead of secure storage while still guaranteeing the safety of the data.
Quantum Key Distribution (QKD) is a revolutionary technology which leverages the laws of quantum mechanics to distribute cryptographic keying material between two parties with theoretically unconditional security. Terrestrial QKD systems are limited to distances of less than 200 km in both optical fiber and line-of-sight free-space configurations due to severe losses during single-photon propagation and the curvature of the Earth. Thus, the feasibility of fielding a low Earth orbit (LEO) QKD satellite to overcome this limitation is being explored; indeed, in August 2016 the Chinese Academy of Sciences successfully launched the world's first QKD satellite. However, many of the practical engineering performance and security tradeoffs associated with space-based QKD are not well understood for global secure key distribution. This paper presents several system-level considerations for modeling and studying space-based QKD architectures and systems. More specifically, it explores the behaviors and requirements that researchers must examine to develop a model for studying the effectiveness of QKD between LEO satellites and ground stations.
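A back-of-the-envelope comparison of why a satellite link can beat long fiber runs, assuming only fiber attenuation of roughly 0.2 dB/km at 1550 nm and diffraction-limited beam spreading for free space; turbulence, pointing error, and detector losses, which a real space-QKD link budget must include, are ignored, and the apertures and wavelength below are hypothetical.

```python
# Back-of-the-envelope channel-loss comparison, not a full space-QKD link budget
# (pointing error, atmospheric turbulence, detector efficiency, etc. are ignored).
import math

def fiber_loss_db(length_km, alpha_db_per_km=0.2):     # ~0.2 dB/km at 1550 nm
    return alpha_db_per_km * length_km

def freespace_geometric_loss_db(length_m, tx_aperture_m, rx_aperture_m, wavelength_m):
    """Diffraction-limited beam spread: divergence ~ 1.22 * lambda / D_tx."""
    beam_radius = length_m * 1.22 * wavelength_m / tx_aperture_m
    captured = min(1.0, (rx_aperture_m / (2 * beam_radius)) ** 2)
    return -10 * math.log10(captured)

print(f"200 km fiber:        {fiber_loss_db(200):.1f} dB")
print(f"500 km LEO downlink: "
      f"{freespace_geometric_loss_db(500e3, 0.3, 1.0, 850e-9):.1f} dB (geometric only)")
```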
Data provenance provides a way for scientists to observe how experimental data originate, conveys process history, and explains influential factors such as experimental rationale and associated environmental factors from system metrics measured at runtime. The US Department of Energy Office of Science Integrated end-to-end Performance Prediction and Diagnosis for Extreme Scientific Workflows (IPPD) project has developed a provenance harvester that is capable of collecting observations from file-based evidence typically produced by distributed applications. To achieve this, file-based evidence is extracted and transformed into an intermediate data format inspired in part by the W3C CSV on the Web recommendations, called the Harvester Provenance Application Interface (HAPI) syntax. This syntax provides a general means to pre-stage provenance into messages that are both human readable and capable of being written to a provenance store, the Provenance Environment (ProvEn). HAPI is being applied to harvest provenance from climate ensemble runs for the Accelerated Climate Modeling for Energy (ACME) project, funded under the U.S. Department of Energy's Office of Biological and Environmental Research (BER) Earth System Modeling (ESM) program. ACME informally provides provenance in native form through configuration files, directory structures, and log files that contain success/failure indicators, code traces, and performance measurements. Because of its generic format, HAPI is also being applied to harvest tabular job-management provenance from Belle II DIRAC scheduler relational database tables, as well as from other scientific applications that log provenance-related information.
Two-factor authenticated key-agreement schemes for the Session Initiation Protocol (SIP) emerged as the best remedy for the limitations of password-based authentication schemes. Recently, Lu et al. proposed an anonymous two-factor authenticated key-agreement scheme for SIP using elliptic curve cryptography. They claimed that their scheme is secure against attacks and achieves user anonymity. Contrary to these claims, this paper's analysis points out several severe security weaknesses in Lu et al.'s scheme. In addition, the paper puts forward an enhanced anonymous two-factor mutual authenticated key-agreement scheme for SIP using elliptic curve cryptography. The security and performance analyses demonstrate that the proposed scheme is more robust and efficient than Lu et al.'s scheme.
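For orientation, the sketch below shows only the elliptic-curve Diffie-Hellman core on which such SIP key-agreement schemes rest, using the Python `cryptography` package; the password/smart-card factors, anonymity mechanisms, and message formats of Lu et al.'s scheme and of the proposed enhancement are not reproduced.

```python
# Sketch of the ECDH core underlying ECC-based SIP key agreement; the actual
# schemes add two-factor authentication and anonymity on top of this exchange.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates an ephemeral EC key pair (P-256 as an example curve).
client_priv = ec.generate_private_key(ec.SECP256R1())
server_priv = ec.generate_private_key(ec.SECP256R1())

# Public keys are exchanged over SIP signaling; both sides derive the same secret.
client_shared = client_priv.exchange(ec.ECDH(), server_priv.public_key())
server_shared = server_priv.exchange(ec.ECDH(), client_priv.public_key())
assert client_shared == server_shared

# Both sides derive the session key with a KDF bound to the SIP context.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"sip-session").derive(client_shared)
print(session_key.hex())
```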
This paper contributes a systematic research approach as well as findings of an empirical study conducted to investigate the effect of virtual agents on task performance and player experience in digital games. As virtual agents are supposed to evoke social effects similar to real humans under certain conditions, the basic social phenomenon of social facilitation is examined in a testbed game that was specifically developed to enable systematic variation of single impact factors of social facilitation. Independent variables were the presence of a virtual agent (present vs. not present) and the output device (ordinary monitor vs. head-mounted display). Results indicate social inhibition effects, but only for players using a head-mounted display. Additional potential impact factors and future research directions are discussed.
Secure Data Sharing (SDS) enables users to share data in the cloud in a confidential and integrity-preserving manner. Many recent SDS approaches are based on Attribute-Based Encryption (ABE), leveraging the advantage that ABE allows a multitude of users to be addressed with only one ciphertext. However, ABE approaches often come with the downside of requiring a central, fully trusted entity that is able to decrypt any ciphertext in the system. In this paper, we investigate whether ABE can be used to efficiently implement Decentralized Secure Data Sharing (D-SDS), which explicitly demands that authorization and access-control enforcement be carried out solely by the owner of the data, without the help of a fully trusted third party. For this purpose, we conducted a comprehensive analysis of recent ABE approaches with regard to D-SDS requirements. We found one ABE approach to be suitable, and we show different alternatives for employing this ABE approach in a group-based D-SDS scenario. For a realistic estimation of the resource consumption, we give concrete resource-consumption values for workloads taken from real-world system traces and exemplary up-to-date mobile devices. Our results indicate that for most D-SDS operations, the resulting computation times and outgoing network traffic will be acceptable in many use cases. However, the computation times and outgoing traffic for the management of large groups might preclude the use of mobile devices.
The application of lightweight block ciphers in the TLS protocol is studied in this paper. More precisely, since the use of lightweight cryptographic algorithms is a prerequisite for addressing security in highly constrained environments such as the Internet of Things, we focus on how TLS performance behaves when AES is replaced by a lightweight block cipher; to this end, the recently proposed Speck cipher is used as a case study. Experimental results show that a significant gain in performance can be achieved in such constrained environments, and that in some cases Speck with a larger key size than AES may even result in higher throughput.
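For reference, a compact Speck128/128 implementation (64-bit words, rotation amounts 8 and 3, 32 rounds), checked by an encrypt/decrypt round trip; the TLS integration measured in the paper is not shown, and the key and plaintext values are arbitrary.

```python
# Speck128/128 (64-bit words, alpha=8, beta=3, 32 rounds); TLS integration not shown.
MASK = (1 << 64) - 1

def ROR(x, r): return ((x >> r) | (x << (64 - r))) & MASK
def ROL(x, r): return ((x << r) | (x >> (64 - r))) & MASK

def expand_key(l0, k0, rounds=32):
    """Key schedule for a 128-bit key given as two 64-bit words (l0, k0)."""
    round_keys, l, k = [], l0, k0
    for i in range(rounds):
        round_keys.append(k)
        l = ((ROR(l, 8) + k) & MASK) ^ i
        k = ROL(k, 3) ^ l
    return round_keys

def encrypt(x, y, round_keys):
    for k in round_keys:
        x = ((ROR(x, 8) + y) & MASK) ^ k
        y = ROL(y, 3) ^ x
    return x, y

def decrypt(x, y, round_keys):
    for k in reversed(round_keys):
        y = ROR(y ^ x, 3)
        x = ROL(((x ^ k) - y) & MASK, 8)
    return x, y

rks = expand_key(0x0f0e0d0c0b0a0908, 0x0706050403020100)
ct = encrypt(0x6c61766975716520, 0x7469206564616d20, rks)
assert decrypt(*ct, rks) == (0x6c61766975716520, 0x7469206564616d20)
print(f"ciphertext: {ct[0]:016x} {ct[1]:016x}")
```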
The IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL) was recently introduced as the new routing standard for the Internet of Things. Although RPL defines basic security modes, it remains vulnerable to topological attacks which facilitate blackholing, interception, and resource exhaustion. We are concerned with analyzing the corresponding threats and protecting future RPL deployments from such attacks. Our contributions are twofold. First, we analyze the state of the art, in particular the protective scheme VeRA, and present two new rank-order attacks as well as extensions to mitigate them. Second, we derive and evaluate TRAIL, a generic scheme for topology authentication in RPL. TRAIL relies solely on the basic assumptions of RPL that (1) the root node serves as a trust anchor and (2) each node interconnects to the root in a straight hierarchy. Using proper reachability tests, TRAIL scalably and reliably identifies any topological attacker without strong cryptographic effort.
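A minimal sketch of the rank invariant that such topology-attack defenses build on: advertised ranks must increase monotonically (by at least MinHopRankIncrease) with distance from the root, which is exactly what a rank-spoofing attacker violates. TRAIL's actual nonce-based reachability tests are not reproduced here.

```python
# Not TRAIL itself: a minimal check of RPL's rank invariant along a path from the
# root, the property that rank-spoofing (e.g., sinkhole) attacks violate.
MIN_HOP_RANK_INCREASE = 256   # RPL default (RFC 6550)

def path_ranks_consistent(ranks_root_to_node):
    """Ranks must strictly increase hop by hop, starting from the root's rank."""
    return all(child - parent >= MIN_HOP_RANK_INCREASE
               for parent, child in zip(ranks_root_to_node, ranks_root_to_node[1:]))

print(path_ranks_consistent([256, 512, 768, 1024]))  # True: honest path
print(path_ranks_consistent([256, 512, 384, 640]))   # False: node advertises too-low rank
```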
Network monitoring is vital to the administration and operation of networks, but it requires privileged access that only highly trusted parties are granted. This severely limits the opportunity for external parties, such as service or equipment providers, auditors, or even clients, to measure the health or operation of a network in which they are stakeholders, but do not have access to its internal structure. In this position paper we propose the use of middleboxes to open up network monitoring to external parties using privacy-preserving technology. This will allow distrusted parties to make more inferences about the network state than currently possible, without learning any precise information about the network or the data that crosses it. Thus the state of the network will be more transparent to external stakeholders, who will be empowered to verify claims made by network operators. Network operators will be able to provide more information about their network without compromising security or privacy.
The ever-growing performance of supercomputers brings demanding requirements for energy efficiency and resilience, due to the rapidly expanding size and duration of use of large-scale computing systems. Many application- and architecture-dependent parameters that individually determine energy efficiency and resilience have causal effects on each other, which directly affect the trade-offs among performance, energy efficiency, and resilience at scale. Enabling high-efficiency management of today's large-scale High-Performance Computing (HPC) systems therefore requires a quantitative understanding of the entangled effects among performance, energy efficiency, and resilience. While previous work focuses on exploring energy-saving and resilience-enhancing opportunities separately, little has been done to theoretically and empirically investigate the interplay between energy efficiency and resilience at scale. In this article, by extending Amdahl's Law and the Karp-Flatt metric to take resilience into consideration, we quantitatively model the integrated energy efficiency in terms of performance per watt and showcase the trade-offs among typical HPC parameters, such as the number of cores, frequency/voltage, and failure rates. Experimental results for a wide spectrum of HPC benchmarks on two HPC systems show that the proposed models are accurate in extrapolating resilience-aware performance and energy efficiency, and capable of capturing the interplay among various energy-saving and resilience factors. Moreover, the models can help find the optimal HPC configuration for the highest integrated energy efficiency in the presence of failures and applied resilience techniques.
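For reference, the classical baselines that the article extends are shown below; the article's additional resilience and power terms are not reproduced. Here f is the parallel fraction of the program, n the number of cores, and ψ the observed speedup.

```latex
% Classical baselines extended by the article (resilience/power terms omitted here).
\[
  S(n) \;=\; \frac{1}{(1 - f) + \dfrac{f}{n}}
  \qquad\text{(Amdahl's Law, parallel fraction } f\text{)}
\]
\[
  e \;=\; \frac{\dfrac{1}{\psi} - \dfrac{1}{n}}{1 - \dfrac{1}{n}}
  \qquad\text{(Karp--Flatt experimentally determined serial fraction, observed speedup } \psi\text{)}
\]
```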
Current algorithms for context-free parsing inflict a trade-off between ease of understanding, ease of implementation, theoretical complexity, and practical performance. No algorithm achieves all of these properties simultaneously. Might et al. introduced parsing with derivatives, which handles arbitrary context-free grammars while being both easy to understand and simple to implement. Despite much initial enthusiasm and a multitude of independent implementations, its worst-case complexity has never been proven to be better than exponential. In fact, high-level arguments claiming it is fundamentally exponential have been advanced and even accepted as part of the folklore. Performance ended up being sluggish in practice, and this sluggishness was taken as informal evidence of exponentiality. In this paper, we reexamine the performance of parsing with derivatives. We have discovered that it is not exponential but, in fact, cubic. Moreover, simple (though perhaps not obvious) modifications to the implementation by Might et al. lead to an implementation that is not only easy to understand but also highly performant in practice.
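As a reminder of the underlying idea, the sketch below implements Brzozowski derivatives for plain regular expressions, the regular-language special case that parsing with derivatives generalizes to context-free grammars; the laziness, memoization, and compaction that matter for the context-free case (and for the cubic bound) are omitted.

```python
# Brzozowski derivatives for regular expressions -- the regular-language core that
# parsing with derivatives generalizes to CFGs (laziness/memoization omitted).
from dataclasses import dataclass

class Empty: pass          # matches nothing
class Eps: pass            # matches only the empty string

@dataclass
class Chr:
    c: str

@dataclass
class Alt:
    left: object
    right: object

@dataclass
class Seq:
    first: object
    second: object

@dataclass
class Star:
    inner: object

def nullable(r):
    """Does r accept the empty string?"""
    if isinstance(r, (Eps, Star)):
        return True
    if isinstance(r, Alt):
        return nullable(r.left) or nullable(r.right)
    if isinstance(r, Seq):
        return nullable(r.first) and nullable(r.second)
    return False

def deriv(r, c):
    """Derivative of r with respect to character c."""
    if isinstance(r, Chr):
        return Eps() if r.c == c else Empty()
    if isinstance(r, Alt):
        return Alt(deriv(r.left, c), deriv(r.right, c))
    if isinstance(r, Seq):
        d = Seq(deriv(r.first, c), r.second)
        return Alt(d, deriv(r.second, c)) if nullable(r.first) else d
    if isinstance(r, Star):
        return Seq(deriv(r.inner, c), r)
    return Empty()         # Empty and Eps both derive to Empty

def matches(r, s):
    for c in s:
        r = deriv(r, c)
    return nullable(r)

ab_star = Star(Seq(Chr("a"), Chr("b")))                      # (ab)*
print(matches(ab_star, "ababab"), matches(ab_star, "aba"))   # True False
```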
Computer systems are developed with the expectation that they will be easy to maintain in the future; this is one of the main requirements of sound architectural design. Existing approaches to introducing fault tolerance rely on recursive system structuring out of functional components, which typically results in non-optimal fault tolerance. This paper proposes a vision for structuring complex many-core systems around a special component that supports system-wide fault tolerance coordination. The component acts as a central module making decisions about the fault tolerance strategies to be implemented by individual system components, depending on the performance and energy requirements specified as system operating modes.
The aim of this research is to advance active user authentication using keystroke dynamics. Through this research, we assess the performance and influence of various keystroke features on keystroke-dynamics authentication systems. In particular, we investigate the performance of keystroke features on a subset of the most frequently used English words. Four features are analyzed: i) key duration, ii) flight-time latency, iii) digraph-time latency, and iv) word total time duration. Two machine learning techniques are employed for assessing keystroke authentication: the support vector machine (SVM) and the k-nearest neighbor classifier (k-NN). Experimental data were logged for 28 users. The experimental results show that key duration offers the best performance among the four keystroke features, followed by word total time.
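A minimal sketch of such an evaluation pipeline using scikit-learn, with synthetic feature vectors standing in for the study's 28-user keystroke data; the feature columns follow the four features named above, and all distribution parameters are made up.

```python
# Illustrative pipeline only; features are synthetic, not the study's 28-user data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_users, samples_per_user = 5, 40
# Columns: key duration, flight time, digraph latency, word total time (ms).
X = np.vstack([rng.normal(loc=[90 + 10*u, 120 + 15*u, 200 + 20*u, 600 + 50*u],
                          scale=[12, 18, 25, 60], size=(samples_per_user, 4))
               for u in range(n_users)])
y = np.repeat(np.arange(n_users), samples_per_user)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
for name, clf in [("SVM", SVC(kernel="rbf", gamma="scale")),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5))]:
    clf.fit(X_tr, y_tr)
    print(f"{name} accuracy: {clf.score(X_te, y_te):.2f}")
```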
This paper makes a first attempt at establishing a systematic instrument for evaluating the performance of marine information systems. The Analytic Network Process (ANP) was introduced to determine the relative importance of a set of interdependent criteria of concern to the stakeholders (shipper/consignee, customs broker, forwarder, and container yard). Three major information platforms in Taiwan (MTNet, TradeVan, and Nice Shipping) were evaluated according to the criteria derived from the ANP. Results show that the performance of a marine information system can be divided into three constructs, namely Safety and Technology (3 items), Service (3 items), and Charge (3 items). Safety and Technology is the most important construct of marine information system evaluation, whereas Charge is the least important. This study gives insights for improving the performance of the existing marine information systems and serves as a useful reference for future freight information platforms.
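The sketch below shows the limiting-supermatrix step at the core of ANP with a made-up, column-stochastic supermatrix over the three constructs; the stakeholders' actual pairwise-comparison data are not reproduced.

```python
# ANP's limiting-supermatrix step with a made-up column-stochastic matrix over the
# three constructs (Safety & Technology, Service, Charge); real survey data not shown.
import numpy as np

labels = ["Safety & Technology", "Service", "Charge"]
W = np.array([[0.50, 0.45, 0.40],      # hypothetical weighted supermatrix,
              [0.30, 0.35, 0.35],      # columns sum to 1
              [0.20, 0.20, 0.25]])

limit = np.linalg.matrix_power(W, 50)  # W^k converges for a primitive stochastic matrix
priorities = limit[:, 0]               # every column converges to the same priorities
for name, p in zip(labels, priorities):
    print(f"{name}: {p:.3f}")
```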
Modeling and evaluating the performance of large-scale wireless sensor networks (WSNs) is a challenging problem. The traditional method for representing the global state of a system as a cross product of the states of individual nodes in the system results in a state space whose size is exponential in the number of nodes. We propose an alternative way of representing the global state of a system: namely, as a probability mass function (pmf) which represents the fraction of nodes in different states. A pmf corresponds to a point in a Euclidean space of possible pmf values, and the evolution of the state of a system is represented by trajectories in this Euclidean space. We propose a novel performance evaluation method that examines all pmf trajectories in a dense Euclidean space by exploring only finite relevant portions of the space. We call our method Euclidean model checking. Euclidean model checking is useful both in the design phase—where it can help determine system parameters based on a specification—and in the evaluation phase—where it can help verify performance properties of a system. We illustrate the utility of Euclidean model checking by using it to design a time difference of arrival (TDoA) distance measurement protocol and to evaluate the protocol’s implementation on a 90-node WSN. To facilitate such performance evaluations, we provide a Markov model estimation method based on applying a standard statistical estimation technique to samples resulting from the execution of a system.
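In miniature, the idea looks like this: instead of the exponential cross-product state space, track the pmf over per-node states as a point in Euclidean space and advance it with the per-node transition matrix. The three-state chain below is hypothetical, not the TDoA protocol from the paper.

```python
# The core idea in miniature: track the pmf over per-node states (a point in
# Euclidean space) instead of the exponential cross-product global state space.
import numpy as np

# Hypothetical 3-state per-node chain: idle -> measuring -> reporting -> idle.
P = np.array([[0.90, 0.10, 0.00],
              [0.00, 0.80, 0.20],
              [0.30, 0.00, 0.70]])     # rows sum to 1

pmf = np.array([1.0, 0.0, 0.0])        # all nodes start idle
trajectory = [pmf]
for _ in range(30):
    pmf = pmf @ P                      # one step along the pmf trajectory
    trajectory.append(pmf)

print("pmf after 30 steps:", np.round(pmf, 3))
```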
Mobile users access location services from a location-based server, and while doing so the user's privacy is at risk: the server has access to details about the user, for example recently visited places and the type of information accessed. We present a synergetic technique to safeguard the location privacy of users accessing location-based services via mobile devices. Mobile devices can form ad-hoc networks to hide a user's identity and position. The user who requires the service is the query originator, and the node that requests the service on behalf of the query originator is the query sender. The query originator selects the query sender uniformly at random, which provides anonymity within the network. The location revealed to the location service provider is a rectangle instead of an exact coordinate. In this paper we simulate the mobile network and present results for cloaking-area sizes and performance as the density of users varies.
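A small sketch of the two anonymity steps described above, with hypothetical coordinates: the query sender is drawn uniformly at random from the ad-hoc group, and the location reported to the service is the bounding rectangle of the group's positions rather than the originator's exact coordinates.

```python
# Sketch of the two anonymity steps: uniform sender selection and a cloaking
# rectangle covering the ad-hoc group instead of exact coordinates.
import random

group = {"n1": (12.970, 77.590), "n2": (12.980, 77.610),
         "n3": (12.960, 77.600), "originator": (12.975, 77.605)}

sender = random.choice(list(group))          # each member equally likely to send
lats, lons = zip(*group.values())
cloak = (min(lats), min(lons), max(lats), max(lons))   # rectangle sent to the LBS

print("query sender:", sender)
print("cloaking rectangle (lat_min, lon_min, lat_max, lon_max):", cloak)
```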