Biblio

Filters: Keyword is Performance
2018-02-14
Raju, S., Boddepalli, S., Gampa, S., Yan, Q., Deogun, J. S..  2017.  Identity management using blockchain for cognitive cellular networks. 2017 IEEE International Conference on Communications (ICC). :1–6.
Cloud-centric cognitive cellular networks utilize dynamic spectrum access and opportunistic network access technologies as a means to mitigate spectrum crunch and network demand. However, furnishing a carrier with personally identifiable information for user setup increases the risk of profiling in cognitive cellular networks, wherein users seek secondary access at various times with multiple carriers. Moreover, network access provisioning - assertion, authentication, authorization, and accounting - implemented in conventional cellular networks is inadequate in the cognitive space, as it is neither spontaneous nor scalable. In this paper, we propose a privacy-enhancing user identity management system using blockchain technology which places due importance on both anonymity and attribution, and supports end-to-end management from user assertion to usage billing. The setup enables network access using pseudonymous identities, hindering the reconstruction of a subscriber's identity. Our test results indicate that this approach diminishes access provisioning duration by up to 4x, decreases network signaling traffic by almost 40%, and enables near real-time user billing that may lead to approximately 3x reduction in payments settlement time.
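The pseudonymous-identity idea can be sketched in a few lines. This is a minimal illustration, not the paper's actual blockchain protocol: it only shows how a subscriber-held master secret (a hypothetical name) can deterministically derive per-carrier, per-epoch pseudonyms that are unlinkable without the secret.

```python
import hashlib
import hmac

def derive_pseudonym(master_secret: bytes, carrier_id: str, epoch: int) -> str:
    """Derive a per-carrier, per-epoch pseudonym.

    Without master_secret, pseudonyms for different carriers or epochs are
    unlinkable, which hinders cross-carrier profiling; determinism lets the
    subscriber later prove ownership for billing.
    """
    msg = f"{carrier_id}|{epoch}".encode()
    return hmac.new(master_secret, msg, hashlib.sha256).hexdigest()

secret = b"subscriber-master-secret"          # hypothetical subscriber secret
p1 = derive_pseudonym(secret, "carrier-A", 1)
p2 = derive_pseudonym(secret, "carrier-B", 1)
assert p1 != p2                               # carriers see unlinkable identities
assert p1 == derive_pseudonym(secret, "carrier-A", 1)  # reproducible for billing
```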
2018-01-16
Connell, Warren, Menascé, Daniel A., Albanese, Massimiliano.  2017.  Performance Modeling of Moving Target Defenses. Proceedings of the 2017 Workshop on Moving Target Defense. :53–63.

In recent years, Moving Target Defense (MTD) has emerged as a potential game changer in the security landscape, due to its potential to create asymmetric uncertainty that favors the defender. Many different MTD techniques have since been proposed, each addressing an often very specific set of attack vectors. Despite the huge progress made in this area, there are still critical gaps with respect to analyzing and quantifying the costs and benefits of deploying MTD techniques. In fact, common metrics to assess the performance of these techniques are still lacking, and most existing evaluations assess performance in different and often incompatible ways. This paper addresses these gaps by proposing a quantitative analytic model for assessing the resource availability and performance of MTDs, and a method for determining the highest possible reconfiguration rate, and thus the smallest probability of attacker success, that meets performance and stability constraints. Finally, we present an experimental validation of the proposed approach.
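The reconfiguration-rate trade-off can be illustrated with a deliberately simplified exponential-race model. This is not the paper's analytic model; the attack rate, per-reconfiguration downtime, and availability target below are all hypothetical parameters chosen for readability.

```python
def attacker_success_prob(attack_rate, reconfig_rate):
    # Exponential race: the attack completes before the next reconfiguration
    # with probability attack_rate / (attack_rate + reconfig_rate).
    return attack_rate / (attack_rate + reconfig_rate)

def max_reconfig_rate(downtime_per_reconfig, min_availability):
    # Availability = 1 - rate * downtime; solve for the largest reconfiguration
    # rate that still meets the availability constraint.
    return (1.0 - min_availability) / downtime_per_reconfig

# Hypothetical numbers: each reconfiguration costs 0.5 s of service,
# availability must stay above 99%, and attacks take ~1000 s on average.
rate = max_reconfig_rate(downtime_per_reconfig=0.5, min_availability=0.99)
p_attack = attacker_success_prob(attack_rate=0.001, reconfig_rate=rate)
```

The shape of the result matches the paper's premise: pushing the reconfiguration rate up shrinks the attacker's success probability, but the availability constraint caps how far it can go.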

2017-12-28
Shih, M. H., Chang, J. M..  2017.  Design and analysis of high performance crypt-NoSQL. 2017 IEEE Conference on Dependable and Secure Computing. :52–59.

NoSQL databases have become popular with enterprises due to their scalable and flexible storage management of big data. Nevertheless, their popularity also brings up security concerns. Most NoSQL databases lack secure data encryption, relying on developers to implement cryptographic methods at the application level or in a middleware layer as a wrapper around the database. While this approach protects the integrity of data, it increases the difficulty of executing queries. We were motivated to design a system that not only provides NoSQL databases with the necessary data security, but also supports the execution of queries over encrypted data. Furthermore, how to exploit the distributed nature of NoSQL databases to deliver high performance and scalability under massive client accesses is another important challenge. In this research, we introduce Crypt-NoSQL, the first prototype to support high-performance execution of queries over encrypted data on NoSQL databases. Three different models of Crypt-NoSQL were proposed and their performance was evaluated with the Yahoo! Cloud Serving Benchmark (YCSB) considering an enormous number of clients. Our experimental results show that Crypt-NoSQL can process queries over encrypted data with high performance and scalability. Guidance on establishing service level agreements (SLAs) for Crypt-NoSQL as a cloud service is also proposed.

Chen, L., Dai, W., Qiu, M., Jiang, N..  2017.  A Design for Scalable and Secure Key-Value Stores. 2017 IEEE International Conference on Smart Cloud (SmartCloud). :216–221.

Reliable and scalable storage systems are key to cloud-based applications. In cloud storage, users store their data on remote servers rather than their local computers. Secure storage is used to ensure the safety of data in clouds. As more and more users rely on third-party cloud vendors to store their data, concerns have arisen among users and cloud providers. Encryption-based approaches are commonly used in secure storage systems. Data are encrypted and stored on persistent storage like disks and flash memories. When data are needed by the users, they are decrypted and accessed by the users. This way of managing data hurts the scalability and throughput of cloud systems. In the meantime, cloud systems have to perform fault-tolerance strategies on data, which also brings performance degradation. The combination of these issues causes a high price for data security in cloud systems. Aware of such issues, we propose methods to reduce the overhead of secure storage while guaranteeing the safety of data.

Mailloux, L. O., Sargeant, B. N., Hodson, D. D., Grimaila, M. R..  2017.  System-level considerations for modeling space-based quantum key distribution architectures. 2017 Annual IEEE International Systems Conference (SysCon). :1–6.

Quantum Key Distribution (QKD) is a revolutionary technology which leverages the laws of quantum mechanics to distribute cryptographic keying material between two parties with theoretically unconditional security. Terrestrial QKD systems are limited to distances of <200 km in both optical fiber and line-of-sight free-space configurations due to severe losses during single photon propagation and the curvature of the Earth. Thus, the feasibility of fielding a low Earth orbit (LEO) QKD satellite to overcome this limitation is being explored. Moreover, in August 2016, the Chinese Academy of Sciences successfully launched the world's first QKD satellite. However, many of the practical engineering performance and security tradeoffs associated with space-based QKD are not well understood for global secure key distribution. This paper presents several system-level considerations for modeling and studying space-based QKD architectures and systems. More specifically, this paper explores the behaviors and requirements that researchers must examine to develop a model for studying the effectiveness of QKD between LEO satellites and ground stations.

2017-12-12
Stephan, E., Raju, B., Elsethagen, T., Pouchard, L., Gamboa, C..  2017.  A scientific data provenance harvester for distributed applications. 2017 New York Scientific Data Summit (NYSDS). :1–9.

Data provenance provides a way for scientists to observe how experimental data originates, conveys process history, and explains influential factors such as experimental rationale and associated environmental factors from system metrics measured at runtime. The US Department of Energy Office of Science Integrated end-to-end Performance Prediction and Diagnosis for Extreme Scientific Workflows (IPPD) project has developed a provenance harvester that is capable of collecting observations from file-based evidence typically produced by distributed applications. To achieve this, file-based evidence is extracted and transformed into an intermediate data format, inspired in part by the W3C CSV on the Web recommendations, called the Harvester Provenance Application Interface (HAPI) syntax. This syntax provides a general means to pre-stage provenance into messages that are both human readable and capable of being written to a provenance store, Provenance Environment (ProvEn). HAPI is being applied to harvest provenance from climate ensemble runs for the Accelerated Climate Modeling for Energy (ACME) project, funded under the U.S. Department of Energy's Office of Biological and Environmental Research (BER) Earth System Modeling (ESM) program. ACME informally provides provenance in a native form through configuration files, directory structures, and log files that contain success/failure indicators, code traces, and performance measurements. Because of its generic format, HAPI is also being applied to harvest tabular job management provenance from Belle II DIRAC scheduler relational database tables as well as other scientific applications that log provenance-related information.

2017-11-20
Reddy, Alavalapati Goutham, Yoon, Eun-Jun, Das, Ashok Kumar, Yoo, Kee-Young.  2016.  An Enhanced Anonymous Two-factor Mutual Authentication with Key-agreement Scheme for Session Initiation Protocol. Proceedings of the 9th International Conference on Security of Information and Networks. :145–149.

A two-factor authenticated key-agreement scheme for the session initiation protocol emerged as a promising remedy to overcome the ascribed limitations of password-based authentication schemes. Recently, Lu et al. proposed an anonymous two-factor authenticated key-agreement scheme for SIP using elliptic curve cryptography. They claimed that their scheme is secure against attacks and achieves user anonymity. Conversely, this paper's keen analysis points out several severe security weaknesses of Lu et al.'s scheme. In addition, this paper puts forward an enhanced anonymous two-factor mutual authenticated key-agreement scheme for the session initiation protocol using elliptic curve cryptography. The security and performance analysis sections demonstrate that the proposed scheme is more robust and efficient than Lu et al.'s scheme.
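The elliptic-curve Diffie-Hellman exchange underlying such key-agreement schemes can be sketched over a toy curve. The curve parameters below are chosen for readability, not security, and this is a generic ECDH illustration rather than the scheme proposed in the paper.

```python
# Toy elliptic-curve Diffie-Hellman over y^2 = x^3 + 2x + 3 (mod 97).
P, A = 97, 2
G = (3, 6)  # a point on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)

def inv(n):
    return pow(n, P - 2, P)  # modular inverse via Fermat (P is prime)

def add(p, q):
    # Affine point addition; None represents the point at infinity.
    if p is None: return q
    if q is None: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p == q:
        m = (3 * x1 * x1 + A) * inv(2 * y1) % P   # tangent slope
    else:
        m = (y2 - y1) * inv(x2 - x1) % P          # chord slope
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def mul(k, p):
    # Double-and-add scalar multiplication.
    r = None
    while k:
        if k & 1:
            r = add(r, p)
        p = add(p, p)
        k >>= 1
    return r

alice_priv, bob_priv = 5, 7                       # hypothetical private keys
alice_pub, bob_pub = mul(alice_priv, G), mul(bob_priv, G)
assert mul(alice_priv, bob_pub) == mul(bob_priv, alice_pub)  # same shared point
```

In a real SIP deployment the curve would be a standardized one (e.g. a NIST or Curve25519-class curve) and the shared point would feed a key-derivation function alongside the authentication factors.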

2017-11-03
Gervais, Arthur, Karame, Ghassan O., Wüst, Karl, Glykantzis, Vasileios, Ritzdorf, Hubert, Capkun, Srdjan.  2016.  On the Security and Performance of Proof of Work Blockchains. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :3–16.
Proof of Work (PoW) powered blockchains currently account for more than 90% of the total market capitalization of existing digital cryptocurrencies. Although the security provisions of Bitcoin have been thoroughly analysed, the security guarantees of variant (forked) PoW blockchains (which were instantiated with different parameters) have not received much attention in the literature. This opens the question whether existing security analysis of Bitcoin's PoW applies to other implementations which have been instantiated with different consensus and/or network parameters. In this paper, we introduce a novel quantitative framework to analyse the security and performance implications of various consensus and network parameters of PoW blockchains. Based on our framework, we devise optimal adversarial strategies for double-spending and selfish mining while taking into account real world constraints such as network propagation, different block sizes, block generation intervals, information propagation mechanism, and the impact of eclipse attacks. Our framework therefore allows us to capture existing PoW-based deployments as well as PoW blockchain variants that are instantiated with different parameters, and to objectively compare the tradeoffs between their performance and security provisions.
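As a simpler baseline than the paper's quantitative framework, the classic Nakamoto model already shows how double-spending risk depends on a consensus parameter (the number of confirmations z) and the attacker's hash-power fraction q. A direct transcription:

```python
import math

def double_spend_success(q: float, z: int) -> float:
    """Probability that an attacker with hash-power fraction q (< 0.5)
    eventually overtakes the honest chain after z confirmations,
    following Nakamoto's Poisson/gambler's-ruin analysis."""
    p = 1.0 - q
    lam = z * (q / p)  # expected attacker progress while z honest blocks arrive
    s = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        s -= poisson * (1.0 - (q / p) ** (z - k))
    return s

risk = double_spend_success(q=0.1, z=6)  # ~0.0002 for a 10% attacker
```

The paper's framework generalizes this kind of calculation by also modeling block size, block interval, propagation, and eclipse attacks.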
2017-10-25
Moura, Giovane C.M., Schmidt, Ricardo de O., Heidemann, John, de Vries, Wouter B., Muller, Moritz, Wei, Lan, Hesselman, Cristian.  2016.  Anycast vs. DDoS: Evaluating the November 2015 Root DNS Event. Proceedings of the 2016 Internet Measurement Conference. :255–270.
Distributed Denial-of-Service (DDoS) attacks continue to be a major threat on the Internet today. DDoS attacks overwhelm target services with requests or other traffic, causing requests from legitimate users to be shut out. A common defense against DDoS is to replicate a service in multiple physical locations/sites. If all sites announce a common prefix, BGP will associate users around the Internet with a nearby site, defining the catchment of that site. Anycast defends against DDoS both by increasing aggregate capacity across many sites, and allowing each site's catchment to contain attack traffic, leaving other sites unaffected. IP anycast is widely used by commercial CDNs and for essential infrastructure such as DNS, but there is little evaluation of anycast under stress. This paper provides the first evaluation of several IP anycast services under stress with public data. Our subject is the Internet's Root Domain Name Service, made up of 13 independently designed services ("letters", 11 with IP anycast) running at more than 500 sites. Many of these services were stressed by sustained traffic at 100× normal load on Nov. 30 and Dec. 1, 2015. We use public data for most of our analysis to examine how different services respond to stress, and identify two policies: sites may absorb attack traffic, containing the damage but reducing service to some users, or they may withdraw routes to shift both good and bad traffic to other sites. We study how these deployment policies resulted in different levels of service to different users during the events. We also show evidence of collateral damage on other services located near the attacks.
2017-10-18
Emmerich, Katharina, Masuch, Maic.  2016.  The Influence of Virtual Agents on Player Experience and Performance. Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play. :10–21.

This paper contributes a systematic research approach as well as findings of an empirical study conducted to investigate the effect of virtual agents on task performance and player experience in digital games. As virtual agents are supposed to evoke social effects similar to real humans under certain conditions, the basic social phenomenon social facilitation is examined in a testbed game that was specifically developed to enable systematical variation of single impact factors of social facilitation. Independent variables were the presence of a virtual agent (present vs. not present) and the output device (ordinary monitor vs. head-mounted display). Results indicate social inhibition effects, but only for players using a head-mounted display. Additional potential impact factors and future research directions are discussed.

2017-10-10
Kuehner, Holger, Hartenstein, Hannes.  2016.  Decentralized Secure Data Sharing with Attribute-Based Encryption: A Resource Consumption Analysis. Proceedings of the 4th ACM International Workshop on Security in Cloud Computing. :74–81.

Secure Data Sharing (SDS) enables users to share data in the cloud in a confidential and integrity-preserving manner. Many recent SDS approaches are based on Attribute-Based Encryption (ABE), leveraging the advantage that ABE allows to address a multitude of users with only one ciphertext. However, ABE approaches often come with the downside that they require a central fully-trusted entity that is able to decrypt any ciphertext in the system. In this paper, we investigate on whether ABE could be used to efficiently implement Decentralized Secure Data Sharing (D-SDS), which explicitly demands that the authorization and access control enforcement is carried out solely by the owner of the data, without the help of a fully-trusted third party. For this purpose, we did a comprehensive analysis of recent ABE approaches with regard to D-SDS requirements. We found one ABE approach to be suitable, and we show different alternatives to employ this ABE approach in a group-based D-SDS scenario. For a realistic estimation of the resource consumption, we give concrete resource consumption values for workloads taken from real-world system traces and exemplary up-to-date mobile devices. Our results indicate that for the most D-SDS operations, the resulting computation times and outgoing network traffic will be acceptable in many use cases. However, the computation times and outgoing traffic for the management of large groups might prevent using mobile devices.

2017-09-15
Iosifidis, Efthymios, Limniotis, Konstantinos.  2016.  A Study of Lightweight Block Ciphers in TLS: The Case of Speck. Proceedings of the 20th Pan-Hellenic Conference on Informatics. :64:1–64:5.

The application of lightweight block ciphers in the TLS protocol is studied in this paper. More precisely, since the use of lightweight cryptographic algorithms is a prerequisite for addressing security in highly constrained environments such as the Internet of Things, we focus on TLS performance in the case that AES is replaced by a lightweight block cipher; to this end, the recently proposed Speck cipher is used as a case study. Experimental results show that a significant gain in performance can be achieved in such constrained environments, whereas in some cases Speck with a larger key size than AES may also result in higher throughput.
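Speck's appeal for constrained devices is visible in its round function, which uses only modular addition, rotation, and XOR. A sketch of the smallest variant, Speck32/64, checked against the published test vector from the Speck specification:

```python
# Speck32/64 sketch: 16-bit words, 22 rounds, rotation amounts (7, 2).
MASK = 0xFFFF

def ror(x, r):
    return ((x >> r) | (x << (16 - r))) & MASK

def rol(x, r):
    return ((x << r) | (x >> (16 - r))) & MASK

def expand_key(key):
    """key = (K3, K2, K1, K0), most-significant word first."""
    k = [key[3]]
    l = [key[2], key[1], key[0]]
    for i in range(21):
        l.append(((k[i] + ror(l[i], 7)) & MASK) ^ i)
        k.append(rol(k[i], 2) ^ l[i + 3])
    return k

def encrypt(pt, ks):
    x, y = pt
    for rk in ks:
        x = ((ror(x, 7) + y) & MASK) ^ rk
        y = rol(y, 2) ^ x
    return x, y

def decrypt(ct, ks):
    x, y = ct
    for rk in reversed(ks):
        y = ror(y ^ x, 2)
        x = rol(((x ^ rk) - y) & MASK, 7)
    return x, y

# Published Speck32/64 test vector.
ks = expand_key((0x1918, 0x1110, 0x0908, 0x0100))
assert encrypt((0x6574, 0x694C), ks) == (0xA868, 0x42F2)
assert decrypt((0xA868, 0x42F2), ks) == (0x6574, 0x694C)
```

A TLS deployment would use a larger variant (e.g. Speck128) inside an authenticated mode; the 32-bit block shown here is only for illustrating the round structure.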

2017-08-18
Perrey, Heiner, Landsmann, Martin, Ugus, Osman, Wählisch, Matthias, Schmidt, Thomas C..  2016.  TRAIL: Topology Authentication in RPL. Proceedings of the 2016 International Conference on Embedded Wireless Systems and Networks. :59–64.

The IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL) was recently introduced as the new routing standard for the Internet of Things. Although RPL defines basic security modes, it remains vulnerable to topological attacks which facilitate blackholing, interception, and resource exhaustion. We are concerned with analyzing the corresponding threats and protecting future RPL deployments from such attacks. Our contributions are twofold. First, we analyze the state of the art, in particular the protective scheme VeRA, and present two new rank order attacks as well as extensions to mitigate them. Second, we derive and evaluate TRAIL, a generic scheme for topology authentication in RPL. TRAIL solely relies on the basic assumptions of RPL that (1) the root node serves as a trust anchor and (2) each node interconnects to the root in a straight hierarchy. Using proper reachability tests, TRAIL scalably and reliably identifies any topological attacker without strong cryptographic efforts.

2017-08-02
Sultana, Nik, Kohlweiss, Markulf, Moore, Andrew W..  2016.  Light at the Middle of the Tunnel: Middleboxes for Selective Disclosure of Network Monitoring to Distrusted Parties. Proceedings of the 2016 Workshop on Hot Topics in Middleboxes and Network Function Virtualization. :1–6.

Network monitoring is vital to the administration and operation of networks, but it requires privileged access that only highly trusted parties are granted. This severely limits the opportunity for external parties, such as service or equipment providers, auditors, or even clients, to measure the health or operation of a network in which they are stakeholders, but do not have access to its internal structure. In this position paper we propose the use of middleboxes to open up network monitoring to external parties using privacy-preserving technology. This will allow distrusted parties to make more inferences about the network state than currently possible, without learning any precise information about the network or the data that crosses it. Thus the state of the network will be more transparent to external stakeholders, who will be empowered to verify claims made by network operators. Network operators will be able to provide more information about their network without compromising security or privacy.

2017-05-18
Tan, Li, Chen, Zizhong, Song, Shuaiwen Leon.  2015.  Scalable Energy Efficiency with Resilience for High Performance Computing Systems: A Quantitative Methodology. ACM Trans. Archit. Code Optim.. 12:35:1–35:27.

The ever-growing performance of supercomputers brings demanding requirements of energy efficiency and resilience, due to the rapidly expanding size and duration in use of large-scale computing systems. Many application/architecture-dependent parameters that determine energy efficiency and resilience individually have causal effects on each other, which directly affect the trade-offs among performance, energy efficiency, and resilience at scale. To enable high-efficiency management of today's large-scale High-Performance Computing (HPC) systems, a quantitative understanding of the entangled effects among performance, energy efficiency, and resilience is thus required. While previous work focuses on exploring energy-saving and resilience-enhancing opportunities separately, little has been done to theoretically and empirically investigate the interplay between energy efficiency and resilience at scale. In this article, by extending Amdahl's Law and the Karp-Flatt Metric to take resilience into consideration, we quantitatively model the integrated energy efficiency in terms of performance per Watt and showcase the trade-offs among typical HPC parameters, such as number of cores, frequency/voltage, and failure rates. Experimental results for a wide spectrum of HPC benchmarks on two HPC systems show that the proposed models are accurate in extrapolating resilience-aware performance and energy efficiency, and capable of capturing the interplay among various energy-saving and resilience factors. Moreover, the models can help find the optimal HPC configuration for the highest integrated energy efficiency in the presence of failures and applied resilience techniques.
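The flavor of such a model can be conveyed with a deliberately simplified sketch. This is illustrative only and not the article's formulation: it combines Amdahl's-Law speedup with a linear power model and a failure penalty that grows with core count (all parameter values are hypothetical).

```python
def perf_per_watt(cores, parallel_frac, base_power, per_core_power,
                  fail_rate_per_core, recovery_cost):
    # Amdahl's-Law speedup for the parallel fraction of the workload.
    speedup = 1.0 / ((1.0 - parallel_frac) + parallel_frac / cores)
    # Larger systems fail more often; model lost work and recovery as a
    # multiplicative time penalty proportional to the aggregate failure rate.
    resilience_penalty = 1.0 + cores * fail_rate_per_core * recovery_cost
    power = base_power + cores * per_core_power    # Watts
    return (speedup / resilience_penalty) / power  # performance per Watt

# Sweep core counts to find the configuration with the highest
# resilience-aware performance per Watt.
best = max(range(1, 1025),
           key=lambda n: perf_per_watt(n, 0.95, 50.0, 0.5, 1e-4, 2.0))
```

Even in this toy form the model captures the article's central trade-off: adding cores raises speedup but also raises both power draw and the expected cost of failures, so the integrated optimum sits at a finite core count.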

2017-05-17
Adams, Michael D., Hollenbeck, Celeste, Might, Matthew.  2016.  On the Complexity and Performance of Parsing with Derivatives. Proceedings of the 37th ACM SIGPLAN Conference on Programming Language Design and Implementation. :224–236.

Current algorithms for context-free parsing inflict a trade-off between ease of understanding, ease of implementation, theoretical complexity, and practical performance. No algorithm achieves all of these properties simultaneously. Might et al. introduced parsing with derivatives, which handles arbitrary context-free grammars while being both easy to understand and simple to implement. Despite much initial enthusiasm and a multitude of independent implementations, its worst-case complexity has never been proven to be better than exponential. In fact, high-level arguments claiming it is fundamentally exponential have been advanced and even accepted as part of the folklore. Performance ended up being sluggish in practice, and this sluggishness was taken as informal evidence of exponentiality. In this paper, we reexamine the performance of parsing with derivatives. We have discovered that it is not exponential but, in fact, cubic. Moreover, simple (though perhaps not obvious) modifications to the implementation by Might et al. lead to an implementation that is not only easy to understand but also highly performant in practice.
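The core derivative operation is easiest to see on the regular-expression fragment of the technique; full context-free grammars additionally need laziness, memoization, and fixed points (the ingredients behind the cubic bound), which this sketch omits.

```python
# Regular-expression ASTs as tuples:
#   ('empty',) matches nothing; ('eps',) matches only the empty string;
#   ('chr', c); ('alt', r, s); ('seq', r, s); ('star', r)
EMPTY, EPS = ('empty',), ('eps',)

def nullable(r):
    # Does r accept the empty string?
    tag = r[0]
    if tag in ('eps', 'star'):
        return True
    if tag == 'alt':
        return nullable(r[1]) or nullable(r[2])
    if tag == 'seq':
        return nullable(r[1]) and nullable(r[2])
    return False  # 'empty' and 'chr'

def derive(r, c):
    """Brzozowski derivative: the language of words w with c.w in L(r)."""
    tag = r[0]
    if tag in ('empty', 'eps'):
        return EMPTY
    if tag == 'chr':
        return EPS if r[1] == c else EMPTY
    if tag == 'alt':
        return ('alt', derive(r[1], c), derive(r[2], c))
    if tag == 'seq':
        first = ('seq', derive(r[1], c), r[2])
        return ('alt', first, derive(r[2], c)) if nullable(r[1]) else first
    return ('seq', derive(r[1], c), r)  # 'star': one step in, then repeat

def matches(r, s):
    for c in s:
        r = derive(r, c)
    return nullable(r)

ab_star = ('star', ('seq', ('chr', 'a'), ('chr', 'b')))  # (ab)*
assert matches(ab_star, 'abab') and not matches(ab_star, 'aba')
```

Parsing (rather than recognition) additionally threads parse trees through the derivatives, which is where the complexity analysis in the paper becomes subtle.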

2017-05-16
Gensh, Rem, Romanovsky, Alexander, Yakovlev, Alex.  2016.  On Structuring Holistic Fault Tolerance. Proceedings of the 15th International Conference on Modularity. :130–133.

Computer systems are developed taking into account that they should be easily maintained in the future. It is one of the main requirements for the sound architectural design. The existing approaches to introducing fault tolerance rely on recursive system structuring out of functional components – this typically results in non-optimal fault tolerance. The paper proposes a vision of structuring complex many-core systems by introducing a special component supporting system-wide fault tolerance coordination. The component acts as a central module making decisions about fault tolerance strategies to be implemented by individual system components depending on the performance and energy requirements specified as system operating modes.

2017-03-08
Darabseh, A., Namin, A. S..  2015.  On Accuracy of Classification-Based Keystroke Dynamics for Continuous User Authentication. 2015 International Conference on Cyberworlds (CW). :321–324.

The aim of this research is to advance active user authentication using keystroke dynamics. Through this research, we assess the performance and influence of various keystroke features on keystroke dynamics authentication systems. In particular, we investigate the performance of keystroke features on a subset of the most frequently used English words. The performance of four features is analyzed: i) key duration, ii) flight time latency, iii) digraph time latency, and iv) word total time duration. Two machine learning techniques are employed for assessing keystroke authentication. The selected classification methods are support vector machine (SVM) and k-nearest neighbor (K-NN). The logged experimental data were captured for 28 users. The experimental results show that key duration time offers the best performance among all four keystroke features, followed by word total time.
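The classification step can be sketched with a minimal pure-Python k-nearest-neighbor classifier over hypothetical key-duration feature vectors; the study itself applies SVM and K-NN to data logged from 28 users, so the users and timings below are invented for illustration.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, user_label) pairs.
    Classify `query` by majority vote among the k nearest training samples."""
    ranked = sorted(train, key=lambda item: math.dist(item[0], query))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical key-duration features (seconds) for two users typing "the".
train = [
    ([0.08, 0.11, 0.09], "alice"), ([0.09, 0.10, 0.08], "alice"),
    ([0.15, 0.18, 0.16], "bob"),   ([0.16, 0.17, 0.19], "bob"),
]
assert knn_predict(train, [0.09, 0.11, 0.09]) == "alice"
```

In a continuous-authentication setting the query vector would be extracted from a sliding window of live keystrokes and checked against the claimed user's profile.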

Wang, R. T., Chen, C. T..  2015.  Framework Building and Application of the Performance Evaluation in Marine Logistics Information Platform in Taiwan. 2015 2nd International Conference on Information Science and Control Engineering. :245–249.

This paper establishes a systematic instrument for evaluating the performance of marine information systems. The Analytic Network Process (ANP) was introduced to determine the relative importance of a set of interdependent criteria of concern to the stakeholders (shipper/consignee, customs broker, forwarder, and container yard). Three major information platforms (MTNet, TradeVan, and Nice Shipping) in Taiwan were evaluated according to the criteria derived from ANP. Results show that the performance of a marine information system can be divided into three constructs, namely Safety and Technology (3 items), Service (3 items), and Charge (3 items). Safety and Technology is the most important construct of marine information system evaluation, whereas Charge is the least important. This study gives insights for improving the performance of the existing marine information systems and serves as a useful reference for future freight information platforms.

2015-11-23
YoungMin Kwon, University of Illinois at Urbana-Champaign, Gul Agha, University of Illinois at Urbana-Champaign.  2014.  Performance Evaluation of Sensor Networks by Statistical Modeling and Euclidean Model Checking. ACM Transactions on Sensor Networks. 9(4)

Modeling and evaluating the performance of large-scale wireless sensor networks (WSNs) is a challenging problem. The traditional method for representing the global state of a system as a cross product of the states of individual nodes in the system results in a state space whose size is exponential in the number of nodes. We propose an alternative way of representing the global state of a system: namely, as a probability mass function (pmf) which represents the fraction of nodes in different states. A pmf corresponds to a point in a Euclidean space of possible pmf values, and the evolution of the state of a system is represented by trajectories in this Euclidean space. We propose a novel performance evaluation method that examines all pmf trajectories in a dense Euclidean space by exploring only finite relevant portions of the space. We call our method Euclidean model checking. Euclidean model checking is useful both in the design phase—where it can help determine system parameters based on a specification—and in the evaluation phase—where it can help verify performance properties of a system. We illustrate the utility of Euclidean model checking by using it to design a time difference of arrival (TDoA) distance measurement protocol and to evaluate the protocol’s implementation on a 90-node WSN. To facilitate such performance evaluations, we provide a Markov model estimation method based on applying a standard statistical estimation technique to samples resulting from the execution of a system.
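The pmf-as-global-state idea can be sketched in a few lines: the system state is a probability mass function over per-node states, evolved by a Markov transition matrix, and a property is checked at every point along the trajectory. The three node states, the transition matrix, and the checked property below are all hypothetical.

```python
# Global state as a pmf over per-node states {idle, sensing, tx};
# one time step multiplies the pmf (row vector) by the transition matrix.
P = [
    [0.90, 0.08, 0.02],  # idle    -> idle, sensing, tx
    [0.30, 0.50, 0.20],  # sensing -> ...
    [0.60, 0.10, 0.30],  # tx      -> ...
]

def step(pmf):
    return [sum(pmf[i] * P[i][j] for i in range(3)) for j in range(3)]

def check(pmf, prop, steps):
    """Euclidean-model-checking flavour: verify that `prop` holds at every
    point visited along the pmf trajectory."""
    for _ in range(steps):
        if not prop(pmf):
            return False
        pmf = step(pmf)
    return True

# Property: at most 25% of nodes are transmitting at any time.
ok = check([1.0, 0.0, 0.0], lambda p: p[2] <= 0.25, steps=50)
assert ok
```

The paper's method goes further by exploring all pmf trajectories in the relevant region of the Euclidean space, not just a single trajectory from one initial pmf.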

2015-05-04
Jagdale, B.N., Bakal, J.W..  2014.  Synergetic cloaking technique in wireless network for location privacy. 2014 9th International Conference on Industrial and Information Systems (ICIIS). :1–6.

Mobile users access location services from a location-based server. While doing so, the user's privacy is at risk, since the server has access to all details about the user, for example the recently visited places and the type of information the user accesses. We present a synergetic technique to safeguard the location privacy of users accessing location-based services via mobile devices. Mobile devices have the capability to form ad-hoc networks to hide a user's identity and position. The user who requires the service is the query originator, and the user who requests the service on behalf of the query originator is the query sender. The query originator selects the query sender with equal probability, which leads to anonymity in the network. The location revealed to the location service provider is a rectangle instead of an exact coordinate. In this paper we simulate the mobile network and show results for cloaking area sizes and performance against variation in the density of users.
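The rectangle-cloaking step can be sketched as follows. This is an illustration of spatial cloaking in general, not the paper's full protocol (which additionally routes the query through a randomly chosen sender in the ad-hoc network); the coordinates are hypothetical.

```python
import random

def cloak_rectangle(user_loc, peers, k):
    """Replace the user's exact coordinate with the bounding rectangle of
    the user plus the k nearest peers (k-anonymity-style spatial cloaking)."""
    nearest = sorted(
        peers,
        key=lambda p: (p[0] - user_loc[0]) ** 2 + (p[1] - user_loc[1]) ** 2,
    )[:k]
    pts = [user_loc] + nearest
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    return (min(xs), min(ys)), (max(xs), max(ys))  # lower-left, upper-right

random.seed(1)
peers = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(50)]
lo, hi = cloak_rectangle((5.0, 5.0), peers, k=4)
assert lo[0] <= 5.0 <= hi[0] and lo[1] <= 5.0 <= hi[1]
```

A larger k enlarges the rectangle, trading service precision for a larger anonymity set, which is exactly the cloaking-area-size trade-off evaluated in the paper.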