Biblio

Found 6023 results

Filters: Keyword is Scalability
2017-03-20
Dormann, Will.  2016.  Google Authentication Risks on iOS. Proceedings of the 1st International Workshop on Mobile Development. :3–5.

The Google Identity Platform is a system that allows a user to sign in to applications and other services by using a Google account. Google Sign-In is one such method for providing one’s identity to the Google Identity Platform. Google Sign-In is available for Android applications and iOS applications, as well as for websites and other devices. Users of Google Sign-In find that it integrates well with the Android platform, but iOS users (iPhone, iPad, etc.) do not have the same experience. The user experience when logging in to a Google account on an iOS application can not only be more tedious than the Android experience, but it also conditions users to engage in behaviors that put the information in their Google accounts at risk.

Asharov, Gilad, Naor, Moni, Segev, Gil, Shahaf, Ido.  2016.  Searchable Symmetric Encryption: Optimal Locality in Linear Space via Two-dimensional Balanced Allocations. Proceedings of the Forty-eighth Annual ACM Symposium on Theory of Computing. :1101–1114.

Searchable symmetric encryption (SSE) enables a client to store a database on an untrusted server while supporting keyword search in a secure manner. Despite the rapidly increasing interest in SSE technology, experiments indicate that the performance of the known schemes scales badly to large databases. Somewhat surprisingly, this is not due to their usage of cryptographic tools, but rather due to their poor locality (where locality is defined as the number of non-contiguous memory locations the server accesses with each query). The only known schemes that do not suffer from poor locality suffer either from an impractical space overhead or from an impractical read efficiency (where read efficiency is defined as the ratio between the number of bits the server reads with each query and the actual size of the answer). We construct the first SSE schemes that simultaneously enjoy optimal locality, optimal space overhead, and nearly-optimal read efficiency. Specifically, for a database of size N, under the modest assumption that no keyword appears in more than N^(1 − 1/log log N) documents, we construct a scheme with read efficiency Õ(log log N). This essentially matches the lower bound of Cash and Tessaro (EUROCRYPT '14) showing that any SSE scheme must be sub-optimal in either its locality, its space overhead, or its read efficiency. In addition, even without making any assumptions on the structure of the database, we construct a scheme with read efficiency Õ(log N). Our schemes are obtained via a two-dimensional generalization of the classic balanced allocations ("balls and bins") problem that we put forward. We construct nearly-optimal two-dimensional balanced allocation schemes, and then combine their algorithmic structure with subtle cryptographic techniques.
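
For orientation only, the sketch below shows the classic one-dimensional "two-choice" balls-and-bins allocation that the paper generalizes to two dimensions; the two-dimensional construction and the cryptographic layer are not reproduced here, and all names are illustrative.

```python
import random

def two_choice_allocation(n_balls: int, n_bins: int, seed: int = 0) -> int:
    """Classic 'power of two choices': each ball probes two random bins and
    lands in the less loaded one. The maximum load is O(log log n) w.h.p.,
    which is the flavor of bound the paper's 2-D generalization targets."""
    rng = random.Random(seed)
    bins = [0] * n_bins
    for _ in range(n_balls):
        i, j = rng.randrange(n_bins), rng.randrange(n_bins)
        bins[i if bins[i] <= bins[j] else j] += 1
    return max(bins)

if __name__ == "__main__":
    print("max load:", two_choice_allocation(100_000, 100_000))
```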

Swami, Shivam, Rakshit, Joydeep, Mohanram, Kartik.  2016.  SECRET: Smartly EnCRypted Energy Efficient Non-volatile Memories. Proceedings of the 53rd Annual Design Automation Conference. :166:1–166:6.

Data persistence in emerging non-volatile memories (NVMs) poses a multitude of security vulnerabilities, motivating main memory encryption for data security. However, practical encryption algorithms demonstrate strong diffusion characteristics that increase cell flips, resulting in increased write energy/latency and reduced lifetime of NVMs. State-of-the-art security solutions have focused on reducing the encryption penalty (increased write energy/latency and reduced memory lifetime) in single-level cell (SLC) NVMs; however, the realization of low encryption penalty solutions for multi-/triple-level cell (MLC/TLC) secure NVMs remains an open area of research. This work synergistically integrates zero-based partial writes with XOR-based energy masking to realize Smartly EnCRypted Energy efficienT, i.e., SECRET MLC/TLC NVMs, without compromising the security of the underlying encryption technique. Our simulations on an MLC (TLC) resistive RAM (RRAM) architecture across SPEC CPU2006 workloads demonstrate that for 6.25% (7.84%) memory overhead, SECRET reduces write energy by 80% (63%), latency by 37% (49%), and improves memory lifetime by 63% (56%) over conventional advanced encryption standard-based (AES-based) counter mode encryption.
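
As a rough, hypothetical illustration of XOR-based energy masking (not the actual SECRET pipeline, which also uses zero-based partial writes and targets MLC/TLC cells), the sketch below selects, from a small candidate set, the XOR mask that minimizes cell flips between the currently stored line and the new ciphertext before writing.

```python
def bit_flips(old: int, new: int) -> int:
    """Number of differing bit positions between two memory words."""
    return bin(old ^ new).count("1")

def masked_write(stored: int, new_cipher: int, masks=(0x00, 0x55, 0xAA, 0xFF)):
    """Choose the XOR mask that minimizes flips when overwriting `stored`
    with `new_cipher`; the chosen mask index would be kept as metadata so
    the line can be unmasked on read. Illustrative sketch only."""
    best = min(masks, key=lambda m: bit_flips(stored, new_cipher ^ m))
    return new_cipher ^ best, best

if __name__ == "__main__":
    stored, cipher = 0b10101010, 0b01010101
    written, mask = masked_write(stored, cipher)
    print(f"flips without mask: {bit_flips(stored, cipher)}, "
          f"with mask 0x{mask:02X}: {bit_flips(stored, written)}")
```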

Jo, Je-Gyeong, Ryou, Jae-cheol.  2016.  HTML and PDF Fuzzing Methodology in iOS. Proceedings of the 10th International Conference on Ubiquitous Information Management and Communication. :8:1–8:5.

iOS is a well-known operating system with a strong reputation for security. However, several attacks against iOS have recently been published, known as "Masque Attack", "Null Dereference", and "Italy Hacking Team's RCS", so security and safety cannot be taken for granted on iOS. In addition, many security researchers find iOS difficult to analyze because its closed source makes debugging hard. We therefore propose a new security testing method for iOS. First, we fuzz iOS's web browser, MobileSafari, which can render HTML, PDF, mp4, and other formats. We test malformed HTML and PDF documents using our fuzzing method. We hope that our research contributes to the security and safety of iOS.
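
The sketch below illustrates the general idea of mutation-based fuzzing of document formats such as HTML; it is not the authors' tool, and the seed page, mutation operators, and file naming are assumptions. Each generated case would then be opened in the target browser under a crash monitor.

```python
import random

SEED_HTML = "<html><head><title>t</title></head><body><p style='color:red'>hi</p></body></html>"

def mutate(data: bytes, rng: random.Random, n_mut: int = 8) -> bytes:
    """Apply random byte flips, insertions, and chunk duplications to a seed input."""
    buf = bytearray(data)
    for _ in range(n_mut):
        op = rng.choice(("flip", "insert", "dup"))
        pos = rng.randrange(len(buf))
        if op == "flip":
            buf[pos] ^= 1 << rng.randrange(8)
        elif op == "insert":
            buf.insert(pos, rng.randrange(256))
        else:  # duplicate a random chunk to stress the parser
            chunk = buf[pos:pos + rng.randrange(1, 16)]
            buf[pos:pos] = chunk * rng.randrange(1, 8)
    return bytes(buf)

def generate_cases(count: int = 10, seed: int = 1):
    rng = random.Random(seed)
    for i in range(count):
        with open(f"case_{i:04d}.html", "wb") as f:
            f.write(mutate(SEED_HTML.encode(), rng))

if __name__ == "__main__":
    generate_cases()
```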

Suarez, Drew, Mayer, Daniel.  2016.  Faux Disk Encryption: Realities of Secure Storage on Mobile Devices. Proceedings of the International Conference on Mobile Software Engineering and Systems. :283–284.

This paper reviews the challenges faced when securing data on mobile devices. After a discussion of the state-of-the-art of secure storage for iOS and Android, the paper introduces an attack which demonstrates how Full Disk Encryption (FDE) on Android can be ineffective in practice.

Krutz, Daniel E., Munaiah, Nuthan, Meneely, Andrew, Malachowsky, Samuel A..  2016.  Examining the Relationship Between Security Metrics and User Ratings of Mobile Apps: A Case Study. Proceedings of the International Workshop on App Market Analytics. :8–14.

The success or failure of a mobile application (`app') is largely determined by user ratings. Users frequently make their app choices based on the ratings of apps in comparison with similar, often competing apps. Users also expect apps to continually provide new features while maintaining quality, or the ratings drop. At the same time apps must also be secure, but is there a historical trade-off between security and ratings? Or are app store ratings a more all-encompassing measure of product maturity? We used static analysis tools to collect security-related metrics in 38,466 Android apps from the Google Play store. We compared the rate of an app's permission misuse, number of requested permissions, and Androrisk score against its user rating. We found that high-rated apps have statistically significantly higher security risk metrics than low-rated apps. However, the correlations are weak. This result supports the conventional wisdom that users are not factoring security risks into their ratings in a meaningful way. This could be due to several reasons, including users not placing much emphasis on security, or the typical user being unable to gauge the security risk level of the apps they use every day.
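
A minimal sketch of the kind of statistical comparison described, assuming the collected metrics sit in a CSV with hypothetical column names (rating, androrisk, n_permissions, misused_permissions) and that pandas and SciPy are available.

```python
import pandas as pd
from scipy import stats

def compare_metrics(csv_path: str = "apps.csv"):
    # Hypothetical columns: rating, androrisk, n_permissions, misused_permissions.
    df = pd.read_csv(csv_path)

    # Rank correlation between the user rating and each security metric.
    for metric in ("androrisk", "n_permissions", "misused_permissions"):
        rho, p = stats.spearmanr(df["rating"], df[metric])
        print(f"{metric}: Spearman rho={rho:.3f}, p={p:.3g}")

    # Compare high-rated vs low-rated apps with a non-parametric test.
    high = df[df["rating"] >= 4.0]["androrisk"]
    low = df[df["rating"] <= 2.5]["androrisk"]
    u, p = stats.mannwhitneyu(high, low, alternative="two-sided")
    print(f"high vs low rated Androrisk: U={u:.0f}, p={p:.3g}")

if __name__ == "__main__":
    compare_metrics()
```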

Atici, Mehmet Ali, Sagiroglu, Seref, Dogru, Ibrahim Alper.  2016.  Android malware analysis approach based on control flow graphs and machine learning algorithms. :26–31.

Smart devices, from smartphones to wearable computers, are today used for many purposes. These devices run various mobile operating systems such as Android, iOS, Symbian, and Windows Mobile. Because mobile devices are widely used and contain personal information, they are subject to security attacks by mobile malware applications. In this work we propose a new approach to static Android malware analysis based on control flow graphs and machine learning algorithms. Experimental results show that, by significantly reducing the data dimension for real-time analysis, the proposed approach achieves a high classification accuracy of 96.26% in general and a high detection rate of 99.15% for the DroidKungfu malware families, which are very harmful and difficult to detect because they encrypt their root exploits.
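
A hedged sketch of the general workflow, assuming networkx graphs and scikit-learn; the feature set is illustrative, not the authors' actual feature extraction: reduce each app's control flow graph to a small numeric vector and train a classifier.

```python
import networkx as nx
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def cfg_features(cfg: nx.DiGraph) -> list:
    """Reduce a control flow graph to a compact numeric feature vector."""
    n, m = cfg.number_of_nodes(), cfg.number_of_edges()
    return [
        n,
        m,
        m / max(n, 1),                                  # average out-degree
        nx.number_weakly_connected_components(cfg),
        sum(1 for v in cfg if cfg.out_degree(v) > 1),   # branch nodes
        sum(1 for v in cfg if cfg.out_degree(v) == 0),  # exit nodes
    ]

def train(cfgs: list, labels: list):
    """cfgs: per-app control flow graphs; labels: 1 = malware, 0 = benign."""
    X = [cfg_features(g) for g in cfgs]
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
    return clf.fit(X, labels)
```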

Im, Jong-Hyuk, Choi, JinChun, Nyang, DaeHun, Lee, Mun-Kyu.  2016.  Privacy-Preserving Palm Print Authentication Using Homomorphic Encryption. :878–881.

Biometric verification systems have security issues regarding the storage of biometric data, in that a user's biometric features cannot be replaced with new ones once a system is compromised. To address this issue, it may be safer to store the biometric data on a reliable remote server instead of storing it in a local device. However, this approach may raise a privacy issue. In this paper, we propose a biometric verification system in which the biometric data are stored on a remote server in encrypted form and the similarity of the user input to the registered biometric data is computed in the encrypted domain using homomorphic encryption. We evaluated the performance of the proposed system through an implementation on an Android-based smartphone and an i7-based server.
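
To make the "similarity in the encrypted domain" idea concrete, here is a toy additively homomorphic (Paillier-style) sketch that computes a Hamming distance between an encrypted binary template stored on the server and a fresh plaintext sample, decryptable only by the key holder. This is an illustration under simplifying assumptions (tiny insecure keys, binary feature vectors), not the paper's protocol.

```python
import math
import random

def keygen(p=10007, q=10009):
    """Toy Paillier key pair (insecure key size; uses the g = n + 1 simplification)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)            # valid because L((n+1)^lam mod n^2) = lam mod n
    return n, (lam, mu, n)

def encrypt(n, m):
    n2 = n * n
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    lam, mu, n = priv
    l = (pow(c, lam, n * n) - 1) // n
    return (l * mu) % n

def add(n, c1, c2):                 # Enc(m1) * Enc(m2) = Enc(m1 + m2)
    return (c1 * c2) % (n * n)

def mul_const(n, c, k):             # Enc(m)^k = Enc(k * m); k may be negative
    return pow(c, k, n * n)

def encrypted_hamming(n, enc_template, sample):
    """Server side: with template bits encrypted and the fresh sample in the clear,
    Hamming distance = sum(s_i) + sum(t_i * (1 - 2*s_i)) is computable homomorphically."""
    acc = encrypt(n, sum(sample))
    for c_t, s in zip(enc_template, sample):
        acc = add(n, acc, mul_const(n, c_t, 1 - 2 * s))
    return acc

if __name__ == "__main__":
    n, priv = keygen()
    template = [1, 0, 1, 1, 0, 0, 1, 0]                # enrolled binary features
    sample   = [1, 1, 1, 0, 0, 0, 1, 0]                # fresh sample at authentication
    enc_template = [encrypt(n, b) for b in template]   # this is what the server stores
    enc_dist = encrypted_hamming(n, enc_template, sample)
    print("Hamming distance:", decrypt(priv, enc_dist))   # expected: 2
```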

Fuhry, Benny, Tighzert, Walter, Kerschbaum, Florian.  2016.  Encrypting Analytical Web Applications. Proceedings of the 2016 ACM on Cloud Computing Security Workshop. :35–46.

The software-as-a-service (SaaS) market is growing very fast, but many clients are still concerned about the confidentiality of their data in the cloud. Motivated hackers or malicious insiders could try to steal the clients' data. Encryption is a potential solution, but supporting the necessary functionality in existing applications is difficult. In this paper, we examine encrypting analytical web applications that perform extensive number-processing operations in the database. Existing solutions for encrypting data in web applications poorly support such encryption. We employ a proxy that adjusts the encryption to the level necessary for the client's usage and also supports additively homomorphic encryption. This proxy is deployed at the client, and all encryption keys are stored and managed there, while the application runs in the cloud. Our proxy is stateless, and we only need to modify the database driver of the application. We evaluate an instantiation of our architecture on an exemplary application. We only slightly increase average page load time, from 3.1 seconds to 4.7 seconds, while roughly 40% of all data columns remain probabilistically encrypted. The client can set the desired security level for each column using our policy mechanism. Hence our proxy architecture offers a solution that increases the confidentiality of the data at the cloud provider at a moderate performance penalty.
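
A hypothetical sketch of the column-level policy idea: the proxy picks, per column, the encryption scheme required by the operations the application actually issues, unless the client's policy pins the column to probabilistic encryption. Scheme names and operation sets are illustrative assumptions, not the paper's implementation.

```python
def choose_scheme(ops_used: set, force_probabilistic: bool = False) -> str:
    """Map the SQL operations observed for a column to the weakest encryption
    adjustment that still supports them; a client policy can pin a column to
    probabilistic encryption, in which case those operations must run client-side."""
    if force_probabilistic:
        return "probabilistic"
    if {"SUM", "AVG"} & ops_used:
        return "additively-homomorphic"   # aggregates computed in the database
    if {"=", "JOIN", "GROUP BY"} & ops_used:
        return "deterministic"            # equality lookups and grouping
    return "probabilistic"                # no server-side computation needed

if __name__ == "__main__":
    print(choose_scheme({"SUM", "="}))                         # additively-homomorphic
    print(choose_scheme({"="}, force_probabilistic=True))      # probabilistic
```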

Canfora, Gerardo, Medvet, Eric, Mercaldo, Francesco, Visaggio, Corrado Aaron.  2016.  Acquiring and Analyzing App Metrics for Effective Mobile Malware Detection. Proceedings of the 2016 ACM on International Workshop on Security And Privacy Analytics. :50–57.

Android malware is becoming very effective in evading detection techniques, and traditional malware detection techniques are demonstrating their weaknesses. Signature-based detection shows at least two drawbacks: first, the detection is possible only after the malware has been identified, and the time needed to produce and distribute the signature provides attackers with a window of opportunity for spreading the malware in the wild. To address this problem, different approaches have emerged that try to characterize the malicious behavior through the invoked system and API calls. Unfortunately, several evasion techniques have proven effective at evading detection based on system and API calls. In this paper, we propose an approach for capturing the malicious behavior in terms of device resource consumption (using a thorough set of features), which is much more difficult to camouflage. We describe a procedure, and the corresponding practical setting, for extracting those features with the aim of maximizing their discriminative power. Finally, we describe the promising results we obtained experimenting on more than 2000 applications, on which our approach exhibited an accuracy greater than 99%.

Barbareschi, Mario, Cilardo, Alessandro, Mazzeo, Antonino.  2016.  Partial FPGA Bitstream Encryption Enabling Hardware DRM in Mobile Environments. Proceedings of the ACM International Conference on Computing Frontiers. :443–448.

The concept of digital rights management (DRM) has become extremely important in current mobile environments. This paper shows how partial bitstream encryption can allow the secure distribution of hardware applications resembling the mechanisms of traditional software DRM. Building on the recent developments towards the secure distribution of hardware cores, the paper demonstrates a prototypical implementation of a user mobile device supporting such distribution mechanisms. The prototype extends the Android operating system with support for hardware reconfigurability and showcases the interplay of novel security concepts enabled by hardware DRM, the advantages of a design flow based on high-level synthesis, and the opportunities provided by current software-rich reconfigurable Systems-on-Chips. Relying on this prototype, we also collected extensive quantitative results demonstrating the limited overhead incurred by the secure distribution architecture.

2017-03-07
You, Taewan.  2016.  Toward the future of internet architecture for IoE: Precedent research on evolving the identifier and locator separation schemes. 2016 International Conference on Information and Communication Technology Convergence (ICTC). :436–439.

The Internet has grown into the largest and best-known communication network, serving as social, industrial, and public infrastructure since its invention in the late 1960s. In a historical retrospect of the Internet's evolution, its architecture has repeatedly evolved in response to technical challenges; for instance, in the early 1990s the Internet faced a scalability crisis that was soon overcome through emerging techniques such as CIDR, NAT, and IPv6. This paper emphasizes scalability issues as the key technical challenge now that the Internet of Things era has arrived. First, we describe identifier and locator separation, a scheme that, viewed in historical perspective, can drive a dramatic architectural evolution. We then review various kinds of identifier and locator separation schemes, since such schemes have recently become a major design pillar for the future Internet architecture, in both clean-slate future Internet architectures and evolving Internet architectures. Lastly, we present an analysis, summarized in a comparison table, for the future Internet of Everything, in which the number of Internet-connected devices is expected to grow to more than 20 billion by 2020.

2017-02-23
J. Shen, S. Ji, J. Shen, Z. Fu, J. Wang.  2015.  "Auditing Protocols for Cloud Storage: A Survey". 2015 First International Conference on Computational Intelligence Theory, Systems and Applications (CCITSA). :222-227.

So far, cloud storage has been accepted by an increasing number of people and is no longer a fresh notion. It brings cloud users many conveniences, such as relief from local storage and location-independent access. Nevertheless, the correctness, completeness, and privacy of outsourced data are what worry cloud users. As a result, most people are unwilling to store data in the cloud for fear that sensitive information will be disclosed. Only when people feel worry-free can they accept cloud storage more easily. Certainly, many experts have taken this problem into consideration and tried to solve it. In this paper, we survey the solutions to the problems concerning auditing in cloud computing and give a comparison of them. The methods and performances, as well as the pros and cons, are discussed for the state-of-the-art auditing protocols.

2017-02-21
Q. Wang, Y. Ren, M. Scaperoth, G. Parmer.  2015.  "SPeCK: a kernel for scalable predictability". 21st IEEE Real-Time and Embedded Technology and Applications Symposium. :121-132.

Multi- and many-core systems are increasingly prevalent in embedded systems. Additionally, isolation requirements between different partitions and criticalities are gaining in importance. This difficult combination is not well addressed by current software systems. Parallel systems require consistency guarantees on shared data-structures, often provided by locks that use predictable resource sharing protocols. However, as the number of cores increases, even a single shared cache-line (e.g. for the lock) can cause significant interference. In this paper, we present a clean-slate design of the SPeCK kernel, the next generation of our COMPOSITE OS, that attempts to provide a strong version of scalable predictability - where predictability bounds made on a single core remain constant with an increase in cores. Results show that, despite using a non-preemptive kernel, SPeCK has strong scalable predictability and low average-case overheads, and demonstrates better response times than a state-of-the-art preemptive system.

Shuhao Liu, Baochun Li.  2015.  "On scaling software-Defined Networking in wide-area networks". Tsinghua Science and Technology. 20:221-232.

Software-Defined Networking (SDN) has emerged as a promising direction for next-generation network design. Due to its clean-slate and highly flexible design, it is believed to be the foundational principle for designing network architectures and improving their flexibility, resilience, reliability, and security. As the technology matures, research in both industry and academia has produced a considerable number of tools to scale software-defined networks, in preparation for wide deployment in wide-area networks. In this paper, we survey the mechanisms that can be used to address the scalability issues in software-defined wide-area networks. Starting from a successful distributed system, the Domain Name System, we discuss the essential elements that make a large-scale network infrastructure scalable. Then, the existing technologies proposed in the literature are reviewed in three categories: scaling out the data plane, scaling up the data plane, and scaling the control plane. We conclude with possible research directions towards scaling software-defined wide-area networks.

J. Pan, R. Jain, S. Paul.  2015.  "Enhanced Evaluation of the Interdomain Routing System for Balanced Routing Scalability and New Internet Architecture Deployments". IEEE Systems Journal. 9:892-903.

Internet is facing many challenges that cannot be solved easily through ad hoc patches. To address these challenges, many research programs and projects have been initiated and many solutions are being proposed. However, before we have a new architecture that can motivate Internet service providers (ISPs) to deploy and evolve, we need to address two issues: 1) know the current status better by appropriately evaluating the existing Internet; and 2) find how various incentives and strategies will affect the deployment of the new architecture. For the first issue, we define a series of quantitative metrics that can potentially unify results from several measurement projects using different approaches and can be an intrinsic part of future Internet architecture (FIA) for monitoring and evaluation. Using these metrics, we systematically evaluate the current interdomain routing system and reveal many “autonomous-system-level” observations and key lessons for new Internet architectures. Particularly, the evaluation results reveal the imbalance underlying the interdomain routing system and how the deployment of FIAs can benefit from these findings. With these findings, for the second issue, appropriate deployment strategies of the future architecture changes can be formed with balanced incentives for both customers and ISPs. The results can be used to shape the short- and long-term goals for new architectures that are simple evolutions of the current Internet (so-called dirty-slate architectures) and to some extent to clean-slate architectures.

2016-07-13
Giulia Fanti, University of Illinois at Urbana-Champaign, Peter Kairouz, University of Illinois at Urbana-Champaign, Sewoong Oh, University of Illinois at Urbana-Champaign, Kannan Ramchandra, University of California, Berkeley, Pramod Viswanath, University of Illinois at Urbana-Champaign.  2016.  Rumor Source Obfuscation on Irregular Trees. ACM SIGMETRICS.

Anonymous messaging applications have recently gained popularity as a means for sharing opinions without fear of judgment or repercussion. These messages propagate anonymously over a network, typically defined by social connections or physical proximity. However, recent advances in rumor source detection show that the source of such an anonymous message can be inferred by certain statistical inference attacks. Adaptive diffusion was recently proposed as a solution that achieves optimal source obfuscation over regular trees. However, in real social networks, the degrees differ from node to node, and adaptive diffusion can be significantly sub-optimal. This gap increases as the degrees become more irregular.

In order to quantify this gap, we model the underlying network as coming from standard branching processes with i.i.d. degree distributions. Building upon the analysis techniques from branching processes, we give an analytical characterization of the dependence of the probability of detection achieved by adaptive diffusion on the degree distribution. Further, this analysis provides a key insight: passing a rumor to a friend who has many friends makes the source more ambiguous. This leads to a new family of protocols that we call Preferential Attachment Adaptive Diffusion (PAAD). When messages are propagated according to PAAD, we give both the MAP estimator for finding the source and also an analysis of the probability of detection achieved by this adversary. The analytical results are not directly comparable, since the adversary's observed information has a different distribution under adaptive diffusion than under PAAD. Instead, we present results from numerical experiments that suggest that PAAD achieves a lower probability of detection, at the cost of increased communication for coordination.
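
A simplified, hypothetical rendering of the degree-preferential insight ("passing a rumor to a friend who has many friends makes the source more ambiguous"): forward the message by choosing the next relay with probability proportional to its degree. It models only a single relay path, not the full PAAD protocol or its spreading dynamics.

```python
import random
import networkx as nx

def degree_preferential_relay(g: nx.Graph, source, hops: int = 6, seed: int = 0):
    """Forward a message for `hops` steps; at each step the current holder picks
    an uninformed neighbor with probability proportional to that neighbor's degree."""
    rng = random.Random(seed)
    informed, holder = {source}, source
    for _ in range(hops):
        candidates = [v for v in g.neighbors(holder) if v not in informed]
        if not candidates:
            break
        weights = [g.degree(v) for v in candidates]
        holder = rng.choices(candidates, weights=weights, k=1)[0]
        informed.add(holder)
    return informed

if __name__ == "__main__":
    g = nx.barabasi_albert_graph(1000, 3, seed=1)   # irregular-degree contact graph
    print("nodes reached:", len(degree_preferential_relay(g, source=0)))
```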

2015-05-06
Voskuilen, G., Vijaykumar, T.N..  2014.  Fractal++: Closing the performance gap between fractal and conventional coherence. Computer Architecture (ISCA), 2014 ACM/IEEE 41st International Symposium on. :409-420.

Cache coherence protocol bugs can cause multicores to fail. Existing coherence verification approaches incur state explosion at small scales or require considerable human effort. As protocols' complexity and multicores' core counts increase, verification continues to be a challenge. Recently, researchers proposed fractal coherence which achieves scalable verification by enforcing observational equivalence between sub-systems in the coherence protocol. A larger sub-system is verified implicitly if a smaller sub-system has been verified. Unfortunately, fractal protocols suffer from two fundamental limitations: (1) indirect-communication: sub-systems cannot directly communicate and (2) partially-serial-invalidations: cores must be invalidated in a specific, serial order. These limitations disallow common performance optimizations used by conventional directory protocols: reply-forwarding where caches communicate directly and parallel invalidations. Therefore, fractal protocols lack performance scalability while directory protocols lack verification scalability. To enable both performance and verification scalability, we propose Fractal++ which employs a new class of protocol optimizations for verification-constrained architectures: decoupled-replies, contention-hints, and fully-parallel-fractal-invalidations. The first two optimizations allow reply-forwarding-like performance while the third optimization enables parallel invalidations in fractal protocols. Unlike conventional protocols, Fractal++ preserves observational equivalence and hence is scalably verifiable. In 32-core simulations of single- and four-socket systems, Fractal++ performs nearly as well as a directory protocol while providing scalable verifiability whereas the best-performing previous fractal protocol performs 8% on average and up to 26% worse with a single-socket and 12% on average and up to 34% worse with a longer-latency multi-socket system.
 

Albino Pereira, A., Bosco M.Sobral, J., Merkle Westphall, C..  2014.  Towards Scalability for Federated Identity Systems for Cloud-Based Environments. New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on. :1-5.

As multi-tenant authorization and federated identity management systems for cloud computing mature, the provisioning of services using this paradigm allows maximum efficiency for businesses that require access control. However, regarding scalability support, mainly horizontal scalability, some characteristics of those approaches based on central authentication protocols are problematic. The objective of this work is to address these issues by providing an adapted sticky-session mechanism for a Shibboleth architecture using CAS. Compared with the recommended shared-memory approach, this alternative showed improved efficiency and less overall infrastructure complexity.
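
A minimal sketch of sticky-session routing in general (not the authors' Shibboleth/CAS configuration): requests carrying the same session identifier are always dispatched to the same backend replica, so no shared session memory is needed between identity-provider nodes.

```python
import hashlib

class StickyRouter:
    """Route requests for the same session ID to the same backend node."""
    def __init__(self, backends):
        self.backends = list(backends)
        self.pinned = {}          # session_id -> backend

    def route(self, session_id: str) -> str:
        if session_id not in self.pinned:
            h = int(hashlib.sha256(session_id.encode()).hexdigest(), 16)
            self.pinned[session_id] = self.backends[h % len(self.backends)]
        return self.pinned[session_id]

if __name__ == "__main__":
    router = StickyRouter(["idp-1", "idp-2", "idp-3"])
    sid = "JSESSIONID=abc123"
    assert router.route(sid) == router.route(sid)   # same node every time
    print(router.route(sid))
```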

Pi-Chung Wang.  2014.  Scalable Packet Classification for Datacenter Networks. Selected Areas in Communications, IEEE Journal on. 32:124-137.

The key challenge to a datacenter network is its scalability to handle many customers and their applications. In a datacenter network, packet classification plays an important role in supporting various network services. Previous algorithms store classification rules with the same length combinations in a hash table to simplify the search procedure. The search performance of hash-based algorithms is tied to the number of hash tables. To achieve fast and scalable packet classification, we propose an algorithm, encoded rule expansion, to transform rules into an equivalent set of rules with fewer distinct length combinations, without affecting the classification results. The new algorithm can minimize the storage penalty of transformation and achieve a short search time. In addition, the scheme supports fast incremental updates. Our simulation results show that more than 90% of the hash tables can be eliminated. The reduction of length combinations leads to an improvement on speed performance of packet classification by an order of magnitude. The results also show that the software implementation of our scheme without using any hardware parallelism can support up to one thousand customer VLANs and one million rules, where each rule consumes less than 60 bytes and each packet classification can be accomplished under 50 memory accesses.
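
For context, the hash-based baseline the paper improves on ("rules with the same length combinations in a hash table") resembles tuple space search. The sketch below is an illustrative baseline only, limited to IPv4 source/destination prefixes with a first-match policy; it does not implement the proposed encoded rule expansion.

```python
import ipaddress
from collections import defaultdict

class TupleSpaceClassifier:
    """Group rules by (src_len, dst_len); lookup masks the packet's addresses
    to each stored length pair and probes the corresponding hash table."""
    def __init__(self):
        self.tables = defaultdict(dict)   # (src_len, dst_len) -> {(src_net, dst_net): action}

    def add_rule(self, src_cidr: str, dst_cidr: str, action: str):
        src, dst = ipaddress.ip_network(src_cidr), ipaddress.ip_network(dst_cidr)
        key = (src.prefixlen, dst.prefixlen)
        self.tables[key][(src.network_address, dst.network_address)] = action

    def classify(self, src_ip: str, dst_ip: str) -> str:
        src, dst = int(ipaddress.ip_address(src_ip)), int(ipaddress.ip_address(dst_ip))
        for (sl, dl), table in self.tables.items():      # one probe per length combination
            smask = ((1 << sl) - 1) << (32 - sl) if sl else 0
            dmask = ((1 << dl) - 1) << (32 - dl) if dl else 0
            key = (ipaddress.ip_address(src & smask), ipaddress.ip_address(dst & dmask))
            if key in table:
                return table[key]
        return "default"

if __name__ == "__main__":
    c = TupleSpaceClassifier()
    c.add_rule("10.0.0.0/8", "192.168.1.0/24", "permit")
    print(c.classify("10.1.2.3", "192.168.1.77"))   # -> permit
```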
 

2015-05-05
Hassan, S., Abbas Kamboh, A., Azam, F..  2014.  Analysis of Cloud Computing Performance, Scalability, Availability, & Security. Information Science and Applications (ICISA), 2014 International Conference on. :1-5.

Cloud computing refers to a relationship among a large number of computers connected through a communication channel such as the Internet. Through cloud computing we send, receive, and store data on the Internet. Cloud computing also gives us an opportunity for parallel computing by using a large number of virtual machines. Nowadays, performance, scalability, availability, and security can represent the big risks in cloud computing. In this paper we highlight the issues of security, availability, and scalability, and we also identify how to make a cloud-computing-based infrastructure more secure and more available. We also highlight the elastic behavior of cloud computing and discuss some of the characteristics involved in achieving its high performance.
 

2015-05-04
Barbosa de Carvalho, M., Pereira Esteves, R., da Cunha Rodrigues, G., Cassales Marquezan, C., Zambenedetti Granville, L., Rockenbach Tarouco, L.M..  2014.  Efficient configuration of monitoring slices for cloud platform administrators. Computers and Communication (ISCC), 2014 IEEE Symposium on. :1-7.

Monitoring is an important issue in cloud environments because it assures that acquired cloud slices meet the user's expectations. However, these environments are multi-tenant and dynamic, requiring automation techniques to offload cloud administrators. In a previous work, we proposed FlexACMS: a framework to automate monitoring configuration related to cloud slices using multiple monitoring solutions. In this work, we enhanced FlexACMS to allow dynamic and automatic attribution of monitoring configuration tasks to servers without administrator intervention, which was not available in the previous version. FlexACMS also considers the monitoring server load when attributing configuration tasks, which allows load balancing between monitoring servers. The evaluation showed that the enhancements reduced FlexACMS response time by up to 60% in comparison to the previous version. The scalability evaluation of the enhanced version demonstrated the feasibility of our approach in large-scale cloud environments.
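
A minimal sketch of the load-aware attribution idea, assuming a simple unit-cost load metric and hypothetical server names: each new monitoring configuration task is assigned to the currently least-loaded monitoring server.

```python
import heapq

class LoadAwareDispatcher:
    """Assign each monitoring-configuration task to the monitoring server
    with the lowest current load (a min-heap keyed on load)."""
    def __init__(self, servers):
        self.heap = [(0, name) for name in servers]   # (load, server)
        heapq.heapify(self.heap)

    def dispatch(self, task_cost: int = 1) -> str:
        load, server = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (load + task_cost, server))
        return server

if __name__ == "__main__":
    d = LoadAwareDispatcher(["mon-a", "mon-b", "mon-c"])
    for slice_id in range(7):                      # one config task per new cloud slice
        print(f"slice {slice_id} -> {d.dispatch()}")
```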