Biblio

Found 12044 results

Filters: Keyword is Resiliency
2017-03-20
LeBlanc, Heath J., Hassan, Firas, Gomez, Edgar, Alsbou, Nesreen.  2016.  Inter-vehicle Communication Assisted Localization with Resilience to False Data Injection Attacks. Proceedings of the First ACM International Workshop on Smart, Autonomous, and Connected Vehicular Systems and Services. :64–65.

Vehicle localization is important in many applications of vehicular networks. The Global Positioning System (GPS) has been critical for vehicle localization. However, the case where the GPS is spoofed through a false data injection attack can lead to devastating consequences, especially in localization solutions that make use of cooperation among multiple vehicles. Hence, resilient localization algorithms are needed that can achieve a baseline of performance in the case of a false data injection attack. This poster presents preliminary results of an inter-vehicle communication assisted localization algorithm that is resilient to false data injection attacks for the vehicles not directly attacked. The algorithm makes use of V2V and V2I communication – along with an on-board GPS receiver, odometer, and compass – to achieve precise localization results.
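
One building block such an algorithm needs is a plausibility check on incoming GPS fixes against the on-board sensors. A minimal sketch in Python, assuming a simple dead-reckoning model and a hypothetical 15 m gate threshold (neither is specified in the poster):

```python
import math

# Hypothetical gate threshold (metres): how far a GPS fix may deviate from
# the dead-reckoned position before it is treated as suspect.
GATE_M = 15.0

def dead_reckon(pos, heading_rad, distance_m):
    """Advance an (x, y) position using odometer distance and compass heading."""
    return (pos[0] + distance_m * math.cos(heading_rad),
            pos[1] + distance_m * math.sin(heading_rad))

def plausible(gps_fix, predicted, gate_m=GATE_M):
    """Accept a GPS fix only if it lies within the gate of the prediction."""
    dx, dy = gps_fix[0] - predicted[0], gps_fix[1] - predicted[1]
    return math.hypot(dx, dy) <= gate_m

# Toy run: the second fix is injected far from the dead-reckoned track.
pos = (0.0, 0.0)
for gps, heading, dist in [((9.8, 0.3), 0.0, 10.0), ((500.0, 400.0), 0.0, 10.0)]:
    pos = dead_reckon(pos, heading, dist)
    if plausible(gps, pos):
        pos = gps   # trust the fix and re-anchor
    # otherwise keep dead reckoning and flag the fix as possibly spoofed
    print(pos)
```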

Shahriar, Hossain, Haddad, Hisham.  2016.  Object Injection Vulnerability Discovery Based on Latent Semantic Indexing. Proceedings of the 31st Annual ACM Symposium on Applied Computing. :801–807.

Object Injection Vulnerability (OIV) is an emerging threat for web applications. It involves accepting external inputs during deserialization operations and using those inputs for sensitive operations such as file access, modification, and deletion. The challenge is the automation of the detection process. When the application size is large, it becomes hard to perform traditional approaches such as data flow analysis. Recent approaches fall short both in narrowing down the list of source files to aid developers in discovering OIVs and in the flexibility to check for the presence of OIVs through various known APIs. In this work, we address these limitations by exploring a concept borrowed from the information retrieval domain called Latent Semantic Indexing (LSI) to discover OIVs. The approach analyzes application source code and builds an initial term-document matrix, which is then transformed systematically using singular value decomposition to reduce the search space. The approach identifies a small set of documents (source files) that are likely responsible for OIVs. We apply the LSI concept to three open source PHP applications that have been reported to contain OIVs. Our initial evaluation results suggest that the proposed LSI-based approach can identify OIVs and uncover new vulnerabilities.
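
The core pipeline (term-document matrix, truncated SVD, similarity ranking) is straightforward to sketch. A toy version in Python/NumPy, with made-up terms and file names standing in for the tokens a real lexer would extract:

```python
import numpy as np

# Hypothetical term-document matrix: rows = terms, columns = PHP source files.
# Terms here are deserialization-related tokens one might extract by lexing.
terms = ["unserialize", "file_put_contents", "unlink", "__wakeup", "echo"]
files = ["upload.php", "cache.php", "view.php"]
A = np.array([[2, 1, 0],    # unserialize
              [1, 1, 0],    # file_put_contents
              [0, 1, 0],    # unlink
              [1, 0, 0],    # __wakeup
              [0, 0, 3]])   # echo

# Reduce to a k-dimensional latent space via truncated SVD.
k = 2
U, s, Vt = np.linalg.svd(A.astype(float), full_matrices=False)
docs_k = (np.diag(s[:k]) @ Vt[:k]).T          # file vectors in latent space

# Query: the OIV-indicative API terms; fold it into the same space.
q = np.array([1, 1, 1, 1, 0], dtype=float)
q_k = q @ U[:, :k]

# Rank files by cosine similarity to the query; top files are OIV suspects.
cos = docs_k @ q_k / (np.linalg.norm(docs_k, axis=1) * np.linalg.norm(q_k))
for f, c in sorted(zip(files, cos), key=lambda t: -t[1]):
    print(f"{f}: {c:.2f}")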

Amullen, Esther, Lin, Hui, Kalbarczyk, Zbigniew, Keel, Lee.  2016.  Multi-agent System for Detecting False Data Injection Attacks Against the Power Grid. Proceedings of the 2Nd Annual Industrial Control System Security Workshop. :38–44.

A class of cyber-attacks called False Data Injection attacks that target measurement data used for state estimation in the power grid are currently under study by the research community. These attacks modify sensor readings obtained from meters with the aim of misleading the control center into taking ill-advised response actions. It has been shown that an attacker with knowledge of the network topology can craft an attack that bypasses existing bad data detection schemes (largely based on residual generation) employed in the power grid. We propose a multi-agent system for detecting false data injection attacks against state estimation. The multi-agent system is composed of software-implemented agents created for each substation. The agents facilitate the exchange of information, including measurement data and state variables, among substations. We demonstrate that the information exchanged among substations, even if untrusted, enables agents to cooperatively detect disparities between local state variables at the substation and global state variables computed by the state estimator. We show that a false data injection attack that passes bad data detection for the entire system does not pass bad data detection for each agent.
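
The per-agent check reduces to comparing a substation's local estimate with the centrally computed one. A minimal sketch, assuming voltage angles as the state variables and an illustrative 0.05 rad tolerance (the abstract does not give these specifics):

```python
import numpy as np

# Hypothetical per-substation check: an agent compares the state variables it
# estimates locally with the values the central estimator reports for its buses.
THRESHOLD = 0.05   # assumed tolerance on voltage-angle disagreement (rad)

def agent_check(local_state, global_state, threshold=THRESHOLD):
    """Flag buses where local and global estimates disagree beyond tolerance."""
    disparity = np.abs(local_state - global_state)
    return np.where(disparity > threshold)[0]

# Toy data: the attacker biased the global estimate at bus 2 only.
local_angles  = np.array([0.00, 0.02, 0.04, 0.01])
global_angles = np.array([0.00, 0.02, 0.18, 0.01])
print("suspect buses:", agent_check(local_angles, global_angles))
```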

Min, Byungho, Varadharajan, Vijay.  2016.  Cascading Attacks Against Smart Grid Using Control Command Disaggregation and Services. Proceedings of the 31st Annual ACM Symposium on Applied Computing. :2142–2147.

In this paper, we propose new types of cascading attacks against the smart grid that use control command disaggregation and core smart grid services. Although there have been tremendous research efforts on injection attacks against the smart grid, to our knowledge most studies focus on false meter data injection, while false command and false feedback injection attacks have scarcely been investigated. In addition, control command disaggregation has not been addressed from a security point of view, in spite of the fact that it is becoming one of the core concepts in the smart grid, and hence analysing its security implications is crucial to smart grid security. Our cascading attacks use false control command, false feedback or false meter data injection, and cascade the effects of such injections throughout the smart grid subsystems and components. Our analysis and evaluation results show that the proposed attacks can cause serious service disruptions in the smart grid. The evaluation has been performed on a widely used smart grid simulation platform.
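
Why disaggregation amplifies an injection is easy to see in a toy model: one falsified grid-level command fans out into every per-feeder command. A purely illustrative sketch (names and numbers are not from the paper):

```python
# Toy illustration of control command disaggregation and why it amplifies a
# false command injection: one falsified aggregate command corrupts every
# downstream per-feeder command in a single step.
def disaggregate(total_shed_mw, feeder_loads_mw):
    """Split a grid-level load-shedding command across feeders pro rata."""
    total = sum(feeder_loads_mw.values())
    return {f: total_shed_mw * load / total
            for f, load in feeder_loads_mw.items()}

feeders = {"feeder_a": 40.0, "feeder_b": 35.0, "feeder_c": 25.0}

print("legitimate command  :", disaggregate(10.0, feeders))
# Attacker injects 100 MW instead of 10 MW upstream of the disaggregator.
print("after false injection:", disaggregate(100.0, feeders))
```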

Wang, Yinan, Zeng, Sicheng, Yang, Qiang, Lin, Zhiyun, Xu, Wenyuan, Yan, Gangfeng.  2016.  A new framework of electrical cyber physical systems. :1334–1339.

This paper establishes a new framework for electrical cyber-physical systems (ECPSs). The communication network is designed according to the characteristics of the power grid. The interdependent relationship between communication networks and power grids is described by data-uploading channels and command-downloading channels. Control strategies (such as load shedding and relay protection) are extended to this new framework for analyzing the performance of ECPSs under several attack scenarios. The fragility of ECPSs under cyber attacks (DoS attacks and false data injection attacks) and the effectiveness of relay protection policies are verified by experimental results.

Malecha, Gregory, Ricketts, Daniel, Alvarez, Mario M., Lerner, Sorin.  2016.  Towards foundational verification of cyber-physical systems. :1–5.

The safety-critical aspects of cyber-physical systems motivate the need for rigorous analysis of these systems. In the literature this work is often done using idealized models of systems where the analysis can be carried out using high-level reasoning techniques such as Lyapunov functions and model checking. In this paper we present VERIDRONE, a foundational framework for reasoning about cyber-physical systems at all levels from high-level models to C code that implements the system. VERIDRONE is a library within the Coq proof assistant enabling us to build on its foundational implementation, its interactive development environments, and its wealth of libraries capturing interesting theories ranging from real numbers and differential equations to verified compilers and floating point numbers. These features make proof assistants in general, and Coq in particular, a powerful platform for unifying foundational results about safety-critical systems and ensuring interesting properties at all levels of the stack.

Chen, Siyuan, Zeng, Peng, Choo, Kim-Kwang Raymond.  2016.  A Provably Secure Blind Signature Based on Coding Theory. :376–382.

Blind signatures can be deployed to preserve user anonymity and are widely used in digital cash and e-voting. As interactive protocols, blind signature schemes require high efficiency. In this paper, we propose an efficient code-based blind signature scheme that, unlike existing code-based signature schemes, can produce a valid signature without many loops. We then prove the security of our scheme in the random oracle model and analyze its efficiency. Since code-based signature schemes belong to post-quantum cryptography, our scheme is also able to resist quantum attacks.
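
For readers unfamiliar with the blind/sign/unblind interaction, the classic Chaum RSA blind signature makes it concrete. This is deliberately not the paper's code-based construction, just the textbook protocol with insecure toy parameters:

```python
import hashlib
from math import gcd

# Textbook Chaum RSA blind signature with toy parameters -- shown only to make
# the blind/sign/unblind interaction concrete; the paper's actual scheme is
# code-based, not RSA-based.
p, q = 1009, 1013                  # toy primes (insecure sizes)
n, e = p * q, 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                # private exponent

m = int.from_bytes(hashlib.sha256(b"ballot-42").digest(), "big") % n
r = 12345                          # blinding factor; must satisfy gcd(r, n) = 1
assert gcd(r, n) == 1

blinded = (m * pow(r, e, n)) % n   # user blinds the message
sig_b = pow(blinded, d, n)         # signer signs without seeing m
sig = (sig_b * pow(r, -1, n)) % n  # user unblinds
assert pow(sig, e, n) == m         # anyone can verify the signature on m
print("valid blind signature")
```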

Goldfeld, Ziv, Cuff, Paul, Permuter, Haim H..  2016.  Semantic-Security Capacity for the Physical Layer via Information Theory. :17–27.

Physical layer security can ensure secure communication over noisy channels in the presence of an eavesdropper with unlimited computational power. We adopt an information theoretic variant of semantic security (SS) (a cryptographic gold standard) as our secrecy metric and study the open problem of the type II wiretap channel (WTC II) with a noisy main channel, whose secrecy-capacity is unknown even under looser metrics than SS. Herein the secrecy-capacity is derived and shown to be equal to its SS capacity. In this setting, the legitimate users communicate via a discrete memoryless (DM) channel in the presence of an eavesdropper that has perfect access to a subset of its choosing of the transmitted symbols, constrained to a fixed fraction of the block length. The secrecy criterion is achieved simultaneously for all possible eavesdropper subset choices. On top of that, SS requires negligible mutual information between the message and the eavesdropper's observations even when maximized over all message distributions. A key tool for the achievability proof is a novel and stronger version of Wyner's soft covering lemma. Specifically, the lemma shows that a random codebook achieves the soft-covering phenomenon with high probability. The probability of failure is doubly-exponentially small in the block length. Since the combined number of messages and subsets grows only exponentially with the block length, SS for the WTC II is established by using the union bound and invoking the stronger soft-covering lemma. The direct proof shows that rates up to the weak-secrecy capacity of the classic WTC with a DM erasure channel (EC) to the eavesdropper are achievable. The converse follows by establishing the capacity of this DM wiretap EC as an upper bound for the WTC II. From a broader perspective, the stronger soft-covering lemma constitutes a tool for showing the existence of codebooks that satisfy exponentially many constraints, a beneficial ability for many other applications in information theoretic security.
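
Written out, the semantic-security criterion described in the abstract requires vanishing mutual information even under the worst-case message distribution (with M the message and Z^n the eavesdropper's observation):

```latex
% Semantic security over the wiretap channel, as described in the abstract:
% mutual information between message M and eavesdropper observation Z^n must
% vanish even when maximized over all message distributions.
\[
  \max_{P_M} \; I\!\left(M; Z^n\right) \xrightarrow[n \to \infty]{} 0
\]
```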

Gnilke, Oliver Wilhelm, Tran, Ha Thanh Nguyen, Karrila, Alex, Hollanti, Camilla.  2016.  Well-rounded lattices for reliability and security in Rayleigh fading SISO channels. :359–363.

For many wiretap channel models asymptotically optimal coding schemes are known, but less effort has been put into actual realizations of wiretap codes for practical parameters. Bounds on the mutual information and error probability when using coset coding on a Rayleigh fading channel were recently established by Oggier and Belfiore, and the results in this paper build on their work. However, instead of using their ultimate inverse norm sum approximation, a more precise expression for the eavesdropper's probability of correct decision is used in order to determine a general class of good coset codes. The code constructions are based on well-rounded lattices arising from simple geometric criteria. In addition to new coset codes and simulation results, novel number-theoretic results on well-rounded ideal lattices are presented.

Han, Shuai, Liu, Shengli, Zhang, Fangguo, Chen, Kefei.  2016.  Homomorphic Linear Authentication Schemes from ε-Authentication Codes. Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. :487–498.

Proofs of Data Possession/Retrievability (PoDP/PoR) schemes are essential to cloud storage services, since they can increase clients' confidence in the integrity and availability of their data. The majority of PoDP/PoR schemes are constructed from homomorphic linear authentication (HLA) schemes, which reduce the communication cost between the client and the server. In this paper, a new subclass of authentication codes, named ε-authentication codes, is proposed, and a modular construction of HLA schemes from ε-authentication codes is presented. We prove that the security notions of HLA schemes are closely related to the size of the authenticator/tag space and the success probability of impersonation attacks (with non-zero source states) against the underlying ε-authentication codes. We show that most of the HLA schemes used for PoDP/PoR schemes are instantiations of our modular construction from some ε-authentication codes. Following this line, an algebraic-curves-based ε-authentication code yields a new HLA scheme.
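
The homomorphic-linearity property that makes HLA schemes useful for auditing is easy to see in a generic sketch: tags combine the same way the data blocks do. Below is a toy linear tag over Z_p with a hash standing in for a PRF, illustrating the pattern rather than the paper's ε-authentication-code construction:

```python
import hashlib, random

# Generic homomorphic linear tag over Z_p -- a sketch of the HLA pattern the
# abstract describes, not the paper's epsilon-authentication-code scheme.
p = (1 << 61) - 1                      # a Mersenne prime as the working field
alpha = 0x1234567890abcdef % p         # secret scaling key
key = b"secret-prf-key"

def prf(i):
    """Hash-based stand-in for a PRF indexed by block number."""
    return int.from_bytes(hashlib.sha256(key + i.to_bytes(8, "big")).digest(), "big") % p

blocks = [random.randrange(p) for _ in range(8)]           # the stored file
tags = [(alpha * m + prf(i)) % p for i, m in enumerate(blocks)]

# Audit: client sends random coefficients; server aggregates blocks and tags.
coeffs = [random.randrange(p) for _ in blocks]
mu  = sum(c * m for c, m in zip(coeffs, blocks)) % p
tau = sum(c * t for c, t in zip(coeffs, tags)) % p

# Verification uses linearity: tau must equal alpha*mu + sum_i c_i * PRF(i).
assert tau == (alpha * mu + sum(c * prf(i) for i, c in enumerate(coeffs))) % p
print("possession proof verified")
```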

Dormann, Will.  2016.  Google Authentication Risks on iOS. Proceedings of the 1st International Workshop on Mobile Development. :3–5.

The Google Identity Platform is a system that allows a user to sign in to applications and other services by using a Google account. Google Sign-In is one such method for providing one's identity to the Google Identity Platform. Google Sign-In is available for Android applications and iOS applications, as well as for websites and other devices. Users of Google Sign-In find that it integrates well with the Android platform, but iOS users (iPhone, iPad, etc.) do not have the same experience. The user experience when logging in to a Google account on an iOS application is not only more tedious than the Android experience; it also conditions users to engage in behaviors that put the information in their Google accounts at risk.

Asharov, Gilad, Naor, Moni, Segev, Gil, Shahaf, Ido.  2016.  Searchable Symmetric Encryption: Optimal Locality in Linear Space via Two-dimensional Balanced Allocations. Proceedings of the Forty-eighth Annual ACM Symposium on Theory of Computing. :1101–1114.

Searchable symmetric encryption (SSE) enables a client to store a database on an untrusted server while supporting keyword search in a secure manner. Despite the rapidly increasing interest in SSE technology, experiments indicate that the performance of the known schemes scales badly to large databases. Somewhat surprisingly, this is not due to their usage of cryptographic tools, but rather due to their poor locality (where locality is defined as the number of non-contiguous memory locations the server accesses with each query). The only known schemes that do not suffer from poor locality suffer either from an impractical space overhead or from an impractical read efficiency (where read efficiency is defined as the ratio between the number of bits the server reads with each query and the actual size of the answer). We construct the first SSE schemes that simultaneously enjoy optimal locality, optimal space overhead, and nearly-optimal read efficiency. Specifically, for a database of size N, under the modest assumption that no keyword appears in more than N^(1 − 1/log log N) documents, we construct a scheme with read efficiency Õ(log log N). This essentially matches the lower bound of Cash and Tessaro (EUROCRYPT ’14) showing that any SSE scheme must be sub-optimal in either its locality, its space overhead, or its read efficiency. In addition, even without making any assumptions on the structure of the database, we construct a scheme with read efficiency Õ(log N). Our schemes are obtained via a two-dimensional generalization of the classic balanced allocations (“balls and bins”) problem that we put forward. We construct nearly-optimal two-dimensional balanced allocation schemes, and then combine their algorithmic structure with subtle cryptographic techniques.
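
The classic one-dimensional balanced-allocations phenomenon the authors generalize is easy to demonstrate empirically: giving each ball two random bin choices collapses the maximum load. A quick sketch (parameters illustrative):

```python
import random
from collections import Counter

def one_choice(n_balls, n_bins):
    """Each ball goes to a single uniformly random bin."""
    loads = Counter(random.randrange(n_bins) for _ in range(n_balls))
    return max(loads.values())

def two_choice(n_balls, n_bins):
    """Each ball picks two random bins and lands in the lighter one."""
    bins = [0] * n_bins
    for _ in range(n_balls):
        a, b = random.randrange(n_bins), random.randrange(n_bins)
        bins[a if bins[a] <= bins[b] else b] += 1
    return max(bins)

random.seed(0)
# With two choices the max load stays within O(log log n) of the average,
# versus a Theta(log n / log log n) gap for a single choice.
print("max load, one choice :", one_choice(100_000, 1_000))
print("max load, two choices:", two_choice(100_000, 1_000))
```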

Swami, Shivam, Rakshit, Joydeep, Mohanram, Kartik.  2016.  SECRET: Smartly EnCRypted Energy Efficient Non-volatile Memories. Proceedings of the 53rd Annual Design Automation Conference. :166:1–166:6.

Data persistence in emerging non-volatile memories (NVMs) poses a multitude of security vulnerabilities, motivating main memory encryption for data security. However, practical encryption algorithms demonstrate strong diffusion characteristics that increase cell flips, resulting in increased write energy/latency and reduced lifetime of NVMs. State-of-the-art security solutions have focused on reducing the encryption penalty (increased write energy/latency and reduced memory lifetime) in single-level cell (SLC) NVMs; however, the realization of low encryption penalty solutions for multi-/triple-level cell (MLC/TLC) secure NVMs remains an open area of research. This work synergistically integrates zero-based partial writes with XOR-based energy masking to realize Smartly EnCRypted Energy efficienT, i.e., SECRET MLC/TLC NVMs, without compromising the security of the underlying encryption technique. Our simulations on an MLC (TLC) resistive RAM (RRAM) architecture across SPEC CPU2006 workloads demonstrate that for 6.25% (7.84%) memory overhead, SECRET reduces write energy by 80% (63%), latency by 37% (49%), and improves memory lifetime by 63% (56%) over conventional advanced encryption standard-based (AES-based) counter mode encryption.
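
The two write-reduction ideas the abstract combines — skipping unchanged cells and XOR-masking the data when the inverted encoding flips fewer cells — can be sketched on a single word. A toy bit-level illustration (word size and values are not from the paper, and real SECRET operates on MLC/TLC cells rather than single bits):

```python
# Sketch of zero-based partial writes combined with XOR energy masking,
# on one memory word. Values and word size are illustrative only.
WORD_BITS = 16
MASK = (1 << WORD_BITS) - 1          # XOR energy mask: invert every bit

def flips(old, new):
    """Number of cells whose stored value must change (partial write cost)."""
    return bin(old ^ new).count("1")

def secret_write(old, new):
    """Pick plain vs. masked encoding, whichever flips fewer cells,
    recording one mask flag bit per word."""
    plain, masked = flips(old, new), flips(old, new ^ MASK)
    if masked < plain:
        return new ^ MASK, 1, masked   # stored value, mask flag, flip count
    return new, 0, plain

old, new = 0b1111000011110000, 0b0000111100001111   # worst case: all cells flip
stored, flag, n = secret_write(old, new)
print(f"flips without masking: {flips(old, new)}, with masking: {n}")
```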

Jo, Je-Gyeong, Ryou, Jae-cheol.  2016.  HTML and PDF Fuzzing Methodology in iOS. Proceedings of the 10th International Conference on Ubiquitous Information Management and Communication. :8:1–8:5.

iOS is a well-known operating system with a strong reputation for security. However, many attacks on iOS have recently been published, such as "Masque Attack", "Null Dereference", and the Italian Hacking Team's RCS. Therefore, iOS can no longer simply be assumed secure and safe. In addition, many security researchers find iOS hard to analyze because its closed source makes debugging difficult. So, we propose a new security testing method for iOS. First, we fuzz iOS's web browser, MobileSafari, which can render HTML, PDF, mp4, and other content. We test malformed HTML and PDF files using our fuzzing method. We hope that our research can contribute to iOS's security and safety.
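
The paper's fuzzer itself is not public, but the general shape of such a test-case generator is simple: mutate a well-formed seed document and feed the results to the browser. A minimal, purely illustrative mutation fuzzer:

```python
import random

# A minimal mutation fuzzer for HTML inputs, sketching the kind of malformed
# documents one might feed to a browser such as MobileSafari. The paper's own
# fuzzer and harness are not described in detail, so this is illustrative only.
SEED = "<html><body><p style='width:10px'>hello</p></body></html>"

def mutate(doc, n_mutations=5):
    data = bytearray(doc, "ascii")
    for _ in range(n_mutations):
        choice = random.random()
        i = random.randrange(len(data))
        if choice < 0.5:
            data[i] = random.randrange(256)          # overwrite a byte
        elif choice < 0.8:
            data.insert(i, random.randrange(256))    # insert a junk byte
        else:
            del data[i]                              # drop a byte
    return bytes(data)

random.seed(1)
for case in range(3):
    sample = mutate(SEED)
    with open(f"case_{case}.html", "wb") as f:       # feed these to the target
        f.write(sample)
    print(sample[:60])
```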

Suarez, Drew, Mayer, Daniel.  2016.  Faux Disk Encryption: Realities of Secure Storage on Mobile Devices. Proceedings of the International Conference on Mobile Software Engineering and Systems. :283–284.

This paper reviews the challenges faced when securing data on mobile devices. After a discussion of the state-of-the-art of secure storage for iOS and Android, the paper introduces an attack which demonstrates how Full Disk Encryption (FDE) on Android can be ineffective in practice.

Ferragut, Erik M., Brady, Andrew C., Brady, Ethan J., Ferragut, Jacob M., Ferragut, Nathan M., Wildgruber, Max C..  2016.  HackAttack: Game-Theoretic Analysis of Realistic Cyber Conflicts. Proceedings of the 11th Annual Cyber and Information Security Research Conference. :8:1–8:8.

Game theory is appropriate for studying cyber conflict because it allows for an intelligent and goal-driven adversary. Applications of game theory have led to a number of results regarding optimal attack and defense strategies. However, the overwhelming majority of applications explore overly simplistic games, often ones in which each participant's actions are visible to every other participant. These simplifications strip away the fundamental properties of real cyber conflicts: probabilistic alerting, hidden actions, and unknown opponent capabilities. In this paper, we demonstrate that it is possible to analyze a more realistic game, one in which different resources have different weaknesses, players have different exploits, and moves occur in secrecy, but can be detected. Certainly, more advanced and complex games are possible, but the game presented here is more realistic than any other game we know of in the scientific literature. While optimal strategies can be found for simpler games using calculus, case-by-case analysis, or, for stochastic games, Q-learning, our more complex game is more naturally analyzed using the same methods used to study other complex games, such as checkers and chess. We define a simple evaluation function and employ multi-step searches to create strategies. We show that such scenarios can be analyzed, and find that in cases of extreme uncertainty, it is often better to ignore one's opponent's possible moves. Furthermore, we show that a simple evaluation function in a complex game can lead to interesting and nuanced strategies whose tactics tend to select moves well tuned to the details of the situation and the relative probabilities of success.
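
The evaluation-function-plus-multi-step-search recipe is the same one used for checkers and chess. A generic depth-limited negamax sketch — the tiny game, evaluation function, and parameters below are illustrative stand-ins, not the paper's cyber-conflict game:

```python
# Generic depth-limited negamax with a plug-in evaluation function.
def negamax(state, depth, evaluate, moves, apply_move):
    if depth == 0 or not moves(state):
        return evaluate(state), None
    best_score, best_move = float("-inf"), None
    for m in moves(state):
        score, _ = negamax(apply_move(state, m), depth - 1,
                           evaluate, moves, apply_move)
        score = -score                      # opponent's gain is our loss
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

# Toy game: state = (my_total, opp_total, remaining numbers); players
# alternately claim a number, and the higher sum wins.
moves      = lambda s: list(s[2])
apply_move = lambda s, m: (s[1], s[0] + m, tuple(x for x in s[2] if x != m))
evaluate   = lambda s: s[0] - s[1]          # simple material-style evaluation

print(negamax((0, 0, (1, 3, 5, 7)), depth=4, evaluate=evaluate,
              moves=moves, apply_move=apply_move))
```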

Carver, Jeffrey C., Burcham, Morgan, Kocak, Sedef Akinli, Bener, Ayse, Felderer, Michael, Gander, Matthias, King, Jason, Markkula, Jouni, Oivo, Markku, Sauerwein, Clemens et al..  2016.  Establishing a Baseline for Measuring Advancement in the Science of Security: An Analysis of the 2015 IEEE Security & Privacy Proceedings. Proceedings of the Symposium and Bootcamp on the Science of Security. :38–51.

To help establish a more scientific basis for security science, which will enable the development of fundamental theories and move the field from being primarily reactive to primarily proactive, it is important for research results to be reported in a scientifically rigorous manner. Such reporting will allow for the standard pillars of science, namely replication, meta-analysis, and theory building. In this paper we aim to establish a baseline of the state of scientific work in security through the analysis of indicators of scientific research as reported in the papers from the 2015 IEEE Symposium on Security and Privacy. To conduct this analysis, we developed a series of rubrics to determine the completeness of the papers relative to the type of evaluation used (e.g. case study, experiment, proof). Our findings showed that while papers are generally easy to read, they often do not explicitly document some key information like the research objectives, the process for choosing the cases to include in the studies, and the threats to validity. We hope that this initial analysis will serve as a baseline against which we can measure the advancement of the science of security.

Sharma, Seema, Ram, Babu.  2016.  Causes of Human Errors in Early Risk Assessment in Software Project Management. Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. :11:1–11:11.

This paper concerns the role of human errors in early risk assessment in software project management. Researchers have recently begun to focus on human errors in early risk assessment in large software projects; statistics show them to be a major component of software problems, with over 80% of economic losses attributed to them. There has been comparatively little empirical research on the role of human errors in this context, particularly at the organizational level, largely because of reluctance to share information and statistics on security issues in online software applications. Grounded theory was employed as the research methodology to investigate the root causes of human errors in online security risks. An open-ended question was asked of 103 information security experts around the globe, and the responses were used to develop a list of causes of human errors through open coding. The paper contributes to our understanding of the causes of human errors in information security contexts. It is also one of the first information security studies to utilize Strauss's and Glaser's grounded theory approaches together during the data collection phase to achieve the required number of participant responses, and is a significant contribution to the field.

Hiller, Matthias, Önalan, Aysun Gurur, Sigl, Georg, Bossert, Martin.  2016.  Online Reliability Testing for PUF Key Derivation. Proceedings of the 6th International Workshop on Trustworthy Embedded Devices. :15–22.

Physical Unclonable Functions (PUFs) measure manufacturing variations inside integrated circuits to derive internal secrets during run-time and avoid storing secrets permanently in non-volatile memory. PUF responses are noisy, so they require error correction to generate reliable cryptographic keys. To date, a single key is reproduced in the field when needed and is always used, regardless of its reliability. In this work, we compute online reliability information for a reproduced key and perform multiple PUF readout and error correction steps in case of an unreliable result. This permits choosing the most reliable key among multiple derived key candidates with different corrected error patterns. With this approach, we achieve the same average key error probability from fewer PUF response bits. Our proof-of-concept design for a popular reference scenario uses Differential Sequence Coding (DSC) and a Viterbi decoder with reliability output information. It requires 39% fewer PUF response bits and 16% fewer helper data bits than the regular approach without the option of multiple readouts.
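
The select-the-most-reliable-candidate idea can be sketched with a toy error-correction stage. Here per-bit majority voting and a simple agreement score stand in for the paper's DSC encoding and Viterbi reliability output:

```python
import random

# Sketch of the selection idea: re-read a noisy PUF several times, error-correct
# each batch of readouts, and keep the candidate with the best reliability score.
# "Reliability" here is just readout agreement -- a stand-in for Viterbi soft output.
random.seed(7)
TRUE_RESPONSE = [random.randrange(2) for _ in range(64)]

def noisy_readout(resp, p_flip=0.1):
    """Simulate one PUF readout with an assumed 10% bit-flip probability."""
    return [b ^ (random.random() < p_flip) for b in resp]

def majority_decode(readouts):
    """Toy error correction: per-bit majority vote over repeated readouts."""
    return [int(sum(bits) * 2 > len(bits)) for bits in zip(*readouts)]

candidates = []
for _ in range(3):                       # three independent reproduction attempts
    reads = [noisy_readout(TRUE_RESPONSE) for _ in range(5)]
    key = majority_decode(reads)
    # Reliability proxy: how strongly the raw readouts agree with the decoded key.
    agreement = sum(r[i] == key[i] for r in reads for i in range(len(key)))
    candidates.append((agreement, key))

best = max(candidates, key=lambda c: c[0])[1]
print("recovered key matches:", best == TRUE_RESPONSE)
```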

Haah, Jeongwan, Harrow, Aram W., Ji, Zhengfeng, Wu, Xiaodi, Yu, Nengkun.  2016.  Sample-optimal Tomography of Quantum States. Proceedings of the Forty-eighth Annual ACM Symposium on Theory of Computing. :913–925.

It is a fundamental problem to decide how many copies of an unknown mixed quantum state are necessary and sufficient to determine the state. This is the quantum analogue of the problem of estimating a probability distribution given some number of samples. Previously, it was known only that estimating states to error є in trace distance required O(dr²/є²) copies for a d-dimensional density matrix of rank r. Here, we give a measurement scheme (POVM) that uses O((dr/δ)·ln(d/δ)) copies to estimate ρ to error δ in infidelity. This implies that O((dr/є²)·ln(d/є)) copies suffice to achieve error є in trace distance. For fixed d, our measurement can be implemented on a quantum computer in time polynomial in n. We also use the Holevo bound from quantum information theory to prove a lower bound of Ω((dr/є²)/log(d/rє)) copies needed to achieve error є in trace distance. This implies a lower bound of Ω((dr/δ)/log(d/rδ)) for the estimation error δ in infidelity. These match our upper bounds up to log factors. Our techniques can also show an Ω(r²d/δ) lower bound for measurement strategies in which each copy is measured individually and then the outcomes are classically post-processed to produce an estimate. This matches the known achievability results and proves for the first time that such “product” measurements have asymptotically suboptimal scaling with d and r.
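
Collected in one place, the abstract's trace-distance sample-complexity bounds, restated for readability (є is the target error, d the dimension, r the rank):

```latex
% Upper and lower bounds on the number of copies, matching up to log factors.
\[
  O\!\left(\frac{dr}{\epsilon^{2}}\,\ln\frac{d}{\epsilon}\right)
  \;\text{copies suffice},
  \qquad
  \Omega\!\left(\frac{dr}{\epsilon^{2}\,\log\!\left(d/(r\epsilon)\right)}\right)
  \;\text{copies are necessary}.
\]
```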

Krutz, Daniel E., Munaiah, Nuthan, Meneely, Andrew, Malachowsky, Samuel A..  2016.  Examining the Relationship Between Security Metrics and User Ratings of Mobile Apps: A Case Study. Proceedings of the International Workshop on App Market Analytics. :8–14.

The success or failure of a mobile application ('app') is largely determined by user ratings. Users frequently make their app choices based on the ratings of apps in comparison with similar, often competing apps. Users also expect apps to continually provide new features while maintaining quality, or the ratings drop. At the same time apps must also be secure, but is there a historical trade-off between security and ratings? Or are app store ratings a more all-encompassing measure of product maturity? We used static analysis tools to collect security-related metrics in 38,466 Android apps from the Google Play store. We compared the rate of an app's permission misuse, number of requested permissions, and Androrisk score, against its user rating. We found that high-rated apps have statistically significantly higher security risk metrics than low-rated apps. However, the correlations are weak. This result supports the conventional wisdom that users are not factoring security risks into their ratings in a meaningful way. This could be due to several reasons including users not placing much emphasis on security, or that the typical user is unable to gauge the security risk level of the apps they use every day.
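
The paper's core statistical question — does a security metric correlate with ratings? — reduces to a rank correlation. A sketch on made-up data (the authors analyzed 38,466 real apps; the numbers below are illustrative only):

```python
# Correlating a security metric (requested-permission count) with user ratings,
# in the style of the paper's analysis. Data here is fabricated for illustration.
from scipy.stats import spearmanr

permission_counts = [3, 12, 7, 25, 4, 18, 9, 30, 5, 14]
user_ratings      = [3.8, 4.5, 4.0, 4.6, 3.5, 4.4, 4.1, 4.7, 3.6, 4.2]

rho, pvalue = spearmanr(permission_counts, user_ratings)
print(f"Spearman rho = {rho:.2f}, p = {pvalue:.3f}")
```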