Biblio

Found 1171 results

Filters: First Letter of Title is P
2017-09-27
Bousquet, Jean-François, Liu, Xiao.  2016.  Predicting the Performance of a Dual-band Bi-directional Transceiver for Shallow Water Deployments. Proceedings of the 11th ACM International Conference on Underwater Networks & Systems. :22:1–22:8.
In this work, a bi-directional transceiver with a maximum throughput of 24 kbps is presented. The spatio-temporal shallow water channel characteristics between a projector and a hydrophone array are analyzed in a seawater tank, and a methodology to maintain a 10^-4 probability of bit error with prior knowledge of the channel statistics is proposed. Also, it is found that flow generated in the seawater provides a realistic representation of time-varying propagation conditions, particularly for the reverse communication link at 22.5 kHz.
Malchow, Jan-Ole, Güldenring, Benjamin, Roth, Volker.  2016.  POSTER: Re-Thinking Risks and Rewards for Trusted Third Parties. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1796–1798.
Commercial trusted third parties (TTPs) may increase their bottom line by watering down their validation procedures because they assume no liability for lapses of judgement. Consumers bear the risk of misplaced trust. Reputation loss is a weak deterrent for TTPs because consumers do not choose them; web shops and browser vendors do. At the same time, consumers are the source of income of these parties. Hence, risks and rewards are not well-aligned. Towards a better alignment, we explore the brokering of connection insurances and transaction insurances, where consumers get to choose their insurer. We lay out the principal idea of how such a brokerage might work at a technical level with minimal interference with existing protocols and mechanisms, we analyze the security requirements, and we propose techniques to meet these requirements.
Barthe, Gilles, Gaboardi, Marco, Grégoire, Benjamin, Hsu, Justin, Strub, Pierre-Yves.  2016.  Proving Differential Privacy via Probabilistic Couplings. Proceedings of the 31st Annual ACM/IEEE Symposium on Logic in Computer Science. :749–758.
Over the last decade, differential privacy has achieved widespread adoption within the privacy community. Moreover, it has attracted significant attention from the verification community, resulting in several successful tools for formally proving differential privacy. Although their technical approaches vary greatly, all existing tools rely on reasoning principles derived from the composition theorem of differential privacy. While this suffices to verify most common private algorithms, there are several important algorithms whose privacy analysis does not rely solely on the composition theorem. Their proofs are significantly more complex, and are currently beyond the reach of verification tools. In this paper, we develop compositional methods for formally verifying differential privacy for algorithms whose analysis goes beyond the composition theorem. Our methods are based on deep connections between differential privacy and probabilistic couplings, an established mathematical tool for reasoning about stochastic processes. Even when the composition theorem is not helpful, we can often prove privacy by a coupling argument. We demonstrate our methods on two algorithms: the Exponential mechanism and the Above Threshold algorithm, the critical component of the famous Sparse Vector algorithm. We verify these examples in a relational program logic apRHL+, which can construct approximate couplings. This logic extends the existing apRHL logic with more general rules for the Laplace mechanism and the one-sided Laplace mechanism, and new structural rules enabling pointwise reasoning about privacy; all the rules are inspired by the connection with coupling. While our paper is presented from a formal verification perspective, we believe that its main insight is of independent interest for the differential privacy community.
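
For context, the standard definitions this abstract builds on can be stated compactly (these are the textbook formulations, not excerpts from the paper):

```latex
% \epsilon-differential privacy: for all neighboring databases D, D' and all
% output sets S,
\[
  \Pr[M(D) \in S] \;\le\; e^{\epsilon}\,\Pr[M(D') \in S].
\]
% The Laplace mechanism, one of the paper's two case studies, achieves
% \epsilon-DP for a numeric query f with sensitivity \Delta f by releasing
\[
  M(D) \;=\; f(D) + \mathrm{Lap}\!\left(\frac{\Delta f}{\epsilon}\right),
\]
% while the composition theorem that existing tools rely on states that
% running an \epsilon_1-DP and an \epsilon_2-DP mechanism in sequence is
% (\epsilon_1 + \epsilon_2)-DP.
```
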
Fan, Jiasheng, Chen, Fangjiong, Guan, Quansheng, Ji, Fei, Yu, Hua.  2016.  On the Probability of Finding a Receiver in an Ellipsoid Neighborhood of a Sender in 3D Random UANs. Proceedings of the 11th ACM International Conference on Underwater Networks & Systems. :51:1–51:2.
We consider a 3-dimensional (3D) underwater random network (UAN) in which the nodes are uniformly distributed in a cuboid region. We then derive the closed-form probability of finding a receiver in an ellipsoid neighborhood of an arbitrary sender. Computer simulation shows that the analytical result is generally consistent with the simulated result.
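
A back-of-the-envelope version of this quantity, ignoring the boundary effects that the paper's closed form handles: if the sender's ellipsoid neighborhood with semi-axes a, b, c lies entirely inside a cuboid of dimensions L x W x H containing N other uniformly placed nodes, then

```latex
\[
  p \;=\; \frac{V_e}{V_c} \;=\; \frac{\tfrac{4}{3}\pi a b c}{LWH},
  \qquad
  \Pr[\text{at least one receiver in the neighborhood}] \;=\; 1 - (1 - p)^{N}.
\]
```
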
2017-09-26
Woos, Doug, Wilcox, James R., Anton, Steve, Tatlock, Zachary, Ernst, Michael D., Anderson, Thomas.  2016.  Planning for Change in a Formal Verification of the Raft Consensus Protocol. Proceedings of the 5th ACM SIGPLAN Conference on Certified Programs and Proofs. :154–165.

We present the first formal verification of state machine safety for the Raft consensus protocol, a critical component of many distributed systems. We connected our proof to previous work to establish an end-to-end guarantee that our implementation provides linearizable state machine replication. This proof required iteratively discovering and proving 90 system invariants. Our verified implementation is extracted to OCaml and runs on real networks. The primary challenge we faced during the verification process was proof maintenance, since proving one invariant often required strengthening and updating other parts of our proof. To address this challenge, we propose a methodology of planning for change during verification. Our methodology adapts classical information hiding techniques to the context of proof assistants, factors out common invariant-strengthening patterns into custom induction principles, proves higher-order lemmas that show any property proved about a particular component implies analogous properties about related components, and makes proofs robust to change using structural tactics. We also discuss how our methodology may be applied to systems verification more broadly.

2017-09-19
Bogdan, Paul, Pande, Partha Pratim, Amrouch, Hussam, Shafique, Muhammad, Henkel, Jörg.  2016.  Power and Thermal Management in Massive Multicore Chips: Theoretical Foundation Meets Architectural Innovation and Resource Allocation. Proceedings of the International Conference on Compilers, Architectures and Synthesis for Embedded Systems. :4:1–4:2.

Continuing progress and integration levels in silicon technologies make possible complete end-user systems consisting of an extremely high number of cores on a single chip, targeting either embedded or high-performance computing. However, without new paradigms of energy- and thermally-efficient designs, producing information and communication systems capable of meeting the computing, storage and communication demands of the emerging applications will be unlikely. The broad topic of power and thermal management of massive multicore chips is actively being pursued by a number of researchers worldwide, from a variety of different perspectives, ranging from workload modeling to efficient on-chip network infrastructure design to resource allocation. Successful solutions will likely adopt and encompass elements from all or at least several levels of abstraction. Starting from these ideas, we consider a holistic approach in establishing the Power-Thermal-Performance (PTP) trade-offs of massive multicore processors by considering three inter-related but varying angles, viz., on-chip traffic modeling, novel Networks-on-Chip (NoC) architecture, and resource allocation/mapping.

On-line workload learning (mathematical modeling, analysis and prediction) is fundamental for endowing many-core platforms with self-optimizing capabilities [2][3]. This built-in intelligence capability of many-cores calls for monitoring the interactions between the set of running applications and the architectural (core and uncore) components, the online construction of mathematical models for the observed workloads, and determining the best resource allocation decisions given the limited amount of information about user-to-application-to-system dynamics. However, workload modeling is not a trivial task. Centralized approaches for analyzing and mining workloads can easily run into scalability issues with an increasing number of monitored processing elements and uncore (routers and interface queues) components, since they can either lead to significant traffic and energy overhead or require dedicated system infrastructure. In contrast, learning the most compact mathematical representation of the workload can be done in a distributed manner (within the proximity of the observation/sensing) as long as the mathematical techniques are flexible and exploit the mathematical characteristics of the workloads (degree of periodicity, degree of fractal and temporal scaling) [3]. As one can notice, this strategy does not postulate the mathematical expressions a priori (e.g., a specific order of the autoregressive moving average (ARMA) model). Instead, the periodicity and fractality of the observed computation (e.g., instructions per cycle, last level cache misses, branch prediction successes and failures, TLB accesses/misses) and communication (request-reply latency, queue utilization, memory queuing delay) metrics dictate the number of coefficients, the linearity or nonlinearity of the dynamical state equations, and the noise terms (e.g., Gaussian distributed) [3]. In other words, dedicated minimal logic can be allocated to interact with the local sensor to analyze the incoming workload at run-time, determine the required number of parameters and their values as a function of their characteristics, and communicate only the workload model parameters to a hierarchical optimization module (autonomous control architecture).

For instance, capturing the fractal characteristics of the core and uncore workloads led to the development of a more efficient power management strategy [1] than those based on PID or model predictive control. In order to develop a compact and accurate mathematical framework for analyzing and modeling the incoming workload, we describe a general probabilistic approach that models the statistics of the increments in the magnitude of a stochastic process (associated with a specific workload metric) and the intervals of time (inter-event times) between successive changes in the stochastic process [3]. We show that the statistics of these two components of the stochastic process allow us to derive state equations and capture either short-range or long-range memory properties. To test the benefits of this new workload modeling approach, we describe its integration into a multi-fractal optimal control framework for solving the power management problem for a 64-core NoC-based manycore platform and contrast it with mono-fractal and non-fractal schemes [3].

A scalable, low power, and high-bandwidth on-chip communication infrastructure is essential to sustain the predicted growth in the number of embedded cores in a single die. New interconnection fabrics are key for continued performance improvements and energy reduction of manycore chips, and an efficient and robust NoC architecture is one of the key steps towards achieving that goal. An NoC architecture that incorporates emerging interconnect paradigms will be an enabler for low-power, high-bandwidth manycore chips. Innovative interconnect paradigms based on optical technologies, RF/wireless methods, carbon nanotubes, or 3D integration are promising alternatives that may indeed overcome obstacles that impede continued advances of the manycore paradigm. These innovations will open new opportunities for research in NoC designs with emerging interconnect infrastructures. In this regard, the wireless NoC (WiNoC) is a promising direction for designing energy-efficient multicore architectures. WiNoC not only helps in improving energy efficiency and performance, it also opens up opportunities for implementing power management strategies. WiNoCs enable implementation of the two most popular power management mechanisms, viz., dynamic voltage and frequency scaling (DVFS) and voltage frequency islands (VFI). The wireless links in the WiNoC establish one-hop shortcuts between distant nodes and facilitate energy savings in data exchange [3]. The wireless shortcuts attract a significant amount of the overall traffic within the network. The amount of traffic detoured is substantial, and the low power wireless links enable energy savings. However, the overall energy dissipation within the network is still dominated by the data traversing the wireline links. Hence, by incorporating DVFS on these wireline links we can save more energy. Moreover, by incorporating suitable congestion-aware routing with DVFS, we can avoid thermal hotspots in the system [4]. It should be noted that for large system sizes the hardware overhead in terms of on-chip voltage regulators and synchronizers is much higher in DVFS than in VFI. WiNoC-enabled VFI designs mitigate some of the full-system performance degradation inherent in VFI-partitioned multicore designs, and can even eliminate it entirely for certain applications [5]. VFI-partitioned designs used in conjunction with a novel NoC architecture like WiNoC can achieve significant energy savings while minimizing the impact on the achievable performance.

On-chip power density and temperature trends are continuously increasing due to the high integration density of nano-scale transistors and the failure of Dennard scaling as a result of diminishing voltage scaling. Hence, all computing is temperature-constrained computing, and it is key to employ thermal management techniques that keep chip temperatures within safe limits while meeting the constraints on spatial/temporal thermal gradients and avoiding wear-out effects [8]. We introduced the novel concept of Dark Silicon Patterning, i.e., spatio-temporal control of the power states of different cores [9]. Sophisticated patterning and thread-to-core mapping decisions are made considering the knowledge of process variations and lateral heat dissipation of power-gated cores in order to enhance the performance of multi-threaded workloads through dynamic core count scaling (DCCS). This is enabled by a lightweight online prediction of the chip's thermal profile for a given patterning candidate. We also present an enhanced temperature-aware resource management technique that, besides the active and dark states of cores, also exploits various grey states (i.e., using different voltage-frequency levels) in order to achieve high performance for mixed ILP-TLP workloads under peak temperature constraints. High-ILP applications benefit from high V-f and boosting levels, while high-TLP applications benefit from a larger number of active cores.

As the scaling trends move from multi-core to many-core processors, centralized solutions become infeasible, thereby requiring distributed techniques. In [6], we proposed an agent-based distributed temperature-aware resource management technique called TAPE. It assigns a so-called agent to each core, a software or hardware entity that acts on behalf of the core. Following the principles of economic theory, these agents negotiate with each other to trade their power budgets in order to fulfil the performance requirements of their tasks, while keeping T_peak ≤ T_critical. In case of thermal violations, task migration or V-f throttling is triggered, and a penalty is applied to the trading process to improve the decision making.
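
To make the workload-learning step above concrete, here is a minimal sketch, assuming a simple AR(1) model rather than the fractal/multi-fractal models the authors actually use; the V-f levels, threshold mapping, and trace values are invented for illustration:

```python
# A minimal sketch: fit a one-step workload predictor to an observed metric
# (e.g., utilization) and map the prediction to a voltage-frequency level.
import numpy as np

def fit_ar1(trace):
    """Least-squares fit of x[t+1] = a*x[t] + b to a 1-D workload trace."""
    x, y = trace[:-1], trace[1:]
    A = np.column_stack([x, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b

def pick_vf_level(predicted_load, levels=(0.6, 0.8, 1.0)):
    """Choose the lowest normalized V-f level that covers the predicted load."""
    for level in levels:
        if predicted_load <= level:
            return level
    return levels[-1]

trace = np.array([0.42, 0.45, 0.50, 0.48, 0.55, 0.61, 0.58])  # toy utilization trace
a, b = fit_ar1(trace)
prediction = a * trace[-1] + b
print(f"predicted load {prediction:.2f} -> V-f level {pick_vf_level(prediction)}")
```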

Shinde, Shweta, Chua, Zheng Leong, Narayanan, Viswesh, Saxena, Prateek.  2016.  Preventing Page Faults from Telling Your Secrets. Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. :317–328.

New hardware primitives such as Intel SGX secure a user-level process in the presence of an untrusted or compromised OS. Such "enclaved execution" systems are vulnerable to several side-channels, one of which is the page fault channel. In this paper, we show that the page fault side-channel has sufficient channel capacity to extract bits of encryption keys from commodity implementations of cryptographic routines in OpenSSL and Libgcrypt, leaking 27% on average and up to 100% of the secret bits in many case studies. To mitigate this, we propose a software-only defense that masks page fault patterns by determinising the program's memory access behavior. We show that such a technique can be built into a compiler, and implement it for a subset of C which is sufficient to handle the cryptographic routines we study. This defense, when implemented generically, can have significant overhead of up to 4000X, but with the help of developer-assisted compiler optimizations, the overhead reduces to at most 29.22% in our case studies. Finally, we discuss the scope for hardware-assisted defenses, and show one solution that can reduce overheads to 6.77% with support from hardware changes.
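
A minimal sketch of the determinization idea, assuming the leak is a secret-indexed table lookup: touch every entry so the page-level access pattern is independent of the secret. The paper's defense is a compiler transformation on compiled code; Python offers no real constant-time guarantees, so this only illustrates the access-pattern shape:

```python
# Make every table lookup scan all entries, so the sequence of memory pages
# touched does not depend on the secret index.
def oblivious_lookup(table, secret_index):
    """Return table[secret_index] while accessing every element of table."""
    result = 0
    for i, value in enumerate(table):
        mask = -int(i == secret_index)   # all-ones when i matches, else zero
        result |= value & mask           # only the matching entry survives
    return result

sbox = [0x63, 0x7C, 0x77, 0x7B]          # toy S-box fragment
assert oblivious_lookup(sbox, 2) == 0x77
```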

Sun, Bo, Fujino, Akinori, Mori, Tatsuya.  2016.  POSTER: Toward Automating the Generation of Malware Analysis Reports Using the Sandbox Logs. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1814–1816.

In recent years, the number of new malware samples has continued to increase. To create effective countermeasures, security specialists often must manually inspect vast sandbox logs produced by the dynamic analysis method. Conversely, antivirus vendors usually publish malware analysis reports on their websites. Because malware analysis reports and sandbox logs have no direct connection, security specialists analyzing sandbox logs cannot benefit from the information described in such expert reports. To address this issue, we developed a system called ReGenerator that automates the generation of reports related to sandbox logs by making use of existing reports published by antivirus vendors. Our system combines several techniques, including the Jaccard similarity, Natural Language Processing (NLP), and Natural Language Generation (NLG), to produce concise human-readable reports describing malicious behavior for security specialists.
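
A toy sketch of the Jaccard-based matching step mentioned above, with invented tokenization and data; ReGenerator's full pipeline adds NLP and NLG stages on top:

```python
# Score vendor-report sentences against the token set of a sandbox log using
# Jaccard similarity, keeping the best-matching sentences as report material.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

log_tokens = "createremotethread writeprocessmemory explorer.exe".split()
report_sentences = [
    "The malware injects code via WriteProcessMemory and CreateRemoteThread.",
    "The sample encrypts user documents and demands a ransom.",
]
for sentence in report_sentences:
    tokens = sentence.lower().replace(".", " ").split()
    print(f"{jaccard(log_tokens, tokens):.2f}  {sentence}")
```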

2017-09-15
Tripp, Omer, Pistoia, Marco, Ferrara, Pietro, Rubin, Julia.  2016.  Pinpointing Mobile Malware Using Code Analysis. Proceedings of the International Conference on Mobile Software Engineering and Systems. :275–276.

Mobile malware has recently become an acute problem. Existing solutions either base static reasoning on syntactic properties, such as exception handlers or configuration fields, or compute data-flow reachability over the program, which leads to scalability challenges. We explore a new and complementary category of features, which strikes a middle ground between the above two categories. This new category focuses on security-relevant operations (communication, lifecycle, etc.), in particular their multiplicity and happens-before order, as a means to distinguish between malicious and benign applications. Computing these features requires semantic, yet lightweight, modeling of the program's behavior. We have created a malware detection system for Android, MassDroid, that collects traces of security-relevant operations from the call graph via a scalable form of data-flow analysis. These are reduced to happens-before and multiplicity features, then fed into a supervised learning engine to obtain a malicious/benign classification. MassDroid also embodies a novel reporting interface, containing pointers into the code that serve as evidence supporting the determination. We have applied MassDroid to 35,000 Android apps from the wild. The results are highly encouraging, with an F-score of 95% in standard testing and greater than 90% when applied to previously unseen malware signatures. MassDroid is also efficient, requiring about two minutes per app. MassDroid is publicly available as a cloud service for malware detection.
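
A toy sketch of this feature category, with invented operation names; MassDroid derives such features from data-flow analysis over the call graph rather than from concrete execution traces:

```python
# Turn an ordered list of security-relevant operations into multiplicity
# counts and happens-before indicator features for a supervised learner.
from collections import Counter
from itertools import combinations

def extract_features(trace):
    """trace: ordered list of security-relevant operations observed for one app."""
    features = dict(Counter(trace))                    # multiplicity features
    for a, b in combinations(sorted(set(trace)), 2):   # happens-before features
        features[f"{a}<{b}"] = int(trace.index(a) < trace.index(b))
    return features

benign = ["onCreate", "openSocket", "sendData"]
suspect = ["openSocket", "sendData", "onCreate", "sendData"]
print(extract_features(benign))
print(extract_features(suspect))
```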

Schneider, Tobias, Moradi, Amir, Güneysu, Tim.  2016.  ParTI: Towards Combined Hardware Countermeasures Against Side-Channel and Fault-Injection Attacks. Proceedings of the 2016 ACM Workshop on Theory of Implementation Security. :39–39.

Side-channel analysis and fault-injection attacks are known as major threats to any cryptographic implementation. Protecting cryptographic implementations with suitable countermeasures is thus essential before they are deployed in the wild. However, countermeasures for the two threats are of completely different natures: side-channel analysis is mitigated by techniques that hide or mask key-dependent information, while resistance against fault-injection attacks can be achieved by redundancy in the computation for immediate error detection. Since the integration of even a single countermeasure in cryptographic hardware already comes with significant costs in terms of performance and area, a combination of multiple countermeasures is expensive and often associated with undesired side effects. In this work, we introduce a countermeasure for cryptographic hardware implementations that combines the concept of a provably-secure masking scheme (i.e., threshold implementation) with an error-detecting approach against fault injection. As a case study, we apply our generic construction to the lightweight LED cipher. Our LED instance achieves first-order resistance against side-channel attacks combined with a fault detection capability that is superior to that of simple duplication for most error distributions, at an increased area demand of 4.3%.
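
As a toy illustration, not the ParTI construction itself (which uses threshold implementations and error-detecting codes): the sketch below combines first-order Boolean masking of a linear operation with simple duplication-based fault detection:

```python
# Split the secret into random XOR shares, apply a linear operation share-wise,
# and duplicate the computation so a mismatch reveals an injected fault.
import functools, secrets

def share(x, n=3, bits=8):
    """Split x into n Boolean shares whose XOR equals x."""
    shares = [secrets.randbits(bits) for _ in range(n - 1)]
    shares.append(functools.reduce(lambda a, b: a ^ b, shares, x))
    return shares

def unshare(shares):
    return functools.reduce(lambda a, b: a ^ b, shares)

def rotl8(v, r=1):
    """Rotate an 8-bit value left; linear over GF(2), so it commutes with XOR."""
    return ((v << r) | (v >> (8 - r))) & 0xFF

secret = 0xA5
shares = share(secret)
primary = [rotl8(s) for s in shares]     # masked computation on shares
redundant = [rotl8(s) for s in shares]   # duplicated computation for fault detection
assert unshare(primary) == unshare(redundant) == rotl8(secret)
print("fault-free run:", hex(unshare(primary)))
```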

Dziembowski, Stefan, Faust, Sebastian, Standaert, François-Xavier.  2016.  Private Circuits III: Hardware Trojan-Resilience via Testing Amplification. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :142–153.

Security against hardware trojans is currently becoming an essential ingredient to ensure trust in information systems. A variety of solutions have been introduced to reach this goal, ranging from reactive (i.e., detection-based) to preventive (i.e., trying to make the insertion of a trojan more difficult for the adversary). In this paper, we show how testing (which is a typical detection tool) can be used to state concrete security guarantees for preventive approaches to trojan-resilience. For this purpose, we build on and formalize two important previous works which introduced "input scrambling" and "split manufacturing" as countermeasures to hardware trojans. Using these ingredients, we present a generic compiler that can transform any circuit into a trojan-resilient one, for which we can state quantitative security guarantees on the number of correct executions of the circuit thanks to a new tool denoted as "testing amplification". Compared to previous works, our threat model covers an extended range of hardware trojans while we stick with the goal of minimizing the number of honest elements in our transformed circuits. Since transformed circuits essentially correspond to redundant multiparty computations of the target functionality, they also allow reasonably efficient implementations, which can be further optimized if specialized to certain cryptographic primitives and security goals.

Wang, Aosen, Jin, Zhanpeng, Xu, Wenyao.  2016.  A Programmable Analog-to-Information Converter for Agile Biosensing. Proceedings of the 2016 International Symposium on Low Power Electronics and Design. :206–211.

In recent years, the analog-to-information converter (AIC), based on the compressed sensing (CS) paradigm, has emerged as a promising solution to overcome the performance and energy-efficiency limitations of traditional analog-to-digital converters (ADC). In particular, AIC enables sub-Nyquist signal sampling proportional to the intrinsic information in biomedical applications. However, the legacy AIC structure is tailored toward specific applications, which limits its flexibility and prevents universal use. In this paper, we introduce a novel programmable AIC architecture, Pro-AIC, to enable effective configurability and reduce its energy overhead by integrating efficient multiplexing hardware design. To improve the quality and time-efficiency of Pro-AIC configuration, we also develop a rapid configuration algorithm, called RapSpiral, to quickly find the near-optimal parameter configuration in the Pro-AIC architecture. Specifically, we present a design metric, trade-off penalty, to quantitatively evaluate the performance-energy trade-off. RapSpiral controls a penalty-driven shrinking triangle to progressively approximate the optimal trade-off. Our proposed RapSpiral has log(n) complexity yet high accuracy, without pretraining or a complex parameter-tuning procedure. RapSpiral is also likely to avoid local-minimum pitfalls. Experimental results indicate that our RapSpiral algorithm can achieve more than 30x speedup compared with the brute-force algorithm, with only about 3% trade-off compromise relative to the optimum in Pro-AIC. Furthermore, its scalability is also verified on larger-size benchmarks.
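
A hedged sketch in the spirit of a penalty-driven shrinking-triangle search, but not the authors' RapSpiral: the triangle contracts toward its best vertex in a 2-D configuration space; the penalty function and both knobs are invented:

```python
# Contract a triangle of candidate configurations toward whichever vertex has
# the lowest trade-off penalty, converging on a near-optimal configuration.
import numpy as np

def tradeoff_penalty(cfg):
    rate, precision = cfg                            # toy AIC configuration knobs
    error = np.exp(-3.0 * rate) + 0.1 / precision    # invented fidelity loss
    energy = rate * precision                        # invented energy cost
    return error + 0.2 * energy

def shrinking_triangle(f, triangle, iters=40, shrink=0.7):
    pts = [np.asarray(p, dtype=float) for p in triangle]
    for _ in range(iters):
        pts.sort(key=f)                              # pts[0] is the best vertex
        best = pts[0]
        pts = [best] + [best + shrink * (p - best) for p in pts[1:]]
    return pts[0]

best = shrinking_triangle(tradeoff_penalty, [(0.1, 1.0), (1.0, 1.0), (0.5, 4.0)])
print("near-optimal (rate, precision):", best)
```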

Bortolotti, D., Bartolini, A., Benini, L., Pamula, V. Rajesh, Van Helleputte, N., Van Hoof, C., Verhelst, M., Gemmeke, T., Lopez, R. Braojos, Ansaloni, G. et al..  2016.  PHIDIAS: Ultra-low-power Holistic Design for Smart Bio-signals Computing Platforms. Proceedings of the ACM International Conference on Computing Frontiers. :309–314.

Emerging and future healthcare policies are fueling an application-driven shift toward long-term monitoring of biosignals by means of embedded ultra-low power Wireless Body Sensor Networks (WBSNs). For these applications to take off, new technologies are needed that allow the development of extremely power-efficient bio-sensing nodes. The PHIDIAS project aims at unlocking the development of ultra-low power bio-sensing WBSNs by tackling multiple and interlocking technological breakthroughs: (i) the development of new signal processing models and methods based on the recently proposed Compressive Sampling paradigm, which allows the design of energy-minimal computational architectures and analog front-ends, (ii) the efficient hardware implementation of components, both analog and digital, building upon an innovative ultra-low-power signal processing front-end, and (iii) the evaluation of the global power reduction using a system-wide integration of hardware and software components focused on compressed-sensing-based bio-signal analysis. PHIDIAS brought together a mixed consortium of academic and industrial research partners representing pan-European excellence in different fields impacting the energy-aware optimization of WBSNs, including experts in signal processing and digital/analog IC design. In this way, PHIDIAS pioneered a unique holistic approach, ensuring that key breakthroughs worked out in a cooperative way toward the global objective of the project.
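
A minimal numpy sketch of the Compressive Sampling acquisition model the project builds on; the matrix choice and sizes are illustrative only:

```python
# Acquire a sparse signal through m << n random projections y = Phi @ x, so
# the front-end only needs to produce m measurements per window.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                  # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)  # k-sparse signal

Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # random Bernoulli matrix
y = Phi @ x                           # sub-Nyquist acquisition: m/n = 25% of samples
print(f"acquired {len(y)} measurements for a length-{n} signal")
# Recovery would use a sparse solver such as orthogonal matching pursuit,
# omitted here for brevity.
```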

2017-09-06
Rahman, Akond, Pradhan, Priysha, Partho, Asif, Williams, Laurie.  2017.  Predicting Android Application Security and Privacy Risk with Static Code Metrics. Proceedings of the 4th International Conference on Mobile Software Engineering and Systems. :149–153.

Android applications pose security and privacy risks for end-users. These risks are often quantified by performing dynamic analysis and permission analysis of the Android applications after release. Predicting the security and privacy risks associated with Android applications at early stages of application development, e.g., when the developer(s) are writing the code of the application, might help Android application developers release applications that pose less security and privacy risk to end-users. The goal of this paper is to aid Android application developers in assessing the security and privacy risk associated with Android applications by using static code metrics as predictors. In our paper, we consider the security and privacy risk of an Android application as how susceptible the application is to leaking private information of end-users and to releasing vulnerabilities. We investigate how effectively static code metrics extracted from the source code of Android applications can be used to predict the security and privacy risk of Android applications. We collected 21 static code metrics from 1,407 Android applications, and used the collected static code metrics to predict the security and privacy risk of the applications. As the oracle of security and privacy risk, we used Androrisk, a tool that quantifies the amount of security and privacy risk of an Android application using analysis of Android permissions and dynamic analysis. To accomplish our goal, we used statistical learners such as the radial-based support vector machine (r-SVM). For r-SVM, we observe a precision of 0.83. Findings from our paper suggest that, with proper selection of static code metrics, r-SVM can be used effectively to predict the security and privacy risk of Android applications.
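
A hedged sketch of this pipeline, using scikit-learn's RBF-kernel SVC as the r-SVM learner; the metric matrix and labels below are synthetic stand-ins for the paper's 21 static code metrics and the Androrisk oracle:

```python
# Train an RBF-kernel SVM on per-app static code metrics and report precision.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import precision_score

rng = np.random.default_rng(1)
X = rng.standard_normal((400, 21))               # 21 static code metrics per app
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)    # synthetic "risky" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel="rbf").fit(scaler.transform(X_tr), y_tr)
pred = clf.predict(scaler.transform(X_te))
print("precision:", round(precision_score(y_te, pred), 2))
```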

2017-09-05
Preuveneers, Davy, Joosen, Wouter.  2016.  Privacy-enabled Remote Health Monitoring Applications for Resource Constrained Wearable Devices. Proceedings of the 31st Annual ACM Symposium on Applied Computing. :119–124.

Recent computing paradigms like cloud computing and big data have become very appealing for outsourcing computation and storage, making it easier to realize personalized and patient-centric healthcare through real-time analytics on user data. Although these technologies can significantly complement resource-constrained mobile and wearable devices to store and process personal health information, privacy concerns are keeping patients from reaping the full benefits. In this paper, we present and evaluate a practical smartwatch-based lifelog application for diabetics that leverages the cloud and homomorphic encryption to let caregivers analyze blood glucose, insulin values, and other parameters in a privacy-friendly manner, ensuring confidentiality such that even a curious cloud service provider remains oblivious of sensitive health data.
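
As a self-contained illustration of the additively homomorphic property such an application relies on, here is a minimal sketch, assuming a textbook Paillier cryptosystem with insecurely small primes; it is not the paper's actual software stack:

```python
# Sum encrypted glucose readings in the ciphertext domain, then decrypt once;
# the server handling the ciphertexts never sees the individual readings.
import math, random

p, q = 293, 433                       # toy primes; real keys use >= 2048-bit moduli
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)          # Carmichael function for n = p*q
g = n + 1

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputed decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

glucose = [102, 97, 130]              # mg/dL readings from the wearable
cipher_sum = 1
for reading in glucose:
    cipher_sum = (cipher_sum * encrypt(reading)) % n2  # ciphertext product = plaintext sum
assert decrypt(cipher_sum) == sum(glucose)
print("encrypted sum decrypts to", decrypt(cipher_sum))
```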

Lee, Kyunghun, Ben Salem, Haifa, Damarla, Thyagaraju, Stechele, Walter, Bhattacharyya, Shuvra S..  2016.  Prototyping Real-time Tracking Systems on Mobile Devices. Proceedings of the ACM International Conference on Computing Frontiers. :301–308.

In this paper, we address the design and implementation of low-power embedded systems for real-time tracking of humans and vehicles. Such systems are important in applications such as activity monitoring and border security. We motivate the utility of mobile devices in prototyping the targeted class of tracking systems, and demonstrate a dataflow-based and cross-platform design methodology that enables efficient experimentation with key aspects of our tracking system design, including real-time operation, experimentation with advanced sensors, and streamlined management of design versions on host and mobile platforms. Our experiments demonstrate the utility of our mobile-device-targeted design methodology in validating tracking algorithm operation; evaluating real-time performance, energy efficiency, and accuracy of tracking system execution; and quantifying trade-offs involving the use of advanced sensors, which offer improved sensing accuracy at the expense of increased cost and weight. Additionally, through application of a novel, cross-platform, model-based design approach, our design requires no change in source code when migrating from an initial, host-computer-based functional reference to a fully-functional implementation on the targeted mobile device.

Dang, Hung, Chong, Yun Long, Brun, Francois, Chang, Ee-Chien.  2016.  Practical and Scalable Sharing of Encrypted Data in Cloud Storage with Key Aggregation. Proceedings of the 4th ACM Workshop on Information Hiding and Multimedia Security. :69–80.

We study a sensor network setting in which samples are encrypted individually using different keys and maintained on cloud storage. For large systems, e.g., those that generate several million samples per day, fine-grained sharing of encrypted samples is challenging. Existing solutions, such as Attribute-Based Encryption (ABE) and Key Aggregation Cryptosystem (KAC), can be utilized to address the challenge, but only to a certain extent. They are often computationally expensive and thus unlikely to operate at scale. We propose an algorithmic enhancement and two heuristics to improve KAC's key reconstruction cost, while preserving its provable security. The improvement is particularly significant for range and down-sampling queries, accelerating the reconstruction cost from quadratic to linear running time. Experimental study shows that for queries of size 32k samples, the proposed fast reconstruction techniques speed up the original KAC by at least 90 times on range and down-sampling queries, and by eight times on general (arbitrary) queries. It also shows that, at the expense of splitting the query into 16 sub-queries and correspondingly issuing that number of different aggregated keys, reconstruction time can be reduced by 19 times. As such, the proposed techniques make KAC more applicable in practical scenarios such as sensor networks or the Internet of Things.
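
The quadratic-to-linear speed-up can be illustrated with a toy model that abstracts away KAC's pairing-based math: suppose reconstructing item i's key requires a product over the public values of all other queried items; prefix and suffix products then answer every "product excluding i" in linear total time:

```python
# Naive per-item recomputation costs O(n^2) multiplications over a range of
# size n; prefix/suffix products reduce the whole sweep to O(n).
MOD = 10**9 + 7

def naive(vals):
    out = []
    for i in range(len(vals)):
        acc = 1
        for j, v in enumerate(vals):
            if j != i:
                acc = acc * v % MOD
        out.append(acc)
    return out

def fast(vals):
    n = len(vals)
    prefix, suffix = [1] * (n + 1), [1] * (n + 1)
    for i in range(n):
        prefix[i + 1] = prefix[i] * vals[i] % MOD
        suffix[n - 1 - i] = suffix[n - i] * vals[n - 1 - i] % MOD
    return [prefix[i] * suffix[i + 1] % MOD for i in range(n)]

vals = [3, 7, 11, 19, 23]
assert naive(vals) == fast(vals)
```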

Amar, Yousef, Haddadi, Hamed, Mortier, Richard.  2016.  Privacy-Aware Infrastructure for Managing Personal Data. Proceedings of the 2016 ACM SIGCOMM Conference. :571–572.

In recent times, we have seen a proliferation of personal data. This can be attributed not just to a larger proportion of our lives moving online, but also to the rise of ubiquitous sensing through mobile and IoT devices. Alongside this surge, concerns over privacy, trust, and security are expressed more and more as different parties attempt to take advantage of this rich assortment of data. The Databox seeks to enable all the advantages of personal data analytics while at the same time enforcing accountability and control in order to protect a user's privacy. In this work, we propose and delineate a personal networked device that allows users to collate, curate, and mediate their personal data.

2017-09-01
Santhosh Prabhu, University of Illinois at Urbana-Champaign, Ali Kheradmand, University of Illinois at Urbana-Champaign, Brighten Godfrey, University of Illinois at Urbana-Champaign, Matthew Caesar, University of Illinois at Urbana-Champaign.  2017.  Predicting Network Futures with Plankton. 1st Asia-Pacific Workshop on Networking (APNet).

Recent years have seen significant advancement in the field of formal network verification. Tools have been proposed for offline data plane verification, real-time data plane verification and configuration verification under arbitrary, but static sets of failures. However, due to the fundamental limitation of not treating the network as an evolving system, current verification platforms have significant constraints in terms of scope. In real-world networks, correctness policies may be violated only through a particular combination of environment events and protocol actions, possibly in a non-deterministic sequence. Moreover, correctness specifications themselves may often correlate multiple data plane states, particularly when dynamic data plane elements are present. Tools in existence today are not capable of reasoning about all the possible network events, and all the subsequent execution paths that are enabled by those events. We propose Plankton, a verification platform for identifying undesirable evolutions of networks. By combining symbolic modeling of the data plane and control plane with explicit state exploration, Plankton performs a goal-directed search on a finite-state transition system that captures the behavior of the network as well as the various events that can influence it. In this way, Plankton can automatically find policy violations that can occur due to a sequence of network events, starting from the current state. Initial experiments have successfully predicted scenarios like BGP Wedgies.
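
A minimal sketch of the explicit-state side of this approach: breadth-first search over a finite-state transition system, returning the event sequence that reaches the first policy violation. The toy network model below is invented; Plankton combines such exploration with symbolic data-plane and control-plane models:

```python
# Explore reachable states under a set of events and return a counterexample
# event sequence if some reachable state violates the correctness policy.
from collections import deque

def find_violation(initial, events, step, violates):
    """BFS over reachable states; step(state, event) -> next state or None."""
    seen, queue = {initial}, deque([(initial, [])])
    while queue:
        state, path = queue.popleft()
        if violates(state):
            return path                          # counterexample event sequence
        for ev in events:
            nxt = step(state, ev)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [ev]))
    return None

# Toy model: a route fails over to a backup; losing both paths violates policy.
transitions = {("up", "fail"): "backup", ("backup", "fail"): "down",
               ("backup", "recover"): "up"}
path = find_violation("up", ["fail", "recover"],
                      lambda s, ev: transitions.get((s, ev)),
                      lambda s: s == "down")
print("violating event sequence:", path)         # ['fail', 'fail']
```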

2017-08-22
Sengupta, Binanda, Ruj, Sushmita.  2016.  Publicly Verifiable Secure Cloud Storage for Dynamic Data Using Secure Network Coding. Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. :107–118.

Cloud service providers offer a storage outsourcing facility to their clients. In a secure cloud storage (SCS) protocol, the integrity of the client's data is maintained. In this work, we construct a publicly verifiable secure cloud storage protocol based on a secure network coding (SNC) protocol where the client can update the outsourced data as needed. To the best of our knowledge, our scheme is the first SNC-based SCS protocol for dynamic data that is secure in the standard model and provides privacy-preserving audits in a publicly verifiable setting. Furthermore, we discuss in detail the (im)possibility of providing a general construction of an efficient SCS protocol for dynamic data (DSCS protocol) from an arbitrary SNC protocol. In addition, we modify an existing DSCS scheme (DPDP I) in order to support privacy-preserving audits. We also compare our DSCS protocol with other SCS schemes (including the modified DPDP I scheme). Finally, we point out some limitations of an SCS scheme constructed using an SNC protocol.

Hubaux, Jean-Pierre.  2016.  Privacy Challenges in Mobile and Pervasive Networks. Proceedings of the 19th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems. :1–1.

This last decade has witnessed the wide adoption of connected mobile devices able to capture the context of their owners from embedded sensors (GPS, Wi-Fi, Bluetooth, accelerometers). The advent of mobile and pervasive computing has enabled rich social and contextual applications, but the use of such technologies raises severe privacy issues and challenges. The privacy threats come from diverse adversaries, ranging from curious service providers and other users of the same service to eavesdroppers and curious applications running on the device. The information that can be collected from mobile device owners includes their locations, their social relationships, and their current activity. All of this, once analyzed and combined through inference, can be very telling about the users' private lives. In this talk, we will describe privacy threats in mobile and pervasive networks. We will also show how to quantify the privacy of the users of such networks and explain how information on co-location can be taken into account. We will describe the role that privacy enhancing technologies (PETs) can play and describe some of them. We will also explain how to prevent apps from sifting through too many personal data under Android. We will conclude by mentioning the privacy and security challenges raised by the quantified self and digital medicine.

Demertzis, Ioannis, Papadopoulos, Stavros, Papapetrou, Odysseas, Deligiannakis, Antonios, Garofalakis, Minos.  2016.  Practical Private Range Search Revisited. Proceedings of the 2016 International Conference on Management of Data. :185–198.

We consider a data owner that outsources its dataset to an untrusted server. The owner wishes to enable the server to answer range queries on a single attribute, without compromising the privacy of the data and the queries. There are several schemes on "practical" private range search (mainly in Databases venues) that attempt to strike a trade-off between efficiency and security. Nevertheless, these methods either lack provable security guarantees, or permit unacceptable privacy leakages. In this paper, we take an interdisciplinary approach, which combines the rigor of Security formulations and proofs with efficient Data Management techniques. We construct a wide set of novel schemes with realistic security/performance trade-offs, adopting the notion of Searchable Symmetric Encryption (SSE) primarily proposed for keyword search. We reduce range search to multi-keyword search using range covering techniques with tree-like indexes. We demonstrate that, given any secure SSE scheme, the challenge boils down to (i) formulating leakages that arise from the index structure, and (ii) minimizing false positives incurred by some schemes under heavy data skew. We analytically detail the superiority of our proposals over prior work and experimentally confirm their practicality.
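
A small sketch of the range-covering reduction described above, assuming a dyadic (segment-tree style) decomposition; the underlying SSE scheme is abstracted to "one keyword per canonical interval":

```python
# Decompose a range query into O(log n) maximal aligned dyadic intervals,
# each of which can be indexed and queried as a single SSE keyword.
def dyadic_cover(lo, hi):
    """Cover the inclusive range [lo, hi] with maximal aligned dyadic intervals."""
    cover = []
    while lo <= hi:
        size = 1
        # grow the interval while it stays aligned at lo and inside [lo, hi]
        while lo % (2 * size) == 0 and lo + 2 * size - 1 <= hi:
            size *= 2
        cover.append((lo, lo + size - 1))   # one SSE keyword per interval
        lo += size
    return cover

# Querying values 3..12 requires only four keyword queries:
print(dyadic_cover(3, 12))   # [(3, 3), (4, 7), (8, 11), (12, 12)]
```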

Sambasivan, Raja R., Shafer, Ilari, Mace, Jonathan, Sigelman, Benjamin H., Fonseca, Rodrigo, Ganger, Gregory R..  2016.  Principled Workflow-centric Tracing of Distributed Systems. Proceedings of the Seventh ACM Symposium on Cloud Computing. :401–414.

Workflow-centric tracing captures the workflow of causally-related events (e.g., work done to process a request) within and among the components of a distributed system. As distributed systems grow in scale and complexity, such tracing is becoming a critical tool for understanding distributed system behavior. Yet, there is a fundamental lack of clarity about how such infrastructures should be designed to provide maximum benefit for important management tasks, such as resource accounting and diagnosis. Without research into this important issue, there is a danger that workflow-centric tracing will not reach its full potential. To help, this paper distills the design space of workflow-centric tracing and describes key design choices that can help or hinder a tracing infrastructure's utility for important tasks. Our design space and the design choices we suggest are based on our experiences developing several previous workflow-centric tracing infrastructures.

2017-08-18
Thoma, Cory, Lee, Adam J., Labrinidis, Alexandros.  2016.  PolyStream: Cryptographically Enforced Access Controls for Outsourced Data Stream Processing. Proceedings of the 21st ACM on Symposium on Access Control Models and Technologies. :227–238.

With data becoming available in larger quantities and at higher rates, new data processing paradigms have been proposed to handle high-volume, fast-moving data. Data Stream Processing is one such paradigm, wherein transient data streams flow through sets of continuous queries, only returning results when data is of interest to the querier. To avoid the large costs associated with maintaining the infrastructure required for processing these data streams, many companies outsource their computation to third-party cloud services. This outsourcing, however, can lead to private data being accessed by parties that a data provider may not trust. The literature offers solutions to this confidentiality and access control problem, but they fall short of providing a complete solution, due to either immense overheads or trust requirements placed on these third-party services. To address these issues, we have developed PolyStream, an enhancement to existing data stream management systems that enables data providers to specify attribute-based access control policies that are cryptographically enforced while simultaneously allowing many types of in-network data processing. We detail the access control models and mechanisms used by PolyStream, and describe a novel use of security punctuations that enables flexible, online policy management and key distribution. We detail how queries are submitted and executed using an unmodified Data Stream Management System, and show through an extensive evaluation that PolyStream yields a 550x performance gain over StreamForce (CODASPY 2014), the state-of-the-art system, while providing greater functionality to the querier.
