Biblio
This paper proposes a method for segmentation of nuclei of single/isolated and overlapping/touching immature white blood cells from microscopic images of B-Lineage acute lymphoblastic leukemia (ALL) prepared from peripheral blood and bone marrow aspirate. We propose a deep belief network approach for the segmentation of these nuclei. Simulation results and a comparison with some of the existing methods demonstrate the efficacy of the proposed method.
Side-channel analysis and fault-injection attacks are known as major threats to any cryptographic implementation. Protecting cryptographic implementations with suitable countermeasures is thus essential before they are deployed in the wild. However, countermeasures for the two threats are of a completely different nature: side-channel analysis is mitigated by techniques that hide or mask key-dependent information, while resistance against fault-injection attacks can be achieved by redundancy in the computation for immediate error detection. Since the integration of even a single countermeasure in cryptographic hardware already comes with significant costs in terms of performance and area, a combination of multiple countermeasures is expensive and often associated with undesired side effects. In this work, we introduce a countermeasure for cryptographic hardware implementations that combines the concept of a provably secure masking scheme (i.e., threshold implementation) with an error-detecting approach against fault injection. As a case study, we apply our generic construction to the lightweight LED cipher. Our LED instance achieves first-order resistance against side-channel attacks combined with a fault detection capability that is superior to that of simple duplication for most error distributions, at an increased area demand of 4.3%.
Path prediction on the Internet has been a topic of research in the networking community for close to a decade. Applications of path prediction solutions have ranged from optimizing the selection of peers in peer-to-peer networks to improving and debugging CDN predictions. Recently, revelations of traffic correlation and surveillance on the Internet have raised the topic of path prediction in the context of network security. Specifically, predicting network paths can allow us to identify and avoid given organizations on network paths (e.g., to avoid traffic correlation attacks in Tor) or to infer the impact of hijacks and interceptions when direct measurements are not available. In this poster, we propose the design and implementation of PathCache, which aims to reuse measurement data to estimate AS-level paths on the Internet. Unlike similar systems, PathCache does not assume that routing on the Internet is destination based. Instead, we develop an algorithm to compute confidence in paths between ASes. These multiple paths, ranked by their confidence values, are returned to the user.
For authenticating time-critical broadcast messages, the IEEE 1609.2 security standard for Vehicular Ad hoc Networks (VANETs) suggests the use of the secure Elliptic Curve Digital Signature Algorithm (ECDSA). Since ECDSA verification is expensive in terms of time, the most commonly suggested alternative algorithms are TESLA and signature amortization. Unfortunately, these algorithms lack immediate authentication and non-repudiation. Therefore, we introduce a probabilistic verification scheme for an ECDSA-based authentication protocol. Using ns2 simulation tools, we compare the performance of all the above-mentioned broadcast authentication algorithms. The results show that, with our proposed scheme, the packet processed ratio increases over that of all the other algorithms.
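The following is a minimal sketch of the probabilistic-verification idea described above, assuming a generic ECDSA verify callback rather than the paper's exact protocol; the spot-check probability p and the message layout are illustrative assumptions.

```python
# Illustrative sketch: verify only a random fraction p of received broadcasts
# instead of running the expensive ECDSA verification on every message.
import random

def probabilistic_verify(messages, verify_fn, p=0.3):
    """messages  -- iterable of (payload, signature, sender_pubkey) tuples
    verify_fn -- callable(payload, signature, pubkey) -> bool (e.g. ECDSA verify)
    p         -- probability of spot-checking any single message
    Returns the payloads that were spot-checked successfully or accepted unchecked."""
    accepted = []
    for payload, sig, pubkey in messages:
        if random.random() < p:           # spot-check this message
            if not verify_fn(payload, sig, pubkey):
                continue                  # drop messages that fail verification
        accepted.append(payload)          # otherwise accept (provisionally)
    return accepted
```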
TCP congestion control has been known for its crucial role in stabilizing the Internet and preventing congestion collapses. However, with the rapid advancement in networking technologies, resulting in the emergence of challenging network environments such as data center networks (DCNs), the traditional TCP algorithm leads to several impairments. The shortcomings of TCP when deployed in DCNs have motivated the development of multiple new variants, including DCTCP, ICTCP, IA-TCP, and D2TCP, but all of these algorithms exhibit their advantages at the cost of a number of drawbacks in the Global Internet. Motivated by the belief that new innovations need to be established on top of a solid foundation with a thorough understanding of the existing, well-established algorithms, we have been working towards a comprehensive analysis of various conventional TCP algorithms in DCNs and other modern networks. This paper presents our first milestone towards the completion of our comparative study in which we present the results obtained by simulating multiple TCP variants: NewReno, Vegas, HighSpeed, Scalable, Westwood+, BIC, CUBIC, and YeAH using a fat tree architecture. Each protocol is evaluated in terms of queue length, number of dropped packets, average packet delay, and aggregate bandwidth as a percentage of the channel bandwidth.
We present new algorithms for Personalized PageRank estimation and Personalized PageRank search. First, for the problem of estimating Personalized PageRank (PPR) from a source distribution to a target node, we present a new bidirectional estimator with simple yet strong guarantees on correctness and performance, and 3x to 8x speedup over existing estimators in experiments on a diverse set of networks. Moreover, it has a clean algebraic structure which enables it to be used as a primitive for the Personalized PageRank Search problem: Given a network like Facebook, a query like "people named John," and a searching user, return the top nodes in the network ranked by PPR from the perspective of the searching user. Previous solutions either score all nodes or score candidate nodes one at a time, which is prohibitively slow for large candidate sets. We develop a new algorithm based on our bidirectional PPR estimator which identifies the most relevant results by sampling candidates based on their PPR; this is the first solution to PPR search that can find the best results without iterating through the set of all candidate results. Finally, by combining PPR sampling with sequential PPR estimation and Monte Carlo, we develop practical algorithms for PPR search, and we show via experiments that our algorithms are efficient on networks with billions of edges.
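As a rough illustration of the Monte Carlo side that bidirectional PPR estimators typically combine with a reverse local-push phase, the sketch below estimates PPR scores by sampling restart-terminated random walks; the adjacency-list graph format, teleport probability alpha, and walk count are assumptions, not the paper's exact estimator.

```python
# Forward Monte Carlo PPR: the endpoint distribution of random walks that
# terminate with probability alpha at each step approximates PPR from `source`.
import random

def monte_carlo_ppr(graph, source, alpha=0.2, num_walks=10000):
    """graph: dict mapping node -> list of out-neighbours."""
    visits = {}
    for _ in range(num_walks):
        node = source
        # walk until termination (or until stuck at a dangling node)
        while random.random() >= alpha and graph.get(node):
            node = random.choice(graph[node])
        visits[node] = visits.get(node, 0) + 1
    return {v: c / num_walks for v, c in visits.items()}
```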
Emerging and future healthcare policies are fueling an application-driven shift toward long-term monitoring of biosignals by means of embedded ultra-low-power Wireless Body Sensor Networks (WBSNs). For these applications to take off, new technologies are needed that allow the development of extremely power-efficient bio-sensing nodes. The PHIDIAS project aims at unlocking the development of ultra-low-power bio-sensing WBSNs by pursuing multiple interlocking technological breakthroughs: (i) the development of new signal processing models and methods based on the recently proposed Compressive Sampling paradigm, which allows the design of energy-minimal computational architectures and analog front-ends; (ii) the efficient hardware implementation of components, both analog and digital, building upon an innovative ultra-low-power signal processing front-end; (iii) the evaluation of the global power reduction using a system-wide integration of hardware and software components focused on compressed-sensing-based bio-signal analysis. PHIDIAS brought together a mixed consortium of academic and industrial research partners representing pan-European excellence in different fields impacting the energy-aware optimization of WBSNs, including experts in signal processing and digital/analog IC design. In this way, PHIDIAS pioneered a unique holistic approach, ensuring that key breakthroughs worked out in a cooperative way toward the global objective of the project.
Extensible Cyber-Physical Systems (CPS) are loosely connected, multi-domain platforms that "virtualize" their resources to provide an open platform capable of hosting different cyber-physical applications. These cyber-physical platforms are extensible since resources and applications can be added or removed at any time. However, realizing such a platform requires resolving challenges emanating from different properties; in this paper, we focus on resilience. Resilience is important for extensible CPS to ensure that the extensibility of a system does not result in failures and anomalies.
Cyber-Physical Systems (CPS) security testbeds serve as a platform for evaluating and validating novel CPS security tools and technologies, accelerating the transition of state-of-the-art research to industrial practice. The engineering of CPS security testbeds requires significant investments in money, time, and modeling effort to provide a scalable, high-fidelity, real-time attack-defense platform. Therefore, there is a strong need in academia and industry to create remotely accessible testbeds that support a range of use cases pertaining to CPS security of the grid, including vulnerability assessments, impact analysis, product testing, attack-defense exercises, and operator training. This paper describes the implementation architecture and capabilities of a remote access and experimental orchestration framework developed for the PowerCyber CPS security testbed at Iowa State University (ISU). The paper then describes several engineering challenges in the development of such remotely accessible testbeds for Smart Grid CPS security experimentation. Finally, the paper provides a brief case study, with screenshots, showing a particular use case scenario on the remote access framework.
In cloud computing, computationally weak users are always willing to outsource costly computations to a cloud, and at the same time they need to check the correctness of the results provided by the cloud. Such activities motivate the study of verifiable computation (VC). Recently, Parno, Raykova and Vaikuntanathan showed that any VC protocol can be constructed from an attribute-based encryption (ABE) scheme for the same class of functions. In this paper, we propose two practical and efficient semi-adaptively secure key-policy attribute-based encryption (KP-ABE) schemes with constant-size ciphertexts. Semi-adaptive security requires that the adversary designate the challenge attribute set after it receives the public parameters but before it issues any secret key query, which is a stronger guarantee than selective security. Our first construction deals with a small universe while the second supports a large universe. Both constructions employ the technique underlying the prime-order instantiation of nested dual system groups, which is based on the d-linear assumption, including the SXDH and DLIN assumptions. To evaluate performance, we implement our ABE schemes in Python using Charm. Compared with previous KP-ABE schemes with constant-size ciphertexts, our constructions achieve shorter ciphertext and secret key sizes and incur lower computation costs, especially under the SXDH assumption.
We consider a data owner that outsources its dataset to an untrusted server. The owner wishes to enable the server to answer range queries on a single attribute, without compromising the privacy of the data and the queries. There are several schemes on "practical" private range search (mainly in Databases venues) that attempt to strike a trade-off between efficiency and security. Nevertheless, these methods either lack provable security guarantees, or permit unacceptable privacy leakages. In this paper, we take an interdisciplinary approach, which combines the rigor of Security formulations and proofs with efficient Data Management techniques. We construct a wide set of novel schemes with realistic security/performance trade-offs, adopting the notion of Searchable Symmetric Encryption (SSE) primarily proposed for keyword search. We reduce range search to multi-keyword search using range covering techniques with tree-like indexes. We demonstrate that, given any secure SSE scheme, the challenge boils down to (i) formulating leakages that arise from the index structure, and (ii) minimizing false positives incurred by some schemes under heavy data skew. We analytically detail the superiority of our proposals over prior work and experimentally confirm their practicality.
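One standard range-covering technique of the kind referenced above is dyadic decomposition over a binary tree of the domain: each record value is labeled with the O(log |domain|) dyadic intervals containing it, and a query range is covered by a small set of canonical intervals, so range search reduces to multi-keyword search. This is a hedged sketch of that general idea; the paper's exact covering schemes and leakage trade-offs may differ.

```python
# Dyadic range covering over a domain of size 2**domain_bits.

def dyadic_labels(value, domain_bits):
    """All dyadic intervals, as (level, index) pairs, that contain `value`."""
    return [(level, value >> level) for level in range(domain_bits + 1)]

def dyadic_cover(lo, hi, domain_bits):
    """Minimal set of dyadic intervals whose union is exactly [lo, hi]."""
    cover = []
    def rec(level, index):
        start = index << level
        end = start + (1 << level) - 1
        if start > hi or end < lo:
            return                          # disjoint from the query range
        if lo <= start and end <= hi:
            cover.append((level, index))    # fully contained: use this node
            return
        rec(level - 1, 2 * index)           # otherwise split into children
        rec(level - 1, 2 * index + 1)
    rec(domain_bits, 0)
    return cover

# A range query [lo, hi] is answered by one SSE keyword query per interval in
# dyadic_cover(lo, hi, domain_bits), against an index that maps each record's
# dyadic_labels(value, domain_bits) to its encrypted identifier.
```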
Modern operating systems use hardware support to protect against control-flow hijacking attacks such as code-injection attacks. Typically, write access to executable pages is prevented and kernel mode execution is restricted to kernel code pages only. However, current CPUs provide no protection against code-reuse attacks like ROP. ASLR is used to prevent these attacks by making all addresses unpredictable for an attacker. Hence, the kernel security relies fundamentally on preventing access to address information. We introduce Prefetch Side-Channel Attacks, a new class of generic attacks exploiting major weaknesses in prefetch instructions. This allows unprivileged attackers to obtain address information and thus compromise the entire system by defeating SMAP, SMEP, and kernel ASLR. Prefetch can fetch inaccessible privileged memory into various caches on Intel x86. It also leaks the translation level for virtual addresses on both Intel x86 and ARMv8-A. We build three attacks exploiting these properties. Our first attack retrieves an exact image of the full paging hierarchy of a process, defeating both user space and kernel space ASLR. Our second attack resolves virtual to physical addresses to bypass SMAP on 64-bit Linux systems, enabling ret2dir attacks. We demonstrate this from unprivileged user programs on Linux and inside Amazon EC2 virtual machines. Finally, we demonstrate how to defeat kernel ASLR on Windows 10, enabling ROP attacks on kernel and driver binary code. We propose a new form of strong kernel isolation to protect commodity systems, incurring an overhead of only 0.06-5.09%.
Workflow-centric tracing captures the workflow of causally-related events (e.g., work done to process a request) within and among the components of a distributed system. As distributed systems grow in scale and complexity, such tracing is becoming a critical tool for understanding distributed system behavior. Yet, there is a fundamental lack of clarity about how such infrastructures should be designed to provide maximum benefit for important management tasks, such as resource accounting and diagnosis. Without research into this important issue, there is a danger that workflow-centric tracing will not reach its full potential. To help, this paper distills the design space of workflow-centric tracing and describes key design choices that can help or hinder a tracing infrastructure's utility for important tasks. Our design space and the design choices we suggest are based on our experiences developing several previous workflow-centric tracing infrastructures.
Cloud computing, often referred to simply as “the cloud,” is the delivery of on-demand computing resources, everything from applications to data centers, over the Internet. The cloud is used not only for storing data, but also for sharing the stored data among multiple users. Because of this, the integrity of cloud data is subject to doubt. Since it is not always feasible for a user to download all data and verify its integrity, the proposed system includes a Third Party Auditor (TPA) to verify the integrity of shared data. During auditing, the shared data is kept private from public verifiers, who are able to verify shared data integrity without downloading or retrieving the entire data file. A group signature is used to preserve the identity privacy of group members from the third party auditor. Privacy preservation ensures that the TPA cannot derive the user's data content from the information collected during the auditing process.
Provenance workflows capture the movement and transformation of data in complex environments, such as document management in large organizations, content generation and sharing in social media, scientific computations, etc. Sharing and processing provenance workflows brings numerous benefits, e.g., improving productivity in an organization, understanding social media interaction patterns, etc. However, directly sharing provenance may also disclose sensitive information such as confidential business practices or private details about participants in a social network. We propose an algorithm that privately extracts sequential association rules from provenance workflow datasets. Finding such rules has numerous practical applications, such as capacity planning or identifying hot-spots in provenance graphs. Our approach provides good accuracy and strong privacy by leveraging the exponential mechanism of differential privacy. We propose a heuristic that identifies promising candidate rules and makes judicious use of the privacy budget. Experimental results show that our approach is fast and accurate, and clearly outperforms the state of the art. We also identify influential factors in improving accuracy, which helps in choosing promising directions for future improvement.
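For reference, this is a minimal sketch of the exponential mechanism the abstract leverages, selecting one candidate rule with probability proportional to exp(eps * score / (2 * sensitivity)); the scoring function (e.g. rule support) and candidate set are illustrative assumptions, not the paper's heuristic.

```python
import math
import random

def exponential_mechanism(candidates, score_fn, eps, sensitivity=1.0):
    """Select one candidate under eps-differential privacy.

    score_fn    -- utility of a candidate, e.g. a rule's support in the dataset
    sensitivity -- max change of score_fn when one record is added or removed
    """
    scores = [score_fn(c) for c in candidates]
    max_score = max(scores)                  # shift by max for numerical stability
    weights = [math.exp(eps * (s - max_score) / (2.0 * sensitivity)) for s in scores]
    r = random.random() * sum(weights)
    for candidate, w in zip(candidates, weights):
        r -= w
        if r <= 0:
            return candidate
    return candidates[-1]
```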
Recent advances in differential privacy make it possible to guarantee user privacy while preserving the main characteristics of the data. However, most differential privacy mechanisms assume that the underlying dataset is clean. This paper explores the link between data cleaning and differential privacy in a framework we call PrivateClean. PrivateClean includes a technique for creating private datasets of numerical and discrete-valued attributes, a formalism for privacy-preserving data cleaning, and techniques for answering sum, count, and avg queries after cleaning. We show: (1) how the degree of privacy affects subsequent aggregate query accuracy, (2) how privacy potentially amplifies certain types of errors in a dataset, and (3) how this analysis can be used to tune the degree of privacy. The key insight is to maintain a bipartite graph relating dirty values to clean values and use this graph to estimate biases due to the interaction between cleaning and privacy. We validate these results on four datasets with a variety of well-studied cleaning techniques including using functional dependencies, outlier filtering, and resolving inconsistent attributes.
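As one common way to privatize a discrete-valued attribute and then debias aggregate queries over it (not necessarily PrivateClean's exact mechanism), the sketch below applies generalized randomized response per record and corrects a count query for the injected noise; the domain, epsilon, and target value are illustrative assumptions.

```python
import math
import random

def randomize(value, domain, eps):
    """Report the true value w.p. p; otherwise a uniform choice among the others."""
    k = len(domain)
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    if random.random() < p:
        return value
    return random.choice([v for v in domain if v != value])

def debiased_count(private_values, target, domain, eps):
    """Estimate how many true records equal `target` from the privatized column."""
    k = len(domain)
    n = len(private_values)
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    q = (1.0 - p) / (k - 1)      # chance a non-matching record reports `target`
    observed = sum(1 for v in private_values if v == target)
    return (observed - n * q) / (p - q)
```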
A number of small Open Source projects let independent providers measure different aspects of their quality that would otherwise be hard to see. This paper describes this observation as the pattern Quality Attestation. Quality Attestation belongs to a family of Open Source patterns written by various authors.
We present RamCrypt, a solution that allows unmodified Linux processes to transparently work on encrypted data. RamCrypt can be deployed and enabled on a per-process basis without recompiling user-mode applications. In every enabled process, data is stored in cleartext only for the moment it is processed, and otherwise stays encrypted in RAM. In particular, the required encryption keys do not reside in RAM, but are stored in CPU registers only. Hence, RamCrypt effectively thwarts memory disclosure attacks, which grant unauthorized access to process memory, as well as physical attacks such as cold boot and DMA attacks. In its default configuration, RamCrypt exposes only up to 4 memory pages in cleartext at the same time. For the nginx web server serving encrypted HTTPS pages under heavy load, the necessary TLS secret key is hidden for 97% of the time.
Mutex locks have traditionally been the most common mechanism for protecting shared data structures in parallel programs. However, the robustness of such locks against process failures has not been studied thoroughly. Most (user-level) mutex algorithms are designed around the assumption that processes are reliable, meaning that a process may not fail while executing the lock acquisition and release code, or while inside the critical section. If such a failure does occur, then the liveness properties of a conventional mutex lock may cease to hold until the application or operating system intervenes by cleaning up the internal structure of the lock. For example, a process that is attempting to acquire an otherwise starvation-free mutex may be blocked forever waiting for a failed process to release the critical section. Adding to the difficulty, if the failed process recovers and attempts to acquire the same mutex again without appropriate cleanup, then the mutex may become corrupted to the point where it loses safety, notably the mutual exclusion property. We address this challenge by formalizing the problem of recoverable mutual exclusion, and proposing several solutions that vary both in their assumptions regarding hardware support for synchronization, and in their time complexity. Compared to known solutions, our algorithms are more robust as they do not restrict where or when a process may crash, and provide stricter guarantees in terms of time complexity, which we define in terms of remote memory references.
Provenance for transactional updates is critical for many applications such as auditing and debugging of transactions. Recently, we have introduced MV-semirings, an extension of the semiring provenance model that supports updates and transactions. Furthermore, we have proposed reenactment, a declarative form of replay with provenance capture, as an efficient and non-invasive method for computing this type of provenance. However, this approach is limited to the snapshot isolation (SI) concurrency control protocol, while many real-world applications apply the read committed version of snapshot isolation (RC-SI) to improve performance at the cost of consistency. We present non-trivial extensions of the model and the reenactment approach to compute the provenance of RC-SI transactions efficiently. In addition, we develop techniques for applying reenactment across multiple RC-SI transactions. Our experiments demonstrate that our implementation in the GProM system supports efficient reconstruction and querying of provenance.
Different wireless Peer-to-Peer (P2P) routing protocols rely on cooperative protocols of interaction among peers; yet, most of the surveyed protocols provide little detail on how peers can take other peers' reliability into consideration to improve routing efficiency in collaborative networks. Previous research has shown that, in most trust and reputation evaluation schemes, the peers' rating behaviour can be improved by including the peers' attributes to better capture their reliability. This paper proposes a reliability-based trust model for dynamic trust evaluation between peers in P2P networks for collaborative routing. Since the peers' routing attributes vary dynamically, our proposed model must also accommodate the dynamic changes of peers' attributes and behaviour. We introduce peers' buffers as a scaling factor for peers' trust evaluation in trust and reputation routing protocols. A simulation-based comparison between the reliability-based and non-reliability-based trust models shows the improved performance of our proposed model in terms of delivery ratio and average message latency.
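One illustrative reading of buffer-scaled trust (not the paper's exact model): a peer's combined trust score, blending direct experience with reputation, is scaled by its free-buffer ratio so that peers with little remaining buffer are treated as less reliable relays; the weighting alpha is an assumption.

```python
def reliability_trust(direct_trust, reputation, buffer_free, buffer_capacity, alpha=0.6):
    """Blend direct trust and reputation, then scale by the available buffer ratio."""
    buffer_factor = (buffer_free / buffer_capacity) if buffer_capacity else 0.0
    base_trust = alpha * direct_trust + (1.0 - alpha) * reputation
    return base_trust * buffer_factor
```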