Biblio
We propose a game-theoretic framework for task allocation in mobile cloud computing, in which compute tasks are offloaded to a group of nearby mobile devices. Specifically, in our framework, a distributor node holds a multidimensional auction to allocate the tasks of a job among nearby mobile nodes based on their computational capabilities and their costs of computation, with the goal of reducing the overall job completion time. Our auction also has the desired incentive-compatibility property, ensuring that mobile devices truthfully reveal their capabilities and costs and that those devices benefit from the task allocation. To deal with node mobility, we perform multiple auctions over adaptive time intervals. We develop a heuristic approach that dynamically finds the best time intervals between auctions, minimizing unnecessary auctions and the accompanying overheads. We evaluate our framework and methods using both real-world and synthetic mobility traces. Our evaluation results show that our game-theoretic framework improves job completion time by a factor of 2-5 compared to executing the job locally, while minimizing the number of auctions and the accompanying overheads. Our approach is also profitable for the nearby nodes that execute the distributor's tasks, as these nodes receive compensation higher than their actual costs.
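To make the mechanism concrete, here is a minimal sketch of the distributor's side: score-based winner selection and proportional task splitting. The scoring weight, bid fields, and rounding are our own simplifications, and the incentive-compatible (second-price-style) payment rule is elided.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    node: str
    rate: float   # claimed tasks per second
    cost: float   # claimed cost per task

def score(bid: Bid, weight: float = 1.0) -> float:
    # Higher claimed rate is better; higher claimed cost is worse.
    return bid.rate - weight * bid.cost

def allocate(bids: list[Bid], n_tasks: int, n_winners: int) -> dict[str, int]:
    """Give tasks to the top-scoring nodes, split proportionally to rate."""
    winners = sorted(bids, key=score, reverse=True)[:n_winners]
    total_rate = sum(b.rate for b in winners)
    # round() may leave a task or two unassigned; a real allocator reconciles.
    return {b.node: round(n_tasks * b.rate / total_rate) for b in winners}

bids = [Bid("A", rate=4.0, cost=1.0), Bid("B", rate=2.0, cost=0.5),
        Bid("C", rate=1.0, cost=2.0)]
print(allocate(bids, n_tasks=60, n_winners=2))   # {'A': 40, 'B': 20}
```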
In recent years, the analog-to-information converter (AIC), based on the compressed sensing (CS) paradigm, has emerged as a promising solution to overcome the performance and energy-efficiency limitations of traditional analog-to-digital converters (ADCs). In particular, AICs can enable sub-Nyquist signal sampling proportional to the intrinsic information content of the signal, which is attractive in biomedical applications. However, legacy AIC structures are tailored to specific applications, which limits their flexibility and prevents universal use. In this paper, we introduce a novel programmable AIC architecture, Pro-AIC, to enable effective configurability and reduce energy overhead by integrating an efficient multiplexing hardware design. To improve the quality and time-efficiency of Pro-AIC configuration, we also develop a rapid configuration algorithm, called RapSpiral, to quickly find a near-optimal parameter configuration for the Pro-AIC architecture. Specifically, we present a design metric, the trade-off penalty, to quantitatively evaluate the performance-energy trade-off. RapSpiral controls a penalty-driven shrinking triangle to progressively approximate the optimal trade-off. RapSpiral has O(log n) complexity yet high accuracy, and requires neither pretraining nor a complex parameter-tuning procedure. It is also likely to avoid local-minimum pitfalls. Experimental results indicate that RapSpiral achieves more than a 30x speedup over a brute-force search, with only about a 3% trade-off compromise relative to the optimum in Pro-AIC. Furthermore, its scalability is verified on larger benchmarks.
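The shrinking-triangle idea can be illustrated with a tiny Nelder-Mead-style search over a two-parameter configuration space. The penalty surface and the reflect/shrink rules below are placeholders of ours, not the paper's definitions of the trade-off penalty or of RapSpiral's update steps.

```python
import numpy as np

def tradeoff_penalty(cfg: np.ndarray) -> float:
    # Placeholder surface: pretend cfg = (sampling_rate, resolution) and the
    # penalty blends reconstruction error and energy into one convex bowl.
    x, y = cfg
    return (x - 3.0) ** 2 + 0.5 * (y - 1.0) ** 2

def triangle_search(vertices, n_iters=60, shrink=0.5):
    v = [np.asarray(p, dtype=float) for p in vertices]
    for _ in range(n_iters):
        v.sort(key=tradeoff_penalty)              # v[0] best, v[2] worst
        centroid = (v[0] + v[1]) / 2.0
        reflected = centroid + (centroid - v[2])  # step across the triangle
        if tradeoff_penalty(reflected) < tradeoff_penalty(v[2]):
            v[2] = reflected                      # keep moving downhill
        else:
            v = [v[0]] + [v[0] + shrink * (p - v[0]) for p in v[1:]]
    return v[0], tradeoff_penalty(v[0])

best, pen = triangle_search([(0, 0), (6, 0), (0, 6)])
print(best, pen)   # ends up near the optimum (3, 1) on this toy surface
```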
Named-Data Networking (NDN) is the most prominent clean-slate proposal for the Future Internet. Nevertheless, NDN routing schemes raise scalability concerns due to the required number of stored routes and control messages. In this work, we present CRoS, a controller-based routing protocol for NDN, which we specify unambiguously using a formal method and validate to prove its correctness. Our proposal encodes signaling information in content names, avoiding control-message overhead, and reduces router memory requirements by storing only the routes for simultaneously consumed prefixes. Additionally, the protocol installs a new route on all routers in a path with a single route request to the controller, avoiding replication of routing information and automating router provisioning. We describe the protocol using the Specification and Description Language and validate it, proving that CRoS behavior is free of deadlocks and livelocks. Furthermore, the validation guarantees that the scheme ensures a valid working path from consumer to producer, even though it does not assure the shortest path.
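A rough sketch of the single-request route installation, under our own simplified data structures (the actual protocol encodes this signaling in content names, which is elided here):

```python
class Router:
    def __init__(self, name: str):
        self.name, self.fib = name, {}       # prefix -> next hop

class Controller:
    def __init__(self, paths):
        self.paths = paths                   # (ingress router, prefix) -> path

    def route_request(self, ingress: str, prefix: str) -> list[str]:
        return self.paths[(ingress, prefix)]

def install_route(routers, ctrl, ingress, prefix):
    """One request to the controller provisions every hop on the path."""
    path = ctrl.route_request(ingress, prefix)        # e.g. ['r1', 'r2', 'r3']
    for here, nxt in zip(path, path[1:]):
        routers[here].fib[prefix] = nxt

routers = {n: Router(n) for n in ("r1", "r2", "r3")}
ctrl = Controller({("r1", "/videos"): ["r1", "r2", "r3"]})
install_route(routers, ctrl, "r1", "/videos")
print(routers["r2"].fib)                              # {'/videos': 'r3'}
```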
Because DNS packets are mostly UDP-based, DNS is a perfect tool for hackers to launch a well-known type of distributed denial-of-service (DDoS) attack. The purpose of this attack is to saturate the DNS server's resources and availability. It usually employs a large botnet and spoofs the IP address of the targeted victim. We take a different approach to IP-spoofing detection and mitigation to protect the DNS server, utilizing Software-Defined Networking (SDN). In this paper, we present CAuth, a novel mechanism that autonomously blocks spoofed query packets while authenticating legitimate queries. By manipulating OpenFlow control messages, we design a collaborative approach between the client and server networks. Whenever the server controller receives a query packet, it sends an authentication packet back to the client network, and the client controller in turn replies with an authentication packet to the server controller. The server controller only forwards the query to the DNS server if it receives the reply authentication packet from the client. In our evaluation, CAuth blocks spoofed query packets while authenticating legitimate queries as soon as the mechanism starts. Most notably, our mechanism requires no changes to existing DNS applications or the OpenFlow protocol.
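A toy simulation of the described exchange, with invented class names and a placeholder nonce, to show why a spoofed source never gets its query forwarded:

```python
# The server-side controller withholds a DNS query until the client-side
# controller echoes an authentication packet; spoofed sources (which never
# see the challenge) are dropped.

class ClientController:
    def __init__(self, owned_addresses):
        self.owned = set(owned_addresses)

    def on_auth_request(self, src_ip, nonce):
        # Only answer for queries our own network actually sent.
        return nonce if src_ip in self.owned else None

class ServerController:
    def __init__(self):
        self.pending = {}          # nonce -> query held back from DNS server

    def on_query(self, query, src_ip, client_ctrl):
        nonce = hash((query, src_ip))          # placeholder nonce
        self.pending[nonce] = query
        reply = client_ctrl.on_auth_request(src_ip, nonce)
        if reply == nonce:
            return f"forward to DNS server: {self.pending.pop(nonce)}"
        return "drop: unauthenticated (likely spoofed) query"

client = ClientController(owned_addresses={"10.0.0.5"})
server = ServerController()
print(server.on_query("example.com A?", "10.0.0.5", client))    # forwarded
print(server.on_query("example.com A?", "203.0.113.9", client)) # dropped
```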
Among the signature schemes most widely deployed in practice are the DSA (Digital Signature Algorithm) and its elliptic curve variant ECDSA. They are represented in many international standards, including IEEE P1363, ANSI X9.62, and FIPS 186-4. Their popularity stands in stark contrast to the absence of rigorous security analyses: Previous works either study modified versions of (EC)DSA or provide a security analysis of unmodified ECDSA in the generic group model. Unfortunately, works following the latter approach assume abstractions of non-algebraic functions over generic groups for which it remains unclear how they translate to the security of ECDSA in practice. For instance, it has been pointed out that prior results in the generic group model actually establish strong unforgeability of ECDSA, a property that the scheme de facto does not possess. As, further, no formal results are known for DSA, understanding the security of both schemes remains an open problem. In this work we propose GenericDSA, a signature framework that subsumes both DSA and ECDSA in unmodified form. It carefully models the "modulo q" conversion function of (EC)DSA as a composition of three independent functions. The two outer functions mimic algebraic properties in the function's domain and range, the inner one is modeled as a bijective random oracle. We rigorously prove results on the security of GenericDSA that indicate that forging signatures in (EC)DSA is as hard as solving discrete logarithms. Importantly, our proofs do not assume generic group behavior.
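For orientation, here is textbook ECDSA over a tiny toy curve (y² = x³ + 2x + 2 mod 17, base point (5, 1) of prime order 19), with the conversion function f(R) = x(R) mod q pulled out explicitly; this f is the function GenericDSA decomposes into three parts. Toy parameters only, never usable in practice:

```python
import hashlib, random

p, a, q, G = 17, 2, 19, (5, 1)      # curve field, coefficient, group order, base

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0: return None     # point at infinity
    if P == Q: m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:      m = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def mul(k, P):
    R = None
    while k:
        if k & 1: R = add(R, P)
        P, k = add(P, P), k >> 1
    return R

def f(R):                            # the (EC)DSA conversion function
    return R[0] % q

def H(msg):                          # hash reduced into Z_q
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % q

def sign(d, msg):
    while True:
        k = random.randrange(1, q)
        r = f(mul(k, G))
        if r == 0: continue
        s = pow(k, -1, q) * (H(msg) + d * r) % q
        if s: return (r, s)

def verify(Q, msg, sig):
    r, s = sig
    w = pow(s, -1, q)
    R = add(mul(H(msg) * w % q, G), mul(r * w % q, Q))
    return R is not None and f(R) == r

d = 7                                # toy private key
Q = mul(d, G)
print(verify(Q, b"hello", sign(d, b"hello")))   # True
```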
Blind signatures can be deployed to preserve user anonymity and are widely used in digital cash and e-voting. As interactive protocols, blind signature schemes require high efficiency. In this paper, we propose a code-based blind signature scheme with high efficiency: unlike existing code-based signature schemes, it can produce a valid signature without many iterations. We then prove the security of our scheme in the random oracle model and analyze its efficiency. Since code-based signature schemes are post-quantum, our scheme is also able to resist quantum attacks.
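The construction above is code-based; since that is hard to condense, the sketch below instead uses Chaum's classic RSA blind signature, purely to illustrate the blind/sign/unblind flow that any blind signature scheme follows:

```python
import math

# Toy RSA keypair (never use such small parameters).
p_, q_ = 61, 53
n, e = p_ * q_, 17
d = pow(e, -1, (p_ - 1) * (q_ - 1))

def blind(m, r):       # user: hide the message from the signer
    return (m * pow(r, e, n)) % n

def sign_blinded(mb):  # signer: signs without ever seeing m
    return pow(mb, d, n)

def unblind(sb, r):    # user: recover a valid signature on m
    return (sb * pow(r, -1, n)) % n

m, r = 42, 2           # message and blinding factor, gcd(r, n) must be 1
assert math.gcd(r, n) == 1
sig = unblind(sign_blinded(blind(m, r)), r)
print(pow(sig, e, n) == m % n)    # True: the unblinded signature verifies
```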
Collecting and processing provenance, i.e., information describing the production process of some end product, is important in various applications, e.g., to assess quality, to ensure reproducibility, or to reinforce trust in the end product. In the past, different types of provenance meta-data have been proposed, each with a different scope. The first part of the proposed tutorial provides an overview and comparison of these different types of provenance. To put provenance to good use, it is essential to be able to interact with and present provenance data in a user-friendly way. Often, users interested in provenance are not experts in databases or query languages; rather, they are typically domain experts of the product and production process for which provenance is collected (biologists, journalists, etc.). Furthermore, in some scenarios, it is difficult to analyze and explore provenance data using queries alone. The second part of this tutorial therefore focuses on enabling users to leverage provenance through adapted visualizations. To this end, we will present some fundamental concepts of visualization before discussing possible visualizations for provenance.
One essential requirement for supporting analytics in Big Medical Data systems is a suitable level of traceability for data and processes ('Items') in large volumes of data. Systems should be designed from the outset to support the use of such Items across the spectrum of medical applications and over time, in order to promote traceability, simplify maintenance, and assist analytics. The philosophy proposed in this paper is to design medical data systems using a 'description-driven' approach, in which meta-data and the descriptions of medical Items are saved alongside the data, simplifying Item re-use and thereby enabling the traceability of these Items over time and their use in analytics. Details are given of a big data system in neuroimaging to demonstrate aspects of provenance data capture, collaborative analysis, and longitudinal information traceability. Evidence is presented that the description-driven approach leads to simplicity of design and ease of maintenance following the adoption of a unified approach to Item management.
Defenders of enterprise networks have a critical need to quickly identify the root causes of malware and data leakage. Increasingly, USB storage devices are the media of choice for data exfiltration, malware propagation, and even cyber-warfare. We observe that a critical aspect of explaining and preventing such attacks is understanding the provenance of data (i.e., the lineage of data from its creation to current state) on USB devices as a means of ensuring their safe usage. Unfortunately, provenance tracking is not offered by even sophisticated modern devices. This work presents ProvUSB, an architecture for fine-grained provenance collection and tracking on smart USB devices. ProvUSB maintains data provenance by recording reads and writes at the block layer and reliably identifying hosts editing those blocks through attestation over the USB channel. Our evaluation finds that ProvUSB imposes a one-time 850 ms overhead during USB enumeration, but approaches nearly-bare-metal runtime performance (90% of throughput) on larger files during normal execution, and less than 0.1% storage overhead for provenance in real-world workloads. ProvUSB thus provides essential new techniques in the defense of computer systems and USB storage devices.
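A minimal sketch of block-layer provenance bookkeeping in this spirit, with invented field names and a stubbed-out attestation check standing in for attestation over the USB channel:

```python
import time

class ProvenanceLog:
    def __init__(self):
        self.records = []                  # append-only provenance store

    def attest_host(self, host_id, quote):
        # Placeholder for real attestation of the host over the USB channel.
        return quote == f"valid-quote-for-{host_id}"

    def log_io(self, host_id, op, block_no):
        """Record every read/write against the block it touches."""
        assert op in ("read", "write")
        self.records.append({"host": host_id, "op": op,
                             "block": block_no, "ts": time.time()})

    def lineage(self, block_no):
        """All operations that ever touched a block, oldest first."""
        return [r for r in self.records if r["block"] == block_no]

log = ProvenanceLog()
if log.attest_host("laptop-42", "valid-quote-for-laptop-42"):
    log.log_io("laptop-42", "write", block_no=17)
    log.log_io("laptop-42", "read", block_no=17)
print(log.lineage(17))
```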
We propose a pseudorandom bit generator (PRBG) that constructs an ergodic orbit of the logistic map using intermittent perturbation and samples the orbit using the Bernoulli shift map. The pseudorandomness of the key streams is tested with the standard NIST statistical test suite, and the key streams generated by the proposed PRBG pass all of its tests. As an application of the PRBG, we propose chaotic stream ciphers and conduct goodness-of-fit tests on them. The resulting ciphertexts are shown to have a character distribution that closely follows the uniform distribution. We also show that the ciphers exhibit a good avalanche effect when the initial seed is varied infinitesimally, and that the avalanche effect is clearly observable at different points within the ciphertext as it is being generated. Finally, we show that the proposed PRBG is on par with the conventional PRBG RC4. This supports our hypothesis that bit streams obtained from orbits of the logistic map, via perturbation and the Bernoulli shift map, exhibit randomness.
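The generator is easy to sketch: iterate the logistic map, apply a small intermittent perturbation, and extract one bit per step via the Bernoulli shift partition (bit = 1 iff the state lies in [0.5, 1)). All constants below are our assumptions, not the paper's parameters:

```python
def prbg_bits(x0=0.123456, n_bits=64, r=4.0, perturb_every=53, eps=1e-7):
    x, bits = x0, []
    for i in range(1, n_bits + 1):
        x = r * x * (1.0 - x)                 # logistic map step
        if i % perturb_every == 0:            # intermittent perturbation
            x = (x + eps) % 1.0
        bits.append(1 if x >= 0.5 else 0)     # Bernoulli-shift sampling
    return bits

stream = prbg_bits()
print("".join(map(str, stream)))
```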
Cloud service providers offer storage outsourcing facilities to their clients. A secure cloud storage (SCS) protocol maintains the integrity of the client's data. In this work, we construct a publicly verifiable secure cloud storage protocol based on a secure network coding (SNC) protocol, in which the client can update the outsourced data as needed. To the best of our knowledge, our scheme is the first SNC-based SCS protocol for dynamic data that is secure in the standard model and provides privacy-preserving audits in a publicly verifiable setting. Furthermore, we discuss in detail the (im)possibility of a general construction of an efficient SCS protocol for dynamic data (a DSCS protocol) from an arbitrary SNC protocol. In addition, we modify an existing DSCS scheme (DPDP I) to support privacy-preserving audits. We also compare our DSCS protocol with other SCS schemes (including the modified DPDP I scheme). Finally, we identify some limitations of SCS schemes constructed from SNC protocols.
Graph data publishing under node-differential privacy (node-DP) is challenging due to the high sensitivity of queries. However, since a node in graph data often represents a person, node-DP is necessary to achieve personal data protection. In this paper, we investigate the problem of publishing the degree distribution of a graph under node-DP, exploring the projection approach to reduce sensitivity. We propose two approaches, based on aggregation and on a cumulative histogram, to publish the degree distribution. Experiments demonstrate that our approaches greatly reduce the error in approximating the true degree distribution and significantly improve on existing work. We also present an introspective analysis to understand the factors involved in publishing the degree distribution under node-DP.
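A hedged sketch of the projection idea: cap every node's degree at a bound θ by dropping excess edges, then release a Laplace-noised degree histogram. The greedy truncation and the sensitivity constant 2θ + 1 are common choices for θ-bounded graphs, not necessarily this paper's exact analysis:

```python
import numpy as np

def project(edges, theta):
    """Greedy edge-by-edge truncation to a theta-degree-bounded graph."""
    deg, kept = {}, []
    for u, v in edges:
        if deg.get(u, 0) < theta and deg.get(v, 0) < theta:
            kept.append((u, v))
            deg[u] = deg.get(u, 0) + 1
            deg[v] = deg.get(v, 0) + 1
    return deg

def noisy_degree_hist(edges, theta, eps, rng=np.random.default_rng(0)):
    deg = project(edges, theta)
    hist = np.bincount(list(deg.values()), minlength=theta + 1).astype(float)
    sensitivity = 2 * theta + 1          # one person touches O(theta) cells
    return hist + rng.laplace(scale=sensitivity / eps, size=hist.size)

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3)]
print(noisy_degree_hist(edges, theta=2, eps=1.0))
```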
Generic multi-button controllers are the most common input devices for video games. In contrast, dedicated game controllers and gestural interaction increase immersion and playability. Room-sized gaming has opened up possibilities to further enhance the immersive experience, and provides players with opportunities to use full-body movements as input. We present a purpose-centric approach to appropriating everyday objects as physical game controllers for immersive room-sized gaming. Virtual manipulations supported by such physical controllers mimic real-world function and usage, opening up new possibilities for interactions that flow seamlessly from the physical into the virtual world. As a proof of concept, we present a 'Tower Defence'-style game that uses four everyday household objects as game controllers, each serving as a weapon to defend the players' base from enemy bots. Players can use 1) a mop (or a broom) to sweep enemy bots away directionally; 2) a fan to scatter them; 3) a vacuum cleaner to suck them up; and 4) a mouse trap to destroy them. Each controller is tracked using a motion capture system. A physics engine integrated into the game ensures that virtual objects act as though they were manipulated by the actual physical controller, providing players with a highly immersive gaming experience.
We address the problem of stabilizing control for complex queueing systems with known parameters but an unobservable Markovian random environment. In such systems, the controller needs to assign servers to queues without full information about the servers' states. The control challenge is to devise a policy that matches servers to queues while taking state estimates into account, and the maximally attainable stability regions are non-trivial. To handle these situations, we model the system under given decision rules, using a Quasi-Birth-and-Death (QBD) structure to find a matrix-analytic expression for the stability bound. We use this formulation to illustrate how the stability region grows as the number of controller belief states increases.
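For a level-independent QBD with blocks A0 (level up), A1 (within level), and A2 (level down), the standard matrix-analytic recipe computes the rate matrix R from R = A0 + R·A1 + R²·A2 and checks stability via the mean-drift condition. The sketch below uses toy numbers and the simplest fixed-point iteration; the paper's belief-state construction is not modeled:

```python
import numpy as np

def qbd_R(A0, A1, A2, tol=1e-12, max_iter=100_000):
    """Minimal solution of R = A0 + R A1 + R^2 A2 by successive substitution."""
    R = np.zeros_like(A0)
    for _ in range(max_iter):
        R_next = A0 + R @ A1 + R @ R @ A2
        if np.max(np.abs(R_next - R)) < tol:
            return R_next
        R = R_next
    raise RuntimeError("no convergence; chain may be unstable")

def is_stable(A0, A1, A2):
    A = A0 + A1 + A2                               # phase-process transition matrix
    n = A.shape[0]
    M = np.vstack([(A - np.eye(n)).T, np.ones(n)]) # pi (A - I) = 0, sum(pi) = 1
    b = np.zeros(n + 1); b[-1] = 1.0
    pi = np.linalg.lstsq(M, b, rcond=None)[0]
    return pi @ A0.sum(axis=1) < pi @ A2.sum(axis=1)   # mean drift is downward

A0 = np.array([[0.2, 0.0], [0.0, 0.1]])           # level up   (arrivals)
A1 = np.array([[0.3, 0.1], [0.2, 0.3]])           # same level
A2 = np.array([[0.4, 0.0], [0.1, 0.3]])           # level down (services)
print(is_stable(A0, A1, A2), qbd_R(A0, A1, A2))
```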
With the growing understanding of information security and digital assets, network admission control (NAC) has taken on tremendous importance in IT. In a NAC architecture, admission decisions and resource reservations are made at edge devices, rather than at resources or individual routers within the network. The NAC architecture enables resilient resource reservation, maintaining reservations even after failures and intra-domain rerouting. The future of admission control rests on IP networks meeting the security and Quality of Service (QoS) demands of real-time multimedia applications via advance resource reservation techniques. To meet these demands in networks with real-time performance requirements, an admission control algorithm decides whether a new traffic flow can be admitted to the network. Secure peer allocation for multimedia traffic flows with the required performance is a major challenge in resource reservation schemes. In this paper, we propose a model for VoIP networks that achieves security services along with QoS, where admission control decisions are made at edge routers. We analyze and argue that measurement-based admission control should be performed at edge routers, employing on-demand probing in parallel from both edge routers to secure the source and destination nodes, respectively. To achieve security and QoS for a new call, we choose different probe packet sizes for voice and video calls. We further propose an admission control algorithm that adopts a security-aware approach to selecting the admission control threshold. All results are obtained from NS2-based simulations evaluating the network performance of edge-router-based admission control for VoIP traffic.
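A stripped-down illustration of measurement-based admission at an edge router, with invented probe sizes and thresholds: probes are sent along the path, and the call is admitted only if measured loss and delay both clear the thresholds:

```python
PROBE_BYTES = {"voice": 160, "video": 1200}      # assumed per-media probe sizes
MAX_LOSS, MAX_DELAY_MS = 0.01, 150.0             # assumed QoS thresholds

def admit(media: str, probe_results: list[tuple[bool, float]]) -> bool:
    """probe_results: (delivered?, one_way_delay_ms) per probe of size
    PROBE_BYTES[media] sent along the candidate path."""
    delivered = [d for ok, d in probe_results if ok]
    loss = 1.0 - len(delivered) / len(probe_results)
    avg_delay = sum(delivered) / max(len(delivered), 1)
    return loss <= MAX_LOSS and avg_delay <= MAX_DELAY_MS

probes = [(True, 40.0)] * 99 + [(False, 0.0)]    # 1% loss, 40 ms average
print(admit("voice", probes))                    # True: admit the call
```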
We study the cooperative spectrum leasing process between a primary user (PU) and a secondary user (SU) in a cognitive radio network under the overlay approach and the decode-and-forward (DF) cooperative protocol. Considering the Quality of Service (QoS) provisioning of both users, which participate in a three-phase leasing process, we investigate the maximization of the PU's effective capacity subject to an average energy constraint for the SU under a heuristic power and time allocation mechanism. The proposed scheme builds on basic concepts from convex optimization theory and, as shown by simulations, outperforms a baseline allocation mechanism. Finally, important observations about the PU's and SU's performance are drawn for different system parameters.
A number of small Open Source projects let independent providers measure different aspects of their quality that would otherwise be hard to see. This paper describes this observation as the pattern Quality Attestation. Quality Attestation belongs to a family of Open Source patterns written by various authors.
In recent years, the silicon-based Physical Unclonable Function (PUF) has emerged as one of the most popular hardware security primitives. PUFs are a class of circuits that use the inherent variations of the Integrated Circuit (IC) manufacturing process to create unique and unclonable IDs. PUFs have various security-related applications, such as detection of IC counterfeiting and piracy, and secure key management. In this paper, we present a novel QUasi-Adiabatic Logic based PUF (QUALPUF), designed using an energy-recovery technique. To the best of our knowledge, this is the first work on the hardware design of a PUF using adiabatic logic. The proposed design is energy-efficient compared to recent hardware PUF designs in the literature. Further, we propose a novel bit-extraction method for our PUF that enlarges the space of challenge-response pairs. QUALPUF is evaluated on security metrics including reliability, uniqueness, uniformity, and bit-aliasing, and its power and area are also reported. SPICE simulations show that QUALPUF consumes 0.39 μW of power to generate a response bit.
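The quality metrics named above are standard and easy to compute from challenge-response data; the sketch below scores a random placeholder CRP matrix (not QUALPUF measurements) for uniformity, uniqueness, and reliability:

```python
import numpy as np

def uniformity(responses):           # fraction of 1s per chip; ideal 50%
    return responses.mean(axis=1)

def uniqueness(responses):           # mean pairwise inter-chip HD; ideal 50%
    n = len(responses)
    hds = [np.mean(responses[i] != responses[j])
           for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(hds))

def reliability(ref, noisy):         # 1 - intra-chip HD on re-reads; ideal 100%
    return 1.0 - float(np.mean(ref != noisy))

rng = np.random.default_rng(1)
crps = rng.integers(0, 2, size=(8, 128))       # 8 chips x 128 response bits
reread = crps[0] ^ (rng.random(128) < 0.02)    # re-read of chip 0, 2% flip noise
print(uniformity(crps).round(2), uniqueness(crps), reliability(crps[0], reread))
```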
Let G be a finite connected graph, and let ρ be the spectral radius of its universal cover. For example, if G is k-regular then ρ = 2√(k−1). We show that for every r, there is an r-covering (a.k.a. an r-lift) of G where all the new eigenvalues are bounded from above by ρ. It follows that a bipartite Ramanujan graph has a Ramanujan r-covering for every r. This generalizes the r=2 case due to Marcus, Spielman and Srivastava (2013). Every r-covering of G corresponds to a labeling of the edges of G by elements of the symmetric group Sr. We generalize this notion to labeling the edges by elements of various groups and present a broader scenario where Ramanujan coverings are guaranteed to exist. In particular, this shows the existence of richer families of bipartite Ramanujan graphs than was known before. Inspired by Marcus-Spielman-Srivastava, a crucial component of our proof is the existence of interlacing families of polynomials for complex reflection groups. The core argument of this component is taken from Marcus-Spielman-Srivastava (2015). Another important ingredient of our proof is a new generalization of the matching polynomial of a graph. We define the r-th matching polynomial of G to be the average matching polynomial of all r-coverings of G. We show this polynomial shares many properties with the original matching polynomial. For example, it is real rooted with all its roots inside [−ρ,ρ].
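The classical matching polynomial μ(G, x) = Σ_k (−1)^k m_k x^(n−2k) can be computed with the standard edge recursion μ(G) = μ(G − e) − μ(G − u − v); the paper's r-th matching polynomial averages this over all r-coverings, which the sketch below does not attempt (it handles only the base r = 1 object):

```python
import numpy as np

def matching_poly(vertices: frozenset, edges: list) -> np.ndarray:
    """Coefficients of mu(G, x), highest degree first."""
    if not edges:
        p = np.zeros(len(vertices) + 1)
        p[0] = 1.0                         # mu of an empty graph is x^n
        return p
    (u, v), rest = edges[0], edges[1:]
    without_e = matching_poly(vertices, rest)
    keep = [e for e in rest if u not in e and v not in e]
    without_uv = matching_poly(vertices - {u, v}, keep)
    # mu(G - u - v) has degree n - 2: pad two leading zeros before subtracting
    return without_e - np.concatenate([[0.0, 0.0], without_uv])

verts, edges = frozenset(range(3)), [(0, 1), (0, 2), (1, 2)]
coeffs = matching_poly(verts, edges)       # triangle K3: x^3 - 3x
print(coeffs, np.roots(coeffs))            # roots 0, +-sqrt(3), inside [-2, 2]
```

For K3 (2-regular, so ρ = 2√(2−1) = 2), all roots indeed lie in [−ρ, ρ].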
We present RamCrypt, a solution that allows unmodified Linux processes to transparently work on encrypted data. RamCrypt can be deployed and enabled on a per-process basis without recompiling user-mode applications. In every enabled process, data is only stored in cleartext for the moment it is processed, and otherwise stays encrypted in RAM. In particular, the required encryption keys do not reside in RAM, but are stored in CPU registers only. Hence, RamCrypt effectively thwarts memory disclosure attacks, which grant unauthorized access to process memory, as well as physical attacks such as cold boot and DMA attacks. In its default configuration, RamCrypt exposes only up to 4 memory pages in cleartext at the same time. For the nginx web server serving encrypted HTTPS pages under heavy load, the necessary TLS secret key is hidden 97% of the time.
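A toy model of the sliding cleartext window: at most WINDOW pages are decrypted at any time, and touching a new page re-encrypts the oldest one. XOR stands in for real encryption, and keeping the key in CPU registers is of course not modeled in Python:

```python
from collections import OrderedDict

WINDOW = 4                             # cleartext pages at a time (paper's default)
KEY = 0x5A                             # placeholder; the real key never hits RAM

def xor_page(page: bytes) -> bytes:    # stand-in for real per-page encryption
    return bytes(b ^ KEY for b in page)

class RamCryptModel:
    def __init__(self, pages):
        self.enc = {i: xor_page(p) for i, p in enumerate(pages)}  # RAM: encrypted
        self.clear = OrderedDict()                                # decrypted window

    def access(self, i):
        if i not in self.clear:
            if len(self.clear) >= WINDOW:              # evict the oldest page:
                j, pt = self.clear.popitem(last=False) # re-encrypt it before
                self.enc[j] = xor_page(pt)             # it returns to RAM
            self.clear[i] = xor_page(self.enc[i])
        return self.clear[i]

m = RamCryptModel([bytes([i] * 8) for i in range(6)])
for i in (0, 1, 2, 3, 4):              # the fifth access evicts page 0
    m.access(i)
print(sorted(m.clear))                 # [1, 2, 3, 4]
```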
The modern world has witnessed a dramatic increase in our ability to collect, transmit, and distribute real-time monitoring and surveillance data from large-scale information systems and cyber-physical systems. Detecting system anomalies thus attracts significant interest in many fields, such as security, fault management, and industrial optimization. Recently, invariant networks have proven to be a powerful way of characterizing complex system behaviours. In an invariant network, a node represents a system component and an edge indicates a stable, significant interaction between two components. The structure and evolution of the invariant network, in particular the vanishing correlations, can shed important light on locating causal anomalies and performing diagnosis. However, existing approaches to detecting causal anomalies with the invariant network often use the percentage of vanishing correlations to rank possible causal components, which has several limitations: 1) fault propagation in the network is ignored; 2) the root causal anomalies may not always be the nodes with a high percentage of vanishing correlations; 3) temporal patterns of vanishing correlations are not exploited for robust detection. To address these limitations, in this paper we propose a network-diffusion-based framework to identify significant causal anomalies and rank them. Our approach can effectively model fault propagation over the entire invariant network, and can perform joint inference on both the structural and the time-evolving broken-invariance patterns. As a result, it can locate high-confidence anomalies that are truly responsible for the vanishing correlations, and can compensate for unstructured measurement noise in the system. Extensive experiments on synthetic datasets, bank information system datasets, and coal plant cyber-physical system datasets demonstrate the effectiveness of our approach.
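A generic sketch of the diffusion intuition: propagate per-node broken-invariance scores over the invariant network with a personalized-PageRank-style iteration, so that nodes able to explain many broken correlations rise to the top. This is far simpler than the paper's joint structural/temporal inference:

```python
import numpy as np

def diffusion_rank(adj, broken_score, alpha=0.8, n_iter=100):
    """adj: symmetric invariant-network adjacency; broken_score[i]: fraction
    of node i's invariants currently broken. Returns a causal-anomaly score."""
    P = adj / np.maximum(adj.sum(axis=0, keepdims=True), 1e-12)  # column-stochastic
    e = broken_score / broken_score.sum()
    r = e.copy()
    for _ in range(n_iter):
        r = alpha * P @ r + (1 - alpha) * e   # propagate, then restart at evidence
    return r

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
broken = np.array([0.9, 0.4, 0.5, 0.1])
print(diffusion_rank(adj, broken))            # higher = more likely root cause
```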
Android is currently the most widely used mobile environment. This trend encourages malware writers to develop specific attacks targeting this platform, with threats designed to covertly collect data or financially extort victims: so-called ransomware. In this paper we use formal methods, in particular model checking, to automatically dissect ransomware samples. Starting from manual inspection of a few samples, we define a set of rules to check whether the behaviours we find are representative of ransomware functionality.