Bibliography
Secure Data Sharing (SDS) enables users to share data in the cloud in a confidential and integrity-preserving manner. Many recent SDS approaches are based on Attribute-Based Encryption (ABE), leveraging the advantage that ABE allows a single ciphertext to address a multitude of users. However, ABE approaches often come with the downside that they require a central, fully trusted entity that is able to decrypt any ciphertext in the system. In this paper, we investigate whether ABE can be used to efficiently implement Decentralized Secure Data Sharing (D-SDS), which explicitly demands that authorization and access control enforcement be carried out solely by the owner of the data, without the help of a fully trusted third party. For this purpose, we conducted a comprehensive analysis of recent ABE approaches with regard to D-SDS requirements. We found one ABE approach to be suitable, and we show different alternatives for employing this ABE approach in a group-based D-SDS scenario. For a realistic estimate of resource consumption, we give concrete resource consumption values for workloads taken from real-world system traces and exemplary up-to-date mobile devices. Our results indicate that, for most D-SDS operations, the resulting computation times and outgoing network traffic will be acceptable in many use cases. However, the computation times and outgoing traffic for the management of large groups might preclude the use of mobile devices.
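To make the "one ciphertext, many users" point concrete, the toy sketch below illustrates only the access model behind ciphertext-policy ABE: a ciphertext carries an attribute policy, and any user whose attribute set satisfies it can decrypt. This is a non-cryptographic illustration with hypothetical attribute names and policy format, not the ABE scheme analyzed in the paper.

```python
# Toy illustration (not real ABE crypto): a ciphertext carries an
# attribute policy; any user whose attributes satisfy it can decrypt.

def satisfies(attrs: set, policy) -> bool:
    """Evaluate a policy tree of ('and'|'or', children) nodes or leaf attributes."""
    if isinstance(policy, str):          # leaf: a single required attribute
        return policy in attrs
    op, children = policy
    results = [satisfies(attrs, c) for c in children]
    return all(results) if op == 'and' else any(results)

# One "ciphertext" addresses every user whose attributes satisfy the policy.
policy = ('and', ['group:research', ('or', ['role:owner', 'role:member'])])
print(satisfies({'group:research', 'role:member'}, policy))  # True
print(satisfies({'group:sales', 'role:member'}, policy))     # False
```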
In light of the prevalent trend towards dense HetNets, the conventional coupled user association, where a mobile device uses the same base station (BS) for both uplink and downlink traffic, is being questioned, and the alternative, more general downlink/uplink decoupling paradigm is emerging. We focus on designing an effective user association mechanism for HetNets with downlink/uplink decoupling, a problem that has started to receive more attention. We use a combination of matching theory and stochastic geometry. We model the problem as a matching-with-contracts game by drawing an analogy with the hospital-doctor matching problem. In our model, we use stochastic geometry to derive a closed-form expression for the matching utility function. Our model captures the differing objectives of users in the uplink/downlink directions as well as the perspective of the BSs. Based on this game model, we present a matching algorithm for decoupled uplink/downlink user association that results in a stable allocation. Simulation results demonstrate that our approach provides close-to-optimal performance and significant gains over alternative approaches for user association in the decoupled context as well as over traditional coupled user association; these gains result from the holistic nature of our approach, which accounts for the additional cost of decoupling and the inter-dependence between uplink and downlink associations. Our work is also the first in the wireless communications domain to employ the matching-with-contracts approach.
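The following is a minimal deferred-acceptance sketch of the many-to-one user/BS matching idea: users propose down their preference lists and capacity-constrained BSs keep their best proposers until a stable allocation is reached. The utilities, capacities, and preferences are hypothetical stand-ins for the paper's stochastic-geometry-derived utilities, and this plain matching omits the contracts machinery.

```python
# Deferred acceptance for many-to-one user/BS association (sketch).

def deferred_acceptance(user_prefs, bs_rank, capacity):
    unmatched = list(user_prefs)              # users still proposing
    next_choice = {u: 0 for u in user_prefs}  # index of next BS to try
    matched = {b: [] for b in capacity}       # BS -> accepted users
    while unmatched:
        u = unmatched.pop()
        if next_choice[u] >= len(user_prefs[u]):
            continue                          # u has exhausted its list
        b = user_prefs[u][next_choice[u]]
        next_choice[u] += 1
        matched[b].append(u)
        matched[b].sort(key=lambda x: bs_rank[b][x])   # BS keeps its best users
        if len(matched[b]) > capacity[b]:
            unmatched.append(matched[b].pop())         # worst-ranked user is bumped
    return matched

user_prefs = {'u1': ['macro', 'small'], 'u2': ['small', 'macro'], 'u3': ['small', 'macro']}
bs_rank = {'macro': {'u1': 0, 'u2': 1, 'u3': 2}, 'small': {'u3': 0, 'u2': 1, 'u1': 2}}
print(deferred_acceptance(user_prefs, bs_rank, {'macro': 2, 'small': 1}))
```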
In order to integrate equipment from different vendors, wireless sensor networks need to become more standardized. Using IP as the basis of low-power radio networks, together with application-layer standards designed for this purpose, is one way forward. This research focuses on implementing and deploying a system using Contiki and 6LoWPAN over an 868 MHz radio network, together with CoAP as a standard application-layer protocol. A system was deployed in the Cairngorm mountains in Scotland as an environmental sensor network, measuring streams, temperature profiles in peat, and periglacial features. It was found that RPL provided an effective routing algorithm, and that the use of UDP packets with CoAP proved to be an energy-efficient application layer. This combination of technologies can be very effective in large-area sensor networks.
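As a concrete illustration of the CoAP layer, here is a minimal client-side request using the Python aiocoap library. The IPv6 address and resource path are hypothetical, and the deployed nodes in this work run Contiki firmware rather than Python; the sketch only shows what a standard CoAP read of such a sensor looks like.

```python
import asyncio
from aiocoap import Context, Message, GET

async def read_sensor():
    ctx = await Context.create_client_context()
    # Address and resource path are hypothetical examples.
    request = Message(code=GET, uri='coap://[fd00::212:4b00:0:1]/sensors/temp')
    response = await ctx.request(request).response
    print(response.code, response.payload.decode())

asyncio.run(read_sensor())
```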
Traditional sensitive data disclosure analysis faces two challenges: identifying sensitive data that is not generated by specific API calls, and reporting potential disclosures when the disclosed data is recognized as sensitive only after the sink operations. We address these issues by developing BidText, a novel static technique to detect sensitive data disclosures. BidText formulates the problem as a type system, in which variables are typed with the text labels that they encounter (e.g., during key-value pair operations). The type system features a novel bi-directional propagation technique that propagates variable label sets through both forward and backward data flow. A data disclosure is reported if a parameter at a sink point is typed with a sensitive text label. BidText is evaluated on 10,000 Android apps. It reports 4,406 apps that have sensitive data disclosures, with 4,263 apps having log-based disclosures and 1,688 having disclosures due to other sinks such as HTTP requests. Existing techniques can report only 64.0% of what BidText reports, and manual inspection shows that BidText's false positive rate is 10%.
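The fixpoint sketch below conveys the flavor of label propagation over a data-flow graph: text labels attached to some variables spread along def-use edges until nothing changes, and a sink parameter carrying a sensitive label triggers a report. For simplicity it propagates uniformly in both directions, which is a coarse approximation of BidText's bi-directional rules; the graph, variable names, and label set are hypothetical.

```python
# Fixpoint propagation of text labels over a data-flow graph (sketch).

def propagate(edges, labels):
    """edges: set of (src, dst) data-flow edges; labels: var -> set of text labels."""
    changed = True
    while changed:
        changed = False
        for src, dst in edges:
            for a, b in ((src, dst), (dst, src)):   # forward and backward pass
                new = labels.get(a, set()) - labels.get(b, set())
                if new:
                    labels.setdefault(b, set()).update(new)
                    changed = True
    return labels

edges = {('password_field', 'v1'), ('v1', 'sink_param')}
labels = {'password_field': {'password'}}
final = propagate(edges, labels)

SENSITIVE = {'password', 'imei', 'location'}
if final.get('sink_param', set()) & SENSITIVE:
    print('potential sensitive data disclosure at sink')
```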
This paper describes an approach for detecting the presence of domain name system (DNS) tunnels in network traffic. DNS tunneling is a common technique hackers use to establish command-and-control nodes and to exfiltrate data from networks. To generate training data sufficient to build models that detect DNS tunneling activity, a penetration-testing effort was employed. We extracted features from this data and trained random forest classifiers to distinguish normal DNS activity from tunneling activity. The classifiers successfully detected the presence of the tunnels we trained on, as well as four other types of tunnels that were not part of the training set.
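A hedged sketch of the classification step follows: a random forest over per-query features. The feature names and synthetic data here are hypothetical illustrations; the paper's actual features are extracted from its pen-test traffic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Columns (hypothetical): query-name length, subdomain entropy,
# TXT-record ratio, queries per second.
X_normal = rng.normal([20, 2.5, 0.01, 1.0], 0.5, size=(500, 4))
X_tunnel = rng.normal([60, 4.5, 0.60, 20.0], 0.5, size=(500, 4))
X = np.vstack([X_normal, X_tunnel])
y = np.array([0] * 500 + [1] * 500)   # 0 = normal DNS, 1 = tunneling

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print('holdout accuracy:', clf.score(X_te, y_te))
```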
Smart environments and security systems require automatic detection of human behaviors, including approaching or departing from an object. Existing human motion detection systems usually require people to carry special devices, which limits their applications. In this paper, we present a system called APID that detects arm reaching by analyzing backscatter communication signals from a passive RFID tag on the object. APID does not require people to carry any device. The idea is based on the influence of human movements on the vibration of backscattered tag signals. APID is compatible with commodity off-the-shelf devices and the EPCglobal Class-1 Generation-2 protocol. In APID, a commercial RFID reader continuously queries tags by emitting RF signals, and tags simply respond with their IDs. A USRP monitor passively analyzes the communication signals and reports approach and departure behaviors. We have implemented the APID system for both single-object and multi-object scenarios in both horizontal and vertical deployment modes. The experimental results show that APID achieves high detection accuracy.
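A simplified detection sketch follows: an arm moving near the tag perturbs the backscattered signal, so a rising or falling trend in short-window signal variance can suggest approach or departure. The thresholds, window size, and synthetic RSS trace are all hypothetical and much cruder than APID's actual signal processing.

```python
import numpy as np

def detect_motion(rss, win=50, thresh=2.0):
    """Label windows by comparing variance of adjacent backscatter-signal windows."""
    events = []
    for i in range(0, len(rss) - 2 * win, win):
        v_now = np.var(rss[i:i + win])
        v_next = np.var(rss[i + win:i + 2 * win])
        if v_next > thresh * max(v_now, 1e-9):
            events.append((i + win, 'approach'))      # disturbance growing
        elif v_now > thresh * max(v_next, 1e-9):
            events.append((i + win, 'departure'))     # disturbance fading
    return events

rss = np.concatenate([np.random.normal(0, 0.1, 200),    # idle channel
                      np.random.normal(0, 1.0, 200),    # arm near tag
                      np.random.normal(0, 0.1, 200)])   # idle again
print(detect_motion(rss))
```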
We present D-ForenRIA, a distributed forensic tool that automatically reconstructs user sessions in Rich Internet Applications (RIAs), using solely the full HTTP traces of the sessions as input. D-ForenRIA automatically recovers each browser state, reconstructs the DOMs, and re-creates screenshots of what was displayed to the user. The tool also recovers every action taken by the user in each state, including user-input data. Our application domain is security forensics, where sometimes months-old sessions must be quickly reconstructed for immediate inspection. We will demonstrate our tool on a series of RIAs, including a vulnerable banking application created by IBM Security for testing purposes. In that case study, the attacker visits the vulnerable web site and exploits several vulnerabilities (SQL injections, XSS, ...) to gain access to private information and to perform unauthorized transactions. D-ForenRIA can reconstruct the session, including screenshots of all pages seen by the attacker, the DOM of each page, the steps taken for the unauthorized login, and the inputs the attacker used for the SQL-injection attack. D-ForenRIA is made efficient by applying advanced reconstruction techniques and by using several browsers concurrently to speed up the reconstruction process. Although we developed D-ForenRIA in the context of security forensics, the tool can also be useful in other contexts, such as aiding RIA debugging and automated RIA scanning.
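The sketch below conveys only the concurrency idea at a very high level: candidate user actions are dispatched to several browser workers in parallel, and each worker checks whether replaying its action reproduces the recorded HTTP traffic. The function names (replay_in_browser, the action strings) are hypothetical placeholders, not the D-ForenRIA implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def replay_in_browser(browser_id, action):
    # Placeholder: drive one browser instance to execute `action` and
    # compare the HTTP requests it generates against the recorded trace.
    return browser_id, action, 'matched'

candidate_actions = ['click #login', 'submit form', 'click #transfer']
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(replay_in_browser, i, a)
               for i, a in enumerate(candidate_actions)]
    for f in futures:
        print(f.result())
```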
We present PrivInfer, an expressive framework for writing and verifying differentially private Bayesian machine learning algorithms. Programs in PrivInfer are written in a rich functional probabilistic programming language with constructs for performing Bayesian inference. Then, differential privacy of programs is established using a relational refinement type system, in which refinements on probability types are indexed by a metric on distributions. Our framework leverages recent developments in Bayesian inference, probabilistic programming languages, and relational refinement types. We demonstrate the expressiveness of PrivInfer by verifying privacy for several examples of private Bayesian inference.
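For intuition, here is a small sketch of one ingredient such a framework can verify: releasing the parameters of a Bayesian posterior under differential privacy by perturbing sufficient statistics with Laplace noise. PrivInfer itself is a verification framework, not this mechanism; the epsilon, prior, and data below are hypothetical.

```python
import numpy as np

def private_beta_posterior(data, alpha=1.0, beta=1.0, epsilon=1.0):
    """epsilon-DP release of a Beta-Bernoulli posterior's parameters."""
    ones = float(np.sum(data))
    zeros = float(len(data) - ones)
    # Changing one record moves each count by at most 1 (total L1
    # sensitivity 2), so Laplace(2/epsilon) noise per count gives
    # epsilon-DP for the released pair.
    noisy_ones = max(ones + np.random.laplace(scale=2 / epsilon), 0.0)
    noisy_zeros = max(zeros + np.random.laplace(scale=2 / epsilon), 0.0)
    return alpha + noisy_ones, beta + noisy_zeros

data = np.random.binomial(1, 0.3, size=100)
print(private_beta_posterior(data, epsilon=0.5))
```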
Digital artifacts on social media can challenge individuals during identity transitions, particularly those who prefer to delete, separate from, or hide data that are representative of a past identity. This work investigates concerns and practices reported by transgender people who transitioned while active on Facebook. We analyze open-ended survey responses from 283 participants, highlighting types of data considered problematic when separating oneself from a past identity, and challenges and strategies people engage in when managing personal data in a networked environment. We find that people shape their digital footprints in two ways: by editing the self-presentational data that is representative of a prior identity, and by managing the configuration of people who have access to that self-presentation. We outline the challenging interplay between shifting identities, social networks, and the data that suture them together. We apply these results to a discussion of the complexities of managing and forgetting the digital past.
Walrasian equilibrium prices have a remarkable property: they allow each buyer to purchase a bundle of goods that she finds the most desirable, while guaranteeing that the induced allocation over all buyers will globally maximize social welfare. However, this clean story has two caveats. First, the prices may induce indifferences. In fact, the minimal equilibrium prices necessarily induce indifferences. Accordingly, buyers may need to coordinate with one another to arrive at a socially optimal outcome; the prices alone are not sufficient to coordinate the market. Second, although natural procedures converge to Walrasian equilibrium prices on a fixed population, in practice buyers typically observe prices without participating in a price computation process. These prices cannot be perfect Walrasian equilibrium prices, but instead somehow reflect distributional information about the market. To better understand the performance of Walrasian prices when facing these two problems, we give two results. First, we propose a mild genericity condition on valuations under which the minimal Walrasian equilibrium prices induce allocations with low over-demand, no matter how the buyers break ties. In fact, under genericity the over-demand of any good can be bounded by 1, which is the best possible at the minimal prices. We demonstrate our results for unit-demand valuations and give an extension to matroid-based valuations (MBV), conjectured to be equivalent to gross-substitute valuations (GS). Second, we use techniques from learning theory to argue that the over-demand and welfare induced by a price vector converge to their expectations uniformly over the class of all price vectors, with sample complexity linear and quadratic, respectively, in the number of goods in the market. These results make no assumption on the form of the valuation functions. Together, these two results imply that under a mild genericity condition, the exact Walrasian equilibrium prices computed in a market are guaranteed to induce both low over-demand and high welfare when used in a new market where agents are sampled independently from the same distribution, whenever the number of agents is larger than the number of commodities in the market.
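A toy unit-demand example makes the tie-breaking issue concrete: at a given price vector, each buyer may demand any surplus-maximizing good, and over-demand is measured in the worst case over tie-breaking choices. The valuations and prices below are hypothetical; the enumeration is exponential and meant only to illustrate the definition.

```python
from itertools import product

def demand_sets(valuations, prices):
    """For each unit-demand buyer, the goods maximizing v(g) - p(g) (if non-negative)."""
    sets = []
    for v in valuations:
        surplus = {g: v[g] - prices[g] for g in prices}
        best = max(surplus.values())
        sets.append({g for g, s in surplus.items() if s == best and s >= 0})
    return sets

def worst_case_over_demand(valuations, prices, supply=1):
    sets = demand_sets(valuations, prices)
    worst = 0
    for choice in product(*sets):               # every tie-breaking pattern
        for g in prices:
            worst = max(worst, choice.count(g) - supply)
    return worst

valuations = [{'a': 2, 'b': 2}, {'a': 2, 'b': 1}]   # buyer 1 is indifferent at these prices
prices = {'a': 1, 'b': 1}
print(worst_case_over_demand(valuations, prices))   # 1: both buyers may grab 'a'
```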
We present a study examining the impact of physical and cognitive challenge on reported immersion for a mixed reality game called Beach Pong. Contrary to prior findings for desktop games, we find significantly higher reported immersion among players who engage physically, regardless of their actual game performance. Building a mental map of the real, virtual, and sensed world is a cognitive challenge for novices, and this appears to influence immersion: in our study, participants who actively attended to both physical and virtual game elements reported higher immersion levels than those who attended mainly or exclusively to virtual elements. Without an integrated mental map, in-game cognitive challenges were ignored or offloaded to motor response when possible in order to achieve the minimum required goals of the game. From our results we propose a model of immersion in mixed reality gaming that is useful for designers and researchers in this space.
Wireless sensor networks (WSNs) play a vital role in collecting data about natural and built environments. WSNs have attractive advantages such as low cost, low maintenance, and flexible deployment. They have been used for many different applications, such as military operations on a battlefield, environmental monitoring, and various functions in the health sector. To ensure their functionality, especially in malicious environments, security mechanisms are essential. Internal attacks in particular have gained prominence and pose some of the most challenging threats to all WSNs. Although a number of works discuss WSNs under internal attack, the problem has received little attention. For example, conventional cryptographic techniques do not provide adequate protection against internal attacks, which are caused by abnormal behaviour of legitimate nodes in the network. In this paper, we propose an effective algorithm that detects internal attacks using multiple criteria in real time. The protection is based on combining multiple pieces of evidence collected from the nodes of a network under internal attack. The underlying decision theory is carefully discussed and is based on Dempster-Shafer Theory (DST). This algorithm also helps verify that a deployed network behaves as designed. Its advantage is not only its real-time performance but also that it requires no prior knowledge of which nodes are normal or malicious, while achieving very high average accuracy, close to 100%. It can also serve as a maintenance tool for deployed WSNs.
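The sketch below shows Dempster's rule of combination over the two-element frame {normal, malicious}, fusing evidence from two observed criteria. The mass values and the two evidence sources (packet-drop rate, signal anomaly) are hypothetical illustrations of the DST machinery, not the paper's actual criteria.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions whose focal sets are frozensets (Dempster's rule)."""
    combined, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + a * b
        else:
            conflict += a * b                      # mass assigned to the empty set
    return {A: v / (1 - conflict) for A, v in combined.items()}

N, M = frozenset({'normal'}), frozenset({'malicious'})
theta = N | M                                      # total ignorance
m_rate   = {M: 0.6, N: 0.1, theta: 0.3}            # dropped-packet evidence
m_signal = {M: 0.5, N: 0.2, theta: 0.3}            # signal-anomaly evidence
print(dempster_combine(m_rate, m_signal))          # belief in 'malicious' rises to ~0.76
```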
This paper introduces the design and implementation of a security scheme for the Internet of Things (IoT) based on ECQV implicit certificates and the Datagram Transport Layer Security (DTLS) protocol. In this proposed security scheme, elliptic-curve-based ECQV implicit certificates play a key role, allowing mutual authentication and key establishment between two resource-constrained IoT devices. We present how IoT devices obtain ECQV implicit certificates and use them for authenticated key exchange in DTLS. An evaluation of the execution time of the implementation is also conducted to assess the efficiency of the solution.
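For readers unfamiliar with ECQV, here is a hedged sketch of implicit-certificate key reconstruction using the pure-Python ecdsa package for the point arithmetic. The certificate encoding is a toy placeholder (the real SEC 4 standard fixes the encoding and hash details), and the identity string is hypothetical; the sketch only demonstrates the algebra that makes the reconstructed public key match the derived private key.

```python
import secrets
from hashlib import sha256
from ecdsa import NIST256p

G, n = NIST256p.generator, NIST256p.order
Hn = lambda data: int.from_bytes(sha256(data).digest(), 'big') % n

# CA key pair
d_ca = secrets.randbelow(n - 1) + 1
Q_ca = d_ca * G

# Device: ephemeral key pair; R_u is sent to the CA with the device identity
k_u = secrets.randbelow(n - 1) + 1
R_u = k_u * G

# CA: public-key reconstruction point and private-key reconstruction data
k = secrets.randbelow(n - 1) + 1
P_u = R_u + k * G
cert = b'device-42|' + P_u.x().to_bytes(32, 'big')   # toy certificate encoding
e = Hn(cert)
r = (e * k + d_ca) % n                               # returned privately to the device

# Device derives its key pair; any peer recomputes Q_u from cert + Q_ca alone
d_u = (e * k_u + r) % n
Q_u = e * P_u + Q_ca
assert d_u * G == Q_u                                # keys are implicitly certified
print('reconstructed public key matches derived private key')
```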
The microelectronics industry seeks screening tools that can be used to verify the origin of integrated circuits (ICs) and track them throughout their lifecycle. Embedded circuits that measure the process variation of an IC are well known. This paper adds to previous work using these circuits to study manufacturer characteristics of final-product ICs, particularly for the purpose of developing and verifying a signature for a microelectronics manufacturing facility (fab). We present the design, measurements, and analysis of 159 silicon ICs built as a proof of concept for this purpose. 80 copies of our proof-of-concept IC were built at one fab, and 80 more copies were built across two lots at a second fab. Using these ICs, our prototype circuits allowed us to distinguish the two fabs with up to 98.7% accuracy and to distinguish the two lots from the second fab with up to 98.8% accuracy.
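The sketch below illustrates the classification idea only: treat each chip's process-variation measurements (e.g., ring-oscillator frequencies) as a feature vector and learn a boundary that separates fabs. The measurement model, sensor count, and resulting accuracy here are hypothetical, not the paper's silicon data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Hypothetical per-chip features: 16 on-chip process-variation sensors,
# with a small systematic offset between the two fabs.
fab_a = rng.normal(loc=1.00, scale=0.03, size=(80, 16))   # 80 chips from fab A
fab_b = rng.normal(loc=1.05, scale=0.03, size=(80, 16))   # 80 chips from fab B
X = np.vstack([fab_a, fab_b])
y = np.array([0] * 80 + [1] * 80)

scores = cross_val_score(SVC(kernel='linear'), X, y, cv=5)
print('cross-validated fab-identification accuracy:', scores.mean())
```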
The output of 3D volume segmentation is crucial to a wide range of endeavors. Producing accurate segmentations often proves to be both inefficient and challenging, in part due to limited imaging data quality (contrast and resolution), and because of ambiguity in the data that can only be resolved with higher-level knowledge of the structure and the context in which it resides. Automatic and semi-automatic approaches are improving, but in many cases still fail or require substantial manual clean-up or intervention. Expert manual segmentation and review is therefore still the gold standard for many applications. Unfortunately, existing tools (both custom-made and commercial) are often designed around the underlying algorithm, not around the best method for expressing higher-level intention. Our goal is to analyze manual (or semi-automatic) segmentation to gain a better understanding of both low-level perceptual tasks and actions and high-level decision making. This understanding can be used to produce segmentation tools that are more accurate, efficient, and easier to use. Questioning or observation alone is insufficient to capture this information, so we utilize a hybrid capture protocol that blends observation, surveys, and eye tracking. We then developed and validated data coding schemes capable of discerning low-level actions and overall task structures.
Embedded virtualization has emerged as a valuable way to reduce costs, improve software quality, and decrease design time. Additionally, virtualization can strengthen the overall system's security from several perspectives. One is security through separation, where the hypervisor ensures that one domain does not compromise the execution of other domains. At the same time, advances in the development of IoT applications have opened discussions about the security flaws introduced by IoT devices. In a few years, billions of these devices will be connected to the cloud, exchanging information. This is an opportunity for hackers to exploit their vulnerabilities, endangering applications connected to such devices. At this point, it is natural to consider virtualization as a possible approach to IoT security. In this paper we discuss how embedded virtualization could be deployed on IoT devices as a sound solution for security.