Biblio
Bitcoin, a decentralized cryptographic currency that has experienced proliferating popularity over the past few years, is the common denominator in a wide variety of cybercrime. We perform a measurement analysis of CryptoLocker, a family of ransomware that encrypts a victim's files until a ransom is paid, within the Bitcoin ecosystem from September 5, 2013 through January 31, 2014. Using information collected from online fora, such as reddit and BitcoinTalk, as a starting point, we generate a cluster of 968 Bitcoin addresses belonging to CryptoLocker. We provide a lower bound for CryptoLocker's economy in Bitcoin and identify 795 ransom payments totalling 1,128.40 BTC (\$310,472.38), but show that the proceeds could have been worth upwards of \$1.1 million at peak valuation. By analyzing ransom payment timestamps both longitudinally across CryptoLocker's operating period and transversely across times of day, we detect changes in distributions and form conjectures on CryptoLocker that corroborate information from previous efforts. Additionally, we construct a network topology to detail CryptoLocker's financial infrastructure and obtain auxiliary information on the CryptoLocker operation. Most notably, we find evidence that suggests connections to popular Bitcoin services, such as Bitcoin Fog and BTC-e, and subtle links to other cybercrimes surrounding Bitcoin, such as the Sheep Marketplace scam of 2013. We use our study to underscore the value of measurement analyses and threat intelligence in understanding the erratic cybercrime landscape.
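The abstract does not spell out how the 968-address cluster was generated; a standard technique in Bitcoin measurement studies is the multi-input ("co-spending") heuristic, which assumes all inputs of one transaction are controlled by the same entity. The Python sketch below illustrates that heuristic with union-find over hypothetical transaction records; it is an illustrative reconstruction, not the authors' pipeline.

```python
# Sketch of the multi-input ("co-spending") heuristic often used to
# cluster Bitcoin addresses in measurement studies. Transaction records
# here are hypothetical; real studies parse them from the blockchain.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def cluster_addresses(transactions):
    """Each transaction is a dict with an 'inputs' list of addresses.
    All inputs of one transaction are assumed to belong to one entity."""
    uf = UnionFind()
    for tx in transactions:
        inputs = tx["inputs"]
        for addr in inputs:
            uf.find(addr)                 # register singletons too
        for addr in inputs[1:]:
            uf.union(inputs[0], addr)
    clusters = {}
    for addr in uf.parent:
        clusters.setdefault(uf.find(addr), set()).add(addr)
    return list(clusters.values())

# Two transactions sharing address "B" merge into one cluster:
txs = [{"inputs": ["A", "B"]}, {"inputs": ["B", "C"]}, {"inputs": ["D"]}]
print(cluster_addresses(txs))  # [{'A', 'B', 'C'}, {'D'}]
```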
The emphasis on exhaustive passive capture of images using wearable cameras such as Autographer, a practice often known as lifelogging, has brought to the foreground the challenge of preserving privacy, in addition to that of presenting the vast number of images in a meaningful way. In this paper, we present a user study to understand the importance of an array of factors that are likely to influence lifeloggers to share their lifelog images in their online circles. The findings are a step forward in the emerging area at the intersection of HCI and privacy, helping to explore design directions for privacy-mediating techniques in lifelogging applications.
Machine-based tracking is a type of behavior that extracts information about a user's machine, which can then be used for fingerprinting, tracking, or profiling purposes. In this paper, we focus on JavaScript-oriented machine-based tracking, as JavaScript is widely accessible in all browsers. We find that coarse features related to JavaScript access, cookie access, and URL length and subdomain information can perform well in creating a classifier that identifies these machine-based trackers with 97.7% accuracy. We then apply the classifier to real-world datasets based on 30-minute crawls of different types of websites, including websites that target children and websites that target a popular audience, and find that 85%+ of all websites utilize machine-based tracking, even when they target a regulated group (children) as their primary audience.
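The abstract names the coarse features (JavaScript access, cookie access, URL length and subdomain information) but not the feature encoding or model. The sketch below shows one plausible way to assemble such a classifier with scikit-learn; the record fields and the choice of a random forest are assumptions made for illustration, not the paper's pipeline.

```python
# Illustrative tracker classifier over the coarse features named in the
# abstract. Field names (js_api_calls, cookie_accesses) are hypothetical.
from urllib.parse import urlparse
from sklearn.ensemble import RandomForestClassifier

def featurize(record):
    """record: per-script observation with raw counters and a full URL."""
    host = urlparse(record["url"]).hostname or ""
    return [
        record["js_api_calls"],     # e.g. navigator/screen property reads
        record["cookie_accesses"],  # document.cookie reads and writes
        len(record["url"]),         # URL length
        host.count("."),            # proxy for subdomain depth
    ]

def train(records, labels):
    X = [featurize(r) for r in records]
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, labels)  # labels: 1 = machine-based tracker, 0 = benign
    return clf

# Usage: clf = train(records, labels); clf.predict([featurize(new_record)])
```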
Internet of Things (IoT) systems are designed and developed either as standalone applications from the ground up or with the help of IoT middleware platforms. They are designed to support different kinds of scenarios, such as smart homes and smart cities. Thus far, privacy concerns have not been explicitly considered by IoT applications and middleware platforms, partly due to the lack of systematic methods for designing privacy that can guide the software development process in IoT. In this paper, we propose a set of guidelines, a privacy-by-design framework, that can be used to assess the privacy capabilities and gaps of existing IoT applications as well as middleware platforms. We have evaluated two open source IoT middleware platforms, namely OpenIoT and Eclipse SmartHome, to demonstrate how our framework can be used in this way.
Mobile devices are part of our lives; we store a lot of private information on them and use services that handle sensitive information (e.g., mobile health apps). Whenever users install an application on their smartphones, they have to decide whether to trust the application and share private and sensitive data with at least the developer-owned services. But almost all modern apps not only transmit data to developer-owned servers but also send information to advertising, analytics, and tracking partners. This paper presents an approach for a "privacy proxy" that filters unwanted data traffic to third-party services without installing additional applications on the smartphone. It is based on a firewall using a blacklist of tracking and analytics networks that is automatically updated on a daily basis. A proof of concept has been implemented with open source components on a Raspberry Pi.
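As a rough illustration of the daily blacklist refresh such a privacy proxy could run (for instance from cron on the Raspberry Pi), the Python sketch below fetches a domain blocklist and emits a dnsmasq configuration that blackholes the listed trackers. The blocklist URL and file paths are placeholders, not details from the paper, which may well use a different filtering layer.

```python
# Hypothetical daily blocklist refresh for a blacklist-based privacy
# proxy. Run with root privileges so the dnsmasq config is writable.
import urllib.request

BLOCKLIST_URL = "https://example.org/tracker-domains.txt"  # placeholder
DNSMASQ_CONF = "/etc/dnsmasq.d/blocklist.conf"

def refresh_blocklist():
    with urllib.request.urlopen(BLOCKLIST_URL) as resp:
        domains = [line.strip()
                   for line in resp.read().decode().splitlines()
                   if line.strip() and not line.startswith("#")]
    # Resolve every blocked tracker domain to 0.0.0.0 so apps on the
    # smartphone silently fail to reach it.
    with open(DNSMASQ_CONF, "w") as f:
        for d in domains:
            f.write(f"address=/{d}/0.0.0.0\n")

if __name__ == "__main__":
    refresh_blocklist()
    # Then reload the resolver, e.g.: systemctl restart dnsmasq
```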
Lifelogging enables a user, the lifelogger, to passively capture multimodal records from a first-person perspective and ultimately create a visual diary encompassing every possible aspect of her life in unprecedented detail. In recent years it has gained popularity among different groups of users. However, the potentially ubiquitous presence of lifelogging devices, especially in private spheres, has raised serious concerns with respect to personal privacy. Different practitioners and active researchers in the field of lifelogging have analysed the issue of privacy in lifelogging and proposed different mitigation strategies. However, none of the existing works has considered a well-defined privacy threat model in the domain of lifelogging. Without a proper threat model, any analysis and discussion of privacy threats in lifelogging remains incomplete. In this paper we aim to fill this gap by introducing a first privacy threat model that identifies several threats with respect to lifelogging. We believe the introduced threat model will be an essential tool and will act as the basis for further research within this domain.
The widespread adoption of Location-Based Services (LBSs) has come with controversy about privacy. While leveraging location information improves services through geo-contextualization, it raises privacy concerns, as new knowledge can be inferred from location records, such as home and work places, habits, or religious beliefs. To overcome this problem, several Location Privacy Protection Mechanisms (LPPMs) have been proposed in recent years. However, every mechanism comes with its own configuration parameters that directly impact both the privacy guarantees and the resulting utility of protected data. In this context, it can be difficult for a non-expert system designer to choose appropriate configuration parameters for the expected privacy and utility. In this paper, we present a framework enabling the easy configuration of LPPMs. To achieve this, our framework performs an offline, in-depth automated analysis of LPPMs to provide the formal relationship between their configuration parameters and both privacy and utility metrics. This framework is modular: by using different metrics, a system designer is able to fine-tune her LPPM according to her expected privacy and utility guarantees (i.e., the guarantee itself and the level of this guarantee). To illustrate the capability of our framework, we analyse Geo-Indistinguishability (a well-known differentially private LPPM) and provide the formal relationship between its ε configuration parameter and two privacy and utility metrics.
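For context on the ε parameter analyzed above: the standard Geo-Indistinguishability mechanism (Andrés et al., CCS 2013) perturbs a location with planar Laplace noise whose magnitude scales as 1/ε, which is exactly the privacy/utility trade-off the framework quantifies. A minimal Python sketch of that textbook mechanism follows; it illustrates the LPPM being analyzed, not the paper's framework itself.

```python
# Planar Laplace mechanism for Geo-Indistinguishability: smaller epsilon
# means more noise (more privacy, less utility).
import math, random
from scipy.special import lambertw

def planar_laplace(lat, lon, epsilon):
    """Perturb a location; epsilon is in 1/meters (flat-earth approx.)."""
    theta = random.uniform(0, 2 * math.pi)        # uniform direction
    p = random.random()
    # Inverse CDF of the radial distribution, via the Lambert W function
    # (branch k=-1), as in Andres et al.
    r = -(lambertw((p - 1) / math.e, k=-1).real + 1) / epsilon
    # Convert metric offsets to degrees (rough equirectangular approx.).
    dlat = (r * math.sin(theta)) / 111_320
    dlon = (r * math.cos(theta)) / (111_320 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

# Usage: planar_laplace(45.76, 4.84, epsilon=0.01)
# Expected displacement is 2/epsilon, i.e. about 200 m here.
```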
The theoretical construct of a Trusted Third Party (TTP) has the potential to solve many security and privacy challenges. In particular, a TTP is an ideal way to achieve secure multiparty computation—a privacy-enhancing technique in which mutually distrusting participants jointly compute a function over their private inputs without revealing these inputs. Although there exist cryptographic protocols to achieve this, their performance often limits them to the two-party case, or to a small number of participants. However, many real-world applications involve thousands or tens of thousands of participants. Examples of this type of many-party application include privacy-preserving energy metering, location-based services, and mobile network roaming. Challenging the notion that a trustworthy TTP does not exist, recent research has shown how trusted hardware and remote attestation can be used to establish a sufficient level of assurance in a real system such that it can serve as a trustworthy remote entity (TRE). We explore the use of Intel SGX, the most recent and arguably most promising trusted hardware technology, as the basis for a TRE for many-party applications. Using privacy-preserving energy metering as a case study, we design and implement a prototype TRE using SGX, and compare its performance to a previous system based on the Trusted Platform Module (TPM). Our results show that even without specialized optimizations, SGX provides comparable performance to the optimized TPM system, and therefore has significant potential for large-scale many-party applications.
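Real SGX enclave code depends on the SDK and attestation machinery, so the following is only a conceptual Python sketch of the computation the TRE isolates in the metering case study: fine-grained readings enter the trusted boundary and only the computed bill leaves it. The tariff schema and numbers are illustrative assumptions, not taken from the paper.

```python
# Conceptual sketch of the trusted computation in privacy-preserving
# energy metering. In the real system this runs inside an SGX enclave
# after remote attestation; here it is plain Python for illustration.
def compute_bill(readings, tariff):
    """readings: list of (hour, kwh); tariff: hour -> price per kWh.
    Runs inside the TRE: the utility sees only the returned total,
    never the fine-grained consumption profile."""
    return sum(kwh * tariff(hour) for hour, kwh in readings)

peak_tariff = lambda hour: 0.30 if 17 <= hour <= 20 else 0.12  # EUR/kWh
bill = compute_bill([(8, 1.2), (18, 2.5), (23, 0.7)], peak_tariff)
print(f"{bill:.2f} EUR")  # only this value crosses the trust boundary
```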
In content-based security, encrypted content as well as wrapped access keys are made freely available by an Information Centric Network: only those clients that are able to unwrap the encryption key can access the protected content. In this paper we extend this model to computation chains, where derived data (e.g., produced by a Named Function Network) also has to comply with the content-based security approach. A central problem to solve is the synchronized, on-demand publishing of encrypted results and wrapped keys, as well as defining the set of consumers authorized to access the derived data. We introduce "content-attendant policies" and report on a running prototype that demonstrates how to enforce data-owner-defined access control policies despite fully decentralized and arbitrarily long computation chains.
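The content-based security pattern described above is commonly realized as hybrid encryption: content is encrypted once under a symmetric content key, and that key is wrapped per authorized consumer. The Python sketch below shows this generic pattern with the pyca/cryptography library; the policy layer that decides who receives a wrapped key (the paper's "content-attendant policies") is not modeled here.

```python
# Generic "encrypt once, wrap the key per consumer" pattern underlying
# content-based security. Illustrative, not the paper's prototype.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def publish(content: bytes, consumer_public_keys):
    """Encrypt content once; wrap the key for each authorized consumer."""
    cek = AESGCM.generate_key(bit_length=256)  # content encryption key
    nonce = os.urandom(12)
    ciphertext = AESGCM(cek).encrypt(nonce, content, None)
    wrapped_keys = [pk.encrypt(cek, OAEP) for pk in consumer_public_keys]
    return nonce, ciphertext, wrapped_keys

def consume(nonce, ciphertext, wrapped_key, private_key):
    cek = private_key.decrypt(wrapped_key, OAEP)
    return AESGCM(cek).decrypt(nonce, ciphertext, None)

# Usage with one authorized consumer:
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
n, ct, wk = publish(b"derived result", [key.public_key()])
assert consume(n, ct, wk[0], key) == b"derived result"
```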
We initiate the study of a quantity that we call coordination complexity. In a distributed optimization problem, the information defining a problem instance is distributed among n parties, each of whom must choose an action; jointly, these actions form a solution to the optimization problem. The coordination complexity represents the minimal amount of information that a centralized coordinator, who has full knowledge of the problem instance, needs to broadcast in order to coordinate the n parties to play a nearly optimal solution. We show that upper bounds on the coordination complexity of a problem imply the existence of good jointly differentially private algorithms for solving that problem, which in turn are known to upper-bound the price of anarchy in certain games with dynamically changing populations. We show several results. We fully characterize the coordination complexity for the problem of computing a many-to-one matching in a bipartite graph; our upper bound in fact extends much more generally to the problem of solving a linearly separable convex program. We also give a different upper bound technique, which we use to bound the coordination complexity of coordinating a Nash equilibrium in a routing game and of computing a stable matching.
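A hedged formalization of the quantity defined in prose above; the notation is ours and may differ from the paper's.

```latex
% The coordinator sees the full instance \pi = (\pi_1, \dots, \pi_n)
% and broadcasts a message \sigma(\pi); party i then chooses its action
% as a_i = g_i(\pi_i, \sigma(\pi)). The coordination complexity (for
% approximation quality \alpha, over instance class \Pi) is the fewest
% broadcast bits that suffice in the worst case:
\[
  \mathrm{CC}_\alpha(\Pi) \;=\;
  \min_{\sigma,\, g_1, \dots, g_n}\; \max_{\pi \in \Pi}\; |\sigma(\pi)|
\]
% subject to (g_1(\pi_1,\sigma(\pi)), \dots, g_n(\pi_n,\sigma(\pi)))
% being an \alpha-approximately optimal solution to \pi.
```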
Anonymous communication systems are vulnerable to long-term passive "intersection attacks": because not all users of an anonymous communication system will be online at the same time, the system leaks some information about who is talking to whom. A global passive adversary observing all communications can learn the set of potential recipients of a message with increasing confidence over time. Nearly all deployed anonymous communication tools offer no protection against such attacks. In this work, we introduce TASP, a protocol used by an anonymous communication system that mitigates intersection attacks by intelligently grouping clients into anonymity sets. We find that with a bandwidth overhead of just 8% we can dramatically extend the time necessary to perform a successful intersection attack.
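The abstract does not detail how TASP forms its anonymity sets; one plausible strategy, sketched below, greedily groups clients whose online schedules overlap most, so that the set of online users changes little within a group and each observation round gives the adversary little new information. This is an illustration of the idea, not TASP's actual algorithm.

```python
# Hypothetical grouping of clients into anonymity sets by schedule
# overlap. Schedules are 0/1 vectors over time slots (1 = online).
def greedy_anonymity_sets(schedules, set_size):
    """schedules: {client_id: tuple of 0/1 per time slot}."""
    def overlap(a, b):
        return sum(x & y for x, y in zip(schedules[a], schedules[b]))

    remaining = set(schedules)
    groups = []
    while remaining:
        # Seed each group with the most frequently online client left.
        seed = max(remaining, key=lambda c: sum(schedules[c]))
        remaining.discard(seed)
        peers = sorted(remaining, key=lambda c: overlap(seed, c),
                       reverse=True)
        group = [seed] + peers[:set_size - 1]
        remaining.difference_update(group)
        groups.append(group)
    return groups

sched = {"a": (1, 1, 0, 1), "b": (1, 1, 0, 0),
         "c": (0, 0, 1, 1), "d": (0, 1, 1, 0)}
print(greedy_anonymity_sets(sched, 2))  # e.g. [['a', 'b'], ['c', 'd']]
```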
Randomness extractors and error-correcting codes are fundamental objects in computer science. Recently, there have been several natural generalizations of these objects in the context and study of tamper-resilient cryptography: seeded non-malleable extractors, introduced by Dodis and Wichs; seedless non-malleable extractors, introduced by Cheraghchi and Guruswami; and non-malleable codes, introduced by Dziembowski, Pietrzak and Wichs. Besides being interesting in their own right, they have important applications in cryptography (e.g., privacy amplification with an active adversary and explicit non-malleable codes) and often have unexpected connections to their non-tampered analogues. However, the known constructions are far behind their non-tampered counterparts. Indeed, the best known seeded non-malleable extractor requires min-entropy rate at least 0.49, while explicit constructions of non-malleable two-source extractors were not known even when both sources have full min-entropy; this was left as an open problem by Cheraghchi and Guruswami. In this paper we make progress towards solving the above problems and other related generalizations. Our contributions are as follows. (1) We construct an explicit seeded non-malleable extractor for polylogarithmic min-entropy. This dramatically improves all previous results and gives a simpler 2-round privacy amplification protocol with optimal entropy loss, matching the best known result. In fact, we construct more general seeded non-malleable extractors (that can handle multiple adversaries), which were used in the recent construction of explicit two-source extractors for polylogarithmic min-entropy. (2) We construct the first explicit non-malleable two-source extractor for almost full min-entropy, thus resolving the open question posed by Cheraghchi and Guruswami. (3) We motivate and initiate the study of two natural generalizations of seedless non-malleable extractors and non-malleable codes, where the sources or the codeword may be tampered with many times. By using the connection found by Cheraghchi and Guruswami and providing efficient sampling algorithms, we obtain the first explicit non-malleable codes with tampering degree t, with near-optimal rate and error. We call these stronger notions one-many and many-many non-malleable codes. This provides a stronger information-theoretic analogue of a primitive known as continuous non-malleable codes. The basic technique used in all of our constructions can be seen as inspired, in part, by techniques previously used to construct cryptographic non-malleable commitments.
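For reference, the seeded non-malleable extractor notion due to Dodis and Wichs that the abstract builds on can be stated as follows (parameters simplified; stated from memory rather than quoted from the paper).

```latex
% A function nmExt : \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m is a
% seeded non-malleable extractor for min-entropy k with error \epsilon
% if, for every source X with H_\infty(X) \ge k and every tampering
% function A with no fixed points (A(y) \ne y for all y),
\[
  \bigl(\mathrm{nmExt}(X, Y),\ \mathrm{nmExt}(X, A(Y)),\ Y\bigr)
  \;\approx_{\epsilon}\;
  \bigl(U_m,\ \mathrm{nmExt}(X, A(Y)),\ Y\bigr),
\]
% where Y = U_d is the uniform seed and U_m is uniform and independent:
% the output looks random even given the output on a tampered seed.
```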
Performance analysis of newly designed solutions is essential for efficient Internet of Things and Wireless Sensor Network (WSN) deployments. Simulation and experimental evaluation are vital steps in the development process of protocols and applications for wireless technologies. Nowadays, new solutions can be tested at very large scale on both simulators and testbeds. In this paper, we first discuss the importance of repeatable experimental setups for reproducible performance evaluation results. To this aim, we present FIT IoT-LAB, a very large-scale experimental testbed consisting of 2769 low-power wireless devices and 127 mobile robots. We then demonstrate, through a number of experiments conducted on the FIT IoT-LAB testbed, how to conduct meaningful experiments under real-world conditions. Finally, we discuss to what extent results obtained from such experiments can be considered scientific, i.e., reproducible by the community.
Security for the Constrained Application Protocol (CoAP), used instead of HTTP in the Internet of Things (IoT), is achieved using DTLS, which relies on the Internet Key Exchange (IKE) protocol for key exchange and management. In this work, a novel key exchange and authentication protocol is proposed. The CLIKEv2 protocol is a certificateless and lightweight version of the existing protocol. The protocol design is tested with the formal protocol verification tool Scyther, which identifies no known attacks on the proposed protocol. Compared to the existing IKE protocol, CLIKEv2 reduces computation time and key sizes, and ultimately reduces energy consumption.
Induction is a successful approach for verification of hardware and software systems. A common practice is to model a system using logical formulas, and then use a decision procedure to verify that some logical formula is an inductive safety invariant for the system. A key ingredient in this approach is coming up with the inductive invariant, a task known as invariant inference. This is a major difficulty, and it is often left to humans or addressed by sound but incomplete abstract interpretation. This paper is motivated by the problem of inductive invariants in shape analysis and in distributed protocols. We approach the general problem of inferring first-order inductive invariants by restricting the language L of candidate invariants. Notice that the problem of invariant inference in a restricted language L differs from the safety problem, since a system may be safe and still not have any inductive invariant in L that proves safety. Clearly, if L is finite (and if testing an inductive invariant is decidable), then inferring invariants in L is decidable. This paper presents some interesting cases where inferring inductive invariants in L is decidable even when L is an infinite language of universal formulas. Decidability is obtained by restricting L and defining a suitable well-quasi-order on the state space. We also present undecidability results showing that our restrictions are necessary. We further present a framework for systematically constructing infinite languages while keeping the invariant inference problem decidable. We illustrate our approach by showing the decidability of inferring invariants for programs manipulating linked lists and for distributed protocols.
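As a one-formula illustration (ours, not drawn from the paper) of the kind of candidate a restricted universal language L admits: uniqueness of the leader in a leader-election protocol is an inductive invariant expressible without existential quantifiers.

```latex
\[
  \forall x,\, y.\;\; \mathit{leader}(x) \land \mathit{leader}(y)
  \;\rightarrow\; x = y
\]
```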
Sego is a hypervisor-based system that gives strong privacy and integrity guarantees to trusted applications, even when the guest operating system is compromised or hostile. Sego verifies operating system services, like the file system, instead of replacing them. By associating trusted metadata with user data across all system devices, Sego verifies system services more efficiently than previous systems, especially services that depend on data contents. We extensively evaluate Sego's performance on real workloads and implement a kernel fault injector to validate Sego's file system-agnostic crash consistency and recovery protocol.
Parallel and distributed systems rely on intricate protocols to manage shared resources and synchronize, i.e., to manage how many processes are in a particular state. Effective verification of such systems requires universal quantification to reason about parameterized state, together with cardinality reasoning to track sets of processes, messages, and failures, in order to adequately capture protocol logic. In this paper we present Tool, an automatic invariant synthesis method that integrates cardinality-based reasoning and universal quantification. The resulting increase in expressiveness allows Tool to verify, for the first time, a representative collection of intricate parameterized protocols.
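To make the combination concrete, here is an illustrative invariant (our example, not taken from the paper) that mixes a cardinality constraint with a universally quantified one, as might arise for a mutual-exclusion-style protocol.

```latex
\[
  \bigl|\{\, p \;\mid\; \mathit{state}(p) = \mathsf{critical} \,\}\bigr|
  \;\le\; 1
  \quad\land\quad
  \forall p.\;\; \mathit{state}(p) = \mathsf{critical}
  \;\rightarrow\; \mathit{holdsLock}(p)
\]
```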
Cryptographic protocols and algorithms are the backbone of the digital era in which we live. Unfortunately, much confidential information and many credentials have been compromised because required security services were ignored. As a result, skilled attackers have mounted various attacks, causing security issues such as financial loss, violations of personal privacy, and threats to democratic processes. This paper provides a secure design and architecture for cryptographic protocols and expedites authentication in cryptographic systems. Designing and developing a secure cryptographic system is like a game in which the designer or developer tries to maintain security while the attacker tries to penetrate the security features to mount a successful attack.
Yamata-no-Orochi is an authentication and authorization infrastructure across multiple service domains that provides Internet services with unified authentication and authorization mechanisms. In this paper, Yamata-no-Orochi is incorporated into a video distribution system to verify its general versatility as a multi-domain authentication and authorization infrastructure for Internet services. We also reduce the authorization time of Yamata-no-Orochi to fulfill the processing-time constraints of the video distribution system. The evaluation results show that all the authentication and authorization processes work correctly and that the performance of Yamata-no-Orochi is practical for the video distribution system.