Biblio
With the boom of ride-sharing platforms, there has been a growing debate on ride-sharing regulations. In particular, allegations of rape against ride-sharing drivers have put sexual assault at the center of this debate. However, there is no systematic, society-wide evidence regarding ride-sharing and sexual assault. Building on a theory of crime victimization, this study examines the effect of ride-sharing on sexual assault incidents using comprehensive data on Uber transactions and crime incidents in New York City over the period from January to March 2015. Our findings demonstrate that Uber availability is negatively associated with the likelihood of rape, after controlling for endogeneity. Moreover, the deterrent effect of Uber on sexual assault is entirely driven by taxi-sparse areas, namely those outside Manhattan. This study sheds light on the potential of ride-sharing platforms and the sharing economy to improve social welfare beyond economic gains.
Content-based routing (CBR) is a powerful model that supports scalable asynchronous communication among large sets of geographically distributed nodes. Yet, preserving privacy represents a major limitation for the wide adoption of CBR, notably when the routers are located in public clouds. Indeed, a CBR router must see the content of the messages sent by data producers, as well as the filters (or subscriptions) registered by data consumers. This represents a major deterrent for companies for which data is a key asset, as is the case, for instance, in financial markets or in sensitive business-to-business transactions. While there exist some techniques for privacy-preserving computation, they are either prohibitively slow or too limited to be usable in real systems. In this paper, we follow a different strategy by taking advantage of trusted hardware extensions that have just been introduced in off-the-shelf processors and provide a trusted execution environment. We exploit Intel's new software guard extensions (SGX) to implement a CBR engine in a secure enclave. Thanks to the hardware-based trusted execution environment (TEE), the compute-intensive CBR operations can operate on decrypted data shielded by the enclave and leverage efficient matching algorithms. Extensive experimental evaluation shows that SGX adds only limited overhead to insecure plaintext matching outside secure enclaves while providing much better performance and more powerful filtering capabilities than alternative software-only solutions. To the best of our knowledge, this work is the first to demonstrate the practical benefits of SGX for privacy-preserving CBR.
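For illustration only, a minimal Python sketch of the content-based matching step that such a CBR engine performs on decrypted messages inside the enclave; the Subscription class, attribute names, and operator set below are our own assumptions, not the paper's actual engine.

# Minimal sketch of content-based matching, the operation shielded inside the
# SGX enclave. Attribute names, operators, and the Subscription class are
# illustrative assumptions, not the paper's implementation.
import operator

OPS = {"<": operator.lt, "<=": operator.le, "=": operator.eq,
       ">=": operator.ge, ">": operator.gt}

class Subscription:
    def __init__(self, sub_id, predicates):
        # predicates: list of (attribute, op, value) constraints; all must hold
        self.sub_id = sub_id
        self.predicates = predicates

    def matches(self, message):
        return all(attr in message and OPS[op](message[attr], value)
                   for attr, op, value in self.predicates)

def route(message, subscriptions):
    """Return the ids of all subscriptions whose filters match the message."""
    return [s.sub_id for s in subscriptions if s.matches(message)]

if __name__ == "__main__":
    subs = [Subscription("trader-42", [("symbol", "=", "ACME"), ("price", "<", 100.0)]),
            Subscription("trader-7", [("volume", ">", 10_000)])]
    print(route({"symbol": "ACME", "price": 95.2, "volume": 500}, subs))  # ['trader-42']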
Security is concerned with protecting assets. The aspects of security that can be applied to any situation are defense, detection, and deterrence. Network security plays an important role in protecting the information, hardware, and software on a computer network. Denial-of-service (DoS) attacks have a great impact on the Internet: they attempt to disrupt legitimate users' access to services. By exploiting computers' vulnerabilities, attackers can easily consume victims' resources. Many specialized techniques have been developed to protect against DoS attacks, and some organizations deploy several defense mechanisms and tools to tackle these security problems. This paper presents various types of attacks, and the solutions associated with them, for each layer of the OSI model. These attacks and solutions have different impacts in different environments, and the rapid growth of new technologies may lead to even worse attack impacts in the future.
Organisers of large-scale crowdsourcing initiatives need to consider not only how to produce outcomes with their projects, but also how to build volunteer capacity. The initial project experience of contributors plays an important role in this, particularly when the contribution process requires some degree of expertise. We propose three analytical dimensions to assess first-time contributor engagement based on readily available public data: cohort analysis, task analysis, and observation of contributor performance. We apply these to a large-scale study of remote mapping activities coordinated by the Humanitarian OpenStreetMap Team, a global volunteer effort with thousands of contributors. Our study shows that different coordination practices can have a marked impact on contributor retention, and that complex task designs can be a deterrent for certain contributor groups. We close by providing recommendations about how to build and sustain volunteer capacity in these and comparable crowdsourcing systems.
DDoS-for-hire services, also known as booters, have commoditized DDoS attacks and enabled abusive subscribers of these services to cheaply extort, harass and intimidate businesses and people by taking them offline. However, due to the underground nature of these booters, little is known about their underlying technical and business structure. In this paper, we empirically measure many facets of their technical and payment infrastructure. We also perform an analysis of leaked and scraped data from three major booters (Asylum Stresser, Lizard Stresser and VDO), which provides us with an in-depth view of their customers and victims. Finally, we conduct a large-scale payment intervention in collaboration with PayPal and evaluate its effectiveness as a deterrent to their operations. Based on our analysis, we show that these booters are responsible for hundreds of thousands of DDoS attacks and identify potentially promising methods to undermine these services by increasing their costs of operation.
My work that is being recognized by the 2015 ACM A. M. Turing Award is in cybersecurity, while my primary interest for the last thirty-five years has been reducing the risk that nuclear deterrence will fail and destroy civilization. This Turing Lecture draws connections between those seemingly disparate areas, as well as Alan Turing's elegant proof that the computable real numbers, while denumerable, are not effectively denumerable.
Texting while driving has emerged as a significant threat to citizen safety. In this study, we utilize general deterrence theory (GDT), protection motivation theory, and personality traits to evaluate texting-while-driving (TWD) compliance intentions among teenage drivers. This paper presents the results of our pilot study, in which we administered an online survey to 105 teenage and young adult drivers. The potential implications for research, practice, and policy are discussed.
This study examines the effectiveness of virtual reality technology at creating an immersive user experience in which participants experience firsthand the extreme negative consequences of smartphone use while driving. Research suggests that distracted driving caused by smartphones is related to smartphone addiction and causes fatalities. Twenty-two individuals participated in the virtual reality user experience (VRUE), in which they were asked to drive a virtual car using an Oculus Rift headset, a Leap Motion hand-tracking device, and a force-feedback steering wheel and pedals. While driving in the simulation, participants were asked to interact with a smartphone; after a period of trying to manage both tasks, a vehicle appeared before them and they were involved in a head-on collision. Initial results indicated that a strong sense of presence was felt by participants, and a change in, or reinforcement of, participants' perceptions of the dangers of smartphone use while driving was observed.
In today's world, the security of companies' data is given greater emphasis than ever. Despite huge investments made by companies to keep their systems safe, many information systems security (ISS) breaches infiltrate companies' systems and consequently affect their economic capacity, reputation, and customers' confidence. The literature suggests that almost all investments in information systems security have focused only on technological solutions. However, this partial view of the complex information systems security problem has been found to be insufficient, and hence there is an increasing call for researchers to include social factors in the solution space. One such social factor is culture. Thus, in this research we study how national culture influences employees' intentions to violate or comply with their company's ISS policy. We construct and test an empirical model using survey data obtained from employees working in Ethiopia.
REMAN is a reputation management infrastructure for composite Web services. It supports the aggregation of client feedback on the perceived QoS of external services, using reputation mechanisms to build service rankings. Changes in rankings are proactively notified to composite service clients to enable self-tuning properties in their execution.
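For illustration, a toy Python sketch of aggregating client QoS feedback into a service ranking, in the spirit of a reputation manager like REMAN; the exponentially weighted moving average and its alpha parameter are our assumptions, not the paper's actual reputation mechanism.

# Toy sketch of QoS-feedback aggregation into a service ranking. The moving
# average and 'alpha' are illustrative assumptions, not REMAN's mechanism.
def update_reputation(current, feedback, alpha=0.3):
    """Blend a new client QoS score (0..1) into the running reputation."""
    return (1 - alpha) * current + alpha * feedback

def rank(services):
    """Return service names sorted by reputation, best first."""
    return sorted(services, key=services.get, reverse=True)

reputations = {"svcA": 0.80, "svcB": 0.75}
reputations["svcB"] = update_reputation(reputations["svcB"], 0.95)  # good feedback arrives
print(rank(reputations))  # ranking change would be pushed to composite-service clients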
In this paper we propose a mechanism for predicting domestic human activity in a smart home context. We use those predictions to adapt the behavior of home appliances whose impact on the environment is delayed (for example, heating). The behaviors of the appliances are built by a reinforcement learning mechanism. We compare the behavior built by the learning approach with both a purely reactive behavior and a state-remanent behavior.
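For illustration, a minimal tabular Q-learning sketch for a delayed-impact appliance such as heating; the state encoding, reward function, and toy transition model are our assumptions, not the paper's setup.

# Minimal tabular Q-learning sketch for a delayed-impact appliance (heating),
# assuming a one-step-ahead occupancy prediction. States, rewards, and the toy
# transition model are illustrative assumptions only.
import random
from collections import defaultdict

ACTIONS = ["heat_on", "heat_off"]
Q = defaultdict(float)                    # Q[(state, action)] -> value

def choose(state, eps=0.1):
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def learn(state, action, reward, next_state, alpha=0.1, gamma=0.9):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Toy training loop: state = (predicted_occupancy, currently_warm)
for _ in range(5000):
    state = (random.choice([True, False]), random.choice([True, False]))
    action = choose(state)
    occupied_next, warm_next = state[0], action == "heat_on"
    # Reward comfort when the home is occupied and warm; penalize wasted energy.
    reward = (1.0 if (occupied_next and warm_next) else 0.0) - (0.3 if warm_next else 0.0)
    learn(state, action, reward, (occupied_next, warm_next))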
A brief review is given of the memory properties of non-linear ferroelectric materials in terms of the direction of polarization. A sensitive pulse method has been developed for obtaining static remanent polarization data of ferroelectric materials. This method has been applied to study the effect of pulse duration and amplitude and the decay of polarization on ferroelectric ceramic materials with fairly high crystalline orientation. These studies indicate that ferroelectric memory devices can be operated in the megacycle range. Attempts have been made to develop electrostatically induced memory devices using ferroelectric substances as a medium for storing information. As an illustration, a ferroelectric memory using a new type of switching matrix is presented, having a selection ratio of 50 or more.
Contrary to widespread assumption, dynamic RAM (DRAM), the main memory in most modern computers, retains its contents for several seconds after power is lost, even at room temperature and even if removed from a motherboard. Although DRAM becomes less reliable when it is not refreshed, it is not immediately erased, and its contents persist sufficiently for malicious (or forensic) acquisition of usable full-system memory images. We show that this phenomenon limits the ability of an operating system to protect cryptographic key material from an attacker with physical access to a machine. It poses a particular threat to laptop users who rely on disk encryption: we demonstrate that it could be used to compromise several popular disk encryption products without the need for any special devices or materials. We experimentally characterize the extent and predictability of memory retention and report that remanence times can be increased dramatically with simple cooling techniques. We offer new algorithms for finding cryptographic keys in memory images and for correcting errors caused by bit decay. Though we discuss several strategies for mitigating these risks, we know of no simple remedy that would eliminate them.
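For illustration, a much-simplified Python sketch of scanning a memory image for key material; the paper's actual algorithms exploit redundancy in key schedules and correct bit-decay errors, whereas this toy merely flags high-entropy 16-byte windows (the threshold is an arbitrary assumption).

# Much-simplified illustration of scanning a memory image for key material.
# The paper's algorithms use key-schedule redundancy and error correction; this
# sketch only flags high-entropy 16-byte windows as candidates.
import math

def entropy(block: bytes) -> float:
    counts = {b: block.count(b) for b in set(block)}
    return -sum(c / len(block) * math.log2(c / len(block)) for c in counts.values())

def candidate_keys(image: bytes, window=16, threshold=3.8):
    for offset in range(0, len(image) - window, 4):
        block = image[offset:offset + window]
        if entropy(block) >= threshold:
            yield offset, block.hex()

# Example over a synthetic "memory image": zeros with a key-like region inside.
image = bytes(64) + bytes(range(16)) * 2 + bytes(64)
for off, key in candidate_keys(image):
    print(f"possible key material at offset {off}: {key}")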
As non-volatile memory (NVM) technologies are expected to replace DRAM in the near future, new challenges have emerged. For example, NVMs have slow and power-consuming writes, and limited write endurance. In addition, NVMs have a data remanence vulnerability, i.e., they retain data for a long time after being powered off. NVM encryption alleviates the vulnerability, but exacerbates the limited endurance by increasing the number of writes to memory. We observe that, in current systems, a large percentage of main memory writes result from data shredding in operating systems, a process of zeroing out physical pages before mapping them to new processes, in order to protect previous processes' data. In this paper, we propose Silent Shredder, which repurposes initialization vectors used in standard counter mode encryption to completely eliminate the data shredding writes. Silent Shredder also speeds up reading shredded cache lines, and hence reduces power consumption and improves overall performance. To evaluate our design, we run three PowerGraph applications and 26 multi-programmed workloads from the SPEC 2006 suite, on a gem5-based full system simulator. Silent Shredder eliminates an average of 48.6% of the writes in the initialization and graph construction phases. It speeds up main memory reads by 3.3 times, and improves the number of instructions per cycle (IPC) by 6.4% on average. Finally, we discuss several use cases, including virtual machines' data isolation and user-level large data initialization, where Silent Shredder can be used effectively at no extra cost.
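For illustration, a toy demonstration of the counter-mode property that Silent Shredder repurposes: because ciphertext is plaintext XORed with a keystream derived from the key and the page's initialization vector, changing the IV renders the old ciphertext unreadable without any zeroing write. The SHA-256-based keystream below is a stand-in for the memory controller's AES counter mode, purely for illustration.

# Toy illustration of counter-mode "shredding": bumping a page's IV makes its
# old ciphertext undecryptable, so no zeroing write is needed. SHA-256 here
# stands in for the controller's AES counter mode.
import hashlib

def keystream(key: bytes, iv: int, length: int) -> bytes:
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + iv.to_bytes(8, "big") + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"memory-controller-key"
page = b"previous process secret data...."
stored = xor(page, keystream(key, iv=1, length=len(page)))    # page as kept in NVM

print(xor(stored, keystream(key, iv=1, length=len(stored))))  # correct IV: plaintext
print(xor(stored, keystream(key, iv=2, length=len(stored))))  # IV bumped: unreadable garbage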
Smart mobile devices are becoming the main vessel of personal privacy information. While they carry valuable information, data erasure on them is much more vulnerable than was predicted. The security mechanisms provided by the Android system are not flexible enough to thoroughly delete sensitive data. In addition to the weaknesses among several provided data-erasing and file-deleting mechanisms, we also target Android OS design flaws in data erasure, and unveil that the design of the Android OS conflicts with some secure data-erasure requirements. We present the data-erasure flaws in three typical scenarios on mainstream Android devices: the data-clearing flaw, the application-uninstallation flaw, and the factory-reset flaw. Some of these flaws are data-deleting security issues inherited from the Linux kernel, and some are new vulnerabilities in the Android system. These scenarios reveal the data leak points in Android systems. Moreover, through a real-world data deletion latency experiment, we reveal that data remanence on the disk is rarely affected by the user's daily operations, such as file deletion and app installation and uninstallation. After one volunteer used an Android phone for 2 months, the amount of data remanence was still considerable. We then propose DataRaider for recovering files from disk fragments. It adopts a file-carving technique and is implemented as an automated sensitive-information recovery framework. DataRaider is able to extract private data from a raw disk image without any file system information, and the recovery rate is considerably high on the four tested Android phones. We propose some mitigations for data remanence issues and give users suggestions on data protection in Android systems.
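For illustration, a minimal file-carving sketch in the spirit of DataRaider: scan a raw disk image for JPEG start and end markers and dump the byte ranges in between. Real carvers, including DataRaider, handle fragmentation and many more formats; the disk.img path below is hypothetical.

# Minimal file-carving sketch: recover JPEGs from a raw image by magic markers.
# The marker constants are standard JPEG values; "disk.img" is a hypothetical path.
JPEG_START, JPEG_END = b"\xff\xd8\xff", b"\xff\xd9"

def carve_jpegs(image: bytes):
    pos = 0
    while True:
        start = image.find(JPEG_START, pos)
        if start == -1:
            return
        end = image.find(JPEG_END, start)
        if end == -1:
            return
        yield image[start:end + len(JPEG_END)]
        pos = end + len(JPEG_END)

with open("disk.img", "rb") as f:           # hypothetical raw disk image
    for i, blob in enumerate(carve_jpegs(f.read())):
        with open(f"carved_{i}.jpg", "wb") as out:
            out.write(blob)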
Sensing platforms are becoming batteryless to enable the vision of the Internet of Things, where trillions of devices collect data, interact with each other, and interact with people. However, these batteryless sensing platforms—that rely purely on energy harvesting—are rarely able to maintain a sense of time after a power failure. This makes working with sensor data that is time sensitive especially difficult. We propose two novel, zero-power timekeepers that use remanence decay to measure the time elapsed between power failures. Our approaches compute the elapsed time from the amount of decay of a capacitive device, either on-chip Static Random-Access Memory (SRAM) or a dedicated capacitor. This enables hourglass-like timers that give intermittently powered sensing devices a persistent sense of time. Our evaluation shows that applications using either timekeeper can keep time accurately through power failures as long as 45s with low overhead.
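For illustration, a sketch of how remanence-decay timekeeping can map an observed decay fraction back to elapsed time through a per-device calibration table; the calibration points below are made-up numbers for illustration only.

# Sketch of remanence-decay timekeeping: estimate time since power loss from the
# fraction of decayed SRAM cells, via a per-device calibration table.
CALIBRATION = [  # (decayed_fraction, elapsed_seconds), measured offline per device (made-up values)
    (0.00, 0.0), (0.05, 5.0), (0.20, 15.0), (0.55, 30.0), (0.90, 45.0),
]

def elapsed_from_decay(decayed_fraction: float) -> float:
    """Linearly interpolate elapsed time from the observed decay fraction."""
    for (f0, t0), (f1, t1) in zip(CALIBRATION, CALIBRATION[1:]):
        if f0 <= decayed_fraction <= f1:
            return t0 + (decayed_fraction - f0) / (f1 - f0) * (t1 - t0)
    return CALIBRATION[-1][1]  # saturated: beyond the last calibrated point

print(elapsed_from_decay(0.35))  # ~21.4 s under these made-up calibration points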
In this paper we consider recovering data from USB Flash memory sticks after they have been damaged or electronically erased. We describe the physical structure and theory of operation of Flash memories; review the literature of Flash memory data recovery; and report results of new experiments in which we damage USB Flash memory sticks and attempt to recover their contents. The experiments include smashing and shooting memory sticks, incinerating them in petrol and cooking them in a microwave oven.
BootJacker is a proof-of-concept attack tool which demonstrates that authentication mechanisms employed by an operating system can be bypassed by obtaining physical access and simply forcing a restart. The key insight that enables this attack is that the contents of memory on some machines are fully preserved across a warm boot. Upon a reboot, BootJacker uses this residual memory state to revive the original host operating system environment and run malicious payloads. Using BootJacker, an attacker can break into a locked user session and gain access to open encrypted disks, web browser sessions or other secure network connections. BootJacker's non-persistent design makes it possible for an attacker to leave no traces on the victim machine.
Current post-mortem cyber-forensic techniques may cause significant disruption to the evidence gathering process by breaking active network connections and unmounting encrypted disks. Although newer live forensic analysis tools can preserve active state, they may taint evidence by leaving footprints in memory. To help address these concerns we present Forenscope, a framework that allows an investigator to examine the state of an active system without the effects of taint or forensic blurriness caused by analyzing a running system. We show how Forenscope can fit into accepted workflows to improve the evidence gathering process. Forenscope preserves the state of the running system and allows running processes, open files, encrypted filesystems and open network sockets to persist during the analysis process. Forenscope has been tested on live systems to show that it does not operationally disrupt critical processes and that it can perform an analysis in less than 15 seconds while using only 125 KB of memory. We show that Forenscope can detect stealth rootkits, neutralize threats and expedite the investigation process by finding evidence in memory.
We study the value of data privacy in a game-theoretic model of trading private data, where a data collector purchases private data from strategic data subjects (individuals) through an incentive mechanism. The private data of each individual represents her knowledge about an underlying state, which is the information that the data collector desires to learn. Unlike most of the existing work on privacy-aware surveys, our model does not assume the data collector to be trustworthy. Each individual therefore takes full control of her own data privacy and reports only a privacy-preserving version of her data. In this paper, the value of ε units of privacy is measured by the minimum payment over all nonnegative payment mechanisms under which an individual's best response at a Nash equilibrium is to report the data with a privacy level of ε. The higher ε is, the less private the reported data is. We derive lower and upper bounds on the value of privacy which are asymptotically tight as the number of data subjects becomes large. Specifically, the lower bound shows that it is impossible to buy ε units of privacy with any smaller payment, and the upper bound is given by an achievable payment mechanism that we design. Based on these fundamental limits, we further derive lower and upper bounds on the minimum total payment for the data collector to achieve a given learning accuracy target, and show that the total payment of the designed mechanism is at most one individual's payment away from the minimum.
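Schematically, and in our own notation rather than the paper's, the quantity studied can be written as follows.

% Schematic formalization (our notation): V(\epsilon) is the cheapest nonnegative
% payment under which reporting with privacy level \epsilon is an individual's
% best response at a Nash equilibrium.
\[
  V(\epsilon) \;=\; \min \Bigl\{\, p \ge 0 \;:\; \text{under payment } p,\ \text{an individual's best response at a Nash equilibrium is to report with privacy level } \epsilon \,\Bigr\}
\]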
We introduce a model for differentially private analysis of weighted graphs in which the graph topology (V, E) is assumed to be public and the private information consists only of the edge weights w : E → R+. This can express, for example, hiding congestion patterns in a known system of roads. Differential privacy requires that the output of an algorithm provides little advantage, measured by privacy parameters ε and δ, for distinguishing between neighboring inputs, which are thought of as inputs that differ on the contribution of one individual. In our model, two weight functions w, w' are considered to be neighboring if they have l1 distance at most one. We study the problems of privately releasing a short path between a pair of vertices and of privately releasing approximate distances between all pairs of vertices. We are concerned with the approximation error, the difference between the length of the released path or released distance and the length of the shortest path or actual distance. For the problem of privately releasing a short path between a pair of vertices, we prove a lower bound of Ω(|V|) on the additive approximation error for fixed privacy parameters ε, δ. We provide a differentially private algorithm that matches this error bound up to a logarithmic factor and releases paths between all pairs of vertices, not just a single pair. The approximation error achieved by our algorithm can be bounded by the number of edges on the shortest path, so we achieve better accuracy than the worst-case bound for pairs of vertices that are connected by a low-weight path consisting of o(|V|) vertices. For the problem of privately releasing all-pairs distances, we show that for trees we can release all-pairs distances with approximation error O(log^2.5 |V|) for fixed privacy parameters. For arbitrary bounded-weight graphs with edge weights in [0, M] we can release all distances with approximation error Õ(√(|V|·M)).
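In standard notation, the privacy model described above is the following (our transcription of the abstract's definitions, not a quotation from the paper).

% Neighboring weight functions (l1 distance at most one) and the usual
% (\epsilon, \delta)-differential-privacy requirement over them.
\[
  w \sim w' \iff \sum_{e \in E} | w(e) - w'(e) | \le 1,
\]
\[
  \Pr[A(w) \in S] \;\le\; e^{\epsilon} \Pr[A(w') \in S] + \delta
  \quad \text{for all neighboring } w \sim w' \text{ and all output sets } S.
\]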
Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and provide an improved simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect the rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a scaling study that compares instrumented ROSS simulations with their noninstrumented counterparts in order to determine the amount of perturbation when running at different simulation scales.
In the capacitated k-median (CKM) problem, we are given a set F of facilities, each facility i ∈ F with a capacity ui, a set C of clients, a metric d over F ∪ C, and an integer k. The goal is to open k facilities in F and connect the clients C to the open facilities such that each facility i is connected by at most ui clients, so as to minimize the total connection cost. In this paper, we give the first constant-factor approximation for CKM that violates only the cardinality constraint, by a factor of 1 + ε. This generalizes the result of [Li15], which only works for the uniform capacitated case. Moreover, the approximation ratio we obtain is O([EQUATION] log [EQUATION]), which is an exponential improvement over the ratio of exp(O([EQUATION])) in [Li15]. The natural LP relaxation for the problem, on which almost all previous algorithms for CKM are based, has an unbounded integrality gap even if (2 − ε)k facilities can be opened. We introduce a novel configuration LP for the problem that overcomes this integrality gap. On the downside, each facility may be opened twice by our algorithm.
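For reference, the natural LP relaxation mentioned above, in its standard form: y_i is the extent to which facility i is opened and x_ij the extent to which client j is connected to facility i.

% Standard natural LP relaxation for capacitated k-median.
\begin{align*}
  \min \;& \sum_{i \in F}\sum_{j \in C} d(i,j)\, x_{ij} \\
  \text{s.t.}\;& \sum_{i \in F} x_{ij} = 1 && \forall j \in C, \\
  & x_{ij} \le y_i && \forall i \in F,\ j \in C, \\
  & \sum_{j \in C} x_{ij} \le u_i\, y_i && \forall i \in F, \\
  & \sum_{i \in F} y_i \le k, \qquad x, y \ge 0.
\end{align*}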
This article derives trade-offs between three basic costs of a parallel algorithm: synchronization, data movement, and computational cost. These trade-offs are lower bounds on the execution time of the algorithm that are independent of the number of processors but dependent on the problem size. Therefore, they provide lower bounds on the execution time of any parallel schedule of an algorithm computed by a system composed of any number of homogeneous processors, each with associated computational, communication, and synchronization costs. We employ a theoretical model that measures the amount of work and data movement as a maximum over that incurred along any execution path during the parallel computation. By considering this metric rather than the total communication volume over the whole machine, we obtain new insights into the characteristics of parallel schedules for algorithms with nontrivial dependency structures. We also present reductions from BSP and LogGP algorithms to our execution model, extending our lower bounds to these two models of parallel computation. We first develop our results for general dependency graphs and hypergraphs based on their expansion properties, and then we apply the theorem to a number of specific algorithms in numerical linear algebra, namely triangular substitution, Cholesky factorization, and stencil computations. We represent some of these algorithms as families of dependency graphs. We derive their communication lower bounds by studying the communication requirements of the hypergraph structures shared by these dependency graphs. In addition to these lower bounds, we introduce a new communication-efficient parallelization for stencil computation algorithms, which is motivated by results of our lower bound analysis and the properties of previously existing parallelizations of the algorithms.
The ever-growing performance of supercomputers brings demanding requirements for energy efficiency and resilience, due to the rapidly expanding size and duration of use of large-scale computing systems. Many application- and architecture-dependent parameters that individually determine energy efficiency and resilience have causal effects on each other, which directly affect the trade-offs among performance, energy efficiency, and resilience at scale. To enable high-efficiency management of today's large-scale High-Performance Computing (HPC) systems, a quantitative understanding of the entangled effects among performance, energy efficiency, and resilience is thus required. While previous work focuses on exploring energy-saving and resilience-enhancing opportunities separately, little has been done to theoretically and empirically investigate the interplay between energy efficiency and resilience at scale. In this article, by extending Amdahl's Law and the Karp-Flatt metric to take resilience into consideration, we quantitatively model the integrated energy efficiency in terms of performance per watt and showcase the trade-offs among typical HPC parameters, such as number of cores, frequency/voltage, and failure rates. Experimental results for a wide spectrum of HPC benchmarks on two HPC systems show that the proposed models are accurate in extrapolating resilience-aware performance and energy efficiency, and are capable of capturing the interplay among various energy-saving and resilience factors. Moreover, the models can help find the optimal HPC configuration for the highest integrated energy efficiency in the presence of failures and applied resilience techniques.
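For reference, the two classical starting points the article extends, in their standard forms; the paper's resilience- and energy-aware extensions add further terms beyond these.

% Amdahl's Law: speedup on p cores with serial fraction f.
\[
  S(p) \;=\; \frac{1}{f + \dfrac{1-f}{p}}
\]
% Karp-Flatt metric: experimentally determined serial fraction e from the
% measured speedup \psi on p cores.
\[
  e \;=\; \frac{\dfrac{1}{\psi} - \dfrac{1}{p}}{1 - \dfrac{1}{p}}
\]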