Biblio
Machine learning is widely used in security-sensitive settings like spam and malware detection, although it has been shown that malicious data can be carefully modified at test time to evade detection. To overcome this limitation, adversary-aware learning algorithms have been developed, exploiting robust optimization and game-theoretical models to incorporate knowledge of potential adversarial data manipulations into the learning algorithm. Although these techniques have been shown to be effective in some adversarial learning tasks, their adoption in practice is hindered by several factors, including the difficulty of meeting specific theoretical requirements, the complexity of implementation, and scalability issues, in terms of the computational time and space required during training. In this work, we aim to develop secure kernel machines against evasion attacks that are not computationally more demanding than their non-secure counterparts. In particular, leveraging recent work on robustness and regularization, we show that the security of a linear classifier can be drastically improved by selecting a proper regularizer, depending on the kind of evasion attack, as well as by unbalancing the cost of classification errors. We then discuss the security of nonlinear kernel machines, and show that a proper choice of the kernel function is crucial. We also show that unbalancing the cost of classification errors and varying some kernel parameters can further improve classifier security, yielding decision functions that better enclose the legitimate data. Our results on spam and PDF malware detection corroborate our analysis.
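As a reading aid (not part of the original abstract), the robustness-regularization link this work leverages can be written down for a linear classifier: under an evasion attack bounded in the l_p norm, the worst-case margin loss equals the clean loss evaluated at a margin shrunk by epsilon times the dual norm of the weights, so the attack norm dictates the matching regularizer.

```latex
% Sketch (a standard identity, not verbatim from the paper): for a linear classifier
% f(x) = w^T x + b, a non-increasing margin loss \ell (e.g., the hinge loss), and a
% test-time perturbation \delta bounded in the l_p norm, with 1/p + 1/q = 1,
\[
  \max_{\|\delta\|_p \le \varepsilon}
    \ell\bigl(y\,(w^{\top}(x+\delta) + b)\bigr)
  \;=\;
  \ell\bigl(y\,(w^{\top}x + b) - \varepsilon\,\|w\|_q\bigr).
\]
% Training against the worst case therefore behaves like adding an
% \varepsilon-weighted dual-norm (l_q) penalty on w, which is why the regularizer
% should be matched to the kind of evasion attack expected.
```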
Bitcoin provides two incentives for miners: block rewards and transaction fees. The former accounts for the vast majority of miner revenues at the beginning of the system, but it is expected to transition to the latter as the block rewards dwindle. There has been an implicit belief that whether miners are paid by block rewards or transaction fees does not affect the security of the block chain. We show that this is not the case. Our key insight is that with only transaction fees, the variance of the block reward is very high due to the exponentially distributed block arrival time, and it becomes attractive to fork a "wealthy" block to "steal" the rewards therein. We show that this results in an equilibrium with undesirable properties for Bitcoin's security and performance, and even non-equilibria in some circumstances. We also revisit selfish mining and show that it can be made profitable for a miner with an arbitrarily low hash power share, and who is arbitrarily poorly connected within the network. Our results are derived from theoretical analysis and confirmed by a new Bitcoin mining simulator that may be of independent interest. We discuss the troubling implications of our results for Bitcoin's future security and draw lessons for the design of new cryptocurrencies.
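To make the key insight concrete, here is a minimal Monte Carlo sketch (ours, not the authors' simulator; parameter values are purely illustrative) comparing the per-block reward variance of a fixed subsidy against a fee-only regime in which fees accrue at a constant rate and are collected by the next block:

```python
import random

BLOCKS = 100_000
MEAN_INTERVAL = 600.0   # seconds; Bitcoin targets roughly one block every 10 minutes
FEE_RATE = 0.01         # hypothetical fees accruing per second (illustrative units)
SUBSIDY = FEE_RATE * MEAN_INTERVAL  # fixed reward with the same mean as fee income

# Block inter-arrival times are (approximately) exponentially distributed.
intervals = [random.expovariate(1.0 / MEAN_INTERVAL) for _ in range(BLOCKS)]

fee_rewards = [FEE_RATE * t for t in intervals]   # fee-only regime
fixed_rewards = [SUBSIDY for _ in intervals]      # block-subsidy regime

def mean(xs): return sum(xs) / len(xs)
def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print("fee-only: mean=%.2f var=%.2f" % (mean(fee_rewards), var(fee_rewards)))
print("subsidy:  mean=%.2f var=%.2f" % (mean(fixed_rewards), var(fixed_rewards)))
# Both regimes pay the same on average, but the fee-only reward inherits the
# exponential interval's variance, making "wealthy" blocks attractive fork targets.
```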
Decoy routing is a promising new approach for censorship circumvention that relies on traffic re-direction by volunteer autonomous systems. Decoy routing is subject to a fundamental censorship attack, called routing around decoys (RAD), in which the censors re-route their clients' Internet traffic in order to evade decoy routing autonomous systems. Recently, there has been a heated debate in the community on the real-world feasibility of decoy routing in the presence of the RAD attack. Unfortunately, previous studies base their analysis on heuristic-based mechanisms for decoy placement as well as ad hoc strategies for the implementation of the RAD attack by the censors. In this paper, we perform the first systematic analysis of decoy routing in the presence of the RAD attack. We use game theory to model the interactions between decoy router deployers and the censors in various settings. Our game-theoretic analysis finds the optimal decoy placement strategies (as opposed to heuristic-based placements) in the presence of RAD censors who take their optimal censorship actions (as opposed to some ad hoc implementation of RAD). That is, we investigate the best decoy placement given the best RAD censorship. We consider two business models for the real-world deployment of decoy routers: a central deployment that resembles that of Tor, and a distributed deployment where autonomous systems individually decide on decoy deployment based on their economic interests. Through extensive simulation of Internet routes, we derive the optimal strategies in the two models for various censoring countries and under different assumptions about the budget and preferences of the censors and decoy deployers. We believe that our study is a significant step forward in understanding the practicality of the decoy routing circumvention approach.
One of the main concerns for smartphone users is the quality of the apps they download. Before installing any app from the market, users first check its rating and reviews. However, these ratings are not computed by experts, and in most cases they do not reflect whether an app behaves maliciously. In this work, we present an IDS/rating system based on a game-theoretic model with crowdsourcing. Our results show that, with minor control over the error in categorizing users and the fraction of experts in the crowd, our system provides proper ratings while flagging all malicious apps.
The security game is a basic model for resource allocation in adversarial environments. Here there are two players, a defender and an attacker. The defender wants to allocate her limited resources to defend critical targets, and the attacker seeks his most favorable target to attack. In the past decade, there has been a surge of research interest in analyzing and solving security games motivated by applications from various domains. Remarkably, these models and their game-theoretic solutions have led to real-world deployments in use at LAX airport and by major security agencies like the US Coast Guard and the Federal Air Marshal Service, as well as by non-governmental organizations. Across this research and these applications, equilibrium computation serves as a foundation. This paper examines security games from a theoretical perspective and provides a unified view of various security game models. In particular, each security game can be characterized by a set system E which consists of the defender's pure strategies; the defender's best-response problem can be viewed as a combinatorial optimization problem over E. Our framework captures most of the basic security game models in the literature, including all the deployed systems; the set systems E arising from various domains encode standard combinatorial problems like bipartite matching, maximum coverage, min-cost flow, and packing problems. Our main result shows that equilibrium computation in security games is essentially a combinatorial problem. In particular, we prove that, for any set system E, the following problems can be reduced to each other in polynomial time: (0) combinatorial optimization over E; (1) computing the minimax equilibrium for zero-sum security games over E; (2) computing the strong Stackelberg equilibrium for security games over E; (3) computing the best or worst (for the defender) Nash equilibrium for security games over E. Therefore, the hardness [polynomial solvability] of any of these problems implies the hardness [polynomial solvability] of all the others. Here, by "games over E" we mean the class of security games with arbitrary payoff structures but a fixed set E of defender pure strategies. This shows that the complexity of a security game is essentially determined by the set system E. We view drawing these connections as an important conceptual contribution of this paper.
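For orientation (our notation, not the paper's), problem (1) above, the minimax equilibrium of a zero-sum security game over a set system E, can be sketched as follows; the defender's best response to a fixed attack distribution is a combinatorial optimization over E, which is where the reductions come from.

```latex
% Zero-sum security game over a set system E of defender pure strategies (subsets of
% targets that can be covered simultaneously).  A mixed strategy x over E induces a
% marginal coverage c_t on each target t; U_c(t) and U_u(t) denote the attacker's
% payoff at t when t is covered and uncovered, respectively.  The minimax problem is:
\[
  \min_{x \in \Delta(E)} \; \max_{t \in [n]} \;
  c_t\,U_c(t) + (1 - c_t)\,U_u(t),
  \qquad
  c_t = \sum_{S \in E \,:\, t \in S} x_S .
\]
% The defender's best response to a fixed attack distribution reduces to optimizing a
% linear objective over E, i.e., the combinatorial problem (0) in the abstract.
```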
Distributed denial-of-service attacks are an increasing problem facing web applications, for which many defense techniques have been proposed, including several moving-target strategies. These strategies typically work by relocating targeted services over time, increasing uncertainty for the attacker, while trying not to disrupt legitimate users or incur excessive costs. Prior work has not shown, however, whether and how a rational defender would choose a moving-target method against an adaptive attacker, and under what conditions. We formulate a denial-of-service scenario as a two-player game, and solve a restricted-strategy version of the game using the methods of empirical game-theoretic analysis. Using agent-based simulation, we evaluate the performance of strategies from prior literature under a variety of attacks and environmental conditions. We find evidence for the strategic stability of various proposed strategies, such as proactive server movement, delayed attack timing, and suspected insider blocking, along with guidelines for when each is likely to be most effective.
This paper presents a supervisory control and data acquisition (SCADA) testbed recently built at the University of New Orleans. The testbed consists of models of three industrial physical processes: a gas pipeline, a power transmission and distribution system, and a wastewater treatment plant; these systems are fully functional and implemented at small scale. It utilizes real-world industrial equipment such as transformers, programmable logic controllers (PLCs), and aerators, bringing it closer to modeling real-world SCADA systems. Sensors, actuators, and PLCs are deployed at each physical process system for local control and monitoring, and the PLCs are also connected to a computer running human-machine interface (HMI) software for monitoring the status of the physical processes. The testbed is a useful resource for cybersecurity research, forensic research, and education on different aspects of SCADA systems such as PLC programming, protocol analysis, and demonstration of cyber attacks.
As the use of social media technologies proliferates in organizations, it is important to understand the nefarious behaviors, such as cyberbullying, that may accompany such technology use and how to discourage these behaviors. We draw from neutralization theory and the criminological theory of general deterrence to develop and empirically test a research model to explain why cyberbullying may occur and how the behavior may be discouraged. We created a research model of three second-order formative constructs to examine their predictive influence on intentions to cyberbully. We used PLS-SEM to analyze the responses of 174 Facebook users in two different cyberbullying scenarios. Our model suggests that neutralization techniques enable cyberbullying behavior and that, while sanction certainty is an important deterrent, sanction severity appears ineffective. We discuss the theoretical and practical implications of our model and results.
With the boom of ride-sharing platforms, there has been a growing debate on ride-sharing regulations. In particular, allegations of rape against ride-sharing drivers have put sexual assault at the center of this debate. However, there is no systematic, society-wide evidence regarding ride-sharing and sexual assault. Building on a theory of crime victimization, this study examines the effect of ride-sharing on sexual assault incidents using comprehensive data on Uber transactions and crime incidents in New York City over the period from January to March 2015. Our findings demonstrate that Uber availability is negatively associated with the likelihood of rape, after controlling for endogeneity. Moreover, the deterrent effect of Uber on sexual assault is entirely driven by taxi-sparse areas, namely those outside Manhattan. This study sheds light on the potential of ride-sharing platforms and the sharing economy to improve social welfare beyond economic gains.
Content-based routing (CBR) is a powerful model that supports scalable asynchronous communication among large sets of geographically distributed nodes. Yet, preserving privacy represents a major limitation for the wide adoption of CBR, notably when the routers are located in public clouds. Indeed, a CBR router must see the content of the messages sent by data producers, as well as the filters (or subscriptions) registered by data consumers. This represents a major deterrent for companies for which data is a key asset, for instance in the case of financial markets or when conducting sensitive business-to-business transactions. While there exist some techniques for privacy-preserving computation, they are either prohibitively slow or too limited to be usable in real systems. In this paper, we follow a different strategy by taking advantage of trusted hardware extensions that have just been introduced in off-the-shelf processors and provide a trusted execution environment. We exploit Intel's new software guard extensions (SGX) to implement a CBR engine in a secure enclave. Thanks to the hardware-based trusted execution environment (TEE), the compute-intensive CBR operations can operate on decrypted data shielded by the enclave and leverage efficient matching algorithms. Extensive experimental evaluation shows that SGX adds only limited overhead to insecure plaintext matching outside secure enclaves, while providing much better performance and more powerful filtering capabilities than alternative software-only solutions. To the best of our knowledge, this work is the first to demonstrate the practical benefits of SGX for privacy-preserving CBR.
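For readers unfamiliar with CBR, the compute-intensive operation the enclave shields is subscription matching: each published message is tested against the predicates registered by consumers. A minimal plaintext sketch of that matching step follows (the SGX machinery and the paper's actual matching algorithm are not represented; the names and filter format are ours):

```python
import operator

# A subscription is a conjunction of (attribute, operator, value) predicates;
# a message is a dict of attributes.  This shows only the plaintext matching step:
# in the paper, this kind of logic runs on decrypted data inside an SGX enclave.
OPS = {"=": operator.eq, "<": operator.lt, ">": operator.gt,
       "<=": operator.le, ">=": operator.ge}

def matches(message, subscription):
    return all(attr in message and OPS[op](message[attr], value)
               for attr, op, value in subscription)

subscriptions = {
    "trader-1": [("symbol", "=", "ACME"), ("price", "<", 100.0)],
    "trader-2": [("symbol", "=", "ACME"), ("volume", ">", 5000)],
}

message = {"symbol": "ACME", "price": 97.5, "volume": 1200}
recipients = [s for s, sub in subscriptions.items() if matches(message, sub)]
print(recipients)   # ['trader-1']
```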
Security is concerned with protecting assets, and its core aspects (defense, detection, and deterrence) can be applied to any situation. Network security plays an important role in protecting the information, hardware, and software on a computer network. Denial-of-service (DoS) attacks have a great impact on the Internet: they attempt to disrupt legitimate users' access to services, and by exploiting vulnerabilities attackers can easily exhaust a victim's resources. Many techniques have been developed to protect against DoS attacks, and some organizations deploy several defense tools to tackle these security problems. This paper presents various types of attacks and the solutions associated with each layer of the OSI model. These attacks and solutions have different impacts in different environments, and the rapid growth of new technologies may make the impact of such attacks still worse in the future.
Organisers of large-scale crowdsourcing initiatives need to consider not only how to produce outcomes with their projects, but also how to build volunteer capacity. The initial project experience of contributors plays an important role in this, particularly when the contribution process requires some degree of expertise. We propose three analytical dimensions to assess first-time contributor engagement based on readily available public data: cohort analysis, task analysis, and observation of contributor performance. We apply these to a large-scale study of remote mapping activities coordinated by the Humanitarian OpenStreetMap Team, a global volunteer effort with thousands of contributors. Our study shows that different coordination practices can have a marked impact on contributor retention, and that complex task designs can be a deterrent for certain contributor groups. We close by providing recommendations about how to build and sustain volunteer capacity in these and comparable crowdsourcing systems.
DDoS-for-hire services, also known as booters, have commoditized DDoS attacks and enabled abusive subscribers of these services to cheaply extort, harass and intimidate businesses and people by taking them offline. However, due to the underground nature of these booters, little is known about their underlying technical and business structure. In this paper, we empirically measure many facets of their technical and payment infrastructure. We also perform an analysis of leaked and scraped data from three major booters (Asylum Stresser, Lizard Stresser and VDO), which provides us with an in-depth view of their customers and victims. Finally, we conduct a large-scale payment intervention in collaboration with PayPal and evaluate its effectiveness as a deterrent to their operations. Based on our analysis, we show that these booters are responsible for hundreds of thousands of DDoS attacks and identify potentially promising methods to undermine these services by increasing their costs of operation.
My work that is being recognized by the 2015 ACM A. M. Turing Award is in cybersecurity, while my primary interest for the last thirty-five years has been reducing the risk that nuclear deterrence will fail and destroy civilization. This Turing Lecture draws connections between those seemingly disparate areas as well as Alan Turing's elegant proof that the computable real numbers, while denumerable, are not effectively denumerable.
Texting while driving has emerged as a significant threat to citizen safety. In this study, we utilize general deterrence theory (GDT), protection motivation theory, and personality traits to evaluate texting while driving (TWD) compliance intentions among teenage drivers. This paper presents the results of our pilot study. We administered an online survey to 105 teenage and young adult drivers. The potential implications for research, practice, and policy are discussed.
This study examines the effectiveness of virtual reality technology at creating an immersive user experience in which participants experience firsthand the extreme negative consequences of smartphone use while driving. Research suggests that distracted driving caused by smartphones is related to smartphone addiction and causes fatalities. Twenty-two individuals participated in the virtual reality user experience (VRUE), in which they were asked to drive a virtual car using an Oculus Rift headset, a LeapMotion hand-tracking device, and a force-feedback steering wheel and pedals. While driving in the simulation, participants were asked to interact with a smartphone; after a period of trying to manage both tasks, a vehicle appears in front of them and they are involved in a head-on collision. Initial results indicated that a strong sense of presence was felt by participants, and a change in, or reinforcement of, participants' perception of the dangers of smartphone use while driving was observed.
In today's world, the security of companies' data is given greater emphasis than ever. Despite huge investments made by companies to keep their systems safe, many information systems security breaches infiltrate companies' systems and consequently affect their economic capacity, reputation, and customers' confidence. The literature suggests that almost all investments in information systems security have focused only on technological solutions. However, this partial view of the complex information systems security problem has been found insufficient, and hence there is an increasing call for researchers to include social factors in the solution space. One such social factor is culture. Thus, in this research we study how national culture influences employees' intentions to violate or comply with their company's ISS policy. We construct and test an empirical model using survey data obtained from employees working in Ethiopia.
Cloud computing provides a shared pool of resources for large-scale distributed applications. Recent trends such as fog computing and edge computing spread the workload of clouds closer towards the edge of the network and the users. Exploiting edge resources efficiently requires managing the resources and directing user traffic to the correct edge servers. In this paper, we propose to profile users and group them according to their interests. We consider edge caching as an example and, through our evaluation, show the potential benefits of directing users from the same group to the same caches. We investigate a range of workloads and parameters, and the same conclusions apply. Our results highlight the importance of grouping users and demonstrate the potential benefits of this approach.
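A minimal sketch of the grouping idea (the grouping rule, categories, and names are ours, not necessarily the paper's): derive an interest profile per user and direct each group of similar users to the same edge cache, so requests within a group hit content already cached for users with similar interests.

```python
import random
from collections import defaultdict

# Hypothetical setup: each user has an interest profile over content categories,
# and users with similar profiles are directed to the same edge cache.
CATEGORIES = ["news", "sports", "video", "music"]

def interest_profile():
    weights = [random.random() for _ in CATEGORIES]
    total = sum(weights)
    return {c: w / total for c, w in zip(CATEGORIES, weights)}

users = {f"user{i}": interest_profile() for i in range(1000)}

# Simplest possible grouping rule: assign a user to the cache serving the
# category they are most interested in (the paper's grouping may differ).
def assign_cache(profile):
    return max(profile, key=profile.get)

groups = defaultdict(list)
for user, profile in users.items():
    groups[assign_cache(profile)].append(user)

for cache, members in groups.items():
    print(f"cache[{cache}]: {len(members)} users")
# Users in the same group request similar items, so a shared edge cache
# can achieve higher hit ratios than one serving an undifferentiated population.
```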
Vehicular users are expected to consume large amounts of data, for both entertainment and navigation purposes. This will put a strain on cellular networks, which will be able to cope with such a load only if proper caching is in place; this in turn begs the question of which caching architecture is best suited to deal with vehicular content consumption. In this paper, we leverage a large-scale, crowd-sourced trace to (i) characterize the vehicular traffic demand, in terms of overall magnitude and content breakup; (ii) assess how different caching approaches perform against such a real-world load; and (iii) study the effect of recommendation systems and local content items. We define a price-of-fog metric, expressing the additional caching capacity to deploy when moving from traditional, centralized caching architectures to a "fog computing" approach, where caches are closer to the network edge. We find that for location-specific items, such as the ones that vehicular users are most likely to request, such a price almost disappears. Vehicular networks thus make a strong case for the adoption of mobile-edge caching, as we are able to reap the benefits thereof (including a reduction in the distance travelled by data within the core network) with few or none of the associated disadvantages.
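One plausible reading of the price-of-fog metric (the paper's exact definition may differ) is the capacity blow-up incurred when a target hit ratio must be delivered by many edge caches instead of a single central cache:

```latex
% Interpretation sketch (our notation): the factor by which total cache capacity must
% grow when the same hit ratio h is delivered by N edge ("fog") caches instead of one
% centralized cache.
\[
  \mathrm{PoF}(h) \;=\;
  \frac{\sum_{i=1}^{N} C_i^{\mathrm{fog}}(h)}{C^{\mathrm{central}}(h)} .
\]
% For location-specific content, each edge cache only needs the items requested in its
% own area, so the numerator stops growing with N and the price-of-fog approaches 1,
% consistent with the finding that the price "almost disappears" for such items.
```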
The exploitation of the opportunistic infrastructure via Device-to-Device (D2D) communication is a critical component towards the adoption of new paradigms such as edge and fog computing. While a lot of work has demonstrated the great potential of D2D communication, it is still unclear whether the benefits of the D2D approach can really be leveraged in practice. In this paper, we develop a software sensor, namely Detector, which senses the infrastructure in proximity of a mobile user. We analyze and evaluate D2D in the wild, i.e., not in simulations. We found that, in a realistic environment, a mobile device is always in proximity to at least one other mobile device throughout the day. This suggests that a device can schedule task processing in coordination with other, potentially more powerful, devices instead of handling the processing of the tasks by itself.
In the Internet of Things (IoT), Internet-connected things provide an influx of data and resources that offer unlimited possibilities for applications and services. Smart City IoT systems refer to things that are distributed over wide physical areas covering a whole city. While this new breed of data and resources looks promising, building applications in such large-scale IoT systems is a difficult task due to the distributed and dynamic nature of the entities involved, such as sensing and actuating devices, people, and computing resources. In this paper, we explore the process of developing Smart City IoT applications from a coordination-based perspective. We show that a distributed coordination model that oversees such a large group of distributed components is necessary for building Smart City IoT applications. In particular, we propose Adaptive Distributed Dataflow, a novel Dataflow-based programming model that focuses on coordinating city-scale distributed systems that are highly heterogeneous and dynamic.
Geo-distributed Situation Awareness applications are large in scale and are characterized by 24/7 data generation from mobile and stationary sensors (such as cameras and GPS devices); latency-sensitivity for converting sensed data to actionable knowledge; and elastic and bursty needs for computational resources. Fog computing [7] envisions providing computational resources close to the edge of the network, consequently reducing the latency for the sense-process-actuate cycle that exists in these applications. We propose Foglets, a programming infrastructure for the geo-distributed computational continuum represented by fog nodes and the cloud. Foglets provides APIs for a spatio-temporal data abstraction for storing and retrieving application generated data on the local nodes, and primitives for communication among the resources in the computational continuum. Foglets manages the application components on the Fog nodes. Algorithms are presented for launching application components and handling the migration of these components between Fog nodes, based on the mobility pattern of the sensors and the dynamic computational needs of the application. Evaluation results are presented for a Fog network consisting of 16 nodes using a simulated vehicular network as the workload. We show that the discovery and deployment protocol can be executed in 0.93 secs, and joining an already deployed application can be as quick as 65 ms. Also, QoS-sensitive proactive migration can be accomplished in 6 ms.
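To illustrate the flavor of the spatio-temporal data abstraction described above, here is a simplified, hypothetical stand-in in Python; the class and method names are ours, and the real Foglets API, its discovery protocol, and its migration mechanism are considerably richer than this sketch.

```python
import time
from dataclasses import dataclass, field

@dataclass
class FogNode:
    region: str                                  # spatial scope served by this node
    store: dict = field(default_factory=dict)    # (key, region, timestamp) -> value

    def put(self, key, value, timestamp=None):
        """Store application data tagged with this node's region and a timestamp."""
        timestamp = timestamp if timestamp is not None else time.time()
        self.store[(key, self.region, timestamp)] = value
        return timestamp

    def get(self, key, since=0.0):
        """Return values for `key` in this node's region, newest first, not older than `since`."""
        hits = [(ts, v) for (k, r, ts), v in self.store.items()
                if k == key and r == self.region and ts >= since]
        return [v for ts, v in sorted(hits, reverse=True)]

    def migrate_to(self, other, key):
        """Hand over a key's data to another fog node, e.g. when the sensor producing it moves."""
        moved = {kk: v for kk, v in self.store.items() if kk[0] == key}
        for (k, _, ts), v in moved.items():
            other.store[(k, other.region, ts)] = v
        for kk in moved:
            del self.store[kk]

# Usage: a camera's detections are stored on the local fog node and migrated as it moves.
a, b = FogNode("downtown"), FogNode("midtown")
a.put("camera42/detections", {"pedestrians": 3})
a.migrate_to(b, "camera42/detections")
print(b.get("camera42/detections"))
```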
The Internet of Things (IoT) is slowly, but steadily, changing the way we interact with our surroundings. Smart cities, smart environments, and smart buildings are just a few macroscopic examples of how smart ecosystems are increasingly involved in our daily life, each one offering a different set of information. The decentralization and scattering of this information can be exploited to optimize the on-demand information retrieval process of mobile nodes. We propose an approach focused on defining competence domains in smart systems, where the responsibility for providing a specific piece of information to a mobile node is defined by spatial constraints. By exploiting the interplay and duality of Cloud Computing and Fog Computing, we introduce an approach that exploits the spatial allocation of data in smart systems to optimize information retrieval by mobile nodes.
Smart Transportation applications are by nature examples of Vehicular Ad-hoc Network (VANET) applications, where mobile vehicles, roadside units, and transportation infrastructure interplay with one another to provide value-added services. While there is abundant research focused on the communication aspects of such Mobile Ad-hoc Networks, little research targets the development of VANET applications. Among the popular VANET applications, a dominant direction is to leverage Cloud infrastructure to execute and deliver applications and services. Recent studies showed that Cloud Computing is not sufficient for many VANET applications due to the mobility of vehicles and the latency-sensitive requirements they impose. To this end, Fog Computing has been proposed to leverage computation infrastructure that is closer to the network edge to complement Cloud Computing in providing latency-sensitive applications and services. However, application development in a Fog environment is much more challenging than in the Cloud due to the distributed nature of Fog systems. In this paper, we investigate how Smart Transportation applications are developed following the Fog Computing approach, the challenges involved, and possible mitigations from the state of the art.
The notion of edge computing introduces new computing functions away from centralized locations and closer to the network edge, thus facilitating new applications and services. This enhanced computing paradigm provides application developers with new opportunities that are not available otherwise. In this talk, I will discuss why placing computation functions at the extreme edge of our network infrastructure, i.e., in wireless Access Points and home set-top boxes, is particularly beneficial for a large class of emerging applications. I will discuss a specific approach, called ParaDrop, to implement such edge computing functionalities, and use examples from different domains (smarter homes, sustainability, and intelligent transportation) to illustrate the new opportunities around this concept.