Biblio
We study the problem of allocating limited security countermeasures to protect network data from cyber-attacks, for scenarios modeled by Bayesian attack graphs. We consider multi-stage interactions between a network administrator and cybercriminals, formulated as a security game. This formulation is capable of representing security environments with significant dynamics and uncertainty, and very large strategy spaces. For the game model, we propose parameterized heuristic strategies for both players. Our heuristics exploit the topological structure of the attack graphs and employ different sampling methodologies to overcome the computational complexity in determining players' actions. Given the complexity of the game, we employ a simulation-based methodology, and perform empirical game analysis over an enumerated set of these heuristic strategies. Finally, we conduct experiments based on a variety of game settings to demonstrate the advantages of our heuristics in obtaining effective defense strategies which are robust to the uncertainty of the security environment.
Bitcoin has not only attracted many users but also been considered a technical breakthrough by academia. However, the expanding potential of Bitcoin is largely untapped due to its limited throughput. The Bitcoin community is now facing its biggest crisis in history as it splits on how to increase throughput. Among various proposals, Bitcoin Unlimited (BU) recently became the most popular candidate, as it allows miners to collectively decide the block size limit according to real network capacity. However, the security of BU is hotly debated, and no consensus has been reached because the issue is discussed under different miner incentive models. In this paper, we systematically evaluate BU's security under three incentive models by testing the two major arguments of BU supporters: that the block validity consensus is not necessary for BU's security, and that such consensus would emerge in BU out of economic incentives. Our results invalidate both arguments and therefore disprove BU's security claims. Our paper further contributes to the field by addressing the necessity of a prescribed block validity consensus for cryptocurrencies.
Never Alone (2016) is a generative large-scale urban screen video-sound installation, which presents the idea of generative choreographies amongst multiple video agents, or "digital performers". This generative installation questions how we navigate in urban spaces and the ubiquity and disruptive nature of encounters within the cities' landscapes. The video agents explore precarious movement paths along the façade inhabiting landscapes that are both architectural and emotional.
Link quality protocols employ link quality estimators to collect statistics on the wireless link, either independently or cooperatively among the sensor nodes. Furthermore, link quality routing protocols for wireless sensor networks may modify an estimator to meet their needs. Link quality estimators are vulnerable to attacks that exploit them: a malicious node may share false information with its neighboring sensor nodes to corrupt their estimates, so that its neighbors gather incorrect statistics about their wireless links. This paper aims to detect malicious nodes that manipulate the link quality estimator of the routing protocol. To accomplish this task, the MINTROUTE and CTP routing protocols are selected and extended with intrusion detection schemes (IDSs) for further investigation. We show that both routing protocols under scrutiny possess inherent vulnerabilities that can disrupt link quality calculations, and that malicious nodes abusing these vulnerabilities can be identified through operational detection mechanisms. The overall performance of the new LQR protocol with IDS features is evaluated and validated via detection rates and false alarm rates.
Data clustering is an important topic in data science in general, and in user modeling and recommendation systems in particular. Some clustering algorithms, like K-means, require the adjustment of many parameters and force a clustering without considering the clusterability of the dataset. Others, like DBSCAN, are tuned to a fixed density threshold and therefore cannot detect clusters of different densities. In this paper we propose a new clustering algorithm based on mutual votes, which adjusts itself automatically to the dataset, requires minimal parameterization, and is able to detect clusters of different densities within the same dataset. We evaluate our algorithm against other clustering algorithms on the task of clustering users and predicting their purchases in the context of recommendation systems.
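The abstract does not spell out the voting mechanism, so the following is only a minimal sketch under the assumption that two points "vote" for each other when each lies among the other's k nearest neighbours, and clusters are the connected components of the resulting mutual-vote graph; the function name and choice of k are illustrative, not the paper's algorithm.

```python
# Hypothetical mutual-vote clustering sketch: reciprocal k-NN votes define
# edges, and clusters are the connected components of that graph.
import numpy as np

def mutual_vote_clusters(X, k=5):
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                   # a point never votes for itself
    knn = np.argsort(d, axis=1)[:, :k]            # each point's k nearest neighbours
    votes = np.zeros((n, n), dtype=bool)
    for i in range(n):
        votes[i, knn[i]] = True
    mutual = votes & votes.T                      # keep an edge only if reciprocated
    labels, current = -np.ones(n, dtype=int), 0
    for i in range(n):                            # connected components via DFS
        if labels[i] >= 0:
            continue
        stack = [i]
        while stack:
            j = stack.pop()
            if labels[j] < 0:
                labels[j] = current
                stack.extend(np.flatnonzero(mutual[j]))
        current += 1
    return labels

X = np.random.rand(200, 2)
print(mutual_vote_clusters(X, k=5))
```

Because edges require reciprocal votes, dense regions link up at their own local scale, which is one plausible way a method of this kind could find clusters of differing densities without a global threshold.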
Cyber-induced dependent failures are important to consider in composite system reliability evaluation. Because of the complexity and dimensionality involved, Monte Carlo simulation is a preferred method for such evaluation. Non-sequential Monte Carlo sampling generally requires less computation and storage than sequential techniques and is generally preferred for large systems whose components are independent or only weakly dependent. However, cyber-induced events involve dependent failures, which makes sampling methods difficult to apply. The difficulties of sampling with dependent failures are discussed and a solution is proposed. The basic idea is to generate a representative state space from which states can be sampled. The probabilities of the representative state space approximate the joint distribution; they are generated by sequential simulation in this paper, though alternative means of achieving this objective may be possible. The proposed method preserves the dependent features of cyber-induced events while improving efficiency. Although motivated by cyber-induced failures, the technique can be used for other types of dependent failures as well. A comparative study between a purely sequential methodology and the proposed method is presented on an extended Roy Billinton Test System.
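As a rough illustration of the idea, the sketch below uses a toy two-component system with invented failure probabilities: one sequential pass builds the representative state space and its empirical joint probabilities, and non-sequential evaluation then samples states directly from that distribution, preserving the dependency.

```python
# Minimal sketch (hypothetical system): sequential simulation approximates the
# joint distribution of dependent states; sampling then reuses it directly.
import random
from collections import Counter

def sequential_pass(steps=100_000):
    # Two components with a cyber-induced dependency: if component 0 fails,
    # component 1 fails with elevated probability in the same step.
    counts = Counter()
    for _ in range(steps):
        s0 = random.random() < 0.02                    # independent failure
        s1 = random.random() < (0.30 if s0 else 0.02)  # dependent failure
        counts[(s0, s1)] += 1
    total = sum(counts.values())
    states = list(counts)
    probs = [counts[s] / total for s in states]
    return states, probs

states, probs = sequential_pass()
# Non-sequential sampling that preserves the dependency captured above.
samples = random.choices(states, weights=probs, k=10)
print(dict(zip(states, probs)), samples)
```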
The Internet of Things (IoT) has become ubiquitous in our daily life as billions of devices are connected through the Internet infrastructure. However, the rapid increase of IoT devices brings many non-traditional challenges for system design and implementation. In this paper, we focus on the hardware security vulnerabilities and ultra-low power design requirements of IoT devices. We briefly survey existing design methods that address these issues. Then we propose an approximate-computing-based information hiding approach that provides security at low power. We demonstrate that this security primitive can be applied in security applications such as digital watermarking, fingerprinting, device authentication, and lightweight encryption.
Because username-password authentication suffers from easy disclosure and low reliability, and because managing an excess of passwords severely degrades the user experience, users are eager to escape the bonds of the password and seek new ways of authenticating. Multifactor biometrics-based user authentication has therefore won favor, with its advantages of simplicity, convenience, and high reliability, especially in the mobile payment environment. Unfortunately, in existing schemes, biometric information is stored on the server side; once the server is hacked and the fingerprint information leaks, user privacy is under deadly threat. To address this security problem of fingerprint information in the mobile payment environment, we propose a novel multifactor two-server authentication scheme under mobile computing (MTSAS). MTSAS separates the authentication method from the authentication means, while the user's biometric characteristics never leave the user's device. MTSAS also chooses different authentication factors depending on the privacy level of the authentication, and thus provides authentication at different security levels. An analysis in BAN logic proves that MTSAS achieves the purpose of authentication and meets the security requirements. In comparison with other schemes, our analysis shows that MTSAS not only has reasonable computational efficiency but also a superior communication cost.
Multifactor authentication is a robust security method, but it typically requires multiple steps on the part of the user, resulting in a high cost to usability and limiting adoption. Furthermore, a truly usable system must be unobtrusive and inconspicuous. Here, we present a system that provides all three factors of authentication (knowledge, possession, and inherence) in a single step, in the form of an earpiece that implements brain-based authentication via custom-fit, in-ear electroencephalography (EEG). We demonstrate its potential by collecting EEG data using manufactured custom-fit earpieces with embedded electrodes. Across 7 participants, we achieve perfect performance, with a mean 0% false acceptance rate (FAR) and 0% false rejection rate (FRR), using participants' best performing tasks collected in one session by one earpiece with three electrodes. Our results indicate that a single earpiece with embedded electrodes could provide a discreet, convenient, and robust method for secure one-step, three-factor authentication.
Given the plethora of interactions social media users engage in, appropriately controlling access to their information becomes a challenging task. Selecting the appropriate audience, even from within their own friend network, can be fraught with difficulties. PACMAN is a potential solution to this dilemma: a personal assistant agent that recommends personalized access control decisions based on the social context of any information disclosure, by incorporating communities generated from the user's network structure and utilizing information in the user's profile. PACMAN provides accurate recommendations while minimizing intrusiveness.
Web applications have become the leading solution for systems that require globally accessible, distributed, and cost-effective deployment, as well as for the diversity of content that can run on this technology. At the same time, web application security has always been a major concern, given that 60% of Internet attacks target the web application platform. One of the biggest threats to this technology is the Cross Site Scripting (XSS) attack, which occurs most frequently and is a constant entry in the Top 10 list of the Open Web Application Security Project (OWASP). Vulnerability to this attack arises from the absence of input checking, testing, and attention to secure coding practices. There are several alternatives for preventing the attacks associated with this threat; a Network Intrusion Detection System can be used as one solution to mitigate XSS attacks. This paper investigates XSS attack recognition and detection using regular expression pattern matching and a preprocessing method. Experiments are conducted on a testbed with the aim of revealing the behaviour of the attack.
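As a concrete, if simplified, illustration of regex-based XSS recognition with a preprocessing step, the sketch below normalizes a payload before matching; the patterns are invented examples, not the rule set used in the paper.

```python
# Illustrative regex-based XSS recognition with preprocessing: repeated
# URL-decoding and lowercasing defeat common encoding/case evasions.
import re
from urllib.parse import unquote

XSS_PATTERNS = [
    re.compile(r"<\s*script\b"),   # inline <script> tags
    re.compile(r"on\w+\s*="),      # event handlers, e.g. onerror=
    re.compile(r"javascript\s*:"), # javascript: URIs
]

def preprocess(payload: str) -> str:
    for _ in range(3):             # undo up to three layers of URL-encoding
        payload = unquote(payload)
    return payload.lower()

def looks_like_xss(payload: str) -> bool:
    normalized = preprocess(payload)
    return any(p.search(normalized) for p in XSS_PATTERNS)

print(looks_like_xss("%3Cscript%3Ealert(1)%3C/script%3E"))  # True
print(looks_like_xss("hello world"))                        # False
```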
In the last decade, numerous fake websites have been developed on the World Wide Web to mimic trusted websites, with the aim of stealing financial assets from users and organizations. This form of online attack is called phishing, and it has cost the online community and the various stakeholders hundreds of millions of dollars. Therefore, effective countermeasures that can accurately detect phishing are needed. Machine learning (ML) is a popular tool for data analysis and has recently shown promising results in combating phishing when contrasted with classic anti-phishing approaches such as awareness workshops, visualization, and legal solutions. This article investigates the applicability of ML techniques to detecting phishing attacks and describes their pros and cons. In particular, different types of ML techniques are investigated to reveal suitable options that can serve as anti-phishing tools. More importantly, we experimentally compare a large number of ML techniques on real phishing datasets with respect to different metrics. The purpose of the comparison is to reveal the advantages and disadvantages of ML predictive models and to show their actual performance on phishing attacks. The experimental results show that Covering approach models are the most appropriate as anti-phishing solutions, especially for novice users, because of their simple yet effective knowledge bases in addition to their good phishing detection rate.
Phishers often exploit users' trust in the appearance of a site by using webpages that are visually similar to an authentic site. In the past, various research studies have tried to identify and classify the factors contributing to the detection of phishing websites. The focus of this research is to establish a strong relationship between identified content-based heuristics and the legitimacy of a website by analyzing training sets of both phishing and legitimate websites, and in the process to discover new patterns and report findings. Many existing phishing detection tools are often not very accurate, as they depend mostly on a database of previously identified phishing websites; however, thousands of new phishing websites appear every year targeting financial institutions, cloud storage/file hosting sites, government websites, and others. This paper presents a framework called Phishing-Detective that detects phishing websites based on existing and newly found heuristics. For this framework, a web crawler was developed to scrape the contents of phishing and legitimate websites; these contents were analyzed to rate the heuristics and their contribution scale factors toward the illegitimacy of a website. The data set collected by the web scraper was then analyzed using a data mining tool to find patterns and report findings. A case study shows how this framework can be used to detect a phishing website. This research is still in progress but shows a new way of finding and using heuristics and the sum of their contributing weights to effectively and accurately detect phishing websites. Further development of this framework is discussed at the end of the paper.
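A hypothetical sketch of the weighted-sum idea follows: each content heuristic carries a contribution weight, and a page is flagged when the sum of the weights of the heuristics it exhibits crosses a threshold. The heuristic names, weights, and threshold below are invented for illustration, not values derived by Phishing-Detective.

```python
# Toy weighted-heuristic scoring for a webpage; weights are illustrative.
HEURISTIC_WEIGHTS = {
    "has_ip_in_url": 0.30,         # raw IP instead of a domain name
    "external_form_action": 0.25,  # form posts credentials to another domain
    "mismatched_anchors": 0.20,    # link text domain != href domain
    "no_https": 0.15,
    "recently_registered": 0.10,
}

def phishing_score(page_features: dict) -> float:
    # page_features maps heuristic name -> bool (does the page exhibit it?)
    return sum(w for name, w in HEURISTIC_WEIGHTS.items() if page_features.get(name))

def classify(page_features: dict, threshold: float = 0.5) -> str:
    return "phishing" if phishing_score(page_features) >= threshold else "legitimate"

suspect = {"has_ip_in_url": True, "external_form_action": True, "no_https": True}
print(round(phishing_score(suspect), 2), classify(suspect))  # 0.7 phishing
```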
Genetic algorithms are a group of mathematical models in computational science, inspired by evolution, that are now prominent among AI techniques. These algorithms preserve critical information by encoding solutions to a specific problem in a data structure and applying simple chromosome recombination operators. Genetic algorithms are optimizers, and the range of problems to which they apply is quite broad. Their global search rests on the basic principles of selection, crossover, and mutation. Neural networks, inspired by the human brain, supply data structures and algorithms for classifying data and for learning. Artificial intelligence (AI) is a field concerned with the many tasks performed naturally by humans; solving them with conventional computing methods has proved complicated. Applying neural network techniques creates an internal structure of rules by which a program can learn from examples to classify new inputs, in contrast to mining techniques. This paper proposes a phishing website classifier using polynomial neural networks improved by a genetic algorithm.
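To make the selection/crossover/mutation loop concrete, here is a minimal, self-contained genetic algorithm that evolves the coefficients of a toy quadratic (polynomial) decision function on synthetic data; it illustrates the principles named above, not the paper's actual classifier.

```python
# Minimal GA sketch: selection, one-point crossover, and Gaussian mutation
# evolve coefficients (a, b, c) of a toy decision rule a*x^2 + b*x + c > 0.
import random

random.seed(0)
# Synthetic labelled points: x, label (1 = "phishing", 0 = "legitimate").
DATA = [(-2, 1), (-1, 0), (0, 0), (1, 0), (2, 1), (3, 1)]

def predict(coeffs, x):
    a, b, c = coeffs
    return 1 if a * x * x + b * x + c > 0 else 0

def fitness(coeffs):
    return sum(predict(coeffs, x) == y for x, y in DATA)

def evolve(pop_size=30, generations=50):
    pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]               # selection: fittest half
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            cut = random.randint(1, 2)               # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < 0.2:                # mutation
                child[random.randrange(3)] += random.gauss(0, 0.3)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best), "of", len(DATA))
```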
Authentication is one of the key aspects of securing applications and systems alike. While most existing systems achieve it using usernames and passwords, it has been repeatedly shown that this authentication method is not secure. Studies have shown that such systems have vulnerabilities that lead to impersonation and identity theft, so they must be improved to protect sensitive data. In this research, we explore combining the user's location with traditional usernames and passwords as a multi-factor authentication system, to make authentication more secure. The idea involves comparing a user's mobile device location with that of the browser, and comparing the device's Bluetooth key with the key used during registration. We believe that by leveraging existing technologies such as Bluetooth and GPS we can reduce implementation costs while improving security.
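A minimal sketch of the two described checks, with invented parameters (a 1 km proximity bound) and data shapes: the device's GPS fix is compared against the browser's reported location, and the device's Bluetooth key is compared, in constant time, against the key stored at registration.

```python
# Hypothetical extra-factor checks: location proximity plus key equality.
import hmac
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in kilometres.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def extra_factors_pass(device_loc, browser_loc, bt_key, registered_key, max_km=1.0):
    near = haversine_km(*device_loc, *browser_loc) <= max_km
    key_ok = hmac.compare_digest(bt_key, registered_key)  # constant-time compare
    return near and key_ok

# True: locations are a few hundred metres apart and the keys match.
print(extra_factors_pass((52.520, 13.400), (52.522, 13.403), b"abc123", b"abc123"))
```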
Honeypots constitute an invaluable piece of technology that allows researchers and security practitioners to track the evolution of break-in techniques by attackers and discover new malicious IP addresses, hosts, and victims. Even though there has been a wealth of research where researchers deploy honeypots for a period of time and report on their findings, there is little work that attempts to understand how the underlying properties of a compromised system affect the actions of attackers. In this paper, we report on a four-month long study involving 102 medium-interaction honeypots where we vary a honeypot's location, difficulty of break-in, and population of files, observing how these differences elicit different behaviors from attackers. Moreover, we purposefully leak the credentials of dedicated, hard-to-brute-force, honeypots to hacking forums and paste-sites and monitor the actions of the incoming attackers. Among others, we find that, even though bots perform specific environment-agnostic actions, human attackers are affected by the underlying environment, e.g., executing more commands on honeypots with realistic files and folder structures. Based on our findings, we provide guidance for future honeypot deployments and motivate the need for having multiple intrusion-detection systems.
Homomorphic encryption technology can address data privacy concerns in the cloud environment, but many problems remain in accessing data that has been encrypted with a homomorphic algorithm in the cloud. In this paper, building on attribute-based encryption, we propose a fully homomorphic encryption scheme based on attribute encryption with an LSSS matrix. This scheme supports fine-grained and flexible access control, along with a "Query-Response" mechanism that enables users to efficiently retrieve desired data from cloud servers. In addition, the scheme supports considerable flexibility in revoking system privileges from users without updating keys on the client, which greatly reduces the client's burden. Finally, security analysis illustrates that the scheme resists collusion attacks, and a performance comparison with an existing CP-ABE scheme indicates that our scheme greatly reduces the computation cost for users.
Existing access control mechanisms are based on the concepts of identity enrolment and recognition, and assume that a recognized identity is a synonym for ethical actions; yet statistics over the years show that the most severe security breaches are the result of trusted, identified, and legitimate users who turned into malicious insiders. Insider threat damages range from intellectual property loss and fraud to information technology sabotage. As insider threat incidents evolve, there is demand for a non-identity-based authentication measure that rejects access to authorized individuals who have malicious intent. In this paper, we study the possibility of using the user's intention as an access control measure, via involuntary electroencephalogram reactions to visual stimuli. We propose intent-based access control (IBAC), which detects the intention of access based on the existence of knowledge about that intention. IBAC takes advantage of the robustness of the concealed information test to assess access risk. We use the intent and the intent motivation level to compute the access risk; based on the calculated risk and the accepted risk threshold, the system decides whether to grant or deny the access request. We assessed the model using experiments on 30 participants that demonstrated the robustness of the proposed solution.
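The abstract does not give the risk formula, so the decision rule below is only a toy rendering of the described flow: an access risk is computed from the intent signal and the motivation level, then compared against an accepted-risk threshold. The scoring function and threshold are assumptions, not the paper's model.

```python
# Toy IBAC-style decision rule; the risk formula below is an invented stand-in.
def access_risk(intent_score: float, motivation_level: float) -> float:
    # intent_score: confidence from the concealed-information test, in [0, 1]
    # motivation_level: normalized motivation estimate, in [0, 1]
    return intent_score * (0.5 + 0.5 * motivation_level)

def decide(intent_score: float, motivation_level: float, accepted_risk: float = 0.4) -> str:
    return "deny" if access_risk(intent_score, motivation_level) > accepted_risk else "grant"

print(decide(0.9, 0.8))  # deny: strong intent signal, high motivation
print(decide(0.2, 0.1))  # grant: risk falls below the accepted threshold
```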
Trustworthy and safe operation of the power grid's critical infrastructure relies on secure execution of low-level substation controller devices such as programmable logic controllers (PLCs). Currently, very few security protection solutions are deployed on these devices to ensure provenance control: executing only controller code that is developed by trusted parties and complies with safety/security policies defined by the code developer as well as the power grid operators. Resource-limited PLC controllers have become increasingly popular not only among legitimate system operators but also among malicious adversaries, as shown by the recent Stuxnet and BlackEnergy malware, which caused damage including unauthorized violations of infrastructural safety and integrity. We present PLCtrust, a domain-specific solution that dynamically deploys virtual micro security-perimeters, so-called capsules, together with the corresponding device-level runtime enforcement of power system safety policy. PLCtrust uses data taint analysis to monitor and control data flow among the capsules based on data-owner-defined policies. PLCtrust provides operators with a transparent and lightweight solution that addresses various safety-critical data protection requirements, and provides legitimate third-party controller code developers with a taint-aware programming interface for developing applications in compliance with the dynamic power system safety/security policies. Our experimental results on real-world settings show that PLCtrust is transparent to end-users while maintaining power grid safety with minimal performance overhead.
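The following is a much-simplified sketch of owner-defined taint policy enforcement in the spirit of the described capsules; the value wrapper, labels, and policy table are all invented for illustration. Taint labels propagate through computation, and any flow into a sink is checked against the policy first.

```python
# Simplified data taint tracking: values carry taint labels, labels propagate
# through arithmetic, and sink writes are checked against an owner policy.
class Tainted:
    def __init__(self, value, labels):
        self.value, self.labels = value, frozenset(labels)

    def __add__(self, other):
        o_val = other.value if isinstance(other, Tainted) else other
        o_lab = other.labels if isinstance(other, Tainted) else frozenset()
        return Tainted(self.value + o_val, self.labels | o_lab)  # taint propagates

# Policy: each sink lists the taint labels it is allowed to receive.
ALLOWED = {"actuator": {"sensor"}}

def write_sink(sink, data):
    if not data.labels <= ALLOWED.get(sink, set()):
        raise PermissionError(f"policy violation: {set(data.labels)} -> {sink}")
    print(f"{sink} <- {data.value}")

reading = Tainted(42, {"sensor"})
secret = Tainted(7, {"credentials"})
write_sink("actuator", reading + 1)           # allowed: only "sensor" taint
try:
    write_sink("actuator", reading + secret)  # blocked: "credentials" taint
except PermissionError as e:
    print(e)
```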
As a consequence of the recent development of situational awareness technologies for smart grids, the gathering and analysis of data from multiple sources offer a significant opportunity for enhanced fault diagnosis. In order to achieve improved accuracy for both fault detection and classification, a novel combined data analytics technique is presented and demonstrated in this paper. The proposed technique is based on a segmented approach to Bayesian modelling that provides probabilistic graphical representations of both electrical power and data communication networks. In this manner, the reliability of both the data communication and electrical power networks are considered in order to improve overall power system transmission line fault diagnosis.
Free text keystroke dynamics is a behavioral biometric with strong potential to offer unobtrusive and continuous user authentication. Unfortunately, due to limited data availability, free text keystroke dynamics have not been tested adequately. Based on a novel large dataset of free text keystrokes from our ongoing collection of behavior in natural settings, we present the first study to evaluate keystroke dynamics while respecting the temporal order of the data. Specifically, we evaluate the performance of different ways of forming a test sample using sessions, as well as a form of continuous authentication based on a sliding window over the keystroke time series. Instead of accumulating a new test sample of keystrokes, we update the previous sample with the keystrokes that occur in the immediately past sliding window of n minutes. We evaluate sliding windows of 1 to 5, 10, and 30 minutes. Our best performer, using a sliding window of 1 minute, achieves an FAR of 1% and an FRR of 11.5%. Lastly, we evaluate the sensitivity of the keystroke dynamics algorithm to short, quick insider attacks that last only several minutes, by artificially injecting different proportions of impostor keystrokes into the genuine test samples. For example, the evaluated algorithm is found to be able to detect insider attacks that last 2.5 minutes or longer with a probability of 98.4%.
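A small sketch of the sliding-window sample formation described above, assuming keystrokes arrive as (timestamp, key) pairs: rather than accumulating a fresh sample, the test sample at any moment is simply the keystrokes from the past n minutes.

```python
# Sliding-window test sample for continuous authentication: on each keystroke,
# events older than the window are dropped and the remainder form the sample.
from collections import deque

class SlidingWindowSample:
    def __init__(self, window_minutes=1.0):
        self.window_s = window_minutes * 60
        self.events = deque()  # (timestamp_s, key) pairs in time order

    def add(self, timestamp_s, key):
        self.events.append((timestamp_s, key))
        # Drop keystrokes that have slid out of the window.
        while self.events and self.events[0][0] < timestamp_s - self.window_s:
            self.events.popleft()

    def sample(self):
        return list(self.events)

w = SlidingWindowSample(window_minutes=1.0)
for t, k in [(0, "a"), (10, "b"), (65, "c")]:
    w.add(t, k)
print(w.sample())  # only the last 60 seconds remain: [(10, 'b'), (65, 'c')]
```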
Data provenance describes how data came to be in its present form. It includes data sources and the transformations that have been applied to them. Data provenance has many uses, from forensics and security to aiding the reproducibility of scientific experiments. We present CamFlow, a whole-system provenance capture mechanism that integrates easily into a PaaS offering. While there have been several prior whole-system provenance systems that captured a comprehensive, systemic and ubiquitous record of a system's behavior, none have been widely adopted. They either A) impose too much overhead, B) are designed for long-outdated kernel releases and are hard to port to current systems, C) generate too much data, or D) are designed for a single system. CamFlow addresses these shortcomings by: 1) leveraging the latest kernel design advances to achieve efficiency; 2) using a self-contained, easily maintainable implementation relying on a Linux Security Module, NetFilter, and other existing kernel facilities; 3) providing a mechanism to tailor the captured provenance data to the needs of the application; and 4) making it easy to integrate provenance across distributed systems. The provenance we capture is streamed and consumed by tenant-built auditor applications. We illustrate the usability of our implementation by describing three such applications: demonstrating compliance with data regulations; performing fault/intrusion detection; and implementing data loss prevention. We also show how CamFlow can be leveraged to capture meaningful provenance without modifying existing applications.
Each year, thousands of software vulnerabilities are discovered and reported to the public. Unpatched known vulnerabilities are a significant security risk. It is imperative that software vendors quickly provide patches once vulnerabilities are known and that users quickly install those patches as soon as they are available. However, most vulnerabilities are never actually exploited. Since writing, testing, and installing software patches can involve considerable resources, it would be desirable to prioritize the remediation of vulnerabilities that are likely to be exploited. Several published research studies have reported moderate success in applying machine learning techniques to the task of predicting whether a vulnerability will be exploited. These approaches typically use features derived from vulnerability databases (such as the summary text describing the vulnerability) or social media posts that mention the vulnerability by name. However, these prior studies share multiple methodological shortcomings that inflate the predictive power of their approaches. We replicate key portions of the prior work, compare their approaches, and show how the selection of training and test data critically affects the estimated performance of predictive models. The results of this study point to important methodological considerations that should be taken into account so that results reflect real-world utility.
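One such consideration is temporal leakage: randomly splitting time-stamped vulnerability records lets a model train on information from the future. The sketch below, with invented records and field names, shows a temporal split in which training uses only records disclosed before the test cutoff.

```python
# Temporal train/test split for time-stamped vulnerability records: training
# data must predate the test period, unlike a random split.
from datetime import date

records = [
    {"cve": "CVE-A", "disclosed": date(2015, 3, 1), "exploited": 0},
    {"cve": "CVE-B", "disclosed": date(2015, 9, 1), "exploited": 1},
    {"cve": "CVE-C", "disclosed": date(2016, 2, 1), "exploited": 0},
    {"cve": "CVE-D", "disclosed": date(2016, 8, 1), "exploited": 1},
]

def temporal_split(recs, cutoff):
    train = [r for r in recs if r["disclosed"] < cutoff]
    test = [r for r in recs if r["disclosed"] >= cutoff]
    return train, test

train, test = temporal_split(records, date(2016, 1, 1))
print([r["cve"] for r in train], [r["cve"] for r in test])
# ['CVE-A', 'CVE-B'] ['CVE-C', 'CVE-D']
```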
Cloud computing is emerging as a foundational technology for the future; expanding on the standards of utility computing, it is claimed to represent a wholly new paradigm for viewing and accessing computational resources. Because of security concerns, many customers hesitate to relocate their sensitive data to the cloud, despite enormous interest in cloud-based computing. Security is a serious obstacle, since many firms present an alluring target for intruders, and these concerns will continue to slow the advancement of distributed computing unless addressed; this makes honeypots a fitting subject for study. Distributed Denial of Service (DDoS) is an attack that threatens the availability of cloud services, so it is essential to investigate the most important features of DDoS defence procedures. This paper presents specific techniques that have been applied against DDoS attacks; these approaches are outlined, together with the technologies used against particular kinds of malfunction within the cloud.
Nowadays we are witnessing an unprecedented evolution in how we gather and process information. Technological advances in mobile devices, together with ubiquitous wireless connectivity, have brought about new information processing paradigms and opportunities for virtually all kinds of scientific and business activity. These new paradigms rest on three pillars: i) numerous powerful portable devices, operated by human intelligence, ubiquitous in space and available most of the time; ii) unlimited environment-sensing capabilities of the devices; and iii) fast networks connecting the devices to Internet information processing platforms and services. These pillars implement the concepts of crowdsourcing and collective intelligence: online services based on the massive participation of users and the capabilities of their devices, producing results and information that are "more than the sum of the parts". The EU project Privacy Flag relies on exactly these two concepts to mobilize roaming citizens to contribute, through crowdsourcing, information about risky applications and dangerous websites, whose processing may produce emergent threat patterns not evident in the contributed information alone, reflecting a collective intelligence action. Crowdsourcing and collective intelligence in this context have numerous advantages, such as raising privacy awareness among people. In this paper we summarize our work in this project and describe the capabilities and functionalities of the Privacy Flag Platform.