Biblio
In this paper, we quantify the effectiveness of third-party tracker blockers on a large scale. First, we analyze the architecture of various state-of-the-art blocking solutions and discuss the advantages and disadvantages of each method. Second, we perform a two-part measurement study on the effectiveness of popular tracker-blocking tools. Our analysis quantifies the protection offered against trackers present on more than 100,000 popular websites and 10,000 popular Android applications. We provide novel insights into the ongoing arms race between trackers and developers of blocking tools, as well as into which tools achieve the best results under which circumstances. Among other findings, we discover that rule-based browser extensions outperform learning-based ones, trackers with smaller footprints are more successful at avoiding being blocked, and CDNs pose a major threat towards the future of tracker-blocking tools. Overall, the contributions of this paper advance the field of web privacy by providing not only the largest study to date on the effectiveness of tracker-blocking tools, but also by highlighting the most pressing challenges and privacy issues of third-party tracking.
Electronic computers have evolved from exiguous experimental enterprises in the 1940s to prolific practical data processing systems in the 1980s. As we have come to rely on these systems to process and store data, we have also come to wonder about their ability to protect valuable data.
Data security is the science and study of methods of protecting data in computer and communication systems from unauthorized disclosure and modification. The goal of this book is to introduce the mathematical principles of data security and to show how these principles apply to operating systems, database systems, and computer networks. The book is for students and professionals seeking an introduction to these principles. There are many references for those who would like to study specific topics further.
Data security has evolved rapidly since 1975. We have seen exciting developments in cryptography: public-key encryption, digital signatures, the Data Encryption Standard (DES), key safeguarding schemes, and key distribution protocols. We have developed techniques for verifying that programs do not leak confidential data, or transmit classified data to users with lower security clearances. We have found new controls for protecting data in statistical databases--and new methods of attacking these databases. We have come to a better understanding of the theoretical and practical limitations to security.
This article was identified by the SoS Best Scientific Cybersecurity Paper Competition Distinguished Experts as a Science of Security Significant Paper. The Science of Security Paper Competition was developed to recognize and honor recently published papers that advance the science of cybersecurity. During the development of the competition, members of the Distinguished Experts group suggested that listing papers that made outstanding contributions, empirical or theoretical, to the science of cybersecurity in earlier years would also benefit the research community.
Cyber infrastructures are highly vulnerable to intrusions and other threats. The main challenges in cloud computing are the failure of data centres, the recovery of lost data, and the provision of a data-security system. This paper proposes Virtualization and Data Recovery to create a virtual environment and recover lost data from data servers and agents, thereby providing data security in a cloud environment. A Cloud Manager is used to manage the virtualization and to handle faults. An erasure-code algorithm is used to recover the data: it first separates the data into n parts, then encrypts them and stores them in data servers. Changes made by a semi-trusted third party or by malware to data stored in data centres can be identified by artificial-intelligence methods using agents. The Java Agent Development Framework (JADE) is a tool for developing agents; it facilitates communication between agents and enables the computing services in the system. The framework is designed and implemented in the Java programming language as a gateway or firewall to recover from data loss.
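The fragment-and-recover idea the abstract describes can be illustrated with a minimal sketch. The abstract does not specify the erasure code used, so single-fragment XOR parity stands in here; all function names are illustrative, and the encryption step is omitted.

```python
# Minimal sketch of erasure-coded storage: split data into n fragments,
# add one XOR parity fragment, and rebuild any single lost fragment from
# the survivors. The paper's actual erasure code is not reproduced here.
from functools import reduce

def split_with_parity(data: bytes, n: int):
    """Split data into n equal-size fragments plus one XOR parity fragment."""
    if len(data) % n:
        data += b"\x00" * (n - len(data) % n)  # pad to a multiple of n
    size = len(data) // n
    frags = [data[i * size:(i + 1) * size] for i in range(n)]
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), frags)
    return frags, parity

def recover(frags, parity, lost_index):
    """Rebuild the fragment at lost_index by XOR-ing the survivors and parity."""
    survivors = [f for i, f in enumerate(frags) if i != lost_index]
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                  survivors + [parity])

frags, parity = split_with_parity(b"cloud-data-secret!", 3)
assert recover(frags, parity, 1) == frags[1]
```

A real deployment would use a k-of-n code (e.g. Reed-Solomon) so that multiple simultaneous server failures can be tolerated; XOR parity tolerates exactly one.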
6LoWPAN technology realizes IPv6 packet transmission in IEEE 802.15.4-based WSNs, and 6LoWPAN is regarded as one of the ideal technologies for interconnecting WSNs and the Internet, which is key to building the IoT. Contiki is an open-source and highly portable multitasking operating system in which 6LoWPAN has been implemented. In Contiki, only several kilobytes of code and a few hundred bytes of memory are required to provide a multitasking environment and built-in TCP/IP support, which makes it especially suitable for memory-constrained embedded platforms. In this paper, a lightweight 6LoWPAN gateway based on Contiki is designed, and its hardware and software designs are described. A complex experimental environment is presented in which the gateway's capability of accessing the Internet is verified, and its performance in terms of average network delay and jitter is analyzed. The experimental results show that the gateway designed in this paper not only realizes the interconnection between 6LoWPAN networks and the Internet, but also has good network adaptability and stability.
In recent years, the damage caused by unauthorized access using bots has increased. Compared with attacks on conventional login screens, the success rate is higher and detection is more difficult. CAPTCHA is commonly used as a technology for preventing attacks by bots, but user experience declines as CAPTCHA difficulty rises to keep pace with advancing bots. As a solution, adaptive CAPTCHA difficulty settings combined with bot-detection technologies are considered. In this research, we focus on the Capy puzzle CAPTCHA, which is widely used in commercial services. We use a supervised machine-learning approach to detect bots. As training data, we use access logs to several Web services, adding flags to attacks by bots detected in the past. We extracted feature fields such as the HTTP User-Agent and information derived from the IP address (e.g., geographical information) from the access logs, and the dataset was investigated using supervised learning. Using XGBoost and LightGBM, we achieved an ROC-AUC score of more than 0.90, and further detected suspicious accesses from some ISPs that had no bot-discrimination flag.
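The pipeline this abstract describes (featurize log records, score them, evaluate with ROC-AUC) can be sketched in a few lines. The field names, toy records, and linear scorer below are illustrative stand-ins; the paper's actual XGBoost/LightGBM models are not reproduced, but the rank-based ROC-AUC computation is standard.

```python
# Sketch of log featurization and ROC-AUC evaluation for bot detection.
# Feature names and log records are illustrative only.

def featurize(record):
    """Map an access-log record to numeric features (toy stand-ins for the
    paper's User-Agent and IP-derived features)."""
    ua = record["user_agent"].lower()
    return [
        1.0 if "python" in ua or "curl" in ua else 0.0,  # scripted client?
        record["requests_per_min"] / 100.0,              # request rate
        1.0 if record["geo_mismatch"] else 0.0,          # IP-geo anomaly
    ]

def score(features):
    """Stand-in linear scorer (the paper uses XGBoost / LightGBM)."""
    weights = [0.5, 0.3, 0.2]
    return sum(w * f for w, f in zip(weights, features))

def roc_auc(labels, scores):
    """ROC-AUC via the rank (Mann-Whitney) statistic; assumes no score ties."""
    pairs = sorted(zip(scores, labels))
    pos = sum(labels)
    neg = len(labels) - pos
    rank_sum = sum(i + 1 for i, (_, y) in enumerate(pairs) if y == 1)
    return (rank_sum - pos * (pos + 1) / 2) / (pos * neg)

logs = [
    {"user_agent": "curl/7.68", "requests_per_min": 90, "geo_mismatch": True},
    {"user_agent": "python-requests", "requests_per_min": 60, "geo_mismatch": False},
    {"user_agent": "Mozilla/5.0", "requests_per_min": 4, "geo_mismatch": False},
    {"user_agent": "Mozilla/5.0", "requests_per_min": 7, "geo_mismatch": False},
]
labels = [1, 1, 0, 0]  # 1 = bot, 0 = human
scores = [score(featurize(r)) for r in logs]
print(roc_auc(labels, scores))  # 1.0 on this toy data
```

On real logs, gradient-boosted trees replace the linear scorer, and a held-out split is needed for an honest AUC estimate.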
If you’re involved in cybersecurity as a software developer, forensic investigator, or network administrator, this practical guide shows you how to apply the scientific method when assessing techniques for protecting your information systems. You’ll learn how to conduct scientific experiments on everyday tools and procedures, whether you’re evaluating corporate security systems, testing your own security product, or looking for bugs in a mobile game.
Once author Josiah Dykstra gets you up to speed on the scientific method, he helps you focus on standalone, domain-specific topics, such as cryptography, malware analysis, and system security engineering. The latter chapters include practical case studies that demonstrate how to use available tools to conduct domain-specific scientific experiments.
- Learn the steps necessary to conduct scientific experiments in cybersecurity
- Explore fuzzing to test how your software handles various inputs
- Measure the performance of the Snort intrusion detection system
- Locate malicious “needles in a haystack” in your network and IT environment
- Evaluate cryptography design and application in IoT products
- Conduct an experiment to identify relationships between similar malware binaries
- Understand system-level security requirements for enterprise networks and web services
Security is often treated as a secondary or non-functional feature of software, which influences how vendors and developers describe their products, often in terms of what they can do (use cases) or offer customers. However, the tide is beginning to turn as more experienced customers demand more secure and reliable software, giving priority to confidentiality, integrity, and privacy while using these applications. This paper presents the MOTH (Modeling Threats with Hybrid Techniques) framework, designed to help organizations secure their software assets from attackers and prevent SQL Injection Attacks (SQLIAs). By focusing on the attack vectors and vulnerabilities exploited by attackers and brainstorming possible attacks, developers and security experts can better strategize and specify the security requirements needed to create software impervious to SQLIAs. A live web application was considered as a case study in this research work; the results obtained from the hybrid models extensively expose vulnerabilities deep within the application, and resolution plans are proposed for closing the security holes exploited by SQLIAs.
Data generation and its utilization in important decision applications have been growing at an extremely fast pace, which has made data a valuable resource that must be rigorously protected from attackers. Cloud storage systems offer the promise of secure and elastic data-storage services that can adapt to changing storage requirements. Despite diligent efforts to protect data, recent successful attacks highlight the need to go beyond existing approaches centered on intrusion prevention, detection, and recovery mechanisms: most security mechanisms have a finite rate of failure, and with intrusions becoming more sophisticated and stealthy, the failure rate appears to be rising. In this paper we propose data fragmentation, followed by coding that introduces redundant fragments, and dispersal of the fragments to multiple, independent cloud storage systems, with each cloud handling only a single fragment. The paper proposes a multi-cloud fragmented storage system architecture and the design of the related software. A probabilistic analysis is carried out to quantify its intrusion-tolerance abilities.
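The probabilistic analysis the abstract mentions can be sketched under a simple independence assumption: if any k of the n dispersed fragments suffice to reconstruct the data, and each cloud is independently compromised with probability p, the breach probability is a binomial tail. The parameter values below are illustrative, not taken from the paper.

```python
# Sketch of intrusion-tolerance analysis for k-of-n fragment dispersal,
# assuming independent cloud compromises. Parameters are illustrative.
from math import comb

def breach_probability(n: int, k: int, p: float) -> float:
    """P(attacker compromises at least k of n independent clouds)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Example: 6 clouds, any 4 fragments reconstruct the data,
# each cloud independently breached with probability 0.1.
print(breach_probability(6, 4, 0.1))  # ~0.00127, vs 0.1 for a single cloud
```

The example shows the core trade-off: raising the reconstruction threshold k lowers the breach probability but also reduces tolerance to benign cloud failures.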
Phishing attacks have reached record volumes in recent years. Simultaneously, modern phishing websites are growing in sophistication by employing diverse cloaking techniques to avoid detection by security infrastructure. In this paper, we present PhishFarm: a scalable framework for methodically testing the resilience of anti-phishing entities and browser blacklists to attackers' evasion efforts. We use PhishFarm to deploy 2,380 live phishing sites (on new, unique, and previously-unseen .com domains), each using one of six different HTTP request filters based on real phishing kits. We reported subsets of these sites to 10 distinct anti-phishing entities and measured both the occurrence and timeliness of native blacklisting in major web browsers to gauge the effectiveness of protection ultimately extended to victim users and organizations. Our experiments revealed shortcomings in current infrastructure that allow some phishing sites to go unnoticed by the security community while remaining accessible to victims. We found that simple cloaking techniques representative of real-world attacks, including those based on geolocation, device type, or JavaScript, were effective in reducing the likelihood of blacklisting by over 55% on average. We also discovered that blacklisting did not function as intended in popular mobile browsers (Chrome, Safari, and Firefox), which left users of these browsers particularly vulnerable to phishing attacks. Following disclosure of our findings, anti-phishing entities are now better able to detect and mitigate several cloaking techniques (including those that target mobile users), and blacklisting has also become more consistent between desktop and mobile platforms, but work remains to be done by anti-phishing entities to ensure users are adequately protected. Our PhishFarm framework is designed for continuous monitoring of the ecosystem and can be extended to test future state-of-the-art evasion techniques used by malicious websites.
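An HTTP request filter of the kind PhishFarm evaluates is, at its core, a predicate over request attributes. The sketch below illustrates the three cloaking dimensions the abstract names (user agent, geolocation, device type); the field names and rules are illustrative, not taken from the real phishing kits.

```python
# Sketch of a cloaking-style HTTP request filter: serve the page only to
# request profiles matching targeted victims, hide it from crawlers.
# Attribute names and rules are illustrative.

CRAWLER_UA_HINTS = ("bot", "crawler", "spider", "curl")

def serve_page(request: dict) -> bool:
    """Return True if this request would be shown the cloaked content."""
    ua = request.get("user_agent", "").lower()
    if any(hint in ua for hint in CRAWLER_UA_HINTS):
        return False                        # user-agent cloaking
    if request.get("country") not in {"US", "CA"}:
        return False                        # geolocation cloaking
    if request.get("is_mobile") is not True:
        return False                        # device-type cloaking
    return True

# A crawler is refused; a matching mobile victim profile is served.
assert not serve_page({"user_agent": "Googlebot", "country": "US", "is_mobile": True})
assert serve_page({"user_agent": "Mozilla/5.0 (iPhone)", "country": "US", "is_mobile": True})
```

This is exactly why blacklist crawlers that probe from a single vantage point with a fixed user agent miss such sites: the measurement in the paper varies these request profiles systematically.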
From the Preface
As society rushes to digitize sensitive information and services, it is imperative that we adopt adequate security protections. However, such protections fundamentally conflict with the benefits we expect from commodity computers. In other words, consumers and businesses value commodity computers because they provide good performance and an abundance of features at relatively low costs. Meanwhile, attempts to build secure systems from the ground up typically abandon such goals, and hence are seldom adopted [Karger et al. 1991, Gold et al. 1984, Ames 1981].
In this book, a revised version of my doctoral dissertation, originally written while studying at Carnegie Mellon University, I argue that we can resolve the tension between security and features by leveraging the trust a user has in one device to enable her to securely use another commodity device or service, without sacrificing the performance and features expected of commodity systems. We support this premise over the course of the following chapters.
Introduction. This chapter introduces the notion of bootstrapping trust from one device or service to another and gives an overview of how the subsequent chapters fit together.
Background and related work. This chapter focuses on existing techniques for bootstrapping trust in commodity computers, specifically by conveying information about a computer's current execution environment to an interested party. This would, for example, enable a user to verify that her computer is free of malware, or that a remote web server will handle her data responsibly.
Bootstrapping trust in a commodity computer. At a high level, this chapter develops techniques to allow a user to employ a small, trusted, portable device to securely learn what code is executing on her local computer. While the problem is simply stated, finding a solution that is both secure and usable with existing hardware proves quite difficult.
On-demand secure code execution. Rather than entrusting a user's data to the mountain of buggy code likely running on her computer, in this chapter, we construct an on-demand secure execution environment which can perform security-sensitive tasks and handle private data in complete isolation from all other software (and most hardware) on the system. Meanwhile, non-security-sensitive software retains the same abundance of features and performance it enjoys today.
Using trustworthy host data in the network. Having established an environment for secure code execution on an individual computer, this chapter shows how to extend trust in this environment to network elements in a secure and efficient manner. This allows us to reexamine the design of network protocols and defenses, since we can now execute code on end hosts and trust the results within the network.
Secure code execution on untrusted hardware. Lastly, this chapter extends the user's trust one more step to encompass computations performed on a remote host (e.g., in the cloud). We design, analyze, and prove secure a protocol that allows a user to outsource arbitrary computations to commodity computers run by an untrusted remote party (or parties) who may subject the computers to both software and hardware attacks. Our protocol guarantees that the user can both verify that the results returned are indeed the correct results of the specified computations on the inputs provided, and protect the secrecy of both the inputs and outputs of the computations. These guarantees are provided in a non-interactive, asymptotically optimal (with respect to CPU and bandwidth) manner.
Thus, extending a user's trust, via software, hardware, and cryptographic techniques, allows us to provide strong security protections for both local and remote computations on sensitive data, while still preserving the performance and features of commodity computers.
The most popular CAPTCHA service in use today is Google reCAPTCHA v2, whose main offering is an image-based CAPTCHA challenge. This paper looks into the security measures used in reCAPTCHA v2's image challenges and proposes a deep learning-based solution that can be used to automatically solve them. The proposed method is tested with both a custom object-detection deep learning model as well as Google's own Cloud Vision API, in conjunction with human-mimicking mouse movements to bypass the challenges. The paper also suggests some potential defense measures to increase overall security and other additional attack directions for reCAPTCHA v2.
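The "human-mimicking mouse movements" component can be sketched independently of the vision model: instead of jumping straight to the target, generate a curved, jittered cursor path. A noisy quadratic Bezier curve is one common way to do this; the function name, control-point offsets, and noise scale below are illustrative assumptions, not the paper's implementation.

```python
# Sketch of human-like cursor trajectory generation via a quadratic
# Bezier curve with Gaussian jitter. Parameters are illustrative.
import random

def human_like_path(start, end, steps=30, jitter=2.0, seed=0):
    """Return (steps + 1) points along a noisy quadratic Bezier curve
    from start to end, with exact endpoints."""
    rng = random.Random(seed)
    (x0, y0), (x1, y1) = start, end
    # Offset the control point to one side of the straight line to bow the path.
    cx = (x0 + x1) / 2 + rng.uniform(-50, 50)
    cy = (y0 + y1) / 2 + rng.uniform(-50, 50)
    path = []
    for i in range(steps + 1):
        t = i / steps
        x = (1 - t) ** 2 * x0 + 2 * (1 - t) * t * cx + t ** 2 * x1
        y = (1 - t) ** 2 * y0 + 2 * (1 - t) * t * cy + t ** 2 * y1
        path.append((x + rng.gauss(0, jitter), y + rng.gauss(0, jitter)))
    path[0], path[-1] = start, end  # endpoints are exact
    return path

path = human_like_path((0, 0), (300, 200))
assert path[0] == (0, 0) and path[-1] == (300, 200)
```

In practice the points would be replayed with variable inter-point delays, since behavioral detectors look at timing as well as geometry.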
Due to the significant growth of the internet and of its number of users, there has been a tremendous increase in the load on websites and web-based applications. This user-generated load causes unforeseen conditions that lead to unacceptable consequences, such as a crash or data loss at the web-server end. There is therefore a need to reduce the load on the server, as well as the risk of network attacks, which grows with the user base. These undue consequences have two main causes: an overload of users, and an increased number of automated programs, or robots. One technique for mitigating this scenario is to introduce a delay in the operation speed on the user end through a CAPTCHA mechanism. Most classical approaches use a single method for CAPTCHA generation; to overcome this, the proposed model uses a versatile image-CAPTCHA generation mechanism. We introduce a system that utilizes manual-based, face-detection-based, colour-based, and random-object-insertion techniques to generate four different random types of CAPTCHA. The proposed methodology implements a region of interest and convolutional neural networks to generate the CAPTCHAs effectively.
Data-driven verification methods utilize execution data together with models for establishing safety requirements. These are often the only tools available for analyzing complex, nonlinear cyber-physical systems, for which purely model-based analysis is currently infeasible. In this chapter, we outline the key concepts and algorithmic approaches for data-driven verification and discuss the guarantees they provide. We introduce some of the software tools that embody these ideas and present several practical case studies demonstrating their application in safety analysis of autonomous vehicles, advanced driver assist systems (ADAS), satellite control, and engine control systems.