Bibliography
The dynamically changing landscape of DDoS threats increases the demand for advanced security solutions. The rise of massive IoT botnets enables attackers to mount high-intensity, short-duration “volatile ephemeral” attack waves in quick succession. Therefore, the standard human-in-the-loop security center paradigm is becoming obsolete. To battle this new breed of volatile DDoS threats, the intrusion detection system (IDS) needs to improve markedly, at least in reaction time and in automated response (mitigation). Designing such an IDS is a daunting task, as network operators are traditionally reluctant to act, at any speed, on potentially false alarms. The primary challenge of a low-reaction-time detection system is therefore maintaining a consistently low false alarm rate. This paper shows how a practical FPGA-based DDoS detection and mitigation system can successfully address this challenge. Besides verifying the model and algorithms with real traffic “in the wild”, we validate the low false alarm ratio. Accordingly, we describe a methodology for determining the false alarm ratio for each threat type involved, categorize the causes of false detection, and present our measurement results. As shown here, our methods can effectively mitigate volatile ephemeral DDoS attacks and are therefore usable in both human-out-of-the-loop and human-on-the-loop next-generation security solutions.
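As an illustration of the per-threat bookkeeping such a methodology implies, the following minimal Python sketch computes a false alarm ratio per threat type from reviewed alert records; the data model (threat-type labels paired with an analyst's true/false-positive verdict) is an assumption for illustration, not the paper's actual pipeline.

    # Hypothetical sketch: per-threat-type false alarm ratio from reviewed alerts.
    # Field names and the input format are assumptions, not the paper's data model.
    from collections import defaultdict

    def false_alarm_ratios(alerts):
        """alerts: iterable of (threat_type, is_true_positive) tuples."""
        totals = defaultdict(int)
        false_alarms = defaultdict(int)
        for threat_type, is_true_positive in alerts:
            totals[threat_type] += 1
            if not is_true_positive:
                false_alarms[threat_type] += 1
        return {t: false_alarms[t] / totals[t] for t in totals}

    # Example: SYN-flood alerts with two false positives out of five.
    print(false_alarm_ratios([("syn_flood", True), ("syn_flood", False),
                              ("syn_flood", True), ("syn_flood", False),
                              ("syn_flood", True)]))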
Distributed denial-of-service (DDoS) attacks remain an exceptional security risk, and mitigating them is in practice extremely difficult, particularly in the face of highly distributed attacks. Early detection of these attacks, through traffic sampling, is critical to protecting end users as well as wide-ranging, expensive network resources. With respect to DDoS attacks, the theoretical foundation, architecture, and algorithms of a honeypot have been characterized. At its core, the honeypot consists of intrusion prevention systems (IPSs) situated at the Internet Service Provider level. The IPSs then create a safety net to protect the hosts by exchanging selected traffic information. The evaluation of the honeypot relies on extensive simulations, and a complete dataset is introduced, showing the honeypot's operation and low overhead. The honeypot anticipates such attacks and shields the servers. Prevailing IDSs are generally tuned to detect only known, administrator-level system attacks; in contrast, the honeypot's adaptive design makes it effective against uncommon and anomalous malicious attacks.
At present, security on the Internet is a sensitive and important topic. Most computer science curricula in universities and institutes of higher education provide this knowledge in the form of computer and network security courses. Students in the information technology area therefore need some basic knowledge of security in order to prevent potential attacks and protect themselves from hackers or intruders. Unfortunately, network security concepts remain rather abstract when students learn them in a traditional lecture-based class. In this paper, to motivate students and help them learn better than in the traditional classroom, we propose a security game called “Lord of Secure”, a virtual reality (VR) game on Android for education. It is an alternative learning material for learners to gain knowledge about network security effectively. The game covers the main topics of network security, such as firewalls, IDS, IPS, and honeypots. Moreover, the game gives players knowledge about network security through the virtual world. The game also contains several quizzes, including a pretest and a posttest, so players can see how much knowledge they have gained by comparing their scores before and after playing the game.
The large adoption of cloud services in many business domains dramatically increases the need for effective solutions to improve the security of deployed services. The adoption of Security Service Level Agreements (Security SLAs) represents an effective way to formally state the security guarantees that a cloud service is able to provide. Even if security policies declared by the service provider are properly implemented before the service is deployed and launched, the actual security level tends to degrade over time, due to the knowledge of the exposed attack surface that attackers are progressively able to gain. In this paper, we present a Security SLA-driven MTD framework that allows MTD strategies to be applied to a cloud application by automatically switching among different admissible application configurations, in order to confuse the attackers and nullify their reconnaissance effort, while preserving the application Security SLA across reconfigurations.
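To make the reconfiguration idea concrete, here is a minimal, hypothetical Python sketch of SLA-constrained switching: configurations that fail the declared Security SLA are filtered out, and the application rotates among the remaining admissible ones. The configuration attributes and thresholds are invented for illustration and are not the framework's actual model.

    # Illustrative sketch of SLA-constrained moving target defense: periodically
    # switch to a randomly chosen configuration whose declared security properties
    # still satisfy the Security SLA. The configuration model is hypothetical.
    import random

    SLA_REQUIREMENT = {"tls_version": 1.2, "patch_level": 3}

    CONFIGURATIONS = [
        {"name": "cfg-a", "tls_version": 1.2, "patch_level": 3, "port": 8443},
        {"name": "cfg-b", "tls_version": 1.3, "patch_level": 4, "port": 9443},
        {"name": "cfg-c", "tls_version": 1.0, "patch_level": 2, "port": 7443},  # violates SLA
    ]

    def admissible(cfg):
        return all(cfg[k] >= v for k, v in SLA_REQUIREMENT.items())

    def next_configuration(current):
        candidates = [c for c in CONFIGURATIONS if admissible(c) and c["name"] != current]
        return random.choice(candidates)

    print(next_configuration("cfg-a")["name"])  # picks an SLA-compliant alternative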
Compile-time specialization and feature pruning through static binary rewriting have been proposed repeatedly as techniques for reducing the attack surface of large programs, and for minimizing the trusted computing base. We propose a new approach to attack surface reduction: dynamic binary lifting and recompilation. We present BinRec, a binary recompilation framework that lifts binaries to a compiler-level intermediate representation (IR) to allow complex transformations on the captured code. After transformation, BinRec lowers the IR back to a "recovered" binary, which is semantically equivalent to the input binary but has its unnecessary features removed. Unlike existing approaches, which are mostly based on static analysis and rewriting, our framework analyzes and lifts binaries dynamically. The crucial advantage is that we can not only observe the full program, including all of its dependencies, but also determine which program features the end user actually uses. We evaluate the correctness and performance of BinRec, and show that our approach enables aggressive pruning of unwanted features in COTS binaries.
The use of real-time video streaming is increasing day by day, and its security has become a serious issue. Video encryption is a challenging task because of the large frame sizes involved. Video encryption can be done with symmetric key as well as asymmetric key encryption. Among the different asymmetric key encryption techniques, ECC performs better than algorithms such as RSA in terms of smaller key size and faster encryption and decryption operations. In this work, we analyze the performance of 18 different ECC curves and suggest suitable curves for real-time video encryption.
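As a rough illustration of how ECC is commonly used for frame encryption in practice (a hybrid scheme in which ECDH key agreement derives a symmetric key), the following sketch uses the pyca/cryptography package, with SECP256R1 standing in for one of the evaluated curves; the paper's benchmark setup and curve list are not reproduced here.

    # Hedged sketch of ECC-based hybrid encryption for a single video frame,
    # assuming the pyca/cryptography package; SECP256R1 is one example curve.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    receiver_key = ec.generate_private_key(ec.SECP256R1())
    sender_key = ec.generate_private_key(ec.SECP256R1())

    # ECDH key agreement, then derive an AES key for the frame payload.
    shared = sender_key.exchange(ec.ECDH(), receiver_key.public_key())
    aes_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"frame-encryption").derive(shared)

    frame = os.urandom(64)            # stand-in for one raw video frame
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, frame, None)
    print(len(ciphertext))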
With the recent explosion of data breaches and data misuse cases, there is more demand than ever for secure system designs that fundamentally tackle today's data trust models. One promising alternative to today's trust model is true end-to-end encryption that compromises neither user experience nor data utility. Fully homomorphic encryption (FHE) provides a powerful tool for empowering users with more control over their data, while still benefiting from the computing services of remote providers, without trusting them with plaintext data. However, due to the complexity of fully homomorphic encryption, it has remained reserved exclusively for a small group of domain experts. With our system Marble, we make FHE accessible to the broader community of researchers and developers. Marble takes away the complexity of setup and configuration associated with FHE schemes and provides a familiar programming environment. Marble allows rapid feasibility assessment and development of FHE-based applications. More importantly, Marble benchmarks the overall performance of an FHE-based application as part of the feasibility assessment. With real-world application case studies, we show the practicality of Marble.
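Marble's own interface is not reproduced here, but the following hedged sketch, using the TenSEAL library as a stand-in, illustrates the "compute on encrypted data as if it were plaintext" programming style that such frameworks aim for; the scheme parameters are illustrative defaults.

    # Sketch of the FHE programming style Marble targets, using TenSEAL (CKKS)
    # as a stand-in library; parameters are illustrative, not Marble's defaults.
    import tenseal as ts

    ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
    ctx.generate_galois_keys()
    ctx.global_scale = 2 ** 40

    enc = ts.ckks_vector(ctx, [1.0, 2.0, 3.0])   # encrypt a small vector
    result = enc * 2 + enc                        # computed entirely on ciphertexts
    print(result.decrypt())                       # approximately [3.0, 6.0, 9.0]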
In this vision paper, we focus on a key aspect of the modern software developer's potential to write secure software: their (lack of) success in securely using cryptography APIs. In particular, we note that most ongoing research tends to focus on identifying concrete problems software developers experience, and providing workable solutions, but that such solutions still require developers to identify the appropriate API calls to make and, worse, to be familiar with and configure sometimes obscure parameters of such calls. In contrast, we envision identifying and employing targeted visual metaphors to allow developers to simply select the most appropriate cryptographic functionality they need.
Deprecation is a language feature that allows API producers to mark a feature as obsolete. We aim to gain a deep understanding of the needs of API producers and consumers alike regarding deprecation. To that end, we investigate why API producers deprecate features, whether they remove deprecated features, how they expect consumers to react, and what prompts an API consumer to react to deprecation. To achieve this goal, we conduct semi-structured interviews with 17 third-party Java API producers and survey 170 Java developers. We observe that the current deprecation mechanism in Java and the proposal to enhance it do not address all the needs of developers. This leads us to propose and evaluate three further enhancements to the deprecation mechanism.
Reuse of pre-existing industry datasets for research purposes requires a multi-stakeholder solution that balances the researcher's analysis objectives with the need to engage the industry data custodian, whilst respecting the privacy rights of human data subjects. Current methods place the burden on the data custodian, who may not be sufficiently trained to fully appreciate the nuances of data de-identification. Through modelling of functional, quality, and emotional goals, we propose a de-identification in the cloud approach whereby the researcher proposes analyses along with the extraction and de-identification operations, while engaging the industry data custodian with secure control over authorising the proposed analyses. We demonstrate our approach through implementation of a de-identification portal for sports club data.
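A minimal sketch of the kind of de-identification operations a researcher might propose for custodian approval is shown below, assuming pseudonymisation of identifiers and generalisation of quasi-identifiers; the field names and salting scheme are illustrative, not the portal's actual implementation.

    # Hypothetical de-identification operations: pseudonymise identifiers and
    # generalise quasi-identifiers before data leaves the custodian's control.
    import hashlib

    SALT = b"custodian-held-secret"   # assumed to stay with the data custodian

    def pseudonymise(member_id: str) -> str:
        return hashlib.sha256(SALT + member_id.encode()).hexdigest()[:16]

    def generalise_age(age: int) -> str:
        low = (age // 10) * 10
        return f"{low}-{low + 9}"

    record = {"member_id": "M10442", "age": 27, "injury_count": 3}
    deidentified = {"member": pseudonymise(record["member_id"]),
                    "age_band": generalise_age(record["age"]),
                    "injury_count": record["injury_count"]}
    print(deidentified)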
First standardized by the IETF in the 1990s, SSL/TLS is the most widely-used encryption protocol on the Internet. This makes it imperative to study its usage across different platforms and applications to ensure proper usage and robustness against attacks and vulnerabilities. While previous efforts have focused on the usage of TLS in the desktop ecosystem, there have been no studies of TLS usage by mobile apps at scale. In our study, we use anonymized data collected by the Lumen mobile measurement app to analyze TLS usage by Android apps in the wild. We analyze and fingerprint handshake messages to characterize the TLS APIs and libraries that apps use, and evaluate their weaknesses. We find that 84% of apps use the default TLS libraries provided by the operating system, and the remaining apps use other TLS libraries for various reasons, such as using TLS extensions and features that are not supported by the Android TLS libraries, some of which are also not standardized by the IETF. Our analysis reveals the strengths and weaknesses of each approach, demonstrating that the path to improving TLS security in the mobile platform is not straightforward. Based on work published at: Abbas Razaghpanah, Arian Akhavan Niaki, Narseo Vallina-Rodriguez, Srikanth Sundaresan, Johanna Amann, and Phillipa Gill. 2017. Studying TLS Usage in Android Apps. In Proceedings of CoNEXT '17. ACM, New York, NY, USA, 13 pages. https://doi.org/10.1145/3143361.3143400
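One simple way to picture handshake fingerprinting is to reduce the offered cipher-suite list to a stable hash; the sketch below does this for the local OpenSSL defaults as a stand-in for a ClientHello captured from an app. The Lumen pipeline itself is more involved and is not reproduced here.

    # Toy fingerprinting idea: identify a TLS stack by hashing the ordered list
    # of cipher suites it offers; local defaults stand in for a captured ClientHello.
    import hashlib
    import ssl

    def cipher_fingerprint(cipher_names):
        return hashlib.sha1(",".join(cipher_names).encode()).hexdigest()[:12]

    offered = [c["name"] for c in ssl.create_default_context().get_ciphers()]
    print(cipher_fingerprint(offered))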
Adversaries are conducting attack campaigns with increasing levels of sophistication. Additionally, with the prevalence of out-of-the-box toolkits that simplify attack operations during different stages of an attack campaign, multiple new adversaries and attack groups have appeared over the past decade. Characterizing the behavior and the modus operandi of different adversaries is critical in identifying the appropriate security maneuver to detect and mitigate the impact of an ongoing attack. To this end, in this paper, we study two characteristics of an adversary: risk-averseness and experience level. Risk-averse adversaries are more cautious during their campaign, while fledgling adversaries do not wait to develop adequate expertise and knowledge before launching attack campaigns. One manifestation of these characteristics is the adversary's choice and usage of attack tools. To detect these characteristics, we present multi-level machine learning (ML) models that use network data generated while under attack by different attack tools and usage patterns. In particular, for risk-averseness, we considered different configurations for scanning tools and trained the models in a testbed environment. The resulting model was used to predict the cautiousness of different red teams that participated in the Cyber Shield '16 exercise. The predictions matched the expected behavior of the red teams. For experience level, we considered publicly available remote access tools and usage patterns. We developed a Markov model to simulate usage patterns of attackers with different levels of expertise and, through experiments on CyberVAN, we showed that the ML model has a high accuracy.
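A toy version of the Markov-chain simulation idea is sketched below: tool-usage sequences are generated from per-experience-level transition probabilities. The states and probabilities are invented for illustration and do not come from the paper.

    # Toy Markov-chain sketch of simulating tool-usage sequences for attackers of
    # different experience levels; states and probabilities are invented.
    import random

    TRANSITIONS = {
        "novice": {"scan": {"scan": 0.6, "exploit": 0.4},
                   "exploit": {"scan": 0.7, "exploit": 0.3}},
        "expert": {"scan": {"scan": 0.2, "exploit": 0.8},
                   "exploit": {"scan": 0.1, "exploit": 0.9}},
    }

    def simulate(level, steps=10, start="scan"):
        state, trace = start, [start]
        for _ in range(steps - 1):
            probs = TRANSITIONS[level][state]
            state = random.choices(list(probs), weights=list(probs.values()))[0]
            trace.append(state)
        return trace

    print(simulate("novice"))
    print(simulate("expert"))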
We present a novel method for static analysis in which we combine data-flow analysis with machine learning to detect SQL injection (SQLi) and Cross-Site Scripting (XSS) vulnerabilities in PHP applications. We assembled a dataset from the National Vulnerability Database and the SAMATE project, containing vulnerable PHP code samples and their patched versions in which the vulnerability is solved. We extracted features from the code samples by applying data-flow analysis techniques, including reaching definitions analysis, taint analysis, and reaching constants analysis. We used these features in machine learning to train various probabilistic classifiers. To demonstrate the effectiveness of our approach, we built a tool called WIRECAML, and compared our tool to other tools for vulnerability detection in PHP code. Our tool performed best for detecting both SQLi and XSS vulnerabilities. We also tried our approach on a number of open-source software applications, and found a previously unknown vulnerability in a photo-sharing web application.
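The feature-vector-plus-classifier step can be pictured with the following hedged sketch: each code sample is reduced to a few boolean data-flow features and fed to a probabilistic classifier. The features shown here are invented placeholders, whereas the paper derives its features from reaching-definitions, taint, and reaching-constants analyses.

    # Hedged sketch of training a probabilistic classifier on data-flow features;
    # the three boolean features and the tiny dataset are illustrative only.
    from sklearn.naive_bayes import BernoulliNB

    # features: [tainted_var_reaches_sink, sanitizer_on_path, constant_query_only]
    X = [[1, 0, 0], [1, 1, 0], [0, 0, 1], [1, 0, 1], [0, 1, 1], [1, 1, 1]]
    y = [1, 0, 0, 1, 0, 0]   # 1 = vulnerable sample, 0 = patched sample

    clf = BernoulliNB().fit(X, y)
    print(clf.predict_proba([[1, 0, 0]]))   # probability the new sample is vulnerable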
Software vulnerabilities are a primary concern in the IT security industry, as malicious hackers who discover these vulnerabilities can often exploit them for nefarious purposes. However, complex programs, particularly those written in a relatively low-level language like C, are difficult to fully scan for bugs, even when both manual and automated techniques are used. Since analyzing code and making sure it is securely written has proven to be a non-trivial task, both static analysis and dynamic analysis techniques have been heavily investigated; this work focuses on the former. The contribution of this paper is a demonstration of how it is possible to catch a large percentage of bugs by extracting text features from functions in C source code and analyzing them with a machine learning classifier. Relatively simple features (character count, character diversity, entropy, maximum nesting depth, arrow count, "if" count, "if" complexity, "while" count, and "for" count) were extracted from these functions, as were complex features (character n-grams, word n-grams, and suffix trees). Unexpectedly, the simple features outperformed the complex features (74% accuracy versus 69%).
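A sketch of how such simple features might be extracted from a C function body is shown below; the exact feature definitions used in the paper (for example, how nesting depth or "if" complexity is counted) may differ.

    # Illustrative extraction of simple text features from a C function body.
    import math
    from collections import Counter

    def simple_features(c_source: str):
        counts = Counter(c_source)
        entropy = -sum((n / len(c_source)) * math.log2(n / len(c_source))
                       for n in counts.values())
        depth, max_depth = 0, 0
        for ch in c_source:               # brace-based nesting depth
            if ch == "{":
                depth += 1
                max_depth = max(max_depth, depth)
            elif ch == "}":
                depth -= 1
        return {
            "char_count": len(c_source),
            "char_diversity": len(counts),
            "entropy": entropy,
            "max_nesting_depth": max_depth,
            "arrow_count": c_source.count("->"),
            "if_count": c_source.count("if ("),
            "while_count": c_source.count("while"),
            "for_count": c_source.count("for"),
        }

    print(simple_features("int f(int *p){ if (p) { return p->x; } return 0; }"))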
Although the importance of mobile applications grows every day, recent vulnerability reports indicate that applications often fail to meet modern security standards. Testing strategies alleviate the problem by identifying security violations in software implementations. This paper proposes a novel testing methodology that applies state machine learning to mobile Android applications, in combination with algorithms that discover attack paths in the learned state machine. The presence of an attack path is evidence of a vulnerability in the mobile application. We apply our methods to real-life apps and show that the novel methodology is capable of identifying vulnerabilities.
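Once a state machine of the app has been learned, attack-path discovery reduces to a graph search; the following sketch finds a shortest input sequence from the start state to a state flagged as insecure. The example machine and the insecure-state labeling are invented for illustration.

    # Toy attack-path search over a learned state machine (breadth-first search).
    from collections import deque

    STATE_MACHINE = {            # state -> {input_action: next_state}
        "start": {"open_deeplink": "webview", "login": "home"},
        "home": {"logout": "start"},
        "webview": {"load_file_url": "local_file_access"},
        "local_file_access": {},
    }
    INSECURE_STATES = {"local_file_access"}   # hypothetical vulnerability label

    def attack_path(start="start"):
        queue = deque([(start, [])])
        seen = {start}
        while queue:
            state, path = queue.popleft()
            if state in INSECURE_STATES:
                return path
            for action, nxt in STATE_MACHINE[state].items():
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [action]))
        return None

    print(attack_path())   # e.g. ['open_deeplink', 'load_file_url']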
Browser extensions are a way for third-party developers to provide additional functionality on top of the traditional functionality provided by a browser. It has been shown that the browser extension platform can be used by hackers to carry out sophisticated attacks, including phishing, spying, DDoS, email spamming, affiliate fraud, malvertising, and payment fraud. In this paper, we showcase the vulnerability of current browsers to these attacks, taking Google Chrome, a popular browser, as the case study. The paper also discusses the technical reasons that make it possible for attackers to launch such attacks via browser extensions. A set of suggestions and solutions that can thwart these attacks is also discussed.
Continuous Integration (CI) services, which can automatically build, test, and deploy software projects, are an invaluable asset in distributed teams, increasing productivity and helping to maintain code quality. Prior work has shown that CI pipelines can be sophisticated, and choosing and configuring a CI system involves tradeoffs. As CI technology matures, new CI tool offerings arise to meet the distinct wants and needs of software teams, as they negotiate a path through these tradeoffs, depending on their context. In this paper, we begin to uncover these nuances, and tell the story of open-source projects falling out of love with Travis, the earliest and most popular cloud-based CI system. Using logistic regression, we quantify the effects that open-source community factors and project technical factors have on the rate of Travis abandonment. We find that increased build complexity reduces the chances of abandonment, that larger projects abandon at higher rates, and that a project's dominant language has significant but varying effects. Finally, we find the surprising result that metrics of configuration attempts and knowledge dispersion in the project do not affect the rate of abandonment.
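The regression setup can be pictured with the short sketch below: project-level factors predict whether a project abandoned Travis. The feature names mirror the abstract, but the data and the resulting coefficients are invented for illustration.

    # Sketch of a logistic regression over project factors; data is invented.
    from sklearn.linear_model import LogisticRegression

    # features: [log(project_size), build_complexity, knowledge_dispersion]
    X = [[3.2, 5, 0.4], [6.1, 2, 0.7], [5.5, 9, 0.5], [7.0, 3, 0.2],
         [4.1, 8, 0.9], [6.8, 1, 0.3]]
    y = [0, 1, 0, 1, 0, 1]   # 1 = project abandoned Travis

    model = LogisticRegression().fit(X, y)
    print(dict(zip(["size", "build_complexity", "dispersion"], model.coef_[0])))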
To improve customer experience, datacenter operators offer support for simplifying application and resource management. For example, running workloads of workflows on behalf of customers is desirable, but requires increasingly more sophisticated autoscaling policies, that is, policies that dynamically provision resources for the customer. Although selecting and tuning autoscaling policies is a challenging task for datacenter operators, so far relatively few studies investigate the performance of autoscaling for workloads of workflows. Complementing previous knowledge, in this work we propose the first comprehensive performance study in the field. Using trace-based simulation, we compare state-of-the-art autoscaling policies across multiple application domains, workload arrival patterns (e.g., burstiness), and system utilization levels. We further investigate the interplay between autoscaling and regular allocation policies, and the complexity cost of autoscaling. Our quantitative study focuses not only on traditional performance metrics and on state-of-the-art elasticity metrics, but also on time- and memory-related autoscaling-complexity metrics. Our main results give strong and quantitative evidence about previously unreported operational behavior, for example, that autoscaling policies perform differently across application domains and that allocation and provisioning policies should be co-designed.
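A toy trace-driven sketch of such a comparison is given below: a bursty arrival trace is replayed against a static allocation and a naive reactive autoscaling policy, and a crude backlog metric is reported. The study's actual policies, traces, and metrics are far richer than this illustration.

    # Toy trace-driven comparison of a static allocation vs. a reactive policy.
    def simulate(trace, policy, capacity=4):
        queued, backlog = 0, 0
        for demand in trace:
            capacity = policy(capacity, queued + demand)   # provision resources
            served = min(capacity, queued + demand)
            queued = queued + demand - served
            backlog += queued                              # crude lateness metric
        return backlog

    reactive = lambda cap, load: max(1, load)   # scale capacity to current load
    static = lambda cap, load: cap              # never change the allocation

    trace = [2, 8, 1, 0, 12, 3, 3]              # bursty arrival pattern
    print(simulate(trace, static), simulate(trace, reactive))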
The current AI revolution provides us with many new, but often very complex, algorithmic systems. This complexity limits not only understanding but also acceptance of, for example, deep learning methods. In recent years, explainable AI (XAI) has been proposed as a remedy. However, this research is rarely supported by publications on explanations from the social sciences. We suggest a bottom-up approach to explanations for (game) AI, starting from a baseline definition of understandability informed by the concept of limited human working memory. We detail our approach and demonstrate its application to two games from the GVGAI framework. Finally, we discuss our vision of how additional concepts from the social sciences can be integrated into our proposed approach and how the results can be generalised.