Biblio

Found 7504 results

Filters: Keyword is Metrics
2017-05-30
Bhatti, Saleem N., Phoomikiattisak, Ditchaphong, Simpson, Bruce.  2016.  IP Without IP Addresses. Proceedings of the 12th Asian Internet Engineering Conference. :41–48.

We discuss a key engineering challenge in implementing the Identifier-Locator Network Protocol (ILNP), as described in IRTF Experimental RFCs 6740–6748: enabling legacy applications that use the C sockets API. We have built the first two OS kernel implementations of ILNPv6 (ILNP as a superset of IPv6), in both the Linux OS kernel and the FreeBSD OS kernel. Our evaluation is in comparison with IPv6, in the context of a topical and challenging scenario: host mobility implemented as a purely end-to-end function. Our experiments show that ILNPv6 has excellent potential for deployment using existing IPv6 infrastructure, whilst offering the new properties and functionality of ILNP.
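
As a rough illustration of the identifier-locator split that ILNPv6 layers on top of the IPv6 address format (RFC 6741 places the 64-bit Locator in the high bits and the 64-bit Node Identifier in the low bits), here is a minimal Python sketch; the address value is hypothetical:

```python
import ipaddress

def split_ilnpv6(addr: str):
    """Split an ILNPv6 address (syntactically an IPv6 address) into its
    64-bit Locator (high bits) and 64-bit Node Identifier (low bits),
    following the encoding described in RFC 6741."""
    value = int(ipaddress.IPv6Address(addr))
    locator = value >> 64             # routing topology: "where"
    identifier = value & (2**64 - 1)  # node identity: "who"
    return locator, identifier

loc, nid = split_ilnpv6("2001:db8:aaaa:bbbb:123:4567:89ab:cdef")
print(hex(loc), hex(nid))
# During mobility only the Locator changes; the Identifier stays stable,
# which is why transport state keyed on the Identifier survives the move.
```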

Bajpai, Vaibhav, Schönwälder, Jürgen.  2016.  Measuring the Effects of Happy Eyeballs. Proceedings of the 2016 Applied Networking Research Workshop. :38–44.

The IETF has developed protocols that promote a healthy IPv4 and IPv6 co-existence. The Happy Eyeballs (HE) algorithm, for instance, prevents bad user experience in situations where IPv6 connectivity is broken. Using an active test (happy) that measures TCP connection establishment times, we evaluate the effects of the HE algorithm. The happy test measures against the Alexa top 10K websites from 80 SamKnows probes connected to dual-stacked networks representing 58 different ASes. Using a three-year-long (2013–2016) dataset, we show that TCP connect times to popular websites over IPv6 have considerably improved over time. As of May 2016, 18% of these websites are faster over IPv6, with 91% of the rest at most 1 ms slower. The historical trend shows that only around 1% of the TCP connect times over IPv6 were ever above the HE timer value (300 ms), which leaves around a 2% chance for IPv4 to win a HE race towards these websites. As such, 99% of these websites prefer IPv6 connections more than 98% of the time. We show that although absolute TCP connect times (in ms) are not that far apart in both address families, HE with a 300 ms timer value tends to prefer slower IPv6 connections in around 90% of the cases. We show that lowering the HE timer value to 150 ms gives a marginal benefit of 10% while retaining the same preference levels over IPv6.
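
The HE race the paper measures can be sketched as follows; this is a simplified model of the timer-based race, not the full RFC 6555 algorithm, and error handling is elided:

```python
import socket
import time
import concurrent.futures as cf

HE_TIMER = 0.300  # the 300 ms IPv6 head start studied in the paper

def connect(family, host, port):
    """Resolve host within one address family and time a TCP connect."""
    addr = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)[0][4]
    t0 = time.monotonic()
    with socket.socket(family, socket.SOCK_STREAM) as s:
        s.settimeout(5.0)
        s.connect(addr)
    return family, time.monotonic() - t0

def happy_eyeballs(host, port=80):
    """Give IPv6 a HE_TIMER head start; if it has not connected by then,
    race IPv4 and return whichever connect finishes first."""
    with cf.ThreadPoolExecutor(max_workers=2) as ex:
        v6 = ex.submit(connect, socket.AF_INET6, host, port)
        done, _ = cf.wait([v6], timeout=HE_TIMER)
        if done and v6.exception() is None:
            return v6.result()          # IPv6 won within the timer
        v4 = ex.submit(connect, socket.AF_INET, host, port)
        done, _ = cf.wait([v6, v4], return_when=cf.FIRST_COMPLETED)
        return done.pop().result()
```

Lowering HE_TIMER to 0.150 models the paper's proposed 150 ms variant.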

Ikram, Muhammad, Vallina-Rodriguez, Narseo, Seneviratne, Suranga, Kaafar, Mohamed Ali, Paxson, Vern.  2016.  An Analysis of the Privacy and Security Risks of Android VPN Permission-enabled Apps. Proceedings of the 2016 Internet Measurement Conference. :349–364.

Millions of users worldwide resort to mobile VPN clients to either circumvent censorship or to access geo-blocked content, and more generally for privacy and security purposes. In practice, however, users have few if any guarantees about the corresponding security and privacy settings, and perhaps no practical knowledge about the entities accessing their mobile traffic. In this paper we provide a first comprehensive analysis of 283 Android apps that use the Android VPN permission, which we extracted from a corpus of more than 1.4 million apps on the Google Play store. We perform a number of passive and active measurements designed to investigate a wide range of security and privacy features and to study the behavior of each VPN-based app. Our analysis includes investigation of possible malware presence, third-party library embedding, and traffic manipulation, as well as gauging user perception of the security and privacy of such apps. Our experiments reveal several instances of VPN apps that expose users to serious privacy and security vulnerabilities, such as use of insecure VPN tunneling protocols, as well as IPv6 and DNS traffic leakage. We also report on a number of apps actively performing TLS interception. Of particular concern are instances of apps that inject JavaScript programs for tracking, advertising, and for redirecting e-commerce traffic to external partners.
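
A minimal sketch of how such apps could be identified in a corpus: a VpnService implementation must guard its service entry with the BIND_VPN_SERVICE permission, so a decoded AndroidManifest.xml (e.g., produced by apktool) can be filtered for it. The paper's actual extraction pipeline is not described in the abstract; this is only an assumed approximation:

```python
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"
VPN_PERM = "android.permission.BIND_VPN_SERVICE"

def uses_vpn_permission(manifest_path: str) -> bool:
    """Return True if any <service> in a decoded AndroidManifest.xml is
    guarded by BIND_VPN_SERVICE, i.e. the app implements a VpnService."""
    root = ET.parse(manifest_path).getroot()
    for service in root.iter("service"):
        if service.get(ANDROID_NS + "permission") == VPN_PERM:
            return True
    return False
```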

Li, Jason, Yackoski, Justin, Evancich, Nicholas.  2016.  Moving Target Defense: A Journey from Idea to Product. Proceedings of the 2016 ACM Workshop on Moving Target Defense. :69–79.

In today's enterprise networks, there are many ways for a determined attacker to obtain a foothold, bypass current protection technologies, and attack the intended target. Over several years we have developed the Self-shielding Dynamic Network Architecture (SDNA) technology, which prevents an attacker from targeting, entering, or spreading through an enterprise network by adding dynamics that present a changing view of the network over space and time. SDNA was developed with the support of government sponsored research and development and corporate internal resources. The SDNA technology was purchased by Cryptonite, LLC in 2015 and has been developed into a robust product offering called Cryptonite NXT. In this paper, we describe the journey and lessons learned along the course of feasibility demonstration, technology development, security testing, productization, and deployment in a production network.

Vaughn, Jr., Rayford B., Morris, Tommy.  2016.  Addressing Critical Industrial Control System Cyber Security Concerns via High Fidelity Simulation. Proceedings of the 11th Annual Cyber and Information Security Research Conference. :12:1–12:4.

This paper outlines a set of 10 cyber security concerns associated with Industrial Control Systems (ICS). The concerns address software and hardware development, implementation, and maintenance practices, supply chain assurance, the need for cyber forensics in ICS, a lack of awareness and training, and finally, a need for test beds which can be used to address the first 9 cited concerns. The concerns documented in this paper were developed based on the authors' combined experience conducting research in this field for the US Department of Homeland Security, the National Science Foundation, and the Department of Defense. The second half of this paper documents a virtual test bed platform which is offered as a tool to address the concerns listed in the first half of the paper. The paper discusses various types of test beds proposed in the literature for ICS research, provides an overview of the virtual test bed platform developed by the authors, and lists future work required to extend the existing test beds to serve as a development platform.

Lacroix, Jesse, El-Khatib, Khalil, Akalu, Rajen.  2016.  Vehicular Digital Forensics: What Does My Vehicle Know About Me? Proceedings of the 6th ACM Symposium on Development and Analysis of Intelligent Vehicular Networks and Applications. :59–66.

A major component of modern vehicles is the infotainment system, which interfaces with the vehicle's drivers and passengers. Other mobile devices, such as handheld phones and laptops, can relay information to the embedded infotainment system through Bluetooth and vehicle WiFi. The ability to extract information from these systems would help forensic analysts determine the general content stored in an infotainment system. Based on the extracted data, analysts could determine what stored information is relevant to law enforcement agencies and what information is non-essential for solving criminal activities relating to the vehicle itself. Overall, this would strengthen the Intelligent Transport System and Vehicular Ad Hoc Network infrastructure in combating crime through the use of vehicle forensics. Additionally, determining the content of these systems will allow forensic analysts to know whether they can learn anything about the end-user directly and/or indirectly.

Gu, Yufei, Lin, Zhiqiang.  2016.  Derandomizing Kernel Address Space Layout for Memory Introspection and Forensics. Proceedings of the Sixth ACM Conference on Data and Application Security and Privacy. :62–72.

Modern OS kernels including Windows, Linux, and Mac OS have all adopted kernel Address Space Layout Randomization (ASLR), which shifts the base address of kernel code and data into different locations in different runs. Consequently, when performing introspection or forensic analysis of kernel memory, we cannot use any pre-determined addresses to interpret the kernel events. Instead, we must derandomize the address space layout and use the new addresses. However, few efforts have been made to derandomize the kernel address space, and many questions remain, such as which approach is more efficient and robust. Therefore, we present the first systematic study of how to derandomize a kernel when given a memory snapshot of a running kernel instance. Unlike the derandomization approaches used in traditional memory exploits, in which only remote access is available, with introspection and forensics applications we can use all the information available in kernel memory to generate signatures and derandomize the ASLR. In other words, there exists a large space of solutions for this problem. As such, in this paper we examine a number of typical approaches for generating strong signatures from both kernel code and data, based on the insight of how kernel code and data are updated, and compare them from the perspectives of efficiency (in terms of simplicity, speed, etc.) and robustness (e.g., whether the approach is hard to evade or forge). In particular, we have designed four approaches, including brute-force code scanning, patched code signature generation, unpatched code signature generation, and a read-only-pointer-based approach, according to the intrinsic behavior of kernel code and data with respect to kernel ASLR. We have obtained encouraging results for each of these approaches, and the corresponding experimental results are reported in this paper.
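
Of the four approaches named above, brute-force code scanning is the simplest to sketch: slide a relocation-free byte signature of a known kernel function across the snapshot and derive the ASLR slide from where it lands. All names and parameters below are hypothetical:

```python
def find_kaslr_slide(snapshot: bytes, signature: bytes,
                     known_vaddr: int, load_base: int,
                     step: int = 0x1000):
    """Brute-force code scanning: search the memory snapshot, one page
    granule at a time, for a pre-computed byte signature of a kernel
    function (the signature must avoid bytes changed by boot-time
    patching or relocation, as the paper discusses), then derive the
    randomization slide from expected vs. observed location."""
    for off in range(0, len(snapshot) - len(signature), step):
        if snapshot[off:off + len(signature)] == signature:
            found_vaddr = load_base + off
            return found_vaddr - known_vaddr   # the KASLR slide
    return None  # signature not found: snapshot truncated or evaded
```

With the slide in hand, every pre-determined symbol address can be rebased for introspection.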

Jadhao, Ankita R., Agrawal, Avinash J..  2016.  A Digital Forensics Investigation Model for Social Networking Site. Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. :130:1–130:4.

Social networking is fundamentally shifting the way we communicate, share ideas, and form opinions. People of every age group use social media and e-commerce sites for their needs. Nowadays, many illegal activities are carried out using social networks and instant messaging, and present systems are not capable of finding all suspicious words. In this paper, we provide a brief description of the problem and review the different frameworks developed so far. We propose a better system that can identify criminal activity on social networking sites more efficiently. We use the Ontology-Based Information Extraction (OBIE) technique to identify the domain of a word and association rule mining to generate rules. A heuristic method checks the user database for malicious users according to predefined elements, and the Naïve Bayes method is used to identify the context behind a message or post. The experimental results can be used by cyber-crime departments to take further action on a suspect.
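
A toy sketch of the Naïve Bayes step described above, using scikit-learn on made-up labelled posts (the paper's OBIE and association-rule components are not modeled here):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data standing in for posts labelled by investigators.
posts = ["meet at the dock with the package tonight",
         "happy birthday! see you at the party",
         "transfer the money before the deal closes",
         "great match last night, what a goal"]
labels = ["suspicious", "benign", "suspicious", "benign"]

# Bag-of-words features feeding a multinomial Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(posts, labels)
print(model.predict(["bring the package to the meeting"]))
```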

Al-Athamneh, M., Kurugollu, F., Crookes, D., Farid, M..  2016.  Video Authentication Based on Statistical Local Information. Proceedings of the 9th International Conference on Utility and Cloud Computing. :388–391.

With the growth of video editing tools, the trustworthiness of video information has become a highly sensitive issue. Today many devices, such as CCTV systems, digital cameras, and mobile phones, can capture digital video, and these videos may be transmitted over the Internet or other insecure channels. As digital video can be used as supporting evidence, it has to be protected against manipulation or tampering. Most video authentication techniques are based on watermarking and digital signatures; these are effective for copyright purposes but difficult to apply in other cases, such as video surveillance or videos captured by consumer cameras. In this paper we propose an intelligent technique for video authentication which uses local video information, making it useful for real-world applications. The proposed algorithm relies on the video's local statistical information and was evaluated on a dataset of videos captured by a range of consumer video cameras. The results show that the proposed algorithm has the potential to be a reliable intelligent technique for digital video authentication without the need for an SVM classifier, which makes it faster and less computationally expensive compared with other intelligent techniques.
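
Since the abstract does not specify the exact statistic, here is a generic stand-in showing what "local statistical information" might look like: per-block mean and variance of a grayscale frame, computed with NumPy. This is an assumed illustration, not the paper's method:

```python
import numpy as np

def block_signatures(frame: np.ndarray, b: int = 16) -> np.ndarray:
    """Per-block (mean, variance) statistics of a 2-D grayscale frame,
    a generic stand-in for a local statistical signature recorded at
    capture time. Tampering a region perturbs that region's statistics
    relative to the stored signature."""
    h, w = frame.shape
    sigs = []
    for y in range(0, h - b + 1, b):
        for x in range(0, w - b + 1, b):
            blk = frame[y:y + b, x:x + b].astype(np.float64)
            sigs.append((blk.mean(), blk.var()))
    return np.array(sigs)
```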

Xu, Zhang, Wu, Zhenyu, Li, Zhichun, Jee, Kangkook, Rhee, Junghwan, Xiao, Xusheng, Xu, Fengyuan, Wang, Haining, Jiang, Guofei.  2016.  High Fidelity Data Reduction for Big Data Security Dependency Analyses. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :504–516.

Intrusive multi-step attacks, such as Advanced Persistent Threat (APT) attacks, have plagued enterprises with significant financial losses and are the top reason for enterprises to increase their security budgets. Since these attacks are sophisticated and stealthy, they can remain undetected for years if individual steps are buried in background "noise." Thus, enterprises are seeking solutions to "connect the suspicious dots" across multiple activities. This requires ubiquitous system auditing for long periods of time, which in turn produces an overwhelmingly large number of system audit events. Given a limited system budget, efficiently handling ever-increasing system audit logs is a great challenge. This paper proposes a new approach that exploits the dependency among system events to reduce the number of log entries while still supporting high-quality forensic analysis. In particular, we first propose an aggregation algorithm that preserves the dependency of events during data reduction to ensure the high quality of forensic analysis. Then we propose an aggressive reduction algorithm and exploit domain knowledge for further data reduction. To validate the efficacy of our proposed approach, we conduct a comprehensive evaluation on real-world auditing systems using log traces of more than one month. Our evaluation results demonstrate that our approach can significantly reduce the size of system logs and improve the efficiency of forensic analysis without losing accuracy.
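
A minimal sketch of dependency-preserving reduction in the spirit described above, far simpler than the paper's algorithm: collapse adjacent repetitions of the same (subject, operation, object) event into one entry with a time window, which neither adds nor removes any subject-to-object dependency edge:

```python
from dataclasses import dataclass

@dataclass
class Event:
    subject: str  # e.g. a process
    op: str       # e.g. "read" or "write"
    obj: str      # e.g. a file or socket
    ts: float     # timestamp

def aggregate(events):
    """Collapse runs of identical (subject, op, obj) events into single
    entries carrying a [t_start, t_end] window. Because only adjacent
    duplicates merge, forensic backtracking over dependency edges is
    unaffected by the reduction."""
    out = []
    for e in sorted(events, key=lambda e: e.ts):
        if out and out[-1][:3] == (e.subject, e.op, e.obj):
            s, op, o, t0, _ = out[-1]
            out[-1] = (s, op, o, t0, e.ts)   # extend the time window
        else:
            out.append((e.subject, e.op, e.obj, e.ts, e.ts))
    return out
```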

Pasquini, Cecilia, Schöttle, Pascal, Böhme, Rainer, Boato, Giulia, Pérez-González, Fernando.  2016.  Forensics of High Quality and Nearly Identical JPEG Image Recompression. Proceedings of the 4th ACM Workshop on Information Hiding and Multimedia Security. :11–21.

We address the known problem of detecting a previous compression in JPEG images, focusing on the challenging case of high and very high quality factors (≥ 90) as well as repeated compression with identical or nearly identical quality factors. We first revisit the approaches based on Benford–Fourier analysis in the DCT domain and block convergence analysis in the spatial domain. Both were originally conceived for specific scenarios. Leveraging decision tree theory, we design a combined approach that complements their discriminatory capabilities. We obtain a set of novel detectors targeted at high quality grayscale JPEG images.
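
The Benford–Fourier line of detectors builds on the first-digit statistics of DCT coefficients; below is a sketch of the empirical first-digit histogram against the ideal Benford distribution (the paper's actual detector is considerably more involved):

```python
import math
from collections import Counter

def first_digit_histogram(dct_coeffs):
    """Empirical first-digit distribution of the non-zero DCT
    coefficients of an image. Singly compressed images tend to follow a
    (generalized) Benford law; recompression perturbs the distribution,
    which is what first-digit detectors exploit."""
    digits = [int(str(abs(int(c)))[0]) for c in dct_coeffs if int(c) != 0]
    n = len(digits)
    counts = Counter(digits)
    return {d: counts.get(d, 0) / n for d in range(1, 10)}

def benford(d: int) -> float:
    """Ideal Benford probability of leading digit d."""
    return math.log10(1 + 1 / d)
```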

Xu, Guanshuo, Wu, Han-Zhou, Shi, Yun Q..  2016.  Ensemble of CNNs for Steganalysis: An Empirical Study. Proceedings of the 4th ACM Workshop on Information Hiding and Multimedia Security. :103–107.

There has been growing interest in using convolutional neural networks (CNNs) in the fields of image forensics and steganalysis, and some promising results have been reported recently. These works mainly focus on the architectural design of CNNs; usually, a single CNN model is trained and then tested in experiments. It is known that neural networks, including CNNs, are well suited to forming ensembles. From this perspective, in this paper we employ CNNs as base learners and test several different ensemble strategies. In our study, a recently proposed CNN architecture is first adopted to build a group of CNNs, each trained on a random subsample of the training dataset. The output probabilities, or some intermediate feature representations, of each CNN are then extracted from the original data and pooled together to form new features ready for the second level of classification. To make the best use of the trained CNN models, we manage to partially recover the information lost due to spatial subsampling in the pooling layers when forming feature vectors. Performance of the ensemble methods is evaluated on BOSSbase by detecting S-UNIWARD at a 0.4 bpp embedding rate. Results indicate that both the recovery of the lost information and learning from intermediate representations in CNNs, instead of output probabilities, lead to performance improvements.
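
The simplest ensemble strategy of the kind evaluated here is averaging the output probabilities of the trained CNNs; a minimal sketch, assuming each model exposes a predict_proba method (a hypothetical interface, not a particular framework's API):

```python
import numpy as np

def ensemble_predict(models, x):
    """Average the per-class probability outputs of several CNN base
    learners (each trained on a different subsample) and take the
    argmax. `models` is any list of objects exposing
    predict_proba(x) -> array of shape (n_samples, n_classes)."""
    probs = np.mean([m.predict_proba(x) for m in models], axis=0)
    return probs.argmax(axis=1)
```

The paper's stronger variants replace the averaged probabilities with pooled intermediate feature representations fed to a second-level classifier.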

Shelke, Priya M., Prasad, Rajesh S..  2016.  Improving JPEG Image Anti-forensics. Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. :75:1–75:5.

This paper proposes a forensic method for identifying whether an image was previously compressed by JPEG, and also proposes an improved anti-forensics method to enhance the quality of noise-added images. Stamm and Liu's anti-forensics method disables the detection capabilities of various forensics methods proposed in the literature for identifying compressed images; however, it also degrades the quality of the image. First, we analyze the anti-forensics method and then use the decimal histogram of the coefficients to distinguish never-compressed images from previously compressed ones, even when the compressed image has been processed anti-forensically. After analyzing the noise distribution in the anti-forensic (AF) image, we propose a method to remove the Gaussian noise caused by image dithering, which in turn enhances the image quality. The paper is organized in the following manner: Section I is the introduction, containing previous literature. Section II summarizes the anti-forensics method proposed by Stamm et al. In Section III we propose a forensic approach, and Section IV comprises the improved anti-forensics approach. Section V covers the details of experimentation, followed by the conclusion.

Cuzzocrea, Alfredo, Pirrò, Giuseppe.  2016.  A Semantic-web-technology-based Framework for Supporting Knowledge-driven Digital Forensics. Proceedings of the 8th International Conference on Management of Digital EcoSystems. :58–66.

The usage of Information and Communication Technologies (ICTs) pervades everyday life. While ICT has contributed to improving the quality of our lives, it is also true that new forms of (cyber)crime have emerged in this setting. The diversity and amount of information forensic investigators need to cope with, when tackling a cyber-crime case, call for tools and techniques where knowledge is the main actor. Current approaches leave to the investigator the chore of integrating the diverse sources of evidence relevant for a case, thus hindering the automatic generation of reusable knowledge. This paper describes an architecture that lifts the classical phases of a digital forensic investigation to a knowledge-driven setting. We discuss how languages and technologies originating from the Semantic Web proposal can complement digital forensics tools so that knowledge becomes a first-class citizen. Our architecture makes it possible to perform complex forensic investigations in an integrated way and, as a by-product, build a knowledge base that can be consulted to gain insights from previous cases. Our proposal has been inspired by real-world scenarios emerging in the context of an Italian research project on cyber security.

2017-05-22
Castle, Sam, Pervaiz, Fahad, Weld, Galen, Roesner, Franziska, Anderson, Richard.  2016.  Let's Talk Money: Evaluating the Security Challenges of Mobile Money in the Developing World. Proceedings of the 7th Annual Symposium on Computing for Development. :4:1–4:10.

Digital money drives modern economies, and the global adoption of mobile phones has enabled a wide range of digital financial services in the developing world. Where there is money, there must be security, yet prior work on mobile money has identified discouraging vulnerabilities in the current ecosystem. We begin by arguing that the situation is not as dire as it may seem: many reported issues can be resolved by security best practices and updated mobile software. To support this argument, we diagnose the problems from two directions: (1) a large-scale analysis of existing financial service products and (2) a series of interviews with 7 developers and designers in Africa and South America. We frame this assessment within a novel, systematic threat model. In our large-scale analysis, we evaluate 197 Android apps and take a deeper look at 71 products to assess specific organizational practices. We conclude that although attack vectors are present in many apps, service providers are generally making intentional, security-conscious decisions. The developer interviews support these findings, as most participants demonstrated technical competency and experience, and all worked within established organizations with regimented code review processes and dedicated security teams.

Ramokapane, Kopo M., Rashid, Awais, Such, Jose M..  2016.  Assured Deletion in the Cloud: Requirements, Challenges and Future Directions. Proceedings of the 2016 ACM on Cloud Computing Security Workshop. :97–108.

Inadvertent exposure of sensitive data is a major concern for potential cloud customers. Much focus has been on other data leakage vectors, such as side channel attacks, while issues of data disposal and assured deletion have not received enough attention to date. However, data that is not properly destroyed may lead to unintended disclosures, in turn, resulting in heavy financial penalties and reputational damage. In non-cloud contexts, issues of incomplete deletion are well understood. To the best of our knowledge, to date, there has been no systematic analysis of assured deletion challenges in public clouds. In this paper, we aim to address this gap by analysing assured deletion requirements for the cloud, identifying cloud features that pose a threat to assured deletion, and describing various assured deletion challenges. Based on this discussion, we identify future challenges for research in this area and propose an initial assured deletion architecture for cloud settings. Altogether, our work offers a systematization of requirements and challenges of assured deletion in the cloud, and a well-founded reference point for future research in developing new solutions to assured deletion.

Alrwais, Sumayah, Yuan, Kan, Alowaisheq, Eihal, Liao, Xiaojing, Oprea, Alina, Wang, XiaoFeng, Li, Zhou.  2016.  Catching Predators at Watering Holes: Finding and Understanding Strategically Compromised Websites. Proceedings of the 32Nd Annual Conference on Computer Security Applications. :153–166.

Unlike a random, run-of-the-mill website infection, in a strategic web attack the adversary carefully chooses a target frequently visited by an organization or a group of individuals to compromise, for the purpose of gaining a step closer to the organization or collecting information from the group. This type of attack, called a "watering hole" attack, has been increasingly utilized by APT actors to get into the internal networks of big companies and government agencies or to monitor politically oriented groups. Despite its importance, little has been done so far to understand how the attack works, not to mention any concrete step to counter this threat. In this paper, we report our first step toward better understanding this emerging threat, through systematically discovering and analyzing new watering hole instances and attack campaigns. This was made possible by a carefully designed methodology, which repeatedly monitors a large number of potential watering hole targets to detect unusual changes that could be indicative of strategic compromises. Running this system on the HTTP traffic generated from visits to 61K websites for over 5 years, we are able to discover and confirm 17 watering holes and 6 campaigns never reported before. Given that so far merely 29 watering holes have been reported by blogs and technical reports, the findings we made contribute to the research on this attack vector by adding 59% more attack instances and information about how they work to the public knowledge. Analyzing the new watering holes allows us to gain a deeper understanding of these attacks, such as repeated compromises of political websites, their long lifetimes, a unique evasion strategy (leveraging other compromised sites to serve attack payloads), and new exploit techniques (no malware delivery, web-only information gathering). Also, our study brings to light interesting new observations, including the discovery of a recent JSONP attack on an NGO website that has been widely reported and apparently forced the attack to stop.

Manzoor, Emaad, Milajerdi, Sadegh M., Akoglu, Leman.  2016.  Fast Memory-efficient Anomaly Detection in Streaming Heterogeneous Graphs. Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :1035–1044.

Given a stream of heterogeneous graphs containing different types of nodes and edges, how can we spot anomalous ones in real-time while consuming bounded memory? This problem is motivated by and generalizes from its application in security to host-level advanced persistent threat (APT) detection. We propose StreamSpot, a clustering-based anomaly detection approach that addresses challenges on two key fronts: (1) heterogeneity, and (2) streaming nature. We introduce a new similarity function for heterogeneous graphs that compares two graphs based on their relative frequency of local substructures, represented as short strings. This function lends itself to a vector representation of a graph, which is (a) fast to compute, and (b) amenable to a sketched version with bounded size that preserves similarity. StreamSpot exhibits desirable properties that a streaming application requires: it is (i) fully streaming, processing the stream one edge at a time as it arrives; (ii) memory-efficient, requiring constant space for the sketches and the clustering; (iii) fast, taking constant time to update the graph sketches and the cluster summaries and able to process over 100,000 edges per second; and (iv) online, scoring and flagging anomalies in real time. Experiments on datasets containing simulated system-call flow graphs from normal browser activity and various attack scenarios (ground truth) show that StreamSpot is high-performance, achieving above 95% detection accuracy with small delay, as well as competitive time and memory usage.
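
A much-simplified sketch of the core idea: represent a graph by the relative frequencies of local substructures encoded as short strings, then compare the resulting bags. StreamSpot's actual k-hop shingling, bounded-size sketching, and streaming clustering are omitted, and the graph encoding below is an assumption:

```python
import math
from collections import Counter

def shingle(graph: dict, k: int = 2) -> Counter:
    """Bag of local-substructure strings for a heterogeneous graph,
    given as {"nodes": {id: node_type}, "edges": {id: [(edge_type, dst)]}}.
    Each node contributes one string built from its type plus the types
    of edges and nodes reachable within k hops."""
    bag = Counter()
    for node, ntype in graph["nodes"].items():
        labels, frontier = [ntype], [node]
        for _ in range(k):
            nxt = []
            for u in frontier:
                for etype, v in graph["edges"].get(u, []):
                    labels.append(etype + graph["nodes"][v])
                    nxt.append(v)
            frontier = nxt
        bag["".join(sorted(labels))] += 1
    return bag

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two substructure bags."""
    dot = sum(a[s] * b[s] for s in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```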

Nema, Aditi, Tiwari, Basant, Tiwari, Vivek.  2016.  Improving Accuracy for Intrusion Detection Through Layered Approach Using Support Vector Machine with Feature Reduction. Proceedings of the ACM Symposium on Women in Research 2016. :26–31.

Digital information security is the field of information technology that deals with the identification and protection of information. Threat identification is the most challenging phase of any Intrusion Detection System (IDS), since the rest of the IDS phases depend solely on what is identified. In this view, a multilayered framework is discussed which handles the underlying features for the identification of various attacks (DoS, R2L, U2R, Probe). The experiments validate that using an SVM with a genetic approach is efficient.
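
A sketch of what a layered SVM with feature reduction could look like in scikit-learn: one binary detector per attack class, each with its own feature-selection step. The four layers come from the abstract; everything else (k=10 features, RBF kernel, NumPy arrays) is an assumption:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

LAYERS = ["DoS", "Probe", "R2L", "U2R"]  # attack classes from the abstract

def build_layered_ids(X_train, y_train):
    """Train one binary SVM per attack layer, each preceded by its own
    feature-reduction step (here: top-10 features by ANOVA F-score)."""
    detectors = {}
    for layer in LAYERS:
        y_bin = (np.asarray(y_train) == layer).astype(int)
        clf = make_pipeline(SelectKBest(f_classif, k=10), SVC(kernel="rbf"))
        clf.fit(X_train, y_bin)
        detectors[layer] = clf
    return detectors

def classify(detectors, x):
    """Pass a record down the layers; the first layer that fires wins."""
    for layer in LAYERS:
        if detectors[layer].predict(x.reshape(1, -1))[0] == 1:
            return layer
    return "normal"
```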

Liu, Daiping, Hao, Shuai, Wang, Haining.  2016.  All Your DNS Records Point to Us: Understanding the Security Threats of Dangling DNS Records. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1414–1425.

In a dangling DNS record (Dare), the resources pointed to by the DNS record are invalid, but the record itself has not yet been purged from DNS. In this paper, we shed light on a largely overlooked threat in DNS posed by dangling DNS records. Our work reveals that Dares can be easily manipulated by adversaries for domain hijacking. In particular, we identify three attack vectors that an adversary can harness to exploit Dares. In a large-scale measurement study, we uncover 467 exploitable Dares in 277 Alexa top 10,000 domains and 52 edu zones, showing that Dares are a real, prevalent threat. By exploiting these Dares, an adversary can take full control of the (sub)domains and can even have them signed with a Certificate Authority (CA). It is evident that the underlying cause of exploitable Dares is the lack of authenticity checking for the resources to which a DNS record points. We then propose three defense mechanisms to effectively mitigate Dares with little human effort.
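
The simplest symptom of a Dare is a record whose target no longer resolves, e.g. a CNAME to a deleted cloud hostname that an attacker could re-register. A sketch of such a check follows; it covers only one of the several symptoms relevant to the paper's three attack vectors, and the example records are hypothetical:

```python
import socket

def dare_candidates(records: dict):
    """Flag DNS records whose CNAME target no longer resolves.
    `records` maps an owned name to the hostname its CNAME points at."""
    dangling = []
    for name, target in records.items():
        try:
            socket.getaddrinfo(target, None)
        except socket.gaierror:
            dangling.append((name, target))  # target gone: candidate Dare
    return dangling

# Hypothetical zone data:
print(dare_candidates({"shop.example.com": "old-app.cloudapp.example.net"}))
```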

Lima, Antonio, Rocha, Francisco, Völp, Marcus, Esteves-Verissimo, Paulo.  2016.  Towards Safe and Secure Autonomous and Cooperative Vehicle Ecosystems. Proceedings of the 2Nd ACM Workshop on Cyber-Physical Systems Security and Privacy. :59–70.

Semi-autonomous driver assists are already widely deployed and fully autonomous cars are progressively leaving the realm of laboratories. This evolution coexists with a progressive connectivity and cooperation, creating important safety and security challenges, the latter ranging from casual hackers to highly-skilled attackers, requiring a holistic analysis, under the perspective of fully-fledged ecosystems of autonomous and cooperative vehicles. This position paper attempts at contributing to a better understanding of the global threat plane and the specific threat vectors designers should be attentive to. We survey paradigms and mechanisms that may be used to overcome or at least mitigate the potential risks that may arise through the several threat vectors analyzed.

Potteiger, Bradley, Martins, Goncalo, Koutsoukos, Xenofon.  2016.  Software and Attack Centric Integrated Threat Modeling for Quantitative Risk Assessment. Proceedings of the Symposium and Bootcamp on the Science of Security. :99–108.

One step involved in the security engineering process is threat modeling. Threat modeling involves understanding the complexity of the system and identifying all of the possible threats, regardless of whether or not they can be exploited. Proper identification of threats and appropriate selection of countermeasures reduces the ability of attackers to misuse the system. This paper presents a quantitative, integrated threat modeling approach that merges software and attack centric threat modeling techniques. The threat model is composed of a system model representing the physical and network infrastructure layout, as well as a component model illustrating component specific threats. Component attack trees allow for modeling specific component contained attack vectors, while system attack graphs illustrate multi-component, multi-step attack vectors across the system. The Common Vulnerability Scoring System (CVSS) is leveraged to provide a standardized method of quantifying the low level vulnerabilities in the attack trees. As a case study, a railway communication network is used, and the respective results using a threat modeling software tool are presented.
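
To make the quantification concrete, here is a toy propagation of CVSS base scores up a component attack tree with AND/OR gates. The aggregation rule used here (max over OR children, min over AND children) is an illustrative assumption, not necessarily the paper's:

```python
def tree_score(node: dict) -> float:
    """Propagate CVSS base scores up an attack tree. An OR node is as
    severe as its most severe child; an AND node is bounded (in this toy
    rule) by its easiest-to-block child, since every step is required."""
    if "cvss" in node:                       # leaf vulnerability
        return node["cvss"]
    scores = [tree_score(c) for c in node["children"]]
    return max(scores) if node["gate"] == "OR" else min(scores)

# Hypothetical component attack tree:
tree = {"gate": "OR", "children": [
    {"cvss": 7.5},                                          # exposed service
    {"gate": "AND", "children": [{"cvss": 9.8}, {"cvss": 4.3}]},
]}
print(tree_score(tree))   # -> 7.5
```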

Hooshmand, Salman, Mahmud, Akib, Bochmann, Gregor V., Faheem, Muhammad, Jourdan, Guy-Vincent, Couturier, Russ, Onut, Iosif-Viorel.  2016.  D-ForenRIA: Distributed Reconstruction of User-Interactions for Rich Internet Applications. Proceedings of the 25th International Conference Companion on World Wide Web. :211–214.

We present D-ForenRIA, a distributed forensic tool to automatically reconstruct user-sessions in Rich Internet Applications (RIAs), using solely the full HTTP traces of the sessions as input. D-ForenRIA recovers automatically each browser state, reconstructs the DOMs and re-creates screenshots of what was displayed to the user. The tool also recovers every action taken by the user on each state, including the user-input data. Our application domain is security forensics, where sometimes months-old sessions must be quickly reconstructed for immediate inspection. We will demonstrate our tool on a series of RIAs, including a vulnerable banking application created by IBM Security for testing purposes. In that case study, the attacker visits the vulnerable web site, and exploits several vulnerabilities (SQL-injections, XSS...) to gain access to private information and to perform unauthorized transactions. D-ForenRIA can reconstruct the session, including screenshots of all pages seen by the hacker, DOM of each page and the steps taken for unauthorized login and the inputs hacker exploited for the SQL-injection attack. D-ForenRIA is made efficient by applying advanced reconstruction techniques and by using several browsers concurrently to speed up the reconstruction process. Although we developed D-ForenRIA in the context of security forensics, the tool can also be useful in other contexts such as aided RIAs debugging and automated RIAs scanning.

Medeiros, Ibéria, Beatriz, Miguel, Neves, Nuno, Correia, Miguel.  2016.  Hacking the DBMS to Prevent Injection Attacks. Proceedings of the Sixth ACM Conference on Data and Application Security and Privacy. :295–306.

After more than a decade of research, web application security continues to be a challenge, and the backend database remains the most appetizing target. The paper proposes preventing injection attacks against the database management system (DBMS) behind web applications by embedding protections in the DBMS itself. The motivation is twofold. First, the approach of embedding protections in operating systems and applications running on top of them has been effective in protecting this software. Second, there is a semantic mismatch between how SQL queries are believed to be executed by the DBMS and how they are actually executed, leading to subtle vulnerabilities in prevention mechanisms. The approach, named SEPTIC, was implemented in MySQL and evaluated experimentally with web applications written in PHP and Java/Spring. In the evaluation SEPTIC has shown neither false negatives nor false positives, contrary to alternative approaches, while also causing a low performance overhead on the order of 2.2%.
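
SEPTIC itself lives inside MySQL, but the underlying idea of flagging injections as deviations between a query's expected and observed structure can be sketched outside the DBMS with the third-party sqlparse tokenizer; the comparison below is an assumed simplification, not SEPTIC's mechanism:

```python
import sqlparse  # third-party tokenizer, used only for this sketch

def structure(query: str):
    """Reduce a SQL query to its token-type skeleton, ignoring literal
    values, so queries that differ only in data compare equal while
    injected clauses change the skeleton."""
    return tuple(t.ttype for t in sqlparse.parse(query)[0].flatten()
                 if not t.is_whitespace)

learned = structure("SELECT name FROM users WHERE id = 42")
incoming = structure("SELECT name FROM users WHERE id = 42 OR 1 = 1")
print(learned == incoming)  # False: injected tokens changed the structure
```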

Pawar, Shwetambari, Jain, Nilakshi, Deshpande, Swati.  2016.  System Attribute Measures of Network Security Analyzer. Proceedings of the ACM Symposium on Women in Research 2016. :51–54.

In this paper, we present a method to measure the performance of a project that detects various web attacks. The project is capable of identifying and preventing attacks such as SQL injection, cross-site scripting, URL rewriting, and web server 400 error codes. The performance of the system is measured using the system attributes described in this paper, which are also used to determine the efficiency of the system.