Biblio

Found 7504 results

Filters: Keyword is Metrics
2017-08-22
Esiner, Ertem, Datta, Anwitaman.  2016.  Layered Security for Storage at the Edge: On Decentralized Multi-factor Access Control. Proceedings of the 17th International Conference on Distributed Computing and Networking. :9:1–9:10.

In this paper we propose a protocol that allows end-users in a decentralized setup (without requiring any trusted third party) to protect data shipped to remote servers using two factors: knowledge (a password) and a portable possession factor (a time-based one-time password generator used for authentication). The protocol also supports revocation and recreation of a new possession factor if the old one is compromised, provided the legitimate owner still has a copy of it. Furthermore, akin to some other recent works, our approach naturally protects the outsourced data from the storage servers themselves, by applying encryption and dispersing the information across multiple servers. We also extend the basic protocol to demonstrate how collaboration can be supported even while the stored content is encrypted, where each collaborator's access to the data is still controlled through a multi-factor access mechanism. Such layered-security techniques are crucial for (opportunistically) harnessing storage resources from untrusted entities.
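
As a rough illustration of the possession factor (not the paper's actual protocol), a standard RFC 6238-style time-based one-time password can be derived from a shared secret as follows; the secret value below is a hypothetical placeholder:

```python
import hmac, hashlib, struct, time

def totp(secret: bytes, timestep: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password from a shared secret."""
    counter = int(time.time()) // timestep
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp(b"portable-possession-factor"))          # e.g. '492039'
```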

Luo, Ge, Gao, Ying, Zheng, Changchun.  2016.  Strongly Secure Network Coding via Reed-Solomon Codes. Proceedings of the Fifth International Conference on Network, Communication and Computing. :110–114.

To construct a strongly secure network coding scheme without universality, this paper focuses on the properties of MDS (maximum distance separable) codes, in particular Reed-Solomon codes. Our scheme applies Reed-Solomon codes in a coset coding scheme to achieve security on top of the classical underlying network coding. Compared with the existing scheme, an MRD (maximum rank distance) code and a necessary condition based on MRD are not required. Furthermore, considering the conditions relating the code used for security to the underlying network code, the scheme can be applied in more situations and over more fields.
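
The paper's network-coding construction is not reproduced here, but the following toy sketch shows the classical mechanism it builds on: random coefficients in a Reed-Solomon codeword (Shamir-style coset coding) information-theoretically hide a secret from an adversary who observes too few symbols. The field size and parameters are arbitrary choices for illustration:

```python
import random

P = 7919  # a small prime; arithmetic is over GF(P)

def rs_encode_secret(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Encode `secret` into n RS symbols; t random coefficients give t-privacy."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    evaluate = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, evaluate(x)) for x in range(1, n + 1)]

def recover(shares: list[tuple[int, int]]) -> int:
    """Lagrange-interpolate the constant term from any t+1 symbols."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

symbols = rs_encode_secret(secret=1234, t=2, n=5)
print(recover(symbols[:3]))  # 1234; any 3 of the 5 symbols suffice
```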

Naghmouchi, M. Yassine, Perrot, Nancy, Kheir, Nizar, Mahjoub, A. Ridha, Wary, Jean-Philippe.  2016.  A New Risk Assessment Framework Using Graph Theory for Complex ICT Systems. Proceedings of the 8th ACM CCS International Workshop on Managing Insider Security Threats. :97–100.

In this paper, we propose a new risk analysis framework that enables the supervision of risks in complex and distributed systems. Our contribution is twofold. First, we provide Risk Assessment Graphs (RAGs) as a model of risk analysis. This graph-based model is adaptable to system changes over time. We also introduce the potentiality and accessibility functions, which, during each time slot, evaluate respectively the chance of exploiting the RAG's nodes and the connection time between these nodes. In addition, we provide a worst-case risk evaluation approach, based on the assumption that intruders usually aim to maximise their benefit by inflicting the maximum damage on the target system (i.e., choosing the most likely paths in the RAG). We then introduce three security metrics: the propagated risk, the node risk and the global risk. We illustrate the use of our framework through the simple example of an enterprise email service. Our framework meets both flexibility and generality requirements: it can be used to assess external threats as well as insider ones, and it applies to a wide set of applications.
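
A minimal sketch of the worst-case evaluation, under assumed semantics: each node carries a potentiality value, and the propagated risk to a target is taken over the attacker's most likely path, i.e. the path maximising the product of potentialities. The graph, names and values below are invented, and an acyclic RAG is assumed:

```python
# Hypothetical RAG: node potentialities and directed edges (assumed acyclic).
potentiality = {"internet": 1.0, "mail_gw": 0.6, "mail_srv": 0.5, "db": 0.3}
edges = {"internet": ["mail_gw"], "mail_gw": ["mail_srv", "db"],
         "mail_srv": ["db"], "db": []}

def propagated_risk(node: str, target: str, acc: float = 1.0) -> float:
    """DFS over the RAG, returning the max product of potentialities."""
    acc *= potentiality[node]
    if node == target:
        return acc
    return max((propagated_risk(nxt, target, acc) for nxt in edges[node]),
               default=0.0)

print(propagated_risk("internet", "db"))  # 0.18, via internet -> mail_gw -> db
```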

Agrafiotis, Ioannis, Erola, Arnau, Goldsmith, Michael, Creese, Sadie.  2016.  A Tripwire Grammar for Insider Threat Detection. Proceedings of the 8th ACM CCS International Workshop on Managing Insider Security Threats. :105–108.

The threat from insiders is an ever-growing concern for organisations, and in recent years the harm that insiders pose has been widely demonstrated. This paper describes our recent work on supporting insider threat detection when actions are taken that can be immediately determined to be of concern because they fall into one of two categories: they violate a policy which is specifically crafted to describe behaviours that are highly likely to be of concern if exhibited, or they follow the pattern of a known insider threat attack. In particular, we view these concerning actions as something for which we can design and implement tripwires within a system. We then orchestrate these tripwires in conjunction with an anomaly detection system and present an approach to formalising tripwires of both categories. Our intention is that, by having a single framework for describing them, alongside a library of existing tripwires in use, we can provide the community of practitioners and researchers with a basis to document and evolve this common understanding of tripwires.
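
A hypothetical sketch of the first tripwire category (policy-violation rules): each tripwire is a declarative predicate over observed actions that fires as soon as it matches. The event fields, rule names and thresholds are invented for illustration:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tripwire:
    name: str
    predicate: Callable[[dict], bool]   # action -> does this rule fire?

rules = [
    Tripwire("usb_on_server", lambda a: a["type"] == "usb_insert"
             and a["host_role"] == "server"),
    Tripwire("bulk_download_offhours", lambda a: a["type"] == "download"
             and a["mbytes"] > 500 and not 9 <= a["hour"] < 18),
]

action = {"type": "download", "mbytes": 900, "hour": 2, "host_role": "laptop"}
print([r.name for r in rules if r.predicate(action)])
# ['bulk_download_offhours']
```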

Sanzgiri, Ameya, Dasgupta, Dipankar.  2016.  Classification of Insider Threat Detection Techniques. Proceedings of the 11th Annual Cyber and Information Security Research Conference. :25:1–25:4.

Most insider attacks are carried out by people who have the knowledge and technical know-how to launch them. This topic has long been studied, and many detection techniques have been proposed to deal with insider threats. This short paper summarizes and classifies insider threat detection techniques based on the strategies used for detection.

Arathy, P. J., Nair, Vrinda V..  2016.  Analysis of Spoofing Detection Using Video Subsection Processing. Proceedings of the International Conference on Informatics and Analytics. :76:1–76:6.

Impostors gain unauthorized access to biometric recognition systems using fake biometric data of a legitimate user, a practice termed spoofing. Face recognition systems are spoofed using photographs, 3D models and videos of the user. An attack video contains noise from the acquisition process. In this work, we use the noise residual content of the video to detect spoofed videos, taking advantage of the wavelet transform to represent the noise video. A sample of the noise video, termed a visual rhythm image, is created for each video. Local Binary Pattern (LBP) and uniform Local Binary Pattern (LBPu2) features are extracted from the visual rhythm image, followed by classification using a Support Vector Machine (SVM). Analysing the many frames of a large video leads to long execution times. In this work the spoof detection algorithm is applied to various levels of subsections of the video frames, reducing execution time while maintaining reasonable detection accuracy.
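
A hedged sketch of the classification stage the abstract outlines: uniform LBP histograms extracted from visual rhythm images and classified with an SVM. The random arrays below stand in for real visual rhythm images, and the wavelet/noise-residual step that produces them is omitted:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(image: np.ndarray, p: int = 8, r: int = 1) -> np.ndarray:
    """Uniform LBP (LBPu2) histogram of a grayscale image."""
    codes = local_binary_pattern(image, P=p, R=r, method="uniform")
    hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
    return hist

# Placeholder data: stand-ins for visual rhythm images; y marks real (0) vs spoof (1).
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(40, 64, 64)).astype(np.uint8)
y = rng.integers(0, 2, size=40)

X = np.stack([lbp_histogram(img) for img in images])
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```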

Rahman, Md. Habibur, Sharmin, Sadia, Sarwar, Sheikh Muhammad, Shoyaib, Mohammad.  2016.  Software Defect Prediction Using Feature Space Transformation. Proceedings of the International Conference on Internet of Things and Cloud Computing. :72:1–72:6.

In software quality estimation research, software defect prediction is a key topic. A defect prediction model is generally constructed using a variety of software attributes, and each attribute may have a positive, negative or neutral effect on a specific model. Selecting an optimal set of attributes for model development remains a vital yet underexplored issue. In this paper, we introduce a new feature space transformation process, with a normalization technique, to improve defect prediction accuracy, and we classify the instances using a Support Vector Machine (SVM) with a histogram intersection kernel. The proposed method is evaluated using data sets from the NASA metrics data repository, and its application demonstrates acceptable accuracy.
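
The feature space transformation is the paper's contribution and is not reproduced here, but the classification step it feeds (an SVM with a histogram intersection kernel) can be sketched with scikit-learn's callable-kernel interface; the data below is synthetic:

```python
import numpy as np
from sklearn.svm import SVC

def histogram_intersection(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """K(a, b) = sum_i min(a_i, b_i), computed for all pairs of rows."""
    return np.minimum(A[:, None, :], B[None, :, :]).sum(axis=2)

rng = np.random.default_rng(1)
X = rng.random((60, 10))          # stand-in for transformed software metrics
y = rng.integers(0, 2, size=60)   # stand-in defect labels

clf = SVC(kernel=histogram_intersection).fit(X, y)
print(clf.predict(X[:5]))
```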

Meitei, Irom Lalit, Singh, Khundrakpam Johnson, De, Tanmay.  2016.  Detection of DDoS DNS Amplification Attack Using Classification Algorithm. Proceedings of the International Conference on Informatics and Analytics. :81:1–81:6.

The Domain Name System (DNS) is a fundamental element of Internet technology, translating domain names into corresponding IP addresses. DNS queries and responses are UDP (User Datagram Protocol) based. DNS name servers constantly face threats of DNS amplification attacks, one of the major Distributed Denial of Service (DDoS) attacks on DNS. DNS amplification attacks have victimized large businesses, financial companies and other organizations, disrupting service to their customers. In this paper, a mechanism is proposed to detect such attacks coming from compromised machines. We comparatively analysed DNS traffic packets using machine learning classification algorithms, namely Decision Tree (TREE), Multi-Layer Perceptron (MLP), Naïve Bayes (NB) and Support Vector Machine (SVM), to classify DNS traffic as normal or abnormal. In this approach, attribute selection algorithms such as Information Gain, Gain Ratio and Chi Square are used to obtain an optimal feature subset. Experimental results show that the Decision Tree achieved 99.3% accuracy, the highest accuracy and performance among the machine learning algorithms compared.
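
A sketch of the final stage of such a pipeline under stated assumptions: information-gain style feature selection (approximated here with mutual information) followed by a decision tree. The features and labels below are synthetic stand-ins for DNS traffic attributes, not the authors' dataset:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.random((500, 20))                  # e.g. packet sizes, query counts, ...
y = (X[:, 3] + X[:, 7] > 1.0).astype(int)  # synthetic normal/abnormal label

model = make_pipeline(SelectKBest(mutual_info_classif, k=8),
                      DecisionTreeClassifier(max_depth=5))
print(cross_val_score(model, X, y, cv=5).mean())
```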

Bouchlaghem, Rihab, Elkhelifi, Aymen, Faiz, Rim.  2016.  A Machine Learning Approach For Classifying Sentiments in Arabic Tweets. Proceedings of the 6th International Conference on Web Intelligence, Mining and Semantics. :24:1–24:6.

Sentiment analysis methods are becoming more and more popular, especially with the proliferation of social media platforms and their growing numbers of users. In this context, this paper presents a sentiment analysis approach that can faithfully capture the sentimental orientation of Arabic Twitter posts, based on a novel data representation and machine learning techniques. The proposed approach applies a wide range of features: lexical, surface-form, syntactic, etc. We also make use of lexicon features inferred from two Arabic sentiment word lexicons. To build our supervised sentiment analysis system, we use several standard classification methods (Support Vector Machines, K-Nearest Neighbour, Naïve Bayes, Decision Trees, Random Forest) known for their effectiveness on such classification problems. In our study, the Support Vector Machines classifier outperforms the other supervised algorithms for Arabic Twitter sentiment analysis. Via ablation experiments, we show the positive impact of lexicon-based features on prediction performance.
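
A sketch of the kind of system described, assuming a simple feature design: surface-form TF-IDF features combined with counts of hits in sentiment lexicons, fed to a linear SVM. The toy tweets and two-word lexicons below are invented placeholders, not the authors' Arabic resources:

```python
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

POS_LEX, NEG_LEX = {"رائع", "جميل"}, {"سيء", "مزعج"}   # toy sentiment lexicons

def lexicon_features(tweets: list[str]) -> csr_matrix:
    """Counts of positive and negative lexicon hits per tweet."""
    rows = [[sum(w in POS_LEX for w in t.split()),
             sum(w in NEG_LEX for w in t.split())] for t in tweets]
    return csr_matrix(np.array(rows, dtype=float))

tweets = ["الفيلم رائع جدا", "خدمة سيء و مزعج", "يوم جميل"]
labels = [1, 0, 1]

X = hstack([TfidfVectorizer().fit_transform(tweets), lexicon_features(tweets)])
clf = LinearSVC().fit(X, labels)
print(clf.predict(X))
```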

ZareMoodi, Poorya, Siahroudi, Sajjad Kamali, Beigy, Hamid.  2016.  A Support Vector Based Approach for Classification Beyond the Learned Label Space in Data Streams. Proceedings of the 31st Annual ACM Symposium on Applied Computing. :910–915.

Most supervised classification algorithms classify newly seen instances within their learned label space. In data streams, however, concept-evolution is inevitable. In this paper we propose a support vector based approach for classification beyond the learned label space in data streams, while also addressing other data stream challenges such as concept drift and infinite length. We maintain the boundaries of observed classes throughout the stream using a support vector based method (SVDD). Newly arrived instances located outside these boundaries are analyzed by constructing a neighborhood graph to detect the emergence of a class beyond the learned label space (a novel class). Our method models intricately shaped class boundaries more accurately than the existing method, since it utilizes support vector data description. Dynamically maintaining boundaries by shrinking, enlarging and merging spheres in the kernel space helps our method adapt to both dramatic and gradual changes in the underlying data distribution, and also makes it more memory efficient than existing methods. Experiments on both real and synthetic benchmark data sets show the superiority of the proposed method over the state-of-the-art methods in this area.
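
A rough approximation of the boundary-maintenance idea (not the paper's implementation): describe each observed class with a one-class SVM as an SVDD-like boundary, and flag arrivals falling outside every boundary as candidate novel-class instances for further neighborhood analysis:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(7)
class_a = rng.normal(loc=0.0, scale=0.5, size=(100, 2))
class_b = rng.normal(loc=5.0, scale=0.5, size=(100, 2))

# One SVDD-like boundary per observed class.
boundaries = [OneClassSVM(nu=0.05, gamma="scale").fit(c)
              for c in (class_a, class_b)]

def is_novel(x: np.ndarray) -> bool:
    """True if x lies outside the boundary of every known class."""
    return all(b.predict(x.reshape(1, -1))[0] == -1 for b in boundaries)

print(is_novel(np.array([0.1, -0.2])))   # False: inside class A's boundary
print(is_novel(np.array([10.0, 10.0])))  # True: candidate novel-class point
```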

Gao, Yan, Yang, Chunhui.  2016.  Software Defect Prediction Based on Manifold Learning in Subspace Selection. Proceedings of the 2016 International Conference on Intelligent Information Processing. :17:1–17:6.

Software defects lead to runtime errors and system crashes. To detect software defects as early as possible in software development, a series of machine learning approaches have been studied and applied to predict defects in software modules. Unfortunately, the imbalance of software defect datasets poses a great challenge to training software defect prediction models. In this paper, a new manifold-learning-based subspace learning algorithm, Discriminative Locality Alignment (DLA), is introduced into software defect prediction. Experimental results demonstrate that DLA is consistently superior to LDA (Linear Discriminant Analysis) and PCA (Principal Component Analysis) in terms of discriminative information extraction and prediction performance. In addition, DLA exhibits some attractive intrinsic properties for numeric calculation; e.g., it overcomes the singular matrix problem and the small sample size problem in software defect prediction.

Wu, Chongliang, Wang, Shangfei, Pan, Bowen, Chen, Huaping.  2016.  Facial Expression Recognition with Deep Two-view Support Vector Machine. Proceedings of the 2016 ACM on Multimedia Conference. :616–620.

This paper proposes a novel deep two-view approach that learns features from both visible and thermal images and leverages their commonality for facial expression recognition from visible images. The thermal images are used as privileged information, required only during training, to help visible images yield better features and classifiers. Specifically, we first learn a deep model for visible images and thermal images respectively, and use the learned feature representations to train SVM classifiers for expression classification. We then jointly refine the deep models as well as the SVM classifiers for both thermal and visible images by imposing the constraint that the outputs of the SVM classifiers from the two views be similar. The resulting representations and classifiers therefore capture the inherent connections among visible facial images, infrared facial images and target expression labels, and hence improve recognition performance for facial expressions in visible images during testing. Experimental results on a benchmark expression database demonstrate the effectiveness of the proposed method.

Skowyra, Richard, Bauer, Kevin, Dedhia, Veer, Okhravi, Hamed.  2016.  Have No PHEAR: Networks Without Identifiers. Proceedings of the 2016 ACM Workshop on Moving Target Defense. :3–14.

Network protocols such as Ethernet and TCP/IP were not designed to ensure the security and privacy of users. To protect users' privacy, anonymity networks such as Tor have been proposed to hide both identities and communication contents for Internet traffic. However, such solutions cannot protect enterprise network traffic that does not transit the Internet. In this paper, we present the design, implementation, and evaluation of a moving target technique called Packet Header Randomization (PHEAR), a privacy-enhancing system for enterprise networks that leverages emerging Software-Defined Networking hardware and protocols to eliminate identifiers found at the MAC, Network, and higher layers of the network stack. PHEAR also encrypts all packet data beyond the Network layer. We evaluate the security of PHEAR against a variety of known and novel attacks and conduct whole-network experiments that show the prototype deployment provides sufficient performance for common applications such as web browsing and file sharing.
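
A toy illustration of the identifier-randomization idea, not PHEAR's implementation: stable identifiers are replaced with per-epoch random pseudonyms, maintained in a mapping that a hypothetical SDN controller would use to rewrite packet headers:

```python
import secrets, time

EPOCH_SECONDS = 60
_pseudonyms: dict[tuple[int, str], str] = {}

def pseudonym(real_id: str) -> str:
    """Return a random identifier for real_id, regenerated every epoch."""
    epoch = int(time.time()) // EPOCH_SECONDS
    key = (epoch, real_id)
    if key not in _pseudonyms:
        _pseudonyms[key] = secrets.token_hex(6)   # 48 bits, MAC-address sized
    return _pseudonyms[key]

print(pseudonym("aa:bb:cc:dd:ee:ff"))   # e.g. '9f1c02a4be77'
```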

Lazarenko, Aleksandr, Avdoshin, Sergey.  2016.  Anonymity of Tor: Myth and Reality. Proceedings of the 12th Central and Eastern European Software Engineering Conference in Russia. :10:1–10:5.

Privacy enhancing technologies (PETs) are ubiquitous nowadays and beneficial for a wide range of users. However, PETs are not always used for legal activity. The present paper focuses on deanonymization of Tor users using out-of-the-box technologies and a basic machine learning algorithm. The aim of the work is to show that it is possible to deanonymize a small fraction of users without a lot of resources or state-of-the-art machine learning techniques. Deanonymization is an important task from the point of view of national security. To address this issue, we use a website fingerprinting attack.
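
An illustrative sketch of a basic website fingerprinting attack of the kind the paper alludes to: each traffic trace is reduced to coarse features (packet counts, directions, volume) and classified with k-nearest neighbours. The traces and labels below are synthetic:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def trace_features(packet_sizes: np.ndarray) -> np.ndarray:
    """Coarse features: totals and direction counts (+ = out, - = in)."""
    return np.array([len(packet_sizes),
                     (packet_sizes > 0).sum(),
                     (packet_sizes < 0).sum(),
                     np.abs(packet_sizes).sum()])

rng = np.random.default_rng(3)
traces = [rng.integers(-1500, 1500, size=rng.integers(50, 200))
          for _ in range(120)]
sites = rng.integers(0, 4, size=120)   # label: which of 4 monitored sites

X = np.stack([trace_features(t) for t in traces])
clf = KNeighborsClassifier(n_neighbors=3).fit(X, sites)
print(clf.predict(X[:5]))
```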

Ma, Xiao, Hancock, Jeff, Naaman, Mor.  2016.  Anonymity, Intimacy and Self-Disclosure in Social Media. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. :3857–3869.

Self-disclosure is rewarding and provides significant benefits for individuals, but it also involves risks, especially in social media settings. We conducted an online experiment to study the relationship between content intimacy and willingness to self-disclose in social media, and how identification (real name vs. anonymous) and audience type (social ties vs. people nearby) moderate that relationship. Content intimacy is known to regulate self-disclosure in face-to-face communication: people self-disclose less as content intimacy increases. We show that such regulation persists in online social media settings. Further, although anonymity and an audience of social ties are both known to increase self-disclosure, it is unclear whether they (1) increase the self-disclosure baseline for content of all intimacy levels, or (2) weaken intimacy's regulation effect, making people more willing to disclose intimate content. We show that intimacy always regulates self-disclosure, regardless of setting. We also show that anonymity mainly increases the self-disclosure baseline and (sometimes) weakens the regulation. On the other hand, an audience of social ties increases the baseline but strengthens the regulation. Finally, we demonstrate that anonymity has a more salient effect on content of negative valence. The results are critical to understanding the dynamics and opportunities of self-disclosure in social media services that vary in levels of identification and types of audience.

Yang, Yanjiang, Lu, Haibing, Liu, Joseph K., Weng, Jian, Zhang, Youcheng, Zhou, Jianying.  2016.  Credential Wrapping: From Anonymous Password Authentication to Anonymous Biometric Authentication. Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. :141–151.

The anonymous password authentication scheme proposed at ACSAC'10, which took the unorthodox approach of password-wrapped credentials, advanced anonymous password authentication to a practically ready primitive, and it is being standardized. In this paper, we improve on that scheme by proposing a new method of "public key suppression" for achieving server-designated credential verifiability, a core technicality in materializing the concept of the password-wrapped credential. Besides better performance, our new method simplifies the configuration of the authentication server, rendering the resulting scheme even more practical. Further, we extend the idea of the password-wrapped credential to the biometric-wrapped credential, to achieve anonymous biometric authentication. As expected, biometric-wrapped credentials help break the linear server-side computation barrier intrinsic to the standard setting of biometric authentication. Experimental results validate the feasibility of realizing efficient anonymous biometric authentication.

Jansen, Rob, Johnson, Aaron.  2016.  Safely Measuring Tor. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1553–1567.

Tor is a popular network for anonymous communication. The usage and operation of Tor are not well understood, however, because its privacy goals make common measurement approaches ineffective or risky. We present PrivCount, a system for measuring the Tor network designed with user privacy as a primary goal. PrivCount securely aggregates measurements across Tor relays and over time to produce differentially private outputs. PrivCount improves on prior approaches by enabling flexible exploration of many diverse kinds of Tor measurements while maintaining accuracy and privacy for each. We use PrivCount to perform a measurement study of Tor of sufficient breadth and depth to inform accurate models of Tor users and traffic. Our results indicate that Tor has 710,000 users connected but only 550,000 active at a given time, that Web traffic now constitutes 91% of data bytes on Tor, and that the strictness of relays' connection policies significantly affects the type of application data they forward.
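
A minimal sketch of the differential-privacy step behind PrivCount-style outputs, using the standard Laplace mechanism: aggregate a counter across relays, then add noise calibrated to the query's sensitivity and the privacy budget. The secure aggregation itself (blinding across relays) is omitted, and the numbers are illustrative:

```python
import numpy as np

def dp_count(per_relay_counts: list[int], sensitivity: float,
             epsilon: float) -> float:
    """Release sum(counts) plus Laplace(sensitivity / epsilon) noise."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return sum(per_relay_counts) + noise

print(dp_count([1200, 950, 2210], sensitivity=1.0, epsilon=0.3))
```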

Jakobsen, Sune K., Orlandi, Claudio.  2016.  How To Bootstrap Anonymous Communication. Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science. :333–344.

We ask whether it is possible to anonymously communicate a large amount of data using only public (non-anonymous) communication together with a small anonymous channel. We think this is a central question in the theory of anonymous communication, and to the best of our knowledge this is the first formal study in this direction. Towards this goal, we introduce the novel concept of anonymous steganography: think of a leaker Lea who wants to leak a large document to Joe the journalist. Using anonymous steganography Lea can embed this document in innocent looking communication on some popular website (such as cat videos on YouTube or funny memes on 9GAG). Then Lea provides Joe with a short decoding key dk which, when applied to the entire website, recovers the document while hiding the identity of Lea among the large number of users of the website. Our contributions include: (i) introducing and formally defining anonymous steganography; (ii) a construction showing that anonymous steganography is possible (which uses recent results in circuit obfuscation); and (iii) a lower bound on the number of bits needed to bootstrap anonymous communication.

Demertzis, Ioannis, Papadopoulos, Stavros, Papapetrou, Odysseas, Deligiannakis, Antonios, Garofalakis, Minos.  2016.  Practical Private Range Search Revisited. Proceedings of the 2016 International Conference on Management of Data. :185–198.

We consider a data owner that outsources its dataset to an untrusted server. The owner wishes to enable the server to answer range queries on a single attribute, without compromising the privacy of the data and the queries. There are several schemes on "practical" private range search (mainly in Databases venues) that attempt to strike a trade-off between efficiency and security. Nevertheless, these methods either lack provable security guarantees, or permit unacceptable privacy leakages. In this paper, we take an interdisciplinary approach, which combines the rigor of Security formulations and proofs with efficient Data Management techniques. We construct a wide set of novel schemes with realistic security/performance trade-offs, adopting the notion of Searchable Symmetric Encryption (SSE) primarily proposed for keyword search. We reduce range search to multi-keyword search using range covering techniques with tree-like indexes. We demonstrate that, given any secure SSE scheme, the challenge boils down to (i) formulating leakages that arise from the index structure, and (ii) minimizing false positives incurred by some schemes under heavy data skew. We analytically detail the superiority of our proposals over prior work and experimentally confirm their practicality.
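
A sketch of the range-covering reduction described above: a numeric range is decomposed into O(log n) canonical dyadic intervals, so a range query becomes a few keyword queries (one per interval) against an SSE index. The SSE scheme itself is out of scope here, and the sketch assumes integer keys starting at 1:

```python
def dyadic_cover(lo: int, hi: int) -> list[tuple[int, int]]:
    """Cover [lo, hi] with maximal aligned dyadic intervals."""
    cover = []
    while lo <= hi:
        size = lo & -lo or 1                  # largest power of two aligned at lo
        while size > 1 and lo + size - 1 > hi:
            size //= 2                        # shrink until it fits in [lo, hi]
        cover.append((lo, lo + size - 1))
        lo += size
    return cover

# The range [3, 12] becomes 4 "keywords": [3,3], [4,7], [8,11], [12,12].
print(dyadic_cover(3, 12))
```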

Dave, Jay, Das, Manik Lal.  2016.  Securing SQL with Access Control for Database As a Service Model. Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. :104:1–104:6.

'Software as a Service' (SaaS) is a well-known model used in cloud infrastructure, outsourcing and pervasive computing. With the SaaS model, application service providers (ASPs) offer various software functionalities to application developers as well as to consumers over a public channel such as the Internet. To manage large volumes of user data, the 'Database as a Service' (DaaS) model is a practical requirement for ASPs. The DaaS model allows implementation of need-based (e.g., role-based) privileges for database access by its users. However, the use of the DaaS model raises security concerns (e.g., confidentiality and integrity of data) when storing user data on an untrusted public storage server. In this paper, we review one recent DaaS tool, CryptDB [1], observe some limitations in it, and then present an improved solution for securing data at an untrusted database provider. The proposed solution mitigates the limitations of CryptDB while keeping the efficiency of the service model used between the ASP and the DB intact.

Chen, Fei, Zhang, Taoyi, Chen, Jianyong, Xiang, Tao.  2016.  Cloud Storage Integrity Checking: Going from Theory to Practice. Proceedings of the 4th ACM International Workshop on Security in Cloud Computing. :24–28.

In the past decade, researchers have proposed various cloud storage integrity checking protocols to enable a cloud storage user to validate the integrity of the user's outsourced data. While the proposed solutions can in principle solve the cloud storage integrity checking problem, they are not sufficient for current cloud storage practices. In this position paper, we show the gaps between theoretical and practical cloud storage integrity checking solutions, through a categorization of existing solutions and an analysis of their underlying assumptions. To bridge the gap, we also call for practical cloud storage integrity checking solutions for three scenarios.

Karras, Panagiotis, Nikitin, Artyom, Saad, Muhammad, Bhatt, Rudrika, Antyukhov, Denis, Idreos, Stratos.  2016.  Adaptive Indexing over Encrypted Numeric Data. Proceedings of the 2016 International Conference on Management of Data. :171–183.

Today, outsourcing query processing tasks to remote cloud servers has become a viable option; such outsourcing calls for encrypting data stored at the server so as to render it secure against eavesdropping adversaries and/or an honest-but-curious server itself. At the same time, to be efficiently managed, outsourced data should be indexed, and even adaptively so, as a side-effect of query processing. Computationally heavy encryption schemes render such outsourcing unattractive; an alternative, the Order-Preserving Encryption Scheme (OPES), intentionally preserves and reveals the order in the data, and hence is unattractive from the security viewpoint. In this paper, we propose and analyze a scheme for lightweight and indexable encryption, based on linear-algebra operations. Our scheme provides higher security than OPES and allows range and point queries to be efficiently evaluated over encrypted numeric data, with decryption performed at the client side. We implement a prototype that performs incremental, query-triggered adaptive indexing over encrypted numeric data based on this scheme, without leaking order information in advance and without prohibitive overhead, as our extensive experimental study demonstrates.

Wu, Rongxin, Xiao, Xiao, Cheung, Shing-Chi, Zhang, Hongyu, Zhang, Charles.  2016.  Casper: An Efficient Approach to Call Trace Collection. Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages. :678–690.

Call traces, i.e., sequences of function calls and returns, are fundamental to a wide range of program analyses such as bug reproduction, fault diagnosis, performance analysis, and many others. The conventional approach to collecting call traces, which instruments each function call and return site, incurs large space and time overhead. Our approach aims to reduce the recording overhead by instrumenting only a small number of call sites while keeping the ability to recover the full trace. We propose a call trace model and a logged call trace model based on an LL(1) grammar, which enable us to define the criteria for a feasible solution to call trace collection. Based on the two models, we prove that collecting call traces with minimal instrumentation is an NP-hard problem. We then propose an efficient approach to obtaining a suboptimal solution. We implemented our approach as a tool, Casper, and evaluated it using the DaCapo benchmark suite. The experimental results show that our approach incurs significantly lower runtime (and space) overhead than two state-of-the-art approaches.

Schroeder, Bianca.  2016.  Case Studies from the Real World: The Importance of Measurement and Analysis in Building Better Systems. Proceedings of the 7th ACM/SPEC on International Conference on Performance Engineering. :1–1.

At the core of the "Big Data" revolution lie frameworks and systems that allow for the massively parallel processing of large amounts of data. Ironically, while they have been designed for processing large amounts of data, these systems are at the same time major producers of data: to support the administration and management of these huge-scale systems, they are configured to generate detailed log and monitoring data, periodically capturing the system state across all nodes, components and jobs in the system. While such logging information is used routinely by sysadmins for ad-hoc troubleshooting and problem diagnosis, we point out that there is tremendous value in analyzing such data from a research point of view. In this talk, we will go over several case studies that demonstrate how measuring and analyzing data from production systems can provide new insights into how systems work and fail, and how these new insights can help in designing better systems.

Xu, Jun, Mu, Dongliang, Chen, Ping, Xing, Xinyu, Wang, Pei, Liu, Peng.  2016.  CREDAL: Towards Locating a Memory Corruption Vulnerability with Your Core Dump. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :529–540.

After a program has crashed and terminated abnormally, it typically leaves behind a snapshot of its crashing state in the form of a core dump. While a core dump carries a large amount of information, which has long been used for software debugging, it barely serves as an informative debugging aid for locating software faults, particularly memory corruption vulnerabilities. A memory corruption vulnerability is a special type of software fault that an attacker can exploit to manipulate the contents of a certain memory location. As such, a core dump may contain a certain amount of corrupted data, which increases the difficulty of identifying useful debugging information (e.g., the crash point and stack traces). Without a proper mechanism to deal with this problem, a core dump can be practically useless for software failure diagnosis. In this work, we develop CREDAL, an automatic tool that employs the source code of a crashing program to enhance core dump analysis and turns a core dump into an informative aid in tracking down memory corruption vulnerabilities. Specifically, CREDAL systematically analyzes a potentially corrupted core dump and identifies the crash point and stack frames. For a core dump carrying corrupted data, it goes beyond the crash point and stack trace. In particular, CREDAL further pinpoints the variables holding corrupted data, using the source code of the crashing program along with the stack frames. To assist software developers (or security analysts) in tracking down a memory corruption vulnerability, CREDAL also performs analysis and highlights the code fragments corresponding to data corruption. To demonstrate the utility of CREDAL, we use it to analyze 80 crashes corresponding to 73 memory corruption vulnerabilities archived in the Offensive Security Exploit Database. We show that CREDAL can accurately pinpoint the crash point and (fully or partially) restore a stack trace even when the crashing program's stack carries corrupted data. In addition, we demonstrate that CREDAL can potentially reduce the manual effort of finding the code fragment that is likely to contain memory corruption vulnerabilities.