Biblio

Found 112 results

Filters: Keyword is Forensics
2019-02-22
Steinebach, Martin, Ester, Andre, Liu, Huajian.  2018.  Channel Steganalysis. Proceedings of the 13th International Conference on Availability, Reliability and Security. :9:1-9:8.

The rise of social networks during the last 10 years has created a situation in which up to 100 million new images and photographs are uploaded and shared by users every day. This environment provides an ideal background for those who wish to communicate covertly by means of steganography. It also creates a new set of challenges for steganalysts, who have to shift their field of work away from a purely scientific laboratory environment and into a diverse real-world scenario, while at the same time dealing with entirely new problems, such as the detection of steganographic channels or the impact that even a low false positive rate has when investigating the millions of images shared every day on social networks. We evaluate how to address these challenges with traditional steganographic and statistical methods, rather than using high-performance computing and machine learning. To achieve this, we first analyze the steganographic algorithm F5 applied to images with a high degree of diversity, as would be seen in a typical social network. We show that the biggest challenge lies in the detection of images whose payload is less than 50% of the available capacity of an image. We suggest new detection methods and apply them to the problem of channel detection in social networks. We show that, using our attacks, we can detect the majority of covert F5 channels after a mix containing 10 stego images has been classified by our scheme.

2019-02-14
Xu, Z., Shi, C., Cheng, C. C., Gong, N. Z., Guan, Y..  2018.  A Dynamic Taint Analysis Tool for Android App Forensics. 2018 IEEE Security and Privacy Workshops (SPW). :160-169.

The plethora of mobile apps introduces critical challenges for digital forensics practitioners, due to the diversity and the large number (millions) of mobile apps available to download from Google Play, the Apple App Store, and hundreds of other online app stores. Law enforcement investigators often find that seized mobile phone devices contain many popular and less popular apps with interfaces in different languages and of varied functionality. Investigators cannot have sufficient expert knowledge about every single app, sometimes not even a basic understanding of what evidentiary data could be discoverable from the mobile devices being investigated. Existing literature in the digital forensics field shows that most such investigations still rely on the investigator's manual analysis using mobile forensic toolkits like Cellebrite and EnCase. The problem with such manual approaches is that there is no guarantee of the completeness of the evidence discovery. Our goal is to develop an automated mobile app analysis tool that analyzes an app and discovers what types of forensic evidentiary data the app generates and where it stores them, locally on the mobile device or remotely on external third-party servers. With the app analysis tool, we will build a database of mobile apps, and for each app we will create a list of app-generated evidence in terms of data types, locations (and/or sequences of locations), and data format/syntax. The outcome of this research will help digital forensics practitioners reduce the complexity of their case investigations and provide a better completeness guarantee of evidence discovery, thereby delivering timely and more complete investigative results and eventually reducing backlogs at crime labs. In this paper, we present the main technical approaches we took to implement a dynamic taint analysis tool for Android app forensics. With the tool, we have analyzed 2,100 real-world Android apps. For each app, our tool produces the list of evidentiary data (e.g., GPS locations, device ID, contacts, browsing history, and some user inputs) that the app could have collected and stored on the device's local storage in the form of files or SQLite databases. We have evaluated our tool using both benchmark apps and real-world apps. Our results demonstrate the initial success of our tool in accurately discovering evidentiary data.

Cox, Guilherme, Yan, Zi, Bhattacharjee, Abhishek, Ganapathy, Vinod.  2018.  Secure, Consistent, and High-Performance Memory Snapshotting. Proceedings of the Eighth ACM Conference on Data and Application Security and Privacy. :236-247.

Many security and forensic analyses rely on the ability to fetch memory snapshots from a target machine. To date, the security community has relied on virtualization, external hardware or trusted hardware to obtain such snapshots. These techniques either sacrifice snapshot consistency or degrade the performance of applications executing atop the target. We present SnipSnap, a new snapshot acquisition system based on on-package DRAM technologies that offers snapshot consistency without excessively hurting the performance of the target's applications. We realize SnipSnap and evaluate its benefits using careful hardware emulation and software simulation, and report our results.

2019-01-16
Jia, Z., Cui, X., Liu, Q., Wang, X., Liu, C..  2018.  Micro-Honeypot: Using Browser Fingerprinting to Track Attackers. 2018 IEEE Third International Conference on Data Science in Cyberspace (DSC). :197–204.
Web attacks have proliferated across the whole Internet in recent years. To protect websites, security vendors and researchers collect attack information using web honeypots. However, web attackers can hide themselves by using stepping stones (e.g., VPNs, encrypted proxies) or anonymity networks (e.g., the Tor network). Conventional web honeypots lack an effective way to gather information about an attacker's identity, which raises a big obstacle for cybercrime traceability and forensics. Traditional forensics methods are based on traffic analysis; they require that defenders gain access to the entire network, which makes them unsuitable for honeypots. In this paper, we present the design, implementation, and deployment of the Micro-Honeypot, which aims to use browser fingerprinting techniques to track a web attacker. While a traditional honeypot lures attackers and records their activity, the Micro-Honeypot is deployed inside a honeypot: it runs and gathers identity information whenever an attacker visits. Our preliminary results show that the Micro-Honeypot can collect more information and track attackers even when they use proxies or anonymity networks to hide themselves.
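
As a toy illustration of the tracking idea (not the Micro-Honeypot's actual code), the sketch below derives a stable visitor identifier by hashing a set of browser attributes that a honeypot page might collect client-side; all attribute names and values here are hypothetical.

```python
import hashlib
import json

def fingerprint_id(attributes: dict) -> str:
    # Canonical JSON keeps the hash stable regardless of attribute order.
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical attributes gathered by the honeypot's client-side script.
visitor = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "screen": "1920x1080x24",
    "timezone_offset": -480,
    "canvas_hash": "3b5d5c37...",
    "fonts": ["Arial", "DejaVu Sans", "SimSun"],
}
# The ID stays the same even if the attacker switches proxy or Tor exit,
# because it depends on the browser environment, not the source IP.
print(fingerprint_id(visitor))
```
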
Rodríguez, R. J., Martín-Pérez, M., Abadía, I..  2018.  A Tool to Compute Approximation Matching Between Windows Processes. 2018 6th International Symposium on Digital Forensic and Security (ISDFS). :1–6.
Finding identical digital objects (or artifacts) during a forensic analysis is commonly achieved by means of cryptographic hash functions, such as MD5, SHA-1, or SHA-256, to name a few. However, these functions exhibit the avalanche effect property, which guarantees that if an input is changed slightly, the output changes significantly. Hence, they are unsuitable for typical digital forensics scenarios in which a forensic memory image from a likely compromised machine must be analyzed. Such a memory image file contains a snapshot of the processes (instances of executable files) that were executing when the dump was taken. However, processes are relocated in memory and contain dynamic data that depend on the current execution and environmental conditions. Therefore, comparing cryptographic hash values of different processes from the same executable file will be negative. Bytewise approximation matching algorithms may help in these scenarios, since they provide a similarity measurement in the range [0, 1] between similar inputs instead of a yes/no answer (in the set {0, 1}). In this paper, we introduce ProcessFuzzyHash, a Volatility plugin that enables us to compute approximation hash values of processes contained in a Windows memory dump.
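
For readers unfamiliar with approximation matching, the following minimal sketch compares two byte buffers (standing in for dumped process memory) using the ssdeep context-triggered piecewise hashing algorithm, assuming the `ssdeep` Python bindings are installed; ProcessFuzzyHash itself is a Volatility plugin and supports further algorithms.

```python
import ssdeep

# Hypothetical files holding the memory of two processes extracted
# from a Windows memory dump.
proc_a = open("proc_a.dmp", "rb").read()
proc_b = open("proc_b.dmp", "rb").read()

h_a = ssdeep.hash(proc_a)
h_b = ssdeep.hash(proc_b)

# ssdeep.compare returns an integer match score in 0..100; dividing by
# 100 maps it onto the [0, 1] similarity range discussed above.
similarity = ssdeep.compare(h_a, h_b) / 100.0
print(f"approximate similarity: {similarity:.2f}")
```
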
2018-05-09
Williams, Laurie.  2017.  Building Forensics in: Supporting the Investigation of Digital Criminal Activities (Invited Talk). Proceedings of the 1st ACM SIGSOFT International Workshop on Software Engineering and Digital Forensics. :1–1.
Logging mechanisms that capture detailed traces of user activity, including creating, reading, updating, and deleting (CRUD) data, facilitate meaningful forensic analysis following a security or privacy breach. However, software requirements often inadequately and inconsistently state “what” user actions should be logged, thus hindering meaningful forensic analysis. In this talk, we explore a variety of techniques for building a software system that supports forensic analysis. We discuss systematic heuristics-driven and patterns-driven processes for identifying the events that must be logged based on user actions and potential accidental and malicious use, as described in natural-language software artifacts. We then discuss a systematic process for creating a black-box test suite to verify that the identified events are logged. Using the results of executing the black-box test suite, we propose and evaluate a security metric for measuring the forensic-ability of user activity logs.
2018-04-02
Siddiqi, M., Ali, S. T., Sivaraman, V..  2017.  Secure Lightweight Context-Driven Data Logging for Bodyworn Sensing Devices. 2017 5th International Symposium on Digital Forensic and Security (ISDFS). :1–6.

Rapid advancement in wearable technology has unlocked a tremendous potential for its applications in the medical domain. Among the challenges in making the technology more useful for medical purposes is the lack of confidence in the data thus generated and communicated; incentives have already led to attacks on such systems. We propose a novel lightweight scheme to securely log the data from bodyworn sensing devices by utilizing neighboring devices as witnesses that store fingerprints of the data in Bloom filters, to be used later for forensics. Medical data from each sensor are stored at various locations of the system in chronological epoch-level blocks chained together, similar to a blockchain. Besides secure logging, the scheme also secures other contextual information, such as localization and timestamping. We demonstrate the effectiveness of the scheme through experimental results, define its performance parameters, and quantify their cost-benefit trade-offs through simulation.
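
To make the witness mechanism concrete, here is a minimal, self-contained Bloom filter sketch (parameters and key format are illustrative, not taken from the paper): a neighboring device inserts fingerprints of epoch-level data blocks, and an examiner can later test whether a claimed block was witnessed.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter; sizes are illustrative, not the paper's."""

    def __init__(self, size_bits: int = 8192, n_hashes: int = 4):
        self.size = size_bits
        self.n_hashes = n_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: bytes):
        for i in range(self.n_hashes):
            digest = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: bytes) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# A witness device records the fingerprint of an epoch-level block...
witness = BloomFilter()
witness.add(b"epoch-42|heart-rate|<sha256-of-block>")
# ...and a forensic examiner later checks a claimed block against it.
print(b"epoch-42|heart-rate|<sha256-of-block>" in witness)  # True
print(b"epoch-42|heart-rate|<tampered-block>" in witness)   # False (w.h.p.)
```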

Barrere, M., Steiner, R. V., Mohsen, R., Lupu, E. C..  2017.  Tracking the Bad Guys: An Efficient Forensic Methodology to Trace Multi-Step Attacks Using Core Attack Graphs. 2017 13th International Conference on Network and Service Management (CNSM). :1–7.

In this paper, we describe an efficient methodology to guide investigators during network forensic analysis. To this end, we introduce the concept of core attack graph, a compact representation of the main routes an attacker can take towards specific network targets. Such compactness allows forensic investigators to focus their efforts on critical nodes that are more likely to be part of attack paths, thus reducing the overall number of nodes (devices, network privileges) that need to be examined. Nevertheless, core graphs also allow investigators to hierarchically explore the graph in order to retrieve different levels of summarised information. We have evaluated our approach over different network topologies varying parameters such as network size, density, and forensic evaluation threshold. Our results demonstrate that we can achieve the same level of accuracy provided by standard logical attack graphs while significantly reducing the exploration rate of the network.
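
The paper constructs core attack graphs through its own ranking of attack routes; as a rough illustration of the general idea of pruning to critical nodes (a substitute technique, not the paper's algorithm), the sketch below keeps only the nodes of a toy attack graph whose betweenness centrality, i.e., how often they sit on shortest attack paths, exceeds a threshold, using networkx.

```python
import networkx as nx

# Toy logical attack graph: edges point from a compromised state
# toward states reachable from it.
g = nx.DiGraph([
    ("internet", "web-srv"), ("web-srv", "app-srv"),
    ("app-srv", "db-srv"), ("internet", "vpn-gw"),
    ("vpn-gw", "app-srv"), ("app-srv", "target"),
    ("db-srv", "target"),
])

# Nodes that frequently lie on shortest attack paths are the ones an
# investigator should examine first; the 0.1 threshold is arbitrary.
centrality = nx.betweenness_centrality(g)
core_nodes = {n for n, c in centrality.items() if c > 0.1}
core = g.subgraph(core_nodes)
print(sorted(core.nodes))  # e.g., ['app-srv', ...]
```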

2018-03-19
Chen, Z., Tondi, B., Li, X., Ni, R., Zhao, Y., Barni, M..  2017.  A Gradient-Based Pixel-Domain Attack against SVM Detection of Global Image Manipulations. 2017 IEEE Workshop on Information Forensics and Security (WIFS). :1–6.

We present a gradient-based attack against SVM-based forensic techniques relying on high-dimensional SPAM features. As opposed to prior work, the attack works directly in the pixel domain, even though the relationship between pixel values and SPAM features cannot be inverted. The proposed method relies on estimating the gradient of the SVM output with respect to pixel values; however, it departs from standard gradient descent due to the need to preserve the integer nature of pixels and to limit the attack's effect on image quality. A fast algorithm to estimate the gradient is also introduced to reduce the complexity of the attack. We tested the proposed attack against SVM detection of histogram stretching, adaptive histogram equalization, and median filtering. In all cases the attack succeeded in inducing a decision error with very limited distortion, the PSNR between the original and attacked images ranging from 50 to 70 dB. The attack is also effective in the Limited Knowledge (LK) case, when the SVM used by the attacker is trained on a different dataset from that used by the analyst.
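
The following sketch conveys the core idea under stated assumptions, not the paper's fast algorithm: a naive finite-difference estimate of the detector score's gradient with respect to pixels, followed by an integer-preserving update of only the most influential pixels. `svm_score` is a hypothetical stand-in for the analyst's SVM evaluated on SPAM features.

```python
import numpy as np

def estimate_gradient(svm_score, img: np.ndarray) -> np.ndarray:
    """Naive finite-difference gradient; the paper uses a fast estimator."""
    grad = np.zeros(img.shape, dtype=float)
    base = svm_score(img)
    for idx in np.ndindex(img.shape):
        bumped = img.copy()
        bumped[idx] = min(255, int(bumped[idx]) + 1)  # one gray-level step
        grad[idx] = svm_score(bumped) - base
    return grad

def attack_step(svm_score, img: np.ndarray) -> np.ndarray:
    grad = estimate_gradient(svm_score, img)
    step = -np.sign(grad)  # move against the gradient, +/-1 per pixel
    # Touch only the 1% most influential pixels to limit distortion.
    mask = np.abs(grad) >= np.percentile(np.abs(grad), 99)
    perturbed = img.astype(int) + (step * mask).astype(int)
    return np.clip(perturbed, 0, 255).astype(np.uint8)  # keep pixels integer
```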

Rocha, A., Scheirer, W. J., Forstall, C. W., Cavalcante, T., Theophilo, A., Shen, B., Carvalho, A. R. B., Stamatatos, E..  2017.  Authorship Attribution for Social Media Forensics. IEEE Transactions on Information Forensics and Security. 12:5–33.

The veil of anonymity provided by smartphones with pre-paid SIM cards, public Wi-Fi hotspots, and distributed networks like Tor has drastically complicated the task of identifying users of social media during forensic investigations. In some cases, the text of a single posted message will be the only clue to an author's identity. How can we accurately predict who that author might be when the message may never exceed 140 characters on a service like Twitter? For the past 50 years, linguists, computer scientists, and scholars of the humanities have been jointly developing automated methods to identify authors based on the style of their writing. All authors possess peculiarities of habit that influence the form and content of their written works. These characteristics can often be quantified and measured using machine learning algorithms. In this paper, we provide a comprehensive review of the methods of authorship attribution that can be applied to the problem of social media forensics. Furthermore, we examine emerging supervised learning-based methods that are effective for small sample sizes, and provide step-by-step explanations for several scalable approaches as instructional case studies for newcomers to the field. We argue that there is a significant need in forensics for new authorship attribution algorithms that can exploit context, can process multi-modal data, and are tolerant to incomplete knowledge of the space of all possible authors at training time.
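
As a small instructional sketch in the spirit of the survey's case studies (not code from the paper), one classic attribution pipeline feeds character n-gram frequencies to a linear SVM; character n-grams capture habits of punctuation, spelling, and morphology even in very short messages. The corpus below is a placeholder.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder training corpus: short messages with known authors.
train_texts = [
    "cant wait!! see u there :)",
    "I cannot attend; apologies for the inconvenience.",
]
train_authors = ["author_A", "author_B"]

model = make_pipeline(
    # Character 2- to 4-grams are a staple stylometric feature.
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4), min_df=1),
    LinearSVC(),
)
model.fit(train_texts, train_authors)
print(model.predict(["sry cant make it, see u!!"]))  # likely author_A
```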

2018-03-05
McDonald, J. T., Manikyam, R., Glisson, W. B., Andel, T. R., Gu, Y. X..  2017.  Enhanced Operating System Protection to Support Digital Forensic Investigations. 2017 IEEE Trustcom/BigDataSE/ICESS. :650–659.

Digital forensic investigators today are faced with numerous problems when recovering footprints of criminal activity that involve the use of computer systems. Investigators need the ability to recover evidence in a forensically sound manner, even when criminals actively work to alter the integrity, veracity, and provenance of data, applications and software that are used to support illicit activities. In many ways, operating systems (OS) can be strengthened from a technological viewpoint to support verifiable, accurate, and consistent recovery of system data when needed for forensic collection efforts. In this paper, we extend the ideas for forensic-friendly OS design by proposing the use of a practical form of computing on encrypted data (CED) and computing with encrypted functions (CEF) which builds upon prior work on component encryption (in circuits) and white-box cryptography (in software). We conduct experiments on sample programs to provide analysis of the approach based on security and efficiency, illustrating how component encryption can strengthen key OS functions and improve tamper-resistance to anti-forensic activities. We analyze the tradeoff space for use of the algorithm in a holistic approach that provides additional security and comparable properties to fully homomorphic encryption (FHE).

Hauger, W. K., Olivier, M. S..  2017.  Forensic Attribution in NoSQL Databases. 2017 Information Security for South Africa (ISSA). :74–82.

NoSQL databases have gained a lot of popularity over the last few years. They are now used in many new system implementations that work with vast amounts of data. This data will typically also include sensitive information that needs to be secured. NoSQL databases also underlie a number of cloud implementations that are increasingly being used to store sensitive information by various organisations. This has made NoSQL databases a new target for hackers and state-sponsored actors. Forensic examinations of compromised systems need to be conducted to determine what exactly transpired and who was responsible. This paper examines specifically whether NoSQL databases have security features that leave relevant traces, so that accurate forensic attribution can be conducted. The seeming lack of default security measures such as access control and logging prompted this examination. A survey of the top-ranked NoSQL databases was conducted to establish what authentication and authorisation features are available. Additionally, the provided logging mechanisms were examined, since access control without any auditing would not aid forensic attribution much. Some of the surveyed NoSQL databases do not provide adequate access control mechanisms and logging features that leave relevant traces, and forensic attribution cannot be performed using those. The other surveyed NoSQL databases do provide adequate mechanisms and logging traces for forensic attribution, but these are not enabled or configured by default. This means that in many cases they might not be available, leading to insufficient information for accurate forensic attribution even on those databases.

2018-02-27
Liu, C., Singhal, A., Wijesekera, D..  2017.  A Layered Graphical Model for Mission Attack Impact Analysis. 2017 IEEE Conference on Communications and Network Security (CNS). :602–609.

Business or military missions are supported by hardware and software systems. Unanticipated cyber activities occurring in supporting systems can impact such missions. In order to quantify such impact, we describe a layered graphical model as an extension of forensic investigation. Our model has three layers: the upper layer models the operational tasks that constitute the mission and their inter-dependencies; the middle layer reconstructs attack scenarios from available evidence and their inter-relationships; and, in cases where not all evidence is available, the lower layer reconstructs potentially missing attack steps. Using the graphs constructed at these three levels, we present a method to compute the impact of attack activities on missions. We use Common Vulnerability Scoring System (CVSS) scores from the NIST National Vulnerability Database (NVD), or forensic investigators' estimates, in our impact computations. We present a case study to show the utility of our model.
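
As a toy sketch of the impact computation (the aggregation rule here, a noisy-OR over dependencies, is an illustrative assumption, not the paper's exact formula), leaf attack steps carry normalized CVSS scores and each mission task aggregates over what it depends on:

```python
# Task/dependency structure and CVSS scores are made up for illustration.
mission = {
    "deliver-report": ["db-server", "web-portal"],   # task -> dependencies
    "db-server": ["CVE-2017-0001"],
    "web-portal": ["CVE-2017-0002", "CVE-2017-0003"],
}
cvss = {"CVE-2017-0001": 9.8, "CVE-2017-0002": 6.5, "CVE-2017-0003": 4.3}

def impact(node: str) -> float:
    if node in cvss:                  # evidence leaf: normalize CVSS to [0, 1]
        return cvss[node] / 10.0
    p_unaffected = 1.0
    for dep in mission.get(node, []):
        p_unaffected *= 1.0 - impact(dep)   # noisy-OR over dependencies
    return 1.0 - p_unaffected

print(f"mission impact: {impact('deliver-report'):.2f}")  # ~1.00 here
```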

2018-02-02
Hossain, M., Hasan, R., Zawoad, S..  2017.  Trust-IoV: A Trustworthy Forensic Investigation Framework for the Internet of Vehicles (IoV). 2017 IEEE International Congress on Internet of Things (ICIOT). :25–32.

The Internet of Vehicles (IoV) is a complex and dynamic mobile network system that enables information sharing between vehicles, their surrounding sensors, and clouds. While the IoV opens new opportunities in various applications and services to provide safety on the road, it introduces new challenges in the field of digital forensic investigations. The existing tools and procedures of digital forensics cannot cope with the highly distributed, decentralized, dynamic, and mobile infrastructure of the IoV. Forensic investigators face challenges in identifying the necessary pieces of evidence from the IoV environment and in collecting and analyzing that evidence. In this article, we propose Trust-IoV, a digital forensic framework for IoV systems that provides mechanisms to collect and store trustworthy evidence from the distributed infrastructure. Trust-IoV maintains a secure provenance of the evidence to ensure the integrity of the stored evidence and allows investigators to verify that integrity during an investigation. Our experimental results in a simulated environment suggest that Trust-IoV can operate with minimal overhead while ensuring the trustworthiness of evidence in a strong adversarial scenario.

2018-01-10
Thaler, S., Menkovski, V., Petkovic, M..  2017.  Towards a Neural Language Model for Signature Extraction from Forensic Logs. 2017 5th International Symposium on Digital Forensic and Security (ISDFS). :1–6.
Signature extraction is a critical preprocessing step in forensic log analysis because it enables sophisticated analysis techniques to be applied to logs. Currently, most signature extraction frameworks use either rule-based approaches or hand-crafted algorithms. Rule-based systems are error-prone and require high maintenance effort. Hand-crafted algorithms use heuristics and tend to work well only for specialized use cases. In this paper, we present a novel approach to extracting signatures from forensic logs that is based on a neural language model. This language model learns to identify mutable and non-mutable parts in a log message, and we use this information to extract signatures. Neural language models have been shown to work extremely well for learning complex relationships in natural language text. We experimentally demonstrate that our model can detect which parts are mutable with an accuracy of 86.4%. We also show how extracted signatures can be used to cluster log lines.
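
The paper's extractor is a neural language model; as a deliberately simple contrast, the sketch below shows the kind of rule such a model learns implicitly: a token position is mutable if it varies across log lines of the same event type, and the constant tokens form the signature.

```python
def extract_signature(lines: list[str]) -> str:
    rows = [line.split() for line in lines]
    length = min(len(row) for row in rows)
    signature = []
    for i in range(length):
        tokens = {row[i] for row in rows}
        # A position with a single value is non-mutable; otherwise mark it.
        signature.append(tokens.pop() if len(tokens) == 1 else "<*>")
    return " ".join(signature)

logs = [
    "Accepted password for alice from 10.0.0.5 port 5022",
    "Accepted password for bob from 10.0.0.9 port 6144",
]
print(extract_signature(logs))
# -> Accepted password for <*> from <*> port <*>
```
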
2017-12-28
Thuraisingham, B., Kantarcioglu, M., Hamlen, K., Khan, L., Finin, T., Joshi, A., Oates, T., Bertino, E..  2016.  A Data Driven Approach for the Science of Cyber Security: Challenges and Directions. 2016 IEEE 17th International Conference on Information Reuse and Integration (IRI). :1–10.

This paper describes a data driven approach to studying the science of cyber security (SoS). It argues that science is driven by data. It then describes issues and approaches towards the following three aspects: (i) Data Driven Science for Attack Detection and Mitigation, (ii) Foundations for Data Trustworthiness and Policy-based Sharing, and (iii) A Risk-based Approach to Security Metrics. We believe that the three aspects addressed in this paper will form the basis for studying the Science of Cyber Security.

2017-11-03
Cabaj, K., Mazurczyk, W..  2016.  Using Software-Defined Networking for Ransomware Mitigation: The Case of CryptoWall. IEEE Network. 30:14–20.

Currently, different forms of ransomware increasingly threaten Internet users. Modern ransomware encrypts important user data, and it is only possible to recover it once a ransom has been paid. In this article we show how software-defined networking (SDN) can be utilized to improve ransomware mitigation. In more detail, we analyze the behavior of popular ransomware, CryptoWall, and, based on this knowledge, propose two real-time mitigation methods. We then describe the design of an SDN-based system, implemented using OpenFlow, that facilitates a timely reaction to this threat, which is a crucial factor in the case of crypto ransomware. Importantly, such a design does not significantly affect overall network performance. Experimental results confirm that the proposed approach is feasible and efficient.

2017-06-05
Mirsky, Yisroel, Shabtai, Asaf, Rokach, Lior, Shapira, Bracha, Elovici, Yuval.  2016.  SherLock vs Moriarty: A Smartphone Dataset for Cybersecurity Research. Proceedings of the 2016 ACM Workshop on Artificial Intelligence and Security. :1–12.

In this paper we describe, and share with the research community, a significant smartphone dataset obtained from an ongoing long-term data collection experiment. The dataset currently contains 10 billion data records from 30 users collected over a period of 1.6 years, plus an additional 20 users for 6 months (totaling 50 active users currently participating in the experiment). The experiment involves two smartphone agents: SherLock and Moriarty. SherLock collects a wide variety of software and sensor data at a high sample rate, while Moriarty perpetrates various attacks on the user and logs its activities, thus providing labels for the SherLock dataset. The primary purpose of the dataset is to help security professionals and academic researchers develop innovative methods of implicitly detecting malicious behavior in smartphones, specifically from data obtainable without superuser (root) privileges. To demonstrate possible uses of the dataset, we perform a basic malware analysis and evaluate a method of continuous user authentication.

2017-05-30
Lacroix, Jesse, El-Khatib, Khalil, Akalu, Rajen.  2016.  Vehicular Digital Forensics: What Does My Vehicle Know About Me? Proceedings of the 6th ACM Symposium on Development and Analysis of Intelligent Vehicular Networks and Applications. :59–66.

A major component of modern vehicles is the infotainment system, which interfaces with the vehicle's drivers and passengers. Other mobile devices, such as handheld phones and laptops, can relay information to the embedded infotainment system through Bluetooth and vehicle WiFi. The ability to extract information from these systems would help forensic analysts determine the general contents stored in an infotainment system. Based on the extracted data, analysts could determine what stored information is relevant to law enforcement agencies and what information is non-essential when it comes to solving criminal activities relating to the vehicle itself. Overall, this would solidify the Intelligent Transport System and Vehicular Ad Hoc Network infrastructure in combating crime through the use of vehicle forensics. Additionally, determining the content of these systems will let forensic analysts know whether they can learn anything about the end user, directly and/or indirectly.

Xu, Zhang, Wu, Zhenyu, Li, Zhichun, Jee, Kangkook, Rhee, Junghwan, Xiao, Xusheng, Xu, Fengyuan, Wang, Haining, Jiang, Guofei.  2016.  High Fidelity Data Reduction for Big Data Security Dependency Analyses. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :504–516.

Intrusive multi-step attacks, such as Advanced Persistent Threat (APT) attacks, have plagued enterprises with significant financial losses and are the top reason for enterprises to increase their security budgets. Since these attacks are sophisticated and stealthy, they can remain undetected for years if individual steps are buried in background "noise." Thus, enterprises are seeking solutions to "connect the suspicious dots" across multiple activities. This requires ubiquitous system auditing for long periods of time, which in turn produces an overwhelmingly large number of system audit events. Given a limited system budget, efficiently handling ever-increasing system audit logs is a great challenge. This paper proposes a new approach that exploits the dependency among system events to reduce the number of log entries while still supporting high-quality forensic analysis. In particular, we first propose an aggregation algorithm that preserves the dependency of events during data reduction to ensure the high quality of forensic analysis. We then propose an aggressive reduction algorithm and exploit domain knowledge for further data reduction. To validate the efficacy of our proposed approach, we conduct a comprehensive evaluation on real-world auditing systems using log traces of more than one month. Our evaluation results demonstrate that our approach can significantly reduce the size of system logs and improve the efficiency of forensic analysis without losing accuracy.
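
A highly simplified sketch of dependency-preserving aggregation (the real system's rules are richer): consecutive audit events with the same (subject, operation, object) triple are merged into one entry spanning a time interval, collapsing bursts such as a loop of write() calls without altering any causal edge between entities.

```python
from dataclasses import dataclass

@dataclass
class Event:
    subject: str    # e.g., process name
    op: str         # e.g., "write"
    obj: str        # e.g., file path or socket
    t_start: float
    t_end: float

def aggregate(events: list[Event]) -> list[Event]:
    reduced: list[Event] = []
    for ev in events:
        last = reduced[-1] if reduced else None
        if last and (last.subject, last.op, last.obj) == (ev.subject, ev.op, ev.obj):
            last.t_end = ev.t_end   # extend the interval; dependencies unchanged
        else:
            reduced.append(ev)
    return reduced

trace = [Event("firefox", "write", "/tmp/cache", t, t) for t in (1.0, 1.1, 1.2)]
trace.append(Event("firefox", "connect", "10.2.3.4:443", 2.0, 2.0))
print(len(aggregate(trace)))  # 2 entries instead of 4
```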

Xu, Guanshuo, Wu, Han-Zhou, Shi, Yun Q..  2016.  Ensemble of CNNs for Steganalysis: An Empirical Study. Proceedings of the 4th ACM Workshop on Information Hiding and Multimedia Security. :103–107.

There has been growing interest in using convolutional neural networks (CNNs) in the fields of image forensics and steganalysis, and some promising results have been reported recently. These works mainly focus on the architectural design of CNNs; usually, a single CNN model is trained and then tested in experiments. It is known that neural networks, including CNNs, are well suited to forming ensembles. From this perspective, in this paper we employ CNNs as base learners and test several different ensemble strategies. In our study, a recently proposed CNN architecture is first adopted to build a group of CNNs, each trained on a random subsample of the training dataset. The output probabilities, or some intermediate feature representations, of each CNN are then extracted from the original data and pooled together to form new features ready for the second level of classification. To make the best use of the trained CNN models, we manage to partially recover the information lost to spatial subsampling in the pooling layers when forming feature vectors. Performance of the ensemble methods is evaluated on BOSSbase by detecting S-UNIWARD at a 0.4 bpp embedding rate. The results indicate that both the recovery of the lost information and learning from intermediate representations in CNNs, instead of output probabilities, have led to performance improvements.
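
Below is a minimal sketch of the simplest strategy tested in such studies, averaging the base learners' output probabilities; the `models` list stands in for CNNs already trained on random subsamples, and the constant outputs are placeholders:

```python
import numpy as np

def ensemble_predict(models, images: np.ndarray) -> np.ndarray:
    # Stack per-model P(stego) estimates and average them; the mean
    # reduces the variance of the individual base learners.
    probs = np.stack([m(images) for m in models], axis=0)
    return probs.mean(axis=0) > 0.5   # final stego/cover decision

# Placeholder stand-ins for three trained CNN base learners.
models = [
    lambda imgs: np.full(len(imgs), 0.62),
    lambda imgs: np.full(len(imgs), 0.48),
    lambda imgs: np.full(len(imgs), 0.71),
]
batch = np.zeros((5, 256, 256))         # five dummy grayscale images
print(ensemble_predict(models, batch))  # [ True  True  True  True  True]
```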

Shelke, Priya M., Prasad, Rajesh S..  2016.  Improving JPEG Image Anti-forensics. Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. :75:1–75:5.

This paper proposes a forensic method for identifying whether an image was previously JPEG-compressed, and also proposes an improved anti-forensics method to enhance the quality of the noise-added image. Stamm and Liu's anti-forensics method disables the detection capabilities of various forensic methods proposed in the literature for identifying compressed images; however, it also degrades the quality of the image. First, we analyze the anti-forensics method and then use the decimal histogram of the coefficients to distinguish never-compressed images from previously compressed ones, even when the compressed image has been processed anti-forensically. After analyzing the noise distribution in the anti-forensically processed image, we propose a method to remove the Gaussian noise caused by image dithering, which in turn enhances the image quality. The paper is organized as follows: Section I is the introduction, covering previous literature. Section II summarizes the anti-forensics method proposed by Stamm et al. In Section III we propose a forensic approach, and Section IV comprises the improved anti-forensics approach. Section V covers the details of the experimentation, followed by the conclusion.

2017-05-19
Ahmed, Irfan, Roussev, Vassil, Johnson, William, Senthivel, Saranyan, Sudhakaran, Sneha.  2016.  A SCADA System Testbed for Cybersecurity and Forensic Research and Pedagogy. Proceedings of the 2nd Annual Industrial Control System Security Workshop. :1–9.

This paper presents a supervisory control and data acquisition (SCADA) testbed recently built at the University of New Orleans. The testbed consists of models of three industrial physical processes: a gas pipeline, a power transmission and distribution system, and a wastewater treatment plant; these systems are fully functional and implemented at small scale. It utilizes real-world industrial equipment such as transformers, programmable logic controllers (PLCs), and aerators, bringing it closer to modeling real-world SCADA systems. Sensors, actuators, and PLCs are deployed at each physical process for local control and monitoring, and the PLCs are also connected to a computer running human-machine interface (HMI) software for monitoring the status of the physical processes. The testbed is a useful resource for cybersecurity research, forensic research, and education on different aspects of SCADA systems, such as PLC programming, protocol analysis, and the demonstration of cyber attacks.

2017-05-18
Chan, Ellick, Venkataraman, Shivaram, David, Francis, Chaugule, Amey, Campbell, Roy.  2010.  Forenscope: A Framework for Live Forensics. Proceedings of the 26th Annual Computer Security Applications Conference. :307–316.

Current post-mortem cyber-forensic techniques may cause significant disruption to the evidence gathering process by breaking active network connections and unmounting encrypted disks. Although newer live forensic analysis tools can preserve active state, they may taint evidence by leaving footprints in memory. To help address these concerns, we present Forenscope, a framework that allows an investigator to examine the state of an active system without the effects of taint or the forensic blurriness caused by analyzing a running system. We show how Forenscope can fit into accepted workflows to improve the evidence gathering process. Forenscope preserves the state of the running system and allows running processes, open files, encrypted filesystems, and open network sockets to persist during the analysis process. Forenscope has been tested on live systems to show that it does not operationally disrupt critical processes and that it can perform an analysis in less than 15 seconds while using only 125 KB of memory. We show that Forenscope can detect stealth rootkits, neutralize threats, and expedite the investigation process by finding evidence in memory.

2017-03-08
Morales, A., Luna-Garcia, E., Fierrez, J., Ortega-Garcia, J..  2015.  Score normalization for keystroke dynamics biometrics. 2015 International Carnahan Conference on Security Technology (ICCST). :223–228.

This paper analyzes score normalization for keystroke dynamics authentication systems. Previous studies have shown that the performance of behavioral biometric recognition systems (e.g., voice and signature) can be largely improved with score normalization and target-dependent techniques. The main objective of this work is twofold: i) to analyze the effects of different thresholding techniques in 4 different keystroke dynamics recognition systems under real operational scenarios; and ii) to improve the performance of keystroke dynamics on the basis of target-dependent score normalization techniques. The experiments in this work are carried out on the keystroke patterns of 114 users from two different publicly available databases. The experiments show that there is large room for improvement in keystroke dynamics systems. The results suggest that score normalization techniques can be used to improve the performance of keystroke dynamics systems by more than 20%. These results encourage researchers to explore this research line to further improve the performance of these systems in real operational environments.
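
As a concrete example of a target-dependent technique of the kind evaluated here (the exact normalizations in the paper may differ), z-norm rescales a raw match score by impostor-score statistics estimated per target, so that a single global threshold becomes meaningful across users; all numbers below are illustrative:

```python
import numpy as np

def znorm(raw_score: float, impostor_scores: np.ndarray) -> float:
    # Center and scale by the target-specific impostor distribution.
    mu, sigma = impostor_scores.mean(), impostor_scores.std()
    return (raw_score - mu) / sigma

# Scores of other users' keystroke samples against this target's template.
impostors = np.array([0.21, 0.35, 0.28, 0.19, 0.30])
genuine_attempt = 0.55
print(f"normalized score: {znorm(genuine_attempt, impostors):.2f}")
```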