Biblio

Filters: Keyword is information forensics
2018-03-05
Zia, Tanveer, Liu, Peng, Han, Weili.  2017.  Application-Specific Digital Forensics Investigative Model in Internet of Things (IoT). Proceedings of the 12th International Conference on Availability, Reliability and Security. :55:1–55:7.

Besides its enormous benefits to industry and the community, the Internet of Things (IoT) has introduced unique security challenges to its enablers and adopters. As cybersecurity threats continue to grow, they are likely to influence IoT deployments. Therefore, besides strengthening the security of IoT systems, it is imperative that we develop effective digital forensics techniques so that, when breaches occur, we can track the sources of attacks and bring perpetrators to due process with reliable digital evidence. The biggest challenge in this regard is the heterogeneous nature of devices in IoT systems and the lack of unified standards. In this paper we investigate digital forensics from an IoT perspective. We argue that, besides traditional digital forensics practices, it is important to have application-specific forensics in place to ensure the collection of evidence in the context of specific IoT applications. We consider the top three IoT applications and introduce a model which covers not just the traditional digital forensics process but application-specific forensics as well. We believe that the proposed model will enable the collection, examination, analysis and reporting of forensically sound evidence in an IoT application-specific digital forensics investigation.
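The abstract does not spell out the model's internals, so the following Python sketch is only a rough illustration of how an application-specific investigation could be framed: the classical collection, examination, analysis and reporting phases, parameterized by an application profile. The application name and evidence sources are hypothetical placeholders, not taken from the paper.

    from dataclasses import dataclass

    @dataclass
    class ApplicationProfile:
        # Hypothetical profile: which evidence sources matter is application-specific.
        name: str
        evidence_sources: list

    def investigate(profile, acquire, examine, analyze, report):
        # The four classical forensic phases, run against one application profile.
        raw = [acquire(src) for src in profile.evidence_sources]   # collection
        artifacts = [examine(item) for item in raw]                # examination
        findings = analyze(artifacts)                              # analysis
        return report(profile.name, findings)                      # reporting

    # Placeholder profile (illustrative; the paper's three applications are not
    # named in this abstract):
    smart_home = ApplicationProfile("smart home",
                                    ["hub event log", "companion app DB", "cloud API"])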

Gowda, Thamme, Hundman, Kyle, Mattmann, Chris A..  2017.  An Approach for Automatic and Large Scale Image Forensics. Proceedings of the 2nd International Workshop on Multimedia Forensics and Security. :16–20.

This paper describes the application of deep learning-based image recognition in the DARPA Memex program and its repository of 1.4 million weapons-related images collected from the deep web. We develop a fast, efficient, and easily deployable framework that integrates Google's TensorFlow framework with Apache Tika for automatically performing image forensics on the Memex data. Our framework and its integration are evaluated qualitatively and quantitatively, and our work suggests that automated, large-scale, and reliable image classification and forensics can be widely deployed in bulk analysis for answering domain-specific questions.
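As a hedged sketch of the classification step only (not the Memex/Tika pipeline itself), the Python snippet below performs bulk image labeling with a pre-trained TensorFlow/Keras model; the choice of InceptionV3 and the function names here are assumptions for illustration.

    # Assumes tensorflow >= 2.x is installed; weights download on first use.
    import numpy as np
    from tensorflow.keras.applications.inception_v3 import (
        InceptionV3, preprocess_input, decode_predictions)
    from tensorflow.keras.preprocessing import image

    model = InceptionV3(weights="imagenet")

    def label_image(path, top=3):
        # Return the top-k ImageNet labels for one image file.
        img = image.load_img(path, target_size=(299, 299))
        x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
        preds = model.predict(x)
        return [(name, float(p))
                for (_, name, p) in decode_predictions(preds, top=top)[0]]

In a bulk-forensics setting, a wrapper like this would be invoked per file during content and metadata extraction, which is the role Apache Tika plays in the authors' framework.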

2017-05-30
Vaughn, Jr., Rayford B., Morris, Tommy.  2016.  Addressing Critical Industrial Control System Cyber Security Concerns via High Fidelity Simulation. Proceedings of the 11th Annual Cyber and Information Security Research Conference. :12:1–12:4.

This paper outlines a set of 10 cyber security concerns associated with Industrial Control Systems (ICS). The concerns address software and hardware development, implementation, and maintenance practices; supply chain assurance; the need for cyber forensics in ICS; a lack of awareness and training; and, finally, a need for test beds which can be used to address the first 9 cited concerns. The concerns documented in this paper were developed based on the authors' combined experience conducting research in this field for the US Department of Homeland Security, the National Science Foundation, and the Department of Defense. The second half of this paper documents a virtual test bed platform which is offered as a tool to address the concerns listed in the first half of the paper. The paper discusses the various types of test beds proposed in the literature for ICS research, provides an overview of the virtual test bed platform developed by the authors, and lists the future work required to extend the existing test beds to serve as a development platform.

Lacroix, Jesse, El-Khatib, Khalil, Akalu, Rajen.  2016.  Vehicular Digital Forensics: What Does My Vehicle Know About Me? Proceedings of the 6th ACM Symposium on Development and Analysis of Intelligent Vehicular Networks and Applications. :59–66.

A major component of modern vehicles is the infotainment system, which interfaces with drivers and passengers. Other mobile devices, such as handheld phones and laptops, can relay information to the embedded infotainment system through Bluetooth and vehicle WiFi. The ability to extract information from these systems would help forensic analysts determine the general content stored in an infotainment system. Based on the extracted data, analysts could then determine what stored information is relevant to law enforcement agencies and what information is non-essential when it comes to solving criminal activities relating to the vehicle itself. Overall, this would strengthen the Intelligent Transport System and Vehicular Ad Hoc Network infrastructure in combating crime through the use of vehicle forensics. Additionally, determining the content of these systems will allow forensic analysts to know whether they can learn anything about the end user, directly and/or indirectly.

Gu, Yufei, Lin, Zhiqiang.  2016.  Derandomizing Kernel Address Space Layout for Memory Introspection and Forensics. Proceedings of the Sixth ACM Conference on Data and Application Security and Privacy. :62–72.

Modern OS kernels, including Windows, Linux, and Mac OS, have all adopted kernel Address Space Layout Randomization (ASLR), which shifts the base address of kernel code and data into different locations in different runs. Consequently, when performing introspection or forensic analysis of kernel memory, we cannot use any pre-determined addresses to interpret kernel events; instead, we must derandomize the address space layout and use the new addresses. However, few efforts have been made to derandomize the kernel address space, and many questions remain, such as which approach is more efficient and robust. Therefore, we present the first systematic study of how to derandomize a kernel when given a memory snapshot of a running kernel instance. Unlike the derandomization approaches used in traditional memory exploits, where only remote access is available, in introspection and forensics applications we can use all the information available in kernel memory to generate signatures and derandomize the ASLR; in other words, there exists a large space of solutions to this problem. As such, in this paper we examine a number of typical approaches to generating strong signatures from both kernel code and data, based on insight into how kernel code and data are updated, and compare them from the perspectives of efficiency (simplicity, speed, etc.) and robustness (e.g., whether the approach is hard to evade or forge). In particular, we have designed four approaches, including brute-force code scanning, patched code signature generation, unpatched code signature generation, and a read-only-pointer-based approach, according to the intrinsic behavior of kernel code and data with respect to kernel ASLR. We have obtained encouraging results for each of these approaches, and the corresponding experimental results are reported in this paper.
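As an illustration of the simplest of the four approaches, brute-force code scanning, the Python sketch below slides a known kernel code signature over a raw memory snapshot at page-aligned offsets to recover the randomized base; this is a toy reconstruction under stated assumptions, not the authors' tool.

    PAGE = 0x1000  # assume page-aligned kernel bases

    def find_kernel_slides(snapshot: bytes, signature: bytes, static_base: int):
        # `signature` is assumed to be a run of kernel code bytes taken from the
        # matching (unrandomized) kernel image on disk; each page-aligned match
        # in the snapshot yields a candidate ASLR slide.
        slides = []
        pos = snapshot.find(signature)
        while pos != -1:
            if pos % PAGE == 0:
                slides.append(pos - static_base)
            pos = snapshot.find(signature, pos + 1)
        return slides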

Jadhao, Ankita R., Agrawal, Avinash J..  2016.  A Digital Forensics Investigation Model for Social Networking Site. Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. :130:1–130:4.

Social networking is fundamentally shifting the way we communicate, share ideas, and form opinions. People from every age group use social media and e-commerce sites for their needs. Nowadays, almost every kind of illegal activity takes place via social networks and instant messaging, and present systems are not capable of finding all suspicious words. In this paper, we provide a brief description of the problem and review the different frameworks developed so far, and we propose a better system which can identify criminal activity on social networking sites more efficiently. We use an Ontology-Based Information Extraction (OBIE) technique to identify the domain of a word, and association rule mining to generate rules. A heuristic method checks the user database for malicious users according to predefined elements, and a Naïve Bayes classifier is used to identify the context behind a message or post. The experimental results can be used by a cybercrime department for further action on behalf of the victim.
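As a hedged sketch of the Naïve Bayes step alone (the OBIE and association-rule components are not shown), the Python snippet below classifies posts as suspicious or benign with scikit-learn; the training texts and labels are made-up placeholders.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    train_posts = ["let's meet for coffee tomorrow",      # placeholder data
                   "selling stolen card numbers cheap"]
    train_labels = ["benign", "suspicious"]

    clf = make_pipeline(CountVectorizer(), MultinomialNB())
    clf.fit(train_posts, train_labels)
    print(clf.predict(["card numbers for sale, no questions asked"]))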

Al-Athamneh, M., Kurugollu, F., Crookes, D., Farid, M..  2016.  Video Authentication Based on Statistical Local Information. Proceedings of the 9th International Conference on Utility and Cloud Computing. :388–391.

With the proliferation of video editing tools, the trustworthiness of video information has become a highly sensitive issue. Today many devices, such as CCTV systems, digital cameras and mobile phones, can capture digital video, and these videos may be transmitted over the Internet or other non-secure channels. As digital video can be used as supporting evidence, it has to be protected against manipulation or tampering. Most video authentication techniques are based on watermarking and digital signatures; these techniques are effective for copyright purposes but difficult to apply in other cases, such as video surveillance or videos captured by consumer cameras. In this paper we propose an intelligent technique for video authentication which uses the video's local information, making it useful for real-world applications. The proposed algorithm relies on the video's statistical local information and was applied to a dataset of videos captured by a range of consumer video cameras. The results show that the proposed algorithm has the potential to be a reliable intelligent technique for digital video authentication without the need for an SVM classifier, which makes it faster and less computationally expensive compared with other intelligent techniques.
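The abstract does not specify which local statistic is used, so the following Python sketch is only a schematic stand-in: it computes a per-block variance map for each grayscale frame and flags frames whose map jumps sharply relative to the previous frame.

    import numpy as np

    def block_variance_map(frame: np.ndarray, block: int = 16) -> np.ndarray:
        # Per-block variance of a grayscale frame; H and W must be multiples of `block`.
        h, w = frame.shape
        blocks = frame.reshape(h // block, block, w // block, block)
        return blocks.var(axis=(1, 3))

    def suspicious_frames(frames, threshold=5.0):
        # Flag frames whose local statistics deviate strongly from their predecessor's.
        maps = [block_variance_map(f.astype(np.float64)) for f in frames]
        return [i for i in range(1, len(maps))
                if np.abs(maps[i] - maps[i - 1]).mean() > threshold]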

Xu, Zhang, Wu, Zhenyu, Li, Zhichun, Jee, Kangkook, Rhee, Junghwan, Xiao, Xusheng, Xu, Fengyuan, Wang, Haining, Jiang, Guofei.  2016.  High Fidelity Data Reduction for Big Data Security Dependency Analyses. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :504–516.

Intrusive multi-step attacks, such as Advanced Persistent Threat (APT) attacks, have plagued enterprises with significant financial losses and are the top reason for enterprises to increase their security budgets. Since these attacks are sophisticated and stealthy, they can remain undetected for years if individual steps are buried in background "noise." Thus, enterprises are seeking solutions to "connect the suspicious dots" across multiple activities. This requires ubiquitous system auditing for long periods of time, which in turn produces an overwhelmingly large volume of system audit events. Given a limited system budget, efficiently handling ever-increasing system audit logs is a great challenge. This paper proposes a new approach that exploits the dependency among system events to reduce the number of log entries while still supporting high-quality forensic analysis. In particular, we first propose an aggregation algorithm that preserves the dependency of events during data reduction to ensure the high quality of forensic analysis. We then propose an aggressive reduction algorithm that exploits domain knowledge for further data reduction. To validate the efficacy of our proposed approach, we conduct a comprehensive evaluation on real-world auditing systems using log traces spanning more than one month. Our evaluation results demonstrate that our approach can significantly reduce the size of system logs and improve the efficiency of forensic analysis without losing accuracy.
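A much-simplified Python sketch of the dependency-preserving idea is shown below (the paper's full algorithm is more involved): repeated events on the same process-operation-object edge are merged into one aggregated record, but only while no other event has touched that object in between, so causal dependencies survive the reduction.

    from collections import namedtuple

    Event = namedtuple("Event", "ts subject op obj")

    def reduce_log(events):
        # `events` is assumed to be sorted by timestamp.
        out, last_by_key, last_toucher = [], {}, {}
        for e in events:
            key = (e.subject, e.op, e.obj)
            # Merge only if this edge was also the most recent activity on obj,
            # so no cross-edge dependency is collapsed away.
            if key in last_by_key and last_toucher.get(e.obj) == key:
                last_by_key[key]["count"] += 1
                last_by_key[key]["last_ts"] = e.ts
            else:
                rec = {"subject": e.subject, "op": e.op, "obj": e.obj,
                       "first_ts": e.ts, "last_ts": e.ts, "count": 1}
                out.append(rec)
                last_by_key[key] = rec
            last_toucher[e.obj] = key
        return out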

Pasquini, Cecilia, Schöttle, Pascal, Böhme, Rainer, Boato, Giulia, Pérez-González, Fernando.  2016.  Forensics of High Quality and Nearly Identical JPEG Image Recompression. Proceedings of the 4th ACM Workshop on Information Hiding and Multimedia Security. :11–21.

We address the known problem of detecting previous compression in JPEG images, focusing on the challenging case of high and very high quality factors (≥ 90) as well as repeated compression with identical or nearly identical quality factors. We first revisit the approaches based on Benford–Fourier analysis in the DCT domain and block convergence analysis in the spatial domain, both originally conceived for specific scenarios. Leveraging decision tree theory, we design a combined approach that complements their discriminatory capabilities. We obtain a set of novel detectors targeted at high quality grayscale JPEG images.
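As a hedged illustration of the Benford-style feature underlying such detectors (the feature only, not the paper's combined detector), the Python sketch below builds the first-digit distribution of block-DCT coefficients and compares it with Benford's law; strong divergence hints at prior compression.

    import numpy as np
    from scipy.fft import dctn

    def first_digit_histogram(img: np.ndarray, block: int = 8) -> np.ndarray:
        # Leading-digit distribution of AC block-DCT coefficients (digits 1-9).
        h, w = img.shape
        counts = np.zeros(9)
        for i in range(0, h - h % block, block):
            for j in range(0, w - w % block, block):
                c = dctn(img[i:i+block, j:j+block], norm="ortho").ravel()[1:]  # skip DC
                c = np.abs(c[np.abs(c) >= 1])
                digits = (c / 10 ** np.floor(np.log10(c))).astype(int)
                counts += np.bincount(digits, minlength=10)[1:10]
        return counts / counts.sum()

    benford = np.log10(1 + 1 / np.arange(1, 10))  # expected law for digits 1..9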

Xu, Guanshuo, Wu, Han-Zhou, Shi, Yun Q..  2016.  Ensemble of CNNs for Steganalysis: An Empirical Study. Proceedings of the 4th ACM Workshop on Information Hiding and Multimedia Security. :103–107.

There has been growing interest in using convolutional neural networks (CNNs) in the fields of image forensics and steganalysis, and some promising results have been reported recently. These works mainly focus on the architectural design of CNNs; usually, a single CNN model is trained and then tested in experiments. It is known that neural networks, including CNNs, are well suited to forming ensembles. From this perspective, in this paper we employ CNNs as base learners and test several different ensemble strategies. In our study, a recently proposed CNN architecture is first adopted to build a group of CNNs, each trained on a random subsample of the training dataset. The output probabilities, or some intermediate feature representations, of each CNN are then extracted from the original data and pooled together to form new features ready for the second level of classification. To make the best use of the trained CNN models, we partially recover the information lost to spatial subsampling in the pooling layers when forming feature vectors. The performance of the ensemble methods is evaluated on BOSSbase by detecting S-UNIWARD at a 0.4 bpp embedding rate. The results indicate that both the recovery of the lost information and learning from intermediate representations in CNNs, instead of output probabilities, lead to performance improvements.
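A minimal Python sketch of the ensemble strategy is given below; the tiny architecture and data handling are placeholders, not the paper's steganalysis CNN or its BOSSbase setup. It trains several CNNs on random subsamples and averages their output probabilities.

    import numpy as np
    import tensorflow as tf

    def make_cnn(input_shape=(64, 64, 1)):
        # Toy stand-in for the base-learner CNN.
        return tf.keras.Sequential([
            tf.keras.layers.Conv2D(8, 5, activation="relu", input_shape=input_shape),
            tf.keras.layers.AveragePooling2D(4),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(2, activation="softmax"),
        ])

    def train_ensemble(x, y, k=5, frac=0.8, epochs=2):
        models = []
        for _ in range(k):
            # Each base learner sees a random subsample of the training set.
            idx = np.random.choice(len(x), int(frac * len(x)), replace=False)
            m = make_cnn(x.shape[1:])
            m.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
            m.fit(x[idx], y[idx], epochs=epochs, verbose=0)
            models.append(m)
        return models

    def ensemble_predict(models, x):
        # Probability averaging; the paper also studies pooling intermediate features.
        return np.mean([m.predict(x, verbose=0) for m in models], axis=0)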

Shelke, Priya M., Prasad, Rajesh S..  2016.  Improving JPEG Image Anti-forensics. Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. :75:1–75:5.

This paper proposes a forensic method for identifying whether an image was previously compressed by JPEG, and also proposes an improved anti-forensics method to enhance the quality of the noise-added image. Stamm and Liu's anti-forensics method disables the detection capabilities of various forensics methods proposed in the literature for identifying compressed images; however, it also degrades the quality of the image. First, we analyze the anti-forensics method and then use the decimal histogram of the coefficients to distinguish never-compressed images from previously compressed ones, even when the compressed image has been processed anti-forensically. After analyzing the noise distribution in the anti-forensically processed (AF) image, we propose a method to remove the Gaussian noise caused by image dithering, which in turn enhances the image quality. The paper is organized as follows: Section I is the introduction, covering previous literature. Section II summarizes the anti-forensic method proposed by Stamm et al. Section III presents our forensic approach, and Section IV comprises our improved anti-forensic approach. Section V covers the details of the experimentation, followed by the conclusion.
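The abstract does not detail the noise-removal step; as one plausible illustration only, the Python sketch below applies a Wiener filter, a standard choice for suppressing additive Gaussian noise such as that introduced by anti-forensic dithering. This is an assumption, not the authors' exact method.

    import numpy as np
    from scipy.signal import wiener

    def denoise_af_image(img: np.ndarray, window: int = 5) -> np.ndarray:
        # Wiener-filter an anti-forensically dithered grayscale image
        # (assumed choice of denoiser; the window size here is arbitrary).
        out = wiener(img.astype(np.float64), (window, window))
        return np.clip(out, 0, 255).astype(np.uint8)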

Cuzzocrea, Alfredo, Pirrò, Giuseppe.  2016.  A Semantic-web-technology-based Framework for Supporting Knowledge-driven Digital Forensics. Proceedings of the 8th International Conference on Management of Digital EcoSystems. :58–66.

The usage of Information and Communication Technologies (ICTs) pervades everyday life. While ICT has contributed to improving the quality of our lives, it has also given rise to new forms of (cyber)crime. The diversity and amount of information forensic investigators need to cope with when tackling a cyber-crime case call for tools and techniques where knowledge is the main actor. Current approaches leave to the investigator the chore of integrating the diverse sources of evidence relevant to a case, thus hindering the automatic generation of reusable knowledge. This paper describes an architecture that lifts the classical phases of a digital forensic investigation to a knowledge-driven setting. We discuss how languages and technologies originating from the Semantic Web can complement digital forensics tools so that knowledge becomes a first-class citizen. Our architecture enables complex forensic investigations to be performed in an integrated way and, as a by-product, builds a knowledge base that can be consulted to gain insights from previous cases. Our proposal has been inspired by real-world scenarios emerging in the context of an Italian research project on cyber security.
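As a hedged sketch of the knowledge-driven idea using Semantic Web tooling (rdflib here; the paper's own stack and vocabulary are not given in the abstract, so the namespace and properties below are invented for illustration), evidence can be asserted as RDF triples and later cases can query the accumulated knowledge base with SPARQL.

    from rdflib import Graph, Namespace, Literal

    CASE = Namespace("http://example.org/forensics#")  # hypothetical vocabulary
    g = Graph()

    # Assert evidence gathered during an investigation as RDF triples.
    g.add((CASE.evidence1, CASE.foundOn, CASE.laptop42))
    g.add((CASE.evidence1, CASE.relatedTo, CASE.caseA))
    g.add((CASE.laptop42, CASE.ownedBy, Literal("suspect-1")))

    # Consult the knowledge base from a later investigation.
    q = """SELECT ?ev ?dev WHERE {
             ?ev <http://example.org/forensics#foundOn> ?dev .
           }"""
    for ev, dev in g.query(q):
        print(ev, dev)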