Bibliography
File timestamps do not receive much attention from information security specialists and computer forensic scientists. It is commonly believed that timestamps are extremely easy to fake and that the system time of a computer can simply be changed. However, the operating system needs accurate time readings to synchronize processes and work with file objects. The authors estimate that several million timestamps can be stored on a logical partition of a hard disk formatted with NTFS. The MFT stores four timestamps for each file object in the \$STANDARD\_INFORMATION and \$FILE\_NAME attributes. Furthermore, each directory contains, in its \$INDEX\_ROOT or \$INDEX\_ALLOCATION attributes, four more timestamps for every file within it. File timestamps are set and changed as a result of file operations, and different file operations affect the timestamps differently. This article presents the results of tool-based observation of the creation and update of timestamps in the MFT resulting from basic file operations. Analysis of the results is of interest for computer forensic science.
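As a hedged illustration of what these MFT timestamps look like on disk, the sketch below decodes the four 64-bit FILETIME values (counts of 100-nanosecond intervals since 1601-01-01 UTC) that open the body of a \$STANDARD\_INFORMATION attribute. The field layout and the sample value are assumptions made for illustration; they are not taken from the article.

```python
from datetime import datetime, timedelta, timezone

# NTFS stores timestamps as 64-bit counts of 100-nanosecond
# intervals elapsed since 1601-01-01 00:00:00 UTC.
NTFS_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_datetime(filetime: int) -> datetime:
    """Convert a raw 64-bit NTFS FILETIME value to a Python datetime."""
    return NTFS_EPOCH + timedelta(microseconds=filetime / 10)

def decode_standard_information(raw: bytes) -> dict:
    """Decode the four timestamps at the start of a $STANDARD_INFORMATION
    attribute body (creation, modification, MFT-change, access)."""
    names = ("created", "modified", "mft_changed", "accessed")
    return {
        name: filetime_to_datetime(int.from_bytes(raw[i * 8:(i + 1) * 8], "little"))
        for i, name in enumerate(names)
    }

# Example: a FILETIME value captured from an MFT record (value is illustrative).
print(filetime_to_datetime(0x01D9A2B3C4D5E6F7))
```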
This paper presents the theoretical background of a main research activity focused on the evaluation of wiping/erasure standards, which are mostly implemented in specific software products developed and programmed for data wiping. The information saved on storage devices often includes metadata and trace data. These kinds of data in particular are very important in the process of forensic analysis because they sometimes contain information about interconnections with other files. Most people save their sensitive information on local storage devices and later want to securely erase these files, but this operation is usually problematic. Secure file destruction is one of many anti-forensic methods. The outcome of this paper is to define future research activities focused on the establishment of a suitable digital environment. This environment will be prepared for testing and evaluating selected wiping standards and appropriate eraser software.
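As a rough illustration of the kind of routine such wiping standards prescribe, the sketch below overwrites a file in place with several passes (zeros, ones, random data) before unlinking it. The pass pattern is illustrative and does not implement any particular standard, and on journaling or flash-based storage file-level overwriting alone is generally not sufficient for secure destruction.

```python
import os

def wipe_file(path: str, passes=(b"\x00", b"\xff", None)) -> None:
    """Overwrite a file in place with several passes, then unlink it.
    A pass of None writes pseudo-random bytes. Illustrative only."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for pattern in passes:
            f.seek(0)
            data = os.urandom(size) if pattern is None else pattern * size
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # push the pass out to the storage device
    os.remove(path)

# Usage (destructive): wipe_file("secret.txt")
```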
In recent decades, a significant research effort has been devoted to the development of forensic tools for retrieving information and detecting possible tampering of multimedia documents. A number of counter-forensic tools have been developed as well in order to impede a correct analysis. Such tools are often very effective due to the vulnerability of multimedia forensics tools, which are not designed to work in an adversarial environment. In this scenario, developing forensic techniques capable of granting good performance even in the presence of an adversary aiming at impeding the forensic analysis is becoming a necessity. This turns out to be a difficult task, given the weakness of the traces that forensic analysis usually relies on. The goal of this paper is to provide an overview of the advances made over the last decade in the field of adversarial multimedia forensics. We first consider the viewpoints of the forensic analyst and the attacker independently, then we review some of the attempts made to take both perspectives into account simultaneously by resorting to game theory. Finally, we discuss the most pressing open problems and outline possible paths for future research.
As modern attacks become more stealthy and persistent, detecting or preventing them at their early stages becomes virtually impossible. Instead, an attack investigation or provenance system aims to continuously monitor and log interesting system events with minimal overhead. Later, if the system observes any anomalous behavior, it analyzes the log to identify who initiated the attack and which resources were affected by the attack, and then to assess and recover from any damage incurred. However, because of a fundamental tradeoff between log granularity and system performance, existing systems typically record system-call events without the detailed program-level activities (e.g., memory operations) required for accurately reconstructing attack causality, or demand that every monitored program be instrumented to provide program-level information. To address this issue, we propose RAIN, a Refinable Attack INvestigation system based on a record-replay technology that records system-call events during runtime and performs instruction-level dynamic information flow tracking (DIFT) during on-demand process replay. Instead of replaying every process with DIFT, RAIN conducts system-call-level reachability analysis to filter out unrelated processes and to minimize the number of processes to be replayed, making inter-process DIFT feasible. Evaluation results show that RAIN effectively prunes out unrelated processes and determines attack causality with negligible false positive rates. In addition, the runtime overhead of RAIN is similar to that of existing system-call-level provenance systems, and its analysis overhead is much smaller than that of full-system DIFT.
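The reachability step can be pictured as a graph search over a system-call provenance graph: starting from the entities flagged as anomalous, only processes that lie on a causal path are kept for instruction-level replay. The sketch below is an illustrative reconstruction under that assumption, not RAIN's actual implementation; the toy graph and node names are invented for the example.

```python
from collections import deque

def reachable_processes(edges, anomalous):
    """Breadth-first search over a provenance graph.
    edges: dict mapping a node (process or file) to the nodes it
    directly influenced (e.g., wrote to, spawned).
    anomalous: nodes where suspicious behavior was observed.
    Returns the nodes reachable from the anomalous set, i.e. the
    candidates worth replaying with DIFT; everything else is pruned."""
    seen, queue = set(anomalous), deque(anomalous)
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Toy provenance graph: bash spawned curl, curl wrote /tmp/payload,
# /tmp/payload was executed as proc_x; an unrelated editor also ran.
graph = {
    "bash": ["curl"],
    "curl": ["/tmp/payload"],
    "/tmp/payload": ["proc_x"],
    "editor": ["~/notes.txt"],
}
print(reachable_processes(graph, ["curl"]))  # the editor is pruned out
```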
The deployment of Voice over Internet Protocol (VoIP) in place of traditional communication facilities has helped achieve a large reduction in operating costs and has enabled the adoption of next-generation IP-based communication services. At the same time, cyber criminals have started intercepting communications in this environment, creating challenges for law enforcement in any country. In this context, we propose a framework for the forensic analysis of VoIP traffic over the network. This includes identifying and analyzing network patterns of VoIP-SIP, which is used to set up a communication session, and VoIP-RTP, which is used to carry the data. Our network forensic investigation framework also focuses on developing an efficient packet reordering and reconstruction algorithm for tracing the malicious users involved in a conversation. The proposed framework is based on network forensics and can be used for content-level observation of VoIP and to regenerate the original malicious content or session between malicious users for their prosecution in court.
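A simple version of the packet-reordering step can be illustrated as sorting captured RTP packets by their 16-bit sequence numbers while compensating for counter wraparound. The sketch below assumes packets have already been parsed into (sequence number, payload) pairs; it is a minimal illustration, not the paper's actual algorithm.

```python
def reorder_rtp(packets):
    """Reorder captured RTP packets by sequence number.
    packets: list of (seq, payload), where seq is the 16-bit RTP
    sequence number and may wrap around at 65536.
    Returns the payloads in transmission order."""
    if not packets:
        return []
    base = packets[0][0]
    # Unwrap the 16-bit counter so packets sent after a wraparound
    # still sort after the ones sent before it.
    def unwrapped(seq):
        delta = (seq - base) & 0xFFFF
        return delta if delta < 0x8000 else delta - 0x10000
    ordered = sorted(packets, key=lambda p: unwrapped(p[0]))
    return [payload for _, payload in ordered]

# Capture arrives out of order across a wraparound: 65534, 65535, 0, 1
capture = [(0, b"C"), (65534, b"A"), (1, b"D"), (65535, b"B")]
print(b"".join(reorder_rtp(capture)))  # b'ABCD'
```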
During the last years, criminals have become aware of how the digital evidence that leads them to court and jail is collected and analyzed. Hence, they have started to develop anti-forensic techniques to evade, hamper, or nullify that evidence. Nowadays, these techniques are broadly used by criminals, leaving forensic analysis in a state of decay. To defeat these techniques, forensic analysts need to first identify them and then somehow mitigate their effects. In this paper, we review anti-forensic techniques and propose a new taxonomy that relates each technique to the phase of the forensic process it mainly affects. Furthermore, we introduce mitigation techniques for these anti-forensic techniques, considering the chance of overcoming each anti-forensic technique and the difficulty of applying the mitigation.
With the spectacular increase in online activities such as e-transactions, security and privacy issues have never been more significant. Large numbers of database security breaches occur on a daily basis. There is therefore a crucial need in database forensics to make several redundant copies of the sensitive data found in database server artifacts, audit logs, caches, table storage, etc. for analysis purposes. A large volume of metadata is available in the database infrastructure for investigation purposes, but most of the effort lies in retrieving and analyzing that information from computing systems. In this paper we therefore focus on the significance of metadata in database forensics. We propose a system to perform forensic analysis of a database by generating its metadata file independently of the DBMS used. We also aim to generate digital evidence against criminals for presentation in a court of law, in the form of who, when, why, what, how, and where the fraudulent transaction occurred. We thus present a system that detects major database attacks as well as anti-forensic attacks by developing an open-source database forensics tool. Finally, we point out the challenges in the field and how these challenges can be treated as opportunities to stimulate the area of database forensics.
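A minimal sketch of producing a DBMS-independent metadata file is shown below. It works against any DB-API connection and serializes the rows of a catalog query to JSON for later comparison; the catalog query itself is DBMS-specific (for example, information_schema views on MySQL/PostgreSQL, sqlite_master on SQLite), and the toy schema is an assumption made for illustration, not the paper's tool.

```python
import json
import sqlite3

def dump_metadata(conn, catalog_query, out_path):
    """Run a catalog query on a DB-API connection and serialize the
    rows to a JSON metadata file for later forensic comparison."""
    cur = conn.execute(catalog_query)
    columns = [d[0] for d in cur.description]
    rows = [dict(zip(columns, row)) for row in cur.fetchall()]
    with open(out_path, "w") as f:
        json.dump(rows, f, indent=2, default=str)
    return rows

# Toy example against an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER, owner TEXT)")
print(dump_metadata(conn,
                    "SELECT name, type, sql FROM sqlite_master",
                    "db_metadata.json"))
```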
Techniques for network security analysis have historically focused on the actions of the network hosts. Outside of forensic analysis, little has been done to detect or predict malicious or infected nodes strictly based on their association with other known malicious nodes. This methodology is highly prevalent in the graph analytics world, however, and is referred to as community detection. In this paper, we present a method for detecting malicious and infected nodes on both monitored networks and the external Internet. We leverage prior community detection and graphical modeling work by propagating threat probabilities across network nodes, given an initial set of known malicious nodes. We enhance prior work by employing constraints that remove the adverse effect of cyclic propagation that is a byproduct of current methods. We demonstrate the effectiveness of probabilistic threat propagation on the tasks of detecting botnets and malicious web destinations.
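The propagation idea can be sketched as an iterative update in which each node's threat probability is derived from its neighbors' probabilities, with a node's own contribution excluded from the message sent back to it (the constraint against cyclic reinforcement mentioned above). The code below is an illustrative simplification of probabilistic threat propagation, not the authors' exact formulation; the edge weight and toy graph are assumptions.

```python
def propagate_threat(adj, seeds, weight=0.5, iterations=20):
    """Illustrative probabilistic threat propagation.
    adj: dict node -> set of neighboring nodes (undirected graph).
    seeds: nodes known to be malicious (probability fixed at 1.0).
    Each round, node i sends neighbor j a message computed from all of
    i's other inputs, so j's own influence never cycles back to j."""
    nodes = set(adj) | {n for nbrs in adj.values() for n in nbrs}
    msg = {(i, j): 0.0 for i in nodes for j in adj.get(i, ())}
    for _ in range(iterations):
        new_msg = {}
        for (i, j) in msg:
            if i in seeds:
                new_msg[(i, j)] = 1.0
                continue
            # Combine the messages into i from everyone except j.
            p = 1.0
            for k in adj.get(i, ()):
                if k != j and (k, i) in msg:
                    p *= (1.0 - weight * msg[(k, i)])
            new_msg[(i, j)] = 1.0 - p
        msg = new_msg
    threat = {}
    for n in nodes:
        if n in seeds:
            threat[n] = 1.0
            continue
        p = 1.0
        for k in adj.get(n, ()):
            p *= (1.0 - weight * msg.get((k, n), 0.0))
        threat[n] = 1.0 - p
    return threat

# Toy graph: "c2.example" is a known-bad node.
graph = {
    "c2.example": {"hostA", "hostB"},
    "hostA": {"c2.example", "hostC"},
    "hostB": {"c2.example"},
    "hostC": {"hostA"},
}
print(propagate_threat(graph, seeds={"c2.example"}))
```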
Network traffic is a rich source of information for security monitoring. However, the increasing volume of data to be processed raises issues and makes holistic analysis of network traffic difficult. In this paper we propose a solution to cope with the tremendous amount of data to analyse for security monitoring purposes. We introduce an architecture dedicated to security monitoring of local enterprise networks. The application domain of such a system is mainly network intrusion detection and prevention, but it can be used for forensic analysis as well. The architecture integrates two systems, one dedicated to scalable distributed data storage and management and the other dedicated to data exploitation. DNS data, NetFlow records, HTTP traffic, and honeypot data are mined and correlated in a distributed system that leverages state-of-the-art big data solutions. Data correlation schemes are proposed and their performance is evaluated on several well-known big data frameworks, including Hadoop and Spark.
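A minimal sketch of one such correlation, joining DNS resolutions with NetFlow records on the resolved address inside Spark, is shown below. The column names and the toy in-memory data are assumptions made for illustration, not the schemas or correlation schemes used by the authors.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dns-netflow-correlation").getOrCreate()

# Toy stand-ins for DNS and NetFlow feeds; in the architecture described
# above these would be read from distributed storage instead.
dns = spark.createDataFrame(
    [("10.0.0.5", "evil.example", "203.0.113.7"),
     ("10.0.0.8", "news.example", "198.51.100.2")],
    ["client_ip", "query", "resolved_ip"])

flows = spark.createDataFrame(
    [("10.0.0.5", "203.0.113.7", 443, 18_000_000),
     ("10.0.0.9", "192.0.2.1", 80, 1_200)],
    ["src_ip", "dst_ip", "dst_port", "bytes"])

# Correlate: which flows go to addresses the same host just resolved,
# and which domain name triggered them?
correlated = flows.join(
    dns,
    (flows.src_ip == dns.client_ip) & (flows.dst_ip == dns.resolved_ip),
    "inner")

correlated.select("src_ip", "query", "dst_ip", "dst_port", "bytes").show()
```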