
Filters: Keyword is Manuals
2020-03-09
Calzavara, Stefano, Conti, Mauro, Focardi, Riccardo, Rabitti, Alvise, Tolomei, Gabriele.  2019.  Mitch: A Machine Learning Approach to the Black-Box Detection of CSRF Vulnerabilities. 2019 IEEE European Symposium on Security and Privacy (EuroS&P). :528–543.

Cross-Site Request Forgery (CSRF) is one of the oldest and simplest attacks on the Web, yet it is still effective on many websites and can lead to severe consequences, such as economic losses and account takeovers. Unfortunately, the tools and techniques proposed so far to identify CSRF vulnerabilities either need manual review by human experts or assume that the source code of the web application is available. In this paper we present Mitch, the first machine learning solution for the black-box detection of CSRF vulnerabilities. At the core of Mitch is an automated detector of sensitive HTTP requests, i.e., requests that require protection against CSRF for security reasons. We trained the detector using supervised learning techniques on a dataset of 5,828 HTTP requests collected on popular websites, which we make available to other security researchers. Our solution outperforms existing detection heuristics proposed in the literature, allowing us to identify 35 new CSRF vulnerabilities on 20 major websites and 3 previously undetected CSRF vulnerabilities in production software already analyzed using a state-of-the-art tool.
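
The abstract describes Mitch's core component, a supervised classifier over HTTP requests. A minimal sketch of that idea follows; the features, model choice, and tiny training set are illustrative assumptions, not the paper's actual design.

```python
# Minimal sketch of Mitch's core idea: a supervised classifier that labels
# HTTP requests as "sensitive" (i.e., needing CSRF protection) or not.
# Features, model, and data below are made up for illustration.
from sklearn.ensemble import RandomForestClassifier

def featurize(req):
    """Map an HTTP request (as a dict) to numeric features."""
    params = req.get("params", {})
    return [
        1 if req["method"] in ("POST", "PUT", "DELETE") else 0,  # state-changing verb?
        len(params),                                             # parameter count
        1 if any("token" in k for k in params) else 0,           # anti-CSRF token present?
        1 if any(w in req["url"] for w in ("delete", "update", "pay")) else 0,
    ]

train = [  # hypothetical labeled requests: (request, is_sensitive)
    ({"method": "POST", "url": "/account/update", "params": {"email": "x"}}, 1),
    ({"method": "GET",  "url": "/search",         "params": {"q": "cats"}}, 0),
    ({"method": "POST", "url": "/transfer/pay",   "params": {"amount": "10"}}, 1),
    ({"method": "GET",  "url": "/home",           "params": {}}, 0),
]
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit([featurize(r) for r, _ in train], [y for _, y in train])

probe = {"method": "POST", "url": "/settings/delete", "params": {}}
print("sensitive" if clf.predict([featurize(probe)])[0] else "benign")
```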

2020-02-17
MacDermott, Áine, Lea, Stephen, Iqbal, Farkhund, Idowu, Ibrahim, Shah, Babar.  2019.  Forensic Analysis of Wearable Devices: Fitbit, Garmin and HETP Watches. 2019 10th IFIP International Conference on New Technologies, Mobility and Security (NTMS). :1–6.
Wearable technology has been on an exponential rise and shows no signs of slowing down. One category of wearable technology is fitness bands, which can reveal a user's activity levels and location data. Such information is just the beginning of a long trail of evidence fitness bands can store, which represents a huge opportunity for digital forensic practitioners. A review of recent work and research in this area suggests that no similar analysis has yet been performed on fitness bands, and in particular on the devices in this study: a Garmin Forerunner 110, a Fitbit Charge HR and a generic low-cost HETP fitness tracker. In this paper, we present our forensically sound analysis of these devices for possible digital evidence, identifying files of interest and location data on each device. Data accuracy and validity of the evidence are shown, as a test-run scenario wearing all of the devices allowed for comparative data analysis.

2019-02-14
Xu, Z., Shi, C., Cheng, C. C., Gong, N. Z., Guan, Y..  2018.  A Dynamic Taint Analysis Tool for Android App Forensics. 2018 IEEE Security and Privacy Workshops (SPW). :160-169.

The plethora of mobile apps introduces critical challenges for digital forensics practitioners, due to the diversity and the large number (millions) of mobile apps available to download from Google Play, the Apple App Store, and hundreds of other online app stores. Law enforcement investigators often find that seized mobile phone devices contain many popular and less-popular apps with interfaces in different languages and with different functionalities. Investigators cannot have sufficient expert knowledge about every single app, sometimes not even a basic understanding of what evidentiary data could be discoverable from the mobile devices being investigated. The existing literature in the digital forensics field shows that most such investigations still rely on the investigator's manual analysis using mobile forensic toolkits like Cellebrite and EnCase. The problem with such manual approaches is that there is no guarantee of the completeness of evidence discovery. Our goal is to develop an automated mobile app analysis tool that analyzes an app and discovers what types of forensic evidentiary data the app generates and where it stores them, locally on the mobile device or remotely on external third-party server(s). With the app analysis tool, we will build a database of mobile apps and, for each app, a list of app-generated evidence in terms of data types, locations (and/or sequences of locations) and data format/syntax. The outcome of this research will help digital forensic practitioners reduce the complexity of their case investigations and provide a better completeness guarantee of evidence discovery, thereby delivering timely and more complete investigative results and eventually reducing backlogs at crime labs. In this paper, we present the main technical approaches used to implement a dynamic taint analysis tool for Android app forensics. With the tool, we have analyzed 2,100 real-world Android apps. For each app, our tool produces the list of evidentiary data (e.g., GPS locations, device ID, contacts, browsing history, and some user inputs) that the app could have collected and stored on the device's local storage in the form of files or SQLite databases. We have evaluated our tool using both benchmark apps and real-world apps. Our results demonstrate the initial success of our tool in accurately discovering evidentiary data.
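
To make the idea of dynamic taint tracking concrete, here is a toy sketch of taint propagation from a source (the device ID) to a sink (local file storage). The real tool instruments the Android runtime rather than wrapping strings, and all names here are illustrative.

```python
# Toy dynamic taint tracker: tag sensitive sources and flag when a tainted
# value reaches a local-storage sink. Illustrative only; a real tool tracks
# taint inside the runtime, not via a str subclass.

class Tainted(str):
    """A string that carries a taint label naming its source."""
    def __new__(cls, value, source):
        obj = super().__new__(cls, value)
        obj.source = source
        return obj
    def __add__(self, other):   # propagate taint through concatenation
        return Tainted(str(self) + str(other), self.source)
    def __radd__(self, other):
        return Tainted(str(other) + str(self), self.source)

def get_device_id():            # taint source
    return Tainted("a1b2c3d4", source="device_id")

def write_local(path, data):    # taint sink (local file storage)
    if isinstance(data, Tainted):
        print(f"evidence: {data.source!r} flows into local file {path!r}")

record = "id=" + get_device_id() + ";ts=1700000000"
write_local("/data/data/com.example.app/files/log.txt", record)
```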

2018-03-05
Mfula, H., Nurminen, J. K..  2017.  Adaptive Root Cause Analysis for Self-Healing in 5G Networks. 2017 International Conference on High Performance Computing & Simulation (HPCS). :136–143.

Root cause analysis (RCA) is a common and recurring task performed by operators of cellular networks, done mainly to keep customers satisfied with the quality of offered services and to maximize return on investment (ROI) by minimizing and, where possible, eliminating the root causes of faults in cellular networks. Currently, the actual detection and diagnosis of faults or potential faults is still a manual and slow process, often carried out by network experts who manually analyze and correlate various pieces of network data, such as alarms, call traces, configuration management (CM) data and key performance indicator (KPI) data, in order to come up with the most probable root cause of a given network fault. In this paper, we propose an automated fault detection and diagnosis solution called adaptive root cause analysis (ARCA). The solution uses measurements and other network data together with Bayesian network theory to perform automated, evidence-based RCA. Compared to the current common practice, our solution is faster due to the automation of the entire RCA process. The solution is also cheaper because it needs few or no personnel to operate, and it improves efficiency through domain-knowledge reuse during adaptive learning. As it uses a probabilistic Bayesian classifier, it can work with incomplete data and handle large datasets with complex probability combinations. Experimental results from stratified synthesized data affirmatively validate the feasibility of using such a solution as a key part of self-healing (SH), especially in emerging self-organizing network (SON) based solutions in LTE Advanced (LTE-A) and 5G networks.
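
As a highly simplified illustration of evidence-based RCA with a Bayesian classifier, the sketch below ranks candidate root causes given a partial set of observed symptoms. The priors, likelihoods, and symptom names are invented; they are not ARCA's actual model.

```python
# Minimal Bayesian root-cause ranking: given observed symptoms (alarms and
# KPI flags), compute the posterior over candidate causes. All numbers are
# made up for illustration.
priors = {"hw_fault": 0.2, "misconfig": 0.3, "congestion": 0.5}
likelihood = {  # P(symptom present | cause)
    "hw_fault":   {"alarm_link_down": 0.9,  "kpi_drop_rate": 0.6, "kpi_latency": 0.3},
    "misconfig":  {"alarm_link_down": 0.2,  "kpi_drop_rate": 0.7, "kpi_latency": 0.4},
    "congestion": {"alarm_link_down": 0.05, "kpi_drop_rate": 0.5, "kpi_latency": 0.9},
}

def posterior(observed):
    """observed: dict symptom -> bool. Unobserved symptoms are simply
    skipped, which is how a naive Bayes model tolerates incomplete data."""
    scores = {}
    for cause, prior in priors.items():
        p = prior
        for symptom, present in observed.items():
            ps = likelihood[cause][symptom]
            p *= ps if present else (1 - ps)
        scores[cause] = p
    total = sum(scores.values())
    return {c: p / total for c, p in scores.items()}

print(posterior({"alarm_link_down": True, "kpi_latency": False}))
```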

2018-02-21
Jiang, Z., Zhou, A., Liu, L., Jia, P., Liu, L., Zuo, Z..  2017.  CrackDex: Universal and automatic DEX extraction method. 2017 7th IEEE International Conference on Electronics Information and Emergency Communication (ICEIEC). :53–60.

With Android application packing technology evolving, there are more and more ways to harden apps, and manually unpacking them becomes more difficult as the time needed for analysis increases exponentially. Packing technology was originally designed to prevent apps from being easily decompiled, tampered with and repacked, but unfortunately many malicious apps now use packing services to protect themselves. At present, most antivirus software focuses on apps that are unpacked, which means that malicious apps which apply a packing service can easily escape from a lot of antivirus software. Therefore, we should not only emphasize the importance of packing but also concentrate on unpacking technology. Only by doing this can we protect normal apps and, at the same time, not miss any harmful apps. In this paper, we first systematically study a number of DEX packing and unpacking technologies, then propose and develop a universal unpacking system, named CrackDex, which is capable of extracting the original DEX file from a packed app. We propose three core technologies: simulation execution, DEX reassembling, and DEX restoration, to get the unpacked DEX file. CrackDex is part of the Dalvik virtual machine; it monitors the execution of functions to locate the unpacking point in the portable interpreter, then launches the simulation execution, collects the data of the original DEX file through the corresponding structure pointers, and finally fulfills the unpacking process by reassembling the collected data. The results of our experiments show that CrackDex can effectively unpack apps protected by packing services in a universal manner, without any other knowledge of the packing service.
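
CrackDex itself works inside the Dalvik interpreter; purely to illustrate one ingredient of DEX recovery, here is a sketch that carves DEX images out of a memory dump using the documented DEX magic bytes and the file_size field at header offset 32. The dump file name is hypothetical, and real unpackers hook the runtime rather than carving offline.

```python
# Sketch: carve in-memory DEX images out of a process memory dump by header.
# DEX magic and the little-endian file_size field at offset 32 are part of
# the documented DEX format; everything else here is illustrative.
import struct

DEX_MAGIC = b"dex\n035\x00"  # magic bytes for DEX format version 035
HEADER_SIZE = 0x70           # a DEX header is 112 bytes

def carve_dex(dump: bytes):
    images, pos = [], 0
    while (pos := dump.find(DEX_MAGIC, pos)) != -1:
        if pos + HEADER_SIZE > len(dump):
            break  # truncated header at end of dump
        (size,) = struct.unpack_from("<I", dump, pos + 32)  # file_size field
        if HEADER_SIZE <= size <= len(dump) - pos:
            images.append(dump[pos:pos + size])
        pos += 1
    return images

with open("app_memory.dump", "rb") as f:  # hypothetical dump of a packed app
    for i, dex in enumerate(carve_dex(f.read())):
        with open(f"carved_{i}.dex", "wb") as out:
            out.write(dex)
```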

2017-03-08
Kesiman, M. W. A., Prum, S., Sunarya, I. M. G., Burie, J. C., Ogier, J. M..  2015.  An analysis of ground truth binarized image variability of palm leaf manuscripts. 2015 International Conference on Image Processing Theory, Tools and Applications (IPTA). :229–233.

As a very valuable cultural heritage, palm leaf manuscripts offer a new challenge for document analysis systems due to the specific characteristics of the physical support of the manuscript. With the aim of finding an optimal binarization method for palm leaf manuscript images, creating a new ground truth binarized image is a necessary step in the document analysis of palm leaf manuscripts. However, because of the human intervention in the ground truthing process, the effect of subjectivity on the construction of ground truth binarized images must be analysed and reported. In this paper, we present an experiment under real conditions to analyse the existence of human subjectivity in the construction of ground truth binarized images of palm leaf manuscripts and to measure the ground truth variability quantitatively with several binarization evaluation metrics.
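
One way to quantify the ground-truth variability the paper measures is to compare two annotators' binarized images with a standard binarization evaluation metric such as pixel-wise F-measure. A minimal sketch, assuming boolean ink masks; the random arrays merely stand in for real annotations:

```python
# Sketch: inter-annotator agreement on a binarized image, measured with
# pixel-wise F-measure (a standard binarization evaluation metric).
import numpy as np

def f_measure(pred, ref):
    tp = np.logical_and(pred, ref).sum()
    precision = tp / pred.sum() if pred.sum() else 0.0
    recall = tp / ref.sum() if ref.sum() else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

rng = np.random.default_rng(0)
annotator_a = rng.random((100, 100)) > 0.7  # stand-in ground truth A (True = ink)
annotator_b = rng.random((100, 100)) > 0.7  # stand-in ground truth B
print(f"inter-annotator F-measure: {f_measure(annotator_a, annotator_b):.3f}")
```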

2017-02-27
Mulcahy, J. J., Huang, S..  2015.  An autonomic approach to extend the business value of a legacy order fulfillment system. 2015 Annual IEEE Systems Conference (SysCon) Proceedings. :595–600.

In the modern retailing industry, many enterprise resource planning (ERP) systems are considered legacy software systems that have become too expensive to replace and too costly to re-engineer. Countering the need to maintain and extend the business value of these systems is the need to do so in the simplest, cheapest, and least risky manner available. There are a number of approaches used by software engineers to mitigate the negative impact of evolving legacy systems, including leveraging service-oriented architecture to automate manual tasks previously performed by humans. A relatively recent approach in software engineering focuses upon implementing self-managing attributes, or “autonomic” behavior, in software applications and systems of applications in order to reduce or eliminate the need for human monitoring and intervention. Entire systems can be autonomic, or they can be hybrid systems that implement one or more autonomic components to communicate with external systems. In this paper, we describe a commercial development project in which a legacy multi-channel commerce enterprise resource planning system was extended with a service-oriented architecture and an autonomic control loop design to communicate with an external third-party security screening provider. The goal was to reduce the cost of the human labor necessary to screen an ever-increasing volume of orders and to reduce the potential for human error in the screening process. The solution automated what was previously an inefficient, incomplete, and potentially error-prone manual process by inserting a new autonomic software component into the existing order fulfillment workflow.
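
A schematic sketch of the kind of autonomic (MAPE-style) control loop described here: monitor an order queue, analyze via a stand-in for the external screening provider, plan an action, and execute it without human intervention. The order fields, screening rule, and action names are invented placeholders.

```python
# Schematic monitor-analyze-plan-execute loop for automated order screening.
# All data and decision logic are placeholders, not the paper's system.

def monitor(queue):
    return queue.pop(0) if queue else None

def analyze(order):
    # Placeholder for the call to the third-party security screening service.
    return "flag" if order["country"] in {"XX"} else "clear"

def plan(verdict):
    return "hold_for_review" if verdict == "flag" else "release_to_fulfillment"

def execute(order, action):
    print(f"order {order['id']}: {action}")

orders = [{"id": 1, "country": "US"}, {"id": 2, "country": "XX"}]
while (order := monitor(orders)) is not None:
    execute(order, plan(analyze(order)))
```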

2015-05-06
Zhang, M., Bingham, J.D., Erickson, J., Sorin, D.J..  2014.  PVCoherence: Designing flat coherence protocols for scalable verification. High Performance Computer Architecture (HPCA), 2014 IEEE 20th International Symposium on. :392-403.

The goal of this work is to design many-core cache coherence protocols that can be verified with state-of-the-art automated verification methodologies. In particular, we focus on flat (non-hierarchical) coherence protocols, and we use a mostly-automated methodology based on parametric verification (PV). We propose several design guidelines that architects should follow if they want to design protocols that can be parametrically verified. We experimentally evaluate the performance, storage overhead, and scalability of a protocol verified with PV compared to a highly optimized protocol that cannot be verified with PV.
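
Parametric verification proves a protocol correct for any number of caches; the toy check below is only fixed-N explicit-state exploration of an idealized, atomic-bus MSI-like protocol, included to illustrate the single-writer invariant such verification targets. The protocol model is an assumption, not one of the paper's protocols.

```python
# Toy explicit-state check of the single-writer coherence invariant for a
# fixed number of caches (NOT parametric verification, which covers any N).
from collections import deque

def successors(config):
    """Yield configurations reachable when some cache issues a read or write."""
    n = len(config)
    for i in range(n):
        # Read: requester moves to S; any M holder downgrades to S.
        yield tuple("S" if j == i else ("S" if s == "M" else s)
                    for j, s in enumerate(config))
        # Write: requester moves to M; all other caches invalidate.
        yield tuple("M" if j == i else "I" for j in range(n))

def coherent(config):
    # At most one writer, and never a writer alongside readers.
    return config.count("M") <= 1 and not ("M" in config and "S" in config)

def check(n=3):
    init = tuple("I" for _ in range(n))
    seen, frontier = {init}, deque([init])
    while frontier:
        config = frontier.popleft()
        assert coherent(config), f"coherence violated in {config}"
        for nxt in successors(config):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    print(f"explored {len(seen)} states for n={n}; invariant holds")

check()
```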

2015-05-05
Bhandari, P., Gujral, M.S..  2014.  Ontology based approach for perception of network security state. Engineering and Computational Sciences (RAECS), 2014 Recent Advances in. :1-6.

This paper presents an ontological approach to perceiving the current security status of a network. A computer network is a dynamic entity whose state changes with the introduction of new services, the installation of a new network operating system, the addition of new hardware components, the creation of new user roles, and attacks by various actors instigated by aggressors. The various security mechanisms employed in the network do not give a complete picture of the security of the whole network. In this paper we propose a taxonomy and an ontology which may be used to infer the impact of various events happening in the network on its security status. Vulnerability, Network and Attack are the main taxonomy classes in the ontology. The Vulnerability class describes various types of vulnerabilities in the network, which may be in hardware components like storage devices, computing devices or network devices. The Attack class has many subclasses: the Actor class, the entity executing the attack; the Goal class, which describes the goal of the attack; the Attack mechanism class, which defines the attack methodology; the Scope class, which describes the size and utility of the target; and the Automation level class, which describes the automation level of the attack. The Network class has network operating system, users, roles, hardware components and services as its subclasses. Evaluation of the security status of the network is required for network security situational awareness, and based on this taxonomy an ontology has been developed to perceive the network security status. Finally, a framework which uses this ontology as its knowledge base is proposed.
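
A minimal encoding of the paper's top-level taxonomy as an RDFS class hierarchy, sketched with the rdflib library; the namespace URI and the choice of rdflib are assumptions, since the paper does not specify its tooling.

```python
# Sketch: the abstract's taxonomy (Vulnerability, Network, Attack and their
# subclasses) encoded as RDFS classes. Namespace and tooling are assumed.
from rdflib import Graph, Namespace, RDF, RDFS

NETSEC = Namespace("http://example.org/netsec#")  # hypothetical namespace
g = Graph()

for cls in (NETSEC.Vulnerability, NETSEC.Network, NETSEC.Attack):
    g.add((cls, RDF.type, RDFS.Class))            # main taxonomy classes

attack_subs = (NETSEC.Actor, NETSEC.Goal, NETSEC.AttackMechanism,
               NETSEC.Scope, NETSEC.AutomationLevel)
network_subs = (NETSEC.NetworkOperatingSystem, NETSEC.User, NETSEC.Role,
                NETSEC.HardwareComponent, NETSEC.Service)
for sub in attack_subs:
    g.add((sub, RDFS.subClassOf, NETSEC.Attack))
for sub in network_subs:
    g.add((sub, RDFS.subClassOf, NETSEC.Network))

# Query the knowledge base: every recorded facet of the Attack class.
for s in sorted(g.subjects(RDFS.subClassOf, NETSEC.Attack)):
    print(s)
```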

Coelho Martins da Fonseca, J.C., Amorim Vieira, M.P..  2014.  A Practical Experience on the Impact of Plugins in Web Security. Reliable Distributed Systems (SRDS), 2014 IEEE 33rd International Symposium on. :21-30.

In an attempt to support customization, many web applications allow the integration of third-party server-side plugins that offer diverse functionality, but also open an additional door for security vulnerabilities. In this paper we study the use of static code analysis tools to detect vulnerabilities in the plugins of the web application. The goal is twofold: 1) to study the effectiveness of static analysis on the detection of web application plugin vulnerabilities, and 2) to understand the potential impact of those plugins on the security of the core web application. We use two static code analyzers to evaluate a large number of plugins for a widely used Content Management System. Results show that many plugins that are currently deployed worldwide have dangerous Cross Site Scripting and SQL Injection vulnerabilities that can be easily exploited, and that even widely used static analysis tools may present disappointing vulnerability coverage and false positive rates.

Gupta, M.K., Govil, M.C., Singh, G..  2014.  Static analysis approaches to detect SQL injection and cross site scripting vulnerabilities in web applications: A survey. Recent Advances and Innovations in Engineering (ICRAIE), 2014. :1-5.

Dependence on web applications has been increasing very rapidly in recent times for social communication, health, financial transactions and many other purposes. Unfortunately, the presence of security weaknesses in web applications allows malicious users to exploit various security vulnerabilities and become the reason for their failure. Currently, SQL Injection (SQLI) and Cross-Site Scripting (XSS) vulnerabilities are the most dangerous security vulnerabilities exploited in various popular web applications, e.g., eBay, Google, Facebook and Twitter. Research on defensive programming, vulnerability detection and attack prevention techniques has been quite intensive in the past decade. Defensive programming is a set of coding guidelines for developing secure applications, but developers mostly do not follow security guidelines and repeat the same types of programming mistakes in their code. Attack prevention techniques protect applications from attack during their execution in the production environment, but accurate detection of SQLI and XSS vulnerabilities in the coding phase of the software development life cycle remains difficult. This paper proposes a classification of software security approaches used to develop secure software in the various phases of the software development life cycle. It also presents a survey of static analysis based approaches for detecting SQL Injection and cross-site scripting vulnerabilities in the source code of web applications. The aim of these approaches is to identify weaknesses in source code before they can be exploited in a production environment. This paper should help researchers to identify future directions for securing legacy web applications in the early phases of the software development life cycle.
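
As a toy instance of the source-to-sink reasoning these static approaches build on, the sketch below flags a value flowing from an untrusted input into a SQL sink in straight-line Python code. The source and sink names are illustrative, and real analyzers track flow through a full control-flow graph rather than a linear statement list.

```python
# Minimal static source-to-sink check: taint variables assigned from an
# untrusted source and warn when a tainted name reaches a SQL sink.
import ast

SOURCES = {"request_param"}  # hypothetical taint-source functions
SINKS = {"execute"}          # method sinks, e.g. cursor.execute

code = """
user = request_param("id")
query = "SELECT * FROM users WHERE id = " + user
cursor.execute(query)
"""

def names_in(node):
    return {n.id for n in ast.walk(node) if isinstance(n, ast.Name)}

tainted = set()
for stmt in ast.parse(code).body:      # straight-line code only;
    if isinstance(stmt, ast.Assign):   # real tools build a full CFG
        calls = {c.func.id for c in ast.walk(stmt.value)
                 if isinstance(c, ast.Call) and isinstance(c.func, ast.Name)}
        if calls & SOURCES or names_in(stmt.value) & tainted:
            tainted |= {t.id for t in stmt.targets if isinstance(t, ast.Name)}
    for node in ast.walk(stmt):
        if (isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute)
                and node.func.attr in SINKS):
            if any(names_in(a) & tainted for a in node.args):
                print(f"possible SQL injection at line {node.lineno}")
```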

2015-05-04
Caso, J.S..  2014.  The rules of engagement for cyber-warfare and the Tallinn Manual: A case study. Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), 2014 IEEE 4th Annual International Conference on. :252-257.

Documents such as the Geneva Conventions (1949) and Hague Conventions (1899 and 1907), which clearly outlined the rules of engagement for warfare, find themselves challenged by the presence of a new arena: cyber. Considering the potential nature of these offenses, operations taking place in the realm of cyber cannot simply be generalized as “cyber-warfare,” as they may also be acts of cyber-espionage, cyber-terrorism, cyber-sabotage, etc. Cyber-attacks, such as those on Estonia in 2007, have begun to test the limits of NATO's Article 5 and the UN Charter's Article 2(4) against the use of force. What defines “force” as it relates to cyber, and what kind of response is merited in the case of uncertainty regarding attribution? In 2009, NATO's Cooperative Cyber Defence Centre of Excellence commissioned a group of experts to publish a study on the application of international law to cyber-warfare. This document, the Tallinn Manual, was published in 2013 as a non-binding exercise to stimulate discussion on the codification of international law on the subject. After analysis, this paper concludes that the Tallinn Manual classifies the 2010 Stuxnet attack on Iran's nuclear program as an illegal act of force. The purpose of this paper is the following: (1) to analyze the historical and technical background of cyber-warfare, (2) to evaluate the Tallinn Manual as it relates to the justification of cyber-warfare, and (3) to examine the applicability of the Tallinn Manual in a case study of a historical cyber-attack.

Watney, M..  2014.  Challenges pertaining to cyber war under international law. Cyber Security, Cyber Warfare and Digital Forensic (CyberSec), 2014 Third International Conference on. :1-5.

State-level intrusion into the cyberspace of another country seriously threatens a state's peace and security. Consequently, many types of cyberspace intrusion are being referred to as cyber war, with scant regard for the legal position under international law. This is but one of the challenges facing state-level cyber intrusion. The current rules of international law prohibit certain types of intrusion; however, international law defines neither which intrusions fall within the prohibited category nor when the threshold of intrusion is surpassed. International lawyers have to determine the type of intrusion and the threshold on a case-by-case basis. The Tallinn Manual may serve as a guideline in this assessment, but determining the type of intrusion and attributing it to a specific state is not easily done. The current rules of international law do not prohibit all intrusions, including some that at the state level may be highly invasive and destructive. Unrestrained cyber intrusion may result in cyberspace becoming a battle space in which states with strong cyber abilities dominate, breeding resentment and fear among other states. The latter may be prevented at the international level by involving all states in cyberspace governance in an equal and transparent manner.