Bibliography
Password auditing can enhance the cyber situational awareness of defenders, e.g., cyber security/IT professionals, with regard to the strength of the text-based authentication mechanisms utilized in an organization. Auditing results can proactively indicate whether weak passwords exist in an organization, reducing the risk of compromise. Password cracking is a typical and time-consuming way to perform password auditing. Given that defenders perform password auditing within a specific evaluation timeframe, the cracking process needs to be optimized to yield useful results. Existing password cracking tools do not provide holistic features to optimize the process. Therefore, the need arises to build new password auditing toolkits to assist defenders in achieving their task in an effective and efficient way. Moreover, to maximize the benefits of password auditing, a security policy should be utilized. Current efforts focus on the specification of password security policies, providing rules on how to construct passwords. This work proposes the functionality that should be supported by next-generation password auditing toolkits and provides guidelines to drive the specification of a relevant password auditing policy.
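As an illustration of the kind of time-budgeted auditing such a toolkit would need to support, the sketch below runs a dictionary check against a set of password hashes until an evaluation deadline expires. It assumes unsalted SHA-256 hashes purely for brevity; real audits target the organization's actual hash formats and dedicated cracking tools.

```python
# Minimal sketch of a time-budgeted dictionary audit (illustrative only).
import hashlib
import time

def audit_passwords(hash_set, wordlist, time_budget_s=60.0):
    """Return the subset of hashes cracked before the budget expires."""
    cracked = {}
    deadline = time.monotonic() + time_budget_s
    for candidate in wordlist:
        if time.monotonic() > deadline:
            break  # evaluation timeframe exhausted
        digest = hashlib.sha256(candidate.encode()).hexdigest()
        if digest in hash_set:
            cracked[digest] = candidate
    return cracked

if __name__ == "__main__":
    weak = hashlib.sha256(b"password123").hexdigest()
    found = audit_passwords({weak}, ["letmein", "password123", "S3cure!"], 5.0)
    print(f"{len(found)} weak password(s) found")
```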
While advances in cyber-security defensive mechanisms have substantially prevented malware from penetrating organizational Information Systems (IS) networks, organizational users have found themselves vulnerable to threats emanating from Advanced Persistent Threat (APT) vectors, mostly in the form of spear phishing. In this respect, the question of how an organizational user can differentiate between a genuine communication and a similar-looking fraudulent communication in an email/APT threat vector remains a dilemma. Identifying and evaluating the APT vector attributes and assigning relative weights to them can therefore assist the user in making a correct decision when confronted with a scenario that may be genuine or a malicious APT vector. To this end, we propose an APT Decision Matrix model which can be used as a lens to build multiple APT threat vector scenarios and to identify threat attributes, and their weights, which can lead to systems compromise.
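A minimal sketch of decision-matrix style scoring, using hypothetical attribute names and weights (the paper's actual attributes and weighting may differ): each observed attribute contributes its weight to an overall suspicion score.

```python
# Illustrative weighted-attribute scoring; names and weights are assumptions.
WEIGHTS = {
    "sender_domain_mismatch": 0.30,
    "urgent_language": 0.15,
    "unexpected_attachment": 0.25,
    "link_target_obfuscated": 0.20,
    "requests_credentials": 0.10,
}

def apt_score(observed: dict) -> float:
    """Combine observed binary attributes into a weighted suspicion score in [0, 1]."""
    return sum(WEIGHTS[attr] for attr, present in observed.items() if present)

scenario = {
    "sender_domain_mismatch": True,
    "urgent_language": True,
    "unexpected_attachment": False,
    "link_target_obfuscated": True,
    "requests_credentials": False,
}
print(f"suspicion score: {apt_score(scenario):.2f}")  # 0.65 for this scenario
```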
Among the threats to the information systems of state institutions, enterprises, and financial organizations, those originating from organized criminal groups that specialize in obtaining unauthorized access to computer information protected by law are of particular importance. Criminal groups often possess a material base, including financial, technical, human, and other resources, that allows them to perform targeted attacks on information resources as covertly as possible. The principal feature of such targeted attacks is the use of software created or modified specifically for illegal use against particular organizations. Due to these circumstances, the detection of such attacks is quite difficult, and their prevention is even more complicated. In this regard, the task of identifying and analyzing such threats is very relevant. One effective way to address it is to implement a Honeypot system, which makes it possible to study the strategy and tactics of the attackers. This article proposes an original architecture for a Honeypot system designed to study targeted attacks on the information systems of criminogenic objects. The architectural design includes such basic elements as the functional component, the registrar of events occurring in the system, and the protector. The key features of the proposed Honeypot system are considered, and the functional purpose of its main components is described. The proposed system can find application in providing information security for institutions, organizations, and enterprises, and it can be used in the development of information security systems.
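As a rough illustration of the "registrar of events" element only, the sketch below is a low-interaction TCP listener that records every connection attempt; the port, decoy banner, and log path are assumptions, not the architecture proposed in the article.

```python
# Minimal low-interaction honeypot listener that registers connection events.
import socket
import datetime

def run_honeypot(host="0.0.0.0", port=2222, logfile="honeypot_events.log"):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn, open(logfile, "a") as log:
                now = datetime.datetime.now(datetime.timezone.utc).isoformat()
                log.write(f"{now} connection from {addr[0]}:{addr[1]}\n")  # register the event
                conn.sendall(b"SSH-2.0-OpenSSH_7.4\r\n")  # decoy banner

if __name__ == "__main__":
    run_honeypot()
```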
With an increase in targeted attacks such as advanced persistent threats (APTs), enterprise system defenders require comprehensive frameworks that allow them to collaborate and evaluate their defense systems against such attacks. MITRE has developed a framework which includes a database of different kill chains, tactics, techniques, and procedures that attackers employ to perform these attacks. In this work, we leverage natural language processing techniques to extract attacker actions from threat report documents generated by different organizations and automatically classify them into standardized tactics and techniques, while providing relevant mitigation advisories for each attack. A naïve method to achieve this is to train a machine learning model to predict labels that associate the reports with relevant categories. In practice, however, sufficient labeled data for model training is not always readily available, so training and test data come from different sources, resulting in bias. A naïve model would typically underperform in such a situation. We address this major challenge by incorporating an importance weighting scheme called bias correction that efficiently utilizes available labeled data, given threat reports whose categories are to be automatically predicted. We empirically evaluated our approach on 18,257 real-world threat reports generated between 2000 and 2018 by various computer security organizations to demonstrate its superiority by comparing its performance with an existing approach.
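A sketch of importance weighting under covariate shift, assuming scikit-learn and toy data (the paper's exact weighting scheme and features may differ): a domain discriminator estimates how target-like each labeled source report is, and those ratios re-weight the source reports when training the tactic/technique classifier.

```python
# Illustrative importance-weighted training; documents and labels are toy data.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

source_docs = ["spearphishing attachment delivered via email",
               "credential dumping with mimikatz"]
source_labels = ["initial-access", "credential-access"]
target_docs = ["adversary dumped lsass memory to harvest credentials"]

vec = TfidfVectorizer().fit(source_docs + target_docs)
Xs, Xt = vec.transform(source_docs), vec.transform(target_docs)

# 1) Train a domain discriminator: source (0) vs target (1).
X_dom = np.vstack([Xs.toarray(), Xt.toarray()])
y_dom = np.array([0] * Xs.shape[0] + [1] * Xt.shape[0])
dom = LogisticRegression().fit(X_dom, y_dom)

# 2) Importance weight for each source report: p(target|x) / p(source|x).
p = dom.predict_proba(Xs.toarray())
weights = p[:, 1] / np.clip(p[:, 0], 1e-6, None)

# 3) Train the tactic/technique classifier with bias-correcting weights.
clf = LogisticRegression().fit(Xs, source_labels, sample_weight=weights)
print(clf.predict(Xt))
```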
Application whitelisting software allows only examined and trusted applications to run on a user's machine. Since many malicious files do not require administrative privileges in order to be executed, whitelisting can be the only way to block the execution of unauthorized applications in an enterprise environment and thus prevent infection or data breach. In order to assess the current state of such solutions, access to three whitelisting solution licenses was obtained with the purpose of testing their effectiveness against different modern types of ransomware found in the wild. To conduct this study, a virtual environment was used with Windows Server and Enterprise editions installed. The objective of this paper is not to evaluate each vendor or to make recommendations for purchasing specific software, but rather to assess the ability of application control solutions to block the execution of ransomware files, as well as to assess the potential for future research. The results of the research show the promise and effectiveness of whitelisting solutions.
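The core allowlisting idea can be illustrated with a small sketch: execution is permitted only if a file's SHA-256 digest appears in a pre-approved set. The digest set is a placeholder; commercial solutions also use publishers, signatures, and path rules.

```python
# Minimal hash-based allowlist check (illustrative only).
import hashlib

APPROVED_SHA256 = {
    # digests of examined and trusted applications would be enrolled here
}

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_allowed(path: str) -> bool:
    return sha256_of(path) in APPROVED_SHA256

# Example: an unknown binary (e.g., a ransomware dropper) would be denied.
print(is_allowed(__file__))  # False unless this file's digest is enrolled
```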
Organizations are exposed to various cyber-attacks. When a component is exploited, the overall computed damage is impacted by the number of components the network includes. This work focuses on estimating the Target Distribution characteristic of an attacked network. According to existing security assessment models, Target Distribution is assessed using ordinal values based on users' intuitive knowledge. This work aims to define a formula which enables quantitatively measuring the distribution of attacked components. The proposed formula is based on the real-time configuration of the system. Using the proposed measure, firms can quantify damages, allocate appropriate budgets to actual risks, and build their configuration while taking into consideration the risks impacted by component distribution. The formula is demonstrated as part of a security continuous monitoring system.
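Since the paper's formula is not reproduced here, the following sketch shows one plausible way to quantify distribution from a real-time configuration snapshot: the normalized Shannon entropy of a component's instance counts across hosts, yielding a value in [0, 1].

```python
# Illustrative distribution measure; not the paper's actual formula.
import math

def target_distribution(instances_per_host: dict) -> float:
    """Return a value in [0, 1]; higher means the component is spread more evenly."""
    counts = [c for c in instances_per_host.values() if c > 0]
    total = sum(counts)
    if total == 0 or len(counts) == 1:
        return 0.0
    entropy = -sum((c / total) * math.log2(c / total) for c in counts)
    return entropy / math.log2(len(counts))  # normalize by maximum entropy

# Real-time configuration snapshot: the component runs on three of four hosts.
print(round(target_distribution({"hostA": 4, "hostB": 1, "hostC": 1, "hostD": 0}), 3))
```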
Malware has become sophisticated, and organizations do not have a Plan B when standard lines of defense fail. These failures have devastating consequences for organizations, such as sensitive information being exfiltrated. A promising avenue for improving the effectiveness of behavioral-based malware detectors is to combine fast (usually not highly accurate) traditional machine learning (ML) detectors with high-accuracy, but time-consuming, deep learning (DL) models. The main idea is to place software receiving borderline classifications by traditional ML methods in an environment where uncertainty is added while the software is analyzed by the time-consuming DL models. The goal of the uncertainty is to rate-limit the actions of potential malware during deep analysis. In this paper, we describe Chameleon, a Linux-based framework that implements this uncertain environment. Chameleon offers two environments for its OS processes: standard, for software identified as benign by traditional ML detectors, and uncertain, for software that received borderline classifications by ML methods. The uncertain environment places obstacles in the way of software execution through random perturbations applied probabilistically to selected system calls. We evaluated Chameleon with 113 applications from common benchmarks and 100 malware samples for Linux. Our results show that at a threshold of 10%, intrusive and non-intrusive strategies caused approximately 65% of malware samples to fail to accomplish their tasks, while approximately 30% of the analyzed benign software met with various levels of disruption (crashed or hampered). We also found that I/O-bound software was three times more affected by uncertainty than CPU-bound software.
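A simplified sketch of the uncertainty idea: operations issued by software in the uncertain environment are probabilistically delayed or failed at a configurable threshold. Chameleon itself intercepts system calls inside the Linux kernel; the user-space wrapper below is only an analogy.

```python
# Illustrative probabilistic perturbation of selected operations.
import random
import time

PERTURBED_OPS = {"write", "connect", "unlink"}  # illustrative selection

def uncertain_call(op_name, func, *args, threshold=0.10, **kwargs):
    """Execute func, but with probability `threshold` inject an obstacle."""
    if op_name in PERTURBED_OPS and random.random() < threshold:
        if random.random() < 0.5:
            time.sleep(0.05)                       # non-intrusive: rate-limit via delay
        else:
            raise OSError("operation perturbed")   # intrusive: inject failure
    return func(*args, **kwargs)

# Example: a perturbed write occasionally fails or slows down.
with open("/tmp/chameleon_demo.txt", "w") as f:
    for i in range(20):
        try:
            uncertain_call("write", f.write, f"record {i}\n")
        except OSError:
            pass  # potential malware would have to cope with such failures
```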
A mobile ad hoc network (MANET) is a wireless, infrastructure-less network that operates on the premise that every node in the group functions in a self-organizing capacity; operating in a group capacity, however, maximizes quality and the diversity of available resources. Due to its unique features, various challenges arise in a MANET when routing and its security come into play. The review demonstrates that the impact of failures during information transmission has not been considered in the existing research. The majority of strategies for ad hoc networks simply determine a path and transmit the data, which leads to packet drops in case of failures and thus results in low dependability. Most existing research has also neglected the rejoining process of the network's root nodes, and while most existing techniques detect failures, path re-routing has likewise been neglected. Here, we propose a path re-routing method for managing authorized nodes and managing group keys in an ad hoc environment. Two securing schemes, named 2ACK and EGSR, are proposed, which can readily be integrated with most routing protocols. Path re-routing has the ability to reduce the ratio of dropped packets. A comparative analysis clearly shows that the proposed technique outperforms the available techniques in terms of various quality metrics.
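A rough sketch of the 2ACK idea under stated assumptions: the sender over a two-hop link expects an acknowledgment from the node two hops away and flags the link as misbehaving, triggering re-routing, when the fraction of missing 2ACKs exceeds a threshold. The threshold and data structures are illustrative.

```python
# Illustrative 2ACK misbehavior monitor for a two-hop link.
class TwoAckMonitor:
    def __init__(self, threshold=0.3):
        self.threshold = threshold
        self.sent = {}      # link -> packets forwarded
        self.acked = {}     # link -> 2ACKs received

    def on_forward(self, link):
        self.sent[link] = self.sent.get(link, 0) + 1

    def on_two_ack(self, link):
        self.acked[link] = self.acked.get(link, 0) + 1

    def misbehaving(self, link):
        sent = self.sent.get(link, 0)
        if sent == 0:
            return False
        missing_ratio = 1 - self.acked.get(link, 0) / sent
        return missing_ratio > self.threshold  # trigger path re-routing here

mon = TwoAckMonitor()
for _ in range(10):
    mon.on_forward(("B", "C"))
for _ in range(5):
    mon.on_two_ack(("B", "C"))
print(mon.misbehaving(("B", "C")))  # True: half of the 2ACKs are missing
```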
Nowadays, the application of integrated management systems (IMS) attracts the attention of top management in various organizations. However, there remains an important problem: running security audits in an IMS and realizing comprehensive checks against different ISO standards at full scale while the available resources are substantially reduced.
Information security policies are not easy to create unless organizations explicitly recognize the various steps required in the development process of an information security policy, especially in institutions of higher education that use enormous amounts of IT. An improper development process, or security policy content copied from another organization, might also fail to do an effective job. Even if the replicated policy is properly developed, referenced, cites applicable laws or regulations, and is interpreted correctly, it may still fail to address issues such as non-compliance with applicable rules and regulations. A generic framework was proposed to improve and establish the development process of security policies in institutions of higher education. The content analysis and cross-case analysis methods were used in this study in order to gain a thorough understanding of the information security policy development process in institutions of higher education.
This paper combines the FMEA and N2 (N-squared) approaches in order to create a methodology for determining the risks associated with the components of an underwater system. The methodology is based on defining the risk level related to each of the components and interfaces that belong to a complex underwater system. As far as the authors know, this approach has not been reported before. The information resulting from these procedures is combined to find the system's critical elements and the interfaces that are most affected by each failure mode. Finally, a calculation is performed to determine the severity level of each failure mode based on the system's critical elements.
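As a hedged illustration of how the two methods can be combined, the sketch below weights classic FMEA risk priority numbers (severity x occurrence x detection) by the number of interfaces each component has in an N2 matrix; the components, ratings, and combination rule are assumptions, not the paper's exact procedure.

```python
# Illustrative FMEA + N2 combination for a hypothetical underwater system.
import numpy as np

components = ["thruster", "pressure_hull", "comms", "battery"]

# N2 interface matrix: n2[i][j] = 1 if component i interfaces with component j.
n2 = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 0, 1],
    [1, 1, 1, 0],
])

# FMEA ratings per component on 1-10 scales: (severity, occurrence, detection).
fmea = {"thruster": (7, 4, 3), "pressure_hull": (10, 2, 5),
        "comms": (5, 5, 4), "battery": (8, 3, 6)}

for i, name in enumerate(components):
    s, o, d = fmea[name]
    rpn = s * o * d                 # classic FMEA risk priority number
    interfaces = int(n2[i].sum())   # how coupled the component is
    print(f"{name:14s} RPN={rpn:4d} interfaces={interfaces} weighted={rpn * interfaces}")
```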
Trust Management (TM) systems for authentication are vital to the security of online interactions, which are ubiquitous in our everyday lives. Various systems, like the Web PKI (X.509) and PGP's Web of Trust, are used to manage trust in this setting. In recent years, blockchain technology has been introduced as a panacea for our security problems, including that of authentication, without sufficient reasoning as to its merits. In this work, we investigate the merits of using open distributed ledgers (ODLs), such as the one implemented by blockchain technology, for securing TM systems for authentication. We formally model such systems and explore how blockchain can help mitigate attacks against them. After formal argumentation, we conclude that in the context of Trust Management for authentication, blockchain technology, and ODLs in general, can offer considerable advantages compared to previous approaches. Our analysis is, to the best of our knowledge, the first to formally model and argue about the security of TM systems for authentication based on blockchain technology. To achieve this result, we first provide an abstract model for TM systems for authentication. Then, we show how this model can be conceptually encoded in a blockchain by expressing it as a series of state transitions. As a next step, we examine five prevalent attacks on TM systems and provide evidence that blockchain-based solutions can be beneficial to the security of such systems by mitigating, or completely negating, such attacks.
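The "series of state transitions" idea can be sketched conceptually: name-to-key bindings are updated by transitions that are committed to an append-only, hash-chained log. The record layout and hashing below are illustrative assumptions, not the paper's formal model.

```python
# Conceptual toy ledger encoding trust-management state transitions.
import hashlib
import json

class ToyLedger:
    def __init__(self):
        self.blocks = []   # each block holds one state transition
        self.state = {}    # current name -> key binding

    def _commit(self, transition):
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = json.dumps(transition, sort_keys=True)
        block_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.blocks.append({"transition": transition, "prev": prev_hash, "hash": block_hash})

    def bind(self, name, pubkey):
        self._commit({"op": "bind", "name": name, "key": pubkey})
        self.state[name] = pubkey

    def revoke(self, name):
        self._commit({"op": "revoke", "name": name})
        self.state.pop(name, None)

ledger = ToyLedger()
ledger.bind("alice.example.org", "ed25519:placeholder-key")
ledger.revoke("alice.example.org")
print(len(ledger.blocks), ledger.state)   # 2 transitions, empty binding state
```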
Advanced Persistent Threats are increasingly becoming one of the major concerns for many industries and organizations. Currently, there exist numerous articles and industrial reports describing various case studies of recent notable Advanced Persistent Threat attacks. However, these documents are expressed in natural language. This limits the efficient reusability of the threat intelligence information due to the ambiguous nature of natural language. In this article, we propose a model to formally represent Advanced Persistent Threats as multi-agent systems. Our model is inspired by the concepts of agent-oriented social modelling approaches, generally used for software security requirements analysis.
As the development of technology advances, security risk also increases. This has affected most organizations, irrespective of size, as they depend on increasingly pervasive technology to perform their daily tasks. However, the dependency on technology has introduced diverse security vulnerabilities in organizations, which requires reliable preparedness for the probable forensic investigation of unauthorized incidents. Keystroke dynamics is one of the cost-effective methods for collecting potential digital evidence. This paper presents a keystroke pattern analysis technique suitable for the collection of complementary potential digital evidence for forensic readiness. The proposition introduces a technique that relies on the extraction of a reliable behavioral signature from user activity. Experimental validation demonstrates the effectiveness of the proposition using a multi-scheme classifier. The overall goal is to have forensically sound and admissible keystroke evidence that could be presented during a forensic investigation to minimize the cost and time of the investigation.
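A minimal sketch of keystroke-dynamics feature extraction, assuming raw (key, press, release) timestamps: dwell time is how long a key is held, and flight time is the gap between releasing one key and pressing the next. The event format and feature names are assumptions; such vectors would feed a multi-scheme classifier to form a user-attributable behavioral signature.

```python
# Illustrative extraction of dwell/flight-time features from keystroke events.
def keystroke_features(events):
    """events: list of (key, press_ms, release_ms) ordered by press time."""
    dwell = [rel - prs for _, prs, rel in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return {
        "mean_dwell": sum(dwell) / len(dwell),
        "mean_flight": sum(flight) / len(flight) if flight else 0.0,
        "max_dwell": max(dwell),
    }

sample = [("p", 0, 95), ("a", 140, 230), ("s", 300, 380), ("s", 430, 540)]
print(keystroke_features(sample))
```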
Whilst the fundamental composition of digital forensic readiness has been expounded in myriad literature, the integration of behavioral modalities has not been considered. Behavioral modalities such as keystroke and mouse dynamics are key components of human behavior and have been widely used to complement security in an organization. However, these modalities present forensic properties that make them even more relevant to investigation and incident response than to their deployment in security. This study therefore proposes a forensic framework which encompasses a step-by-step guide on how to integrate behavioral biometrics into the digital forensic readiness process. The proposed framework, the behavioral biometrics-based digital forensics readiness framework (BBDFRF), comprises four phases: data acquisition, preservation, user authentication, and user pattern attribution. The proposed BBDFRF is evaluated in line with the ISO/IEC 27043 standard for proactive forensics to address the gap in the integration of behavioral biometrics into proactive forensics. BBDFRF thus extends the body of literature on the forensic capability of behavioral biometrics. The implementation of this framework can also be used to strengthen the security mechanisms of an organization, particularly continuous authentication.
This article addresses enterprise information protection within the Internet of Things concept. The aim of the research is to develop a set of arrangements to ensure the secure operation of an enterprise IPv6 network. The object of the research is the enterprise IPv6 network; the subject of the research is modern switching equipment as a tool to ensure network protection. The research tasks are to prioritize the functions of switches in production and corporate enterprise networks, to develop a network host protection algorithm, and to test the developed algorithm in the Cisco Packet Tracer 7 software emulator. The results of the research are a proposed approach to IPv6 network security based on an analysis of the functionality of modern switches; a developed and tested enterprise network host protection algorithm for the IPv6 protocol with automated control of the network's SLAAC configuration; and a set of arrangements for resisting attacks on the default enterprise gateway using ACL, VLAN, SEND, and RA Guard security technologies, which together allow a sufficiently high level of network security to be achieved.
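To complement switch-level RA Guard, a host-side monitor can watch for Router Advertisements from unapproved sources; the sketch below assumes scapy, raw-socket privileges, an interface name, and an approved-router list, none of which come from the article.

```python
# Illustrative rogue Router Advertisement monitor (requires scapy and root).
from scapy.all import sniff
from scapy.layers.inet6 import IPv6, ICMPv6ND_RA

APPROVED_ROUTERS = {"fe80::1"}   # link-local address of the legitimate gateway (assumed)

def check_ra(pkt):
    if pkt.haslayer(ICMPv6ND_RA):
        src = pkt[IPv6].src
        if src not in APPROVED_ROUTERS:
            print(f"ALERT: rogue Router Advertisement from {src}")

if __name__ == "__main__":
    sniff(iface="eth0", filter="icmp6", prn=check_ra, store=0)
```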
The volume of digital data is increasing at a fast rate, and the security of the data is at risk both while in transit on a network and at rest. The execution time of full disk encryption in large servers is significant because of the computational complexity associated with disk encryption. Hence, it is necessary to reduce the execution time of full disk encryption from the application point of view. In this work, a full disk encryption algorithm, namely EME2 AES (Encrypt Mix Encrypt V2 Advanced Encryption Standard), is analyzed. The execution time of this algorithm is reduced by means of a multicore-compatible parallel implementation which makes use of the available cores. The parallel implementation is executed on a multicore machine with 8 cores, and the speedup of the multicore implementation is measured. Results show that the multicore implementation of EME2 AES using OpenMP is up to 2.85 times faster than sequential execution for the chosen infrastructure and data range.
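The parallelization idea can be sketched in Python with multiprocessing: the simulated disk is split into independent sectors, each processed on a separate core (here with AES-CTR keyed by the sector index rather than EME2), and the speedup over sequential execution is measured. Sizes, key, and cipher mode are arbitrary choices for illustration, not the paper's OpenMP/C implementation.

```python
# Illustrative parallel per-sector encryption with speedup measurement.
import os
import time
from multiprocessing import Pool
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = bytes(32)            # fixed demo key; a real deployment derives this securely
SECTOR_SIZE = 1 << 20      # treat each 1 MiB chunk as an independent unit
NUM_SECTORS = 256          # 256 MiB of simulated disk data

def process_sector(index):
    data = os.urandom(SECTOR_SIZE)        # stand-in for reading one sector
    nonce = index.to_bytes(16, "big")     # sector index as the CTR nonce/tweak
    enc = Cipher(algorithms.AES(KEY), modes.CTR(nonce)).encryptor()
    return len(enc.update(data))          # return only the size to keep IPC cheap

def run(parallel):
    start = time.perf_counter()
    if parallel:
        with Pool() as pool:
            pool.map(process_sector, range(NUM_SECTORS))
    else:
        for i in range(NUM_SECTORS):
            process_sector(i)
    return time.perf_counter() - start

if __name__ == "__main__":
    t_seq, t_par = run(False), run(True)
    print(f"sequential {t_seq:.2f}s  parallel {t_par:.2f}s  speedup {t_seq / t_par:.2f}x")
```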
Used by both information systems designers and security personnel, the Attack Tree method provides a graphical analysis of the ways in which an entity (a computer system or network, an entire organization, etc.) can be attacked and indicates the countermeasures that can be taken to prevent attackers from reaching their objective. In this paper, we built an Attack Tree focused on the goal "compromising the security of a Web platform", considering the most common vulnerabilities of the WordPress platform identified by CVE (Common Vulnerabilities and Exposures), a global reference system for recording information regarding computer security threats. Finally, based on the likelihood of the attacks, we performed a quantitative analysis of the probability that the security of the Web platform can be compromised.
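A small sketch of quantitative attack-tree evaluation under the usual independence assumption: OR nodes succeed if any child does, AND nodes only if all children do. The tree shape and leaf likelihoods are illustrative, not the CVE-derived values used in the paper.

```python
# Illustrative bottom-up probability computation over an attack tree.
from math import prod

def attack_probability(node):
    if "p" in node:                     # leaf: likelihood of this exploit
        return node["p"]
    child_ps = [attack_probability(c) for c in node["children"]]
    if node["gate"] == "OR":
        return 1 - prod(1 - p for p in child_ps)
    return prod(child_ps)               # AND gate

tree = {
    "gate": "OR", "children": [         # goal: compromise the Web platform
        {"p": 0.15},                    # e.g., exploit an outdated plugin
        {"gate": "AND", "children": [   # e.g., brute-force the admin login
            {"p": 0.40},                # weak password in use
            {"p": 0.30},                # no login rate limiting
        ]},
    ],
}
print(f"P(goal) = {attack_probability(tree):.3f}")   # 0.252
```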
The Air Force is shifting its cybersecurity paradigm from an information technology (IT)-centric approach toward a mission-oriented one. Instead of focusing on how to defend its IT infrastructure, it seeks to provide mission assurance by defending mission-relevant cyber terrain, enabling mission execution in a contested environment. In order to actively defend a mission in cyberspace, efforts must be taken to understand and document that mission's dependence on cyberspace and cyber assets. This is known as cyber terrain mission mapping. This paper seeks to define mission mapping and to give an overview of its methodologies. We also analyze current tools that seek to provide cyber situational awareness through mission mapping or cyber dependency impact analysis, and we identify existing shortfalls.