Bibliography
Traditional security practices focus on negative incentives that attempt to force compliance through constraints, monitoring, and punishment. This paper describes a missing dimension of most organizations' insider threat defense: one that explicitly considers positive incentives for attracting individuals to act in the interests of the organization. Positive incentives focus on properties of the organizational context of workforce management practices, including those relating to organizational supportiveness, coworker connectedness, and job engagement. Without due attention to the organizational context in which insider threats occur, insider misbehaviors may simply recur as a natural response to counterproductive or dysfunctional management practices. A balanced combination of positive and negative incentives can improve employees' relationships with the organization and provide a means for employees to better cope with personal and professional stressors. An insider threat program that balances organizational incentives can become an advocate for the workforce and a means for improving employee work life, a welcome message to employees who feel threatened by programs focused on discovering insider wrongdoing.
Public key infrastructure (PKI) is the foundation and core of network security construction. Blockchain (BC) has technical characteristics such as decentralization and resistance to tampering and forgery, which give it clear advantages over traditional technology in ensuring information credibility, security, and traceability. In this paper, a method of constructing a PKI certificate system based on a permissioned BC is proposed. The problems of mutual trust among multiple CAs, poor certificate configuration efficiency, and single points of failure in digital certificate systems are solved by using the distributed and tamper-resistant characteristics of the BC. At the same time, to address the problem of identity privacy on the BC, this paper proposes a privacy-aware PKI system based on permissioned BCs. This system is an anonymous digital certificate publishing scheme that separates user registration from authorization and provides anonymity with conditional traceability, thereby protecting users' identity privacy. The system meets the requirements of certificate security and anonymity, reduces the cost of CA construction, operation, and maintenance in traditional PKI technology, and improves the efficiency of certificate application and configuration.
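The tamper-evidence and traceability claims rest on recording certificate events on an append-only, hash-chained ledger. The following is a minimal sketch of that idea only; the class and field names (CertLedger, subject_pseudonym) are illustrative assumptions, not the paper's permissioned-BC implementation.

```python
import hashlib, json, time

class CertLedger:
    """Minimal append-only ledger of certificate events (issue/revoke).
    Each block stores the hash of the previous block, so any later
    modification of an earlier entry is detectable."""

    def __init__(self):
        self.chain = []

    def append(self, event: dict) -> dict:
        prev_hash = self.chain[-1]["hash"] if self.chain else "0" * 64
        body = {"event": event, "prev_hash": prev_hash, "ts": time.time()}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.chain.append(body)
        return body

    def verify(self) -> bool:
        prev = "0" * 64
        for block in self.chain:
            recomputed = dict(block)
            stored = recomputed.pop("hash")
            if recomputed["prev_hash"] != prev or stored != hashlib.sha256(
                    json.dumps(recomputed, sort_keys=True).encode()).hexdigest():
                return False
            prev = stored
        return True

ledger = CertLedger()
ledger.append({"action": "issue", "serial": "A1", "subject_pseudonym": "user-7f3c"})
ledger.append({"action": "revoke", "serial": "A1"})
assert ledger.verify()
```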
With an increase in targeted attacks such as advanced persistent threats (APTs), enterprise system defenders require comprehensive frameworks that allow them to collaborate and evaluate their defense systems against such attacks. MITRE has developed a framework which includes a database of different kill chains, tactics, techniques, and procedures that attackers employ to perform these attacks. In this work, we leverage natural language processing techniques to extract attacker actions from threat report documents generated by different organizations and automatically classify them into standardized tactics and techniques, while providing relevant mitigation advisories for each attack. A naïve method to achieve this is to train a machine learning model to predict labels that associate the reports with relevant categories. In practice, however, sufficient labeled data for model training is not always readily available, so training and test data come from different sources, resulting in bias. A naïve model would typically underperform in such a situation. We address this major challenge by incorporating an importance weighting scheme called bias correction that efficiently utilizes the available labeled data, given threat reports whose categories are to be automatically predicted. We empirically evaluated our approach on 18,257 real-world threat reports generated between 2000 and 2018 by various computer security organizations to demonstrate its superiority by comparing its performance with an existing approach.
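The bias-correction idea can be illustrated with a generic importance-weighting sketch for covariate shift: a domain classifier estimates how likely each labeled training report is under the target distribution, and the resulting ratio is used as a sample weight. The TF-IDF features and logistic regression models below are assumptions for illustration, not the paper's exact pipeline.

```python
# Generic importance weighting for covariate shift; not the paper's exact scheme.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def importance_weights(train_texts, test_texts):
    vec = TfidfVectorizer(max_features=5000)
    X = vec.fit_transform(train_texts + test_texts)
    # Domain labels: 0 = labeled training source, 1 = unlabeled target reports.
    d = np.array([0] * len(train_texts) + [1] * len(test_texts))
    dom = LogisticRegression(max_iter=1000).fit(X, d)
    p = dom.predict_proba(X[:len(train_texts)])[:, 1]
    # w(x) ~ p_target(x) / p_train(x), up to a constant factor.
    return p / np.clip(1.0 - p, 1e-6, None), vec

def train_tactic_classifier(train_texts, train_labels, test_texts):
    # Weight each labeled report by how representative it is of the target reports.
    w, vec = importance_weights(train_texts, test_texts)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vec.transform(train_texts), train_labels, sample_weight=w)
    return clf, vec
```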
With the growing number of cyberattack incidents, organizations are required to have proactive knowledge of the cybersecurity landscape to defend their resources efficiently. To achieve this, organizations must develop a culture of sharing their threat information with others for effectively assessing the associated risks. However, sharing cybersecurity information is costly for organizations because the information conveys sensitive and private data. Hence, deciding whether to share information is a challenging task and requires resolving the trade-off between sharing advantages and privacy exposure. On the other hand, cybersecurity information exchange (CYBEX) management is crucial in stabilizing the system through setting the correct values for participation fees and sharing incentives. In this work, we model the interaction of organizations, CYBEX, and attackers involved in a sharing system as a dynamic game. By devising appropriate payoff models for each player, we analyze the best strategies of the entities, incorporating the organizations' privacy component in the sharing model. Using best response analysis, the simulation results demonstrate the efficiency of our proposed framework.
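A toy version of the sharing game can make the trade-off concrete. The payoff form below (a saturating sharing benefit, a quadratic privacy-exposure cost, and a flat CYBEX participation fee) and all parameter values are assumptions for illustration, not the paper's payoff models.

```python
import numpy as np

def org_payoff(share, others_share, fee, benefit=10.0, privacy_cost=6.0):
    # Benefit grows with total shared information but saturates; the privacy
    # cost grows quadratically with the organization's own exposure.
    return benefit * np.log1p(share + others_share) - privacy_cost * share**2 - fee

def best_response(others_share, fee, grid=np.linspace(0, 1, 101)):
    # Grid search over the organization's own sharing level in [0, 1].
    payoffs = [org_payoff(s, others_share, fee) for s in grid]
    return grid[int(np.argmax(payoffs))]

# Iterate best responses between two symmetric organizations until they settle.
s1 = s2 = 0.5
for _ in range(50):
    s1, s2 = best_response(s2, fee=1.0), best_response(s1, fee=1.0)
print(round(s1, 2), round(s2, 2))
```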
Given the growing sophistication of cyber attacks, designing a perfectly secure system is generally not possible. Quantitative security metrics are thus needed to measure and compare the relative security of proposed security designs and policies. Since investigations of security breaches have shown a strong impact of human error, ignoring the human user in computing these metrics can lead to misleading results. Despite this, and although security researchers have long observed the impact of human behavior on system security, few improvements have been made in designing systems that are resilient to the uncertainties in how humans interact with a cyber system. In this work, we develop an approach for including models of user behavior, emanating from the fields of social science and psychology, in the modeling of systems intended to be secure. We then illustrate how one of these models, namely general deterrence theory, can be used to study the effectiveness of password security requirement policies and the frequency of security audits in a typical organization. Finally, we discuss the many challenges that arise when adopting such a modeling approach and present our recommendations for future work.
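As a rough illustration of how a deterrence-theoretic user model could enter such metrics, the sketch below assumes a logistic relationship between perceived sanction certainty/severity and the probability of a policy violation; the functional form and weights are assumptions, not the authors' model.

```python
import math

def violation_probability(certainty, severity, benefit=2.0,
                          w_certainty=3.0, w_severity=2.0):
    # certainty, severity in [0, 1]; stronger deterrence -> lower probability.
    utility = benefit - w_certainty * certainty - w_severity * severity
    return 1.0 / (1.0 + math.exp(-utility))

# Example: more frequent security audits raise the perceived certainty of detection.
for audits_per_year in (1, 4, 12):
    certainty = min(1.0, audits_per_year / 12)
    print(audits_per_year, round(violation_probability(certainty, severity=0.5), 3))
```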
Trust Management (TM) systems for authentication are vital to the security of online interactions, which are ubiquitous in our everyday lives. Various systems, like the Web PKI (X.509) and PGP's Web of Trust, are used to manage trust in this setting. In recent years, blockchain technology has been introduced as a panacea for our security problems, including that of authentication, without sufficient reasoning as to its merits. In this work, we investigate the merits of using open distributed ledgers (ODLs), such as the one implemented by blockchain technology, for securing TM systems for authentication. We formally model such systems and explore how blockchain can help mitigate attacks against them. After formal argumentation, we conclude that in the context of Trust Management for authentication, blockchain technology, and ODLs in general, can offer considerable advantages compared to previous approaches. Our analysis is, to the best of our knowledge, the first to formally model and argue about the security of TM systems for authentication based on blockchain technology. To achieve this result, we first provide an abstract model for TM systems for authentication. Then, we show how this model can be conceptually encoded in a blockchain by expressing it as a series of state transitions. As a next step, we examine five prevalent attacks on TM systems and provide evidence that blockchain-based solutions can be beneficial to the security of such systems by mitigating, or completely negating, such attacks.
As cloud computing becomes increasingly pervasive, it is critical for cloud providers to support basic security controls. Although major cloud providers tout such features, relatively little is known in many cases about their design and implementation. In this paper, we describe several security features in OpenStack, a widely used, open source cloud computing platform. Our contributions to OpenStack range from key management and storage encryption to guaranteeing the integrity of virtual machine (VM) images prior to boot. We describe the design and implementation of these features in detail and provide a security analysis that enumerates the threats each one mitigates. Our performance evaluation shows that these security features have an acceptable cost, in some cases within the measurement error observed in an operational cloud deployment. Finally, we highlight lessons learned from our real-world development experience in contributing these features to OpenStack, as a way to encourage others to transition their research into practice.
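As a generic illustration of pre-boot image integrity checking (one of the features described), the sketch below compares an image digest against a trusted manifest before allowing boot; OpenStack's actual mechanism is more elaborate and differs in detail.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    # Stream the image in 1 MiB chunks so large images do not exhaust memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_before_boot(image_path, trusted_manifest):
    # trusted_manifest: mapping of image path -> expected SHA-256 digest.
    digest = sha256_of(image_path)
    expected = trusted_manifest.get(image_path)
    if expected is None or digest != expected:
        raise RuntimeError(f"Refusing to boot: integrity check failed for {image_path}")
    return True
```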
Advanced Persistent Threats are increasingly becoming one of the major concerns of many industries and organizations. Numerous articles and industrial reports currently describe various case studies of recent notable Advanced Persistent Threat attacks. However, these documents are expressed in natural language, which limits the efficient reusability of the threat intelligence information due to the ambiguous nature of natural language. In this article, we propose a model to formally represent Advanced Persistent Threats as multi-agent systems. Our model is inspired by the concepts of agent-oriented social modelling approaches, generally used for software security requirement analysis.
Whilst the fundamental composition of digital forensic readiness has been expounded in a myriad of literature, the integration of behavioral modalities has not been considered. Behavioral modalities such as keystroke and mouse dynamics are key components of human behavior that have been widely used to complement security in an organization. However, these modalities present forensic properties that make them even more relevant to investigation and incident response than to their current deployment in security. This study therefore proposes a forensic framework which encompasses a step-by-step guide on how to integrate behavioral biometrics into the digital forensic readiness process. The proposed framework, the behavioral biometrics-based digital forensic readiness framework (BBDFRF), comprises four phases: data acquisition, preservation, user authentication, and user pattern attribution. The proposed BBDFRF is evaluated in line with the ISO/IEC 27043 standard for proactive forensics, to address the gap in the integration of behavioral biometrics into proactive forensics. BBDFRF thus extends the body of literature on the forensic capability of behavioral biometrics. The implementation of this framework can also be used to strengthen the security mechanisms of an organization, particularly for continuous authentication.
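To make the acquisition phase concrete, the sketch below shows the kind of keystroke-dynamics features (dwell and flight times) that such a framework could capture; the event format is an assumption, not part of the BBDFRF specification.

```python
def keystroke_features(events):
    """events: list of (key, press_time, release_time), times in seconds."""
    # Dwell time: how long each key is held down.
    dwell = [release - press for _, press, release in events]
    # Flight time: gap between releasing one key and pressing the next.
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return {"dwell": dwell, "flight": flight}

sample = [("p", 0.00, 0.08), ("a", 0.15, 0.22), ("s", 0.30, 0.36)]
print(keystroke_features(sample))
```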
Malware has become sophisticated, and organizations don't have a Plan B when standard lines of defense fail. These failures have devastating consequences for organizations, such as sensitive information being exfiltrated. A promising avenue for improving the effectiveness of behavioral-based malware detectors is to combine fast (usually not highly accurate) traditional machine learning (ML) detectors with high-accuracy, but time-consuming, deep learning (DL) models. The main idea is to place software receiving borderline classifications from traditional ML methods in an environment where uncertainty is added while the software is analyzed by the time-consuming DL models. The goal of the uncertainty is to rate-limit the actions of potential malware during deep analysis. In this paper, we describe Chameleon, a Linux-based framework that implements this uncertain environment. Chameleon offers two environments for its OS processes: standard, for software identified as benign by traditional ML detectors, and uncertain, for software that received borderline classifications from the ML methods. The uncertain environment introduces obstacles to software execution through random perturbations applied probabilistically to selected system calls. We evaluated Chameleon with 113 applications from common benchmarks and 100 malware samples for Linux. Our results show that at a threshold of 10%, intrusive and non-intrusive strategies caused approximately 65% of the malware to fail to accomplish their tasks, while approximately 30% of the analyzed benign software met with various levels of disruption (crashed or hampered). We also found that I/O-bound software was three times more affected by uncertainty than CPU-bound software.
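The perturbation idea can be sketched conceptually as follows (in user-space Python, unlike Chameleon's OS-level implementation); the specific strategies, call list, and probabilities are illustrative assumptions.

```python
import random, time, errno

PERTURBED_CALLS = {"read", "write", "connect", "unlink"}

def maybe_perturb(call_name, threshold=0.10):
    """With probability `threshold`, perturb a selected call; otherwise pass through."""
    if call_name not in PERTURBED_CALLS or random.random() >= threshold:
        return None  # no perturbation applied
    strategy = random.choice(["delay", "error", "silence"])
    if strategy == "delay":          # non-intrusive: slow the call down
        time.sleep(random.uniform(0.01, 0.1))
        return "delayed"
    if strategy == "error":          # intrusive: fail the call
        raise OSError(errno.EIO, "injected I/O error")
    return "silenced"                # intrusive: report success without acting
```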
Organizations are exposed to various cyber-attacks. When a component is exploited, the overall computed damage is affected by the number of components the network includes. This work focuses on estimating the Target Distribution characteristic of an attacked network. According to existing security assessment models, Target Distribution is assessed using ordinal values based on users' intuitive knowledge. This work aims at defining a formula which enables quantitatively measuring the attacked components' distribution. The proposed formula is based on the real-time configuration of the system. Using the proposed measure, firms can quantify damages, allocate appropriate budgets to actual risks, and build their configuration while taking into consideration the risks affected by components' distribution. The formula is demonstrated as part of a security continuous monitoring system.
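The paper's formula is not reproduced in this abstract. Purely as an illustration of what a quantitative distribution measure could look like, the sketch below uses the normalized Shannon entropy of a component type's instances across hosts; this is an assumption, not the proposed formula.

```python
import math

def target_distribution(instances_per_host):
    # instances_per_host: count of the targeted component type on each host.
    counts = [c for c in instances_per_host if c > 0]
    total = sum(counts)
    if total == 0 or len(counts) < 2:
        return 0.0
    probs = [c / total for c in counts]
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(len(counts))   # 1.0 = evenly spread, near 0 = concentrated

# Example: a vulnerable service present on 3 of 4 hosts, unevenly deployed.
print(round(target_distribution([5, 1, 1, 0]), 3))
```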
Information security policies are not easy to create unless organizations explicitly recognize the various steps required in the development process of an information security policy, especially in institutions of higher education that use enormous amounts of IT. An improper development process, or security policy content copied from another organization, may also fail to do an effective job. Even if the replicated policy is properly developed, referenced, cited in laws or regulations, and interpreted correctly, it may still fail to address issues such as non-compliance with applicable rules and regulations. A generic framework was proposed to improve and establish the development process of security policies in institutions of higher education. The content analysis and cross-case analysis methods were used in this study in order to gain a thorough understanding of the information security policy development process in institutions of higher education.
Distributed Denial of Service (DDoS) is a sophisticated cyber-attack due to its variety of types and techniques. The traditional way to mitigate this attack is to deploy dedicated security appliances such as firewalls, load balancers, etc. However, due to the limited capacity of the hardware and the potentially high volume of DDoS traffic, these appliances may not be able to defend against all attacks. Therefore, cloud-based DDoS protection services were introduced to allow organizations to redirect their traffic to scrubbing centers in the cloud for filtering. This solution has some drawbacks, such as privacy violation and latency. More recently, Network Functions Virtualization (NFV) and edge computing have been proposed as new networking service models. In this paper, we design a framework that leverages NFV and edge computing for DDoS mitigation through a two-stage process.
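One way to picture the two-stage process is a cheap per-source rate check at the edge followed by deeper inspection only for flagged traffic; the sketch below is such a schematic, with all names and thresholds being assumptions rather than the paper's design.

```python
from collections import defaultdict

class EdgeRateFilter:                      # stage 1: lightweight check at the edge VNF
    def __init__(self, max_pkts_per_window=1000):
        self.counts = defaultdict(int)
        self.max = max_pkts_per_window

    def suspicious(self, src_ip):
        self.counts[src_ip] += 1
        return self.counts[src_ip] > self.max

def deep_inspect(packet):                  # stage 2: heavier analysis (placeholder)
    return packet.get("syn_only", False)   # e.g. flag bare SYNs typical of a flood

def handle(packet, edge_filter):
    if not edge_filter.suspicious(packet["src_ip"]):
        return "forward"
    return "drop" if deep_inspect(packet) else "forward"
```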
In the 21st century, integrated transport, service, and mobility concepts for real-life situations are enabled by automation systems and smarter connectivity. These services and ideas can benefit from cloud computing and big data management techniques for the transport system. These methods can also include automation, security, and integration with other modes. An integrated transport system can offer new means of communication among vehicles. This paper presents how hybrid cloud computing can help make urban transportation smarter while also considering issues like security and privacy. A simple, structured framework based on a hybrid cloud computing system might prevent common existing issues.
There is a widening chasm between the ease of creating software and the difficulty of "building security in". This paper reviews the approach, the findings, and recent experiments from a seven-year effort to enable consistency across a large, diverse development organization and software portfolio via policies, guidance, automated tools, and services. Experience shows that developing secure software is an elusive goal for most. It requires every team to know and apply a wide range of security knowledge in the context of what software is being built, how the software will be used, and the projected threats in the environment where the software will operate. The drive for better outcomes for secure development and increased developer productivity led to experiments to augment developer knowledge and eventually realize the goal of "building the right security in".
As one of the security components in cyber situational awareness systems, an Intrusion Detection System (IDS) is implemented by many organizations in their networks to address the impact of network attacks. Regardless of the tools and technologies used to generate security alarms, an IDS can provide a situation overview of network traffic. Even with the security alarm data generated, most organizations do not have the right techniques and further analysis to make this alarm data more valuable for the security team to handle attacks and reduce risk to the organization. This paper proposes an IDS Metrics Framework for cyber situational awareness systems that includes the latest technologies and techniques that can be used to create valuable metrics for security advisors in making the right decisions. This metrics framework consists of the various tools and techniques used to evaluate the data. The evaluation of the data is then used as a measurement against one or more reference points to produce an outcome that can be very useful for the decision-making process of a cyber situational awareness system. The framework also offers Graphical User Interface (GUI) tools that produce graphical displays and provide a strong platform for analysis and decision-making by security teams.
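A minimal example of turning raw alarms into a metric measured against a reference point is sketched below; the severity weights and weekly-baseline comparison are assumptions for illustration, not the framework's prescribed metrics.

```python
from collections import Counter

SEVERITY_WEIGHTS = {"low": 1, "medium": 3, "high": 7, "critical": 10}

def weighted_alarm_score(alarms):
    # Collapse raw IDS alarms into a single severity-weighted score.
    counts = Counter(a["severity"] for a in alarms)
    return sum(SEVERITY_WEIGHTS[s] * n for s, n in counts.items())

def metric_vs_baseline(this_week_alarms, baseline_score):
    # Compare the current score against a reference point (e.g. last month's average).
    score = weighted_alarm_score(this_week_alarms)
    return {"score": score,
            "ratio_to_baseline": score / baseline_score if baseline_score else None}

alarms = [{"severity": "high"}, {"severity": "low"}, {"severity": "low"}]
print(metric_vs_baseline(alarms, baseline_score=6))
```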
This paper combines FMEA and N2 approaches in order to create a methodology for determining the risks associated with the components of an underwater system. This methodology is based on defining the risk level related to each of the components and interfaces that belong to a complex underwater system. As far as the authors know, this approach has not been reported before. The resulting information from the mentioned procedures is combined to find the system's critical elements and the interfaces that are most affected by each failure mode. Finally, a calculation is performed to determine the severity level of each failure mode based on the system's critical elements.
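A rough sketch of how FMEA ratings and an N2 interface matrix could be combined is shown below; it uses the standard RPN product (severity x occurrence x detection) and an interface count, and is an illustrative assumption rather than the paper's exact calculation.

```python
import numpy as np

components = ["thruster", "sensor_pod", "power_bus", "comms"]
# N2 matrix: entry [i][j] = 1 if component i interfaces with component j.
n2 = np.array([[0, 1, 1, 0],
               [1, 0, 1, 1],
               [1, 1, 0, 1],
               [0, 1, 1, 0]])

failure_modes = [
    {"mode": "seal leak", "component": "thruster", "sev": 8, "occ": 3, "det": 4},
    {"mode": "power dropout", "component": "power_bus", "sev": 9, "occ": 2, "det": 6},
]

for fm in failure_modes:
    idx = components.index(fm["component"])
    rpn = fm["sev"] * fm["occ"] * fm["det"]        # classic FMEA risk priority number
    touched = int(n2[idx].sum())                   # interfaces that could propagate the failure
    print(fm["mode"], "RPN:", rpn, "interfaces affected:", touched)
```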
Insider threats can cause immense damage to organizations of different types, including government, corporate, and non-profit organizations. Being an insider, however, does not necessarily equate to being a threat. Effectively identifying valid threats, and assessing the type of threat an insider presents, remain difficult challenges. In this work, we propose a novel breakdown of eight insider threat types, identified by using three insider traits: predictability, susceptibility, and awareness. In addition to presenting this framework for insider threat types, we implement a computational model to demonstrate the viability of our framework with synthetic scenarios devised after reviewing real world insider threat case studies. The results yield useful insights into how further investigation might proceed to reveal how best to gauge predictability, susceptibility, and awareness, and precisely how they relate to the eight insider types.
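The eight types follow directly from treating each of the three traits as binary (2^3 = 8 combinations), which can be enumerated as below; the low/high labels are placeholders, not the taxonomy's official names.

```python
from itertools import product

traits = ["predictability", "susceptibility", "awareness"]
# Each trait takes a low or high value, yielding 2**3 = 8 insider profiles.
for i, combo in enumerate(product(("low", "high"), repeat=3), start=1):
    profile = dict(zip(traits, combo))
    print(f"Type {i}: {profile}")
```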
The volume of digital data is increasing at a fast rate, and the security of the data is at risk both while in transit on a network and at rest. The execution time of full disk encryption on large servers is significant because of the computational complexity associated with disk encryption. Hence, it is necessary to reduce the execution time of full disk encryption from the application point of view. In this work, a full disk encryption algorithm, namely EME2 AES (Encrypt Mix Encrypt V2 Advanced Encryption Standard), is analyzed. The execution time of this algorithm is reduced by means of a multicore-compatible parallel implementation which makes use of the available cores. The parallel implementation is executed on a multicore machine with 8 cores, and the speedup of the multicore implementation is measured. Results show that the multicore implementation of EME2 AES using OpenMP is up to 2.85 times faster than sequential execution for the chosen infrastructure and data range.
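The reported figure is a speedup ratio (sequential time divided by parallel time, up to 2.85 on 8 cores). The sketch below shows how such a ratio would be measured; the per-sector function is a placeholder, not an EME2 AES implementation, and the original work uses OpenMP in C rather than Python multiprocessing.

```python
import time
from multiprocessing import Pool

def encrypt_sector(sector_bytes):
    # Placeholder for per-sector EME2 AES encryption.
    return bytes(b ^ 0x5A for b in sector_bytes)

def measure(sectors, workers):
    start = time.perf_counter()
    if workers == 1:
        [encrypt_sector(s) for s in sectors]
    else:
        with Pool(workers) as pool:
            pool.map(encrypt_sector, sectors)
    return time.perf_counter() - start

if __name__ == "__main__":
    sectors = [bytes(4096)] * 2048
    t_seq, t_par = measure(sectors, 1), measure(sectors, 8)
    print("speedup:", round(t_seq / t_par, 2))   # speedup = T_sequential / T_parallel
```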
Nowadays, the application of integrated management systems (IMS) attracts the attention of top management in various organizations. However, there is an important problem of running security audits in an IMS and realizing complex checks against different ISO standards at full scale while substantially reducing the resources required.
The article addresses enterprise information protection within the Internet of Things concept. The aim of the research is to develop a set of arrangements to ensure the secure operation of an enterprise IPv6 network. The object of research is the enterprise IPv6 network. The subject of research is modern switching equipment as a tool to ensure network protection. The research tasks are to prioritize the functioning of switches in production and corporate enterprise networks, to develop a network host protection algorithm, and to test the developed algorithm on the Cisco Packet Tracer 7 software emulator. The result of the research is a proposed approach to IPv6 network security based on an analysis of the functionality of modern switches: a developed and tested enterprise network host protection algorithm for the IPv6 protocol with automated control of the network's SLAAC configuration, together with a set of arrangements for resisting attacks on the default enterprise gateway using ACL, VLAN, SEND, and RA Guard security technologies, which allows creating a sufficiently high level of network security.