Bibliography
Nowadays, Information Technology is an essential part of human life and of organizations. Organizations face IT-related problems, and to solve them they must improve their security practices; there is thus a need for security assessments within organizations to verify security conditions. Security standards and general metrics can be useful for measuring the security of an organization; however, general metrics that apply to businesses at large may not be effective in a particular situation. It is therefore important to select metric standards suited to different businesses, improving both cost and organizational security. Selecting suitable security measures requires an efficient way of identifying them. Given the numerous complexities of these metrics and the breadth of their definitions, this paper, based on a comparative study and the benchmarking method, proposes a taxonomy of security measures intended to help a business choose metrics tailored to its needs and conditions.
With an increase in targeted attacks such as advanced persistent threats (APTs), enterprise system defenders require comprehensive frameworks that allow them to collaborate and evaluate their defense systems against such attacks. MITRE has developed a framework which includes a database of different kill chains, tactics, techniques, and procedures that attackers employ to perform these attacks. In this work, we leverage natural language processing techniques to extract attacker actions from threat report documents generated by different organizations and automatically classify them into standardized tactics and techniques, while providing relevant mitigation advisories for each attack. A naïve method to achieve this is to train a machine learning model to predict labels that associate the reports with relevant categories. In practice, however, sufficient labeled data for model training is not always readily available, and training and test data often come from different sources, resulting in bias; a naïve model would typically underperform in such a situation. We address this major challenge by incorporating an importance weighting scheme called bias correction that efficiently utilizes available labeled data, given threat reports whose categories are to be automatically predicted. We empirically evaluated our approach on 18,257 real-world threat reports generated between 2000 and 2018 by various computer security organizations, demonstrating its superiority by comparing its performance with an existing approach.
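The bias-correction idea can be sketched as follows: train a classifier to distinguish a document's source (labeled training corpus vs. incoming test reports) and convert its probabilities into importance weights for the task model. A minimal sketch assuming scikit-learn and TF-IDF features; the report texts and tactic labels are hypothetical stand-ins:

    # Minimal importance-weighting sketch for covariate shift (assumes scikit-learn).
    import scipy.sparse as sp
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    train_texts = ["report about spearphishing", "credential dumping observed"]
    train_labels = ["initial-access", "credential-access"]
    test_texts = ["lateral movement via remote services"]

    vec = TfidfVectorizer()
    X_train = vec.fit_transform(train_texts)
    X_test = vec.transform(test_texts)

    # Domain classifier: distinguish training-source (0) from test-source (1) reports.
    X_dom = sp.vstack([X_train, X_test])
    y_dom = [0] * X_train.shape[0] + [1] * X_test.shape[0]
    dom = LogisticRegression().fit(X_dom, y_dom)

    # Importance weight w(x) ~ p_test(x) / p_train(x), from the domain-classifier odds.
    p = dom.predict_proba(X_train)[:, 1]
    weights = p / (1 - p + 1e-9)

    # Train the tactic classifier with the bias-correcting sample weights.
    clf = LogisticRegression().fit(X_train, train_labels, sample_weight=weights)
    print(clf.predict(X_test))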
Application whitelisting software allows only examined and trusted applications to run on a user's machine. Since many malicious files do not require administrative privileges to execute, whitelisting can be the only way to block the execution of unauthorized applications in an enterprise environment and thus prevent infection or data breach. To assess the current state of such solutions, licenses for three whitelisting solutions were obtained in order to test their effectiveness against different modern types of ransomware found in the wild. The study was conducted in a virtual environment running Windows Server and Enterprise editions. The objective of this paper is not to evaluate each vendor or to make purchasing recommendations for specific software, but rather to assess the ability of application control solutions to block the execution of ransomware files, as well as the potential for future research. The results show the promise and effectiveness of whitelisting solutions.
Organizations are exposed to various cyber-attacks. When a component is exploited, the overall computed damage depends on the number of components the network includes. This work focuses on estimating the Target Distribution characteristic of an attacked network. According to existing security assessment models, Target Distribution is assessed using ordinal values based on users' intuitive knowledge. This work aims to define a formula that enables quantitatively measuring the distribution of attacked components. The proposed formula is based on the real-time configuration of the system. Using the proposed measure, firms can quantify damages, allocate appropriate budgets to actual risks, and build their configurations while taking into consideration the risks introduced by component distribution. The formula is demonstrated as part of a security continuous monitoring system.
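The abstract does not give the formula itself; as a point of reference, CVSS v2 defined Target Distribution as the proportion of potentially affected systems, which can be computed from a real-time inventory. A hypothetical sketch (component names and versions are invented):

    # Illustrative only: compute Target Distribution as the fraction of components
    # running a vulnerable configuration, from a hypothetical real-time inventory.
    inventory = {
        "web-01": {"openssl": "1.0.1f"},   # vulnerable version (hypothetical)
        "web-02": {"openssl": "1.0.2k"},
        "db-01":  {"openssl": "1.0.1f"},
    }

    def target_distribution(inventory, package, vulnerable_versions):
        """Fraction of components running a vulnerable version of `package`."""
        hits = sum(1 for cfg in inventory.values()
                   if cfg.get(package) in vulnerable_versions)
        return hits / len(inventory)

    td = target_distribution(inventory, "openssl", {"1.0.1f"})
    print(f"Target Distribution = {td:.2f}")   # 0.67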
Business or military missions are supported by hardware and software systems. Unanticipated cyber activities in these supporting systems can impact such missions. To quantify that impact, we describe a layered graphical model as an extension of forensic investigation. Our model has three layers: the upper layer models the operational tasks that constitute the mission and their inter-dependencies; the middle layer reconstructs attack scenarios from available evidence, establishing their inter-relationships; and, where not all evidence is available, the lower layer reconstructs potentially missing attack steps. Using the graphs constructed at these three levels, we present a method to compute the impact of attack activities on missions. Impact computations use Common Vulnerability Scoring System (CVSS) scores from NIST's National Vulnerability Database (NVD), or forensic investigators' estimates. We present a case study to show the utility of our model.
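A minimal sketch of the general idea, not the paper's exact algorithm: seed the leaf components with normalized CVSS scores and propagate impact up the task-dependency graph (here with a simple max rule; the graph structure and scores are invented):

    # Toy impact propagation over a mission/task/component dependency graph.
    depends_on = {
        "mission": ["task_a", "task_b"],
        "task_a":  ["host_1"],
        "task_b":  ["host_1", "host_2"],
    }
    cvss = {"host_1": 9.8, "host_2": 0.0}   # evidence-based scores for leaf nodes

    def impact(node):
        if node in cvss:                     # leaf: exploited (or clean) component
            return cvss[node] / 10.0
        children = depends_on.get(node, [])
        # A task is impacted in proportion to its most-impacted dependency.
        return max(impact(c) for c in children) if children else 0.0

    print(f"Mission impact: {impact('mission'):.2f}")   # 0.98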
Tracing and integrating security requirements throughout the development process is a key challenge in security engineering. In socio-technical systems, security requirements for the organizational and technical aspects of a system are currently dealt with separately, giving rise to substantial misconceptions and errors. In this paper, we present a model-based security engineering framework for supporting system design at both the organizational and technical levels. The key idea is to allow the involved experts to specify security requirements in the languages they are familiar with: business analysts use BPMN for procedural system descriptions, while system developers use UML to design and implement the system architecture. Security requirements are captured via the language extensions SecBPMN2 and UMLsec. We provide a model transformation to bridge the conceptual gap between SecBPMN2 and UMLsec. Using UMLsec policies, various security properties of the resulting architecture can be verified. In a case study featuring an air traffic management system, we show how our framework can be practically applied.
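At its core, the transformation maps SecBPMN2 security annotations onto UMLsec stereotypes. A table-driven toy sketch; the two rules shown are illustrative examples, not the paper's complete rule set:

    # Illustrative mapping from SecBPMN2 annotations to UMLsec stereotypes.
    SECBPMN2_TO_UMLSEC = {
        "Confidentiality": "<<secrecy>>",     # example rule, not exhaustive
        "Integrity":       "<<integrity>>",
    }

    def transform(annotations):
        """Map each SecBPMN2 annotation to its UMLsec counterpart, if known."""
        return [SECBPMN2_TO_UMLSEC[a] for a in annotations
                if a in SECBPMN2_TO_UMLSEC]

    print(transform(["Confidentiality", "Integrity"]))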
Securing critical documents in the cloud against data threats is a major challenge faced by organizations today. Controlling and limiting access to such documents requires a robust and trustworthy access control mechanism. In this paper, we propose a semantically rich access control system that employs an access broker module to evaluate access decisions based on rules generated from the organization's confidentiality policies. The proposed system analyzes the multi-valued attributes of the user making the request and of the requested document stored on a cloud service platform before making an access decision. Furthermore, our system guarantees an end-to-end oblivious data transaction between the organization and the cloud service provider using oblivious storage techniques. Thus, an organization can use our system to secure its documents as well as obscure its access pattern details from an untrusted cloud service provider.
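A minimal sketch of such an access broker, assuming rules derived from confidentiality policies; the attribute names and rules below are hypothetical:

    # Toy attribute-based access broker over multi-valued attributes.
    def broker_decision(user_attrs, doc_attrs, rules):
        """Grant access only if every applicable policy rule is satisfied."""
        for rule in rules:
            if not rule(user_attrs, doc_attrs):
                return "DENY"
        return "PERMIT"

    rules = [
        # Multi-valued attribute check: user must share at least one project with the doc.
        lambda u, d: bool(set(u["projects"]) & set(d["projects"])),
        lambda u, d: u["clearance"] >= d["sensitivity"],
    ]

    user = {"projects": {"apollo", "zeus"}, "clearance": 3}
    doc = {"projects": {"apollo"}, "sensitivity": 2}
    print(broker_decision(user, doc, rules))   # PERMIT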
Supercomputers, with their high processing capability and mass storage, are widely applied in various domains. As the number of supercomputing users grows, system security is receiving comprehensive attention and becoming ever more important. In this paper, guided by the characteristics of the supercomputing environment, we perform an in-depth analysis of the security problems that arise in the use of resources. To solve these problems, we propose a security analysis method and a prototype system for analyzing supercomputing users' behavior. The basic idea is to reconstruct complete user behavior paths and operation records based on the supercomputing business process and to track the use of resources. Finally, the method is evaluated, and the results show that it can help administrators detect security incidents in time and respond quickly. The ultimate purpose is to optimize and improve the security level of the whole system.
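A simplified sketch of path restoration, assuming per-user log records can be joined and time-ordered; the record fields are hypothetical:

    # Reconstruct each user's behavior path as a time-ordered action sequence.
    from itertools import groupby
    from operator import itemgetter

    records = [
        {"user": "alice", "ts": 3, "action": "submit_job"},
        {"user": "alice", "ts": 1, "action": "login"},
        {"user": "alice", "ts": 2, "action": "upload_data"},
        {"user": "bob",   "ts": 1, "action": "login"},
    ]

    records.sort(key=itemgetter("user", "ts"))
    paths = {u: [r["action"] for r in grp]
             for u, grp in groupby(records, key=itemgetter("user"))}
    print(paths["alice"])   # ['login', 'upload_data', 'submit_job']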
This publication presents techniques for addressing insider threats, along with cryptographic protocols for secure processes dedicated to the information management of strategic data splitting. Strategic data splitting serves enterprise management processes as well as methods of securely storing and managing this type of data. Because strategic data are usually not sufficiently secure or resistant to unauthorized leakage, we propose a new protocol that protects data in different management structures. The presented data splitting techniques concern cryptographic information splitting algorithms, as well as data sharing algorithms that make use of cognitive data analysis techniques. The insider threat techniques concern data reconstruction methods and cognitive data analysis techniques. Systems for semantic analysis and secure information management are used to conceal strategic information about the condition of the enterprise. The new approach, based on cognitive systems, guarantees security features and makes the management processes more efficient.
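The abstract does not name a specific splitting algorithm; one standard instance of cryptographic information splitting is an n-of-n XOR scheme, in which all shares are required to reconstruct the secret and any proper subset reveals nothing:

    # n-of-n XOR secret splitting: a standard technique, shown for illustration.
    import os

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def split(secret: bytes, n: int):
        """n-1 random shares plus a final share that XORs back to the secret."""
        shares = [os.urandom(len(secret)) for _ in range(n - 1)]
        last = secret
        for s in shares:
            last = xor(last, s)
        return shares + [last]

    def reconstruct(shares):
        out = shares[0]
        for s in shares[1:]:
            out = xor(out, s)
        return out

    shares = split(b"quarterly strategy", 3)
    assert reconstruct(shares) == b"quarterly strategy"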
Insider threats can cause immense damage to organizations of different types, including government, corporate, and non-profit organizations. Being an insider, however, does not necessarily equate to being a threat. Effectively identifying valid threats, and assessing the type of threat an insider presents, remain difficult challenges. In this work, we propose a novel breakdown of eight insider threat types, identified using three insider traits: predictability, susceptibility, and awareness. In addition to presenting this framework for insider threat types, we implement a computational model to demonstrate the viability of our framework with synthetic scenarios devised after reviewing real-world insider threat case studies. The results yield useful insights into how further investigation might proceed to reveal how best to gauge predictability, susceptibility, and awareness, and precisely how they relate to the eight insider types.
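The three binary traits yield 2^3 = 8 combinations, which is where the eight insider types come from. A toy enumeration (the high/low polarity labels are illustrative, not the paper's type names):

    # Enumerate the 8 insider types implied by three binary traits.
    from itertools import product

    traits = ["predictability", "susceptibility", "awareness"]
    for i, combo in enumerate(product([False, True], repeat=3), start=1):
        profile = {t: ("high" if v else "low") for t, v in zip(traits, combo)}
        print(f"Type {i}: {profile}")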
Organizations are experiencing an ever-growing concern over how to identify and defend against insider threats. Those who have authorized access to sensitive organizational data are placed in a position of power that could well be abused and could cause significant damage to an organization. This could range from financial theft and intellectual property theft to the destruction of property and business reputation. Traditional intrusion detection systems are neither designed for nor capable of identifying those who act maliciously within an organization. In this paper, we describe an automated system that is capable of detecting insider threats within an organization. We define a tree-structure profiling approach that incorporates the details of activities conducted by each user and each job role, and then use this to obtain a consistent representation of features that provide a rich description of the user's behavior. Deviation can be assessed based on the amount of variance that each user exhibits across multiple attributes, compared against their peers. We performed experiments using ten synthetic data-driven scenarios and found that the system can identify anomalous behavior that may be indicative of a potential threat. We also show how our detection system can be combined with visual analytics tools to support further investigation by an analyst.
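A minimal sketch of peer-based deviation scoring, assuming per-user feature vectors aggregated from activity logs within one job role; the data and feature names are synthetic:

    # Score each user by mean absolute z-score of their attributes vs. role peers.
    import statistics

    features = {               # e.g. daily activity counts per user in one job role
        "alice": {"logins": 5, "usb": 0, "emails": 40},
        "bob":   {"logins": 6, "usb": 1, "emails": 35},
        "carol": {"logins": 5, "usb": 9, "emails": 38},   # anomalous USB use
    }

    def deviation(user):
        score = 0.0
        for attr in features[user]:
            vals = [f[attr] for f in features.values()]
            mu, sd = statistics.mean(vals), statistics.pstdev(vals)
            score += abs(features[user][attr] - mu) / sd if sd else 0.0
        return score / len(features[user])

    for u in features:
        print(u, round(deviation(u), 2))   # carol stands out on the usb attribute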
While most organizations continue to invest in traditional network defenses, a formidable security challenge has been brewing within their own boundaries. Malicious insiders with privileged access, in the guise of a trusted source, have carried out many attacks causing far-reaching damage to financial stability, national security, and brand reputation for both public and private sector organizations. The growing exposure and impact of the whistleblower community, and concerns about job security amid changing organizational dynamics, have further aggravated this situation. The unpredictability of malicious attackers, as well as the complexity of malicious actions, necessitates the careful analysis of network, system, and user parameters correlated with the insider threat problem. Isolating suspicious users thus becomes a high-dimensional, heterogeneous data analysis problem. This research work proposes an insider threat detection framework that utilizes attributed graph clustering techniques and an outlier ranking mechanism for enterprise users. Empirical results confirm the effectiveness of the method, achieving a best area-under-curve value of 0.7648 for the receiver operating characteristic curve.
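A sketch of the evaluation step only: ranking users by an outlier score and computing the ROC AUC against ground-truth insider labels. The scores and labels below are synthetic, and the attributed-graph-clustering step is not reproduced:

    # Rank users for analyst triage, then evaluate with ROC AUC (assumes scikit-learn).
    from sklearn.metrics import roc_auc_score

    outlier_score = {"u1": 0.91, "u2": 0.12, "u3": 0.55, "u4": 0.08, "u5": 0.73}
    is_insider    = {"u1": 1,    "u2": 0,    "u3": 0,    "u4": 0,    "u5": 1}

    users = sorted(outlier_score, key=outlier_score.get, reverse=True)
    print("Ranked for analyst triage:", users)

    auc = roc_auc_score([is_insider[u] for u in users],
                        [outlier_score[u] for u in users])
    print(f"AUC = {auc:.4f}")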
Information security management is time-consuming and error-prone. Apart from day-to-day operations, organizations need to comply with industrial regulations or government directives. Thus, organizations are looking for security tools to automate security management tasks and daily operations. The Security Content Automation Protocol (SCAP) is a suite of specifications that helps automate security management tasks such as vulnerability measurement and policy compliance evaluation. A SCAP benchmark provides detailed guidance on setting the security configuration of network devices, operating systems, and applications, and organizations can use it to perform automated configuration compliance assessment on those systems. This paper discusses SCAP benchmark components and the development of a SCAP benchmark for automating Cisco router security configuration compliance.
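A simplified illustration of what an automated benchmark check does, applied to Cisco IOS running-config text. The two rules shown (require "service password-encryption", forbid "ip http server") are common hardening guidelines used here for illustration, not quoted from the benchmark:

    # Toy configuration-compliance check over running-config text.
    running_config = """
    hostname edge-router
    service password-encryption
    ip http server
    """

    rules = {
        "service password-encryption": True,    # line must be present
        "ip http server": False,                # line must be absent
    }

    lines = {l.strip() for l in running_config.splitlines()}
    for rule, must_exist in rules.items():
        ok = (rule in lines) == must_exist
        print(f"{'PASS' if ok else 'FAIL'}: {rule}")   # FAILs flag non-compliance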
The Industrial Internet promises to radically change and improve many industries' daily business activities, from simple data collection and processing to context-driven, intelligent, and pro-active support of workers' everyday tasks and life. The present paper first provides insight into a typical Industrial Internet application architecture, then highlights one fundamental contradiction that arises: “Who owns the data is often not capable of analyzing it”. This statement is explained by imagining a visionary data supply chain that would realize some of the Industrial Internet promises. To concretely implement such a system, recent standards published by The Open Group are presented, and we highlight the characteristics that make them suitable for Industrial Internet applications. Finally, we discuss comparable solutions and conclude with new business use cases.
Sharing cyber security data across organizational boundaries brings both privacy risks, in the exposure of personal information and data, and organizational risks, in the disclosure of internal information. These risks arise from information leaks in network traffic or logs, and also from queries made across organizations. They are further complicated by the privacy-utility trade-offs inherent in anonymization used to manage disclosure. In this paper, we define three principles that guide the sharing of security information across organizations: Least Disclosure, Qualitative Evaluation, and Forward Progress. We then discuss engineering approaches that apply these principles to a distributed security system. Applying these principles can reduce the risk of data exposure and help manage trust requirements for data sharing, helping to meet our goal of balancing privacy, organizational risk, and the ability to respond better to security incidents using shared information.
Recent years have seen the rise of sophisticated attacks, including advanced persistent threats (APTs), which pose severe risks to organizations and governments. Additionally, new malware strains appear at a higher rate than ever before. Since many of these malware strains evade existing security products, traditional defenses deployed by enterprises today often fail at detecting infections at an early stage. We address the problem of detecting early-stage APT infection by proposing a new framework based on belief propagation, inspired by graph theory. We demonstrate that our techniques perform well on two large datasets. We achieve high accuracy on two months of DNS logs released by Los Alamos National Lab (LANL), which include APT infection attacks simulated by LANL domain experts. We also apply our algorithms to 38 TB of web proxy logs collected at the border of a large enterprise and identify hundreds of malicious domains overlooked by state-of-the-art security products.
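A toy propagation in the spirit of belief propagation, not the paper's exact message passing: suspicion flows from seed-malicious domains to the hosts that contacted them, and on to other domains those hosts queried. The bipartite graph and damping factor are invented:

    # Simplified suspicion propagation over a host-domain bipartite graph.
    edges = [  # (host, queried_domain) pairs from DNS logs (synthetic)
        ("h1", "evil.example"), ("h1", "cdn.example"),
        ("h2", "evil.example"), ("h2", "rare.example"),
        ("h3", "cdn.example"),
    ]
    score = {"evil.example": 1.0}          # seed belief from threat intelligence

    for _ in range(3):                      # a few propagation rounds
        hosts = {}
        for h, d in edges:                  # domain -> host messages (damped by 0.5)
            hosts[h] = max(hosts.get(h, 0.0), 0.5 * score.get(d, 0.0))
        for h, d in edges:                  # host -> domain messages
            score[d] = max(score.get(d, 0.0), hosts.get(h, 0.0))

    print(sorted(score.items(), key=lambda kv: -kv[1]))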
Advanced Persistent Threats (APTs), unlike traditional hacking attempts, carry out specific attacks on a specific target to illegally collect information and data from it. These targeted attacks use specially crafted malware and infrequent activity to avoid detection, so that hackers can retain control over target systems unnoticed for long periods of time. To detect these stealthy activities, a large volume of traffic data generated over a period of time has to be analyzed. We propose a scalable solution, Ctracer, to detect stealthy command and control (C&C) channels in large volumes of traffic data. An APT uses multiple C&C channels and changes them frequently to avoid detection, but there are common signatures in those C&C sessions. By identifying common network signatures, Ctracer is able to group the C&C sessions, allowing us to detect an APT and all of the C&C sessions used in the attack. Ctracer was evaluated in a large enterprise for four months, during which twenty C&C servers and three APT attacks were reported. After investigation by the enterprise's Security Operations Center (SOC), the forensic report confirmed enterprise-targeted APT cases that had gone undiscovered for over 120 days.
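A sketch of the grouping idea: sessions that share a network signature are clustered even when their C&C destinations differ. The signature fields below (destination port and beaconing interval) are hypothetical stand-ins for the features Ctracer extracts:

    # Group sessions by a shared signature to surface C&C channel families.
    from collections import defaultdict

    sessions = [
        {"src": "10.0.0.5", "dst": "1.2.3.4", "port": 443, "beacon_s": 300},
        {"src": "10.0.0.5", "dst": "5.6.7.8", "port": 443, "beacon_s": 300},
        {"src": "10.0.0.9", "dst": "9.9.9.9", "port": 80,  "beacon_s": 60},
    ]

    groups = defaultdict(list)
    for s in sessions:
        signature = (s["port"], s["beacon_s"])    # shared C&C fingerprint
        groups[signature].append(s)

    for sig, members in groups.items():
        if len(members) > 1:                      # same signature, different C&C IPs
            print("Possible C&C channel family:", sig, [m["dst"] for m in members])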
Identity verification plays an important role in creating trust in the economic system. It can, and should, be done in a way that doesn't decrease individual privacy.
DeepQA is a large-scale natural language processing (NLP) question-and-answer system that responds across a breadth of structured and unstructured data, using hundreds of analytics combined with over 50 models trained through machine learning. After the historic 2011 milestone of defeating the two best human players in the Jeopardy! game show, the technology behind IBM Watson, DeepQA, is undergoing gamification into real-world business problems. Gamifying a business domain for Watson is a composite of functional, content, and training adaptation for nongame play. During domain gamification for medical, financial, government, or any other business, each system change affects the machine-learning process. As opposed to the original Watson Jeopardy!, whose class distribution of positive-to-negative labels is 1:100, in adaptation the computed training instances (question-and-answer pairs transformed into true-false labels) result in a very low positive-to-negative ratio of 1:100,000. Such extreme initial class imbalance during domain gamification poses a big challenge for the Watson machine-learning pipelines. The combination of ingested corpus sets, question-and-answer pairs, configuration settings, and NLP algorithms contributes toward the challenging data state. We propose several data engineering techniques, such as answer key vetting and expansion, source ingestion, oversampling classes, and question set modifications, to increase the computed true labels. In addition, algorithm engineering, such as an implementation of the Newton-Raphson logistic regression with a regularization term, relaxes the constraints of class imbalance during training adaptation. We conclude by empirically demonstrating that data and algorithm engineering are complementary and indispensable to overcome the challenges in this first Watson gamification for real-world business problems.
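The named algorithmic piece can be sketched compactly: Newton-Raphson iterations for logistic regression with an L2 regularization term, on synthetic data. This is a generic implementation of the technique, not IBM's pipeline code:

    # Newton-Raphson logistic regression with L2 (ridge) regularization.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = (X @ np.array([1.5, -2.0, 0.5]) + rng.normal(size=200) > 0).astype(float)

    lam = 1.0                                  # regularization strength
    w = np.zeros(X.shape[1])
    for _ in range(10):                        # Newton-Raphson iterations
        p = 1.0 / (1.0 + np.exp(-X @ w))       # sigmoid predictions
        grad = X.T @ (p - y) + lam * w         # gradient of penalized loss
        W = p * (1 - p)                        # Hessian diagonal weights
        H = X.T @ (X * W[:, None]) + lam * np.eye(X.shape[1])
        w -= np.linalg.solve(H, grad)          # Newton step

    print("weights:", np.round(w, 3))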
File encryption is an effective way for an enterprise to prevent its data from being lost. However, data may still be deliberately or inadvertently leaked by insiders or customers, and when sensitive data are leaked it often results in huge monetary damages and credit loss. In this paper, we propose a novel group file encryption/decryption method, named the Group File Encryption Method using Dynamic System Environment Key (GEMS for short), which provides users with automatic encryption, authentication, authorization, and auditing security schemes by utilizing a group key and a system environment key. In GEMS, the important parameters are hidden and stored in different devices to prevent them from being easily cracked. Besides, it resists known-key and eavesdropping attacks, achieving a very high security level that is practically useful in securing enterprise and government private data.
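A hedged sketch of the idea behind a system environment key: derive the file key from a shared group key combined with machine-specific values, so that ciphertext opens only in the expected environment. GEMS's actual construction is not given in the abstract; this uses only the Python standard library and a toy keystream cipher:

    # Derive an environment-bound file key from a group key plus machine identity.
    import hashlib, hmac, os, platform, uuid

    group_key = os.urandom(32)                             # provisioned per group
    env = f"{platform.node()}|{uuid.getnode()}".encode()   # environment fingerprint
    file_key = hmac.new(group_key, env, hashlib.sha256).digest()

    def xor_stream(key: bytes, data: bytes) -> bytes:
        """Toy keystream cipher for illustration; use AES-GCM in practice."""
        out, counter = bytearray(), 0
        while len(out) < len(data):
            out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return bytes(d ^ k for d, k in zip(data, out))

    ct = xor_stream(file_key, b"confidential payroll data")
    assert xor_stream(file_key, ct) == b"confidential payroll data"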
The paradigm shift from traditional BPM to subject-oriented BPM (S-BPM) is attributed to the identification of independently acting subjects, which can perform arbitrary actions on arbitrary objects. Abstract State Machines (ASMs) work on a similar basis. Exploring their capabilities for representing and executing S-BPM models strengthens the theoretical foundations of S-BPM and, thus, the validity of S-BPM tools. Moreover, it enables the coherent intertwining of business process modeling with the execution of S-BPM representations. In this contribution we introduce a framework and roadmap for exploring the ASM approach in the context of S-BPM. We also report the major result, namely the implementation of an executable workflow engine with an Abstract State Machine interpreter based on an existing abstract interpreter model for S-BPM (applying the ASM refinement concept). Developed within the Open-S-BPM initiative, this workflow engine serves as a baseline and reference implementation for further language and processing developments, such as simulation tools.
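A toy interpreter in the ASM spirit: each subject is a state machine whose rules fire when their guards (expected messages) hold. The two-subject process below is invented for illustration, not the Open-S-BPM reference model:

    # Minimal subject-oriented workflow interpreter with ASM-style rule firing.
    from collections import deque

    # (subject, state) -> (action, argument, next_state); actions: 'send' / 'recv'
    program = {
        ("customer", "start"):     ("send", ("vendor", "order"), "await_ack"),
        ("customer", "await_ack"): ("recv", "ack",               "done"),
        ("vendor",   "idle"):      ("recv", "order",             "confirm"),
        ("vendor",   "confirm"):   ("send", ("customer", "ack"), "idle"),
    }
    state = {"customer": "start", "vendor": "idle"}
    inbox = {"customer": deque(), "vendor": deque()}

    def step(subject):
        """One rule application: fire the transition only if its guard holds."""
        key = (subject, state[subject])
        if key not in program:
            return False
        action, arg, nxt = program[key]
        if action == "send":
            to, msg = arg
            inbox[to].append(msg)
        elif action == "recv":
            if not inbox[subject] or inbox[subject][0] != arg:
                return False                 # guard fails: expected message absent
            inbox[subject].popleft()
        state[subject] = nxt
        return True

    while any(step(s) for s in state):       # run until no rule can fire
        pass
    print(state)                             # {'customer': 'done', 'vendor': 'idle'}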