Bibliography
A method is proposed for assessing the degree to which the divisions of a complex, distributed corporate information system comply with a set of information security indicators. Applying the methodology yields a comparative assessment of how well each division meets the requirements of the corporate information security policy. This assessment can then support management decisions on measures to minimize the risks arising from the possible realization of information security threats.
In this research paper the author surveys the need for data protection from intelligent systems in the private and public sectors. She identifies Smart Information Security Intel processes as the suggested key policy for both public and private governance. Information is highly sensitive for any organization; where government offices are concerned, it must be abstracted and encapsulated so that no information theft can occur. To this end, new skill sets and optimized technology need to be deployed. The author proposes airport-style security using digital bar-coded conveyor belts and bar-coded conveyor boxes to scan switched-on articles such as Internet of Things devices; otherwise data, articles, or information can be stolen from operational sites where access is unauthorized, and such activity must be scrutinized minutely. Biometric updates such as fingerprint, iris, voice, and face recognition patterns must be recorded in the virtual data tables to keep the entry-exit log up to date. The information technicians of the sentinel systems must help catch anomalies during working hours in the private and public sectors whenever a red-flag indicator appears. The paper discusses in detail what should be deployed, how it should be deployed, and what measures are needed to prevent the theft of sensitive information from organizations such as administration buildings, government buildings, schools, hospitals, courts, private buildings, banks, and other offices nationwide. The proposed TO-BE processes are intended to make the AS-IS office system more information-secure, better data-protected, and stronger in personnel security.
This paper begins with an introduction to security metrics, describing the need for security metrics, followed by a discussion of the nature of security metrics, including the challenges found with some security metrics used in the past. The paper then discusses what makes a good security metric and proposes a rigorous step-by-step method that can be applied to design good security metrics, and to test existing security metrics to see if they are good metrics. Application examples are included to illustrate the method.
In enterprise environments, the number of managed assets and exploitable vulnerabilities is staggering. Hackers' lateral movements between such assets generate a complex big-data graph that contains potential hacking paths. In this vision paper, we enumerate risk-reduction security requirements in large-scale environments, then present the Agile Security methodology and technologies for detection, modeling, and constant prioritization of security requirements, agile style. Agile Security models different types of security requirements in the context of an attack graph containing business-process targets, critical-asset identification, configuration items, and the possible impacts of cyber-attacks. By simulating and analyzing virtual adversary attack paths toward cardinal assets, Agile Security examines the impact on business processes and prioritizes surgical requirements. Working through this requirements backlog, which is constantly re-evaluated as an outcome of employing Agile Security, gradually increases system hardening, reduces business risk, and informs the IT service desk or Security Operations Center which remediation action to perform next. Once a requirement is remediated, Agile Security recomputes the residual risk, weighing risk increases from threat intelligence or infrastructure changes against the defender's remediation actions, in order to drive overall attack-surface reduction.
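To make the attack-path prioritization idea concrete, the following is a minimal sketch under illustrative assumptions (hypothetical hosts, edges, and scoring; not the Agile Security implementation): simulated adversary paths toward a critical asset are enumerated on a small attack graph, and the edge traversed by the most paths is surfaced as the next remediation item in the backlog.

```python
# Minimal sketch of attack-path-based remediation prioritization.
# Graph structure, asset names, and the scoring rule are illustrative assumptions.
import networkx as nx

g = nx.DiGraph()
# Edges = possible lateral movements between configuration items.
g.add_edges_from([("internet", "web-srv"), ("web-srv", "app-srv"),
                  ("app-srv", "db-srv"), ("web-srv", "file-share"),
                  ("file-share", "db-srv")])
critical_assets = {"db-srv"}   # hypothetical business-process target

# Count how many simulated attack paths traverse each edge;
# the busiest edge becomes the highest-priority hardening requirement.
edge_load = {}
for asset in critical_assets:
    for path in nx.all_simple_paths(g, "internet", asset):
        for edge in zip(path, path[1:]):
            edge_load[edge] = edge_load.get(edge, 0) + 1

backlog = sorted(edge_load.items(), key=lambda kv: -kv[1])
print(backlog[0])   # e.g. (('internet', 'web-srv'), 2): remediate this first
```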
We address the need for security requirements to take into account risks arising from complex supply chains underpinning cyber-physical infrastructures such as industrial control systems (ICS). We present SEISMiC (SEcurity Industrial control SysteM supply Chains), a framework that takes into account the whole spectrum of security risks - from technical aspects through to human and organizational issues - across an ICS supply chain. We demonstrate the effectiveness of SEISMiC through a supply chain risk assessment of Natanz, Iran's nuclear facility that was the subject of the Stuxnet attack.
With the advent of the big data era, information systems have exhibited some new features, including boundary obfuscation, system virtualization, unstructured and diversified data types, and low coupling between functions and data. These features not only lead to a big difference between big data technology (DT) and information technology (IT), but also promote the upgrading and evolution of network security technology. In response to these changes, in this paper we compare the characteristics of the IT era and the DT era, and then propose four DT security principles: privacy, integrity, traceability, and controllability, as well as an active and dynamic defense strategy based on "propagation prediction, audit prediction, dynamic management and control". We further discuss the security challenges faced by DT and the corresponding assurance strategies. On this basis, big data security technologies can be divided into four levels: elimination, continuation, improvement, and innovation. These technologies are analyzed, organized, and explained according to six categories: access control, identification and authentication, data encryption, data privacy, intrusion prevention, and security audit and disaster recovery. The results will support the evolution of security technologies in the DT era, the construction of big data platforms, the design of security assurance strategies, and the choice of security technologies suitable for big data.
Nowadays, information technology is an important part of human life and of organizations. Organizations face IT-related problems, and to solve them they must improve their security functions; thus there is a need for security assessments within organizations to ensure adequate security conditions. The use of security standards and general metrics can be useful for measuring the safety of an organization; however, general metrics that apply to businesses at large may not be effective in a particular situation. It is therefore important to select metric standards suited to different businesses in order to improve both cost and organizational security. Selecting suitable security measures depends on having an efficient way to identify them. Given the numerous complexities of these metrics and the breadth of their definitions, this paper, based on a comparative study and the benchmarking method, proposes a taxonomy of security measures intended to help a business choose metrics tailored to its needs and conditions.
This paper demonstrates how the Insider Threat Cybersecurity Framework (ITCF) web tool and methodology help provide a more dynamic, defense-in-depth security posture against insider cyber and cyber-physical threats. ITCF includes over 30 cybersecurity best practices to help organizations identify, protect against, detect, respond to, and recover from sophisticated insider threats and vulnerabilities. The paper tests the efficacy of this approach and helps validate and verify ITCF's capabilities and features through various insider attack use cases. Two case studies were explored to determine how organizations can leverage ITCF to increase their overall security posture against insider attacks. The paper also highlights how ITCF facilitates implementation of the goals outlined in two Presidential Executive Orders to improve the security of classified information and help owners and operators secure critical infrastructure. In realization of these goals, ITCF: provides an easy-to-use rapid assessment tool to perform an insider threat self-assessment; determines the current insider threat cybersecurity posture; defines investment-based goals to achieve a target state; connects the cybersecurity posture with business processes, functions, and continuity; and, finally, helps develop plans to answer critical organizational cybersecurity questions. In this paper, the web tool and its core capabilities are tested by performing an extensive comparative assessment over two different high-profile insider threat incidents.
With an increase in targeted attacks such as advanced persistent threats (APTs), enterprise system defenders require comprehensive frameworks that allow them to collaborate and evaluate their defense systems against such attacks. MITRE has developed a framework which includes a database of different kill chains, tactics, techniques, and procedures that attackers employ to perform these attacks. In this work, we leverage natural language processing techniques to extract attacker actions from threat report documents generated by different organizations and automatically classify them into standardized tactics and techniques, while providing relevant mitigation advisories for each attack. A naïve method to achieve this is to train a machine learning model to predict labels that associate the reports with relevant categories. In practice, however, sufficient labeled data for model training is not always readily available, so training and test data come from different sources, resulting in bias. A naïve model would typically underperform in such a situation. We address this major challenge by incorporating an importance weighting scheme called bias correction that efficiently utilizes available labeled data, given threat reports whose categories are to be automatically predicted. We empirically evaluated our approach on 18,257 real-world threat reports generated between 2000 and 2018 by various computer security organizations to demonstrate its superiority by comparing its performance with an existing approach.
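The importance weighting idea summarized above is a standard covariate-shift correction. The minimal sketch below (illustrative data and labels, not the authors' implementation) shows one common way to realize it: train a domain classifier to separate labeled source reports from unlabeled target reports, weight each labeled report by the estimated density ratio, and fit the tactic classifier with those weights.

```python
# Hedged sketch of importance weighting ("bias correction") under covariate shift.
# Reports, tactic labels, and model choices are purely illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

source_reports = ["spearphishing attachment delivered macro dropper",
                  "credential dumping via lsass memory access"]
source_labels  = ["initial-access", "credential-access"]   # hypothetical tactic labels
target_reports = ["actors harvested credentials from domain controllers",
                  "phishing email with weaponized document observed"]

vec = TfidfVectorizer()
X_all = vec.fit_transform(source_reports + target_reports)
X_src, X_tgt = X_all[:len(source_reports)], X_all[len(source_reports):]

# 1) Domain classifier: distinguish source (0) from target (1) reports.
domain_y = np.r_[np.zeros(X_src.shape[0]), np.ones(X_tgt.shape[0])]
dom_clf = LogisticRegression().fit(X_all, domain_y)

# 2) Importance weights w(x) ~ P(target|x) / P(source|x) for each labeled report.
p_tgt = dom_clf.predict_proba(X_src)[:, 1]
weights = p_tgt / np.clip(1.0 - p_tgt, 1e-6, None)

# 3) Train the tactic classifier on weighted source data, then label target reports.
tactic_clf = LogisticRegression().fit(X_src, source_labels, sample_weight=weights)
print(tactic_clf.predict(X_tgt))
```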
Organizations are exposed to various cyber-attacks. When a component is exploited, the overall computed damage is affected by the number of components the network includes. This work focuses on estimating the Target Distribution characteristic of an attacked network. According to existing security assessment models, Target Distribution is assessed using ordinal values based on users' intuitive knowledge. This work aims to define a formula that enables quantitative measurement of the attacked components' distribution. The proposed formula is based on the real-time configuration of the system. Using the proposed measure, firms can quantify damages, allocate appropriate budgets to actual risks, and build their configuration while taking into consideration the risks arising from the components' distribution. The formula is demonstrated as part of a security continuous monitoring system.
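The paper's exact formula is not reproduced here. As a purely illustrative stand-in, the sketch below computes one simple quantitative proxy for Target Distribution from a real-time configuration snapshot: the fraction of systems that run the attacked component.

```python
# Illustrative proxy only: NOT the paper's formula.
# Target Distribution estimated as the share of systems exposing the attacked component.
def target_distribution(configuration, attacked_component):
    """configuration maps each system name to the set of components it runs."""
    if not configuration:
        return 0.0
    hits = sum(1 for components in configuration.values()
               if attacked_component in components)
    return hits / len(configuration)

# Example: real-time configuration snapshot of four systems (hypothetical names).
config = {"web-1": {"nginx", "openssl"}, "web-2": {"nginx"},
          "db-1": {"postgres", "openssl"}, "app-1": {"jvm"}}
print(target_distribution(config, "openssl"))   # 0.5 -> half the systems are exposed
```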
Malware has become sophisticated, and organizations don't have a Plan B when standard lines of defense fail. These failures have devastating consequences, such as sensitive information being exfiltrated. A promising avenue for improving the effectiveness of behavioral-based malware detectors is to combine fast (usually not highly accurate) traditional machine learning (ML) detectors with high-accuracy but time-consuming deep learning (DL) models. The main idea is to place software receiving borderline classifications from traditional ML methods in an environment where uncertainty is added while the software is analyzed by the time-consuming DL models. The goal of the uncertainty is to rate-limit the actions of potential malware during deep analysis. In this paper, we describe Chameleon, a Linux-based framework that implements this uncertain environment. Chameleon offers two environments for its OS processes: standard, for software identified as benign by traditional ML detectors, and uncertain, for software that received borderline classifications from ML methods. The uncertain environment introduces obstacles to software execution through random perturbations applied probabilistically to selected system calls. We evaluated Chameleon with 113 applications from common benchmarks and 100 malware samples for Linux. Our results show that at a 10% perturbation threshold, intrusive and non-intrusive strategies caused approximately 65% of malware samples to fail to accomplish their tasks, while approximately 30% of the analyzed benign software experienced some level of disruption (crashed or hampered). We also found that I/O-bound software was three times more affected by uncertainty than CPU-bound software.
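As a rough illustration of the perturbation policy described above (not the Chameleon code), the sketch below applies a randomly chosen strategy to selected system calls with probability equal to the threshold; the syscall list, strategies, and names are assumptions.

```python
# Toy sketch of an "uncertain environment" policy: perturb selected system calls
# with probability THRESHOLD while the sample is under deep (DL) analysis.
# All names and perturbation choices are illustrative assumptions.
import random
import time

PERTURBED_SYSCALLS = {"read", "write", "connect", "send"}   # assumed "selected" calls
THRESHOLD = 0.10                                            # perturbation probability

def handle_syscall(name, execute, *args):
    """Run a call normally, or apply a random perturbation with p = THRESHOLD."""
    if name in PERTURBED_SYSCALLS and random.random() < THRESHOLD:
        strategy = random.choice(["delay", "truncate", "error"])
        if strategy == "delay":                       # non-intrusive: slow the call down
            time.sleep(0.05)
            return execute(*args)
        if strategy == "truncate":                    # intrusive: return a partial result
            result = execute(*args)
            return result[: len(result) // 2] if hasattr(result, "__len__") else result
        raise OSError("simulated transient failure")  # intrusive: inject an error
    return execute(*args)

# Usage: wrap a real operation; perturbations may occasionally disrupt it by design.
try:
    data = handle_syscall("read", lambda: open("/etc/hostname", "rb").read())
    print(data)
except OSError as err:
    print("perturbed:", err)
```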
Nowadays, the application of integrated management systems (IMS) attracts the attention of top management in various organizations. However, an important problem remains: running security audits within an IMS and carrying out complex checks against different ISO standards at full scale while substantially reducing the resources required.
The current state of the internet relies heavily on SSL/TLS and the certificate authority model. This model has systematic problems, both in its design as well as its implementation. There are problems with certificate revocation, certificate authority governance, breaches, poor security practices, single points of failure and with root stores. This paper begins with a general introduction to SSL/TLS and a description of the role of certificates, certificate authorities and root stores in the current model. This paper will then explore problems with the current model and describe work being done to help mitigate these problems.
In this paper, we review big data characteristics and security challenges in the cloud and visit different cloud domains and security regulations. We propose using integrated auditing for secure data storage and transaction logs, real-time compliance and security monitoring, regulatory compliance, data environment, identity and access management, infrastructure auditing, availability, privacy, legality, cyber threats, and granular auditing to achieve big data security. We apply a stochastic process model to conduct security analyses in availability and mean time to security failure. Potential future works are also discussed.
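The availability and mean-time-to-security-failure analysis mentioned above can be illustrated with a small continuous-time Markov chain; the states and transition rates below are assumptions for demonstration, not the stochastic model used in the paper.

```python
# Illustrative mean-time-to-security-failure (MTTSF) computation on a small CTMC.
# States and rates are assumed values for demonstration only.
import numpy as np

# States: 0 = Good, 1 = Vulnerable, 2 = Compromised (absorbing "security failure").
# Generator matrix Q (events per unit time): exposure, patching, exploitation.
Q = np.array([
    [-0.20,  0.20, 0.00],   # Good -> Vulnerable at rate 0.2
    [ 0.50, -0.60, 0.10],   # Vulnerable -> Good (patched) 0.5, -> Compromised 0.1
    [ 0.00,  0.00, 0.00],   # Compromised is absorbing
])

# MTTSF from each transient state: solve Q_T * t = -1 over the transient sub-matrix.
Q_T = Q[:2, :2]
mttsf = np.linalg.solve(Q_T, -np.ones(2))
print(f"MTTSF starting from the Good state: {mttsf[0]:.2f} time units")
```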
Software has emerged as a significant part of many domains, including financial service platforms, social networks and vehicle control. Standards organizations have responded to this by creating regulations to address issues such as safety and privacy. In this context, compliance of software with standards has emerged as a key issue. For software development organizations, compliance is a complex and costly goal to achieve and is often accomplished by producing so-called assurance cases, which demonstrate that the system indeed satisfies the property imposed by a standard (e.g., safety, privacy, security). As systems and standards undergo evolution for a variety of reasons, maintaining assurance cases multiplies the effort. In this work, we propose to exploit the connection between the field of model management and the problem of compliance management and propose methods that use model management techniques to address compliance scenarios such as assurance case evolution and reuse. For validation, we ground our approaches on the automotive domain and the ISO 26262 standard for functional safety of road vehicles.
With the growing number of cyberattack incidents, organizations are required to have proactive knowledge of the cybersecurity landscape in order to defend their resources efficiently. To achieve this, organizations must develop a culture of sharing their threat information with others so that the associated risks can be assessed effectively. However, sharing cybersecurity information is costly for organizations because the information conveys sensitive and private data. Hence, deciding whether to share information is a challenging task and requires resolving the trade-off between sharing advantages and privacy exposure. On the other hand, cybersecurity information exchange (CYBEX) management is crucial in stabilizing the system through setting the correct values for participation fees and sharing incentives. In this work, we model the interaction of organizations, CYBEX, and attackers involved in a sharing system using a dynamic game. By devising appropriate payoff models for each player, we analyze the best strategies of the entities, incorporating the organizations' privacy component in the sharing model. Using best response analysis, the simulation results demonstrate the efficiency of our proposed framework.
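As a heavily simplified illustration of the payoff reasoning (the paper models a full dynamic game; the terms and values below are assumptions), an organization's best response can be read off by comparing the payoff of sharing against staying out, given CYBEX's participation fee and sharing incentive.

```python
# Toy one-shot payoff comparison for a sharing organization.
# Payoff terms and parameter values are illustrative assumptions, not the paper's model.
def org_payoff(share, benefit_from_others=0.0, incentive=0.0,
               privacy_cost=0.0, participation_fee=0.0):
    """Payoff = value gained from CYBEX minus privacy exposure and participation costs."""
    if not share:
        return 0.0                                   # stay out: no fee, no benefit
    return benefit_from_others + incentive - privacy_cost - participation_fee

# Best response given CYBEX's fee and incentive settings.
payoff_share = org_payoff(True, benefit_from_others=5.0, incentive=2.0,
                          privacy_cost=3.0, participation_fee=1.5)
best_response = "share" if payoff_share > org_payoff(False) else "do not share"
print(best_response, payoff_share)
```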
Web application security has become crucially important these days. Earlier, the "default allow" model was used to secure web applications, but it was unable to protect them against the plethora of attacks [1]. In contrast, the "default deny" model provides more restrictive security: it first builds a model of the particular application and then permits only those requests that conform to that model, ignoring everything else. In addition, a novel and effective methodology is followed that analyzes the validity of application requests and generates semi-structured XML cases for the web applications. Mature and resilient XML cases are then produced by employing learning techniques. The system is further evaluated by checking whether the XML file containing the cases conforms to the XML format. Moreover, the distinction between malicious and non-malicious traffic is made carefully. Results demonstrate the efficacy of rule generation using access traffic logs of cross-site scripting (XSS), SQL injection, HTTP request splitting, HTTP response splitting, and buffer overflow attacks.
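A minimal sketch of the default-deny idea, under assumed log and schema formats (not the paper's system): learn allowed request shapes from benign traffic, export them as XML cases, and reject any request that does not conform.

```python
# Hedged "default deny" illustration: field names, log format, and XML layout are assumptions.
import re
import xml.etree.ElementTree as ET

benign_log = [("GET", "/search", {"q": "books"}),
              ("POST", "/login", {"user": "alice", "pass": "x"})]

# Learn the model: allowed (method, path) pairs and the parameter names seen for each.
model = {}
for method, path, params in benign_log:
    model.setdefault((method, path), set()).update(params)

def allowed(method, path, params):
    """Permit only requests whose shape and parameter values match the learned model."""
    key = (method, path)
    return key in model and set(params) <= model[key] \
        and all(re.fullmatch(r"[\w @.\-]*", v) for v in params.values())

# Export the learned cases as a small XML document, in the spirit of the abstract.
root = ET.Element("cases")
for (method, path), names in model.items():
    case = ET.SubElement(root, "case", method=method, path=path)
    for n in sorted(names):
        ET.SubElement(case, "param", name=n)
print(ET.tostring(root, encoding="unicode"))

print(allowed("GET", "/search", {"q": "<script>alert(1)</script>"}))  # False: denied
```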
The safety, security, and resilience of international postal, shipping, and transportation critical infrastructure are vital to the global supply chain that enables worldwide commerce and communications. But security on an international scale continues to fail in the face of new threats, such as the discovery by Panamanian authorities of suspected components of a surface-to-air missile system aboard a North Korean-flagged ship in July 2013 [1]. This reality calls for new and innovative approaches to critical infrastructure security. Owners and operators of critical postal, shipping, and transportation operations need new methods to identify, assess, and mitigate security risks and gaps in the most effective manner possible.