Bibliography
Several assessment techniques and methodologies exist to analyze the security of an application dynamically. However, they are either focused on a particular product or concerned mainly with the assessment process rather than with confidence in the product's security. Most crucially, they tend to assess the security of a target application as a standalone artifact, without assessing its host infrastructure. Such an approach can understate the overall security posture, since the infrastructure becomes crucial when it hosts a critical application. We present an ontology-based security model that aims to provide the necessary knowledge, including network settings, application configurations, testing techniques and tools, and security metrics, to evaluate the security aptitude of a critical application in the context of its hosting infrastructure. The objective is to integrate current good practices and standards in security testing and virtualization to furnish an on-demand, test-ready virtual target infrastructure on which to execute the critical application, and to initiate a context-aware, quantifiable security assessment process in an automated manner. Furthermore, we present a security assessment architecture to show how the ontology can be integrated into a standard process.
A wide variety of security software systems need to be integrated into a Security Orchestration Platform (SecOrP) to streamline the processes of defending against and responding to cybersecurity attacks. Lack of interpretability and interoperability among security systems are considered the key challenges to fully leverage the potential of the collective capabilities of different security systems. The processes of integrating security systems are repetitive, time-consuming and error-prone; these processes are carried out manually by human experts or using ad-hoc methods. To help automate security systems integration processes, we propose an Ontology-driven approach for Security OrchestrAtion Platform (OnSOAP). The developed solution enables interpretability, and interoperability among security systems, which may exist in operational silos. We demonstrate OnSOAP's support for automated integration of security systems to execute the incident response process with three security systems (Splunk, Limacharlie, and Snort) for a Distributed Denial of Service (DDoS) attack. The evaluation results show that OnSOAP enables SecOrP to interpret the input and output of different security systems, produce error-free integration details, and make security systems interoperable with each other to automate and accelerate an incident response process.
The smart grid is a complex cyber-physical system (CPS) that poses challenges related to scale, integration, interoperability, processes, governance, and human elements. The US National Institute of Standards and Technology (NIST) and its government, university and industry collaborators developed an approach, called the CPS Framework, for reasoning about CPS across multiple levels of concern and competency, including trustworthiness, privacy, reliability, and regulatory compliance. The approach uses ontology and reasoning techniques to achieve a greater understanding of the interdependencies among the elements of the CPS Framework model as applied to use cases. This paper demonstrates that the approach extends naturally to automated and manual decision-making for smart grids: we apply it to smart grid use cases and illustrate how it can be used to analyze grid topologies and address concerns about the smart grid. Smart grid stakeholders whose decision-making may be assisted by this approach include planners, designers and operators.
Techniques applied in response to detrimental digital incidents vary in many respects according to their attributes. Models of techniques exist in current research but are typically restricted to a subset determined by the discipline of the incident. An enormous collection of techniques is actually available for use, yet no single model represents all of them. There is currently no categorisation of digital forensics reactive techniques that classifies them according to the attribute of function, nor has there been an attempt to classify techniques in a manner that goes beyond a subset. In this paper, an ontology that depicts digital forensic reactive techniques classified by function is presented. The ontology contains additional information for each technique, useful for merging into a cognate system where the relationship between techniques and other facets of the digital investigative process can be defined. A number of existing techniques were collected and described according to their function, expressed as a verb. The function then guided the placement and classification of the techniques in the ontology according to the ontology development process. The ontology contributes to a knowledge base for digital forensics, useful as a resource for the various people operating in the field. The benefit of this is that the information can be queried, assumptions can be made explicit, and there is a one-stop shop for digital forensics reactive techniques, with their place in the investigation detailed.
The development of information systems dealing with education and the labour market, using web and grid service architectures, enables their modularity, expandability and interoperability. Applying ontologies to the web helps collect and select knowledge about a certain field in a generic way, enabling different applications to understand, use, reuse and share that knowledge among themselves. A necessary step before publishing computer-interpretable data on the public web is the implementation of common standards that ensure the exchange of information. The Croatian Qualification Framework (CROQF) is a project for the standardization of occupations for the labour market, as well as of sets of qualifications, skills and competences and their mutual relations. This paper analyses a substantial body of research from the last decade on the application of ontologies to information systems in education. The main goal is to compare the achieved results according to: 1) phases of development/classifications of education-related ontologies; 2) areas of education; and 3) standards and structures of metadata for educational systems. The collected information is used to provide insight into the building blocks of CROQF, both those well supported by experience and best practices and those that are not, together with guidelines for developing its own standards using ontological structures.
This paper presents PSO, an ontological framework and methodology for improving physical security and insider threat detection. PSO can facilitate forensic data analysis and proactively mitigate insider threats by leveraging rule-based anomaly detection, which in many cases can detect employee deviations from organizational security policies. In addition, PSO can be considered a security provenance solution because of its ability to fully reconstruct attack patterns. Provenance graphs can be further analyzed to identify deceptive actions and overcome analytical mistakes that can result in bad decision-making, such as false attribution. Moreover, the information can be used to enrich the available intelligence about intrusion attempts, forming use cases to detect and remediate limitations in the system, such as loosely coupled provenance graphs that in many cases indicate weaknesses in the physical security architecture. Ultimately, validation of the framework through use cases demonstrates that PSO can improve an organization's security posture in terms of physical security and insider threat detection.
The Semantic Web can be used to enable the interoperability of IoT devices and to annotate their functional and nonfunctional properties, including security and privacy. In this paper, we show how to use an ontology and JSON-LD to annotate the connectivity, security and privacy properties of IoT devices. Building on this, we present our prototype for a lightweight, secure application-level protocol wrapper that ensures communication consistency, secrecy and integrity for low-cost IoT devices such as the ESP8266 and the Particle Photon.
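As an illustration of the kind of JSON-LD annotation this abstract describes, the following sketch attaches connectivity, security and privacy properties to a device description. The vocabulary terms (`ex:`, `iotsec:`) and property names are invented placeholders, not the paper's actual ontology IRIs.

```python
import json

# Illustrative JSON-LD annotation of an IoT device's connectivity,
# security, and privacy properties. All vocabulary terms below are
# hypothetical examples, not the ontology used in the paper.
device_annotation = {
    "@context": {
        "ex": "http://example.org/iot#",
        "iotsec": "http://example.org/iot-security#",
        "name": "ex:name",
        "connectivity": "ex:connectivity",
        "encryption": "iotsec:encryption",
        "integrityCheck": "iotsec:integrityCheck",
        "storesPersonalData": "iotsec:storesPersonalData",
    },
    "@id": "ex:esp8266-node-01",
    "name": "ESP8266 sensor node",
    "connectivity": "WiFi-802.11n",
    "encryption": "AES-128-CBC",       # symmetric cipher for secrecy
    "integrityCheck": "HMAC-SHA256",   # message authentication for integrity
    "storesPersonalData": False,       # privacy-relevant flag
}

serialized = json.dumps(device_annotation, indent=2)
print(serialized)
```

A protocol wrapper could read such an annotation at pairing time and refuse to talk to devices whose declared security properties fall below a required level.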
The Semantic Web today is a web that allows for intelligent knowledge retrieval by means of semantically annotated tags. This web, also known as the intelligent web, aims to provide meaningful information to humans and machines alike. However, the information thus provided lacks the component of trust. We therefore propose a method to embed trust in Semantic Web documents through the concept of provenance, which answers who created or modified the documents, and when and where this happened. This paper demonstrates the method using the Manchester approach to provenance, implemented in a University Ontology.
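The who/when/where/what provenance record the abstract refers to can be pictured as a small set of triples attached to a document entity. This is a minimal sketch under invented names (`uni:Thesis42`, `uni:StaffAlice`, and the `prov:`-style predicates are hypothetical, not the paper's University Ontology).

```python
from datetime import datetime, timezone

# Minimal sketch: attach provenance (who, when, where, what) to a
# document entity as triples. A real system would emit RDF/OWL axioms;
# all identifiers here are made up for illustration.
def make_provenance(entity, agent, location, action):
    """Return provenance triples for an entity, stamped with UTC time."""
    timestamp = datetime.now(timezone.utc).isoformat()
    return [
        (entity, "prov:wasAttributedTo", agent),      # who
        (entity, "prov:generatedAtTime", timestamp),  # when
        (entity, "prov:atLocation", location),        # where
        (entity, "prov:activity", action),            # what was done
    ]

triples = make_provenance("uni:Thesis42", "uni:StaffAlice",
                          "uni:CSDepartment", "modified")
for s, p, o in triples:
    print(s, p, o)
```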
In this paper, we present a security and privacy enhancement (SPE) framework for unmodified mobile operating systems. SPE introduces a new layer between the application and the operating system and does not require that a device be jailbroken or run a custom operating system. We utilize an existing ontology designed for enforcing security and privacy policies on mobile devices to build a customizable policy. Based on this policy, SPE enhances the native controls that currently exist on the platform for privacy- and security-sensitive components, granting access to these components in a way that lets the framework verify the application is truthful in its declared intent and that the user's policy is enforced. In our evaluation, we verify the correctness of the framework and its computational impact on the device. Additionally, we discovered security and privacy issues in several open-source applications by utilizing the SPE framework. Our findings suggest that, if SPE were adopted by mobile operating system producers, it would provide consumers and businesses the additional privacy and security controls they demand and make users more aware of security and privacy issues in the applications on their devices.
In recent years, the usage of unmanned aircraft systems (UAS) for security-related purposes has increased, ranging from military applications to different areas of civil protection. The deployment of UAS can support security forces in achieving an enhanced situational awareness. However, in order to provide useful input to a situational picture, sensor data provided by UAS has to be integrated with information about the area and objects of interest from other sources. The aim of this study is to design a high-level data fusion component combining probabilistic information processing with logical and probabilistic reasoning, to support human operators in their situational awareness and to improve their capabilities for making efficient and effective decisions. To this end, a fusion component based on the ISR (Intelligence, Surveillance and Reconnaissance) Analytics Architecture (ISR-AA) [1] is presented, incorporating an object-oriented world model (OOWM) for information integration, an expressive knowledge model and a reasoning component for the detection of critical events. Approaches for translating the information contained in the OOWM into either an ontology for logical reasoning or a Markov logic network for probabilistic reasoning are presented.
Over the last decade, a globalization of the software industry took place, which facilitated the sharing and reuse of code across existing project boundaries. At the same time, such global reuse also introduces new challenges to the software engineering community, with not only components but also their problems and vulnerabilities now being shared. For example, vulnerabilities found in APIs no longer affect only individual projects but can spread across projects and even global software ecosystem borders. Tracing these vulnerabilities at a global scale becomes an inherently difficult task, since many of the existing resources required for such analysis still rely on proprietary knowledge representation. In this research, we introduce an ontology-based knowledge modeling approach that can eliminate such information silos. More specifically, we focus on linking security knowledge with other software knowledge to improve traceability and trust in software products (APIs). Our approach takes advantage of the Semantic Web and its reasoning services to trace and assess the impact of security vulnerabilities across project boundaries. We present a case study to illustrate the applicability and flexibility of our ontological modeling approach by tracing vulnerabilities across project and resource boundaries.
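Once dependency and vulnerability links are modeled in a shared knowledge graph, as this abstract proposes, assessing a vulnerability's cross-project impact reduces to a transitive reachability query. The sketch below shows only that core idea in plain Python (the paper itself uses Semantic Web reasoning); the project and library names are invented.

```python
from collections import deque

def affected_projects(dependents, vulnerable_api):
    """Return every project that transitively depends on vulnerable_api.

    dependents maps a component to the projects/libraries that directly
    use it; a breadth-first traversal computes the transitive closure.
    """
    affected, queue = set(), deque([vulnerable_api])
    while queue:
        node = queue.popleft()
        for dependent in dependents.get(node, ()):
            if dependent not in affected:
                affected.add(dependent)
                queue.append(dependent)
    return affected

# Hypothetical ecosystem: app-beta never imports libparse directly,
# yet a vulnerability in libparse still reaches it via lib-middle.
dependents = {
    "libparse": ["app-alpha", "lib-middle"],
    "lib-middle": ["app-beta"],
}
print(sorted(affected_projects(dependents, "libparse")))
```

An ontology-backed version would let the same query run across resource boundaries (issue trackers, vulnerability databases, build metadata) instead of a single in-memory map.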
Accurately expressing and matching each participant's security policy is the precondition for constructing a secure service composition. Most current schemes use syntactic approaches to represent and match security policies during service composition, which is prone to false negatives because it lacks semantics. In this paper, a novel semantics-based approach to expressing and matching security policies in service composition is proposed. By constructing a general security ontology, we present a definition method and a matching algorithm for semantic security policies in service composition, translating the policy-matching problem into a subsumption reasoning problem over semantic concepts. Both theoretical analysis and experimental evaluation show that the proposed approach captures the necessary semantic information in the representation of policies and effectively improves the accuracy of matching results, thus overcoming the deficiency of syntactic approaches; it also simplifies the definition and management of policies, thereby providing a more effective solution for building secure service compositions based on security policies.
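The reduction of policy matching to subsumption reasoning can be shown with a toy concept hierarchy: a provided security mechanism satisfies a requirement when its concept is equal to, or more specific than, the required concept. The hierarchy below is an invented illustration, not the paper's security ontology.

```python
# Toy concept hierarchy: child -> direct superclass.
# Invented for illustration; a real system would use an OWL ontology.
SUPERCLASS = {
    "AES128": "SymmetricEncryption",
    "AES256": "SymmetricEncryption",
    "SymmetricEncryption": "Encryption",
    "RSA": "AsymmetricEncryption",
    "AsymmetricEncryption": "Encryption",
}

def subsumed_by(concept, ancestor):
    """True if `ancestor` is `concept` itself or one of its superclasses."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = SUPERCLASS.get(concept)
    return False

def policy_match(required, provided):
    """A provided mechanism satisfies any requirement that subsumes it."""
    return subsumed_by(provided, required)

print(policy_match("Encryption", "AES256"))            # AES256 is an Encryption
print(policy_match("AsymmetricEncryption", "AES256"))  # AES256 is symmetric
```

A purely syntactic matcher would reject "AES256" against a policy asking for "Encryption" because the strings differ, which is exactly the false-negative case the semantic approach avoids.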
Recently, the leakage of personal and confidential information and the resulting economic damage caused by APT attacks have become a serious social problem, and a great deal of research has been done to address it. APT attacks combine traditional hacking techniques with sophisticated methods, such as exploiting zero-day vulnerabilities, in order to evade state-of-the-art detection and security techniques and to increase their success rate. In this paper, we design an ontology of APT attack behavior, covering the malicious actions that occur on the target system during an attack, and define inference rules over it so that malicious attack behavior can be inferred; on this basis, we propose a method for detecting intelligent APT attacks.
Although current Internet operations generate voluminous data, they remain largely oblivious of traffic data semantics. This poses many inefficiencies and challenges due to emergent or anomalous behavior impacting the vast array of Internet elements such as services and protocols. In this paper, we propose a Data Semantics Management System (DSMS) for learning Internet traffic data semantics to enable smarter semantics-driven networking operations. We extract networking semantics and build and utilize a dynamic ontology of network concepts to better recognize and act upon emergent or abnormal behavior. Our DSMS utilizes: (1) the Latent Dirichlet Allocation (LDA) algorithm for latent feature extraction and semantic reasoning; (2) big tables, a cloud-like data storage technique, to maintain large-scale data; and (3) the Locality Sensitive Hashing (LSH) algorithm for reducing data dimensionality. Our preliminary evaluation using real Internet traffic shows the efficacy of DSMS for learning the behavior of normal and abnormal traffic data and for accurately detecting anomalies at low cost.
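The dimensionality-reduction step can be sketched with random-hyperplane LSH: each high-dimensional traffic feature vector is mapped to a short bit signature, so similar flows land on similar signatures while anomalous flows land far away in Hamming distance. The feature vectors below are invented, and this is only one common LSH family, not necessarily the exact variant the paper uses.

```python
import random

# Random-hyperplane LSH sketch for traffic feature vectors.
# Feature values are invented; DIM and BITS are arbitrary choices.
random.seed(7)

DIM, BITS = 8, 16
hyperplanes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def lsh_signature(vector):
    """One bit per hyperplane: which side of it the vector falls on."""
    return tuple(int(sum(h * v for h, v in zip(plane, vector)) >= 0)
                 for plane in hyperplanes)

def hamming(a, b):
    """Number of differing bits between two signatures."""
    return sum(x != y for x, y in zip(a, b))

normal_flow  = [5, 1, 0, 2, 3, 0, 1, 4]
similar_flow = [5, 1, 0, 2, 3, 0, 1, 5]     # near-duplicate of normal_flow
anomalous    = [0, 90, 40, 0, 0, 70, 0, 0]  # very different direction

sig_n = lsh_signature(normal_flow)
print(hamming(sig_n, lsh_signature(similar_flow)))  # small distance
print(hamming(sig_n, lsh_signature(anomalous)))     # larger distance
```

Because signatures are only BITS bits long, they are far cheaper to store and compare than the raw feature vectors, which is the point of using LSH before large-scale anomaly detection.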
Dynamic firewalls with stateful inspection add many security features over traditional stateless static filters, but they also need to be adaptive. In this paper, we design a framework for dynamic firewalls based on a probabilistic ontology using Multi-Entity Bayesian Network (MEBN) logic. MEBN extends ordinary Bayesian networks to allow the representation of graphical models with repeated substructures and can express a probability distribution over models of any consistent first-order theory. The motivation for the proposed work is to prevent novel attacks, i.e., attacks for which no signatures have yet been generated. The proposed framework has two main parts: the data-flow architecture, which extracts important connection-based features with the primary goal of explicit rule inclusion into the firewall's rule base; and the knowledge-flow architecture, which uses a semantic threat graph as well as reasoning under uncertainty to provide an anticipatory threat-prevention technique for dynamic firewalls.
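The "reasoning under uncertainty" at the core of such a framework can be illustrated with an ordinary Bayesian update over connection-based features; MEBN generalizes this to first-order models with repeated substructures. The priors, likelihoods, feature names and threshold below are all invented for illustration.

```python
# Sketch of a Bayesian update over connection-based features.
# All probabilities are invented; a real system would learn them.
PRIOR_ATTACK = 0.05

# feature -> (P(observed | attack), P(observed | benign))
LIKELIHOODS = {
    "high_syn_rate": (0.70, 0.05),
    "rare_dst_port": (0.40, 0.10),
    "short_flow":    (0.60, 0.30),
}

def posterior_attack(observed_features):
    """Naive-Bayes posterior P(attack | observed connection features)."""
    p_attack, p_benign = PRIOR_ATTACK, 1.0 - PRIOR_ATTACK
    for feat in observed_features:
        like_attack, like_benign = LIKELIHOODS[feat]
        p_attack *= like_attack
        p_benign *= like_benign
    return p_attack / (p_attack + p_benign)

p = posterior_attack(["high_syn_rate", "rare_dst_port"])
print(round(p, 3))
# A dynamic firewall could insert an explicit blocking rule into its
# rule base once the posterior crosses a chosen threshold:
if p > 0.5:
    print("insert blocking rule")
```

This is how a signature-free framework can act on a connection it has never seen before: the decision follows from the posterior probability rather than from a matched signature.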