Biblio
A wide variety of security software systems need to be integrated into a Security Orchestration Platform (SecOrP) to streamline the processes of defending against and responding to cybersecurity attacks. The lack of interpretability and interoperability among security systems is considered the key challenge to fully leveraging the potential of their collective capabilities. The processes of integrating security systems are repetitive, time-consuming, and error-prone, as they are carried out manually by human experts or with ad-hoc methods. To help automate security system integration processes, we propose an Ontology-driven approach for a Security OrchestrAtion Platform (OnSOAP). The developed solution enables interpretability and interoperability among security systems that may otherwise exist in operational silos. We demonstrate OnSOAP's support for automated integration of security systems to execute the incident response process with three security systems (Splunk, Limacharlie, and Snort) for a Distributed Denial of Service (DDoS) attack. The evaluation results show that OnSOAP enables a SecOrP to interpret the input and output of different security systems, produce error-free integration details, and make security systems interoperable with each other to automate and accelerate the incident response process.
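To illustrate the kind of interpretability gap such a platform must bridge, the following minimal Python sketch normalizes tool-specific alerts into a shared vocabulary. The field names and mappings here are hypothetical placeholders, not OnSOAP's actual ontology:

```python
# Hypothetical field mappings from tool-specific alert formats to a shared
# vocabulary; in an ontology-driven platform these would be derived from
# the ontology rather than hard-coded.
MAPPINGS = {
    "snort":  {"src_ip": "source_address", "sig_name": "attack_type"},
    "splunk": {"src": "source_address", "signature": "attack_type"},
}

def normalize(tool: str, alert: dict) -> dict:
    """Translate a tool-specific alert into the shared vocabulary so that
    downstream systems can interpret each other's output."""
    return {MAPPINGS[tool].get(key, key): value for key, value in alert.items()}

snort_alert = {"src_ip": "10.0.0.5", "sig_name": "DDoS SYN flood"}
print(normalize("snort", snort_alert))
# {'source_address': '10.0.0.5', 'attack_type': 'DDoS SYN flood'}
```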
Hash message authentication is a fundamental building block of many network security protocols such as SSL, TLS, FTPS, and HTTPS. The sponge-based SHA-3 hashing algorithm is the most recently developed hash function, the result of a NIST competition to find a new hashing standard after collision attacks against SHA-1 raised concerns about the long-term security of existing standards. We used Xilinx High-Level Synthesis to develop an optimized and pipelined version of the post-quantum-secure SHA-3 hash message authentication code (HMAC), which is capable of computing an HMAC every 280 clock cycles with an overall throughput of 604 Mbps. We cover the general security of sponge functions from both classical and quantum computing standpoints for hash functions, and offer a general architecture for HMAC computation when sponge functions are used.
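For reference, the HMAC construction over SHA-3 can be reproduced in a few lines of Python using the standard library. This is a host-side software sketch of the same computation, not the pipelined HLS hardware design described in the abstract:

```python
import hmac
import hashlib

# HMAC(K, m) = H((K' xor opad) || H((K' xor ipad) || m)), here with the
# sponge-based SHA3-256 as the underlying hash H. Python's hmac module
# performs the key padding and the two nested hash passes internally.
key = b"sixteen byte key"
message = b"authenticate this payload"

tag = hmac.new(key, message, hashlib.sha3_256).hexdigest()
print(tag)

# Verification of a received tag should use a constant-time comparison.
received = tag
assert hmac.compare_digest(tag, received)
```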
Techniques applied in response to detrimental digital incidents vary in many respects according to their attributes. Models of techniques exist in current research but are typically restricted to some subset with regard to the discipline of the incident, even though an enormous collection of techniques is actually available for use; no single model represents all of these techniques. There is currently no categorisation of digital forensics reactive techniques that classifies techniques according to the attribute of function, nor has there been an attempt to classify techniques in a manner that goes beyond a subset. In this paper, an ontology that depicts digital forensic reactive techniques classified by function is presented. The ontology contains additional information for each technique, useful for merging into a cognate system where the relationship between techniques and other facets of the digital investigative process can be defined. A number of existing techniques were collected and described according to their function - a verb. The function then guided the placement and classification of the techniques in the ontology according to the ontology development process. The ontology contributes to a knowledge base for digital forensics, useful as a resource for the various people operating in the field. The benefit of this is that the information can be queried, assumptions can be made explicit, and there is a one-stop shop for digital forensics reactive techniques with their place in the investigation detailed.
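A function-classified ontology of this kind can be sketched with RDF triples, for instance via Python's rdflib. The namespace, class names, and the example technique below are hypothetical illustrations of the structure, not the paper's actual ontology:

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/df-techniques#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Class hierarchy: techniques grouped under the verb naming their function.
g.add((EX.Technique, RDF.type, RDFS.Class))
g.add((EX.Acquire, RDFS.subClassOf, EX.Technique))
g.add((EX.DiskImaging, RDF.type, EX.Acquire))
g.add((EX.DiskImaging, RDFS.comment,
       Literal("Bit-for-bit copy of a storage device.")))

# Query: list all techniques whose function is 'Acquire'.
for technique in g.subjects(RDF.type, EX.Acquire):
    print(technique)
```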
Trusted Execution Environments (TEEs) provide hardware support to isolate the execution of sensitive operations on mobile phones for improved security. However, they are not always available for application developers to use. To provide a consistent user experience to users who do and do not have a TEE-enabled device, developers can turn to Open-TEE, an open-source GlobalPlatform (GP)-compliant software TEE emulator. However, Open-TEE does not offer any of the security properties that hardware TEEs provide. In this paper, we propose WhiteBox-TEE, which integrates white-box cryptography with Open-TEE to provide better security while remaining compliant with GP TEE specifications. We discuss the architecture, provisioning mechanism, implementation highlights, security properties, and performance issues of WhiteBox-TEE, and propose possible revisions to TEE specifications to make better use of white-box cryptography in software-only TEEs.
Fingerprinting malware by its behavioural signature has been an attractive approach to malware detection due to the homogeneity of dynamic execution patterns across different variants of the same family. Although previous research shows reasonably good performance in dynamic detection using machine learning techniques on a large training corpus, in many practical defence scenarios decisions must be made based on a scarce number of observable samples. This paper demonstrates the effectiveness of the generative adversarial autoencoder for dynamic malware detection in outbreak situations where, in most cases, a single sample is available for training the machine learning algorithm to detect similar samples in the wild.
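The adversarial autoencoder idea can be sketched compactly in PyTorch: an encoder/decoder pair is trained on behaviour vectors while a discriminator shapes the latent codes toward a Gaussian prior, and reconstruction error then serves as a similarity score. This is a simplified generic sketch with hypothetical dimensions and random stand-in data, not the paper's exact model:

```python
import torch
import torch.nn as nn

DIM, LATENT, BATCH = 256, 16, 32  # hypothetical feature/latent/batch sizes

enc = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, LATENT))
dec = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, DIM))
disc = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(), nn.Linear(32, 1))

opt_ae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

x = torch.randn(BATCH, DIM)  # stand-in for a batch of behaviour vectors

for _ in range(100):
    # 1) Reconstruction step: autoencoder learns the family's behaviour.
    loss_rec = ((dec(enc(x)) - x) ** 2).mean()
    opt_ae.zero_grad(); loss_rec.backward(); opt_ae.step()

    # 2) Discriminator: real = sample from the prior, fake = encoded sample.
    z_fake = enc(x).detach()
    z_real = torch.randn_like(z_fake)
    loss_d = (bce(disc(z_real), torch.ones(BATCH, 1))
              + bce(disc(z_fake), torch.zeros(BATCH, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 3) Adversarial step: encoder tries to fool the discriminator.
    loss_g = bce(disc(enc(x)), torch.ones(BATCH, 1))
    opt_ae.zero_grad(); loss_g.backward(); opt_ae.step()

# Scoring: a new sample that reconstructs well under the family-trained
# model is judged similar to the training family.
score = ((dec(enc(x)) - x) ** 2).mean(dim=1)
```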
Cloud Management Platforms (CMPs) have been developed in recent years to set up cloud computing architectures. Infrastructure-as-a-Service (IaaS) is a cloud delivery model in which the provider gathers a set of IT resources and furnishes them as services for user Virtual Machine Image (VMI) provisioning and management. OpenStack is one of the most widely used CMPs, developed for industrial and academic research to implement classical IaaS processes such as launching and storing user VMI instances. The main purpose of this paper is to adopt a security policy for securely launching user VMIs across a trusted cloud environment, founded on a combination of enhanced TPM remote attestation and cryptographic techniques, to ensure the confidentiality and integrity of user VMIs.
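The integrity side of such a policy reduces to comparing a fresh measurement of the image against a known-good value before launch. A minimal Python sketch of that check follows; the file name is hypothetical, and in a real deployment the expected value would come from a TPM-backed attestation quote rather than a stored constant:

```python
import hashlib

def measure_image(path: str, chunk: int = 1 << 20) -> str:
    """Return the SHA-256 measurement of a VMI file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Golden measurement recorded when the image was registered (illustrative).
expected = "..."  # in practice, obtained via TPM remote attestation

if measure_image("user.vmi") != expected:
    raise RuntimeError("VMI integrity check failed; refusing to launch")
```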
Nowadays, video streaming over HTTP is one of the most dominant Internet applications, using adaptive video techniques. Network-assisted approaches have been proposed and are being standardized in order to provide high QoE for the end-users of such applications. SAND is a recent MPEG standard in which DASH-Aware Network Elements (DANEs) are introduced for this purpose. As web-caches are one of the main components of the SAND architecture, the location and connectivity of these web-caches play an important role in the user's QoE. The nature of SAND and DANEs provides a good foundation for software-controlled virtualized DASH environments, and in this paper we propose a cache location algorithm and a cache migration algorithm for virtualized SAND deployments. The optimal locations for the virtualized DANEs are determined by an SDN controller, which migrates them based on gathered statistics. The performance of the resulting system shows that, when SDN and NFV technologies are leveraged, software-controlled virtualized approaches can provide an increase in QoE.
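A simple demand-weighted placement heuristic conveys the flavour of such a cache location decision. The sketch below (using networkx, with a toy topology and hypothetical request counts; not the paper's algorithm) picks the node minimizing total latency to clients, and re-running it as statistics change corresponds to a migration trigger:

```python
import networkx as nx

# Toy topology: nodes are switches, edge weights are link latencies (ms).
G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 2), ("b", "c", 5), ("b", "d", 3), ("c", "d", 1)])
clients = {"a": 40, "c": 25, "d": 10}  # hypothetical request counts per edge node

def best_cache_location(graph, demand):
    """Place the virtualized DANE at the node minimizing demand-weighted latency."""
    def cost(node):
        dist = nx.single_source_dijkstra_path_length(graph, node)
        return sum(dist[client] * reqs for client, reqs in demand.items())
    return min(graph.nodes, key=cost)

print(best_cache_location(G, clients))  # controller re-evaluates as stats change
```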
With the evolution of communication technology, remote monitoring has taken root in many applications. Rapid innovation in Internet of Things (IoT) technology has led to the development of embedded electronic devices capable of sensing remote locations and transferring data across the globe over the internet. Such devices transfer sensitive data, which is susceptible to attacks by intruders and network hackers. This paper studies existing security solutions and their limitations in the IoT environment and provides a pragmatic, lightweight security scheme over Transmission Control Protocol (TCP) networks for Remote Monitoring System devices on the internet. This scheme will aid Original Equipment Manufacturers (OEMs) in developing massive IoT products for remote monitoring. A real-time evaluation of the scheme is also presented.
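One common way to realize lightweight confidentiality and integrity over a raw TCP stream is an authenticated-encryption framing. The Python sketch below shows this pattern with AES-GCM from the `cryptography` package; the pre-shared key, frame layout, and helper names are illustrative assumptions, not the paper's scheme:

```python
import os
import socket
import struct
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hypothetical pre-shared 128-bit key provisioned by the OEM at manufacture.
KEY = bytes(16)  # placeholder; a real device would store this securely

def send_reading(sock: socket.socket, payload: bytes) -> None:
    """Frame one sensor reading as nonce || length || ciphertext over TCP."""
    nonce = os.urandom(12)  # 96-bit nonce; must never repeat under one key
    ct = AESGCM(KEY).encrypt(nonce, payload, None)  # encrypt + authenticate
    sock.sendall(nonce + struct.pack("!I", len(ct)) + ct)

def recv_reading(sock: socket.socket) -> bytes:
    """Reverse the framing; raises on tampering. (Short-read loops omitted.)"""
    nonce = sock.recv(12)
    (length,) = struct.unpack("!I", sock.recv(4))
    ct = sock.recv(length)
    return AESGCM(KEY).decrypt(nonce, ct, None)
```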
Today's rapid progress in the physical implementation of quantum computers demands scalable synthesis methods to map practical logic designs to quantum architectures. Many quantum algorithms evaluate classical functions on superpositions of states. Motivated by recent trends, in this paper we show the design of quantum circuits that perform modular exponentiation using two different approaches. In the design phase, we first generate a quantum circuit from a Verilog implementation of the exponentiation function using synthesis tools and then apply two different Quantum Error Correction (QECC) techniques. Finally, the circuit is further optimized using the Linear Nearest Neighbor (LNN) property. We demonstrate the effectiveness of our approach by generating a set of networks for the reversible modular exponentiation function for a set of input values. We conclude by summarizing the obtained results with a cost analysis of the developed approaches. Experimental results show that, depending on the choice of QECC method, the performance figures can vary by up to 11%, 10%, and 8% in T-count, number of qubits, and number of gates, respectively.
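For context, the classical function being synthesized is square-and-multiply modular exponentiation, shown below in Python. This is the textbook algorithm whose reversible-circuit form underlies, for example, the modular exponentiation step of Shor's algorithm (applied there to a superposition of exponents), not the paper's synthesized circuit itself:

```python
def mod_exp(base: int, exp: int, mod: int) -> int:
    """Square-and-multiply modular exponentiation: compute base**exp % mod
    one exponent bit at a time, which maps naturally to a cascade of
    controlled modular multipliers in a reversible circuit."""
    result = 1
    base %= mod
    while exp:
        if exp & 1:               # multiply step, controlled on exponent bit
            result = result * base % mod
        base = base * base % mod  # squaring step for the next bit
        exp >>= 1
    return result

assert mod_exp(7, 128, 15) == pow(7, 128, 15)
```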
The development of information systems for education and the labour market using web and grid service architectures enables their modularity, expandability, and interoperability. Applying ontologies to the web helps with collecting and selecting knowledge about a certain field in a generic way, thus enabling different applications to understand, use, reuse, and share that knowledge among themselves. A necessary step before publishing computer-interpretable data on the public web is the implementation of common standards that ensure the exchange of information. The Croatian Qualification Framework (CROQF) is a project to standardize occupations for the labour market, as well as sets of qualifications, skills, and competences and their mutual relations. This paper analyses a substantial body of research dealing with the application of ontologies to information systems in education during the last decade. The main goal is to compare the achieved results according to: 1) phases of development/classifications of education-related ontologies; 2) areas of education; and 3) standards and structures of metadata for educational systems. The collected information is used to provide insight into the building blocks of CROQF, both those well supported by experience and best practices and those that are not, together with guidelines for the development of its own standards using ontological structures.