Bibliography
With the evolution of communication technology, remote monitoring has taken root in many applications. Rapid innovation in Internet of Things (IoT) technology has led to the development of embedded electronic devices capable of sensing remote locations and transferring the data over the internet across the globe. Such devices transfer sensitive data that are susceptible to attacks by intruders and network hackers. This paper studies existing security solutions and their limitations in the IoT environment and provides a pragmatic lightweight security scheme over Transmission Control Protocol (TCP) networks for Remote Monitoring System devices on the internet. This security scheme will aid Original Equipment Manufacturers (OEMs) developing massive IoT products for remote monitoring. A real-time evaluation of the scheme is also presented.
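For illustration only, the sketch below shows one way a lightweight authentication layer for sensor readings sent over TCP could look; it is not the paper's scheme, and the pre-shared key `PSK` and the helpers `build_frame`/`verify_frame` are hypothetical names.

```python
# Illustrative sketch (not the paper's scheme): a lightweight message-
# authentication layer an IoT remote-monitoring device could apply to sensor
# readings before writing them to a TCP socket. Assumes a pre-shared key, a
# sequence number for replay protection, and a truncated HMAC-SHA256 tag.
import hashlib
import hmac
import json
import struct

PSK = b"pre-shared-device-key"  # hypothetical per-device key

def build_frame(seq: int, reading: dict) -> bytes:
    """Serialize a sensor reading with its sequence number and append a tag."""
    payload = struct.pack("!I", seq) + json.dumps(reading).encode()
    tag = hmac.new(PSK, payload, hashlib.sha256).digest()[:16]
    return payload + tag

def verify_frame(frame: bytes, last_seq: int):
    """Check the tag and reject replayed or stale sequence numbers."""
    payload, tag = frame[:-16], frame[-16:]
    expected = hmac.new(PSK, payload, hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    seq = struct.unpack("!I", payload[:4])[0]
    if seq <= last_seq:
        raise ValueError("replayed frame")
    return seq, json.loads(payload[4:])

frame = build_frame(42, {"temp_c": 21.5})
print(verify_frame(frame, last_seq=41))  # (42, {'temp_c': 21.5})
```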
Techniques applied in response to detrimental digital incidents vary in many respects according to their attributes. Models of such techniques exist in current research but are typically restricted to a subset tied to the discipline of the incident. An enormous collection of techniques is actually available for use, yet there is no single model representing them all. There is no current categorisation of digital forensic reactive techniques that classifies them according to the attribute of function, nor is there an attempt to classify techniques in a way that goes beyond a subset. In this paper, an ontology that depicts digital forensic reactive techniques classified by function is presented. The ontology itself contains additional information for each technique, useful for merging into a cognate system where the relationship between techniques and other facets of the digital investigative process can be defined. A number of existing techniques were collected and described according to their function - a verb. The function then guided the placement and classification of the techniques in the ontology according to the ontology development process. The ontology contributes to a knowledge base for digital forensics, useful as a resource for the various people operating in the field. The benefit of this is that the information can be queried, assumptions can be made explicit, and there is a one-stop shop for digital forensic reactive techniques with their place in the investigation detailed.
Several assessment techniques and methodologies exist to analyze the security of an application dynamically. However, they are either focused on a particular product or mainly concerned with the assessment process rather than the product's security confidence. Most crucially, they tend to assess the security of a target application as a standalone artifact without assessing its host infrastructure. Such attempts can undervalue the overall security posture, since the infrastructure becomes crucial when it hosts a critical application. We present an ontology-based security model that aims to provide the necessary knowledge, including network settings, application configurations, testing techniques and tools, and security metrics, to evaluate the security aptitude of a critical application in the context of its hosting infrastructure. The objective is to integrate current good practices and standards in security testing and virtualization to furnish an on-demand, test-ready virtual target infrastructure to execute the critical application and to initiate a context-aware and quantifiable security assessment process in an automated manner. Furthermore, we present a security assessment architecture to reflect on how the ontology can be integrated into a standard process.
A wide variety of security software systems need to be integrated into a Security Orchestration Platform (SecOrP) to streamline the processes of defending against and responding to cybersecurity attacks. Lack of interpretability and interoperability among security systems is considered the key challenge to fully leveraging the potential of the collective capabilities of different security systems. The processes of integrating security systems are repetitive, time-consuming and error-prone; these processes are carried out manually by human experts or using ad-hoc methods. To help automate security systems integration processes, we propose an Ontology-driven approach for Security OrchestrAtion Platform (OnSOAP). The developed solution enables interpretability and interoperability among security systems, which may exist in operational silos. We demonstrate OnSOAP's support for automated integration of security systems to execute the incident response process with three security systems (Splunk, LimaCharlie, and Snort) for a Distributed Denial of Service (DDoS) attack. The evaluation results show that OnSOAP enables SecOrP to interpret the input and output of different security systems, produce error-free integration details, and make security systems interoperable with each other to automate and accelerate an incident response process.
Nowadays, video streaming over HTTP is one of the most dominant Internet applications, using adaptive video techniques. Network-assisted approaches have been proposed and are being standardized in order to provide high QoE for the end-users of such applications. SAND is a recent MPEG standard in which DASH Aware Network Elements (DANEs) are introduced for this purpose. As web caches are one of the main components of the SAND architecture, the location and connectivity of these web caches play an important role in the user's QoE. The nature of SAND and DANE provides a good foundation for software-controlled virtualized DASH environments, and in this paper, we propose a cache location algorithm and a cache migration algorithm for virtualized SAND deployments. The optimal locations for the virtualized DANEs are determined by an SDN controller, which migrates them based on gathered statistics. The performance of the resulting system shows that, when SDN and NFV technologies are leveraged in such systems, software-controlled virtualized approaches can provide an increase in QoE.
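As a rough illustration of what such an SDN-driven placement decision can involve (this is not the authors' algorithm; node names, latencies and request rates are hypothetical), a controller could pick the candidate node with the lowest demand-weighted latency and re-run the computation on fresh statistics to choose a migration target:

```python
# Illustrative sketch (not the paper's algorithm): a simple 1-median style
# placement an SDN controller might use to pick the node hosting a virtualized
# DANE cache, given measured client-to-node latencies and per-client request
# rates. Re-running it on updated statistics suggests a migration target.

def place_cache(latency_ms, request_rate):
    """latency_ms[node][client] -> measured latency; request_rate[client] -> load.
    Returns the candidate node with the lowest demand-weighted latency."""
    best_node, best_cost = None, float("inf")
    for node, lat in latency_ms.items():
        cost = sum(lat[c] * request_rate[c] for c in request_rate)
        if cost < best_cost:
            best_node, best_cost = node, cost
    return best_node, best_cost

latency_ms = {
    "edge-1": {"c1": 5, "c2": 30, "c3": 25},
    "edge-2": {"c1": 20, "c2": 8, "c3": 10},
}
request_rate = {"c1": 2.0, "c2": 5.0, "c3": 4.0}
print(place_cache(latency_ms, request_rate))  # ('edge-2', 120.0)
```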
Motivated by the transformative impact of deep neural networks (DNNs) on different areas (e.g., image and speech recognition), researchers and anti-virus vendors are proposing end-to-end DNNs for malware detection from raw bytes that do not require manual feature engineering. Given the security sensitivity of the task that these DNNs aim to solve, it is important to assess their susceptibility to evasion.
In this work, we propose an attack that guides binary-diversification tools via optimization to mislead DNNs for malware detection while preserving the functionality of binaries. Unlike previous attacks on such DNNs, ours manipulates instructions that are a functional part of the binary, which makes it particularly challenging to defend against. We evaluated our attack against three DNNs in white-box and black-box settings, and found that it can often achieve success rates near 100%. Moreover, we found that our attack can fool some commercial anti-viruses, in certain cases with a success rate of 85%. We explored several defenses, both new and old, and identified some that can successfully prevent over 80% of our evasion attempts. However, these defenses may still be susceptible to evasion by adaptive attackers, and so we advocate for augmenting malware-detection systems with methods that do not rely on machine learning.
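The sketch below is only a generic, toy version of this class of attack, not the authors' method: it greedily applies candidate rewrites and keeps those that lower a detector's score. The functions `score` and `candidate_rewrites` are stand-ins for a real malware-detection DNN and real functionality-preserving binary-diversification transformations.

```python
# Illustrative sketch (not the authors' attack): a black-box loop that keeps a
# candidate rewrite of a binary only if it lowers the detector's maliciousness
# score, stopping once the score drops below a detection threshold.
import random

def score(binary: bytes) -> float:
    """Stand-in for a malware-detection DNN: higher means 'more malicious'."""
    return sum(binary) / (255.0 * len(binary))

def candidate_rewrites(binary: bytes):
    """Stand-in for semantics-preserving transformations (e.g. equivalent
    instruction substitution); as a toy, flip the low nibble of one byte."""
    for i in range(len(binary)):
        yield binary[:i] + bytes([binary[i] ^ 0x0F]) + binary[i + 1:]

def evade(binary: bytes, threshold: float = 0.5, max_iters: int = 50) -> bytes:
    current, best = binary, score(binary)
    for _ in range(max_iters):
        if best < threshold:
            break
        improved = False
        for cand in candidate_rewrites(current):
            s = score(cand)
            if s < best:                 # keep the first improving rewrite
                current, best, improved = cand, s, True
                break
        if not improved:
            break
    return current

random.seed(0)
sample = bytes(random.randrange(256) for _ in range(64))
print(score(sample), score(evade(sample)))  # score decreases after rewriting
```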
Today's rapid progress in the physical implementation of quantum computers demands scalable synthesis methods to map practical logic designs to quantum architectures. Many quantum algorithms use classical functions on superpositions of states. Motivated by recent trends, in this paper we show the design of quantum circuits to perform modular exponentiation using two different approaches. In the design phase, we first generate a quantum circuit from a Verilog implementation of the exponentiation function using synthesis tools and then apply two different Quantum Error Correction techniques. Finally, the circuit is further optimized using the Linear Nearest Neighbor (LNN) property. We demonstrate the effectiveness of our approach by generating a set of networks for the reversible modular exponentiation function for a set of input values. At the end of the work, we summarize the obtained results with a cost analysis of our developed approaches. Experimental results show that, depending on the choice of QECC method, the performance figures can vary by up to 11%, 10%, and 8% in T-count, number of qubits, and number of gates, respectively.
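For reference, the classical function being synthesized here is modular exponentiation; a minimal square-and-multiply routine (the classical behaviour a Verilog-to-reversible flow would reproduce, not the synthesized circuit itself) looks like this:

```python
# Classical reference for modular exponentiation a**x mod n. The bit-by-bit
# decomposition mirrors how the function is typically built from controlled
# modular multipliers in a reversible/quantum realization.

def mod_exp(a: int, x: int, n: int) -> int:
    result = 1
    base = a % n
    while x > 0:
        if x & 1:                          # exponent bit set ->
            result = (result * base) % n   # controlled modular multiplication
        base = (base * base) % n           # modular squaring for the next bit
        x >>= 1
    return result

assert mod_exp(7, 13, 15) == pow(7, 13, 15)
print(mod_exp(7, 13, 15))  # 7
```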
We present Ouroboros Crypsinous, the first formally analyzed privacy-preserving proof-of-stake blockchain protocol. To model its security we give a thorough treatment of private ledgers in the (G)UC setting, which might be of independent interest. To prove our protocol secure against adaptive attacks, we introduce a new coin evolution technique relying on SNARKs and key-private forward-secure encryption. The latter primitive, and the associated construction, can be of independent interest. We stress that existing approaches to private blockchains, such as the proof-of-work-based Zerocash, are analyzed only against static corruptions.
A study done by researchers at Ohio University calls for the improvement of information literacy as it was found that most people do not take time to verify whether information is accurate or not before sharing it on social media. The study uses information literacy factors and a theoretical lens to help develop an understanding of why people share "fake news" on social media.
Researchers at the NYU Tandon School of Engineering developed a technique to prevent sophisticated methods of altering photos and videos to produce deep fakes, which are often weaponized to influence people. The technique developed by researchers involves the use of artificial intelligence (AI) to determine the authenticity of images and videos.
Cloud Management Platforms (CMPs) have been developed in recent years to set up cloud computing architectures. Infrastructure-as-a-Service (IaaS) is a cloud delivery model designed by the provider to gather a set of IT resources that are furnished as services for user Virtual Machine Image (VMI) provisioning and management. OpenStack is one of the most widely used CMPs and has been developed for industry and academic research to implement classical IaaS processes such as launching and storing user VMI instances. The main purpose of this paper is to adopt a security policy for securely launching user VMIs across a trusted cloud environment, founded on a combination of enhanced TPM remote attestation and cryptographic techniques to ensure the confidentiality and integrity of user VMIs.
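To make the "attest before launch" idea concrete, the following is a heavily simplified sketch, not the paper's protocol: a real TPM quote is signed with an attestation key over PCR values, whereas here an HMAC over a measured VMI hash stands in for it, and `ATTESTATION_KEY`, `quote` and `allow_launch` are hypothetical names.

```python
# Illustrative sketch: the controller compares a measured hash of the user VMI
# against a known-good reference and checks an authenticity tag over the
# measurement before allowing the launch. HMAC replaces the TPM's signed quote
# only to keep the example self-contained.
import hashlib
import hmac

ATTESTATION_KEY = b"hypothetical-shared-attestation-key"
KNOWN_GOOD_VMI_HASH = hashlib.sha256(b"trusted VMI contents").hexdigest()

def quote(vmi_image: bytes):
    """Host side: measure the VMI and authenticate the measurement."""
    measurement = hashlib.sha256(vmi_image).hexdigest()
    tag = hmac.new(ATTESTATION_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return measurement, tag

def allow_launch(measurement: str, tag: str) -> bool:
    """Verifier side: accept only a valid tag over a known-good measurement."""
    expected = hmac.new(ATTESTATION_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected) and measurement == KNOWN_GOOD_VMI_HASH

m, t = quote(b"trusted VMI contents")
print(allow_launch(m, t))                       # True
print(allow_launch(*quote(b"tampered image")))  # False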
We present Copland, a language for specifying layered attestations. Layered attestations provide a remote appraiser with structured evidence of the integrity of a target system to support a trust decision. The language is designed to bridge the gap between formal analysis of attestation security guarantees and concrete implementations. We therefore provide two semantic interpretations of terms in our language. The first is a denotational semantics in terms of partially ordered sets of events. This directly connects Copland to prior work on layered attestation. The second is an operational semantics detailing how the data and control flow are executed. This gives explicit implementation guidance for attestation frameworks. We show a formal connection between the two semantics ensuring that any execution according to the operational semantics is consistent with the denotational event semantics. This ensures that formal guarantees resulting from analyzing the event semantics will hold for executions respecting the operational semantics. All results have been formally verified with the Coq proof assistant.
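The consistency property proved here can be pictured with a toy model (this is not Copland and the event names are hypothetical): the denotational semantics yields a partial order over events, an operational execution is a concrete sequence of those events, and the execution is consistent when it is a linear extension of that partial order.

```python
# Illustrative sketch: check that a concrete execution (a total order of events)
# respects a partially ordered event set, i.e. no event runs before one it
# depends on.

def consistent(execution, partial_order):
    """execution: list of event names in run order.
    partial_order: set of (before, after) pairs required by the semantics."""
    position = {event: i for i, event in enumerate(execution)}
    return all(position[a] < position[b] for a, b in partial_order)

# Hypothetical attestation events: measure the target, then sign, then report.
order = {("measure", "sign"), ("sign", "report")}
print(consistent(["measure", "sign", "report"], order))  # True
print(consistent(["sign", "measure", "report"], order))  # False
```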
Fingerprinting malware by its behavioural signature has been an attractive approach for malware detection due to the homogeneity of dynamic execution patterns across different variants of similar families. Although previous research shows reasonably good performance in dynamic detection using machine learning techniques trained on a large corpus, in many practical defence scenarios decisions must be made based on a scarce number of observable samples. This paper demonstrates the effectiveness of a generative adversarial autoencoder for dynamic malware detection under outbreak situations where, in most cases, a single sample is available for training the machine learning algorithm to detect similar samples that are in the wild.
Trusted Execution Environments (TEEs) provide hardware support to isolate the execution of sensitive operations on mobile phones for improved security. However, they are not always available for application developers to use. To provide a consistent user experience to users who do and do not have a TEE-enabled device, we can get help from Open-TEE, an open-source GlobalPlatform (GP)-compliant software TEE emulator. However, Open-TEE does not offer any of the security properties that hardware TEEs have. In this paper, we propose WhiteBox-TEE, which integrates white-box cryptography with Open-TEE to provide better security while still remaining compliant with GP TEE specifications. We discuss the architecture, provisioning mechanism, implementation highlights, security properties and performance issues of WhiteBox-TEE and propose possible revisions to TEE specifications to make better use of white-box cryptography in software-only TEEs.
The globalization of the supply chain makes semiconductor chips susceptible to various security threats. Design obfuscation techniques have been widely investigated to thwart intellectual property (IP) piracy attacks. Key distribution among IP providers, the system integration team, and end users remains a challenging problem. This work proposes an orthogonal obfuscation method, which utilizes an orthogonal matrix to authenticate obfuscation keys rather than directly examining each activation key. The proposed method hides the keys by using an orthogonal obfuscation algorithm to increase the key retrieval time, such that the primary keys for IP cores will not be leaked. The simulation results show that the proposed method reduces the key retrieval time by 36.3% over the baseline. The proposed obfuscation method has been successfully applied to ISCAS'89 benchmark circuits. Experimental results indicate that the orthogonal obfuscation only increases the area by 3.4% and consumes 4.7% more power than the baseline.
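The sketch below conveys the flavour of key authentication through an orthogonal matrix, software-only and hypothetical rather than the paper's hardware scheme: the checker stores an orthogonal matrix Q and the transformed reference Q @ k_ref, and accepts a candidate key only if its transform matches, instead of comparing the activation key bit by bit.

```python
# Illustrative sketch (not the paper's hardware scheme): indirect key
# authentication via an orthogonal matrix. Matrix, sizes and keys are all
# hypothetical.
import numpy as np

# A small orthogonal matrix (a signed permutation), so Q.T @ Q == I.
Q = np.array([[0, 1, 0, 0],
              [-1, 0, 0, 0],
              [0, 0, 0, -1],
              [0, 0, 1, 0]], dtype=float)
assert np.allclose(Q.T @ Q, np.eye(4))

k_ref = np.array([1, 0, 1, 1], dtype=float)  # hypothetical activation key
reference = Q @ k_ref                        # stored instead of k_ref itself

def authenticate(candidate: np.ndarray) -> bool:
    """Accept the candidate key only if its orthogonal transform matches."""
    return np.allclose(Q @ candidate, reference)

print(authenticate(np.array([1, 0, 1, 1], dtype=float)))  # True
print(authenticate(np.array([1, 1, 1, 1], dtype=float)))  # False
```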
Jamming attacks on wireless communication networks can provoke denial of service. Depending on the communication system affected, the consequences can be more or less critical. In this paper, we propose to develop an algorithm that could be implemented at the reception stage of a communication terminal in order to detect the presence of jamming signals. The work is performed on Wi-Fi communication signals and demonstrates the necessity of specific signal processing at the reception stage to be able to detect the presence of jamming signals.
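As a simple point of reference (not the paper's detector), a receiver-side energy-detection test compares windowed received power against a threshold calibrated on the noise floor; the thresholds and signal model below are hypothetical.

```python
# Illustrative sketch: flag a possible jammer when any window's average power
# exceeds a multiple of the calibrated noise power.
import numpy as np

def jamming_suspected(samples: np.ndarray, noise_power: float,
                      window: int = 256, factor: float = 10.0) -> bool:
    """Return True if any window's average power exceeds factor * noise_power."""
    n = (len(samples) // window) * window
    windows = samples[:n].reshape(-1, window)
    power = np.mean(np.abs(windows) ** 2, axis=1)
    return bool(np.any(power > factor * noise_power))

rng = np.random.default_rng(0)
noise = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) / np.sqrt(2)
jammer = noise + 5.0 * np.exp(2j * np.pi * 0.1 * np.arange(4096))  # strong tone

print(jamming_suspected(noise, noise_power=1.0))   # False
print(jamming_suspected(jammer, noise_power=1.0))  # True
```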
The development of information systems dealing with education and the labour market using web and grid service architectures enables their modularity, expandability and interoperability. Applying ontologies to the web helps with collecting and selecting knowledge about a certain field in a generic way, thus enabling different applications to understand, use, reuse and share that knowledge among them. A necessary step before publishing computer-interpretable data on the public web is the implementation of common standards that will ensure the exchange of information. The Croatian Qualifications Framework (CROQF) is a project for the standardization of occupations for the labour market, as well as of sets of qualifications, skills and competences and their mutual relations. This paper analyses a respectable amount of research dealing with the application of ontologies to information systems in education during the last decade. The main goal is to compare the achieved results according to: 1) phases of development/classifications of education-related ontologies; 2) areas of education; and 3) standards and structures of metadata for educational systems. The collected information is used to provide insight into the building blocks of CROQF, both those well supported by experience and best practices and those that are not, together with guidelines for the development of its own standards using ontological structures.