Biblio
Network reconnaissance aims at gathering as much information as possible before an attack is launched, and static host address configuration makes such reconnaissance easier. More sophisticated network reconnaissance with adaptive and cooperative features has recently emerged. To address this, we present Hiding and Trapping (HaT), a deceptive approach to disrupting adversarial network reconnaissance with the help of the software-defined networking (SDN) paradigm. HaT is able to hide valuable hosts from attackers and to trap them at decoy nodes through strategic and holistic host address mutation tailored to the characteristics of the adversary. We implement a prototype of HaT and evaluate its performance experimentally. The results show that HaT can effectively disrupt adversarial network reconnaissance, with better deceptive performance than existing address randomization approaches.
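A minimal Python sketch of the mutation-plus-decoy mechanism the abstract describes; the hosts, address pool, and mutation interval below are invented for illustration, and a real deployment would have the SDN controller install the corresponding flow rewrite rules:

# Minimal sketch of address mutation with decoy binding; all addresses
# and the interval are illustrative, not taken from the HaT paper.
import random
import time

REAL_HOSTS = ["10.0.0.11", "10.0.0.12"]           # valuable hosts to hide
DECOYS = ["10.0.0.101", "10.0.0.102"]             # trap nodes
POOL = ["10.0.1.%d" % i for i in range(2, 254)]   # virtual address pool

def mutate():
    """Pick fresh virtual IPs for real hosts and bind other pool
    addresses to decoys; an SDN controller would install the
    corresponding rewrite rules at this point."""
    picks = random.sample(POOL, len(REAL_HOSTS) + len(DECOYS))
    real = dict(zip(REAL_HOSTS, picks[:len(REAL_HOSTS)]))
    traps = dict(zip(DECOYS, picks[len(REAL_HOSTS):]))
    return real, traps

for _ in range(3):                                # three mutation rounds
    real, traps = mutate()
    print("real:", real, "traps:", traps)
    time.sleep(1)                                 # mutation interval (demo)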
The NEREIDA wave generation power plant installed in Mutriku, Spain, is a multiple Oscillating Water Column (OWC) plant. The power take-off consists of a Wells turbine coupled to a Doubly Fed Induction Generator (DFIG). The stalling behavior of the Wells turbine limits the generated power. This paper presents the modeling and a Harmony Search Algorithm-based airflow control of the OWC. The Harmony Search Algorithm (HSA) is proposed to help overcome the limitations of a traditionally tuned PID controller. A comparison between the HSA-tuned controller and the traditionally tuned controller has been performed. Results for the controlled and uncontrolled plant prove the effectiveness of the airflow control and the superiority of the HSA-tuned controller.
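A toy Python sketch of Harmony Search tuning PID gains; the cost function is a stand-in (a real use would simulate the OWC/Wells-turbine loop and integrate tracking error), and the HMS/HMCR/PAR values are illustrative defaults, not the paper's:

# Toy harmony-search tuning of PID gains (Kp, Ki, Kd); the objective
# and all parameters are illustrative, not the Mutriku plant model.
import random

BOUNDS = [(0, 10), (0, 5), (0, 2)]     # gain ranges for Kp, Ki, Kd
HMS, HMCR, PAR, ITERS = 10, 0.9, 0.3, 500

def cost(g):
    # Stand-in objective with a pretend optimum at (4, 1, 0.5); a real
    # use would simulate the closed loop and integrate the error.
    return sum((x - t) ** 2 for x, t in zip(g, (4.0, 1.0, 0.5)))

hm = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(HMS)]
for _ in range(ITERS):
    new = []
    for j, (lo, hi) in enumerate(BOUNDS):
        if random.random() < HMCR:                 # memory consideration
            x = random.choice(hm)[j]
            if random.random() < PAR:              # pitch adjustment
                x = min(hi, max(lo, x + random.uniform(-0.1, 0.1) * (hi - lo)))
        else:                                      # random selection
            x = random.uniform(lo, hi)
        new.append(x)
    worst = max(range(HMS), key=lambda i: cost(hm[i]))
    if cost(new) < cost(hm[worst]):                # replace worst harmony
        hm[worst] = new
best = min(hm, key=cost)
print("Kp=%.2f Ki=%.2f Kd=%.2f" % tuple(best))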
Data protection has recently become increasingly important in cloud environments. The cloud platform has global user information, rich storage resource allocation information, and a full understanding of data attributes. At the same time, there is an urgent need for data access control to provide data security, and the software-defined network, as a ready-made facility, has a global network view, global network management capabilities, and programmable network rules. In this paper, we present an approach named High-Performance Software-Defined Data Access Network (HP-SDDAN), which provides a software-defined data access network architecture, global data attribute management, and an attribute-based data access network. HP-SDDAN combines the strengths of the cloud platform and the software-defined network, and fully considers performance in implementing the software-defined data access network. In our evaluation, we verify the effectiveness and efficiency of the HP-SDDAN implementation: it incurs only 1.46% overhead to achieve attribute-based data access control with attribute-based differential privacy.
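A bare-bones illustration of the kind of attribute-based access decision HP-SDDAN enforces in the network; the users, attributes, and policy here are invented for the example:

# Attribute-based access check; names and the policy are hypothetical.
USER_ATTRS = {"alice": {"dept": "finance", "clearance": 2},
              "bob":   {"dept": "hr",      "clearance": 1}}
DATA_POLICY = {"payroll.csv": {"dept": "finance", "clearance": 2}}

def allow(user, obj):
    """Grant access only if the user's attributes satisfy every
    attribute required by the object's policy."""
    need, have = DATA_POLICY[obj], USER_ATTRS[user]
    return have["dept"] == need["dept"] and have["clearance"] >= need["clearance"]

print(allow("alice", "payroll.csv"))   # True
print(allow("bob", "payroll.csv"))     # False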
In a monolithic operating system (OS), any error in system software can be exploited to destroy the whole system. The situation becomes much more severe in cloud environments, where the kernel and the hypervisor share the same address space. The security of guest Virtual Machines (VMs), both their sensitive data and vital code, can no longer be guaranteed once the hypervisor is compromised. Therefore, it is essential to deploy security approaches that protect VMs regardless of whether the hypervisor is safe. Existing approaches propose microhypervisors that reduce the attack surface, or new software requiring a higher privilege level than the hypervisor. In this paper, we propose a novel approach, named HyperPS, which separates the fundamental and crucial privileges into a new trusted environment in order to monitor the hypervisor. A pivotal condition for HyperPS is that the hypervisor must not be allowed to manipulate any security-sensitive system resources, such as page tables, system control registers, interactions between VMs and the hypervisor, and VM memory mappings. Moreover, the trusted environment HyperPS provides does not rely on any privilege level higher than the hypervisor's. We have implemented a prototype for the KVM hypervisor on the x86 platform with multiple VMs running Linux. KVM with HyperPS can be applied to the current commercial cloud computing industry with portability. A security analysis shows that the approach provides effective monitoring against attacks, and a performance evaluation confirms the efficiency of HyperPS.
Signature-based Intrusion Detection Systems (IDS) are a key component in the cybersecurity defense strategy for any monitored network. To improve the efficiency of the intrusion detection system and the corresponding mitigation actions, it is important to address the problem of false alarms. In this paper, we present a comparative analysis of two approaches that consider false alarm minimization and alarm correlation techniques. The output of this analysis provides us with the elements to propose a parallelizable strategy designed to achieve better results in terms of precision, recall, and alarm load reduction in the prioritization of alarms. We use Prelude SIEM as the event normalizer to process security events from heterogeneous sensors and to correlate them. The alarms are verified using dynamic network context information collected from vulnerability analysis, and they are prioritized using the HP ArcSight priority formula. The results show an important reduction in the volume of alerts, together with high precision in the identification of false alarms.
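For intuition, a generic alarm-prioritization sketch that weights severity by vulnerability context; this is an illustrative stand-in, not the proprietary HP ArcSight formula:

# Generic priority scoring: down-rank alarms whose target is not actually
# vulnerable. Alarms, CVE IDs, and weights are invented for illustration.
ALARMS = [{"sig": "CVE-2021-0001 exploit", "severity": 8, "target": "10.0.0.5"},
          {"sig": "port scan",             "severity": 3, "target": "10.0.0.9"}]
VULNERABLE = {"10.0.0.5": {"CVE-2021-0001"}}   # from vulnerability analysis

def priority(alarm):
    relevant = any(cve in alarm["sig"] for cve in VULNERABLE.get(alarm["target"], ()))
    return alarm["severity"] * (1.0 if relevant else 0.2)  # likely false alarm

for a in sorted(ALARMS, key=priority, reverse=True):
    print(a["sig"], round(priority(a), 1))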
This paper proposes a method that combines the parallel processing offered by DNA computing and genetic algorithm (GA) environments with a novel mechanism for distinguishing and discovering cryptosystem keys. The work makes three contributions: first, the adoption of a final-key-sequence mechanism based on the principle of interconnected sequence parts; second, the exploitation of the parallelism the GA provides when searching for the counter values of the challenge sequences used by the discrimination mechanism; and third, the most important and the one that broadens the cipher-breaking capability, the use of the parallelism offered by DNA computing to discover the basic encryption key. The proposed method constructs a combined set of files containing binary sequences produced by substituting the guessed attributes into the binary equation system of the cryptosystem, as well as files containing all candidate DNA strands for each successive cipher character. These files are processed starting from the first character file: a key sequence is extracted from each sequence in that file and processed against the binary counter sequences produced by the GA. The aim of the paper is to exploit and implement the theoretical principles of parallelism provided by the biological environment, together with the new sequence-recognition mechanism, in cryptanalysis.
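Since the abstract leans on GA search, here is a minimal genetic-algorithm key search in Python; the target key bits and the bit-match fitness are toy stand-ins for the real cryptosystem equations:

# Minimal GA key search; the target and fitness are toy placeholders.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]             # unknown key bits (toy)
POP, GENS, MUT = 20, 200, 0.05

def fitness(k):
    return sum(a == b for a, b in zip(k, TARGET))   # matching bits

pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)             # best first
    if fitness(pop[0]) == len(TARGET):
        break                                       # key recovered
    parents = pop[:POP // 2]                        # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(TARGET))
        child = a[:cut] + b[cut:]                   # one-point crossover
        child = [bit ^ (random.random() < MUT) for bit in child]  # mutation
        children.append(child)
    pop = parents + children
best = max(pop, key=fitness)
print("best candidate:", best, "fitness:", fitness(best))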
Software-Defined Networking (SDN) gives complete control over data flows in the network: the controller acts as a central point from which data is administered and traffic is managed. Being an open-source product, SDN is more prone to security threats, and security policies must be enforced, since the controller is otherwise the component most likely to be attacked. Attacks such as DDoS and DoS are commonly directed at the SDN controller. DDoS is a destructive attack that diverts the normal flow of traffic and floods the system with packets until it halts. Machine learning techniques help identify hidden and unexpected patterns in the network and thus help analyze network flows. Classification techniques can detect malicious flows based on parameters such as packet flow and time duration, evaluated by accuracy and precision. Researchers have used Bayesian networks, wavelets, Support Vector Machines, and KNN to detect DDoS attacks; reviews indicate that KNN produces better results, with higher precision and a lower false detection rate. This paper proposes a hybrid machine learning approach that improves on the existing KNN approach on the same data set, detecting DDoS attacks with higher accuracy and precision. Results are shown for traffic with both normal and abnormal behavior; based on these results, the proposed algorithm is designed to outperform KNN and will be implemented in future work.
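A small scikit-learn sketch of the KNN baseline the paper compares against, on synthetic flow features (packet rate, duration); real experiments would use a labeled DDoS traffic data set:

# KNN flow classification on synthetic features; data is generated, not
# a real DDoS data set, and the feature choice is illustrative.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
normal = rng.normal([100, 5.0], [30, 1.5], (500, 2))   # pkts/s, duration
attack = rng.normal([900, 0.5], [200, 0.2], (500, 2))  # flood-like flows
X = np.vstack([normal, attack])
y = np.array([0] * 500 + [1] * 500)                    # 1 = DDoS

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)
print("accuracy: %.3f" % clf.score(Xte, yte))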
Hardware Trojan threats posed by malicious designers and untrusted manufacturers have become one of the most serious issues in modern VLSI systems. In this paper, we present experimental results on inserting hardware Trojans into asynchronous circuits. The results indicate that the overhead of hardware Trojan insertion in asynchronous circuits can be small for malicious designers with sufficient knowledge of asynchronous circuits. In addition, we examine several Trojan detection methods based on deep learning that were originally proposed to detect synchronous hardware Trojans at the netlist level. We apply them to asynchronous hardware Trojan circuits and report the results, which show great potential for detecting hardware Trojans in asynchronous circuits.
To accurately detect hardware Trojans during the integrated circuit design process, a machine-learning-based detection method at the register transfer level (RTL) is proposed. In this method, circuit features are extracted from the RTL source code, and a training database is built using circuits in a hardware Trojan library. The training database is used to train an efficient detection model based on the gradient boosting algorithm. To expand the hardware Trojan library for detecting new types of hardware Trojans and to update the detection model in time, a server-client mechanism is used. The proposed method achieves a 100% true positive rate and an 89% true negative rate, on average, on benchmarks from Trust-Hub.
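A sketch of the gradient-boosting step with scikit-learn; the three RTL features and the synthetic data are placeholders, not the Trust-Hub benchmarks or the paper's feature set:

# Gradient boosting on hypothetical RTL features (e.g. branch count,
# rare-signal toggles, FSM states); the data is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
clean = rng.normal([20, 2, 8], [5, 1, 3], (200, 3))    # Trojan-free
trojan = rng.normal([22, 6, 9], [5, 1, 3], (200, 3))   # rare toggles higher
X = np.vstack([clean, trojan])
y = np.array([0] * 200 + [1] * 200)                    # 1 = Trojan-inserted

clf = GradientBoostingClassifier(n_estimators=100)
print("cv accuracy: %.3f" % cross_val_score(clf, X, y, cv=5).mean())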
As chips become more and more connected, they are more exposed, both to network and to physical attacks, so one must ensure they enjoy a sufficient protection level; security within chips is accordingly becoming a hot topic. Incident detection and reporting is one novel function expected from chips. In this talk, we explain why it is worthwhile to resort to Artificial Intelligence (AI) for security event handling. The drivers are the need to aggregate multiple, heterogeneous security sensors and the need to digest this information quickly to produce exploitable intelligence, all while maintaining a low false positive detection rate. Key features are adequate learning procedures and fast, secure classification accelerated by hardware. A challenge is to embed such security-oriented AI logic without compromising the chip's power budget and silicon area. This talk accounts for the opportunities offered by the symbiotic encounter between chip security and AI.