Biblio
Computational Intelligence (CI) has great potential in Security & Defense (S&D) applications. Nevertheless, this potential still appears underexploited. In this work we first review CI applications in the maritime domain developed by NATO nations over the past decades. We then discuss challenges and opportunities for CI in S&D. Finally, we argue that a review of the academic training of military officers is highly advisable, so that they can understand, model and solve new problems using CI techniques.
Dynamic Fuzzy Rule Interpolation (D-FRI) offers a dynamic rule base for fuzzy systems, which is especially useful for systems with changing requirements and limited prior knowledge. This suggests a possible application of D-FRI in the area of network security, given the volatility of network traffic. A honeypot is a valuable tool in network security for baiting attackers and collecting information about them. However, because honeypots are typically designed with limited resources, they are not considered a primary security tool, and as such they can be vulnerable to many security attacks. One such attack is the spoofing attack, which can cause severe damage to a honeypot and render it ineffective. This paper presents a vigilant dynamic honeypot based on the D-FRI approach for predicting and raising alerts about spoofing attacks on the honeypot. First, it proposes a technique for spoofing-attack identification based on the analysis of simulated attack data. Then, the paper employs this identification technique to develop a D-FRI-based vigilant dynamic honeypot, allowing the honeypot to predict and alert that a spoofing attack is taking place even in the absence of matching rules. The resulting system is capable of learning and maintaining a dynamic rule base for more accurate identification of potential spoofing attacks under the changing traffic conditions of the network.
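As a rough illustration of the interpolation step that underlies FRI in general (not the authors' D-FRI system itself), the sketch below linearly interpolates a conclusion between two neighbouring rules, in the spirit of classical KH interpolation. The triangular representation, vertex-by-vertex interpolation, and all names are simplifying assumptions for illustration.

```python
def kh_interpolate(a1, b1, a2, b2, obs):
    """Linear (KH-style) rule interpolation for an observation falling
    between the antecedents of two rules A1=>B1 and A2=>B2.
    Fuzzy sets are triangles given as (left, peak, right) tuples;
    this simplified sketch interpolates vertex by vertex."""
    b_star = []
    for i in range(3):
        # relative position of the observation between the antecedents
        lam = (obs[i] - a1[i]) / (a2[i] - a1[i])
        b_star.append((1 - lam) * b1[i] + lam * b2[i])
    return tuple(b_star)

# An observation halfway between the two antecedents yields a
# conclusion halfway between the two consequents.
result = kh_interpolate((0, 1, 2), (0, 1, 2), (4, 5, 6), (8, 9, 10), (2, 3, 4))
```

A dynamic variant, as in the paper, would additionally promote frequently interpolated results into new rules, keeping the rule base current as traffic patterns change.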
Insider threats can cause immense damage to organizations of different types, including government, corporate, and non-profit organizations. Being an insider, however, does not necessarily equate to being a threat. Effectively identifying valid threats, and assessing the type of threat an insider presents, remain difficult challenges. In this work, we propose a novel breakdown of eight insider threat types, identified using three insider traits: predictability, susceptibility, and awareness. In addition to presenting this framework for insider threat types, we implement a computational model to demonstrate the viability of our framework with synthetic scenarios devised after reviewing real-world insider threat case studies. The results yield useful insights into how further investigation might proceed to establish how best to gauge predictability, susceptibility, and awareness, and precisely how they relate to the eight insider types.
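The count of eight types follows naturally if each of the three traits is treated as binary; the abstract does not state that the traits are binary high/low, so the enumeration below is only one plausible reading, sketched for illustration:

```python
from itertools import product

TRAITS = ("predictability", "susceptibility", "awareness")

# 2**3 = 8 combinations, one per insider type, under a hypothetical
# binary (low/high) reading of the three traits.
insider_types = [dict(zip(TRAITS, levels))
                 for levels in product(("low", "high"), repeat=3)]
```

In practice each trait is more plausibly a continuous score, with the eight types arising from thresholding; the enumeration above only shows why three traits yield exactly eight types.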
Real-world applications of Wireless Sensor Networks (WSNs), such as border control, healthcare monitoring and target tracking, require secure communications. Thus, during WSN setup, one of the first requirements is to distribute to the sensor nodes the keys that will later be used to secure the messages exchanged between sensors. Key management schemes in WSNs secure the communication between a pair or a group of nodes. However, the storage capacity of sensor nodes is limited, which makes the storage requirement an important parameter in the evaluation of key management schemes. This paper classifies the existing key management schemes proposed for WSNs into three categories: storage-inefficient, storage-efficient and highly storage-efficient key management schemes.
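To illustrate why per-node storage is a meaningful evaluation parameter, the sketch below compares two textbook extremes, full pairwise keying versus a single network-wide master key. These baselines are standard examples, not schemes taken from the paper:

```python
def pairwise_keys_per_node(n):
    """Full pairwise keying: each node stores one key for every other
    node, giving perfect resilience but O(n) storage per node."""
    return n - 1

def master_keys_per_node(n):
    """A single network-wide master key: constant storage, but one
    compromised node exposes all traffic in the network."""
    return 1

# For a 1000-node network, pairwise keying demands 999 keys per node,
# quickly exceeding the storage of a constrained sensor node.
```

Schemes in the paper's "highly storage-efficient" category aim to sit between these extremes: storage well below O(n) per node while avoiding the single point of failure of a master key.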
The increasing exploitation of the internet leads to new uncertainties, due to interdependencies and links between the cyber and physical layers. As an example, the integration between telecommunication and physical processes that occurs when the power grid is managed and controlled gives rise to epistemic uncertainty. This uncertainty can be managed using specific frameworks, usually related to fuzzy theory, such as Evidence Theory. This approach is attractive due to its flexibility in managing uncertainty by means of simple rule-based systems with data coming from heterogeneous sources. In this paper, Evidence Theory is applied to risk evaluation. The authors propose a frame of discernment with a specific relationship among its elements, based on a graph representation. This relationship leads to a smaller power set (called the Reduced Power Set) that can be used in place of the classical power set when the most common combination rules, such as Dempster's or Smets', are applied. The paper demonstrates how the use of the Reduced Power Set yields more efficient algorithms for combining evidence and for applying Evidence Theory to risk assessment.
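For reference, Dempster's rule of combination (one of the combination rules mentioned above) can be sketched as follows, with focal sets represented as frozensets. The toy mass functions are invented for illustration and operate over an ordinary power set, not the paper's Reduced Power Set:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) with
    Dempster's rule, normalising away the conflicting mass."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass falling on the empty set
    norm = 1.0 - conflict
    return {s: v / norm for s, v in combined.items()}

# Two sources partially agreeing that the risk level is "high":
theta = frozenset({"low", "high"})  # the frame of discernment
m1 = {frozenset({"high"}): 0.6, theta: 0.4}
m2 = {frozenset({"high"}): 0.5, theta: 0.5}
m12 = dempster_combine(m1, m2)
```

The double loop over focal sets is exactly where a Reduced Power Set pays off: fewer focal sets to enumerate means fewer pairwise intersections to compute per combination.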
The EIIM model for entity resolution (ER) allows for the creation and maintenance of persistent entity identity structures. It accomplishes this through a collection of batch configurations that allow updates and asserted fixes to be made to the identity knowledge base (IKB). The model also provides a batch identity resolution (IR) configuration that performs no maintenance activity but instead allows access to the identity information. This batch IR configuration is limited in a few ways: it is driven by the same rules used for maintaining the IKB, has no inherent method to identify "close" matches, and can only identify and return positive matches. By decoupling this configuration and moving it into an interactive role under the umbrella of an Identity Management Service, a more robust access method can be provided for the use of identity information. This more robust access improves the quality of the information along multiple Information Quality dimensions.
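One of the limitations noted above is the inability to identify "close" matches. A minimal illustration of such approximate access, using generic string similarity rather than the EIIM matching rules themselves (the function name and threshold are assumptions), might look like:

```python
from difflib import SequenceMatcher

def close_matches(query, identities, threshold=0.8):
    """Return stored identity names similar to the query, rather than
    only exact positive matches. The 0.8 threshold is illustrative."""
    return [name for name in identities
            if SequenceMatcher(None, query, name).ratio() >= threshold]
```

An interactive identity service could return such near-misses ranked by score, letting a human confirm or reject them instead of silently dropping everything short of a positive match.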
Future personal living environments feature an increasing number of convenience-, health- and security-related applications provided by distributed services, which not only support users but also require tasks such as installation, configuration and continuous administration. These tasks are becoming tiresome, complex and error-prone. One way to escape this situation is to enable service platforms to configure and manage themselves. The approach presented here extends services with semantic descriptions to enable platform-independent autonomous service-level management using model-driven architecture and autonomic computing concepts. It has been implemented as an OSGi-based semantic autonomic manager, whose concept, prototypical implementation and evaluation are presented.
The OpenFlow architecture is a proposal from the Clean Slate initiative to define a new Internet architecture in which the network devices are simple and the control and management plane is handled by a centralized controller. Its simplicity and centralization make the architecture reliable and inexpensive. However, it does not provide mechanisms to detect conflicts between flows, allowing unreachable flows to be configured in the network elements, so that the network may not behave as expected. This paper proposes an approach to conflict detection that uses first-order logic to define possible antagonisms and employs an inference engine to detect conflicting flows before the OpenFlow controller installs them in the network elements.
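As a simplified example of the kind of conflict such an approach must catch, sketched here procedurally rather than in first-order logic: a flow entry is unreachable when a higher-priority entry already matches every packet it would match. Field names and the flat dict representation are assumptions, and field values are compared as opaque strings (real match semantics involve wildcards and prefix lengths):

```python
def shadows(hi, lo):
    """True if rule `hi` makes rule `lo` unreachable: hi has higher
    priority and all of hi's (more general) match fields appear in
    lo's match with identical values, so hi matches a superset."""
    return (hi["priority"] > lo["priority"] and
            all(lo["match"].get(f) == v for f, v in hi["match"].items()))

# A broad high-priority rule shadows a narrower low-priority one:
broad = {"priority": 10, "match": {"ip_dst": "10.0.0.0/24"}}
narrow = {"priority": 5, "match": {"ip_dst": "10.0.0.0/24", "tcp_dst": 80}}
```

Running such a check in the controller before rule installation, as the paper proposes with an inference engine, prevents dead entries from ever reaching the switches.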
This paper presents a methodology for developing a statistically sound, robust prognostic condition index and encapsulating this index as a series of highly accurate, transparent, human-readable rules. These rules can be used to further understand degradation phenomena, and also provide transparency and trust for any underlying prognostic technique employed. A case study is presented on a wind turbine gearbox, utilising historical supervisory control and data acquisition (SCADA) data in conjunction with a physics-of-failure model. Training is performed without failure data, with the technique accurately identifying gearbox degradation and providing prognostic signatures up to five months before catastrophic failure occurred. A robust derivation of the Mahalanobis distance is employed to perform outlier analysis in the bivariate domain, enabling the rapid labelling of historical SCADA data on independent wind turbines. Following this, the RIPPER rule learner is utilised to extract transparent, human-readable rules from the labelled data. A mean classification accuracy of 95.98% against the autonomously derived condition was achieved on three independent test sets, with a mean kappa statistic of 93.96%. In total, 12 rules were extracted; with an independent domain expert providing critical analysis, two thirds of the rules were deemed intuitive in modelling the fundamental degradation behaviour of the wind turbine gearbox.
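The outlier-analysis step can be sketched with the classical (non-robust) Mahalanobis distance; the paper uses a robust derivation precisely because the classical estimates below are themselves distorted by the outliers they seek. The threshold and toy data are arbitrary illustrations:

```python
import numpy as np

def mahalanobis_flags(x, threshold=3.0):
    """Flag rows of a bivariate sample whose Mahalanobis distance from
    the sample mean exceeds `threshold`. Uses classical estimates of
    mean and covariance (the textbook, non-robust form)."""
    mu = x.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(x, rowvar=False))
    dev = x - mu
    # quadratic form (x - mu)^T  Sigma^{-1}  (x - mu), row by row
    d2 = np.einsum('ij,jk,ik->i', dev, cov_inv, dev)
    return np.sqrt(d2) > threshold

# Twenty well-behaved bivariate samples plus one gross outlier:
x = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]] * 4 + [[10, 10]])
flags = mahalanobis_flags(x)
```

In the paper's pipeline, points flagged this way on the bivariate SCADA signals become the labels from which RIPPER then learns its human-readable degradation rules.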