Bibliography
The future Internet has been a hot topic during the past decade, and many approaches towards this future Internet, ranging from incremental evolution to complete clean-slate ones, have been proposed. One of these propositions, LISP, advocates the separation of the identifier and locator roles of IP addresses to reduce BGP churn and BGP table size. Up to now, however, most studies concerning LISP have been theoretical and, in fact, little is known about the performance of actual LISP deployments. In this paper, we fill this gap through measurement campaigns carried out on the LISP Beta Network. More precisely, we evaluate the performance of the two key components of the infrastructure: the control plane (i.e., the mapping system) and the interworking mechanism (i.e., communication between LISP and non-LISP sites). Our measurements highlight that the performance offered by the LISP interworking infrastructure is strongly dependent on BGP routing policies. If we exclude misconfigured nodes, the mapping system typically provides reliable performance and relatively low median mapping-resolution delays. Although the bias is not very pronounced, control-plane performance favors US sites, both because of their larger LISP user base and because the European infrastructure appears to be less reliable.
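As a rough illustration of how a mapping-resolution delay could be measured from a client, the sketch below times a LISP lookup. It assumes a lig (LISP Internet Groper) binary on the PATH that accepts an EID as its only argument, and the queried EID is an arbitrary example; none of this reflects the authors' actual measurement tooling.

```python
# Minimal sketch: estimate mapping-resolution delay by timing a LISP lookup.
# Assumes a `lig` binary on the PATH that takes an EID as its argument; this is
# an illustration, not the measurement setup used in the paper.
import subprocess
import time

def map_resolution_delay(eid, timeout=10.0):
    """Return the wall-clock delay (seconds) of one mapping lookup, or None on failure."""
    start = time.monotonic()
    try:
        result = subprocess.run(["lig", eid], capture_output=True, timeout=timeout)
    except (subprocess.TimeoutExpired, FileNotFoundError):
        return None
    if result.returncode != 0:
        return None
    return time.monotonic() - start

if __name__ == "__main__":
    delays = [d for d in (map_resolution_delay("153.16.0.1") for _ in range(5)) if d is not None]
    if delays:
        delays.sort()
        print(f"median resolution delay: {delays[len(delays) // 2]:.3f} s")
```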
Future wireless communications will be made up of different wireless technologies. In such a scenario, cognitive and cooperative principles create a promising framework for the interaction of these systems. The opportunistic behavior of cognitive radio (CR) provides efficient use of the radio spectrum and makes wireless network setup easier. However, CR features are increasingly exploited by malicious attacks, e.g., denial-of-service (DoS). This paper introduces active radio frequency fingerprinting (RFF) with a twofold application scenario. CRs could encapsulate common-control-channel (CCC) information in an existing channel using active RFF, avoiding any additional or dedicated link. On the other hand, a node inside a network could use the same technique to exchange a public key during the setup of secure communication. Results indicate that active RFF is a valuable technique for a cognitive radio manager (CRM) framework, facilitating data exchange between CRs without any dedicated channel or additional radio resources.
The concept of smart cities envisions services that provide distraction-free support for citizens. To realize this vision, the services must adapt to citizens' situations, behaviors, and intents at runtime. This requires services to gather and process the context of their users. Mobile devices provide a promising basis for determining context in an automated manner on a large scale. However, despite the wide availability of versatile programmable mobile platforms such as Android and iOS, there are only a few examples of smart city applications. One reason for this is that existing software platforms primarily focus on low-level resource management, which requires application developers to repeatedly tackle many challenging tasks. Examples include efficient data acquisition, secure and privacy-preserving data distribution, and interoperable data integration. In this paper, we describe the GAMBAS middleware, which aims to simplify the development of smart city applications. To do this, GAMBAS introduces a Java-based runtime system with an associated software development kit (SDK). To clarify how the runtime system and the SDK can be used for application development, we describe two simple applications that highlight different middleware functions.
In this paper, we propose an adaptive specification-based intrusion detection system (IDS) for detecting malicious unmanned air vehicles (UAVs) in an airborne system in which continuity of operation is of the utmost importance. An IDS audits UAVs in a distributed system to determine whether the UAVs are functioning normally or are operating under malicious attacks. We investigate the impact of reckless, random, and opportunistic attacker behaviors (modes which many historical cyber attacks have used) on the effectiveness of our behavior-rule-based UAV IDS (BRUIDS), which bases its audit on behavior rules to quickly assess the survivability of a UAV facing malicious attacks. Through a comparative analysis with the multiagent system/ant-colony clustering model, we demonstrate the high detection accuracy of BRUIDS for compliant performance. By adjusting the detection strength, BRUIDS can effectively trade higher false positives for lower false negatives to cope with more sophisticated random and opportunistic attackers, supporting ultra-safe and secure UAV applications.
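To make the behavior-rule idea concrete, here is a minimal sketch in which a monitored UAV state is checked against a handful of illustrative rules and flagged when its compliance degree falls below an adjustable detection strength. The rules, state fields, and threshold semantics are assumptions for illustration, not the BRUIDS specification.

```python
# Minimal sketch of behavior-rule checking with an adjustable detection strength.
# The rules and state fields are illustrative assumptions, not the BRUIDS rule set.
from dataclasses import dataclass

@dataclass
class UavState:
    altitude_m: float
    weapon_armed: bool
    in_engagement_zone: bool
    reporting_interval_s: float

RULES = [
    ("keep safe altitude", lambda s: s.altitude_m >= 100.0),
    ("arm weapon only in engagement zone", lambda s: not s.weapon_armed or s.in_engagement_zone),
    ("report status at least every 10 s", lambda s: s.reporting_interval_s <= 10.0),
]

def compliance_degree(state):
    """Fraction of behavior rules the monitored UAV currently satisfies."""
    return sum(rule(state) for _, rule in RULES) / len(RULES)

def is_malicious(state, detection_strength=0.8):
    # A higher detection_strength flags more nodes: fewer false negatives,
    # more false positives (the trade-off discussed in the abstract).
    return compliance_degree(state) < detection_strength

if __name__ == "__main__":
    suspect = UavState(altitude_m=40.0, weapon_armed=True,
                       in_engagement_zone=False, reporting_interval_s=25.0)
    print(compliance_degree(suspect), is_malicious(suspect))
```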
Up-to-date studies and surveys regarding IT security show that companies of every size and branch are nowadays faced with the growing risk of cyber crime. Many tools, standards, and best practices are in place to support enterprise IT security experts in dealing with the emerging risks, yet small and medium-sized enterprises (SMEs) in particular feel helpless when struggling with the growing threats. This article describes an approach for how SMEs can obtain high-quality assurance of whether they have fallen victim to cyber crime, what kind of damage resulted from a given attack, and how remediation can be carried out. Throughout all steps of the analysis, the focus lies on economic feasibility and the typical environment of SMEs.
Encrypting and decrypting data efficiently is one of the challenging aspects of modern computer science. This paper introduces a new cryptographic algorithm intended to achieve a higher level of security. The algorithm hides the meaning of a message in unprintable characters. The central idea is to make the encrypted message unprintable by applying several ASCII conversions and a cyclic mathematical function. The original message is divided into packets, and a binary matrix is formed for each packet to produce the unprintable encrypted message by keeping the ASCII value of each character below 32. Similarly, several ASCII conversions and the inverse cyclic mathematical function are used to decrypt the unprintable encrypted message. The final encrypted message, obtained after three rounds of encryption, is an unprintable text, so the algorithm achieves a higher level of security without increasing the size of the data or losing any data.
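The following toy sketch illustrates the general idea of producing an unprintable ciphertext (every output byte below ASCII 32) via a reversible keyed transformation. The packetization, binary-matrix layout, and cyclic function of the paper are not reproduced, and unlike the paper's claim this toy doubles the message length; it is only meant to show the shape of such a scheme.

```python
# Toy sketch: make ciphertext "unprintable" by mapping each plaintext byte to two
# nibbles (values 0..15 < 32) after a simple keyed cyclic shift. The packetization,
# matrix layout, and cyclic function of the actual paper are not reproduced here.
def encrypt(message, key=7):
    out = bytearray()
    for i, byte in enumerate(message.encode("utf-8")):
        shifted = (byte + key + i) % 256          # assumed cyclic function
        out.append(shifted >> 4)                  # high nibble, 0..15
        out.append(shifted & 0x0F)                # low nibble, 0..15
    return bytes(out)                             # every byte is below 32: unprintable

def decrypt(cipher, key=7):
    out = bytearray()
    for i in range(0, len(cipher), 2):
        shifted = (cipher[i] << 4) | cipher[i + 1]
        out.append((shifted - key - i // 2) % 256)  # inverse cyclic function
    return out.decode("utf-8")

if __name__ == "__main__":
    c = encrypt("attack at dawn")
    assert all(b < 32 for b in c)
    assert decrypt(c) == "attack at dawn"
    print("round trip ok, ciphertext length:", len(c))
```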
In military operations or emergency response situations, a commander very frequently needs to assemble and dynamically manage Community of Interest (COI) mobile groups to achieve an assigned critical mission despite the failure, disconnection, or compromise of COI members. We combine the designs of COI hierarchical management for scalability and reconfigurability with COI dynamic trust management for survivability and intrusion tolerance to compose a scalable, reconfigurable, and survivable COI management protocol for managing COI mission-oriented mobile groups in heterogeneous mobile environments. A COI mobile group in this environment would consist of heterogeneous mobile entities, such as personnel or robots carrying communication devices and aerial or ground vehicles operated by humans, exhibiting not only quality-of-service (QoS) characteristics, e.g., competence and cooperativeness, but also social behaviors, e.g., connectivity, intimacy, and honesty. A COI commander or a subtask leader must measure trust with both social and QoS cognition, depending on mission task characteristics and/or trustee properties, to ensure successful mission execution. In this paper, we present a dynamic hierarchical trust management protocol that can learn from past experiences and adapt to changing environment conditions (e.g., an increasing misbehaving node population, evolving hostility, and varying node density) to enhance agility and maximize application performance. With trust-based misbehaving node detection as an application, we demonstrate how our proposed COI trust management protocol is resilient to node failure, disconnection, and capture events, and can help maximize application performance in terms of minimizing false negatives and false positives in the presence of mobile nodes exhibiting vastly distinct QoS and social behaviors.
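As a minimal sketch of combining QoS and social trust evidence into a mission-dependent score, the snippet below averages each group of components and blends them with a weight chosen per mission task. The component names, weights, and aggregation rule are illustrative assumptions, not the protocol defined in the paper.

```python
# Minimal sketch: aggregate QoS and social trust components into one score,
# with mission-dependent weights. Component names and weighting are illustrative
# assumptions, not the protocol defined in the paper.
QOS_COMPONENTS = ("competence", "cooperativeness")
SOCIAL_COMPONENTS = ("connectivity", "intimacy", "honesty")

def trust_score(evidence, qos_weight=0.6):
    """evidence maps each component name to a value in [0, 1]."""
    qos = sum(evidence[c] for c in QOS_COMPONENTS) / len(QOS_COMPONENTS)
    social = sum(evidence[c] for c in SOCIAL_COMPONENTS) / len(SOCIAL_COMPONENTS)
    return qos_weight * qos + (1.0 - qos_weight) * social

if __name__ == "__main__":
    node = {"competence": 0.9, "cooperativeness": 0.8,
            "connectivity": 0.6, "intimacy": 0.4, "honesty": 0.7}
    # A reconnaissance task might weight QoS heavily; a coordination task less so.
    print(trust_score(node, qos_weight=0.8), trust_score(node, qos_weight=0.4))
```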
Conventional cellular systems are dimensioned according to a worst-case scenario, and they are designed to ensure ubiquitous coverage with an always-present wireless channel irrespective of the spatial and temporal demand for service. A more energy-conscious approach requires an adaptive system with a minimum amount of overhead that is available at all locations and at all times but becomes functional only when needed. This approach suggests a new clean-slate system architecture with a logical separation between the ability to establish availability of the network and the ability to provide functionality or service. Focusing on the physical-layer frame of such an architecture, this paper discusses and formulates the overhead reduction that can be achieved in next-generation cellular systems compared with Long Term Evolution (LTE). Considering channel estimation as a performance metric while conforming to the time and frequency constraints on pilot spacing, we show that the overhead gain does not come at the expense of performance degradation.
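A back-of-the-envelope sketch of the kind of overhead accounting involved is shown below. It assumes the commonly cited LTE figure of 8 cell-specific reference-signal resource elements per resource block per subframe for one antenna port, plus an arbitrary sparser on-demand pilot grid; the numbers are assumptions, not results from the paper.

```python
# Back-of-the-envelope pilot-overhead comparison. The LTE figure below (8 CRS
# resource elements per PRB per subframe, one antenna port) and the "on-demand"
# pilot grid are illustrative assumptions, not the paper's measured results.
RE_PER_PRB_PER_SUBFRAME = 12 * 14          # subcarriers x OFDM symbols (normal CP)

def pilot_overhead(pilot_re):
    return pilot_re / RE_PER_PRB_PER_SUBFRAME

lte_overhead = pilot_overhead(8)           # always-on cell-specific reference signals
adaptive_overhead = pilot_overhead(2)      # assumed sparser grid, sent only when active

print(f"LTE-like pilot overhead:      {lte_overhead:.1%}")
print(f"Adaptive pilot overhead:      {adaptive_overhead:.1%}")
print(f"Relative overhead reduction:  {1 - adaptive_overhead / lte_overhead:.0%}")
```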
Detection of high-risk network flows and high-risk hosts is becoming ever more important and more challenging. In order to selectively apply deep packet inspection (DPI), one has to isolate, in real time, high-risk network activities within a huge number of monitored network flows. To help address this problem, we propose an iterative methodology for the simultaneous assessment of risk scores for both hosts and network flows. The proposed approach measures the risk scores of hosts and flows in an interdependent manner; thus, the risk score of a flow influences the risk score of its source and destination hosts, and the risk score of a host is in turn evaluated by taking into account the risk scores of flows initiated by or terminated at the host. Our experimental results show that such an approach is not only effective in detecting high-risk hosts and flows but, when deployed in high-throughput networks, is also more efficient than PageRank-based algorithms.
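A minimal sketch of the mutual-reinforcement idea follows: a flow's risk is blended with the risk of its endpoint hosts, and a host's risk is the mean risk of its incident flows, iterated until the scores settle. The update rule, weights, and normalization are assumptions, not the paper's scoring function.

```python
# Minimal sketch of interdependent risk scoring: a flow's risk depends on its
# endpoint hosts, and a host's risk depends on the flows it participates in.
# The update rule and normalization are assumptions, not the paper's method.
def score_risks(flows, initial_flow_risk, iterations=20):
    """flows: list of (src_host, dst_host); initial_flow_risk: list of floats in [0, 1]."""
    flow_risk = list(initial_flow_risk)
    host_risk = {}
    for _ in range(iterations):
        # Host risk: mean risk of flows the host initiates or terminates.
        incident = {}
        for (src, dst), r in zip(flows, flow_risk):
            incident.setdefault(src, []).append(r)
            incident.setdefault(dst, []).append(r)
        host_risk = {h: sum(rs) / len(rs) for h, rs in incident.items()}
        # Flow risk: blend the original signal with the risk of both endpoints.
        flow_risk = [0.5 * base + 0.5 * (host_risk[src] + host_risk[dst]) / 2
                     for (src, dst), base in zip(flows, initial_flow_risk)]
    return host_risk, flow_risk

if __name__ == "__main__":
    flows = [("A", "B"), ("A", "C"), ("C", "D")]
    hosts, per_flow = score_risks(flows, initial_flow_risk=[0.9, 0.1, 0.2])
    print(hosts)   # host "A" inherits risk from the high-risk flow it initiated
```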
ID/password-based authentication is commonly used in network services. Some users set different ID/password pairs for different services, but others reuse the same pair across services. Such reuse enables the list attack, in which an adversary tries to impersonate a target user using a list of IDs and passwords obtained from another system by some means (an insider attack, malware, or even a database leak). As a countermeasure against the list attack, biometric authentication has attracted more attention than before. In 2012, Hattori et al. proposed a cancelable biometric authentication scheme (the fundamental scheme) based on homomorphic encryption algorithms. In that scheme, the registered biometric information (template) and the biometric information to be compared are encrypted, and their similarity is computed while both remain encrypted. Only a privileged entity (a decryption center), which holds the corresponding decryption key, can obtain the similarity by decrypting the encrypted similarity and judge whether the two match. Hirano et al. later showed a replay attack against this scheme and proposed two enhanced authentication schemes. In this paper, we propose a spoofing attack against the fundamental scheme when the feature vector, obtained by digitizing the analogue biometric information, is represented as a binary coding such as Iris Code or Competitive Code. The proposed attack uses an unexpected vector as input whose distance to every possible binary vector is constant. Since the proposed attack is independent of the replay attack, it is also applicable to the two revised schemes by Hirano et al. Moreover, this paper discusses possible countermeasures to the proposed spoofing attack; in particular, it proposes a countermeasure that detects such unexpected vectors.
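The core observation behind the attack can be illustrated numerically: a crafted vector that is not a legitimate binary code can lie at exactly the same distance from every possible template. The sketch below uses squared Euclidean distance over {0,1}^n; the real attack operates on encrypted similarities inside the Hattori et al. protocol, which is not reproduced here.

```python
# Numeric illustration of the attack's core observation: a vector that is not a
# valid binary code can sit at the same distance from every binary template.
# Squared Euclidean distance over {0,1}^n is used here for simplicity.
from itertools import product

n = 8
crafted = [0.5] * n                      # not a legitimate binary feature vector

def sq_dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

distances = {sq_dist(crafted, code) for code in product((0, 1), repeat=n)}
print(distances)                         # {2.0} -- constant n/4 for every template
```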
In this work, we seek to optimize the efficiency of secure general-purpose obfuscation schemes. We focus on the problem of optimizing the obfuscation of Boolean formulas and branching programs – this corresponds to optimizing the "core obfuscator" from the work of Garg, Gentry, Halevi, Raykova, Sahai, and Waters (FOCS 2013), and all subsequent works constructing general-purpose obfuscators. This core obfuscator builds upon approximate multilinear maps, where efficiency in proposed instantiations is closely tied to the maximum number of "levels" of multilinearity required. The most efficient previous construction of a core obfuscator, due to Barak, Garg, Kalai, Paneth, and Sahai (Eurocrypt 2014), required the maximum number of levels of multilinearity to be O(l·s^3.64), where s is the size of the Boolean formula to be obfuscated and l is the number of input bits to the formula. In contrast, our construction only requires the maximum number of levels of multilinearity to be roughly l·s, or only s when considering a keyed family of formulas, namely a class of functions of the form f_z(x) = phi(z, x), where phi is a formula of size s. This results in significant improvements in both the total size of the obfuscation and the running time of evaluating an obfuscated formula. Our efficiency improvement is obtained by generalizing the class of branching programs that can be directly obfuscated. This generalization allows us to achieve a simple simulation of formulas by branching programs while avoiding the use of Barrington's theorem, on which all previous constructions relied. Furthermore, the ability to directly obfuscate general branching programs (without bootstrapping) allows us to efficiently apply our construction to natural function classes that are not known to have polynomial-size formulas.
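To give a feel for the gap between the two bounds, the short calculation below plugs in illustrative parameter values and ignores constant factors; the numbers are assumptions, not figures from the paper.

```python
# Worked comparison of the maximum multilinearity levels, ignoring constants.
# The parameter values are illustrative assumptions.
s = 1000      # formula size
l = 100       # number of input bits

previous = l * s ** 3.64     # previous core obfuscator, O(l * s^3.64)
this_work = l * s            # this construction, roughly l * s
keyed = s                    # keyed formula families, only s

print(f"previous: ~{previous:.2e} levels")
print(f"this work: ~{this_work:.0f} levels (keyed families: ~{keyed})")
print(f"improvement factor: ~{previous / this_work:.1e}")
```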
It is expected that clean-slate network designs will be implemented for wide-area network applications. Multi-tenancy in OpenFlow networks is an effective method for supporting a clean-slate network design, because cost-effectiveness is improved by sharing substrate networks. To guarantee the programmability of OpenFlow for tenants, complete virtualization of the flow space (i.e., the header values of data packets) is necessary. Wide-area substrate networks typically have multiple administrators, so flow space virtualization must be implemented across multiple administration networks. In existing techniques, a third party is solely responsible for managing the mapping of header values for flow space virtualization on behalf of substrate network administrators and tenants, despite the severity of a third-party failure. In this paper, we propose AutoVFlow, a mechanism that allows flow space virtualization in wide-area networks without the need for a third party. Substrate network administrators implement flow space virtualization autonomously: each is responsible for virtualizing the flow space involving the switches in its own substrate network. Using a prototype of AutoVFlow, we measured the virtualization overhead, and the results show that it is negligible.
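A minimal sketch of autonomous header-value mapping is given below: each administrator rewrites a tenant's virtual flow-space identifiers into locally unique substrate values at its domain border and restores them on egress, with no third party involved. The identifier scheme and data structures are assumptions, not AutoVFlow's actual design.

```python
# Minimal sketch: each substrate administrator autonomously maps a tenant's
# virtual flow-space identifiers to locally unique substrate values on ingress
# and restores them on egress. Identifier choice and structure are assumptions.
import itertools

class DomainMapper:
    def __init__(self, substrate_id_pool):
        self._pool = iter(substrate_id_pool)
        self._v2s = {}            # (tenant, virtual_id) -> substrate_id
        self._s2v = {}            # substrate_id -> (tenant, virtual_id)

    def ingress(self, tenant, virtual_id):
        key = (tenant, virtual_id)
        if key not in self._v2s:
            sub_id = next(self._pool)           # chosen locally, no third party
            self._v2s[key] = sub_id
            self._s2v[sub_id] = key
        return self._v2s[key]

    def egress(self, substrate_id):
        return self._s2v[substrate_id]

if __name__ == "__main__":
    domain_a = DomainMapper(itertools.count(100))
    domain_b = DomainMapper(itertools.count(200))
    sid = domain_a.ingress("tenant-1", virtual_id=10)
    print(sid, domain_a.egress(sid))                     # tenant header restored at egress
    print(domain_b.ingress("tenant-2", virtual_id=10))   # same virtual_id, no clash across domains
```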
Techniques for network security analysis have historically focused on the actions of the network hosts. Outside of forensic analysis, little has been done to detect or predict malicious or infected nodes strictly based on their association with other known malicious nodes. This methodology is highly prevalent in the graph analytics world, however, and is referred to as community detection. In this paper, we present a method for detecting malicious and infected nodes on both monitored networks and the external Internet. We leverage prior community detection and graphical modeling work by propagating threat probabilities across network nodes, given an initial set of known malicious nodes. We enhance prior work by employing constraints that remove the adverse effect of cyclic propagation that is a byproduct of current methods. We demonstrate the effectiveness of probabilistic threat propagation on the tasks of detecting botnets and malicious web destinations.
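As a minimal sketch of threat propagation over a contact graph, the snippet below spreads threat probability outward from seed malicious nodes, with a damping factor and a simple message-passing guard that keeps a node's own score from flowing back to itself through a cycle. The update rule is an assumption and simplifies the paper's constrained formulation.

```python
# Minimal sketch of probabilistic threat propagation on an undirected contact
# graph. A message from a neighbor excludes what the receiving node previously
# sent to that neighbor, a simple guard against self-reinforcement via cycles.
def propagate(edges, seeds, alpha=0.5, iterations=10):
    nodes = {n for e in edges for n in e}
    nbrs = {n: set() for n in nodes}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    msg = {(u, v): 0.0 for u in nodes for v in nbrs[u]}   # message u -> v
    threat = {n: (1.0 if n in seeds else 0.0) for n in nodes}
    for _ in range(iterations):
        new_msg = {}
        for u in nodes:
            for v in nbrs[u]:
                # u's belief, ignoring what v told u, damped by alpha.
                incoming = [msg[(w, u)] for w in nbrs[u] if w != v]
                base = 1.0 if u in seeds else 0.0
                new_msg[(u, v)] = min(1.0, base + alpha * max(incoming, default=0.0))
        msg = new_msg
        for n in nodes:
            base = 1.0 if n in seeds else 0.0
            threat[n] = min(1.0, base + alpha * max((msg[(w, n)] for w in nbrs[n]), default=0.0))
    return threat

if __name__ == "__main__":
    edges = [("bot1", "c2"), ("c2", "host_a"), ("host_a", "host_b"), ("host_b", "c2")]
    print(propagate(edges, seeds={"bot1"}))
```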
The specifics of applying an alias-free digitizer to compressed digitizing and recording of wideband signals are considered. Signal sampling in this case is performed on the basis of picosecond-resolution event timing: the digitizer is actually a subsystem of the Event Timer A033-ET, and the specific events that are detected and then timed are the crossings between the signal and a reference sine wave. The approach used to develop this subsystem is described and some results of experimental studies are given.
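The sampling idea can be simulated roughly as follows: the instants where the input signal crosses a slow reference sine wave are detected and timed, and each timed crossing yields a non-uniform sample whose value equals the reference level at that instant. The waveform parameters and crossing model below are assumptions for illustration, not the A033-ET subsystem's design.

```python
# Minimal simulation sketch: time the instants where a wideband input signal
# crosses a reference sine wave; each timed crossing is a non-uniform sample whose
# value is the reference level at that instant. Parameters are illustrative only.
import numpy as np

fs_sim = 1e9                                   # dense simulation grid, not the sampler
t = np.arange(0, 1e-5, 1 / fs_sim)
signal = 0.8 * np.sin(2 * np.pi * 1.3e6 * t) + 0.3 * np.sin(2 * np.pi * 4.7e6 * t)
reference = np.sin(2 * np.pi * 1.0e5 * t)      # slow reference sine wave

diff = signal - reference
crossings = np.where(np.signbit(diff[:-1]) != np.signbit(diff[1:]))[0]

# Linear interpolation of the crossing instant between grid points (the event
# timer would measure this instant directly with picosecond resolution).
frac = diff[crossings] / (diff[crossings] - diff[crossings + 1])
event_times = t[crossings] + frac / fs_sim
samples = np.sin(2 * np.pi * 1.0e5 * event_times)   # signal value equals reference value here

print(f"{len(event_times)} non-uniform samples over {t[-1] * 1e6:.1f} us")
```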