Biblio
Current software platforms for service composition are based on orchestration, choreography, or hierarchical orchestration. However, such approaches support only partial compositionality, thereby increasing the complexity of SOA development. In this paper, we propose DX-MAN, a platform that supports total compositionality. We describe the main concepts of DX-MAN with the help of a case study based on the popular MusicCorp.
The wireless spectrum is a scarce resource, and the number of wireless terminals is constantly growing. One way to mitigate this strong constraint for wireless traffic is the use of dynamic mechanisms to utilize the spectrum, such as cognitive and software-defined radios. This is especially important for the upcoming wireless sensor and actuator networks in aircraft, where real-time guarantees play an important role in the network. Future wireless networks in aircraft need to be scalable, cater to the specific requirements of avionics (e.g., standardization and certification), and provide interoperability with existing technologies. In this paper, we demonstrate that dynamic network reconfigurability is a solution to the aforementioned challenges. We supplement this claim by surveying several flexible approaches in the context of wireless sensor and actuator networks in aircraft. More specifically, we examine the concept of dynamic resource management, accomplished through more flexible transceiver hardware and by employing dedicated spectrum agents. Subsequently, we evaluate the advantages of cross-layer network architectures which overcome the fixed layering of current network stacks in an effort to provide quality of service for event-based and time-triggered traffic. Lastly, the challenges related to the implementation of the aforementioned mechanisms in wireless sensor and actuator networks in aircraft are elaborated, and key requirements for future research are summarized.
In 2007, Shacham published a seminal paper on Return-Oriented Programming (ROP), the first systematic formulation of code reuse. The paper has been highly influential, profoundly shaping the way we still think about code reuse today: an attacker analyzes the "geometry" of victim binary code to locate gadgets and chains these to craft an exploit. This model has spurred much research, with a rapid progression of increasingly sophisticated code reuse attacks and defenses over time. After ten years, the common perception is that state-of-the-art code reuse defenses are effective in significantly raising the bar and making attacks exceedingly hard. In this paper, we challenge this perception and show that an attacker going beyond "geometry" (static analysis) and considering the "dynamics" (dynamic analysis) of a victim program can easily find function call gadgets even in the presence of state-of-the-art code-reuse defenses. To support our claims, we present Newton, a run-time gadget-discovery framework based on constraint-driven dynamic taint analysis. Newton can model a broad range of defenses by mapping their properties into simple, stackable, reusable constraints, and automatically generate gadgets that comply with these constraints. Using Newton, we systematically map and compare state-of-the-art defenses, demonstrating that even simple interactions with popular server programs are adequate for finding gadgets for all state-of-the-art code-reuse defenses. We conclude with an nginx case study, which shows that a Newton-enabled attacker can craft attacks which comply with the restrictions of advanced defenses, such as CPI and context-sensitive CFI.
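As a rough conceptual sketch of the idea described in this abstract (not Newton's actual taint-tracking implementation), defenses can be modeled as stackable predicates that filter candidate gadgets; all gadget records and constraint names below are invented for illustration:

```python
# Hypothetical sketch: defenses as stackable, reusable constraints over
# candidate gadgets. Newton's real analysis is constraint-driven dynamic
# taint analysis; this only mirrors the filtering structure.
candidate_gadgets = [
    {"site": "ngx_output_chain",  "target_tainted": True,  "type_ok": True},
    {"site": "ngx_http_finalize", "target_tainted": True,  "type_ok": False},
    {"site": "ngx_log_error",     "target_tainted": False, "type_ok": True},
]

cfi_constraint = lambda g: g["type_ok"]            # e.g. a type-based CFI policy
taint_constraint = lambda g: g["target_tainted"]   # attacker must control target

def usable(gadget, constraints):
    """A gadget survives only if it satisfies every active defense constraint."""
    return all(c(gadget) for c in constraints)

surviving = [g for g in candidate_gadgets
             if usable(g, [cfi_constraint, taint_constraint])]
print([g["site"] for g in surviving])  # -> ['ngx_output_chain']
```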
Distributed attacks originating from botnet-infected machines (bots) such as large-scale malware propagation campaigns orchestrated via spam emails can quickly affect other network infrastructures. As these attacks are made successful only by the fact that hundreds of infected machines engage in them collectively, their damage can be avoided if machines infected with a common botnet can be detected early rather than after an attack is launched. Prior studies have suggested that outgoing bot attacks are often preceded by other ``tell-tale'' malicious behaviour, such as communication with botnet controllers (C&C servers) that command botnets to carry out attacks. We postulate that observing similar behaviour occurring in a synchronised manner across multiple machines is an early indicator of a widespread infection of a single botnet, leading potentially to a large-scale, distributed attack. Intuitively, if we can detect such synchronised behaviour early enough on a few machines in the network, we can quickly contain the threat before an attack does any serious damage. In this work we present a measurement-driven analysis to validate this intuition. We empirically analyse the various stages of malicious behaviour that are observed in real botnet traffic, and carry out the first systematic study of the network behaviour that typically precedes outgoing bot attacks and is synchronised across multiple infected machines. We then implement, as a proof of concept, a set of analysers that monitor synchronisation in botnet communication to generate early infection and attack alerts. We show that with this approach, we can quickly detect nearly 80% of real-world spamming and port scanning attacks, and even demonstrate a novel capability of preventing these attacks altogether by predicting them before they are launched.
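A toy sketch of the synchronisation signal the abstract describes might flag an endpoint once several internal hosts contact it within a short window; the thresholds, event format, and endpoint names below are assumptions, not the paper's analysers:

```python
# Toy early-alert sketch: alert when >= MIN_HOSTS machines contact the same
# suspicious endpoint within WINDOW seconds (both thresholds assumed).
from collections import defaultdict

WINDOW = 300
MIN_HOSTS = 3

def alerts(events, window=WINDOW, min_hosts=MIN_HOSTS):
    """events: iterable of (timestamp, src_host, dst_endpoint), time-sorted."""
    seen = defaultdict(list)  # endpoint -> [(t, host), ...] within the window
    out = []
    for t, host, dst in events:
        recent = [(tt, h) for tt, h in seen[dst] if t - tt <= window]
        recent.append((t, host))
        seen[dst] = recent
        hosts = {h for _, h in recent}
        if len(hosts) >= min_hosts:
            out.append((t, dst, sorted(hosts)))
    return out

demo = [(0, "h1", "c2.example"), (60, "h2", "c2.example"),
        (120, "h3", "c2.example"), (400, "h1", "mail.example")]
print(alerts(demo))  # synchronised contact by h1, h2, h3 triggers an alert
```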
Since information networks have been integrated into today's electricity networks, the security and privacy of individuals are challenged. This combination of technologies creates vulnerabilities in the smart grid that can disrupt the consumer energy supply. Encryption-based methods serve as countermeasures against attacks targeting integrity and confidentiality. Although cryptographic strategies are used in the smart grid, key management, which ranges in scale from tens to millions of keys (for meters), remains a critical process: key mismanagement can reveal secret keys to an attacker. A symmetric key distribution method recently suggested in [7] is well suited to smart electricity meters, but it is vulnerable to a responder-impersonation attack. Our proposed solution to this problem is to send both parties' identifiers in encrypted form, based on hash functions and a random value; the solution is appropriate for devices such as meters that have very little computing power.
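A minimal sketch of the kind of construction described, binding both identifiers to a hash and a fresh random value, is shown below; the field layout and key derivation are assumptions, not the paper's exact protocol:

```python
# Sketch: both identifiers are bound into a keyed hash with a fresh nonce,
# so replaying old identifiers cannot impersonate the responder. Uses only
# a hash (HMAC-SHA256), suitable for low-power meters.
import hashlib, hmac, os

def make_token(shared_key: bytes, id_initiator: bytes, id_responder: bytes):
    nonce = os.urandom(16)  # random value, fresh per exchange
    digest = hmac.new(shared_key, id_initiator + id_responder + nonce,
                      hashlib.sha256).digest()
    return nonce, digest

def verify_token(shared_key, id_initiator, id_responder, nonce, digest):
    expected = hmac.new(shared_key, id_initiator + id_responder + nonce,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, digest)

k = os.urandom(32)
n, d = make_token(k, b"meter-42", b"utility-head-end")
assert verify_token(k, b"meter-42", b"utility-head-end", n, d)
assert not verify_token(k, b"attacker", b"utility-head-end", n, d)
```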
The security of Wireless Sensor Networks (WSNs) is vital in several applications, such as the tracking and monitoring of endangered species (e.g., pandas in a national park) or soldiers on a battlefield. Such applications require the anonymity of the source, known as Source Location Privacy (SLP). The main aim is to prevent an adversary from tracing a real event back to the originator by analyzing the network traffic. Previous techniques have achieved high anonymity, such as Dummy Uniform Distribution (DUD), Dummy Adaptive Distribution (DAD) and Controlled Dummy Adaptive Distribution (CAD). However, these techniques increase the overall overhead of the network. To overcome this shortcoming, a new technique is presented: Exponential Dummy Adaptive Distribution (EDAD). In this technique, an exponential distribution is used instead of the uniform distribution to reduce the overhead without sacrificing the anonymity of the source. The exponential distribution improves the lifetime of the network since it decreases the number of transmitted packets within the network. It is straightforward and easy to implement because it has only one parameter $\lambda$ that controls the transmitting rate of the network nodes. The adversary model considered is global: it has a full view of the network and is able to perform sophisticated attacks such as rate monitoring and time correlation. The simulation results show that the proposed technique provides less overhead and high anonymity with reasonable delay and delivery ratio. Three different analysis models are developed to validate our technique: a visualization model, a neural network model, and a steganography model.
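The core idea is small enough to sketch: dummy-packet inter-send gaps drawn from an exponential distribution with rate $\lambda$, rather than a uniform schedule. The parameter values below are illustrative only, not the paper's simulation settings:

```python
# Sketch of EDAD's single-parameter schedule: exponential inter-send gaps
# with rate `lam` (packets/s), so the expected packet count over a horizon
# is lam * horizon, and lowering lam lowers the overhead.
import random

def dummy_schedule(lam: float, horizon: float):
    """Yield dummy-transmission times in [0, horizon) at rate `lam`."""
    t = 0.0
    while True:
        t += random.expovariate(lam)  # exponential inter-send gap
        if t >= horizon:
            return
        yield t

times = list(dummy_schedule(lam=0.5, horizon=60.0))
print(len(times), "dummy packets in 60 s (expected about 30)")
```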
Energy efficient High-Performance Computing (HPC) is becoming increasingly important. Recent ventures into this space have introduced an unlikely candidate to achieve exascale scientific computing hardware with a small energy footprint. ARM processors and embedded GPU accelerators originally developed for energy efficiency in mobile devices, where battery life is critical, are being repurposed and deployed in the next generation of supercomputers. Unfortunately, the performance of executing scientific workloads on many of these devices is largely unknown, yet the bulk of computation required in high-performance supercomputers is scientific. We present an analysis of one such scientific code, in the form of Gaussian Elimination, and evaluate both execution time and energy used on a range of embedded accelerator SoCs. These include three ARM CPUs and two mobile GPUs. Understanding how these low power devices perform on scientific workloads will be critical in the selection of appropriate hardware for these supercomputers, for how can we estimate the performance of tens of thousands of these chips if the performance of one is largely unknown?
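For concreteness, a minimal Gaussian-elimination kernel with wall-clock timing of the kind one might run on such devices is sketched below; the matrix size and NumPy style are illustrative choices, not the paper's benchmark code:

```python
# Forward elimination with partial pivoting, timed; reduces a matrix to
# upper-triangular form, the compute-heavy core of Gaussian elimination.
import time
import numpy as np

def gaussian_eliminate(a: np.ndarray) -> np.ndarray:
    a = a.astype(np.float64, copy=True)
    n = a.shape[0]
    for k in range(n - 1):
        p = k + np.argmax(np.abs(a[k:, k]))   # partial pivot row
        a[[k, p]] = a[[p, k]]                 # swap rows k and p
        factors = a[k + 1:, k] / a[k, k]
        a[k + 1:, k:] -= np.outer(factors, a[k, k:])  # eliminate column k
    return a

m = np.random.rand(512, 512)
t0 = time.perf_counter()
gaussian_eliminate(m)
print(f"512x512 elimination: {time.perf_counter() - t0:.3f} s")
```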
Smart Internet of Things (IoT) applications will rely on advanced IoT platforms that provide access not only to IoT sensors and actuators, but also to cloud services and data analytics. Future IoT platforms should thus provide both connectivity and intelligence. One approach to connecting IoT devices and IoT networks to cloud networks and services is to use network federation mechanisms over the internet to create network slices across heterogeneous platforms. Network slices also need to be protected from potential external and internal threats. In this paper we describe an approach for enforcing global security policies in federated cloud and IoT networks. Our approach allows a global security policy to be defined in the form of a single service manifest and enforced across all federation network segments. It relies on network function virtualisation (NFV) and service function chaining (SFC) to enforce the security policy. The approach is illustrated with two case studies: one for a user who wishes to securely access IoT devices, and another in which an IoT infrastructure administrator wishes to securely access remote cloud and data analytics services.
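Purely conceptually, the single-manifest idea can be sketched as below; all manifest fields, function names, and segment names are hypothetical, and no real NFV/SFC machinery is involved:

```python
# Conceptual sketch: one global security manifest, expanded into the same
# ordered service function chain for every federated network segment.
manifest = {
    "allow": [("user-net", "iot-net", "mqtt")],   # permitted flows (assumed)
    "chain": ["firewall", "ids", "tls-proxy"],    # required security functions
}

segments = ["user-net", "transit-net", "iot-net"]

def render_chains(manifest, segments):
    """Each segment enforces the same globally defined chain."""
    return {seg: [f"{seg}/{fn}" for fn in manifest["chain"]] for seg in segments}

for seg, chain in render_chains(manifest, segments).items():
    print(seg, "->", " -> ".join(chain))
```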
Digital forensic investigators today are faced with numerous problems when recovering footprints of criminal activity that involve the use of computer systems. Investigators need the ability to recover evidence in a forensically sound manner, even when criminals actively work to alter the integrity, veracity, and provenance of data, applications and software that are used to support illicit activities. In many ways, operating systems (OS) can be strengthened from a technological viewpoint to support verifiable, accurate, and consistent recovery of system data when needed for forensic collection efforts. In this paper, we extend the ideas for forensic-friendly OS design by proposing the use of a practical form of computing on encrypted data (CED) and computing with encrypted functions (CEF) which builds upon prior work on component encryption (in circuits) and white-box cryptography (in software). We conduct experiments on sample programs to provide analysis of the approach based on security and efficiency, illustrating how component encryption can strengthen key OS functions and improve tamper-resistance to anti-forensic activities. We analyze the tradeoff space for use of the algorithm in a holistic approach that provides additional security and comparable properties to fully homomorphic encryption (FHE).
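As a toy illustration of computing on encrypted data in general (not the paper's component encryption or white-box construction), textbook RSA is multiplicatively homomorphic, so a party holding only ciphertexts can compute an encryption of a product:

```python
# Toy CED demo with tiny textbook-RSA parameters (insecure, illustration only):
# Enc(a) * Enc(b) mod n decrypts to a * b, computed without seeing a or b.
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

c_product = (enc(6) * enc(7)) % n   # operate on ciphertexts only
print(dec(c_product))               # -> 42
```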
It is difficult to assess the security of modern enterprise networks because they are usually dynamic, with configuration changes (e.g., changes in topology, firewall rules). Graphical security models (e.g., Attack Graphs and Attack Trees) and security metrics (e.g., attack cost, shortest attack path) are widely used to systematically analyse the security posture of network systems. However, there are problems using them to assess the security of dynamic networks. First, the existing graphical security models are unable to capture dynamic changes occurring in the networks over time. Second, the existing security metrics are not designed for dynamic networks, so their effectiveness under dynamic changes in the network is still unknown. In this paper, we conduct a comprehensive analysis via simulations to evaluate the effectiveness of security metrics using a Temporal Hierarchical Attack Representation Model. Further, we investigate the varying effects of security metrics when changes are observed in the dynamic networks. Our experimental analysis shows that different security metrics have varying security posture changes with respect to changes in the network.
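One metric from the family discussed, the shortest attack path over a weighted attack graph, is easy to sketch; the node names, edge costs, and use of networkx below are illustrative assumptions:

```python
# Shortest attack path on a weighted attack graph, recomputed as the
# network changes; edge weight = assumed attack cost of the exploit step.
import networkx as nx

g = nx.DiGraph()
g.add_weighted_edges_from([
    ("internet", "web", 2.0), ("web", "db", 5.0),
    ("internet", "vpn", 4.0), ("vpn", "db", 1.0),
])

print(nx.shortest_path(g, "internet", "db", weight="weight"))
print(nx.shortest_path_length(g, "internet", "db", weight="weight"))

# A "dynamic" change, e.g. a firewall rule closing web -> db, shifts the metric:
g.remove_edge("web", "db")
print(nx.shortest_path(g, "internet", "db", weight="weight"))
```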
Here we explore the applicability of the traditional sliding-window-based convolutional neural network (CNN) detection pipeline and of region-based object detection techniques such as Faster Region-based CNN (R-CNN) and Region-based Fully Convolutional Networks (R-FCN) to the problem of object detection in X-ray security imagery. Within this context, with limited dataset availability, we employ a transfer learning paradigm for network training, tackling both single and multiple object detection problems over a number of R-CNN/R-FCN variants. The use of a first-stage region proposal network within Faster R-CNN and R-FCN provides superior results to the traditional sliding-window-driven CNN (SWCNN) approach. With Faster R-CNN and VGG16, pretrained on the ImageNet dataset, we achieve 88.3 mAP on a six-class X-ray object detection problem. R-FCN with ResNet-101 yields 96.3 mAP on the two-class firearm detection problem, requiring 0.1 seconds of computation per image. Overall, we illustrate the comparative performance of these techniques as object localization strategies within cluttered X-ray security imagery.
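A hedged sketch of the transfer-learning setup described, using torchvision's Faster R-CNN (ResNet-50 FPN backbone) as a stand-in for the paper's VGG16/ResNet-101 configurations:

```python
# Start from a detector pretrained on generic imagery and swap its
# classification head for the X-ray classes, then fine-tune.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 6 + 1  # six X-ray object classes + background

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
# Fine-tune `model` on the X-ray dataset with a standard detection training loop.
```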
Wireless sensor networks (WSNs) are one of the most rapidly developing information technologies and promise a variety of applications in Next Generation Networks (NGNs), including the IoT. In this paper, the focus is on developing new methods for efficiently managing large-scale networks composed of homogeneous wireless sensors/devices in urban environments such as homes, hospitals, stores and industrial compounds; heterogeneous networks are considered for comparison with homogeneous ones. The efficiency of these networks depends on several optimization parameters, such as redundancy and the percentages of coverage and energy saved. We tested the algorithm using different densities of sensors in the network and different values of the tuning parameters for the optimization parameters. The results show that our proposed algorithm performs better than the comparable greedy algorithm. Moreover, networks with more sensors maintain more redundancy and better coverage, but consume more energy. The same method will be applied to heterogeneous wireless sensor networks, where devices have different characteristics and the network operates more efficiently.
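A toy greedy scheduler in the spirit of the baseline discussed is sketched below: repeatedly activate the sensor that covers the most still-uncovered points until a coverage target is met. The coverage sets and target are assumptions for illustration:

```python
# Greedy coverage: pick the sensor covering the most uncovered points.
def greedy_cover(sensors, points, target=0.9):
    """sensors: {name: set(point_ids)}; points: set(point_ids)."""
    uncovered, active = set(points), []
    while uncovered and len(uncovered) > (1 - target) * len(points):
        best = max(sensors, key=lambda s: len(sensors[s] & uncovered))
        if not sensors[best] & uncovered:
            break  # no remaining sensor adds coverage
        active.append(best)
        uncovered -= sensors.pop(best)
    return active, 1 - len(uncovered) / len(points)

sensors = {"s1": {1, 2, 3}, "s2": {3, 4}, "s3": {4, 5, 6}, "s4": {2, 3}}
print(greedy_cover(sensors, {1, 2, 3, 4, 5, 6}))  # -> (['s1', 's3'], 1.0)
```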
Service composition is currently done by (hierarchical) orchestration and choreography. However, these approaches do not support explicit control flow and total compositionality, which are crucial for the scalability of service-oriented systems. In this paper, we propose exogenous connectors for service composition. These connectors support both explicit control flow and total compositionality in hierarchical service composition. To validate and evaluate our proposal, we present a case study based on the popular MusicCorp.
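Conceptually (all names below are invented, not DX-MAN's API), an exogenous connector owns the control flow and invokes services, which never call one another; because a composite connector is itself invocable, composition is total and hierarchical:

```python
# Sketch of an exogenous sequencer connector: control flow lives in the
# connector, not in the services, and composites compose again unchanged.
class Sequencer:
    def __init__(self, *units):
        self.units = units  # services or nested connectors

    def __call__(self, data):
        for unit in self.units:
            data = unit(data)
        return data

take_payment = lambda order: {**order, "paid": True}
ship_order   = lambda order: {**order, "shipped": True}

checkout = Sequencer(take_payment, ship_order)  # composite service
store    = Sequencer(checkout)                  # composed again, hierarchically
print(store({"item": "album"}))
```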
According to the 2016 Internet Security Threat Report by Symantec, there are around 431 million known variants of malware. This effort focuses on malware used for spying on users' activities, remotely controlling devices, and identity and credential theft within Windows-based operating systems. As Windows operating systems create and maintain a log of all events that are encountered, various malware samples are tested on virtual machines to determine which events they trigger in the Windows logs. The observations are compiled into operating-system-specific lookup tables that can then be used to find the tested malware on other computers running the same operating system.
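An illustrative matcher for such a lookup table might look like the following; the table entries and exported-record format are hypothetical, not the paper's observations:

```python
# Scan exported Windows event records against an OS-specific lookup table
# of event patterns previously observed for each malware sample (assumed).
MALWARE_SIGNATURES = {
    "sample_A": [{"event_id": 7045, "source": "Service Control Manager"}],
    "sample_B": [{"event_id": 4688, "source": "Security"}],
}

def matches(record, pattern):
    return all(record.get(k) == v for k, v in pattern.items())

def scan(records):
    hits = []
    for name, patterns in MALWARE_SIGNATURES.items():
        if all(any(matches(r, p) for r in records) for p in patterns):
            hits.append(name)
    return hits

log = [{"event_id": 7045, "source": "Service Control Manager", "msg": "svc"}]
print(scan(log))  # -> ['sample_A']
```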
6LoWPAN is a communication protocol for the Internet of Things: the IPv6 protocol adapted for low-power and lossy personal area networks. 6LoWPAN inherits threats from its predecessors, IPv4 and IPv6. IP spoofing is a known attack prevalent in IPv4 and IPv6 networks, but new vulnerabilities create new paths leading to the attack. This study experimentally checks the feasibility of performing an IP spoofing attack on a 6LoWPAN network. An intruder misuses 6LoWPAN control messages, which results in a wrong IPv6-to-MAC binding in the router. The attack is also simulated in the Cooja simulator, and the simulation results are analyzed to find the cost to the attacker in terms of energy and memory consumption.
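A simplified simulation of the misuse described, in which a forged registration overwrites the router's IPv6-to-MAC binding, is sketched below; the message fields are assumptions and no real 6LoWPAN stack is involved:

```python
# Naive router binding table: last registration wins, so a forged control
# message poisons the IPv6 -> MAC binding unless conflicts are rejected.
binding_table = {}

def register(ipv6, mac, verify=False):
    if verify and ipv6 in binding_table and binding_table[ipv6] != mac:
        return False  # reject a conflicting re-registration
    binding_table[ipv6] = mac
    return True

register("fe80::1", "00:11:22:33:44:55")             # legitimate node
register("fe80::1", "de:ad:be:ef:00:01")             # spoofed control message
print(binding_table["fe80::1"])                      # traffic now misdirected

binding_table.clear()
register("fe80::1", "00:11:22:33:44:55", verify=True)
print(register("fe80::1", "de:ad:be:ef:00:01", verify=True))  # -> False
```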
Trust networks have been widely used to mitigate the data sparsity and cold-start problems of collaborative filtering. Recently, some approaches have been proposed which exploit explicit signed trust relationships, i.e., trust and distrust relationships. These approaches ignore the fact that users, despite trusting/distrusting each other in a trust network, may have different preferences in real life. Most of these approaches also treat distrust as transitive, like trust; however, other existing work has observed that trust is transitive while distrust is intransitive. Moreover, explicit signed trust relationships are fairly sparse and may not suffice to infer users' true preferences. In this paper, we propose to create implicit signed trust relationships and exploit them along with explicit signed trust relationships to solve the sparsity problem of trust relationships. We also confirm the similarity (resp. dissimilarity) of implicit and explicit trust (resp. distrust) relationships by using the similarity score between users, so that users' true preferences can be inferred. In addition to these strategies, we propose a matrix factorization model that simultaneously exploits implicit and explicit signed trust relationships along with rating information, and that handles the transitivity of trust and the intransitivity of distrust. Extensive experiments on the Epinions dataset show that the proposed approach outperforms existing approaches in terms of accuracy.
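A minimal matrix-factorization sketch with a signed-trust regulariser is shown below: trust pulls a user's latent factors toward a neighbour's, distrust pushes them apart. The loss shape, data, and hyperparameters are assumptions, not the paper's exact model:

```python
# SGD matrix factorization with a signed-trust regulariser (toy data).
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 4, 5, 3
U = rng.normal(scale=0.1, size=(n_users, k))
V = rng.normal(scale=0.1, size=(n_items, k))

ratings = [(0, 1, 4.0), (1, 1, 5.0), (2, 3, 2.0)]   # (user, item, rating)
signed_trust = {0: [(1, +1), (2, -1)]}              # user 0 trusts 1, distrusts 2

def train(epochs=200, lr=0.02, reg=0.05, beta=0.05):
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - U[u] @ V[i]
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * U[u] - reg * V[i])
        for u, links in signed_trust.items():
            for v, sign in links:
                # trust (+1) pulls factors together; distrust (-1) pushes apart
                U[u] -= lr * beta * sign * (U[u] - U[v])

train()
print(round(float(U[0] @ V[1]), 2))  # predicted rating of user 0 on item 1
```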
Presented at the Symposium and Bootcamp in the Science of Security (HotSoS 2017), poster session in Hanover, MD, April 4-5, 2017.
In a world where traditional notions of privacy are increasingly challenged by the myriad companies that collect and analyze our data, it is important that decision-making entities are held accountable for unfair treatments arising from irresponsible data usage. Unfortunately, a lack of appropriate methodologies and tools means that even identifying unfair or discriminatory effects can be a challenge in practice. We introduce the unwarranted associations (UA) framework, a principled methodology for the discovery of unfair, discriminatory, or offensive user treatment in data-driven applications. The UA framework unifies and rationalizes a number of prior attempts at formalizing algorithmic fairness. It uniquely combines multiple investigative primitives and fairness metrics with broad applicability, granular exploration of unfair treatment in user subgroups, and incorporation of natural notions of utility that may account for observed disparities. We instantiate the UA framework in FairTest, the first comprehensive tool that helps developers check data-driven applications for unfair user treatment. It enables scalable and statistically rigorous investigation of associations between application outcomes (such as prices or premiums) and sensitive user attributes (such as race or gender). Furthermore, FairTest provides debugging capabilities that let programmers rule out potential confounders for observed unfair effects. We report on use of FairTest to investigate and in some cases address disparate impact, offensive labeling, and uneven rates of algorithmic error in four data-driven applications. As examples, our results reveal subtle biases against older populations in the distribution of error in a predictive health application and offensive racial labeling in an image tagger.
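The kind of association test FairTest automates can be sketched with a chi-squared test between an application outcome and a sensitive attribute; the data and column names below are fabricated for illustration and echo the predictive-health example:

```python
# Chi-squared association between algorithmic error and an age group.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.DataFrame({
    "age_group": ["<40", "<40", "40+", "40+", "40+", "<40"] * 50,
    "error":     [0, 0, 1, 1, 0, 0] * 50,
})

table = pd.crosstab(df["age_group"], df["error"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p:.3g}")  # small p -> error rate associated with age
```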
In this paper, a novel feature selection method for detecting botnets in their Command and Control (C&C) phase is presented. A major problem is that researchers have proposed features based on their expertise, but there is no method to evaluate these features, even though some of them yield lower detection rates than others. To this end, we find the feature set, based on connections of botnets in their C&C phase, that maximizes the detection rate of these botnets. A Genetic Algorithm (GA) was used to select the set of features that gives the highest detection rate, and the machine learning algorithm C4.5 was used to classify connections as belonging to a botnet or not. The datasets used in this paper were extracted from the ISOT and ISCX repositories. Tests were run to find the best parameters for the GA and the C4.5 algorithm. We also performed experiments to obtain the best set of features for each botnet analyzed (specific) and for each type of botnet (general). The results show a considerable reduction in the number of features and a higher detection rate than the related work.
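A compact sketch of this approach follows: a GA searches over feature-subset bitmasks, scoring each mask with a decision-tree classifier (scikit-learn's CART standing in for C4.5); the data is synthetic and the GA settings are assumptions:

```python
# GA feature selection wrapped around a decision tree (toy data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=1)

def fitness(mask):
    if not mask.any():
        return 0.0
    clf = DecisionTreeClassifier(random_state=1)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]), dtype=bool)
for _ in range(15):                                   # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]           # truncation selection
    cut = rng.integers(1, X.shape[1], size=10)
    kids = np.array([np.concatenate([parents[i][:c], parents[-i - 1][c:]])
                     for i, c in enumerate(cut)])     # one-point crossover
    kids ^= rng.random(kids.shape) < 0.02             # bit-flip mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(m) for m in pop])]
print(best.sum(), "features selected, accuracy:", round(fitness(best), 3))
```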



