Biblio
Fog computing extends cloud computing technology to the edge of the infrastructure to support dynamic computation for IoT applications. Reduced latency and location awareness in accessing objects' data are attained by displacing workloads from the central cloud to edge devices. In doing so, it reduces raw data transfers from target objects to the central cloud, thus overcoming communication bottlenecks. This is a key step towards the pervasive uptake of next-generation IoT-based services. In this work we study the efficient orchestration of applications in fog computing, where a fog application is the cascade of a cloud module and a fog module. The problem results in a mixed-integer non-linear optimisation. It involves multiple constraints due to the computation and communication demands of fog applications and the available infrastructure resources, and it also accounts for the location of the target IoT objects. We show that it is possible to reduce the complexity of the original problem with a related placement formulation, which is further solved using a greedy algorithm. This algorithm is the core placement logic of FogAtlas, a fog computing platform based on existing virtualization technologies. Extensive numerical results validate the model and the scalability of the proposed algorithm, showing performance close to the optimal solution with respect to the number of served applications.
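A minimal sketch of one possible greedy placement heuristic for cascaded cloud/fog modules, only to make the setting concrete: it is not the FogAtlas placement logic, and the `Node`/`Application` structures, capacities and latency figures are illustrative assumptions.

```python
# Illustrative greedy placement of fog applications (fog module on an edge
# node, cloud module in the central cloud). NOT the FogAtlas algorithm.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu: float                  # residual CPU capacity (assumed units)
    latency_to_objects: float   # assumed latency towards the target IoT objects

@dataclass
class Application:
    name: str
    fog_cpu: float              # CPU demand of the fog module
    cloud_cpu: float            # CPU demand of the cloud module

def greedy_place(apps, edge_nodes, cloud):
    """Serve as many applications as possible, preferring low-latency edge nodes."""
    placement = {}
    # Cheapest applications first, to maximise the number of served applications.
    for app in sorted(apps, key=lambda a: a.fog_cpu + a.cloud_cpu):
        feasible = [n for n in edge_nodes if n.cpu >= app.fog_cpu]
        if not feasible or cloud.cpu < app.cloud_cpu:
            continue  # application cannot be served with the remaining resources
        best = min(feasible, key=lambda n: n.latency_to_objects)
        best.cpu -= app.fog_cpu
        cloud.cpu -= app.cloud_cpu
        placement[app.name] = (best.name, "cloud")
    return placement

edge = [Node("edge-1", cpu=4.0, latency_to_objects=2.0),
        Node("edge-2", cpu=2.0, latency_to_objects=5.0)]
cloud = Node("cloud", cpu=100.0, latency_to_objects=50.0)
apps = [Application("app-a", fog_cpu=1.5, cloud_cpu=3.0),
        Application("app-b", fog_cpu=3.0, cloud_cpu=2.0)]
print(greedy_place(apps, edge, cloud))
```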
Control-Flow Hijacking attacks are the dominant attack vector against C/C++ programs. Control-Flow Integrity (CFI) solutions mitigate these attacks on the forward edge, i.e., indirect calls through function pointers and virtual calls. Protecting the backward edge is left to stack canaries, which are easily bypassed through information leaks. Shadow Stacks are a fully precise mechanism for protecting backward edges, and should be deployed with CFI mitigations. We present a comprehensive analysis of all possible shadow stack mechanisms along three axes: performance, compatibility, and security. For performance comparisons we use SPEC CPU2006, while security and compatibility are qualitatively analyzed. Based on our study, we renew calls for a shadow stack design that leverages a dedicated register, resulting in low performance overhead and minimal memory overhead, but sacrificing compatibility. We present case studies of our implementation of such a design, Shadesmar, on Phoronix and Apache to demonstrate the feasibility of dedicating a general purpose register to a security monitor on modern architectures, and Shadesmar's deployability. Our comprehensive analysis, including detailed case studies for our novel design, allows compiler designers and practitioners to select the correct shadow stack design for different usage scenarios. Shadow stacks belong to the class of defense mechanisms that require metadata about the program's state to enforce their defense policies. Protecting this metadata for deployed mitigations requires in-process isolation of a segment of the virtual address space. Prior work on defenses in this class has relied on information hiding to protect metadata. We show that stronger guarantees are possible by repurposing two new Intel x86 extensions for memory protection (MPX) and page table control (MPK). Building on our isolation efforts with MPX and MPK, we present the design requirements for a dedicated hardware mechanism to support intra-process memory isolation, and discuss how such a mechanism can empower the next wave of highly precise software security mitigations that rely on partially isolated information in a process.
With the remarkable success of deep learning, Deep Neural Networks (DNNs) have been applied as dominant tools in various machine learning domains. Despite this success, however, it has been found that DNNs are surprisingly vulnerable to malicious attacks; adding small, perceptually indistinguishable perturbations to the data can easily degrade classification performance. Adversarial training is an effective defense strategy for training a robust classifier. In this work, we propose to utilize a generator to learn how to create adversarial examples. Unlike existing approaches that create a one-shot perturbation with a deterministic generator, we propose a recursive and stochastic generator that produces much stronger and more diverse perturbations, comprehensively revealing the vulnerability of the target classifier. Our experimental results on the MNIST and CIFAR-10 datasets show that a classifier adversarially trained with our method is more robust against various white-box and black-box attacks.
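A minimal PyTorch sketch of the general idea: a stochastic generator proposes perturbations recursively, is trained to maximise the classifier loss, and the classifier is trained on the generated adversarial examples. The network sizes, recursion depth, perturbation bound and random data below are illustrative assumptions, not the paper's architecture or training schedule.

```python
# Sketch: adversarial training driven by a recursive, stochastic perturbation
# generator. Illustrative only; hyperparameters and models are made up.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps (input, random noise) to a bounded perturbation."""
    def __init__(self, dim, noise_dim=16, eps=0.3):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(nn.Linear(dim + noise_dim, 128), nn.ReLU(),
                                 nn.Linear(128, dim))
    def forward(self, x, z):
        return self.eps * torch.tanh(self.net(torch.cat([x, z], dim=1)))

def perturb(gen, x, steps=3, noise_dim=16):
    """Recursive, stochastic perturbation: feed the perturbed input back in."""
    x_adv = x
    for _ in range(steps):
        z = torch.randn(x.size(0), noise_dim)       # fresh noise -> diverse attacks
        x_adv = (x_adv + gen(x_adv, z)).clamp(0, 1)
    return x_adv

dim = 784
clf = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 10))
gen = Generator(dim)
opt_c = torch.optim.Adam(clf.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)

x, y = torch.rand(64, dim), torch.randint(0, 10, (64,))  # stand-in for an MNIST batch
# Generator step: maximise the classifier loss on generated adversarial examples.
loss_g = -F.cross_entropy(clf(perturb(gen, x)), y)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
# Classifier step: minimise the loss on freshly generated adversarial examples.
loss_c = F.cross_entropy(clf(perturb(gen, x).detach()), y)
opt_c.zero_grad(); loss_c.backward(); opt_c.step()
```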
Personal privacy is an important issue when publishing social network data. An attacker may have background information that allows private data to be re-identified, so many researchers have developed anonymization techniques such as k-anonymity, k-isomorphism, and l-diversity. In this paper, we focus on graph k-degree anonymity achieved by editing edges. Our method is divided into two steps. First, we propose an efficient algorithm to find a new degree sequence with theoretically minimal edit cost. Second, we insert and delete edges based on the new degree sequence to achieve k-degree anonymity.
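To illustrate the first step only, here is a sketch of the classic greedy way to make a degree sequence k-anonymous (every degree value shared by at least k nodes). It is not necessarily the paper's minimal-cost algorithm, and the second step (inserting/deleting edges to realise the new sequence) is omitted.

```python
# Greedy k-anonymization of a degree sequence: sort descending, form groups of
# at least k nodes, and raise each group's degrees to the group maximum.
def k_anonymize_degrees(degrees, k):
    """Return a k-anonymous degree sequence obtained by raising degrees."""
    d = sorted(degrees, reverse=True)
    out, i = [], 0
    while i < len(d):
        # If fewer than 2k items remain, the rest must form a single group.
        size = len(d) - i if len(d) - i < 2 * k else k
        group_max = d[i]              # sorted descending, so the first is largest
        out.extend([group_max] * size)
        i += size
    return out

degrees = [5, 4, 4, 3, 2, 2, 1, 1]
print(k_anonymize_degrees(degrees, k=3))   # -> [5, 5, 5, 3, 3, 3, 3, 3]
```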
The advent of smart grids offers us the opportunity to better manage the electricity grid. One of the most interesting challenges in modern grids is consumer demand management. Indeed, developments in Information and Communication Technologies (ICTs) encourage the development of demand-side management systems. In this paper, we propose a distributed energy demand scheduling approach that uses minimal interaction between consumers to optimize the energy demand. We formulate consumption scheduling as a constrained optimization problem and use game theory to solve it. On the one hand, the proposed approach aims to reduce the total energy cost of a building's consumers, which requires cooperation among all consumers to achieve the collective goal. On the other hand, the privacy of each user must be protected, which means that our distributed approach must operate with minimal information exchange. The performance evaluation shows that the proposed approach reduces the total energy cost, each consumer's individual cost, and the peak-to-average ratio.
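A toy sketch of the flavour of such a scheme: consumers take turns best-responding, each seeing only the aggregate load of the others (minimal information exchange). The quadratic price function, the energy needs, and the incremental water-filling step are illustrative assumptions, not the paper's game formulation.

```python
# Toy distributed demand scheduling by best-response iteration.
import numpy as np

H = 24
price = lambda load: load ** 2                   # assumed convex hourly cost
energy = [6.0, 4.0, 8.0]                         # daily energy need per consumer (kWh)
schedules = [np.full(H, e / H) for e in energy]  # start from flat profiles

for _ in range(50):                              # best-response rounds
    for i, e in enumerate(energy):
        others = sum(s for j, s in enumerate(schedules) if j != i)
        # Water-filling: push this consumer's energy towards the cheapest hours.
        x = np.zeros(H)
        for _ in range(int(round(e / 0.1))):     # allocate in 0.1 kWh increments
            h = np.argmin(price(others + x + 0.1) - price(others + x))
            x[h] += 0.1
        schedules[i] = x

total = sum(schedules)
print("peak-to-average ratio:", total.max() / total.mean())
```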
There are two types of network architectures: wired networks and wireless networks. A MANET is one example of a wireless network. Every network has its own features that distinguish it from other types of network. MANETs are characterized by the absence of infrastructure, node mobility, and a dynamic network topology, which make them different from, and more popular than, wired networks; however, these same features also create security problems, because there is no centralized authority inside the network and data must be delivered between mobile nodes. Achieving security in a wired network is comparatively easy, since it suffices to protect the main centralized authority, whereas a MANET has no centralized authority, so protecting the server is more difficult. Likewise, sending and receiving data is straightforward in a wired network, while mobility makes it harder in a MANET. Protecting a server or central repository without secret sharing creates many security challenges and problems. The proposed system uses secret sharing to protect the server from malicious nodes and `A New Particle Swarm Optimization Method for MANETs' (NPSOM) to perform data sending and receiving in an optimized way. NPSOM is compared with the standard particle swarm optimizer (PSO), originally designed by Kennedy and Eberhart in 1995. These methods are based on four different types of parameters and were inspired by the collective behaviour of animals, such as bird flocking, fish schooling, and ant colonies. The proposed system maps PSO onto MANETs: a particle corresponds to a node in the network, the swarm is the collection of nodes, and optimization means finding the best and shortest route to the destination. Each particle remembers its own previous best solution to the given optimization problem, observes the swarm's previous best solution to the same problem, and adjusts its own solution based on these values; this process is repeated to find the best, optimal solution. In the NPSOM technique used in the proposed system, each particle additionally updates its position according to its own previous worst solution and the swarm's previous worst solution when searching for the best value. In this proposed system we concentrate on sidestepping each particle's and the swarm's previous worst solutions.
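To make the update rule concrete, here is a plain PSO sketch on a toy continuous objective, with an optional repulsion term away from the remembered worst positions in the spirit of NPSOM. The exact NPSOM update and its mapping onto MANET routing are not reproduced; coefficients, bounds and the objective are illustrative assumptions.

```python
# Standard PSO with an added repulsion from each particle's remembered worst
# position (an NPSOM-flavoured term). Illustrative sketch only.
import numpy as np

def pso(objective, dim=2, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, c3=0.5):
    rng = np.random.default_rng(0)
    x = rng.uniform(-5, 5, (particles, dim))      # positions
    v = np.zeros_like(x)                          # velocities
    pbest, pbest_f = x.copy(), objective(x)       # personal bests
    pworst, pworst_f = x.copy(), objective(x)     # personal worsts (to avoid)
    gbest = pbest[pbest_f.argmin()].copy()        # global best
    for _ in range(iters):
        r1, r2, r3 = rng.random((3, particles, dim))
        v = (w * v
             + c1 * r1 * (pbest - x)              # attraction to personal best
             + c2 * r2 * (gbest - x)              # attraction to global best
             - c3 * r3 * (pworst - x))            # repulsion from personal worst
        x = x + v
        f = objective(x)
        better, worse = f < pbest_f, f > pworst_f
        pbest[better], pbest_f[better] = x[better], f[better]
        pworst[worse], pworst_f[worse] = x[worse], f[worse]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

sphere = lambda x: (x ** 2).sum(axis=1)           # toy objective
print(pso(sphere))
```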
Humans have created pioneering works of art since the beginning of time, yet there are few notable achievements in which artificial intelligence creates something visually captivating. However, some breakthroughs were made in the past few years by learning the differences between the content and style of an image using convolutional neural networks and texture synthesis. Most of these approaches are limited in processing time, in the choice of style image, or in the ability to alter the weight ratio of the style image. We address these restrictions and provide a system that allows any style image to be selected, with a user-defined style weight ratio, in the minimum time possible.
Urban traffic is currently complex, and traditional logistics distribution path optimization algorithms are not sensitive to changes in road conditions, which limits their applicability to real logistics distribution; we therefore study logistics distribution path optimization based on a deep belief network. First, we build a traffic forecasting model based on the deep belief network, train it on a large amount of traffic data, and validate it. On this basis, we combine the predicted road conditions with the traffic network to build a time-share traffic network, amend the accessible-node set and the pheromone variable of the ant colony algorithm according to this network, and propose a logistics distribution path optimization algorithm based on traffic forecasting. Finally, we verify the superiority and practical value of the algorithm in actual distribution through comparative experiments with other logistics distribution path optimization algorithms.
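The paper's routing engine is a modified ant colony algorithm; as a simpler illustration of the key ingredient, the sketch below shows a time-share traffic network in which edge travel times are looked up per departure period (e.g. from a forecast) and a time-dependent shortest-path search finds the fastest delivery route. The graph, period length and forecast values are made up.

```python
# Time-dependent fastest route on a toy "time-share" traffic network.
import heapq

# travel_time[(u, v)][p] = minutes to traverse edge (u, v) when departing in period p
PERIOD = 60  # minutes per forecast period; the values below are made-up forecasts
travel_time = {
    ("depot", "a"): [10, 25, 15], ("a", "customer"): [12, 30, 12],
    ("depot", "b"): [20, 20, 20], ("b", "customer"): [8, 8, 35],
}

def edge_cost(u, v, t):
    periods = travel_time[(u, v)]
    return periods[min(int(t // PERIOD), len(periods) - 1)]

def fastest_route(src, dst, depart=0):
    graph = {}
    for (u, v) in travel_time:
        graph.setdefault(u, []).append(v)
    best = {src: depart}
    queue = [(depart, src, [src])]
    while queue:
        t, u, path = heapq.heappop(queue)
        if u == dst:
            return t - depart, path
        for v in graph.get(u, []):
            arrival = t + edge_cost(u, v, t)     # cost depends on departure time
            if arrival < best.get(v, float("inf")):
                best[v] = arrival
                heapq.heappush(queue, (arrival, v, path + [v]))
    return None

print(fastest_route("depot", "customer", depart=70))  # departing in period 1
```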
Software verification has been applied successfully in safety-critical areas and has shown its ability to provide better quality assurance for modern software. However, as the lines of code and the complexity of software systems increase, the scalability of verification becomes a challenge. In this paper, we present an automatic software verification framework, TSV, to address these scalability issues: (i) extended structural abstraction and property-guided program slicing to solve large-scale program verification problems, saving time and memory without losing accuracy; and (ii) automatic selection of different verification methods according to the program and property context to improve verification efficiency. For evaluation, we compare TSV's different configurations with existing C program verifiers on open benchmarks. We find that TSV with auto-selection performs better than with bounded model checking only or with extended structural abstraction only. Compared to existing tools such as CBMC and CPAchecker, it achieves a 10%-20% improvement in accuracy and a 50%-90% reduction in memory consumption.
In recent decades, the web and online services have revolutionized the modern world. However, as our dependence on online services increases, online security threats are also increasing rapidly. One of the most common online security threats is the so-called phishing attack, whose purpose is to mimic a legitimate website, such as an online banking, e-commerce or social networking site, in order to obtain sensitive data such as usernames, passwords, and financial and health-related information from potential victims. The problem of detecting phishing websites has been addressed many times using various methodologies, from conventional classifiers to more complex hybrid methods. Recent advances in deep learning suggest that classifying phishing websites with deep neural networks should outperform traditional machine learning algorithms. However, the results of deep neural networks heavily depend on the setting of different learning parameters. In this paper, we propose a swarm intelligence based approach to the parameter setting of deep learning neural networks. By applying the proposed approach to the classification of phishing websites, we were able to improve their detection compared to existing algorithms.
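A sketch of how a swarm optimiser can drive the parameter setting: each particle position decodes into concrete hyperparameters, and the fitness is the (negated) cross-validated accuracy of the resulting network. The hyperparameter ranges, the scikit-learn MLP, and the random stand-in features are illustrative assumptions, not the paper's configuration; the position vector can be handed to any swarm optimiser (e.g. the PSO sketch above).

```python
# Particle encoding and fitness function for swarm-based DNN parameter setting.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def decode(position):
    """Map a particle position in [0, 1]^3 to concrete hyperparameters."""
    lr = 10 ** (-4 + 3 * position[0])          # learning rate in [1e-4, 1e-1]
    hidden = int(8 + position[1] * 120)        # hidden units in [8, 128]
    alpha = 10 ** (-6 + 4 * position[2])       # L2 penalty in [1e-6, 1e-2]
    return lr, hidden, alpha

def fitness(position, X, y):
    """Negative cross-validated accuracy (swarm optimisers minimise)."""
    lr, hidden, alpha = decode(position)
    clf = MLPClassifier(hidden_layer_sizes=(hidden,), learning_rate_init=lr,
                        alpha=alpha, max_iter=300)
    return -cross_val_score(clf, X, y, cv=3).mean()

# Stand-in for extracted phishing-website features and labels.
X, y = np.random.rand(200, 30), np.random.randint(0, 2, 200)
print(fitness(np.array([0.5, 0.5, 0.5]), X, y))
```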
``Style transfer'' among images has recently emerged as a very active research topic, fuelled by the power of convolutional neural networks (CNNs), and has quickly become a very popular technology in social media. This paper investigates the analogous problem in the audio domain: how to transfer the style of a reference audio signal to a target audio content? We propose a flexible framework for the task, which uses a sound texture model to extract statistics characterizing the reference audio style, followed by optimization-based audio texture synthesis to modify the target content. In contrast to mainstream optimization-based visual style transfer methods, the proposed process is initialized with the target content instead of random noise, and the optimized loss concerns only texture, not structure. These differences proved key for audio style transfer in our experiments. To extract features of interest, we investigate different architectures, either pre-trained on other tasks, as done in image style transfer, or engineered based on the human auditory system. Experimental results on different types of audio signal confirm the potential of the proposed approach.
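A minimal sketch of the optimisation step as described above: match the Gram (texture) statistics of the style signal while initialising the optimisation from the content signal, with a texture-only loss. A fixed random 1-D convolutional layer stands in for the pre-trained or auditory-model feature extractors studied in the paper, and the random tensors stand in for real audio clips.

```python
# Texture-only audio style transfer loop, initialised with the content signal.
import torch
import torch.nn as nn

torch.manual_seed(0)
features = nn.Sequential(nn.Conv1d(1, 64, kernel_size=512, stride=256), nn.ReLU())
for p in features.parameters():
    p.requires_grad_(False)        # the feature extractor stays fixed

def gram(x):
    f = features(x).squeeze(0)     # (channels, frames)
    return f @ f.t() / f.shape[1]  # channel-by-channel texture statistics

style = torch.randn(1, 1, 16000)   # stand-ins for 1-second audio clips
content = torch.randn(1, 1, 16000)

target_gram = gram(style).detach()
x = content.clone().requires_grad_(True)      # key point: initialise with content
opt = torch.optim.Adam([x], lr=1e-2)
for step in range(200):
    loss = ((gram(x) - target_gram) ** 2).mean()   # texture-only loss
    opt.zero_grad(); loss.backward(); opt.step()
```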