Bibliography
When today's problems are solved computationally, high-complexity problems impose demanding requirements: many computers may need to operate simultaneously, algorithms may run for long periods, and the hardware must deliver high performance. For this reason, computers based on quantum physics will inevitably come into use in the near future, with the capacity to render today's cryptosystems unsafe, to search servers and other information storage centers on the Internet very quickly, to solve NP-hard optimization problems with very large solution spaces, to analyze information in large-scale data processing, and to process high-resolution images for artificial intelligence applications. In this study, quantum approaches and quantum computers, which will be widely used in the near future, were examined, and the areas in which this innovation can be applied were evaluated. The malicious and non-malicious uses of quantum computers of this capacity, together with the advantages and disadvantages of the high performance they provide, were examined from a security perspective, and the effect of this emerging technology on existing security systems was investigated.
The advent of smart grids offers the opportunity to better manage electricity grids. One of the most interesting challenges in modern grids is consumer demand management. Indeed, developments in Information and Communication Technologies (ICTs) encourage the development of demand-side management systems. In this paper, we propose a distributed energy demand scheduling approach that uses minimal interaction between consumers to optimize the energy demand. We formulate consumption scheduling as a constrained optimization problem and use game theory to solve it. On one hand, the proposed approach aims to reduce the total energy cost of a building's consumers, which requires cooperation among all consumers to achieve the collective goal. On the other hand, the privacy of each user must be protected, which means that our distributed approach must operate with minimal information exchange. The performance evaluation shows that the proposed approach reduces the total energy cost, each consumer's individual cost, and the peak-to-average ratio.
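To make the game-theoretic scheduling idea concrete, the following minimal sketch (not the paper's algorithm; the loads, horizon, and the assumption of cost growing with total hourly load are invented for illustration) runs best-response dynamics in which each consumer re-spreads its shiftable energy toward the hours where the others' load is lowest:

    import numpy as np

    # Each consumer schedules a fixed energy budget over H hours and
    # repeatedly best-responds to the aggregate load of the others.
    # With cost increasing in total hourly load, demand drifts away
    # from peaks, lowering cost and the peak-to-average ratio.
    H, N = 24, 5                                   # hours, consumers
    rng = np.random.default_rng(0)
    energy = rng.uniform(5, 10, N)                 # kWh each must schedule
    x = rng.dirichlet(np.ones(H), N) * energy[:, None]

    def best_response(i, x):
        others = x.sum(axis=0) - x[i]              # only aggregate is shared
        alloc = np.zeros(H)
        for _ in range(int(energy[i] / 0.1)):      # greedy water-filling
            h = np.argmin(others + alloc)          # cheapest hour right now
            alloc[h] += 0.1
        return alloc

    for _ in range(20):                            # best-response dynamics
        for i in range(N):
            x[i] = best_response(i, x)

    total = x.sum(axis=0)
    print("peak-to-average ratio:", total.max() / total.mean())

Note that each consumer only ever sees the aggregate load of the others, which is the kind of minimal information exchange the abstract emphasizes.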
Industrial clusters are an important organizational form and carrier for the development of small and medium-sized enterprises, and an information service platform is an important facility of an industrial cluster. Improving the credibility of the network platform helps eliminate the adverse effects of distrust and information asymmetry on industrial clusters. The decentralization, transparency, openness, and immutability of blockchain technology make it a natural choice for the trustworthiness optimization of industrial cluster network platforms. This paper first studies trustworthiness standards for industrial cluster network platforms and constructs a new trusted framework for such platforms. It then focuses on trustworthiness optimization of the platform's data layer and application layer. The purpose of this paper is to build an industrial cluster network platform with accessible data, trustworthy information, available functions, high speed, and low resource consumption, and to promote the sustainable and efficient development of industrial clusters.
Because urban traffic is now highly complex and traditional logistics distribution path optimization algorithms are insensitive to changing road conditions, and therefore of limited use in actual logistics distribution, this paper proposes a logistics distribution path optimization algorithm based on a deep belief network. First, a traffic forecasting model based on the deep belief network is built, trained, and verified by learning from a large volume of traffic data. On this basis, the predicted road conditions are combined with the traffic network to build a time-share traffic network; the access set and the pheromone variable of the ant colony algorithm are amended according to this time-share traffic network, yielding a logistics distribution path optimization algorithm based on traffic forecasting. Finally, the superiority and practical value of the algorithm in actual distribution are verified through comparison tests against other logistics distribution path optimization algorithms.
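As a rough illustration of the time-share idea, the sketch below (a toy example with an invented graph and congestion profile, not the paper's model) makes edge travel times depend on the forecast departure hour and lets a basic ant colony search lay pheromone accordingly:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 6                                    # depot = 0, customers = 1..5
    base = rng.uniform(5, 15, (n, n))        # base travel times (minutes)

    def travel(i, j, t):                     # forecast congestion by hour
        rush = 7 <= (t // 60) % 24 <= 9      # morning rush hour
        return base[i, j] * (1.5 if rush else 1.0)

    tau = np.ones((n, n))                    # pheromone matrix
    best, best_cost = None, np.inf
    for ant in range(200):
        route, t, cost = [0], 8 * 60, 0.0    # leave depot at 08:00
        todo = set(range(1, n))
        while todo:
            i, cand = route[-1], list(todo)
            w = np.array([tau[i, j] / travel(i, j, t) for j in cand])
            j = rng.choice(cand, p=w / w.sum())
            dt = travel(i, j, t)
            route.append(j); todo.remove(j); t += dt; cost += dt
        cost += travel(route[-1], 0, t)      # return to depot
        if cost < best_cost:
            best, best_cost = route + [0], cost
        for a, b in zip(best, best[1:]):     # reinforce the best tour
            tau[a, b] += 1.0 / best_cost
    print(best, round(best_cost, 1))

In the paper's setting the congestion factor would come from the trained DBN forecast rather than a hard-coded rush-hour rule.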
The recently developed deep belief network (DBN) has been shown to be an effective methodology for solving time series forecasting problems. However, the performance of a DBN depends heavily on a reasonable setting of its hyperparameters. At present, random search, grid search, and Bayesian optimization are the most common hyperparameter optimization methods. As an alternative, a state-of-the-art derivative-free optimizer, negative correlation search (NCS), is adopted in this paper to decide the sizes of the DBN and the learning rates during training. A comparative analysis is performed between the proposed method and other popular techniques in time series forecasting experiments on two types of time series datasets. Experimental results statistically affirm the efficiency of the proposed model in obtaining better predictions than conventional neural network models.
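The following heavily simplified sketch conveys the flavor of a negatively correlated population search over two DBN hyperparameters (hidden size and learning rate); it is not the published NCS procedure, and the objective f() is a stand-in for training the DBN and returning a validation error:

    import numpy as np

    rng = np.random.default_rng(2)

    def f(x):                                  # placeholder for DBN training
        size, lr = x
        return (size - 120) ** 2 / 1e4 + (np.log10(lr) + 2.5) ** 2

    pop = np.column_stack([rng.uniform(10, 300, 8),
                           10 ** rng.uniform(-4, -1, 8)])
    sigma = np.array([30.0, 0.5])              # mutation scales
    for gen in range(100):
        for i in range(len(pop)):
            child = pop[i] + rng.normal(0, 1, 2) * sigma * [1, pop[i][1]]
            child[0] = np.clip(child[0], 10, 300)
            child[1] = np.clip(child[1], 1e-4, 1e-1)
            others = np.delete(pop, i, axis=0)
            # "negative correlation": reward distance from other searchers
            div_child = np.min(np.linalg.norm(others - child, axis=1))
            div_parent = np.min(np.linalg.norm(others - pop[i], axis=1))
            if f(child) - div_child < f(pop[i]) - div_parent:
                pop[i] = child                 # quality/diversity trade-off
    best = min(pop, key=f)
    print("hidden size ~", int(best[0]), " learning rate ~", best[1])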
To date, numerous ways have been created to learn a fusion solution from data. However, a gap exists in understanding the quality of what was learned and how trustworthy the fusion is for future (i.e., new) data. In part, the current paper is driven by the demand for so-called explainable AI (XAI). Herein, we discuss methods for XAI of the Choquet integral (ChI), a parametric nonlinear aggregation function. Specifically, we review existing indices and introduce new data-centric XAI tools. These various XAI-ChI methods are explored in the context of fusing a set of heterogeneous deep convolutional neural networks for remote sensing.
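The discrete Choquet integral itself is compact enough to state in code; the sketch below implements the standard sort-based formula with a made-up fuzzy measure on three sources (the XAI indices discussed in the paper sit on top of this computation):

    # Discrete Choquet integral of h with respect to fuzzy measure g:
    # sort inputs ascending and weight each by the measure difference
    # of the "survivor" sets A_(i) = {x_(i), ..., x_(n)}.
    def choquet(h, g):
        srcs = sorted(h, key=h.get)            # ascending by value
        total = 0.0
        for i, s in enumerate(srcs):
            A  = frozenset(srcs[i:])
            A1 = frozenset(srcs[i + 1:])
            total += h[s] * (g.get(A, 0.0) - g.get(A1, 0.0))
        return total

    # Toy fuzzy measure (monotone, g(empty) = 0, g(full set) = 1)
    g = {frozenset(): 0.0,
         frozenset('a'): 0.3, frozenset('b'): 0.4, frozenset('c'): 0.2,
         frozenset('ab'): 0.8, frozenset('ac'): 0.5, frozenset('bc'): 0.6,
         frozenset('abc'): 1.0}
    print(choquet({'a': 0.9, 'b': 0.5, 'c': 0.7}, g))   # -> 0.66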
The emerging Internet of Things (IoT) applications that leverage ubiquitous connectivity and big data are facilitating the realization of smart everything initiatives. IoT-enabled infrastructures naturally have a multi-layer system architecture with an overlaid or underlaid device network and its coexisting infrastructure network. The connectivity between different components in these two heterogeneous networks plays an important role in delivering real-time information and ensuring a high level of situational awareness. However, IoT-enabled infrastructures face cyber threats due to the wireless nature of communications. Therefore, maintaining network connectivity in the presence of adversaries is a critical task for the infrastructure network operators. In this paper, we establish a three-player, three-stage game-theoretic framework including two network operators and one attacker to capture the secure design of multi-layer infrastructure networks under limited resource allocation. We use the subgame perfect Nash equilibrium (SPE) to characterize the strategies of players with sequential moves. In addition, we assess the efficiency of the equilibrium network by comparing it with its team-optimal counterpart, in which the two network operators can coordinate. We further design a scalable algorithm to guide the construction of the equilibrium IoT-enabled infrastructure networks. Finally, we use case studies on the emerging paradigm of the Internet of Battlefield Things (IoBT) to corroborate the obtained results.
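To see what an SPE computation looks like in miniature, the sketch below runs backward induction on an invented two-stage interaction (operators commit a protection level, then an attacker best-responds); the payoffs are illustrative only and are not taken from the paper:

    # Backward induction: solve the attacker's stage first, then let
    # the operators optimize against the anticipated best response.
    operators = ["low", "high"]          # joint protection level (stage 1)
    attacks   = ["net1", "net2"]         # attacker's target (stage 2)
    U = {("low",  "net1"): (2, 5), ("low",  "net2"): (3, 4),
         ("high", "net1"): (4, 2), ("high", "net2"): (1, 3)}

    def attacker_best(d):
        return max(attacks, key=lambda a: U[(d, a)][1])

    spe_d = max(operators, key=lambda d: U[(d, attacker_best(d))][0])
    print("SPE path:", spe_d, "->", attacker_best(spe_d))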
The Internet of things (IoT) is revolutionizing the management and control of automated systems, leading to a paradigm shift in areas such as smart homes, smart cities, health care, and transportation. IoT technology is also envisioned to play an important role in improving the effectiveness of military operations in battlefields. The interconnection of combat equipment and other battlefield resources for coordinated automated decisions is referred to as the Internet of battlefield things (IoBT). IoBT networks are significantly different from traditional IoT networks due to battlefield-specific challenges, such as the absence of communication infrastructure, the heterogeneity of devices, and susceptibility to cyber-physical attacks. Combat efficiency and coordinated decision-making in war scenarios depend heavily on real-time data collection, which in turn relies on the connectivity of the network and on information dissemination in the presence of adversaries. This paper aims to build the theoretical foundations for designing secure and reconfigurable IoBT networks. Leveraging the theories of stochastic geometry and mathematical epidemiology, we develop an integrated framework to quantify the information dissemination among heterogeneous network devices. Consequently, a tractable optimization problem is formulated that can assist commanders in cost-effectively planning the network and reconfiguring it according to changing mission requirements.
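A mean-field caricature of the epidemic-style dissemination dynamics is easy to write down; the sketch below (invented contact and staleness rates, two device types, SIS-like dynamics) is only meant to convey the modeling style, not the paper's actual equations:

    import numpy as np

    beta = np.array([[0.30, 0.10],       # type-to-type contact rates
                     [0.10, 0.20]])
    delta = 0.05                         # rate at which information goes stale
    I = np.array([0.01, 0.01])           # informed fraction per device type
    dt = 0.1
    for _ in range(400):                 # forward-Euler integration
        dI = (1 - I) * (beta @ I) - delta * I
        I = I + dt * dI
    print("steady-state informed fractions:", I.round(3))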
We develop a contingency planning methodology for how a firm would build a global supply chain network with reserve manufacturing capacity that can be strategically deployed in the event that actual demand exceeds the forecast. The contingency planning approach comprises: (1) a strategic network design model for finding the profit-maximizing plant locations, manufacturing capacity and inventory investments, production levels, and product distribution; and (2) a scenario planning and risk assessment scheme to analyze the costs and benefits of alternative levels of manufacturing capacity and inventory investments. We develop an efficient heuristic procedure to solve the model. We show numerically how a firm would use our approach to explore and weigh the potential upside benefits and downside risks of alternative strategies.
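The core scenario-planning calculation can be illustrated with a toy example (all prices, demands, and capacity costs invented): compare reserve-capacity levels by their expected profit across demand scenarios, exposing the upside/downside trade-off the abstract describes:

    # Expected profit of a capacity level over weighted demand scenarios.
    scenarios = [(0.5, 100), (0.3, 140), (0.2, 200)]   # (probability, demand)
    price, unit_cost = 10.0, 6.0

    def expected_profit(capacity, capacity_cost):
        ep = sum(p * (price - unit_cost) * min(capacity, d)
                 for p, d in scenarios)
        return ep - capacity_cost

    for cap, fixed in [(120, 150), (180, 260)]:
        print("capacity", cap, "-> expected profit", expected_profit(cap, fixed))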
Training a feed-forward network for fast neural style transfer of images has proven successful, but the naive extension of processing videos frame by frame is prone to producing flickering results. We propose the first end-to-end network for online video style transfer, which generates temporally coherent stylized video sequences in near real time. Two key ideas are an efficient network that incorporates short-term coherence, and the propagation of short-term coherence to the long term, which ensures consistency over longer periods of time. Our network can incorporate different image stylization networks and clearly outperforms the per-frame baseline both qualitatively and quantitatively. Moreover, it achieves visually comparable coherence to optimization-based video style transfer while being three orders of magnitude faster.
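Short-term coherence of this kind is usually enforced with a temporal consistency loss; here is a minimal sketch of that ingredient (random arrays stand in for the stylized frames, the flow-warped previous frame, and the occlusion mask):

    import numpy as np

    def temporal_loss(curr, prev_warped, mask):
        """curr, prev_warped: (H, W, 3); mask: (H, W), 1 = traceable pixel."""
        diff = ((curr - prev_warped) ** 2).sum(axis=2)
        return (mask * diff).sum() / max(mask.sum(), 1)

    rng = np.random.default_rng(7)
    curr = rng.random((64, 64, 3))
    prev_warped = curr + 0.05 * rng.normal(size=(64, 64, 3))
    mask = (rng.random((64, 64)) > 0.1).astype(float)   # ~10% occluded
    print("temporal loss:", temporal_loss(curr, prev_warped, mask))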
Gatys et al. recently introduced a neural algorithm that renders a content image in the style of another image, achieving so-called style transfer. However, their framework requires a slow iterative optimization process, which limits its practical application. Fast approximations with feed-forward neural networks have been proposed to speed up neural style transfer. Unfortunately, the speed improvement comes at a cost: the network is usually tied to a fixed set of styles and cannot adapt to arbitrary new styles. In this paper, we present a simple yet effective approach that, for the first time, enables arbitrary style transfer in real time. At the heart of our method is a novel adaptive instance normalization (AdaIN) layer that aligns the mean and variance of the content features with those of the style features. Our method achieves speed comparable to the fastest existing approach, without the restriction to a pre-defined set of styles. In addition, our approach allows flexible user controls such as content-style trade-off, style interpolation, and color and spatial controls, all using a single feed-forward neural network.
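The AdaIN operation described here is simple enough to sketch directly; in NumPy, with feature maps of shape (C, H, W), it normalizes each content channel and re-applies the style channel's statistics:

    import numpy as np

    def adain(content, style, eps=1e-5):
        # AdaIN(x, y) = sigma(y) * (x - mu(x)) / sigma(x) + mu(y),
        # computed per channel over the spatial dimensions.
        c_mean = content.mean(axis=(1, 2), keepdims=True)
        c_std  = content.std(axis=(1, 2), keepdims=True)
        s_mean = style.mean(axis=(1, 2), keepdims=True)
        s_std  = style.std(axis=(1, 2), keepdims=True)
        return s_std * (content - c_mean) / (c_std + eps) + s_mean

    rng = np.random.default_rng(0)
    out = adain(rng.normal(0, 1, (64, 32, 32)), rng.normal(2, 3, (64, 32, 32)))
    print(out.mean().round(2), out.std().round(2))  # ~ style statistics

In the actual method this layer sits between an encoder and a decoder; the sketch shows only the statistic alignment itself.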
``Style transfer'' among images has recently emerged as a very active research topic, fuelled by the power of convolutional neural networks (CNNs), and has quickly become a very popular technology in social media. This paper investigates the analogous problem in the audio domain: how can the style of a reference audio signal be transferred to a target audio content? We propose a flexible framework for the task, which uses a sound texture model to extract statistics characterizing the reference audio style, followed by optimization-based audio texture synthesis to modify the target content. In contrast to mainstream optimization-based visual transfer methods, the proposed process is initialized with the target content instead of random noise, and the optimized loss concerns only texture, not structure. These differences proved key for audio style transfer in our experiments. To extract features of interest, we investigate different architectures, either pre-trained on other tasks, as done in image style transfer, or engineered based on the human auditory system. Experimental results on different types of audio signals confirm the potential of the proposed approach.
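A common way to realize such texture statistics is a Gram matrix of feature correlations; the sketch below shows that loss ingredient with random placeholder features (the paper's features would come from a pre-trained network or an auditory model):

    import numpy as np

    def gram(F):                         # F: (channels, time) feature map
        return F @ F.T / F.shape[1]

    def texture_loss(F_target, F_ref):   # squared Gram-matrix difference
        G1, G2 = gram(F_target), gram(F_ref)
        return np.sum((G1 - G2) ** 2)

    rng = np.random.default_rng(0)
    print(texture_loss(rng.normal(size=(16, 500)),
                       rng.normal(size=(16, 500))))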
Wireless sensor networks have attracted substantial research interest in recent years because of their unique features, such as fault tolerance and autonomous operation. Maximizing coverage under resource scarcity is a crucial problem in wireless sensor networks, and approaches that address it while maximizing network lifetime are considered prominent. Node scheduling is one such mechanism. A scheduling strategy that addresses the target coverage problem based on coverage probability and trust values was proposed in the Energy Efficient Coverage Protocol (EECP). In this paper, optimized decision rules are obtained using rough set theory to determine the number of active nodes. The results show that the proposed extension yields a smaller number of decision rules to consider when determining node states in the network; hence it improves network efficiency by reducing the number of packets transmitted and the resulting overhead.
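In rough-set terms, reducing the decision rules amounts to dropping condition attributes while keeping the rules consistent; the toy sketch below (invented attribute values, not the EECP rule base) illustrates that step:

    from collections import defaultdict

    table = [  # (coverage_prob, trust, energy) -> node decision
        (("high", "good", "high"), "ACTIVE"),
        (("high", "good", "low"),  "ACTIVE"),
        (("low",  "good", "high"), "SLEEP"),
        (("low",  "bad",  "low"),  "SLEEP"),
        (("high", "bad",  "high"), "ACTIVE"),
    ]
    # Drop the 'energy' attribute and keep only consistent rules.
    groups = defaultdict(set)
    for cond, dec in table:
        groups[cond[:2]].add(dec)
    rules = {c: d.pop() for c, d in groups.items() if len(d) == 1}
    print(rules)   # fewer rules -> fewer packets and less overhead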
With the rapid and radical evolution of information and communication technology, energy consumption for wireless communication is growing at a staggering rate, especially for wireless multimedia communication. Recently, reducing energy consumption in wireless multimedia communication has attracted increasing attention. In this paper, we propose an energy-efficient wireless image transmission scheme based on adaptive block compressive sensing (ABCS) and SoftCast, called ABCS-SoftCast. In ABCS-SoftCast, the compression distortion and transmission distortion are considered jointly, and an energy-distortion model is formulated for each image block. Then, the sampling rate (SR) and power allocation factors of each image block are optimized simultaneously. Compared with the conventional SoftCast scheme, experimental results demonstrate that energy consumption can be greatly reduced even when the received image qualities are approximately the same.
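For context, the classic SoftCast power allocation that such schemes build on gives each block a scaling factor proportional to the inverse fourth root of its variance; a minimal sketch with made-up block variances:

    import numpy as np

    lam = np.array([9.0, 4.0, 1.0, 0.25])   # per-block signal variances
    P = 10.0                                 # total transmit power budget
    g = lam ** -0.25 * np.sqrt(P / np.sum(np.sqrt(lam)))
    print("scaling factors:", g.round(3))
    print("power used:", np.sum(g ** 2 * lam).round(3))   # equals P

ABCS-SoftCast additionally optimizes a per-block sampling rate jointly with this allocation, which the sketch does not attempt.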
With the advancement of unmanned aerial vehicles (UAVs), 3D wireless mesh networks will play a crucial role in next-generation mission-critical wireless networks. Along with providing coverage over difficult terrain, they provide better spectral utilization through 3D spatial reuse. However, being wireless networks, 3D meshes are vulnerable to jamming/disruptive attacks. A jammer can disrupt communication, as well as control of the network, by intelligently causing interference to a set of nodes. This paper presents a distributed mechanism for avoiding jamming attacks by means of 3D spatial filtering, where adaptive beam nulling is used to keep the jammer in the null region and thereby bypass jamming. A Kalman filter based tracking mechanism is used to estimate the most likely trajectory of the jammer from noisy observations of the jammer's position. A beam null border is determined by calculating the confidence region of the jammer's current and next position estimates. An optimization goal is presented to calculate the optimal beam null that minimizes the number of deactivated links while maximizing the confidence that the jammer remains inside the null. The survivability of a 3D mesh network with a mobile jammer is studied through simulation, which validates a 96.65% reduction in the number of jammed nodes.
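The tracking ingredient is a standard constant-velocity Kalman filter; the sketch below (illustrative noise levels and trajectory, not the paper's parameters) produces exactly the position estimate and covariance from which a beam-null confidence region could be drawn:

    import numpy as np

    dt = 1.0
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                  [0, 0, 1, 0], [0, 0, 0, 1]])   # state: [x, y, vx, vy]
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])   # position-only observations
    Q, R = 0.01 * np.eye(4), 0.5 * np.eye(2)     # process/measurement noise
    x, P = np.zeros(4), np.eye(4)

    def step(z):
        global x, P
        x, P = F @ x, F @ P @ F.T + Q            # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (z - H @ x)                  # update with measurement
        P = (np.eye(4) - K @ H) @ P
        return x[:2], P[:2, :2]                  # estimate + covariance

    rng = np.random.default_rng(3)
    for t in range(20):
        true = np.array([0.5 * t, 0.2 * t])      # jammer's true path
        est, cov = step(true + rng.normal(0, 0.7, 2))
    print("estimate:", est.round(2), "true:", true)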
We present a formal method for computing the best security provisioning for Internet of Things (IoT) scenarios characterized by a high degree of mobility. The security infrastructure is intended as a security resource allocation plan, computed as the solution of an optimization problem that minimizes the risk of having IoT devices not monitored by any resource. We employ the shortfall as a risk measure, a concept mostly used in economics, and adapt it to our scenario. We show how to compute and evaluate an allocation plan, and how such security solutions address the continuous topology changes that affect an IoT environment.
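On sampled data, the (expected) shortfall reduces to the mean of the worst tail of the loss distribution; a minimal sketch, with Poisson draws standing in for losses such as the number of unmonitored devices:

    import numpy as np

    def expected_shortfall(losses, alpha=0.95):
        losses = np.asarray(losses)
        var = np.quantile(losses, alpha)     # value-at-risk threshold
        return losses[losses >= var].mean()  # mean loss in the worst tail

    rng = np.random.default_rng(4)
    samples = rng.poisson(3, 10_000)         # hypothetical loss samples
    print("ES_0.95 =", expected_shortfall(samples))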
This study applies an efficient formulation to solve the stochastic security-constrained generation capacity expansion planning (GCEP) problem, using an improved method to directly compute the generalized generation distribution factors (GGDF) and the line outage distribution factors (LODF) in order to model the pre- and post-contingency constraints from the power transfer distribution factors (PTDF) alone. The classical DC-based formulation is reformulated to include the security criteria, solving both pre- and post-contingency constraints simultaneously. The methodology also accounts for load uncertainty in the optimization problem using a two-stage multi-period model, and a clustering technique is used to reduce the number of load scenarios (stochastic problem). The main advantage of this methodology is that the LODF can be computed quickly, especially for multiple-line outages (N-m). This can speed up contingency analyses and significantly improve security-constrained analyses applied to GCEP problems. It is worth mentioning that this is achieved without sacrificing optimality.
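For reference, the standard single-outage relation that PTDF-based methods of this kind build on (textbook material, not the paper's specific contribution) is: for an outage of line k, the post-contingency flow on a monitored line l is

    f_l' = f_l + LODF_{l,k} * f_k,   with   LODF_{l,k} = PTDF_{l,k} / (1 - PTDF_{k,k}),

where PTDF_{l,k} is the change in flow on line l per unit of power transferred between the terminal buses of line k. The multiple-outage (N-m) case generalizes the scalar denominator to a small matrix inverse over the set of outaged lines.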
Nowadays the application of integrated management systems (IMS) attracts the attention of top management in various organizations. However, an important problem remains: running security audits in an IMS and realizing comprehensive checks against different ISO standards at full scale while substantially reducing the resources required.
Cloud computing is an extension of parallel and distributed computing. Cloud computing technology is becoming more and more widely used, and one of the fundamental issues in the cloud environment is task scheduling. However, scheduling in cloud environments is a difficult problem since it is basically NP-complete. Thus, many variants based on approximation techniques, especially those inspired by Swarm Intelligence (SI), have been proposed. This paper proposes a machine learning algorithm that guides the cloud in choosing a scheduling technique, using multi-criteria decision making to optimize performance. The main contribution of our work is to minimize the makespan of a given task set. The new strategy is simulated using the CloudSim toolkit, where the impact of the algorithm is checked with different numbers of VMs varying from 2 to 50 and different task sizes between 30 bytes and 2700 bytes. Experimental results show that the proposed algorithm reduces the execution time and the makespan by between 7% and 75%, and improves the performance of load balancing scheduling.
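The makespan objective the paper minimizes can be illustrated with a simple greedy baseline (earliest-finish-time assignment over invented task sizes and VM speeds; the paper's ML-guided scheduler would replace this heuristic):

    import random

    random.seed(5)
    tasks = [random.randint(30, 2700) for _ in range(40)]   # task sizes
    speeds = [random.uniform(0.5, 2.0) for _ in range(8)]   # VM speeds
    finish = [0.0] * len(speeds)                            # per-VM busy time

    for size in sorted(tasks, reverse=True):                # largest first
        vm = min(range(len(speeds)),
                 key=lambda v: finish[v] + size / speeds[v])
        finish[vm] += size / speeds[vm]

    print("makespan =", round(max(finish), 1))              # max busy time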
Pattern recognition in the sparse representation (SR) framework has been very successful. In this model, a test sample is represented as a sparse linear combination of training samples by solving a norm-regularized least squares problem. However, the regularization parameter value is applied indiscriminately to the whole dictionary. To enhance the group concentration of the coefficients and to improve sparsity, we propose a new SR model called the adaptive sparse representation classifier (ASRC). In ASRC, a sparse-coefficient strengthening term is added to the objective function. The model is solved by an artificial bee colony (ABC) algorithm with a variable step to speed up convergence. A partition strategy for large-scale dictionaries is also adopted to lighten each bee's load and remove irrelevant groups. On different data sets, we empirically demonstrate the properties of the new model and its recognition performance.
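The decision rule of a sparse representation classifier is easy to sketch; below, plain ISTA stands in for the paper's ABC-based solver (dictionary, labels, and the test sample are synthetic), and the class with the smallest reconstruction residual wins:

    import numpy as np

    def ista(D, y, lam=0.1, iters=300):      # l1-regularized least squares
        L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of gradient
        x = np.zeros(D.shape[1])
        for _ in range(iters):
            g = x - D.T @ (D @ x - y) / L
            x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0)
        return x

    rng = np.random.default_rng(6)
    D = rng.normal(size=(30, 40)); D /= np.linalg.norm(D, axis=0)
    labels = np.repeat([0, 1], 20)           # two classes, 20 atoms each
    y = D[:, 5] + 0.05 * rng.normal(size=30) # noisy sample from class 0
    x = ista(D, y)
    res = [np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
           for c in (0, 1)]
    print("predicted class:", int(np.argmin(res)))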