Biblio
With the rapid development of the Internet of Things (IoT) and big data, the number of network terminal devices and the volume of transmitted data are increasing rapidly. Traditional cloud computing faces a great challenge in dealing with this massive amount of data. Fog computing, which extends computing to the edge of the network, can provide both computation and data storage. Attribute-based encryption can effectively achieve fine-grained access control. However, the computational complexity of encryption and decryption grows linearly with the number of attributes. In order to reduce the computational cost and guarantee the confidentiality of data, distributed access control with outsourced computation in fog computing is proposed in this paper. In our proposed scheme, the fog device bears most of the computational cost in the encryption and decryption phases, so the computational cost for the receiver and sender is reduced. Moreover, the private key of the user is generated by multiple authorities, which enhances the security of the data. The analysis of security and performance shows that our proposed scheme is effective and secure.
Quickest detection of false data injection attacks (FDIAs) in dynamic smart grids is considered in this paper. The unknown time-varying state variables of the smart grid and the FDIAs impose a significant challenge for designing a computationally efficient detector. To address this challenge, we propose new Cumulative-Sum-type algorithms whose computational complexity scales linearly with the number of meters. Moreover, for any constraint on the expected false alarm period, a lower bound on the threshold employed in the proposed algorithm is provided. For any given threshold employed in the proposed algorithm, an upper bound on the worst-case expected detection delay is also derived. The proposed algorithm is numerically investigated in the context of an IEEE standard power system under FDIAs, and is shown to outperform a representative algorithm in the test case.
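As an illustration of the linear-in-meters complexity claimed above, here is a minimal sketch of a generic CUSUM-type change detector run over per-meter measurement residuals; the drift and threshold parameters and the simple summed statistic are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def cusum_fdia_detector(residuals, drift, threshold):
    """Generic CUSUM-type detector over per-meter measurement residuals.

    residuals: (T, M) array of residuals for T time steps and M meters.
    drift:     allowance subtracted at each step (tuning parameter).
    threshold: alarm threshold; larger values mean fewer false alarms but longer delay.
    Returns the first time index at which an alarm is raised, or None.
    """
    M = residuals.shape[1]
    stat = np.zeros(M)                    # one CUSUM statistic per meter: O(M) work per step
    for t, r in enumerate(residuals):
        stat = np.maximum(0.0, stat + np.abs(r) - drift)
        if stat.sum() > threshold:        # aggregate statistic crosses the threshold
            return t
    return None

# toy usage: clean residuals for 50 steps, then a bias injected on meters 2 and 5
rng = np.random.default_rng(0)
clean = rng.normal(0, 1, size=(50, 10))
attacked = rng.normal(0, 1, size=(50, 10))
attacked[:, [2, 5]] += 3.0
print(cusum_fdia_detector(np.vstack([clean, attacked]), drift=1.5, threshold=40.0))
```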
An improved implementation of the RSA cryptographic algorithm for encrypting and decrypting information flows, based on the vector-modular method of modular exponentiation, is presented in this paper. This makes it possible to replace the complex operation of modular multiplication with the addition operation, which increases the speed of the RSA cryptosystem. Schemes of the modular multiplication and modular exponentiation algorithms are presented. An analytical and graphical comparison of the time complexities of the proposed and known approaches shows that the vector-modular method reduces the time complexity of modular exponentiation compared to the classical one.
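The abstract does not spell out the vector-modular method itself; for reference, the sketch below shows the classical square-and-multiply modular exponentiation that the proposed approach is compared against, with a toy RSA round-trip using textbook-sized parameters.

```python
def mod_exp_classical(base, exponent, modulus):
    """Classical square-and-multiply modular exponentiation.

    Each step performs a modular multiplication; the vector-modular method
    described in the paper aims to replace these multiplications with
    cheaper additions.
    """
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent & 1:                 # current exponent bit is 1
            result = (result * base) % modulus
        base = (base * base) % modulus   # square for the next bit
        exponent >>= 1
    return result

# toy RSA round-trip with small primes (p=61, q=53, n=3233, e=17, d=2753)
n, e, d, message = 3233, 17, 2753, 65
ciphertext = mod_exp_classical(message, e, n)
assert mod_exp_classical(ciphertext, d, n) == message
```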
In the paradigm of network coding, information-theoretic security is considered in the presence of wiretappers, who can access one arbitrary edge subset up to a certain size, referred to as the security level. Secure network coding is applied to prevent the leakage of the source information to the wiretappers. In this paper, we consider the problem of secure network coding for flexible pairs of information rate and security level with any fixed dimension (equal to the sum of rate and security level). We present a novel approach for designing a secure linear network code (SLNC) such that the same SLNC can be applied for all the rate and security-level pairs with the fixed dimension. We further develop a polynomial-time algorithm for efficient implementation and prove that there is no penalty on the required field size for the existence of SLNCs in terms of the best known lower bound by Guang and Yeung. Finally, by applying our approach as a crucial building block, we can construct a family of SLNCs that not only can be applied to all possible pairs of rate and security level but also share a common local encoding kernel at each intermediate node in the network.
In computer-based solutions to today's problems, a problem of high complexity can impose demands such as the simultaneous operation of many computers, long processing times for the algorithms involved, and hardware capable of delivering high performance. For this reason, computers based on quantum physics will inevitably be used in the near future: they can render today's cryptosystems unsafe, search servers and other information storage centers on the Internet very quickly, solve NP-hard optimization problems with very large solution spaces, analyze information through large-scale data processing, and process high-resolution images for artificial intelligence applications. In this study, quantum approaches and quantum computers, which will be widely used in the near future, were examined, and the areas in which such innovation can be used were evaluated. The malicious and non-malicious uses of quantum computers of this capacity, and the advantages and disadvantages of the high performance they provide, were examined from the standpoint of security, and the effect of this emerging technology on existing security systems was investigated.
We study secret sharing schemes for general (non-threshold) access structures. A general secret sharing scheme for n parties is associated to a monotone function F: {0,1}^n → {0,1}. In such a scheme, a dealer distributes shares of a secret s among n parties. Any subset of parties T ⊆ [n] should be able to put together their shares and reconstruct the secret s if F(T)=1, and should have no information about s if F(T)=0. One of the major long-standing questions in information-theoretic cryptography is to minimize the (total) size of the shares in a secret-sharing scheme for arbitrary monotone functions F. There is a large gap between lower and upper bounds for secret sharing. The best known scheme for general F has shares of size 2^{n−o(n)}, but the best lower bound is Ω(n²/log n). Indeed, the exponential share size is a direct result of the fact that in all known secret-sharing schemes, the share size grows with the size of a circuit (or formula, or monotone span program) for F. Several researchers have suggested the existence of a representation-size barrier which implies that the right answer is closer to the upper bound, namely 2^{n−o(n)}. In this work, we overcome this barrier by constructing a secret sharing scheme for any access structure with shares of size 2^{0.994n} and a linear secret sharing scheme for any access structure with shares of size 2^{0.999n}. As a contribution of independent interest, we also construct a secret sharing scheme with shares of size 2^{Õ(√n)} for 2^(n choose n/2) monotone access structures, out of a total of 2^((n choose n/2)·(1+O(log n/n))) of them. Our construction builds on recent works that construct better protocols for the conditional disclosure of secrets (CDS) problem.
The IRC botnet is the earliest botnet family and has had a significant impact. Its defining characteristic is controlling multiple zombie hosts through the IRC protocol and constructing command-and-control channels. Related research analyzes the large amount of network traffic generated by command interaction between the botnet clients and the C&C server. Packet-capture traffic monitoring on the network is currently a relatively effective detection method, but this information does not reflect the essential characteristics of the IRC botnet, and misjudgments frequently occur. To identify whether botnet control servers belong to a homologous botnet, dynamic network communication characteristic curves are extracted. For these unequal-length time series, dynamic time warping distance clustering is used to identify homologous botnets by category, and, in order to improve detection speed, the experiments use SAX to reduce the dimension of the extracted curves, reducing the time cost without reducing accuracy.
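A minimal sketch of the dynamic time warping distance used for comparing unequal-length communication characteristic curves; the example curves and the absolute-difference local cost are illustrative assumptions, and the SAX dimension-reduction step is omitted.

```python
import numpy as np

def dtw_distance(series_a, series_b):
    """Dynamic time warping distance between two sequences of unequal length."""
    n, m = len(series_a), len(series_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(series_a[i - 1] - series_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# two traffic "characteristic curves" of different lengths
curve_1 = [0, 1, 1, 2, 5, 5, 2, 1]
curve_2 = [0, 1, 2, 5, 2, 1]
print(dtw_distance(curve_1, curve_2))
```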
Resilient control systems should efficiently restore control over physical systems not only after the control systems themselves are sabotaged, but also after the physical systems are damaged. To enhance the resilience of control systems, given an originally minimally-controlled linear time-invariant (LTI) physical system, we address the problem of efficiently recovering control after a known system vertex is removed, by finding the minimum number of additional inputs. According to the minimum input theorem, given a digraph embedding the LTI model together with a precomputed maximum matching, this problem is modeled as recovering the controllability of the digraph after removing a known network vertex. We then recover controllability of the residual network by efficiently repairing the maximum matching rather than recomputing one from scratch. As a result, excluding the precomputation of a maximum matching and given the removed vertex, the worst-case execution time of control recovery for the residual LTI physical system is linear.
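To make the maximum-matching connection concrete, the sketch below applies the minimum input theorem (number of driver nodes = max(1, N − size of a maximum matching) in the bipartite representation of the digraph) by computing a matching from scratch with augmenting paths; the paper's contribution is avoiding exactly this recomputation after a vertex removal, so this is only the baseline idea.

```python
def maximum_matching(adj, n):
    """Maximum bipartite matching on the bipartite representation of a digraph:
    left copy = tail endpoints, right copy = head endpoints of every edge u -> v."""
    match_right = {}                      # right vertex -> matched left vertex

    def try_augment(u, visited):
        for v in adj.get(u, []):
            if v in visited:
                continue
            visited.add(v)
            # v is free, or its current partner can be re-matched elsewhere
            if v not in match_right or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    return sum(try_augment(u, set()) for u in range(n))

# minimum input theorem: number of driver nodes = max(1, n - |maximum matching|)
edges = {0: [1], 1: [2], 2: [3], 4: [3]}  # a small digraph on 5 vertices
n = 5
print(max(1, n - maximum_matching(edges, n)))
```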
Feature extraction and feature selection are the first pre-processing tasks applied to input logs in order to detect cybersecurity threats and attacks using data mining techniques from the field of Artificial Intelligence. When it comes to the analysis of heterogeneous data derived from different sources, these tasks are time-consuming and difficult to manage efficiently. In this paper, we present an approach for handling feature extraction and feature selection with machine learning algorithms for security analytics of heterogeneous data derived from different network sensors. The approach is implemented in Apache Spark, using its Python API, PySpark.
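A minimal PySpark sketch of the two pre-processing tasks, assuming hypothetical column names for pre-parsed sensor logs: VectorAssembler for feature extraction and ChiSqSelector for feature selection. The paper's actual pipeline and chosen algorithms may differ.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler, ChiSqSelector

spark = SparkSession.builder.appName("security-analytics").getOrCreate()

# hypothetical pre-parsed log records from different network sensors
logs = spark.createDataFrame(
    [(120.0, 3.0, 1.0, 0.0), (4500.0, 40.0, 0.0, 1.0), (80.0, 2.0, 1.0, 0.0)],
    ["bytes_sent", "conn_count", "tls_used", "label"],
)

# feature extraction: assemble raw columns into a single feature vector
assembler = VectorAssembler(
    inputCols=["bytes_sent", "conn_count", "tls_used"], outputCol="features"
)

# feature selection: keep the top-k features by chi-squared score
selector = ChiSqSelector(
    numTopFeatures=2, featuresCol="features",
    labelCol="label", outputCol="selected_features",
)

assembled = assembler.transform(logs)
selector.fit(assembled).transform(assembled).select("selected_features", "label").show()
```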
Machine learning (ML) algorithms provide a good solution for many security-sensitive applications; however, they themselves face the threat of adversarial attacks. As a key problem in machine learning, designing feature selection algorithms that are robust against such attacks has become a pressing issue. Current research on defending against evasion attacks mainly focuses on the wrapped adversarial feature selection algorithm (WAFS), which depends on the classification algorithm and whose time cost is very high for large-scale data. mRMR (minimum Redundancy and Maximum Relevance) is one of the most popular filter algorithms for feature selection, as it does not consider any classifier during the feature selection process. In this paper, we therefore propose a novel adversary-aware feature selection algorithm under the filter model based on mRMR, named FAFS. On the one hand, the algorithm takes into account both the correlation between each feature and the label and the redundancy between features; on the other hand, when selecting features, it considers not only the generalization ability in the absence of attack, but also the robustness under attack. The performance of four algorithms, i.e., mRMR, TWFS (Traditional Wrapped Feature Selection), WAFS, and FAFS, is evaluated on spam filtering and PDF malware detection in perfect-knowledge attack scenarios. The experimental results show that FAFS performs better under evasion attacks with lower time cost and comparable classification accuracy.
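For orientation, here is a minimal sketch of the standard (non-adversarial) greedy mRMR filter that FAFS builds on: relevance is measured as mutual information between a feature and the label, and redundancy as the mean mutual information with already-selected features. The adversary-aware robustness term of FAFS is not reproduced here.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

def mrmr_select(X, y, k):
    """Greedy mRMR: maximize relevance I(feature; label) minus mean redundancy
    with the already-selected features. Standard filter baseline (not FAFS)."""
    n_features = X.shape[1]
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best_score, best_j = -np.inf, None
        for j in range(n_features):
            if j in selected:
                continue
            redundancy = np.mean([mutual_info_score(X[:, j], X[:, s]) for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best_score, best_j = score, j
        selected.append(best_j)
    return selected

# toy example: 200 samples, 6 discrete features, select the 3 best
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 6)).astype(float)
y = (X[:, 0] + X[:, 3] > 2).astype(int)
print(mrmr_select(X, y, k=3))
```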
Motivated by the study of matrix elimination orderings in combinatorial scientific computing, we utilize graph sketching and local sampling to give a data structure that provides access to approximate fill degrees of a matrix undergoing elimination in polylogarithmic time per elimination and query. We then study the problem of using this data structure in the minimum degree algorithm, which is a widely-used heuristic for producing elimination orderings for sparse matrices by repeatedly eliminating the vertex with (approximate) minimum fill degree. This leads to a nearly-linear time algorithm for generating approximate greedy minimum degree orderings. Despite extensive studies of algorithms for elimination orderings in combinatorial scientific computing, our result is the first rigorous incorporation of randomized tools in this setting, as well as the first nearly-linear time algorithm for producing elimination orderings with provable approximation guarantees. While our sketching data structure readily works in the oblivious adversary model, by repeatedly querying and greedily updating itself, it enters the adaptive adversarial model where the underlying sketches become prone to failure due to dependency issues with their internal randomness. We show how to use an additional sampling procedure to circumvent this problem and to create an independent access sequence. Our technique for decorrelating interleaved queries and updates to this randomized data structure may be of independent interest.
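For context, the sketch below shows the classical greedy minimum degree heuristic with exact degrees and explicit fill edges; this is the baseline whose repeated degree computations the paper replaces with graph sketching and local sampling to reach nearly-linear time.

```python
import heapq

def minimum_degree_ordering(adjacency):
    """Classical greedy minimum degree ordering: repeatedly eliminate the vertex
    of smallest current degree and connect its remaining neighbors (fill edges)."""
    graph = {v: set(nbrs) for v, nbrs in adjacency.items()}
    order = []
    heap = [(len(nbrs), v) for v, nbrs in graph.items()]
    heapq.heapify(heap)
    eliminated = set()
    while heap:
        deg, v = heapq.heappop(heap)
        if v in eliminated or deg != len(graph[v]):
            continue                               # stale heap entry, skip it
        eliminated.add(v)
        order.append(v)
        nbrs = graph.pop(v)
        for a in nbrs:                             # add fill edges among neighbors
            graph[a].discard(v)
            graph[a] |= nbrs - {a}
            heapq.heappush(heap, (len(graph[a]), a))
    return order

# sparsity pattern of a small symmetric matrix, given as a graph
pattern = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 4}, 3: {1, 4}, 4: {2, 3}}
print(minimum_degree_ordering(pattern))
```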
The successive interference cancellation (SIC) receiver is adopted as the baseline receiver scheme for power-domain non-orthogonal multiple access (NOMA), taking the expected evolution of mobile devices into account. Emerging technologies and advanced techniques are being considered to achieve power saving in many networks and to reach sustainable and reliable communication given the envisioned huge amount of data to be delivered. In this paper, we propose a novel NOMA-SIC scheme for the sake of balancing the trade-off between system performance and complexity. In the proposed scheme, each SIC stage comprises a matched filter (MF), an MF detector, and a regenerator. In simulations, the proposed scheme demonstrates the best power-saving performance, with energy efficiency increasing as the number of NOMA device pairs grows.
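A simplified two-user power-domain NOMA example of the SIC principle (detect the stronger signal, regenerate it, subtract it, then detect the weaker one) using BPSK and hard decisions; the matched-filter detector and regenerator structure of the proposed scheme are abstracted away, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# two-user power-domain NOMA: high power for the far user, low power for the near user
p_far, p_near, noise_std = 0.8, 0.2, 0.05
bits_far = rng.integers(0, 2, 1000)
bits_near = rng.integers(0, 2, 1000)
symbols_far = 2.0 * bits_far - 1.0            # BPSK mapping
symbols_near = 2.0 * bits_near - 1.0

# superposed transmit signal plus AWGN at the near user's receiver
received = np.sqrt(p_far) * symbols_far + np.sqrt(p_near) * symbols_near
received += rng.normal(0, noise_std, received.shape)

# SIC stage 1: detect the stronger (far-user) signal, regenerate, and subtract it
detected_far = np.sign(received)
residual = received - np.sqrt(p_far) * detected_far

# SIC stage 2: detect the near user's own signal from the residual
detected_near = np.sign(residual)

print("far-user BER:", np.mean(detected_far != symbols_far))
print("near-user BER:", np.mean(detected_near != symbols_near))
```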
To improve customer experience, datacenter operators offer support for simplifying application and resource management. For example, running workloads of workflows on behalf of customers is desirable, but requires increasingly sophisticated autoscaling policies, that is, policies that dynamically provision resources for the customer. Although selecting and tuning autoscaling policies is a challenging task for datacenter operators, so far relatively few studies investigate the performance of autoscaling for workloads of workflows. Complementing previous knowledge, in this work we conduct the first comprehensive performance study in the field. Using trace-based simulation, we compare state-of-the-art autoscaling policies across multiple application domains, workload arrival patterns (e.g., burstiness), and system utilization levels. We further investigate the interplay between autoscaling and regular allocation policies, and the complexity cost of autoscaling. Our quantitative study focuses not only on traditional performance metrics and on state-of-the-art elasticity metrics, but also on time- and memory-related autoscaling-complexity metrics. Our main results give strong and quantitative evidence about previously unreported operational behavior, for example, that autoscaling policies perform differently across application domains and that allocation and provisioning policies should be co-designed.
In this paper, we investigate the Bayesian filtering problem for discrete-time nonlinear dynamical systems that contain random parameters. An augmented cubature Kalman filter (CKF) is developed to deal with the random parameters, where the state vector is enlarged by incorporating the random parameters. The corresponding number of cubature points is increased, so the augmented CKF method incurs a higher computational cost. However, the estimation accuracy is improved in comparison with that of the classical CKF method, which uses the nominal values of the random parameters. An application to mobile source localization with time difference of arrival (TDOA) measurements and random sensor positions is provided, where the simulation results illustrate that the augmented CKF method achieves superior performance in comparison with the classical CKF method.
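A minimal sketch of the idea behind the augmented CKF: the uncertain model parameter is stacked into the state vector, so the third-degree cubature points also sample the parameter. The process model and noise values below are illustrative assumptions, and the measurement-update step is omitted.

```python
import numpy as np

def cubature_points(mean, cov):
    """Third-degree spherical-radial cubature points: 2n equally weighted points."""
    n = mean.size
    sqrt_cov = np.linalg.cholesky(cov)
    unit = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # n x 2n directions
    return mean[:, None] + sqrt_cov @ unit                    # n x 2n points

def cubature_predict(mean, cov, process_fn, process_noise):
    """Propagate the cubature points through the (nonlinear) process model."""
    pts = cubature_points(mean, cov)
    propagated = np.column_stack([process_fn(pts[:, i]) for i in range(pts.shape[1])])
    pred_mean = propagated.mean(axis=1)
    centered = propagated - pred_mean[:, None]
    pred_cov = centered @ centered.T / pts.shape[1] + process_noise
    return pred_mean, pred_cov

# augmented state: the kinematic state is stacked with an uncertain model parameter,
# so the cubature points also sample the random parameter (the augmentation idea)
state = np.array([0.0, 1.0, 0.95])          # [position, velocity, uncertain damping]
cov = np.diag([0.1, 0.1, 0.01])
f = lambda x: np.array([x[0] + x[1], x[2] * x[1], x[2]])   # damping affects velocity
print(cubature_predict(state, cov, f, process_noise=0.01 * np.eye(3)))
```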
Stochastic Computing (SC) is an alternative design paradigm particularly useful for applications where cost is critical. SC has been applied to neural networks, as neural networks are known for their high computational complexity. However, previous work in this area has critical limitations, such as the fully-parallel architecture assumption, which prevents it from being applied to recent networks such as convolutional neural networks (ConvNets). This paper presents the first SC architecture for ConvNets and shows its feasibility, with detailed analyses of implementation overheads. Our SC-ConvNet is a hybrid between SC and conventional binary design, which is a marked difference from earlier SC-based neural networks. Though this might seem like a compromise, it is a novel feature driven by the need to support modern ConvNets at scale, which commonly have many large layers. Our proposed architecture also features hybrid layer composition, which helps achieve very high recognition accuracy. Our detailed evaluation results, involving functional simulation and RTL synthesis, suggest that SC-ConvNets are indeed competitive with conventional binary designs, even without considering the inherent error resilience of SC.
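For readers unfamiliar with SC, the sketch below shows the basic primitive underlying such designs: in unipolar SC, a single AND gate multiplies the probabilities carried by two independent bitstreams. This is only the gate-level building block, not the hybrid SC/binary ConvNet architecture itself.

```python
import numpy as np

rng = np.random.default_rng(42)

def to_bitstream(value, length):
    """Encode a probability in [0, 1] as a unipolar stochastic bitstream."""
    return (rng.random(length) < value).astype(np.uint8)

def sc_multiply(a, b, length=4096):
    """Unipolar SC multiplication: a bitwise AND of two independent bitstreams
    multiplies the probabilities they carry."""
    stream_a = to_bitstream(a, length)
    stream_b = to_bitstream(b, length)
    return np.mean(stream_a & stream_b)     # decode: fraction of 1s

# a ConvNet-style multiply-accumulate would chain many such cheap gate-level
# multiplications; here we just check one product against the exact value
print(sc_multiply(0.8, 0.5), "vs exact", 0.8 * 0.5)
```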
Different organizations or countries may adopt different PKI trust models in real applications. On a large scale, all certification authorities (CAs) and end entities form a huge mesh network, so the PKI trust model as a whole exhibits an unstructured mesh topology. However, the mesh trust model worsens the computational complexity of certification path processing as the number of PKI domains increases. This paper proposes an enhanced mesh trust model for PKI. Key generation and signing are performed in a Trusted Platform Module (TPM) for a higher security level. An algorithm is suggested to improve the performance of certification path processing in this model. The resulting trust model is less complex and more efficient and robust than existing PKI trust models.
We consider the problem of verifying the security of finitely many sessions of a protocol that tosses coins in addition to using standard cryptographic primitives, against a Dolev-Yao adversary. Two properties are investigated here: secrecy, which asks if no adversary interacting with a protocol P can determine a secret sec with probability > 1 − p; and indistinguishability, which asks if the probability of observing any sequence of observations ō in P1 is the same as that of observing ō in P2, under the same adversary. Both secrecy and indistinguishability are known to be coNP-complete for non-randomized protocols. In contrast, we show that, for randomized protocols, secrecy and indistinguishability are both decidable in coNEXPTIME. We also prove a matching lower bound for the secrecy problem by reducing the non-satisfiability problem of monadic first-order logic without equality.
Constraints such as separation-of-duty are widely used to specify requirements that supplement basic authorization policies. However, the existence of constraints (and authorization policies) may mean that a user is unable to fulfill her/his organizational duties because access to resources is denied. In short, there is a tension between the need to protect resources (using policies and constraints) and the availability of resources. Recent work on workflow satisfiability and resiliency in access control asks whether this tension compromises the ability of an organization to achieve its objectives. In this paper, we develop a new method of specifying constraints which subsumes much related work and allows a wider range of constraints to be specified. The use of such constraints leads naturally to a range of questions related to "policy existence", where a positive answer means that an organization's objectives can be realized. We provide an overview of our results establishing that some policy existence questions, notably for those instances that are restricted to user-independent constraints, are fixed-parameter tractable.
In vehicular networks, privacy, especially vehicles' location privacy, is a major concern. Several pseudonym-based privacy protection mechanisms have been established and standardized in the past few years by IEEE and ETSI. However, vehicular networks are still vulnerable to Sybil attacks. In this paper, a Sybil attack detection method based on the k-Nearest Neighbours (kNN) classification algorithm is proposed. In this method, vehicles are classified based on the similarity of their driving patterns. Furthermore, the high runtime complexity of the kNN method is also optimized. The simulation results show that our detection method can reach a high detection rate while keeping the error rate low.
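A minimal sketch of the detection idea using scikit-learn's kNN classifier on hypothetical driving-pattern features (Sybil identities forged by one attacker tend to cluster tightly in feature space); the features, labels, and the choice of k are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(7)

# hypothetical driving-pattern features per vehicle: [mean speed, speed variance,
# mean inter-message distance]; Sybil identities forged by one attacker share
# nearly identical patterns, while legitimate vehicles are more spread out
legit = rng.normal([15.0, 4.0, 25.0], [3.0, 1.5, 8.0], size=(80, 3))
sybil = rng.normal([12.0, 0.5, 2.0], [0.3, 0.1, 0.5], size=(40, 3))
X = np.vstack([legit, sybil])
y = np.array([0] * 80 + [1] * 40)           # 0 = legitimate, 1 = Sybil

knn = KNeighborsClassifier(n_neighbors=5)   # brute-force kNN scans all samples per
knn.fit(X, y)                               # query; index structures reduce runtime

suspect = np.array([[12.1, 0.4, 1.8]])      # a new identity with attacker-like pattern
print("Sybil" if knn.predict(suspect)[0] == 1 else "legitimate")
```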
Cloud computing is an extension of parallel and distributed computing. Cloud computing technology is becoming more and more widely used, and one of the fundamental issues in the cloud environment is task scheduling. However, scheduling in cloud environments is a difficult problem since it is basically NP-complete. Thus, many variants based on approximation techniques, especially those inspired by Swarm Intelligence (SI), have been proposed. This paper proposes a machine learning algorithm that guides the cloud in choosing a scheduling technique, using multi-criteria decision making to optimize performance. The main contribution of our work is to minimize the makespan of a given task set. The new strategy is simulated using the CloudSim toolkit, where the impact of the algorithm is checked with different numbers of VMs varying from 2 to 50, and different task sizes between 30 bytes and 2700 bytes. Experimental results show that the proposed algorithm reduces the execution time and the makespan by between 7% and 75%, and improves the performance of load-balancing scheduling.
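As a point of reference for the makespan objective, here is a greedy minimum-completion-time heuristic, a common baseline that ML-guided or swarm-based schedulers aim to improve on; the task sizes and VM speeds are illustrative, and this is not the paper's CloudSim-based strategy.

```python
def greedy_schedule(task_lengths, vm_speeds):
    """Greedy minimum-completion-time heuristic: assign each task to the VM that
    would finish it earliest, then report the makespan of the schedule."""
    finish_times = [0.0] * len(vm_speeds)
    assignment = []
    for length in sorted(task_lengths, reverse=True):   # longest tasks first
        completion = [finish_times[i] + length / vm_speeds[i] for i in range(len(vm_speeds))]
        vm = min(range(len(vm_speeds)), key=lambda i: completion[i])
        finish_times[vm] = completion[vm]
        assignment.append(vm)
    return assignment, max(finish_times)                # makespan = latest finish time

# toy instance: 8 task sizes and 3 VMs of different relative speeds
tasks = [2700, 1800, 900, 600, 450, 300, 120, 30]
vms = [2.0, 1.0, 1.0]
assignment, makespan = greedy_schedule(tasks, vms)
print(assignment, makespan)
```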
Swarm Intelligence (SI) algorithms, such as Fish School Search (FSS), are well known as useful tools for achieving a good solution to complex optimization problems in a reasonable amount of time. When problems increase in size and complexity, an increase in population size or number of iterations may be needed to achieve a good solution. In extreme cases, the execution time can become huge, and other approaches, such as parallel implementations, might help to reduce it. This paper investigates the relation and trade-off among these three aspects of SI algorithms, namely population size, number of iterations, and problem complexity. The results with a parallel implementation of FSS show that increasing the population size is beneficial for finding good solutions. However, we observed an asymptotic behavior in the results, i.e., increasing the population beyond a certain threshold leads only to slight improvements.
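A small sketch of the population-size versus wall-clock-time trade-off using parallel fitness evaluation (here with Python's multiprocessing on the sphere benchmark); this is not the FSS algorithm itself, only the parallel evaluation pattern such studies rely on, and all sizes and dimensions are illustrative.

```python
import time
import numpy as np
from multiprocessing import Pool

def sphere(position):
    """Benchmark fitness function (minimum 0 at the origin)."""
    return float(np.sum(position ** 2))

def evaluate_population(size, dim=30, workers=4):
    """Evaluate one random population in parallel and return its best fitness."""
    rng = np.random.default_rng(size)
    population = [rng.uniform(-5, 5, dim) for _ in range(size)]
    with Pool(workers) as pool:
        fitness = pool.map(sphere, population)
    return min(fitness)

if __name__ == "__main__":
    # larger populations tend to find better candidates per iteration,
    # but with diminishing returns and growing wall-clock cost
    for size in (32, 128, 512, 2048):
        start = time.time()
        best = evaluate_population(size)
        print(f"population={size:5d}  best={best:8.2f}  time={time.time() - start:.2f}s")
```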