Bibliography
In recent years, Edge Computing (EC) has attracted increasing attention for its advantages in handling latency-sensitive and compute-intensive applications, and it is becoming a widespread solution to the last-mile problem of cloud computing. However, in actual EC deployments, data confidentiality is an issue that cannot be ignored, because edge devices may be untrusted. In this paper, a secure and efficient edge computing scheme based on linear coding is proposed. In general, linear coding can achieve data confidentiality by encoding random blocks together with the original data blocks before they are distributed to unreliable edge nodes. However, adding a large number of irrelevant random blocks also incurs substantial communication overhead and high decoding complexity. In this paper, we focus on the design of secure coded edge computing using orthogonal vectors to protect the information-theoretic security of the data matrix stored on edge nodes and the input matrix uploaded by the user device, while further reducing the communication overhead and decoding complexity.
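The following is a minimal sketch, not the paper's orthogonal-vector construction, of the generic idea the abstract describes: mixing a uniformly random block into every linearly coded share so that no single untrusted edge node learns anything about the data, while a user holding enough shares can cancel the randomness. The 2-block, 3-node layout is purely illustrative.

import numpy as np

rng = np.random.default_rng(0)
A1 = rng.integers(0, 256, (4, 4), dtype=np.uint8)   # data block 1
A2 = rng.integers(0, 256, (4, 4), dtype=np.uint8)   # data block 2
R  = rng.integers(0, 256, (4, 4), dtype=np.uint8)   # uniformly random block

# Shares stored on three untrusted edge nodes; XOR over GF(2^8) acts as a
# one-time pad, so each share in isolation reveals nothing about A1 or A2.
s1, s2, s3 = A1 ^ R, A2 ^ R, A1 ^ A2 ^ R

# The user cancels the random block by linearly combining shares:
assert np.array_equal(s2 ^ s3, A1)   # (A2^R) ^ (A1^A2^R) = A1
assert np.array_equal(s1 ^ s3, A2)   # (A1^R) ^ (A1^A2^R) = A2

The cost the paper targets is visible even in this toy: the random block R inflates what must be generated, stored, and decoded, which is exactly the overhead the orthogonal-vector design aims to reduce.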
Physical Unclonable Functions (PUFs) are considered an attractive low-cost security anchor. The unique features of PUFs depend on nanoscale variations introduced during manufacturing. Most PUFs exhibit a reliability problem due to aging and an inherent sensitivity to environmental conditions. As a remedy, helper data algorithms are used in practice: a helper data algorithm generates and stores helper data during an enrollment phase in a secure environment, and the helper data are then used for error correction, which transforms the unique feature of a PUF into a reproducible key. The key can be used to encrypt secret data in a security scheme. In contrast, this work shows that fuzzy PUFs can be used to secure important data directly through an error-tolerant protocol, without an enrollment phase or an error-correction algorithm. In our proposal, the secret data are locked in a vault leveraging the unique fuzzy pattern of the PUF; despite the noise, the data can then be released only by this unique PUF. The evaluation was performed on the most prominent intrinsic PUF, the DRAM PUF. The test results demonstrate that our proposal reaches an acceptable reconstruction rate in various environments. Finally, the security of the new proposal is analyzed and discussed.
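As a toy illustration only (the paper's vault construction is not specified in the abstract), the sketch below shows how locking a secret with a noisy PUF response can tolerate bit flips without an explicit error-correction decoder: each secret bit is spread over a block of PUF bits, and a majority vote at unlock time absorbs the noise. The repetition factor and the 10% flip rate are assumptions.

import random

random.seed(1)
BLOCK = 15                                               # repetition factor (assumed)
puf = [random.randint(0, 1) for _ in range(BLOCK * 8)]   # PUF response bits
secret = [1, 0, 1, 1, 0, 0, 1, 0]

# Lock: XOR each repeated secret bit with a block of PUF bits.
vault = [secret[i // BLOCK] ^ puf[i] for i in range(len(puf))]

# Unlock with a *noisy* re-measurement of the same PUF (about 10% bit flips).
noisy = [b ^ (random.random() < 0.10) for b in puf]
unlocked = [
    int(sum(vault[i * BLOCK + j] ^ noisy[i * BLOCK + j] for j in range(BLOCK)) > BLOCK // 2)
    for i in range(len(secret))
]
print(unlocked == secret)   # True with high probability

A different PUF produces essentially random unlock bits, which is the intuition behind "released only by this unique PUF".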
Inferring unknown opinions from uncertain, adversarial (e.g., incorrect or conflicting) evidence in large datasets is not a trivial task; without proper handling, it can easily mislead decision making in data mining tasks. In this work, we propose a highly scalable probabilistic opinion inference model, Adversarial Collective Opinion Inference (Adv-COI), which infers unknown opinions with high scalability and robustness in the presence of uncertain, adversarial evidence by enhancing Collective Subjective Logic (CSL), which combines Subjective Logic (SL) and Probabilistic Soft Logic (PSL). The key idea behind Adv-COI is to learn a model that is robust against uncertain, adversarial evidence, which is formulated as a min-max problem. We validate that Adv-COI outperforms baseline models and its competitive counterparts under adversarial attacks on logic-rule-based structured data, as well as white-box and black-box adversarial attacks, on both clean and perturbed semi-synthetic and real-world datasets in three real-world applications. The results show that Adv-COI yields the lowest mean absolute error in the expected truth probability while also having the lowest running time among all methods.
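For concreteness, a min-max formulation of this kind typically takes the following generic shape (the notation here is illustrative, not the paper's): model parameters $\theta$ are optimised against the worst-case perturbation $\delta$ of the evidence within a budget $\epsilon$,

\[
\min_{\theta} \; \max_{\|\delta\| \le \epsilon} \; \mathcal{L}\bigl(\theta;\, E + \delta\bigr),
\]

where $E$ denotes the observed (possibly adversarial) evidence and $\mathcal{L}$ the opinion inference loss. The inner maximisation models the adversary; the outer minimisation learns a model robust to it.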
Scientific experiments and observations store massive amounts of data in various scientific file formats. Metadata, which describe the characteristics of the data, are commonly used to sift through massive datasets to locate data of interest to scientists. Several indexing data structures (such as hash tables, tries, self-balancing search trees, and sparse arrays) have been developed in an effort to provide efficient methods for locating target data. However, how to choose an indexing data structure remains unclear in the context of scientific data management, owing to the lack of investigation into metadata, metadata queries, and the corresponding data structures. In this study, we perform a systematic study of the essentials of metadata search in the context of scientific data management. We study a real-world astronomy observation dataset and explore the characteristics of its metadata. We also study possible metadata queries based on those characteristics, and evaluate different data structures for various types of metadata attributes. Our evaluation on a real-world dataset suggests that a trie is a suitable data structure when prefix/suffix queries are required; otherwise, a hash table should be used. We conclude with a summary of our findings, which provide a guideline and offer insights for developing metadata indexing methodologies for scientific applications.
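The sketch below is a minimal illustration of the trade-off the study reports: a hash table answers exact-match metadata lookups in expected O(1) time, while a trie additionally answers prefix queries without scanning every key. The file names are made up for illustration.

class TrieNode:
    def __init__(self):
        self.children, self.terminal = {}, False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, key):
        node = self.root
        for ch in key:
            node = node.children.setdefault(ch, TrieNode())
        node.terminal = True

    def has_prefix(self, prefix):
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return True

index = Trie()
exact = set()                      # hash-table index for exact lookups
for name in ("ngc1365.fits", "ngc1398.fits", "m31.fits"):
    index.insert(name)
    exact.add(name)

print("m31.fits" in exact)         # exact match: a hash table suffices
print(index.has_prefix("ngc13"))   # prefix query: the trie is the better fit

Suffix queries can be served by the same structure built over reversed keys, which is one common way to realise the prefix/suffix guideline above.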
The globalization of the supply chain makes semiconductor chips susceptible to various security threats. Design obfuscation techniques have been widely investigated to thwart intellectual property (IP) piracy attacks. Key distribution among IP providers, the system integration team, and end users remains a challenging problem. This work proposes an orthogonal obfuscation method, which utilizes an orthogonal matrix to authenticate obfuscation keys rather than directly examining each activation key. The proposed method hides the keys by using an orthogonal obfuscation algorithm to increase the key retrieval time, so that the primary keys for IP cores are not leaked. The simulation results show that the proposed method reduces the key retrieval time by 36.3% over the baseline. The proposed obfuscation methods have been successfully applied to ISCAS'89 benchmark circuits. Experimental results indicate that the orthogonal obfuscation increases the area by only 3.4% and consumes 4.7% more power than the baseline.
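The defining property behind such a scheme is that an orthogonal matrix Q satisfies Q^T Q = I, so a stored transform of a key can be checked without the comparator handling the key directly. The sketch below is an illustrative check only; the paper's exact hardware construction is not shown in the abstract.

import numpy as np

rng = np.random.default_rng(42)
# Build a random orthogonal matrix via QR decomposition.
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))
k = rng.standard_normal(8)                 # secret obfuscation key (assumed)
reference = Q @ k                          # stored at design time instead of k

def authenticate(candidate):
    # Compare in the transformed domain; k itself is never compared directly.
    return np.allclose(Q @ candidate, reference)

print(authenticate(k))                     # True: the correct key activates
print(authenticate(k + 1e-3))              # False: a perturbed key is rejected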
Security issues severely restrict the development and popularization of cloud computing. As a vector for data leakage, covert channels greatly threaten the security of cloud platforms. This paper introduces the types of covert channels and the state of research on them, and discusses classical detection and interference methods for shared-memory timing covert channels on cloud platforms.
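As a toy sketch of one classical detection idea such surveys cover: a timing covert channel encodes bits as short/long inter-arrival gaps, which makes the gap distribution strongly bimodal compared with benign traffic. The score below is an assumed heuristic for illustration, not a published test.

import statistics, random

random.seed(7)
benign = [random.expovariate(1 / 10.0) for _ in range(500)]           # gaps in ms
covert = [3.0 if random.random() < 0.5 else 17.0 for _ in range(500)]  # "0"/"1" gaps

def bimodality_score(gaps):
    # Crude indicator: how cleanly the gaps split into two tight clusters
    # around the overall mean.
    mean = statistics.fmean(gaps)
    lo = [g for g in gaps if g < mean]
    hi = [g for g in gaps if g >= mean]
    within = statistics.pstdev(lo) + statistics.pstdev(hi)
    return statistics.pstdev(gaps) / (within + 1e-9)

print(bimodality_score(benign) < bimodality_score(covert))  # True: channel flagged

Interference methods attack the same structure from the other side, e.g., by adding random scheduling jitter so the short/long gap pattern can no longer be decoded reliably.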
In this paper, we propose a robust Nash strategy for a class of uncertain Markov jump delay stochastic systems (UMJDSSs) via static output feedback (SOF). After establishing the extended bounded real lemma for UMJDSSs, conditions for the existence of a robust Nash strategy set are determined by means of cross-coupled stochastic matrix inequalities (CCSMIs). To solve the SOF problem, a heuristic algorithm is developed based on algebraic equations and linear matrix inequalities (LMIs); in particular, it is shown that robust convergence is guaranteed under a new convergence condition. Finally, a practical numerical example based on congestion control for active queue management is provided to demonstrate the reliability and usefulness of the proposed design scheme.
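For readers outside game-theoretic control, the standard definition underlying a "Nash strategy set" (generic, not specific to this paper) is that no player can lower its own cost by unilaterally deviating:

\[
J_i\bigl(u_i^{*},\, u_{-i}^{*}\bigr) \;\le\; J_i\bigl(u_i,\, u_{-i}^{*}\bigr)
\qquad \forall\, u_i,\; i = 1, 2,
\]

where $J_i$ is player $i$'s cost and $u_{-i}^{*}$ denotes the other player's equilibrium strategy. The CCSMIs mentioned above characterise when such a pair exists under the SOF constraint.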
Genetic Programming Hyper-heuristic (GPHH) has been successfully applied to automatically evolve effective routing policies for the complex Uncertain Capacitated Arc Routing Problem (UCARP). However, GPHH typically ignores the interpretability of the evolved routing policies; as a result, GP-evolved routing policies are often very complex and hard for human users to understand and trust. In this paper, we aim to improve the interpretability of GP-evolved routing policies. To this end, we propose a new Multi-Objective GP (MOGP) that optimises performance and size simultaneously. A major issue here is that size is much easier to optimise than performance, so the search tends to be biased towards small but poor routing policies. To address this issue, we propose a simple yet effective Two-Stage GPHH (TS-GPHH): in the first stage, only performance is optimised; in the second stage, both objectives are considered (using our new MOGP), as sketched below. The experimental results show that TS-GPHH obtains much smaller and more interpretable routing policies than state-of-the-art single-objective GPHH without deteriorating performance. Compared with traditional MOGP, TS-GPHH obtains a much better and more widespread Pareto front.
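The control flow of the two-stage idea can be sketched as follows. Only the stage switch and the Pareto selection are shown; the GP variation operators, the UCARP simulator, and the routing-policy encoding (evolve_perf, evolve_mo, evaluate) are assumed placeholders, not the paper's implementation.

def dominates(a, b):
    # Minimisation on both objectives: (performance, size).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population, evaluate):
    scored = [(ind, evaluate(ind)) for ind in population]
    return [ind for ind, f in scored
            if not any(dominates(g, f) for _, g in scored)]

def ts_gphh(init_pop, evolve_perf, evolve_mo, evaluate, split=0.5, gens=100):
    pop = init_pop
    for g in range(gens):
        if g < split * gens:
            pop = evolve_perf(pop)   # stage 1: performance only
        else:
            pop = evolve_mo(pop)     # stage 2: (performance, size) via MOGP
    return pareto_front(pop, evaluate)

Running stage 1 first means stage 2 starts from already-competent policies, which is what counteracts the bias towards small but poor individuals.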
The PRIME protocol is a narrowband power line communication protocol whose security is based on the Advanced Encryption Standard (AES). However, the key expansion process of the AES algorithm is not one-way, and the round keys are linearly related to each other, which makes it less difficult for eavesdroppers to attack AES and thereby threatens the security of the PRIME protocol. To address this problem, this paper proposes an improvement of the PRIME protocol based on chaotic cryptography. The core of this method is to use the Chebyshev chaotic map and the Logistic chaotic map to generate each round key in the key expansion process of the AES algorithm. In this way, the linear correlation between round keys is reduced and the key expansion process becomes one-way, increasing the difficulty of attacking AES and improving the security of the PRIME protocol.
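A minimal sketch of the key-scheduling idea follows; the map parameters, the seed derivation, and the byte packing are assumptions for illustration, not the paper's exact construction. Both maps are iterated from a seed derived from the cipher key, and each round key is sampled from the chaotic orbits, so consecutive round keys are not linearly related.

import math

def logistic(x, r=3.99):           # x_{n+1} = r * x_n * (1 - x_n), x in (0, 1)
    return r * x * (1 - x)

def chebyshev(x, k=5):             # x_{n+1} = cos(k * arccos(x_n)), x in [-1, 1]
    return math.cos(k * math.acos(x))

def round_keys(seed, rounds=10, key_bytes=16):
    xl, xc = seed, 2 * seed - 1    # initial states of the two maps (assumed)
    keys = []
    for _ in range(rounds):
        rk = bytearray()
        for _ in range(key_bytes):
            xl, xc = logistic(xl), chebyshev(xc)
            rk.append(int((xl + xc + 1) * 127.5) & 0xFF)  # mix both orbits into a byte
        keys.append(bytes(rk))
    return keys

for i, k in enumerate(round_keys(seed=0.613)):
    print(i, k.hex())

Because both maps are sensitive to initial conditions, recovering one round key gives an attacker no linear relation from which to derive the others, which is the one-way property the paper argues for.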
To address the incomplete security mechanisms of wireless access systems in emergency communication networks, this paper proposes a method for constructing security mechanism requirements for wireless access systems based on security evaluation standards. The paper discusses the construction of security mechanism requirements for wireless access systems from three aspects: the definition of security problems, the construction of security functional components, and the construction of security assurance components. This method can comprehensively analyze the security threats and security requirements of wireless access systems in emergency communication networks, and can provide correct and reasonable guidance and reference for the establishment of security mechanisms.
The process of release of a single domain wall from the closure domain structure at the microwire ends, and the process of nucleation of a reversed domain in regions far from the microwire ends, were studied using a technique that consists in determining the critical parameters of a rectangular magnetic field pulse (magnitude Hpc and length τc) needed to produce a free domain wall. Since these processes can be influenced by the magnitude of the magnetic field before or after the application of the field pulse (Hi, τ), we propose a modified experiment that uses a so-called three-level pulse: the pulse starts from a first field level, continues with a second, measuring rectangular pulse (Hi, τ), and ends at a third field level. Based on the results obtained in experiments using three-level field pulses, it has been shown that reversed domains are not present in the remanent state in regions far from the microwire ends. Some modification of the theoretical model of a single domain wall trapped in a potential well will be needed for an adequate description of the depinning processes.
With the rapid increase in the use of mobile devices in people's daily lives, mobile data traffic has exploded in recent years. In the edge computing environment, where edge servers are deployed near mobile users, caching popular data on edge servers can give mobile users fast access to those data and reduce the traffic between mobile users and the centralized cloud. Existing studies consider the data caching problem with a focus on reducing network delay and improving mobile devices' energy efficiency. In this paper, we attack the data caching problem in the edge computing environment from the perspective of service providers, who would like to maximize the revenue obtained from caching their data. This problem is complicated because data caching produces benefits at a cost, and there is usually a trade-off between the two. We formulate the data caching problem as an integer programming problem that maximizes the revenue of the service provider while satisfying a constraint on data access latency; a generic sketch of such a formulation is given below. Extensive experiments are conducted on a real-world dataset that contains the locations of edge servers and mobile users, and the results reveal that our approach significantly outperforms the baseline approaches.
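In generic form, an integer program of this kind might read as follows; the notation is an illustrative guess, not the paper's. Here $x_{ij} \in \{0,1\}$ indicates whether data item $j$ is cached on edge server $i$, $v_{ij}$ is the net revenue (benefit minus caching cost) of that placement, and each user's access latency must stay below a bound $L_{\max}$:

\[
\begin{aligned}
\max_{x} \quad & \sum_{i}\sum_{j} v_{ij}\, x_{ij} \\
\text{s.t.} \quad & \ell_{u}(x) \le L_{\max} \quad \text{for every user } u, \\
& x_{ij} \in \{0, 1\}.
\end{aligned}
\]

The trade-off named in the abstract lives in $v_{ij}$: caching on more servers raises access benefit but also raises cost, while the latency constraint forces enough placements near users.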
How to evaluate software reliability from the historical data of embedded software projects is one of the problems we have to face in practical engineering. We therefore establish a software reliability evaluation model based on code metrics. This evaluation technique requires aggregating software code metrics into project-level metrics. Statistical-value methods, metric-distribution methods, and econometric methods are commonly used aggregation methods; examples of all three are sketched below. How these methods differ in the software reliability evaluation process, and which of them can improve the accuracy of the reliability assessment model we have established, are our main concerns. In view of these concerns, we conduct an empirical study of the application of software code metric aggregation methods on actual projects. We identify the distributions of the code metrics of the projects under study. Using these distribution laws, we optimize the aggregation of code metrics and improve the accuracy of the software reliability evaluation model.
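The sketch below illustrates the three aggregation families on per-file cyclomatic-complexity values (the values and the log-normal summary are illustrative assumptions): a statistical value (mean/median), a fitted distribution parameter, and an econometric inequality index (the Gini coefficient).

import math, statistics

cc = [1, 2, 2, 3, 3, 3, 5, 8, 13, 34]      # per-file code metric values (made up)

# 1) Statistical-value aggregation: one summary number per project.
mean, median = statistics.fmean(cc), statistics.median(cc)

# 2) Distribution-based aggregation: fit parameters of the observed
#    metric distribution (here a log-normal-style mu, sigma).
log_vals = [math.log(v) for v in cc]
mu, sigma = statistics.fmean(log_vals), statistics.pstdev(log_vals)

# 3) Econometric aggregation: Gini coefficient of the metric values,
#    capturing how unevenly complexity is concentrated in a few files.
def gini(values):
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

print(mean, median, (mu, sigma), gini(cc))

The choice matters because code metrics are typically heavy-tailed: a mean hides the few highly complex files that dominate reliability risk, while distribution- and inequality-based aggregates retain that information.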
Hyperspectral images (HSIs) offer abundant spectral information but come with limited labeled data, which motivates semi-supervised spectral-based classification methods; in such methods, the way spectral information is utilized is significant to classification accuracy. In this paper, we propose a novel semi-supervised method based on a generative adversarial network (GAN) with a folded spectrum (FS-GAN). Specifically, the original spectral vector is folded into a 2D square spectrum that serves as the input of the GAN; the folding generates spectral texture and provides a larger receptive field over both adjacent and non-adjacent spectral bands for deep feature extraction (a minimal sketch of the folding step is given below). The generated fake folded spectra and the labeled and unlabeled real folded spectra are then fed to the discriminator for semi-supervised learning. A feature matching strategy is applied to prevent model collapse. Extensive experimental comparisons demonstrate the effectiveness of the proposed method.
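The folding step can be sketched as follows: a length-d spectral vector is zero-padded to the next perfect square and reshaped into a 2D "square spectrum", so that a 2D convolution sees both adjacent and non-adjacent bands within one receptive field. The zero-padding choice is an assumption, not necessarily the paper's.

import math
import numpy as np

def fold_spectrum(spec):
    side = math.isqrt(len(spec) - 1) + 1          # smallest n with n*n >= d
    padded = np.zeros(side * side, dtype=spec.dtype)
    padded[:len(spec)] = spec
    return padded.reshape(side, side)

pixel = np.random.default_rng(0).random(200)      # a 200-band spectral vector
square = fold_spectrum(pixel)
print(square.shape)                               # (15, 15), since 15*15 = 225 >= 200

After folding, bands that are 15 positions apart in the original vector become vertical neighbours in the square, which is how non-adjacent band correlations enter the receptive field of an ordinary 2D kernel.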