Biblio
Filters: Keyword is Metrics
Game Theory Based Multi-agent Cooperative Anti-jamming for Mobile Ad Hoc Networks. 2022 IEEE 8th International Conference on Computer and Communications (ICCC). :901–905.
.
2022. Mobile ad hoc networks (MANETs) are currently widely used because of their self-configuring nature. In practice, however, they are vulnerable to malicious jammers. Traditional anti-jamming approaches, such as channel hopping based on deterministic sequences, may not be a reliable solution against intelligent jammers because of their fixed patterns. To address this problem, we propose a distributed game theory-based multi-agent anti-jamming (DMAA) algorithm. It enables each user to exploit all information from its neighboring users before network attacks occur and to derive dynamic local policy knowledge that overcomes intelligent jamming attacks efficiently, while guiding the users to cooperatively hop to the same channel with high probability. Simulation results demonstrate that the proposed algorithm can learn an optimal policy that guides the users to avoid malicious jamming more efficiently and rapidly than the random and independent Q-learning baseline algorithms.
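A minimal sketch of the independent Q-learning channel-hopping baseline referred to in this entry (not the proposed DMAA algorithm); the channel count, single-state simplification, reward definition, and hyper-parameters are assumptions for illustration only.

```python
import random

# One user's Q-values, one entry per channel (single-state, bandit-style sketch).
NUM_CHANNELS = 8
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
q = [0.0] * NUM_CHANNELS

def choose_channel():
    # Epsilon-greedy channel selection.
    if random.random() < EPSILON:
        return random.randrange(NUM_CHANNELS)
    return max(range(NUM_CHANNELS), key=lambda c: q[c])

def update(channel, reward):
    # Assumed reward: 1 if the transmission was not jammed, 0 otherwise.
    best_next = max(q)
    q[channel] += ALPHA * (reward + GAMMA * best_next - q[channel])
```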
Cost-Efficient Network Protection Games Against Uncertain Types of Cyber-Attackers. 2022 IEEE International Symposium on Technologies for Homeland Security (HST). :1–7.
.
2022. This paper considers network protection games for a heterogeneous network system with N nodes against cyber-attackers with two different types of intentions. The first type tries to maximize damage based on the value of each networked node, while the second type only aims at successful infiltration. A defender, by applying defensive resources to networked nodes, can decrease those nodes' vulnerabilities. Meanwhile, the defender needs to balance the cost of using defensive resources against the potential security benefits. Existing literature shows that, in a Nash equilibrium, the defender should adopt different resource allocation strategies against different types of attackers. However, it can be difficult for the defender to know the type of incoming cyber-attackers. A Bayesian game is therefore investigated for the case in which the defender is uncertain about the attacker's type. We demonstrate that the Bayesian equilibrium defensive resource allocation strategy is a mixture of the Nash equilibrium strategies from the games against the two types of attackers separately.
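As a one-line illustration of why the Bayesian strategy turns out to be a mixture (the notation below is assumed, not the paper's): with $d$ the defender's resource allocation, $p$ the prior probability that the attacker is of the damage-maximising type, and $U_D^{(k)}$ the defender's payoff against a type-$k$ attacker playing its best response $a_k^{*}(d)$, the defender maximises

```latex
\mathbb{E}\big[U_D(d)\big] \;=\; p\,U_D^{(1)}\!\big(d,\,a_1^{*}(d)\big) \;+\; (1-p)\,U_D^{(2)}\!\big(d,\,a_2^{*}(d)\big),
```

so the equilibrium allocation weighs the two type-specific games by the prior $p$.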
Traceability Method of Network Attack Based on Evolutionary Game. 2022 International Conference on Networking and Network Applications (NaNA). :232–236.
.
2022. Cyberspace is subject to continuous malicious attacks. Traceability of network attacks is an effective defense means of curbing and countering them. In this paper, an evolutionary game model is used to analyze network attack and defense behavior. On the basis of quantified attack and defense benefits, the replicator dynamics learning mechanism is used to describe how the selection probabilities of attack and defense strategies change over time, and the evolutionarily stable strategies of both sides and their solution curves are obtained. On this basis, the attack behavior is analyzed, and the probability curve of the attack strategy and the optimal attack strategy are obtained, so as to achieve effective traceability of attack behavior.
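In standard notation (assumed here, not taken from the paper), if $x$ is the probability that attackers play their first strategy and $y$ the probability that defenders play their first strategy, the two-strategy replicator dynamics driving these probability curves are

```latex
\dot{x} \;=\; x\,(1-x)\,\big(u_A^{(1)}(y) - u_A^{(2)}(y)\big), \qquad
\dot{y} \;=\; y\,(1-y)\,\big(u_D^{(1)}(x) - u_D^{(2)}(x)\big),
```

where $u_A^{(i)}(y)$ is the attacker's expected payoff for its $i$-th strategy against the defender mix $y$ (and analogously for the defender); the evolutionarily stable strategies correspond to the asymptotically stable rest points of this system.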
Game model of attack and defense for underwater wireless sensor networks. 2022 IEEE 10th Joint International Information Technology and Artificial Intelligence Conference (ITAIC). 10:559–563.
.
2022. At present, research on the network security of underwater wireless sensors is still limited, and since the underwater environment is exposed, passive security defense technology is not sufficient to deal with unknown security threats. To address this problem, this paper proposes an attack-defense game model that starts from the bounded rationality of both the attacking and defending sides and combines it with evolutionary game theory. The replicator dynamic equation is introduced to analyze the evolution trend of strategies under different circumstances, and a selection algorithm for the optimal strategy is designed. Simulations verify the effectiveness of the model and provide guidance for active defense technology.
ISSN: 2693-2865
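A small simulation sketch of the kind of attack-defense replicator dynamics analysed in the entry above; the 2x2 payoff matrices below are invented placeholders, not values from the paper.

```python
import numpy as np

# Assumed payoff matrices (rows = own strategy, columns = opponent strategy).
# A[i][j]: attacker payoff when attacker plays i and defender plays j.
# D[i][j]: defender payoff when defender plays i and attacker plays j.
A = np.array([[2.0, -1.0], [0.0, 0.5]])
D = np.array([[1.5, -0.5], [0.0, 1.0]])

def replicator_step(x, y, dt=0.01):
    """One Euler step of two-population replicator dynamics.

    x, y: probabilities of the attacker / defender playing strategy 0.
    """
    px, py = np.array([x, 1 - x]), np.array([y, 1 - y])
    fa = A @ py            # attacker's expected payoff per strategy
    fd = D @ px            # defender's expected payoff per strategy
    x += dt * x * (fa[0] - px @ fa)
    y += dt * y * (fd[0] - py @ fd)
    return x, y

x, y = 0.5, 0.5
for _ in range(10000):
    x, y = replicator_step(x, y)
print(f"long-run attack probability {x:.3f}, defense probability {y:.3f}")
```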
AI and Security: A Game Perspective. 2022 14th International Conference on COMmunication Systems & NETworkS (COMSNETS). :393–396.
.
2022. In this short paper, we survey work at the intersection of Artificial Intelligence (AI) and security that is based on game-theoretic considerations, focusing in particular on the authors' (our) contributions in these areas. The first half of the paper covers applications of game-theoretic and learning-based reasoning to security applications such as public safety and wildlife conservation. In the second half, we present recent work that attacks the learning components of these systems, leading to sub-optimal defense allocation. We end by pointing to issues and potential research problems that can arise from data quality in the real world.
ISSN: 2155-2509
CySec Game: A Framework and Tool for Cyber Risk Assessment and Security Investment Optimization in Critical Infrastructures. 2022 Resilience Week (RWS). :1–6.
.
2022. Cyber-physical system (CPS) critical infrastructures (CIs), such as power and energy systems, are increasingly vulnerable to cyber attacks. Mitigating cyber risks in CIs is one of the key objectives of the design and maintenance of these systems. These CPS CIs commonly use legacy devices for remote monitoring and control, where complete upgrades are uneconomical and infeasible. Therefore, risk assessment plays an important role in systematically enumerating and selectively securing vulnerable or high-risk assets through optimal investments in the cybersecurity of CPS CIs. In this paper, we propose a CPS CI security framework and software tool, CySec Game, to be used by the CI industry and academic researchers to assess cyber risks and to optimally allocate cybersecurity investments to mitigate those risks. The framework uses attack tree, attack-defense tree, and game theory algorithms to identify high-risk targets and suggest optimal investments to mitigate the identified risks. We evaluate the efficacy of the framework and the tool through a smart grid case study, which shows accurate analysis and feasible implementation in this CPS CI environment.
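As a rough illustration of the attack-tree side of such a framework (the tree and the probabilities below are invented for illustration and are not taken from the CySec Game tool): an AND node succeeds only if all of its children succeed, while an OR node succeeds if any child does.

```python
def success_probability(node):
    """Recursively evaluate an attack tree, assuming independent leaf probabilities."""
    kind = node["type"]
    if kind == "leaf":
        return node["p"]
    child_ps = [success_probability(c) for c in node["children"]]
    if kind == "AND":                    # all sub-attacks must succeed
        p = 1.0
        for cp in child_ps:
            p *= cp
        return p
    if kind == "OR":                     # any sub-attack suffices
        p_fail = 1.0
        for cp in child_ps:
            p_fail *= (1.0 - cp)
        return 1.0 - p_fail
    raise ValueError(f"unknown node type: {kind}")

# Hypothetical smart-grid example: compromise a substation RTU either by
# phishing an operator AND moving laterally, or by exploiting an exposed service.
tree = {"type": "OR", "children": [
    {"type": "AND", "children": [
        {"type": "leaf", "p": 0.3},   # successful phishing
        {"type": "leaf", "p": 0.6},   # lateral movement to the RTU
    ]},
    {"type": "leaf", "p": 0.1},       # direct exploit of an exposed service
]}
print(f"probability the root attack goal is reached: {success_probability(tree):.3f}")
```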
Secure Wireless Sensor Network Energy Optimization Model with Game Theory and Deep Learning Algorithm. 2022 8th International Conference on Advanced Computing and Communication Systems (ICACCS). 1:1746–1751.
.
2022. Rational and smart decision making by means of strategic interaction and mathematical modelling is the key aspect of game theory. Security games based on game theory are used extensively in cyberspace at various levels of security. Contemporary security issues can be modelled and analyzed using game theory as a robust mathematical framework, which captures the attackers, the defenders, and their adversarial and defensive interactions. Evaluating the equilibria of security games can help in understanding attackers' strategies and potential threats at a deeper level for efficient defense. Wireless sensor network (WSN) design also benefits greatly from game theory. Here, a deep learning adversarial network algorithm is used in combination with game theory to enable energy efficiency, optimal data delivery, and security in a WSN, balancing the trade-off between energy resource utilization and security.
ISSN: 2575-7288
Multi-data Image Steganography using Generative Adversarial Networks. 2022 9th International Conference on Computing for Sustainable Global Development (INDIACom). :454–459.
.
2022. The success of deep learning based steganography has shifted the focus of researchers from traditional steganography approaches to deep learning based ones. Various deep steganographic models have been developed for improved security, capacity, and invisibility. In this work, a multi-data deep learning steganography model has been developed using the well known Generative Adversarial Network (GAN) framework, more specifically a deep convolutional Generative Adversarial Network (DCGAN). The model is capable of hiding two different messages, meant for two different receivers, inside a single cover image. The proposed model consists of four networks, namely the Generator, Steganalyzer, Extractor1, and Extractor2 networks. The Generator hides two secret messages inside one cover image, which are extracted using the two different extractors. The Steganalyzer network differentiates between the cover images and the stego images produced by the Generator. Experiments have been carried out on the CelebA dataset. Two commonly used distortion metrics, Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Metric (SSIM), are used to measure the distortion in the stego images. The results show that the generated stego images have good imperceptibility and high extraction rates.
Improving Anomaly Detection with a Self-Supervised Task Based on Generative Adversarial Network. ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :3563–3567.
.
2022. Existing anomaly detection models based on generative adversarial networks succeed in detecting abnormal images despite insufficient annotation of anomalous samples. However, these models cannot accurately identify anomalous samples that are close to normal samples. We assume that the main reason is that these methods ignore the diversity of patterns in normal samples. To alleviate this issue, this paper proposes a novel anomaly detection framework based on a generative adversarial network, called ADe-GAN. More concretely, we construct a self-supervised learning task to fully explore the pattern information and latent representations of input images. In the inference stage, we design a new abnormality scoring approach that jointly considers the pattern information and the reconstruction errors to improve anomaly detection performance. Extensive experiments show that ADe-GAN outperforms state-of-the-art methods on several real-world datasets.
ISSN: 2379-190X
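The abstract does not give the exact form of the ADe-GAN abnormality score; a generic combined score of the kind it describes, shown purely to fix ideas, weights the generator's reconstruction error against a pattern-consistency term from the self-supervised task:

```latex
A(x) \;=\; \lambda\,\lVert x - G(x)\rVert_{1} \;+\; (1-\lambda)\,d_{\mathrm{pattern}}(x), \qquad \lambda \in [0,1],
```

with larger $A(x)$ flagging more anomalous inputs.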
Generative Adversarial Networks for Remote Sensing. 2022 2nd International Conference on Big Data, Artificial Intelligence and Risk Management (ICBAR). :108–112.
.
2022. Generative adversarial networks (GANs) have become increasingly popular among deep learning methods, and many GAN-based models have been developed since their emergence, including conditional generative adversarial networks (cGANs), progressive growing of generative adversarial networks (ProGANs), Wasserstein generative adversarial networks (WGANs), and others. These frameworks are now widely applied in areas such as remote sensing, cybersecurity, medicine, and architecture. In remote sensing in particular, they have addressed problems of cloud removal, semantic segmentation, image-to-image translation, and data augmentation. For example, WGANs and ProGANs can be applied to data augmentation, and cGANs can be applied to semantic segmentation and image-to-image translation. This article provides an overview of the structures of multiple GAN-based models and the areas of remote sensing in which they can be applied.
Test Case Filtering based on Generative Adversarial Networks. 2022 IEEE 23rd International Conference on High Performance Switching and Routing (HPSR). :65–69.
.
2022. Fuzzing is a popular technique for finding software vulnerabilities. Despite their success, state-of-the-art fuzzers inevitably produce a large number of low-quality inputs. In recent years, Machine Learning (ML) based selection strategies have reported promising results. However, existing ML-based fuzzers are limited by a lack of training data: because the mutation strategy of fuzzing cannot effectively generate useful inputs, it is prohibitively expensive to collect enough inputs to train models. In this paper, we propose a generative adversarial network based solution that generates a large number of inputs to solve the problem of insufficient data. We implement the proposal in the American Fuzzy Lop (AFL), and the experimental results show that it finds more crashes in the same amount of time than the original AFL.
ISSN: 2325-5609
SCGAN: Generative Adversarial Networks of Skip Connection for Face Image Inpainting. 2022 Ninth International Conference on Social Networks Analysis, Management and Security (SNAMS). :1–6.
.
2022. Deep learning has been widely applied to face inpainting; however, there are usually problems such as incoherent inpainting edges and a lack of diversity in the generated images. In order to capture more feature information and improve the inpainting effect, we propose a Generative Adversarial Network of Skip Connection (SCGAN), which connects the encoder layers and the decoder layers by skip connections in the generator. Using a double-discriminator model with local and global discriminators, the coherence and consistency of the inpainted edges are improved and the finer features of the inpainted regions are refined. We also employ the WGAN-GP loss to enhance model stability during training, prevent model collapse, and increase the variety of inpainted face images. Finally, experiments are performed on the CelebA and LFW datasets, and the model's performance is assessed using the PSNR and SSIM indices. Our model's face image inpainting is more realistic and coherent than that of other models, and the model training is more reliable.
ISSN: 2831-7343
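For reference, the WGAN-GP critic loss mentioned in this entry, in its standard form, where $\hat{x}$ is sampled along straight lines between real and generated images and $\lambda$ is the gradient-penalty weight:

```latex
L_{D} \;=\; \mathbb{E}_{\tilde{x}\sim P_{g}}\!\big[D(\tilde{x})\big] \;-\; \mathbb{E}_{x\sim P_{r}}\!\big[D(x)\big] \;+\; \lambda\,\mathbb{E}_{\hat{x}}\!\Big[\big(\lVert \nabla_{\hat{x}} D(\hat{x})\rVert_{2} - 1\big)^{2}\Big].
```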
Adversarial Networks-Based Speech Enhancement with Deep Regret Loss. 2022 5th International Conference on Networking, Information Systems and Security: Envisage Intelligent Systems in 5g//6G-based Interconnected Digital Worlds (NISS). :1–6.
.
2022. Speech enhancement is often applied in speech-based systems because speech signals are prone to additive background noise. While traditional speech-processing methods are used for speech enhancement, advances in deep learning have prompted many efforts to apply it to this task. Using deep learning, networks learn mapping functions from noisy data to clean data and then learn to reconstruct the clean speech signals. As a consequence, deep learning methods can reduce the so-called musical noise that is often found in traditional speech enhancement methods. Currently, one popular deep learning architecture for speech enhancement is the generative adversarial network (GAN). However, the cross-entropy loss employed in GANs often makes training unstable, so in many implementations the cross-entropy loss is replaced with the least-squares loss. In this paper, to improve the training stability of GANs that use the cross-entropy loss, we propose to use deep regret analytic generative adversarial networks (DRAGAN) for speech enhancement, which applies a gradient penalty to the cross-entropy loss. We also employ relativistic rules to stabilize training and apply them to both the least-squares and DRAGAN losses. Our experiments suggest that the proposed method improves speech quality more than the least-squares loss on several objective quality metrics.
Analysis and Research of Generative Adversarial Network in Anomaly Detection. 2022 7th International Conference on Intelligent Computing and Signal Processing (ICSP). :1700–1703.
.
2022. In recent years, generative adversarial networks (GANs) have become a research hotspot in deep learning. Researchers have applied them to anomaly detection, seeking to effectively and accurately identify abnormal images in practical applications. In anomaly detection, traditional supervised learning algorithms are limited by their need for a large number of known, labeled training samples. Therefore, unsupervised GAN-based anomaly detection models are the object of this discussion and survey. First, the basic principles of GANs are introduced. Second, several typical GAN-based anomaly detection models are reviewed in detail. Then, by comparing the similarities and differences of each derivative model, we discuss and summarize their respective advantages, limitations, and application scenarios. Finally, the problems and challenges faced by GANs in anomaly detection are discussed, and future research directions are outlined.
Adversarial AutoEncoder and Generative Adversarial Networks for Semi-Supervised Learning Intrusion Detection System. 2022 RIVF International Conference on Computing and Communication Technologies (RIVF). :584–589.
.
2022. As one of the defensive solutions against cyberattacks, an Intrusion Detection System (IDS) plays an important role in observing the network state and raising alerts on suspicious actions that could break down the system. There have been many attempts to adopt Machine Learning (ML) in IDSs to achieve high intrusion detection performance. However, all of them require a large amount of labeled data. In addition, labeling attack data is a time-consuming and expensive human-labor operation, which makes existing ML methods difficult to deploy in a new system, or causes them to yield lower results due to a lack of labels in the pre-training data. To address these issues, we propose a semi-supervised IDS model that leverages Generative Adversarial Networks (GANs) and an Adversarial AutoEncoder (AAE), called the semi-supervised adversarial autoencoder (SAAE). Our SAAE experimental results on two public datasets for benchmarking ML-based IDSs, NF-CSE-CIC-IDS2018 and NF-UNSW-NB15, demonstrate the effectiveness of AAEs and GANs when only a small amount of labeled data is available. In particular, our approach outperforms other ML methods, achieving the highest detection rates despite the scarcity of labeled data for model training, even with only 1% labeled data.
ISSN: 2162-786X
Security-Alert Screening with Oversampling Based on Conditional Generative Adversarial Networks. 2022 17th Asia Joint Conference on Information Security (AsiaJCIS). :1–7.
.
2022. An imbalanced class distribution can cause information loss and missed/false alarms for deep learning and machine learning algorithms. The detection performance of traditional intrusion detection systems tends to degrade under skewed class distributions caused by the uneven allocation of observations across different kinds of attacks. To combat class imbalance and improve network intrusion detection performance, we adopt the conditional tabular generative adversarial network (CTGAN), which enables the generation of samples of specific classes of interest. CTGAN builds on the generative adversarial network (GAN) architecture to model tabular data and generate high-quality synthetic data by conditionally sampling rows from the learned model. Oversampling with CTGAN adds instances to the minority class so that the majority and minority classes have equal distributions. The generated security alerts are used to train classifiers for critical alert detection. The proposed scheme is evaluated on a real-world dataset collected from the security operations center of a large enterprise. The experimental results show that detection accuracy can be substantially improved when CTGAN is used to produce a balanced security-alert dataset. We believe the proposed CTGAN-based approach can cast new light on building effective systems for critical alert detection with reduced missed/false alarms.
ISSN: 2765-9712
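A minimal oversampling sketch in the spirit of the scheme above, assuming the open-source `ctgan` Python package; the file name, column names, class label, epoch count, and sample size are placeholders, not details from the paper.

```python
import pandas as pd
from ctgan import CTGAN  # pip install ctgan

alerts = pd.read_csv("security_alerts.csv")          # hypothetical alert export
minority = alerts[alerts["label"] == "critical"]      # under-represented class

# Fit CTGAN on the minority class only; the categorical columns are listed so
# the conditional generator models them correctly.
discrete_cols = ["label", "alert_type", "src_zone", "dst_zone"]
model = CTGAN(epochs=300)
model.fit(minority, discrete_cols)

# Generate synthetic critical alerts until both classes are balanced
# (assumes the critical class is indeed the smaller one).
n_needed = (alerts["label"] != "critical").sum() - len(minority)
synthetic = model.sample(n_needed)
balanced = pd.concat([alerts, synthetic], ignore_index=True)
```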
Optimization of Encrypted Communication Model Based on Generative Adversarial Network. 2022 International Conference on Blockchain Technology and Information Security (ICBCTIS). :20–24.
.
2022. With the progress of cryptography and computer science, designing cryptographic algorithms using deep learning is a highly innovative research direction. Google Brain designed a communication model using generative adversarial networks and explored encrypted communication algorithms based on machine learning. However, the encrypted communication model it designed lacks quantitative evaluation: when some plaintexts and keys are leaked at the same time, the security of communication cannot be guaranteed. We optimize this model to enhance its security by adjusting the optimizer, modifying the activation function, and adding batch normalization to improve the speed of optimization. Experiments were performed on 16-bit and 64-bit plaintext communication. With a plaintext and key leak rate of 0.75, the decryption error rate of the decryptor is 0.01 and the attacker cannot guess any valid information about the communication.
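A rough sketch of the loss structure in the Google Brain adversarial-encryption setup this entry builds on (Abadi and Andersen style), shown only to fix ideas; bits are encoded as +1/-1 and the exact weighting is an assumption, not the paper's formulation.

```python
import numpy as np

def reconstruction_error(plaintext, estimate):
    # Average absolute per-bit error; for +/-1 outputs this is the fraction of wrong bits.
    return np.mean(np.abs(plaintext - estimate)) / 2.0

def eve_loss(plaintext, eve_output):
    # Eve (the attacker) simply minimises her reconstruction error.
    return reconstruction_error(plaintext, eve_output)

def alice_bob_loss(plaintext, bob_output, eve_output):
    # Alice and Bob minimise Bob's error while pushing Eve towards
    # random guessing (a per-bit error of 0.5).
    l_bob = reconstruction_error(plaintext, bob_output)
    l_eve = reconstruction_error(plaintext, eve_output)
    return l_bob + (0.5 - l_eve) ** 2 / 0.25
```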
Profiled Side-Channel Attack on Cryptosystems Based on the Binary Syndrome Decoding Problem. IEEE Transactions on Information Forensics and Security. 17:3407–3420.
.
2022. The NIST standardization process for post-quantum cryptography has been drawing the attention of researchers to the submitted candidates. One direction of research consists in implementing those candidates on embedded systems, which in turn exposes them to physical attacks. The Classic McEliece cryptosystem, which is among the four finalists of round 3 in the Key Encapsulation Mechanism category, builds its security on the hardness of the syndrome decoding problem, a classic hard problem in code-based cryptography. This cryptosystem was recently targeted by a laser fault injection attack leading to message recovery. Regrettably, that attack setting is very restrictive: it does not tolerate any error in the faulty syndrome, it depends on the very strong attacker model of laser fault injection, and it does not apply to optimised implementations of the algorithm that make optimal use of the machine word capacity. In this article, we propose to change the angle and perform a message-recovery attack that relies on side-channel information only. We improve on the previously published work in several key aspects. First, we show that side-channel information, obtained with power consumption analysis, is sufficient to obtain an integer syndrome, as required by the attack framework. This is done by leveraging classic machine learning techniques that recover the Hamming weight information very accurately. Second, we put forward a computationally efficient method, based on a simple dot product and information-set decoding algorithms, to recover the message from the possibly inaccurate recovered integer syndrome. Finally, we present a masking countermeasure against the proposed attack.
Journal Name: IEEE Transactions on Information Forensics and Security
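A hedged sketch of the central object in this attack, in generic notation (not the paper's): if $H$ is the binary parity-check matrix and $e$ the error vector carrying the message, the usual binary syndrome is the mod-2 reduction of the same matrix-vector product taken over the integers,

```latex
s_{\mathbb{Z}} \;=\; H\,e^{\top} \in \mathbb{Z}^{\,n-k}, \qquad s \;=\; s_{\mathbb{Z}} \bmod 2,
```

and each entry of $s_{\mathbb{Z}}$ is a small count that Hamming-weight leakage from power traces can estimate, which is what allows the message to be recovered from the possibly noisy integer syndrome via dot products and information-set decoding.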
The Mother of All Leakages: How to Simulate Noisy Leakages via Bounded Leakage (Almost) for Free. IEEE Transactions on Information Theory. 68:8197–8227.
.
2022. We show that the most common flavors of noisy leakage can be simulated in the information-theoretic setting using a single query of bounded leakage, up to a small statistical simulation error and a slight loss in the leakage parameter. The latter holds true in particular for one of the most used noisy-leakage models, where the noisiness is measured using the conditional average min-entropy (Naor and Segev, CRYPTO’09 and SICOMP’12). Our reductions between noisy and bounded leakage are achieved in two steps. First, we put forward a new leakage model (dubbed the dense leakage model) and prove that dense leakage can be simulated in the information-theoretic setting using a single query of bounded leakage, up to small statistical distance. Second, we show that the most common noisy-leakage models fall within the class of dense leakage, with good parameters. Third, we prove lower bounds on the amount of bounded leakage required for simulation with sub-constant error, showing that our reductions are nearly optimal. In particular, our results imply that useful general simulation of noisy leakage based on statistical distance and mutual information is impossible. We also provide a complete picture of the relationships between different noisy-leakage models. Our result finds applications to leakage-resilient cryptography, where we are often able to lift security in the presence of bounded leakage to security in the presence of noisy leakage, both in the information-theoretic and in the computational setting. Remarkably, this lifting procedure makes only black-box use of the underlying schemes. Additionally, we show how to use lower bounds in communication complexity to prove that bounded-collusion protocols (Kumar, Meka, and Sahai, FOCS’19) for certain functions do not only require long transcripts, but also necessarily need to reveal enough information about the inputs.
Journal Name: IEEE Transactions on Information Theory
Cross-Layer Design for UAV-Based Streaming Media Transmission. IEEE Transactions on Circuits and Systems for Video Technology. 32:4710–4723.
.
2022. Unmanned Aerial Vehicle (UAV)-based streaming media transmission may become unstable when the bit rate generated by the source load exceeds the channel capacity as the UAV's location and speed change. A change of location can affect the network connection, reducing the transmission rate, while a change of flying speed can increase the video payload due to more I-frames. To improve transmission reliability, in this paper we design a Client-Server-Ground&User (C-S-G&U) framework and propose a splitting-merging stream (SMS) algorithm for multi-link concurrent transmission. We also establish multiple transport links and configure the routing rules for the cross-layer design. Multi-link transmission can achieve higher throughput and significantly smaller end-to-end delay than a single link, especially under heavy load. The audio and video data are packaged into the payload by the Real-time Transport Protocol (RTP) before being transmitted over the User Datagram Protocol (UDP). A forward error correction (FEC) algorithm is implemented to improve the reliability of the UDP transmission, and an encryption algorithm is added to enhance security. In addition, we propose a Quality of Service (QoS) strategy so that the server and the user can control the UAV to adapt its transmission mode dynamically according to the load, delay, and packet loss. Our design has been implemented on an engineering platform, and its efficacy has been verified through comprehensive experiments.
Journal Name: IEEE Transactions on Circuits and Systems for Video Technology
LRVP: Lightweight Real-Time Verification of Intradomain Forwarding Paths. IEEE Systems Journal. 16:6309–6320.
.
2022. The correctness of user traffic forwarding paths is an important goal of trusted transmission, and many network security issues, e.g., denial-of-service attacks and route hijacking, are related to it. Current path-aware network architectures can effectively address this issue through path verification. At present, the main problems of path verification are high communication and computation overhead. To this end, this article proposes a lightweight real-time verification mechanism for intradomain forwarding paths in autonomous systems, achieving a path verification architecture with no communication overhead and low computation overhead. The problem addressed is the situation in which a packet reaches the destination but its forwarding path is inconsistent with the expected path. The expected path refers to the packet forwarding path determined by the interior gateway protocols; if the actual forwarding path differs from the expected one, it is regarded as an incorrect forwarding path. This article focuses on the most typical intradomain routing environment. A few routers are set as verification routers to block traffic with incorrect forwarding paths and raise alerts. Experiments show that the proposed mechanism effectively solves the problem of path verification as well as the problem of high communication and computation overhead.
Journal Name: IEEE Systems Journal
Two-Stage AES Encryption Method Based on Stochastic Error of a Neural Network. 2022 IEEE 16th International Conference on Advanced Trends in Radioelectronics, Telecommunications and Computer Engineering (TCSET). :381–385.
.
2022. This paper proposes a new two-stage encryption method that increases the cryptographic strength of the AES algorithm and is based on the stochastic error of a neural network. The composite encryption key in the AES neural network cryptosystem consists of the weight matrices of the synaptic connections between neurons and the metadata about the architecture of the neural network. The stochastic nature of the neural network's prediction error provides an ever-changing key-ciphertext pair. Different neural network topologies and the use of various activation functions increase the number of variations of the AES neural network decryption algorithm. The ciphertext is created by the forward propagation process, and the encryption result is reversed back to plaintext by the reverse neural network functional operator.
Performance Evaluation of Multilevel Coded FEC with Register-Transfer-Level Emulation. 2022 27th OptoElectronics and Communications Conference (OECC) and 2022 International Conference on Photonics in Switching and Computing (PSC). :1–3.
.
2022. We demonstrated hardware emulations to evaluate the error-correction performance of an FEC scheme with multilevel coding, enabling BER measurements down to the order of 10⁻¹⁴ for the decoded signal.
A Novel Interleaving Scheme for Concatenated Codes on Burst-Error Channel. 2022 27th Asia Pacific Conference on Communications (APCC). :309–314.
.
2022. With the rapid development of Ethernet, RS(544, 514) (KP4 forward error correction), which has been widely used in high-speed Ethernet standards for its good performance-complexity trade-off, may not meet the demands of next-generation Ethernet for higher data transmission speed and better decoding performance. A concatenated code based on KP4-FEC has become a good solution because of its low complexity and excellent compatibility. For concatenated codes, aside from the selection of the outer and inner codes, an efficient interleaving scheme is also critical for dealing with different channel conditions. Aiming at burst errors in wired communication, we propose a novel matrix interleaving scheme for concatenated codes in which the outer code is the KP4-FEC and the inner code is a Bose-Chaudhuri-Hocquenghem (BCH) code. In the proposed scheme, burst errors are distributed as evenly as possible over the BCH codewords to improve their overall decoding efficiency. Meanwhile, the bit continuity within each symbol of the RS codeword is preserved during transmission, so the number of symbols affected by burst errors is minimized. Simulation results demonstrate that the proposed interleaving scheme achieves better decoding performance on burst-error channels than the original scheme. In some cases, the extra coding gain at a bit-error rate (BER) of 1 × 10⁻¹⁵ can reach 1 dB.
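A generic sketch of matrix interleaving against burst errors, shown only to illustrate the idea (this is not the exact row/column mapping proposed in the paper): symbols are written row by row and read out column by column, so a burst on the channel is scattered across several inner codewords.

```python
def interleave(bits, rows, cols):
    """Write `bits` row-wise into a rows x cols matrix and read it out column-wise."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse mapping: write column-wise, read row-wise."""
    assert len(bits) == rows * cols
    out = [0] * (rows * cols)
    i = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = bits[i]
            i += 1
    return out

# A burst of 4 consecutive channel errors lands in 4 different rows
# (i.e., 4 different inner codewords) after deinterleaving.
data = list(range(24))                 # placeholder symbols
tx = interleave(data, rows=4, cols=6)
rx = tx[:]
for i in range(8, 12):                 # burst hits positions 8..11 on the channel
    rx[i] = -1
print(deinterleave(rx, rows=4, cols=6))
```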
Design and Implementation of English Grammar Error Correction System Based on Deep Learning. 2022 3rd International Conference on Information Science and Education (ICISE-IE). :78–81.
.
2022. At present, English grammar error correction algorithms are rather generic, their error correction ability is limited, and their accuracy is low, so improvement is clearly needed. This article further explores the problem of grammar error correction and proposes a corresponding algorithm model. Using deep learning technology to improve the error correction rate of English grammar, we put forward a corresponding solution, a Seq2seq-based English grammar error correction system model, and carry out a series of refinements to improve its efficiency and accuracy. The basic architecture of TensorFlow is used to implement the model, and the experiments demonstrate the effectiveness of the error correction algorithm model, bringing a clear improvement in error correction performance.