Biblio
For modern Automatic Test Equipment (ATE), one of the most daunting tasks is conducting Information Assurance (IA). In addition, there is a desire to network ATE to allow for information sharing and deployment of software. This is complicated by the fact that ATE are typically “unmanaged” systems: most are configured, deployed, and then largely left alone. This results in systems that are not patched with the latest Operating System updates and that may in fact be running legacy Operating Systems which are no longer supported (such as Windows XP or Windows 7). Much of this has to do with the cost of keeping a system updated on a continuous basis and of regression testing the Test Program Sets (TPS) that run on it. Given that an Automated Test System can have thousands of Test Programs running on it, complete regression testing of all the Test Programs can be extremely expensive and time-consuming. Beyond the Test Programs themselves, some Test Programs rely on third-party and/or custom-developed software that is required for them to run. Add to this the requirement to perform software steering through all the Test Program paths, and the time required to validate a Test Program could in some cases be measured in months. If system updates are performed monthly, as with some Operating System updates, this could consume all the available time of the Test Station or require a fleet of Test Stations dedicated solely to the required regression testing. On the other side of the coin, a Test System running an old, unpatched Operating System is a prime target for any manner of virus or other IA issue. This paper discusses some of the pros and cons of a managed Test System and how it might be accomplished.
In the process of providing crowdsourced testing services, the intellectual property of crowdsourced testing faces problems such as code plagiarism, difficulty in confirming rights, and unreliability of data. Blockchain is a decentralized, tamper-proof distributed ledger, which can help solve these problems. This paper proposes an intellectual property right confirmation system oriented to crowdsourced testing services, combining blockchain, IPFS (InterPlanetary File System), digital signatures, and code similarity detection to realize the confirmation of crowdsourced testing intellectual property. Performance tests show that the system can meet the requirements of normal crowdsourcing business as well as high-concurrency situations.
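As a concrete illustration of the right-confirmation primitive this entry describes, here is a minimal sketch: a content hash stands in for the IPFS content identifier, and an Ed25519 signature plays the role of the tester's digital signature. The function names and record layout are hypothetical, and the on-chain write and the code similarity detection are out of scope.

```python
import hashlib
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def register_artifact(private_key: ed25519.Ed25519PrivateKey, code: bytes) -> dict:
    """Fingerprint a crowdtesting artifact and sign the fingerprint.

    The content hash stands in for the IPFS CID; the signed record is what
    would be written to the blockchain ledger.
    """
    digest = hashlib.sha256(code).hexdigest()       # content fingerprint
    signature = private_key.sign(digest.encode())   # tester's digital signature
    return {
        "content_hash": digest,
        "signature": signature.hex(),
        "timestamp": time.time(),                   # when the right was claimed
    }

def verify_artifact(public_key: ed25519.Ed25519PublicKey, code: bytes, record: dict) -> bool:
    """Check that `code` matches the record and that the signature is genuine."""
    if hashlib.sha256(code).hexdigest() != record["content_hash"]:
        return False
    try:
        public_key.verify(bytes.fromhex(record["signature"]),
                          record["content_hash"].encode())
        return True
    except InvalidSignature:
        return False

# Usage: a tester registers a test script; anyone can later verify ownership.
tester_key = ed25519.Ed25519PrivateKey.generate()
script = b"def test_login(): assert login('u', 'p') is True"
record = register_artifact(tester_key, script)
print(json.dumps({k: str(v)[:16] for k, v in record.items()}, indent=2))
print(verify_artifact(tester_key.public_key(), script, record))  # True
```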
With a record 400Gbps 100-piece-FPGA implementation, we investigate the performance of the potential FEC schemes for OIF-800GZR. By comparing the power dissipation and correction threshold at a BER of 10^-15, we propose the simplified OFEC for the 800G-ZR FEC.
This paper introduces IronMask, a new versatile verification tool for masking security. IronMask is the first to offer the verification of standard simulation-based security notions in the probing model as well as recent composition and expandability notions in the random probing model. It supports any masking gadgets with linear randomness (e.g., addition, copy, and refresh gadgets) as well as quadratic gadgets (e.g., multiplication gadgets) that might include non-linear randomness (e.g., by refreshing their inputs), while providing complete verification results for both types of gadgets. We achieve this complete verifiability by introducing a new algebraic characterization for such quadratic gadgets and exhibiting a complete method to determine the sets of input shares which are necessary and sufficient to perform a perfect simulation of any set of probes. We report various benchmarks which show that IronMask is competitive with state-of-the-art verification tools in the probing model (maskVerif, scVerif, SILVER, matverif). IronMask is also several orders of magnitude faster than VRAPS (the only previous tool verifying random probing composability and expandability) as well as SILVER (the only previous tool providing complete verification for quadratic gadgets with non-linear randomness). Thanks to this completeness and increased performance, we obtain better bounds for the tolerated leakage probability of state-of-the-art random probing secure compilers.
Objective measures are ubiquitous in the formulation, design and implementation of deep space missions. Tour durations, flyby altitudes, propellant budgets, power consumption, and other metrics are essential to developing and managing NASA missions. But beyond the simple metrics of cost and workforce, it has been difficult to identify objective, quantitative measures that assist in evaluating choices made during formulation or implementation phases in terms of their impact on flight operations. As part of the development of the Europa Clipper Mission system, a set of operations metrics has been defined along with the necessary design information and software tooling to calculate them. We have applied these methods and metrics to help assess the impact on the flight team of the six options for the Clipper Tour that are currently being vetted for selection in the fall of 2021. To generate these metrics, the Clipper MOS team first designed the set of essential processes by which flight operations will be conducted, using a standard approach and template to identify (among other aspects) timelines for each process, along with their time constraints (e.g., uplinks for sequence execution). Each of the resulting 50 processes is documented in a common format and concurred by stakeholders. Process timelines were converted into generic schedules and workforce-loaded using COTS scheduling software, based on the inputs of the process authors and domain experts. Custom code was generated to create an operations schedule for a specific portion of Clipper's prime mission, with instances of a given process scheduled based on specific timing rules (e.g., process X starts once per week on Thursdays) or relative to mission events (e.g., the sequence generation process begins on a Monday, at least three weeks before each Europa closest approach). Over a 5-month period, and for each of six Clipper candidate tours, the result was a 20,000+ line, workforce-loaded schedule that documents all of the process-driven work effort at the level of individual roles, along with a significant portion of the level-of-effort work. Post-processing code calculated the absolute and relative number of work hours during a nominal 5-day / 40-hour work week, the work effort during 2nd and 3rd shift, as well as 1st shift on weekends. The resulting schedules and shift tables were used to generate objective measures that can be related to both human factors and operational risk, and they showed that Clipper tours which utilize 6:1 resonant (21.25-day) orbits instead of 4:1 resonant (14.17-day) orbits during the first dozen or so Europa flybys are advantageous to flight operations. A similar approach can be extended to assist missions in more objective assessments of a number of mission issues and trades, including tour selection and spacecraft design for operability.
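A hedged sketch of the kind of schedule post-processing described above: bucketing workforce-loaded schedule lines into nominal-week and off-shift hours. The shift boundaries, record format, and dates are invented for illustration, not taken from the Clipper tooling.

```python
from datetime import datetime

def classify(start: datetime) -> str:
    """Assign a scheduled task to a shift bucket (invented boundaries)."""
    if start.weekday() >= 5:                      # Saturday/Sunday
        return "weekend 1st shift" if 6 <= start.hour < 14 else "weekend other"
    if 6 <= start.hour < 14:
        return "1st shift (nominal)"
    return "2nd shift" if 14 <= start.hour < 22 else "3rd shift"

tasks = [  # (role, start, duration in hours): toy schedule lines
    ("sequencer", datetime(2029, 3, 5, 9), 4.0),    # a Monday morning
    ("sequencer", datetime(2029, 3, 5, 20), 2.0),   # same day, 2nd shift
    ("nav",       datetime(2029, 3, 10, 8), 6.0),   # a Saturday
]

totals: dict[str, float] = {}
for role, start, hours in tasks:
    bucket = classify(start)
    totals[bucket] = totals.get(bucket, 0.0) + hours
print(totals)  # hours per shift bucket, feeding the human-factors metrics
```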
Recently, Cloud Computing became one of today's great innovations for provisioning Information Technology (IT) resources. Moreover, a new model named Fog Computing has been introduced, which addresses issues of the Cloud Computing paradigm regarding time delay and high cost. However, security challenges are still a big concern, given the vulnerabilities of both Cloud and Fog Computing systems. Man-in-the-Middle (MITM) is considered one of the most destructive attacks in a Fog Computing context. Moreover, MITM attacks are very difficult to detect, as they are performed passively at the Software-Defined Networking (SDN) level; the Fog Computing paradigm is also ideally suited for MITM attacks. In this paper, a MITM mitigation scheme is proposed consisting of an SDN network (Fog Leaders) which controls a layer of Fog Nodes. Furthermore, Multi-Path TCP (MPTCP) is used between all edge devices and Fog Nodes to improve resource utilization and security. The performance of the proposed solution is evaluated in a simulation environment using Mininet, the Ryu SDN controller, and the MPTCP Linux kernel. The experimental results show that the proposed solution improves security, network resiliency, and resource utilization without any significant overhead compared to the traditional TCP implementation.
Currently, research on 5G is focusing increasingly on communication techniques, and previous studies have primarily focused on preventing the disruption of communications. To date, there has not been sufficient research on network anomaly detection as a security countermeasure. Since 5G network data will be more complex and dynamic, intelligent network anomaly detection is a necessary solution for protecting the network infrastructure. However, since AI-based network anomaly detection is dependent on data, it is difficult to collect actual labeled data in the industrial field. Also, performance degradation may occur when applying models to the real field because of domain shift. Therefore, in this paper, we study an intelligent network anomaly detection technique based on domain adaptation (DA) in the 5G edge network in order to solve the problems caused by data-driven AI. It allows us to train models in data-rich domains and apply detection techniques in domains with an insufficient amount of data. Our method will contribute to AI-based network anomaly detection for improving the security of the 5G edge network.
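The abstract does not name a specific domain adaptation method. As a minimal feature-level sketch, the following uses CORrelation ALignment (CORAL), one standard DA technique, to show how a detector trained in a data-rich domain can be aligned to a target domain with scarce labels; all data here are synthetic.

```python
import numpy as np

def coral(source: np.ndarray, target: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """CORrelation ALignment (Sun et al., 2016): re-color source features so
    their second-order statistics match the target domain's.

    source, target: (n_samples, n_features) arrays of traffic features.
    Returns source features aligned to the target domain.
    """
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])
    # Whiten the source covariance, then re-color with the target covariance.
    whiten = np.linalg.inv(np.linalg.cholesky(cs))
    color = np.linalg.cholesky(ct)
    return (source - source.mean(0)) @ whiten.T @ color.T + target.mean(0)

# Usage: train the detector on labeled, CORAL-aligned source-domain data,
# then apply it to the sparsely labeled 5G edge (target) domain.
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(500, 8))    # data-rich lab domain
tgt = rng.normal(0.5, 2.0, size=(200, 8))    # scarce field domain
src_aligned = coral(src, tgt)
print(np.round(src_aligned.std(0) / tgt.std(0), 2))  # close to 1: scales match
```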
The Network Security and Risk (NSR) management team in an enterprise is responsible for maintaining the network, which includes switches, routers, firewalls, controllers, etc. Due to the ever-increasing threat of attackers capitalizing on vulnerabilities to create cyber-attacks across the globe, a major objective of the NSR team is to keep the network infrastructure safe and secure. The NSR team ensures this by taking the proactive measure of periodic audits of network devices. Furthermore, external auditors are engaged in the audit process. Audit information is primarily stored in an internal database of the enterprise. This generic approach could result in a trust deficit during external audits. This paper proposes a method to improve the security and integrity of the audit information by using blockchain technology, which can greatly enhance the trust factor between the auditors and enterprises.
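A minimal sketch of the integrity property the proposal relies on: a hash-chained audit log in which each record commits to its predecessor, so any retroactive edit to stored audit information is detectable. The class and field names are illustrative, not from the paper, and the consensus and distribution aspects of a real blockchain are omitted.

```python
import hashlib
import json
import time

class AuditChain:
    """A minimal hash-chained audit log: each entry commits to the previous
    one, so editing any stored entry breaks every later hash."""

    def __init__(self):
        self.entries = []
        self.last_hash = "0" * 64  # genesis value

    def append(self, device: str, finding: str) -> dict:
        entry = {
            "device": device,        # e.g. a firewall or router ID
            "finding": finding,      # audit observation
            "ts": time.time(),
            "prev": self.last_hash,  # link to the previous entry
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every link; False means the log was tampered with."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev"] != prev or e["hash"] != hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest():
                return False
            prev = e["hash"]
        return True

log = AuditChain()
log.append("fw-01", "default SNMP community string enabled")
log.append("sw-17", "firmware two releases behind")
print(log.verify())                     # True
log.entries[0]["finding"] = "all good"  # someone tampers with the record...
print(log.verify())                     # False: integrity violation detected
```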
We present an online framework for learning and updating security policies in dynamic IT environments. It includes three components: a digital twin of the target system, which continuously collects data and evaluates learned policies; a system identification process, which periodically estimates system models based on the collected data; and a policy learning process that is based on reinforcement learning. To evaluate our framework, we apply it to an intrusion prevention use case that involves a dynamic IT infrastructure. Our results demonstrate that the framework automatically adapts security policies to changes in the IT infrastructure and that it outperforms a state-of-the-art method.
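As a toy stand-in for the policy-learning component, the sketch below runs tabular Q-learning on an invented two-state intrusion-prevention MDP; in the paper's framework, the environment role would be played by the digital twin and by the models produced by system identification.

```python
import numpy as np

rng = np.random.default_rng(1)
N_STATES, N_ACTIONS = 2, 2          # states: 0=normal, 1=suspicious
                                    # actions: 0=allow, 1=block
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.95, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Simulated environment (the digital twin's role in the framework)."""
    if state == 1 and action == 1:
        return 0, +1.0              # blocked an intrusion
    if state == 1 and action == 0:
        return 1, -1.0              # intrusion proceeds
    if state == 0 and action == 1:
        return 0, -0.2              # blocked a legitimate client
    return rng.integers(0, 2), 0.0  # normal traffic; attacks arrive randomly

state = 0
for _ in range(20_000):
    action = rng.integers(0, N_ACTIONS) if rng.random() < eps \
        else int(np.argmax(Q[state]))
    nxt, reward = step(state, action)
    # Standard Q-learning update toward the bootstrapped target.
    Q[state, action] += alpha * (reward + gamma * Q[nxt].max() - Q[state, action])
    state = nxt

print(Q.argmax(axis=1))  # learned policy, e.g. [0 1]: allow normal, block suspicious
```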
With the objective to eliminate the input current sensor in a totem-pole boost power factor corrector (PFC) for its low-cost design, a novel discretized sampling-based robust control scheme is proposed in this work. The proposed control methodology proves to be beneficial due to its ease of implementation and its ability to support high-frequency operation, while being able to eliminate one sensor and, thus, enhancing reliability and cost-effectiveness. In addition, detailed closed-loop stability analysis is carried out for the controller in discrete domain to ascertain brisk dynamic operation when subjected to sudden load fluctuations. To establish the robustness of the proposed control scheme, a detailed sensitivity analysis of the closed-loop performance metrics with respect to undesired changes and inherent uncertainty in system parameters is presented in this article. A comparison with the state-of-the-art (SOA) methods is provided, and conclusive results in terms of better dynamic performance are also established. To verify and elaborate on the specifics of the proposed scheme, a detailed simulation study is conducted, and the results show 25% reduction in response time as compared to SOA approaches. A 500-W boost PFC prototype is developed and tested with the proposed control scheme to evaluate and benchmark the system steady-state and dynamic performance. A total harmonic distortion of 1.68% is obtained at the rated load with a resultant power factor of 0.998 (lag), which proves the effectiveness and superiority of the proposed control scheme.
In this paper, we quantify elements representing video features and propose deep-learning-based bitrate prediction for compressed video. In particular, we use deep learning to overcome the disadvantage that the bitrate of video compressed using a Constant Rate Factor (CRF) cannot be predicted in advance. We identify video feature elements that are related to the bitrate of the compressed video, and we confirm through various deep learning techniques that this relationship can be learned.
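A hedged sketch of the prediction task: mapping quantified per-clip features plus the CRF value to the resulting bitrate. The features, the data, and the linear model (a one-layer stand-in for the paper's deep networks) are all synthetic illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Hypothetical per-clip features: spatial detail, temporal motion, CRF value.
X = np.column_stack([
    rng.uniform(0, 1, n),     # spatial information (SI)
    rng.uniform(0, 1, n),     # temporal information (TI)
    rng.uniform(18, 30, n),   # CRF used for the encode
])
# Synthetic ground truth: bitrate grows with complexity, shrinks with CRF.
log_bitrate = 3.0 + 1.5 * X[:, 0] + 2.0 * X[:, 1] - 0.12 * X[:, 2] \
              + rng.normal(0, 0.05, n)

# Fit a linear predictor of log-bitrate by least squares.
A = np.column_stack([X, np.ones(n)])
w, *_ = np.linalg.lstsq(A, log_bitrate, rcond=None)

clip = np.array([0.7, 0.4, 23.0, 1.0])        # unseen clip features + bias term
print(f"predicted bitrate ~ {np.exp(clip @ w):.1f} (synthetic units)")
```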
Face recognition is a biometric technique that uses a computer or machine to facilitate the recognition of human faces. The advantage of this technique is that it can detect faces without direct contact with the device. In practice, however, the security of face recognition data systems is still given little attention. Therefore, this study proposes a technique for securing the data stored in the face recognition system database. It implements the Viola-Jones algorithm, the Kanade-Lucas-Tomasi (KLT) algorithm, and Principal Component Analysis (PCA), and applies database security using XOR encryption. Several tests and analyses have been performed with this method. The histogram analysis shows that the encrypted images reveal no visual information related to the plain images. In addition, the correlation between the encrypted and plain images is weak, so the method has high security against statistical attacks, with an entropy value of around 7.9. The average time required to carry out the recognition process is 0.7896 s.
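A minimal sketch of the XOR database protection and the entropy measurement reported above, assuming a pseudorandom keystream derived from a seed; the face-detection and PCA recognition stages are out of scope here.

```python
import numpy as np

def xor_encrypt(image: np.ndarray, seed: int) -> np.ndarray:
    """XOR each pixel with a pseudorandom keystream. Decryption is the same
    call with the same seed, since (p ^ k) ^ k == p."""
    keystream = np.random.default_rng(seed).integers(
        0, 256, size=image.shape, dtype=np.uint8)
    return image ^ keystream

def entropy(image: np.ndarray) -> float:
    """Shannon entropy of the 8-bit pixel histogram; ~8 bits means the
    ciphertext looks statistically uniform."""
    hist = np.bincount(image.ravel(), minlength=256) / image.size
    hist = hist[hist > 0]
    return float(-(hist * np.log2(hist)).sum())

# A synthetic low-entropy "face image" stands in for a real template.
face = np.tile(np.arange(64, 192, dtype=np.uint8), (128, 1))
enc = xor_encrypt(face, seed=42)
dec = xor_encrypt(enc, seed=42)

print(f"plain entropy  = {entropy(face):.2f} bits")       # low: structured image
print(f"cipher entropy = {entropy(enc):.2f} bits")        # near 8.0, as in the paper
print("lossless round-trip:", np.array_equal(face, dec))  # True
```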
Control room video surveillance is an important source of information for ensuring public safety. To facilitate the process, a Decision-Support System (DSS) designed for the security task force is vital and necessary for making decisions rapidly from a sea of information. In mission-critical operations, Situational Awareness (SA), which consists of knowing what is going on around you at any given time, plays a crucial role across a variety of industries and should be placed at the center of our DSS. In our approach, the SA system takes advantage of the human factor through a reinforcement signal, whereas previous work in this field focuses first on improving the knowledge level of the DSS and uses the human factor only for decision-making. In this paper, we propose a situational-awareness-centric decision-support system framework for mission-critical operations driven by Quality of Experience (QoE). Our idea is inspired by the reinforcement learning feedback process, which updates the environment understanding of our DSS. The feedback is injected by a QoE measure built on user perception. Our approach allows our DSS to evolve according to the context with up-to-date SA.
Considered sensitive information by ISO/IEC 24745, biometric data should be stored and used in a protected way; if not, the privacy and security of end users can be compromised. Also, the advent of quantum computers demands quantum-resistant solutions. This work proposes the use of the Kyber and Saber public key encryption (PKE) algorithms together with homomorphic encryption (HE) in a face recognition system. Kyber and Saber, both based on lattice cryptography, were two finalists of the third round of the NIST post-quantum cryptography standardization process; after the third round was completed, Kyber was selected as the PKE algorithm to be standardized. Experimental results show that the recognition performance of the non-protected face recognition system is preserved under protection, with smaller protected templates and keys and shorter execution times than other lattice-based HE schemes reported in the literature. The parameter sets considered achieve security levels of 128, 192 and 256 bits.
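Reproducing lattice-based Kyber/Saber with HE here is impractical, so the sketch below uses textbook Paillier, a classical additively homomorphic scheme that is not post-quantum, purely to illustrate the pattern the paper relies on: the server scores an encrypted probe against an enrolled template without ever decrypting it. The key size, toy primality test, and feature vectors are illustrative only.

```python
import math
import random

def keygen(bits=512):
    def prime(b):
        # Toy Fermat primality test: adequate for a sketch, not for production.
        while True:
            p = random.getrandbits(b) | (1 << (b - 1)) | 1
            if all(pow(a, p - 1, p) == 1 for a in (2, 3, 5, 7, 11)):
                return p
    p, q = prime(bits // 2), prime(bits // 2)
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    # With g = n + 1, L(g^lam mod n^2) = lam mod n; mu is its inverse.
    mu = pow((pow(n + 1, lam, n * n) - 1) // n, -1, n)
    return (n,), (n, lam, mu)

def enc(pk, m):
    (n,) = pk
    r = random.randrange(1, n)
    return (pow(n + 1, m, n * n) * pow(r, n, n * n)) % (n * n)

def dec(sk, c):
    n, lam, mu = sk
    return ((pow(c, lam, n * n) - 1) // n * mu) % n

pk, sk = keygen()
probe = [3, 1, 4, 1, 5]        # client's feature vector (tiny toy)
reference = [3, 1, 4, 1, 6]    # server's enrolled template
enc_probe = [enc(pk, x) for x in probe]

# Homomorphic dot product: Enc(x)^w = Enc(w*x); ciphertext products add plaintexts.
n2 = pk[0] ** 2
score = 1
for c, w in zip(enc_probe, reference):
    score = (score * pow(c, w, n2)) % n2

print("encrypted-domain dot product:", dec(sk, score))  # 9+1+16+1+30 = 57
```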
In this paper, an overall introduction to fingerprint encryption algorithms is given, and a fingerprint encryption algorithm with error correction is then designed by adding an error-correction mechanism. This new fingerprint encryption algorithm generates a random key, encodes it as polynomial coefficients using a binary sequence generator, uses it to encrypt the fingerprint, and applies Lagrange interpolation to reconstruct the polynomial during authentication. Because a cyclic redundancy check code is used to identify the correct key, the accuracy of the algorithm can be ensured. Experimental results indicate that the fuzzy vault algorithm with error correction realizes template protection well and meets the requirements of biological information security protection. They also indicate that the system's security can be further enhanced by changing the key length.
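A toy fuzzy vault in the spirit of this abstract, with two simplifications that should be flagged: it works over a prime field rather than the GF(2^16) typical of fingerprint vaults, and it uses a degree-1 polynomial so two genuine minutiae suffice. zlib's CRC-32 plays the role of the cyclic redundancy check that identifies the correct key.

```python
import random
import zlib

P = 2_147_483_647  # prime field modulus (illustrative choice)

def lock(minutiae, key16: int):
    """Bind a 16-bit key to the minutiae set; hide the points among chaff."""
    coeffs = [key16, zlib.crc32(key16.to_bytes(2, "big"))]  # key + its CRC
    poly = lambda x: (coeffs[0] + coeffs[1] * x) % P        # degree 1 here
    vault = [(x, poly(x)) for x in minutiae]                # genuine points
    used = set(minutiae)
    while len(vault) < 20:                                  # add chaff points
        x = random.randrange(1, P)
        if x not in used:
            used.add(x)
            vault.append((x, random.randrange(P)))          # off the polynomial
    random.shuffle(vault)
    return vault

def unlock(vault, query_minutiae):
    """Interpolate candidate keys from matching points; CRC picks the real one."""
    qs = set(query_minutiae)
    pts = [(x, y) for x, y in vault if x in qs]
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            (x1, y1), (x2, y2) = pts[i], pts[j]
            a1 = (y2 - y1) * pow(x2 - x1, -1, P) % P        # Lagrange, degree 1
            a0 = (y1 - a1 * x1) % P
            if a0 < 2**16 and a1 == zlib.crc32(a0.to_bytes(2, "big")):
                return a0
    return None

enrolled = [101, 202, 303, 404]             # quantized minutiae locations
vault = lock(enrolled, key16=0xBEEF)
print(hex(unlock(vault, [101, 404, 999])))  # 0xbeef: enough genuine overlap
```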
In the era of big data, in order to prevent malicious access and information leakage during data services, researchers have proposed a location big data encryption method based on privacy protection. Given the problems arising from the development of information networks in recent years, users often encounter situations where their location information is arbitrarily obtained in the network environment, which not only threatens their privacy but also affects the effective transmission of information. This study therefore proposes a location big data encryption method with privacy protection at its core: it first clarifies the representation of location big data and the positioning information, distinguishes processed location information from unknown information, applies fuzzy encryption theory, dynamically regroups the location data, and finally builds an encryption algorithm centered on privacy protection. The empirical results show that this method can not only effectively block the intrusion of attack data, but also effectively control the error of location data encryption.
Cyberspace is the fifth activity space after land, sea, air, and space. Safeguarding cyberspace security is a major issue related to national security, national sovereignty, and the legitimate rights and interests of the people. With the rapid development of artificial intelligence technology and its application in various fields, cyberspace security is facing new challenges: how to help network security personnel grasp security trends at any time, help monitoring personnel respond quickly to alarm information, and facilitate their tracking and processing of incidents. This paper introduces a method that uses a situational awareness micro-application, a practical attack-and-defense robot, to quickly feed network attack information back to the monitoring personnel, report the attack information to the information reporting platform in a timely manner, and automatically block malicious IPs.
The power industrial control system is an important part of the national critical information infrastructure. Its security is related to national strategic security, and it has become an important target of cyber attacks. In order to solve the problem that existing vulnerability detection technology for power industrial control systems cannot meet non-destructive requirements, this paper proposes an industrial control vulnerability analysis technique combining dynamic and static analysis. On this basis, a non-destructive industrial control vulnerability detection system is designed, and a simulation verification platform is built to verify its effectiveness. These results provide technical support for research on the safety protection of power industrial control systems.
In order to solve the problem of untargeted data security grading methods in the process of power grid data governance, this paper analyzes the mainstream data security grading standards at home and abroad, investigates and sorts out the characteristics of power grid data security grading requirements, and proposes a power grid data security classification scheme that considers the security impact along four dimensions: the nation, society, individuals, and enterprises. The scheme establishes the principles of power grid data security classification. Based on the basic idea of “who will be affected, and to what extent, when power grid data security is damaged”, it defines three classification factors that need to be considered: the degree of impact, the scope of influence, and the objects of influence, and it divides power grid data into five security levels. For the operational stage of power grid data security grading, this paper draws on practical experience and gives a recommended grading process. This scheme conforms to the status quo of power grid data classification and lays the foundation for power grid data governance.
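The abstract names the three grading factors but not the exact mapping, so the table in the sketch below is an invented illustration of how affected object, degree of impact, and scope of influence could combine into the five levels.

```python
# (affected object, degree of impact) -> security level 1..5 (illustrative)
LEVELS = {
    ("nation",     "serious"): 5,
    ("nation",     "moderate"): 4,
    ("society",    "serious"): 4,
    ("society",    "moderate"): 3,
    ("enterprise", "serious"): 3,
    ("enterprise", "moderate"): 2,
    ("individual", "serious"): 2,
    ("individual", "moderate"): 1,
}

def grade(affected_object: str, degree: str, wide_scope: bool) -> int:
    """Return the security level, bumping one level (capped at 5) when the
    scope of influence is wide."""
    level = LEVELS[(affected_object, degree)]
    return min(5, level + 1) if wide_scope else level

print(grade("enterprise", "serious", wide_scope=True))    # 4
print(grade("individual", "moderate", wide_scope=False))  # 1
```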
The market for intelligent connected vehicles is expanding rapidly, and network security issues follow. A Situational Awareness (SA) system can detect, identify, and respond to security risks from a global perspective. In view of the discrete and weakly correlated characteristics of perceptual data, this paper uses a Fly Optimization Algorithm (FOA) based on dynamic adjustment of the optimization step size to improve convergence speed, and optimizes a Probabilistic Neural Network (PNN)-based model for extracting security situation elements of the Internet of Vehicles (IoV) to improve the accuracy of element extraction. Comparative experiments verify that the algorithm has fast convergence, high precision, and good stability.
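A minimal sketch of the optimization idea under stated assumptions: a fly swarm with a dynamically shrinking step size searches for a parameter that minimizes a loss. The toy objective stands in for scoring the PNN-based extraction model, whose details the abstract does not give.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(sigma: np.ndarray) -> np.ndarray:
    """Toy objective with its optimum at 1.7; a real run would instead score
    the PNN extraction model, e.g. over its smoothing parameter."""
    return (sigma - 1.7) ** 2

n_flies, n_iter = 20, 60
best = np.array([5.0])                  # swarm's current best location
for t in range(n_iter):
    step = 2.0 * (1 - t / n_iter)       # dynamically shrinking step size
    swarm = best + rng.uniform(-step, step, size=(n_flies, 1))
    smell = loss(swarm).ravel()         # "smell concentration" per fly
    if smell.min() < loss(best)[0]:
        best = swarm[smell.argmin()]    # swarm moves toward the best smell

print(f"found sigma ~ {best[0]:.3f}")   # close to 1.700
```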
In recent years, in order to continuously promote the construction of safe cities, security monitoring equipment has been widely deployed across the country. Using computer vision technology to achieve effective intelligent analysis of violence in video surveillance is very important for maintaining social stability and ensuring the safety of people's lives and property. Video surveillance systems have been widely adopted because of their intuitive and convenient advantages, but existing systems have relatively limited functions, generally only supporting the viewing, querying, and playback of surveillance video. In addition, researchers have paid less attention to complex abnormal violent behavior, and related research often ignores the differences between violent behaviors in different scenes. At present, there are two main problems in video abnormal behavior detection: video data of abnormal behavior is scarce, and the definition of abnormal behavior cannot be clearly distinguished across scenes. The main existing approach is to model normal behavior events first and then define videos that do not conform to the normal model as abnormal; among such approaches, deep-learning-based spatio-temporal feature representation shows good prospects. Faced with massive surveillance video, it is necessary to use deep learning to recognize violent behavior, so that the machine learns to identify human actions, instead of relying on manual monitoring of camera images to raise alarms about violent behavior. Network training mainly uses video datasets for recognition.
The main intention of edge computing is to improve network performance by storing and computing data at the edge of the network, near the end user. However, its rapid development has largely ignored security threats in large-scale computing platforms and their applications. Security and privacy are therefore crucial needs for edge computing and edge-computing-based environments. Security vulnerabilities in edge computing systems lead to security threats affecting edge computing networks, so there is a basic need for an intrusion detection system (IDS) designed for edge computing to mitigate security attacks. Given recent attacks, traditional algorithms may not be feasible for edge computing. This article outlines the latest IDSs designed for edge computing and focuses on the corresponding methods, functions, and mechanisms. The review also provides a deep understanding of emerging security attacks in edge computing. It shows that although the design and implementation of edge computing IDSs have been studied previously, the development of efficient, reliable, and powerful IDSs for edge computing systems is still a crucial task. The review concludes with future prospects for IDS development.
Secret message protection has become a focal point of the network security domain due to violations of network use policies and unauthorized access to the public network. These problems have led to data protection techniques such as cryptography and steganography. Cryptography encrypts a secret message into ciphertext, while steganography conceals the secret message within the codes that make up a digital file, such as an image, audio, or video. Steganography, unlike cryptography, hides the existence of a secret message for secure transmission over the public network. This paper presents a steganographic approach to data hiding in digital images that aims to provide higher performance by combining type-1 fuzzy logic, used to pre-process the cover image, with difference expansion techniques. Previous methods embed the secret message directly into the original cover image. This paper provides a new method that first identifies the edges of the cover image and then applies difference expansion to embed the secret message. The experimental results show a 10% improvement over the existing method in terms of increased payload capacity and the visual quality of the stego image.
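The difference-expansion step this method builds on is standard (Tian, 2003), so it can be sketched exactly; the paper's contribution, the type-1 fuzzy edge detection used to choose embeddable pairs, and the overflow screening a real embedder needs are not reproduced here.

```python
def de_embed(x: int, y: int, bit: int) -> tuple[int, int]:
    """Embed one bit into a pixel pair by difference expansion."""
    l = (x + y) // 2                   # pair average (integer)
    h = x - y                          # pair difference
    h2 = 2 * h + bit                   # expand the difference, insert the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2: int, y2: int) -> tuple[int, int, int]:
    """Recover the original pair and the hidden bit."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit, h = h2 & 1, h2 >> 1           # bit is the new LSB of the difference
    return l + (h + 1) // 2, l - h // 2, bit

x, y = 206, 201                        # neighboring cover pixels
for b in (0, 1):
    x2, y2 = de_embed(x, y, b)
    print((x2, y2), de_extract(x2, y2))  # restores (206, 201) and the bit
```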
In this work, we consider the application of non-stationary channel polarization theory to the wiretap channel model with non-stationary blocks. In particular, we present a time-bit coding scheme: a secure polar code constructed on virtual bit blocks using non-stationary channel polarization theory. We prove that this time-bit coding scheme achieves reliability, strong security, and the secrecy capacity. Moreover, compared with regular secure polar coding methods, our scheme has lower coding complexity for non-stationary channel blocks.
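A minimal sketch of the polar transform underlying any polar coding scheme: encoding by the standard butterfly recursion, i.e. multiplying the input bits by the n-fold Kronecker power of the kernel F = [[1,0],[1,1]] over GF(2). The non-stationary construction and the wiretap bit partition (information bits placed on indices reliable for the legitimate receiver but unreliable for the eavesdropper) are the paper's contribution and are not reproduced.

```python
import numpy as np

def polar_encode(u: np.ndarray) -> np.ndarray:
    """Encode a length-2^n bit vector with the polar kernel recursion
    (no bit-reversal permutation)."""
    n = len(u)
    if n == 1:
        return u.copy()
    half = n // 2
    left = polar_encode(u[:half] ^ u[half:])   # combine stage (XOR in GF(2))
    right = polar_encode(u[half:])
    return np.concatenate([left, right])

u = np.array([0, 1, 0, 0, 1, 0, 1, 1], dtype=np.uint8)  # N = 8 input bits
print(polar_encode(u))
```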
Biometric security is a fast-growing area that has received considerable attention over the past few years. Digital hiding and encryption technologies provide an effective solution for securing biometric information from intentional or accidental attacks. Visual cryptography is an approach for encrypting information that is in visual form, for example images. Since the biometric templates stored in databases are generally in the form of images, visual cryptography can be employed effectively to protect the templates from attack. This study develops a share creation with improved encryption process for secure biometric verification (SCIEP-SBV) technique. The presented SCIEP-SBV technique mainly aims to attain security via encryption and a share creation (SC) procedure. First, the biometric images undergo the SC process to produce several shares. For the encryption process, a homomorphic encryption (HE) technique is utilized. To further improve secrecy, an improved bald eagle search (IBES) approach is exploited. The simulation values of the SCIEP-SBV system are tested on biometric images. An extensive comparative study demonstrates the improved outcomes of the SCIEP-SBV technique over compared methods.
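The share creation (SC) step can be illustrated with the classic 2-out-of-2 XOR scheme, under the assumption of a binary template image: one share is uniform noise and the other is the secret XORed with it, so either share alone reveals nothing. The HE layer and the IBES tuning from the paper are out of scope.

```python
import numpy as np

rng = np.random.default_rng(7)

secret = (rng.random((64, 64)) > 0.5).astype(np.uint8)          # toy binary template
share1 = rng.integers(0, 2, size=secret.shape, dtype=np.uint8)  # pure noise
share2 = secret ^ share1                     # also noise to any single observer

reconstructed = share1 ^ share2              # combining both shares recovers it
print("exact recovery:", np.array_equal(reconstructed, secret))  # True
# Either share alone is uniform random bits: holding one reveals nothing.
print("share1 mean ~ 0.5:", round(float(share1.mean()), 2))
```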