Biblio

Found 1140 results

Filters: First Letter Of Title is E
2021-05-18
Zheng, Wei, Gao, Jialiang, Wu, Xiaoxue, Xun, Yuxing, Liu, Guoliang, Chen, Xiang.  2020.  An Empirical Study of High-Impact Factors for Machine Learning-Based Vulnerability Detection. 2020 IEEE 2nd International Workshop on Intelligent Bug Fixing (IBF). :26–34.
Vulnerability detection is an important topic in software engineering. To improve the effectiveness and efficiency of vulnerability detection, many traditional machine learning-based and deep learning-based vulnerability detection methods have been proposed. However, the impact of different factors on vulnerability detection is unknown. For example, classification models and vectorization methods can directly affect the detection results, and code replacement can affect the features used for vulnerability detection. We conduct a comparative study to evaluate the impact of different classification algorithms, vectorization methods, and the replacement of user-defined variable and function names. In this paper, we collected three different vulnerability code datasets. These datasets correspond to different types of vulnerabilities and draw on different proportions of source code. Besides, we extract and analyze the features of the vulnerability code datasets to explain some experimental results. Our findings from the experimental results can be summarized as follows: (i) deep learning performs better than traditional machine learning, and BLSTM achieves the best performance; (ii) CountVectorizer can improve the performance of traditional machine learning; (iii) different vulnerability types and different code sources generate different features; we use the Random Forest algorithm to generate the features of the vulnerability code datasets, and these generated features include system-related functions, syntax keywords, and user-defined names; (iv) datasets without user-defined variable and function name replacement achieve better vulnerability detection results.
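As a rough illustration of the kind of traditional-ML pipeline the study compares, the sketch below pairs CountVectorizer token features with a Random Forest classifier and prints the tokens with the highest feature importance. The code snippets, labels, and parameters are placeholders, not the paper's datasets or settings.

```python
# Minimal sketch of a traditional-ML vulnerability detection pipeline:
# CountVectorizer token features feeding a Random Forest classifier.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline

code_samples = [
    "strcpy(buf, input);",                     # hypothetical vulnerable snippet
    "strncpy(buf, input, sizeof(buf) - 1);",   # hypothetical safe snippet
    "gets(line);",
    "fgets(line, sizeof(line), stdin);",
]
labels = [1, 0, 1, 0]                          # 1 = vulnerable, 0 = safe (toy labels)

pipeline = make_pipeline(
    CountVectorizer(token_pattern=r"\w+"),
    RandomForestClassifier(n_estimators=100, random_state=0),
)
pipeline.fit(code_samples, labels)

# Feature importances from the forest hint at which tokens drive the decision,
# loosely analogous to the paper's use of Random Forest to surface dataset features.
vectorizer = pipeline.named_steps["countvectorizer"]
forest = pipeline.named_steps["randomforestclassifier"]
for token, weight in sorted(zip(vectorizer.get_feature_names_out(),
                                forest.feature_importances_),
                            key=lambda x: -x[1])[:5]:
    print(token, round(weight, 3))
```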
Cho, Sunghwan, Chen, Gaojie, Coon, Justin P..  2020.  Enhancing Security in VLC Systems Through Beamforming. GLOBECOM 2020 - 2020 IEEE Global Communications Conference. :1–6.
This paper proposes a novel zero-forcing (ZF) beamforming strategy that can simultaneously cope with active and passive eavesdroppers (EDs) in visible light communication systems. A related optimization problem is formulated to maximize the signal-to-noise ratio (SNR) of the legitimate user (UE) while suppressing the SNR of the active ED to zero and constraining the average SNR of the passive EDs. The proposed beamforming directs the transmission along a particular eigenmode related to the null space of the active ED channel and the intensity of the passive ED point process. An inverse-free preconditioned Krylov subspace projection method is used to find the eigenmode. The numerical results show that the proposed ZF beamforming scheme yields better performance than a traditional ZF beamforming scheme, increasing the SNR of the UE and reducing the secrecy outage probability.
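The core zero-forcing idea can be illustrated numerically: restrict the beamforming vector to the null space of the active eavesdropper's channel and, within that space, pick the direction that maximizes the legitimate user's gain. The sketch below uses random placeholder channels, ignores VLC non-negativity constraints and the passive-ED term, and replaces the Krylov subspace solver with a plain SVD.

```python
# Toy numerical sketch of the zero-forcing idea: choose a beamforming vector
# inside the null space of the active eavesdropper's channel and maximize the
# legitimate user's gain within that space. Channels are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_leds = 8
h_ue = rng.random(n_leds)          # legitimate user's channel gains (toy values)
h_ed = rng.random(n_leds)          # active eavesdropper's channel gains (toy values)

# Orthonormal basis of the null space of h_ed (directions the ED cannot see).
_, _, vt = np.linalg.svd(h_ed.reshape(1, -1))
null_basis = vt[1:].T              # shape: n_leds x (n_leds - 1)

# Within the null space, the gain-maximizing direction is the projection of h_ue.
w = null_basis @ (null_basis.T @ h_ue)
w /= np.linalg.norm(w)

print("UE gain:", float(h_ue @ w))   # large
print("ED gain:", float(h_ed @ w))   # ~0 (zero-forced)
```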
Soderi, Simone.  2020.  Enhancing Security in 6G Visible Light Communications. 2020 2nd 6G Wireless Summit (6G SUMMIT). :1–5.
This paper considers improving the confidentiality of the next generation of wireless communications by using watermark-based blind physical layer security (WBPLSec) in Visible Light Communications (VLCs). With the growth of wireless applications and services, the demand for secure and fast data transfer connections requires new technology solutions capable of providing effective countermeasures against security attacks. VLC is one of the most promising new wireless communication technologies, due to the possibility of using environmental artificial lights as a free-space data transfer channel. On the other hand, VLCs are also inherently susceptible to eavesdropping attacks. This work proposes an innovative scheme in which red, green, blue (RGB) light-emitting diodes (LEDs) and three color-tuned photodiodes (PDs) are used to secure a VLC link by using a jamming receiver in conjunction with the spread spectrum watermarking technique. To the best of the author's knowledge, this is the first work that deals with physical layer security in VLC by using RGB LEDs.
2021-05-13
Aghabagherloo, Alireza, Mohajeri, Javad, Salmasizadeh, Mahmoud, Feghhi, Mahmood Mohassel.  2020.  An Efficient Anonymous Authentication Scheme Using Registration List in VANETs. 2020 28th Iranian Conference on Electrical Engineering (ICEE). :1—5.

Nowadays, Vehicular Ad hoc Networks (VANETs) are widely studied because they can reduce traffic congestion and road accidents. These networks have several security requirements, such as anonymity, data authentication, confidentiality, traceability and revocation of offending users, unlinkability, integrity, non-repudiation and access control. Authentication of the data and the sender is among the most important security requirements in these networks, and many authentication schemes have been proposed to date. One of the well-known techniques to provide user authentication in these networks is authentication based on the smartcard (ASC). In this paper, we propose an ASC scheme that not only provides necessary security requirements such as anonymity, traceability and unlinkability in VANETs, but is also more efficient than the other schemes in the literature.

Nie, Guanglai, Zhang, Zheng, Zhao, Yufeng.  2020.  The Executors Scheduling Algorithm for the Web Server Based on the Attack Surface. 2020 IEEE International Conference on Advances in Electrical Engineering and Computer Applications (AEECA). :281–287.
Among existing scheduling algorithms for mimic (mimicry) structures, random scheduling cannot solve the problem of a large vulnerability window during the scheduling process. Algorithms that use diversity and complexity as scheduling indicators, because they are built on known vulnerabilities, fail to meet the requirements of mimic defense's endogenous security and cannot account for unknown vulnerabilities or measure the differences among mimic Executive Entities over continuous time. In this paper, we propose a scheduling algorithm for mimic Executive Entities from the perspective of the attack surface. Its resource measurement and analysis method is intrinsically consistent with mimic security: it avoids both the random algorithm's blindness to vulnerabilities and the targeted modeling of known vulnerabilities, while ensuring the diversity of the Executive Entities, keeping the attack surface of the web server scheduling system small over continuous time, and maintaining continuous differences among entities. Experiments show that the proposed minimum-symbiotic-resource scheduling algorithm based on time continuity is more secure than the random scheduling algorithm.
2021-05-05
Singh, Sukhpreet, Jagdev, Gagandeep.  2020.  Execution of Big Data Analytics in Automotive Industry using Hortonworks Sandbox. 2020 Indo – Taiwan 2nd International Conference on Computing, Analytics and Networks (Indo-Taiwan ICAN). :158—163.

The market landscape has undergone dramatic change because of globalization, shifting market conditions, cost pressure, increased competition, and volatility. Transforming the operation of businesses has been possible because of the astonishing speed at which technology has changed. The automotive industry is on the edge of a revolution. Increased customer expectations, changing ownership models, self-driving vehicles and much more have driven the transformation of automobiles, applications, and services, drawing on artificial intelligence, sensors, RFID and big data analysis. Large automobile companies have been emphasizing the collection of data to gain insight into customers' expectations, preferences, and budgets, alongside competitors' policies. Statistical methods can be applied to historical data gathered from authentic sources to identify the impact of fixed and variable marketing investments and help automakers arrive at a more effective, precise, and efficient approach to target customers. Proper analysis of supply chain data can disclose the weak links in the chain, enabling timely countermeasures to minimize the adverse effects. In order to fully benefit from analytics, a detailed set of capabilities that intersects and integrates with multiple functions and teams across the business is required. The effective role played by big data analysis in the automobile industry is also expanded upon in the research paper. The paper discusses the scope and challenges of big data, elaborates on the technology behind the concept, and illustrates the working of the MapReduce model that executes in the back end and is responsible for performing data mining.
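The map/shuffle/reduce flow mentioned at the end of the abstract can be illustrated with a tiny in-memory example; here it counts hypothetical vehicle fault codes. A real deployment (e.g., Hadoop on the Hortonworks Sandbox) distributes these phases across a cluster; this only shows the programming model.

```python
# Minimal in-memory sketch of the map/shuffle/reduce programming model,
# counting made-up fault codes from vehicle telemetry records.
from collections import defaultdict

records = [
    "VIN123 P0420", "VIN456 P0171", "VIN789 P0420", "VIN123 P0300",
]

# Map: emit (key, 1) pairs, keyed by fault code.
mapped = [(rec.split()[1], 1) for rec in records]

# Shuffle: group values by key.
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce: aggregate each key's values.
counts = {key: sum(values) for key, values in groups.items()}
print(counts)   # {'P0420': 2, 'P0171': 1, 'P0300': 1}
```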

Zhu, Jianping, Hou, Rui, Wang, XiaoFeng, Wang, Wenhao, Cao, Jiangfeng, Zhao, Boyan, Wang, Zhongpu, Zhang, Yuhui, Ying, Jiameng, Zhang, Lixin et al..  2020.  Enabling Rack-scale Confidential Computing using Heterogeneous Trusted Execution Environment. 2020 IEEE Symposium on Security and Privacy (SP). :1450—1465.

With its huge real-world demands, large-scale confidential computing still cannot be supported by today's Trusted Execution Environment (TEE), due to the lack of scalable and effective protection of high-throughput accelerators like GPUs, FPGAs, and TPUs. Although attempts have been made recently to extend the CPU-like enclave to GPUs, these solutions require changes to the CPU or GPU chips, may introduce new security risks due to side-channel leaks in CPU-GPU communication, and are still under the resource constraints of today's CPU TEE. To address these problems, we present the first Heterogeneous TEE design that can truly support large-scale compute- or data-intensive (CDI) computing, without any chip-level change. Our approach, called HETEE, is a device for centralized management of all computing units (e.g., GPUs and other accelerators) of a server rack. It is uniquely designed to work with today's data centres and clouds, leveraging modern resource pooling technologies to dynamically compartmentalize computing tasks, enforce strong isolation, and reduce the TCB through hardware support. More specifically, HETEE utilizes the PCIe ExpressFabric to allocate its accelerators to the server node on the same rack for a non-sensitive CDI task, and moves them back into a secure enclave in response to the demand for confidential computing. Our design runs a thin TCB stack for security management on a security controller (SC), while leaving a large set of software (e.g., AI runtime, GPU driver, etc.) to the integrated microservers that operate enclaves. An enclave is physically isolated from others through hardware and verified by the SC at its inception. Its microserver and computing units are restored to a secure state upon termination. We implemented HETEE on a real hardware system and evaluated it with popular neural network inference and training tasks. Our evaluations show that HETEE can easily support CDI tasks at real-world scale and incurs a maximal throughput overhead of 2.17% for inference and 0.95% for training on ResNet152.

2021-05-03
Le, Son N., Srinivasan, Sudarshan K., Smith, Scott C..  2020.  Exploiting Dual-Rail Register Invariants for Equivalence Verification of NCL Circuits. 2020 IEEE 63rd International Midwest Symposium on Circuits and Systems (MWSCAS). :21–24.
Equivalence checking is one of the most scalable and useful verification techniques in industry. NULL Convention Logic (NCL) circuits utilize dual-rail signals (i.e., two wires to represent one bit of DATA), where the wires are inverses of each other during a DATA wavefront. In this paper, a technique that exploits this invariant at NCL register boundaries is proposed to improve the efficiency of equivalence verification of NCL circuits.
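The dual-rail invariant the paper exploits is easy to state concretely: during a DATA wavefront the two rails carry complementary values, (0, 0) means NULL, and (1, 1) never occurs. The small sketch below encodes that table; the checking harness is illustrative only and is not the authors' verification flow.

```python
# Sketch of the standard NCL dual-rail encoding and the register-boundary
# invariant: during a DATA wavefront the rails are inverses of each other.
def dual_rail_state(rail0: int, rail1: int) -> str:
    if (rail0, rail1) == (0, 0):
        return "NULL"          # no data present
    if (rail0, rail1) == (1, 0):
        return "DATA0"         # logical 0
    if (rail0, rail1) == (0, 1):
        return "DATA1"         # logical 1
    raise ValueError("illegal state: both rails asserted")

def rails_are_valid_data(rail0: int, rail1: int) -> bool:
    # Invariant exploited at register boundaries: rail0 == NOT rail1.
    return rail0 == (1 - rail1)

for r0, r1 in [(0, 0), (1, 0), (0, 1)]:
    print((r0, r1), dual_rail_state(r0, r1), rails_are_valid_data(r0, r1))
```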
Kolomoitcev, V. S..  2020.  Effectiveness of Options for Designing a Pattern of Secure Access ‘Connecting Node’. 2020 Wave Electronics and its Application in Information and Telecommunication Systems (WECONF). :1–5.
The purpose of the work was to study the fault-tolerant pattern of secure access of computer system nodes to external network resources: the pattern of secure access 'Connecting node'. The pattern of secure access 'Connecting node' includes a group/cluster (or several groups) of routers, a computing node that includes hardware and software for information protection, and communication channels that connect it to the end nodes of the computing system and the external network (network resources that are not controlled by the information protection system). An efficiency assessment and a comparative analysis of options for designing the pattern of secure access 'Connecting node' were carried out according to various efficiency criteria. In this work, both individual and comprehensive efficiency indexes were assessed, under the assumption that the system is recoverable. The effectiveness of several design options for the pattern of secure access was evaluated in terms of the operational availability factor, as well as a group of parameters: the operational availability factor, the service delays of the information protection system, and the grade of information protection.
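As a simple illustration of the operational availability factor used as an efficiency criterion, the sketch below computes the availability of a single router from assumed MTBF/MTTR values and then the availability of a redundant router group; the numbers are placeholders, not the paper's parameters.

```python
# Illustrative availability calculation for a redundant router group in the
# 'Connecting node' pattern, assuming independent, repairable routers.
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    # Steady-state availability of one repairable unit.
    return mtbf_hours / (mtbf_hours + mttr_hours)

def group_availability(single: float, n_routers: int) -> float:
    # The group is up if at least one router is up (parallel redundancy).
    return 1.0 - (1.0 - single) ** n_routers

a_single = availability(mtbf_hours=5000.0, mttr_hours=8.0)   # placeholder values
for n in (1, 2, 3):
    print(n, "router(s):", round(group_availability(a_single, n), 9))
```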
2021-04-27
Samuel, J., Aalab, K., Jaskolka, J..  2020.  Evaluating the Soundness of Security Metrics from Vulnerability Scoring Frameworks. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :442—449.

Over the years, a number of vulnerability scoring frameworks have been proposed to characterize the severity of known vulnerabilities in software-dependent systems. These frameworks provide security metrics to support decision-making in system development and security evaluation and assurance activities. When used in this context, it is imperative that these security metrics be sound, meaning that they can be consistently measured in a reproducible, objective, and unbiased fashion while providing contextually relevant, actionable information for decision makers. In this paper, we evaluate the soundness of the security metrics obtained via several vulnerability scoring frameworks. The evaluation is based on the Method for Designing Sound Security Metrics (MDSSM). We also present several recommendations to improve vulnerability scoring frameworks to yield more sound security metrics to support the development of secure software-dependent systems.

Junosza-Szaniawski, K., Nogalski, D., Wójcik, A..  2020.  Exact and approximation algorithms for sensor placement against DDoS attacks. 2020 15th Conference on Computer Science and Information Systems (FedCSIS). :295–301.
In a DDoS (Distributed Denial of Service) attack, an attacker gains control of many network users by means of a virus. The controlled users then send many requests to a victim, exhausting its resources. DDoS attacks are hard to defend against because of their distributed nature, large scale and varied attack techniques. One possible way of defense is to place sensors in the network that can detect and stop unwanted requests. However, such sensors are expensive, so there is a natural question about the minimum number of sensors and their optimal placement to achieve the required level of safety. We present two mixed integer models for optimal sensor placement against DDoS attacks. Both models lead to a trade-off between the number of deployed sensors and the volume of uncontrolled flow. Since the above placement problems are NP-hard, two efficient heuristics are designed, implemented and compared experimentally with exact linear programming solvers.
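A tiny greedy heuristic conveys the flavor of the placement problem: choose sensor nodes so that every attack path toward the victim crosses at least one sensor. The graph and paths below are made-up examples, and the paper's actual formulations are mixed integer programs rather than this set-cover-style heuristic.

```python
# Toy greedy heuristic for sensor placement: cover every attack path from a
# controlled host to the victim with at least one sensor node.
paths = [
    ("h1", "r1", "r3", "victim"),
    ("h2", "r2", "r3", "victim"),
    ("h3", "r2", "r4", "victim"),
]
candidate_nodes = {"r1", "r2", "r3", "r4"}   # routers where sensors may be placed

uncovered = [set(p) & candidate_nodes for p in paths]
sensors = set()
while any(uncovered):
    # Pick the node that intersects the most still-uncovered paths.
    best = max(candidate_nodes - sensors,
               key=lambda n: sum(1 for path in uncovered if n in path))
    sensors.add(best)
    uncovered = [path for path in uncovered if best not in path]

print("sensor placement:", sensors)   # e.g., {'r2', 'r3'} covers all paths
```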
2021-04-09
Bhattacharya, M. P., Zavarsky, P., Butakov, S..  2020.  Enhancing the Security and Privacy of Self-Sovereign Identities on Hyperledger Indy Blockchain. 2020 International Symposium on Networks, Computers and Communications (ISNCC). :1—7.
Self-sovereign identities provide user autonomy and immutability to individual identities and full control to their identity owners. The immutability and control are made possible by implementing identities in a decentralized manner on blockchains that are specially designed for identity operations, such as Hyperledger Indy. As with any type of identity, self-sovereign identities too deal with Personally Identifiable Information (PII) of the identity holders and come with the usual risks of privacy and security. This study examined certain scenarios of personal data disclosure via credential exchanges between such identities and the risks of man-in-the-middle attacks in the blockchain-based identity system Hyperledger Indy. On the basis of the findings, the paper proposes the following enhancements: 1) a novel attribute sensitivity score model for self-sovereign identity agents to ascertain the sensitivity of attributes shared in credential exchanges; 2) a method of mitigating man-in-the-middle attacks between peer self-sovereign identities; and 3) a novel quantitative model for determining a credential issuer's reputation based on the number of issued credentials in a window period, which is then utilized to calculate an overall confidence level score for the issuer.
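A minimal sketch of how the first and third proposed enhancements could be quantified is shown below; the sensitivity weights, window cap, and scoring formulas are illustrative assumptions, not the models defined in the paper.

```python
# Illustrative scoring sketch (not the paper's exact models): weight shared
# attributes by a sensitivity score, and derive an issuer confidence level
# from the number of credentials issued in a window period.
SENSITIVITY = {            # hypothetical per-attribute sensitivity weights
    "name": 0.3, "date_of_birth": 0.7, "passport_number": 1.0, "city": 0.2,
}

def exchange_sensitivity(attributes):
    # Higher totals could prompt the agent to warn the identity owner.
    return sum(SENSITIVITY.get(a, 0.5) for a in attributes)

def issuer_confidence(credentials_issued_in_window, window_cap=1000):
    # Saturating ratio: more credentials issued in the window -> more confidence.
    return min(credentials_issued_in_window / window_cap, 1.0)

print(exchange_sensitivity(["name", "passport_number"]))    # 1.3
print(issuer_confidence(credentials_issued_in_window=250))  # 0.25
```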
2021-04-08
Ameer, S., Benson, J., Sandhu, R..  2020.  The EGRBAC Model for Smart Home IoT. 2020 IEEE 21st International Conference on Information Reuse and Integration for Data Science (IRI). :457–462.
The Internet of Things (IoT) is enabling smart houses, where multiple users with complex social relationships interact with smart devices. This requires sophisticated access control specification and enforcement models, that are currently lacking. In this paper, we introduce the extended generalized role based access control (EGRBAC) model for smart home IoT. We provide a formal definition for EGRBAC and illustrate its features with a use case. A proof-of-concept demonstration utilizing AWS-IoT Greengrass is discussed in the appendix. EGRBAC is a first step in developing a comprehensive family of access control models for smart home IoT.
Walia, K. S., Shenoy, S., Cheng, Y..  2020.  An Empirical Analysis on the Usability and Security of Passwords. 2020 IEEE 21st International Conference on Information Reuse and Integration for Data Science (IRI). :1–8.
Security and usability are two essential aspects of a system, but they usually move in opposite directions. Sometimes, to achieve security, usability has to be compromised, and vice versa. Password-based authentication systems require both security and usability. However, to increase password security, absurd rules are introduced, which often drive users to compromise the usability of their passwords. Users tend to forget complex passwords and use techniques such as writing them down, reusing them, and storing them in vulnerable ways. Enhancing the strength of a password while maintaining its usability has become one of the biggest challenges for users and security experts. In this paper, we define the pronounceability of a password as a means to measure how easy it is to memorize, an aspect we associate with usability. We examine a dataset of more than 7 million passwords to determine whether the user-generated passwords are secure. Moreover, we convert the user-generated passwords into phonemes and measure the pronounceability of the phoneme-based representations. We then establish a relationship between the two and suggest how password creation strategies can be adapted to better align with both security and usability.
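A crude proxy conveys the idea of scoring pronounceability; the paper's phoneme-based measure is not reproduced here. The sketch simply rewards vowel/consonant alternation among the alphabetic characters of a password, so its scores are only indicative.

```python
# Crude, illustrative proxy for pronounceability: reward vowel/consonant
# alternation in the alphabetic part of a password. This is NOT the paper's
# phoneme-based measure, only a rough stand-in for the idea.
VOWELS = set("aeiou")

def pronounceability(password: str) -> float:
    letters = [c.lower() for c in password if c.isalpha()]
    if len(letters) < 2:
        return 0.0
    alternations = sum(
        1 for a, b in zip(letters, letters[1:])
        if (a in VOWELS) != (b in VOWELS)
    )
    return alternations / (len(letters) - 1)

for pw in ["Tr0ub4dor&3", "xkQ9#vvZ", "banana42"]:
    print(pw, round(pronounceability(pw), 2))
```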
Feng, X., Wang, D., Lin, Z., Kuang, X., Zhao, G..  2020.  Enhancing Randomization Entropy of x86-64 Code while Preserving Semantic Consistency. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :1–12.

Code randomization is considered the basis of mitigation against code reuse attacks, fundamentally supporting some recent proposals such as execute-only memory (XOM) that target dynamic return-oriented programming (ROP) attacks. However, existing code randomization methods struggle to achieve a good balance between high randomization entropy and semantic consistency. In particular, they always ignore code semantic consistency, incurring performance loss and incompatibility with current security schemes, e.g., control flow integrity (CFI). In this paper, we present an enhanced code randomization method termed HCRESC, which can improve the randomization entropy significantly while ensuring semantic consistency between variants and the original code. HCRESC reschedules instructions within the range of functions rather than basic blocks, thus producing more variants of the original code and preserving the code's semantics. We implement HCRESC on the Linux platform for the x86-64 architecture and demonstrate that HCRESC can increase the randomization entropy of x86-64 code by more than 120% compared with existing methods while leaving the control flow and size of the code unaltered.

Shi, S., Li, J., Wu, H., Ren, Y., Zhi, J..  2020.  EFM: An Edge-Computing-Oriented Forwarding Mechanism for Information-Centric Networks. 2020 3rd International Conference on Hot Information-Centric Networking (HotICN). :154–159.
Information-Centric Networking (ICN) has attracted much attention as a promising future network design, presenting a paradigm shift from host-centric to content-centric networking. However, in edge computing scenarios, there is still no specific ICN forwarding mechanism to improve transmission performance. In this paper, we propose an edge-oriented forwarding mechanism (EFM) for edge computing scenarios. The rationale is to make edge nodes smarter, for example by acting as agents for both consumers and providers to improve content retrieval and distribution. On the one hand, EFM can assist consumers: the edge router can be used either as a fast content repository to satisfy consumers' requests or as a smart delegate of consumers to request content from upstream nodes. On the other hand, EFM can assist providers: EFM leverages optimized in-network recovery/retransmission to detect packet loss and even accelerate content distribution. The goal of our research is to improve the performance of edge networks. Simulation results based on ndnSIM indicate that EFM enables efficient content retrieval and distribution, friendly to both consumers and providers.
2021-03-29
Normatov, S., Rakhmatullaev, M..  2020.  Expert system with Fuzzy logic for protecting Scientific Information Resources. 2020 International Conference on Information Science and Communications Technologies (ICISCT). :1—4.

Analysis of the state of research on the protection of valuable scientific and educational databases, library resources, information centers, and publishers shows the importance of information security, especially in corporate information networks and systems for data exchange. Corporate library networks include dozens and even hundreds of libraries in active information exchange, and these libraries are equipped with information security tools to varying degrees. The purpose of the research is to create effective methods and tools, based on fuzzy logic, to protect the databases of scientific and educational resources from unauthorized access in libraries and library networks.

Papakonstantinou, N., Linnosmaa, J., Bashir, A. Z., Malm, T., Bossuyt, D. L. V..  2020.  Early Combined Safety-Security Defense in Depth Assessment of Complex Systems. 2020 Annual Reliability and Maintainability Symposium (RAMS). :1—7.

Safety and security of complex critical infrastructures are very important for economic, environmental and social reasons. The interdisciplinary and inter-system dependencies within these infrastructures introduce difficulties in the safety and security design. Late discovery of safety and security design weaknesses can lead to increased costs, additional system complexity, ineffective mitigation measures and delays to the deployment of the systems. Traditionally, safety and security assessments are handled using different methods and tools, although some concepts are very similar, by specialized experts in different disciplines and are performed at different system design life-cycle phases. The methodology proposed in this paper supports a concurrent safety and security Defense in Depth (DiD) assessment at an early design phase and it is designed to handle safety and security at a high level and not focus on specific practical technologies. It is assumed that regardless of the perceived level of security defenses in place, a determined (motivated, capable and/or well-funded) attacker can find a way to penetrate a layer of defense. While traditional security research focuses on removing vulnerabilities and increasing the difficulty of exploiting weaknesses, our higher-level approach focuses on how the attacker's reach can be limited and how to increase the system's capability for detection, identification, mitigation and tracking. The proposed method can assess basic safety and security DiD design principles like Redundancy, Physical separation, Functional isolation, Facility functions, Diversity, Defense lines/Facility and Computer Security zones, Safety classes/Security Levels, Safety divisions and physical gates/conduits (as defined by the International Atomic Energy Agency (IAEA) and international standards) concurrently and provide early feedback to the system engineer. A prototype tool is developed that can parse the exported project file of the interdisciplinary model. Based on a set of safety and security attributes, the tool is able to assess aspects of the safety and security DiD capabilities of the design. Its results can be used to identify errors, improve the design and cut costs before a formal human expert inspection. The tool is demonstrated on a case study of an early conceptual design of a complex system of a nuclear power plant.

Erulanova, A., Soltan, G., Baidildina, A., Amangeldina, M., Aset, A..  2020.  Expert System for Assessing the Efficiency of Information Security. 2020 7th International Conference on Electrical and Electronics Engineering (ICEEE). :355—359.

The paper considers an expert system that provides an assessment of the state of information security in authorities and organizations of various forms of ownership. The proposed expert system makes it possible to evaluate compliance with the requirements of both organizational and technical measures for information protection, as well as the level of compliance of the information protection system in general. The expert assessment method is used as the basic method for assessing the state of information protection. The developed expert system provides a significant reduction in routine operations during an information security audit. The results of the assessment are presented clearly and enable the leadership of the authorities and organizations to make informed decisions to further improve the information protection system.

Anell, S., Gröber, L., Krombholz, K..  2020.  End User and Expert Perceptions of Threats and Potential Countermeasures. 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS PW). :230—239.

Experts often design security and privacy technology with specific use cases and threat models in mind. In practice, however, end users are not aware of these threats and potential countermeasures. Furthermore, misconceptions about the benefits and limitations of security and privacy technology inhibit large-scale adoption by end users. In this paper, we address this challenge and contribute a qualitative study on end users' and security experts' perceptions of threat models and potential countermeasures. We follow an inductive research approach to explore the perceptions and mental models of both security experts and end users. We conducted semi-structured interviews with 8 security experts and 13 end users. Our results suggest that, in contrast to security experts, end users neglect acquaintances and friends as attackers in their threat models. Our findings highlight that experts value technical countermeasures, whereas end users try to implement trust-based defensive methods.

Begaj, S., Topal, A. O., Ali, M..  2020.  Emotion Recognition Based on Facial Expressions Using Convolutional Neural Network (CNN). 2020 International Conference on Computing, Networking, Telecommunications Engineering Sciences Applications (CoNTESA). :58—63.

Over the last few years, there has been an increasing number of studies on facial emotion recognition because of the importance and the impact that it has in the interaction of humans with computers. With the growing number of challenging datasets, the application of deep learning techniques has become necessary. In this paper, we study the challenges of emotion recognition datasets and we also try different parameters and architectures of Convolutional Neural Networks (CNNs) in order to detect the seven emotions in human faces: anger, fear, disgust, contempt, happiness, sadness and surprise. We have chosen iCV MEFED (Multi-Emotion Facial Expression Dataset) as the main dataset for our study, which is relatively new, interesting and very challenging.
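A toy convolutional network for seven-class facial emotion classification, sketched in PyTorch, shows the kind of architecture being tuned; the 48x48 grayscale input size and layer choices are placeholder assumptions, not the configurations evaluated in the paper.

```python
# Toy CNN for 7-class facial emotion classification (placeholder architecture).
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
            nn.Linear(128, num_classes),   # one logit per emotion class
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = EmotionCNN()
batch = torch.randn(4, 1, 48, 48)   # four fake grayscale face crops
print(model(batch).shape)           # torch.Size([4, 7])
```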

Oğuz, K., Korkmaz, İ, Korkmaz, B., Akkaya, G., Alıcı, C., Kılıç, E..  2020.  Effect of Age and Gender on Facial Emotion Recognition. 2020 Innovations in Intelligent Systems and Applications Conference (ASYU). :1—6.

New research fields and applications in human-computer interaction will emerge based on the recognition of emotions on faces. With this aim, our study evaluates the features extracted from faces to recognize emotions. To increase the success rate of these features, we have run several tests to demonstrate how age and gender affect the results. The artificial neural networks were trained on the apparent regions of the face, such as the eyes, eyebrows, nose, mouth, and jawline, and were then tested with different age and gender groups. According to the results, emotion recognition performs worse on the faces of older people. Age- and gender-based groups were then created manually, and we show that the performance of facial emotion recognition increases for networks trained on these particular groups.

Grochol, D., Sekanina, L..  2020.  Evolutionary Design of Hash Functions for IPv6 Network Flow Hashing. 2020 IEEE Congress on Evolutionary Computation (CEC). :1–8.
Fast and high-quality network flow hashing is an essential operation in many high-speed network systems such as network monitoring probes. We propose a multi-objective evolutionary design method capable of evolving hash functions for IPv4 and IPv6 flow hashing. Our approach combines Cartesian genetic programming (CGP) with the Non-dominated Sorting Genetic Algorithm II (NSGA-II) and aims to optimize not only the quality of hashing, but also the execution time of the hash function. The evolved hash functions are evaluated on real data sets collected in a computer network and compared against other evolved and conventionally designed hash functions.
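One of the two objectives such a search optimizes, hashing quality, can be illustrated by counting bucket collisions of a candidate hash over random flow keys, as in the sketch below; the candidate hash and the keys are illustrative, and the CGP/NSGA-II evolution itself is not shown.

```python
# Sketch of a hashing-quality objective: count bucket collisions of a candidate
# flow-hash function over random IPv6-like flow keys (illustrative only).
import random

def candidate_hash(flow_key: bytes, buckets: int = 1024) -> int:
    # A simple multiply-xor candidate that a search procedure might try.
    h = 0x811C9DC5
    for byte in flow_key:
        h = ((h ^ byte) * 0x01000193) & 0xFFFFFFFF
    return h % buckets

def collisions(hash_fn, keys, buckets: int = 1024) -> int:
    counts = {}
    for key in keys:
        bucket = hash_fn(key, buckets)
        counts[bucket] = counts.get(bucket, 0) + 1
    return sum(count - 1 for count in counts.values() if count > 1)

random.seed(0)
flow_keys = [random.randbytes(37) for _ in range(5000)]   # ~IPv6 5-tuple size
print("collisions:", collisions(candidate_hash, flow_keys))
```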
Feng, G., Zhang, C., Si, Y., Lang, L..  2020.  An Encryption and Decryption Algorithm Based on Random Dynamic Hash and Bits Scrambling. 2020 International Conference on Communications, Information System and Computer Engineering (CISCE). :317–320.
This paper proposes a stream cipher algorithm. Its main principle is to perform a binary random dynamic hash with the help of the key. While calculating the hash mapping address of the plaintext, the value of the plaintext is changed through bit scrambling and then mapped to the ciphertext space. This encryption method has strong randomness, and the design of the hash functions and bit scrambling is flexible and diverse, so it can constitute a family of encryption and decryption methods. Testing shows that the code evenness of the ciphertext obtained using this method is higher than that of the traditional method under some extreme conditions.
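The two ingredients named in the abstract, a key-driven hash for randomness and bit scrambling of the plaintext, can be illustrated with a toy construction; this is not the paper's algorithm and is not a secure cipher, only a structural sketch.

```python
# Toy illustration of key-driven hashing plus bit scrambling.
# NOT the paper's algorithm and NOT a secure cipher; structural sketch only.
import hashlib

def keystream_byte(key: bytes, index: int) -> int:
    # Key-dependent pseudo-random byte for position `index`.
    return hashlib.sha256(key + index.to_bytes(8, "big")).digest()[0]

def rotate_left(byte: int, amount: int) -> int:
    amount %= 8
    return ((byte << amount) | (byte >> (8 - amount))) & 0xFF

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    out = bytearray()
    for i, p in enumerate(plaintext):
        k = keystream_byte(key, i)
        out.append(rotate_left(p, k & 0x07) ^ k)   # scramble bits, then mask
    return bytes(out)

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    out = bytearray()
    for i, c in enumerate(ciphertext):
        k = keystream_byte(key, i)
        out.append(rotate_left(c ^ k, 8 - (k & 0x07)))   # undo mask, then rotation
    return bytes(out)

message = b"stream cipher sketch"
assert decrypt(encrypt(message, b"secret"), b"secret") == message
```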
Aigner, A., Khelil, A..  2020.  An Effective Semantic Security Metric for Industrial Cyber-Physical Systems. 2020 IEEE Conference on Industrial Cyberphysical Systems (ICPS). 1:87—92.

The emergence of Industrial Cyber-Physical Systems (ICPS) in today's business world is still steadily progressing to new dimensions. Although they bring many new advantages to business processes and enable automation and a wider range of service capability, they also pose a variety of new challenges. One major challenge introduced by such Systems-of-Systems (SoS) lies in the security aspect. As security may not have had such a significant role in traditional embedded system engineering, a generic way to measure the level of security within an ICPS would provide a significant benefit for system engineers and involved stakeholders. Even though many security metrics and frameworks exist, most of them insufficiently consider the SoS context and the challenges of such environments. Therefore, we aim to define a security metric for ICPS that measures the level of security during system design, testing, and integration as well as at runtime. For this, we focus on a semantic point of view, which on the one hand has not been considered in security metric definitions yet, and on the other hand allows us to handle the complexity of SoS architectures. Furthermore, our approach allows combining the critical characteristics of an ICPS, like uncertainty, required reliability, multi-criticality and safety aspects.