Biblio

Found 918 results

Filters: First Letter Of Title is T
T
Liu, Ximing, Li, Yingjiu, Deng, Robert H..  2018.  Typing-Proof: Usable, Secure and Low-Cost Two-Factor Authentication Based on Keystroke Timings. Proceedings of the 34th Annual Computer Security Applications Conference. :53-65.

Two-factor authentication (2FA) systems provide another layer of protection for users' accounts beyond passwords. Traditional hardware-token and software-token 2FA are burdensome to users, since they require users to read, remember, and type a one-time code, and they incur high deployment and operational costs. Recent 2FA mechanisms such as Sound-Proof reduce or eliminate user interaction when proving the second factor; however, they are not designed for certain settings (e.g., quiet environments or PCs without built-in microphones), and they are not secure in the presence of certain attacks (e.g., the sound-danger attack and the co-located attack). To address these problems, we propose Typing-Proof, a usable, secure and low-cost two-factor authentication mechanism. Typing-Proof is similar to software-token 2FA in that it uses a password as the first factor and a registered phone to prove the second factor. During the second-factor authentication procedure, it requires the user to type a random code on the login computer and authenticates the user by comparing the keystroke timing sequence of the random code recorded by the login computer with the sounds of typing recorded by the user's registered phone. Typing-Proof can be used reliably in any setting and requires zero user-phone interaction in most cases. It is practically secure and immune to the existing attacks on recent 2FA mechanisms. In addition, Typing-Proof enables significant cost savings for both service providers and users.
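As an illustration of the core idea, the sketch below (with hypothetical timestamp inputs from the login computer and the phone, not the authors' implementation) accepts the second factor when the inter-keystroke intervals measured on the computer correlate strongly with the intervals between typing sounds recorded on the phone. Only the intervals are compared, so the clock offset between the two devices does not matter.

```python
import numpy as np

def interval_sequence(timestamps):
    """Convert absolute event times (seconds) into inter-event intervals."""
    t = np.asarray(timestamps, dtype=float)
    return np.diff(np.sort(t))

def second_factor_ok(pc_key_times, phone_sound_times, threshold=0.9):
    """Accept the second factor when keystroke intervals recorded on the
    login computer correlate strongly with the intervals of typing sounds
    picked up by the registered phone (illustrative similarity test only)."""
    a = interval_sequence(pc_key_times)
    b = interval_sequence(phone_sound_times)
    n = min(len(a), len(b))
    if n < 3:                                  # too few keystrokes to compare
        return False
    r = np.corrcoef(a[:n], b[:n])[0, 1]        # Pearson correlation of intervals
    return r >= threshold

# Example: the same 6-key random code observed by both devices.
pc = [0.00, 0.21, 0.55, 0.80, 1.30, 1.62]
phone = [5.01, 5.22, 5.55, 5.81, 6.30, 6.63]   # offset clock, same typing rhythm
print(second_factor_ok(pc, phone))             # True for matching rhythms
```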

Leinonen, Juho, Longi, Krista, Klami, Arto, Ahadi, Alireza, Vihavainen, Arto.  2016.  Typing Patterns and Authentication in Practical Programming Exams. Proceedings of the 2016 ACM Conference on Innovation and Technology in Computer Science Education. :160–165.

In traditional programming courses, students have usually been graded at least partly using pen-and-paper exams. One problem with such exams is that they connect only partially to the practice conducted within the courses. Testing students in a more practical environment has been constrained by the resources needed, for example, for authentication. In this work, we study whether students in a programming course can be identified in an exam setting based solely on their typing patterns. We replicate an earlier study indicating that keystroke analysis can be used to identify programmers. Then, we examine how a controlled machine-exam setting affects identification accuracy, i.e., whether students can be identified reliably in a machine exam based on typing profiles built from data from their programming assignments during the course. Finally, we investigate identification accuracy in an uncontrolled machine exam, where students can complete the exam at any time using any computer they want. Our results indicate that even though identification accuracy deteriorates in an exam setting, it remains high enough to reliably identify students if the identification is not required to be exact but the top k closest matches are regarded as correct.
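A minimal sketch of the identification step described above, under the assumption that each student's typing profile is reduced to a small vector of average digraph latencies (the study's actual feature set and classifier are not reproduced here):

```python
import numpy as np

def top_k_matches(profiles, sample, k=3):
    """Rank enrolled students by Euclidean distance between their stored typing
    profile and the profile extracted from the exam session.
    `profiles` maps student id -> feature vector (e.g. mean digraph latencies)."""
    ids = list(profiles)
    dists = [np.linalg.norm(np.asarray(profiles[s]) - np.asarray(sample)) for s in ids]
    order = np.argsort(dists)
    return [ids[i] for i in order[:k]]

# Hypothetical profiles built from course assignments (latencies in ms).
profiles = {"s1": [120, 95, 180], "s2": [200, 150, 210], "s3": [130, 90, 170]}
exam_sample = [128, 92, 175]          # features measured during the exam
print(top_k_matches(profiles, exam_sample, k=2))
# Identification counts as correct if the true student is among the top k.
```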

Kurilova, Darya, Omar, Cyrus, Nistor, Ligia, Chung, Benjamin, Potanin, Alex, Aldrich, Jonathan.  2014.  Type-specific Languages to Fight Injection Attacks. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :18:1–18:2.

Injection vulnerabilities have topped rankings of the most critical web application vulnerabilities for several years [1, 2]. They can occur wherever user input may be erroneously executed as code. The injected input is typically aimed at gaining unauthorized access to the system or to private information within it, corrupting the system's data, or disturbing system availability. Injection vulnerabilities are tedious and difficult to prevent.

Haller, Istvan, Jeon, Yuseok, Peng, Hui, Payer, Mathias, Giuffrida, Cristiano, Bos, Herbert, van der Kouwe, Erik.  2016.  TypeSan: Practical Type Confusion Detection. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :517–528.

The low-level C++ programming language is ubiquitously used for its modularity and performance. Typecasting is a fundamental concept in C++ (and object-oriented programming in general) to convert a pointer from one object type into another. However, downcasting (converting a base class pointer to a derived class pointer) has critical security implications due to potentially different object memory layouts. Due to missing type safety in C++, a downcasted pointer can violate a programmer's intended pointer semantics, allowing an attacker to corrupt the underlying memory in a type-unsafe fashion. This vulnerability class is receiving increasing attention and is known as type confusion (or bad-casting). Several existing approaches detect different forms of type confusion, but these solutions are severely limited due to both high run-time performance overhead and low detection coverage. This paper presents TypeSan, a practical type-confusion detector which provides both low run-time overhead and high detection coverage. Despite improving the coverage of state-of-the-art techniques, TypeSan significantly reduces the type-confusion detection overhead compared to other solutions. TypeSan relies on an efficient per-object metadata storage service based on a compact memory shadowing scheme. Our scheme treats all the memory objects (i.e., globals, stack, heap) uniformly to eliminate extra checks on the fast path and relies on a variable compression ratio to minimize run-time performance and memory overhead. Our experimental results confirm that TypeSan is practical, even when explicitly checking almost all the relevant typecasts in a given C++ program. Compared to the state of the art, TypeSan yields orders of magnitude higher coverage at 4–10 times lower performance overhead on SPEC and 2 times on Firefox. As a result, our solution offers superior protection and is suitable for deployment in production software. Moreover, our highly efficient metadata storage back-end is potentially useful for other defenses that require memory object tracking.

Zhang, Haoyuan, Li, Huang, Oliveira, Bruno C. d. S..  2017.  Type-Safe Modular Parsing. Proceedings of the 10th ACM SIGPLAN International Conference on Software Language Engineering. :2–13.

Over the years a lot of effort has been put into solving extensibility problems while retaining important software engineering properties such as modular type-safety and separate compilation. Most previous work focused on operations that traverse and process extensible Abstract Syntax Tree (AST) structures. However, there is almost no work on operations that build such extensible ASTs, including parsing. This paper investigates solutions to the problem of modular parsing. We focus on semantic modularity and not just syntactic modularity. That is, the solutions should not only allow complete parsers to be built out of modular parsing components, but also enable the parsing components to be modularly type-checked and separately compiled. We present a technique based on parser combinators that enables modular parsing. Interestingly, the modularity requirements for modular parsing rule out several existing parser combinator approaches, which rely on some non-modular techniques. We show that Packrat parsing techniques provide solutions to such modularity problems and enable reasonable performance in a modular setting. Extensibility is achieved using multiple inheritance and Object Algebras. To evaluate the approach we conduct a case study based on the "Types and Programming Languages" interpreters. The case study shows the approach's effectiveness at reusing parsing code from existing interpreters, and the total parsing code is 69% shorter than an existing code base using a non-modular parsing approach.

Omar, Cyrus, Chung, Benjamin, Kurilova, Darya, Potanin, Alex, Aldrich, Jonathan.  2013.  Type-directed, whitespace-delimited parsing for embedded DSLs. Proceedings of the First Workshop on the Globalization of Domain Specific Languages. :8–11.
Domain-specific languages improve ease-of-use, expressiveness and verifiability, but defining and using different DSLs within a single application remains difficult. We introduce an approach for embedded DSLs where 1) whitespace delimits DSL-governed blocks, and 2) the parsing and type checking phases occur in tandem so that the expected type of the block determines which domain-specific parser governs that block. We argue that this approach occupies a sweet spot, providing high expressiveness and ease-of-use while maintaining safe composability. We introduce the design, provide examples and describe an ongoing implementation of this strategy in the Wyvern programming language. We also discuss how a more conventional keyword-directed strategy for parsing of DSLs can arise as a special case of this type-directed strategy.
Stepanov, Daniil, Akhin, Marat, Belyaev, Mikhail.  2021.  Type-Centric Kotlin Compiler Fuzzing: Preserving Test Program Correctness by Preserving Types. 2021 14th IEEE Conference on Software Testing, Verification and Validation (ICST). :318—328.
Kotlin is a relatively new programming language from JetBrains: its development started in 2010, with release 1.0 arriving in early 2016. The Kotlin compiler, while slowly and steadily becoming more mature, still crashes from time to time on trickier input programs, not least because of the complexity of its features and their interactions. This makes it a great target for fuzzing; even basic forms of fuzzing can find a significant number of Kotlin compiler crashes. There is a problem with fuzzing, however, closely related to the cause of the crashes: generating a random, non-trivial and semantically valid Kotlin program is hard. In this paper, we talk about type-centric compiler fuzzing in the form of type-centric enumeration, an approach inspired by skeletal program enumeration [1] and based on a combination of generative and mutation-based fuzzing, which solves this problem by focusing on program types. After creating the skeleton program, we fill the typed holes with fragments of suitable type, created via generation and enhanced by semantics-aware mutation. We implemented this approach in our Kotlin compiler fuzzing framework called Backend Bug Finder (BBF) and performed an extensive evaluation, not only testing the real-world feasibility of our approach but also comparing it to other compiler fuzzing techniques. The results show our approach to be significantly better than other fuzzing approaches at generating semantically valid Kotlin programs, while creating more interesting crash-inducing inputs at the same time. We managed to find more than 50 previously unknown compiler crashes, of which 18 were considered important after triage by the compiler team.
Cortier, Veronique, Grimm, Niklas, Lallemand, Joseph, Maffei, Matteo.  2017.  A Type System for Privacy Properties. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :409–423.
Mature push-button tools have emerged for checking trace properties (e.g. secrecy or authentication) of security protocols. The case of indistinguishability-based privacy properties (e.g. ballot privacy or anonymity) is more complex and constitutes an active research topic with several recently proposed techniques and tools. We explore a novel approach based on type systems and provide a (sound) type system for proving equivalence of protocols, for a bounded or an unbounded number of sessions. The resulting prototype implementation has been tested on various protocols from the literature. It provides a significant speed-up (by orders of magnitude) compared to tools for a bounded number of sessions and, in terms of expressiveness, complements other state-of-the-art tools such as ProVerif and Tamarin: e.g., we show that our analysis technique is the first one to handle a faithful encoding of the Helios e-voting protocol in the context of an untrusted ballot box.
Monaro, Merylin, Spolaor, Riccardo, Li, QianQian, Conti, Mauro, Gamberini, Luciano, Sartori, Giuseppe.  2017.  Type Me the Truth!: Detecting Deceitful Users via Keystroke Dynamics. Proceedings of the 12th International Conference on Availability, Reliability and Security. :60:1–60:6.

In this paper, we propose a novel method, based on keystroke dynamics, to distinguish between fake and truthful personal information written via a computer keyboard. Our method does not need any prior knowledge about the user who is providing the data. To our knowledge, this is the first work that associates human typing behavior with the production of lies regarding personal information. Via an experimental analysis involving 190 subjects, we assess that this method is able to distinguish between truth and lies about specific types of autobiographical information with an accuracy higher than 75%. Specifically, for information usually required in online registration forms (e.g., name, surname and email), the typing behavior diverged significantly between truthful and untruthful answers. According to our results, keystroke analysis could have great potential in detecting the veracity of self-declared information, and it could be applied to a large number of practical scenarios requiring users to input personal data remotely via keyboard.
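As a schematic illustration only, the sketch below trains an off-the-shelf classifier on hypothetical per-answer typing features (mean and standard deviation of inter-key intervals, backspace count, total writing time); the features, classifier and toy data are assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Each row: per-answer typing features (hypothetical choice): mean inter-key
# interval (s), its standard deviation, number of backspaces, writing time (s).
X = np.array([
    [0.18, 0.05, 1, 4.2],   # truthful answers ...
    [0.20, 0.06, 0, 3.9],
    [0.31, 0.12, 4, 7.8],   # deceptive answers ...
    [0.29, 0.10, 3, 8.1],
])
y = np.array([0, 0, 1, 1])  # 0 = truth, 1 = lie

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=2).mean())  # illustrative only
```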

Shahrak, M. Z., Ye, M., Swaminathan, V., Wei, S..  2016.  Two-way real time multimedia stream authentication using physical unclonable functions. 2016 IEEE 18th International Workshop on Multimedia Signal Processing (MMSP). :1–4.

Multimedia authentication is an integral part of multimedia signal processing in many real-time and security-sensitive applications, such as video surveillance. In such applications, a full-fledged video digital rights management (DRM) mechanism is not applicable due to the real-time requirement and the difficulty of incorporating complicated license/key management strategies. This paper investigates the potential of multimedia authentication from a brand new angle by employing hardware-based security primitives, such as physical unclonable functions (PUFs). We show that the hardware security approach is not only capable of accomplishing authentication for both the hardware device and the multimedia stream but, more importantly, introduces minimal performance, resource, and power overhead. We justify our approach using a prototype PUF implementation on Xilinx FPGA boards. Our experimental results on real hardware demonstrate the high security and low overhead of multimedia authentication based on hardware security approaches.
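The sketch below illustrates the general flavor of PUF-keyed stream authentication. Because a real PUF response comes from device-specific silicon, the `puf_response` function here is a software stand-in (an assumption), and the frame/challenge layout is likewise hypothetical rather than the paper's design.

```python
import hmac, hashlib, os

def puf_response(challenge: bytes) -> bytes:
    """Stand-in for a hardware PUF: in the paper the response comes from
    device-specific silicon variation; here it is faked with a secret so the
    sketch runs in software."""
    device_secret = b"silicon-variation-stand-in"   # assumption, not a real PUF
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

def tag_frame(frame: bytes, challenge: bytes) -> bytes:
    """Authenticate one multimedia frame with a key derived from the PUF."""
    key = puf_response(challenge)
    return hmac.new(key, frame, hashlib.sha256).digest()

challenge = os.urandom(16)                 # issued by the verifier
frame = b"\x00\x01\x02\x03"                # a (toy) video frame payload
tag = tag_frame(frame, challenge)
# The verifier, holding the enrolled challenge/response pair, recomputes the
# tag and accepts the frame only if the two tags match.
print(hmac.compare_digest(tag, tag_frame(frame, challenge)))
```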

de Souza Donato, Robson, de Aguiar, Marlius Hudson, Cruz, Roniel Ferreira, Vitorino, Montiê Alves, de Rossiter Corrêa, Maurício Beltrão.  2021.  Two-Switch Zeta-Based Single-Phase Rectifier With Inherent Power Decoupling And No Extra Buffer Circuit. 2021 IEEE Applied Power Electronics Conference and Exposition (APEC). :1830–1836.
In some single-phase systems, power decoupling is necessary to balance the difference between the constant power at the load side and the double-frequency ripple power at the AC side. Active power decoupling methods aim to smooth this oscillatory power component, but in general they require the addition of many semiconductor devices and/or energy storage components, which works against achieving low cost, high efficiency and high power quality. This paper presents the analysis of a new single-phase rectifier based on the zeta topology with a power decoupling function and power factor correction, using only two active switches and no extra reactive components. Its behavior is based on three stages of operation within a switching period, such that the oscillating power component is stored in one of the inherent zeta inductors. The theoretical foundation that justifies its operation is presented, along with simulation and experimental results that validate the applied concepts.
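For reference, the double-frequency ripple that must be decoupled follows directly from the single-phase instantaneous power; assuming unity power factor with $v(t) = V\sin\omega t$ and $i(t) = I\sin\omega t$,

$$p(t) = v(t)\,i(t) = \frac{VI}{2}\bigl(1 - \cos 2\omega t\bigr),$$

so the constant term $VI/2$ supplies the load while the $-\tfrac{VI}{2}\cos 2\omega t$ component oscillates at twice the line frequency and is the power that active decoupling must buffer, in this rectifier in one of the inherent zeta inductors.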
Zhou, B., He, J., Tan, M..  2020.  A Two-stage P2P Botnet Detection Method Based on Statistical Features. 2020 IEEE 11th International Conference on Software Engineering and Service Science (ICSESS). :497—502.

P2P botnets have become one of the most serious threats to today's network security. They can be used to launch various malicious activities, ranging from spamming to distributed denial-of-service attacks. However, detecting P2P botnets is challenging because of their decentralized architecture. In this paper, we propose a two-stage P2P botnet detection method that relies only on several traffic statistical features. The method first detects P2P hosts based on three statistical features, and then distinguishes P2P bots from benign P2P hosts by means of another two statistical features. Experimental evaluations on real-world traffic datasets show that our method is able to detect hidden P2P bots with a detection accuracy of 99.7% and a false positive rate of only 0.3% within 5 minutes.
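A minimal sketch of the two-stage pipeline, using synthetic placeholder features and an off-the-shelf classifier; the paper's specific three-plus-two statistical features are not given in the abstract, so the values below are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Stage 1 separates P2P hosts from ordinary hosts using three flow statistics;
# stage 2 separates bots from benign P2P hosts using two further statistics.
# Feature values below are synthetic placeholders, not the paper's data.
X1 = np.array([[0.9, 40, 0.7], [0.8, 35, 0.6], [0.1, 3, 0.1], [0.2, 5, 0.2]])
y1 = np.array([1, 1, 0, 0])              # 1 = P2P host, 0 = non-P2P host
X2 = np.array([[0.95, 0.9], [0.2, 0.3]])
y2 = np.array([1, 0])                    # 1 = bot, 0 = benign P2P host

stage1 = RandomForestClassifier(n_estimators=50, random_state=0).fit(X1, y1)
stage2 = RandomForestClassifier(n_estimators=50, random_state=0).fit(X2, y2)

def classify(f_stage1, f_stage2):
    """Run the two stages in sequence, mirroring the detection pipeline."""
    if stage1.predict([f_stage1])[0] == 0:
        return "non-P2P host"
    return "P2P bot" if stage2.predict([f_stage2])[0] == 1 else "benign P2P host"

print(classify([0.85, 38, 0.65], [0.9, 0.85]))
```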

Aljohani, Nader, Bretas, Arturo, Bretas, Newton G.  2022.  Two-Stage Optimization Framework for Detecting and Correcting Parameter Cyber-Attacks in Power System State Estimation. 2022 IEEE International Conference on Environment and Electrical Engineering and 2022 IEEE Industrial and Commercial Power Systems Europe (EEEIC / I&CPS Europe). :1—5.
One major tool of Energy Management Systems for monitoring the status of the power grid is State Estimation (SE). Since the results of state estimation are used within the energy management system, the security of the power system state estimation tool is of the utmost importance. Research in this area has targeted the detection of False Data Injection attacks on measurements. Though this aspect is crucial, SE also depends on databases that describe the relationship between measurements and system states. This paper presents a two-stage optimization framework to not only detect, but also correct, cyber-attacks pertaining to the measurement-model parameters used by the SE routine. In the first stage, an estimate of the line parameter ratios is obtained. In the second stage, the estimated ratios from stage I are used in a bi-level model to obtain a final estimate of the measurement-model parameters. Hence, the presented framework not only unifies detection and correction in a single optimization run, but also provides a monitoring scheme for the SE database, which is typically considered static. In addition, a linear programming formulation is preserved in both stages. For validation, the IEEE 118-bus system is used for implementation. The results illustrate the effectiveness of the proposed model for detecting attacks on the database used in the state estimation process.
Wang, S., Mei, Y., Park, J., Zhang, M..  2019.  A Two-Stage Genetic Programming Hyper-Heuristic for Uncertain Capacitated Arc Routing Problem. 2019 IEEE Symposium Series on Computational Intelligence (SSCI). :1606—1613.

Genetic Programming Hyper-heuristics (GPHH) have been successfully applied to automatically evolve effective routing policies for the complex Uncertain Capacitated Arc Routing Problem (UCARP). However, GPHH typically ignores the interpretability of the evolved routing policies. As a result, GP-evolved routing policies are often very complex and hard for human users to understand and trust. In this paper, we aim to improve the interpretability of GP-evolved routing policies. To this end, we propose a new Multi-Objective GP (MOGP) to optimise performance and size simultaneously. A major issue here is that size is much easier to optimise than performance, so the search tends to be biased towards small but poor routing policies. To address this issue, we propose a simple yet effective Two-Stage GPHH (TS-GPHH). In the first stage, only performance is optimised. Then, in the second stage, both objectives are considered (using our new MOGP). The experimental results show that TS-GPHH obtains much smaller and more interpretable routing policies than state-of-the-art single-objective GPHH, without deteriorating performance. Compared with traditional MOGP, TS-GPHH can obtain a much better and more widespread Pareto front.

Wen, Kun, Yang, Jiahai, Cheng, Fengjuan, Li, Chenxi, Wang, Ziyu, Yin, Hui.  2014.  Two-stage detection algorithm for RoQ attack based on localized periodicity analysis of traffic anomaly. Computer Communication and Networks (ICCCN), 2014 23rd International Conference on. :1-6.

The Reduction of Quality (RoQ) attack is a stealthy denial-of-service attack. It can decrease or inhibit normal TCP flows in the network. Victims can hardly perceive it, as the final network throughput decreases rather than increases during the attack. The attack is therefore strongly hidden and difficult for existing detection systems to detect. Based on the principle of time-frequency analysis, we propose a two-stage detection algorithm that combines anomaly detection with misuse detection. In the first stage, we detect potential anomalies by analyzing network traffic through wavelet multiresolution analysis and, according to different time-domain characteristics, locate the abrupt change points. In the second stage, we further analyze the local traffic around each abrupt change point and extract potential attack characteristics by autocorrelation analysis. Through this two-stage detection, we can ultimately confirm whether the network is affected by the attack. Results of simulations and real network experiments demonstrate that our algorithm can detect RoQ attacks with high accuracy and high efficiency.
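A rough sketch of the two stages, assuming a traffic-volume time series: wavelet detail coefficients flag abrupt change points, and autocorrelation around a change point reveals the attack's periodicity. The thresholds, wavelet choice and synthetic traffic are placeholders, not the paper's parameters.

```python
import numpy as np
import pywt

def change_points(traffic, wavelet="db4", level=3, thresh=3.0):
    """Stage 1: flag abrupt changes by thresholding the finest-scale detail
    coefficients of a wavelet multiresolution analysis of the traffic series."""
    coeffs = pywt.wavedec(traffic, wavelet, level=level)
    d1 = coeffs[-1]                                    # finest detail coefficients
    z = np.abs(d1 - d1.mean()) / (d1.std() + 1e-9)
    return np.where(z > thresh)[0] * 2                 # approximate sample index

def is_periodic(local_traffic, min_peak=0.5):
    """Stage 2: look for the attack's low-rate periodic pulses via the
    autocorrelation of traffic around a detected change point."""
    x = local_traffic - np.mean(local_traffic)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac /= ac[0] + 1e-9
    return np.max(ac[1:]) > min_peak                   # strong secondary peak

# Synthetic example: background traffic plus periodic RoQ bursts.
t = np.arange(1024)
traffic = np.random.poisson(100, 1024).astype(float)
traffic[512:] += 80 * (t[512:] % 64 < 4)               # pulses every 64 samples
cps = change_points(traffic)
print(len(cps) > 0 and is_periodic(traffic[500:900]))
```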

Su, H., Halak, B., Zwolinski, M..  2019.  Two-Stage Architectures for Resilient Lightweight PUFs. 2019 IEEE 4th International Verification and Security Workshop (IVSW). :19–24.
The following topics are dealt with: Internet of Things; invasive software; security of data; program testing; reverse engineering; product codes; binary codes; decoding; maximum likelihood decoding; field programmable gate arrays.
Peleshchak, Roman, Lytvyn, Vasyl, Kholodna, Nataliia, Peleshchak, Ivan, Vysotska, Victoria.  2022.  Two-Stage AES Encryption Method Based on Stochastic Error of a Neural Network. 2022 IEEE 16th International Conference on Advanced Trends in Radioelectronics, Telecommunications and Computer Engineering (TCSET). :381–385.
This paper proposes a new two-stage encryption method to increase the cryptographic strength of the AES algorithm, based on the stochastic error of a neural network. The composite encryption key in the AES neural network cryptosystem consists of the weight matrices of the synaptic connections between neurons and metadata about the architecture of the neural network. The stochastic nature of the neural network's prediction error provides an ever-changing key-ciphertext pair. Different neural network topologies and the use of various activation functions increase the number of variations of the AES neural network decryption algorithm. The ciphertext is created by the forward-propagation process, and the encryption result is reversed back to plaintext by the reverse neural network functional operator.
Sayoud, Akila, Djendi, Mohamed, Guessoum, Abderrezak.  2018.  A Two-Sensor Fast Adaptive Algorithm for Blind Speech Enhancement. Proceedings of the Fourth International Conference on Engineering & MIS 2018. :24:1–24:4.

This paper presents the enhancement of speech signals in a noisy environment using a two-sensor fast normalized least mean square (NLMS) adaptive algorithm combined with the backward blind source separation structure. A comparative study with other competitive algorithms shows the superiority of the proposed algorithm in terms of various objective criteria, such as the segmental signal-to-noise ratio (SegSNR), the cepstral distance (CD), the system mismatch (SM) and the segmental mean square error (SegMSE).
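For orientation, the sketch below implements a plain (non-fast) NLMS update for a two-sensor setup, with the noise-only sensor as the adaptive filter's reference; the fast variant and the backward blind-source-separation structure of the paper are not reproduced.

```python
import numpy as np

def nlms(reference, primary, order=32, mu=0.5, eps=1e-6):
    """Plain normalized LMS: adapt an FIR filter so that the reference (noise)
    sensor predicts the noise component in the primary (speech + noise) sensor;
    the residual error is the enhanced speech estimate."""
    w = np.zeros(order)
    out = np.zeros(len(primary))
    for n in range(order, len(primary)):
        x = reference[n - order:n][::-1]          # most recent reference samples
        e = primary[n] - np.dot(w, x)             # error = speech estimate
        w += (mu / (np.dot(x, x) + eps)) * e * x  # normalized step size
        out[n] = e
    return out

# Synthetic two-sensor example: primary = speech + filtered noise, reference = noise.
rng = np.random.default_rng(0)
noise = rng.standard_normal(8000)
speech = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000)
primary = speech + np.convolve(noise, [0.5, 0.3, 0.1], mode="same")
enhanced = nlms(noise, primary)
print(np.mean((enhanced[1000:] - speech[1000:]) ** 2))   # residual noise power
```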

Jeste, Manasi, Gokhale, Paresh, Tare, Shrawani, Chougule, Yutika, Chaudhari, Archana.  2020.  Two-point security system for doors/lockers using Machine learning and Internet Of Things. 2020 Fourth International Conference on Inventive Systems and Control (ICISC). :740—744.
The objective of the proposed research is to develop an IoT-based security system with two-point authentication. Human face recognition and fingerprint scanning are known methods of access authentication. Combining both technologies and integrating the system with IoT will make the security system more efficient and reliable. The online platform Google Firebase is used for saving the database and retrieving it in real time. In this system, access to the fingerprint (touch sensor) from a mobile phone is provided through an Android app developed in Android Studio, and authentication for the same is also proposed. On identification of both face and fingerprint together, access to the door or locker is granted.
Ali, Muqeet, Reaz, Rezwana, Gouda, Mohamed.  2016.  Two-phase Nonrepudiation Protocols. Proceedings of the 7th International Conference on Computing Communication and Networking Technologies. :22:1–22:8.

A nonrepudiation protocol from party S to party R performs two tasks. First, the protocol enables party S to send to party R some text x along with a proof (that can convince a judge) that x was indeed sent by S. Second, the protocol enables party R to receive text x from S and to send to S a proof (that can convince a judge) that x was indeed received by R. A nonrepudiation protocol from one party to another is called two-phase iff the two parties execute the protocol as specified until one of the two parties receives its complete proof. Then and only then does this party refrain from sending any message specified by the protocol because these messages only help the other party complete its proof. In this paper, we present methods for specifying and verifying two-phase nonrepudiation protocols.
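The toy exchange below illustrates the two proofs involved (non-repudiation of origin and of receipt) using ordinary digital signatures; it is a simplified illustration of the obligations, not the specification or verification method presented in the paper, and the message framing is an assumption.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Keys for sender S and receiver R (in practice certified so a judge can check them).
s_priv = Ed25519PrivateKey.generate(); s_pub = s_priv.public_key()
r_priv = Ed25519PrivateKey.generate(); r_pub = r_priv.public_key()

x = b"contract text"

# Phase 1: S sends x together with a non-repudiation-of-origin (NRO) proof.
nro = s_priv.sign(b"NRO|" + x)
s_pub.verify(nro, b"NRO|" + x)          # R checks it; raises InvalidSignature if forged

# Phase 2: R returns a non-repudiation-of-receipt (NRR) proof to S.
nrr = r_priv.sign(b"NRR|" + x)
r_pub.verify(nrr, b"NRR|" + x)          # S checks it

# Two-phase property: once a party holds its complete proof (S holds nrr,
# R holds nro), it has no incentive to send further protocol messages.
print("both proofs verified")
```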

Krishna, M. B., Rodrigues, J. J. P. C..  2017.  Two-Phase Incentive-Based Secure Key System for Data Management in Internet of Things. 2017 IEEE International Conference on Communications (ICC). :1–6.

An Internet of Things (IoT) distributed secure data management system is characterized by authentication and privacy policies that preserve data integrity. Multi-phase security and privacy policies ensure confidentiality and trust between users and service providers. In this regard, we present a novel Two-phase Incentive-based Secure Key (TISK) system for distributed data management in IoT. The proposed system classifies the IoT user nodes and assigns low-level and high-level security keys for data transactions. Low-level secure keys are generic lightweight keys used by data collector nodes and data aggregator nodes for trusted transactions. The TISK phase-I Generic Service Manager (GSM-C) module verifies the IoT devices based on self-trust and server-trust incentive levels. High-level secure keys are dedicated special-purpose keys utilized by data manager nodes and data expert nodes for authorized transactions. The TISK phase-II Dedicated Service Manager (DSM-C) module verifies the certificates issued by the GSM-C module and further issues high-level secure keys to data manager nodes and data expert nodes for specific-purpose transactions. Simulation results indicate that the proposed TISK system reduces the key complexity and key cost needed to ensure distributed secure data management in an IoT network.

Zhou, Junkai, Zhou, Yi, Wei, Dandan.  2018.  A Two-Path Frequency Domain Algorithm for Stereophonic Acoustic Echo Cancellation. Proceedings of the 3rd International Conference on Multimedia Systems and Signal Processing. :94–98.
Stereophonic acoustic echo cancellation is widely used in high-quality audio/video teleconference systems to reduce the echoes coupled between microphones and loudspeakers. In this application scenario, adaptive filters require a very large filter length to deal with the long echo paths. When a time-domain algorithm is used to estimate the echo path, the computational complexity is very high; it can be reduced by frequency-domain adaptive filters. In this paper, an efficient two-channel frequency-domain algorithm is used to achieve this goal. Meanwhile, double-talk often occurs in teleconference systems, so the robustness of the algorithm is equally important. We also propose a robust two-path update control transfer logic for stereophonic echo cancellation to solve the double-talk problem.
Ullah, Imtiaz, Mahmoud, Qusay H..  2019.  A Two-Level Hybrid Model for Anomalous Activity Detection in IoT Networks. 2019 16th IEEE Annual Consumer Communications Networking Conference (CCNC). :1–6.
In this paper we propose a two-level hybrid anomalous activity detection model for intrusion detection in IoT networks. The level-1 model uses flow-based anomaly detection, which is capable of classifying the network traffic as normal or anomalous. The flow-based features are extracted from the CICIDS2017 and UNSW-15 datasets. If anomalous activity is detected, the flow is forwarded to the level-2 model to find the category of the anomaly by deeply examining the contents of the packet. The level-2 model uses Recursive Feature Elimination (RFE) to select significant features, the Synthetic Minority Over-Sampling Technique (SMOTE) for oversampling, and Edited Nearest Neighbors (ENN) for cleaning the CICIDS2017 and UNSW-15 datasets. The precision, recall and F score of our level-1 model were measured at 100% for the CICIDS2017 dataset and 99% for the UNSW-15 dataset, while those of the level-2 model were measured at 100% for the CICIDS2017 dataset and 97% for the UNSW-15 dataset. The predictor we introduce in this paper provides a solid framework for the development of malicious activity detection in IoT networks.
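A compact sketch of the level-2 preprocessing chain using scikit-learn and imbalanced-learn, with synthetic data standing in for the CICIDS2017/UNSW-15 flow features; the feature counts and final classifier are assumptions for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from imblearn.combine import SMOTEENN

# Placeholder imbalanced data standing in for the intrusion-detection flow features.
X, y = make_classification(n_samples=600, n_features=20, weights=[0.9, 0.1],
                           random_state=0)

# Feature selection with Recursive Feature Elimination (RFE).
selector = RFE(RandomForestClassifier(n_estimators=50, random_state=0),
               n_features_to_select=8).fit(X, y)
X_sel = selector.transform(X)

# SMOTE oversampling followed by Edited Nearest Neighbours cleaning.
X_bal, y_bal = SMOTEENN(random_state=0).fit_resample(X_sel, y)

# Level-2 classifier assigning the anomaly category (binary here for brevity).
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_bal, y_bal)
print(clf.score(X_sel, y))
```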
Elzaher, Mahmoud F. Abd, Shalaby, Mohamed.  2021.  Two-level chaotic system versus non-autonomous modulation in the context of chaotic voice encryption. 2021 International Telecommunications Conference (ITC-Egypt). :1—6.
In this paper, two methods are introduced for securing voice communication. The first technique applies a multilevel chaos-based block cipher and the second applies non-autonomous chaotic modulation. In the first approach, the encryption method is implemented by joining the Arnold cat map with the Lorenz system and depends on permuting and substituting voice samples. Applying two levels of chaotic systems enhances the security of the encrypted signal: the voice samples are permuted by applying the Arnold cat map, and the Lorenz chaotic flow is then used to create a masking key that substitutes the permuted samples. In the second method, an encryption scheme based on non-autonomous modulation is implemented in the master system, where the voice is injected into one variable of the Lorenz chaotic flow without modifying the control parameters. Non-autonomous modulation is shown to be more suitable than other techniques for securing real-time applications; it also overcomes the problems of chaotic parameter modulation and chaotic masking. A comparative study of these methods is presented.
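A minimal sketch of the first method's two levels on a toy block of samples: an Arnold cat map permutation followed by additive masking with a Lorenz-generated keystream. The block size, number of map iterations and integration step are arbitrary choices for illustration, not the paper's parameters.

```python
import numpy as np

def arnold_cat_permutation(n, iterations=3):
    """Permute indices 0..n*n-1 with the Arnold cat map on an n-by-n grid."""
    idx = np.arange(n * n).reshape(n, n)
    for _ in range(iterations):
        new = np.empty_like(idx)
        for x in range(n):
            for y in range(n):
                new[(x + y) % n, (x + 2 * y) % n] = idx[x, y]   # cat map [[1,1],[1,2]]
        idx = new
    return idx.ravel()

def lorenz_keystream(length, x=0.1, y=0.0, z=0.0, dt=0.01,
                     sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Generate a masking keystream by Euler integration of the Lorenz system."""
    ks = np.empty(length)
    for i in range(length):
        dx = sigma * (y - x); dy = x * (rho - z) - y; dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        ks[i] = x
    return ks

# Toy "voice" block of 16x16 = 256 samples: permute, then mask.
n = 16
samples = np.sin(np.linspace(0, 20, n * n))
perm = arnold_cat_permutation(n)
cipher = samples[perm] + lorenz_keystream(n * n)       # permutation + masking
recovered = np.empty_like(samples)
recovered[perm] = cipher - lorenz_keystream(n * n)     # inverse with the same keys
print(np.allclose(recovered, samples))
```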
Letychevskyi, Oleksandr.  2019.  Two-Level Algebraic Method for Detection of Vulnerabilities in Binary Code. 2019 10th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS). 2:1074–1077.
This study introduces formal methods for detection of vulnerabilities in binary code. It considers the transformation of binary code into behavior algebra expressions and formalization of vulnerabilities. The detection method has two levels: behavior matching and symbolic execution with vulnerability pattern matching. This enables more efficient performance.