Sani, Abubakar Sadiq, Yuan, Dong, Meng, Ke, Dong, Zhao Yang.
2021.
R-Chain: A Universally Composable Relay Resilience Framework for Smart Grids. 2021 IEEE Power & Energy Society General Meeting (PESGM). :01–05.
Smart grids can be exposed to relay attacks (or wormhole attacks) resulting from weaknesses in cryptographic operations, such as authentication and key derivation, associated with process automation protocols. Relay attacks are attacks in which authentication is evaded without needing to attack the smart grid itself. In this paper, using a universal composability model that provides a strong security notion for designing cryptographic operations, we formulate the relay resilience settings necessary for strengthening authentication and key derivation and enhancing relay security in process automation protocols. We introduce R-Chain, a universally composable relay resilience framework that prevents bypass of cryptographic operations. Our framework provides an ideal chaining functionality that integrates all cryptographic operations such that all outputs from a preceding operation are used as input to the subsequent operation to support relay resilience. We apply R-Chain to provide relay resilience in a practical smart grid process automation protocol, namely WirelessHART.
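The chaining functionality described above can be sketched as follows: each cryptographic operation consumes the full output of the preceding one, so no step can be bypassed or replayed in isolation. This is a minimal illustrative sketch, not the paper's construction; the function names, the 16-byte nonce, and the use of SHA-256/HMAC are assumptions for the demo.

```python
import hashlib
import hmac
import os

def derive_key(shared_secret: bytes, context: bytes) -> bytes:
    """Key derivation step: binds the session key to a context string."""
    return hashlib.sha256(shared_secret + context).digest()

def authenticate(key: bytes, message: bytes) -> bytes:
    """Authentication step: the MAC key is the *output* of key derivation."""
    return hmac.new(key, message, hashlib.sha256).digest()

def chained_session(shared_secret: bytes, nonce: bytes, message: bytes) -> bytes:
    # Step 1: key derivation, seeded with a fresh nonce.
    session_key = derive_key(shared_secret, nonce)
    # Step 2: authentication, keyed with step 1's output.
    tag = authenticate(session_key, message)
    # Step 3: the tag itself feeds the next round's context, so all
    # operations are chained together and none can be skipped.
    next_context = hashlib.sha256(tag + nonce).digest()
    return next_context

ctx = chained_session(b"shared-secret", os.urandom(16), b"process data")
print(len(ctx))  # 32-byte chaining value (SHA-256 digest size)
```

A relayed message that skips the key-derivation step cannot produce a valid tag here, because the MAC key only exists as the output of that step.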
Baumann, Christoph, Dam, Mads, Guanciale, Roberto, Nemati, Hamed.
2021.
On Compositional Information Flow Aware Refinement. 2021 IEEE 34th Computer Security Foundations Symposium (CSF). :1–16.
The concepts of information flow security and refinement are known to have had a troubled relationship ever since the seminal work of McLean. In this work we study refinements that support changes in data representation and semantics, including the addition of state variables that may induce new observational power or side channels. We propose a new epistemic approach to ignorance-preserving refinement where an abstract model is used as a specification of a system's permitted information flows, which may include the declassification of secret information. The core idea is to require that refinement steps must not induce observer knowledge that is not already available in the abstract model. Our study is set in the context of a class of shared variable multiagent models similar to interpreted systems in epistemic logic. We demonstrate the expressiveness of our framework through a series of small examples and compare our approach to existing, stricter notions of information-flow secure refinement based on bisimulations and noninterference preservation. Interestingly, noninterference preservation is not supported "out of the box" in our setting, because refinement steps may introduce new secrets that are independent of secrets already present at the abstract level. To support verification, we first introduce a "cube-shaped" unwinding condition related to conditions recently studied in the context of value-dependent noninterference, kernel verification, and secure compilation. A fundamental problem with ignorance-preserving refinement, caused by the support for general data and observation refinement, is that sequential composability is lost. We propose a solution based on relational pre- and postconditions and illustrate its use together with unwinding on the oblivious RAM construction of Chung and Pass.
Jiang, Hongpu, Yuan, Yuyu, Guo, Ting, Zhao, Pengqian.
2021.
Measuring Trust and Automatic Verification in Multi-Agent Systems. 2021 8th International Conference on Dependable Systems and Their Applications (DSA). :271—277.
Because resources and services are scarce, agents are often in competition with each other, and excessive competition leads to a social dilemma. With a view to breaking this social dilemma, we present a novel trust-based logic framework called Trust Computation Logic (TCL) for measuring trust, finding the best partners to collaborate with, and automatically verifying trust in Multi-Agent Systems (MASs). TCL starts from defining trust states in Multi-Agent Systems, based on the contrast between behavior recorded in a trust behavior library and observed behavior. In particular, we put forward a set of reasoning postulates along with formal proofs to support our measurement process. Moreover, we introduce symbolic model checking algorithms to formally and automatically verify the system. Finally, we evaluate the trust measurement method and report experimental results using DeepMind's Sequential Social Dilemma (SSD) multi-agent game-theoretic environments.
Telghamti, Samira, Derdouri, Lakhdhar.
2021.
Towards a Trust-based Model for Access Control for Graph-Oriented Databases. 2021 International Conference on Theoretical and Applicative Aspects of Computer Science (ICTAACS). :1—3.
Privacy and data security are critical aspects of databases, especially when the latter are publicly accessible, as in social networks. Furthermore, for advanced databases such as NoSQL ones, security models and security metadata must be integrated with the business specification and data. The models proposed in the literature for NoSQL databases can be considered static, in the sense that the privileges of a given user are predefined and remain unchanged during job sessions. In this paper, we propose a novel model for NoSQL database access control that we aim to make dynamic. To design such a model, we use the concept of trust to compute the reputation degree of a given user playing a given role.
Prasath, R., Rajan, Rajesh George.
2021.
Autonomous Application in Requirements Analysis of Information System Development for Producing a Design Model. 2021 2nd International Conference on Communication, Computing and Industry 4.0 (C2I4). :1—8.
The main technologies of traditional information security are firewalls, intrusion detection, and anti-virus software, used as a first line of defence against outsiders and as passive defence of service terminals. The complexity of these security technologies not only increases the complexity of the autonomous system and reduces its efficiency, but also fails to solve the security problems of the information system or to satisfy its security requirements. After a long period of research and development built on password technology and network security technology, the concept of "trusted computing" was put forward, based on hardware security module support. Trusted computing changes the traditional protection mindset by focusing on security measures taken at the terminal to prevent system attacks, realizing the security of information systems from the foundation of the platform. Trusted computing is chiefly concerned with the security of the system terminal, using a series of security measures to protect the privacy of users and improve the security of autonomous systems. Its main design idea is to embed in an ordinary machine a tamper-resistant hardware device, the trusted platform module, as the root of trust, and to use hardware and software techniques to extend trust from this root through a chain of trust to the entire autonomous system. Combined with the three major security measures of protected data storage, user authentication, and platform integrity, this ensures that the terminal system is secure and reliable and always in a state of expected behavior.
Yang, Liu, Zhang, Ping, Tao, Yang.
2021.
Malicious Nodes Detection Scheme Based On Dynamic Trust Clouds for Wireless Sensor Networks. 2021 6th International Symposium on Computer and Information Processing Technology (ISCIPT). :57—61.
The randomness, ambiguity and other uncertainties of trust relationships in Wireless Sensor Networks (WSNs) often make existing trust management methods unsatisfactory in terms of accuracy. This paper proposes a trust evaluation method based on the cloud model for malicious node detection, achieving conversion between qualitative and quantitative descriptions of sensor node trust. Firstly, nodes cooperate with each other to establish a standard cloud template for malicious nodes and a standard cloud template for normal nodes, so that each node has a qualitative description as either malicious or normal. Secondly, the trust cloud template obtained during interactions is matched against these standard templates to detect malicious nodes. Simulation results demonstrate that the proposed method greatly improves the accuracy of malicious node detection.
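The template-matching idea can be sketched as follows, assuming the standard backward cloud generator (expectation Ex, entropy En, hyper-entropy He) and a simple Euclidean distance to the two standard templates; the paper's exact formulas and matching rule may differ.

```python
import numpy as np

def cloud_characteristics(trust_samples):
    """Summarize a node's trust history by the cloud digital
    characteristics (Ex, En, He), via the backward cloud generator."""
    x = np.asarray(trust_samples, dtype=float)
    ex = x.mean()                                    # expectation Ex
    en = np.sqrt(np.pi / 2) * np.abs(x - ex).mean()  # entropy En
    he = np.sqrt(abs(x.var() - en ** 2))             # hyper-entropy He
    return np.array([ex, en, he])

def classify(node_samples, normal_template, malicious_template):
    """Match a node's trust cloud against the two standard templates."""
    c = cloud_characteristics(node_samples)
    d_norm = np.linalg.norm(c - normal_template)
    d_mal = np.linalg.norm(c - malicious_template)
    return "malicious" if d_mal < d_norm else "normal"

# Standard templates built cooperatively from known-behaviour nodes
# (the trust values here are invented for illustration).
normal_tpl = cloud_characteristics([0.90, 0.85, 0.95, 0.88, 0.92])
malicious_tpl = cloud_characteristics([0.20, 0.35, 0.10, 0.30, 0.25])

print(classify([0.15, 0.30, 0.20, 0.25], normal_tpl, malicious_tpl))
```

The entropy and hyper-entropy terms are what let the scheme capture the randomness and ambiguity of trust, rather than relying on the mean trust value alone.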
Choi, Heeyoung, Kang, Ju Young.
2021.
Practical Approach of Security Enhancement Method based on the Protection Motivation Theory. 2021 21st ACIS International Winter Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD-Winter). :96—97.
Strengthening information security requires practical solutions that reduce information security stress, because security measures work properly only when the members of the organization who use them are motivated. Therefore, this study attempts to identify the key factors that can enhance security while reducing the information security stress of organization members. To this end, based on protection motivation theory, trust in information security policies and security stress are set as mediating factors explaining changes in security reinforcement behavior, and the perceived risk of cyberattacks, efficacy, and response costs are considered antecedents. Our study suggests a solution to the security reinforcement problem by analyzing the factors that influence the behavior of organization members and can raise their protection motivation.
Zhu, Jinhui, Chen, Liangdong, Liu, Xiantong, Zhao, Lincong, Shen, Peipei, Chen, Jinghan.
2021.
Trusted Model Based on Multi-dimensional Attributes in Edge Computing. 2021 2nd Asia Symposium on Signal Processing (ASSP). :95—100.
As a supplement to the cloud computing model, the edge computing model can use edge servers and edge devices to coordinate information processing at the edge of the network, helping with Internet of Things (IoT) data storage, transmission, and computing tasks. In view of the complex and changeable scenarios of edge computing in the IoT, this paper proposes a multi-dimensional trust evaluation factor selection scheme. It improves the traditional trust modeling method based on direct/indirect trust by introducing multi-dimensional trust decision attributes, relying on the collaboration of edge servers and edge device nodes to infer and quantify the trust relationships between nodes, and combining information entropy theory to smoothly weight the calculation results of the multi-dimensional decision attributes. This addresses the limited dynamic environmental adaptability and assessment reliability of traditional trust evaluation schemes. Simulation experiments show that the multi-dimensional trust evaluation model for edge computing IoT proposed in this paper performs better than the trust models in the related literature.
Khan, Muhammad Taimoor, Serpanos, Dimitrios, Shrobe, Howard.
2021.
Towards Scalable Security of Real-time Applications: A Formally Certified Approach. 2021 26th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA). :01–04.
In this paper, we present our ongoing work to develop an efficient and scalable verification method to achieve runtime security of real-time applications with strict performance requirements. The method allows specifying the (functional and non-functional) behaviour of a real-time application and a set of known attacks/threats. The challenge here is to prove that the runtime application execution is at the same time (i) correct w.r.t. the functional specification and (ii) protected against the specified set of attacks, without violating any non-functional specification (e.g., real-time performance). To address the challenge, we first classify the set of attacks into computational, data integrity, and communication attacks. Second, we decompose each class into its declarative properties and definitive properties. A declarative property specifies an attack as one big-step relation between the initial and final state without considering intermediate states, while a definitive property specifies an attack as a composition of many small-step relations considering all intermediate states between the initial and final state. Semantically, the declarative property of an attack is equivalent to its corresponding definitive property. Based on this decomposition and an adequate specification of the underlying runtime environment (e.g., compiler, processor and operating system), we prove rigorously that the application execution in a particular runtime environment is protected against the declarative properties without violating the runtime performance specification of the application. Furthermore, from the specification, we generate a security monitor that assures that the application execution is secure against each class of attacks at runtime without hindering the real-time performance of the application.
Viand, Alexander, Jattke, Patrick, Hithnawi, Anwar.
2021.
SoK: Fully Homomorphic Encryption Compilers. 2021 IEEE Symposium on Security and Privacy (SP). :1092—1108.
Fully Homomorphic Encryption (FHE) allows a third party to perform arbitrary computations on encrypted data, learning neither the inputs nor the computation results. Hence, it provides resilience in situations where computations are carried out by an untrusted or potentially compromised party. This powerful concept was first conceived by Rivest et al. in the 1970s. However, it remained unrealized until Craig Gentry presented the first feasible FHE scheme in 2009. The advent of the massive collection of sensitive data in cloud services, coupled with a plague of data breaches, moved highly regulated businesses to increasingly demand confidential and secure computing solutions. This demand, in turn, has led to a recent surge in the development of FHE tools. To understand the landscape of recent FHE tool developments, we conduct an extensive survey and experimental evaluation to explore the current state of the art and identify areas for future development. In this paper, we survey, evaluate, and systematize FHE tools and compilers. We perform experiments to evaluate these tools' performance and usability aspects on a variety of applications. We conclude with recommendations for developers intending to develop FHE-based applications and a discussion on future directions for FHE tools development.
Lin, Yan, Gao, Debin.
2021.
When Function Signature Recovery Meets Compiler Optimization. 2021 IEEE Symposium on Security and Privacy (SP). :36—52.
Matching indirect function callees and callers using function signatures recovered from binary executables (number of arguments and argument types) has been proposed to construct a more fine-grained control-flow graph (CFG) to help control-flow integrity (CFI) enforcement. However, various compiler optimizations may violate calling conventions and result in unmatched function signatures. In this paper, we present eight scenarios in which compiler optimizations impact function signature recovery, and report experimental results with 1,344 real-world applications of various optimization levels. Most interestingly, our experiments show that compiler optimizations have both positive and negative impacts on function signature recovery, e.g., its elimination of redundant instructions at callers makes counting of the number of arguments more accurate, while it hurts argument type matching as the compiler chooses the most efficient (but potentially different) types at callees and callers. To better deal with these compiler optimizations, we propose a set of improved policies and report our more accurate CFG models constructed from the 1,344 applications. We additionally compare our results recovered from binary executables with those extracted from program source and reveal scenarios where compiler optimization makes the task of accurate function signature recovery undecidable.
Saki, Abdullah Ash, Suresh, Aakarshitha, Topaloglu, Rasit Onur, Ghosh, Swaroop.
2021.
Split Compilation for Security of Quantum Circuits. 2021 IEEE/ACM International Conference On Computer Aided Design (ICCAD). :1—7.
An efficient quantum circuit (program) compiler aims to minimize the gate count, through efficient instruction translation, routing, and gate cancellation, to improve run-time and noise. Therefore, a high-efficiency compiler is paramount to enable the game-changing promises of quantum computers. To date, quantum computing hardware providers offer a software stack supporting their hardware. However, several third-party software toolchains, including compilers, are emerging. They support hardware from different vendors and potentially offer better efficiency. As the quantum computing ecosystem becomes more popular and practical, it is only prudent to assume that more companies will start offering software-as-a-service for quantum computers, including high-performance compilers. With the emergence of third-party compilers, security and privacy issues for quantum intellectual properties (IPs) will follow. A quantum circuit can include sensitive information such as critical financial analysis and proprietary algorithms. Therefore, submitting quantum circuits to untrusted compilers creates opportunities for adversaries to steal IPs. In this paper, we present a split compilation methodology to secure IPs from untrusted compilers while taking advantage of their optimizations. In this methodology, a quantum circuit is split into multiple parts that are sent to a single compiler at different times or to multiple compilers. In this way, the adversary has access only to partial information. With an analysis of over 152 circuits on three IBM hardware architectures, we demonstrate that the split compilation methodology can completely secure IPs (when multiple compilers are used) or can introduce factorial time reconstruction complexity while incurring a modest overhead (3% to 6% on average).
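The splitting step can be sketched as follows, representing a circuit as an ordered gate list. The contiguous slicing policy, the gate names, and the stand-in `compile_part` function are illustrative assumptions; a real deployment would send each slice to an external compiler service.

```python
def split_circuit(gates, n_parts):
    """Cut the circuit into contiguous slices, preserving gate order."""
    k, r = divmod(len(gates), n_parts)
    parts, i = [], 0
    for p in range(n_parts):
        size = k + (1 if p < r else 0)  # spread the remainder evenly
        parts.append(gates[i:i + size])
        i += size
    return parts

def compile_part(part):
    """Stand-in for an untrusted compiler: here it just echoes the slice."""
    return list(part)

def split_compile(gates, n_parts):
    parts = split_circuit(gates, n_parts)
    # Each slice may go to a different compiler, or to the same compiler
    # at different times; each one sees only partial information.
    compiled = [compile_part(p) for p in parts]
    # The trusted client stitches the optimized slices back together.
    return [g for part in compiled for g in part]

circuit = [("h", 0), ("cx", 0, 1), ("rz", 1), ("cx", 1, 2), ("h", 2)]
print(split_compile(circuit, 2))
```

The security argument in the paper rests on the adversary having to guess how the slices fit together, which grows factorially in the number of parts when a single compiler receives them out of order.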
Bichhawat, Abhishek, McCall, McKenna, Jia, Limin.
2021.
Gradual Security Types and Gradual Guarantees. 2021 IEEE 34th Computer Security Foundations Symposium (CSF). :1—16.
Information flow type systems enforce the security property of noninterference by detecting unauthorized data flows at compile-time. However, they require precise type annotations, making them difficult to use in practice, as much of the legacy infrastructure is written in untyped or dynamically-typed languages. Gradual typing seamlessly integrates static and dynamic typing, providing the best of both approaches, and has been applied to information flow control, where information flow monitors are derived from gradual security types. Prior work on gradual information flow typing uncovered tensions between noninterference and the dynamic gradual guarantee: the property that less precise security type annotations in a program should not cause more runtime errors. This paper re-examines the connection between gradual information flow types and information flow monitors to identify the root cause of the tension between the gradual guarantees and noninterference. We develop runtime semantics for a simple imperative language with gradual information flow types that provides both noninterference and gradual guarantees. We leverage a proof technique developed for FlowML and reduce noninterference proofs to preservation proofs.
Winderix, Hans, Mühlberg, Jan Tobias, Piessens, Frank.
2021.
Compiler-Assisted Hardening of Embedded Software Against Interrupt Latency Side-Channel Attacks. 2021 IEEE European Symposium on Security and Privacy (EuroS&P). :667—682.
Recent controlled-channel attacks exploit timing differences in the rudimentary fetch-decode-execute logic of processors. These new attacks also pose a threat to software on embedded systems. Even when Trusted Execution Environments (TEEs) are used, interrupt latency attacks allow untrusted code to extract application secrets from a vulnerable enclave by scheduling interruption of the enclave. Constant-time programming is effective against these attacks but, as we explain in this paper, can come with some disadvantages regarding performance. To deal with this new threat, we propose a novel algorithm that hardens programs during compilation by aligning the execution time of corresponding instructions in secret-dependent branches. Our results show that, on a class of embedded systems with deterministic execution times, this approach eliminates interrupt latency side-channel leaks and mitigates limitations of constant-time programming. We have implemented our approach in the LLVM compiler infrastructure for the Sancus TEE, which extends the openMSP430 microcontroller, and we discuss applicability to other architectures. We make our implementation and benchmarks available for further research.
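The alignment idea can be modeled in a few lines: on a core with deterministic instruction timings, pad the shorter arm of a secret-dependent branch until both arms take the same number of cycles. The instruction set and latencies below are invented for illustration; real Sancus/MSP430 timings and the paper's full algorithm (which aligns corresponding instructions, not just totals) differ.

```python
# Invented single-issue latencies, in cycles.
LATENCY = {"mov": 1, "add": 1, "mul": 2, "jmp": 2, "nop": 1}

def cycles(block):
    """Total execution time of a straight-line block."""
    return sum(LATENCY[op] for op in block)

def align_branch(then_block, else_block):
    """Equalize arm execution time by appending single-cycle nops,
    so an interrupt's latency no longer reveals which arm ran."""
    diff = cycles(then_block) - cycles(else_block)
    if diff > 0:
        else_block = else_block + ["nop"] * diff
    elif diff < 0:
        then_block = then_block + ["nop"] * (-diff)
    return then_block, else_block

t, e = align_branch(["mov", "mul", "add"], ["mov"])
assert cycles(t) == cycles(e)  # both arms now take 4 cycles
print(cycles(t), cycles(e))
```

Unlike constant-time programming, this transformation keeps the original branch structure and only pays the padding cost on the shorter arm.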
Alatoun, Khitam, Shankaranarayanan, Bharath, Achyutha, Shanmukha Murali, Vemuri, Ranga.
2021.
SoC Trust Validation Using Assertion-Based Security Monitors. 2021 22nd International Symposium on Quality Electronic Design (ISQED). :496—503.
Modern SoC applications include a variety of sensitive modules in which data must be protected against malicious access. Security vulnerabilities, when exercised during the SoC operation, lead to denial of service or disclosure of protected data. Hence, it is essential to undertake security validation before and after SoC fabrication and make provisions for continuous security assessment during operation. This paper presents a methodology for optimized post-deployment monitoring of SoC's security properties by migrating pre-fab design security assertions to post-fab run-time security monitors. We show that the method is scalable for large systems and complex properties by optimizing the hardware monitors and applying it to a large SoC design based on an OpenRISC-1200 SoC. About 40 security assertions were specified in System Verilog Assertions (SVA). Following formal verification, the assertions were synthesized into finite state machines and cross optimized. Following code generation in Verilog, commercial logic and layout synthesis tools were used to generate hardware monitors, which were then integrated with the SoC design ready for fabrication.
Stepanov, Daniil, Akhin, Marat, Belyaev, Mikhail.
2021.
Type-Centric Kotlin Compiler Fuzzing: Preserving Test Program Correctness by Preserving Types. 2021 14th IEEE Conference on Software Testing, Verification and Validation (ICST). :318—328.
Kotlin is a relatively new programming language from JetBrains: its development started in 2010 with release 1.0 done in early 2016. The Kotlin compiler, while slowly and steadily becoming more and more mature, still crashes from time to time on the more tricky input programs, not least because of the complexity of its features and their interactions. This makes it a great target for fuzzing, even the basic forms of which can find a significant number of Kotlin compiler crashes. There is a problem with fuzzing, however, closely related to the cause of the crashes: generating a random, non-trivial and semantically valid Kotlin program is hard. In this paper, we talk about type-centric compiler fuzzing in the form of type-centric enumeration, an approach inspired by skeletal program enumeration [1] and based on a combination of generative and mutation-based fuzzing, which solves this problem by focusing on program types. After creating the skeleton program, we fill the typed holes with fragments of suitable type, created via generation and enhanced by semantic-aware mutation. We implemented this approach in our Kotlin compiler fuzzing framework called Backend Bug Finder (BBF) and did an extensive evaluation, not only testing the real-world feasibility of our approach, but also comparing it to other compiler fuzzing techniques. The results show our approach to be significantly better compared to other fuzzing approaches at generating semantically valid Kotlin programs, while creating more interesting crash-inducing inputs at the same time. We managed to find more than 50 previously unknown compiler crashes, of which 18 were considered important after their triage by the compiler team.
El-Korashy, Akram, Tsampas, Stelios, Patrignani, Marco, Devriese, Dominique, Garg, Deepak, Piessens, Frank.
2021.
CapablePtrs: Securely Compiling Partial Programs Using the Pointers-as-Capabilities Principle. 2021 IEEE 34th Computer Security Foundations Symposium (CSF). :1—16.
Capability machines such as CHERI provide memory capabilities that can be used by compilers to provide security benefits for compiled code (e.g., memory safety). The existing C to CHERI compiler, for example, achieves memory safety by following a principle called "pointers as capabilities" (PAC). Informally, PAC says that a compiler should represent a source language pointer as a machine code capability. But the security properties of PAC compilers are not yet well understood. We show that memory safety is only one aspect, and that PAC compilers can provide significant additional security guarantees for partial programs: the compiler can provide security guarantees for a compilation unit, even if that compilation unit is later linked to attacker-provided machine code. As such, this paper is the first to study the security of PAC compilers for partial programs formally. We prove for a model of such a compiler that it is fully abstract. The proof uses a novel proof technique (dubbed TrICL, read trickle), which should be of broad interest because it reuses the whole-program compiler correctness relation for full abstraction, thus saving work. We also implement our scheme for C on CHERI, show that we can compile legacy C code with minimal changes, and show that the performance overhead of compiled code is roughly proportional to the number of cross-compilation-unit function calls.
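The pointers-as-capabilities principle can be illustrated with a small model: a source pointer is compiled not to a raw address but to a capability carrying bounds and permissions, and every dereference is checked. The class below is a Python stand-in for CHERI's hardware checks, with invented field names; it is not the compiler's actual representation.

```python
class Memory:
    """Flat word-addressable memory."""
    def __init__(self, size):
        self.cells = [0] * size

class Capability:
    """A pointer with base, length, offset and permissions."""
    def __init__(self, mem, base, length, perms=("r", "w")):
        self.mem, self.base, self.length = mem, base, length
        self.offset, self.perms = 0, perms

    def _check(self, perm):
        # The "hardware" check performed on every use of the capability.
        if perm not in self.perms:
            raise PermissionError(f"capability lacks {perm!r}")
        if not 0 <= self.offset < self.length:
            raise IndexError("capability out of bounds")

    def load(self):
        self._check("r")
        return self.mem.cells[self.base + self.offset]

    def store(self, value):
        self._check("w")
        self.mem.cells[self.base + self.offset] = value

    def add(self, n):
        # Pointer arithmetic moves the offset; bounds are checked on use,
        # so attacker-linked code cannot reach outside the unit's objects.
        self.offset += n
        return self

mem = Memory(16)
buf = Capability(mem, base=4, length=4)  # like a 4-element array in a unit
buf.store(7)
print(buf.load())                        # 7
try:
    buf.add(4).load()                    # one past the end
except IndexError as exc:
    print("trapped:", exc)
```

Because capabilities are unforgeable on real hardware, a compilation unit's objects stay protected even when linked against untrusted machine code, which is the partial-program guarantee the paper formalizes.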
Kafedziski, Venceslav.
2021.
Compressive Sampling Stepped Frequency GPR Using Probabilistic Structured Sparsity Models. 2021 15th International Conference on Advanced Technologies, Systems and Services in Telecommunications (TELSIKS). :139–144.
We investigate a compressive sampling (CS) stepped frequency ground penetrating radar for detection of underground objects, which uses Bayesian estimation and a probabilistic model for the target support. Due to the underground targets being sparse, the B-scan is a sparse image. Using the CS principle, the stepped frequency radar is implemented using a subset of random frequencies at each antenna position. For image reconstruction we use Markov Chain and Markov Random Field models for the target support in the B-scan, where we also estimate the model parameters using the Expectation Maximization algorithm. The approach is tested using Web radar data obtained by measuring the signal responses scattered off land mine targets in a laboratory experimental setup. Our approach results in improved performance compared to the standard denoising algorithm for image reconstruction.
Prasad Reddy, V H, Kishore Kumar, Puli.
2021.
Performance Comparison of Orthogonal Matching Pursuit and Novel Incremental Gaussian Elimination OMP Reconstruction Algorithms for Compressive Sensing. 2021 IEEE International Conference on Microwaves, Antennas, Communications and Electronic Systems (COMCAS). :367—372.
Compressive Sensing (CS) is a promising research field in the communication signal processing domain. It offers the advantage of compression while sampling; hence, data redundancy is reduced and sampled data transmission is improved. Because compressed samples are acquired, Analog-to-Digital Converter (ADC) performance also improves in ultra-high-frequency communication applications. Several reconstruction algorithms exist to reconstruct the original signal from these sub-Nyquist samples. Orthogonal Matching Pursuit (OMP), considered in this work, falls under the category of greedy algorithms. We implemented a compressively sensed sampling procedure using a Random Demodulator Analog-to-Information Converter (RD-AIC). For CS reconstruction, we consider OMP and a novel Incremental Gaussian Elimination (IGE) OMP algorithm to reconstruct the original signal. A performance comparison between OMP and IGE OMP is presented.
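For reference, the baseline OMP the paper compares against is the textbook greedy loop below (NumPy version). The paper's IGE OMP replaces the repeated least-squares solve with incremental Gaussian elimination; that variant is not reproduced here, and the sensing matrix and sparse signal are synthetic.

```python
import numpy as np

def omp(A, y, sparsity):
    """Greedily recover a `sparsity`-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(sparsity):
        # 1) Pick the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        support.append(idx)
        # 2) Re-fit all selected coefficients by least squares
        #    (the step IGE OMP computes incrementally instead).
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        # 3) Update the residual.
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)  # random sensing matrix
x_true = np.zeros(100)
x_true[[7, 33, 71]] = [2.0, -1.8, 1.6]            # 3-sparse signal
x_hat = omp(A, A @ x_true, sparsity=3)
print(np.flatnonzero(x_hat))
```

The least-squares re-fit at every iteration is what makes plain OMP costly; avoiding that recomputation is the motivation for the incremental variant.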
Blanco, Geison, Perez, Juan, Monsalve, Jonathan, Marquez, Miguel, Esnaola, Iñaki, Arguello, Henry.
2021.
Single Snapshot System for Compressive Covariance Matrix Estimation for Hyperspectral Imaging via Lenslet Array. 2021 XXIII Symposium on Image, Signal Processing and Artificial Vision (STSIVA). :1—5.
Compressive Covariance Sampling (CCS) is a strategy used to recover the covariance matrix (CM) directly from compressive measurements. Several works have proven the advantages of CCS in Compressive Spectral Imaging (CSI), but most of these algorithms require multiple random projections of the scene to obtain good reconstructions. However, several low-resolution copies of the scene can be captured in a single snapshot through a lenslet array. For this reason, this paper proposes a sensing protocol and a single-snapshot CCS optical architecture using a lenslet array, based on the Dual Dispersive Aperture Spectral Imager (DD-CASSI), that allows the recovery of the covariance matrix with a single snapshot. In this architecture, the lenslet array allows different projections of the image to be obtained in one shot thanks to the special coded aperture. To validate the proposed approach, simulations evaluate the quality of the recovered CM and the performance in recovering the spectral signatures against traditional methods. Results show that image reconstructions using the CM have PSNR values of about 30 dB, and the reconstructed spectrum has a spectral angle mapper (SAM) error of less than 15° compared to the original spectral signatures.
Kozhemyak, Olesya A., Stukach, Oleg V..
2021.
Reducing the Root-Mean-Square Error at Signal Restoration using Discrete and Random Changes in the Sampling Rate for the Compressed Sensing Problem. 2021 International Siberian Conference on Control and Communications (SIBCON). :1—3.
The data revolution will continue in the near future and move from centralized big data to "small" datasets. This trend stimulates the emergence not only of new machine learning methods but also of algorithms for processing data at the point of origin. Thus the Compressed Sensing Problem must be investigated in technology fields that produce data flows for real-time decision making. In this paper, we compare random and constant frequency deviation and highlight some circumstances where the advantages of random deviation become more obvious. We also propose to use differential transformations aimed at restoring the signal form from discretes of the differential spectrum of the received signal. In some cases for the investigated model, this approach has an advantage in the compression of information.
Zhu, Zhen, Chi, Cheng, Zhang, Chunhua.
2021.
Spatial-Resampling Wideband Compressive Beamforming. OCEANS 2021: San Diego – Porto. :1—4.
Compressive beamforming has been successfully applied to the estimation of the direction of arrival (DOA) of array signals, and has higher angular resolution than traditional high-resolution beamforming methods. However, most existing compressive beamforming methods are based on narrowband signal models. Wideband signal processing with these methods divides the frequency band into several narrow bands and adds up the beamforming results of each narrow band. In sonar applications, however, signals usually consist of a continuous spectrum and line spectra, and the line spectra are usually more than 10 dB higher than the continuous spectrum. Due to the large differences in signal-to-noise ratio (SNR) between the narrow bands, different regularization parameters should be used, otherwise it is difficult to get an ideal result, which makes compressive beamforming highly complicated. In this paper, a compressive beamforming method based on spatial resampling for uniform linear arrays is proposed. The signals are converted into narrowband signals by the spatial resampling technique, and compressive beamforming is then performed to estimate the DOA of the sound source. Experimental results show the superiority of the proposed method, which avoids the problem of using different parameters in existing compressive beamforming methods, with resolution comparable to that of existing methods using different parameters for wideband models. Spatial-resampling compressive beamforming is more robust when the regularization parameter is fixed and exhibits lower levels of background interference than existing methods.
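The spatial resampling step can be sketched as follows: each frequency bin's array snapshot is interpolated onto a virtually scaled aperture so that every bin carries the phase structure of one reference frequency, after which a single beamformer applies. All parameters (sound speed, band, spacing) are invented for the demo, and plain delay-and-sum is used in place of compressive beamforming.

```python
import numpy as np

c = 1500.0  # assumed sound speed in water, m/s

def spatial_resample(X, freqs, d, f_ref):
    """X: (n_freq, n_sensors) snapshots of a ULA with spacing d.
    Sampling the field at positions p * f_ref / f turns the plane-wave
    phase 2*pi*f*p*sin(theta)/c into 2*pi*f_ref*p*sin(theta)/c."""
    n = X.shape[1]
    pos = np.arange(n) * d
    Y = np.empty_like(X)
    for k, f in enumerate(freqs):
        virt = pos * f_ref / f  # f >= f_ref, so we interpolate inward
        Y[k] = np.interp(virt, pos, X[k].real) + 1j * np.interp(virt, pos, X[k].imag)
    return Y

# Simulated wideband plane wave from 20 degrees.
freqs = np.linspace(300.0, 600.0, 16)
f_ref = freqs.min()                 # reference: the lowest bin
d = c / (2 * freqs.max())           # half-wavelength spacing at band top
n_sensors = 24
theta = np.deg2rad(20.0)
pos = np.arange(n_sensors) * d
X = np.exp(2j * np.pi * freqs[:, None] * pos[None, :] * np.sin(theta) / c)

Y = spatial_resample(X, freqs, d, f_ref)
# One narrowband steering matrix at f_ref now serves every bin.
angles = np.deg2rad(np.arange(-90, 91))
A = np.exp(2j * np.pi * f_ref * pos[None, :] * np.sin(angles)[:, None] / c)
power = np.abs(A.conj() @ Y.T).sum(axis=1)  # incoherent sum over bins
print(np.degrees(angles[np.argmax(power)]))
```

Because every resampled bin shares one steering model, a single regularization parameter suffices in the subsequent sparse reconstruction, which is the practical advantage the paper claims.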
Liu, Cong, Liu, Yunqing, Li, Qi, Wei, Zikang.
2021.
Radar Target MTD 2D-CFAR Algorithm Based on Compressive Detection. 2021 IEEE International Conference on Mechatronics and Automation (ICMA). :83—88.
To solve the problem of the large data volume implied by the traditional Nyquist sampling theorem in radar signal detection, a compressive detection (CD) model based on compressed sensing (CS) theory is proposed by analyzing the sparsity of the radar target in the range domain. A lower sampling rate completes the compressive sampling of the radar signal in the range domain. On this basis, the two-dimensional distribution over Doppler units is established by moving target detection (MTD), and detection of the target is achieved with a two-dimensional constant false alarm rate (2D-CFAR) detection algorithm. Simulation results prove that the algorithm can effectively detect targets without needing to reconstruct the signal, and has good detection performance.
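For reference, a generic 2D cell-averaging CFAR over a range-Doppler map looks like the sketch below. The guard/training geometry and threshold factor are assumptions for illustration; the paper's exact 2D-CFAR configuration is not specified here.

```python
import numpy as np

def cfar_2d(rd_map, guard=1, train=2, alpha=5.0):
    """Return a boolean detection map; True marks cells whose power
    exceeds alpha times the local average of the training cells."""
    rows, cols = rd_map.shape
    det = np.zeros_like(rd_map, dtype=bool)
    w = guard + train
    for r in range(w, rows - w):
        for c in range(w, cols - w):
            window = rd_map[r - w:r + w + 1, c - w:c + w + 1].copy()
            # Zero out the guard region and the cell under test.
            window[train:train + 2 * guard + 1,
                   train:train + 2 * guard + 1] = 0
            n_train = (2 * w + 1) ** 2 - (2 * guard + 1) ** 2
            noise = window.sum() / n_train  # average training-cell power
            det[r, c] = rd_map[r, c] > alpha * noise
    return det

rng = np.random.default_rng(1)
rd = rng.exponential(1.0, size=(32, 32))  # noise-like range-Doppler map
rd[16, 16] = 100.0                        # injected target
hits = cfar_2d(rd)
print(bool(hits[16, 16]))
```

Estimating the noise level locally from the training ring is what keeps the false alarm rate constant even when the background power varies across range and Doppler.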
de Vito, Luca, Picariello, Francesco, Rapuano, Sergio, Tudosa, Ioan.
2021.
Compressive Sampling on RFSoC for Distributed Wideband RF Spectrum Measurements. 2021 IEEE International Instrumentation and Measurement Technology Conference (I2MTC). :1—6.
This paper presents the application of Compressive Sampling (CS) to the realization of a wideband receiver for distributed spectrum monitoring. The proposed prototype performs the non-uniform sampling CS-based technique, while signal reconstruction is realized by the Orthogonal Matching Pursuit (OMP) algorithm on a personal computer. A first experimental analysis has been conducted on the prototype by assessing several figures of merit, characterizing its performance in the time, frequency, and modulation domains. The obtained results demonstrate that the proposed prototype can achieve good performance in all specified domains with Compression Ratios (CRs) up to 10 for a 4-QAM (Quadrature Amplitude Modulation) signal having a carrier frequency of 350 MHz and working at a symbol rate of 46 MSym/s.