Biblio

Filters: Keyword is matched filters
2022-04-19
Evstafyev, G. A., Selyanskaya, E. A..  2021.  Method of Ensuring Structural Secrecy of the Signal. 2021 Systems of Signal Synchronization, Generating and Processing in Telecommunications (SYNCHROINFO). :1–4.
A method for providing energy and structural secrecy of a signal is presented, based on pseudo-random restructuring of the spreading sequence. Through the use of nested pseudo-random sequences (PRS) and their restructuring, the method complicates the accumulation mode in a third-party receiver, and therefore the detection of the signal-code structure of the signal. Since the receiver-detector is similar to the receiver of the communication system, optimal signal processing must be ensured to achieve an acceptable level of structural secrecy.
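As a rough illustration of the restructuring idea (not the authors' implementation), the following numpy sketch spreads data bits with a pseudo-noise code and periodically re-seeds that code from a nested pseudo-random sequence; the spreading factor, reseed interval, and PRNG-based code generator are assumptions for demonstration only.

```python
# Illustrative sketch only: direct-sequence spreading with a pseudo-randomly
# restructured spreading code. Spreading factor, reseed schedule, and the
# PRNG-based code generator are assumptions, not the paper's parameters.
import numpy as np

CHIPS_PER_BIT = 31          # assumed spreading factor
RESEED_EVERY = 8            # assumed number of bits between code restructurings

def pn_sequence(seed, length):
    """+/-1 chips from a seeded PRNG (stand-in for an LFSR-based PRS)."""
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=length)

def spread(bits, master_seed=1234):
    """Spread bits, restructuring the spreading sequence every RESEED_EVERY bits."""
    outer = np.random.default_rng(master_seed)   # nested PRS driving the reseeds
    chips = []
    code = pn_sequence(outer.integers(1 << 31), CHIPS_PER_BIT)
    for i, b in enumerate(bits):
        if i and i % RESEED_EVERY == 0:          # pseudo-random restructuring
            code = pn_sequence(outer.integers(1 << 31), CHIPS_PER_BIT)
        chips.append((2 * b - 1) * code)
    return np.concatenate(chips)

bits = np.random.randint(0, 2, 64)
tx = spread(bits)
print(tx.shape)  # 64 * 31 chips; a third-party receiver cannot accumulate over one fixed code
```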
2021-12-20
Wen, Peisong, Xu, Qianqian, Jiang, Yangbangyan, Yang, Zhiyong, He, Yuan, Huang, Qingming.  2021.  Seeking the Shape of Sound: An Adaptive Framework for Learning Voice-Face Association. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :16342–16351.
Recent years have witnessed early progress on automatically learning the association between voice and face, which has brought a new wave of studies to the computer vision community. However, most prior art along this line (a) merely adopts local information to perform modality alignment and (b) ignores the diversity of learning difficulty across different subjects. In this paper, we propose a novel framework to jointly address the above-mentioned issues. Targeting (a), we propose a two-level modality alignment loss in which both global and local information are considered. Compared with existing methods, we introduce a global loss into the modality alignment process; the global component of the loss is driven by identity classification. Theoretically, we show that minimizing the loss maximizes the distance between embeddings of different identities while minimizing the distance between embeddings of the same identity, in a global sense (rather than within a mini-batch). Targeting (b), we propose a dynamic reweighting scheme to better exploit the hard but valuable identities while filtering out the unlearnable identities. Experiments show that the proposed method outperforms previous methods in multiple settings, including voice-face matching, verification and retrieval.
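A minimal sketch of the two-level idea, assuming a simple squared-distance form for the local alignment term and plain softmax cross-entropy for the global identity term; the function names, dimensions, and the alpha trade-off are illustrative assumptions rather than the loss actually used in the paper.

```python
# Hedged sketch, not the paper's exact loss: a weighted local mini-batch alignment
# term plus a global identity-classification (cross-entropy) term.
import numpy as np

def global_identity_loss(logits, labels):
    """Softmax cross-entropy over all identities (the 'global' component)."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -np.mean(log_probs[np.arange(len(labels)), labels])

def two_level_loss(voice_emb, face_emb, logits, labels, id_weights, alpha=1.0):
    """Weighted local alignment (pull matched voice/face pairs together) + global term."""
    w = id_weights[labels]                                    # dynamic per-identity weights
    local = np.mean(w * np.sum((voice_emb - face_emb) ** 2, axis=1))
    return local + alpha * global_identity_loss(logits, labels)

# toy usage: 4 samples, 8-dim embeddings, 10 identities, uniform weights
rng = np.random.default_rng(0)
v, f = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
logits, labels = rng.normal(size=(4, 10)), np.array([0, 3, 3, 7])
print(two_level_loss(v, f, logits, labels, id_weights=np.ones(10)))
```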
2020-08-03
Xin, Le, Li, Yuanji, Shang, Shize, Li, Guangrui, Yang, Yuhao.  2019.  A Template Matching Background Filtering Method for Millimeter Wave Human Security Image. 2019 6th Asia-Pacific Conference on Synthetic Aperture Radar (APSAR). :1–6.
To suppress the interference of burrs, aliasing and other noise in the background area of millimeter wave human security images, which degrades object identification, an adaptive template matching filtering method is proposed. First, the preprocessed original image is segmented by a level set algorithm; the result is then used as a template to filter the background of the original image. Finally, the background-filtered image is used as the input of bilateral filtering. Comparative experiments on actual millimeter wave images verify the improvement of this algorithm over traditional filtering methods and show that it can filter the background noise of the human security image while retaining image detail in the human body area, which is conducive to object recognition and localization in millimeter wave security images.
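The following OpenCV sketch mimics the pipeline at a high level; a simple Otsu threshold stands in for the level-set segmentation, and the bilateral filter parameters are assumptions, so this is an illustration of the idea rather than the proposed method.

```python
# Illustrative pipeline only: the paper's level-set segmentation is replaced here
# by an Otsu threshold for brevity, so this is a stand-in, not the method itself.
import cv2
import numpy as np

def background_filter(img_gray):
    # 1) rough segmentation of the human silhouette (stand-in for level set)
    _, mask = cv2.threshold(img_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # 2) use the mask as a template: zero out the background region
    foreground = cv2.bitwise_and(img_gray, img_gray, mask=mask)
    # 3) bilateral filtering on the background-filtered image (assumed parameters)
    return cv2.bilateralFilter(foreground, d=9, sigmaColor=75, sigmaSpace=75)

img = (np.random.rand(128, 64) * 255).astype(np.uint8)  # placeholder for a real MMW image
out = background_filter(img)
print(out.shape)
```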
2020-05-11
Yu, Dunyi.  2018.  Research on Anomaly Intrusion Detection Technology in Wireless Network. 2018 International Conference on Virtual Reality and Intelligent Systems (ICVRIS). :540–543.
To improve the security of wireless networks, an anomaly intrusion detection algorithm based on adaptive time-frequency feature decomposition is proposed. The paper analyzes the types and detection principles of wireless network intrusion detection, adopts statistical information analysis to detect network intrusions, constructs a traffic statistical analysis model of abnormal network intrusions, and establishes a network intrusion signal model using signal fitting. A correlation matched filter is used to filter the network intrusion signal and improve the output signal-to-noise ratio (SNR), time-frequency analysis is used to extract the features of abnormal intrusions, and adaptive correlation spectrum analysis is used to realize intrusion detection. Simulation results show that the method has high accuracy and strong anti-interference ability, and can effectively guarantee network security.
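A minimal SciPy sketch of the two signal-processing steps the abstract names, a correlation (matched) filter followed by a time-frequency transform; the chirp template, noise level, and spectrogram settings are assumed for illustration.

```python
# Minimal sketch: matched (correlation) filtering to raise SNR, then a spectrogram
# as the time-frequency feature extractor. Waveform and parameters are assumptions.
import numpy as np
from scipy import signal

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
template = signal.chirp(t[:200], f0=50, f1=150, t1=0.2)      # assumed intrusion signature
rx = np.random.normal(scale=2.0, size=t.size)
rx[300:500] += template                                      # intrusion buried in noise

# matched filter: correlate the received trace with the known signature
mf_out = signal.correlate(rx, template, mode="same")
detect_idx = int(np.argmax(np.abs(mf_out)))

# time-frequency features of the trace (spectrogram as the T-F analysis)
f, tt, Sxx = signal.spectrogram(rx, fs=fs, nperseg=64)
print("peak correlation near sample", detect_idx, "| spectrogram shape", Sxx.shape)
```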
2018-09-12
Al-hisnawi, M., Ahmadi, M..  2017.  Deep packet inspection using Cuckoo filter. 2017 Annual Conference on New Trends in Information Communications Technology Applications (NTICT). :197–202.

Nowadays, Internet Service Providers (ISPs) depend on Deep Packet Inspection (DPI) approaches, which are the most precise techniques for traffic identification and classification. However, constructing high-performance DPI approaches requires a careful and in-depth computing system design because of the demands on memory and processing power. Membership query data structures, specifically the Bloom filter (BF), have been employed as a matching check tool in DPI approaches: they store signature fingerprints in order to test for the presence of these signatures in the incoming network flow. The main issue that arises when employing a Bloom filter in DPI approaches is the need to use k hash functions, which in turn imposes more calculation overhead and degrades performance. Consequently, in this paper, a new design and implementation of a DPI approach are proposed. The proposed DPI utilizes a membership query data structure called the Cuckoo filter (CF) as its matching check tool. CF has many advantages over BF, such as lower memory consumption, a lower false positive rate, higher insert performance, higher lookup throughput, and support for the delete operation. The experiments show that the proposed approach offers better performance than approaches that utilize the Bloom filter.
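To make the matching-check idea concrete, here is a minimal cuckoo filter sketch (insert and lookup only); the bucket count, bucket size, fingerprint width, and eviction limit are assumptions, and this is not the implementation evaluated in the paper.

```python
# Minimal cuckoo filter sketch for signature-fingerprint membership checks.
# Bucket count/size, fingerprint width, and MAX_KICKS are illustrative assumptions.
import hashlib
import random

BUCKETS, SLOTS, MAX_KICKS = 1024, 4, 500

def _h(data: bytes) -> int:
    return int.from_bytes(hashlib.sha1(data).digest()[:8], "big")

def fingerprint(item: str) -> int:
    return (_h(item.encode()) & 0xFFFF) or 1          # 16-bit fingerprint, never 0

def indices(item: str, fp: int):
    i1 = _h(item.encode()) % BUCKETS
    i2 = (i1 ^ _h(fp.to_bytes(2, "big"))) % BUCKETS   # partial-key cuckoo hashing
    return i1, i2

table = [[] for _ in range(BUCKETS)]

def insert(item: str) -> bool:
    fp = fingerprint(item)
    i1, i2 = indices(item, fp)
    for i in (i1, i2):
        if len(table[i]) < SLOTS:
            table[i].append(fp)
            return True
    i = random.choice((i1, i2))                        # both full: evict and relocate
    for _ in range(MAX_KICKS):
        victim = table[i].pop(random.randrange(len(table[i])))
        table[i].append(fp)
        fp = victim
        i = (i ^ _h(fp.to_bytes(2, "big"))) % BUCKETS  # victim's alternate bucket
        if len(table[i]) < SLOTS:
            table[i].append(fp)
            return True
    return False                                       # filter considered full

def lookup(item: str) -> bool:
    fp = fingerprint(item)
    i1, i2 = indices(item, fp)
    return fp in table[i1] or fp in table[i2]

insert("malicious-signature-0001")
print(lookup("malicious-signature-0001"), lookup("benign-flow"))
```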

2018-05-01
Cogranne, R., Sedighi, V., Fridrich, J..  2017.  Practical Strategies for Content-Adaptive Batch Steganography and Pooled Steganalysis. 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :2122–2126.

This paper investigates practical strategies for distributing payload across images with content-adaptive steganography and for pooling outputs of a single-image detector for steganalysis. Adopting a statistical model for the detector's output, the steganographer minimizes the power of the most powerful detector of an omniscient Warden, while the Warden, informed by the payload spreading strategy, detects with the likelihood ratio test in the form of a matched filter. Experimental results with state-of-the-art content-adaptive additive embedding schemes and rich models are included to show the relevance of the results.
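A hedged sketch of the pooling step under a simplified Gaussian shift model: per-image detector outputs are combined with matched-filter weights derived from the known payload-spreading strategy. The spread, shift magnitude, and noise model below are assumptions, not the paper's fitted statistical model.

```python
# Simplified Gaussian-shift illustration of pooled steganalysis: the pooled
# statistic is a matched-filter (weighted) sum of per-image detector outputs.
import numpy as np

rng = np.random.default_rng(7)
n_images = 50
spread = rng.dirichlet(np.ones(n_images))        # assumed payload-spreading strategy
shift = 3.0 * spread                             # assumed expected output shift per image

cover_out = rng.normal(0.0, 1.0, n_images)       # detector outputs on cover images
stego_out = rng.normal(shift, 1.0)               # outputs when payload is embedded

w = shift / np.linalg.norm(shift)                # matched-filter weights (spread known to Warden)
print(f"pooled statistic: cover={w @ cover_out:.2f}  stego={w @ stego_out:.2f}")
```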

2017-11-20
Wei, Li, Hongyu, Liu, Xiaoliang, Zhang.  2016.  A network data security analysis method based on DPI technology. 2016 7th IEEE International Conference on Software Engineering and Service Science (ICSESS). :973–976.

In view of the high demand for the security of visiting data in power systems, a network data security analysis method based on DPI technology is put forward in this paper to solve the problem of the security gateway judging the legality of network data. Since the legitimacy of the data involves both the data protocol and the data content, this article filters the data through protocol matching and content detection, using deep packet inspection (DPI) technology to screen the protocol and protocol analysis to inspect the data content. The paper implements the function of allowing secure data through the gateway while blocking threat data. An example shows that the method can more effectively guarantee the safety of visiting data.
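A toy sketch of the two-stage gateway check described above (protocol matching, then content detection); the allowed-protocol whitelist and keyword blacklist are illustrative assumptions, not the rules used in the paper.

```python
# Toy two-stage gateway check: protocol matching, then content detection.
# The whitelist and blacklist below are illustrative assumptions only.
ALLOWED_PROTOCOLS = {"IEC104", "MODBUS", "HTTPS"}                # assumed whitelist
BLOCKED_PATTERNS = (b"DROP TABLE", b"/etc/passwd", b"<script>")  # assumed blacklist

def gateway_allows(protocol: str, payload: bytes) -> bool:
    """Return True only if the protocol is whitelisted and no blocked pattern appears."""
    if protocol not in ALLOWED_PROTOCOLS:                        # stage 1: protocol matching
        return False
    return not any(p in payload for p in BLOCKED_PATTERNS)       # stage 2: content detection

print(gateway_allows("MODBUS", b"read holding registers"))   # True
print(gateway_allows("MODBUS", b"'; DROP TABLE users;--"))   # False
print(gateway_allows("TELNET", b"hello"))                    # False
```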

2017-02-21
Ilhan, I., Gurbuz, A. C., Arikan, O..  2015.  Sparsity based robust Stretch Processing. 2015 IEEE International Conference on Digital Signal Processing (DSP). :95–99.

Stretch Processing (SP) is a radar signal processing technique that provides high range resolution by processing large-bandwidth signals with lower-rate Analog-to-Digital Converters (ADCs). The range resolution of the large-bandwidth signal is obtained by looking into a limited range window with low-rate ADC samples. The target space in the observed range window is sparse, and compressive sensing (CS) is an important tool to further decrease the number of measurements and sparsely reconstruct the target space for sparse scenes with a known basis, which is the Fourier basis in the general application of SP. Although classical CS techniques might be applied directly to SP, reconstruction performance degrades for off-grid targets. In this paper, the applicability of the compressive sensing framework and its sparse signal recovery techniques to stretch processing is studied, considering off-grid cases. For sparsity-based robust SP, the Perturbed Parameter Orthogonal Matching Pursuit (PPOMP) algorithm is proposed. PPOMP is an iterative technique that estimates off-grid target parameters through gradient descent. To compute the error between actual and reconstructed parameters, the Earth Mover's Distance (EMD) is used. The performance of the proposed algorithm is compared with classical CS and SP techniques.
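As a simplified stand-in for PPOMP, the sketch below recovers an on-grid sparse range profile with standard Orthogonal Matching Pursuit over a Fourier dictionary; the gradient-descent refinement of off-grid parameters that defines PPOMP is omitted, and all dimensions and the measurement matrix are assumptions.

```python
# Standard OMP over a Fourier (range) dictionary as a simplified stand-in for
# PPOMP; the off-grid parameter refinement is omitted, and sizes are assumed.
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 256, 64, 3                              # grid size, measurements, sparsity

F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
Phi = rng.normal(size=(M, N)) / np.sqrt(M)        # assumed random measurement matrix
A = Phi @ F                                       # effective dictionary

x = np.zeros(N, dtype=complex)
x[rng.choice(N, K, replace=False)] = 1.0 + 0.5j   # on-grid targets for this toy example
y = A @ x + 0.01 * rng.normal(size=M)

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick atoms, re-fit by least squares."""
    residual, support = y.astype(complex), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1], dtype=complex)
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, K)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.1))
```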

Lee, R., Mullen, L., Pal, P., Illig, D..  2015.  Time of flight measurements for optically illuminated underwater targets using Compressive Sampling and Sparse reconstruction. OCEANS 2015 - MTS/IEEE Washington. :1–6.

Compressive sampling and sparse reconstruction theory is applied to a linearly frequency-modulated continuous-wave hybrid lidar/radar system. The goal is to show that high-resolution time-of-flight measurements to underwater targets can be obtained using far fewer samples than dictated by Nyquist sampling theorems. Traditional mixing/down-conversion and matched-filter signal processing methods are reviewed and compared to the compressive sampling and sparse reconstruction methods. Simulated evidence is provided to show the possible sampling-rate reductions, and experiments are used to observe the effects that turbid underwater environments have on recovery. Results show that by using compressive sensing theory and sparse reconstruction, it is possible to achieve significant sample-rate reduction while maintaining centimeter range resolution.
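For context on the conventional processing the abstract reviews, the following sketch dechirps a linear-FM return against the transmitted reference so the target delay appears as a single beat frequency; all waveform parameters are assumed, and the compressive-sampling reconstruction itself is not reproduced here.

```python
# Conventional dechirp (mixing/down-conversion) processing only: the target's
# round-trip delay maps to a beat frequency, read off with an FFT. All waveform
# parameters below are assumptions for illustration.
import numpy as np

fs = 2e6                       # assumed sample rate after mixing (Hz)
T, B = 1e-3, 500e6             # assumed sweep time (s) and bandwidth (Hz)
c = 3e8
tau = 66.7e-9                  # assumed round-trip delay (roughly a 10 m target)

t = np.arange(0, T, 1 / fs)
k = B / T                                                  # chirp rate
tx_phase = np.pi * k * t**2
rx_phase = np.pi * k * (t - tau) ** 2
beat = np.exp(1j * (tx_phase - rx_phase))                  # dechirped (mixed-down) signal

spec = np.abs(np.fft.rfft(beat.real * np.hanning(t.size)))
f_beat = np.fft.rfftfreq(t.size, 1 / fs)[np.argmax(spec)]
range_est = c * f_beat / (2 * k)                           # beat frequency -> range
print(f"estimated range: {range_est:.2f} m")
```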