Biblio

Filters: Keyword is Fully Convolutional Networks
2022-09-09
Liu, Pengcheng, Han, Zhen, Shi, Zhixin, Liu, Meichen.  2021.  Recognition of Overlapped Frequency Hopping Signals Based on Fully Convolutional Networks. 2021 28th International Conference on Telecommunications (ICT). :1–5.
Previous research on frequency hopping (FH) signal recognition using deep learning has focused only on single-label signals and cannot handle overlapped FH signals, which carry multiple labels. To solve this problem, we propose a new FH signal recognition method based on fully convolutional networks (FCN). First, we perform the short-time Fourier transform (STFT) on the collected FH signal to obtain a two-dimensional time-frequency pattern containing time, frequency, and intensity information. Then, the pattern is fed into an improved FCN model, named FH-FCN, to make a pixel-level prediction. Finally, by computing statistics over the output pixels, we obtain the final classification results. We also design an algorithm that automatically generates datasets for model training. The experimental results show that, for an overlapped FH signal containing up to four different types of signals, our method can recognize them correctly. In addition, the separation of multiple FH signals can be achieved with a slight modification of our method.
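
The pipeline described in this abstract (STFT, pixel-level FCN prediction, statistics over the output pixels) can be sketched in a few lines of Python. The sketch below assumes PyTorch and SciPy; the FHFCN layer sizes, the five-class setup (four signal types plus background), and the pixel-ratio decision threshold are illustrative assumptions, not the authors' FH-FCN design.

# Sketch of the described pipeline: STFT -> pixel-level FCN -> pixel statistics.
# FHFCN, its layer sizes, and the decision threshold are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft

NUM_CLASSES = 5  # assumption: 4 FH signal types + background

class FHFCN(nn.Module):
    """Tiny fully convolutional net producing a per-pixel class map."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),  # 1x1 conv: per-pixel class scores
        )

    def forward(self, x):                   # x: (B, 1, F, T)
        return self.body(x)                 # (B, num_classes, F, T)

def recognize(signal, fs, model, min_pixel_ratio=0.01):
    # 1) STFT -> two-dimensional time-frequency magnitude pattern
    _, _, Zxx = stft(signal, fs=fs, nperseg=256)
    tf_pattern = np.abs(Zxx).astype(np.float32)
    x = torch.from_numpy(tf_pattern)[None, None]       # (1, 1, F, T)

    # 2) Pixel-level prediction with the FCN
    with torch.no_grad():
        pixel_labels = model(x).argmax(dim=1)           # (1, F, T)

    # 3) Statistics over output pixels: a class is "present" (multi-label)
    #    if it covers enough of the time-frequency plane
    counts = torch.bincount(pixel_labels.flatten(), minlength=NUM_CLASSES).float()
    ratios = counts / counts.sum()
    present = (ratios[1:] > min_pixel_ratio).nonzero().flatten() + 1  # skip background
    return present.tolist()

if __name__ == "__main__":
    # Untrained model on random data, just to show the data flow end to end.
    print(recognize(np.random.randn(1 << 14), fs=1e6, model=FHFCN().eval()))

Counting labeled pixels per class is what turns the segmentation-style, pixel-level output into a multi-label recognition decision, so several overlapped signal types can be reported for a single input.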
2022-05-19
Zhang, Xiangyu, Yang, Jianfeng, Li, Xiumei, Liu, Minghao, Kang, Ruichun, Wang, Runmin.  2021.  Deeply Multi-channel guided Fusion Mechanism for Natural Scene Text Detection. 2021 7th International Conference on Big Data and Information Analytics (BigDIA). :149–156.
Scene text detection methods have developed greatly in the past few years. However, due to the diversity of text backgrounds in natural scenes, previous methods often fail on more complicated text instances (e.g., super-long text and arbitrarily shaped text). In this paper, a text detection method based on multi-channel bounding box fusion is designed to address this problem. First, a convolutional neural network is used as the base network for feature extraction, producing both shallow text feature maps and deep semantic text feature maps. Second, the fully convolutional network upsamples and fuses the feature maps at each layer to obtain pixel-level text and non-text classification results. Then, two independent text detection box channels are designed: one regresses bounding boxes directly, and the other extracts bounding boxes from the score map. Finally, the detection boxes from the two channels are combined through the multi-channel bounding box fusion mechanism to produce the result. Experiments on ICDAR2013 and ICDAR2015 demonstrate that the proposed method achieves competitive results in scene text detection.
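
As a rough illustration of the two detection-box channels and the fusion step, the following Python sketch assumes NumPy and OpenCV; the score threshold, the use of connected components to read boxes off the score map, and the IoU-based averaging rule are assumptions made for illustration, not the paper's exact mechanism.

# Sketch of the two detection-box channels and a simple fusion rule.
# Thresholds and the fusion strategy are illustrative assumptions.
import cv2
import numpy as np

def boxes_from_score_map(score_map, thresh=0.8):
    """Channel B: get bounding boxes directly from the pixel-level text score map."""
    mask = (score_map > thresh).astype(np.uint8)
    num, labels = cv2.connectedComponents(mask)
    boxes = []
    for k in range(1, num):                      # label 0 is background
        ys, xs = np.where(labels == k)
        boxes.append((xs.min(), ys.min(), xs.max(), ys.max()))
    return boxes

def area(r):
    return max(0, r[2] - r[0]) * max(0, r[3] - r[1])

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def fuse_channels(regressed_boxes, scoremap_boxes, iou_thresh=0.5):
    """Fuse channel A (regressed boxes) with channel B (score-map boxes)."""
    fused = []
    for rb in regressed_boxes:
        match = next((sb for sb in scoremap_boxes if iou(rb, sb) > iou_thresh), None)
        if match is not None:                    # channels agree: average the boxes
            rb = tuple((r + s) // 2 for r, s in zip(rb, match))
        fused.append(rb)
    # keep score-map boxes that no regressed box explained
    fused += [sb for sb in scoremap_boxes
              if all(iou(sb, rb) <= iou_thresh for rb in regressed_boxes)]
    return fused

if __name__ == "__main__":
    score = np.zeros((60, 120), dtype=np.float32)
    score[10:20, 15:80] = 0.95                   # one synthetic text region
    ch_b = boxes_from_score_map(score)
    ch_a = [(14, 9, 81, 20)]                     # pretend regressed box for the same text
    print(fuse_channels(ch_a, ch_b))

In this toy version, a box on which the two channels agree is averaged, while boxes seen by only one channel are kept, which is one simple way to realize a multi-channel fusion rule over axis-aligned boxes.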
2018-06-07
Akcay, S., Breckon, T. P..  2017.  An evaluation of region based object detection strategies within X-ray baggage security imagery. 2017 IEEE International Conference on Image Processing (ICIP). :1337–1341.

Here we explore the applicability of the traditional sliding-window-based convolutional neural network (CNN) detection pipeline and region-based object detection techniques, such as Faster Region-based CNN (R-CNN) and Region-based Fully Convolutional Networks (R-FCN), to the problem of object detection in X-ray security imagery. Within this context, given limited dataset availability, we employ a transfer learning paradigm for network training, tackling both single and multiple object detection problems over a number of R-CNN/R-FCN variants. The use of a first-stage region proposal within Faster R-CNN and R-FCN provides superior results to the traditional sliding-window-driven CNN (SWCNN) approach. Using Faster R-CNN with VGG16, pretrained on the ImageNet dataset, we achieve 88.3 mAP on a six-class X-ray object detection problem. Using R-FCN with ResNet-101 yields 96.3 mAP on the two-class firearm detection problem, requiring 0.1 seconds of computation per image. Overall, we illustrate the comparative performance of these techniques as object localization strategies within cluttered X-ray security imagery.
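
The transfer learning paradigm described here (a region-based detector pretrained on a large natural-image dataset and fine-tuned for X-ray classes) can be sketched with torchvision. In the sketch below, the ResNet-50 FPN backbone, the COCO-pretrained weights, and the seven-class head (six object classes plus background) are stand-in assumptions rather than the VGG16 Faster R-CNN and ResNet-101 R-FCN configurations evaluated in the paper.

# Minimal transfer-learning sketch (assumes torchvision >= 0.13; older
# versions take pretrained=True instead of weights="DEFAULT").
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 7  # assumption: 6 X-ray object classes + background

def build_xray_detector(num_classes=NUM_CLASSES):
    # Backbone and detection heads pretrained on COCO; only the final box
    # predictor is swapped for the X-ray class set before fine-tuning.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

if __name__ == "__main__":
    model = build_xray_detector()
    model.eval()
    with torch.no_grad():
        # One dummy 3-channel image; real use would fine-tune on annotated
        # baggage scans and then run inference on test images.
        detections = model([torch.rand(3, 512, 512)])
    print(detections[0]["boxes"].shape, detections[0]["labels"].shape)

Only the final box predictor is replaced, so the backbone features and the first-stage region proposal network transfer to the X-ray domain and are then fine-tuned on the comparatively small baggage dataset.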