Biblio

Filters: Keyword is Few-shot learning
2022-03-14
Ouyang, Yuankai, Li, Beibei, Kong, Qinglei, Song, Han, Li, Tao.  2021.  FS-IDS: A Novel Few-Shot Learning Based Intrusion Detection System for SCADA Networks. ICC 2021 - IEEE International Conference on Communications. :1–6.

Supervisory control and data acquisition (SCADA) networks provide high situational awareness and automation control for industrial control systems, whilst introducing a wide range of access points for cyber attackers. To address these issues, a line of machine learning or deep learning based intrusion detection systems (IDSs) has been presented in the literature, which usually demand a large number of attack examples. However, in real-world SCADA networks, attack examples are not always sufficient, with only a few available in many cases. In this paper, we propose a novel few-shot learning based IDS, named FS-IDS, to detect cyber attacks against SCADA networks, especially when only a few attack examples are in the defenders' hands. Specifically, a new method orchestrating one-hot encoding and principal component analysis is developed to preprocess SCADA datasets containing sufficient examples of frequent cyber attacks. Then, a few-shot learning based preliminary IDS model is designed and trained using the preprocessed data. Last, a complete FS-IDS model for SCADA networks is established by further training the preliminary IDS model with a few examples of the cyber attacks of interest. The high effectiveness of the proposed FS-IDS in detecting cyber attacks against SCADA networks with only a few examples is demonstrated by extensive experiments on a real SCADA dataset.
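The preprocessing stage the abstract describes, one-hot encoding of categorical fields followed by principal component analysis, can be sketched roughly as follows. The feature names, category lists and dimensions below are illustrative placeholders, not taken from the paper:

```python
import numpy as np

def one_hot(column, categories):
    """One-hot encode a categorical column given its known category list."""
    idx = {c: i for i, c in enumerate(categories)}
    out = np.zeros((len(column), len(categories)))
    for row, value in enumerate(column):
        out[row, idx[value]] = 1.0
    return out

def pca(X, n_components):
    """Project mean-centred data onto its top principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Toy SCADA-like records: a categorical protocol field plus a numeric length
protocols = ["modbus", "dnp3", "modbus", "s7"]
lengths = np.array([[120.0], [64.0], [118.0], [300.0]])
X = np.hstack([one_hot(protocols, ["modbus", "dnp3", "s7"]), lengths])
Z = pca(X, n_components=2)  # reduced features fed to the downstream model
```

In practice the reduced representation would then be used to train the preliminary few-shot model on frequent attack classes before fine-tuning on the rare ones.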

2022-03-10
Yang, Mengde.  2021.  A Survey on Few-Shot Learning in Natural Language Processing. 2021 International Conference on Artificial Intelligence and Electromechanical Automation (AIEA). :294–297.
Annotated datasets are the foundation of supervised Natural Language Processing, but the cost of obtaining them is high. In recent years, Few-Shot Learning has gradually attracted the attention of researchers. Starting from its definition, this paper distinguishes Few-Shot Learning in Natural Language Processing from Few-Shot Learning in Computer Vision. On that basis, current Few-Shot Learning work in Natural Language Processing is summarized, covering Transfer Learning, Meta Learning and Knowledge Distillation. Furthermore, we summarize solutions to Few-Shot Learning in Natural Language Processing, such as methods based on Distant Supervision, Meta Learning and Knowledge Distillation. Finally, we present the challenges facing Few-Shot Learning in Natural Language Processing.
2022-03-01
Zhang, Zilin, Li, Yan, Gao, Meiguo.  2021.  Few-Shot Learning of Signal Modulation Recognition Based on Attention Relation Network. 2020 28th European Signal Processing Conference (EUSIPCO). :1372–1376.
Most existing signal modulation recognition methods attempt to establish a machine learning mechanism by training with a large number of annotated samples, which can hardly be applied to real-world electronic reconnaissance scenarios where only a few samples can be intercepted in advance. Few-Shot Learning (FSL) aims to learn from training classes with many samples and transfer that knowledge to support classes with only a few samples, thus achieving model generalization. In this paper, a novel FSL framework called Attention Relation Network (ARN) is proposed, which introduces channel and spatial attention respectively to learn a more effective feature representation of the support samples. The experimental results show that the proposed method achieves excellent performance for fine-grained signal modulation recognition even with only one support sample, and is robust to low signal-to-noise ratio conditions.
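Relation-network-style FSL methods such as ARN are typically trained episodically: each episode samples N classes, K support samples per class, and a handful of query samples to classify against the support set. A minimal sketch of that sampling regime (class names and sample counts invented for illustration; this is not the authors' code) might look like:

```python
import random

def sample_episode(dataset, n_way, k_shot, n_query):
    """Sample one N-way K-shot episode: a support set and a query set.

    dataset maps a class label to a list of samples for that class.
    """
    classes = random.sample(sorted(dataset), n_way)
    support, query = [], []
    for label in classes:
        picks = random.sample(dataset[label], k_shot + n_query)
        support += [(x, label) for x in picks[:k_shot]]
        query += [(x, label) for x in picks[k_shot:]]
    return support, query

# Toy "modulation" dataset: class name -> list of signal sample ids
data = {f"mod{i}": list(range(10)) for i in range(5)}
sup, qry = sample_episode(data, n_way=3, k_shot=1, n_query=2)
```

With `k_shot=1` this reproduces the one-support-sample setting the abstract highlights; the relation module would then score each query against the attended support features.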
2020-11-02
Pan, C., Huang, J., Gong, J., Yuan, X..  2019.  Few-Shot Transfer Learning for Text Classification With Lightweight Word Embedding Based Models. IEEE Access. 7:53296–53304.
Many deep learning architectures have been employed to model the semantic compositionality of text sequences, requiring a huge amount of supervised data for parameter training, which is infeasible when numerous annotated samples are unavailable or do not exist. Unlike data-hungry deep models, lightweight word embedding-based models can represent text sequences in a plug-and-play way due to their parameter-free property. In this paper, a modified hierarchical pooling strategy over pre-trained word embeddings is proposed for text classification in a few-shot transfer learning setting. The model leverages and transfers knowledge obtained from some source domains to recognize and classify unseen text sequences with just a handful of support examples in the target problem domain. Extensive experiments on five datasets including both English and Chinese text demonstrate that simple word embedding-based models (SWEMs) with parameter-free pooling operations are able to abstract and represent the semantics of text. The proposed modified hierarchical pooling method exhibits significant classification performance in few-shot transfer learning tasks compared with alternative methods.
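The parameter-free hierarchical pooling idea, mean-pooling local windows of word vectors and then max-pooling across the window means, can be sketched as follows; the window size and toy embeddings are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

def hierarchical_pool(embeddings, window=2):
    """SWEM-style hierarchical pooling: mean-pool each local window of
    word vectors, then max-pool element-wise across the window means."""
    n, _ = embeddings.shape
    means = [embeddings[i:i + window].mean(axis=0)
             for i in range(max(1, n - window + 1))]
    return np.max(np.stack(means), axis=0)

# Toy sentence of 4 "words" with 3-dimensional embeddings
sent = np.array([[1.0, 0.0, 0.0],
                 [0.0, 2.0, 0.0],
                 [0.0, 0.0, 3.0],
                 [1.0, 1.0, 1.0]])
vec = hierarchical_pool(sent, window=2)  # fixed-size sentence vector
```

Because the pooling has no trainable parameters, the same operation can be applied unchanged to the target domain's few support examples, which is what makes the plug-and-play transfer described above possible.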