Biblio

Filters: Keyword is sequence modeling
2023-04-14
Chen, Yang, Luo, Xiaonan, Xu, Songhua, Chen, Ruiai.  2022.  CaptchaGG: A linear graphical CAPTCHA recognition model based on CNN and RNN. 2022 9th International Conference on Digital Home (ICDH). :175–180.
This paper presents CaptchaGG, a model for recognizing linear graphical CAPTCHAs. Although CAPTCHAs in general are becoming more and more complex, complex CAPTCHAs are not needed in every scenario; a linear graphical CAPTCHA is often sufficient for low-security functions such as website message boards and account registration. The scheme uses a convolutional neural network for CAPTCHA feature extraction and a recurrent neural network for sequence modeling. A neural network that is too complex leads to problems such as training difficulty and vanishing gradients, while one that is too simple underfits. For the single task of linear graphical CAPTCHA recognition, the proposed model, with its simple architecture of convolutional feature extraction, recurrent sequence modeling, and a final classification stage, achieves a recognition accuracy of 96% or more at low complexity.
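The pipeline this abstract describes (convolutional feature extraction, recurrent sequence modeling, per-position classification) is the standard CRNN pattern. The sketch below is a minimal PyTorch rendering of that pattern, not the paper's actual architecture: the layer sizes, the 40x120 grayscale input, and the bidirectional GRU are all assumptions for illustration.

```python
# Hypothetical CRNN-style sketch of a CNN -> RNN -> classifier pipeline.
# All dimensions are assumed, not taken from the CaptchaGG paper.
import torch
import torch.nn as nn

class CaptchaCRNN(nn.Module):
    def __init__(self, num_classes, hidden=128):
        super().__init__()
        # CNN: extracts a feature map from the grayscale CAPTCHA image
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # RNN: models the left-to-right character sequence over image columns
        self.rnn = nn.GRU(64 * 10, hidden, bidirectional=True, batch_first=True)
        # Classifier: per-timestep character logits
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):               # x: (batch, 1, 40, 120)
        f = self.cnn(x)                 # (batch, 64, 10, 30)
        f = f.permute(0, 3, 1, 2)       # (batch, 30, 64, 10): width as time axis
        f = f.flatten(2)                # (batch, 30, 640)
        out, _ = self.rnn(f)            # (batch, 30, 2 * hidden)
        return self.fc(out)             # (batch, 30, num_classes)
```

In a CRNN the width axis of the feature map is treated as the time axis, so each RNN step sees one vertical slice of the image; a decoding step (for example CTC) then collapses the per-slice predictions into a character string.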
2018-05-24
Zheng, Yanan, Wen, Lijie, Wang, Jianmin, Yan, Jun, Ji, Lei.  2017.  Sequence Modeling with Hierarchical Deep Generative Models with Dual Memory. Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. :1369–1378.
Deep Generative Models (DGMs) can extract high-level representations from massive unlabeled data and are explainable from a probabilistic perspective; these characteristics suit sequence modeling tasks. However, modeling sequences with DGMs remains a significant challenge. Unlike real-valued data that can be fed directly into models, sequence data consist of discrete elements and must first be transformed into suitable representations. This leads to two challenges. First, high-level features are sensitive to small variations in the inputs as well as to the way data are represented. Second, the models are more likely to lose long-term information during multiple transformations. In this paper, we propose a Hierarchical Deep Generative Model with Dual Memory to address these two challenges, and we provide a method to perform inference and learning on the model efficiently. The proposed model extends basic DGMs with an improved, hierarchically organized multi-layer architecture. In addition, the model incorporates memories along two directions, denoted broad memory and deep memory. It is trained end-to-end by optimizing a variational lower bound on the data log-likelihood using an improved stochastic variational method. We perform experiments on several tasks with various datasets and obtain excellent results. On language modeling, our method significantly outperforms state-of-the-art results in generative performance. Extended experiments on document modeling and sentiment analysis demonstrate the effectiveness of the dual memory mechanism and the latent representations, and random text generation gives a direct illustration of the model's advantages.
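The training objective named in this abstract, a variational lower bound on the data log-likelihood, has a standard single-layer form. The sketch below shows that basic ELBO with a diagonal-Gaussian latent and a token-level reconstruction term; it illustrates the objective the paper builds on, not the paper's hierarchical dual-memory estimator, and all tensor shapes are assumptions.

```python
# Minimal sketch of a single-layer variational lower bound (ELBO);
# the paper stacks several stochastic layers and adds dual memories.
import torch
import torch.nn.functional as F

def elbo(logits, targets, mu, logvar):
    """E_q[log p(x|z)] - KL(q(z|x) || p(z)).

    logits:  (batch, seq_len, vocab) decoder outputs for p(x|z)
    targets: (batch, seq_len) integer token ids
    mu, logvar: (batch, latent_dim) parameters of the Gaussian q(z|x)
    """
    # Reconstruction term: summed token-level log-likelihood of the sequence
    rec = -F.cross_entropy(logits.transpose(1, 2), targets, reduction="sum")
    # KL term in closed form for a diagonal Gaussian q against a N(0, I) prior
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec - kl  # maximize: a lower bound on log p(x)
```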