Biblio

Filters: Keyword is bidirectional LSTM
2022-06-30
Mistry, Rahul, Thatte, Girish, Waghela, Amisha, Srinivasan, Gayatri, Mali, Swati.  2021.  DeCaptcha: Cracking captcha using Deep Learning Techniques. 2021 5th International Conference on Information Systems and Computer Networks (ISCON). :1–6.
CAPTCHA, or Completely Automated Public Turing test to tell Computers and Humans Apart, is a technique for distinguishing humans from computers by generating and evaluating tests that humans can pass but computer bots cannot. However, captchas are not foolproof and can be bypassed, which raises security concerns; sites across the internet therefore remain open to such vulnerabilities. This paper identifies vulnerabilities in some commonly used captcha schemes by cracking them with deep learning techniques. It also proposes safeguards against these vulnerabilities and offers recommendations for generating secure captchas.
2022-06-09
Alsyaibani, Omar Muhammad Altoumi, Utami, Ema, Hartanto, Anggit Dwi.  2021.  An Intrusion Detection System Model Based on Bidirectional LSTM. 2021 3rd International Conference on Cybernetics and Intelligent System (ICORIS). :1–6.
An Intrusion Detection System (IDS) is used to identify malicious traffic on a network. Apart from rule-based IDS, machine learning- and deep learning-based IDS are also being developed to improve detection accuracy. In this study, the public CIC IDS 2017 dataset was used to develop a deep learning-based IDS, because it contains new types of attacks and meets the criteria for an intrusion detection dataset. The dataset was split into training, validation, and test data. We propose a Bidirectional Long Short-Term Memory (LSTM) neural network and created 24 scenarios with varying training parameters, each trained for 100 epochs. The training parameters used as research variables were the optimizer, the activation function, and the learning rate. In addition, a Dropout layer and an L2 regularizer were applied in every scenario. The results show that the model using the Adam optimizer, the Tanh activation function, and a learning rate of 0.0001 produced the highest accuracy of all scenarios, reaching 97.7264% accuracy and a 97.7516% F1 score. The best model was then trained for 1000 iterations, and its performance increased to 98.3448% accuracy and a 98.3793% F1 score, exceeding several previous works on the same dataset.
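The bidirectional recurrence at the core of such a model can be sketched in plain NumPy: one LSTM pass over the input sequence, a second pass over the reversed sequence, and a per-step concatenation of the two hidden states. The gate layout, dimensions, and random parameters below are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(x_seq, W, U, b, hidden):
    """Single-direction LSTM over x_seq of shape (T, input_dim).
    W: (4*hidden, input_dim), U: (4*hidden, hidden), b: (4*hidden,).
    Assumed gate order: input, forget, cell candidate, output."""
    T = x_seq.shape[0]
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    outputs = np.zeros((T, hidden))
    for t in range(T):
        z = W @ x_seq[t] + U @ h + b
        i = sigmoid(z[:hidden])            # input gate
        f = sigmoid(z[hidden:2 * hidden])  # forget gate
        g = np.tanh(z[2 * hidden:3 * hidden])  # cell candidate
        o = sigmoid(z[3 * hidden:])        # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
        outputs[t] = h
    return outputs

def bilstm(x_seq, params_fwd, params_bwd, hidden):
    """Bidirectional LSTM: forward pass plus a pass over the reversed
    sequence, with the two hidden states concatenated per time step."""
    h_f = lstm_forward(x_seq, *params_fwd, hidden)
    h_b = lstm_forward(x_seq[::-1], *params_bwd, hidden)[::-1]
    return np.concatenate([h_f, h_b], axis=1)  # shape (T, 2*hidden)

rng = np.random.default_rng(0)
input_dim, hidden, T = 8, 4, 5  # toy sizes for illustration only
make_params = lambda: (rng.standard_normal((4 * hidden, input_dim)) * 0.1,
                       rng.standard_normal((4 * hidden, hidden)) * 0.1,
                       np.zeros(4 * hidden))
x = rng.standard_normal((T, input_dim))
out = bilstm(x, make_params(), make_params(), hidden)
print(out.shape)  # (5, 8): forward and backward states side by side
```

In a full IDS model, these concatenated per-step states would feed a classifier head, with the parameters learned under the optimizer, activation, dropout, and L2-regularization settings the paper explores.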
2021-06-01
Ming, Kun.  2020.  Chinese Coreference Resolution via Bidirectional LSTMs using Word and Token Level Representations. 2020 16th International Conference on Computational Intelligence and Security (CIS). :73–76.
Coreference resolution is an important task in natural language processing. Most existing methods use word-level representations, ignoring much of the information in the text. To address this issue, we investigate how to improve Chinese coreference resolution using span-level semantic representations. Specifically, we propose a model that acquires word and character representations through pre-trained Skip-Gram embeddings and pre-trained BERT, then explicitly leverages span-level information by running bidirectional LSTMs over these representations. Experiments on the CoNLL-2012 shared task demonstrate that the proposed model achieves a 62.95% F1-score, outperforming our baseline methods.
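One common way to turn per-token BiLSTM states into a span-level representation, as span-based coreference models generally do, is to concatenate the span's boundary states with a pool over its interior. The sketch below assumes that recipe with mean pooling; the paper's exact span construction may differ.

```python
import numpy as np

def span_representation(h, start, end):
    """Build a span-level vector from per-token BiLSTM states h (T, d):
    concatenate the start state, the end state, and a mean pool over the
    span (an assumed pooling choice, for illustration)."""
    boundary = np.concatenate([h[start], h[end]])
    pooled = h[start:end + 1].mean(axis=0)
    return np.concatenate([boundary, pooled])

T, d = 7, 6  # toy sequence: 7 tokens, 6-dim BiLSTM states
h = np.arange(T * d, dtype=float).reshape(T, d)
rep = span_representation(h, 2, 4)  # span covering tokens 2..4
print(rep.shape)  # (18,): two boundary states plus the pooled vector
```

Candidate spans scored this way can then be compared pairwise to decide which mentions corefer.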