Biblio

Found 168 results

Filters: Keyword is natural language processing
2019-11-12
Ferenc, Rudolf, Hegedűs, Péter, Gyimesi, Péter, Antal, Gábor, Bán, Dénes, Gyimóthy, Tibor.  2019.  Challenging Machine Learning Algorithms in Predicting Vulnerable JavaScript Functions. 2019 IEEE/ACM 7th International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering (RAISE). :8–14.

The rapid rise of cyber-crime activities and the growing number of devices threatened by them place software security issues in the spotlight. As around 90% of all attacks exploit known types of security issues, finding vulnerable components and applying existing mitigation techniques is a viable practical approach to fighting cyber-crime. In this paper, we investigate how state-of-the-art machine learning techniques, including a popular deep learning algorithm, perform in predicting functions with possible security vulnerabilities in JavaScript programs. We applied 8 machine learning algorithms to build prediction models using a new dataset constructed for this research from the vulnerability information in the public databases of the Node Security Project and the Snyk platform, together with code-fixing patches from GitHub. We used static source code metrics as predictors and an extensive grid-search algorithm to find the best performing models. We also examined the effect of various re-sampling strategies for handling the imbalanced nature of the dataset. The best performing algorithm was KNN, which created a model for the prediction of vulnerable functions with an F-measure of 0.76 (0.91 precision and 0.66 recall). Moreover, deep learning, tree- and forest-based classifiers, and SVM were competitive, with F-measures over 0.70. Although the F-measures did not vary significantly across the re-sampling strategies, the distribution of precision and recall did change: using no re-sampling produced models favoring high precision, while the re-sampling strategies balanced the two IR measures.
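The paper's core setup, nearest-neighbour prediction over static source code metrics, can be sketched in a few lines of pure Python. This is an illustrative reconstruction, not the authors' code; the metric choices, toy data, and k are invented for the sketch:

```python
# Hypothetical sketch: predict vulnerable functions from static metrics
# with k-nearest neighbours. Features: (lines of code, cyclomatic
# complexity, nesting depth). All values are invented for illustration.
import math

def knn_predict(train, query, k=3):
    """train: list of (metric_vector, label); returns majority label of the k nearest."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))
    top = [label for _, label in nearest[:k]]
    return max(set(top), key=top.count)

train = [
    ((120, 15, 4), "vulnerable"),
    ((200, 22, 5), "vulnerable"),
    ((15, 2, 1), "clean"),
    ((30, 3, 2), "clean"),
    ((25, 4, 1), "clean"),
]
print(knn_predict(train, (150, 18, 4)))  # a large, complex query function
```

A real pipeline would also need the grid search over hyperparameters and the re-sampling strategies the abstract mentions; this only shows the prediction step.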

2020-10-05
Liu, Donglei, Niu, Zhendong, Zhang, Chunxia, Zhang, Jiadi.  2019.  Multi-Scale Deformable CNN for Answer Selection. IEEE Access. 7:164986–164995.

The answer selection task is one of the most important issues within automatic question answering systems; it aims to automatically find accurate answers to questions. Traditional methods for this task use manually engineered features based on tf-idf and n-gram models to represent texts, and then select the right answers according to the similarity between the representations of questions and candidate answers. Nowadays, many question answering systems adopt deep neural networks such as the convolutional neural network (CNN) to generate text features automatically, and obtain better performance than traditional methods. A CNN can extract consecutive n-gram features of fixed length by sliding fixed-length convolutional kernels over the whole word sequence. However, due to the complex semantic compositionality of natural language, many phrases have variable lengths or are composed of non-consecutive words, such as phrases whose constituents are separated by other words within the same sentence. The traditional CNN is unable to extract such variable-length and non-consecutive n-gram features. In this paper, we propose a multi-scale deformable convolutional neural network that captures non-consecutive n-gram features by adding offsets to the convolutional kernel, and we stack multiple deformable convolutional layers to mine multi-scale n-gram features by generating longer n-grams in higher layers. Furthermore, we apply the proposed model to the task of answer selection. Experimental results on a public dataset demonstrate the effectiveness of our proposed model in answer selection.

2020-05-18
Sel, İlhami, Hanbay, Davut.  2019.  E-Mail Classification Using Natural Language Processing. 2019 27th Signal Processing and Communications Applications Conference (SIU). :1–4.
Thanks to rapid advances in technology and electronic communications, e-mail has become a primary communication tool. In many applications such as business correspondence, reminders, academic notices, and web page memberships, e-mail is the main channel of communication. Even ignoring spam, hundreds of e-mails are received every day, and determining the importance of each requires checking its subject or content. In this study we propose an unsupervised system to classify received e-mails. Each received e-mail is mapped to a vector representation by a natural language processing method, the Word2Vec algorithm. Based on the resulting similarities, the processed data are grouped by the k-means algorithm in an unsupervised fashion. In this study, 10517 e-mails were used in training, and the success of the system was tested on a group of 200 e-mails. In the test phase, the M3 model (window size 3, minimum word frequency 10, skip-gram) achieved the highest success rate (91%). The obtained results are evaluated in section VI.
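The grouping step described above, k-means over fixed-length message vectors, can be sketched as follows. This is a minimal illustration with two-dimensional stand-in vectors in place of real Word2Vec embeddings; the data, initialization, and k are invented for the sketch:

```python
# Minimal k-means over toy "message vectors". A real system would first
# embed each e-mail with Word2Vec; here the points are hand-written.
import math

def kmeans(points, k, iters=20):
    centroids = points[:k]  # deterministic init, fine for a sketch
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[j].append(p)
        centroids = [
            tuple(sum(d) / len(c) for d in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return clusters

# two obvious groups of toy vectors
points = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15), (0.9, 0.8), (0.8, 0.9)]
clusters = kmeans(points, k=2)
print([len(c) for c in clusters])
```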
Kermani, Fatemeh Hojati, Ghanbari, Shirin.  2019.  Extractive Persian Summarizer for News Websites. 2019 5th International Conference on Web Research (ICWR). :85–89.
Kermani, Fatemeh Hojati, Ghanbari, Shirin.  2019.  Extractive Persian Summarizer for News Websites. 2019 5th International Conference on Web Research (ICWR). :85–89.
Automatic extractive text summarization is the process of condensing textual information while preserving the important concepts. After pre-processing the input Persian news articles, the proposed method generates a feature vector of salient sentences from a combination of statistical, semantic, and heuristic methods; the sentences are scored and concatenated accordingly. The scoring of the salient features is based on the article's title, proper nouns, pronouns, sentence length, keywords, topic words, sentence position, English words, and quotations. Experimental results on measures including recall, F-measure, and ROUGE-N are presented, compared against other Persian summarizers, and shown to provide higher performance.
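The feature-based scoring idea can be illustrated with a toy scorer over a few of the surface features listed above (title overlap, sentence position, length). The weights and example sentences are invented for the sketch, not taken from the paper:

```python
# Illustrative sentence scorer for extractive summarization.
# Weights are arbitrary; a real system would tune them and add
# the remaining features (proper nouns, keywords, quotations, ...).
def score_sentence(sentence, index, total, title_words):
    words = sentence.lower().split()
    title_overlap = len(set(words) & title_words) / max(len(title_words), 1)
    position = 1.0 - index / max(total - 1, 1)  # earlier sentences score higher
    length = min(len(words) / 20.0, 1.0)        # mildly favour longer sentences
    return 0.5 * title_overlap + 0.3 * position + 0.2 * length

title_words = {"flood", "warning", "issued"}
sents = [
    "A flood warning was issued for the river delta this morning",
    "Local cafes reported a quiet weekend",
    "Officials said the flood warning may be extended",
]
ranked = sorted(range(len(sents)),
                key=lambda i: -score_sentence(sents[i], i, len(sents), title_words))
print(ranked)  # sentence indices, best first
```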
2020-11-20
Han, H., Wang, Q., Chen, C..  2019.  Policy Text Analysis Based on Text Mining and Fuzzy Cognitive Map. 2019 15th International Conference on Computational Intelligence and Security (CIS). :142–146.
With the introduction of computer methods, the amount of material and the processing accuracy of policy text analysis have been greatly improved. In this paper, text mining (TM) and latent semantic analysis (LSA) were used to collect policy documents and extract policy elements from them. A fuzzy association rule mining (FARM) technique and partial association tests (PA) were used to discover the causal relationships and impact degrees between elements, and a fuzzy cognitive map (FCM) was developed to deduce the evolution of elements through a soft computing method. This non-interventionist approach avoids the validity defects caused by the subjective bias of researchers and provides policy makers with more objective policy suggestions from a neutral perspective. To illustrate the accuracy of this method, this study takes policies related to state-owned capital layout adjustment as an example and shows that the method can effectively analyze policy text.
2020-08-28
Jafariakinabad, Fereshteh, Hua, Kien A..  2019.  Style-Aware Neural Model with Application in Authorship Attribution. 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA). :325–328.

Writing style is a combination of consistent decisions associated with a specific author at different levels of language production, including lexical, syntactic, and structural. In this paper, we introduce a style-aware neural model to encode document information from three stylistic levels and evaluate it in the domain of authorship attribution. First, we propose a simple way to jointly encode syntactic and lexical representations of sentences. Subsequently, we employ an attention-based hierarchical neural network to encode the syntactic and semantic structure of sentences in documents while rewarding the sentences which contribute more to capturing the writing style. Our experimental results, based on four benchmark datasets, reveal the benefits of encoding document information from all three stylistic levels when compared to the baseline methods in the literature.

2020-05-18
Panahandeh, Mahnaz, Ghanbari, Shirin.  2019.  Correction of Spaces in Persian Sentences for Tokenization. 2019 5th Conference on Knowledge Based Engineering and Innovation (KBEI). :670–674.
The exponential growth of the Internet and its users and the emergence of Web 2.0 have caused a large volume of textual data to be created. Automatic analysis of such data can be used in decision making. As online text is created by different producers with different styles of writing, pre-processing is a necessity prior to any natural language task. An essential part of textual preprocessing, prior to the recognition of the word vocabulary, is normalization, which includes the correction of spaces; in the Persian language this covers both full spaces between words and half-spaces. A review of user comments within social media services shows that in many cases users do not adhere to the grammatical rules of inserting both forms of spaces, which increases the complexity of identifying words and thereby reduces the accuracy of further processing on the text. In this study, current issues in the normalization and tokenization stages of Persian preprocessing tools are examined, and methods for identifying and correcting the separation of words and the correction of spaces are proposed. The results obtained, compared against leading preprocessing tools, highlight the significance of the proposed methodology.
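One normalization this paper targets, replacing a full space before a suffix with the zero-width non-joiner (the Persian half-space), can be sketched with a simple substitution rule. The suffix list below is a tiny illustrative sample, not the paper's lexicon or method:

```python
# Sketch: insert a half-space (ZWNJ, U+200C) in place of a full space
# before a few common Persian suffixes. Real tools use a large lexicon
# and context-aware rules; this only shows the substitution mechanics.
import re

ZWNJ = "\u200c"
SUFFIXES = ["ها", "های", "تر", "ترین"]  # toy sample of plural/comparative suffixes

def fix_half_spaces(text):
    # longest alternatives first so the regex prefers the longest suffix
    pattern = " (" + "|".join(sorted(SUFFIXES, key=len, reverse=True)) + r")\b"
    return re.sub(pattern, ZWNJ + r"\1", text)

print(fix_half_spaces("کتاب ها"))  # full space before the plural suffix becomes ZWNJ
```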
Lee, Hyun-Young, Kang, Seung-Shik.  2019.  Word Embedding Method of SMS Messages for Spam Message Filtering. 2019 IEEE International Conference on Big Data and Smart Computing (BigComp). :1–4.
SVM has been one of the most popular machine learning methods for binary classification tasks such as sentiment analysis and spam message filtering. We explore a word embedding method for the construction of feature vectors and a deep learning method for binary classification. CBOW is used as the word embedding technique, and a feedforward neural network is applied to classify SMS messages as ham or spam. The accuracies of the two classification methods, SVM and the neural network, are compared for binary classification. The experimental results show that the accuracy of the deep learning method is better than that of the conventional machine learning method, SVM-light, for binary classification.
2020-08-28
Traylor, Terry, Straub, Jeremy, Gurmeet, Snell, Nicholas.  2019.  Classifying Fake News Articles Using Natural Language Processing to Identify In-Article Attribution as a Supervised Learning Estimator. 2019 IEEE 13th International Conference on Semantic Computing (ICSC). :445–449.

Intentionally deceptive content presented under the guise of legitimate journalism is a worldwide information accuracy and integrity problem that affects opinion forming, decision making, and voting patterns. Most so-called `fake news' is initially distributed over social media conduits like Facebook and Twitter and later finds its way onto mainstream media platforms such as traditional television and radio news. The fake news stories that are initially seeded over social media platforms share key linguistic characteristics such as making excessive use of unsubstantiated hyperbole and non-attributed quoted content. In this paper, the results of a fake news identification study that documents the performance of a fake news classifier are presented. The Textblob, Natural Language, and SciPy Toolkits were used to develop a novel fake news detector that uses quoted attribution in a Bayesian machine learning system as a key feature to estimate the likelihood that a news article is fake. The resultant process achieves 63.333% precision in assessing the likelihood that an article with quotes is fake. This process is called influence mining, and this novel technique is presented as a method that can be used to enable fake news and even propaganda detection. In this paper, the research process, technical analysis, technical linguistics work, and classifier performance and results are presented. The paper concludes with a discussion of how the current system will evolve into an influence mining system.
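The attribution-as-feature idea can be sketched as a tiny Bernoulli naive Bayes over binary indicators such as "has an attributed quote" and "uses hyperbole". This is a hedged illustration of the general technique named in the abstract; the features, counts, and labels are invented, not the study's data:

```python
# Toy Bernoulli naive Bayes with Laplace smoothing.
# features per article: (has_attributed_quote, uses_hyperbole)
import math

def train_nb(rows):
    """rows: list of (feature_tuple, label) -> {label: (log-prior, per-feature log-likelihoods)}."""
    model = {}
    for lab in {l for _, l in rows}:
        sub = [f for f, l in rows if l == lab]
        prior = math.log(len(sub) / len(rows))
        likes = []
        for j in range(len(rows[0][0])):
            p = (sum(f[j] for f in sub) + 1) / (len(sub) + 2)  # smoothed P(feature=1 | label)
            likes.append((math.log(p), math.log(1 - p)))
        model[lab] = (prior, likes)
    return model

def predict(model, feats):
    def score(lab):
        prior, likes = model[lab]
        return prior + sum(likes[j][0] if f else likes[j][1] for j, f in enumerate(feats))
    return max(model, key=score)

rows = [((1, 0), "real"), ((1, 0), "real"), ((1, 1), "real"),
        ((0, 1), "fake"), ((0, 1), "fake"), ((0, 0), "fake")]
model = train_nb(rows)
print(predict(model, (1, 0)))  # attributed quote, no hyperbole
```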

2020-01-28
Kadoguchi, Masashi, Hayashi, Shota, Hashimoto, Masaki, Otsuka, Akira.  2019.  Exploring the Dark Web for Cyber Threat Intelligence Using Machine Learning. 2019 IEEE International Conference on Intelligence and Security Informatics (ISI). :200–202.

In recent years, cyber attack techniques have grown increasingly sophisticated, and blocking attacks has become more and more difficult even when countermeasures are taken. To handle this situation successfully, it is crucial to predict cyber attacks, take appropriate precautions, and make effective use of the cyber intelligence that enables these actions. Malicious hackers share various kinds of information through particular communities such as the dark web, indicating that a great deal of intelligence exists in cyberspace. This paper focuses on forums on the dark web and proposes an approach to extract the forums that include important information or intelligence from huge numbers of forums, and to identify the traits of each forum, using methodologies such as machine learning and natural language processing. This approach will allow us to grasp emerging threats in cyberspace and take appropriate measures against malicious activities.

2020-08-28
Perry, Lior, Shapira, Bracha, Puzis, Rami.  2019.  NO-DOUBT: Attack Attribution Based On Threat Intelligence Reports. 2019 IEEE International Conference on Intelligence and Security Informatics (ISI). :80–85.

The task of attack attribution, i.e., identifying the entity responsible for an attack, is complicated and usually requires the involvement of an experienced security expert. Prior attempts to automate attack attribution apply various machine learning techniques on features extracted from the malware's code and behavior in order to identify other similar malware whose authors are known. However, the same malware can be reused by multiple actors, and the actor who performed an attack using a malware might differ from the malware's author. Moreover, information collected during an incident may contain many clues about the identity of the attacker in addition to the malware used. In this paper, we propose a method of attack attribution based on textual analysis of threat intelligence reports, using state of the art algorithms and models from the fields of machine learning and natural language processing (NLP). We have developed a new text representation algorithm which captures the context of the words and requires minimal feature engineering. Our approach relies on vector space representation of incident reports derived from a small collection of labeled reports and a large corpus of general security literature. Both datasets have been made available to the research community. Experimental results show that the proposed representation can attribute attacks more accurately than the baselines' representations. In addition, we show how the proposed approach can be used to identify novel previously unseen threat actors and identify similarities between known threat actors.

2020-05-18
Zhu, Meng, Yang, Xudong.  2019.  Chinese Texts Classification System. 2019 IEEE 2nd International Conference on Information and Computer Technologies (ICICT). :149–152.
In this article, we design an automatic Chinese text classification system for news texts. We propose two improved classification algorithms as alternative choices; the system applies the method chosen by the user to classify the input text. The first algorithm, k-Bayes, adds a hierarchy concept to the naive Bayes (NB) method from the machine learning field; the second adds an attention layer to a convolutional neural network from the deep learning field. Our experiments showed that the improved classification algorithms achieve better accuracy than the base algorithms and that our system helps classify news texts more reasonably and effectively.
2020-07-30
Deeba, Farah, Tefera, Getenet, Kun, She, Memon, Hira.  2019.  Protecting the Intellectual Properties of Digital Watermark Using Deep Neural Network. 2019 4th International Conference on Information Systems Engineering (ICISE). :91–95.

Recent advances in artificial intelligence, machine learning, and deep neural networks (DNNs) have driven robust applications in areas such as image processing, speech recognition, and natural language processing; in particular, trained DNN models make it easy for researchers to produce state-of-the-art results. However, sharing these trained models remains challenging with respect to security and protection. We performed extensive experiments to analyze watermarking in DNNs. We propose a DNN model for digital watermarking that addresses the intellectual property of deep neural networks, watermark embedding, and owner verification. The model can generate watermarks that withstand possible attacks (fine-tuning and training to embed). The approach is tested on a standard dataset and is robust to the above counter-watermark attacks. Our model accurately and instantly verifies the ownership of remotely deployed deep learning models without affecting model accuracy.

2022-06-06
Hung, Benjamin W.K., Muramudalige, Shashika R., Jayasumana, Anura P., Klausen, Jytte, Libretti, Rosanne, Moloney, Evan, Renugopalakrishnan, Priyanka.  2019.  Recognizing Radicalization Indicators in Text Documents Using Human-in-the-Loop Information Extraction and NLP Techniques. 2019 IEEE International Symposium on Technologies for Homeland Security (HST). :1–7.
Among the operational shortfalls that hinder law enforcement from achieving greater success in preventing terrorist attacks is the difficulty in dynamically assessing individualized violent extremism risk at scale given the enormous amount of primarily text-based records in disparate databases. In this work, we undertake the critical task of employing natural language processing (NLP) techniques and supervised machine learning models to classify textual data in analyst and investigator notes and reports for radicalization behavioral indicators. This effort to generate structured knowledge will build towards an operational capability to assist analysts in rapidly mining law enforcement and intelligence databases for cues and risk indicators. In the near-term, this effort also enables more rapid coding of biographical radicalization profiles to augment a research database of violent extremists and their exhibited behavioral indicators.
2020-12-11
Dabas, K., Madaan, N., Arya, V., Mehta, S., Chakraborty, T., Singh, G..  2019.  Fair Transfer of Multiple Style Attributes in Text. 2019 Grace Hopper Celebration India (GHCI). :1–5.

To preserve anonymity and obfuscate their identity on online platforms, users may morph their text and portray themselves as a different gender or demographic. Similarly, a chatbot may need to customize its communication style to improve engagement with its audience. This manner of changing the style of written text has gained significant attention in recent years, yet past research largely caters to the transfer of single style attributes. The disadvantage of focusing on a single style is that other existing style attributes in the target text often behave unpredictably or are unfairly dominated by the new style. To counteract this behavior, a style transfer mechanism should be able to transfer or control multiple styles simultaneously and fairly. Through such an approach, one could obtain obfuscated or rewritten text incorporating a desired degree of multiple soft styles such as female-quality, politeness, or formalness. To the best of our knowledge, this work is the first to identify and attempt to solve the issues related to multiple style transfer. We also demonstrate that the transfer of multiple styles cannot be achieved by sequentially performing multiple single-style transfers, because each single-style transfer step often reverses or dominates the style incorporated by a previous step. We then propose a neural network architecture for fairly transferring multiple style attributes in a given text. We test our architecture on the Yelp dataset to demonstrate its superior performance compared to single-style transfer steps performed in sequence.

2020-08-28
Khomytska, Iryna, Teslyuk, Vasyl.  2019.  Mathematical Methods Applied for Authorship Attribution on the Phonological Level. 2019 IEEE 14th International Conference on Computer Sciences and Information Technologies (CSIT). 3:7–11.

The proposed combination of statistical methods has proved efficient for authorship attribution. The complex analysis method based on the proposed combination of statistical methods has made it possible to minimize the number of phoneme groups by which the authorial differentiation of texts has been done.

2020-07-16
Pérez-Soler, Sara, Guerra, Esther, de Lara, Juan.  2019.  Flexible Modelling using Conversational Agents. 2019 ACM/IEEE 22nd International Conference on Model Driven Engineering Languages and Systems Companion (MODELS-C). :478–482.

The advances in natural language processing and the wide use of social networks have boosted the proliferation of chatbots. These are software services, typically embedded within a social network, that can be addressed through natural language conversation. Many chatbots exist for different purposes, e.g., to book all kinds of services, to automate software engineering tasks, or for customer support. In previous work, we proposed the use of chatbots for domain-specific modelling within social networks. In this short paper, we report on the need for flexible modelling that conversational modelling requires. In particular, we propose a process of meta-model relaxation to make modelling more flexible, followed by correction steps to make the model conform to its meta-model. The paper shows how this process is integrated within our conversational modelling framework and illustrates the approach with an example.

2020-05-18
Zong, Zhaorong, Hong, Changchun.  2018.  On Application of Natural Language Processing in Machine Translation. 2018 3rd International Conference on Mechanical, Control and Computer Engineering (ICMCCE). :506–510.
Natural language processing is the core of machine translation. Historically, its development has closely paralleled that of machine translation, and the two complement each other. This article compares statistical corpus-based natural language processing with neural machine translation and concludes: neural machine translation has the advantage of deep learning, which is well suited to the high-dimensional, label-free big data of natural language; its application is therefore more general and reflects the power of big data and big-data thinking.
2019-12-16
Karve, Shreya, Nagmal, Arati, Papalkar, Sahil, Deshpande, S. A..  2018.  Context Sensitive Conversational Agent Using DNN. 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA). :475–478.
We investigate a method of building a closed-domain intelligent conversational agent using deep neural networks. A conversational agent is a dialog system intended to converse with a human with a coherent structure. Our conversational agent uses a retrieval-based model that identifies the intent of the input user query and maps it to a knowledge base to return appropriate results. Human conversations are based on context, but existing conversational agents are context-insensitive. To overcome this limitation, our system uses a simple stack-based context identification and storage system. The conversational agent generates responses according to the current context of the conversation, allowing more human-like conversations.
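The stack-based context idea can be sketched as follows: the agent pushes a topic when the user switches context and resolves ambiguous queries against the top of the stack. The intents, topics, and knowledge base below are invented for illustration, not taken from the paper:

```python
# Minimal stack-based context store for a retrieval-style agent.
class ContextStack:
    def __init__(self):
        self._stack = []

    def push(self, topic):
        self._stack.append(topic)

    def pop(self):
        return self._stack.pop() if self._stack else None

    def current(self):
        return self._stack[-1] if self._stack else None

# toy knowledge base keyed by (context topic, intent)
KB = {("admissions", "deadline"): "Applications close on 30 June.",
      ("fees", "deadline"): "Fees are due by 15 August."}

def answer(stack, query):
    # an ambiguous query like "what is the deadline?" is resolved via context
    intent = "deadline" if "deadline" in query else None
    return KB.get((stack.current(), intent), "Could you clarify?")

ctx = ContextStack()
ctx.push("admissions")
print(answer(ctx, "what is the deadline?"))
ctx.push("fees")  # user switched topics; same question, new answer
print(answer(ctx, "and the deadline?"))
```

Popping the stack when a sub-topic is finished returns the agent to the previous context, which is the behavior the abstract describes.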
2020-05-18
Peng, Tianrui, Harris, Ian, Sawa, Yuki.  2018.  Detecting Phishing Attacks Using Natural Language Processing and Machine Learning. 2018 IEEE 12th International Conference on Semantic Computing (ICSC). :300–301.
Phishing attacks are one of the most common and least defended security threats today. We present an approach which uses natural language processing techniques to analyze text and detect inappropriate statements which are indicative of phishing attacks. Our approach is novel compared to previous work because it focuses on the natural language text contained in the attack, performing semantic analysis of the text to detect malicious intent. To demonstrate the effectiveness of our approach, we have evaluated it using a large benchmark set of phishing emails.
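The semantic idea, flagging text that both issues a command and mentions a credential-like object, can be illustrated with a toy rule. This is not the authors' analysis (which uses proper semantic parsing); the word lists are invented for the sketch:

```python
# Toy flag: a sentence is suspicious if it contains both an action verb
# and a sensitive-object word. Word lists are illustrative only.
ACTION_VERBS = {"verify", "confirm", "update", "send", "click"}
SENSITIVE = {"password", "account", "ssn", "credentials", "login"}

def looks_like_phishing(sentence):
    words = {w.strip(".,!?").lower() for w in sentence.split()}
    return bool(words & ACTION_VERBS) and bool(words & SENSITIVE)

print(looks_like_phishing("Please verify your account password immediately!"))
print(looks_like_phishing("The quarterly report is attached."))
```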
Fahad, S.K. Ahammad, Yahya, Abdulsamad Ebrahim.  2018.  Inflectional Review of Deep Learning on Natural Language Processing. 2018 International Conference on Smart Computing and Electronic Enterprise (ICSCEE). :1–4.
In the age of knowledge, natural language processing (NLP) is in demand across a huge range of applications. NLP previously dealt with statistical data; today it works extensively with corpora, lexicon databases, and pattern recognition. Since deep learning (DL) methods use artificial neural networks (NNs) for nonlinear processing, NLP tools have become increasingly accurate and efficient. Multi-layer neural networks have gained importance in NLP for their capability, including consistent speed and reliable output. Hierarchical designs operate recurring processing layers on the data to learn, and with this arrangement DL methods handle several tasks. In this paper, we strive to review the tools and the necessary methodology to present a clear understanding of the association of NLP and DL. Efficiency and execution in NLP are both improved by part-of-speech tagging (POST), morphological analysis, named entity recognition (NER), semantic role labeling (SRL), syntactic parsing, and coreference resolution. Artificial neural networks (ANNs), time-delay neural networks (TDNNs), recurrent neural networks (RNNs), convolutional neural networks (CNNs), and long short-term memory (LSTM) networks interact with dense vectors (DV), the window approach (WA), and multitask learning (MTL) as characteristics of deep learning. After the statistical methods, as DL came to influence NLP, a fundamental connection between the NLP process and DL rules began.
2019-08-05
Sorokine, Alex, Thakur, Gautam, Palumbo, Rachel.  2018.  Machine Learning to Improve Retrieval by Category in Big Volunteered Geodata. Proceedings of the 12th Workshop on Geographic Information Retrieval. :4:1–4:2.
Nowadays, Volunteered Geographic Information (VGI) is commonly used in research and practical applications. However, the quality assurance of such geographic data remains a problem. In this study we use machine learning and natural language processing to improve record retrieval by category (e.g. restaurant, museum, etc.) from Wikimapia Points of Interest data. We use the textual information contained in VGI records to evaluate its ability to determine the category label. The performance of the trained classifier is evaluated on the complete dataset and then compared with its performance on regional subsets. Preliminary analysis shows a significant difference in classifier performance across the regions. Such geographic differences will have a significant effect on data enrichment efforts such as labeling entities with missing categories.
2019-03-04
Buck, Joshua W., Perugini, Saverio, Nguyen, Tam V..  2018.  Natural Language, Mixed-initiative Personal Assistant Agents. Proceedings of the 12th International Conference on Ubiquitous Information Management and Communication. :82:1–82:8.
The increasing popularity and use of personal voice assistant technologies, such as Siri and Google Now, is driving and expanding progress toward the long-term and lofty goal of using artificial intelligence to build human-computer dialog systems capable of understanding natural language. While dialog-based systems such as Siri support utterances communicated through natural language, they are limited in the flexibility they afford to the user in interacting with the system and, thus, support primarily action-requesting and information-seeking tasks. Mixed-initiative interaction, on the other hand, is a flexible interaction technique where the user and the system act as equal participants in an activity, and is often exhibited in human-human conversations. In this paper, we study user support for mixed-initiative interaction with dialog-based systems through natural language using a bag-of-words model and k-nearest-neighbor classifier. We study this problem in the context of a toolkit we developed for automated, mixed-initiative dialog system construction, involving a dialog authoring notation and management engine based on lambda calculus, for specifying and implementing task-based, mixed-initiative dialogs. We use ordering at Subway through natural language, human-computer dialogs as a case study. Our results demonstrate that the dialogs authored with our toolkit support the end user's completion of a natural language, human-computer dialog in a mixed-initiative fashion. The use of natural language in the resulting mixed-initiative dialogs afford the user the ability to experience multiple self-directed paths through the dialog and makes the flexibility in communicating user utterances commensurate with that in dialog completion paths—an aspect missing from commercial assistants like Siri.
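The classification step named above, a bag-of-words model with a nearest-neighbour classifier, can be sketched as cosine similarity between word-count vectors with a 1-nearest-neighbour intent match. The utterances and intent labels below are invented for illustration, not drawn from the toolkit:

```python
# Bag-of-words + cosine similarity + 1-NN intent classification, in
# the spirit of the approach described above. Training data is toy.
import math
from collections import Counter

def bow(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

TRAIN = [("i want a turkey sandwich", "order_item"),
         ("add extra cheese please", "modify_item"),
         ("what breads do you have", "ask_options")]

def classify(utterance):
    return max(TRAIN, key=lambda ex: cosine(bow(utterance), bow(ex[0])))[1]

print(classify("what sauces do you have"))
```

A larger k and a stop-word filter would make this more robust; the point here is only how an utterance is mapped to its closest known intent.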
2020-05-18
Kadebu, Prudence, Thada, Vikas, Chiurunge, Panashe.  2018.  Natural Language Processing and Deep Learning Towards Security Requirements Classification. 2018 3rd International Conference on Contemporary Computing and Informatics (IC3I). :135–140.
Security requirements classification is important to the software engineering community for building software that is secure, robust, and able to withstand attacks. This classification facilitates proper analysis of security requirements so that adequate security mechanisms are incorporated in the development process. Machine learning techniques have been used in security requirements classification to help ensure that the correct security mechanisms are designed for each classification, eliminating the risk of security being addressed only in the late stages of development. However, these machine learning techniques have been found to have problems, including hand-crafting of features, overfitting, and failure to perform well on high-dimensional data. In this paper we explore whether natural language processing and deep learning can be applied to security requirements classification.
2019-12-16
Alam, Mehreen.  2018.  Neural Encoder-Decoder based Urdu Conversational Agent. 2018 9th IEEE Annual Ubiquitous Computing, Electronics Mobile Communication Conference (UEMCON). :901–905.
Conversational agents have very much become part of our lives since the renaissance of neural network based "neural conversational agents". Previously used manually annotated and rule-based methods lacked the scalability and generalization capabilities of neural conversational agents. A neural conversational agent has two parts: at one end an encoder understands the question, while at the other end a decoder prepares and outputs the corresponding answer. Both parts are typically designed using recurrent neural networks and their variants and trained in an end-to-end fashion. Although conversational agents have been developed for other languages, the Urdu language has seen very little progress in the building of conversational agents; in particular, recent state-of-the-art neural network based techniques have not yet been explored. In this paper, we design an attention-driven deep encoder-decoder based neural conversational agent for the Urdu language. Overall, we make the following contributions: (i) we create a dataset of 5000 question-answer pairs, and (ii) we present a new deep encoder-decoder based conversational agent for the Urdu language. For this work, we limit the knowledge base of our agent to general knowledge regarding Pakistan. Our best model achieves a BLEU score of 58 and gives syntactically and semantically correct answers in the majority of cases.