Title | Evaluating the Performance of Machine Learning Sentiment Analysis Algorithms in Software Engineering |
Publication Type | Conference Paper |
Year of Publication | 2019 |
Authors | Shen, Jingyi, Baysal, Olga, Shafiq, M. Omair |
Conference Name | 2019 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech) |
Date Published | August
Keywords | Automated Secure Software Engineering, automated sentiment analysis, automated sentiment tool, Benchmark testing, composability, data mining, datasets, evaluation performance, learning (artificial intelligence), machine learning, machine learning algorithms, machine learning sentiment analysis algorithms, pubcrawl, Resiliency, sentiment analysis, Software algorithms, software engineering, software engineering domain, tool performance, Tools, Training |
Abstract | In recent years, sentiment analysis has gained attention within the software engineering domain. While automated sentiment analysis has long suffered from doubts about its accuracy, tool performance is also unstable when a tool is applied to datasets other than the one it was originally evaluated on. Researchers also disagree on whether machine learning algorithms outperform conventional lexicon- and rule-based approaches. In this paper, we examine the factors in datasets that may affect evaluation performance, evaluate popular machine learning algorithms for sentiment analysis, and propose a novel structure for an automated sentiment tool that combines the advantages of both approaches. |
DOI | 10.1109/DASC/PiCom/CBDCom/CyberSciTech.2019.00185 |
Citation Key | shen_evaluating_2019 |