Evaluating the Performance of Machine Learning Sentiment Analysis Algorithms in Software Engineering

Title: Evaluating the Performance of Machine Learning Sentiment Analysis Algorithms in Software Engineering
Publication Type: Conference Paper
Year of Publication: 2019
Authors: Shen, Jingyi; Baysal, Olga; Shafiq, M. Omair
Conference Name: 2019 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech)
Date Published: August
Keywords: Automated Secure Software Engineering, automated sentiment analysis, automated sentiment tool, Benchmark testing, composability, data mining, datasets, evaluation performance, learning (artificial intelligence), machine learning, machine learning algorithms, machine learning sentiment analysis algorithms, pubcrawl, Resiliency, sentiment analysis, Software algorithms, software engineering, software engineering domain, tool performance, Tools, Training
Abstract: In recent years, sentiment analysis has gained attention within the software engineering domain. Automated sentiment analysis has long suffered from doubts about its accuracy, and tool performance is unstable when a tool is applied to datasets other than the one on which it was originally evaluated. Researchers also disagree on whether machine learning algorithms outperform conventional lexicon- and rule-based approaches. In this paper, we examine the dataset factors that may affect evaluation performance, evaluate popular machine learning algorithms for sentiment analysis, and propose a novel structure for an automated sentiment tool that combines the advantages of both approaches.
DOI: 10.1109/DASC/PiCom/CBDCom/CyberSciTech.2019.00185
Citation Key: shen_evaluating_2019