Biblio

Herwanto, Guntur Budi, Quirchmayr, Gerald, Tjoa, A Min.  2021.  A Named Entity Recognition Based Approach for Privacy Requirements Engineering. 2021 IEEE 29th International Requirements Engineering Conference Workshops (REW). :406–411.
The presence of experts, such as a data protection officer (DPO) or a privacy engineer, is essential in Privacy Requirements Engineering. This task is carried out in various forms, including threat modeling and privacy impact assessment. The knowledge required to perform privacy threat modeling can be a serious challenge for a novice privacy engineer. We aim to bridge this gap by developing an automated, machine-learning-based approach that detects privacy-related entities in user stories. The relevant entities are (1) the Data Subject, (2) the Processing, and (3) the Personal Data. We use a state-of-the-art Named Entity Recognition (NER) model along with contextual embedding techniques. We argue that an automated approach can assist agile teams in performing privacy requirements engineering techniques such as threat modeling, which requires a holistic understanding of how personally identifiable information is used in a system. In comparison with other domain-specific NER models, our approach achieves reasonably good precision and recall.
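For readers unfamiliar with how such a tagger is set up, the following minimal sketch trains a small spaCy NER pipeline on a single, made-up user story with three assumed labels (DATA_SUBJECT, PROCESSING, PERSONAL_DATA). It illustrates the general technique only; the label names, the toy example, and the choice of spaCy are assumptions, and the paper's contextual-embedding model and dataset are not reproduced here.

# Minimal sketch (not the authors' code): tagging privacy-related entities
# in user stories with a spaCy NER pipeline trained on one toy example.
import random
import spacy
from spacy.training import Example

# (user story text, character-offset annotations) -- illustrative only
TRAIN_DATA = [
    ("As a customer, I want to update my email address.",
     {"entities": [(5, 13, "DATA_SUBJECT"),     # "customer"
                   (25, 31, "PROCESSING"),      # "update"
                   (35, 48, "PERSONAL_DATA")]}),  # "email address"
]

nlp = spacy.blank("en")            # start from a blank English pipeline
ner = nlp.add_pipe("ner")          # add the NER component
for label in ("DATA_SUBJECT", "PROCESSING", "PERSONAL_DATA"):
    ner.add_label(label)

optimizer = nlp.initialize()
for _ in range(30):                # a few passes over the toy data
    random.shuffle(TRAIN_DATA)
    for text, annotations in TRAIN_DATA:
        example = Example.from_dict(nlp.make_doc(text), annotations)
        nlp.update([example], sgd=optimizer)

# Apply the trained tagger to a new user story
doc = nlp("As a patient, I want to delete my phone number.")
print([(ent.text, ent.label_) for ent in doc.ents])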
Puiutta, Erika, Veith, Eric M. S. P.  2020.  Explainable Reinforcement Learning: A Survey. Machine Learning and Knowledge Extraction. :77–95.
Explainable Artificial Intelligence (XAI), i.e., the development of more transparent and interpretable AI models, has gained increased traction over the last few years. This is because, alongside their growth into powerful and ubiquitous tools, AI models exhibit one detrimental characteristic: a performance-transparency trade-off. The more complex a model's inner workings, the less clear it is how its predictions or decisions are reached. But especially for Machine Learning (ML) methods like Reinforcement Learning (RL), where the system learns autonomously, the need to understand the reasoning behind its decisions becomes apparent. Since, to the best of our knowledge, no single work offers an overview of Explainable Reinforcement Learning (XRL) methods, this survey attempts to address that gap. We give a short summary of the problem, define important terms, and offer a classification and assessment of current XRL methods. We found that a) the majority of XRL methods function by mimicking and simplifying a complex model instead of designing an inherently simple one, and b) XRL (and XAI) methods often neglect the human side of the equation, not taking into account research from related fields such as psychology or philosophy. Thus, an interdisciplinary effort is needed to adapt the generated explanations to a (non-expert) human user in order to make real progress in XRL and XAI in general.
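Finding a) above refers to surrogate or mimic approaches, in which an interpretable model is fitted to the behaviour of an opaque policy. The sketch below illustrates that pattern under stated assumptions: the stand-in "opaque" policy, the feature names, the data sizes, and the use of a scikit-learn decision tree are illustrative choices, not taken from the survey.

# Minimal sketch of the "mimic and simplify" pattern: distill an opaque
# policy into an interpretable decision tree. Everything here is illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

def opaque_policy(state):
    # Stand-in for a trained RL agent (e.g., a deep Q-network): maps a state
    # vector to a discrete action via a fixed nonlinear rule.
    return int(state[0] * state[1] > 0.2)

# 1. Collect (state, action) pairs by querying the black-box policy.
states = rng.uniform(-1, 1, size=(5000, 2))
actions = np.array([opaque_policy(s) for s in states])

# 2. Fit a shallow decision tree that mimics the policy's behaviour.
surrogate = DecisionTreeClassifier(max_depth=3).fit(states, actions)

# 3. The tree's rules serve as a human-readable explanation of the policy.
print(export_text(surrogate, feature_names=["velocity", "angle"]))
print("fidelity:", surrogate.score(states, actions))  # agreement with the black box

The printed tree is the "explanation"; its agreement with the black box (fidelity) indicates how faithfully the simple model reproduces the complex one.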