Towards a Framework for Adapting Machine Learning Components

Title: Towards a Framework for Adapting Machine Learning Components
Publication Type: Conference Paper
Year of Publication: 2022
Authors: Casimiro, Maria; Romano, Paolo; Garlan, David; Rodrigues, Luís
Conference Name: 2022 IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS)
Date Published: September 2022
Keywords: Adaptation models, Autonomic Security, Cognition, composability, Computational modeling, fraud detection system, machine learning, maximum likelihood estimation, model retrain, Predictive models, Probabilistic logic, pubcrawl, resilience, Resiliency, self-adaptation
Abstract: Machine Learning (ML) models are now commonly used as components in systems. Like any other component, ML components can produce erroneous outputs that may penalize system utility. In this context, self-adaptive systems emerge as a natural approach to cope with ML mispredictions, through the execution of adaptation tactics such as model retraining. To synthesize an adaptation strategy, the self-adaptation manager needs to reason about the cost-benefit tradeoffs of the applicable tactics, which is a non-trivial task for tactics such as model retraining, whose benefits are both context- and data-dependent. To address this challenge, this paper proposes a probabilistic modeling framework that supports automated reasoning about the cost/benefit tradeoffs associated with improving ML components of ML-based systems. The key idea of the proposed approach is to decouple the problems of (i) estimating the expected performance improvement after retraining and (ii) estimating the impact of improved ML predictions on overall system utility. We demonstrate the application of the proposed framework by using it to self-adapt a state-of-the-art ML-based fraud-detection system, which we evaluate using a publicly available, real fraud-detection dataset. We show that by predicting the system utility stemming from retraining an ML component, the probabilistic model checker can generate adaptation strategies that are significantly closer to the optimal than baselines such as periodic or reactive retraining.
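As a rough illustration of the decoupling the abstract describes, the sketch below (a hypothetical simplification, not the paper's actual model) separates (i) a prediction of the ML component's performance after retraining from (ii) a mapping of that performance to system utility, and then selects the retrain tactic only when its expected net benefit is positive. All function names, parameters, and numbers here are illustrative assumptions.

```python
# Hedged sketch of cost/benefit reasoning for the "retrain" tactic.
# Assumptions (not from the paper): performance is a scalar such as recall,
# and system utility is linear in that scalar.

def expected_utility(ml_quality: float, utility_per_quality: float) -> float:
    """(ii) Map predicted ML performance to overall system utility."""
    return ml_quality * utility_per_quality

def choose_tactic(current_quality: float,
                  predicted_quality_after_retrain: float,
                  utility_per_quality: float,
                  retrain_cost: float) -> str:
    """Pick 'retrain' only when its expected utility gain exceeds its cost."""
    gain = (expected_utility(predicted_quality_after_retrain, utility_per_quality)
            - expected_utility(current_quality, utility_per_quality))
    return "retrain" if gain > retrain_cost else "nop"

# Example: retraining is predicted to lift recall from 0.70 to 0.85, each
# unit of recall is worth 100 utility units, and a retrain costs 10.
print(choose_tactic(0.70, 0.85, 100.0, 10.0))  # -> retrain
```

In the paper's setting, step (i) is itself non-trivial because the benefit of retraining is context- and data-dependent; the sketch simply takes the predicted post-retrain quality as an input.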
DOI: 10.1109/ACSOS55765.2022.00031
Citation Key: casimiro_towards_2022