The Limitations of Model Retraining in the Face of Performativity
Abstract
We study stochastic optimization in the context of performative shifts, where the data distribution changes in response to the deployed model. We demonstrate that naive retraining can be provably suboptimal even for simple distribution shifts. The issue worsens when models are retrained given a finite number of samples at each retraining step. We show that adding regularization to retraining corrects both of these issues, attaining provably optimal models in the face of distribution shifts. Our work advocates rethinking how machine learning models are retrained in the presence of performative effects.
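To illustrate the gap the abstract describes, here is a minimal sketch of naive retraining under a performative shift. The toy model (quadratic-in-theta loss, a data mean that moves linearly with the deployed model, and the parameter values `mu0` and `eps`) is an assumed illustration, not the paper's exact setup: retraining converges to a stable fixed point that differs from the performatively optimal model.

```python
# Hypothetical toy model of performativity (not the paper's exact setup):
# loss l(theta; z) = -z*theta + theta^2/2, and deploying theta shifts the
# data mean to mu0 + eps*theta.
import random

mu0, eps = 1.0, 0.4   # base mean and performativity strength (assumed values)

def sample_mean(theta, n=20000, seed=0):
    """Empirical mean of z ~ N(mu0 + eps*theta, 1) observed after deploying theta."""
    rng = random.Random(seed)
    m = mu0 + eps * theta
    return sum(rng.gauss(m, 1.0) for _ in range(n)) / n

# Naive retraining: argmin_t E_{z~D(theta)}[-z*t + t^2/2] has the closed form
# t = E[z], so each retraining step just chases the mean it induced.
theta = 0.0
for t in range(50):
    theta = sample_mean(theta, seed=t)

theta_stable = mu0 / (1 - eps)      # fixed point that retraining converges to
theta_opt = mu0 / (1 - 2 * eps)     # minimizer of the performative risk
print(theta, theta_stable, theta_opt)
```

With these assumed values, retraining settles near `theta_stable = 1.67` while the performative risk is minimized at `theta_opt = 5.0`, which is the kind of provable suboptimality the abstract refers to.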
BibTeX
@article{Kabra2024,
  author  = {Kabra, Anmol and Patel, Kumar Kshitij},
  title   = {The Limitations of Model Retraining in the Face of Performativity},
  journal = {arXiv preprint arXiv:2408.08499},
  year    = {2024},
}