01.03.22
One of the many ways your machine learning model can go wrong
Building and deploying a machine learning model can be a daunting task. But even once that is done, the model can still go wrong. If the world changes in some significant way between training a model and using it in production, the predictions that looked good during training can suddenly become worthless.
To mitigate this risk, we should monitor whether our data has changed. One way is through feedback: checking whether the model is still predicting well. Another is by looking at data distribution shifts. Chip Huyen has written a very useful guide to this often overlooked topic.
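To make the idea concrete, here is a minimal sketch of one common way to check a single numeric feature for drift: comparing its training distribution against a recent window of production data with a two-sample Kolmogorov-Smirnov test. The synthetic data and the 0.05 significance threshold are illustrative assumptions, not part of Chip Huyen's guide, which covers far more approaches.

```python
# Minimal drift check on one numeric feature, assuming we have kept a
# sample of the training values and can collect a recent production window.
import numpy as np
from scipy.stats import ks_2samp

def detect_shift(train_values: np.ndarray, live_values: np.ndarray,
                 alpha: float = 0.05) -> bool:
    """Return True if the live distribution differs significantly from training."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

# Illustrative usage with synthetic data: the live feature has drifted upward.
rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_feature = rng.normal(loc=0.5, scale=1.0, size=1_000)

if detect_shift(train_feature, live_feature):
    print("Possible distribution shift - investigate before trusting predictions.")
```

In practice you would run a check like this per feature on a schedule, and treat it as an alert to investigate rather than a definitive verdict, since statistical tests on large production windows can flag harmless changes too.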