Machine learning textbooks tend to focus narrowly on specific algorithms or code without stepping back to look at the bigger picture. One important real-world pattern that is rarely covered: predictive models that are regularly retrained on new data generated by their own earlier predictions. Done poorly, such iterated models can amplify the errors and biases of their initial versions. Done well, they can learn from those mistakes over time, using the outcomes of previous predictions as new training data to keep the model fresh and productive over months or years of applied use. Drawing on examples from my own work in the political, nonprofit, and civic data science fields, this talk will introduce a framework for designing machine learning models that get better over time.