Description
In this video, we'll learn about K-fold cross-validation and how it can be used for selecting optimal tuning parameters, choosing between models, and selecting features. We'll compare cross-validation with the train/test split procedure, and we'll also discuss some variations of cross-validation that can result in more accurate estimates of model performance.
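As a quick taste of what the video covers, here is a minimal sketch of K-fold cross-validation using scikit-learn's cross_val_score; the dataset and model below are illustrative placeholders, not necessarily the ones used in the video:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Load a small example dataset (features X, labels y)
X, y = load_iris(return_X_y=True)

# Evaluate a KNN classifier with 10-fold cross-validation:
# the data is split into 10 folds, and the model is trained
# and scored 10 times, each time holding out a different fold
knn = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(knn, X, y, cv=10, scoring="accuracy")

# The mean score across folds is a more reliable estimate of
# out-of-sample accuracy than a single train/test split
print(scores.mean())
```

Repeating this for a range of tuning-parameter values (e.g. different n_neighbors) and comparing the mean scores is the basic recipe for parameter and model selection discussed in the video.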
This is the seventh video in the series "Introduction to machine learning with scikit-learn". The notebook and resources shown in the video are available on GitHub.