Diagnosing, explaining, and scaling machine learning models is hard. I'll talk about a set of libraries that have helped me understand when and how a model is failing, communicate to non-technical users why it works, automate the search for better models, and scale my modeling.
I'll discuss Yellowbrick, LIME, ELI5, TPOT, and Dask. These libraries make it more likely that you deliver trustworthy, reliable systems that actually make it past R&D and into production. The talk will be rooted in my experience delivering client projects and participating in Kaggle competitions.