In this talk we will discuss the problem of making the predictions of machine learning models understandable to both data scientists and end users. While most machine learning frameworks already include metrics that measure the importance of different features during training, fewer allow us to understand which features mattered most for a specific prediction at runtime. We will describe our use case, discuss a few options and finally present Shap, an open-source Python library able to interpret many different models. Shap provides both general-purpose tools and tools specialized for particular families of models, along with Jupyter and Matplotlib integrations for easy use while exploring and testing a model.
Feedback form: https://python.it/feedback-1601
On Saturday 4 May at 17:15 **See schedule**