
Beyond the Black Box: Interpreting ML models with SHAP

Translations: en

Description

ML models often act as black boxes, making it hard to extract actionable insights. SHAP helps explain predictions by attributing importance to input features using concepts from game theory. In this talk, we'll cover the need for explainability, introduce the intuition behind Shapley values, and walk through a couple of case studies using boosted-tree and neural-network-based models. We'll also discuss SHAP plots, best practices, challenges, and pitfalls when working with large datasets.
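The Shapley-value intuition mentioned above can be illustrated without the SHAP library itself: for a model with few features, the exact Shapley value of each feature is a weighted average of its marginal contribution over all coalitions of the other features, with "absent" features replaced by a baseline. The toy model and baseline below are hypothetical, chosen only to make the arithmetic visible; real SHAP explainers approximate this computation efficiently for large models.

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values by enumerating all feature coalitions.

    model    : callable taking a feature vector
    x        : instance to explain
    baseline : reference values used for "absent" features
    """
    n = len(x)

    def value(subset):
        # Features in `subset` keep their real value; the rest use the baseline.
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return model(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for s in combinations(others, size):
                # Classic Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi += weight * (value(set(s) | {i}) - value(set(s)))
        phis.append(phi)
    return phis

# Hypothetical two-feature model with an interaction term, for illustration only.
model = lambda z: 2 * z[0] + 3 * z[1] + z[0] * z[1]
phi = shapley_values(model, x=[1, 1], baseline=[0, 0])
print(phi)       # [2.5, 3.5] -- the interaction credit is split evenly
print(sum(phi))  # 6.0 == model([1, 1]) - model([0, 0])
```

Note the additivity property shown in the last line: the attributions sum exactly to the difference between the prediction for the instance and the prediction at the baseline, which is what makes Shapley-based explanations internally consistent.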

Details
