
Shapley Additive Explanations (SHAP)

SHAP (SHapley Additive exPlanations) is a framework for explaining the predictions of machine learning models. It is based on Shapley values, a well-known concept from cooperative game theory. In the context of machine learning, Shapley values assign an importance score to each input feature, indicating its contribution to the model's prediction.

The key idea behind SHAP is to estimate each input feature's Shapley value: its average marginal contribution to the model output over all possible subsets of the other features. This is done by computing the difference between the model output with and without each input feature, and weighting these differences by the combinatorial factor that the Shapley formula assigns to each subset.
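Concretely, the classical Shapley value that SHAP approximates can be written as follows, where N is the full feature set and v(S) denotes the expected model output when only the features in subset S are known:

```latex
% Shapley value of feature i: its weighted average marginal contribution
% over all subsets S of the remaining features N \ {i}.
\phi_i = \sum_{S \subseteq N \setminus \{i\}}
         \frac{|S|!\,(|N| - |S| - 1)!}{|N|!}
         \left[ v(S \cup \{i\}) - v(S) \right]
```

Evaluating this sum exactly requires an exponential number of subsets, which is why SHAP relies on model-specific or sampling-based approximations.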

[Figure source: Explaining machine learning models with SHAP and SAGE by Ian Covert]

SHAP can be used with any type of machine learning model, including linear models, tree-based models, and neural networks. The framework provides a set of tools and algorithms for computing Shapley values and visualizing each input feature's contribution to the model output. These tools can be used to gain insight into how a model makes predictions, identify important features, and detect potential sources of bias or error.
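As an illustration, here is a minimal sketch using the shap Python package with a scikit-learn tree model; the dataset and model are arbitrary choices, and the example assumes shap and scikit-learn are installed.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple tree-based model on a toy regression dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one attribution per feature per sample

# Summarize global feature importance across the dataset.
shap.summary_plot(shap_values, X)
```

For non-tree models, shap offers other explainers (for example, a kernel-based one) that trade exactness for generality.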


SHAP has been used in a variety of applications, including image classification, text classification, and time-series forecasting, and has proven effective at explaining model predictions across a wide range of domains.

Technical Resources

A Unified Approach to Interpreting Model Predictions by Scott M. Lundberg and Su-In Lee

Explaining machine learning models with SHAP and SAGE by Ian Covert