SHAP Is Not All You Need
A most annoying misconception in the world of machine learning interpretability
This post is
30% rant
50% comparison of SHAP and permutation feature importance
20% good news (announcement of the release date of my conformal prediction book 🥳)
I just got a paper rejection.
The paper itself fills a theoretical and conceptual gap: While ML interpretation techniques such as partial dependence plots and permutation feature importance primarily describe the model, many (data) scientists use them to study the underlying data and phenomenon. Our paper discusses what's needed to actually achieve the jump from model to data.
But that's not what's important today. Maybe I'll explain the paper in another post.
Today I want to talk about a part of the criticism we received for the paper. Here are two quotes:
"SHAP graphs also contain all the information that PDPs contain"
"PFI is less informative than SHAP"
The reviewer draws the conclusion that "I do not see much value in the analysis of somewhat 'inferior' feature-analysis methods [like PDP and PFI]".
If you take this statement to its full conclusion, everyone would have to stop working on PDP and PFI. And while we are at it, why not drop ALE plots, ICE plots, and counterfactual explanations and write yet another SHAP extension paper?
The reviewer's criticism is wrong on at least two levels:
With this attitude, academia would be condemned to always study the hyped and shiny. It discourages thoroughness and diminishes the chance that "bets" on other lines of research are tested out.
In the case of SHAP, the reviewer is plain wrong. PDP and PFI are not a subset of SHAP. They are different techniques with different goals. And while, for example, PFI and SHAP can both produce importance plots, they are not the same.
If this were the first time someone said that SHAP is all you need, it wouldn't be worth a post. But especially in "peer" review, the critique "You should be working on Shapley values / SHAP / LIME" was surprisingly common. And elsewhere, too, I have often seen people with the attitude that SHAP is all you need.
It's wrong, and I'll show why.
Short primer on SHAP and PFI
If you are already familiar with SHAP and PFI, just skip this section.
Let's start with permutation feature importance, because it is one of the simplest interpretability methods to explain. It's a model interpretation technique that assigns an importance value to each feature. The importance is computed as how much the model performance drops when we shuffle a feature. The more the performance drops (aka the loss increases), the more important the feature was for correct predictions.
Compute loss. Permute feature. Compute loss again. Compute difference. Simple.
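In code, that recipe is only a few lines. Here is a minimal sketch for a regression setting; `model`, `X`, and `y` are placeholders for a fitted model with a `predict` method, a pandas DataFrame of features, and the target:

```python
import numpy as np
import pandas as pd
from sklearn.metrics import mean_squared_error

def permutation_feature_importance(model, X: pd.DataFrame, y, n_repeats: int = 5, seed: int = 0):
    """Importance = average increase in loss after shuffling a feature."""
    rng = np.random.default_rng(seed)
    baseline = mean_squared_error(y, model.predict(X))  # compute loss
    importances = {}
    for col in X.columns:
        losses = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[col] = rng.permutation(X_perm[col].values)  # permute feature
            losses.append(mean_squared_error(y, model.predict(X_perm)))  # compute loss again
        importances[col] = np.mean(losses) - baseline  # compute difference
    return importances
```

scikit-learn ships a ready-made version of this idea as `sklearn.inspection.permutation_importance`.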
SHAP is a method to compute Shapley values for machine learning predictions. It's a so-called attribution method that fairly attributes the predicted value among the features. The computation is more complicated than for PFI, and the interpretation is somewhere between difficult and unclear.
SHAP produces many types of interpretation outputs: SHAP can be used to explain individual predictions (aka attributions). But if you compute Shapley values for all the instances in your data, you can also aggregate them. Then you get good-looking plots that show you some notion of feature dependence, some notion of feature importance, and some notion of feature interactions. All these notions are of course tied to the not-so-easy interpretation of Shapley values. For an overview of the plots, you can check out my SHAP Plots For Tabular Data Cheat Sheet.
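To make that concrete, here is a minimal sketch with the shap package; `model` and `X` are placeholders for a fitted tree-based regressor and a pandas DataFrame of features:

```python
import numpy as np
import shap

explainer = shap.TreeExplainer(model)   # fast path for tree-based models
shap_values = explainer.shap_values(X)  # one attribution per instance and per feature

# Local view: the attributions for a single prediction
print(shap_values[0])

# Global "SHAP importance": mean absolute attribution per feature
print(np.abs(shap_values).mean(axis=0))

# Aggregated plot over all instances
shap.summary_plot(shap_values, X)
```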
SHAP Is Not All You Need
Believing that SHAP is all you need is a typical pitfall: assuming that one method is the best for all interpretation contexts.
Let's walk through my favorite example of how SHAP importance can be inadequate.
An xgboost regression model was trained on simulated data. But all of the 20 features were simulated to have no relation with the target. In other words, any type of relationship that the model picks up is the result of overfitting. And for this experiment, we overfit the model on purpose because in this case PFI and SHAP will diverge quite drastically.
The example is from our paper on ML interpretability pitfalls:
Clearly, SHAP and PFI deviate in the bar plot above. PFI more or less shows that all 20 features are unimportant. But SHAP importance clearly shows that some of the features are important.
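If you want to reproduce the flavor of this experiment, a minimal sketch could look like the following (not the exact code from the paper; sample sizes and model settings are made up for illustration):

```python
import numpy as np
import shap
import xgboost
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n, p = 1000, 20

# Features and target are independent noise: there is nothing real to learn.
X_train, y_train = rng.normal(size=(n, p)), rng.normal(size=n)
X_test, y_test = rng.normal(size=(n, p)), rng.normal(size=n)

# Deliberately overfit: deep trees, many boosting rounds.
model = xgboost.XGBRegressor(n_estimators=500, max_depth=10, learning_rate=0.3)
model.fit(X_train, y_train)

# PFI on held-out data: permuting a feature barely changes the (already bad) loss.
pfi = permutation_importance(model, X_test, y_test,
                             scoring="neg_mean_squared_error",
                             n_repeats=10, random_state=0)
print("PFI:", pfi.importances_mean.round(2))

# SHAP importance: the overfit model's predictions do react to the features,
# so the mean absolute attributions are clearly non-zero.
shap_values = shap.TreeExplainer(model).shap_values(X_test)
print("SHAP:", np.abs(shap_values).mean(axis=0).round(2))
```

With a setup like this, the PFI values should scatter around zero while the mean absolute SHAP values come out clearly positive, which is the same qualitative picture as in the bar plot above.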
Which interpretation is the correct one?
Given the simulation setup where none of the features has a relation to the target, one could say that PFI results are correct and SHAP is wrong. But this answer is too simplistic. The choice of interpretation method really depends on what you use the importance values for. What is the question that you want to answer?
Because Shapley values are "correct" in the sense that they do what they are supposed to do: attribute the prediction to the features. And in this case, changing the "important" features truly changes the model prediction. So if your goal tends towards understanding how the model "behaves", SHAP might be the right choice.
But if you want to find out how relevant a feature was for the CORRECT prediction, SHAP is not a good option. Here PFI is the better choice since it links importance to model performance.
In a way, it boils down to the question of audit versus insight: SHAP importance is more about auditing how the model behaves. As in the simulated example, it's useful to see how model predictions are affected by features X4, X6, and so on. For that, SHAP importance is meaningful. But if your goal was to study the underlying data, then it's completely misleading. Here PFI gives you a better idea of what's really going on.

The two importance plots also work on different scales: SHAP importance may be interpreted on the scale of the prediction, because it is the average absolute change in prediction that was attributed to a feature. PFI is the average increase in loss when the feature information is destroyed (aka the feature is permuted), so PFI is on the scale of the loss.
A fallacy of the reviewer was to equate these different ideas of feature importance.
Unfortunately, this points towards a much larger issue in research on interpretability. The field is more method-driven than question-driven. We first develop methods, and then ask "What question do the methods really answer?"
For SHAP, it's not so easy to answer how the Shapley values are supposed to be interpreted.
Shapley values are also expensive to compute, especially if your model is not tree-based.
So there are plenty of reasons to prefer an "inferior" (as the reviewer put it) interpretation method over SHAP.
For another critique of Shapley values I recommend this post by Giles Hooker.
Enough SHAP and venting. I also have some good news.
Book Release Next Week: Introduction To Conformal Prediction With Python
This week I'm finishing the book on conformal prediction. The official release date is Tuesday the 14th. Valentine's Day. You can find it on Leanpub.
Here's a sneak peek of the cover. I adore this new mascot. Look how eager it is:
If you have subscribed to this newsletter, you will automatically receive a promotional code that will give you a good discount as an early buyer of the book.