
Explainability

Understand how your model makes decisions with SHAP (SHapley Additive exPlanations) values.

What is SHAP?

SHAP is a game-theoretic approach to explaining machine learning model predictions. It assigns each feature an importance value for a particular prediction, showing how much each feature contributed to pushing the prediction higher or lower.

Key Properties:

  • Local - Explains individual predictions
  • Global - Aggregates to show overall feature importance
  • Consistent - Mathematically grounded in Shapley values
  • Model-agnostic - Works with any ML model
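
The properties above follow directly from the Shapley value definition. As a minimal, self-contained sketch (using a hypothetical three-feature scorer and made-up baseline values, not a real model), the exact Shapley value of a feature is its marginal contribution averaged over all subsets of the other features:

```python
from itertools import combinations
from math import factorial

# Toy "model": a simple additive scorer over three features.
# (Hypothetical example; any black-box callable works the same way.)
def model(credit_score, income, age):
    return 0.001 * credit_score + 0.002 * income + 0.005 * age

FEATURES = ["credit_score", "income", "age"]
x = {"credit_score": 740, "income": 85, "age": 28}         # instance to explain
baseline = {"credit_score": 650, "income": 50, "age": 40}  # background values

def value(subset):
    """Model output with features in `subset` set to the instance,
    all others held at the baseline."""
    args = {f: (x[f] if f in subset else baseline[f]) for f in FEATURES}
    return model(**args)

def shapley(feature):
    """Exact Shapley value: the feature's marginal contribution,
    weighted over all subsets of the other features."""
    others = [f for f in FEATURES if f != feature]
    n = len(FEATURES)
    phi = 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            phi += weight * (value(set(subset) | {feature}) - value(set(subset)))
    return phi

phi = {f: shapley(f) for f in FEATURES}
# Consistency check (local accuracy): the contributions sum exactly to
# (prediction for x) - (prediction for the baseline).
assert abs(sum(phi.values()) - (value(set(FEATURES)) - value(set()))) < 1e-9
```

In practice the exact computation is exponential in the number of features; libraries such as `shap` use model-specific approximations, but the quantity being approximated is the one above.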

SHAP Summary Plot

The summary plot shows global feature importance, ranking features by their mean absolute SHAP value (the average magnitude of their impact on predictions).

Reading the Plot
[Summary plot: features ranked by mean |SHAP| value: credit_score 0.42, income 0.31, age 0.18]

🔴 Red dots (high values)

Push prediction toward positive outcome (approval)

🔵 Blue dots (low values)

Push prediction toward negative outcome (rejection)

Interpretation Example

If credit_score has the highest mean absolute SHAP value (0.42), credit score is the most influential feature in your model's decisions. Red dots on the right indicate that high credit scores increase approval likelihood.
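
That reading can be checked numerically: "red dots on the right" simply means a feature's value and its SHAP value are positively associated. A toy sketch with synthetic data and a linear model, where SHAP values are exactly `weight * (value - mean)` (all numbers invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: 200 applicants with credit scores around 680, and a
# linear scoring model (hypothetical weight) whose SHAP values are
# exactly weight * (value - mean value).
credit_score = rng.normal(680, 50, size=200)
weight = 0.004
shap_credit = weight * (credit_score - credit_score.mean())

# "Red dots on the right": high feature values come with positive SHAP
# values, i.e. value and SHAP value are positively correlated.
corr = np.corrcoef(credit_score, shap_credit)[0, 1]
print(f"feature/SHAP correlation: {corr:.2f}")  # ~1.0 for a linear model
```

For a real model the correlation will be weaker and can even reverse locally; the summary plot's color gradient is what reveals that pattern at a glance.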

Force Plots

Force plots show how features push individual predictions away from the base value.

Example Force Plot
[Force plot: base value 0.50 → final prediction 0.82; credit_score=740 pushes the prediction up (+0.22), age=28 pushes it down (-0.05); the axis runs from lower predictions on the left to higher predictions on the right]
Reading Force Plots
  • Base value - The model's average prediction over the background dataset
  • Green bars - Features pushing prediction higher
  • Red bars - Features pushing prediction lower
  • Bar width - Magnitude of feature's impact
  • Final prediction - Where all forces balance out
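
The last bullet is SHAP's local-accuracy property: the base value plus every feature's contribution equals the final prediction. Using the example figure's numbers, with an assumed +0.15 filler contribution for the remaining features (the figure only shows two):

```python
# Local accuracy: a force plot is just this identity drawn as bars.
# (Contribution values echo the example figure above; "income=85k" is
# an assumed filler so the forces balance.)
base_value = 0.50
contributions = {
    "credit_score=740": +0.22,
    "income=85k": +0.15,  # assumed, not from the figure
    "age=28": -0.05,
}
prediction = base_value + sum(contributions.values())
print(f"prediction = {prediction:.2f}")  # 0.82
```

If this identity does not hold for your explainer's output, you are likely mixing SHAP values computed in one output space (e.g. log-odds) with predictions in another (e.g. probabilities).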

Feature Importance

Aggregated SHAP values provide a clear ranking of feature importance.

Top Features
  • credit_score: 0.42
  • income: 0.31
  • age: 0.18
  • employment: 0.09
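
A ranking like the one above is typically computed as the mean absolute SHAP value per feature, then sorted. A minimal numpy sketch over a synthetic SHAP matrix (values and feature names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# shap_values: one row per prediction, one column per feature.
# (Synthetic matrix for illustration; in practice this comes from
# your explainer.)
features = ["credit_score", "income", "age", "employment"]
shap_values = rng.normal(0, [0.42, 0.31, 0.18, 0.09], size=(1000, 4))

# Global importance: mean absolute SHAP value per feature.
importance = np.abs(shap_values).mean(axis=0)
ranking = sorted(zip(features, importance), key=lambda t: -t[1])
for name, score in ranking:
    print(f"{name:13s} {score:.2f}")
```

Taking the absolute value first matters: positive and negative contributions would otherwise cancel, hiding features that push strongly in both directions.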
Use Cases
  • Model debugging - Identify unexpected feature impacts
  • Feature engineering - Focus on high-impact features
  • Stakeholder communication - Explain model behavior
  • Regulatory compliance - Demonstrate transparency

Dependence Plots

Dependence plots show how a feature's SHAP value changes as the feature's value changes, revealing interaction effects.

Example: Credit Score Impact
[Dependence plot: SHAP value on the y-axis vs. credit score on the x-axis]

This plot shows that as credit score increases, its positive impact on approval predictions grows non-linearly, with the strongest effect above 700.
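
A crude text version of such a plot can be produced by averaging SHAP values within score bands. This sketch uses synthetic data with a soft threshold at 700 to mimic the non-linearity described above (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic dependence data: the SHAP value for credit_score follows a
# logistic curve that steepens past 700 (hypothetical shape).
credit_score = rng.uniform(500, 850, size=2000)
shap_credit = 0.4 / (1 + np.exp(-(credit_score - 700) / 25))

# Average SHAP value within coarse score bands: a text-mode
# dependence plot.
for lo in range(500, 850, 100):
    band = shap_credit[(credit_score >= lo) & (credit_score < lo + 100)]
    print(f"{lo}-{lo + 100}: mean SHAP {band.mean():+.2f}")
```

The band means climb slowly below 700 and jump above it, which is exactly the pattern the real scatter plot would show point by point.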

Best Practices

✓ Do
  • Use summary plots for overall model behavior
  • Check force plots for contested decisions
  • Validate explanations with domain experts
  • Monitor SHAP values for drift over time
✗ Don't
  • Confuse correlation with causation
  • Ignore feature interactions
  • Over-interpret small SHAP values
  • Skip validation on new data
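
One of the practices above, monitoring SHAP values for drift, can be sketched as a comparison of per-feature mean |SHAP| between a reference window and the latest window. Synthetic data below, and the 30% threshold is an arbitrary choice, not a recommendation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Mean |SHAP| per feature in two windows. (Synthetic: in practice each
# window is the SHAP matrix for predictions made in that period.)
features = ["credit_score", "income", "age", "employment"]
reference = np.abs(rng.normal(0, [0.42, 0.31, 0.18, 0.09], size=(5000, 4))).mean(axis=0)
latest = np.abs(rng.normal(0, [0.20, 0.45, 0.18, 0.09], size=(5000, 4))).mean(axis=0)

# Flag features whose importance shifted by more than 30% relative.
for name, ref, cur in zip(features, reference, latest):
    change = (cur - ref) / ref
    flag = "DRIFT" if abs(change) > 0.30 else "ok"
    print(f"{name:13s} {ref:.2f} -> {cur:.2f}  {change:+.0%}  {flag}")
```

A shift in which features drive predictions often surfaces before accuracy metrics degrade, which is what makes this check worth automating.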