Model Explainability
The Black Box Problem
Your model predicts that a loan should be denied. The customer asks: “Why?” You say: “The neural network decided.” That answer is not acceptable in healthcare, finance, law, and many other domains. We need to explain our models.
Why Explainability Matters
| Domain | Why It’s Required |
|---|---|
| Healthcare | Doctors need to validate AI recommendations |
| Finance | Regulations require explainable credit decisions |
| Legal | Right to explanation in GDPR |
| Hiring | Avoid discrimination and bias |
| Insurance | Justify pricing decisions |
Types of Explainability
Global Explainability
- How does the model work overall?
- What features matter most in general?
Local Explainability
- Why did the model make THIS specific prediction?
- What drove this particular decision?
Method 1: Feature Importance
For Tree-Based Models
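Tree ensembles expose impurity-based importances directly. A minimal sketch on a hypothetical loan-style dataset (the feature names and the RandomForestClassifier choice are illustrative assumptions; later snippets in this lesson reuse these variables):

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical loan-approval data, purely for illustration
feature_names = ["income", "credit_score", "age", "debt_ratio", "num_accounts", "years_employed"]
X, y = make_classification(n_samples=2000, n_features=6, n_informative=4, random_state=0)
X = pd.DataFrame(X, columns=feature_names)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Impurity-based feature importances are built into tree ensembles
importances = pd.Series(model.feature_importances_, index=feature_names)
print(importances.sort_values(ascending=False))
```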
For Linear Models
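For linear models, the coefficients play the same role. A sketch reusing the data above, with features standardized so that coefficient magnitudes are comparable:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Scale features so coefficient magnitudes can be compared across features
linear_model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)

# The sign shows the direction of the effect, the magnitude its strength
coefs = pd.Series(linear_model[-1].coef_.ravel(), index=feature_names)
print(coefs.sort_values(key=abs, ascending=False))
```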
Method 2: Permutation Importance
A model-agnostic method that works for any model.
How permutation importance works (see the sketch after this list):
- Baseline: Get model accuracy on test set
- Shuffle one feature’s values randomly
- Measure accuracy drop
- Bigger drop = More important feature
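A sketch using scikit-learn's permutation_importance, reusing the model and hold-out split from the earlier snippets:

```python
from sklearn.inspection import permutation_importance

# Shuffle each feature n_repeats times and record the drop in test accuracy
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

perm = pd.Series(result.importances_mean, index=feature_names)
print(perm.sort_values(ascending=False))  # larger drop in score = more important
```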
Method 3: SHAP Values
SHAP (SHapley Additive exPlanations) provides both global and local explanations.
Global: Summary Plot
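A sketch with the shap package's Explanation API, reusing the model and data from above (the class-dimension handling is a defensive assumption; whether it is needed depends on the model and shap version):

```python
import shap

explainer = shap.Explainer(model)   # dispatches to TreeExplainer for tree ensembles
shap_values = explainer(X_test)     # Explanation object: one row per test sample

# Some binary classifiers yield one set of SHAP values per class;
# keep the values for the positive ("approved") class if so.
if len(shap_values.shape) == 3:
    shap_values = shap_values[:, :, 1]

# Beeswarm summary: global importance plus direction of effect for every sample
shap.plots.beeswarm(shap_values)
```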
Global: Bar Plot
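The bar variant collapses the same values into mean absolute SHAP per feature:

```python
# Global importance as mean(|SHAP value|) per feature
shap.plots.bar(shap_values)
```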
Local: Individual Prediction Explanation
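To explain one prediction, index a single row of the Explanation object; printing the contributions or drawing a force plot are both common views:

```python
# Contributions of each feature to the first test-set prediction
sample = shap_values[0]
print("base value:", sample.base_values)
print(pd.Series(sample.values, index=feature_names).sort_values(key=abs, ascending=False))

# Interactive force plot (renders in notebooks after shap.initjs())
shap.initjs()
shap.plots.force(sample)
```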
Local: Waterfall Plot
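The waterfall view shows the same single-prediction breakdown step by step:

```python
# How each feature moves this prediction from the base value to the final output
shap.plots.waterfall(shap_values[0])
```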
Method 4: LIME (Local Explanations)
LIME explains individual predictions by fitting a simple, interpretable model that approximates the complex model locally. A sketch follows the comparison table below.
LIME vs SHAP
| Aspect | LIME | SHAP |
|---|---|---|
| Method | Local linear approximation | Game theory (Shapley values) |
| Consistency | Can vary between runs | Mathematically consistent |
| Speed | Fast | Slower for many samples |
| Global | No (local only) | Yes (aggregate local) |
| Accuracy | Approximate | Exact (for tree models) |
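A sketch with the lime package, reusing the model and data from earlier (the class names are hypothetical labels for the loan example):

```python
from lime.lime_tabular import LimeTabularExplainer

lime_explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=feature_names,
    class_names=["denied", "approved"],  # hypothetical labels
    mode="classification",
)

# Fit a local linear surrogate around one test sample and report its weights
explanation = lime_explainer.explain_instance(
    X_test.iloc[0].values, model.predict_proba, num_features=5
)
print(explanation.as_list())  # [(feature condition, weight), ...]
```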
Method 5: Partial Dependence Plots
Show how a feature affects predictions, on average.
2D Interaction Plot
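A scikit-learn sketch covering both single-feature plots and a two-feature interaction surface (the feature pairing is an illustrative choice):

```python
from sklearn.inspection import PartialDependenceDisplay

# Average effect of single features, plus a two-feature interaction plot
PartialDependenceDisplay.from_estimator(
    model,
    X_test,
    features=["income", "credit_score", ("income", "credit_score")],
)
```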
Method 6: ICE Plots
Individual Conditional Expectation (ICE) - like a PDP, but with one curve per individual sample instead of the average. In scikit-learn, the single-feature PartialDependenceDisplay call sketched above draws ICE curves when you pass kind="individual" (or kind="both").
Practical: Explaining a Loan Decision
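One way to tie the methods together, reusing the SHAP values computed above (the 0.5 threshold and the wording are illustrative assumptions):

```python
# Pick one "customer" from the test set and explain the model's decision
i = 0
proba = model.predict_proba(X_test.iloc[[i]])[0, 1]
decision = "approved" if proba >= 0.5 else "denied"
print(f"Prediction: {decision} (P(approved) = {proba:.2f})")

# Top reasons, ranked by the size of their SHAP contributions
contributions = pd.Series(shap_values[i].values, index=feature_names)
for feature, value in contributions.sort_values(key=abs, ascending=False).head(3).items():
    direction = "pushed toward approval" if value > 0 else "pushed toward denial"
    print(f"- {feature} = {X_test.iloc[i][feature]:.2f}: {direction} ({value:+.3f})")
```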
Building an Explanation Report
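A minimal sketch of a stakeholder report, here just a Markdown-flavored string assembled from the pieces above (the layout and wording are placeholders):

```python
def build_explanation_report(i):
    """Combine global and local explanations for one prediction into a report."""
    proba = model.predict_proba(X_test.iloc[[i]])[0, 1]
    local = pd.Series(shap_values[i].values, index=feature_names)
    global_imp = pd.Series(model.feature_importances_, index=feature_names)

    lines = [
        "# Loan Decision Explanation",
        f"Decision: {'approved' if proba >= 0.5 else 'denied'} "
        f"(P(approved) = {proba:.2f})",
        "",
        "## Top global features",
        global_imp.sort_values(ascending=False).head(5).to_string(),
        "",
        "## Top drivers of this decision (SHAP)",
        local.sort_values(key=abs, ascending=False).head(5).to_string(),
    ]
    return "\n".join(lines)

print(build_explanation_report(0))
```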
Key Takeaways
- Multiple methods: use feature importance, SHAP, LIME, and PDPs together.
- Global vs local: global explanations show overall patterns; local explanations justify individual decisions.
- SHAP is the gold standard: mathematically grounded and works for any model.
- Document explanations: generate reports for stakeholders.
What’s Next?
Now that you can explain your models, let’s learn how to build robust ML pipelines!
Continue to ML Pipelines
Build reproducible, production-ready ML workflows