model-explainer (verified)

Model interpretability and explainability using SHAP, LIME, feature importance, and partial dependence plots. Activates for "explain model", "model interpretability", "SHAP", "LIME", "feature importance", "why prediction", "model explanation". Generates human-readable explanations for model predictions, critical for trust, debugging, and regulatory compliance.

Marketplace: specweave (anton-abyzov/specweave)
Plugin: sw-ml (development)
Repository: anton-abyzov/specweave (27 stars)
Path: plugins/specweave-ml/skills/model-explainer/SKILL.md

Last Verified: January 25, 2026

Install Skill

```
npx add-skill https://github.com/anton-abyzov/specweave/blob/main/plugins/specweave-ml/skills/model-explainer/SKILL.md -a claude-code --skill model-explainer
```

Installation paths:

Claude: .claude/skills/model-explainer/

Instructions

# Model Explainer

## Overview

Makes black-box models interpretable. Explains why models make specific predictions, which features matter most, and how features interact. Critical for trust, debugging, and regulatory compliance.

## Why Explainability Matters

- **Trust**: Stakeholders trust models they understand
- **Debugging**: Find model weaknesses and biases
- **Compliance**: GDPR, fair lending laws require explanations
- **Improvement**: Understand where the model falls short and what to improve
- **Safety**: Detect when a model is likely to fail

## Explanation Types

### 1. Global Explanations (Model-Level)

**Feature Importance**:
```python
from specweave import explain_model

explainer = explain_model(
    model=trained_model,
    X_train=X_train,
    increment="0042"
)

# Global feature importance
importance = explainer.feature_importance()
```

Output:
```
Top Features (Global):
1. transaction_amount (importance: 0.35)
2. user_history_days (importance: 0.22)
3. merchant_reputation (importance: 0.18)
4. time_since_last_transaction (importance: 0.15)
5. device_type (importance: 0.10)
```
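
These numbers are illustrative. For intuition, one model-agnostic way to produce such a ranking is permutation importance, sketched below with scikit-learn (an assumption for illustration, not necessarily specweave's internals):

```python
# Sketch: permutation importance with scikit-learn (illustrative only;
# not necessarily how specweave computes its ranking).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle one feature at a time; the score drop estimates that feature's importance.
result = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f}")
```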

**Partial Dependence Plots**:
```python
# How does a feature affect the prediction?
explainer.partial_dependence(feature="transaction_amount")
```
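
A partial dependence plot sweeps one feature across its range while averaging the model's predictions over the rest of the data, showing that feature's marginal effect. A minimal self-contained sketch of the scikit-learn equivalent (an assumption for illustration; the synthetic data and column names are hypothetical):

```python
# Sketch: scikit-learn's partial dependence display (illustrative; not
# specweave's implementation).
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X = pd.DataFrame(X, columns=[f"f{i}" for i in range(5)])

clf = GradientBoostingClassifier(random_state=0).fit(X, y)

# Average prediction as "f0" sweeps its range, marginalizing over other features.
PartialDependenceDisplay.from_estimator(clf, X, features=["f0"])
plt.show()
```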

### 2. Local Explanations (Prediction-Level)

**SHAP Values**:
```python
# Explain a single prediction
explanation = explainer.explain_prediction(X_sample)
```

Output:
```
Prediction: FRAUD (probability: 0.92)

Why?
+ transaction_amount=5000 → +0.45 (high amount increases fraud risk)
+ user_history_days=2 → +0.30 (new user increases risk)
+ merchant_reputation=low → +0.15 (suspicious merchant)
- time_since_last_transaction=1hr → -0.08 (recent activity normal)

Base prediction: 0.10
Final prediction: 0.92
```
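
Note that the output is additive: 0.10 + 0.45 + 0.30 + 0.15 - 0.08 = 0.92, which is the defining property of SHAP values. A minimal sketch with the shap library itself (an assumption: `explain_prediction` presumably wraps something like this, given a fitted `model` and a row `X_sample`):

```python
# Sketch with the shap library (an assumption about what specweave wraps).
import shap

shap_explainer = shap.Explainer(model)  # auto-selects an algorithm for the model type
sv = shap_explainer(X_sample)

# Additivity: base value plus per-feature contributions equals the model output.
# (For multi-output models, index the class dimension of interest first.)
print("base:", sv.base_values[0])
print("contributions:", sv.values[0])
print("reconstructed:", sv.base_values[0] + sv.values[0].sum())
```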

**LIME Explanations**:
```python
# Local interpretable model
lime_exp = explainer.lime_explanation(X_sample)
```
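
LIME works by fitting a simple surrogate model (typically a sparse linear model) to the black-box model's predictions on perturbed copies of the instance, so the surrogate's weights explain the local decision. A sketch with the lime package directly (an assumption about what `lime_explanation` wraps; it reuses `model`, `X_train`, and `X_sample` from above as pandas objects, and the class names are hypothetical):

```python
# Sketch with the lime package (an assumption about specweave's internals).
from lime.lime_tabular import LimeTabularExplainer

lime_explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=list(X_train.columns),
    class_names=["legit", "fraud"],  # hypothetical labels
    mode="classification",
)
exp = lime_explainer.explain_instance(
    X_sample.values[0], model.predict_proba, num_features=5
)
print(exp.as_list())  # [(feature condition, local linear weight), ...]
```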

## Usage in SpecWeave

```python
from specweave import ModelExplainer

# Create explainer
explainer = ModelExplainer(
    model=model,
    X_train=X_train
