# Systematic Debugging (Test-First)
Fix bugs by writing a failing test first, then having subagents race to fix it.
## Core Principle
**Don't start by trying to fix the bug. Start by writing a test that reproduces it.**
A passing test is the only proof that a fix works. Subagents compete to make the test pass.
## Overview
When a work item has `Type: bugfix`, this skill:
1. Reads the bug report from the design doc
2. Writes a failing test that reproduces the bug
3. Verifies the test fails for the right reason
4. Spawns parallel subagents to attempt fixes
5. First subagent to make the test pass wins
6. Updates the design doc with the fix
7. Calls complete_skill to proceed
## Step 1: Get Current Item Context
Read the design doc to get the current bugfix item:
```
Tool: mcp__plugin_mermaid-collab_mermaid__get_document
Args: { "project": "<cwd>", "session": "<session>", "id": "design" }
```
Find the work item marked as current (from session state's `currentItem`).
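As a sketch of that lookup, assuming the design doc yields a list of work items and the session state carries a `currentItem` id (the shapes below are hypothetical; the real response format is defined by the mermaid-collab plugin):

```typescript
// Hypothetical shapes, not the plugin's actual schema.
interface WorkItem { id: string; type: string; title: string; }
interface SessionState { currentItem: string; }

// Pick the work item the session state points at.
function findCurrentItem(items: WorkItem[], state: SessionState): WorkItem | undefined {
  return items.find((item) => item.id === state.currentItem);
}

const items: WorkItem[] = [
  { id: "wi-1", type: "feature", title: "Add export" },
  { id: "wi-2", type: "bugfix", title: "Empty email accepted" },
];
console.log(findCurrentItem(items, { currentItem: "wi-2" })?.title);
```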
## Step 2: Write a Failing Test
Before any fix attempts, write a test that reproduces the bug.
**Requirements:**
- Test must fail with the current code
- Test must fail for the right reason (bug behavior, not syntax error)
- Test must be minimal, isolating only the buggy behavior
- Test name should describe the expected behavior: `test('rejects empty email', ...)`
**Process:**
1. Analyze the bug report to understand expected vs actual behavior
2. Find the appropriate test file (or create one)
3. Write a test that asserts the expected behavior
4. Run the test to confirm it fails
```bash
npm run test:ci -- path/to/test.test.ts
```
**Escape hatch:** If after 3 attempts you cannot write a reproducing test (timing issues, environment-specific, etc.), document why and fall back to manual investigation. Ask the user for guidance.
## Step 3: Verify Test Fails Correctly
**MANDATORY. Never skip.**
The test must:
- Fail (not error)
- Fail because of the bug (not typos or missing imports)
- Show the actual buggy behavior