# Subagent-Driven Development (Shannon-Enhanced)
## Overview
**Execute plans with fresh subagent per task + quantitative quality gates.**
Each task: Subagent implementation → Code review → Quality score → Advance if > 0.80.
Shannon enhancement: Numerical quality scoring, pattern learning from task history, MCP work tracking.
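As an orientation aid, here is a minimal sketch of that loop in Python. The `implement` and `review` callables stand in for the subagent dispatch and the code-review gate described below; none of these names are part of the framework.
```python
from typing import Callable, Sequence

def run_plan(tasks: Sequence[str],
             implement: Callable[[str], dict],
             review: Callable[[dict], float],
             quality_gate: float = 0.80) -> int:
    """Per-task loop: implement, review, score, advance only if score > gate.

    Returns the number of tasks completed (i.e. that passed the gate).
    """
    completed = 0
    for task in tasks:
        report = implement(task)   # fresh subagent implements the task
        score = review(report)     # code review + quality score (0.00-1.00)
        if score <= quality_gate:
            break                  # gate failed: rework before advancing
        completed += 1             # gate passed: advance to the next task
    return completed
```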
## When to Use
**Same-session task execution:**
- Staying in this session
- Tasks mostly independent
- Want continuous progress with quality gates
- Plan already written and validated
**Don't use when:**
- Need to review plan first
- Tasks are tightly coupled
- Plan needs revision
## Quality Scoring (0.00-1.00)
**Per-task scoring:**
```
task_quality_score = (
    implementation_completeness * 0.40 +
    test_coverage * 0.35 +
    code_review_score * 0.25
)
```
**Thresholds:**
- 0.00-0.50: Needs major rework (Critical issues)
- 0.51-0.80: Has issues, fixable (Important issues)
- 0.81-0.95: Good, minor cleanup (Minor issues)
- 0.96-1.00: Excellent, ready to advance
**Gate rule:** Only advance to next task if task_quality_score > 0.80
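For illustration, a minimal sketch of the score and gate check in Python (function names are illustrative; the three inputs are assumed to already be normalized to 0.00-1.00):
```python
def task_quality_score(implementation_completeness: float,
                       test_coverage: float,
                       code_review_score: float) -> float:
    """Weighted per-task quality score; each input is expected in [0.0, 1.0]."""
    return (implementation_completeness * 0.40
            + test_coverage * 0.35
            + code_review_score * 0.25)

def may_advance(score: float) -> bool:
    """Gate rule: only advance to the next task if the score exceeds 0.80."""
    return score > 0.80

# Example: complete implementation, 90% test coverage, review score 0.80
score = task_quality_score(1.00, 0.90, 0.80)   # = 0.915
assert may_advance(score)                      # 0.915 > 0.80 -> advance
```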
## The Process
### 1. Load Plan & Setup Tracking
```python
# Serena metric initialization (N and "X hours" are plan-specific placeholders)
plan_metadata = {
    "plan_file": "path/to/plan.md",
    "total_tasks": N,
    "tasks_completed": 0,
    "cumulative_quality_score": 0.0,
    "estimated_completion_time": "X hours"
}
```
Create a TodoWrite list containing every task, each starting in the "pending" state.
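If you script this setup, it might look roughly like the sketch below. The `load_plan` parser and the `todo` variable are hypothetical: the plan format is whatever your plan file actually uses, and the pending list lives in TodoWrite rather than a local structure.
```python
from pathlib import Path

def load_plan(plan_file: str) -> list[str]:
    """Hypothetical parser: treat each '## Task' heading in the plan as one task."""
    lines = Path(plan_file).read_text().splitlines()
    return [line.removeprefix("## Task").strip()
            for line in lines if line.startswith("## Task")]

tasks = load_plan("path/to/plan.md")
plan_metadata = {
    "plan_file": "path/to/plan.md",
    "total_tasks": len(tasks),
    "tasks_completed": 0,
    "cumulative_quality_score": 0.0,
}
# Mirrors the TodoWrite list: every task starts in the "pending" state.
todo = [{"task": name, "status": "pending"} for name in tasks]
```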
### 2. Execute Task (Fresh Subagent)
Mark the task as in_progress, then dispatch a fresh subagent with this prompt:
```markdown
**Task N:** [task_name]
Implementation scope: [read task N from plan]
Requirements:
1. Implement exactly what task specifies
2. Write tests (TDD if task requires)
3. Verify all tests pass
4. Commit with conventional message
5. Report: what you implemented, test results, files changed
Work directory: [path]
```
Subagent reports implementation summary.
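If the dispatch prompt is assembled programmatically, it is a plain fill-in of the template above. The sketch below uses hypothetical example values, not a framework API:
```python
DISPATCH_TEMPLATE = """\
**Task {n}:** {task_name}
Implementation scope: {scope}
Requirements:
1. Implement exactly what the task specifies
2. Write tests (TDD if the task requires)
3. Verify all tests pass
4. Commit with a conventional message
5. Report: what you implemented, test results, files changed
Work directory: {work_dir}
"""

# Hypothetical example values; in practice these come from task N of the plan.
prompt = DISPATCH_TEMPLATE.format(
    n=3,
    task_name="example task name",
    scope="(paste task 3's scope from the plan here)",
    work_dir="/path/to/worktree",
)
```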
### 3. Code Review Gate
**Dispatch code-reviewer with metrics:**
```python
# Review metrics (Serena)
review_context = {
    "task_number": N,
    "what_implemented": "