Implement quality gates, user approval, iteration loops, and test-driven development. Use when validating with users, implementing feedback loops, classifying issue severity, running test-driven loops, or building multi-iteration workflows. Trigger keywords - "approval", "user validation", "iteration", "feedback loop", "severity", "test-driven", "TDD", "quality gate", "consensus".
January 20, 2026
```
npx add-skill https://github.com/involvex/involvex-claude-marketplace/blob/main/plugins/orchestration/skills/quality-gates/SKILL.md -a claude-code --skill quality-gates
```

Installation paths:
`.claude/skills/quality-gates/`

# Quality Gates

**Version:** 1.0.0
**Purpose:** Patterns for approval gates, iteration loops, and quality validation in multi-agent workflows
**Status:** Production Ready

## Overview

Quality gates are checkpoints in workflows where execution pauses for validation before proceeding. They prevent low-quality work from advancing through the pipeline and ensure user expectations are met.

This skill provides battle-tested patterns for:

- **User approval gates** (cost gates, quality gates, final acceptance)
- **Iteration loops** (automated refinement until a quality threshold is met)
- **Issue severity classification** (CRITICAL, HIGH, MEDIUM, LOW)
- **Multi-reviewer consensus** (unanimous vs. majority agreement)
- **Feedback loops** (user reports issues → agent fixes → user validates)
- **Test-driven development loops** (write tests → run → analyze failures → fix → repeat)

Quality gates transform "fire and forget" workflows into **iterative refinement systems** that consistently produce high-quality results.

## Core Patterns

### Pattern 1: User Approval Gates

**When to Ask for Approval:**

Use approval gates for:

- **Cost gates:** Before expensive operations (multi-model review, large-scale refactoring)
- **Quality gates:** Before proceeding to the next phase (design validation before implementation)
- **Final validation:** Before completing a workflow (user acceptance testing)
- **Irreversible operations:** Before destructive actions (file deletion, database migrations)

**How to Present Approval:**

```
Good approval prompt:

"You selected 5 AI models for code review:
- Claude Sonnet (embedded, free)
- Grok Code Fast (external, $0.002)
- Gemini 2.5 Flash (external, $0.001)
- GPT-5 Codex (external, $0.004)
- DeepSeek Coder (external, $0.001)

Estimated total cost: $0.008 ($0.005 - $0.010)
Expected duration: ~5 minutes

Proceed with multi-model review? (Yes/No/Cancel)"
```

Why it works:

✓ Clear context (what will happen)
✓ Cost transparency (range, not single number)
✓ T
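The approval prompt above can be assembled programmatically rather than written by hand each time. The sketch below is illustrative, not part of the skill's API: the function name `format_approval_prompt`, the `(name, source, cost)` tuple shape, and the ±25% cost range are assumptions chosen to reproduce the structure of the example.

```python
def format_approval_prompt(models, duration_minutes):
    """Build a cost-transparent approval prompt.

    models: list of (name, source, cost_usd) tuples; free models use 0.0.
    duration_minutes: rough expected wall-clock time.
    """
    lines = [f'You selected {len(models)} AI models for code review:']
    total = 0.0
    for name, source, cost in models:
        total += cost
        price = "free" if cost == 0 else f"${cost:.3f}"
        lines.append(f"- {name} ({source}, {price})")
    # Present a range (here ±25%, an assumed margin) rather than a
    # single point estimate, so the user sees the uncertainty.
    lines.append(
        f"Estimated total cost: ${total:.3f} "
        f"(${total * 0.75:.3f} - ${total * 1.25:.3f})"
    )
    lines.append(f"Expected duration: ~{duration_minutes} minutes")
    lines.append("Proceed with multi-model review? (Yes/No/Cancel)")
    return "\n".join(lines)


models = [
    ("Claude Sonnet", "embedded", 0.0),
    ("Grok Code Fast", "external", 0.002),
    ("Gemini 2.5 Flash", "external", 0.001),
    ("GPT-5 Codex", "external", 0.004),
    ("DeepSeek Coder", "external", 0.001),
]
print(format_approval_prompt(models, 5))
```

Keeping prompt construction in one helper means every gate in the workflow presents cost and duration the same way, which makes approvals easier for users to scan.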