a.k.a. Agent Skills
Discover skills for AI coding agents. Works with Claude Code, OpenAI Codex, Gemini CLI, Cursor, and more.
Use when improving general prompts for structure, examples, and constraints.
Use when managing perf baselines, consolidating results, or comparing versions. Ensures one baseline JSON per version.
Use when validating task completion before shipping. Runs tests, build, and requirement checks. Returns pass/fail with fix instructions.
Use when coordinating multiple enhancers for /enhance command. Runs analyzers in parallel and produces unified report.
Use when the user asks about "plan drift", "reality check", "comparing docs to code", "project state analysis", "roadmap alignment", or "implementation gaps", or needs guidance on identifying discrepancies between documented plans and the actual implementation state.
Use when profiling CPU/memory hot paths, generating flame graphs, or capturing JFR/perf evidence.
Use when updating documentation related to recent code changes. Finds related docs, updates CHANGELOG, and delegates simple fixes to haiku.
Use when generating the unified enhancement report from aggregated findings. Called by orchestrator after all enhancers complete.
Use when mapping code paths, entrypoints, and likely hot files before profiling.
Use when analyzing doc issues after code changes. Checks for outdated references, stale examples, and missing updates.
Use when managing CHANGELOG entries. Checks for undocumented commits, then formats and applies new entries safely.
Use when finding docs affected by code changes. Locates documentation files related to changed source files using lib/collectors/docs-patterns.
Use when the user asks to "deep review the code", "thorough code review", "multi-pass review", or when orchestrating the Phase 9 review loop. Provides review pass definitions (code quality, security, performance, test coverage, specialists), signal detection patterns, and iteration algorithms.
Use when running performance benchmarks, establishing baselines, or validating regressions with sequential runs. Enforces 60s minimum runs (30s only for binary search) and no parallel benchmarks.
Use when improving documentation structure, accuracy, and RAG readiness.
Use when appending structured perf investigation notes and evidence.
Use when running controlled perf experiments to validate hypotheses.
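Each entry above is a skill's description string. As a rough sketch of how such a description is typically declared (field names follow Claude Code's Agent Skills convention; the skill name and body below are illustrative, not taken from this catalog), a skill lives in a `SKILL.md` file whose YAML frontmatter carries the name and the "Use when…" trigger description:

```markdown
---
name: perf-baselines
description: Use when managing perf baselines, consolidating results, or comparing versions. Ensures one baseline JSON per version.
---

# Perf Baselines

Instructions the agent follows once this skill is triggered go here:
steps to run, files to read, and output format to produce.
```

The agent matches a user request against the `description` field to decide whether to load the skill, so descriptions are written as concrete "Use when…" trigger phrases rather than general summaries.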