Agent Skills
Discover skills for AI coding agents. Works with Claude Code, OpenAI Codex, Gemini CLI, Cursor, and more.
Use when improving agent prompts, frontmatter, and tool restrictions.
Use when improving documentation structure, accuracy, and RAG readiness.
Use when mapping code paths, entrypoints, and likely hot files before profiling.
Use when reviewing hooks for safety, timeouts, and correct frontmatter.
Use when appending structured perf investigation notes and evidence.
Use when user asks to "deep review the code", "thorough code review", "multi-pass review", or when orchestrating Phase 9 review loop. Provides review pass definitions (code quality, security, performance, test coverage, specialists), signal detection patterns, and iteration algorithms.
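The iteration algorithm behind a multi-pass review can be sketched as follows. This is a minimal illustration, not the skill's actual implementation; the pass names and the `run_pass` callback are hypothetical stand-ins for whatever the orchestrator provides.

```python
# Hypothetical sketch of a multi-pass review loop: sweep every review
# pass, collect findings, and stop once a full sweep reports no new signals.
REVIEW_PASSES = ["code_quality", "security", "performance", "test_coverage"]

def review_loop(run_pass, max_iterations=3):
    """run_pass(name) -> list of findings; iterate until a sweep is clean."""
    all_findings = []
    for _ in range(max_iterations):
        # One full sweep across every pass in order.
        new = [f for name in REVIEW_PASSES for f in run_pass(name)]
        if not new:
            break  # clean sweep: no new signals detected, stop iterating
        all_findings.extend(new)
    return all_findings
```

The `max_iterations` cap keeps the loop from cycling forever when fixes keep surfacing new findings.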
Use when generating performance hypotheses backed by git history and code evidence.
Use when analyzing plugin structures, MCP tools, and plugin security patterns.
Use when running controlled perf experiments to validate hypotheses.
Use when running performance benchmarks, establishing baselines, or validating regressions with sequential runs. Enforces 60s minimum runs (30s only for binary search) and no parallel benchmarks.
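The run-length and no-parallelism rules above can be enforced mechanically. A minimal sketch, assuming a hypothetical benchmark command; the function names and thresholds mirror the description, not any real implementation:

```python
import subprocess
import time

MIN_RUN_SECONDS = 60      # minimum for normal baseline/validation runs
BISECT_RUN_SECONDS = 30   # shorter runs allowed only during binary search

def run_benchmark(cmd, min_seconds=MIN_RUN_SECONDS):
    """Run one benchmark to completion (sequentially, never in parallel)
    and reject any run shorter than the enforced minimum duration."""
    start = time.monotonic()
    subprocess.run(cmd, check=True)  # hypothetical benchmark command
    elapsed = time.monotonic() - start
    if elapsed < min_seconds:
        raise ValueError(
            f"run lasted {elapsed:.0f}s, below the {min_seconds}s minimum"
        )
    return elapsed
```

Calling `run_benchmark` in a plain loop, rather than spawning runs concurrently, is what keeps results comparable between baseline and candidate.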
Use when updating documentation related to recent code changes. Finds related docs, updates CHANGELOG, and delegates simple fixes to haiku.
Use when managing CHANGELOG entries. Checks for undocumented commits, then formats and applies new entries safely.
Use when analyzing doc issues after code changes. Checks for outdated references, stale examples, and missing updates.
Use when finding docs affected by code changes. Uses lib/collectors/docs-patterns to locate documentation files related to changed source files.
Use when the user asks about "plan drift", "reality check", "comparing docs to code", "project state analysis", "roadmap alignment", or "implementation gaps", or needs guidance on identifying discrepancies between documented plans and actual implementation state.
Use when coordinating multiple enhancers for /enhance command. Runs analyzers in parallel and produces unified report.
Use when validating task completion before shipping. Runs tests, build, and requirement checks. Returns pass/fail with fix instructions.
Use when improving general prompts for structure, examples, and constraints.
Use when profiling CPU/memory hot paths, generating flame graphs, or capturing JFR/perf evidence.
Use when the user wants to understand usage patterns, optimize their workflow, identify automation opportunities, or check whether they are following best practices. Analyzes Claude Code conversation history to surface patterns, common mistakes, and opportunities for workflow improvement.