A skill for improving prompts by applying general LLM/agent best practices. When the user provides a prompt, this skill outputs an improved version, identifies missing information, and provides specific improvement points. Use when the user asks to "improve this prompt", "review this prompt", or "make this prompt better".
To install for Claude Code:

```sh
npx add-skill https://github.com/dotneet/claude-code-marketplace/blob/main/prompt/skills/prompt-improver/SKILL.md -a claude-code --skill prompt-improver
```

Installation path:
.claude/skills/prompt-improver/

# Prompt Improver

## Overview

A skill that analyzes and improves prompts based on general LLM/agent best practices. It focuses on verifiability, clear scope, explicit constraints, and context economy so the agent can execute with minimal back-and-forth.

If you are running in Claude Code, also read `references/claude.md` and apply the additional Claude-specific techniques. If you are running in Codex CLI, also read `references/codex.md` and apply the additional Codex-specific techniques.

When the input is a document that instructs an agent (e.g., plan files, AGENTS.md, system instruction docs), treat the document as the improvement target: identify issues, propose concrete improvements, and include a revised draft when helpful.

## Workflow

### Step 0: Classify Task and Complexity

Classify the task and decide whether an explicit exploration/planning phase should be recommended:

- Task type: bugfix, feature, refactor, research, UI/visual, docs, ops
- Complexity: single-file/small change vs multi-file/uncertain impact
- Risk: data safety, security, compatibility, performance
- Input type: prompt vs agent-instruction document (plan files, AGENTS.md, system instruction docs)

If the task is complex or ambiguous, the improved prompt should explicitly request an exploration/planning phase before implementation.

### Step 1: Analyze the Prompt

Analyze the user-provided prompt from the following perspectives:

1. **Verifiability**: Does it include means for Claude to verify its own work?
2. **Specificity**: Are files, scenarios, and constraints clearly specified?
3. **Context**: Is necessary background information provided?
4. **Scope**: Is the task scope appropriately defined?
5. **Expected Outcome**: Are success criteria clear?
6. **Constraints**: Are language/runtime versions, dependencies, security, or compatibility requirements specified?
7. **Context Economy**: Is the prompt concise and focused, without unnecessary information?
8. **Execution Preference**: I
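For illustration only, the Step 1 checklist can be thought of as data that each prompt is walked through, one question per perspective. The following minimal Python sketch is not part of the skill; the names `CHECKLIST` and `review` are hypothetical.

```python
# Hypothetical sketch: the Step 1 perspectives encoded as data, so each
# question can be applied to a draft prompt in turn. Wording abridged
# from the checklist above; names are illustrative only.
CHECKLIST: dict[str, str] = {
    "Verifiability": "Does it include means for the agent to verify its own work?",
    "Specificity": "Are files, scenarios, and constraints clearly specified?",
    "Context": "Is necessary background information provided?",
    "Scope": "Is the task scope appropriately defined?",
    "Expected Outcome": "Are success criteria clear?",
    "Constraints": "Are versions, dependencies, or compatibility requirements specified?",
    "Context Economy": "Is the prompt concise and focused?",
}

def review(prompt: str) -> list[str]:
    """Return the checklist questions to ask about the given prompt."""
    header = f"Reviewing prompt: {prompt!r}"
    return [header] + [f"- {name}: {question}" for name, question in CHECKLIST.items()]

for line in review("Fix the login bug"):
    print(line)
```

In practice the skill applies these questions as prose instructions rather than code; the sketch only makes explicit that each perspective is an independent, checkable criterion.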