Expert prompt optimization system for building production-ready AI features. Use when users request help improving prompts, want to create system prompts, need prompt review/critique, ask for prompt optimization strategies, want to analyze prompt effectiveness, mention prompt engineering best practices, request prompt templates, or need guidance on structuring AI instructions. Also use when users provide prompts and want suggestions for improvement.
Repository: breethomas/pm-thought-partner · January 18, 2026

Install with `npx add-skill https://github.com/breethomas/pm-thought-partner/blob/main/skills/prompt-engineering/SKILL.md -a claude-code --skill prompt-engineering` (installs to `.claude/skills/prompt-engineering/`).

# Prompt Engineering Expert
Master system for creating, analyzing, and optimizing prompts for AI products using research-backed techniques and battle-tested production patterns.
## Core Capabilities
1. **Prompt Analysis & Improvement** - Analyze existing prompts and provide specific optimization recommendations
2. **System Prompt Creation** - Build production-ready system prompts using the 6-step framework
3. **Failure Mode Detection** - Identify and fix common prompt engineering mistakes
4. **Cost Optimization** - Balance performance with token efficiency
5. **Research-Backed Techniques** - Apply proven prompting methods from academic studies
## The 6-Step Optimization Framework
When improving any prompt, follow this systematic process:
### Step 1: Start With Hard Constraints (Lock Down Failure Modes)
Begin with what the model CANNOT do, not what it should do.
**Pattern:**
```
NEVER:
- [TOP 3 FAILURE MODES - BE SPECIFIC]
- Use meta-phrases ("I can help you", "let me assist")
- Provide information you're not certain about
ALWAYS:
- [TOP 3 SUCCESS BEHAVIORS - BE SPECIFIC]
- Acknowledge uncertainty when present
- Follow the output format exactly
```
**Why:** LLMs are more consistent at avoiding specific patterns than following general instructions. "Never say X" is more reliable than "Always be helpful."
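The pattern above can be sketched as a small helper that renders the hard-constraints block and prepends it to a system prompt. This is an illustrative sketch, not part of the skill: `build_constraints` and the sample assistant line are assumptions.

```python
def build_constraints(never: list[str], always: list[str]) -> str:
    """Render the NEVER/ALWAYS block that leads a system prompt.

    Hard constraints go first so failure modes are locked down
    before any role or task instructions.
    """
    lines = ["NEVER:"]
    lines += [f"- {rule}" for rule in never]
    lines += ["", "ALWAYS:"]
    lines += [f"- {rule}" for rule in always]
    return "\n".join(lines)


header = build_constraints(
    never=[
        'Use meta-phrases ("I can help you", "let me assist")',
        "Provide information you're not certain about",
    ],
    always=[
        "Acknowledge uncertainty when present",
        "Follow the output format exactly",
    ],
)

# The constraints block leads; role/task instructions follow.
# (The assistant description below is a placeholder.)
system_prompt = header + "\n\nYou are a support assistant for Acme Widgets."
```

Keeping the constraints in a generated header also makes them easy to version and reuse across prompts.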
### Step 2: Trigger Professional Training Data (Structure = Quality)
Use formatting that signals technical documentation quality:
- **For Claude**: Use XML tags (`<system_constraints>`, `<task_instructions>`)
- **For GPT-4**: Use JSON structure
- **For GPT-3.5**: Use simple markdown
**Why:** Well-structured input resembles high-quality technical documentation in the model's training data, and the model tends to answer in that same careful register.
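One way to apply the per-model guidance above is a small dispatch helper. This is a sketch under stated assumptions: the model-name prefix checks, the function name `structure_prompt`, and the markdown section names are illustrative, not part of the skill.

```python
import json


def structure_prompt(model: str, constraints: str, task: str) -> str:
    """Wrap prompt sections in the structure each model family responds to best."""
    if model.startswith("claude"):
        # Claude: XML tags, as in the skill's examples.
        return (
            f"<system_constraints>\n{constraints}\n</system_constraints>\n"
            f"<task_instructions>\n{task}\n</task_instructions>"
        )
    if model.startswith("gpt-4"):
        # GPT-4: JSON structure.
        return json.dumps(
            {"system_constraints": constraints, "task_instructions": task},
            indent=2,
        )
    # Fallback (e.g. GPT-3.5): simple markdown.
    return f"## Constraints\n{constraints}\n\n## Task\n{task}"
```

For example, `structure_prompt("claude-sonnet", ...)` yields an XML-tagged prompt, while the `gpt-4` branch yields a parseable JSON object, so the same constraint/task content can target either family.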
### Step 3: Have The LLM Self-Improve Your Prompt
Don't optimize manually - let the model do it using this meta-prompt:
```
You are a prompt optimization specialist. Your job is to improve prompts for production AI systems.
CURRENT PROMPT:
[User's prompt here]
PERFORMANCE DATA:
- Main failu