prompt-engineering (verified)

Expert prompt optimization system for building production-ready AI features. Use when users request help improving prompts, want to create system prompts, need prompt review/critique, ask for prompt optimization strategies, want to analyze prompt effectiveness, mention prompt engineering best practices, request prompt templates, or need guidance on structuring AI instructions. Also use when users provide prompts and want suggestions for improvement.

Marketplace: pm-thought-partner (breethomas/pm-thought-partner)
Plugin: pm-thought-partner
Repository: breethomas/pm-thought-partner (10 stars)
Skill file: skills/prompt-engineering/SKILL.md
Last Verified: January 18, 2026

Install Skill

npx add-skill https://github.com/breethomas/pm-thought-partner/blob/main/skills/prompt-engineering/SKILL.md -a claude-code --skill prompt-engineering

Installation path (Claude): .claude/skills/prompt-engineering/

Instructions

# Prompt Engineering Expert

Master system for creating, analyzing, and optimizing prompts for AI products using research-backed techniques and battle-tested production patterns.

## Core Capabilities

1. **Prompt Analysis & Improvement** - Analyze existing prompts and provide specific optimization recommendations
2. **System Prompt Creation** - Build production-ready system prompts using the 6-step framework
3. **Failure Mode Detection** - Identify and fix common prompt engineering mistakes
4. **Cost Optimization** - Balance performance with token efficiency
5. **Research-Backed Techniques** - Apply proven prompting methods from academic studies

## The 6-Step Optimization Framework

When improving any prompt, follow this systematic process:

### Step 1: Start With Hard Constraints (Lock Down Failure Modes)

Begin with what the model CANNOT do, not what it should do.

**Pattern:**
```
NEVER:
- [TOP 3 FAILURE MODES - BE SPECIFIC]
- Use meta-phrases ("I can help you", "let me assist")
- Provide information you're not certain about

ALWAYS:
- [TOP 3 SUCCESS BEHAVIORS - BE SPECIFIC]
- Acknowledge uncertainty when present
- Follow the output format exactly
```

**Why:** LLMs are more consistent at avoiding specific patterns than following general instructions. "Never say X" is more reliable than "Always be helpful."
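
For example, a filled-in constraint block for a hypothetical customer-support assistant (the product details here are illustrative, not part of the framework) might look like:

```
NEVER:
- Promise refunds, discounts, or delivery dates
- Use meta-phrases ("I can help you", "let me assist")
- Answer account questions without a verified account ID

ALWAYS:
- Acknowledge uncertainty and offer to escalate to a human agent
- Keep responses under 150 words
- Follow the output format exactly
```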

### Step 2: Trigger Professional Training Data (Structure = Quality)

Use formatting that signals technical documentation quality:

- **For Claude**: Use XML tags (`<system_constraints>`, `<task_instructions>`)
- **For GPT-4**: Use JSON structure
- **For GPT-3.5**: Use simple markdown

**Why:** Prompts formatted like well-structured technical documents steer the model toward the higher-quality patterns it learned from that kind of material.
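
As a minimal sketch, a Claude-oriented system prompt using the XML tags above might be organized like this (the `<output_format>` tag and the task content are illustrative additions, not prescribed names):

```
<system_constraints>
NEVER quote internal tool names. ALWAYS cite the source document you used.
</system_constraints>

<task_instructions>
Summarize the attached support ticket for the on-call engineer.
</task_instructions>

<output_format>
Three bullets: the customer's issue, steps already taken, recommended next action.
</output_format>
```

For a GPT-4-style variant, the same sections can be expressed as keys in a JSON object; for GPT-3.5, plain markdown headings serve the same role.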

### Step 3: Have The LLM Self-Improve Your Prompt

Don't optimize manually - let the model do it using this meta-prompt:

```
You are a prompt optimization specialist. Your job is to improve prompts for production AI systems.

CURRENT PROMPT:
[User's prompt here]

PERFORMANCE DATA:
- Main failu
```

Validation Details

- Front Matter
- Required Fields
- Valid Name Format
- Valid Description
- Has Sections
- Allowed Tools
- Instruction Length: 10090 chars