Master advanced prompt engineering techniques to maximize LLM performance, reliability, and controllability in production. Use when optimizing prompts, improving LLM outputs, or designing production prompt templates.
plugins/llm-application-dev/skills/prompt-engineering-patterns/SKILL.md
January 21, 2026
```
npx add-skill https://github.com/HermeticOrmus/after-the-third-cup/blob/main/plugins/llm-application-dev/skills/prompt-engineering-patterns/SKILL.md -a claude-code --skill prompt-engineering-patterns
```

Installation paths:

```
.claude/skills/prompt-engineering-patterns/
```

# Prompt Engineering Patterns

Master advanced prompt engineering techniques to maximize LLM performance, reliability, and controllability.

## When to Use This Skill

- Designing complex prompts for production LLM applications
- Optimizing prompt performance and consistency
- Implementing structured reasoning patterns (chain-of-thought, tree-of-thought)
- Building few-shot learning systems with dynamic example selection
- Creating reusable prompt templates with variable interpolation
- Debugging and refining prompts that produce inconsistent outputs
- Implementing system prompts for specialized AI assistants

## Core Capabilities

### 1. Few-Shot Learning

- Example selection strategies (semantic similarity, diversity sampling)
- Balancing example count with context window constraints
- Constructing effective demonstrations with input-output pairs
- Dynamic example retrieval from knowledge bases
- Handling edge cases through strategic example selection

### 2. Chain-of-Thought Prompting

- Step-by-step reasoning elicitation
- Zero-shot CoT with "Let's think step by step"
- Few-shot CoT with reasoning traces
- Self-consistency techniques (sampling multiple reasoning paths)
- Verification and validation steps

### 3. Prompt Optimization

- Iterative refinement workflows
- A/B testing prompt variations
- Measuring prompt performance metrics (accuracy, consistency, latency)
- Reducing token usage while maintaining quality
- Handling edge cases and failure modes

### 4. Template Systems

- Variable interpolation and formatting
- Conditional prompt sections
- Multi-turn conversation templates
- Role-based prompt composition
- Modular prompt components

### 5. System Prompt Design

- Setting model behavior and constraints
- Defining output formats and structure
- Establishing role and expertise
- Safety guidelines and content policies
- Context setting and background information

## Quick Start

```python
from prompt_optimi
```
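The semantic-similarity example selection described under Few-Shot Learning can be sketched as follows. This is a minimal illustration, not the skill's implementation: the bag-of-words cosine here is a stand-in for a real embedding model, and the `select_examples`/`build_prompt` names are hypothetical.

```python
from collections import Counter
import math


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def select_examples(query: str, pool: list[dict], k: int = 3) -> list[dict]:
    """Pick the k pool examples most similar to the query.

    Bag-of-words cosine stands in for embeddings here; in production
    you would score similarity with a sentence encoder instead.
    """
    qv = Counter(query.lower().split())
    scored = sorted(
        pool,
        key=lambda ex: cosine(qv, Counter(ex["input"].lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(query: str, examples: list[dict]) -> str:
    """Format selected demonstrations as input-output pairs before the query."""
    shots = "\n\n".join(
        f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples
    )
    return f"{shots}\n\nInput: {query}\nOutput:"
```

Selecting only the most relevant demonstrations keeps the prompt inside the context window while still covering the edge cases nearest to the query.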
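Zero-shot CoT and self-consistency, both listed under Chain-of-Thought Prompting, compose naturally: append a reasoning trigger to the question, sample several reasoning paths at nonzero temperature, and majority-vote the final answers. A minimal sketch, assuming a caller-supplied `sample` function as a stand-in for the model call:

```python
from collections import Counter
from typing import Callable

COT_TRIGGER = "Let's think step by step."


def build_cot_prompt(question: str) -> str:
    """Zero-shot chain-of-thought: append a reasoning trigger to the question."""
    return f"{question}\n\n{COT_TRIGGER}"


def self_consistent_answer(
    prompt: str, sample: Callable[[str], str], n: int = 5
) -> str:
    """Sample n reasoning paths and return the majority final answer.

    `sample` is a stand-in for a model call with temperature > 0 that
    returns the extracted final answer from one reasoning trace.
    """
    votes = Counter(sample(prompt) for _ in range(n))
    return votes.most_common(1)[0][0]
```

Majority voting filters out occasional faulty reasoning paths at the cost of n model calls per query.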
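The A/B testing workflow under Prompt Optimization can be reduced to scoring each prompt variant against a labeled evaluation set. A minimal sketch, where `run(prompt, x)` is a hypothetical stand-in for your model call and exact match stands in for whatever metric you actually track:

```python
from typing import Callable


def evaluate_prompts(
    variants: dict[str, str],
    dataset: list[tuple[str, str]],
    run: Callable[[str, str], str],
) -> dict[str, float]:
    """Score each prompt variant by exact-match accuracy on (input, expected) pairs."""
    scores = {}
    for name, prompt in variants.items():
        correct = sum(1 for x, expected in dataset if run(prompt, x) == expected)
        scores[name] = correct / len(dataset)
    return scores
```

Running every variant over the same dataset makes regressions visible before a prompt change ships; the same loop extends to consistency and latency metrics by recording more than the exact-match bit.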
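Variable interpolation and conditional sections, from the Template Systems capability, can be sketched with the standard library alone. The `compose_prompt` helper is hypothetical; it illustrates dropping empty sections so optional context does not leave holes in the rendered prompt:

```python
import string
from typing import Optional


class PromptTemplate:
    """Minimal template with $variable interpolation via string.Template."""

    def __init__(self, template: str):
        self.template = string.Template(template)

    def render(self, **variables: str) -> str:
        # substitute() raises KeyError on missing variables, which
        # surfaces template bugs early instead of emitting bad prompts.
        return self.template.substitute(**variables)


def compose_prompt(sections: dict[str, Optional[str]]) -> str:
    """Join named sections in order, skipping any that are None or empty."""
    return "\n\n".join(
        f"## {name}\n{body}" for name, body in sections.items() if body
    )
```

Keeping sections modular this way lets role, context, and task blocks be reused and recombined across prompts.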