prompt-engineering

Engineer effective LLM prompts using zero-shot, few-shot, chain-of-thought, and structured output techniques. Use when building LLM applications requiring reliable outputs, implementing RAG systems, creating AI agents, or optimizing prompt quality and cost. Covers OpenAI, Anthropic, and open-source models with multi-language examples (Python/TypeScript).

Marketplace: ai-design-components (ancoleman/ai-design-components)
Plugin: backend-ai-skills
Repository: ancoleman/ai-design-components (153 stars)
Skill file: skills/prompt-engineering/SKILL.md
Last Verified: February 1, 2026

Install Skill

```shell
npx add-skill https://github.com/ancoleman/ai-design-components/blob/main/skills/prompt-engineering/SKILL.md -a claude-code --skill prompt-engineering
```

Installation path (Claude Code): `.claude/skills/prompt-engineering/`

Instructions

# Prompt Engineering

Design and optimize prompts for large language models (LLMs) to achieve reliable, high-quality outputs across diverse tasks.

## Purpose

This skill provides systematic techniques for crafting prompts that consistently elicit desired behaviors from LLMs. Rather than iterating on prompts by trial and error, apply proven patterns (zero-shot, few-shot, chain-of-thought, structured outputs) to improve accuracy, reduce costs, and build production-ready LLM applications. Covers multi-model deployment (OpenAI GPT, Anthropic Claude, Google Gemini, open-source models) with Python and TypeScript examples.

## When to Use This Skill

**Trigger this skill when:**
- Building LLM-powered applications requiring consistent outputs
- Model outputs are unreliable, inconsistent, or hallucinating
- Need structured data (JSON) from natural language inputs
- Implementing multi-step reasoning tasks (math, logic, analysis)
- Creating AI agents that use tools and external APIs
- Optimizing prompt costs or latency in production systems
- Migrating prompts across different model providers
- Establishing prompt versioning and testing workflows

**Common requests:**
- "How do I make Claude/GPT follow instructions reliably?"
- "My JSON parsing keeps failing - how to get valid outputs?"
- "Need to build a RAG system for question-answering"
- "How to reduce hallucination in model responses?"
- "What's the best way to implement multi-step workflows?"

## Quick Start

**Zero-Shot Prompt (Python + OpenAI):**
```python
from openai import OpenAI
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this article in 3 sentences: [text]"}
    ],
    temperature=0  # Deterministic output
)
print(response.choices[0].message.content)
```
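
**Few-Shot Prompt (Python + OpenAI):** a minimal sketch in the same style as the zero-shot example above, showing how in-context examples steer the output format; the review texts and labels are illustrative placeholders.

```python
from openai import OpenAI
client = OpenAI()

# Alternating user/assistant turns act as in-context training examples
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Classify each review's sentiment as positive, negative, or neutral."},
        {"role": "user", "content": "The battery lasts all day and the screen is gorgeous."},
        {"role": "assistant", "content": "positive"},
        {"role": "user", "content": "Shipping took three weeks and the box arrived crushed."},
        {"role": "assistant", "content": "negative"},
        {"role": "user", "content": "It does what the manual says it does."}
    ],
    temperature=0
)
print(response.choices[0].message.content)  # Expected: "neutral"
```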

**Structured Output (TypeScript + Vercel AI SDK):** a minimal sketch using `generateObject` with a Zod schema; the schema fields and prompt are illustrative.

```typescript
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// The Zod schema constrains the model to return validated, typed JSON
const { object } = await generateObject({
  model: openai('gpt-4o'),
  schema: z.object({
    product: z.string(),
    sentiment: z.enum(['positive', 'neutral', 'negative'])
  }),
  prompt: 'Extract the product name and sentiment from this review: [text]'
});
console.log(object); // Schema-validated object, no manual JSON parsing needed
```
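
**Chain-of-Thought Prompt (Python + OpenAI):** a minimal sketch assuming the same OpenAI client as above; prompting the model to reason step by step before committing to an answer tends to improve multi-step accuracy, and the delimiter-based parsing shown here is one illustrative way to separate the reasoning trace from the final answer.

```python
from openai import OpenAI
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Think through the problem step by step, then give the final answer on a line starting with 'Answer:'."},
        {"role": "user", "content": "A store sells pens at 3 for $4. How much do 9 pens cost?"}
    ],
    temperature=0
)

# Keep only the final line so downstream code ignores the reasoning steps
text = response.choices[0].message.content
answer = text.split("Answer:")[-1].strip()
print(answer)  # Expected: $12
```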
