using-llm-specialist (verified)

An LLM specialist router that directs LLM-related tasks to skills for prompt engineering, fine-tuning, RAG, evaluation, and safety.

Marketplace: foundryside-marketplace (tachyon-beep/skillpacks)
Plugin: yzmir-llm-specialist (ai-ml)
Repository: tachyon-beep/skillpacks (8 stars)
Skill file: plugins/yzmir-llm-specialist/skills/using-llm-specialist/SKILL.md

Last Verified: January 24, 2026

Install Skill

npx add-skill https://github.com/tachyon-beep/skillpacks/blob/main/plugins/yzmir-llm-specialist/skills/using-llm-specialist/SKILL.md -a claude-code --skill using-llm-specialist

Installation paths:

Claude: .claude/skills/using-llm-specialist/

Instructions

# Using LLM Specialist

**You are an LLM engineering specialist.** This skill routes you to the right specialized skill based on the user's LLM-related task.

## When to Use This Skill

Use this skill when the user needs help with:
- Prompt engineering and optimization
- Fine-tuning LLMs (full, LoRA, QLoRA)
- Building RAG systems
- Evaluating LLM outputs
- Managing context windows
- Optimizing LLM inference
- LLM safety and alignment

## How to Access Reference Sheets

**IMPORTANT**: All reference sheets are located in the SAME DIRECTORY as this SKILL.md file.

When this skill is loaded from:
  `skills/using-llm-specialist/SKILL.md`

Reference sheets like `prompt-engineering-patterns.md` are at:
  `skills/using-llm-specialist/prompt-engineering-patterns.md`

NOT at:
  `skills/prompt-engineering-patterns.md` ← WRONG PATH

When you see a link like `[prompt-engineering-patterns.md](prompt-engineering-patterns.md)`, read the file from the same directory as this SKILL.md.

---

## Routing Decision Tree

### Step 1: Identify the task category

**Prompt Engineering** → See [prompt-engineering-patterns.md](prompt-engineering-patterns.md)
- Writing effective prompts
- Few-shot learning
- Chain-of-thought prompting
- System message design
- Output formatting
- Prompt optimization
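
For example, a minimal few-shot prompt sketch in Python, assuming an OpenAI-style chat `messages` structure (the task, labels, and examples are illustrative):

```python
# Few-shot prompting sketch: system instruction + worked examples + new query.
# The "messages" format mirrors common chat-completion APIs; adapt to your client.

def build_fewshot_messages(task_instruction, examples, query):
    """Assemble a few-shot chat prompt from (input, output) example pairs."""
    messages = [{"role": "system", "content": task_instruction}]
    for example_input, example_output in examples:
        messages.append({"role": "user", "content": example_input})
        messages.append({"role": "assistant", "content": example_output})
    messages.append({"role": "user", "content": query})
    return messages

messages = build_fewshot_messages(
    task_instruction=(
        "Classify the sentiment of each review as POSITIVE, NEGATIVE, or NEUTRAL. "
        "Reply with the label only."
    ),
    examples=[
        ("The battery lasts two full days.", "POSITIVE"),
        ("It stopped charging after a week.", "NEGATIVE"),
    ],
    query="The screen is fine but the speakers are tinny.",
)
# `messages` can now be sent to any chat-completions-style endpoint.
```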

**Fine-tuning** → See [llm-finetuning-strategies.md](llm-finetuning-strategies.md)
- When to fine-tune vs prompt engineering
- Full fine-tuning vs LoRA vs QLoRA
- Dataset preparation
- Hyperparameter selection
- Evaluation and validation
- Catastrophic forgetting prevention
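
As a concrete illustration of the LoRA route, a minimal setup sketch using Hugging Face `peft` and `transformers` (the base model name and hyperparameters are placeholders, not recommendations):

```python
# LoRA fine-tuning setup sketch: wrap a causal LM with low-rank adapters
# so only a small fraction of parameters is trained.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # placeholder model

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                   # adapter rank: lower = fewer trainable parameters
    lora_alpha=16,                         # scaling applied to the adapter updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model's weights
# `model` can now be handed to a standard Trainer or custom training loop.
```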

**RAG (Retrieval-Augmented Generation)** → See [rag-architecture-patterns.md](rag-architecture-patterns.md)
- RAG system architecture
- Retrieval strategies (dense, sparse, hybrid)
- Chunking strategies
- Re-ranking
- Context injection
- RAG evaluation
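
A toy end-to-end sketch of the retrieve-then-inject flow (term-overlap scoring stands in for a real dense or hybrid retriever; all names are illustrative):

```python
# Minimal RAG pipeline sketch: chunk documents, retrieve the most relevant
# chunks for a query, and inject them into the prompt as context.

def chunk(text, max_words=80):
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def score(chunk_text, query):
    """Naive sparse relevance: number of shared lowercase terms."""
    return len(set(chunk_text.lower().split()) & set(query.lower().split()))

def retrieve(chunks, query, k=3):
    return sorted(chunks, key=lambda c: score(c, query), reverse=True)[:k]

def build_rag_prompt(query, documents):
    chunks = [c for doc in documents for c in chunk(doc)]
    context = "\n\n".join(retrieve(chunks, query))
    return (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

prompt = build_rag_prompt(
    "What is the refund window?",
    ["Refunds are accepted within 30 days of purchase with proof of payment.",
     "Standard shipping takes 5-7 business days."],
)
```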

**Evaluation** → See [llm-evaluation-metrics.md](llm-evaluation-metrics.md)
- Task-specific metrics (classification, generation, summarization)
- Human evaluation
- LLM-as-judge
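
For the LLM-as-judge route, a minimal scoring sketch (`call_model` is a hypothetical stand-in for whatever chat API you use; the rubric is illustrative):

```python
# LLM-as-judge sketch: grade a candidate answer against a reference with a
# numeric rubric, then parse and clamp the returned score.

JUDGE_PROMPT = """You are grading an answer to a question.
Question: {question}
Reference answer: {reference}
Candidate answer: {candidate}

Score the candidate from 1 (wrong) to 5 (fully correct and complete).
Respond with only the integer score."""

def judge(question, reference, candidate, call_model):
    prompt = JUDGE_PROMPT.format(
        question=question, reference=reference, candidate=candidate
    )
    raw = call_model(prompt)  # hypothetical: returns the judge model's text reply
    try:
        return max(1, min(5, int(raw.strip())))  # clamp to the 1-5 rubric range
    except ValueError:
        return None  # unparseable judgment; in practice, log and retry
```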

Validation Details

Validation checks: Front Matter, Required Fields, Valid Name Format, Valid Description, Has Sections, Allowed Tools
Instruction Length: 10488 chars