Write clear, effective technical documentation following industry-proven patterns from exemplary projects and authoritative style guides, with built-in countermeasures for common LLM documentation issues
codethread/claude-code-plugins
doc-writer
plugins/doc-writer/skills/writing-documentation/SKILL.md
January 18, 2026
```bash
npx add-skill https://github.com/codethread/claude-code-plugins/blob/main/plugins/doc-writer/skills/writing-documentation/SKILL.md -a claude-code --skill writing-documentation
```

Installation paths:
`.claude/skills/writing-documentation/`

# Writing Documentation Skill

## Documentation Types (Diátaxis Framework)

- **Tutorial** - Learning-oriented, step-by-step
- **How-to** - Task-oriented, specific problems
- **Reference** - Technical specifications
- **Explanation** - Clarifies concepts

Details in `references/best-practices.md`.

## Writing for Claude Code

**CRITICAL**: When writing documentation that Claude reads (SKILL.md, CLAUDE.md, commands, agents):

### 1. Test Claude's Base Knowledge First

Verify what Claude already knows:

```bash
# Use haiku for cost-effective testing (gives same quality answers as sonnet)
claude --print --model haiku "Do NOT use any skills. How would you [perform task]?"
claude --print --model haiku "Do NOT use any skills. When should you [make decision]?"
```

### 2. Document ONLY Unique Patterns

Include only what Claude wouldn't naturally do:

- ✓ Opinionated architectural choices
- ✓ Counter-intuitive decisions
- ✓ Project-specific conventions
- ✓ Non-default patterns

Remove redundant content:

- ✗ Standard library usage
- ✗ Common best practices
- ✗ Well-known patterns
- ✗ Basic language features

### 3. Example: React Skill Reduction

Testing revealed Claude knows TanStack Query/Zustand/RTL patterns but doesn't default to:

- "Test stores, not components" (counter-cultural)
- "NO useState for complex logic" (prescriptive)
- "Inline actions unless repeated 2+" (specific rule)

Result: 328→125 lines (-62%) by documenting only unique opinions.

## Verifying Technical Accuracy

### API Verification Workflow

When documenting unfamiliar APIs or libraries:

**1. Launch researcher agent:**

```
Use Task tool to launch researcher agent to verify [API/library] documentation
```

Researcher agent uses Context7 MCP to fetch official API docs and verify method signatures.

**2. Read the codebase:**

For internal/project APIs:

```
Read relevant source files to verify method signatures exist
```

**3. State version requirements:**

- Specify versions when certain: `# Using pand