Perform 12-Factor Agents compliance analysis on any codebase. Use when evaluating agent architecture, reviewing LLM-powered systems, or auditing agentic applications against the 12-Factor methodology.
skills/agent-architecture-analysis/SKILL.md
February 1, 2026
Install:

```shell
npx add-skill https://github.com/existential-birds/beagle/blob/main/skills/agent-architecture-analysis/SKILL.md -a claude-code --skill agent-architecture-analysis
```

Installation paths:
`.claude/skills/agent-architecture-analysis/`

# 12-Factor Agents Compliance Analysis

> Reference: [12-Factor Agents](https://github.com/humanlayer/12-factor-agents)

## Input Parameters

| Parameter | Description | Required |
|-----------|-------------|----------|
| `docs_path` | Path to documentation directory (for existing analyses) | Optional |
| `codebase_path` | Root path of the codebase to analyze | Required |

## Analysis Framework

### Factor 1: Natural Language to Tool Calls

**Principle:** Convert natural language inputs into structured, deterministic tool calls using schema-validated outputs.

**Search Patterns:**

```bash
# Look for Pydantic schemas
grep -r "class.*BaseModel" --include="*.py"
grep -r "TaskDAG\|TaskResponse\|ToolCall" --include="*.py"

# Look for JSON schema generation
grep -r "model_json_schema\|json_schema" --include="*.py"

# Look for structured output generation
grep -r "output_type\|response_model" --include="*.py"
```

**File Patterns:** `**/agents/*.py`, `**/schemas/*.py`, `**/models/*.py`

**Compliance Criteria:**

| Level | Criteria |
|-------|----------|
| **Strong** | All LLM outputs use Pydantic/dataclass schemas with validators |
| **Partial** | Some outputs typed, but dict returns or unvalidated strings exist |
| **Weak** | LLM returns raw strings parsed manually or with regex |

**Anti-patterns:**

- `json.loads(llm_response)` without schema validation
- `output.split()` or regex parsing of LLM responses
- `dict[str, Any]` return types from agents
- No validation between LLM output and handler execution

---

### Factor 2: Own Your Prompts

**Principle:** Treat prompts as first-class code you control, version, and iterate on.

**Search Patterns:**

```bash
# Look for embedded prompts
grep -r "SYSTEM_PROMPT\|system_prompt" --include="*.py"
grep -r '""".*You are' --include="*.py"

# Look for template systems
grep -r "jinja\|Jinja\|render_template" --include="*.py"
find . -name "*.jinja2" -o -name "*.j2"

# Look for prompt directories
find . -type d -name "prompts"
```

**F
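The distinction Factor 1 draws between Strong and Weak compliance can be sketched with a stdlib dataclass. This is a minimal illustration, not part of the skill itself: `ToolCall`, `ALLOWED_TOOLS`, and `parse_tool_call` are hypothetical names, and a real codebase would more likely use a Pydantic `BaseModel` with validators, as the search patterns above assume.

```python
import json
from dataclasses import dataclass

# Hypothetical tool registry; a real agent would derive this from its tool definitions.
ALLOWED_TOOLS = {"search", "fetch_url"}

@dataclass(frozen=True)
class ToolCall:
    """Schema for a single LLM-issued tool call (illustrative only)."""
    tool_name: str
    arguments: dict

    def __post_init__(self):
        # Validation sits between LLM output and handler execution,
        # closing the gap flagged in Factor 1's anti-patterns.
        if self.tool_name not in ALLOWED_TOOLS:
            raise ValueError(f"unknown tool: {self.tool_name}")
        if not isinstance(self.arguments, dict):
            raise TypeError("arguments must be a JSON object")

def parse_tool_call(raw: str) -> ToolCall:
    # json.loads alone would be the anti-pattern; constructing the
    # dataclass enforces the schema before any handler runs.
    data = json.loads(raw)
    return ToolCall(tool_name=data["tool_name"], arguments=data["arguments"])
```

The point of the sketch is the typed boundary: the handler layer only ever receives a validated `ToolCall`, never a raw string or an unchecked `dict[str, Any]`.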