Synthesize outputs from multiple AI models into a comprehensive, verified assessment. Use when: (1) User pastes feedback/analysis from multiple LLMs (Claude, GPT, Gemini, etc.) about code or a project, (2) User wants to consolidate model outputs into a single reliable document, (3) User needs conflicting model claims resolved against actual source code. This skill verifies model claims against the codebase, resolves contradictions with evidence, and produces a more reliable assessment than any single model.
skills/multi-model-meta-analysis/SKILL.md · February 5, 2026
# Multi-Model Synthesis
Combine outputs from multiple AI models into a verified, comprehensive assessment by cross-referencing claims against the actual codebase.
## Core Principle
Models hallucinate and contradict each other. The source code is the source of truth. Every significant claim must be verified before inclusion in the final assessment.
## Process
### 1. Extract Claims
Parse each model's output and extract discrete claims:
- Factual assertions about the code ("function X does Y", "there's no error handling in Z")
- Recommendations ("should add validation", "refactor this pattern")
- Identified issues ("bug in line N", "security vulnerability")
Tag each claim with its source model.
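The extraction step above can be modeled with a small record type. This is a minimal sketch, not part of the skill itself; the `Claim` dataclass, the `ClaimKind` categories, and the model names are illustrative assumptions:

```python
from dataclasses import dataclass
from enum import Enum

class ClaimKind(Enum):
    FACT = "fact"                      # assertion about what the code does
    RECOMMENDATION = "recommendation"  # suggested change
    ISSUE = "issue"                    # reported bug or vulnerability

@dataclass
class Claim:
    text: str        # the claim as the model stated it
    kind: ClaimKind
    source: str      # which model made the claim

# Example: two claims extracted from hypothetical model outputs,
# grouped by their source model for later cross-referencing.
claims = [
    Claim("auth middleware doesn't check token expiry", ClaimKind.FACT, "gpt"),
    Claim("should add input validation", ClaimKind.RECOMMENDATION, "claude"),
]
by_model: dict[str, list[Claim]] = {}
for c in claims:
    by_model.setdefault(c.source, []).append(c)
print(sorted(by_model))  # → ['claude', 'gpt']
```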
### 2. Deduplicate
Group semantically equivalent claims:
- "Lacks input validation" = "No sanitization" = "User input not checked"
- "Should use async/await" = "Convert to promises" = "Make asynchronous"
Create canonical phrasing. Track which models mentioned each.
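The grouping above can be sketched with a crude token-overlap (Jaccard) similarity as a stand-in for real semantic matching; the `group_claims` helper, the 0.5 threshold, and the sample phrasings are assumptions for illustration only:

```python
def tokens(s: str) -> set[str]:
    return set(s.lower().replace("/", " ").split())

def jaccard(a: str, b: str) -> float:
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

def group_claims(claims: list[tuple[str, str]], threshold: float = 0.5):
    """Greedy grouping: each (text, source) pair joins the first group
    whose canonical phrasing is similar enough, else starts a new group."""
    groups: list[tuple[str, list[str]]] = []  # (canonical text, sources)
    for text, source in claims:
        for canonical, sources in groups:
            if jaccard(canonical, text) >= threshold:
                sources.append(source)
                break
        else:
            groups.append((text, [source]))
    return groups

merged = group_claims([
    ("no input validation", "gpt"),
    ("lacks input validation", "claude"),
    ("should use async/await", "gemini"),
])
print(len(merged))  # → 2 (the two validation claims collapse into one group)
```

In practice an LLM judge or embedding similarity would replace `jaccard`; the greedy structure stays the same.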
### 3. Verify Against Source
For each factual claim or identified issue:
```
CLAIM: "The auth middleware doesn't check token expiry"
VERIFY: Read the auth middleware file
FINDING: [Confirmed | Refuted | Partially true | Cannot verify]
EVIDENCE: [Quote relevant code or explain why claim is wrong]
```
Use Grep, Glob, and Read tools to locate and examine relevant code. Do not trust model claims without verification.
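As an illustration of what a verification record might look like, a plain file-pattern search can stand in for the Grep tool here; `verify_pattern` and its verdict labels are hypothetical names, not the skill's API:

```python
import re
from pathlib import Path

def verify_pattern(claim: str, pattern: str, path: str) -> dict:
    """Check a factual claim against source: confirmed if the pattern
    appears in the file, refuted if the file exists without it, and
    cannot-verify if the file is missing."""
    p = Path(path)
    if not p.exists():
        return {"claim": claim, "finding": "cannot-verify", "evidence": None}
    for i, line in enumerate(p.read_text().splitlines(), 1):
        if re.search(pattern, line):
            return {"claim": claim, "finding": "confirmed",
                    "evidence": f"{path}:{i}: {line.strip()}"}
    return {"claim": claim, "finding": "refuted", "evidence": None}
```

A real run would route this through the agent's Grep/Read tools and handle the "partially true" case with human judgment; the point is that every record carries the claim, a verdict, and cited evidence.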
### 4. Resolve Conflicts
When models contradict each other:
1. Identify the specific disagreement
2. Examine the actual code
3. Determine which model (if any) is correct
4. Document the resolution with evidence
```
CONFLICT: Model A says "uses SHA-256", Model B says "uses MD5"
INVESTIGATION: Read crypto.js lines 45-60
RESOLUTION: Model B is correct - line 52 shows MD5 usage
EVIDENCE: `const hash = crypto.createHash('md5')`
```
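The four resolution steps above reduce to a small decision: whichever claim the code supports wins, and anything else escalates. A sketch, where `resolve_conflict` and its `evidence_check` callback are hypothetical names:

```python
from typing import Callable

def resolve_conflict(claim_a: str, claim_b: str,
                     evidence_check: Callable[[str], bool]) -> dict:
    """evidence_check(claim) returns True if the codebase supports the
    claim. Exactly one supported claim wins; ties (both or neither
    supported) are flagged for manual review rather than guessed at."""
    a_ok, b_ok = evidence_check(claim_a), evidence_check(claim_b)
    if a_ok and not b_ok:
        return {"winner": "A", "claim": claim_a}
    if b_ok and not a_ok:
        return {"winner": "B", "claim": claim_b}
    return {"winner": None, "note": "needs manual review"}
```

Refusing to pick a winner without one-sided evidence is deliberate: a wrong resolution is worse than an open question in the final assessment.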
### 5. Synthesize Assessment
Produce a final document that:
- States verified facts (not model opinions)
- Cites evidence for significant claims
- Notes where verification was not possible