Orchestrate multiple frontier LLMs (Claude, GPT-5.1, Gemini 3.0 Pro, Perplexity Sonar, Grok 4.1) for comprehensive research using the LLM Council pattern with peer review and synthesis.
Repository: krishagel/geoffrey
January 24, 2026
Install with:
```
npx add-skill https://github.com/krishagel/geoffrey/blob/main/skills/multi-model-research/SKILL.md -a claude-code --skill multi-model-research
```
Installation path: `.claude/skills/multi-model-research/`
# Multi-Model Research Agent
Implements Karpathy's LLM Council pattern for superior research through parallel queries, peer review, and chairman synthesis.
## Architecture
**Geoffrey/Claude (Native Council Member):**
- Routes simple vs complex queries
- Calls external API orchestrator (`research.py`)
- Provides my own research response
- Conducts peer review phase
- Requests GPT-5.1 synthesis (chairman)
- Saves final report to Obsidian
**Python External API Orchestrator:**
- Fetches responses from GPT-5.1, Gemini 3.0 Pro, Perplexity Sonar, Grok 4.1
- Returns JSON with all external responses
- I handle all orchestration and synthesis
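Since the orchestrator returns its results as JSON, the consuming side needs only a small parsing step. A minimal sketch, assuming `research.py` prints a JSON object mapping each model name to its response text (the exact schema is illustrative, not confirmed by the source):

```python
import json


def parse_orchestrator_output(raw: str) -> dict:
    """Parse JSON emitted by the external orchestrator (scripts/research.py).

    Assumed schema: a mapping of model name to response text,
    e.g. {"gpt": "...", "gemini": "...", "perplexity": "...", "grok": "..."}.
    """
    responses = json.loads(raw)
    if not isinstance(responses, dict) or not responses:
        raise ValueError("expected a non-empty JSON object of model responses")
    return responses
```

With output in this shape, the peer-review and synthesis phases can iterate over the model keys directly.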
## When to Use This Skill
Use multi-model research when:
- **Complex analysis needed** - Multiple perspectives valuable
- **Factual verification critical** - Cross-model validation
- **Comprehensive coverage required** - No single model sufficient
- **Current information essential** - Perplexity provides web grounding
- **Contested topics** - Benefit from diverse model perspectives
## Simple vs Council Mode
**Simple Mode** (Perplexity only):
- Factual lookups
- Current events
- Quick research with citations
- Completes in <15 seconds
**Council Mode** (Full council):
- Comparative analysis
- Deep research
- Multiple perspectives needed
- Strategic questions
- Completes in <90 seconds
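In practice the routing decision is made by the model itself, but the split above can be sketched as a keyword heuristic. This is an illustrative assumption only; the cue words below are not taken from the skill:

```python
def choose_mode(query: str) -> str:
    """Illustrative routing heuristic: comparative or strategic cues
    suggest Council Mode; everything else defaults to Simple Mode
    (Perplexity only). A crude substring check, not the real router.
    """
    council_cues = ("compare", "versus", "trade-off", "strategy", "pros and cons")
    q = query.lower()
    return "council" if any(cue in q for cue in council_cues) else "simple"
```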
## Workflow
### Simple Query
```
User: "What are the latest developments in quantum computing?"
↓
I decide: Simple query (factual, current)
↓
I call: uv run scripts/research.py --query "..." --models perplexity
↓
I read: JSON response from Perplexity
↓
I format: Markdown report with citations
↓
I save: To Obsidian Geoffrey/Research folder
↓
I return: Summary to user with Obsidian link
```
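The Simple Query flow above (call → read → format → save) can be sketched as a small pipeline. The report format and filename scheme here are assumptions; only the `uv run scripts/research.py` invocation and the `Geoffrey/Research` folder come from the skill text:

```python
import subprocess
from pathlib import Path


def run_simple_research(query: str, vault: Path, fetch=None) -> Path:
    """Sketch of Simple Mode: fetch Perplexity response, format as
    markdown, save to the Obsidian Geoffrey/Research folder.

    `fetch` defaults to shelling out to scripts/research.py; it is
    injectable so the pipeline can be exercised without the script.
    """
    if fetch is None:
        def fetch(q):
            return subprocess.run(
                ["uv", "run", "scripts/research.py",
                 "--query", q, "--models", "perplexity"],
                capture_output=True, text=True, check=True,
            ).stdout

    raw = fetch(query)
    report = f"# Research: {query}\n\n{raw}\n"  # hypothetical report layout
    dest = vault / "Geoffrey" / "Research" / f"{query[:50]}.md"
    dest.parent.mkdir(parents=True, exist_ok=True)
    dest.write_text(report)
    return dest
```

Injecting `fetch` also makes it easy to swap in the full council output later without changing the save step.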
### Council Query
```
User: "Compare the AI strategies of OpenAI, Anthropic, and Google"
↓
I decide: Council query (comparative, complex)
↓
I call: uv run scripts/research.py --query "..." --models gpt,gemini,perplexity,g