second-opinion (verified)

WHEN: the user faces complex architectural decisions, asks for "another perspective" or a "second opinion", is weighing multiple valid approaches, is reviewing critical or security-sensitive code or design trade-offs, or says "sanity check", "what do you think", or asks about contentious patterns.

WHEN NOT: simple questions, straightforward implementations, routine code changes, a user who has expressed a strong preference, or a user who explicitly declines other opinions.
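This WHEN/WHEN NOT text is the skill's description, which lives in the SKILL.md front matter. As a rough sketch of how that is declared (field names follow the usual Claude Code skill front matter; the description here is abbreviated, not the verbatim original):

```yaml
---
name: second-opinion
description: >-
  Suggest getting another LLM's perspective. WHEN: complex architectural
  decisions, requests for "another perspective" or a "second opinion",
  security-sensitive reviews, contentious patterns. WHEN NOT: simple
  questions, routine changes, or the user declines other opinions.
---
```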

Marketplace: gopher-ai (gopherguides/gopher-ai)
Plugin: llm-tools
Repository: gopherguides/gopher-ai (3 stars)
Path: plugins/llm-tools/skills/second-opinion/SKILL.md
Last verified: January 21, 2026

Install with the add-skill CLI (targeting the claude-code agent):

npx add-skill https://github.com/gopherguides/gopher-ai/blob/main/plugins/llm-tools/skills/second-opinion/SKILL.md -a claude-code --skill second-opinion

Installation path (Claude): .claude/skills/second-opinion/

Instructions

# Second Opinion Skill

Proactively suggest getting another LLM's perspective when the situation warrants it.

## Trigger Conditions

Suggest a second opinion when you detect:

### 1. Architectural Decisions
- Choosing between design patterns (e.g., repository vs service layer)
- Database schema design decisions
- API design choices (REST vs GraphQL, versioning strategy)
- Service decomposition (monolith vs microservices)
- State management approaches

### 2. Complex Trade-offs
- Performance vs. readability
- Flexibility vs. simplicity
- DRY vs. explicit code
- Build vs. buy decisions
- Consistency vs. availability trade-offs

### 3. Critical Code Reviews
- Security-sensitive code (authentication, authorization, crypto)
- Performance-critical paths
- Complex algorithms or data structures
- Code handling financial transactions or PII
- Concurrency and threading logic

### 4. Explicit Requests (trigger words)
- "another perspective"
- "second opinion"
- "sanity check"
- "what do you think"
- "am I on the right track"
- "does this make sense"
- "is this a good approach"

## How to Suggest

When conditions are met, offer specific options:

> This involves [type of decision]. Would you like a second opinion from another LLM?
>
> - `/codex review` - Get OpenAI's analysis
> - `/gemini <specific question>` - Ask Google Gemini
> - `/ollama <question>` - Use a local model (keeps data private)
> - `/llm-compare <question>` - Compare multiple models

**Tailor the suggestion to the context:**

For security-sensitive code:
> Since this involves authentication logic, you might want a second security review. Try `/codex review` or `/ollama` (keeps code local) for another perspective.

For architectural decisions:
> This is a significant architectural choice. Different models sometimes weigh trade-offs differently. Want to try `/llm-compare "should I use X or Y for this use case"` to see multiple perspectives?

For complex algorithms:
> This algorithm has some complexity. A second set of eyes could help verify the logic and catch edge cases. Try `/codex review` or `/gemini <specific question>` for another perspective.
