---
name: researching-on-the-internet
description: Use when planning features that need current API docs, library patterns, or external knowledge; when testing hypotheses about technology choices or claims; or when verifying assumptions before design decisions. Gathers well-sourced, current information from the internet to inform technical decisions.
---

# Researching on the Internet
## Overview
Gather accurate, current, well-sourced information from the internet to inform planning and design decisions. Test hypotheses, verify claims, and find authoritative sources for APIs, libraries, and best practices.
## When to Use
**Use for:**
- Finding current API documentation before integration design
- Testing hypotheses ("Is library X faster than Y?", "Does approach Z work with version N?")
- Verifying technical claims or assumptions
- Comparing libraries and evaluating alternatives
- Finding best practices and current community consensus
**Don't use for:**
- Information already in codebase (use codebase search)
- General knowledge within Claude's training (just answer directly)
- Project-specific conventions (check CLAUDE.md)
## Core Research Workflow
1. **Define question clearly** - specific beats vague
2. **Search official sources first** - docs, release notes, changelogs
3. **Cross-reference** - verify claims across multiple sources
4. **Evaluate quality** - tier sources (official → verified → community; see the sketch after this list)
5. **Report concisely** - lead with answer, provide links and evidence
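
The tiering in step 4 can be made concrete as a small weighting scheme. A minimal sketch in Python: the tier names come from the workflow above, but the numeric weights, type names, and `rank_findings` helper are illustrative assumptions, not part of the skill.

```python
from dataclasses import dataclass
from enum import IntEnum

class SourceTier(IntEnum):
    """Quality tiers from step 4: official > verified > community."""
    OFFICIAL = 3   # vendor docs, release notes, changelogs
    VERIFIED = 2   # maintainer posts, reproducible benchmarks
    COMMUNITY = 1  # blog posts, forum answers, anecdotes

@dataclass
class Finding:
    claim: str
    url: str
    tier: SourceTier

def rank_findings(findings: list[Finding]) -> list[Finding]:
    """Order evidence so a report leads with the strongest sources."""
    return sorted(findings, key=lambda f: f.tier, reverse=True)
```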
## Hypothesis Testing
When given a hypothesis to test:
1. **Identify falsifiable claims** - break hypothesis into testable parts
2. **Search for supporting evidence** - what confirms this?
3. **Search for disproving evidence** - what contradicts this?
4. **Evaluate source quality** - weight evidence by tier
5. **Report findings** - supported/contradicted/inconclusive with evidence
6. **Note confidence level** - strong consensus vs single source vs conflicting info (see the sketch after the example below)
**Example:**
```
Hypothesis: "Library X is faster than Y for large datasets"
Search for:
✓ Benchmarks comparing X and Y
✓ Performance documentation for both
✓ GitHub issues mentioning performance
✓ Real-world case studies
Report:
- Supported: [evidence with links]
- Contradicted: [evidence with links]
- Conclusion: [supported/contradicted/mixed] with [confidence level]
```
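
Steps 5 and 6 can likewise be sketched as a mechanical rule: sum the evidence on each side by tier weight, then report a verdict with a confidence note. A hypothetical helper under assumed thresholds and weights; in practice the judgment is qualitative, and this only pins down what "weight evidence by tier" means.

```python
def verdict(supporting: list[int], contradicting: list[int]) -> str:
    """Classify a hypothesis from lists of evidence tier weights
    (3 = official, 2 = verified, 1 = community)."""
    pro, con = sum(supporting), sum(contradicting)
    if not pro and not con:
        return "inconclusive: no evidence found"
    if pro and con:
        return f"mixed: support {pro} vs contradiction {con} by tier weight"
    label = "supported" if pro else "contradicted"
    weight = pro or con
    if weight >= 6:    # e.g. two official sources agree
        confidence = "strong consensus"
    elif weight >= 3:  # one official source, or a few community posts
        confidence = "moderate"
    else:
        confidence = "single weak source"
    return f"{label} ({confidence})"

# Two official benchmarks support; one forum post contradicts -> mixed
print(verdict([3, 3], [1]))
```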