Calculate statistical power and required sample sizes for research studies. Use when: (1) Designing experiments to determine sample size, (2) Justifying sample size for grant proposals or protocols, (3) Evaluating adequacy of existing studies, (4) Meeting NIH rigor standards for pre-registration, (5) Conducting retrospective power analysis to interpret null results.
View on GitHub: astoreyai/ai_scientist
research-assistant
January 20, 2026
```
npx add-skill https://github.com/astoreyai/ai_scientist/blob/main/skills/power-analysis/SKILL.md -a claude-code --skill power-analysis
```

Installation paths:
.claude/skills/power-analysis/

# Statistical Power Analysis Skill

## Purpose

Calculate statistical power and determine required sample sizes for research studies. Essential for experimental design, grant writing, and meeting NIH rigor and reproducibility standards.

## Core Concepts

### Statistical Power

**Definition:** The probability of detecting a true effect when it exists (1 - β).

**Standard:** Power ≥ 0.80 (80%) is typically required for NIH grants and pre-registration.

### Key Parameters

1. **Effect Size (d, r, η²)** - Magnitude of the phenomenon
2. **Alpha (α)** - Type I error rate (typically 0.05)
3. **Power (1 - β)** - Probability of detecting the effect (typically 0.80)
4. **Sample Size (N)** - Number of participants/observations needed

### The Relationship

```
Power = f(Effect Size, Sample Size, Alpha, Test Type)

For a given effect size and alpha:
↑ Sample Size → ↑ Power
↑ Effect Size → ↓ Sample Size needed
```

## When to Use This Skill

### Pre-Study (Prospective Power Analysis)

1. **Grant Proposals** - Justify the requested sample size
2. **Study Design** - Determine recruitment needs
3. **Pre-Registration** - Document the planned sample size with justification
4. **Resource Planning** - Estimate time and cost requirements
5. **Ethical Review** - Minimize participants while maintaining power

### Post-Study (Retrospective/Sensitivity Analysis)

1. **Null Results** - Was the study adequately powered?
2. **Publication** - Report achieved power
3. **Meta-Analysis** - Assess the adequacy of individual studies
4. **Study Critique** - Evaluate the power of published work

## Common Study Designs

### 1. Independent Samples T-Test

**Use:** Compare two independent groups

**Formula:**

```
N per group = 2 * (z_α/2 + z_β)² / d²

Where:
- d = Cohen's d = (μ₁ - μ₂) / σ, with σ the pooled standard deviation
- z_α/2 = two-sided critical value for significance level α (1.96 for α = 0.05)
- z_β = critical value for Type II error rate β (0.84 for power = 0.80)
```

Because d is already standardized by σ, the variance term cancels out of the formula.

**Example:**

```
Research Question: Does the intervention improve test scores vs. control?
Effect Size: d = 0.5 (medium effect)
Alpha: 0.05
Power: 0.80
Result: N ≈ 64 per group (≈128 total)
```
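The example above can be reproduced in code. The sketch below is one possible implementation, assuming Python with `scipy` and `statsmodels` available (the skill itself does not prescribe a library): it computes the normal-approximation formula given above and compares it with the noncentral-t solution from `statsmodels`, then checks achieved power for a hypothetical smaller sample.

```python
"""Sample-size sketch for the independent-samples t-test example above.

Assumptions (not specified in the skill): two-sided test, equal group sizes,
alpha = 0.05, power = 0.80, Cohen's d = 0.5.
"""
import math

from scipy.stats import norm
from statsmodels.stats.power import TTestIndPower


def n_per_group_normal_approx(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Closed-form approximation: N per group = 2 * (z_α/2 + z_β)² / d²."""
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05, two-sided
    z_beta = norm.ppf(power)            # 0.84 for power = 0.80
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)


if __name__ == "__main__":
    d, alpha, power = 0.5, 0.05, 0.80

    # Normal approximation from the formula above.
    print("Normal approximation:", n_per_group_normal_approx(d, alpha, power), "per group")

    # Exact solution based on the noncentral t distribution.
    analysis = TTestIndPower()
    n_exact = analysis.solve_power(effect_size=d, alpha=alpha, power=power,
                                   ratio=1.0, alternative="two-sided")
    print("Noncentral t solution:", math.ceil(n_exact), "per group")

    # Sensitivity check: achieved power if only 50 per group could be recruited
    # (the 50 here is an illustrative number, not from the skill).
    achieved = analysis.power(effect_size=d, nobs1=50, alpha=alpha,
                              ratio=1.0, alternative="two-sided")
    print(f"Power with 50 per group: {achieved:.2f}")
```

For grant and pre-registration text, the noncentral-t result is usually the number to report, since the normal approximation slightly underestimates the required N for t-tests.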