Adaptive epoch selection for Walk-Forward Optimization. TRIGGERS - WFO epoch, epoch selection, WFE optimization, overfitting epochs.
February 5, 2026

Install with `npx add-skill https://github.com/terrylica/cc-skills/blob/main/plugins/quant-research/skills/adaptive-wfo-epoch/SKILL.md -a claude-code --skill adaptive-wfo-epoch`; the skill installs to `.claude/skills/adaptive-wfo-epoch/`.

# Adaptive Walk-Forward Epoch Selection (AWFES)
Machine-readable reference for adaptive epoch selection within Walk-Forward Optimization (WFO). Optimizes training epochs per-fold using Walk-Forward Efficiency (WFE) as the objective.
## When to Use This Skill
Use this skill when:
- Selecting optimal training epochs for ML models in WFO
- Avoiding overfitting via Walk-Forward Efficiency metrics
- Implementing per-fold adaptive epoch selection
- Computing efficient frontiers for epoch-performance trade-offs
- Carrying epoch priors across WFO folds
## Quick Start
```python
from adaptive_wfo_epoch import AWFESConfig, compute_efficient_frontier

# Generate epoch candidates from search bounds and granularity
config = AWFESConfig.from_search_space(
    min_epoch=100,
    max_epoch=2000,
    granularity=5,  # Number of frontier points
)
# config.epoch_configs → [100, 211, 447, 945, 2000] (log-spaced)

# Per-fold epoch sweep
for fold in wfo_folds:
    epoch_metrics = []
    for epoch in config.epoch_configs:
        is_sharpe, oos_sharpe = train_and_evaluate(fold, epochs=epoch)
        wfe = config.compute_wfe(is_sharpe, oos_sharpe, n_samples=len(fold.train))
        epoch_metrics.append({"epoch": epoch, "wfe": wfe, "is_sharpe": is_sharpe})

    # Select from efficient frontier
    selected_epoch = compute_efficient_frontier(epoch_metrics)

    # Carry forward to next fold as prior
    prior_epoch = selected_epoch
```
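The Quick Start imports `compute_efficient_frontier` without showing its selection rule. One plausible rule, sketched below as an assumption rather than the library's actual implementation, is to take the cheapest epoch count whose WFE is within a tolerance of the best candidate:

```python
def select_from_frontier(epoch_metrics: list[dict], tolerance: float = 0.05) -> int:
    """Pick the cheapest epoch whose WFE is near-optimal.

    Illustrative frontier rule (assumption): among the swept candidates,
    prefer the smallest epoch count whose WFE is within `tolerance`
    of the best observed WFE, trading a little efficiency for training cost.
    """
    best_wfe = max(m["wfe"] for m in epoch_metrics)
    eligible = [m for m in epoch_metrics if m["wfe"] >= best_wfe - tolerance]
    return min(m["epoch"] for m in eligible)
```

With `tolerance=0.0` this degenerates to plain WFE maximization; a small positive tolerance biases selection toward cheaper training runs.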
## Methodology Overview
### What This Is
Per-fold adaptive epoch selection where:
1. Train models across a range of epochs (e.g., 400, 800, 1000, 2000)
2. Compute WFE = OOS_Sharpe / IS_Sharpe for each epoch count
3. Find the "efficient frontier" - epochs maximizing WFE vs training cost
4. Select optimal epoch from frontier for OOS evaluation
5. Carry forward as prior for next fold
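Steps 1-2 above can be sketched directly. The helpers below are illustrative, not the skill's API: `epoch_candidates` reproduces the log-spaced grid shown in the Quick Start, and `walk_forward_efficiency` computes the bare WFE ratio (the real `compute_wfe` also takes `n_samples`, presumably for a small-sample adjustment omitted here):

```python
def epoch_candidates(min_epoch: int, max_epoch: int, granularity: int) -> list[int]:
    """Log-spaced epoch grid between the search bounds, inclusive.

    Assumes granularity >= 2. For (100, 2000, 5) this yields the
    [100, 211, 447, 945, 2000] grid from the Quick Start.
    """
    span = max_epoch / min_epoch
    return [int(min_epoch * span ** (i / (granularity - 1))) for i in range(granularity)]

def walk_forward_efficiency(is_sharpe: float, oos_sharpe: float) -> float:
    """WFE = OOS_Sharpe / IS_Sharpe; values near 1.0 suggest IS performance carries OOS."""
    return oos_sharpe / is_sharpe
```

Low WFE at high epoch counts is the overfitting signature this skill targets: IS Sharpe keeps climbing while OOS Sharpe stalls or decays.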
### What This Is NOT
- **NOT early stopping**: Early stopping monitors validation loss continuously; this evaluates discrete candidates post-hoc
- **NOT Bayesian optimization**: No surrogate model or acquisition function; candidates come from a fixed log-spaced grid