This skill should be used when the user asks to "evaluate agent performance", "build test framework", "measure agent quality", "create evaluation rubrics", or mentions LLM-as-judge, multi-dimensional evaluation, agent testing, or quality gates for agent pipelines.
# Evaluation Methods for Agent Systems

Evaluating agent systems requires different approaches than traditional software or even standard language model applications. Agents make dynamic decisions, are non-deterministic between runs, and often lack single correct answers. Effective evaluation must account for these characteristics while providing actionable feedback. A robust evaluation framework enables continuous improvement, catches regressions, and validates that context engineering choices achieve their intended effects.

## When to Activate

Activate this skill when:

- Testing agent performance systematically
- Validating context engineering choices
- Measuring improvements over time
- Catching regressions before deployment
- Building quality gates for agent pipelines
- Comparing different agent configurations
- Evaluating production systems continuously

## Core Concepts

Agent evaluation requires outcome-focused approaches that account for non-determinism and multiple valid paths. Multi-dimensional rubrics capture distinct quality aspects: factual accuracy, completeness, citation accuracy, source quality, and tool efficiency. LLM-as-judge provides scalable evaluation, while human evaluation catches edge cases. The key insight is that agents may find alternative paths to their goals; the evaluation should judge whether they achieve the right outcomes while following a reasonable process.

**Performance Drivers: The 95% Finding**

Research on the BrowseComp evaluation (which tests browsing agents' ability to locate hard-to-find information) found that three factors explain 95% of performance variance:

| Factor | Variance Explained | Implication |
|--------|--------------------|-------------|
| Token usage | 80% | More tokens = better performance |
| Number of tool calls | ~10% | More exploration helps |
| Model choice | ~5% | Better models multiply efficiency |

This finding has significant implications for evaluation design:

- **Token budgets matter**: Evaluate agents with r
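To make the multi-dimensional rubric and LLM-as-judge ideas above concrete, here is a minimal Python sketch. The dimension names come from the Core Concepts discussion; the weights, the `JUDGE_PROMPT` wording, and the `judge` callable (a thin wrapper around whichever model client you use) are illustrative assumptions rather than part of this skill.

```python
from dataclasses import dataclass
from typing import Callable

# Rubric dimensions from the Core Concepts section above; weights are illustrative.
RUBRIC = {
    "factual_accuracy": 0.30,
    "completeness": 0.25,
    "citation_accuracy": 0.20,
    "source_quality": 0.15,
    "tool_efficiency": 0.10,
}

# Hypothetical judge prompt; tune the wording for your own judge model.
JUDGE_PROMPT = (
    "You are grading an agent transcript.\n"
    "Dimension: {dimension}\n"
    "Task: {task}\n"
    "Transcript: {transcript}\n"
    "Reply with a single integer score from 1 (poor) to 5 (excellent)."
)

@dataclass
class EvalResult:
    scores: dict           # per-dimension scores, 1-5
    weighted_score: float  # 0.0-1.0 aggregate across the rubric

def evaluate_transcript(task: str, transcript: str,
                        judge: Callable[[str], str]) -> EvalResult:
    """Score one agent run on each rubric dimension with an LLM judge."""
    scores = {}
    for dimension in RUBRIC:
        reply = judge(JUDGE_PROMPT.format(
            dimension=dimension, task=task, transcript=transcript))
        scores[dimension] = max(1, min(5, int(reply.strip())))  # clamp to 1-5
    # Normalize each 1-5 score to 0-1, then take the weighted sum.
    weighted = sum(w * (scores[d] - 1) / 4 for d, w in RUBRIC.items())
    return EvalResult(scores=scores, weighted_score=weighted)
```

Human evaluation can then be targeted at runs whose `weighted_score` lands near a quality-gate threshold, where judge disagreement and edge cases are most likely.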
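Because token usage and tool-call counts dominate the variance table above, an evaluation harness should record them alongside quality scores rather than reporting quality alone. A minimal sketch, assuming each run has been summarized into a hypothetical `RunMetrics` record (the field names, `token_budget` parameter, and `score_per_1k_tokens` metric are assumptions for illustration):

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class RunMetrics:
    task_id: str
    tokens_used: int       # total tokens consumed by the run
    tool_calls: int        # number of tool invocations
    weighted_score: float  # e.g. EvalResult.weighted_score from the sketch above

def summarize(runs: list, token_budget: int) -> dict:
    """Report quality next to the cost drivers that explain most of the variance."""
    over_budget = [r for r in runs if r.tokens_used > token_budget]
    return {
        "mean_score": mean(r.weighted_score for r in runs),
        "mean_tokens": mean(r.tokens_used for r in runs),
        "mean_tool_calls": mean(r.tool_calls for r in runs),
        "runs_over_budget": len(over_budget),
        # Quality per unit of the dominant cost driver (token usage).
        "score_per_1k_tokens": mean(
            r.weighted_score / (max(r.tokens_used, 1) / 1000) for r in runs),
    }
```

Comparing configurations on a cost-normalized metric such as `score_per_1k_tokens`, not just raw score, helps separate genuine quality improvements from gains bought purely with a larger token budget.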