
evaluation-rubrics

verified

Use when you need explicit quality criteria and scoring scales to evaluate work consistently, compare alternatives objectively, set acceptance thresholds, or reduce subjective bias, or when the user mentions rubric, scoring criteria, quality standards, evaluation framework, inter-rater reliability, or grading/assessing work.


Plugin: thinking-frameworks-skills

Repository: lyndonkl/claude (15 stars)

Path: skills/evaluation-rubrics/SKILL.md

Last Verified: January 24, 2026

Install Skill:

npx add-skill https://github.com/lyndonkl/claude/blob/main/skills/evaluation-rubrics/SKILL.md -a claude-code --skill evaluation-rubrics

Installation path (Claude): .claude/skills/evaluation-rubrics/

Instructions

# Evaluation Rubrics

## Table of Contents
- [Purpose](#purpose)
- [When to Use](#when-to-use)
- [What Is It?](#what-is-it)
- [Workflow](#workflow)
- [Common Patterns](#common-patterns)
- [Guardrails](#guardrails)
- [Quick Reference](#quick-reference)

## Purpose

Evaluation Rubrics provide explicit criteria and performance scales to assess quality consistently, fairly, and transparently. This skill guides you through rubric design—from identifying meaningful criteria to writing clear performance descriptors—to enable objective evaluation, reduce bias, align teams on standards, and give actionable feedback.

## When to Use

Use this skill when:

- **Quality assessment**: Code reviews, design critiques, writing evaluation, product launches, academic grading
- **Competitive evaluation**: Vendor selection, hiring candidates, grant proposals, pitch competitions, award judging
- **Progress tracking**: Sprint reviews, skill assessments, training completion, certification exams
- **Standardization**: Multiple reviewers need to score consistently (inter-rater reliability), reduce subjective bias
- **Feedback delivery**: Provide clear, actionable feedback tied to specific criteria (not just "good" or "needs work")
- **Threshold setting**: Define minimum acceptable quality (e.g., "must score ≥3/5 on all criteria to pass")
- **Process improvement**: Identify systematic weaknesses (many submissions score low on same criterion → need better guidance)

Trigger phrases: "rubric", "scoring criteria", "evaluation framework", "quality standards", "how do we grade this", "what does good look like", "consistent assessment", "inter-rater reliability"
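The threshold-setting use case above ("must score ≥3/5 on all criteria to pass") can be sketched as a simple check. This is a minimal illustration, not part of the skill itself; the criterion names and scores are hypothetical:

```python
# Hypothetical rubric scores on a 1-5 scale (criterion names are examples).
scores = {"clarity": 4, "completeness": 3, "originality": 2}

THRESHOLD = 3  # minimum acceptable level on every criterion


def passes(scores: dict, threshold: int = THRESHOLD) -> bool:
    """Pass only if *every* criterion meets the threshold."""
    return all(s >= threshold for s in scores.values())


# Criteria below threshold make for actionable feedback, not just "fail".
failing = [name for name, s in scores.items() if s < THRESHOLD]

print(passes(scores))  # False: "originality" is below threshold
print(failing)         # ['originality']
```

Requiring every criterion to clear the bar (rather than averaging) prevents one strong dimension from masking a weak one, which is the usual intent of a "≥3/5 on all criteria" rule.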

## What Is It?

An evaluation rubric is a structured scoring tool with:
- **Criteria**: What dimensions of quality are being assessed (e.g., clarity, completeness, originality)
- **Scale**: Numeric or qualitative levels (e.g., 1-5, Novice-Expert, Below/Meets/Exceeds)
- **Descriptors**: Explicit descriptions of what each level of performance looks like for each criterion
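The three components above (criteria, scale, descriptors) can be represented as a small data structure. This is a hypothetical sketch with invented criterion names and descriptor text, shown only to make the structure concrete:

```python
from dataclasses import dataclass


@dataclass
class Criterion:
    """One quality dimension, with a descriptor for each scale level."""
    name: str
    descriptors: dict[int, str]  # scale level -> what that level looks like


# Example rubric on a 1-5 scale (descriptors abbreviated to three levels).
rubric = [
    Criterion("clarity", {
        1: "Hard to follow; key points buried or missing",
        3: "Understandable; minor ambiguities remain",
        5: "Immediately clear; every claim is precise",
    }),
    Criterion("completeness", {
        1: "Major requirements unaddressed",
        3: "Core requirements met; edge cases thin",
        5: "All requirements and edge cases covered",
    }),
]

for criterion in rubric:
    print(f"{criterion.name}: top level = {criterion.descriptors[5]}")
```

Writing descriptors per level (rather than a single definition per criterion) is what makes scores reproducible across reviewers: two raters who read the same descriptor should land on the same level.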

Validation Details

- Front Matter
- Required Fields
- Valid Name Format
- Valid Description
- Has Sections
- Allowed Tools
- Instruction Length: 13982 chars