# project-health

AI-agent readiness auditing for project documentation and workflows. Evaluates whether future Claude Code sessions can understand docs, execute workflows literally, and resume work effectively. Use when onboarding AI agents to a project or ensuring context continuity. Includes three specialized agents: context-auditor (AI-readability), workflow-validator (process executability), handoff-checker (session continuity). Use PROACTIVELY before handing off projects to other AI sessions or team members.

**Marketplace**: jezweb-skills
**Repository**: jezweb/claude-skills (239 stars)
**Source**: skills/project-health/SKILL.md
**Last Verified**: February 1, 2026
Install with the add-skill CLI:

```
npx add-skill https://github.com/jezweb/claude-skills/blob/main/skills/project-health/SKILL.md -a claude-code --skill project-health
```

Installs to `.claude/skills/project-health/`.

# Project Health: AI-Agent Readiness Auditing

**Status**: Active
**Updated**: 2026-01-30
**Focus**: Ensuring documentation and workflows are executable by AI agents

## Overview

This skill evaluates project health from an **AI-agent perspective**: not just whether docs are well-written for humans, but whether future Claude Code sessions can:

1. **Understand** the documentation without ambiguity
2. **Execute** workflows by following instructions literally
3. **Resume** work effectively with proper context handoff

## When to Use

- Before handing off a project to another AI session
- When onboarding AI agents to contribute to a codebase
- After major refactors to ensure docs are still AI-executable
- When workflows fail because agents "didn't understand"
- Periodic health checks for AI-maintained projects

## Agent Selection Guide

| Situation | Use Agent | Why |
|-----------|-----------|-----|
| "Will another Claude session understand this?" | **context-auditor** | Checks for ambiguous references, implicit knowledge, incomplete examples |
| "Will this workflow actually execute?" | **workflow-validator** | Verifies steps are discrete, ordered, and include verification |
| "Can a new session pick up where I left off?" | **handoff-checker** | Validates SESSION.md, phase tracking, context preservation |
| Full project health audit | All three | Comprehensive AI-readiness assessment |

## Key Principles

### 1. Literal Interpretation

AI agents follow instructions literally. Documentation that works for humans (who fill in gaps) may fail for agents.

**Human-friendly** (ambiguous):
> "Update the config file with your settings"

**AI-friendly** (explicit):
> "Edit `wrangler.jsonc` and set `account_id` to your Cloudflare account ID (find it at dash.cloudflare.com → Overview → Account ID)"
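As a concrete illustration, the explicit instruction above maps to an unambiguous edit in `wrangler.jsonc`. The account ID below is a placeholder, not a real value:

```jsonc
{
  // Cloudflare Workers configuration
  "name": "my-worker",
  // Set to your own account ID: dash.cloudflare.com → Overview → Account ID
  "account_id": "0123456789abcdef0123456789abcdef"
}
```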

### 2. Explicit Over Implicit

Never assume the agent knows:
- Which file you mean
- What "obvious" next steps are
- Environment state or prerequisites
- What success looks like
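Some of these checks can be mechanized. The sketch below is a toy illustration of the kind of ambiguity scan a context-auditor-style pass might perform; the phrase list and function names are hypothetical, not the actual agent's heuristics:

```python
import re

# Hypothetical vague phrases an auditor might flag (illustrative only).
VAGUE_PATTERNS = [
    r"\bthe config file\b",
    r"\byour settings\b",
    r"\bas needed\b",
    r"\bobvious(ly)?\b",
]

def flag_ambiguities(doc_text: str) -> list[tuple[int, str]]:
    """Return (line_number, phrase) pairs for vague wording in a doc."""
    hits = []
    for lineno, line in enumerate(doc_text.splitlines(), start=1):
        for pattern in VAGUE_PATTERNS:
            m = re.search(pattern, line, re.IGNORECASE)
            if m:
                hits.append((lineno, m.group(0)))
    return hits

doc = "Update the config file with your settings.\nEdit wrangler.jsonc and set account_id."
print(flag_ambiguities(doc))
# → [(1, 'the config file'), (1, 'your settings')]
```

Line 1 is flagged twice (it names neither the file nor the settings), while the explicit line 2 passes clean.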

