Use this skill when users ask about benchmarking Clojure code, measuring performance, profiling execution time, or using the criterium library. Covers the 0.5.x API, including the bench macro, bench plans, viewers, domain analysis, and argument generation.

Repository: hugoduncan/criterium (skills/criterium/SKILL.md)

Last verified: January 20, 2026

Install with the add-skill CLI (installs to `.claude/skills/criterium/` for Claude):

```
npx add-skill https://github.com/hugoduncan/criterium/blob/main/skills/criterium/SKILL.md -a claude-code --skill criterium
```


# Criterium

Statistically rigorous benchmarking for Clojure that accounts for JVM warmup, garbage collection, and measurement overhead.

## Overview

Criterium is the standard benchmarking library for Clojure. Unlike naive timing approaches, it provides:

- **JVM-aware measurement** - Handles JIT warmup and GC interference
- **Statistical rigor** - Bootstrap confidence intervals, outlier detection
- **Multiple output formats** - Text, structured data, interactive charts
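
To see why naive timing misleads, compare repeated bare `(time ...)` calls across JIT warmup. The timings below are illustrative, not from a real run:

```clojure
;; First call: dominated by classloading and JIT compilation.
(time (reduce + (range 100000)))
;; "Elapsed time: 8.3 msecs"    ; illustrative

;; Later calls: the JIT has compiled the hot path.
(time (reduce + (range 100000)))
;; "Elapsed time: 0.9 msecs"    ; illustrative; often many times faster
```

Criterium runs warmup iterations and batched samples so these effects are measured and reported rather than silently folded into a single number.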

**Library:** `criterium/criterium`
**Current Version:** 0.5.x (alpha)
**License:** EPL-1.0

**Note:** The 0.4.x API (`criterium.core/bench`) is deprecated. Use `criterium.bench/bench` for all new code.

## Quick Start

```clojure
(require '[criterium.bench :as bench])

(bench/bench (+ 1 1))
```

Output:
```
      Elapsed Time: 2.15 ns  3σ [2.08 2.22]  min 2.07
Outliers (outliers / samples): low-severe 0 (0.0%), low-mild 0 (0.0%), high-mild 3 (1.5%), high-severe 0 (0.0%)
Sample Scheme: 200 samples with batch-size 4651 (930200 evaluations)
```

The output shows:
- **Mean time** (2.15 ns) with 3-sigma confidence bounds
- **Outlier counts** by category (low/high, mild/severe)
- **Sample scheme** - how measurements were collected
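
As a sanity check, the sample-scheme numbers multiply out: total evaluations equal samples times batch size.

```clojure
(* 200 4651)
;; => 930200
```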

## Core Concepts

Criterium uses a three-stage pipeline:

```
Collection → Analysis → View
```

1. **Collection** - Gather raw timing samples using collectors
2. **Analysis** - Apply statistical computations (mean, bootstrap CI, outliers)
3. **View** - Format and present results through viewers
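
In 0.5.x these stages surface as options to `bench`. The option keywords below (`:viewer`, and `:portal` as a value) are assumptions based on this skill's feature list, not verified 0.5.x signatures; check the library docs before relying on them:

```clojure
(require '[criterium.bench :as bench])

;; Default pipeline: standard collection plan, full analysis,
;; text output (as in Quick Start).
(bench/bench (reduce + (range 1000)))

;; Hypothetical: route the View stage to an interactive viewer.
;; :viewer and :portal are assumed option names, not confirmed API.
(bench/bench (reduce + (range 1000)) :viewer :portal)
```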

### The Measured Abstraction

The `bench` macro wraps your expression in a `measured` - a benchmarkable unit that:
- Prevents constant folding by hoisting arguments
- Supports batched evaluation for fast expressions
- Provides zero-allocation measurement

You rarely interact with `measured` directly, but it enables advanced patterns like argument generation. See [Argument Generation](#argument-generation) for explicit usage with test.check generators.
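
A sketch of constructing a `measured` directly. The constructor below, `criterium.measured/measured` taking a state function and a measure function, is an assumption about the 0.5.x API; the real namespace and arity may differ:

```clojure
(require '[criterium.measured :as measured]) ; assumed namespace

;; Hypothetical constructor: a state fn produces fresh arguments for
;; each batch (defeating constant folding), and a measure fn receives
;; them as the code under test.
(def sum-measured
  (measured/measured
   (fn [] [(vec (range 1000))])   ; state fn: build the arguments
   (fn [[v]] (reduce + v))))      ; measure fn: code under test

;; Assumed entry point for benchmarking a measured directly:
(bench/bench-measured sum-measured)
```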

## Basic Benchmarking

### The `bench` Macro
