model-compare

Compare 3D CAD models using boolean operations (IoU, Dice, precision/recall). Use when evaluating generated models against gold references, diffing CAD revisions, or computing similarity metrics for ML training. Triggers on: model diff, compare models, IoU, intersection over union, model similarity, CAD comparison, STEP diff, 3D evaluation, gold reference, generated model, precision recall 3D.

Plugin: build123d (vibecad)
Repository: rawwerks/VibeCAD (5 stars)
Source: plugins/build123d/skills/model-compare/SKILL.md

Last Verified: January 21, 2026

Install Skill

```bash
npx add-skill https://github.com/rawwerks/VibeCAD/blob/main/plugins/build123d/skills/model-compare/SKILL.md -a claude-code --skill model-compare
```

Installation path (Claude): `.claude/skills/model-compare/`

Powered by the add-skill CLI.

Instructions

# 3D Model Comparison Tool

Compare CAD models using boolean operations to compute similarity metrics like IoU, Dice, precision, and recall. Useful for:

- Evaluating ML-generated models against gold references
- Comparing revisions of CAD designs
- Computing metrics for training 3D generative models
- Visualizing geometric differences

## Quick Start

```bash
# Compare two STEP files
uvx --from build123d python scripts/model_diff.py reference.step generated.step

# JSON output for training pipelines
uvx --from build123d python scripts/model_diff.py ref.step gen.step --json --no-export

# Demo mode (no files needed)
uvx --from build123d python scripts/model_diff.py --demo
```
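
The `--json` mode is intended for training pipelines. A minimal gating sketch for consuming that report (the key name `iou` and the sample payload are assumptions about the JSON schema, not confirmed by this page):

```python
import json

def passes_gate(diff_report: str, min_iou: float = 0.9) -> bool:
    """Accept a generated model only if its IoU against the reference is high enough.

    `diff_report` is the stdout of `model_diff.py ref gen --json`; the "iou"
    key is an assumed part of the report schema.
    """
    metrics = json.loads(diff_report)
    return metrics.get("iou", 0.0) >= min_iou

# Hypothetical report, for illustration only:
sample = '{"iou": 0.94, "dice": 0.97, "precision": 0.95, "recall": 0.93}'
print(passes_gate(sample))  # True
```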

## Supported Formats

| Format | Extension | Notes |
|--------|-----------|-------|
| STEP | `.step`, `.stp` | Recommended - full CAD fidelity |
| BREP | `.brep` | OpenCASCADE native format |
| STL | `.stl` | Mesh format - may have boolean issues |

## Output Metrics

### Primary Metrics (for ML training)

| Metric | Range | Description |
|--------|-------|-------------|
| **IoU** (Jaccard) | 0-1 | `\|A∩B\| / \|A∪B\|` - standard similarity |
| **Dice** (F1) | 0-1 | `2\|A∩B\| / (\|A\|+\|B\|)` - more sensitive to small overlaps |
| **Precision** | 0-1 | `\|A∩B\| / \|B\|` - fraction of the generated model that is correct |
| **Recall** | 0-1 | `\|A∩B\| / \|A\|` - fraction of the reference that was captured |
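
Once a boolean intersection volume is known, these four metrics are pure volume arithmetic. A minimal sketch, assuming solid volumes as inputs (the function name is illustrative, not the script's API):

```python
def similarity_metrics(vol_a: float, vol_b: float, vol_inter: float) -> dict:
    """IoU, Dice, precision, recall from reference volume |A|, generated
    volume |B|, and intersection volume |A∩B| (e.g. from a CAD boolean)."""
    vol_union = vol_a + vol_b - vol_inter  # |A∪B| by inclusion-exclusion
    return {
        "iou": vol_inter / vol_union if vol_union else 0.0,
        "dice": 2 * vol_inter / (vol_a + vol_b) if vol_a + vol_b else 0.0,
        "precision": vol_inter / vol_b if vol_b else 0.0,
        "recall": vol_inter / vol_a if vol_a else 0.0,
    }

# Two unit cubes overlapping by half their volume:
m = similarity_metrics(1.0, 1.0, 0.5)
# iou = 0.5 / 1.5 ≈ 0.333, dice = 1.0 / 2.0 = 0.5
```

Note that Dice is always at least as large as IoU, which is why it is the more forgiving score for small overlaps.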

### Diagnostic Metrics

| Metric | Description |
|--------|-------------|
| `volume_ratio` | B/A volume ratio (1.0 = same size) |
| `center_offset` | Distance between centers of mass |
| `bbox_iou` | Bounding box IoU (coarse alignment) |
| `size_ratio_x/y/z` | Per-axis scale comparison |
| `surface_ratio` | Surface area comparison |
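
`bbox_iou` is cheap to reproduce for sanity-checking coarse alignment. A sketch over axis-aligned boxes given as (x, y, z) min/max corners (not the script's actual code):

```python
def bbox_iou(a_min, a_max, b_min, b_max):
    """IoU of two axis-aligned bounding boxes, each as (x, y, z) corner tuples."""
    inter = 1.0
    for lo_a, hi_a, lo_b, hi_b in zip(a_min, a_max, b_min, b_max):
        overlap = min(hi_a, hi_b) - max(lo_a, lo_b)
        if overlap <= 0:  # disjoint along this axis => no intersection at all
            return 0.0
        inter *= overlap

    def vol(lo, hi):
        return (hi[0] - lo[0]) * (hi[1] - lo[1]) * (hi[2] - lo[2])

    union = vol(a_min, a_max) + vol(b_min, b_max) - inter
    return inter / union

# Unit cube vs the same cube shifted 0.5 along x: IoU = 0.5 / 1.5 ≈ 0.333
print(bbox_iou((0, 0, 0), (1, 1, 1), (0.5, 0, 0), (1.5, 1, 1)))
```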

### Interpretation

The tool provides automatic interpretation:
- **Over-generating**: low precision; the generated model has extra geometry beyond the reference
- **Under-generating**: low recall; parts of the reference geometry are missing
- **Size issues**: volume ratio far from 1.0
- **Position issues**: large center-of-mass offset
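
The interpretation step amounts to threshold rules over the metrics. A hedged sketch (the cutoffs and key names below are illustrative guesses, not the tool's actual values):

```python
def interpret(metrics: dict, p_thresh: float = 0.5,
              size_tol: float = 0.2, offset_tol: float = 1.0) -> list:
    """Map similarity metrics to the diagnostic categories above.

    All thresholds are guesses; `offset_tol` is in model units.
    """
    notes = []
    if metrics["precision"] < p_thresh:
        notes.append("over-generating: extra geometry outside the reference")
    if metrics["recall"] < p_thresh:
        notes.append("under-generating: reference geometry is missing")
    if abs(metrics["volume_ratio"] - 1.0) > size_tol:
        notes.append("size issue: volumes differ substantially")
    if metrics["center_offset"] > offset_tol:
        notes.append("position issue: centers of mass are far apart")
    return notes
```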

## CLI Options

```
usage: model_diff.py
```

Flags shown in Quick Start: `--json`, `--no-export`, `--demo`.