rlm

verified

Recursive Language Model for processing large contexts (>50KB). Use for complex analysis tasks where token efficiency matters. Achieves 40% token savings by letting the LLM programmatically explore context via Query() and FINAL() patterns.

Repository

XiaoConstantine/rlm-go
5 stars

plugins/rlm/skills/rlm/SKILL.md

Last Verified

January 15, 2026

Install Skill

npx add-skill https://github.com/XiaoConstantine/rlm-go/blob/main/plugins/rlm/skills/rlm/SKILL.md -a claude-code --skill rlm

Installation paths:

Claude
.claude/skills/rlm/

Instructions

# RLM - Recursive Language Model

**RLM** is an inference-time scaling strategy that enables LLMs to handle arbitrarily long contexts by treating prompts as external objects that can be programmatically examined and recursively processed.

- **License:** MIT
- **Repository:** https://github.com/XiaoConstantine/rlm-go

## When to Use

Use `rlm` instead of direct LLM calls when:
- Processing **large contexts** (>50KB of text)
- Token efficiency is important (40% savings on large contexts)
- The task requires **iterative exploration** of data
- The analysis is **complex** and benefits from recursive sub-queries

## Do NOT Use When

- Context is small (<10KB); the overhead is not worth it
- Simple single-turn questions
- Tasks that don't require data exploration
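The size thresholds above can be sketched as a small shell helper. The 50KB/10KB cut-offs come from this document; the `recommend` function and file names are illustrative, not part of the rlm CLI:

```shell
#!/bin/sh
# Illustrative helper: recommend rlm only when the context is large
# enough to justify the REPL overhead. Thresholds follow the guidance
# above: >50KB -> rlm, <10KB -> direct LLM call.
recommend() {
  size=$(wc -c < "$1" | tr -d ' ')
  if [ "$size" -gt 51200 ]; then
    echo "rlm"      # large context: token savings outweigh overhead
  elif [ "$size" -lt 10240 ]; then
    echo "direct"   # small context: overhead not worth it
  else
    echo "either"   # gray zone: judge by task complexity
  fi
}

# Demo on generated files of known sizes.
head -c 60000 /dev/zero > /tmp/big.txt
head -c 1000  /dev/zero > /tmp/small.txt
recommend /tmp/big.txt
recommend /tmp/small.txt
```

The same check could gate a pipeline that falls back to a plain LLM call for small inputs.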

## Command Usage

```bash
# Basic usage with context file
~/.local/bin/rlm -context <file> -query "<query>" -verbose

# With inline context
~/.local/bin/rlm -context-string "data" -query "<query>"

# Pipe context from stdin
cat largefile.txt | ~/.local/bin/rlm -query "<query>"

# JSON output for programmatic use
~/.local/bin/rlm -context <file> -query "<query>" -json
```

## Options

| Flag | Description | Default |
|------|-------------|---------|
| `-context` | Path to context file | - |
| `-context-string` | Context string directly | - |
| `-query` | Query to run against context | Required |
| `-model` | LLM model to use | claude-sonnet-4-20250514 |
| `-max-iterations` | Maximum iterations | 30 |
| `-verbose` | Enable verbose output | false |
| `-json` | Output result as JSON | false |
| `-log-dir` | Directory for JSONL logs | - |

## How It Works

RLM uses a Go REPL environment where LLM-generated code can:

1. **Access context** as a string variable
2. **Make recursive sub-LLM calls** via `Query()` for focused analysis
3. **Use standard Go operations** for text processing
4. **Signal completion** with `FINAL()` when done
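The loop above can be illustrated with a self-contained Go sketch. `Query` is mocked here (in the real REPL it dispatches to a sub-LLM), `FINAL` is a hypothetical stand-in for the completion signal, and the chunk size simply mirrors the example below:

```go
package main

import (
	"fmt"
	"strings"
)

// Query is mocked for illustration; the real rlm REPL would send the
// prompt to a sub-LLM and return its answer.
func Query(prompt string) string {
	return fmt.Sprintf("summary(%d chars)", len(prompt))
}

// FINAL is a hypothetical stand-in for rlm's completion signal.
func FINAL(answer string) {
	fmt.Println("FINAL:", answer)
}

func main() {
	// A stand-in for the large context string the REPL exposes.
	context := strings.Repeat("some very long document text. ", 1000)

	// Walk the context in fixed-size chunks and summarize each one
	// with a focused sub-query (the map step).
	const chunkSize = 10000
	var parts []string
	for start := 0; start < len(context); start += chunkSize {
		end := start + chunkSize
		if end > len(context) {
			end = len(context)
		}
		parts = append(parts, Query("Summarize: "+context[start:end]))
	}

	// Combine the per-chunk answers with one last sub-query
	// (the reduce step), then signal completion.
	FINAL(Query("Combine these summaries: " + strings.Join(parts, "\n")))
}
```

Each sub-query sees only one chunk, which is how the pattern keeps any single LLM call well under the full context size.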

### The Query() Pattern

```go
// LLM generates code like this inside the REPL:
chunk := context[0:10000]
summary := Query("Summarize the key points of this excerpt: " + chunk)
FINAL(summary)
```
