
tanstack-ai

verified

TanStack AI (alpha): provider-agnostic, type-safe chat with streaming for OpenAI, Anthropic, Gemini, and Ollama. Use it for chat APIs, React/Solid frontends with useChat/ChatClient, isomorphic tools, tool approval flows, agent loops, multimodal inputs, or for troubleshooting streaming and tool definitions.


Marketplace: claude-skills (secondsky/claude-skills)
Plugin: tanstack-ai
Category: ai
Repository: secondsky/claude-skills (28 stars)
Path: plugins/tanstack-ai/skills/tanstack-ai/SKILL.md
Last Verified: January 24, 2026

Install Skill (via the add-skill CLI):

npx add-skill https://github.com/secondsky/claude-skills/blob/main/plugins/tanstack-ai/skills/tanstack-ai/SKILL.md -a claude-code --skill tanstack-ai

Installation path (Claude): .claude/skills/tanstack-ai/

Instructions

# TanStack AI (Provider-Agnostic LLM SDK)

**Status**: Production Ready ✅  
**Last Updated**: 2025-12-09  
**Dependencies**: Node.js 18+, TypeScript 5+; React 18+ for `@tanstack/ai-react`; Solid 1.8+ for `@tanstack/ai-solid`  
**Latest Versions**: @tanstack/ai@latest (alpha), @tanstack/ai-react@latest, @tanstack/ai-client@latest; adapters: @tanstack/ai-openai@latest, @tanstack/ai-anthropic@latest, @tanstack/ai-gemini@latest, @tanstack/ai-ollama@latest

---

## Quick Start (7 Minutes)

### 1) Install core + adapter

```bash
pnpm add @tanstack/ai @tanstack/ai-react @tanstack/ai-openai
# swap adapters as needed: @tanstack/ai-anthropic @tanstack/ai-gemini @tanstack/ai-ollama
pnpm add zod              # recommended for tool schemas
```

**Why this matters:**
- Core is framework-agnostic; the React binding just wraps the headless client.
- Adapters abstract provider quirks so you can switch providers or models without rewriting code (see the sketch below).
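
As a minimal sketch of that swap, assuming `@tanstack/ai-anthropic` exports an `anthropic()` adapter factory analogous to `openai()` (and using an illustrative model id), the chat endpoint shown in step 2 below changes only at the adapter and model:

```ts
// Sketch: swapping the provider. Assumes @tanstack/ai-anthropic exports an
// anthropic() adapter factory mirroring openai(); the model id is illustrative.
import { chat, toStreamResponse } from '@tanstack/ai'
import { anthropic } from '@tanstack/ai-anthropic'
import { tools } from '@/tools/definitions'

export async function POST(request: Request) {
  const { messages } = await request.json()
  // Only the adapter and model change; messages, tools, and streaming are untouched.
  const stream = chat({
    adapter: anthropic(),
    model: 'claude-sonnet-4-5',
    messages,
    tools,
  })
  return toStreamResponse(stream)
}
```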

### 2) Ship a streaming chat endpoint (Next.js or TanStack Start)

```ts
// app/api/chat/route.ts (Next.js) or src/routes/api/chat.ts (TanStack Start)
import { chat, toStreamResponse } from '@tanstack/ai'
import { openai } from '@tanstack/ai-openai'
import { tools } from '@/tools/definitions' // definitions only

export async function POST(request: Request) {
  const { messages, conversationId } = await request.json()
  const stream = chat({
    adapter: openai(),
    messages,
    model: 'gpt-4o',
    tools,
  })
  return toStreamResponse(stream)
}
```

**CRITICAL:**
- Pass tool **definitions** to the server so the LLM can request them; implementations live in their own runtimes (see the definitions sketch below).
- Always stream; chunked responses keep UIs responsive and let users stop a bad generation early instead of paying for the full completion.
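
To make the definition/implementation split concrete, here is a rough sketch of what `@/tools/definitions` could export. The object shape @tanstack/ai expects for a tool definition in the alpha API is an assumption here (it may provide a dedicated helper); the key point is that this file holds schemas and metadata only, with no `execute` logic.

```ts
// tools/definitions.ts -- illustrative sketch; the definition object shape is an
// assumption about the alpha API, not a confirmed contract.
import { z } from 'zod'

// Parameters the model must supply when it calls the tool.
export const updateUIParams = z.object({
  theme: z.enum(['light', 'dark']).describe('Theme to switch the UI to'),
})

// Definition only: the name, description, and parameter schema the LLM sees.
// The execute() implementation lives in whichever runtime performs the action.
export const updateUIDef = {
  name: 'update_ui',
  description: 'Switch the application theme',
  parameters: updateUIParams,
}

// Aggregate passed to the server-side chat() call.
export const tools = [updateUIDef]
```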

### 3) Wire the client with `useChat` + SSE

```tsx
// components/Chat.tsx
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'
import { clientTools } from '@tanstack/ai-client'
import { updateUIDef } from '@/tools/definitions'
```
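
As a rough sketch only, assuming `useChat` accepts a connection created by `fetchServerSentEvents`, takes client-side tool implementations registered via `clientTools`, and returns the message list plus a send function (none of which is confirmed against the alpha API), the component body might continue along these lines:

```tsx
// Illustrative sketch -- option names and useChat's return shape are assumptions
// about the alpha API; check the TanStack AI docs for the real signatures.
export function Chat() {
  const { messages, sendMessage } = useChat({
    // Stream responses from the endpoint defined in step 2.
    connection: fetchServerSentEvents('/api/chat'),
    // Client-side tool implementations would be registered here.
    tools: clientTools([updateUIDef]),
  })

  return (
    <div>
      {messages.map((m: any) => (
        <p key={m.id}>
          <strong>{m.role}:</strong> {m.content}
        </p>
      ))}
      <button onClick={() => sendMessage('Switch to dark mode')}>Send</button>
    </div>
  )
}
```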
