TanStack AI (alpha): provider-agnostic, type-safe chat with streaming for OpenAI, Anthropic, Gemini, and Ollama. Use for chat APIs, React/Solid frontends with useChat/ChatClient, isomorphic tools, tool approval flows, agent loops, multimodal inputs, or troubleshooting streaming and tool definitions.
# TanStack AI (Provider-Agnostic LLM SDK)
**Status**: Production Ready ✅
**Last Updated**: 2025-12-09
**Dependencies**: Node.js 18+, TypeScript 5+; React 18+ for `@tanstack/ai-react`; Solid 1.8+ for `@tanstack/ai-solid`
**Latest Versions**: @tanstack/ai@latest (alpha), @tanstack/ai-react@latest, @tanstack/ai-client@latest; adapters: @tanstack/ai-openai@latest, @tanstack/ai-anthropic@latest, @tanstack/ai-gemini@latest, @tanstack/ai-ollama@latest
---
## Quick Start (7 Minutes)
### 1) Install core + adapter
```bash
pnpm add @tanstack/ai @tanstack/ai-react @tanstack/ai-openai
# swap adapters as needed: @tanstack/ai-anthropic @tanstack/ai-gemini @tanstack/ai-ollama
pnpm add zod # recommended for tool schemas
```
**Why this matters:**
- Core is framework-agnostic; the React binding is a thin wrapper over the headless client.
- Adapters absorb provider quirks, so you can change models without rewriting application code (see the swap sketch after this list).
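A minimal sketch of such a swap, assuming the `anthropic()` factory and the model id mirror the `openai()` pattern used in step 2 (verify both against the alpha docs):

```ts
// Adapter swap sketch — only the adapter and model change;
// the chat({ adapter, model, messages, tools }) call in step 2 stays the same.
import { anthropic } from '@tanstack/ai-anthropic'

const adapter = anthropic()          // was: openai() from @tanstack/ai-openai
const model = 'claude-sonnet-4-5'    // provider-specific model id (assumed)
```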
### 2) Ship a streaming chat endpoint (Next.js or TanStack Start)
```ts
// app/api/chat/route.ts (Next.js) or src/routes/api/chat.ts (TanStack Start)
import { chat, toStreamResponse } from '@tanstack/ai'
import { openai } from '@tanstack/ai-openai'
import { tools } from '@/tools/definitions' // definitions only

export async function POST(request: Request) {
  // conversationId is available here if you also persist history server-side
  const { messages, conversationId } = await request.json()
  const stream = chat({
    adapter: openai(),
    messages,
    model: 'gpt-4o',
    tools,
  })
  return toStreamResponse(stream)
}
```
**CRITICAL:**
- Pass tool **definitions** to the server so the LLM can request them; implementations live in their own runtimes (a definitions sketch follows this list).
- Always stream: chunked responses keep the UI responsive and let users read (and cancel) output as it is generated.
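The sketch below makes that split concrete with a shared definitions module. The plain-object shape, the `name`/`description`/`inputSchema` fields, and the `update_ui` tool itself are assumptions about the alpha API; check the TanStack AI docs for the exact definition helper:

```ts
// tools/definitions.ts — shared by server and client.
// Assumption: a tool definition is a name + zod schema object; the exact
// shape expected by chat() may differ in the alpha API.
import { z } from 'zod'

// Definition only: enough for the LLM to decide when and how to call the tool.
export const updateUIDef = {
  name: 'update_ui',
  description: 'Replace the content of a UI panel',
  inputSchema: z.object({
    panelId: z.string(),
    content: z.string(),
  }),
}

// The server passes this list to chat(); implementations are registered
// separately in whichever runtime actually executes each tool.
export const tools = [updateUIDef]
```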
### 3) Wire the client with `useChat` + SSE
```tsx
// components/Chat.tsx
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'
import { clientTools } from '@tanstack/ai-client'
import { updateUIDef } from '@/tools/definitions' // shared definition from the sketch above

export function Chat() {
  // Option names below are assumptions about the alpha API; verify against the docs.
  const { messages, sendMessage } = useChat({
    connection: fetchServerSentEvents('/api/chat'),
    tools: clientTools([updateUIDef]),
  })
  // ...render `messages` and wire an input to `sendMessage`
}
```
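Server and client are two ends of the same stream: `toStreamResponse` emits chunked SSE events as the model generates, and `fetchServerSentEvents` consumes them so `useChat` can fold each chunk into `messages` as it arrives.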