Vercel AI SDK for building chat interfaces with streaming. Use when implementing useChat hook, handling tool calls, streaming responses, or building chat UI. Triggers on useChat, @ai-sdk/react, UIMessage, ChatStatus, streamText, toUIMessageStreamResponse, addToolOutput, onToolCall, sendMessage.
View on GitHub: skills/vercel-ai-sdk/SKILL.md
February 1, 2026

Install with `npx add-skill https://github.com/existential-birds/beagle/blob/main/skills/vercel-ai-sdk/SKILL.md -a claude-code --skill vercel-ai-sdk`. Installation path: `.claude/skills/vercel-ai-sdk/`

# Vercel AI SDK
The Vercel AI SDK provides React hooks and server utilities for building streaming chat interfaces with support for tool calls, file attachments, and multi-step reasoning.
## Quick Reference
### Basic useChat Setup
```typescript
import { useChat } from '@ai-sdk/react';

const { messages, status, sendMessage, stop, regenerate } = useChat({
  id: 'chat-id',
  messages: initialMessages,
  onFinish: ({ message, messages, isAbort, isError }) => {
    console.log('Chat finished');
  },
  onError: (error) => {
    console.error('Chat error:', error);
  },
});

// Send a message
sendMessage({ text: 'Hello', metadata: { createdAt: Date.now() } });

// Send with files
sendMessage({
  text: 'Analyze this',
  files: fileList, // FileList or FileUIPart[]
});
```
### ChatStatus States
The `status` field indicates the current state of the chat:
- **`ready`**: Chat is idle and ready to accept new messages
- **`submitted`**: Message sent to API, awaiting response stream start
- **`streaming`**: Response actively streaming from the API
- **`error`**: An error occurred during the request
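
The four states above map naturally onto UI affordances. As a minimal sketch (the `chatControls` helper is hypothetical, not part of the SDK), you might derive input and button state from `status` like this:

```typescript
// The four ChatStatus values returned by useChat.
type ChatStatus = 'ready' | 'submitted' | 'streaming' | 'error';

// Hypothetical helper: derive UI flags from the current chat status.
function chatControls(status: ChatStatus) {
  return {
    // Disable the input while a request is in flight.
    inputDisabled: status === 'submitted' || status === 'streaming',
    // Only show the stop button while a response is actively streaming.
    showStop: status === 'streaming',
    // Offer a retry (e.g. via regenerate) after an error.
    showRetry: status === 'error',
  };
}
```

In a component you would call this with the `status` value returned by `useChat` and wire `showStop` to the `stop` function.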
### Message Structure
Messages use the `UIMessage` type with a parts-based structure:
```typescript
interface UIMessage {
  id: string;
  role: 'system' | 'user' | 'assistant';
  metadata?: unknown;
  parts: Array<UIMessagePart>; // text, file, tool-*, reasoning, etc.
}
```
Part types include:
- `text`: Text content with optional streaming state
- `file`: File attachments (images, documents)
- `tool-{toolName}`: Tool invocations with state machine
- `reasoning`: AI reasoning traces
- `data-{typeName}`: Custom data parts
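
Rendering a message means iterating its `parts` and switching on `part.type`. The sketch below uses simplified part shapes (the real `UIMessagePart` union carries more fields, and the tool-part states shown in the comment are assumptions based on the SDK's tool state machine):

```typescript
// Simplified part shapes for illustration only.
type Part =
  | { type: 'text'; text: string }
  | { type: 'reasoning'; text: string }
  | { type: 'file'; url: string; mediaType: string }
  | { type: `tool-${string}`; state: string };

// Map a single message part to display text.
function renderPart(part: Part): string {
  if (part.type === 'text') return part.text;
  if (part.type === 'reasoning') return `[thinking] ${part.text}`;
  if (part.type === 'file') return `[file: ${part.mediaType}]`;
  // Remaining variants are tool-{toolName} parts, which carry a state
  // (e.g. input-streaming, input-available, output-available).
  return `[${part.type}: ${part.state}]`;
}
```

In a real UI you would return JSX per branch instead of strings, keyed by the part's index within `message.parts`.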
### Server-Side Streaming
```typescript
import { streamText, convertToModelMessages, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = streamText({
  model: openai('gpt-4'),
  messages: convertToModelMessages(uiMessages),
  tools: {
    getWeather: tool({
      description: 'Get weather',
      inputSchema: z.object({ city: z.string() }),
      // Placeholder implementation; replace with a real weather lookup.
      execute: async ({ city }) => ({ city, temperature: 72 }),
    }),
  },
});

return result.toUIMessageStreamResponse();
```