jeremylongshore/claude-code-plugins-plus-skills
plugins/saas-packs/langchain-pack/skills/langchain-sdk-patterns/SKILL.md
January 22, 2026
# LangChain SDK Patterns
## Overview
Production-ready patterns for LangChain applications, including LCEL chains, structured output, and error handling.
## Prerequisites
- Completed `langchain-install-auth` setup
- Familiarity with async/await patterns
- Understanding of error handling best practices
## Core Patterns
### Pattern 1: Type-Safe Chain with Pydantic
```python
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate


class SentimentResult(BaseModel):
    """Structured output for sentiment analysis."""

    sentiment: str = Field(description="positive, negative, or neutral")
    confidence: float = Field(description="Confidence score 0-1")
    reasoning: str = Field(description="Brief explanation")


llm = ChatOpenAI(model="gpt-4o-mini")
structured_llm = llm.with_structured_output(SentimentResult)

prompt = ChatPromptTemplate.from_template(
    "Analyze the sentiment of: {text}"
)

chain = prompt | structured_llm

# Returns a typed SentimentResult
result: SentimentResult = chain.invoke({"text": "I love LangChain!"})
print(f"Sentiment: {result.sentiment} ({result.confidence})")
```
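Under the hood, `with_structured_output` has the model emit JSON conforming to the schema, and the typed object you get back is the result of ordinary Pydantic validation. A minimal sketch of that last step, using a hand-written JSON string in place of real model output:

```python
from pydantic import BaseModel, Field, ValidationError


class SentimentResult(BaseModel):
    """Structured output for sentiment analysis."""

    sentiment: str = Field(description="positive, negative, or neutral")
    confidence: float = Field(description="Confidence score 0-1")
    reasoning: str = Field(description="Brief explanation")


# Stand-in for the JSON a model would return
raw = '{"sentiment": "positive", "confidence": 0.97, "reasoning": "Enthusiastic wording"}'
result = SentimentResult.model_validate_json(raw)
print(result.sentiment, result.confidence)

# Incomplete output fails validation instead of silently propagating bad data
try:
    SentimentResult.model_validate_json('{"sentiment": "positive"}')
    missing = 0
except ValidationError as err:
    missing = len(err.errors())  # confidence and reasoning are absent
print(missing)
```

This is also why malformed model output surfaces as a Pydantic `ValidationError` (in v2, via `model_validate_json`) rather than a silently wrong dict, so you can catch and retry it like any other transient failure.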
### Pattern 2: Retry with Fallback
```python
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

primary = ChatOpenAI(model="gpt-4o")
fallback = ChatAnthropic(model="claude-3-5-sonnet-20241022")

# Automatically falls back to Claude if the OpenAI call fails
robust_llm = primary.with_fallbacks([fallback])
response = robust_llm.invoke("Hello!")
```
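Note that `with_fallbacks` only switches providers; to also retry the primary on transient errors before falling back, LCEL runnables expose `with_retry` (e.g. `primary.with_retry(stop_after_attempt=3).with_fallbacks([fallback])`). The combined control flow is easy to see in a plain-Python sketch; the function and model stand-ins below are illustrative, not LangChain APIs:

```python
import time


def call_with_retry_and_fallback(primary, fallback, prompt, attempts=3, base_delay=0.0):
    """Try `primary` up to `attempts` times, then fall back to `fallback` once."""
    last_error = None
    for attempt in range(attempts):
        try:
            return primary(prompt)
        except Exception as err:  # in production, catch only transient error types
            last_error = err
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    try:
        return fallback(prompt)
    except Exception:
        raise last_error  # surface the primary's error if the fallback also fails


# Simulated primary model that is rate-limited twice before succeeding
calls = {"count": 0}

def flaky_primary(prompt):
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("rate limited")
    return f"primary: {prompt}"

out = call_with_retry_and_fallback(flaky_primary, lambda p: f"fallback: {p}", "Hello!")
print(out)  # the primary succeeds on its third attempt
```

Retrying before falling back matters when the fallback model has different quality or cost characteristics: transient errors stay on the primary, and only sustained failures switch providers.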
### Pattern 3: Async Batch Processing
```python
import asyncio

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_template("Summarize: {text}")
chain = prompt | llm


async def process_batch(texts: list[str]) -> list:
    """Process multiple texts concurrently."""
    inputs = [{"text": t} for t in texts]
    return await chain.abatch(inputs)


results = asyncio.run(process_batch(["First document...", "Second document..."]))
```