langchain-prod-checklist (verified)

Marketplace: claude-code-plugins-plus (jeremylongshore/claude-code-plugins-plus-skills)
Plugin: langchain-pack (ai-ml)
Repository: jeremylongshore/claude-code-plugins-plus-skills (1.1k stars)
Path: plugins/saas-packs/langchain-pack/skills/langchain-prod-checklist/SKILL.md
Last Verified: January 22, 2026

Install Skill

npx add-skill https://github.com/jeremylongshore/claude-code-plugins-plus-skills/blob/main/plugins/saas-packs/langchain-pack/skills/langchain-prod-checklist/SKILL.md -a claude-code --skill langchain-prod-checklist

Installation path (Claude): .claude/skills/langchain-prod-checklist/

Instructions

# LangChain Production Checklist

## Overview
Comprehensive checklist for deploying LangChain applications to production with reliability, security, and performance.

## Prerequisites
- LangChain application developed and tested
- Infrastructure provisioned
- CI/CD pipeline configured

## Production Checklist

### 1. Configuration & Secrets
- [ ] All API keys loaded from a secrets manager (never hard-coded or committed to the repo)
- [ ] Environment-specific configurations separated
- [ ] Fallback values for non-critical settings
- [ ] Configuration validation on startup

```python
from pydantic import Field, SecretStr
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    """Validated configuration, loaded from the environment."""
    model_config = SettingsConfigDict(env_file=".env")

    openai_api_key: SecretStr       # read from OPENAI_API_KEY
    langsmith_api_key: SecretStr    # read from LANGSMITH_API_KEY (used for tracing below)
    model_name: str = "gpt-4o-mini"
    max_retries: int = Field(default=3, ge=1, le=10)
    timeout_seconds: int = Field(default=30, ge=5, le=120)

settings = Settings()  # Fails fast at startup if required values are missing or invalid
```
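
A brief usage sketch, assuming the `Settings` class above is the single source of truth for model configuration (the `ChatOpenAI` wiring shown here is illustrative):

```python
from langchain_openai import ChatOpenAI

# Build the production LLM from validated settings; the secret is only
# unwrapped at the point of use.
llm = ChatOpenAI(
    model=settings.model_name,
    api_key=settings.openai_api_key.get_secret_value(),
    max_retries=settings.max_retries,
    timeout=settings.timeout_seconds,
)
```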

### 2. Error Handling & Resilience
- [ ] Retry logic with exponential backoff
- [ ] Fallback models configured
- [ ] Circuit breaker for cascading failures
- [ ] Graceful degradation strategy

```python
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

# Primary model retries transient failures before giving up
primary = ChatOpenAI(model="gpt-4o-mini", max_retries=3)
# Secondary provider takes over if the primary still fails
fallback = ChatAnthropic(model="claude-3-5-sonnet-20241022")

robust_llm = primary.with_fallbacks([fallback])
```
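
Retry with exponential backoff (the first checklist item) can be layered onto the same runnable; a minimal sketch using `with_retry`, with an illustrative attempt count:

```python
# Retry transient failures with exponential backoff plus jitter, then fall
# back to the secondary provider if the primary is still failing.
resilient_llm = (
    primary
    .with_retry(wait_exponential_jitter=True, stop_after_attempt=3)
    .with_fallbacks([fallback])
)

response = resilient_llm.invoke("Summarize today's deployment checklist.")
```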

### 3. Observability
- [ ] Structured logging configured
- [ ] Metrics collection enabled
- [ ] Distributed tracing (LangSmith or OpenTelemetry)
- [ ] Alerting rules defined

```python
import os

# LangSmith tracing
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = settings.langsmith_api_key.get_secret_value()
os.environ["LANGCHAIN_PROJECT"] = "production"

# Prometheus metrics
from prometheus_client import Counter, Histogram

llm_requests = Counter("langchain_llm_requests_total", "Total LLM requests")
llm_latency = Histogram("langchain_llm_latency_seconds", "LLM request latency in seconds")
```
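
A minimal sketch of how these metrics might wrap a call, assuming the `robust_llm` runnable from the previous section; the exact instrumentation points are illustrative:

```python
# Time each call and count total requests; Prometheus scrapes these
# from the metrics endpoint exposed by the service.
with llm_latency.time():
    result = robust_llm.invoke("Health-check prompt")
llm_requests.inc()
```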

Validation Details

- Front Matter
- Required Fields
- Valid Name Format
- Valid Description
- Has Sections
- Allowed Tools
- Instruction Length: 5338 chars