# LangChain Deploy Integration
## Overview
Deploy LangChain applications to production using containers and cloud platforms with best practices for scaling and reliability.
## Prerequisites
- LangChain application ready for production
- Docker installed
- Cloud provider account (GCP, AWS, or Azure)
- API keys stored in a secrets manager (see the sketch below)
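If you keep keys in a cloud secrets manager rather than baking them into the image, load them into the environment at startup. Below is a minimal sketch assuming Google Cloud Secret Manager and a secret named `openai-api-key` (both the provider choice and the secret name are assumptions; AWS Secrets Manager or Azure Key Vault work the same way):

```python
# fetch_secret.py -- illustrative sketch, not part of this skill's code.
# Assumes Google Cloud Secret Manager and a secret named "openai-api-key";
# swap in your provider's SDK if you use AWS or Azure.
import os

from google.cloud import secretmanager


def load_openai_key(project_id: str) -> None:
    """Fetch the OpenAI key and expose it via the env var LangChain reads."""
    client = secretmanager.SecretManagerServiceClient()
    name = f"projects/{project_id}/secrets/openai-api-key/versions/latest"
    response = client.access_secret_version(request={"name": name})
    os.environ["OPENAI_API_KEY"] = response.payload.data.decode("UTF-8")
```

Call `load_openai_key(...)` before constructing the LLM, or inject the key as a container environment variable at deploy time.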
## Instructions
### Step 1: Create Dockerfile
```dockerfile
# Dockerfile
FROM python:3.11-slim AS builder
WORKDIR /app
# Install build dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir --user -r requirements.txt
# Production stage
FROM python:3.11-slim
WORKDIR /app
# Create the non-root user first so the Python packages can live in its home
# (copying them to /root/.local would leave them unreadable after USER appuser)
RUN useradd --create-home appuser
# Copy installed packages from the builder into the runtime user's home
COPY --from=builder --chown=appuser:appuser /root/.local /home/appuser/.local
ENV PATH=/home/appuser/.local/bin:$PATH
# Copy application code
COPY --chown=appuser:appuser src/ ./src/
COPY --chown=appuser:appuser main.py .
USER appuser
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD python -c "import requests; requests.get('http://localhost:8080/health', timeout=5).raise_for_status()"
EXPOSE 8080
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8080"]
```
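The builder stage copies a `requirements.txt` that this skill does not show. A minimal file consistent with the imports used here might look like this (the package set is an assumption and the version pins are illustrative):

```text
# requirements.txt -- illustrative pins, adjust to your project
fastapi>=0.110
uvicorn[standard]>=0.29
langchain-openai>=0.1
langchain-core>=0.2
requests>=2.31   # used by the container HEALTHCHECK
```

Verify the image locally with `docker build -t langchain-app .` followed by `docker run -p 8080:8080 -e OPENAI_API_KEY=... langchain-app` before pushing it to your registry.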
### Step 2: Create FastAPI Application
The app uses FastAPI's lifespan handler to construct the LLM and chain once at startup, rather than on every request.
```python
# main.py
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from contextlib import asynccontextmanager
import os
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
# Globals populated at startup by the lifespan handler below
llm = None
chain = None
@asynccontextmanager
async def lifespan(app: FastAPI):
global llm, chain
# Startup
llm = ChatOpenAI(
model=os.environ.get("MODEL_NAME", "gpt-4o-mini"),
max_retries=3
)
prompt = ChatPromptTemplate.from_template("{input}")
chain = prompt | llm | StrOutputParser()
yield
    # Shutdown