langchain-migration-deep-dive (verified)

- Marketplace: claude-code-plugins-plus
- Plugin: langchain-pack (category: ai-ml)
- Repository: jeremylongshore/claude-code-plugins-plus-skills (1.1k stars)
- Path: plugins/saas-packs/langchain-pack/skills/langchain-migration-deep-dive/SKILL.md
- Last Verified: January 22, 2026


# LangChain Migration Deep Dive

## Overview
Comprehensive strategies for migrating to LangChain from legacy LLM implementations or other frameworks.

## Prerequisites
- Existing LLM application to migrate
- Understanding of current architecture
- Test coverage for validation
- Staging environment for testing
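
The "test coverage for validation" item above can be sketched as a framework-agnostic parity harness run in staging; `legacy_chat` and `new_chat` are hypothetical stand-ins for your pre- and post-migration entry points, not names from this pack:

```python
def parity_failures(legacy_chat, new_chat, prompts):
    """Run both implementations over the same prompts and report mismatches.

    LLM output is non-deterministic, so this checks structure (same return
    type, non-empty reply) rather than exact strings; pin temperature=0 if
    you need tighter comparisons.
    """
    failures = []
    for p in prompts:
        old, new = legacy_chat(p), new_chat(p)
        if type(old) is not type(new) or not new:
            failures.append(p)
    return failures
```

Running this over a fixed prompt set before and after each migration step gives a cheap regression signal without asserting exact model output.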

## Migration Scenarios

### Scenario 1: Raw OpenAI SDK to LangChain

#### Before (Raw SDK)
```python
# legacy_openai.py
import openai

client = openai.OpenAI()

def chat(message: str, history: list | None = None) -> str:
    messages = [{"role": "system", "content": "You are helpful."}]

    if history:
        messages.extend(history)

    messages.append({"role": "user", "content": message})

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        temperature=0.7
    )

    return response.choices[0].message.content
```

#### After (LangChain)
```python
# langchain_chat.py
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.output_parsers import StrOutputParser
from langchain_core.messages import HumanMessage, AIMessage

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are helpful."),
    MessagesPlaceholder(variable_name="history"),
    ("user", "{message}")
])

chain = prompt | llm | StrOutputParser()

def chat(message: str, history: list | None = None) -> str:
    # Convert legacy format to LangChain messages
    lc_history = []
    if history:
        for msg in history:
            if msg["role"] == "user":
                lc_history.append(HumanMessage(content=msg["content"]))
            elif msg["role"] == "assistant":
                lc_history.append(AIMessage(content=msg["content"]))

    return chain.invoke({"message": message, "history": lc_history})
```

### Scenario 2: LlamaIndex to LangChain

#### Before (LlamaIndex)
```python
# legacy_llamaindex.py
# Representative sketch of a typical LlamaIndex app (the original snippet
# is truncated here); assumes the modern llama_index.core API.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

def query(question: str) -> str:
    return str(query_engine.query(question))
```
