langchain-core-workflow-a (verified)

- Marketplace: claude-code-plugins-plus
- Plugin: langchain-pack (category: ai-ml)
- Repository: jeremylongshore/claude-code-plugins-plus-skills (1.1k stars)
- Path: plugins/saas-packs/langchain-pack/skills/langchain-core-workflow-a/SKILL.md
- Last verified: January 22, 2026

Install with the add-skill CLI:
npx add-skill https://github.com/jeremylongshore/claude-code-plugins-plus-skills/blob/main/plugins/saas-packs/langchain-pack/skills/langchain-core-workflow-a/SKILL.md -a claude-code --skill langchain-core-workflow-a

Installation path (Claude): `.claude/skills/langchain-core-workflow-a/`

Instructions

# LangChain Core Workflow A: Chains & Prompts

## Overview
Build production-ready chains using LangChain Expression Language (LCEL) with prompt templates, output parsers, and composition patterns.

## Prerequisites
- Completed `langchain-install-auth` setup (a quick check is sketched after this list)
- Understanding of prompt engineering basics
- Familiarity with Python type hints
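
Before building chains, you can quickly confirm that the `langchain-install-auth` step left credentials in place. This is a minimal sketch that assumes an OpenAI-backed model (as in the examples below); adjust the variable name if you configured a different provider.

```python
import os

# Assumes OpenAI is the provider configured during `langchain-install-auth`;
# swap in your provider's environment variable if it differs.
if not os.environ.get("OPENAI_API_KEY"):
    raise RuntimeError("OPENAI_API_KEY is not set; rerun the langchain-install-auth setup")
```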

## Instructions

### Step 1: Create Prompt Templates
```python
from langchain_core.prompts import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder
)

# Simple template
simple_prompt = ChatPromptTemplate.from_template(
    "Translate '{text}' to {language}"
)

# Chat-style template
chat_prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template(
        "You are a {role}. Respond in {style} style."
    ),
    MessagesPlaceholder(variable_name="history", optional=True),
    HumanMessagePromptTemplate.from_template("{input}")
])
```
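
Before wiring a template into a chain, you can render it with sample values and inspect the messages it produces. The values below are illustrative only.

```python
# `history` can be omitted because the MessagesPlaceholder above is optional
messages = chat_prompt.format_messages(
    role="technical writer",
    style="concise",
    input="Explain LCEL in one sentence."
)
for message in messages:
    print(type(message).__name__, "->", message.content)
```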

### Step 2: Build LCEL Chains
```python
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser, JsonOutputParser

llm = ChatOpenAI(model="gpt-4o-mini")

# Basic chain: prompt -> llm -> parser
basic_chain = simple_prompt | llm | StrOutputParser()

# Invoke the chain
result = basic_chain.invoke({
    "text": "Hello, world!",
    "language": "Spanish"
})
print(result)  # "Hola, mundo!"
```
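
`JsonOutputParser` is imported above but not used in this step. A minimal sketch of a JSON-returning chain, using a hypothetical `Translation` schema to generate format instructions, could look like this:

```python
from pydantic import BaseModel, Field

class Translation(BaseModel):
    """Hypothetical schema used only to illustrate structured output."""
    translation: str = Field(description="The translated text")
    source_language: str = Field(description="Detected language of the input")

json_parser = JsonOutputParser(pydantic_object=Translation)

# Partial in the format instructions so callers only supply text and language
json_prompt = ChatPromptTemplate.from_template(
    "Translate '{text}' to {language}.\n{format_instructions}"
).partial(format_instructions=json_parser.get_format_instructions())

json_chain = json_prompt | llm | json_parser

result = json_chain.invoke({"text": "Hello, world!", "language": "Spanish"})
# result is a plain dict, e.g. {"translation": "...", "source_language": "..."}
```

Note that `JsonOutputParser` returns a plain dict; use `PydanticOutputParser` if you want validated `Translation` instances instead.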

### Step 3: Chain Composition
```python
from langchain_core.runnables import RunnablePassthrough, RunnableParallel

# Placeholder prompts so the examples below are runnable; adapt to your use case
prompt1 = ChatPromptTemplate.from_template("Summarize this text: {text}")
prompt2 = ChatPromptTemplate.from_template("Extract keywords from: {text}")
prompt3 = ChatPromptTemplate.from_template("Describe the sentiment of: {text}")

# Sequential chain
chain1 = prompt1 | llm | StrOutputParser()
chain2 = prompt2 | llm | StrOutputParser()

# chain1's string output is mapped into the `text` variable expected by chain2
sequential = chain1 | (lambda x: {"text": x}) | chain2

# Parallel execution: the three chains run over the same input
parallel = RunnableParallel(
    summary=prompt1 | llm | StrOutputParser(),
    keywords=prompt2 | llm | StrOutputParser(),
    sentiment=prompt3 | llm | StrOutputParser()
)

results = parallel.invoke({"text": "Your input text"})
# Returns: {"summary": "...", "keywords": "...", "sentiment": "..."}
```
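
`RunnablePassthrough` is imported above but not used in the examples. One common pattern, sketched here with hypothetical `summary_prompt` and `qa_prompt` templates, is `RunnablePassthrough.assign`, which forwards the input dict unchanged while adding computed keys:

```python
summary_prompt = ChatPromptTemplate.from_template("Summarize: {text}")
qa_prompt = ChatPromptTemplate.from_template(
    "Summary:\n{summary}\n\nOriginal text:\n{text}\n\nList three key takeaways."
)

# The input {"text": ...} passes through unchanged and gains a "summary" key,
# so qa_prompt receives both the original text and the computed summary.
enrich = RunnablePassthrough.assign(summary=summary_prompt | llm | StrOutputParser())
takeaway_chain = enrich | qa_prompt | llm | StrOutputParser()

takeaways = takeaway_chain.invoke({"text": "Your input text"})
```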

Validation Details

- Front matter: required fields, valid name format, valid description
- Structure: has sections, allowed tools
- Instruction length: 4182 chars