Use when tackling complex reasoning tasks requiring step-by-step logic, multi-step arithmetic, commonsense reasoning, symbolic manipulation, or problems where simple prompting fails. Provides a comprehensive guide to Chain-of-Thought and related prompting techniques (Zero-shot CoT, Self-Consistency, Tree of Thoughts, Least-to-Most, ReAct, PAL, Reflexion) with templates, decision matrices, and research-backed patterns.
Install with:

```bash
npx add-skill https://github.com/NeoLabHQ/context-engineering-kit/blob/main/plugins/customaize-agent/skills/thought-based-reasoning/SKILL.md -a claude-code --skill thought-based-reasoning
```

Installation path: `.claude/skills/thought-based-reasoning/`
# Thought-Based Reasoning Techniques for LLMs

## Overview

Chain-of-Thought (CoT) prompting and its variants encourage LLMs to generate intermediate reasoning steps before arriving at a final answer, significantly improving performance on complex reasoning tasks. These techniques transform how models approach problems by making implicit reasoning explicit.

## Quick Reference

| Technique | When to Use | Complexity | Accuracy Gain |
|-----------|-------------|------------|---------------|
| Zero-shot CoT | Quick reasoning, no examples available | Low | +20-60% |
| Few-shot CoT | Have good examples, consistent format needed | Medium | +30-70% |
| Self-Consistency | High-stakes decisions, need confidence | Medium | +10-20% over CoT |
| Tree of Thoughts | Complex problems requiring exploration | High | +50-70% on hard tasks |
| Least-to-Most | Multi-step problems with subproblems | Medium | +30-80% |
| ReAct | Tasks requiring external information | Medium | +15-35% |
| PAL | Mathematical/computational problems | Medium | +10-15% |
| Reflexion | Iterative improvement, learning from errors | High | +10-20% |

---

## Core Techniques

### 1. Chain-of-Thought (CoT) Prompting

**Paper**: "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models" (Wei et al., 2022)
**Citations**: 14,255+

#### When to Use

- Multi-step arithmetic or math word problems
- Commonsense reasoning requiring logical deduction
- Symbolic reasoning tasks
- When you have good exemplars showing reasoning

#### How It Works

Provide few-shot examples that include intermediate reasoning steps, not just question-answer pairs. The model learns to generate similar step-by-step reasoning.

#### Prompt Template

```
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?
A:
```
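#### Code Sketch

Programmatically, few-shot CoT is just string assembly plus final-answer extraction. Below is a minimal sketch; the `complete(prompt) -> str` callable is a hypothetical stand-in for whatever model client you use, not a real library API.

```python
import re
from typing import Callable

# Exemplar copied from the prompt template above (Wei et al., 2022).
COT_EXEMPLARS = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 "
    "tennis balls. 5 + 6 = 11. The answer is 11.\n"
)

def cot_answer(question: str, complete: Callable[[str], str]) -> str:
    """Build a few-shot CoT prompt, run the model, extract the final answer."""
    prompt = f"{COT_EXEMPLARS}\nQ: {question}\nA:"
    reasoning = complete(prompt)  # the model emits step-by-step reasoning here
    # The exemplars end with "The answer is X." -- pull X out of the generation.
    match = re.search(r"The answer is (.+?)\.", reasoning)
    return match.group(1) if match else reasoning.strip()
```

The closing phrase in the exemplars ("The answer is X.") is what makes the reasoning machine-readable; if your exemplars end differently, adjust the regex to match.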