This skill should be used when the user asks to "start an LLM project", "design batch pipeline", "evaluate task-model fit", "structure agent project", or mentions pipeline architecture, agent-assisted development, cost estimation, or choosing between LLM and traditional approaches.
Install with:

`npx add-skill https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering/blob/1bda9bd46e28d344dc260f476258423e7c06ccf3/skills/project-development/SKILL.md -a claude-code --skill project-development`

Installation path:
`.claude/skills/project-development/`

# Project Development Methodology

This skill covers the principles for identifying tasks suited to LLM processing, designing effective project architectures, and iterating rapidly using agent-assisted development. The methodology applies whether building a batch processing pipeline, a multi-agent research system, or an interactive agent application.

## When to Activate

Activate this skill when:

- Starting a new project that might benefit from LLM processing
- Evaluating whether a task is well-suited for agents versus traditional code
- Designing the architecture for an LLM-powered application
- Planning a batch processing pipeline with structured outputs
- Choosing between single-agent and multi-agent approaches
- Estimating costs and timelines for LLM-heavy projects

## Core Concepts

### Task-Model Fit Recognition

Not every problem benefits from LLM processing. The first step in any project is evaluating whether the task characteristics align with LLM strengths. This evaluation should happen before writing any code.

**LLM-suited tasks share these characteristics:**

| Characteristic | Why It Fits |
|----------------|-------------|
| Synthesis across sources | LLMs excel at combining information from multiple inputs |
| Subjective judgment with rubrics | LLMs handle grading, evaluation, and classification with criteria |
| Natural language output | When the goal is human-readable text, not structured data |
| Error tolerance | Individual failures do not break the overall system |
| Batch processing | No conversational state required between items |
| Domain knowledge in training | The model already has relevant context |

**LLM-unsuited tasks share these characteristics:**

| Characteristic | Why It Fails |
|----------------|--------------|
| Precise computation | Math, counting, and exact algorithms are unreliable |
| Real-time requirements | LLM latency is too high for sub-second responses |
| Perfect accuracy requirements | Hallucination risk makes 100% accuracy unattainable |
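The two tables can be applied as a quick pre-coding checklist. The sketch below is illustrative only and is not part of the skill itself: the `TaskFit` class and its field names are assumptions that simply encode the suited and unsuited characteristics listed above, treating any unsuited characteristic as a hard blocker rather than a score penalty.

```python
from dataclasses import dataclass

# Hypothetical checklist encoding the fit criteria from the tables above.
# Field names and the score threshold are illustrative assumptions.

@dataclass
class TaskFit:
    # Suited characteristics
    synthesis_across_sources: bool = False
    subjective_judgment_with_rubric: bool = False
    natural_language_output: bool = False
    tolerates_individual_errors: bool = False
    batch_no_conversation_state: bool = False
    domain_in_model_training: bool = False
    # Unsuited characteristics
    needs_precise_computation: bool = False
    needs_sub_second_latency: bool = False
    needs_perfect_accuracy: bool = False

    def evaluate(self) -> str:
        # Unsuited characteristics are treated as blockers, not score penalties,
        # because the table describes failure modes rather than trade-offs.
        blockers = [
            name for name, hit in [
                ("precise computation", self.needs_precise_computation),
                ("real-time latency", self.needs_sub_second_latency),
                ("perfect accuracy", self.needs_perfect_accuracy),
            ] if hit
        ]
        score = sum([
            self.synthesis_across_sources,
            self.subjective_judgment_with_rubric,
            self.natural_language_output,
            self.tolerates_individual_errors,
            self.batch_no_conversation_state,
            self.domain_in_model_training,
        ])
        if blockers:
            return f"Prefer traditional code (blockers: {', '.join(blockers)})"
        if score >= 3:
            return f"Good LLM fit ({score}/6 suited characteristics)"
        return f"Weak LLM fit ({score}/6): prototype before committing"


if __name__ == "__main__":
    # Example: batch-grading free-text survey responses against a rubric.
    fit = TaskFit(
        subjective_judgment_with_rubric=True,
        natural_language_output=True,
        tolerates_individual_errors=True,
        batch_no_conversation_state=True,
    )
    print(fit.evaluate())  # Good LLM fit (4/6 suited characteristics)
```

A checklist like this is only a starting point; the deciding step remains the pre-coding evaluation described above, done before any pipeline architecture is chosen.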