LLM function calling and tool use patterns. Use when enabling LLMs to call external tools, defining tool schemas, implementing tool execution loops, or getting structured output from LLMs.
ork-llm-core
January 25, 2026
Install with:

npx add-skill https://github.com/yonatangross/orchestkit/blob/main/plugins/ork-llm-core/skills/function-calling/SKILL.md -a claude-code --skill function-calling

Installation path: `.claude/skills/function-calling/`

# Function Calling
Enable LLMs to use external tools and return structured data.
## Basic Tool Definition (2026 Best Practice)
```python
# OpenAI format with strict mode (2026 recommended)
tools = [{
    "type": "function",
    "function": {
        "name": "search_documents",
        "description": "Search the document database for relevant content",
        "strict": True,  # ← 2026: Enables structured output validation
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The search query"
                },
                "limit": {
                    "type": "integer",
                    "description": "Max results to return"
                }
            },
            "required": ["query", "limit"],  # All props required when strict
            "additionalProperties": False  # ← 2026: Required for strict mode
        }
    }
}]

# Note: With strict=True:
# - All properties must be listed in "required"
# - additionalProperties must be False
# - No "default" values (provide via code instead)
```
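
For reference, here is one way this definition can be passed to a chat completion request. This is a minimal sketch assuming the OpenAI Python SDK; the client setup, model name, and user prompt are illustrative:

```python
# Minimal sketch: send the strict tool schema with a chat completion request.
# Assumes the OpenAI Python SDK; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Find documents about rate limiting"}],
    tools=tools,  # the strict tool definition above
)

message = response.choices[0].message
if message.tool_calls:
    # The model decided to call a tool; arguments arrive as a JSON string
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
else:
    print(message.content)
```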
## Tool Execution Loop
```python
import json

async def run_with_tools(messages: list, tools: list) -> str:
    """Execute tool calls until the LLM returns a final answer."""
    while True:
        response = await llm.chat(messages=messages, tools=tools)

        # No tool calls: the LLM has produced its final answer
        if not response.tool_calls:
            return response.content

        # Record the assistant message that requested the tool calls,
        # so the tool results that follow have the right context
        messages.append({
            "role": "assistant",
            "content": response.content,
            "tool_calls": response.tool_calls,
        })

        # Execute each tool call and add its result to the conversation
        for tool_call in response.tool_calls:
            result = await execute_tool(
                tool_call.function.name,
                json.loads(tool_call.function.arguments),
            )
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": json.dumps(result),
            })
        # Continue loop (LLM will process tool results on the next iteration)
```
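
The loop above assumes an `execute_tool` helper that maps a tool name to an implementation. One possible shape is a name-to-function registry; the registry, decorator, and `search_documents` stub below are an illustrative sketch, not part of the skill:

```python
# Hypothetical execute_tool: dispatch tool calls through a simple registry.
from typing import Any, Callable

TOOL_REGISTRY: dict[str, Callable[..., Any]] = {}

def register_tool(name: str):
    """Decorator that registers an async tool implementation under a name."""
    def decorator(fn):
        TOOL_REGISTRY[name] = fn
        return fn
    return decorator

@register_tool("search_documents")
async def search_documents(query: str, limit: int) -> list[dict]:
    # Illustrative stub: query your document store here
    return []

async def execute_tool(name: str, arguments: dict) -> Any:
    """Look up the tool by name and call it with the parsed arguments."""
    tool = TOOL_REGISTRY.get(name)
    if tool is None:
        # Return an error payload so the LLM can recover instead of crashing the loop
        return {"error": f"Unknown tool: {name}"}
    return await tool(**arguments)
```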