Tools
Tools are capabilities that the AI model can invoke to interact with the outside world. Understanding how tools work is key to building powerful AI applications.
What Are Tools?
Tools (also called "functions") let the model:
- Execute shell commands
- Read and write files
- Search the web
- Call external APIs
- Interact with databases
- Anything else you can program
Without tools, the model can only generate text. With tools, it can take action.
# Without tools:
# User: "What files are in my directory?"
# Model: "I don't have access to your filesystem."
# With tools:
# User: "What files are in my directory?"
# Model: [calls execute_command("ls")]
# Tool: "file1.py\nfile2.py\nREADME.md"
# Model: "Your directory contains:
# - file1.py
# - file2.py
# - README.md"
Anatomy of a Tool
Every tool has three components:
1. Schema (ToolSchema)
Describes the tool to the model: what it does and what parameters it accepts. Uses JSON Schema format.
2. Executor (Function)
The actual code that runs when the tool is called. Receives parameters and returns a result.
3. Result (ToolResult)
The output sent back to the model. Can be text, structured data, or even images.
from jaato import ToolSchema, ToolResult

# 1. Schema - tells model what the tool does
schema = ToolSchema(
    name="get_weather",
    description="Get current weather for a city",
    parameters={
        "type": "object",
        "properties": {
            "city": {
                "type": "string",
                "description": "City name"
            },
            "units": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"]
            }
        },
        "required": ["city"]
    }
)

# 2. Executor - actual implementation
def get_weather(city: str, units: str = "celsius"):
    # Call weather API...
    return f"Weather in {city}: Sunny, 22°C"

# 3. Result - what goes back to model
result = ToolResult(
    call_id="call_123",
    name="get_weather",
    result="Weather in Paris: Sunny, 22°C"
)
Execution Flow
When you send a message, the client handles a multi-step process:
1. Send — Your message goes to the model
2. Decide — Model decides to use a tool
3. Call — Model returns a FunctionCall
4. Execute — ToolExecutor runs the function
5. Return — Result sent back to model
6. Continue — Model processes result
7. Repeat — Steps 2-6 may repeat
8. Complete — Model returns final text
This loop continues until the model decides it has enough information to respond with text only.
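The loop can be sketched in plain Python. This is an illustrative skeleton, not jaato's implementation — the `model.generate` call and the message dict shapes are assumptions for the sketch:

```python
def run_tool_loop(model, executor, messages, max_iterations=10):
    """Call the model in a loop, executing any tool calls it returns,
    until it answers with plain text (or the iteration cap is hit)."""
    for _ in range(max_iterations):
        response = model.generate(messages)  # hypothetical model API
        calls = response.get("function_calls", [])
        if not calls:
            # No tool calls: the model answered in text, so the loop ends.
            return response["text"]
        for call in calls:
            success, result = executor.execute(call["name"], call["args"])
            # Feed each tool result back to the model as a new message.
            messages.append({
                "role": "tool",
                "name": call["name"],
                "content": result if success else f"Error: {result}",
            })
    raise RuntimeError("Tool loop exceeded max_iterations")
```

The iteration cap is a safety net: without it, a model that keeps requesting tools would loop forever.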
# What happens inside send_message():
# You send: "What's the weather in Paris?"
# Loop iteration 1:
# Model response: FunctionCall(
# name="get_weather",
# args={"city": "Paris"}
# )
# Executor runs: get_weather("Paris")
# Result: "Sunny, 22°C"
# Send result back to model
# Loop iteration 2:
# Model response: "The weather in Paris is
# sunny with a temperature
# of 22°C."
# No function calls → loop ends
# Final response returned to you
# "Compare weather in Paris and London"
# Iteration 1: Model calls BOTH tools
# FunctionCall(name="get_weather", args={"city": "Paris"})
# FunctionCall(name="get_weather", args={"city": "London"})
# Both execute, results sent back
# Iteration 2: Model synthesizes
# "Paris: Sunny, 22°C
# London: Cloudy, 18°C
# Paris is warmer and sunnier."
The ToolExecutor
ToolExecutor is responsible for running tools safely.
It provides:
| Feature | Description |
|---|---|
| Registry | Maps tool names to functions |
| Permissions | Check before executing |
| Auto-background | Run slow tools in background |
| Output | Stream output via callback |
| Accounting | Track execution stats |
Normally you don't interact with ToolExecutor directly — the client manages it for you.
from shared import ToolExecutor

# Create executor
executor = ToolExecutor()

# Register tools
executor.register("get_weather", get_weather)
executor.register("search_web", search_web)

# Execute a tool
success, result = executor.execute(
    "get_weather",
    {"city": "Paris"}
)

if success:
    print(f"Result: {result}")
else:
    print(f"Error: {result}")

# Client creates executor internally
client.configure_tools(registry)

# Or configure custom tools
client.configure_custom_tools(
    tools=[weather_schema],
    executors={"get_weather": get_weather}
)

# Execution happens automatically
# when model calls tools
Permission Control
Not all tool calls should execute automatically. The permission system lets you control what runs.
Permission Levels
| Level | Behavior |
|---|---|
| Auto-approve | Execute immediately |
| Prompt | Ask user for approval |
| Deny | Never execute |
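The level lookup can be modeled as a first-match rule scan. A minimal sketch — `check_permission` is an illustrative name, not part of the API; the rule format mirrors the JSON policy shown below:

```python
def check_permission(tool_name, rules, default="prompt"):
    """Return 'allow', 'prompt', or 'deny' for a tool call.
    The first rule matching the tool name wins; otherwise the
    policy's default applies."""
    for rule in rules:
        if rule["tool"] == tool_name:
            return rule["action"]
    return default

rules = [
    {"tool": "read_file", "action": "allow"},
    {"tool": "delete_file", "action": "deny"},
]
```

Any tool without an explicit rule falls through to the default, so a `"default": "prompt"` policy keeps unknown tools behind user approval.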
Auto-Approved Tools
Plugins can mark certain tools as safe for auto-approval via
get_auto_approved_tools(). These typically include
read-only operations.
For a complete guide on configuring permission policies, channels, and user commands, see the Permissions Guide. For API reference, see the Permission Plugin.
from shared import PermissionPlugin

# Create permission plugin
perm = PermissionPlugin()
perm.initialize({
    "config_path": "permissions.json"
})

# Configure client with permissions
client.configure_tools(
    registry,
    permission_plugin=perm
)

# Now tool calls go through permission check
# Now tool calls go through permission check
{
  "default": "prompt",
  "rules": [
    {
      "tool": "read_file",
      "action": "allow"
    },
    {
      "tool": "execute_command",
      "action": "prompt"
    },
    {
      "tool": "delete_file",
      "action": "deny"
    }
  ]
}
class SafePlugin:
    name = "safe"

    def get_auto_approved_tools(self):
        # These don't need permission
        return [
            "get_current_time",
            "get_system_info"
        ]

    def get_tool_schemas(self):
        return [...]
Auto-Backgrounding
Some tools take a long time to run. Auto-backgrounding detects slow operations and runs them in a background thread, allowing the conversation to continue.
How It Works
- Tool starts executing
- If it exceeds threshold (e.g., 5 seconds)
- Execution moves to background
- Model receives "running in background" status
- Model can check status or continue
Configuration
Configure via the CLI plugin or ToolExecutor:
- auto_background_enabled — Enable/disable
- auto_background_threshold — Seconds before backgrounding
# Via CLI plugin config
registry.expose_tool("cli", config={
    "auto_background_threshold": 5.0  # seconds
})

# Via ToolExecutor directly
executor = ToolExecutor(
    auto_background_enabled=True,
    auto_background_pool_size=4
)
# User: "Run the test suite"
# Model calls: execute_command("pytest")
# Executor starts running pytest...
# After 5 seconds, still running:
# - Moves to background thread
# - Returns: "Command running in background
# (task_id: abc123)"
# Model can:
# - Wait and check status
# - Continue with other work
# - Ask user to wait
# When complete:
# - Results available via task_id
# - Model retrieves and summarizes
Error Handling
Tools can fail. The executor handles errors gracefully and reports them back to the model.
Error Types
| Type | Cause |
|---|---|
| Not Found | Tool doesn't exist |
| Permission | Execution denied |
| Execution | Tool threw an exception |
| Timeout | Execution took too long |
Errors are wrapped in ToolResult with
is_error=True. The model sees the error message
and can decide how to proceed.
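Conceptually, the wrapping looks like the sketch below. `ToolResult` here is a stand-in dataclass with the fields described above, not jaato's actual class, and `execute_safely` is an illustrative name:

```python
from dataclasses import dataclass

@dataclass
class ToolResult:  # stand-in for jaato's ToolResult
    call_id: str
    name: str
    result: str
    is_error: bool = False

def execute_safely(call_id, name, fn, args):
    """Run a tool function, converting any exception into an
    error ToolResult instead of letting it propagate."""
    try:
        return ToolResult(call_id, name, str(fn(**args)))
    except Exception as exc:
        return ToolResult(call_id, name, f"Error: {exc}", is_error=True)
```

Because failures come back as ordinary results, the model always gets a response it can reason about rather than the conversation aborting.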
def read_file(path: str) -> str:
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        raise ValueError(f"File not found: {path}")
    except PermissionError:
        raise ValueError(f"Permission denied: {path}")

# Executor catches exceptions and returns:
# ToolResult(
#     call_id="...",
#     name="read_file",
#     result="Error: File not found: /foo/bar",
#     is_error=True
# )
# User: "Read /nonexistent/file.txt"
# Model calls: read_file("/nonexistent/file.txt")
# Tool returns error
# Model receives:
# ToolResult(is_error=True,
# result="File not found")
# Model responds:
# "I couldn't read that file because it
# doesn't exist. Would you like me to
# create it, or did you mean a different
# path?"
Output Streaming
Tools can stream output in real-time via the output callback. This is useful for long-running operations or verbose tools.
The callback set on the executor (via client) receives:
- source — Plugin/tool name
- text — Output text
- mode — "write" or "append"
# CLI plugin streams command output:
# User: "Run pytest -v"
# Model calls: execute_command("pytest -v")
# Output callback receives:
# ("cli", "===== test session starts =====", "write")
# ("cli", "\ncollected 10 items", "append")
# ("cli", "\ntest_foo.py::test_one PASSED", "append")
# ("cli", "\ntest_foo.py::test_two PASSED", "append")
# ...
# User sees output in real-time!
import time

class StreamingPlugin:
    name = "streaming"

    def __init__(self):
        self._output_callback = None

    def set_output_callback(self, callback):
        self._output_callback = callback

    def _stream_output(self, text, mode="append"):
        if self._output_callback:
            self._output_callback(self.name, text, mode)

    def long_operation(self):
        self._stream_output("Starting...", "write")
        for i in range(10):
            time.sleep(0.5)
            self._stream_output(f"\nStep {i+1}/10")
        return "Complete!"
Next Steps
- Providers — Multi-provider abstraction
- Building Plugins — Create your own tools
- Permissions — Configure security
- ToolExecutor API — Full reference
# Types you'll work with:
from jaato import (
    # Declare tools
    ToolSchema,
    # Model requests tool use
    FunctionCall,
    # Tool returns result
    ToolResult,
    # Attach files/images
    Attachment,
)
# Flow:
# ToolSchema → Model → FunctionCall
# → Executor → ToolResult
# → Model → Response