# The Client
JaatoClient is the central orchestrator of the framework. It manages connections to AI providers, coordinates tool execution, and maintains conversation state.
## Overview
The client follows a simple lifecycle:
- Create — Instantiate with a provider
- Connect — Establish connection to the AI model
- Configure — Set up tools and plugins
- Converse — Send messages and receive responses
The client abstracts away provider-specific details, giving you a consistent interface regardless of which AI service you're using.
```python
from jaato import JaatoClient, PluginRegistry

# 1. Create
client = JaatoClient()

# 2. Connect (reads JAATO_PROVIDER and MODEL_NAME from env)
client.connect()

# 3. Configure
registry = PluginRegistry(model_name=client.model_name)
registry.discover()
registry.expose_tool("cli")
client.configure_tools(registry)

# 4. Converse
response = client.send_message(
    "Hello!",
    on_output=lambda s, t, m: print(t, end="")
)
```
## Architecture
The client sits between your application and the AI provider, coordinating several components:
| Component | Role |
|---|---|
| ModelProvider | Handles communication with AI services |
| ToolExecutor | Runs tools when the model requests them |
| PluginRegistry | Manages available tools and their schemas |
| Session | Maintains conversation history |
When you send a message, the client:
- Forwards the message to the provider
- Receives the response (text or tool calls)
- If tool calls: executes them and sends results back
- Repeats until the model returns final text
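The loop above can be sketched in miniature. Everything in this sketch (the `StubProvider`, the plain-dict history entries, the `run_turn` helper) is illustrative scaffolding, not the real jaato API:

```python
# Sketch of the send/execute loop described above; stand-in interfaces only.
from dataclasses import dataclass

@dataclass
class ToolCall:
    name: str
    args: dict

class StubProvider:
    """Returns one tool call, then final text once it sees a tool result."""
    def generate(self, history):
        if any(m.get("role") == "tool" for m in history):
            return "Here are the files."                    # final text
        return ToolCall("execute_command", {"command": "ls"})

def run_turn(provider, execute_tool, history, user_text):
    history.append({"role": "user", "text": user_text})
    while True:
        reply = provider.generate(history)
        if isinstance(reply, ToolCall):        # model requested a tool
            history.append({"role": "model", "call": reply})
            result = execute_tool(reply)       # run it...
            history.append({"role": "tool", "result": result})
            continue                           # ...and send the result back
        history.append({"role": "model", "text": reply})
        return reply                           # final text ends the turn

history = []
answer = run_turn(StubProvider(), lambda call: "file1.py\nfile2.py",
                  history, "List files")
```

The key property is the `continue`: tool results are appended to the history and the provider is called again, so a single user message can fan out into several model/tool exchanges before the final text arrives.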
## Conversation State
The client maintains full conversation history internally. Each message exchange adds to the history, enabling multi-turn conversations with context.
### History Structure
History is a list of Message objects, each containing:
- `role` — USER, MODEL, or TOOL
- `parts` — Content (text, function calls, results)
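As a mental model, a history entry might be shaped like the following. The class and field names mirror the description above, but these definitions are illustrative, not the library's actual classes:

```python
# Illustrative data model for history entries.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Role(Enum):
    USER = "user"
    MODEL = "model"
    TOOL = "tool"

@dataclass
class Part:
    text: Optional[str] = None
    function_call: Optional[dict] = None
    function_response: Optional[dict] = None

@dataclass
class Message:
    role: Role
    parts: list = field(default_factory=list)

# A user message with a single text part:
msg = Message(Role.USER, [Part(text="List files in current dir")])
```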
### Turn Boundaries
A "turn" is one user message plus all model responses and tool executions until the next user message. The client tracks turn boundaries for operations like reverting.
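The bookkeeping can be sketched as follows, assuming a turn starts at each USER message. `turn_boundaries` and `revert_to_turn` here are illustrative stand-ins for the client's internal logic, and the revert semantics shown (drop everything from the start of the given turn onward) is one plausible choice:

```python
# A turn begins at each USER message; boundaries are those indices.
def turn_boundaries(history):
    return [i for i, msg in enumerate(history) if msg["role"] == "user"]

def revert_to_turn(history, boundaries, turn_index):
    """Rewind history to just before the given turn."""
    return history[:boundaries[turn_index]]

history = [
    {"role": "user",  "text": "What's 2+2?"},
    {"role": "model", "text": "4"},
    {"role": "user",  "text": "And 3+3?"},
    {"role": "model", "text": "6"},
]
boundaries = turn_boundaries(history)   # one entry per turn
```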
```python
# Send some messages
client.send_message("What's 2+2?", on_output=handler)
client.send_message("And 3+3?", on_output=handler)

# Get full history
history = client.get_history()
print(f"Messages: {len(history)}")
for msg in history:
    print(f"{msg.role}: {msg.text[:50]}...")

# Get turn boundaries
turns = client.get_turn_boundaries()
print(f"Turns: {len(turns)}")

# Revert to first turn
client.revert_to_turn(0)

# Or reset completely
client.reset_session()
```
```python
# After "List files" conversation:
[
    Message(role=USER, parts=[
        Part(text="List files in current dir")
    ]),
    Message(role=MODEL, parts=[
        Part(function_call=FunctionCall(
            name="execute_command",
            args={"command": "ls"}
        ))
    ]),
    Message(role=TOOL, parts=[
        Part(function_response=ToolResult(
            name="execute_command",
            result="file1.py\nfile2.py\n..."
        ))
    ]),
    Message(role=MODEL, parts=[
        Part(text="Here are the files...")
    ])
]
```
## Output Streaming
The `on_output` callback provides real-time visibility into what's happening during message processing.
### Callback Signature
| Parameter | Values | Meaning |
|---|---|---|
| `source` | `"model"`, plugin name | Who produced the output |
| `text` | string | The output content |
| `mode` | `"write"`, `"append"` | New block or continuation |
Use `mode` to determine formatting: `"write"` starts a new output block, `"append"` continues the previous one.
```python
# Simple callback
def on_output(source, text, mode):
    print(text, end="")

# With source formatting
def on_output(source, text, mode):
    if mode == "write":
        print(f"\n[{source}] ", end="")
    print(text, end="")

# Collecting output by source
outputs = {"model": [], "cli": []}

def on_output(source, text, mode):
    if source in outputs:
        outputs[source].append(text)
    print(text, end="")

response = client.send_message(
    "Run ls -la",
    on_output=on_output
)
print(f"\nModel said: {''.join(outputs['model'])}")
print(f"CLI output: {''.join(outputs['cli'])}")

# Callback receives this sequence:
#
# ("model", "I'll list the files", "write")
# ("model", " for you.", "append")
# ("cli", "file1.py\nfile2.py", "write")
# ("model", "Here are the files:", "write")
# ("model", "\n- file1.py", "append")
# ("model", "\n- file2.py", "append")
```
## Context Management
AI models have limited context windows. The client provides tools to monitor and manage context usage.
### Monitoring
- `get_context_limit()` — Model's max tokens
- `get_context_usage()` — Current usage stats
- `get_turn_accounting()` — Per-turn token counts
### Garbage Collection
GC plugins automatically manage context by removing or summarizing old messages when approaching limits.
| Strategy | Description |
|---|---|
| `gc_truncate` | Remove oldest messages |
| `gc_summarize` | Summarize old messages |
| `gc_hybrid` | Combined approach |
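As an illustration of the truncation strategy, a simplified pass might evict the oldest messages until usage falls back under the threshold while never touching the most recent turns. This sketch is not the `gc_truncate` plugin's actual code; the token counts and parameter names are illustrative:

```python
# Simplified truncation GC: drop oldest messages until under budget,
# never evicting the most recent N turns.
def truncate_gc(history, token_counts, limit,
                threshold_percent=80.0, preserve_recent_turns=2):
    # Indices where a new turn (USER message) begins.
    user_idx = [i for i, m in enumerate(history) if m["role"] == "user"]
    if len(user_idx) >= preserve_recent_turns:
        protected_from = user_idx[-preserve_recent_turns]
    else:
        protected_from = 0
    budget = limit * threshold_percent / 100.0
    kept = list(range(len(history)))
    while (kept and kept[0] < protected_from
           and sum(token_counts[i] for i in kept) > budget):
        kept.pop(0)  # evict the oldest message first
    return [history[i] for i in kept]

# Three turns of two messages each, 100 "tokens" apiece, 500-token limit:
history = [{"role": r} for r in ["user", "model"] * 3]
trimmed = truncate_gc(history, [100] * 6, limit=500)
```

With an 80% threshold the budget is 400 tokens, so the oldest turn is evicted and the remaining four messages fit.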
```python
# Check context limits
limit = client.get_context_limit()
usage = client.get_context_usage()
print(f"Limit: {limit:,} tokens")
print(f"Used: {usage['total_tokens']:,} tokens")
print(f"Available: {limit - usage['total_tokens']:,}")

# Per-turn breakdown
for turn in client.get_turn_accounting():
    print(f"Turn {turn['turn_id']}: "
          f"{turn['tokens']} tokens, "
          f"{turn['duration_ms']}ms")
```
```python
from shared.plugins.gc_truncate import (
    create_plugin as create_gc
)
from shared.plugins.gc import GCConfig

# Create GC plugin
gc = create_gc()
gc.initialize({"preserve_recent_turns": 5})

# Configure: trigger at 80% capacity
# (or set JAATO_GC_THRESHOLD env var to override default)
client.set_gc_plugin(gc, GCConfig(
    threshold_percent=80.0,  # default, can omit
    check_before_send=True
))

# Now context is managed automatically.
# Or trigger manually:
result = client.manual_gc()
print(f"Freed {result.tokens_freed} tokens")
```
```python
# Get notified when threshold is crossed
def on_gc_threshold(percent: float, threshold: float):
    print(f"⚠️ Context at {percent:.1f}%!")
    print("GC will run after this turn")

# Pass callback to send_message
response = client.send_message(
    "Long conversation...",
    on_output=handler,
    on_gc_threshold=on_gc_threshold
)

# GC runs automatically after the turn completes
# if the threshold was exceeded during streaming
```
## Session Persistence
Session plugins allow saving and resuming conversations across application restarts.
### Use Cases
- Resume interrupted work
- Share conversation context
- Debugging and replay
- Long-running tasks
### Session Data
Sessions store:
- Full conversation history
- Tool configuration
- User inputs (for replay)
- Metadata (timestamps, model info)
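As a rough picture of what persisting this data involves, here is a minimal save/resume round-trip to JSON. The storage layout, file naming, and field names are illustrative, not the session plugin's actual format:

```python
# Minimal session save/load round-trip; format is illustrative only.
import json
import time
import uuid
from pathlib import Path

def save_session(storage_path, history, metadata=None):
    session_id = f"session_{uuid.uuid4().hex[:8]}"
    path = Path(storage_path)
    path.mkdir(parents=True, exist_ok=True)
    data = {
        "id": session_id,
        "created_at": time.time(),     # metadata: timestamp
        "history": history,            # full conversation history
        "metadata": metadata or {},
    }
    (path / f"{session_id}.json").write_text(json.dumps(data))
    return session_id

def resume_session(storage_path, session_id):
    path = Path(storage_path) / f"{session_id}.json"
    return json.loads(path.read_text())["history"]

import tempfile
with tempfile.TemporaryDirectory() as d:
    sid = save_session(d, [{"role": "user", "text": "Remember: X=42"}])
    restored = resume_session(d, sid)
```

Because the history round-trips intact, a resumed client can answer "What is X?" with full context from the earlier conversation.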
```python
from shared.plugins.session import (
    create_plugin as create_session
)

# Setup session plugin
session = create_session()
session.initialize({
    "storage_path": ".jaato/sessions"
})
client.set_session_plugin(session)

# Have a conversation...
client.send_message("Hello!", on_output=handler)
client.send_message("Remember: X=42", on_output=handler)

# Save session
session_id = client.save_session()
print(f"Saved: {session_id}")

# Later: resume
client.resume_session(session_id)
client.send_message("What is X?", on_output=handler)
# Model remembers: "X is 42"

# List all sessions
for info in client.list_sessions():
    print(f"{info.id}")
    print(f"  Created: {info.created_at}")
    print(f"  Messages: {info.message_count}")

# Delete old session
client.delete_session("session_abc123")

# Auto-resume last session
from shared.plugins.session import SessionConfig

client.set_session_plugin(
    session,
    SessionConfig(auto_resume_last=True)
)
```
## Next Steps
Now that you understand the client, explore these related concepts:
- Plugins — How the plugin system works
- Tools — Tool execution in depth
- Providers — Multi-provider abstraction
- JaatoClient API — Full method reference
- Connection Recovery — Handle disconnects and resume sessions
```python
from jaato import JaatoClient, PluginRegistry
from shared.plugins.gc_truncate import create_plugin as gc
from shared.plugins.session import create_plugin as session

# Full-featured client setup
client = JaatoClient()
client.connect()  # Reads JAATO_PROVIDER and MODEL_NAME from env

# Tools
registry = PluginRegistry(model_name=client.model_name)
registry.discover()
registry.expose_tool("cli")
registry.expose_tool("file_edit")
client.configure_tools(registry)

# Context management
gc_plugin = gc()
gc_plugin.initialize({"preserve_recent_turns": 10})
client.set_gc_plugin(gc_plugin)

# Session persistence
session_plugin = session()
session_plugin.initialize({"storage_path": ".sessions"})
client.set_session_plugin(session_plugin)

# Ready for conversation
def handler(s, t, m):
    print(t, end="")

response = client.send_message(
    "Help me refactor this code",
    on_output=handler
)
```