Providers
Providers are the abstraction layer between jaato and AI services. They translate between jaato's unified types and provider-specific SDKs.
Why Provider Abstraction?
Different AI services have different APIs, SDKs, and data formats. The provider abstraction gives you:
- Portability — Switch providers without code changes
- Consistency — Same types everywhere
- Isolation — SDK details hidden
- Extensibility — Add new providers easily
Your application code uses Message, ToolSchema,
and ProviderResponse — never provider-specific types.
from jaato import JaatoClient, Message, Role
# Your code doesn't know which provider
client = JaatoClient()
client.connect() # Reads JAATO_PROVIDER and MODEL_NAME from env
# Same API regardless of provider
response = client.send_message(
    "Hello!",
    on_output=handler
)
# History is provider-agnostic
history = client.get_history()
for msg in history:
    print(f"{msg.role}: {msg.text}")
# Switch provider = change env var or one line
client = JaatoClient(provider_name="anthropic")
# Rest of code unchanged!
Available Providers
| Provider | Name | Models |
|---|---|---|
| Anthropic | anthropic | Claude 3, 3.5, Sonnet 4, Opus 4 |
| Google GenAI | google_genai | Gemini 1.5, 2.0, 2.5 |
| GitHub Models | github_models | GPT-4o, Claude, Gemini, Llama |
| Claude CLI | claude_cli | Claude (via subscription) |
| Antigravity | antigravity | Gemini 3, Claude (via Google) |
| Ollama | ollama | Qwen, Llama, Mistral (local) |
| Zhipu AI | zhipuai | GLM-5, GLM-4.7, GLM-4.6, GLM-4.5 |
All providers support function calling and streaming. Set the provider
via JAATO_PROVIDER environment variable or pass
provider_name to JaatoClient().
# Recommended: configure via .env file
# JAATO_PROVIDER=anthropic
# MODEL_NAME=claude-sonnet-4-20250514
client = JaatoClient()
client.connect() # Reads provider and model from env
# Override provider in code (model still from env)
client = JaatoClient(provider_name="anthropic")
client.connect()
# Override both provider and model in code
client = JaatoClient(provider_name="google_genai")
client.connect(model="gemini-2.5-flash")
client = JaatoClient(provider_name="ollama")
client.connect(model="qwen3:32b")
from shared import discover_providers, load_provider
# Find available providers
providers = discover_providers()
print(f"Available: {list(providers.keys())}")
# ['anthropic', 'google_genai', 'github_models',
# 'claude_cli', 'antigravity', 'ollama', 'zhipuai']
# Load a specific provider
provider = load_provider("anthropic")
print(f"Loaded: {provider.name}")
Provider Protocol
All providers implement the ModelProviderPlugin
protocol. This defines the interface that jaato uses.
Core Methods
- initialize(config) — Setup with credentials
- connect(model) — Set the model to use
- create_session(...) — Create/reset conversation session
- send_message(msg) — Send text, get response
- send_tool_results(...) — Send tool execution results
- get_history() — Get conversation history
Streaming & Cancellation Methods
- supports_streaming() — Check if provider supports streaming
- supports_stop() — Check if provider supports mid-turn cancellation
- send_message_streaming(...) — Stream response with cancellation support
- send_tool_results_streaming(...) — Stream tool result response
from typing import Protocol, List, Optional
from jaato import (
    Message, Part, ToolSchema, ToolResult,
    ProviderResponse, CancelToken, StreamingCallback,
)
from shared import ProviderConfig

class ModelProviderPlugin(Protocol):
    name: str

    def initialize(self, config: ProviderConfig) -> None:
        """Setup with credentials."""
        ...

    def connect(self, model: str) -> None:
        """Set model to use."""
        ...

    @property
    def is_connected(self) -> bool:
        """Check if connected."""
        ...

    def create_session(
        self,
        system_instruction: Optional[str],
        tools: List[ToolSchema],
        history: Optional[List[Message]]
    ) -> None:
        """Create or reset session."""
        ...

    def send_message(self, msg: str) -> ProviderResponse:
        """Send message, get response."""
        ...

    def send_tool_results(
        self,
        results: List[ToolResult]
    ) -> ProviderResponse:
        """Send tool results."""
        ...

    def get_history(self) -> List[Message]:
        """Get conversation history."""
        ...

    # Streaming & cancellation methods
    def supports_streaming(self) -> bool:
        """Check if streaming is supported."""
        ...

    def supports_stop(self) -> bool:
        """Check if mid-turn cancellation is supported."""
        ...

    def send_message_streaming(
        self,
        msg: str,
        on_chunk: StreamingCallback,
        cancel_token: Optional[CancelToken] = None
    ) -> ProviderResponse:
        """Stream response tokens."""
        ...

    def send_tool_results_streaming(
        self,
        results: List[ToolResult],
        on_chunk: StreamingCallback,
        cancel_token: Optional[CancelToken] = None
    ) -> ProviderResponse:
        """Stream tool result response."""
        ...
Streaming & Cancellation
Providers support streaming responses with cancellation for graceful interruption of long-running generation.
StreamingCallback
Callback type for receiving streamed tokens.
CancelToken
Thread-safe cancellation signal. Pass to streaming methods to enable mid-generation cancellation.
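To make the cancellation contract concrete, here is a minimal, self-contained sketch. The StreamingCallback alias and the CancelToken internals below are assumptions for illustration (jaato's actual definitions may differ), but the cooperative check-between-chunks pattern, including the preservation of partial results, is the behavior described here.

```python
import threading
from typing import Callable, List

# Assumed shape of the callback alias: receives each text chunk.
StreamingCallback = Callable[[str], None]

class CancelToken:
    """Minimal thread-safe cancel flag (sketch, not jaato's real class)."""
    def __init__(self) -> None:
        self._event = threading.Event()

    def cancel(self) -> None:
        self._event.set()

    @property
    def cancelled(self) -> bool:
        return self._event.is_set()

def stream_demo(chunks: List[str], on_chunk: StreamingCallback,
                token: CancelToken) -> str:
    """Cooperative streaming loop: check the token between chunks
    and keep whatever text accumulated before cancellation."""
    text = []
    for chunk in chunks:
        if token.cancelled:
            break  # stop between chunks; partial text survives
        on_chunk(chunk)
        text.append(chunk)
    return "".join(text)

token = CancelToken()
received = []

def on_chunk(t: str) -> None:
    received.append(t)
    if len(received) == 2:
        token.cancel()  # e.g. the user interrupted generation

partial = stream_demo(["Once ", "upon ", "a ", "time"], on_chunk, token)
```

After the second chunk triggers `cancel()`, the loop stops before the third chunk, so `partial` holds only the text streamed so far.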
Key Features
- Real-time output — Tokens delivered as generated
- Graceful cancellation — Stop between chunks
- Cooperative design — Provider checks token between chunks
- Partial results — Get accumulated text even if cancelled
Call supports_streaming() to check at runtime.
from jaato import CancelToken, CancelledException
# Create cancellation token
token = CancelToken()
# Callback for streamed chunks
def on_chunk(text: str):
    print(text, end="", flush=True)

# Check streaming support
if provider.supports_streaming():
    try:
        response = provider.send_message_streaming(
            "Generate a story...",
            on_chunk=on_chunk,
            cancel_token=token
        )
    except CancelledException:
        print("\nGeneration cancelled")
else:
    # Fallback to non-streaming
    response = provider.send_message("Generate...")
import threading
import time

token = CancelToken()

# Cancel after timeout
def cancel_later():
    time.sleep(30)
    token.cancel()

threading.Thread(target=cancel_later).start()

# This will stop after ~30 seconds
response = provider.send_message_streaming(
    msg, on_chunk, cancel_token=token
)
Type Conversion
Providers convert between jaato's unified types and SDK-specific types. This happens transparently inside the provider.
Type Mapping (per provider)
Each provider converts jaato types to its SDK's native format.
| jaato Type | Purpose |
|---|---|
| Message | Conversation message (maps to each SDK's message type) |
| Part | Message content piece (text, images, function calls) |
| ToolSchema | Tool declaration (maps to each SDK's tool definition) |
| FunctionCall | Model's tool invocation request |
| ToolResult | Tool execution result |
# Each provider implements type conversion.
# Example: inside a provider implementation:
from jaato import Message, Part, ToolSchema, ProviderResponse
def to_sdk_messages(msg: Message):
    """Convert jaato Message to SDK-specific format."""
    # Each provider maps jaato types to its SDK:
    # - Anthropic: anthropic.types.MessageParam
    # - Google: genai.types.Content
    # - GitHub: azure.ai.inference ChatRequestMessage
    ...

def from_sdk_response(resp) -> ProviderResponse:
    """Convert SDK response to jaato type."""
    # All providers return the same ProviderResponse
    return ProviderResponse(
        text=...,
        function_calls=...,
        usage=...
    )
# Your code never sees SDK-specific types —
# the same Message, ToolSchema, ProviderResponse
# works across all providers.
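As a concrete illustration of the mapping, the self-contained sketch below round-trips a simplified message through an Anthropic-style dict. The simplified Message stand-in and its fields are assumptions made for this example; the real jaato and SDK types are richer, but the conversion pattern is the same.

```python
from dataclasses import dataclass

@dataclass
class Message:
    """Simplified stand-in for jaato's Message (illustration only)."""
    role: str   # "user" or "model"
    text: str

def to_sdk_message(msg: Message) -> dict:
    """jaato-style Message -> Anthropic-style message dict.
    Anthropic uses the role name 'assistant' for model output."""
    role = "assistant" if msg.role == "model" else "user"
    return {"role": role, "content": [{"type": "text", "text": msg.text}]}

def from_sdk_message(raw: dict) -> Message:
    """Anthropic-style dict -> jaato-style Message (inverse mapping)."""
    role = "model" if raw["role"] == "assistant" else "user"
    text = "".join(b["text"] for b in raw["content"] if b["type"] == "text")
    return Message(role=role, text=text)

msg = Message(role="model", text="Hello!")
roundtrip = from_sdk_message(to_sdk_message(msg))
```

The round trip returns an equal Message, which is exactly the property a provider's converters must preserve so that history can be replayed across providers.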
Provider Configuration
Providers are configured via ProviderConfig, which
holds credentials and connection details. Each provider has its own
authentication requirements.
Common ProviderConfig Fields
- api_key (str) — API key for authentication
- project (str) — Cloud project ID (provider-specific)
- location (str) — Region/location (provider-specific)
- credentials_path (str) — Path to credentials file
- extra (Dict) — Provider-specific options
from shared import ProviderConfig, load_provider
# Load any provider
provider = load_provider("anthropic") # or "google_genai", etc.
# Configure with ProviderConfig
provider.initialize(ProviderConfig(
    api_key="...",          # Common
    extra={"timeout": 60}   # Provider-specific
))
# Connect to a model (or read MODEL_NAME from env)
provider.connect("claude-sonnet-4-20250514")
from jaato import JaatoClient
# JaatoClient handles provider setup
# Reads JAATO_PROVIDER and MODEL_NAME from env
client = JaatoClient()
client.connect()
# Or specify provider explicitly (model still from env)
client = JaatoClient(provider_name="anthropic")
client.connect()
Building a Provider
To add a new AI service, implement the
ModelProviderPlugin protocol.
Steps
- Create provider directory in shared/plugins/model_provider/
- Implement the protocol interface
- Add type converters for the SDK
- Register in provider discovery
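Discovery is typically a directory scan over the plugin root. The sketch below assumes a layout where each provider lives in a subdirectory containing a provider.py; it illustrates the idea behind discover_providers(), not jaato's actual discovery code.

```python
import tempfile
from pathlib import Path

def discover_providers(root: str) -> dict:
    """Map provider name -> package directory for every subdirectory
    that contains a provider.py (sketch; real discovery may differ)."""
    providers = {}
    for child in sorted(Path(root).iterdir()):
        if child.is_dir() and (child / "provider.py").exists():
            providers[child.name] = str(child)
    return providers

# Demo against a throwaway directory tree:
root = Path(tempfile.mkdtemp())
for name in ("anthropic", "ollama"):
    (root / name).mkdir()
    (root / name / "provider.py").touch()
(root / "not_a_provider").mkdir()   # no provider.py -> ignored
found = discover_providers(str(root))
```

A new provider directory that follows the layout is picked up automatically, which is what makes step 4 ("register in provider discovery") mostly a matter of putting files in the right place.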
Key Responsibilities
- Initialize SDK client with credentials
- Convert jaato types ↔ SDK types
- Manage conversation session
- Handle tool calling loop
- Track token usage
# shared/plugins/model_provider/anthropic/provider.py
from jaato import (
    ProviderResponse,
    Message, ToolSchema, ToolResult
)
from shared import ModelProviderPlugin, ProviderConfig
import anthropic

class AnthropicProvider:
    name = "anthropic"

    def __init__(self):
        self._client = None
        self._model = None
        self._history = []

    def initialize(self, config: ProviderConfig):
        self._client = anthropic.Anthropic(
            api_key=config.api_key
        )

    def connect(self, model: str):
        self._model = model

    @property
    def is_connected(self) -> bool:
        return self._client is not None and self._model is not None

    def create_session(self, system, tools, history):
        self._system = system
        self._tools = self._convert_tools(tools)
        self._history = history or []

    def send_message(self, msg: str) -> ProviderResponse:
        # Convert to Anthropic format
        # Call API
        # Convert response back
        response = self._client.messages.create(
            model=self._model,
            system=self._system,
            messages=self._to_sdk_messages(),
            tools=self._tools
        )
        return self._from_sdk_response(response)

    # ... converters ...
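The "handle tool calling loop" responsibility can be sketched in isolation: send a message, and while the response requests tools, execute them and feed the results back. The stub types and the FakeProvider below are stand-ins invented for this example; only the loop shape reflects how jaato drives a provider.

```python
from dataclasses import dataclass, field
from typing import List

# Simplified stand-ins for jaato's types (illustration only).
@dataclass
class FunctionCall:
    name: str
    args: dict

@dataclass
class ToolResult:
    name: str
    output: str

@dataclass
class ProviderResponse:
    text: str = ""
    function_calls: List[FunctionCall] = field(default_factory=list)

class FakeProvider:
    """Scripted provider: requests one tool call, then answers."""
    def send_message(self, msg: str) -> ProviderResponse:
        return ProviderResponse(
            function_calls=[FunctionCall("search", {"query": msg})]
        )

    def send_tool_results(self, results) -> ProviderResponse:
        return ProviderResponse(text=f"Answer based on {results[0].output}")

def run_turn(provider, msg: str, tools: dict) -> str:
    """Drive the loop until the model stops requesting tools."""
    response = provider.send_message(msg)
    while response.function_calls:
        results = [
            ToolResult(name=c.name, output=tools[c.name](**c.args))
            for c in response.function_calls
        ]
        response = provider.send_tool_results(results)
    return response.text

answer = run_turn(
    FakeProvider(),
    "jaato docs",
    {"search": lambda query: f"results for {query!r}"},
)
```

Because the loop only touches ProviderResponse, FunctionCall, and ToolResult, the same driver works unchanged against any provider that implements the protocol.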
Unified Type System
The types in shared/plugins/model_provider/types.py
form the common vocabulary across all providers.
Core Types
| Type | Purpose |
|---|---|
| Message | Conversation message |
| Part | Message content piece |
| ToolSchema | Tool declaration |
| FunctionCall | Model's tool request |
| ToolResult | Tool execution result |
| ProviderResponse | Model response |
| TokenUsage | Token statistics |
See the Types Reference for complete documentation.
from jaato import (
    Message, Part, Role,
    ToolSchema, FunctionCall, ToolResult,
    ProviderResponse, TokenUsage
)
# Create a message
msg = Message.from_text(Role.USER, "Hello!")
# Create a tool schema
tool = ToolSchema(
    name="search",
    description="Search the web",
    parameters={
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"]
    }
)
# These work with ANY provider
# Google, Anthropic, OpenAI — same code
# Recommended: import from jaato (public API)
from jaato import Message, ToolSchema
# Direct import (also works)
from jaato_sdk.plugins.model_provider.types import (
    Message,
    Part,
    Role,
    ToolSchema,
    FunctionCall,
    ToolResult,
    ProviderResponse,
    TokenUsage,
    FinishReason,
    Attachment
)
Next Steps
- Types Reference — Full type documentation
- Building Plugins — Create tool plugins
- Quickstart — Get started with jaato
# Provider plugins location:
# shared/plugins/model_provider/
# ├── __init__.py
# ├── base.py # Protocol definition
# ├── types.py # Unified types
# ├── anthropic/ # Anthropic Claude
# ├── google_genai/ # Google GenAI / Vertex AI
# ├── github_models/ # GitHub Models API
# ├── claude_cli/ # Claude CLI wrapper
# ├── antigravity/ # Google IDE backend
# ├── ollama/ # Ollama local models
# └── zhipuai/ # Zhipu AI GLM models