Building Plugins

Learn how to create custom tool plugins that extend jaato's capabilities. Plugins can expose tools to the model, provide user commands, or both.

Overview

A plugin is a Python module that follows a simple protocol. At minimum, it provides:

  • Tool schemas — What tools are available
  • Executors — Functions that run the tools

Optionally, plugins can also provide user commands, auto-approved tools, and prompt enrichment.

Import Pattern

Throughout this guide, examples use from jaato import ToolSchema. This is the recommended import pattern for all plugins (both in-tree and external).

When jaato is installed as a package, from jaato import ... imports from the public API. The internal shared/ directory is an implementation detail.

Plugin structure
# shared/plugins/my_plugin/
#   __init__.py
#   plugin.py

# __init__.py
from .plugin import create_plugin

# plugin.py
class MyPlugin:
    def initialize(self, config):
        """Called once with configuration."""
        pass

    def get_tool_schemas(self):
        """Return list of ToolSchema."""
        return [...]

    def get_executors(self):
        """Return dict of name -> callable."""
        return {...}

Step 1: Create the Plugin Class

Start by creating a new directory under shared/plugins/ with your plugin name. Then create the main plugin class.

Required Members

All plugins must implement these members to satisfy the ToolPlugin protocol. The registry uses isinstance(plugin, ToolPlugin) at runtime and will reject plugins missing any of these.

  • name (property) -> str: Unique identifier for this plugin
  • initialize(config) -> None: Set up with the configuration dict
  • shutdown() -> None: Clean up when the plugin is disabled
  • get_tool_schemas() -> List[ToolSchema]: Declare available tools
  • get_executors() -> Dict[str, Callable]: Map tool names to functions
  • get_system_instructions() -> Optional[str]: Instructions for the model (return None if not needed)
  • get_auto_approved_tools() -> List[str]: Tool names that skip permission checks (return [] if none)
  • get_user_commands() -> List[UserCommand]: Commands users can invoke directly (return [] if none)
Protocol Compliance

The registry validates plugins using Python's @runtime_checkable protocol. If any required member is missing, you'll see: "plugin does not implement ToolPlugin protocol"
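This check can be sketched with Python's typing machinery. The ToolPlugin stand-in below is illustrative, not jaato's real definition; the point is that isinstance() against a @runtime_checkable protocol only verifies that the members exist, not their signatures or return types.

```python
from typing import Callable, Dict, List, Protocol, runtime_checkable

@runtime_checkable
class ToolPlugin(Protocol):
    """Stand-in for jaato's ToolPlugin protocol (illustrative only)."""
    @property
    def name(self) -> str: ...
    def initialize(self, config: dict) -> None: ...
    def shutdown(self) -> None: ...
    def get_tool_schemas(self) -> List: ...
    def get_executors(self) -> Dict[str, Callable]: ...

class Incomplete:
    """Has a name but none of the other required members."""
    name = "broken"

class Complete:
    name = "ok"
    def initialize(self, config): pass
    def shutdown(self): pass
    def get_tool_schemas(self): return []
    def get_executors(self): return {}

# isinstance() against a @runtime_checkable Protocol checks member
# presence only; it does not validate signatures.
print(isinstance(Incomplete(), ToolPlugin))  # False
print(isinstance(Complete(), ToolPlugin))    # True
```

A plugin rejected with "does not implement ToolPlugin protocol" is therefore missing at least one required member, even if the ones it does have are correct.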

Optional Methods

These methods extend plugin capabilities but are not required for protocol compliance.

  • subscribes_to_prompt_enrichment() -> bool: Return True to enable prompt enrichment
  • enrich_prompt(prompt) -> PromptEnrichmentResult: Discover and inject context the user didn't provide (see the multimodal plugin example)
  • supports_interactivity() -> bool: Declare whether the plugin requires user interaction (see Step 4.5)
  • get_supported_channels() -> List[str]: List compatible channels: console, queue, webhook, file (see Step 4.5)
  • set_channel() -> None: Configure the interaction channel for the plugin (see Step 4.5)
Prompt Enrichment

Prompt enrichment allows plugins to discover and provide relevant information that the user didn't explicitly include. For example, the multimodal plugin detects @image.png references in prompts and adds context about the viewImage tool availability.

How it works: The framework checks if a plugin implements subscribes_to_prompt_enrichment() and returns True. If so, the framework will call enrich_prompt(prompt) before sending each user message to the model. Both methods must be implemented for enrichment to work.
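The framework-side pass can be sketched as follows. This is an assumed reconstruction, not jaato's actual internals, and PromptEnrichmentResult here is a stand-in dataclass:

```python
from dataclasses import dataclass, field

@dataclass
class PromptEnrichmentResult:
    """Stand-in for jaato's result type (illustrative only)."""
    prompt: str
    metadata: dict = field(default_factory=dict)

class ImageRefPlugin:
    """Toy plugin: detects @image references, like the multimodal plugin."""

    def subscribes_to_prompt_enrichment(self) -> bool:
        return True

    def enrich_prompt(self, prompt: str) -> PromptEnrichmentResult:
        if "@image" in prompt:
            return PromptEnrichmentResult(
                prompt + "\n[context: use the viewImage tool for @image refs]",
                {"images_detected": True},
            )
        return PromptEnrichmentResult(prompt)

def apply_enrichment(plugins, prompt: str) -> str:
    # Both methods must be present, and the subscribe check must return True.
    for plugin in plugins:
        subscribe = getattr(plugin, "subscribes_to_prompt_enrichment", None)
        if subscribe and subscribe():
            prompt = plugin.enrich_prompt(prompt).prompt
    return prompt

print(apply_enrichment([ImageRefPlugin()], "Describe @image.png please"))
```

A prompt without any @image reference passes through unchanged, so enrichment is a no-op for plugins that find nothing to add.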

Complete plugin class (all required members)
from typing import Dict, List, Optional, Any, Callable
from jaato import ToolSchema

class WeatherPlugin:
    """Plugin that provides weather information."""

    @property
    def name(self) -> str:
        """Unique identifier for this plugin."""
        return "weather"

    def __init__(self):
        self.api_key = None
        self.default_units = "celsius"

    def initialize(self, config: dict):
        """Called by registry with configuration."""
        self.api_key = config.get("api_key")
        self.default_units = config.get("units", "celsius")

    def shutdown(self):
        """Cleanup when plugin is disabled."""
        pass  # Nothing to clean up

    def get_tool_schemas(self) -> List[ToolSchema]:
        """Declare the tools this plugin provides."""
        return [
            ToolSchema(
                name="get_weather",
                description="Get current weather for a city",
                parameters={
                    "type": "object",
                    "properties": {
                        "city": {
                            "type": "string",
                            "description": "City name"
                        }
                    },
                    "required": ["city"]
                }
            )
        ]

    def get_executors(self) -> Dict[str, Callable]:
        """Map tool names to executor functions."""
        return {"get_weather": self._get_weather}

    def get_system_instructions(self) -> Optional[str]:
        """Instructions for the model."""
        return None  # No special instructions needed

    def get_auto_approved_tools(self) -> List[str]:
        """Tools that skip permission checks."""
        return ["get_weather"]  # Read-only, safe

    def get_user_commands(self) -> List:
        """User-invokable commands."""
        return []  # No user commands

    def _get_weather(self, city: str):
        """Execute the get_weather tool."""
        return f"Weather in {city}: 22°C, Sunny"
Optional: Prompt enrichment
# Add these as methods on your plugin class.
# See multimodal plugin for a real example.
def subscribes_to_prompt_enrichment(self) -> bool:
    """Return True to enable prompt enrichment."""
    return True

def enrich_prompt(self, prompt: str):
    """Discover context the user didn't provide.

    Example: multimodal plugin detects @image.png
    references and adds viewImage tool context.
    """
    from jaato_sdk.plugins.base import PromptEnrichmentResult
    # Detect patterns, add context...
    return PromptEnrichmentResult(
        prompt=prompt,  # Modified or original
        metadata={}     # Any discovered info
    )
Deprecated: get_prompt_enrichment
# OLD (deprecated) - static text added to prompt
def get_prompt_enrichment(self):
    return """
You have access to weather tools. Use get_weather
to check current conditions for any city worldwide.
Temperature units can be celsius or fahrenheit.
Always specify the city name clearly.
"""

Step 2: Define Tool Schemas

Tool schemas tell the model what tools are available and how to use them. Use clear descriptions—the model relies on these to decide when to use your tool.

ToolSchema Fields

  • name (str): Unique tool identifier
  • description (str): What the tool does (the model reads this)
  • parameters (dict): JSON Schema describing the tool's parameters
Write Good Descriptions
The model uses descriptions to decide when to call your tool. Be specific: instead of "searches files", say "searches file contents using regex patterns, returns matching lines with context".

Parameter Type Mapping

JSON Schema types in parameters map to Python types in your executor function signatures. Use this table as a reference:

  • "string" -> str (e.g., "hello")
  • "number" -> float (e.g., 3.14)
  • "integer" -> int (e.g., 42)
  • "boolean" -> bool (e.g., True)
  • "array" -> list (e.g., [1, 2, 3])
  • "object" -> dict (e.g., {"key": "value"})
Schema examples
from jaato import ToolSchema

# Simple tool with required parameter
search_schema = ToolSchema(
    name="search_files",
    description="""
    Search file contents using regex patterns.
    Returns matching lines with file path and
    line numbers. Use for finding code, config
    values, or text patterns.
    """,
    parameters={
        "type": "object",
        "properties": {
            "pattern": {
                "type": "string",
                "description": "Regex pattern to search"
            },
            "path": {
                "type": "string",
                "description": "Directory to search in"
            },
            "file_types": {
                "type": "array",
                "items": {"type": "string"},
                "description": "File extensions: ['.py', '.js']"
            }
        },
        "required": ["pattern"]
    }
)

# Tool with enum parameter
format_schema = ToolSchema(
    name="format_code",
    description="Format source code file",
    parameters={
        "type": "object",
        "properties": {
            "file_path": {
                "type": "string",
                "description": "Path to file"
            },
            "style": {
                "type": "string",
                "enum": ["black", "autopep8", "yapf"],
                "description": "Formatter to use"
            }
        },
        "required": ["file_path", "style"]
    }
)

# Tool with no parameters
list_schema = ToolSchema(
    name="list_todos",
    description="List all pending todo items",
    parameters={
        "type": "object",
        "properties": {}
    }
)

Step 3: Implement Executors

Executors are the functions that actually run when the model calls your tool. They receive the parameters from the model and return a result string.

Executor Requirements

  • Function signature must match schema parameters
  • Return a string (the result shown to the model)
  • Handle errors gracefully—return error messages, don't raise
  • Keep execution time reasonable

Return Values

The return value is sent back to the model as the tool result. Format it clearly—the model needs to understand and use this output.

Executor implementation
import subprocess
import json

class MyPlugin:
    def get_executors(self):
        return {
            "search_files": self._search_files,
            "run_tests": self._run_tests,
        }

    def _search_files(
        self,
        pattern: str,
        path: str = ".",
        file_types: list = None
    ) -> str:
        """
        Search for pattern in files.

        Returns formatted results or error message.
        """
        try:
            cmd = ["grep", "-rn", pattern, path]

            if file_types:
                for ft in file_types:
                    cmd.extend(["--include", f"*{ft}"])

            result = subprocess.run(
                cmd,
                capture_output=True,
                text=True,
                timeout=30
            )

            if result.returncode == 0:
                return result.stdout or "No matches found"
            elif result.returncode == 1:
                return "No matches found"
            else:
                return f"Error: {result.stderr}"

        except subprocess.TimeoutExpired:
            return "Error: Search timed out"
        except Exception as e:
            return f"Error: {str(e)}"

    def _run_tests(
        self,
        test_path: str = None,
        verbose: bool = False
    ) -> str:
        """Run pytest and return results."""
        try:
            cmd = ["pytest"]
            if test_path:
                cmd.append(test_path)
            if verbose:
                cmd.append("-v")

            result = subprocess.run(
                cmd,
                capture_output=True,
                text=True,
                timeout=300
            )

            # Return both stdout and stderr
            output = result.stdout
            if result.stderr:
                output += f"\n\nStderr:\n{result.stderr}"

            return output

        except Exception as e:
            return f"Error running tests: {e}"

Step 4: Add User Commands (Optional)

User commands let users invoke functionality directly without going through the model. Useful for quick actions or when you want deterministic behavior.

UserCommand Fields

  • name: Command name (e.g., "search")
  • description: Help text for the command
  • share_with_model: Whether command output is added to the conversation history
share_with_model

share_with_model=True: Output goes to history, model sees it

share_with_model=False: Output only shown to user

This does NOT expose the command as a model tool.

Adding user commands
from jaato import UserCommand

class MyPlugin:
    def get_user_commands(self):
        """Define commands users can invoke directly."""
        return [
            UserCommand(
                "search",
                "Search files: /search ",
                share_with_model=True
            ),
            UserCommand(
                "clear_cache",
                "Clear plugin cache",
                share_with_model=False
            ),
        ]

    def execute_user_command(
        self,
        command_name: str,
        args: dict
    ) -> tuple:
        """
        Execute a user command.

        Returns:
            (result, share_with_model)
        """
        if command_name == "search":
            pattern = args.get("query", "")
            result = self._search_files(pattern)
            return (result, True)

        elif command_name == "clear_cache":
            self._cache = {}
            return ("Cache cleared", False)

        return (f"Unknown command: {command_name}", False)
User invokes command
# User types: /search TODO
#
# With share_with_model=True:
#   1. Plugin runs search
#   2. Results shown to user
#   3. Results added to conversation history
#   4. Model can reference results in next turn

# With share_with_model=False:
#   1. Plugin runs command
#   2. Results shown to user only
#   3. Model never sees the output

Step 4.5: Interactive Plugins (Optional)

Plugins that require user interaction (permissions, questions, progress reporting) should implement the interactivity protocol. This allows clients to verify compatibility and configure the appropriate interaction channel.

Channel Types

  • console — Standard terminal stdin/stdout
  • queue — Callback-based I/O for TUI/rich clients
  • webhook — HTTP-based remote interaction
  • file — Filesystem-based communication

When to Use

Implement this protocol if your plugin:

  • Prompts users for approval (like permission checks)
  • Asks questions requiring user input
  • Presents selection dialogs
  • Reports progress with real-time updates

Clients can use supports_interactivity() to check if a plugin requires user interaction, then verify the plugin supports their channel type via get_supported_channels() before loading.

Interactive plugin example
class ProgressPlugin:
    """Plugin with interactive progress reporting."""

    def __init__(self):
        self._reporter = None

    # ... tool schemas and executors ...

    def supports_interactivity(self) -> bool:
        """Declare interactive features."""
        return True

    def get_supported_channels(self) -> List[str]:
        """List compatible channels."""
        return ["console", "queue", "webhook", "file"]

    def set_channel(
        self,
        channel_type: str,
        channel_config: Optional[Dict[str, Any]] = None
    ) -> None:
        """Configure the interaction channel."""
        if channel_type not in self.get_supported_channels():
            raise ValueError(f"Unsupported channel: {channel_type}")

        # Create the appropriate reporter for the channel
        if channel_type == "console":
            self._reporter = ConsoleReporter(channel_config or {})
        elif channel_type == "queue":
            # For queue-based clients, channel_config includes:
            #   - output_callback: (source, text, mode) -> None
            self._reporter = QueueReporter(channel_config or {})
        elif channel_type == "webhook":
            self._reporter = WebhookReporter(channel_config)
        # ... handle other channel types ...
Client compatibility check
# Rich TUI client checking compatibility (a method on the client class)
def load_plugin(self, plugin: ToolPlugin) -> bool:
    if plugin.supports_interactivity():
        supported = plugin.get_supported_channels()

        if "queue" not in supported:
            print(f"Warning: {plugin.name} doesn't support queue channel")
            print(f"Supported: {supported}")
            return False

        # Configure for TUI with callbacks
        plugin.set_channel("queue", {
            "output_callback": self.output_callback,
            "input_queue": self.input_queue,
            "prompt_callback": self.prompt_callback
        })

    return True

Step 5: Register the Plugin

Finally, create the factory function and register your plugin so it can be discovered by the registry.

Factory Function

The create_plugin() function is called by the registry during discovery. It should return a new instance of your plugin.

Plugin Metadata

Add a PLUGIN_INFO dict to help with discovery and provide metadata about your plugin.

__init__.py
# shared/plugins/weather/__init__.py

from .plugin import WeatherPlugin

PLUGIN_INFO = {
    "name": "weather",
    "description": "Weather information tools",
    "version": "1.0.0",
    "author": "Your Name",
}

def create_plugin():
    """Factory function called by registry."""
    return WeatherPlugin()
Using your plugin
from jaato import JaatoClient, PluginRegistry

# Create client and registry
client = JaatoClient()
client.connect()  # Reads JAATO_PROVIDER and MODEL_NAME from env

registry = PluginRegistry(model_name=client.model_name)
registry.discover()

# Expose your plugin
registry.expose_tool("weather")

# Configure client with tools
client.configure_tools(registry)

# Now the model can use get_weather
response = client.send_message(
    "What's the weather in Tokyo?",
    on_output=lambda s, t, m: print(t, end="")
)

Step 6: Package with pyproject.toml

For distributable plugins, create a proper Python package with pyproject.toml. This enables installation via pip and proper dependency management.

Project Structure

my-jaato-plugin/
├── pyproject.toml
├── README.md
├── src/
│   └── my_plugin/
│       ├── __init__.py
│       └── plugin.py
└── tests/
    └── test_plugin.py

Key Fields

  • name: Package name for pip install
  • dependencies: Required packages
  • entry-points: Plugin discovery hook
Entry Points
The [project.entry-points] section allows jaato to discover your plugin automatically when installed.
Plugin Dependencies
Out-of-tree plugins need to declare jaato as a dependency to access the type system (ToolSchema, Message, etc.) and plugin base classes. This lets your plugin import from jaato and work correctly when installed.
pyproject.toml
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[project]
name = "jaato-plugin-weather"
version = "1.0.0"
description = "Weather tools for jaato"
readme = "README.md"
requires-python = ">=3.10"
license = "MIT"
authors = [
    { name = "Your Name", email = "you@example.com" }
]

dependencies = [
    # Required: jaato provides types and base classes
    "jaato>=1.0.0",
    # Optional: external dependencies for your plugin
    "requests>=2.28.0",
]

[project.optional-dependencies]
dev = [
    "pytest>=7.0",
    "pytest-asyncio",
]

# Entry point for plugin discovery
[project.entry-points."jaato.plugins"]
weather = "my_plugin:create_plugin"

[tool.hatch.build.targets.wheel]
packages = ["src/my_plugin"]
Install and use
# Install in development mode
pip install -e .

# Or install from PyPI (after publishing)
pip install jaato-plugin-weather

# Plugin is now discoverable
from jaato import PluginRegistry

registry = PluginRegistry()
registry.discover()  # Finds your plugin!
registry.expose_tool("weather")
src/my_plugin/__init__.py
from .plugin import WeatherPlugin

PLUGIN_INFO = {
    "name": "weather",
    "description": "Weather information tools",
    "version": "1.0.0",
}

def create_plugin():
    """Entry point for jaato plugin discovery."""
    return WeatherPlugin()

Step 7: Testing Your Plugin

Before integrating your plugin into the main system, test it standalone to verify correctness. This ensures your plugin works independently of the jaato client.

Testing Approach

  1. Import your plugin using the factory function
  2. Initialize with test configuration
  3. Verify tool schemas are correctly defined
  4. Test each executor with valid inputs
  5. Test error handling with invalid inputs
  6. Verify output format and content

What to Test

  • Schema Validation — Correct names, descriptions, parameters
  • Executor Mapping — All schemas have matching executors
  • Happy Path — Valid inputs produce expected outputs
  • Error Cases — Invalid inputs return error messages (not exceptions)
  • Configuration — initialize() properly sets config values
  • Edge Cases — Boundary conditions, empty inputs, special characters
Environment Setup
If your test needs dependencies, create a virtual environment first:
python3 -m venv .venv
source .venv/bin/activate
pip install -e jaato-sdk/. -e "jaato-server/.[all]" -e "jaato-tui/.[all]"
test_my_plugin.py
#!/usr/bin/env python3
"""Test script for my plugin."""

import json


def test_plugin():
    """Test plugin implementation."""
    print("Testing My Plugin")
    print("=" * 60)

    # Test 1: Import and create
    print("\n[1] Importing plugin...")
    from shared.plugins.my_plugin import create_plugin, PLUGIN_INFO
    plugin = create_plugin()
    print(f"✓ Created: {PLUGIN_INFO['name']}")

    # Test 2: Initialize
    print("\n[2] Initializing...")
    plugin.initialize({"precision": 3})
    print("✓ Initialized with config")

    # Test 3: Get schemas
    print("\n[3] Getting tool schemas...")
    schemas = plugin.get_tool_schemas()
    print(f"✓ Found {len(schemas)} tools")
    for schema in schemas:
        print(f"  - {schema.name}")

    # Test 4: Get executors
    print("\n[4] Getting executors...")
    executors = plugin.get_executors()
    assert len(executors) == len(schemas), \
        "Executor count must match schema count"
    print(f"✓ All {len(executors)} executors present")

    # Test 5: Test valid input
    print("\n[5] Testing valid input...")
    result = plugin._my_tool("valid_input")
    print(f"✓ Result: {result[:50]}...")

    # Verify it's valid JSON (if using JSON format)
    try:
        data = json.loads(result)
        assert "result" in data
        print("✓ Valid JSON structure")
    except json.JSONDecodeError:
        print("  (Plain text result)")

    # Test 6: Test error handling
    print("\n[6] Testing error handling...")
    error_result = plugin._my_tool("")  # Invalid
    assert "Error" in error_result or "error" in error_result.lower()
    print(f"✓ Error handled: {error_result[:50]}...")

    # Test 7: Test edge cases
    print("\n[7] Testing edge cases...")
    edge_result = plugin._my_tool("   ")  # Whitespace
    assert isinstance(edge_result, str)
    print("✓ Edge case handled")

    print("\n" + "=" * 60)
    print("All tests passed! ✓")
    return True


if __name__ == "__main__":
    import sys
    success = test_plugin()
    sys.exit(0 if success else 1)
Run tests
# With virtual environment
.venv/bin/python test_my_plugin.py

# Or use pytest for more features
.venv/bin/pytest test_my_plugin.py -v
Example output
Testing My Plugin
============================================================

[1] Importing plugin...
✓ Created: my_plugin

[2] Initializing...
✓ Initialized with config

[3] Getting tool schemas...
✓ Found 2 tools
  - my_tool
  - another_tool

[4] Getting executors...
✓ All 2 executors present

[5] Testing valid input...
✓ Result: {"result": "success", "data": "output"}...
✓ Valid JSON structure

[6] Testing error handling...
✓ Error handled: Error: parameter required...

[7] Testing edge cases...
✓ Edge case handled

============================================================
All tests passed! ✓

Best Practices

Error Handling

Never let exceptions propagate from executors. Always catch and return meaningful error messages.

Timeouts

Long-running operations should have timeouts. The model is waiting for your result.
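Subprocess calls can pass timeout= directly, as in the executor examples above. For in-process work, one illustrative (not jaato-specific) approach is to bound the call with a worker thread:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

def run_with_timeout(fn, *args, timeout: float = 30.0) -> str:
    """Run fn(*args), returning an error string if it exceeds the timeout."""
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        return pool.submit(fn, *args).result(timeout=timeout)
    except FutureTimeout:
        return "Error: operation timed out"
    finally:
        # Don't block waiting for a stuck worker to finish.
        pool.shutdown(wait=False)

def slow_task(n: float) -> str:
    time.sleep(n)
    return "done"

print(run_with_timeout(slow_task, 0.01, timeout=1.0))
print(run_with_timeout(slow_task, 0.3, timeout=0.05))
```

One caveat of the thread-based approach: the underlying work keeps running after the timeout, so it suits read-only operations better than ones with side effects.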

Clear Output

Format output so the model can parse and use it. JSON works well for structured data.

Minimal Dependencies

Keep external dependencies minimal. If you need them, document them clearly.

Error handling pattern
import json
import logging

def _my_executor(self, param: str) -> str:
    """Always return a string, never raise."""
    try:
        # Validate input
        if not param:
            return "Error: parameter required"

        # Do the work with timeout
        result = self._do_work(param, timeout=30)

        # Format output clearly
        return json.dumps({
            "status": "success",
            "data": result
        }, indent=2)

    except TimeoutError:
        return "Error: operation timed out"
    except ValueError as e:
        return f"Error: invalid input - {e}"
    except Exception as e:
        # Log for debugging, return generic message
        logging.error(f"Executor failed: {e}")
        return f"Error: {str(e)}"
Structured output
# Good: structured, parseable
def _search(self, query: str) -> str:
    results = self._do_search(query)
    return json.dumps({
        "query": query,
        "count": len(results),
        "results": [
            {"file": r.file, "line": r.line, "text": r.text}
            for r in results[:10]
        ]
    }, indent=2)

# Also good: clear text format
def _search(self, query: str) -> str:
    results = self._do_search(query)
    lines = [f"Found {len(results)} matches for '{query}':"]
    for r in results[:10]:
        lines.append(f"  {r.file}:{r.line}: {r.text}")
    return "\n".join(lines)

Next Steps

Now that you know how to build plugins, work through the complete end-to-end example below, then explore the rest of the documentation.

Complete plugin example
# shared/plugins/calculator/__init__.py
from .plugin import CalculatorPlugin

PLUGIN_INFO = {
    "name": "calculator",
    "description": "Math operations",
}

def create_plugin():
    return CalculatorPlugin()

# shared/plugins/calculator/plugin.py
from jaato import ToolSchema

class CalculatorPlugin:
    @property
    def name(self):
        return "calculator"

    def initialize(self, config):
        self.precision = config.get("precision", 2)

    def shutdown(self):
        pass

    def get_tool_schemas(self):
        return [
            ToolSchema(
                name="calculate",
                description="Evaluate math expression",
                parameters={
                    "type": "object",
                    "properties": {
                        "expression": {
                            "type": "string",
                            "description": "Math expression"
                        }
                    },
                    "required": ["expression"]
                }
            )
        ]

    def get_executors(self):
        return {"calculate": self._calculate}

    def get_system_instructions(self):
        return None

    def get_auto_approved_tools(self):
        return ["calculate"]

    def get_user_commands(self):
        return []

    def _calculate(self, expression: str) -> str:
        try:
            # Restricted eval (no builtins). Note: eval is never fully safe
            # for untrusted input; use a real expression parser in production.
            result = eval(expression, {"__builtins__": {}})
            return f"{expression} = {round(result, self.precision)}"
        except Exception as e:
            return f"Error: {e}"