Custom Agents

Build purpose-built AI agents tailored to your specific workflows and domain.

Overview

While Iris comes with powerful built-in capabilities, the platform is designed to be extended. You can create custom agents that combine specialized prompts, curated tool sets, and model configurations to solve domain-specific problems. Whether you need a legal document reviewer, a financial analyst, or a customer support bot, the agent framework gives you full control.

Agent Structure

Every Iris agent is composed of four configurable layers:

  • System Prompt — defines the agent's persona, instructions, and behavioral constraints.
  • Tools — the set of capabilities the agent can invoke during execution.
  • Model — which LLM provider and model to use (Claude, GPT-4, etc.).
  • Context — uploaded documents, knowledge bases, and conversation history that inform the agent's responses.
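
The four layers above can be sketched as a single configuration object. This is a minimal illustration only; the field names and defaults are assumptions, not the actual Iris schema:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the four layers; field names are assumptions,
# not the real Iris configuration schema.
@dataclass
class AgentConfig:
    system_prompt: str                                # persona and behavioral constraints
    tools: list[str] = field(default_factory=list)    # tool names the agent may invoke
    model: str = "claude-sonnet-4-20250514"           # LLM provider/model identifier
    context: list[str] = field(default_factory=list)  # document / knowledge-base IDs

agent = AgentConfig(
    system_prompt="You are a contract reviewer. Flag unusual clauses.",
    tools=["web_search", "create_document"],
)
```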

Creating a Tool

Tools are the building blocks of agent capabilities. Each tool is a Python module in backend/core/tools/ that follows a three-method contract: get_name, get_schema, and execute. Here is how to create one from scratch:

Step 1: Create the File

Create a new file following the naming convention sb_<name>_tool.py in the tools directory.

# backend/core/tools/sb_weather_tool.py

class WeatherTool:
    def get_name(self) -> str:
        return "get_weather"

    def get_schema(self) -> dict:
        return {
            "name": "get_weather",
            "description": "Get current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name or coordinates"
                    },
                    "units": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature unit"
                    }
                },
                "required": ["location"]
            }
        }

    async def execute(self, params: dict) -> dict:
        location = params.get("location")
        if not location:
            # Return a structured error so the LLM can recover
            return {"error": "location is required"}
        units = params.get("units", "celsius")
        # Call your weather API here; fetch_weather is a placeholder
        # for your own HTTP client or SDK call.
        result = await fetch_weather(location, units)
        return {"temperature": result.temp, "condition": result.condition}

Step 2: Register the Tool

Import your tool in the tools registry so the agent engine discovers it at startup. Add any required API keys to your .env file.
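
The registry itself is not shown in this guide, but discovery-by-name could look roughly like the sketch below. The class and method names here are assumptions for illustration; only the get_name/get_schema/execute contract comes from the steps above:

```python
# Illustrative registry sketch -- the actual Iris registry may differ.
class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def register(self, tool):
        name = tool.get_name()
        if name in self._tools:
            raise ValueError(f"duplicate tool name: {name}")
        self._tools[name] = tool

    def get(self, name):
        return self._tools[name]

    def schemas(self):
        # These schemas are what gets handed to the LLM at runtime.
        return [t.get_schema() for t in self._tools.values()]


class EchoTool:
    """Stand-in tool used only to demonstrate registration."""
    def get_name(self):
        return "echo"

    def get_schema(self):
        return {"name": "echo", "description": "Echo input",
                "parameters": {"type": "object", "properties": {}}}


registry = ToolRegistry()
registry.register(EchoTool())
```

Registering the same name twice raises an error, which catches copy-paste mistakes when you add a new tool module.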

Tool Schema

Tool schemas use JSON Schema to describe parameters. The schema is passed directly to the LLM so the model understands what arguments to provide. Key schema fields include:

  • name — a unique, snake_case identifier for the tool.
  • description — a clear explanation of what the tool does and when to use it.
  • parameters — an object schema defining each input, its type, and whether it is required.

Tip: Write detailed descriptions for each parameter. The LLM uses these descriptions to decide which values to pass, so clarity directly improves tool-use accuracy.

Model Selection

Iris supports multiple LLM providers through LiteLLM, giving you the flexibility to choose the right model for each agent:

  • Claude (Anthropic) — excels at nuanced reasoning, long-context analysis, and following complex instructions.
  • GPT-4 (OpenAI) — strong at creative generation, code writing, and broad general knowledge.
  • Vision Models — GPT-4V and Claude 3 can analyze images, charts, and screenshots alongside text.
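
One simple way to apply this flexibility is a per-agent lookup from agent name to LiteLLM model string. The mapping below is a sketch; the agent names and model identifiers are illustrative assumptions and may not match your deployment:

```python
# Sketch: per-agent model routing via LiteLLM-style model strings.
# Agent names and model identifiers below are examples only.
AGENT_MODELS = {
    "legal_reviewer": "claude-sonnet-4-20250514",  # long-context reasoning
    "code_assistant": "gpt-4o",                    # code generation
    "chart_analyst": "gpt-4o",                     # vision-capable
}

def model_for(agent_name: str, default: str = "claude-sonnet-4-20250514") -> str:
    """Return the configured model for an agent, falling back to a default."""
    return AGENT_MODELS.get(agent_name, default)

# At call time you would pass the result to LiteLLM, e.g.:
# litellm.completion(model=model_for("code_assistant"), messages=[...])
```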

Example: Building a Custom Research Agent

Here is a practical example that combines a tailored system prompt with specific tools to create a focused research agent:

{
  "name": "Market Research Agent",
  "model": "claude-sonnet-4-20250514",
  "system_prompt": "You are a market research analyst. Always cite sources, use data tables when presenting numbers, and structure findings with clear headings.",
  "tools": [
    "web_search",
    "browse_website",
    "arxiv_search",
    "create_document"
  ],
  "max_iterations": 15
}
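
Before handing a definition like this to the engine, it is worth sanity-checking it. The loader below is a sketch; the list of required fields and the iteration cap are assumptions about what the engine expects, not documented behavior:

```python
import json

# The agent definition from above, abbreviated.
RAW = """
{
  "name": "Market Research Agent",
  "model": "claude-sonnet-4-20250514",
  "system_prompt": "You are a market research analyst.",
  "tools": ["web_search", "browse_website", "arxiv_search", "create_document"],
  "max_iterations": 15
}
"""

def load_agent_config(raw: str) -> dict:
    config = json.loads(raw)
    # Required-field set is an assumption for illustration.
    missing = {"name", "model", "system_prompt", "tools"} - config.keys()
    if missing:
        raise ValueError(f"agent config missing fields: {sorted(missing)}")
    # Default the iteration cap so a missing value cannot mean "unbounded".
    config.setdefault("max_iterations", 10)
    return config

config = load_agent_config(RAW)
```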

Best Practices

  • Keep tools focused — each tool should do one thing well. Compose multiple simple tools rather than building one monolithic tool.
  • Write clear prompts — the system prompt is the most important configuration. Be explicit about the agent's role, constraints, and output format.
  • Validate inputs — always validate parameters in your execute method before making external API calls.
  • Handle errors gracefully — return structured error messages so the LLM can recover and try alternative approaches.
  • Test in isolation — write unit tests for each tool's execute method before integrating it into an agent workflow.
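
Following the last point, a tool's execute method can be unit-tested without the agent engine by stubbing the external call. FakeWeatherTool below is a hypothetical stand-in for the weather tool with the API call replaced by canned data:

```python
import asyncio

class FakeWeatherTool:
    """Stand-in for WeatherTool with the external API call stubbed out."""
    async def execute(self, params: dict) -> dict:
        if not params.get("location"):
            # Structured error, so the LLM can recover and retry
            return {"error": "location is required"}
        units = params.get("units", "celsius")
        # Canned response in place of a real weather API call.
        return {"temperature": 21 if units == "celsius" else 70,
                "condition": "clear"}

def test_defaults_to_celsius():
    result = asyncio.run(FakeWeatherTool().execute({"location": "Oslo"}))
    assert result["temperature"] == 21

def test_rejects_missing_location():
    result = asyncio.run(FakeWeatherTool().execute({}))
    assert "error" in result

test_defaults_to_celsius()
test_rejects_missing_location()
```

Running these before wiring the tool into an agent catches parameter-handling bugs while they are still cheap to fix.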