# kitai


<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->

Kitai is a small agent framework that aims to implement the core
features of Google’s ADK in as simple and concise a way as possible.
Read it through in one sitting to get a feel for how hierarchical
agents work under the hood, or adapt and modify it into a framework
that fits your exact use case.

## Features

- **Hierarchical agents** - subagents with automatic transfer tools for
  navigation
- **Simple tools** - any function is a tool, no decorators needed
- **`tool_ctx` injection** - shared state dict automatically passed to
  tools that need it
- **Instruction templating** - system prompts auto-filled from state
  using `{var}` syntax (see the sketch after this list)
- **Narrative casting** - prior agent messages are summarised and tagged
  for clarity
- **History filtering** - subagents can ignore irrelevant history to
  preserve context
- **Before/after callbacks** - hook into the agent loop to skip, modify,
  or replace calls
- **Any LLM provider** - built on LiteLLM
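
As an example of the templating feature, here is a minimal sketch that
pre-seeds the runner’s state so a `{user_name}` placeholder in the
system prompt is filled in before the model is called. It assumes
`Runner.state` is an ordinary mutable dict that can be written to
before the first call (the quick start below shows it is a plain dict
after a run):

``` python
from kitai import Agent, Runner

a = Agent("anthropic/claude-haiku-4-5", name="greeter",
          desc="Greets the user by name.",
          # {user_name} is filled from the shared state dict
          sp="You are a friendly assistant. The user's name is {user_name}.")

r = Runner(a)
r.state["user_name"] = "Ada"  # assumes state can be pre-seeded like this
r("Hi! Do you remember my name?")
```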

## Installation

``` sh
pip install kitai
```

## Quick start

``` python
from kitai import Agent, Runner

def mult(a: int, b: int):
    "Multiplies two numbers together"
    return a * b

def save(result: str, tool_ctx):
    "Saves a result to storage"
    tool_ctx["final_result"] = result

m = "anthropic/claude-haiku-4-5"
a_saver = Agent(m, name="saver", desc="Saves results into storage.", sp="You receive results from other agents and save them with your tool.", tools=[save])
a_maths = Agent(m, name="math_agent", desc="Is good at doing maths.", sp="Always do maths using tools and always pass to saver to save results.", tools=[mult], subagents=[a_saver])
a_writing = Agent("anthropic/claude-opus-4-6", name="writer", desc="A smart and expensive model to help with writing", sp="You help users with writing")
a_root = Agent(m, name="root", desc="General agent", sp="A general coordinator that delegates to sub agents as needed", subagents=[a_maths])

r = Runner(a_root)
r("What's 259875 * 175825?")
r.state
```

🔧 transfer_to_agent({"agent_name": "math_agent"})

<details>

- id: `chatcmpl-fb0465db-7131-43b5-a04b-cfc54cca2726`
- model: `claude-haiku-4-5-20251001`
- finish_reason: `tool_calls`
- usage:
  `Usage(completion_tokens=60, prompt_tokens=614, total_tokens=674, completion_tokens_details=CompletionTokensDetailsWrapper(accepted_prediction_tokens=None, audio_tokens=None, reasoning_tokens=0, rejected_prediction_tokens=None, text_tokens=60, image_tokens=None, video_tokens=None), prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None, video_tokens=None, cache_creation_tokens=0, cache_creation_token_details=CacheCreationTokenDetails(ephemeral_5m_input_tokens=0, ephemeral_1h_input_tokens=0)), cache_creation_input_tokens=0, cache_read_input_tokens=0, inference_geo='not_available', speed=None)`

</details>

🔧 mult({"a": 259875, "b": 175825})

<details>

- id: `chatcmpl-6fd553b0-d664-4700-a3a0-993d35b42f62`
- model: `claude-haiku-4-5-20251001`
- finish_reason: `tool_calls`
- usage:
  `Usage(completion_tokens=90, prompt_tokens=726, total_tokens=816, completion_tokens_details=CompletionTokensDetailsWrapper(accepted_prediction_tokens=None, audio_tokens=None, reasoning_tokens=0, rejected_prediction_tokens=None, text_tokens=90, image_tokens=None, video_tokens=None), prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None, video_tokens=None, cache_creation_tokens=0, cache_creation_token_details=CacheCreationTokenDetails(ephemeral_5m_input_tokens=0, ephemeral_1h_input_tokens=0)), cache_creation_input_tokens=0, cache_read_input_tokens=0, inference_geo='not_available', speed=None)`

</details>

Now let me save this result to storage:

🔧 transfer_to_agent({"agent_name": "saver"})

<details>

- id: `chatcmpl-e8af0e95-0465-48ea-adce-61a45730e6fb`
- model: `claude-haiku-4-5-20251001`
- finish_reason: `tool_calls`
- usage:
  `Usage(completion_tokens=83, prompt_tokens=832, total_tokens=915, completion_tokens_details=CompletionTokensDetailsWrapper(accepted_prediction_tokens=None, audio_tokens=None, reasoning_tokens=0, rejected_prediction_tokens=None, text_tokens=83, image_tokens=None, video_tokens=None), prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None, video_tokens=None, cache_creation_tokens=0, cache_creation_token_details=CacheCreationTokenDetails(ephemeral_5m_input_tokens=0, ephemeral_1h_input_tokens=0)), cache_creation_input_tokens=0, cache_read_input_tokens=0, inference_geo='not_available', speed=None)`

</details>

🔧 save({"result": "259875 * 175825 = 45692521875"})

<details>

- id: `chatcmpl-e1dabf48-5b91-4af8-a495-2a1a31e54756`
- model: `claude-haiku-4-5-20251001`
- finish_reason: `tool_calls`
- usage:
  `Usage(completion_tokens=73, prompt_tokens=787, total_tokens=860, completion_tokens_details=CompletionTokensDetailsWrapper(accepted_prediction_tokens=None, audio_tokens=None, reasoning_tokens=0, rejected_prediction_tokens=None, text_tokens=73, image_tokens=None, video_tokens=None), prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None, video_tokens=None, cache_creation_tokens=0, cache_creation_token_details=CacheCreationTokenDetails(ephemeral_5m_input_tokens=0, ephemeral_1h_input_tokens=0)), cache_creation_input_tokens=0, cache_read_input_tokens=0, inference_geo='not_available', speed=None)`

</details>

<details>

- id: `chatcmpl-919aad53-f580-4acf-b5d4-b5d200cb4f3e`
- model: `claude-haiku-4-5-20251001`
- finish_reason: `stop`
- usage:
  `Usage(completion_tokens=31, prompt_tokens=873, total_tokens=904, completion_tokens_details=CompletionTokensDetailsWrapper(accepted_prediction_tokens=None, audio_tokens=None, reasoning_tokens=0, rejected_prediction_tokens=None, text_tokens=31, image_tokens=None, video_tokens=None), prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None, video_tokens=None, cache_creation_tokens=0, cache_creation_token_details=CacheCreationTokenDetails(ephemeral_5m_input_tokens=0, ephemeral_1h_input_tokens=0)), cache_creation_input_tokens=0, cache_read_input_tokens=0, inference_geo='not_available', speed=None)`

</details>

    {'final_result': '259875 * 175825 = 45692521875'}
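
The dict shown above is the same shared state that `save` received as
`tool_ctx`, so once the run finishes your own code can read results
straight out of it:

``` python
r.state["final_result"]
# '259875 * 175825 = 45692521875'
```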
