# Agent Extensions
## Overview

The Prompty agent loop (see Agent Mode) supports six optional extensions that give you fine-grained control over every iteration. All extensions are opt-in — pass only the ones you need.
| Extension | Purpose | Parameter |
|---|---|---|
| Events | Observe loop activity (tool calls, errors, etc.) | `on_event` / `onEvent` |
| Cancellation | Cooperatively abort a running loop | `cancel` / `signal` / `cancellationToken` |
| Context Window | Auto-trim messages to fit the model’s context | `context_budget` / `contextBudget` |
| Guardrails | Validate input, output, and tool calls | `guardrails` |
| Steering | Inject user messages mid-loop | `steering` |
| Parallel Tools | Execute tool calls concurrently | `parallel_tool_calls` / `parallelToolCalls` |
Additionally, each runtime provides a typed tool decorator/attribute that turns a regular function into a registered tool.
## Execution Order (per iteration)

1. Check cancellation
2. Drain steering messages
3. Trim context window
4. Input guardrail
5. Check cancellation (again, before LLM call)
6. Call LLM
7. Output guardrail
8. Tool guardrails (per tool)
9. Execute tools (serial or parallel)
10. Format tool results → append to messages
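The per-iteration ordering can be expressed as a small runnable sketch (illustrative only: the phase names and the `run_iteration` helper are invented here, not Prompty internals):

```python
def run_iteration(hooks: dict, trace: list) -> None:
    """One agent-loop iteration, in the order documented above.

    `hooks` maps phase names to callables; `trace` records execution
    order. All names here are illustrative, not real Prompty internals.
    """
    for phase in [
        "check_cancellation",    # 1. bail out early if cancelled
        "drain_steering",        # 2. pull in injected user messages
        "trim_context",          # 3. fit messages into the budget
        "input_guardrail",       # 4. validate the outgoing messages
        "check_cancellation_2",  # 5. re-check just before the LLM call
        "call_llm",              # 6. the actual model request
        "output_guardrail",      # 7. validate the assistant reply
        "tool_guardrails",       # 8. validate each requested tool call
        "execute_tools",         # 9. run tools (serial or parallel)
        "append_tool_results",   # 10. format results into messages
    ]:
        hooks.get(phase, lambda: None)()
        trace.append(phase)

trace: list = []
run_iteration({}, trace)
```

Passing real callables in `hooks` lets you see where each extension plugs into the cycle.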
## Events

Subscribe to structured events emitted during the agent loop. Event callbacks must not block — exceptions raised inside a callback are silently swallowed to keep the loop running.
```python
from prompty import invoke_agent, load
from prompty.core import AgentEvent, EventCallback

def my_callback(event: AgentEvent) -> None:
    print(f"[{event.event_type}] {event.data}")

agent = load("agent.prompty")
result = invoke_agent(
    agent,
    inputs={"question": "Hello"},
    tools={"get_weather": get_weather},
    on_event=my_callback,
)
```

```typescript
import { load, invokeAgent } from "@prompty/core";

const agent = await load("agent.prompty");
const result = await invokeAgent(agent, { question: "Hello" }, {
  tools: { get_weather: getWeather },
  onEvent: (eventType, data) => {
    console.log(`[${eventType}]`, data);
  },
});
```

```csharp
using Prompty.Core;

var agent = PromptyLoader.Load("agent.prompty");
var result = await Pipeline.InvokeAgentAsync(
    agent,
    new() { ["question"] = "Hello" },
    tools: tools,
    onEvent: (eventType, data) =>
    {
        Console.WriteLine($"[{eventType}] {string.Join(", ", data)}");
    });
```
### Event Types

| Event | When | Data |
|---|---|---|
| `tool_call_start` | Before each tool executes | `name`, `arguments` |
| `tool_result` | After each tool executes | `name`, `result` |
| `status` | Informational (e.g., steering injected) | `message` |
| `messages_updated` | Messages array changed | `messages` |
| `done` | Loop completed normally | `response`, `messages` |
| `error` | A guardrail denied or an error occurred | `message` |
| `cancelled` | Loop was cancelled | `iteration` |
## Cancellation

Cooperatively cancel a running agent loop. The loop checks for cancellation at the top of each iteration and just before the LLM call.
```python
import threading

from prompty import invoke_agent, load
from prompty.core import CancellationToken, CancelledError

token = CancellationToken()

# Cancel from another thread after 5 seconds
threading.Timer(5.0, token.cancel).start()

try:
    result = invoke_agent(agent, inputs, tools, cancel=token)
except CancelledError:
    print("Agent loop was cancelled")
```

```typescript
import { invokeAgent, CancelledError } from "@prompty/core";

const controller = new AbortController();

// Cancel after 5 seconds
setTimeout(() => controller.abort(), 5000);

try {
  const result = await invokeAgent(agent, inputs, {
    tools,
    signal: controller.signal,
  });
} catch (err) {
  if (err instanceof CancelledError) {
    console.log("Agent loop was cancelled");
  }
}
```

```csharp
using Prompty.Core;

var cts = new CancellationTokenSource();

// Cancel after 5 seconds
cts.CancelAfter(TimeSpan.FromSeconds(5));

try
{
    var result = await Pipeline.InvokeAgentAsync(
        agent, inputs, tools,
        cancellationToken: cts.Token
    );
}
catch (OperationCanceledException)
{
    Console.WriteLine("Agent loop was cancelled");
}
```
## Context Window Management

Automatically trim messages to fit within a character budget. The trimmer preserves system messages and the most recent conversation turns, replacing dropped messages with a compact summary.
```python
result = invoke_agent(
    agent,
    inputs={"question": "Summarize our conversation"},
    tools=tools,
    context_budget=50_000,  # characters
)
```

```typescript
const result = await invokeAgent(agent, inputs, {
  tools,
  contextBudget: 50_000,
});
```

```csharp
var result = await Pipeline.InvokeAgentAsync(
    agent, inputs, tools,
    contextBudget: 50_000
);
```
### How Trimming Works

1. Estimate the character cost of all messages (role overhead + text + tool call JSON)
2. Partition into leading system messages vs. the rest
3. Drop the oldest non-system messages until within budget (keeping at least 2)
4. Summarize dropped messages into a compact `[Context summary: ...]` block
5. Inject the summary as a user message after the system messages
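The steps above can be sketched as a small self-contained function (a behavioral sketch under stated assumptions — plain dict messages and character-based costs — not the real Prompty trimmer):

```python
def trim_messages(messages, budget):
    """Sketch of the documented trimming steps (not Prompty's actual code).

    Messages are plain dicts: {"role": ..., "content": ...}.
    """
    def cost(msg):
        # Step 1: rough per-message estimate (role overhead + text length).
        return len(msg["role"]) + len(msg["content"]) + 8

    # Step 2: partition leading system messages from the rest.
    i = 0
    while i < len(messages) and messages[i]["role"] == "system":
        i += 1
    system, rest = messages[:i], list(messages[i:])

    # Step 3: drop the oldest non-system messages until within budget,
    # always keeping at least the 2 most recent.
    dropped = []
    while len(rest) > 2 and sum(map(cost, system + rest)) > budget:
        dropped.append(rest.pop(0))

    if not dropped:
        return system + rest

    # Steps 4-5: compress dropped messages into a [Context summary: ...]
    # block injected as a user message right after the system messages.
    summary = "[Context summary: " + "; ".join(
        m["content"][:40] for m in dropped
    ) + "]"
    return system + [{"role": "user", "content": summary}] + rest

msgs = [{"role": "system", "content": "You are helpful."}] + [
    {"role": "user", "content": "x" * 100} for _ in range(5)
]
trimmed = trim_messages(msgs, budget=300)
```

Note the two invariants the real trimmer also documents: system messages are never dropped, and at least two recent turns always survive.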
## Guardrails

Validate messages at three checkpoints in the loop: before the LLM call (input), after the LLM responds (output), and before each tool executes (tool). If a guardrail denies, a `GuardrailError` is raised.
```python
from prompty import invoke_agent
from prompty.core import Guardrails, GuardrailResult, GuardrailError

def check_input(messages):
    """Block prompt injection attempts."""
    for msg in messages:
        if "ignore previous instructions" in msg.text.lower():
            return GuardrailResult(allowed=False, reason="Prompt injection detected")
    return GuardrailResult(allowed=True)

def check_tool(name, args):
    """Only allow known-safe tools."""
    if name == "delete_all_data":
        return GuardrailResult(allowed=False, reason="Dangerous tool blocked")
    return GuardrailResult(allowed=True)

guardrails = Guardrails(input=check_input, tool=check_tool)

try:
    result = invoke_agent(agent, inputs, tools, guardrails=guardrails)
except GuardrailError as e:
    print(f"Blocked: {e.reason}")
```

```typescript
import { invokeAgent, Guardrails, GuardrailError } from "@prompty/core";

const guardrails = new Guardrails({
  input: (messages) => {
    for (const msg of messages) {
      if (msg.text.toLowerCase().includes("ignore previous instructions")) {
        return { allowed: false, reason: "Prompt injection detected" };
      }
    }
    return { allowed: true };
  },
  tool: (name, args) => {
    if (name === "delete_all_data") {
      return { allowed: false, reason: "Dangerous tool blocked" };
    }
    return { allowed: true };
  },
});

try {
  const result = await invokeAgent(agent, inputs, { tools, guardrails });
} catch (err) {
  if (err instanceof GuardrailError) {
    console.log(`Blocked: ${err.reason}`);
  }
}
```

```csharp
using Prompty.Core;

var guardrails = new Guardrails(
    input: (messages) =>
    {
        foreach (var msg in messages)
        {
            if (msg.Text.Contains("ignore previous instructions", StringComparison.OrdinalIgnoreCase))
                return new GuardrailResult(false, "Prompt injection detected");
        }
        return new GuardrailResult(true);
    },
    tool: (name, args) =>
    {
        if (name == "delete_all_data")
            return new GuardrailResult(false, "Dangerous tool blocked");
        return new GuardrailResult(true);
    });

try
{
    var result = await Pipeline.InvokeAgentAsync(
        agent, inputs, tools,
        guardrails: guardrails
    );
}
catch (GuardrailError e)
{
    Console.WriteLine($"Blocked: {e.Reason}");
}
```
### Guardrail Checkpoints

| Hook | When | Receives | Typical Use |
|---|---|---|---|
| `input` | Before LLM call | Full message list | Prompt injection detection, content policy |
| `output` | After LLM response | Assistant message | Toxicity filtering, PII detection |
| `tool` | Before each tool call | Tool name + args | Block dangerous operations, rate limiting |
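The Python examples above wire up `input` and `tool` hooks; an `output` hook follows the same return contract. A hedged sketch of a PII-filtering output hook (the `GuardrailResult` class is stubbed locally so the snippet runs without Prompty installed, and the hook is simplified to take the message text directly; in real code, import the class from `prompty.core` and accept the assistant message object):

```python
import re
from dataclasses import dataclass
from typing import Optional

# Local stand-in for prompty.core.GuardrailResult so this sketch is
# self-contained; in real code, import the class from prompty.core.
@dataclass
class GuardrailResult:
    allowed: bool
    reason: Optional[str] = None

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # US SSN-shaped strings

def check_output(text: str) -> GuardrailResult:
    """Output hook: block assistant replies that leak SSN-shaped PII."""
    if SSN_RE.search(text):
        return GuardrailResult(allowed=False, reason="PII detected in output")
    return GuardrailResult(allowed=True)

# Wired up alongside the other hooks:
# guardrails = Guardrails(input=check_input, output=check_output, tool=check_tool)
```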
## Steering

Inject additional user messages into a running agent loop from outside. This is useful for human-in-the-loop scenarios where you want to redirect the agent mid-conversation.
```python
import threading

from prompty import invoke_agent
from prompty.core import Steering

steering = Steering()

def user_input_loop():
    while True:
        msg = input("You: ")
        steering.send(msg)

# Run input loop in background
threading.Thread(target=user_input_loop, daemon=True).start()

result = invoke_agent(agent, inputs, tools, steering=steering)
```

```typescript
import { invokeAgent, Steering } from "@prompty/core";

const steering = new Steering();

// Inject a message that will be picked up at the next iteration
steering.send("Actually, check Paris instead of London");

const result = await invokeAgent(agent, inputs, {
  tools,
  steering,
});
```

```csharp
using Prompty.Core;

var steering = new Steering();

// From another thread or async context
steering.Send("Actually, check Paris instead of London");

var result = await Pipeline.InvokeAgentAsync(
    agent, inputs, tools,
    steering: steering
);
```

At the top of each iteration, the loop calls `steering.drain()` to collect all pending messages and appends them to the conversation. The steering queue is thread-safe in all runtimes.
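The send/drain contract can be illustrated with a minimal thread-safe queue (a behavioral sketch, not the actual `Steering` implementation; the class name `MiniSteering` is invented):

```python
import threading

class MiniSteering:
    """Sketch of the documented send/drain contract, not the real class."""

    def __init__(self):
        self._lock = threading.Lock()
        self._pending: list = []

    def send(self, message: str) -> None:
        # Safe to call from any thread while the loop is running.
        with self._lock:
            self._pending.append(message)

    def drain(self) -> list:
        # Called by the loop at the top of each iteration: atomically
        # take everything that accumulated since the last drain.
        with self._lock:
            pending, self._pending = self._pending, []
            return pending
```

The swap-under-lock in `drain()` is what makes each injected message appear exactly once, even with concurrent senders.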
## Parallel Tool Execution

When the LLM requests multiple tool calls in a single response, you can execute them concurrently instead of sequentially.
```python
# Async mode uses asyncio.gather for true parallelism
result = await invoke_agent_async(
    agent, inputs, tools,
    parallel_tool_calls=True,
)
```

```typescript
// Uses Promise.all for concurrent execution
const result = await invokeAgent(agent, inputs, {
  tools,
  parallelToolCalls: true,
});
```

```csharp
// Uses Task.WhenAll for concurrent execution
var result = await Pipeline.InvokeAgentAsync(
    agent, inputs, tools,
    parallelToolCalls: true
);
```
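What the flag changes under the hood can be sketched with `asyncio.gather` (a simplified sketch; `execute_tool_calls` and the fake `get_weather` handler are illustrative, not Prompty's actual dispatcher):

```python
import asyncio

async def execute_tool_calls(calls, handlers, parallel: bool):
    """Run one batch of LLM-requested tool calls (sketch)."""
    if parallel:
        # All handlers start at once; results come back in request order.
        return await asyncio.gather(
            *(handlers[name](**args) for name, args in calls)
        )
    # Sequential fallback: one handler at a time.
    results = []
    for name, args in calls:
        results.append(await handlers[name](**args))
    return results

async def get_weather(city: str) -> str:
    await asyncio.sleep(0.05)  # simulated I/O latency
    return f"sunny in {city}"

calls = [("get_weather", {"city": c}) for c in ("Tokyo", "Paris")]
results = asyncio.run(
    execute_tool_calls(calls, {"get_weather": get_weather}, parallel=True)
)
```

With two 50 ms handlers, the parallel path finishes in roughly the time of the slowest call rather than the sum, and `gather` still preserves the request order of the results.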
## Combining Extensions

All extensions compose naturally. Here’s a fully configured agent call:
```python
from prompty import invoke_agent_async, load_async
from prompty.core import (
    CancellationToken,
    Guardrails,
    GuardrailResult,
    Steering,
)

agent = await load_async("agent.prompty")
token = CancellationToken()
steering = Steering()

guardrails = Guardrails(
    input=lambda msgs: GuardrailResult(allowed=True),
    output=lambda msg: GuardrailResult(allowed=True),
    tool=lambda name, args: GuardrailResult(allowed=True),
)

result = await invoke_agent_async(
    agent,
    inputs={"question": "Plan my trip"},
    tools=tools,
    on_event=lambda e: print(f"[{e.event_type}]"),
    cancel=token,
    context_budget=50_000,
    guardrails=guardrails,
    steering=steering,
    parallel_tool_calls=True,
    max_iterations=20,
)
```

```typescript
import {
  load,
  invokeAgent,
  Guardrails,
  Steering,
  CancelledError,
} from "@prompty/core";

const agent = await load("agent.prompty");
const controller = new AbortController();
const steering = new Steering();
const guardrails = new Guardrails({
  input: () => ({ allowed: true }),
  output: () => ({ allowed: true }),
  tool: () => ({ allowed: true }),
});

const result = await invokeAgent(agent, { question: "Plan my trip" }, {
  tools,
  onEvent: (type, data) => console.log(`[${type}]`),
  signal: controller.signal,
  contextBudget: 50_000,
  guardrails,
  steering,
  parallelToolCalls: true,
  maxIterations: 20,
});
```

```csharp
using Prompty.Core;

var agent = PromptyLoader.Load("agent.prompty");
var cts = new CancellationTokenSource();
var steering = new Steering();
var guardrails = new Guardrails(
    input: _ => new GuardrailResult(true),
    output: _ => new GuardrailResult(true),
    tool: (_, _) => new GuardrailResult(true)
);

var result = await Pipeline.InvokeAgentAsync(
    agent,
    new() { ["question"] = "Plan my trip" },
    tools: tools,
    maxIterations: 20,
    onEvent: (type, data) => Console.WriteLine($"[{type}]"),
    cancellationToken: cts.Token,
    contextBudget: 50_000,
    guardrails: guardrails,
    steering: steering,
    parallelToolCalls: true
);
```
## Typed Tool Functions

The `.prompty` file is the single source of truth — tools are declared in frontmatter so the file is a complete, portable exchange format. The runtime needs a handler function for each declared tool. Each runtime provides a decorator or attribute that makes writing these handlers clean: you get typed parameters instead of raw JSON, and the boilerplate disappears.
### The .prompty File Declares, Your Code Implements

Tools are always declared in the `.prompty` frontmatter — this is what gets sent to the LLM so it knows what tools are available:
```yaml
# agent.prompty (frontmatter excerpt)
tools:
  - name: get_weather
    kind: function
    description: Get the current weather for a city
    parameters:
      - name: city
        kind: string
        description: City name
        required: true
      - name: units
        kind: string
        default: celsius
```

Your code then provides the handler — the function that actually runs when the LLM calls `get_weather`. This is where `@tool` / `tool()` / `[Tool]` helps.
### Before and After

```python
# ❌ Without @tool — manual dict, raw JSON parsing
import json

def get_weather(args_json):
    args = json.loads(args_json)
    city = args["city"]
    units = args.get("units", "celsius")
    return f"72°F in {city}"

tools = {"get_weather": get_weather}

result = invoke_agent(agent, inputs, tools=tools)
```

```python
# ✅ With @tool + bind_tools — typed, validated, clean
from prompty import tool, bind_tools, invoke_agent

@tool
def get_weather(city: str, units: str = "celsius") -> str:
    """Get the current weather for a city."""
    return f"72°F in {city}"

# bind_tools validates names match the .prompty declarations
tools = bind_tools(agent, [get_weather])
result = invoke_agent(agent, inputs, tools=tools)
```

```typescript
// ❌ Without tool() — manual dict, untyped args
const tools = {
  get_weather: (args: Record<string, unknown>) => {
    const city = args.city as string;
    return `72°F in ${city}`;
  },
};

const result = await invokeAgent(agent, inputs, { tools });
```

```typescript
// ✅ With tool() + bindTools — typed, validated handler
import { tool, bindTools, invokeAgent } from "@prompty/core";

const getWeather = tool(
  (city: string, units?: string) => `72°F in ${city}`,
  {
    name: "get_weather",
    description: "Get the current weather for a city",
    parameters: [
      { name: "city", kind: "string", required: true },
      { name: "units", kind: "string", default: "celsius" },
    ],
  },
);

// bindTools validates names match the .prompty declarations
const tools = bindTools(agent, [getWeather]);
const result = await invokeAgent(agent, inputs, { tools });
```

```csharp
// ❌ Without [Tool] — manual dict, manual JSON parsing
var tools = new Dictionary<string, Func<string, Task<string>>>
{
    ["get_weather"] = async (argsJson) =>
    {
        var args = JsonSerializer.Deserialize<Dictionary<string, object?>>(argsJson)!;
        var city = args["city"]?.ToString() ?? "unknown";
        var units = args.GetValueOrDefault("units")?.ToString() ?? "celsius";
        return $"72°F in {city}";
    },
};

var result = await Pipeline.InvokeAgentAsync(agent, inputs, tools: tools);
```

```csharp
// ✅ With [Tool] + BindTools — typed parameters, validated, no JSON parsing
using Prompty.Core;

public class WeatherService
{
    [Tool(Name = "get_weather", Description = "Get the current weather")]
    public string GetWeather(string city, string units = "celsius")
    {
        return $"72°F in {city}";
    }
}

// BindTools validates [Tool] names match the .prompty declarations
var service = new WeatherService();
var tools = ToolAttribute.BindTools(agent, service);

var result = await Pipeline.InvokeAgentAsync(agent, inputs, tools: tools);
```
### End-to-End Example

A complete agent with the .prompty file and matching handlers:
```markdown
# agent.prompty
---
name: assistant
model:
  id: gpt-4o
  provider: openai
  apiType: chat
tools:
  - name: get_weather
    kind: function
    description: Get the current weather for a city
    parameters:
      - name: city
        kind: string
        required: true
  - name: get_time
    kind: function
    description: Get the current time in a timezone
    parameters:
      - name: timezone
        kind: string
        required: true
inputs:
  - name: question
    kind: string
---
system:
You are a helpful assistant with access to weather and time tools.

user:
{{question}}
```

```python
from prompty import load, invoke_agent, tool, bind_tools

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"72°F and sunny in {city}"

@tool
def get_time(timezone: str) -> str:
    """Get the current time in a timezone."""
    return f"3:42 PM in {timezone}"

agent = load("agent.prompty")

# bind_tools validates that each @tool name matches a declaration
# in agent.tools, then returns the handler dict
tools = bind_tools(agent, [get_weather, get_time])

result = invoke_agent(
    agent,
    inputs={"question": "What's the weather in Tokyo?"},
    tools=tools,
)
print(result)
```

```typescript
import { load, invokeAgent, tool, bindTools } from "@prompty/core";

const getWeather = tool(
  (city: string) => `72°F and sunny in ${city}`,
  {
    name: "get_weather",
    description: "Get the current weather for a city",
    parameters: [{ name: "city", kind: "string", required: true }],
  },
);

const getTime = tool(
  (timezone: string) => `3:42 PM in ${timezone}`,
  {
    name: "get_time",
    description: "Get the current time in a timezone",
    parameters: [{ name: "timezone", kind: "string", required: true }],
  },
);

const agent = await load("agent.prompty");

// bindTools validates names against agent.tools declarations
const tools = bindTools(agent, [getWeather, getTime]);

const result = await invokeAgent(agent, { question: "Weather in Tokyo?" }, {
  tools,
});
console.log(result);
```

```csharp
using Prompty.Core;

public class AssistantTools
{
    [Tool(Name = "get_weather", Description = "Get the current weather")]
    public string GetWeather(string city)
    {
        return $"72°F and sunny in {city}";
    }

    [Tool(Name = "get_time", Description = "Get the current time")]
    public string GetTime(string timezone)
    {
        return $"3:42 PM in {timezone}";
    }
}

var agent = PromptyLoader.Load("agent.prompty");
var service = new AssistantTools();

// BindTools validates [Tool] methods against agent.Tools declarations
var tools = ToolAttribute.BindTools(agent, service);

var result = await Pipeline.InvokeAgentAsync(
    agent,
    new() { ["question"] = "What's the weather in Tokyo?" },
    tools: tools
);
Console.WriteLine(result);
```
### Customization Options

```python
# Bare decorator — uses function name and docstring
@tool
def my_func(x: str) -> str:
    """This becomes the description."""
    return x

# With overrides
@tool(name="custom_name", description="Custom description")
def my_func_v2(x: str) -> str:
    return x

# Access the generated FunctionTool definition
print(my_func.__tool__.name)         # "my_func"
print(my_func.__tool__.description)  # "This becomes the description."
print(my_func.__tool__.parameters)   # [Property(name="x", kind="string", ...)]
```

```typescript
const helper = tool(fn, {
  name: "helper",
  parameters: [...],
});

// Access the generated FunctionTool
console.log(helper.__tool__.name); // "helper"
```

```csharp
// [Tool] with no arguments — uses method name as-is
[Tool]
public string MyMethod(string x) => x;
// Tool name = "MyMethod"

// [Tool] with overrides
[Tool(Name = "custom_name", Description = "Custom description")]
public string MyMethod2(string x) => x;
// Tool name = "custom_name"

// Build a FunctionTool definition without registering
var method = typeof(MyClass).GetMethod("MyMethod")!;
var toolDef = ToolAttribute.BuildFromMethod(method);
Console.WriteLine(toolDef.Name);       // "MyMethod"
Console.WriteLine(toolDef.Parameters); // [Property { Name = "x", Kind = "string" }]
```
### Type Mappings

The decorator/attribute maps language types to schema kinds automatically:

| Python | TypeScript | C# | Schema Kind |
|---|---|---|---|
| `str` | `"string"` | `string` | `string` |
| `int` | `"integer"` | `int`, `long` | `integer` |
| `float` | `"float"` | `float`, `double` | `float` |
| `bool` | `"boolean"` | `bool` | `boolean` |
| `list` | `"array"` | `List<T>`, arrays | `array` |
| `dict` | `"object"` | `Dictionary<,>` | `object` |
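The Python column of the table can be sketched as a lookup over function annotations (illustrative only: `schema_kind` and the fallback-to-`object` behavior are assumptions for this sketch, not the actual decorator internals):

```python
import typing

# Python-side version of the table above: annotation -> schema kind.
_KIND_BY_TYPE = {
    str: "string",
    int: "integer",
    float: "float",
    bool: "boolean",
    list: "array",
    dict: "object",
}

def schema_kind(annotation) -> str:
    # Generic forms like list[str] or dict[str, int] reduce to their
    # origin type before lookup; unknown types fall back to "object".
    origin = typing.get_origin(annotation) or annotation
    return _KIND_BY_TYPE.get(origin, "object")

def get_weather(city: str, units: str = "celsius", retries: int = 0) -> str:
    ...

# What a decorator could derive from the signature (return type excluded):
kinds = {
    name: schema_kind(hint)
    for name, hint in typing.get_type_hints(get_weather).items()
    if name != "return"
}
```

Reducing `list[str]` to `list` via `typing.get_origin` is what lets parameterized generics share the `array` and `object` rows of the table.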