# Agent with Tool Calling

## Overview

An agent is a prompt that can call your functions. The flow:
- You define tools in the `.prompty` frontmatter
- You register the matching functions in your code
- The runtime sends the tool definitions to the LLM
- If the LLM returns a `tool_calls` response, the runtime calls your function, appends the result to the conversation, and calls the LLM again
- This loops until the LLM returns a normal text response
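The steps above can be sketched as a loop against an OpenAI-style chat client. This is an illustration of the mechanism, not the runtime's actual code: `client` is assumed to be something like `openai.OpenAI()`, `tools_schema` the JSON tool definitions, and `tool_functions` a name-to-callable dict.

```python
import json

def run_agent_loop(client, messages, tools_schema, tool_functions, model="gpt-4o-mini"):
    """Call the model until it returns a plain text response,
    executing any requested tools in between."""
    while True:
        response = client.chat.completions.create(
            model=model, messages=messages, tools=tools_schema
        )
        message = response.choices[0].message
        if not message.tool_calls:
            return message.content  # normal text response: the loop ends
        messages.append(message)  # keep the assistant turn that requested tools
        for call in message.tool_calls:
            args = json.loads(call.function.arguments)  # arguments arrive as a JSON string
            result = tool_functions[call.function.name](**args)
            messages.append(
                {"role": "tool", "tool_call_id": call.id, "content": result}
            )
```

Each tool result goes back into the conversation as a `"tool"` role message tied to its `tool_call_id`, so the model can see what its call returned.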
```
User message
  → LLM (with tool definitions)
  → tool_calls: get_weather("Seattle")
  → Your function returns "72°F and sunny"
  → LLM (with tool result in context)
  → "The weather in Seattle is 72°F and sunny!"
```

## 1. Define Your Tool Functions
```python
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    # In production, call a real weather API
    return f"72°F and sunny in {city}"

def get_time(timezone: str) -> str:
    """Get the current time in a timezone."""
    from datetime import datetime
    from zoneinfo import ZoneInfo
    return datetime.now(ZoneInfo(timezone)).isoformat()
```

```typescript
function getWeather(city: string): string {
  // In production, call a real weather API
  return `72°F and sunny in ${city}`;
}

function getTime(timezone: string): string {
  return new Date().toLocaleString("en-US", { timeZone: timezone });
}
```

## 2. Write the .prompty File
Create `agent.prompty`:
```
---
name: weather-agent
description: An agent that can check weather and time
model:
  id: gpt-4o-mini
  provider: openai
  apiType: chat
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
  options:
    temperature: 0
tools:
  - name: get_weather
    kind: function
    description: Get the current weather for a city
    parameters:
      properties:
        - name: city
          kind: string
          description: The city name, e.g. "Seattle"
          required: true
  - name: get_time
    kind: function
    description: Get the current time in a timezone
    parameters:
      properties:
        - name: timezone
          kind: string
          description: IANA timezone, e.g. "America/New_York"
          required: true
inputSchema:
  properties:
    - name: question
      kind: string
      default: What's the weather in Seattle?
template:
  format:
    kind: jinja2
  parser:
    kind: prompty
---
system:
You are a helpful assistant with access to weather and time tools.
Always use the tools when the user asks about weather or time.

user:
{{question}}
```

## 3. Run the Agent
```python
import prompty

# Define your tool functions
def get_weather(city: str) -> str:
    return f"72°F and sunny in {city}"

def get_time(timezone: str) -> str:
    from datetime import datetime
    from zoneinfo import ZoneInfo
    return datetime.now(ZoneInfo(timezone)).isoformat()

# Execute with the agent loop — tools are called automatically
result = prompty.execute_agent(
    "agent.prompty",
    inputs={"question": "What's the weather in Seattle and the time in Tokyo?"},
    tools={
        "get_weather": get_weather,
        "get_time": get_time,
    },
)
print(result)
# → "The weather in Seattle is 72°F and sunny, and the current time in Tokyo is ..."
```

Step-by-step variant:
```python
import prompty

agent = prompty.load("agent.prompty")

# Inspect the rendered messages, then run the agent loop with explicit tool functions
messages = prompty.prepare(agent, inputs={"question": "Weather in NYC?"})
result = prompty.execute_agent(
    agent,
    inputs={"question": "Weather in NYC?"},
    tools={
        "get_weather": get_weather,
        "get_time": get_time,
    },
)
print(result)
```

```typescript
import { executeAgent } from "@prompty/core";

function getWeather(city: string): string {
  return `72°F and sunny in ${city}`;
}

function getTime(timezone: string): string {
  return new Date().toLocaleString("en-US", { timeZone: timezone });
}

const result = await executeAgent("agent.prompty", {
  inputs: { question: "What's the weather in Seattle?" },
  tools: {
    get_weather: getWeather,
    get_time: getTime,
  },
});
console.log(result);
```

## 4. Async Tool Functions
If your tools call external APIs, use async functions to avoid blocking:
```python
import asyncio

import httpx
import prompty

async def get_weather(city: str) -> str:
    """Async weather lookup."""
    async with httpx.AsyncClient() as client:
        resp = await client.get(f"https://api.weather.com/v1/{city}")
        data = resp.json()
        return f"{data['temp']}°F, {data['condition']}"

async def main():
    result = await prompty.execute_agent_async(
        "agent.prompty",
        inputs={"question": "Weather in London?"},
        tools={
            "get_weather": get_weather,
        },
    )
    print(result)

asyncio.run(main())
```

```typescript
async function getWeather(city: string): Promise<string> {
  const resp = await fetch(`https://api.weather.com/v1/${city}`);
  const data = await resp.json();
  return `${data.temp}°F, ${data.condition}`;
}

const result = await executeAgent("agent.prompty", {
  inputs: { question: "Weather in London?" },
  tools: { get_weather: getWeather },
});
```

## 5. Multiple Tools Example
You can define as many tools as needed. Here's a more complete agent with document search, user lookup, and email capabilities:
```
---
name: research-agent
model:
  id: gpt-4o
  provider: openai
  apiType: chat
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
  options:
    temperature: 0
tools:
  - name: search_docs
    kind: function
    description: Search internal documentation
    parameters:
      properties:
        - name: query
          kind: string
          description: The search query
          required: true
        - name: limit
          kind: integer
          description: Max number of results (default 5)
  - name: get_user
    kind: function
    description: Look up a user by email
    parameters:
      properties:
        - name: email
          kind: string
          description: The user's email address
          required: true
  - name: send_email
    kind: function
    description: Send an email to a user
    parameters:
      properties:
        - name: to
          kind: string
          description: Recipient email
          required: true
        - name: subject
          kind: string
          description: Email subject
          required: true
        - name: body
          kind: string
          description: Email body
          required: true
inputSchema:
  properties:
    - name: request
      kind: string
template:
  format:
    kind: jinja2
  parser:
    kind: prompty
---
system:
You are an office assistant. You can search docs, look up users, and send emails.
Always confirm before sending emails.

user:
{{request}}
```

```python
import prompty

def search_docs(query: str, limit: int = 5) -> str:
    # Your search implementation
    return f"Found {limit} results for '{query}'"

def get_user(email: str) -> str:
    return '{"name": "Jane Doe", "email": "jane@example.com", "role": "Engineer"}'

def send_email(to: str, subject: str, body: str) -> str:
    # Your email implementation
    return f"Email sent to {to}"
```
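Schema drift between the frontmatter and your function signatures is easy to introduce as tools grow. A small pre-flight check with `inspect` can catch it before the LLM does. This is an illustrative helper, not part of any prompty API; the `declared` mapping is hand-copied from the frontmatter, and the lambdas stand in for your real tool functions.

```python
import inspect

def check_tools(declared, tool_functions):
    """Raise early if a declared tool is missing or its parameter
    names aren't accepted by the registered function."""
    for name, param_names in declared.items():
        func = tool_functions.get(name)
        if func is None:
            raise ValueError(f"Tool '{name}' is declared but not registered")
        accepted = set(inspect.signature(func).parameters)
        missing = set(param_names) - accepted
        if missing:
            raise ValueError(f"Tool '{name}' is missing parameters: {sorted(missing)}")

# Parameter names as declared in the research-agent frontmatter
declared = {
    "search_docs": ["query", "limit"],
    "get_user": ["email"],
    "send_email": ["to", "subject", "body"],
}
check_tools(declared, {
    "search_docs": lambda query, limit=5: "",
    "get_user": lambda email: "",
    "send_email": lambda to, subject, body: "",
})
```

Running this at startup turns a mid-conversation argument mismatch into an immediate, readable error.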
```python
result = prompty.execute_agent(
    "research-agent.prompty",
    inputs={"request": "Find docs about onboarding and email a summary to jane@example.com"},
    tools={
        "search_docs": search_docs,
        "get_user": get_user,
        "send_email": send_email,
    },
)
```

```typescript
import { executeAgent } from "@prompty/core";

const tools = {
  search_docs: (query: string, limit = 5) => `Found ${limit} results for '${query}'`,
  get_user: (email: string) =>
    JSON.stringify({ name: "Jane Doe", email, role: "Engineer" }),
  send_email: (to: string, subject: string, body: string) => `Email sent to ${to}`,
};

const result = await executeAgent("research-agent.prompty", {
  inputs: { request: "Find docs about onboarding and email a summary to jane@example.com" },
  tools,
});
```

## 6. Error Handling
```python
import prompty

try:
    result = prompty.execute_agent(
        "agent.prompty",
        inputs={"question": "What's the weather?"},
        tools={
            "get_weather": get_weather,
            # Missing get_time — will raise if the LLM calls it!
        },
    )
except ValueError as e:
    print(f"Tool error: {e}")
except Exception as e:
    print(f"Execution error: {e}")
```

```typescript
try {
  const result = await executeAgent("agent.prompty", {
    inputs: { question: "What's the weather?" },
    tools: {
      get_weather: getWeather,
    },
  });
} catch (error) {
  console.error("Agent error:", error);
}
```

Common pitfalls:
| Issue | Cause | Fix |
|---|---|---|
| ValueError: Tool 'X' not found | Function not registered | Add it to the tools dict |
| Agent loops forever | LLM keeps calling tools | Set maxOutputTokens or add “respond when done” to the system prompt |
| Wrong arguments passed | Schema mismatch | Ensure parameters in .prompty match your function signature |
| Tool returns non-string | Runtime expects string | Always return a string from tool functions (use json.dumps() for objects) |
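The last two pitfalls can be handled in one place by wrapping every tool before registering it. A minimal sketch, assuming the runtime simply calls whatever callables you pass in the `tools` dict:

```python
import functools
import json

def safe_tool(func):
    """Coerce tool results to strings and turn exceptions into error
    strings the LLM can read, instead of crashing the agent loop."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            result = func(*args, **kwargs)
        except Exception as exc:
            return f"Error in {func.__name__}: {exc}"
        if isinstance(result, str):
            return result
        return json.dumps(result)  # dicts, lists, numbers → JSON string

    return wrapper

def get_weather(city: str) -> dict:
    # Hypothetical tool that forgets the string rule and returns a dict
    return {"temp": 72, "condition": "sunny"}

tools = {"get_weather": safe_tool(get_weather)}
# tools["get_weather"]("Seattle") → '{"temp": 72, "condition": "sunny"}'
```

Returning the error as text also gives the model a chance to recover, for example by retrying the tool with corrected arguments.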