Agent with Tool Calling

An agent is a prompt that can call your functions. The flow:

  1. You define tools in the .prompty frontmatter
  2. You register the matching functions in your code
  3. The runtime sends the tools to the LLM
  4. If the LLM returns a tool_calls response, the runtime calls your function, appends the result to the conversation, and calls the LLM again
  5. This loops until the LLM returns a normal text response
User message
→ LLM (with tool definitions)
→ tool_calls: get_weather("Seattle")
→ Your function returns "72°F and sunny"
→ LLM (with tool result in context)
→ "The weather in Seattle is 72°F and sunny!"
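The loop above can be sketched in a few lines of Python. This is an illustration of what the runtime does, not the library's actual implementation: `llm` stands in for a chat-completion call, and the message shapes are simplified.

```python
import json

def run_agent_loop(llm, tools, messages, max_turns=8):
    """Illustrative sketch of the runtime's tool-calling loop."""
    for _ in range(max_turns):
        reply = llm(messages)                 # ask the model
        calls = reply.get("tool_calls")
        if not calls:                         # plain text response: done
            return reply["content"]
        messages.append(reply)                # keep the tool_calls turn
        for call in calls:
            fn = tools[call["name"]]          # unregistered tool -> KeyError
            result = fn(**json.loads(call["arguments"]))
            messages.append({"role": "tool", "name": call["name"],
                             "content": str(result)})
    raise RuntimeError("agent did not finish within max_turns")
```

The `max_turns` cap is the usual guard against a model that never stops calling tools.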

Start by writing a plain Python function for each tool:

def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    # In production, call a real weather API
    return f"72°F and sunny in {city}"

def get_time(timezone: str) -> str:
    """Get the current time in a timezone."""
    from datetime import datetime
    from zoneinfo import ZoneInfo
    return datetime.now(ZoneInfo(timezone)).isoformat()

Create agent.prompty:

---
name: weather-agent
description: An agent that can check weather and time
model:
  id: gpt-4o-mini
  provider: openai
  apiType: chat
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
  options:
    temperature: 0
tools:
  - name: get_weather
    kind: function
    description: Get the current weather for a city
    parameters:
      properties:
        - name: city
          kind: string
          description: The city name, e.g. "Seattle"
          required: true
  - name: get_time
    kind: function
    description: Get the current time in a timezone
    parameters:
      properties:
        - name: timezone
          kind: string
          description: IANA timezone, e.g. "America/New_York"
          required: true
inputSchema:
  properties:
    - name: question
      kind: string
      default: What's the weather in Seattle?
template:
  format:
    kind: jinja2
  parser:
    kind: prompty
---
system:
You are a helpful assistant with access to weather and time tools.
Always use the tools when the user asks about weather or time.
user:
{{question}}

import prompty

# Define your tool functions
def get_weather(city: str) -> str:
    return f"72°F and sunny in {city}"

def get_time(timezone: str) -> str:
    from datetime import datetime
    from zoneinfo import ZoneInfo
    return datetime.now(ZoneInfo(timezone)).isoformat()

# Execute with the agent loop; tools are called automatically
result = prompty.execute_agent(
    "agent.prompty",
    inputs={"question": "What's the weather in Seattle and the time in Tokyo?"},
    tools={
        "get_weather": get_weather,
        "get_time": get_time,
    },
)
print(result)
# → "The weather in Seattle is 72°F and sunny, and the current time in Tokyo is ..."

Step-by-step variant:

import prompty

agent = prompty.load("agent.prompty")

# prepare() renders the messages so you can inspect them before running
messages = prompty.prepare(agent, inputs={"question": "Weather in NYC?"})

# Run the agent loop with explicit tool functions
result = prompty.execute_agent(
    agent,
    inputs={"question": "Weather in NYC?"},
    tools={
        "get_weather": get_weather,
        "get_time": get_time,
    },
)
print(result)
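Because tools are registered as plain callables, you can wrap them before registering to see exactly which arguments the LLM chose. A minimal sketch; the `traced` helper below is not part of the library:

```python
import functools

def traced(fn):
    """Record every invocation of a tool function (illustrative helper)."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        wrapper.calls.append((args, kwargs, result))
        return result
    wrapper.calls = []
    return wrapper

@traced
def get_weather(city: str) -> str:
    return f"72°F and sunny in {city}"

# Register the wrapped function as usual, then inspect
# get_weather.calls after the run to audit what the agent did.
```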

If your tools call external APIs, use async functions to avoid blocking:

import asyncio
import httpx
import prompty

async def get_weather(city: str) -> str:
    """Async weather lookup."""
    async with httpx.AsyncClient() as client:
        resp = await client.get(f"https://api.weather.com/v1/{city}")
        data = resp.json()
        return f"{data['temp']}°F, {data['condition']}"

async def main():
    result = await prompty.execute_agent_async(
        "agent.prompty",
        inputs={"question": "Weather in London?"},
        tools={
            "get_weather": get_weather,
        },
    )
    print(result)

asyncio.run(main())

You can define as many tools as needed. Here’s a more complete agent with database and search capabilities:

---
name: research-agent
model:
  id: gpt-4o
  provider: openai
  apiType: chat
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
  options:
    temperature: 0
tools:
  - name: search_docs
    kind: function
    description: Search internal documentation
    parameters:
      properties:
        - name: query
          kind: string
          description: The search query
          required: true
        - name: limit
          kind: integer
          description: Max number of results (default 5)
  - name: get_user
    kind: function
    description: Look up a user by email
    parameters:
      properties:
        - name: email
          kind: string
          description: The user's email address
          required: true
  - name: send_email
    kind: function
    description: Send an email to a user
    parameters:
      properties:
        - name: to
          kind: string
          description: Recipient email
          required: true
        - name: subject
          kind: string
          description: Email subject
          required: true
        - name: body
          kind: string
          description: Email body
          required: true
inputSchema:
  properties:
    - name: request
      kind: string
template:
  format:
    kind: jinja2
  parser:
    kind: prompty
---
system:
You are an office assistant. You can search docs, look up users, and send emails.
Always confirm before sending emails.
user:
{{request}}
Register the matching functions and run it:

import prompty

def search_docs(query: str, limit: int = 5) -> str:
    # Your search implementation
    return f"Found {limit} results for '{query}'"

def get_user(email: str) -> str:
    return '{"name": "Jane Doe", "email": "jane@example.com", "role": "Engineer"}'

def send_email(to: str, subject: str, body: str) -> str:
    # Your email implementation
    return f"Email sent to {to}"

result = prompty.execute_agent(
    "research-agent.prompty",
    inputs={"request": "Find docs about onboarding and email a summary to jane@example.com"},
    tools={
        "search_docs": search_docs,
        "get_user": get_user,
        "send_email": send_email,
    },
)

If the LLM calls a tool you haven't registered, the runtime raises an error you can catch:

import prompty

try:
    result = prompty.execute_agent(
        "agent.prompty",
        inputs={"question": "What's the weather?"},
        tools={
            "get_weather": get_weather,
            # Missing get_time: will raise if the LLM calls it!
        },
    )
except ValueError as e:
    print(f"Tool error: {e}")
except Exception as e:
    print(f"Execution error: {e}")
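You can also catch the missing-tool case before any LLM call is made by comparing the tool names declared in the frontmatter against your registry up front. A sketch; `check_tools` is a hypothetical helper, not a library API:

```python
def check_tools(declared, registered):
    """Fail fast when a declared tool has no matching Python function."""
    missing = set(declared) - set(registered)
    if missing:
        raise ValueError(f"Tools not registered: {sorted(missing)}")

# e.g. check_tools(["get_weather", "get_time"], {"get_weather": get_weather})
# raises before the agent loop ever starts.
```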

Common pitfalls:

| Issue | Cause | Fix |
| --- | --- | --- |
| ValueError: Tool 'X' not found | Function not registered | Add it to the tools dict |
| Agent loops forever | LLM keeps calling tools | Set maxOutputTokens or add "respond when done" to the system prompt |
| Wrong arguments passed | Schema mismatch | Ensure parameters in .prompty match your function signature |
| Tool returns non-string | Runtime expects a string | Always return a string from tool functions (use json.dumps() for objects) |
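For the last pitfall: when a tool naturally produces a dict or list, serialize it before returning so the runtime receives a string. For example:

```python
import json

def get_user(email: str) -> str:
    """Return the lookup result as JSON text, not a Python dict."""
    user = {"name": "Jane Doe", "email": email, "role": "Engineer"}  # stub data
    return json.dumps(user)
```

The LLM reads the JSON text from the tool message just as easily, and the runtime never sees a non-string value.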