
Tutorial: Build a Tool-Calling Agent

An agent that answers weather questions by calling your functions automatically:

  1. You ask “What’s the weather in Seattle?”
  2. The model decides to call your get_weather tool
  3. Your function runs and returns a result
  4. The model uses that result to compose a natural-language answer

By the end (~15 min) you’ll understand tool definitions, the agent loop, and how to add multiple tools to a single agent.


Install Prompty with the Jinja2 and OpenAI extras (quoted so the brackets survive shells like zsh):

```shell
pip install "prompty[jinja2,openai]"
```

Create a .env file with your OpenAI key:

```
OPENAI_API_KEY=sk-your-key-here
```
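Prompty resolves `${env:OPENAI_API_KEY}` from the process environment, so the `.env` file has to be loaded before the prompt runs. Libraries like python-dotenv handle this for you; for intuition, a minimal stdlib loader (the `load_env` helper below is illustrative, not part of Prompty) looks like:

```python
import os

# Illustrative helper (not part of prompty): load KEY=value pairs from a
# .env file into the process environment, skipping blanks and comments.
def load_env(path: str = ".env") -> None:
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                # setdefault: real environment variables win over .env values
                os.environ.setdefault(key.strip(), value.strip())
```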

Create weather-agent.prompty with a tool definition in the frontmatter:

```
---
name: weather-agent
description: An agent that checks the weather
model:
  id: gpt-4o-mini
  provider: openai
  apiType: chat
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
  options:
    temperature: 0
tools:
  - name: get_weather
    kind: function
    description: Get the current weather for a city
    parameters:
      - name: city
        kind: string
        description: The city name, e.g. "Seattle"
        required: true
inputs:
  - name: question
    kind: string
    default: What's the weather in Seattle?
---
system:
You are a helpful assistant with access to a weather tool.
Always use the tool when the user asks about weather.

user:
{{question}}
```
| Section | What it does |
| --- | --- |
| `apiType: chat` | Normal chat prompt — the agent loop is activated by your calling code |
| `tools` | Declares available functions so the LLM knows what it can call |
| `parameters` | Describes each function's arguments with types and descriptions |
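For context, the `tools` declaration corresponds to the JSON function schema that OpenAI-style chat APIs expect. The exact payload Prompty emits is an internal detail, but conceptually the `get_weather` entry maps to something shaped like this:

```python
# Hypothetical illustration: roughly how the frontmatter tool declaration
# corresponds to OpenAI's function-calling JSON schema.
get_weather_schema = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": 'The city name, e.g. "Seattle"',
                },
            },
            "required": ["city"],
        },
    },
}
```

Good `description` strings matter: they are the only hint the model gets about when (and how) to call each function.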

Write the function that the agent will call. Use the @tool decorator (Python), tool() wrapper (TypeScript), or [Tool] attribute (C#):

```python
from prompty import tool

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    # In production, call a real weather API here
    return f"72°F and sunny in {city}"
```
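The decorator pattern works because Python functions carry their own metadata. Prompty's real `@tool` implementation is more involved, but the core idea, reading the signature and docstring to build a schema automatically, can be sketched with only the standard library (the `my_tool` decorator below is purely illustrative):

```python
import inspect

# Sketch (not prompty's actual implementation): a decorator that reads a
# function's signature and docstring to build a tool schema automatically.
_PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def my_tool(fn):
    sig = inspect.signature(fn)
    fn.tool_schema = {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        # Map each parameter's type annotation to a JSON-schema type name
        "parameters": {
            name: _PY_TO_JSON.get(param.annotation, "string")
            for name, param in sig.parameters.items()
        },
    }
    return fn

@my_tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"72°F and sunny in {city}"

print(get_weather.tool_schema["parameters"])  # → {'city': 'string'}
```

This is why the type hints and docstring in your tool functions are load-bearing: they become the schema the model sees.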

Use invoke_agent() instead of invoke() — this activates the tool-calling loop so the runtime automatically executes your functions:

```python
from prompty import load, invoke_agent, tool, bind_tools

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"72°F and sunny in {city}"

# Load the prompt and bind tool functions
agent = load("weather-agent.prompty")
tools = bind_tools(agent, [get_weather])

# Run with the agent loop
result = invoke_agent(
    agent,
    inputs={"question": "What's the weather in Seattle?"},
    tools=tools,
)
print(result)
# → "The weather in Seattle is 72°F and sunny!"
```

Register the console tracer to watch each step of the agent loop:

```python
from prompty import Tracer
from prompty.tracing.tracer import console_tracer

Tracer.add("console", console_tracer)

# Now run the agent — you'll see each step printed:
result = invoke_agent(
    agent,
    inputs={"question": "What's the weather in Seattle?"},
    tools=tools,
)
```

The trace output shows the full loop:

```
[render]  → template rendered with inputs
[parse]   → 2 messages (system + user)
[execute] → LLM returns tool_calls: get_weather("Seattle")
[tool]    → get_weather("Seattle") → "72°F and sunny in Seattle"
[execute] → LLM returns text response with tool result in context
[process] → "The weather in Seattle is 72°F and sunny!"
```
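For intuition, the loop behind this trace can be sketched in plain Python: send messages, execute any requested tool calls, append the results, and stop when the model returns plain text. Here `fake_model` stands in for the real LLM call; `invoke_agent` implements the production version of this loop for you:

```python
# Sketch of the agent loop with a stubbed "model"; invoke_agent does the
# real version of this against the LLM API.
def get_weather(city: str) -> str:
    return f"72°F and sunny in {city}"

TOOLS = {"get_weather": get_weather}

def fake_model(messages):
    # First round: request a tool call; second round: answer in text.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_calls": [{"name": "get_weather", "args": {"city": "Seattle"}}]}
    result = next(m["content"] for m in messages if m["role"] == "tool")
    return {"content": f"The weather in Seattle is {result}!"}

def agent_loop(messages):
    while True:
        reply = fake_model(messages)
        if "tool_calls" not in reply:      # plain text: we're done
            return reply["content"]
        for call in reply["tool_calls"]:   # run each requested tool
            output = TOOLS[call["name"]](**call["args"])
            messages.append({"role": "tool", "content": output})

print(agent_loop([{"role": "user", "content": "What's the weather in Seattle?"}]))
# → "The weather in Seattle is 72°F and sunny in Seattle!"
```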

Agents can use multiple tools. Add a get_time tool to the .prompty file and register the matching function:

Update weather-agent.prompty — add the second tool to the tools list:

```yaml
tools:
  - name: get_weather
    kind: function
    description: Get the current weather for a city
    parameters:
      - name: city
        kind: string
        description: The city name, e.g. "Seattle"
        required: true
  - name: get_time
    kind: function
    description: Get the current time in a timezone
    parameters:
      - name: timezone
        kind: string
        description: IANA timezone, e.g. "America/New_York"
        required: true
```

Now register both functions:

```python
from prompty import load, invoke_agent, tool, bind_tools

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"72°F and sunny in {city}"

@tool
def get_time(timezone: str) -> str:
    """Get the current time in a timezone."""
    from datetime import datetime
    from zoneinfo import ZoneInfo
    return datetime.now(ZoneInfo(timezone)).isoformat()

agent = load("weather-agent.prompty")
tools = bind_tools(agent, [get_weather, get_time])

result = invoke_agent(
    agent,
    inputs={"question": "What's the weather in Seattle and the time in Tokyo?"},
    tools=tools,
)
print(result)
# The model calls both tools and combines the results
```

The model may call both tools in a single round or across multiple rounds — the agent loop handles either pattern automatically.


✅ Declaring tool definitions in a .prompty file
✅ Writing tool functions with @tool / tool() / [Tool]
✅ Using bind_tools() to validate functions against declarations
✅ Running the agent loop with invoke_agent()
✅ Tracing the loop to see tool calls in real time
✅ Adding multiple tools to a single agent