# Tutorial: Build a Tool-Calling Agent

## What you'll build

An agent that answers weather questions by calling your functions automatically:

- You ask "What's the weather in Seattle?"
- The model decides to call your `get_weather` tool
- Your function runs and returns a result
- The model uses that result to compose a natural-language answer
By the end (~15 min) you’ll understand tool definitions, the agent loop, and how to add multiple tools to a single agent.
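Before diving in, it helps to see the shape of the loop you'll be wiring up. Below is a minimal sketch of a generic tool-calling loop; `call_model`, the message dicts, and the tool-call shapes are hypothetical stand-ins for illustration, not Prompty's actual internals:

```python
# Minimal sketch of a tool-calling loop. `call_model` and the message/
# tool-call shapes here are hypothetical, not Prompty's real API.
def run_agent_loop(call_model, tools, messages):
    """Call the model until it answers in plain text, running any tools it requests."""
    while True:
        reply = call_model(messages)
        if not reply.get("tool_calls"):
            return reply["content"]  # no tool requests left: final answer
        for call in reply["tool_calls"]:
            # Look up and execute the requested function with the model's arguments
            result = tools[call["name"]](**call["args"])
            # Feed the tool result back so the next model call can use it
            messages.append({"role": "tool", "name": call["name"], "content": result})
```

The library's job is to run a loop of this shape for you, so your code never has to parse tool calls by hand.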
## Step 1: Install Prompty

```bash
pip install "prompty[jinja2,openai]"
```

```bash
npm install @prompty/core @prompty/openai
```

```bash
dotnet add package Prompty.Core --prerelease
dotnet add package Prompty.OpenAI --prerelease
```

Create a `.env` file with your OpenAI key:

```text
OPENAI_API_KEY=sk-your-key-here
```

## Step 2: Create the .prompty file

Create `weather-agent.prompty` with a tool definition in the frontmatter:
```
---
name: weather-agent
description: An agent that checks the weather
model:
  id: gpt-4o-mini
  provider: openai
  apiType: chat
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
  options:
    temperature: 0
tools:
  - name: get_weather
    kind: function
    description: Get the current weather for a city
    parameters:
      - name: city
        kind: string
        description: The city name, e.g. "Seattle"
        required: true
inputs:
  - name: question
    kind: string
    default: What's the weather in Seattle?
---
system:
You are a helpful assistant with access to a weather tool.
Always use the tool when the user asks about weather.

user:
{{question}}
```

| Section | What it does |
|---|---|
| `apiType: chat` | Normal chat prompt — the agent loop is activated by your calling code |
| `tools` | Declares available functions so the LLM knows what it can call |
| `parameters` | Describes each function's arguments with types and descriptions |
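For intuition about what the `tools` frontmatter becomes, here is the OpenAI-style function schema that a declaration like `get_weather` conventionally maps to. This is an illustration of the standard OpenAI tools format only; the exact payload Prompty emits may differ:

```python
# Illustration only: the OpenAI-style function schema that a frontmatter
# tool declaration conventionally compiles to. The exact payload Prompty
# produces may differ.
get_weather_schema = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {  # JSON Schema describing the arguments
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": 'The city name, e.g. "Seattle"',
                },
            },
            "required": ["city"],
        },
    },
}
```

Keeping descriptions specific (including an example value, as above) is what lets the model pick the right tool and fill in sensible arguments.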
## Step 3: Define the tool function

Write the function that the agent will call. Use the `@tool` decorator (Python), the `tool()` wrapper (TypeScript), or the `[Tool]` attribute (C#):

```python
from prompty import tool

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    # In production, call a real weather API here
    return f"72°F and sunny in {city}"
```

```typescript
import { tool } from "@prompty/core";

const getWeather = tool(
  (city: string) => `72°F and sunny in ${city}`,
  {
    name: "get_weather",
    description: "Get the current weather for a city",
    parameters: [{ name: "city", kind: "string", required: true }],
  },
);
```

```csharp
using Prompty.Core;

public class WeatherTools
{
    [Tool(Name = "get_weather", Description = "Get the current weather")]
    public string GetWeather(string city)
    {
        // In production, call a real weather API here
        return $"72°F and sunny in {city}";
    }
}
```

## Step 4: Run the agent
Use `invoke_agent()` instead of `invoke()` — this activates the tool-calling loop so the runtime automatically executes your functions:

```python
from prompty import load, invoke_agent, tool, bind_tools

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"72°F and sunny in {city}"

# Load the prompt and bind tool functions
agent = load("weather-agent.prompty")
tools = bind_tools(agent, [get_weather])

# Run with the agent loop
result = invoke_agent(
    agent,
    inputs={"question": "What's the weather in Seattle?"},
    tools=tools,
)
print(result)
# → "The weather in Seattle is 72°F and sunny!"
```

```typescript
import { load, invokeAgent, tool, bindTools } from "@prompty/core";
import "@prompty/openai";

const getWeather = tool(
  (city: string) => `72°F and sunny in ${city}`,
  {
    name: "get_weather",
    description: "Get the current weather for a city",
    parameters: [{ name: "city", kind: "string", required: true }],
  },
);

const agent = await load("weather-agent.prompty");
const tools = bindTools(agent, [getWeather]);

const result = await invokeAgent(agent, {
  question: "What's the weather in Seattle?",
}, { tools });

console.log(result);
// → "The weather in Seattle is 72°F and sunny!"
```

```csharp
using Prompty.Core;

var service = new WeatherTools();
var agent = PromptyLoader.Load("weather-agent.prompty");
var tools = ToolAttribute.BindTools(agent, service);

var result = await Pipeline.InvokeAgentAsync(
    agent,
    new() { ["question"] = "What's the weather in Seattle?" },
    tools: tools);

Console.WriteLine(result);
// → "The weather in Seattle is 72°F and sunny!"
```

## Step 5: See the loop in action
Register the console tracer to watch each step of the agent loop:

```python
from prompty import Tracer
from prompty.tracing.tracer import console_tracer

Tracer.add("console", console_tracer)

# Now run the agent — you'll see each step printed:
result = invoke_agent(
    agent,
    inputs={"question": "What's the weather in Seattle?"},
    tools=tools,
)
```

```typescript
import { Tracer, consoleTracer } from "@prompty/core";

Tracer.add("console", consoleTracer);

// Now run — each step is logged to the console
const result = await invokeAgent(agent, {
  question: "What's the weather in Seattle?",
}, { tools });
```

```csharp
using Prompty.Core.Tracing;

Tracer.Add("console", ConsoleTracer.Factory);

// Now run — each step is logged to the console
var result = await Pipeline.InvokeAgentAsync(
    agent,
    new() { ["question"] = "What's the weather in Seattle?" },
    tools: tools);
```

The trace output shows the full loop:

```text
[render]  → template rendered with inputs
[parse]   → 2 messages (system + user)
[execute] → LLM returns tool_calls: get_weather("Seattle")
[tool]    → get_weather("Seattle") → "72°F and sunny in Seattle"
[execute] → LLM returns text response with tool result in context
[process] → "The weather in Seattle is 72°F and sunny!"
```

## Step 6: Add a second tool
Agents can use multiple tools. Add a `get_time` tool to the `.prompty` file and register the matching function.

Update `weather-agent.prompty` — add the second tool to the `tools` list:

```yaml
tools:
  - name: get_weather
    kind: function
    description: Get the current weather for a city
    parameters:
      - name: city
        kind: string
        description: The city name, e.g. "Seattle"
        required: true
  - name: get_time
    kind: function
    description: Get the current time in a timezone
    parameters:
      - name: timezone
        kind: string
        description: IANA timezone, e.g. "America/New_York"
        required: true
```

Now register both functions:

```python
from prompty import load, invoke_agent, tool, bind_tools

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"72°F and sunny in {city}"

@tool
def get_time(timezone: str) -> str:
    """Get the current time in a timezone."""
    from datetime import datetime
    from zoneinfo import ZoneInfo
    return datetime.now(ZoneInfo(timezone)).isoformat()

agent = load("weather-agent.prompty")
tools = bind_tools(agent, [get_weather, get_time])

result = invoke_agent(
    agent,
    inputs={"question": "What's the weather in Seattle and the time in Tokyo?"},
    tools=tools,
)
print(result)
# The model calls both tools and combines the results
```

```typescript
import { load, invokeAgent, tool, bindTools } from "@prompty/core";
import "@prompty/openai";

const getWeather = tool(
  (city: string) => `72°F and sunny in ${city}`,
  {
    name: "get_weather",
    description: "Get the current weather for a city",
    parameters: [{ name: "city", kind: "string", required: true }],
  },
);

const getTime = tool(
  // Resolve the requested IANA timezone via the Intl-aware Date formatter
  (timezone: string) => new Date().toLocaleString("en-US", { timeZone: timezone }),
  {
    name: "get_time",
    description: "Get the current time in a timezone",
    parameters: [{ name: "timezone", kind: "string", required: true }],
  },
);

const agent = await load("weather-agent.prompty");
const tools = bindTools(agent, [getWeather, getTime]);

const result = await invokeAgent(agent, {
  question: "What's the weather in Seattle and the time in Tokyo?",
}, { tools });

console.log(result);
```

```csharp
using Prompty.Core;

public class MultiTools
{
    [Tool(Name = "get_weather", Description = "Get the current weather")]
    public string GetWeather(string city) => $"72°F and sunny in {city}";

    [Tool(Name = "get_time", Description = "Get the current time")]
    public string GetTime(string timezone) =>
        // Convert to the requested zone (IANA IDs are supported on .NET 6+)
        TimeZoneInfo.ConvertTimeBySystemTimeZoneId(DateTimeOffset.UtcNow, timezone).ToString("o");
}

var agent = PromptyLoader.Load("weather-agent.prompty");
var service = new MultiTools();
var tools = ToolAttribute.BindTools(agent, service);

var result = await Pipeline.InvokeAgentAsync(
    agent,
    new() { ["question"] = "What's the weather in Seattle and the time in Tokyo?" },
    tools: tools);

Console.WriteLine(result);
```

The model may call both tools in a single round or across multiple rounds — the agent loop handles either pattern automatically.
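To see why the single-round case needs no extra handling on your side, here is a hypothetical sketch (not Prompty's implementation) of how a runtime executes one model round that requests two tools at once:

```python
# Hypothetical illustration of "one round, two tool calls": the model may
# request several tools at once, and the runtime runs each before replying.
# The tool-call dict shape here is invented for the example.
def execute_round(tool_calls, tools):
    """Run every tool call from a single model round, preserving request order."""
    return [
        {"role": "tool", "name": c["name"], "content": tools[c["name"]](**c["args"])}
        for c in tool_calls
    ]

tools = {
    "get_weather": lambda city: f"72°F and sunny in {city}",
    "get_time": lambda timezone: f"09:00 in {timezone}",
}
round_calls = [
    {"name": "get_weather", "args": {"city": "Seattle"}},
    {"name": "get_time", "args": {"timezone": "Asia/Tokyo"}},
]
results = execute_round(round_calls, tools)
```

All results go back into the conversation together, so the model can compose one answer from both.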
## What you learned

✅ Declaring tool definitions in a `.prompty` file
✅ Writing tool functions with `@tool` / `tool()` / `[Tool]`
✅ Using `bind_tools()` to validate functions against declarations
✅ Running the agent loop with `invoke_agent()`
✅ Tracing the loop to see tool calls in real time
✅ Adding multiple tools to a single agent