
Tools

Tools extend what an LLM can do beyond generating text. Define them in the .prompty frontmatter under the tools: key. When the prompt executes, the runtime passes tool definitions to the LLM as part of the API call. If the model decides to use a tool, the agent loop handles calling the function and feeding the result back into the conversation.

```yaml
tools:
  - name: get_weather
    kind: function
    description: Get current weather for a city
    parameters:
      - name: city
        kind: string
        required: true
```

Every tool has a name, a kind that determines its type, and an optional description. Beyond that, each kind carries its own fields. Prompty supports four tool kinds backed by AgentSchema types: three concrete kinds (function, mcp, openapi) plus a wildcard catch-all for everything else.


```mermaid
flowchart TD
    Tool["Tool (base)\nname · kind · description · bindings"]
    Tool --> FunctionTool["FunctionTool\nkind: function\nparameters · strict\n→ local function call"]
    Tool --> McpTool["McpTool\nkind: mcp\nserverName · connection\napprovalMode · allowedTools"]
    Tool --> OpenApiTool["OpenApiTool\nkind: openapi\nconnection · specification\n→ REST API call"]
    Tool --> CustomTool["CustomTool\nkind: * (wildcard)\nconnection · options\n→ provider-specific"]

    Dispatch["Tool.load_kind(data) resolves kind → concrete class\nUnknown kinds automatically become CustomTool"]

    style Tool fill:#1d4ed8,stroke:#1e40af,color:#fff
    style FunctionTool fill:#eff6ff,stroke:#3b82f6,color:#1d4ed8
    style McpTool fill:#eff6ff,stroke:#3b82f6,color:#1d4ed8
    style OpenApiTool fill:#eff6ff,stroke:#3b82f6,color:#1d4ed8
    style CustomTool fill:#eff6ff,stroke:#3b82f6,color:#1d4ed8
    style Dispatch fill:#fefce8,stroke:#f59e0b,color:#92400e
```
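The dispatch step in the diagram can be sketched roughly as follows. This is a simplified illustration of kind-based resolution, not Prompty's actual `Tool.load_kind` implementation; the returned class names simply mirror the diagram.

```python
# Simplified sketch of kind-based tool dispatch; the real Prompty
# implementation returns concrete classes, not strings.
def load_kind(data: dict) -> str:
    """Resolve a tool definition's kind to a concrete class name."""
    kind = data.get("kind", "")
    if kind == "function":
        return "FunctionTool"
    if kind == "mcp":
        return "McpTool"
    if kind == "openapi":
        return "OpenApiTool"
    # Any unrecognized kind falls through to the wildcard.
    return "CustomTool"

print(load_kind({"kind": "function"}))            # FunctionTool
print(load_kind({"kind": "my_custom_provider"}))  # CustomTool
```

Because unknown kinds fall through rather than raising, new provider kinds can be introduced without changing the loader.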

Function tools define local functions that the runtime can call directly when the LLM requests them. This is the most common tool type — you provide a function name, description, and a parameter schema, and the executor maps tool calls to your Python functions at runtime.

```yaml
tools:
  - name: get_weather
    kind: function
    description: Get current weather for a city
    parameters:
      - name: city
        kind: string
        description: City name
        required: true
      - name: units
        kind: string
        description: Temperature units
        default: celsius
```

Set strict: true to constrain the LLM to output only arguments that match the exact parameter schema. This adds "strict": true to the function definition and "additionalProperties": false to the JSON Schema sent to the API — preventing the model from hallucinating extra parameters.

```yaml
tools:
  - name: get_weather
    kind: function
    description: Get current weather for a city
    strict: true
    parameters:
      - name: city
        kind: string
        required: true
```
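With strict mode enabled, the function definition sent to the API looks roughly like the payload below. This is an illustrative sketch following the OpenAI-style function-calling format described above; the exact envelope Prompty emits may differ.

```python
import json

# Illustrative strict-mode payload for the get_weather tool above:
# "strict": true on the function definition, and
# "additionalProperties": false on the JSON Schema.
tool_definition = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "strict": True,
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
            "additionalProperties": False,
        },
    },
}
print(json.dumps(tool_definition, indent=2))
```

Together, the two flags mean the model must supply exactly the declared parameters and nothing else.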

MCP (Model Context Protocol) tools connect to an external MCP server that exposes a set of capabilities. You reference the server by name and optionally restrict which tools the model can access.

```yaml
tools:
  - name: filesystem
    kind: mcp
    serverName: fs-server
    connection:
      kind: reference
      name: my-mcp-server
    approvalMode:
      kind: always
    allowedTools:
      - read_file
      - list_directory
```
| Field | Description |
| --- | --- |
| `serverName` | Identifier of the MCP server to connect to |
| `connection` | How to reach the server (any Connection type) |
| `approvalMode` | When tool calls need approval: `always`, `never`, or `custom` |
| `allowedTools` | Whitelist of tool names the model may invoke (optional) |

OpenAPI tools let the LLM call a REST API described by an OpenAPI specification. Prompty reads the spec to understand available operations and translates tool calls into HTTP requests.

```yaml
tools:
  - name: weather_api
    kind: openapi
    connection:
      kind: key
      endpoint: https://api.weather.com
      apiKey: ${env:WEATHER_API_KEY}
    specification: ./weather.openapi.json
```
| Field | Description |
| --- | --- |
| `connection` | Endpoint and auth for the API (any Connection type) |
| `specification` | Path to an OpenAPI JSON/YAML spec (relative to the .prompty file) |
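The translation from tool call to HTTP request can be pictured with a minimal sketch. Everything here is hypothetical: the `/v1/current` path and the `build_request` helper are illustrative stand-ins, not Prompty internals; in practice the operation path and method come from the OpenAPI spec.

```python
from urllib.parse import urlencode

# Hypothetical sketch: map a tool call's arguments onto a GET request
# for an operation read from the OpenAPI spec. Names are illustrative.
def build_request(endpoint: str, path: str, args: dict) -> str:
    """Turn a tool call's arguments into a request URL."""
    return f"{endpoint}{path}?{urlencode(args)}"

url = build_request("https://api.weather.com", "/v1/current", {"city": "Seattle"})
print(url)  # https://api.weather.com/v1/current?city=Seattle
```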

Any kind value that doesn’t match function, mcp, or openapi is caught by the CustomTool wildcard. This is the extensibility escape hatch — use it to integrate with tool providers that Prompty doesn’t have built-in support for.

```yaml
tools:
  - name: my_tool
    kind: my_custom_provider
    connection:
      kind: key
      endpoint: https://custom.example.com
    options:
      setting: value
```
| Field | Description |
| --- | --- |
| `connection` | Optional connection for the custom provider |
| `options` | Free-form dictionary passed through to the provider |

The runtime loads these as CustomTool instances. Your executor or a plugin is responsible for interpreting the kind and options at execution time.
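A plugin that interprets a custom kind might look roughly like this. The handler registry below is a hypothetical pattern for illustration, not a Prompty API; only the kind string and the free-form `options` dictionary come from the example above.

```python
# Hypothetical plugin registry keyed on a CustomTool's kind string.
# The registry itself is illustrative, not part of Prompty.
HANDLERS: dict = {}

def register(kind: str):
    """Decorator that registers a handler for a custom tool kind."""
    def wrap(fn):
        HANDLERS[kind] = fn
        return fn
    return wrap

@register("my_custom_provider")
def handle_my_tool(options: dict, arguments: dict) -> dict:
    # Interpret the tool's free-form options plus the LLM's arguments.
    return {"ok": True, "setting": options.get("setting"), **arguments}

result = HANDLERS["my_custom_provider"]({"setting": "value"}, {"q": "hi"})
print(result)  # {'ok': True, 'setting': 'value', 'q': 'hi'}
```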


All tool types support optional bindings that map between the tool’s parameters and the prompt’s input schema. Use bindings when the tool’s parameter names don’t match your prompt’s input variable names.

```yaml
tools:
  - name: search
    kind: function
    description: Search for documents
    bindings:
      - name: query
        input: userQuestion
    parameters:
      - name: query
        kind: string
        required: true
```

In this example, the tool parameter query is bound to the prompt input userQuestion — so the value of userQuestion is automatically passed as query when the tool is invoked.
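The binding step amounts to a simple rename from prompt inputs to tool arguments. A minimal sketch, assuming bindings arrive as the list of dicts shown in the YAML above (the `resolve_bindings` helper is hypothetical):

```python
# Minimal sketch of resolving bindings: each binding maps a tool
# parameter name to a prompt input name. Illustrative, not Prompty's
# actual implementation.
def resolve_bindings(bindings: list[dict], prompt_inputs: dict) -> dict:
    """Build the tool's argument dict from the prompt's inputs."""
    return {b["name"]: prompt_inputs[b["input"]] for b in bindings}

args = resolve_bindings(
    [{"name": "query", "input": "userQuestion"}],
    {"userQuestion": "What is Prompty?"},
)
print(args)  # {'query': 'What is Prompty?'}
```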


Tools defined in the frontmatter are sent to the LLM as part of the API request. To actually execute the tool calls the model returns, use execute_agent — which runs the agent loop: call the LLM, execute any requested tools, feed results back, and repeat until the model produces a final response.

```python
import prompty

def get_weather(city: str, units: str = "celsius") -> str:
    # Stub implementation; a real tool would call a weather service.
    temp = "22°C" if units == "celsius" else "72°F"
    return f"{temp} and sunny in {city}"

result = prompty.execute_agent(
    "agent.prompty",
    inputs={"question": "Weather in Seattle?"},
    tools={"get_weather": get_weather},
)
print(result)
```