# The .prompty File Format

A .prompty file is a plain-text asset that pairs configuration with
prompt instructions in a single, portable file. The top half is YAML
frontmatter; the bottom half is a markdown body that becomes the
`instructions` property on the loaded PromptAgent.
## File Structure Overview

Every .prompty file follows the same two-part layout:

```text
---          ← frontmatter start
(YAML)       ← configuration: model, inputs, tools, template …
---          ← frontmatter end
(Markdown)   ← body: role markers + template syntax → instructions
```

The loader splits the file at the `---` delimiters, parses the YAML into
typed AgentSchema objects, and assigns the markdown body to
`agent.instructions`.
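The two-part split described above can be sketched in a few lines of Python. This is a minimal illustration only; the real loader is more robust (it handles edge cases this sketch ignores, and the assumption here is that the file begins with `---`):

```python
def split_prompty(text):
    """Return (frontmatter_yaml, body_markdown) from a .prompty file."""
    # maxsplit=2: the leading "---", the YAML, and everything after
    _, frontmatter, body = text.split("---", 2)
    return frontmatter.strip(), body.strip()

sample = """---
name: hello
model: gpt-4o
---
system:
You are a helpful assistant."""

fm, body = split_prompty(sample)
print(fm)    # name: hello\nmodel: gpt-4o
print(body)  # system:\nYou are a helpful assistant.
```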
```mermaid
flowchart TD
subgraph FM["FRONTMATTER (YAML between --- delimiters)"]
direction LR
Identity["name, description\nmetadata: authors, tags …"]
ModelCfg["model:\nid: gpt-4o\nprovider: foundry\nconnection + options"]
Inputs["inputSchema:\nproperties: [...]"]
Tools["tools: [...]"]
Template["template:\nformat + parser"]
end
subgraph Body["BODY → instructions (Markdown below closing ---)"]
direction TB
System["system:\nYou are a helpful assistant."]
User["user:\n{{question}} — template variable"]
Assistant["assistant:\nLet me help with that."]
System --> User --> Assistant
end
FM -- "--- delimiter ---" --> Body
Identity -- maps to --> PA1["PromptAgent.name\n.description\n.metadata"]
ModelCfg -- maps to --> PA2["PromptAgent.model"]
Inputs -- maps to --> PA3["PromptAgent.inputSchema"]
Tools -- maps to --> PA4["PromptAgent.tools"]
Template -- maps to --> PA5["PromptAgent.template"]
Body -- maps to --> PA6["PromptAgent.instructions"]
style FM fill:#eff6ff,stroke:#3b82f6,color:#1e293b
style Body fill:#ecfdf5,stroke:#10b981,color:#1e293b
style System fill:#d1fae5,stroke:#10b981,color:#1d4ed8
style User fill:#d1fae5,stroke:#10b981,color:#1d4ed8
style Assistant fill:#d1fae5,stroke:#10b981,color:#1d4ed8
style PA1 fill:#fefce8,stroke:#f59e0b,color:#92400e
style PA2 fill:#fefce8,stroke:#f59e0b,color:#92400e
style PA3 fill:#fefce8,stroke:#f59e0b,color:#92400e
style PA4 fill:#fefce8,stroke:#f59e0b,color:#92400e
style PA5 fill:#fefce8,stroke:#f59e0b,color:#92400e
style PA6 fill:#fefce8,stroke:#f59e0b,color:#92400e
```
## Frontmatter Properties

The YAML frontmatter maps directly to AgentSchema and PromptAgent
fields. Here is a summary — see the Schema Reference page for the
full specification of every property.
### Identity

| Property | Type | Description |
|---|---|---|
| name | string | Unique name for the prompt |
| displayName | string | Human-readable label |
| description | string | What this prompt does |
### Metadata

Arbitrary key-value pairs. Common conventions:

```yaml
metadata:
  authors: [alice, bob]
  tags: [customer-support, v2]
  version: "1.0"
```

### Model

Configures the LLM to call. Full form:
```yaml
model:
  id: gpt-4o
  provider: foundry   # or "openai"
  apiType: chat       # chat | responses | embedding | image
  connection:
    kind: key
    endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    apiKey: ${env:AZURE_OPENAI_API_KEY}
  options:
    temperature: 0.7
    maxOutputTokens: 1000
```

Or the shorthand — just a model name:

```yaml
model: gpt-4o
```

This expands to `{ id: "gpt-4o" }` with provider and connection
inherited from defaults or environment.
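The shorthand expansion can be illustrated with a small sketch. This is hypothetical, not the real Prompty loader's internals:

```python
# Hypothetical sketch of how a loader could expand the model shorthand;
# the real Prompty loader's implementation may differ.
def normalize_model(value):
    """Expand `model: gpt-4o` into the full mapping form."""
    if isinstance(value, str):
        return {"id": value}  # provider/connection come from defaults
    return value  # already in full form

print(normalize_model("gpt-4o"))
# {'id': 'gpt-4o'}
```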
### Input & Output Schema

Define the inputs your template expects and the structure of outputs:

```yaml
inputSchema:
  properties:
    - name: question
      kind: string
      description: The user's question
      required: true
    - name: language
      kind: string
      default: English

outputSchema:
  properties:
    - name: answer
      kind: string
    - name: confidence
      kind: float
```

Each property has a `kind` (string, integer, float, boolean, array,
object, thread), plus optional `description`, `default`, `required`,
and `enumValues` fields.
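To make the `required` and `default` semantics concrete, here is a hedged sketch of how inputs might be resolved against such a schema (illustrative only; the real runtime's validation may behave differently):

```python
# Hypothetical input resolution against an inputSchema's properties.
def apply_schema(schema_props, inputs):
    resolved = {}
    for prop in schema_props:
        name = prop["name"]
        if name in inputs:
            resolved[name] = inputs[name]
        elif "default" in prop:
            resolved[name] = prop["default"]  # fall back to the default
        elif prop.get("required"):
            raise ValueError(f"missing required input: {name}")
    return resolved

props = [
    {"name": "question", "kind": "string", "required": True},
    {"name": "language", "kind": "string", "default": "English"},
]
print(apply_schema(props, {"question": "Hi?"}))
# {'question': 'Hi?', 'language': 'English'}
```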
### Tools

A list of tool definitions the model can call:

```yaml
# Function tool
tools:
  - name: get_weather
    kind: function
    description: Get the current weather
    parameters:
      properties:
        - name: city
          kind: string
          required: true
```

```yaml
# MCP tool
tools:
  - name: filesystem
    kind: mcp
    serverName: filesystem-server
    connection:
      kind: reference
```

```yaml
# OpenAPI tool
tools:
  - name: weather_api
    kind: openapi
    specification: ./weather.openapi.json
    connection:
      kind: key
      endpoint: https://api.weather.com
```

```yaml
# Custom provider tool
tools:
  - name: my_tool
    kind: my_provider
    connection:
      kind: key
      endpoint: https://custom.example.com
    options:
      setting: value
```
### Template

Configures the rendering engine and the message parser.

Shorthand (recommended) — string values work at every level.
`format: jinja2` expands to `format: { kind: jinja2 }`, and
`parser: prompty` expands to `parser: { kind: prompty }`:

```yaml
template:
  format: jinja2
  parser: prompty
```

Full form — use it if you prefer explicit nesting:

```yaml
template:
  format:
    kind: jinja2   # or "mustache"
  parser:
    kind: prompty  # role-marker parser
```
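The "string values expand to `{ kind: … }`" rule can be sketched as a small normalization step. This is a hypothetical illustration, not the loader's actual code:

```python
# Hypothetical sketch of the template shorthand expansion described above.
def expand_template(cfg):
    """Turn string values like "jinja2" into {"kind": "jinja2"}."""
    return {
        key: {"kind": value} if isinstance(value, str) else value
        for key, value in cfg.items()
    }

print(expand_template({"format": "jinja2", "parser": "prompty"}))
# {'format': {'kind': 'jinja2'}, 'parser': {'kind': 'prompty'}}
```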
## The Markdown Body

Everything below the closing `---` is the body. The loader assigns
it to `agent.instructions`. At runtime the body flows through two
stages:

- Renderer — expands template variables (`{{name}}`) using the
  inputs you provide.
- Parser — splits the rendered text on role markers into a
  `list[Message]` ready for the LLM.
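The two stages can be sketched end to end. The renderer below is a minimal `{{variable}}` substitution standing in for Jinja2/Mustache, and the parser is a simplified role-marker split; the real Prompty pipeline is more featureful:

```python
import re

def render(body, inputs):
    """Minimal {{variable}} substitution (stands in for Jinja2/Mustache)."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lambda m: str(inputs[m.group(1)]), body)

def parse(rendered):
    """Split rendered text on role markers into chat messages."""
    parts = re.split(r"^(system|user|assistant):\s*$", rendered, flags=re.MULTILINE)
    return [
        {"role": parts[i], "content": parts[i + 1].strip()}
        for i in range(1, len(parts) - 1, 2)
    ]

body = "system:\nYou are a helpful assistant.\n\nuser:\n{{question}}"
messages = parse(render(body, {"question": "What is a .prompty file?"}))
print(messages)
# [{'role': 'system', 'content': 'You are a helpful assistant.'},
#  {'role': 'user', 'content': 'What is a .prompty file?'}]
```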
### Role Markers

Role markers are keywords on their own line followed by a colon. The
parser recognises three roles:

| Marker | Resulting role |
|---|---|
| `system:` | system |
| `user:` | user |
| `assistant:` | assistant |

Everything after a marker (until the next marker or end of file)
becomes the content of that message.
```text
system:
You are an AI assistant who helps people find information.

user:
{{question}}

assistant:
Let me help with that.

user:
{{followUp}}
```
### Template Syntax

The default renderer is Jinja2. You can also use Mustache by
setting `template.format.kind: mustache`.

Jinja2:

```text
system:
You are helping {{firstName}} {{lastName}}.

{% if context %}Here is some context: {{ context }}{% endif %}

{% for item in history %}- {{ item }}{% endfor %}

user:
{{question}}
```

Mustache:

```text
system:
You are helping {{firstName}} {{lastName}}.

{{#context}}Here is some context: {{context}}{{/context}}

{{#history}}- {{.}}{{/history}}

user:
{{question}}
```
## Variable References in Frontmatter

Frontmatter values can reference external data using `${protocol:value}`
syntax. The loader resolves these at load time, before the YAML is
parsed into typed objects.
### Environment Variables

```yaml
# Required — errors if AZURE_OPENAI_ENDPOINT is not set
endpoint: ${env:AZURE_OPENAI_ENDPOINT}

# With a fallback default value
region: ${env:AZURE_REGION:eastus}
```
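Environment-variable resolution with an optional fallback can be sketched as a regex substitution. This is a hypothetical illustration; the real loader's resolution rules (escaping, error messages) may differ:

```python
import os
import re

# Hypothetical sketch of ${env:NAME} / ${env:NAME:default} resolution.
def resolve_env_refs(text):
    def replace(match):
        name, sep, default = match.group(1).partition(":")
        value = os.environ.get(name, default if sep else None)
        if value is None:
            raise ValueError(f"environment variable {name} is not set")
        return value
    return re.sub(r"\$\{env:([^}]+)\}", replace, text)

os.environ["AZURE_REGION"] = "westus"
print(resolve_env_refs("region: ${env:AZURE_REGION:eastus}"))     # region: westus
print(resolve_env_refs("region: ${env:PROMPTY_UNSET_X:eastus}"))  # region: eastus
```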
### File References

```yaml
# Load a JSON file inline (path relative to the .prompty file)
connection: ${file:shared/azure-connection.json}
```
## Shorthand Syntax

Prompty supports a compact shorthand for the model property:

```yaml
# Shorthand — just the model name
model: gpt-4o

# Equivalent full form
model:
  id: gpt-4o
```
## Complete Example

Here is a full .prompty file using all the features described above:

```
---
name: customer-support
displayName: Customer Support Agent
description: Answers customer questions using context from their account.
metadata:
  authors: [support-team]
  tags: [production, customer-facing]
  version: "2.1"

model:
  id: gpt-4o
  provider: foundry
  apiType: chat
  connection:
    kind: key
    endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    apiKey: ${env:AZURE_OPENAI_API_KEY}
  options:
    temperature: 0.3
    maxOutputTokens: 2000

inputSchema:
  properties:
    - name: customerName
      kind: string
      description: Full name of the customer
      required: true
    - name: question
      kind: string
      description: The customer's question
      required: true
    - name: orderHistory
      kind: array
      description: Recent orders for context
      default: []

outputSchema:
  properties:
    - name: answer
      kind: string
    - name: sentiment
      kind: string
      enumValues: [positive, neutral, negative]

tools:
  - name: lookup_order
    kind: function
    description: Look up an order by ID
    parameters:
      properties:
        - name: orderId
          kind: string
          required: true

template:
  format:
    kind: jinja2
  parser:
    kind: prompty
---
system:
You are a customer support agent for Contoso. Be helpful, concise,
and empathetic. Always greet the customer by name.

You have access to the following order history:
{% for order in orderHistory %}
- Order #{{ order.id }}: {{ order.status }} ({{ order.date }})
{% endfor %}

user:
Hi, my name is {{customerName}}. {{question}}
```

Run it with the Prompty runtime:
Python:

```python
import prompty

# Load + render + parse + execute + process in one call
result = prompty.run(
    "customer-support.prompty",
    inputs={
        "customerName": "Jane Doe",
        "question": "Where is my order #12345?",
        "orderHistory": [
            {"id": "12345", "status": "shipped", "date": "2025-01-15"},
            {"id": "12300", "status": "delivered", "date": "2025-01-02"},
        ],
    },
)
```

TypeScript:

```typescript
import { execute } from "@prompty/core";
import "@prompty/foundry"; // registers "azure" provider

// Load + render + parse + execute + process in one call
const result = await execute("customer-support.prompty", {
  inputs: {
    customerName: "Jane Doe",
    question: "Where is my order #12345?",
    orderHistory: [
      { id: "12345", status: "shipped", date: "2025-01-15" },
      { id: "12300", status: "delivered", date: "2025-01-02" },
    ],
  },
});
```