
The .prompty File Format

A .prompty file is a plain-text asset that pairs configuration with prompt instructions in a single, portable file. The top half is YAML frontmatter; the bottom half is a markdown body that becomes the instructions property on the loaded PromptAgent.

Every .prompty file follows the same two-part layout:

--- ← frontmatter start
(YAML) ← configuration: model, inputs, tools, template …
--- ← frontmatter end
(Markdown) ← body: role markers + template syntax → instructions

The loader splits the file at the --- delimiters, parses the YAML into typed AgentSchema objects, and assigns the markdown body to agent.instructions.
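The split step can be sketched in a few lines. This is an illustrative reconstruction, not the actual loader code: the real loader goes on to parse the YAML into typed AgentSchema objects, which is omitted here.

```python
import re

def split_prompty(text: str) -> tuple[str, str]:
    """Split a .prompty file into (raw YAML frontmatter, markdown body)."""
    # Frontmatter is everything between the first pair of --- delimiters.
    match = re.match(r"\A---\s*\n(.*?)\n---\s*\n(.*)\Z", text, re.DOTALL)
    if not match:
        raise ValueError("missing --- frontmatter delimiters")
    return match.group(1), match.group(2).strip()

sample = """---
name: demo
model: gpt-4o
---
system:
You are a helpful assistant.
"""
# frontmatter holds the YAML text; the body becomes agent.instructions
frontmatter, instructions = split_prompty(sample)
```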

flowchart TD
    subgraph FM["FRONTMATTER (YAML between --- delimiters)"]
        direction LR
        Identity["name, description\nmetadata: authors, tags …"]
        ModelCfg["model:\nid: gpt-4o\nprovider: foundry\nconnection + options"]
        Inputs["inputSchema:\nproperties: [...]"]
        Tools["tools: [...]"]
        Template["template:\nformat + parser"]
    end

    subgraph Body["BODY → instructions (Markdown below closing ---)"]
        direction TB
        System["system:\nYou are a helpful assistant."]
        User["user:\n{{question}} — template variable"]
        Assistant["assistant:\nLet me help with that."]
        System --> User --> Assistant
    end

    FM -- "--- delimiter ---" --> Body

    Identity -- maps to --> PA1["PromptAgent.name\n.description\n.metadata"]
    ModelCfg -- maps to --> PA2["PromptAgent.model"]
    Inputs -- maps to --> PA3["PromptAgent.inputSchema"]
    Tools -- maps to --> PA4["PromptAgent.tools"]
    Template -- maps to --> PA5["PromptAgent.template"]
    Body -- maps to --> PA6["PromptAgent.instructions"]

    style FM fill:#eff6ff,stroke:#3b82f6,color:#1e293b
    style Body fill:#ecfdf5,stroke:#10b981,color:#1e293b
    style System fill:#d1fae5,stroke:#10b981,color:#1d4ed8
    style User fill:#d1fae5,stroke:#10b981,color:#1d4ed8
    style Assistant fill:#d1fae5,stroke:#10b981,color:#1d4ed8
    style PA1 fill:#fefce8,stroke:#f59e0b,color:#92400e
    style PA2 fill:#fefce8,stroke:#f59e0b,color:#92400e
    style PA3 fill:#fefce8,stroke:#f59e0b,color:#92400e
    style PA4 fill:#fefce8,stroke:#f59e0b,color:#92400e
    style PA5 fill:#fefce8,stroke:#f59e0b,color:#92400e
    style PA6 fill:#fefce8,stroke:#f59e0b,color:#92400e

The YAML frontmatter maps directly to fields on the AgentSchema PromptAgent type. Here is a summary — see the Schema Reference page for the full specification of every property.

| Property | Type | Description |
|---|---|---|
| name | string | Unique name for the prompt |
| displayName | string | Human-readable label |
| description | string | What this prompt does |

The metadata property holds arbitrary key-value pairs. Common conventions:

metadata:
  authors: [alice, bob]
  tags: [customer-support, v2]
  version: "1.0"

The model property configures the LLM to call. Full form:

model:
  id: gpt-4o
  provider: foundry   # or "openai"
  apiType: chat       # chat | responses | embedding | image
  connection:
    kind: key
    endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    apiKey: ${env:AZURE_OPENAI_API_KEY}
  options:
    temperature: 0.7
    maxOutputTokens: 1000

Or the shorthand — just a model name:

model: gpt-4o

This expands to { id: "gpt-4o" } with provider and connection inherited from defaults or environment.
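That expansion can be sketched as a small normalization step. This is an illustrative sketch, not the loader's real code; the defaults argument stands in for whatever default/environment resolution the loader actually performs.

```python
def normalize_model(value, defaults=None):
    """Expand the model: shorthand into its full dict form.

    A bare string like "gpt-4o" becomes {"id": "gpt-4o"}; provider and
    connection are inherited from `defaults` when the file omits them.
    """
    defaults = defaults or {}
    if isinstance(value, str):
        value = {"id": value}
    else:
        value = dict(value)  # don't mutate the caller's dict
    # Inherit only the keys the file did not set explicitly.
    for key in ("provider", "connection"):
        if key not in value and key in defaults:
            value[key] = defaults[key]
    return value

normalize_model("gpt-4o", {"provider": "foundry"})
# {'id': 'gpt-4o', 'provider': 'foundry'}
```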

The inputSchema and outputSchema properties define the inputs your template expects and the structure of outputs:

inputSchema:
  properties:
    - name: question
      kind: string
      description: The user's question
      required: true
    - name: language
      kind: string
      default: English
outputSchema:
  properties:
    - name: answer
      kind: string
    - name: confidence
      kind: float

Each property has a kind (string, integer, float, boolean, array, object, thread), optional description, default, required, and enumValues fields.
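How required and default interact at invocation time can be sketched as follows. This is an assumed binding step for illustration, not the runtime's actual validation code.

```python
def bind_inputs(schema_props: list[dict], inputs: dict) -> dict:
    """Apply defaults and enforce required fields for inputSchema properties."""
    bound = {}
    for prop in schema_props:
        name = prop["name"]
        if name in inputs:
            bound[name] = inputs[name]          # caller-supplied value wins
        elif "default" in prop:
            bound[name] = prop["default"]       # fall back to the default
        elif prop.get("required"):
            raise ValueError(f"missing required input: {name}")
    return bound

props = [
    {"name": "question", "kind": "string", "required": True},
    {"name": "language", "kind": "string", "default": "English"},
]
bind_inputs(props, {"question": "What is Prompty?"})
# {'question': 'What is Prompty?', 'language': 'English'}
```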

The tools property is a list of tool definitions the model can call:

tools:
  - name: get_weather
    kind: function
    description: Get the current weather
    parameters:
      properties:
        - name: city
          kind: string
          required: true

The template property configures the rendering engine and the message parser.

Shorthand (recommended) — string values work at every level. format: jinja2 expands to format: { kind: jinja2 }, and parser: prompty expands to parser: { kind: prompty }:

template:
  format: jinja2
  parser: prompty

Full form — use if you prefer explicit nesting:

template:
  format:
    kind: jinja2   # or "mustache"
  parser:
    kind: prompty  # role-marker parser

Everything below the closing --- is the body. The loader assigns it to agent.instructions. At runtime the body flows through two stages:

  1. Renderer — expands template variables ({{name}}) using the inputs you provide.
  2. Parser — splits the rendered text on role markers into a list[Message] ready for the LLM.

Role markers are keywords on their own line followed by a colon. The parser recognises three roles:

| Marker | Resulting role |
|---|---|
| system: | system |
| user: | user |
| assistant: | assistant |

Everything after a marker (until the next marker or end-of-file) becomes the content of that message.

system:
You are an AI assistant who helps people find information.
user:
{{question}}
assistant:
Let me help with that.
user:
{{followUp}}
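The parser stage can be sketched with a short line scanner. This is an illustrative sketch of a role-marker parser under the rules above (marker alone on a line, content runs until the next marker), not the actual prompty parser implementation.

```python
import re

ROLE_MARKER = re.compile(r"(system|user|assistant):\s*$")

def parse_messages(rendered: str) -> list[dict]:
    """Split rendered body text on role markers into chat messages."""
    messages: list[dict] = []
    role = None
    content: list[str] = []

    def flush():
        if role is not None:
            messages.append({"role": role, "content": "\n".join(content).strip()})

    for line in rendered.splitlines():
        m = ROLE_MARKER.match(line)
        if m:
            flush()                     # close the previous message
            role, content = m.group(1), []
        elif role is not None:
            content.append(line)        # accumulate until the next marker
    flush()
    return messages

rendered = """system:
You are an AI assistant who helps people find information.
user:
Where is order #12345?"""
messages = parse_messages(rendered)
# two messages: a system message and a user message
```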

The default renderer is Jinja2. You can also use Mustache by setting template.format.kind: mustache.

system:
You are helping {{firstName}} {{lastName}}.
{% if context %}
Here is some context:
{{ context }}
{% endif %}
{% for item in history %}
- {{ item }}
{% endfor %}
user:
{{question}}
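The renderer stage can be exercised directly with the jinja2 package (assumed installed here; Prompty's default renderer follows Jinja2 semantics, though this standalone sketch is not the runtime's own code):

```python
from jinja2 import Template  # requires the jinja2 package

body = """system:
You are helping {{firstName}} {{lastName}}.
{% if context %}Context: {{ context }}{% endif %}
user:
{{question}}"""

# Expand template variables using the inputs; the result is what
# the role-marker parser would then split into messages.
rendered = Template(body).render(
    firstName="Ada", lastName="Lovelace",
    context="premium account", question="Where is my order?",
)
```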

Frontmatter values can reference external data using ${protocol:value} syntax. The loader resolves these at load time before the YAML is parsed into typed objects.

# Required — errors if AZURE_OPENAI_ENDPOINT is not set
endpoint: ${env:AZURE_OPENAI_ENDPOINT}
# With a fallback default value
region: ${env:AZURE_REGION:eastus}
# Load a JSON file inline (path relative to the .prompty file)
connection: ${file:shared/azure-connection.json}
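The env protocol with an optional default can be sketched as a regex substitution. This is an illustrative resolver, not the loader's real one (the real resolver also handles ${file:...}); AZURE_REGION_DEMO and MISSING_VAR_DEMO are hypothetical variable names.

```python
import os
import re

SUB_PATTERN = re.compile(r"\$\{env:([A-Za-z_][A-Za-z0-9_]*)(?::([^}]*))?\}")

def resolve_env(value: str) -> str:
    """Resolve ${env:NAME} and ${env:NAME:default} references in a string."""
    def replace(m: re.Match) -> str:
        name, default = m.group(1), m.group(2)
        if name in os.environ:
            return os.environ[name]
        if default is not None:
            return default          # fall back to the inline default
        raise KeyError(f"environment variable not set: {name}")
    return SUB_PATTERN.sub(replace, value)

os.environ["AZURE_REGION_DEMO"] = "westus"
resolve_env("region: ${env:AZURE_REGION_DEMO}")         # "region: westus"
resolve_env("region: ${env:MISSING_VAR_DEMO:eastus}")   # "region: eastus"
```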

Prompty supports a compact shorthand for the model property:

# Shorthand — just the model name
model: gpt-4o

# Equivalent full form
model:
  id: gpt-4o

Here is a full .prompty file using all the features described above:

---
name: customer-support
displayName: Customer Support Agent
description: Answers customer questions using context from their account.
metadata:
  authors: [support-team]
  tags: [production, customer-facing]
  version: "2.1"
model:
  id: gpt-4o
  provider: foundry
  apiType: chat
  connection:
    kind: key
    endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    apiKey: ${env:AZURE_OPENAI_API_KEY}
  options:
    temperature: 0.3
    maxOutputTokens: 2000
inputSchema:
  properties:
    - name: customerName
      kind: string
      description: Full name of the customer
      required: true
    - name: question
      kind: string
      description: The customer's question
      required: true
    - name: orderHistory
      kind: array
      description: Recent orders for context
      default: []
outputSchema:
  properties:
    - name: answer
      kind: string
    - name: sentiment
      kind: string
      enumValues: [positive, neutral, negative]
tools:
  - name: lookup_order
    kind: function
    description: Look up an order by ID
    parameters:
      properties:
        - name: orderId
          kind: string
          required: true
template:
  format:
    kind: jinja2
  parser:
    kind: prompty
---
system:
You are a customer support agent for Contoso. Be helpful, concise,
and empathetic. Always greet the customer by name.
You have access to the following order history:
{% for order in orderHistory %}
- Order #{{ order.id }}: {{ order.status }} ({{ order.date }})
{% endfor %}
user:
Hi, my name is {{customerName}}. {{question}}

Run it with the Prompty runtime:

import prompty

# Load + render + parse + execute + process in one call
result = prompty.run(
    "customer-support.prompty",
    inputs={
        "customerName": "Jane Doe",
        "question": "Where is my order #12345?",
        "orderHistory": [
            {"id": "12345", "status": "shipped", "date": "2025-01-15"},
            {"id": "12300", "status": "delivered", "date": "2025-01-02"},
        ],
    },
)