
# Getting Started

Prompty is a markdown file format for LLM prompts. A .prompty file combines structured YAML frontmatter (model config, inputs, tools) with a markdown body that becomes the prompt instructions. The runtime loads, renders, parses, and executes these files.

```
┌─────────────────────────────┐
│ .prompty file               │
│ ┌─────────────────────────┐ │
│ │ --- (YAML frontmatter)  │ │ → model, inputs, tools, template config
│ │ ---                     │ │
│ │ Markdown body           │ │ → prompt instructions with template syntax
│ └─────────────────────────┘ │
└─────────────────────────────┘
```
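The split shown above — frontmatter between a pair of `---` delimiter lines, markdown body after the second delimiter — can be sketched with a stdlib-only parser (`split_prompty` is a hypothetical helper for illustration, not the Prompty API):

```python
import re


def split_prompty(text: str) -> tuple[str, str]:
    """Split a .prompty document into (frontmatter, body).

    The frontmatter sits between the first pair of '---' delimiter
    lines; everything after the second delimiter is the markdown body.
    """
    match = re.match(r"^---\s*\n(.*?)\n---\s*\n(.*)$", text, re.DOTALL)
    if not match:
        raise ValueError("missing frontmatter delimiters")
    return match.group(1), match.group(2)


doc = """---
name: greeting
---
system:
You are a friendly assistant.
"""
front, body = split_prompty(doc)
print(front)                  # name: greeting
print(body.splitlines()[0])   # system:
```

The frontmatter half would then go to a YAML parser and the body half to the template renderer.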
Install the core package plus the extras you need:

```bash
# Core + Jinja2 renderer + OpenAI provider
pip install "prompty[jinja2,openai]"

# With Microsoft Foundry support
pip install "prompty[jinja2,foundry]"

# Everything
pip install "prompty[all]"
```

Create a file called greeting.prompty:

```
---
name: greeting
description: A friendly greeting prompt
model:
  id: gpt-4o-mini
  provider: openai
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
inputs:
  - name: userName
    kind: string
    default: World
---
system:
You are a friendly assistant who greets people warmly.
user:
Say hello to {{userName}} and ask how their day is going.
```
Then run it from Python:

```python
import prompty

# All-in-one: load → render → parse → execute → process
result = prompty.invoke(
    "greeting.prompty",
    inputs={"userName": "Jane"},
)
print(result)
# "Hello Jane! 👋 How's your day going so far?"
```
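Under the hood, the renderer substitutes your inputs into the `{{...}}` placeholders before the messages are sent. Here is a minimal stdlib-only sketch of that substitution step (`render` is a hypothetical helper, not part of the Prompty API; the real runtime uses a full Jinja2 renderer):

```python
import re


def render(template: str, inputs: dict[str, str]) -> str:
    """Replace each {{ name }} placeholder with its input value.

    Unknown placeholders are left untouched rather than raising.
    """
    def sub(match: re.Match) -> str:
        key = match.group(1)
        return str(inputs.get(key, match.group(0)))

    return re.sub(r"\{\{\s*(\w+)\s*\}\}", sub, template)


print(render("Say hello to {{userName}}.", {"userName": "Jane"}))
# Say hello to Jane.
```

A real Jinja2 renderer adds filters, conditionals, and loops on top of this, which is why the renderer ships as a separate extra.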

For more control, use the pipeline stages individually:

```python
import prompty

# 1. Load the .prompty file → typed Prompty object
agent = prompty.load("greeting.prompty")

# 2. Render template + parse → list[Message]
messages = prompty.prepare(agent, inputs={"userName": "Jane"})

# 3. Call the LLM + process → clean result
result = prompty.run(agent, messages)
```

Use ${env:VAR_NAME} in frontmatter to reference environment variables. Create a .env file in your project root:

```
OPENAI_API_KEY=sk-your-key-here
```

Prompty automatically loads .env files.
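The `${env:VAR_NAME}` resolution can be sketched in a few lines of stdlib Python (`resolve_env` is a hypothetical helper for illustration; the actual runtime also handles `.env` loading and other reference kinds):

```python
import os
import re


def resolve_env(value: str) -> str:
    """Replace each ${env:NAME} reference with the variable's value."""
    def sub(match: re.Match) -> str:
        name = match.group(1)
        try:
            return os.environ[name]
        except KeyError:
            raise ValueError(f"environment variable {name!r} is not set")

    return re.sub(r"\$\{env:(\w+)\}", sub, value)


os.environ["OPENAI_API_KEY"] = "sk-demo"
print(resolve_env("${env:OPENAI_API_KEY}"))
# sk-demo
```

Failing loudly on a missing variable is preferable to sending an empty API key to the provider.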

Here’s a full, tested example you can copy and run:

chat_basic.py

```python
"""Basic chat completion with OpenAI.

This example loads a .prompty file and runs a simple chat completion.
Used in: how-to/openai.mdx, getting-started/index.mdx
"""

from __future__ import annotations

from prompty import invoke, load

agent = load("chat-basic.prompty")
result = invoke(agent, inputs={"question": "What is Prompty?"})
print(result)
```