
Python

Prompty v2 requires Python ≥ 3.11. We recommend uv for environment management.

```shell
# Create a virtual environment
uv venv
source .venv/bin/activate  # or .venv\Scripts\activate on Windows

# Install with the extras you need (quote the extras so your shell
# doesn't treat the brackets as a glob pattern)
uv pip install "prompty[jinja2,openai]"
```
| Extra | Packages Installed | What it enables |
|---|---|---|
| `jinja2` | `jinja2` | Jinja2 template rendering |
| `mustache` | `chevron` | Mustache template rendering |
| `openai` | `openai` | OpenAI provider |
| `azure` | `openai`, `azure-identity` | Azure OpenAI provider (deprecated alias for `foundry`) |
| `anthropic` | `anthropic` | Anthropic provider |
| `foundry` | `azure-ai-foundry`, `azure-identity` | Microsoft Foundry provider |
| `otel` | `opentelemetry-api` | OpenTelemetry tracing |
| `all` | All of the above | Everything |
```shell
# Install everything
uv pip install "prompty[all]"
```
```python
import prompty

# All-in-one execution
result = prompty.execute("greeting.prompty", inputs={"name": "Jane"})
print(result)

# Load a .prompty file into a typed Prompty object
agent = prompty.load("chat.prompty")
print(agent.name)          # "chat"
print(agent.model.id)      # "gpt-4o"
print(agent.instructions)  # the markdown body

# Render template with inputs → string
rendered = prompty.render(agent, inputs={"q": "Hi"})

# Parse rendered string → list[Message]
messages = prompty.parse(agent, rendered)

# Render + parse + thread expansion → list[Message]
messages = prompty.prepare(agent, inputs={"q": "Hi"})

# Execute LLM + process response → clean result
result = prompty.run(agent, messages)

# Full pipeline: load + prepare + run
result = prompty.execute("chat.prompty", inputs={"q": "Hi"})
```

Every function has an async counterpart:

```python
agent = await prompty.load_async("chat.prompty")
messages = await prompty.prepare_async(agent, inputs={"q": "Hi"})
result = await prompty.run_async(agent, messages)
result = await prompty.execute_async("chat.prompty", inputs={"q": "Hi"})
```
```python
def get_weather(city: str) -> str:
    return f"72°F and sunny in {city}"

result = prompty.execute_agent(
    "agent.prompty",
    inputs={"question": "Weather in Seattle?"},
    tools={"get_weather": get_weather},
    max_iterations=10,
)
```

Create a Prompty object programmatically without a .prompty file:

```python
import os

import prompty

agent = prompty.headless(
    api="chat",
    content="Translate the following to French: Hello world",
    model="gpt-4o-mini",
    provider="openai",
    connection={"kind": "key", "apiKey": os.environ["OPENAI_API_KEY"]},
)
result = prompty.run(agent, agent.metadata["content"])
```

Pre-configure SDK clients for production use:

```python
import os

from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

import prompty

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_ENDPOINT"],
    azure_ad_token_provider=get_bearer_token_provider(
        DefaultAzureCredential(),
        "https://cognitiveservices.azure.com/.default",
    ),
)
prompty.register_connection("azure-prod", client=client)
```
```python
from prompty import Tracer, PromptyTracer, trace

# Register a file-based tracer
Tracer.add("json", PromptyTracer("./traces").tracer)

# Trace custom functions
@trace
def my_pipeline(query: str) -> str:
    return prompty.execute("search.prompty", inputs={"q": query})
```
| Provider | Registration Key | SDK | Extras |
|---|---|---|---|
| OpenAI | `openai` | `openai` | `prompty[openai]` |
| Azure OpenAI (deprecated) | `azure` | `openai` + `azure-identity` | `prompty[azure]` |
| Anthropic | `anthropic` | `anthropic` | `prompty[anthropic]` |
| Microsoft Foundry | `foundry` | `azure-ai-foundry` + `azure-identity` | `prompty[foundry]` |

Providers are discovered via Python entry points, so the plugin architecture makes it easy to add new ones: third-party providers can register themselves in their own pyproject.toml.
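For illustration, a third-party package's registration might look like the fragment below. The entry-point group name and class path here are hypothetical, not Prompty's documented names — check the provider-authoring docs for the actual group:

```toml
[project.entry-points."prompty.providers"]
myprovider = "my_package.provider:MyProvider"
```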

Prompty automatically loads .env files via python-dotenv. Place a .env file in your project root:

```shell
OPENAI_API_KEY=sk-your-key-here
AZURE_OPENAI_ENDPOINT=https://myresource.openai.azure.com/
AZURE_OPENAI_API_KEY=abc123
ANTHROPIC_API_KEY=sk-ant-your-key-here
FOUNDRY_ENDPOINT=https://your-project.services.ai.azure.com
```
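As a rough sketch of what that loading step does (python-dotenv also handles quoting, interpolation, and export prefixes; this minimal stdlib version covers only plain `KEY=VALUE` lines):

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Read simple KEY=VALUE lines into os.environ (existing vars win)."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

Prompty performs this loading automatically via python-dotenv, so no explicit call is needed in your own code.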