# Python
## Installation

Prompty v2 requires Python ≥ 3.11. We recommend uv for environment management.
```sh
# Create a virtual environment
uv venv
source .venv/bin/activate  # or .venv\Scripts\activate on Windows

# Install with the extras you need (quote the extras so shells
# like zsh don't expand the brackets)
uv pip install "prompty[jinja2,openai]"
```
### Available Extras

| Extra | Packages Installed | What it enables |
|---|---|---|
| jinja2 | jinja2 | Jinja2 template rendering |
| mustache | chevron | Mustache template rendering |
| openai | openai | OpenAI provider |
| azure | openai, azure-identity | Azure OpenAI provider (deprecated alias for foundry) |
| anthropic | anthropic | Anthropic provider |
| foundry | azure-ai-foundry, azure-identity | Microsoft Foundry provider |
| otel | opentelemetry-api | OpenTelemetry tracing |
| all | All of the above | Everything |
```sh
# Install everything
uv pip install "prompty[all]"
```

## Quick Start

```python
import prompty

# All-in-one execution
result = prompty.execute("greeting.prompty", inputs={"name": "Jane"})
print(result)
```
## API Overview

### Loading

```python
# Load a .prompty file into a typed Prompty object
agent = prompty.load("chat.prompty")

print(agent.name)          # "chat"
print(agent.model.id)      # "gpt-4o"
print(agent.instructions)  # the markdown body
```
### Pipeline Functions

```python
# Render template with inputs → string
rendered = prompty.render(agent, inputs={"q": "Hi"})

# Parse rendered string → list[Message]
messages = prompty.parse(agent, rendered)

# Render + parse + thread expansion → list[Message]
messages = prompty.prepare(agent, inputs={"q": "Hi"})

# Execute LLM + process response → clean result
result = prompty.run(agent, messages)

# Full pipeline: load + prepare + run
result = prompty.execute("chat.prompty", inputs={"q": "Hi"})
```
### Async Variants

Every function has an async counterpart:

```python
agent = await prompty.load_async("chat.prompty")
messages = await prompty.prepare_async(agent, inputs={"q": "Hi"})
result = await prompty.run_async(agent, messages)
result = await prompty.execute_async("chat.prompty", inputs={"q": "Hi"})
```
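The main payoff of the async variants is concurrency: several prompts can be in flight at once. The sketch below illustrates the fan-out pattern with `asyncio.gather`; `fake_execute_async` is a local stand-in for `prompty.execute_async` so the example runs without a model:

```python
import asyncio

# Stand-in for prompty.execute_async, so this sketch is self-contained.
async def fake_execute_async(path: str, inputs: dict) -> str:
    await asyncio.sleep(0)  # placeholder for network I/O
    return f"reply to {inputs['q']}"

async def main() -> list[str]:
    # Fan out three executions concurrently; gather preserves order.
    return list(await asyncio.gather(
        *(fake_execute_async("chat.prompty", inputs={"q": q})
          for q in ("Hi", "Hello", "Hey"))
    ))

results = asyncio.run(main())
print(results)  # ['reply to Hi', 'reply to Hello', 'reply to Hey']
```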
### Agent Mode

```python
def get_weather(city: str) -> str:
    return f"72°F and sunny in {city}"

result = prompty.execute_agent(
    "agent.prompty",
    inputs={"question": "Weather in Seattle?"},
    tools={"get_weather": get_weather},
    max_iterations=10,
)
```
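For intuition, agent mode runs a tool-calling loop: call the model, execute any tool it requests, feed the result back, and stop at a final answer or when `max_iterations` is reached. The sketch below illustrates that general pattern only; it is not Prompty's actual implementation, and `stub_model` stands in for a real LLM call:

```python
def agent_loop(call_model, tools, question, max_iterations=10):
    """Minimal agent loop sketch (illustrative, not Prompty internals)."""
    messages = [{"role": "user", "content": question}]
    for _ in range(max_iterations):
        reply = call_model(messages)
        if reply.get("tool") is None:
            return reply["content"]  # model produced a final answer
        # Model requested a tool: run it and feed the result back.
        tool_result = tools[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": tool_result})
    raise RuntimeError("max_iterations exceeded")

def stub_model(messages):
    # Fake model: asks for the weather tool once, then echoes its result.
    if messages[-1]["role"] == "user":
        return {"tool": "get_weather", "args": {"city": "Seattle"}}
    return {"tool": None, "content": messages[-1]["content"]}

def get_weather(city: str) -> str:
    return f"72°F and sunny in {city}"

answer = agent_loop(stub_model, {"get_weather": get_weather}, "Weather in Seattle?")
print(answer)  # 72°F and sunny in Seattle
```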
### Headless Mode

Create a Prompty object programmatically without a .prompty file:

```python
import os

agent = prompty.headless(
    api="chat",
    content="Translate the following to French: Hello world",
    model="gpt-4o-mini",
    provider="openai",
    connection={"kind": "key", "apiKey": os.environ["OPENAI_API_KEY"]},
)
result = prompty.run(agent, agent.metadata["content"])
```
### Connection Registry

Pre-configure SDK clients for production use:

```python
import os

from openai import AzureOpenAI
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_ENDPOINT"],
    azure_ad_token_provider=get_bearer_token_provider(
        DefaultAzureCredential(),
        "https://cognitiveservices.azure.com/.default",
    ),
)
prompty.register_connection("azure-prod", client=client)
```
### Tracing

```python
from prompty import Tracer, PromptyTracer, trace

# Register a file-based tracer
Tracer.add("json", PromptyTracer("./traces").tracer)

# Trace custom functions
@trace
def my_pipeline(query: str) -> str:
    return prompty.execute("search.prompty", inputs={"q": query})
```
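For intuition, a tracing decorator of this kind typically wraps a function to record its name, inputs, result, and duration. The following is a minimal stand-alone sketch of that pattern, not Prompty's tracer:

```python
import functools
import time

def trace_sketch(fn):
    """Sketch of a tracing decorator (illustrative, not prompty.trace)."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        duration = time.perf_counter() - start
        # A real tracer would emit a structured span; we just print one field.
        print(f"{fn.__name__} took {duration:.6f}s")
        return result
    return wrapper

@trace_sketch
def slow_add(a: int, b: int) -> int:
    return a + b

value = slow_add(2, 3)
```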
## Providers

| Provider | Registration Key | SDK | Extras |
|---|---|---|---|
| OpenAI | openai | openai | prompty[openai] |
| Azure OpenAI (deprecated) | azure | openai + azure-identity | prompty[azure] |
| Anthropic | anthropic | anthropic | prompty[anthropic] |
| Microsoft Foundry | foundry | azure-ai-foundry + azure-identity | prompty[foundry] |
Providers are discovered via Python entry points, so the plugin architecture
makes it straightforward to add new providers: third-party packages can
register their providers in their own pyproject.toml.
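A third-party provider's registration might look roughly like the following pyproject.toml fragment. Note that the entry-point group name (`prompty.provider`) and the module/class path are illustrative assumptions, not confirmed names; check the provider-authoring reference for the actual group:

```toml
# Hypothetical entry-point registration for a custom provider.
[project.entry-points."prompty.provider"]
myprovider = "my_package.provider:MyProvider"
```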
## Environment Files

Prompty automatically loads .env files via python-dotenv. Place a .env
file in your project root:

```sh
OPENAI_API_KEY=sk-your-key-here
AZURE_OPENAI_ENDPOINT=https://myresource.openai.azure.com/
AZURE_OPENAI_API_KEY=abc123
ANTHROPIC_API_KEY=sk-ant-your-key-here
FOUNDRY_ENDPOINT=https://your-project.services.ai.azure.com
```
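For intuition, loading a .env file amounts to parsing KEY=VALUE lines into the process environment. The sketch below is a deliberately minimal stand-in (python-dotenv itself also handles quoting, interpolation, and more); like python-dotenv's default behavior, it does not override variables that are already set:

```python
import os

def load_env_text(text: str) -> None:
    """Parse KEY=VALUE lines; keep any values already in the environment."""
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())

load_env_text("EXAMPLE_API_KEY=sk-demo\n# a comment\n")
print(os.environ["EXAMPLE_API_KEY"])  # sk-demo
```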
## Further Reading

- API Reference — complete function signatures
- How-To Guides — practical recipes
- Core Concepts — architecture deep-dives