# Providers
Prompty uses a provider system to connect to different LLM backends. Each
provider has an executor, which sends requests to the API, and a processor,
which extracts results from responses. You select the provider in the `model`
section of your `.prompty` file.
```yaml
model:
  id: gpt-4o
  provider: openai  # ← provider key
  apiType: chat
  connection:
    kind: key
    endpoint: ${env:OPENAI_ENDPOINT}
    apiKey: ${env:OPENAI_API_KEY}
```

## OpenAI

Direct access to the OpenAI API.
Provider key: `openai`

Supported API types: `chat`, `responses`, `embedding`, `image`
```yaml
model:
  id: gpt-4o
  provider: openai
  apiType: chat
  connection:
    kind: key
    endpoint: https://api.openai.com/v1
    apiKey: ${env:OPENAI_API_KEY}
  options:
    temperature: 0.7
    maxOutputTokens: 1000
```

```yaml
model:
  id: text-embedding-3-small
  provider: openai
  apiType: embedding
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
```

```yaml
model:
  id: dall-e-3
  provider: openai
  apiType: image
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
  options:
    additionalProperties:
      size: "1024x1024"
      quality: standard
```

- Uses the official `openai` Python package
- Supports streaming via `PromptyStream`/`AsyncPromptyStream`
- Structured output is supported via `outputSchema` → `response_format`
- Agent mode is available via `execute_agent()`, which uses `apiType: chat` with an automatic tool-calling loop
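The automatic tool-calling loop behind agent mode can be illustrated with a minimal, dependency-free sketch. The names here (`call_model`, `TOOLS`, `agent_loop`) are hypothetical stand-ins, not Prompty APIs, and the model call is faked so the example runs offline:

```python
import json

# Hypothetical tool registry: tool name -> Python callable.
TOOLS = {"add": lambda a, b: a + b}


def call_model(messages):
    """Stand-in for a chat-completion call. A real executor would send
    `messages` to the API; here we fake one tool call, then a final answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "assistant", "content": None,
                "tool_calls": [{"id": "1", "name": "add",
                                "arguments": json.dumps({"a": 2, "b": 3})}]}
    return {"role": "assistant", "content": "The sum is 5.", "tool_calls": []}


def agent_loop(messages, max_turns=5):
    """Call the model repeatedly; run any requested tools and feed the
    results back until the model returns a plain answer."""
    for _ in range(max_turns):
        reply = call_model(messages)
        messages.append(reply)
        if not reply["tool_calls"]:
            return reply["content"]
        for call in reply["tool_calls"]:
            result = TOOLS[call["name"]](**json.loads(call["arguments"]))
            messages.append({"role": "tool", "tool_call_id": call["id"],
                             "content": str(result)})


answer = agent_loop([{"role": "user", "content": "What is 2 + 3?"}])
```

The real loop additionally handles streaming, errors, and turn limits, but the shape is the same: execute, check for tool calls, append results, repeat.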
## Microsoft Foundry

Connect to models deployed through Microsoft Foundry (Azure AI Services). This provider covers both Foundry project endpoints (the recommended approach) and classic Azure OpenAI endpoints (legacy).
Provider key: `foundry`

Supported API types: `chat`, `responses`, `embedding`, `image`
```yaml
model:
  id: gpt-4o
  provider: foundry
  apiType: chat
  connection:
    kind: key
    endpoint: ${env:AZURE_AI_PROJECT_ENDPOINT}
    apiKey: ${env:AZURE_AI_PROJECT_KEY}
  options:
    temperature: 0.7
```

```yaml
model:
  id: gpt-4o
  provider: foundry
  apiType: chat
  connection:
    kind: foundry
    endpoint: ${env:AZURE_AI_PROJECT_ENDPOINT}
```

```yaml
model:
  id: gpt-4o  # your deployment name
  provider: foundry
  apiType: chat
  connection:
    kind: key
    endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    apiKey: ${env:AZURE_OPENAI_API_KEY}
  options:
    temperature: 0.7
```

```yaml
model:
  id: text-embedding-3-small  # your deployment name
  provider: foundry
  apiType: embedding
  connection:
    kind: key
    endpoint: ${env:AZURE_AI_PROJECT_ENDPOINT}
    apiKey: ${env:AZURE_AI_PROJECT_KEY}
```

### Endpoint Patterns

| Endpoint Pattern | Example | Notes |
|---|---|---|
| Foundry project (recommended) | `https://<resource>.services.ai.azure.com/api/projects/<project>` | New-style Foundry project endpoint |
| Classic Azure OpenAI (legacy) | `https://<resource>.openai.azure.com/` | Still supported via `provider: foundry` |
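The two endpoint shapes can be told apart by URL structure alone. A hypothetical classifier (a sketch, not Prompty's actual detection logic) might look like:

```python
def endpoint_style(endpoint: str) -> str:
    """Classify a foundry-provider endpoint by its URL shape
    (hypothetical helper for illustration)."""
    if ".services.ai.azure.com/api/projects/" in endpoint:
        return "foundry-project"       # new-style Foundry project endpoint
    if ".openai.azure.com" in endpoint:
        return "azure-openai-legacy"   # classic Azure OpenAI endpoint
    return "unknown"


style = endpoint_style("https://myres.services.ai.azure.com/api/projects/myproj")
```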
- Uses the `openai` Python package with Azure-specific configuration
- Requires the `azure-identity` package for Azure AD authentication
- Supports both API key and Azure AD (managed identity / `DefaultAzureCredential`)
- Supports the same features as the OpenAI provider (streaming, structured output, agent mode)
- Model deployments are managed through the Azure AI Foundry portal
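The choice between API-key and Azure AD authentication follows from the `connection.kind` value. A minimal sketch of that dispatch, assuming a hypothetical `client_kwargs` helper (the azure-identity calls are shown in comments to keep the sketch dependency-free):

```python
def client_kwargs(connection: dict) -> dict:
    """Hypothetical helper: map a .prompty `connection` block to client
    constructor kwargs. `kind: key` uses the API key; `kind: foundry`
    falls back to Azure AD via DefaultAzureCredential (placeholder here)."""
    kind = connection.get("kind")
    if kind == "key":
        return {"azure_endpoint": connection["endpoint"],
                "api_key": connection["apiKey"]}
    if kind == "foundry":
        # Real code would use the azure-identity package:
        #   from azure.identity import DefaultAzureCredential, get_bearer_token_provider
        #   token_provider = get_bearer_token_provider(
        #       DefaultAzureCredential(),
        #       "https://cognitiveservices.azure.com/.default")
        return {"azure_endpoint": connection["endpoint"],
                "azure_ad_token_provider": "<DefaultAzureCredential>"}
    raise ValueError(f"unsupported connection kind: {kind!r}")


kw = client_kwargs({"kind": "key",
                    "endpoint": "https://myres.services.ai.azure.com/api/projects/myproj",
                    "apiKey": "secret"})
```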
## Anthropic

Access Anthropic Claude models directly.
Provider key: `anthropic`

Supported API types: `chat`
```yaml
model:
  id: claude-sonnet-4-6
  provider: anthropic
  apiType: chat
  connection:
    kind: key
    endpoint: https://api.anthropic.com
    apiKey: ${env:ANTHROPIC_API_KEY}
  options:
    temperature: 0.7
    maxOutputTokens: 1024
```

- Uses the `anthropic` Python package
- The `endpoint` defaults to `https://api.anthropic.com` and can typically be omitted
- Tool calling is supported through Anthropic’s native tool use API
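An Anthropic executor has some translation work to do that the OpenAI-style providers do not: Anthropic's Messages API takes the system prompt as a top-level `system` parameter rather than a message role, and `max_tokens` is required. A sketch of that mapping, assuming a hypothetical `to_anthropic_request` helper (not Prompty's actual code):

```python
def to_anthropic_request(model_id, messages, options):
    """Translate Prompty-style messages/options into an Anthropic
    Messages API request body (hypothetical sketch)."""
    system = "\n".join(m["content"] for m in messages if m["role"] == "system")
    req = {
        "model": model_id,
        # Anthropic requires max_tokens; map Prompty's maxOutputTokens.
        "max_tokens": options.get("maxOutputTokens", 1024),
        # Only user/assistant roles belong in the messages array.
        "messages": [m for m in messages if m["role"] != "system"],
    }
    if system:
        req["system"] = system
    if "temperature" in options:
        req["temperature"] = options["temperature"]
    return req


req = to_anthropic_request(
    "claude-sonnet-4-6",
    [{"role": "system", "content": "Be brief."},
     {"role": "user", "content": "Hi"}],
    {"temperature": 0.7, "maxOutputTokens": 1024},
)
```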
## Provider Comparison

| Feature | OpenAI | Microsoft Foundry | Anthropic |
|---|---|---|---|
| `chat` | ✅ | ✅ | ✅ |
| `responses` | ✅ | ✅ | ❌ |
| `embedding` | ✅ | ✅ | ❌ |
| `image` | ✅ | ✅ | ❌ |
| `agent` (tool loop) | ✅ | ✅ | ❌ |
| Streaming | ✅ | ✅ | ✅ |
| Structured output | ✅ | ✅ | ✅ |
| Azure AD auth | ❌ | ✅ | ❌ |
## Custom Providers

The provider system is extensible. You can create your own provider by implementing the executor and processor protocols, then registering them as Python entry points.
### 1. Implement the Protocols

```python
from prompty.core.protocols import ExecutorProtocol, ProcessorProtocol
from prompty.core.types import Message


class MyExecutor:
    def execute(self, agent, messages: list[Message], **kwargs):
        # Call your LLM API here
        ...

    async def execute_async(self, agent, messages: list[Message], **kwargs):
        # Async variant
        ...


class MyProcessor:
    def process(self, agent, response, **kwargs):
        # Extract content from the API response
        ...

    async def process_async(self, agent, response, **kwargs):
        ...
```

### 2. Register Entry Points

In your package’s `pyproject.toml`:
```toml
[project.entry-points."prompty.executors"]
myprovider = "my_package.executor:MyExecutor"

[project.entry-points."prompty.processors"]
myprovider = "my_package.processor:MyProcessor"
```

### 3. Use in .prompty Files

```yaml
model:
  id: my-model
  provider: myprovider
  connection:
    kind: key
    endpoint: ${env:MY_ENDPOINT}
    apiKey: ${env:MY_API_KEY}
```

After installing your package, the Prompty runtime discovers your provider automatically via the entry point system; no changes to the Prompty codebase are needed.