
# Providers

Prompty uses a provider system to connect to different LLM backends. Each provider has an executor (sends requests to the API) and a processor (extracts results from responses). You set the provider in your .prompty file’s model section.

```yaml
model:
  id: gpt-4o
  provider: openai   # ← provider key
  apiType: chat
  connection:
    kind: key
    endpoint: ${env:OPENAI_ENDPOINT}
    apiKey: ${env:OPENAI_API_KEY}
```
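The `${env:...}` placeholders are resolved from environment variables at run time, so the referenced variables must be set before execution. For example (illustrative values — substitute your own key):

```shell
# Export the variables referenced by the connection block above.
export OPENAI_ENDPOINT="https://api.openai.com/v1"
export OPENAI_API_KEY="sk-..."   # placeholder; use your real key
```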

## OpenAI

Direct access to the OpenAI API.

Provider key: openai

Supported API types: chat, responses, embedding, image

```yaml
model:
  id: gpt-4o
  provider: openai
  apiType: chat
  connection:
    kind: key
    endpoint: https://api.openai.com/v1
    apiKey: ${env:OPENAI_API_KEY}
  options:
    temperature: 0.7
    maxOutputTokens: 1000
```
  • Uses the official openai Python package
  • Supports streaming via PromptyStream / AsyncPromptyStream
  • Structured output is supported via outputSchema, which is mapped to the API's response_format parameter
  • Agent mode is available via execute_agent() — it uses apiType: chat with an automatic tool-calling loop
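As a rough sketch of how structured output might be declared in a `.prompty` file — note that the top-level placement of `outputSchema` here is an assumption based on the field name mentioned above, not a verified schema:

```yaml
model:
  id: gpt-4o
  provider: openai
  apiType: chat
  connection:
    kind: key
    endpoint: https://api.openai.com/v1
    apiKey: ${env:OPENAI_API_KEY}
outputSchema:        # assumed placement; mapped to the API's response_format
  type: object
  properties:
    answer:
      type: string
  required: [answer]
```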

## Microsoft Foundry

Connect to models deployed through Microsoft Foundry (Azure AI Services). This provider covers both Foundry project endpoints (the recommended approach) and classic Azure OpenAI endpoints (legacy).

Provider key: foundry

Supported API types: chat, responses, embedding, image

```yaml
model:
  id: gpt-4o
  provider: foundry
  apiType: chat
  connection:
    kind: key
    endpoint: ${env:AZURE_AI_PROJECT_ENDPOINT}
    apiKey: ${env:AZURE_AI_PROJECT_KEY}
  options:
    temperature: 0.7
```
| Endpoint pattern | Example | Notes |
| --- | --- | --- |
| Foundry project (recommended) | `https://<resource>.services.ai.azure.com/api/projects/<project>` | New-style Foundry project endpoint |
| Classic Azure OpenAI (legacy) | `https://<resource>.openai.azure.com/` | Still supported via `provider: foundry` |
  • Uses the openai Python package with Azure-specific configuration
  • Requires the azure-identity package for Azure AD authentication
  • Supports both API key and Azure AD (managed identity / DefaultAzureCredential)
  • Supports the same features as the OpenAI provider (streaming, structured output, agent mode)
  • Model deployments are managed through the Azure AI Foundry portal
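The Azure AD path mentioned above could look roughly like the following. The `kind: entraId` value and the omission of `apiKey` are assumptions for illustration — this page only documents `kind: key`:

```yaml
model:
  id: gpt-4o
  provider: foundry
  apiType: chat
  connection:
    kind: entraId   # hypothetical kind; auth via DefaultAzureCredential instead of a key
    endpoint: ${env:AZURE_AI_PROJECT_ENDPOINT}
```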

## Anthropic

Access Anthropic Claude models directly.

Provider key: anthropic

Supported API types: chat

```yaml
model:
  id: claude-sonnet-4-6
  provider: anthropic
  apiType: chat
  connection:
    kind: key
    endpoint: https://api.anthropic.com
    apiKey: ${env:ANTHROPIC_API_KEY}
  options:
    temperature: 0.7
    maxOutputTokens: 1024
```
  • Uses the anthropic Python package
  • The endpoint defaults to https://api.anthropic.com and can typically be omitted
  • Tool calling is supported through Anthropic’s native tool use API

| Feature | OpenAI | Microsoft Foundry | Anthropic |
| --- | --- | --- | --- |
| chat | ✓ | ✓ | ✓ |
| responses | ✓ | ✓ | — |
| embedding | ✓ | ✓ | — |
| image | ✓ | ✓ | — |
| agent (tool loop) | ✓ | ✓ | ✓ |
| Streaming | ✓ | ✓ | — |
| Structured output | ✓ | ✓ | — |
| Azure AD auth | — | ✓ | — |

## Custom providers

The provider system is extensible. You can create your own provider by implementing the executor and processor protocols, then registering them as Python entry points.

```python
from prompty.core.protocols import ExecutorProtocol, ProcessorProtocol
from prompty.core.types import Message


class MyExecutor:
    def execute(self, agent, messages: list[Message], **kwargs):
        # Call your LLM API here
        ...

    async def execute_async(self, agent, messages: list[Message], **kwargs):
        # Async variant
        ...


class MyProcessor:
    def process(self, agent, response, **kwargs):
        # Extract content from the API response
        ...

    async def process_async(self, agent, response, **kwargs):
        ...
```
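To make the protocol shape concrete, here is a self-contained toy pair. The `Message` dataclass and the OpenAI-style response dict are stand-ins for illustration, not the real prompty types:

```python
from dataclasses import dataclass


@dataclass
class Message:
    """Stand-in for prompty's Message type."""
    role: str
    content: str


class EchoExecutor:
    """Toy executor: instead of calling an LLM API, echo the last message
    back wrapped in an OpenAI-style response dict."""

    def execute(self, agent, messages, **kwargs):
        return {"choices": [{"message": {"content": f"echo: {messages[-1].content}"}}]}


class EchoProcessor:
    """Toy processor: pull the text content out of the raw response dict."""

    def process(self, agent, response, **kwargs):
        return response["choices"][0]["message"]["content"]


raw = EchoExecutor().execute(None, [Message("user", "hi")])
result = EchoProcessor().process(None, raw)
print(result)  # echo: hi
```

The split mirrors the design described above: the executor owns transport (how a request reaches the backend), while the processor owns extraction (how text comes back out), so either half can be swapped independently.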

In your package’s pyproject.toml:

```toml
[project.entry-points."prompty.executors"]
myprovider = "my_package.executor:MyExecutor"

[project.entry-points."prompty.processors"]
myprovider = "my_package.processor:MyProcessor"
```
Then reference the provider key in your `.prompty` file:

```yaml
model:
  id: my-model
  provider: myprovider
  connection:
    kind: key
    endpoint: ${env:MY_ENDPOINT}
    apiKey: ${env:MY_API_KEY}
```

After installing your package, the Prompty runtime discovers your provider automatically via the entry point system — no changes to the Prompty codebase are needed.