Custom Providers

The built-in providers cover OpenAI, Azure OpenAI, and Anthropic. Build your own when you need to target a different API: a self-hosted model, a custom gateway, or a provider with a proprietary SDK.


The pipeline looks up two components by the `model.provider` value in the `.prompty` file:

| Component | Responsibility | Keyed by |
| --- | --- | --- |
| Executor | Sends messages to the LLM and returns the raw response | `model.provider` |
| Processor | Extracts the final result from the raw response | `model.provider` |

```
prepare() → messages → Executor.execute() → raw → Processor.process() → result
```
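Conceptually, the dispatch works like a pair of registries keyed by the provider name. A minimal sketch of that lookup (names here are illustrative, not prompty's actual internals):

```python
from typing import Any

# Hypothetical registries, keyed by the model.provider string.
EXECUTORS: dict[str, Any] = {}
PROCESSORS: dict[str, Any] = {}

def register(provider: str, executor: Any, processor: Any) -> None:
    """Associate an executor/processor pair with a provider name."""
    EXECUTORS[provider] = executor
    PROCESSORS[provider] = processor

def run(provider: str, messages: list[dict]) -> Any:
    """prepare() output goes in; the processed result comes out."""
    raw = EXECUTORS[provider].execute(messages)   # Executor: call the LLM
    return PROCESSORS[provider].process(raw)      # Processor: extract result
```

Both components are resolved by the same key, which is why the two entry-point registrations below share the `my-custom` name.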

my_provider/executor.py

```python
from __future__ import annotations

from typing import Any

import httpx

from prompty.model import Prompty
from prompty.core.types import Message


class MyExecutor:
    """Executor for the 'my-custom' provider."""

    def execute(self, agent: Prompty, messages: list[Message]) -> Any:
        conn = agent.model.connection
        payload = {
            "model": agent.model.id,
            "messages": [{"role": m.role, "content": m.text} for m in messages],
        }
        resp = httpx.post(
            f"{conn.endpoint}/v1/completions",
            json=payload,
            headers={"Authorization": f"Bearer {conn.apiKey}"},
        )
        resp.raise_for_status()
        return resp.json()

    async def execute_async(self, agent: Prompty, messages: list[Message]) -> Any:
        conn = agent.model.connection
        payload = {
            "model": agent.model.id,
            "messages": [{"role": m.role, "content": m.text} for m in messages],
        }
        async with httpx.AsyncClient() as client:
            resp = await client.post(
                f"{conn.endpoint}/v1/completions",
                json=payload,
                headers={"Authorization": f"Bearer {conn.apiKey}"},
            )
        resp.raise_for_status()
        return resp.json()
```
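Real deployments usually wrap the HTTP call with timeouts and retries. A minimal, generic backoff helper that could wrap either `execute` body (illustrative; not part of prompty):

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def with_retries(fn: Callable[[], T], attempts: int = 3, base_delay: float = 0.5) -> T:
    """Call fn, retrying failed attempts with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * 2 ** attempt)
    raise AssertionError("unreachable")
```

Inside `execute`, the request could then become `return with_retries(lambda: httpx.post(...)).json()`.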

my_provider/processor.py

```python
from __future__ import annotations

from typing import Any

from prompty.model import Prompty


class MyProcessor:
    """Extracts text content from the custom provider response."""

    def process(self, agent: Prompty, response: Any) -> Any:
        return response["choices"][0]["message"]["content"]

    async def process_async(self, agent: Prompty, response: Any) -> Any:
        return self.process(agent, response)
```
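Because `process` is plain dictionary access, it is easy to sanity-check against a captured response offline. The sample payload below is illustrative of the response shape the processor expects:

```python
# Illustrative response shape for the custom provider.
sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "Hello there!"}},
    ],
}

def extract(response: dict) -> str:
    # Same extraction as MyProcessor.process.
    return response["choices"][0]["message"]["content"]

assert extract(sample) == "Hello there!"
```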

Register via entry points in `pyproject.toml`:

```toml
[project.entry-points."prompty.executors"]
my-custom = "my_provider.executor:MyExecutor"

[project.entry-points."prompty.processors"]
my-custom = "my_provider.processor:MyProcessor"
```

Reinstall after changing entry points:

```sh
uv pip install -e .
```

Set `provider: my-custom` in the model block:

custom-llm.prompty

```
---
name: custom-llm-chat
model:
  id: my-model-name
  provider: my-custom
  apiType: chat
  connection:
    kind: key
    endpoint: ${env:MY_LLM_ENDPOINT}
    apiKey: ${env:MY_LLM_API_KEY}
  options:
    temperature: 0.7
inputs:
  - name: question
    kind: string
---
system:
You are a helpful assistant.

user:
{{question}}
```
Then invoke it like any other agent:

```python
from prompty import invoke

result = invoke("custom-llm.prompty", inputs={"question": "Hello!"})
print(result)
```