
# Connections

Connections define how Prompty authenticates with LLM providers. Every `.prompty` file can specify a connection inside its `model.connection` frontmatter block. The connection tells the runtime where to send requests and how to authenticate.

```yaml
model:
  id: gpt-4o
  provider: foundry
  connection:            # ← this section
    kind: key
    endpoint: https://my-resource.services.ai.azure.com/api/projects/my-project
    apiKey: ${env:AZURE_AI_PROJECT_KEY}
```

Prompty ships six connection types — each backed by a corresponding `AgentSchema` class — so you can match the auth strategy to your environment.


```mermaid
classDiagram
    class Connection {
        <<abstract>>
        kind
    }
    class ApiKeyConnection {
        kind: key
        endpoint
        apiKey
        apiVersion
    }
    class ReferenceConnection {
        kind: reference
        name
    }
    class RemoteConnection {
        kind: remote
        name
        endpoint
    }
    class AnonymousConnection {
        kind: anonymous
        endpoint
    }
    class FoundryConnection {
        kind: foundry
        endpoint
        name
        connectionType
        authenticationMode
    }
    class OAuthConnection {
        kind: oauth
        endpoint
        clientId
        clientSecret
        tokenUrl
        scopes
    }
    Connection <|-- ApiKeyConnection
    Connection <|-- ReferenceConnection
    Connection <|-- RemoteConnection
    Connection <|-- AnonymousConnection
    Connection <|-- FoundryConnection
    Connection <|-- OAuthConnection
```

## API key connection

The simplest connection: provide an endpoint and an API key. Ideal for local development and quick experiments.

```yaml
model:
  id: gpt-4o
  provider: openai
  connection:
    kind: key
    endpoint: https://api.openai.com/v1
    apiKey: ${env:OPENAI_API_KEY}
```

For Microsoft Foundry, set the endpoint to your Foundry project or classic Azure resource:

```yaml
model:
  id: gpt-4o
  provider: foundry
  connection:
    kind: key
    endpoint: ${env:AZURE_AI_PROJECT_ENDPOINT}
    apiKey: ${env:AZURE_AI_PROJECT_KEY}
```

You can also use a classic Azure OpenAI endpoint with the Foundry provider:

```yaml
model:
  id: gpt-4o
  provider: foundry
  connection:
    kind: key
    endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    apiKey: ${env:AZURE_OPENAI_API_KEY}
```

For Anthropic, use `provider: anthropic` with your Anthropic API key:

```yaml
model:
  id: claude-sonnet-4-20250514
  provider: anthropic
  connection:
    kind: key
    endpoint: https://api.anthropic.com
    apiKey: ${env:ANTHROPIC_API_KEY}
```

## Reference connection

References a pre-registered SDK client by name. This is the recommended pattern for production because it supports Azure AD, managed identity, custom retry policies, and any other authentication method the SDK supports.

```yaml
model:
  id: gpt-4o
  provider: foundry
  connection:
    kind: reference
    name: my-foundry-client
```

Register the client at application startup:

```python
import prompty
from openai import AzureOpenAI

prompty.register_connection("my-foundry-client", client=AzureOpenAI(
    azure_endpoint="https://my-resource.services.ai.azure.com/api/projects/my-project",
    api_key="...",
))
```

The executor looks up "my-foundry-client" in the connection registry and uses the pre-configured SDK client directly; no additional auth logic is needed in the `.prompty` file.
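Conceptually, the registry is a name-to-client map populated at startup. The following is an illustrative sketch of what registration and lookup amount to, not Prompty's actual internals:

```python
# Minimal sketch of a name → client registry (illustrative only).
_registry: dict[str, object] = {}

def register_connection(name: str, client: object) -> None:
    """Store a pre-configured SDK client under a name."""
    _registry[name] = client

def resolve_connection(name: str) -> object:
    """Return the client registered under `name`, failing loudly if absent."""
    if name not in _registry:
        raise KeyError(f"No connection registered under {name!r}")
    return _registry[name]

# Usage: register once at startup, then .prompty files refer to the name.
client = object()  # stands in for a real SDK client
register_connection("my-foundry-client", client)
```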


## Remote connection

A named endpoint reference for remote model-serving services. Useful when you have a separate inference service behind a gateway.

```yaml
model:
  id: my-fine-tuned-model
  provider: openai
  connection:
    kind: remote
    name: my-service
    endpoint: https://my-model.azurewebsites.net
```

## Anonymous connection

Endpoint only, no authentication. Perfect for self-hosted models like Ollama or vLLM running locally.

```yaml
model:
  id: llama3
  provider: openai
  connection:
    kind: anonymous
    endpoint: http://localhost:11434/v1
```

## Foundry connection

Integrates with Microsoft Foundry for managed model deployments, including serverless and managed-compute endpoints.

```yaml
model:
  id: gpt-4o
  provider: foundry
  connection:
    kind: foundry
    endpoint: ${env:FOUNDRY_ENDPOINT}
    name: my-deployment
    connectionType: serverless
```

| Field | Description |
| --- | --- |
| `endpoint` | Foundry project endpoint |
| `name` | Deployment name within the Foundry project |
| `connectionType` | `serverless` or `managedCompute` |
| `authenticationMode` | Optional; overrides the default Foundry authentication |

## OAuth connection

OAuth 2.0 client credentials flow for services that require token-based authentication.

```yaml
model:
  id: my-model
  provider: openai
  connection:
    kind: oauth
    endpoint: https://api.example.com
    clientId: ${env:CLIENT_ID}
    clientSecret: ${env:CLIENT_SECRET}
    tokenUrl: https://auth.example.com/token
    scopes:
      - api.read
```

| Field | Description |
| --- | --- |
| `clientId` | OAuth 2.0 client identifier |
| `clientSecret` | OAuth 2.0 client secret |
| `tokenUrl` | Token endpoint URL |
| `scopes` | List of scopes to request |
| `endpoint` | The model-serving endpoint to call |
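Under the hood, the client credentials grant is a single POST to `tokenUrl` whose JSON response carries an access token. A hedged sketch of building that request body per RFC 6749 §4.4 (`build_token_request` is a hypothetical helper; Prompty's resolver may differ):

```python
from urllib.parse import urlencode

def build_token_request(client_id: str, client_secret: str, scopes: list[str]) -> str:
    # Form-encoded body for the OAuth 2.0 client credentials grant (RFC 6749 §4.4).
    return urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": " ".join(scopes),
    })

body = build_token_request("my-id", "my-secret", ["api.read"])
# POST this body to tokenUrl with Content-Type: application/x-www-form-urlencoded;
# the returned access_token is then sent as a Bearer token to `endpoint`.
```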

## Connection resolution

The following diagram shows how the runtime resolves a connection when a `.prompty` file is loaded and executed.

```mermaid
flowchart TD
    A["Read model.connection"] --> B{"kind = ?"}
    B -->|key| C["Use directly\nendpoint + apiKey → SDK client"]
    B -->|reference| D["Registry lookup\nname → pre-configured client"]
    B -->|foundry| E["Foundry resolve\nendpoint + name → Foundry SDK"]
    B -->|"anonymous · remote · oauth"| F["Kind-specific resolution"]
    C -.-> G["Expand ${env:VAR} references\nLoadContext.pre_process resolves all string values"]
    D -.-> G
    E -.-> G
    F -.-> G
    G --> H["Configured SDK Client\nReady for executor to call LLM"]
```
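In pseudocode terms, the branching step is a switch on `kind`. The sketch below is illustrative only; the runtime's actual resolver lives in application code:

```python
def resolve(conn: dict) -> tuple:
    # Illustrative dispatch on connection kind (not the runtime's real resolver).
    kind = conn.get("kind", "key")
    if kind == "key":
        return ("direct", conn["endpoint"], conn["apiKey"])
    if kind == "reference":
        return ("registry", conn["name"])
    if kind == "foundry":
        return ("foundry", conn["endpoint"], conn["name"])
    if kind in ("anonymous", "remote", "oauth"):
        return ("kind-specific", kind)
    raise ValueError(f"Unknown connection kind: {kind!r}")

route = resolve({"kind": "reference", "name": "my-foundry-client"})
```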

## Production pattern

For production workloads you'll typically use `kind: reference` combined with a connection registry configured at application startup. This keeps secrets and complex auth logic in application code rather than in YAML.

```python
import os

import prompty
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

# Build a client with Azure AD token-based auth
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_ENDPOINT"],
    azure_ad_token_provider=get_bearer_token_provider(
        DefaultAzureCredential(),
        "https://cognitiveservices.azure.com/.default",
    ),
)

# Register it by name; .prompty files reference this name
prompty.register_connection("foundry-prod", client=client)
```

Then in your .prompty file:

```yaml
model:
  id: gpt-4o
  provider: foundry
  connection:
    kind: reference
    name: foundry-prod
```

## Environment variables

All string values in the frontmatter support `${env:VAR}` references. The runtime resolves them at load time via `LoadContext.pre_process`.

| Syntax | Behavior |
| --- | --- |
| `${env:VAR}` | Reads `VAR` from the environment. Raises `ValueError` if unset. |
| `${env:VAR:default}` | Falls back to `default` when `VAR` is unset. |

```yaml
model:
  connection:
    kind: key
    endpoint: ${env:OPENAI_ENDPOINT:https://api.openai.com/v1}
    apiKey: ${env:OPENAI_API_KEY}
```
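The substitution can be approximated as a single regex pass over each string value. This is an illustrative sketch (`expand_env` and the `DEMO_*` variables are hypothetical; the real work happens in `LoadContext.pre_process`):

```python
import os
import re

# Matches ${env:VAR} and ${env:VAR:default}; the default may itself contain colons.
_ENV_REF = re.compile(r"\$\{env:([A-Za-z_][A-Za-z0-9_]*)(?::([^}]*))?\}")

def expand_env(value: str) -> str:
    def repl(m: re.Match) -> str:
        var, default = m.group(1), m.group(2)
        if var in os.environ:
            return os.environ[var]
        if default is not None:
            return default
        raise ValueError(f"Environment variable {var!r} is not set")
    return _ENV_REF.sub(repl, value)

os.environ["DEMO_API_KEY"] = "sk-demo"  # hypothetical variable for the example
endpoint = expand_env("${env:DEMO_ENDPOINT:https://api.openai.com/v1}")  # unset → default
api_key = expand_env("${env:DEMO_API_KEY}")  # set → value from the environment
```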

Environment variables are loaded from a .env file (via python-dotenv) if one exists alongside the .prompty file or in a parent directory.


## Choosing a connection kind

Use `kind: key` with environment variables. It's the fastest way to get started and keeps secrets out of source control.

```yaml
model:
  connection:
    kind: key
    endpoint: ${env:OPENAI_ENDPOINT:https://api.openai.com/v1}
    apiKey: ${env:OPENAI_API_KEY}
```

| Scenario | Connection kind | Why |
| --- | --- | --- |
| Local dev with API key | `key` | Simplest setup, env-var-based secrets |
| Anthropic (Claude) | `key` | API key auth with `provider: anthropic` |
| Production with Azure AD | `reference` | Full SDK control, managed identity |
| Microsoft Foundry deployments | `foundry` | Native Foundry service integration |
| OAuth 2.0 services | `oauth` | Client credentials token flow |
| Remote gateway / proxy | `remote` | Named endpoint reference |