Migration Guide (v1 → v2)

Prompty v2 is a significant rewrite. Your .prompty files will mostly work as-is (v2 auto-migrates v1 syntax), but your Python code will need updates.

v1 supported Python 3.9+. v2 requires 3.11+ for modern syntax features (X | Y unions, match/case, etc.).

v2 recommends uv for Python environment management:

# v1
pip install prompty
# v2
uv pip install prompty[jinja2,openai]

In v1, all providers were bundled. In v2, install the extras you need:

Extra     What it includes
--------  ------------------------------------------------------
jinja2    Jinja2 renderer
mustache  Mustache renderer
openai    OpenAI provider
azure     Azure OpenAI + identity (deprecated alias for foundry)
otel      OpenTelemetry tracing
all       Everything above

v1 used a special apiType in the .prompty file and smuggled tool functions via metadata["tool_functions"]:

# v1 — tool functions smuggled via metadata
agent.metadata["tool_functions"] = {
    "get_weather": get_weather,
}
result = prompty.execute(agent, messages)

v2 uses explicit execute_agent():

# v2 — tools passed explicitly
result = prompty.execute_agent(
    "agent.prompty",
    inputs={...},
    tools={"get_weather": get_weather},
)
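The tools mapping keys the tool name the model calls to the Python callable that handles it. A minimal sketch of what such a callable might look like (the name, signature, and return shape here are illustrative, not part of the Prompty API):

```python
# Hypothetical tool function for the execute_agent example above.
# A real implementation would call a weather service here.
def get_weather(city: str) -> dict:
    """Return a (stubbed) weather report for the given city."""
    return {"city": city, "conditions": "sunny", "temp_c": 21}

# Keys are the tool names the model emits; values are the handlers.
tools = {"get_weather": get_weather}

print(tools["get_weather"]("Seattle"))
```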

v1 silently fell back to DefaultAzureCredential when no API key was provided. v2 requires explicit configuration via the connection registry:

# v2 — explicit credential setup
import os

import prompty
from azure.identity import (
    DefaultAzureCredential,
    get_bearer_token_provider,
)
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_ENDPOINT"],
    azure_ad_token_provider=get_bearer_token_provider(
        DefaultAzureCredential(),
        "https://cognitiveservices.azure.com/.default",
    ),
)
prompty.register_connection("azure", client=client)

Then reference it in the .prompty file:

model:
  connection:
    kind: reference
    name: azure

v2 uses updated property names. v1 names are auto-migrated with deprecation warnings.

# v1
model:
  api: chat
  configuration:
    type: azure_openai
    azure_endpoint: ${env:ENDPOINT}
    api_key: ${env:KEY}
  parameters:
    max_tokens: 500
    temperature: 0.7

# v2
model:
  apiType: chat
  provider: foundry
  connection:
    kind: key
    endpoint: ${env:ENDPOINT}
    apiKey: ${env:KEY}
  options:
    maxOutputTokens: 500
    temperature: 0.7

# v1
inputs:
  name:
    type: string
    sample: World
  question:
    type: string

# v2
inputs:
  - name: name
    kind: string
    default: World
  - name: question
    kind: string

# v1
template: jinja2

# v2 (shorthand)
template:
  format: jinja2
  parser: prompty

# v2 (full form)
template:
  format:
    kind: jinja2
  parser:
    kind: prompty
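The shorthand and full forms are equivalent: string values expand to a `kind` object. The sketch below illustrates that expansion; it is not Prompty library code, just a model of the normalization shown above.

```python
# Illustrative sketch (not the Prompty implementation): normalizing the
# v2 template shorthand into the full form.
def normalize_template(template) -> dict:
    # v1 bare string form: `template: jinja2`
    if isinstance(template, str):
        template = {"format": template, "parser": "prompty"}
    # v2 shorthand: bare strings expand to {"kind": ...}
    return {
        key: value if isinstance(value, dict) else {"kind": value}
        for key, value in template.items()
    }

print(normalize_template("jinja2"))
# {'format': {'kind': 'jinja2'}, 'parser': {'kind': 'prompty'}}
```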
v1 Property                         v2 Property
----------------------------------  --------------------------------------------------------
model.api                           model.apiType
model.configuration                 model.connection
model.configuration.type            model.provider (azure_openai → foundry, openai → openai)
model.parameters                    model.options
model.parameters.max_tokens         model.options.maxOutputTokens
model.parameters.top_p              model.options.topP
model.parameters.frequency_penalty  model.options.frequencyPenalty
model.parameters.presence_penalty   model.options.presencePenalty
model.parameters.stop               model.options.stopSequences
inputs (dict)                       inputs (list of Property)
outputs                             outputs
inputs.X.type                       kind
inputs.X.sample                     default
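The parameter renames in the table above amount to a simple key map. The helper below is an illustrative sketch of that mapping, not a Prompty API — v2 performs this migration internally when it loads a v1 file.

```python
# Illustrative sketch: renaming v1 model.parameters keys to their
# v2 model.options names, per the mapping table above.
V1_TO_V2_OPTIONS = {
    "max_tokens": "maxOutputTokens",
    "top_p": "topP",
    "frequency_penalty": "frequencyPenalty",
    "presence_penalty": "presencePenalty",
    "stop": "stopSequences",
}

def migrate_parameters(parameters: dict) -> dict:
    """Rename v1 parameter keys; keys without a new name pass through."""
    return {V1_TO_V2_OPTIONS.get(k, k): v for k, v in parameters.items()}

print(migrate_parameters({"max_tokens": 500, "temperature": 0.7}))
# {'maxOutputTokens': 500, 'temperature': 0.7}
```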
  • Connection registry — register_connection()
  • Agent loop — execute_agent() with error recovery
  • Streaming hardening — tool calls, refusals, empty chunks
  • Structured output — outputSchema → response_format
  • Thread safety — renderer nonces use threading.local()
  • Entry-point discovery — third-party implementations
  • TypeScript runtime — @prompty/core, @prompty/openai, @prompty/foundry, @prompty/anthropic