# What's New in v2
Prompty v2 is a ground-up rebuild focused on multi-runtime support, agent workflows, and a cleaner file format backed by the Prompty schema.
## Three Runtime Implementations

- Python, TypeScript, and C# — the same `.prompty` file works across all three
- Consistent public API across languages:
  - `load()` — parse a `.prompty` file into a typed agent object
  - `prepare()` — render the template and parse it into messages
  - `invoke()` / `run()` — full pipeline: load → render → parse → execute → process
  - `invoke_agent()` — agentic tool-calling loop
## New File Format

- YAML frontmatter follows the Prompty schema (defined in TypeSpec)
- `inputs`/`outputs` with typed property lists (`kind`, `default`, `required`)
- `tools` block for function calling and PromptyTool composition
- `template.format` and `template.parser` for explicit template configuration
- `${env:VAR}` and `${file:path}` reference resolution in frontmatter values
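As an illustration, a v2 file might look like the fragment below. The top-level field names follow the bullets above; the exact nesting (particularly under `model`) and all values are assumptions for illustration — consult the Prompty schema for the authoritative shape.

```yaml
---
name: support-triage                  # hypothetical example
model:
  api: chat
  connection:
    apiKey: ${env:OPENAI_API_KEY}     # resolved from the environment
inputs:
  question:
    kind: string
    required: true
outputs:
  category:
    kind: string
template:
  format: jinja2
  parser: prompty
---
system:
You are a support triage assistant.

user:
{{question}}
```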
## Agent Mode

- `invoke_agent()` runs a tool-calling loop automatically — send a prompt, receive tool calls, execute them, re-send, and repeat until the model returns a final response
- Register tool functions:
  - Python: `@tool` decorator or `metadata["tool_functions"]`
  - TypeScript: tool registry
  - C#: `[Tool]` attribute
- PromptyTool: compose prompts as tools — one `.prompty` file can call another
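Conceptually, the loop behind `invoke_agent()` can be sketched as follows. This is a minimal stand-alone illustration with a fake model callable, not the library's implementation; the message and tool-call shapes are assumptions.

```python
import json
from typing import Callable

def invoke_agent_sketch(model: Callable, messages: list,
                        tools: dict, max_turns: int = 10):
    """Tool-calling loop: call the model, execute any requested tools,
    feed the results back, and stop at the first plain response."""
    for _ in range(max_turns):
        reply = model(messages)
        calls = reply.get("tool_calls")
        if not calls:                    # no tools requested: final answer
            return reply["content"]
        messages.append(reply)
        for call in calls:               # run each tool, append its result
            result = tools[call["name"]](**call["arguments"])
            messages.append({"role": "tool", "name": call["name"],
                             "content": json.dumps(result)})
    raise RuntimeError("agent did not produce a final response")

# Fake model: asks for a tool once, then answers.
def fake_model(messages):
    if not any(m.get("role") == "tool" for m in messages):
        return {"role": "assistant", "content": None,
                "tool_calls": [{"name": "add", "arguments": {"a": 2, "b": 3}}]}
    return {"role": "assistant", "content": "The sum is 5."}

answer = invoke_agent_sketch(fake_model,
                             [{"role": "user", "content": "What is 2 + 3?"}],
                             {"add": lambda a, b: a + b})
# → "The sum is 5."
```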
## Streaming Support

- First-class streaming with `PromptyStream`/`AsyncPromptyStream` wrappers
- Built-in tracing integration — streaming responses are traced automatically
- Works with both OpenAI and Azure OpenAI providers
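The idea behind a traced stream wrapper can be sketched like this: yield chunks to the caller unchanged while accumulating them, then hand the complete response to a tracing hook once the stream is exhausted. The class name and internals here are assumptions, not `PromptyStream` itself.

```python
class StreamSketch:
    """Wrap a chunk iterator; pass chunks through while accumulating
    them so the full response can be traced when the stream ends."""
    def __init__(self, name, iterator, on_complete=None):
        self.name = name
        self._iterator = iter(iterator)
        self._chunks = []
        self._on_complete = on_complete   # tracing hook (hypothetical)

    def __iter__(self):
        return self

    def __next__(self):
        try:
            chunk = next(self._iterator)
        except StopIteration:
            if self._on_complete:          # stream exhausted: emit the trace
                self._on_complete(self.name, "".join(self._chunks))
            raise
        self._chunks.append(chunk)
        return chunk
```

Because the wrapper is itself an iterator, caller code that loops over raw provider chunks works unchanged.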
## Structured Output

- Define `outputs` in frontmatter → automatic `response_format` for OpenAI
- Responses are JSON-parsed when a schema is defined
- Supports strict mode for guaranteed schema conformance
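A sketch of that mapping: turn a typed `outputs` block into an OpenAI-style JSON-schema `response_format`. The `outputs` dict shape mirrors the frontmatter bullets above and is an assumption; the `response_format` envelope follows OpenAI's structured-outputs API.

```python
import json

def response_format_from_outputs(outputs: dict) -> dict:
    """Build an OpenAI response_format from a typed outputs block."""
    properties = {name: {"type": spec.get("kind", "string")}
                  for name, spec in outputs.items()}
    required = [name for name, spec in outputs.items() if spec.get("required")]
    return {
        "type": "json_schema",
        "json_schema": {
            "name": "output",
            "strict": True,                 # strict mode: exact conformance
            "schema": {
                "type": "object",
                "properties": properties,
                "required": required,
                "additionalProperties": False,
            },
        },
    }

fmt = response_format_from_outputs({"category": {"kind": "string",
                                                 "required": True}})
# The model's reply can then be parsed with json.loads against this schema.
```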
## Provider Support

- OpenAI — direct API key authentication
- Azure OpenAI / Foundry — API key, Microsoft Entra ID, and managed identity
- Anthropic — pluggable via the provider registry
- Extensible: add new providers by implementing the executor/processor protocols and registering them via entry points
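The registry pattern can be sketched like this. The `Executor` protocol name comes from the bullets above; everything else (the registry dict, the decorator, the `echo` provider) is a simplified stand-in — a real package would expose its factory through an entry point rather than a decorator.

```python
from typing import Callable, Protocol

class Executor(Protocol):
    """Minimal executor protocol: turn messages into a raw response."""
    def execute(self, messages: list) -> dict: ...

_EXECUTORS: dict = {}   # provider name -> executor factory

def register_executor(provider: str) -> Callable:
    """Decorator registering an executor factory under a provider key."""
    def wrap(factory):
        _EXECUTORS[provider] = factory
        return factory
    return wrap

def get_executor(provider: str) -> Executor:
    try:
        return _EXECUTORS[provider]()
    except KeyError:
        raise ValueError(f"no executor registered for {provider!r}") from None

@register_executor("echo")   # hypothetical provider for illustration
class EchoExecutor:
    def execute(self, messages):
        return {"content": messages[-1]["content"]}
```

Resolving a provider by name (`get_executor("echo")`) is then all the pipeline needs to stay provider-agnostic.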
## Pipeline Architecture

Every prompt execution follows a four-stage pipeline:

1. Render — expand the template (Jinja2 or Mustache) with inputs
2. Parse — convert role markers (`system:`, `user:`, `assistant:`) into messages
3. Execute — call the LLM provider
4. Process — extract content, tool calls, or embeddings from the response

Each stage is independently replaceable via the discovery/registry system.
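The parse stage is the most mechanical of the four and can be sketched directly: split the rendered text on role-marker lines and collect each section into a chat message. This is a simplified illustration, not the library's parser.

```python
import re

ROLE_MARKER = re.compile(r"^(system|user|assistant):\s*$")

def parse_messages(rendered: str) -> list:
    """Split rendered template text on role-marker lines into messages."""
    messages, role, buf = [], None, []
    for line in rendered.splitlines():
        match = ROLE_MARKER.match(line)
        if match:
            if role is not None:        # close out the previous section
                messages.append({"role": role,
                                 "content": "\n".join(buf).strip()})
            role, buf = match.group(1), []
        else:
            buf.append(line)
    if role is not None:                # flush the final section
        messages.append({"role": role, "content": "\n".join(buf).strip()})
    return messages
```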
## Tracing & Observability

- `@trace` decorator wraps any function with tracing spans
- `Tracer` registry with pluggable backends:
  - Console output
  - JSON file logging
  - OpenTelemetry (`opentelemetry-api`)
- All pipeline stages are automatically traced
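The decorator pattern behind `@trace` can be sketched as follows; the in-memory span list stands in for a pluggable `Tracer` backend, and none of this is the library's actual implementation.

```python
import functools
import time

SPANS = []   # stand-in for a pluggable Tracer backend

def trace(fn):
    """Wrap a function so each call records a span (name + duration)."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:   # record the span even if the call raises
            SPANS.append({"name": fn.__qualname__,
                          "duration_s": time.perf_counter() - start})
    return wrapper

@trace
def render(template: str, **inputs) -> str:
    return template.format(**inputs)
```

Because the span is recorded in a `finally` block, failed calls are traced too, which is what makes automatic tracing of every pipeline stage practical.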
## VS Code Extension

- Live preview of rendered prompts directly in the editor
- Connection management for OpenAI and Azure endpoints
- Copilot Chat integration — use prompty files in GitHub Copilot workflows
## Migration from v1

Prompty v2 includes a legacy migration layer that automatically converts v1 frontmatter properties to their v2 equivalents, emitting deprecation warnings as it does so. Old `.prompty` files continue to load — update them at your own pace.
See the Migration Guide for full details on property mappings and breaking changes.
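The behavior of such a layer can be sketched as a key-rename pass over the parsed frontmatter. The `old_key`/`new_key` names below are placeholders, not real v1/v2 properties — the actual mappings are in the Migration Guide.

```python
import warnings

# Placeholder rename table for illustration only; the real v1 -> v2
# property mappings are documented in the Migration Guide.
LEGACY_MAP = {"old_key": "new_key"}

def migrate_frontmatter(frontmatter: dict) -> dict:
    """Rewrite legacy keys, warning once per deprecated property."""
    migrated = {}
    for key, value in frontmatter.items():
        if key in LEGACY_MAP:
            new_key = LEGACY_MAP[key]
            warnings.warn(f"{key!r} is deprecated; use {new_key!r}",
                          DeprecationWarning, stacklevel=2)
            migrated[new_key] = value
        else:
            migrated[key] = value
    return migrated
```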