What's New in v2

Prompty v2 is a ground-up rebuild focused on multi-runtime support, agent workflows, and a cleaner file format backed by the Prompty schema.

  • Python, TypeScript, and C# — the same .prompty file works across all three
  • Consistent public API across languages:
    • load() — parse a .prompty file into a typed agent object
    • prepare() — render the template and parse into messages
    • invoke() / run() — full pipeline: load → render → parse → execute → process
    • invoke_agent() — agentic tool-calling loop
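The split between load(), prepare(), and invoke() can be sketched with a minimal stand-in. Everything below is illustrative, not the real v2 implementation: frontmatter parsing is reduced to `key: value` lines, template rendering to plain `{{name}}` substitution, and the executor is a stub you pass in.

```python
# Minimal stand-in for the load/prepare/invoke split (illustrative only;
# the real v2 API lives in the prompty packages).
import re

def load(text: str) -> dict:
    """Parse '---'-delimited frontmatter and the template body."""
    _, front, body = text.split("---", 2)
    pairs = (line.split(":", 1) for line in front.strip().splitlines() if ":" in line)
    return {"meta": {k.strip(): v.strip() for k, v in pairs},
            "template": body.strip()}

def prepare(prompt: dict, inputs: dict) -> list[dict]:
    """Render {{name}} placeholders, then split role markers into messages."""
    rendered = prompt["template"]
    for key, value in inputs.items():
        rendered = rendered.replace("{{" + key + "}}", str(value))
    messages = []
    for line in rendered.splitlines():
        if re.match(r"^(system|user|assistant):\s*$", line.strip()):
            messages.append({"role": line.strip().rstrip(":"), "content": ""})
        elif messages:
            messages[-1]["content"] += line.strip() + " "
    return [{**m, "content": m["content"].strip()} for m in messages]

def invoke(prompt: dict, inputs: dict, executor) -> str:
    """Full pipeline: render -> parse -> execute."""
    return executor(prepare(prompt, inputs))

doc = """---
name: greeter
model: gpt-4o
---
system:
You are terse.
user:
Say hi to {{name}}.
"""
prompt = load(doc)
messages = prepare(prompt, {"name": "Ada"})
reply = invoke(prompt, {"name": "Ada"}, lambda msgs: "echo: " + msgs[-1]["content"])
```

The stub executor stands where a real OpenAI or Azure OpenAI call would go; the surrounding shape (load once, prepare per input set, invoke for the full pipeline) is the point.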
  • YAML frontmatter follows the Prompty schema (defined in TypeSpec)
  • inputs / outputs with typed property lists (kind, default, required)
  • tools block for function calling and PromptyTool composition
  • template.format and template.parser for explicit template configuration
  • ${env:VAR} and ${file:path} reference resolution in frontmatter values
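Put together, a v2 frontmatter might look like the sketch below. The exact property names and nesting are assumptions modeled on the bullets above; the authoritative definition is the TypeSpec schema.

```yaml
---
name: support_triage
description: Classify a support ticket.
model:
  connection:
    type: openai
    api_key: ${env:OPENAI_API_KEY}   # resolved from the environment at load time
inputs:
  ticket:
    kind: string
    required: true
  product:
    kind: string
    default: general
outputs:
  category:
    kind: string
  urgency:
    kind: string
tools:
  - id: lookup_order
    type: function
template:
  format: jinja2
  parser: prompty
---
system:
You triage support tickets for {{product}}.
user:
{{ticket}}
```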
  • invoke_agent() runs a tool-calling loop automatically — send a prompt, receive tool calls, execute them, re-send, repeat until the model returns a final response
  • Register tool functions:
    • Python: @tool decorator or metadata["tool_functions"]
    • TypeScript: tool registry
    • C#: [Tool] attribute
  • PromptyTool: compose prompts as tools — one .prompty file can call another
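The tool-calling loop can be sketched with a scripted fake model: the first reply requests a tool, the loop executes it from a registry, appends the result, and re-sends until the model returns plain content. The names (`tool`, `invoke_agent`, the message shapes) are illustrative, not the real v2 API.

```python
# Sketch of an agentic tool-calling loop with a scripted fake model
# (the real invoke_agent() drives a live LLM; names here are illustrative).
TOOLS = {}

def tool(fn):
    """Register a function so the loop can execute model tool calls."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

def fake_model(messages):
    """Stands in for the LLM: first asks for a tool, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_calls": [{"name": "get_weather",
                                "arguments": {"city": "Oslo"}}]}
    return {"content": "It is sunny in Oslo."}

def invoke_agent(messages, model, max_turns=5):
    for _ in range(max_turns):
        reply = model(messages)
        calls = reply.get("tool_calls")
        if not calls:
            return reply["content"]          # final response: loop ends
        for call in calls:                   # execute each requested tool
            result = TOOLS[call["name"]](**call["arguments"])
            messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent loop did not converge")

answer = invoke_agent([{"role": "user", "content": "Weather in Oslo?"}],
                      fake_model)
```

A PromptyTool would slot into the same registry: its "function" is just another prompt execution.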
  • First-class streaming with PromptyStream / AsyncPromptyStream wrappers
  • Built-in tracing integration — streaming responses are traced automatically
  • Works with both OpenAI and Azure OpenAI providers
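The streaming wrapper idea can be shown with a minimal stand-in: chunks pass through to the caller unchanged, while the wrapper accumulates the full text and fires one trace callback when the stream ends. The class name matches the bullet above, but the body is a sketch, not the real implementation.

```python
# Sketch of a PromptyStream-style wrapper: pass chunks through to the
# caller while accumulating the full response for tracing (illustrative).
class PromptyStream:
    def __init__(self, name, chunks, on_complete):
        self.name = name
        self._chunks = iter(chunks)
        self._buffer = []
        self._on_complete = on_complete   # e.g. a tracer callback

    def __iter__(self):
        return self

    def __next__(self):
        try:
            chunk = next(self._chunks)
        except StopIteration:
            # stream exhausted: emit one trace event with the full response
            self._on_complete(self.name, "".join(self._buffer))
            raise
        self._buffer.append(chunk)
        return chunk

traced = {}
stream = PromptyStream("demo", ["Hel", "lo", "!"],
                       lambda name, text: traced.update({name: text}))
received = "".join(stream)
```

This is why streamed responses can be traced automatically: the consumer sees an ordinary iterator, and the tracer still gets the assembled response.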
  • Define outputs in frontmatter → automatic response_format for OpenAI
  • Responses are JSON-parsed when a schema is defined
  • Supports strict mode for guaranteed schema conformance
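The outputs-to-response_format step can be sketched as a small transform: a typed outputs block becomes an OpenAI-style `json_schema` response format (with `strict` for guaranteed conformance), and the reply is JSON-parsed. The outputs-side field names are assumptions modeled on the bullets above.

```python
# Sketch: derive an OpenAI-style response_format from a frontmatter
# `outputs` block, then JSON-parse the model's reply (the `kind`/`required`
# field names are assumptions modeled on the feature list above).
import json

outputs = {
    "category": {"kind": "string", "required": True},
    "urgency":  {"kind": "string", "required": True},
}

def response_format(outputs, strict=True):
    props = {name: {"type": spec["kind"]} for name, spec in outputs.items()}
    return {
        "type": "json_schema",
        "json_schema": {
            "name": "output",
            "strict": strict,   # strict mode: schema conformance guaranteed
            "schema": {
                "type": "object",
                "properties": props,
                "required": [n for n, s in outputs.items() if s.get("required")],
                "additionalProperties": False,
            },
        },
    }

fmt = response_format(outputs)
raw = '{"category": "billing", "urgency": "high"}'   # hypothetical model reply
parsed = json.loads(raw)                             # the automatic parse step
```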
  • OpenAI — direct API key authentication
  • Azure OpenAI / Foundry — API key, Microsoft Entra ID, and managed identity
  • Anthropic — pluggable via the provider registry
  • Extensible: add new providers by implementing the executor/processor protocols and registering via entry points
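The provider registry pattern can be sketched as follows: executors satisfy a small protocol and register under a name. In the real system, discovery also happens via package entry points; the names below are illustrative.

```python
# Sketch of a provider registry: executors implement a small protocol and
# register under a name (the real system also discovers providers via
# package entry points; names here are illustrative).
from typing import Callable, Protocol

class Executor(Protocol):
    def execute(self, messages: list[dict]) -> dict: ...

REGISTRY: dict[str, Callable[[], Executor]] = {}

def register(name):
    def wrap(factory):
        REGISTRY[name] = factory
        return factory
    return wrap

@register("echo")
class EchoExecutor:
    def execute(self, messages):
        return {"content": messages[-1]["content"]}

def get_executor(name: str) -> Executor:
    return REGISTRY[name]()   # instantiate the registered provider

result = get_executor("echo").execute([{"role": "user", "content": "hi"}])
```

Adding Anthropic, or any other provider, means implementing `execute` (and a matching processor) and registering it; no core changes are needed.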

Every prompt execution follows a four-stage pipeline:

  1. Render — expand the template (Jinja2 or Mustache) with inputs
  2. Parse — convert role markers (system:, user:, assistant:) into messages
  3. Execute — call the LLM provider
  4. Process — extract content, tool calls, or embeddings from the response

Each stage is independently replaceable via the discovery/registry system.
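The replaceability can be sketched with the four stages as entries in a registry dict: swapping one stage (say, a different parser) leaves the other three untouched. The stage bodies below are deliberately trivial stand-ins.

```python
# The four stages as replaceable entries in a registry dict: swapping one
# stage leaves the rest untouched (stage bodies are trivial stand-ins).
STAGES = {
    "render":  lambda template, inputs: template.format(**inputs),
    "parse":   lambda text: [{"role": "user", "content": text}],
    "execute": lambda messages: {"content": messages[-1]["content"].upper()},
    "process": lambda response: response["content"],
}

def run(template, inputs, stages=STAGES):
    text     = stages["render"](template, inputs)    # 1. render
    messages = stages["parse"](text)                 # 2. parse
    response = stages["execute"](messages)           # 3. execute
    return stages["process"](response)               # 4. process

out = run("hello {name}", {"name": "ada"})           # default stages
# Replace just the parser; render, execute, and process are unchanged:
custom = {**STAGES, "parse": lambda t: [{"role": "system", "content": t}]}
out2 = run("hello {name}", {"name": "ada"}, custom)
```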

  • @trace decorator wraps any function with tracing spans
  • Tracer registry with pluggable backends:
    • Console output
    • JSON file logging
    • OpenTelemetry (opentelemetry-api)
  • All pipeline stages are automatically traced
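The decorator-plus-registry shape can be sketched like this: `@trace` wraps a function, and on completion every registered backend receives the finished span. Console, JSON-file, and OpenTelemetry backends would all register the same way; the in-memory backend below is illustrative.

```python
# Sketch of a @trace decorator backed by a pluggable tracer registry
# (console/JSON/OpenTelemetry backends would register here; illustrative).
import functools
import time

TRACERS = []   # registered backends, each receives finished spans

def trace(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            span = {"name": fn.__name__,
                    "duration_s": time.perf_counter() - start}
            for tracer in TRACERS:   # fan out to every backend
                tracer(span)
    return wrapper

spans = []
TRACERS.append(spans.append)   # a minimal in-memory backend

@trace
def render(template, name):
    return template.replace("{{name}}", name)

result = render("hi {{name}}", "Ada")
```

Because every pipeline stage is wrapped the same way, tracing all stages automatically is just decorating them at registration time.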
  • Live preview of rendered prompts directly in the editor
  • Connection management for OpenAI and Azure endpoints
  • Copilot Chat integration — use prompty files in GitHub Copilot workflows

Prompty v2 includes a legacy migration layer that automatically converts v1 frontmatter properties to v2 equivalents with deprecation warnings. Old .prompty files continue to load — update them at your own pace.
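The migration layer's shape can be sketched as a key-mapping pass that emits `DeprecationWarning`s. The two mappings shown are assumed examples, not the real property table; see the Migration Guide for the actual mappings.

```python
# Sketch of a v1 -> v2 frontmatter migration shim: old property names are
# rewritten with a DeprecationWarning (the mapping entries are assumed
# examples, not the real table; see the Migration Guide).
import warnings

V1_TO_V2 = {
    "sample": "inputs",        # assumed example mapping
    "parameters": "options",   # assumed example mapping
}

def migrate(frontmatter: dict) -> dict:
    out = {}
    for key, value in frontmatter.items():
        new_key = V1_TO_V2.get(key, key)
        if new_key != key:
            warnings.warn(
                f"'{key}' is deprecated; use '{new_key}'", DeprecationWarning
            )
        out[new_key] = value
    return out

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    migrated = migrate({"name": "greeter", "sample": {"name": "Ada"}})
```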

See the Migration Guide for full details on property mappings and breaking changes.