# Rust

## Installation

Prompty for Rust requires Rust ≥ 1.85 (edition 2024) and an async runtime (Tokio).

```sh
# Core runtime
cargo add prompty

# Add a provider (pick one or more)
cargo add prompty-openai     # OpenAI
cargo add prompty-foundry    # Azure OpenAI / Foundry
cargo add prompty-anthropic  # Anthropic Claude
```

## Crate Structure
| Crate | Description |
|---|---|
| `prompty` | Core pipeline, types, registry, tracing |
| `prompty-openai` | OpenAI executor & processor |
| `prompty-foundry` | Azure OpenAI / Foundry executor & processor |
| `prompty-anthropic` | Anthropic Claude executor & processor |
## Feature Flags

The core `prompty` crate has optional features:

| Feature | What it enables |
|---|---|
| `otel` | OpenTelemetry tracing backend (`opentelemetry`, `opentelemetry_sdk`, `opentelemetry-stdout`) |
The `prompty-foundry` crate has:

| Feature | What it enables |
|---|---|
| `entra_id` | Azure Entra ID (AAD) authentication via `azure_identity` |
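If you manage dependencies in `Cargo.toml` directly, the same features can be declared there; the version numbers below are placeholders for whatever is currently published:

```toml
[dependencies]
prompty = { version = "0.1", features = ["otel"] }          # version is a placeholder
prompty-foundry = { version = "0.1", features = ["entra_id"] }  # version is a placeholder
```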
```sh
# Enable OpenTelemetry tracing
cargo add prompty --features otel

# Enable Entra ID authentication for Foundry
cargo add prompty-foundry --features entra_id
```

## Quick Start
```rust
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // 1. Register providers (once at startup)
    prompty::register_defaults();
    prompty_openai::register();

    // 2. Load and invoke
    let result = prompty::invoke_from_path(
        "greeting.prompty",
        Some(&json!({ "userName": "Jane" })),
    )
    .await?;

    println!("{result}");
    Ok(())
}
```

## API Overview
### Loading

```rust
// Load a .prompty file into a typed Prompty object
let agent = prompty::load("chat.prompty")?;

println!("{}", agent.name);           // "chat"
println!("{}", agent.model.id);       // "gpt-4o"
println!("{:?}", agent.instructions); // Some("the markdown body")

// Async loading (non-blocking file I/O)
let agent = prompty::load_async("chat.prompty").await?;

// Load from a string (no file needed)
let agent = prompty::load_from_string(raw_content, ".")?;
```

### Pipeline Functions
```rust
use serde_json::json;

let agent = prompty::load("chat.prompty")?;
let inputs = json!({ "q": "Hi" });

// Render template + parse role markers → Vec<Message>
let messages = prompty::prepare(&agent, Some(&inputs)).await?;

// Execute LLM + process response → serde_json::Value
let result = prompty::run(&agent, &messages).await?;

// One-shot: prepare + run
let result = prompty::invoke_agent(&agent, Some(&inputs)).await?;

// Load from path + invoke in one call
let result = prompty::invoke_from_path("chat.prompty", Some(&inputs)).await?;
```

### Async-Only Design
Rust’s Prompty runtime is async-only — all pipeline functions are `async fn` and require a Tokio runtime. This is idiomatic for Rust I/O and network operations.

```rust
#[tokio::main]
async fn main() {
    let result = prompty::invoke_from_path("chat.prompty", None)
        .await
        .unwrap();
}
```

## Agent Mode (`turn`)
The `turn()` function runs an agent loop — the LLM can call tools, and the runtime executes them automatically until it produces a final response.

```rust
use prompty::{TurnOptions, AgentEvent};
use serde_json::json;
use std::sync::Arc;

// Register tool handlers
prompty::register_tool_handler("get_weather", |args| {
    Box::pin(async move {
        let city = args["city"].as_str().unwrap_or("unknown");
        Ok(json!(format!("72°F and sunny in {city}")))
    })
});

let agent = prompty::load("agent.prompty")?;

let options = TurnOptions {
    max_iterations: Some(10),
    max_llm_retries: Some(3),
    events: Some(Arc::new(|event: AgentEvent| {
        println!("Event: {event:?}");
    })),
    ..Default::default()
};

let result = prompty::turn(
    &agent,
    Some(&json!({ "question": "What's the weather in Seattle?" })),
    Some(options),
)
.await?;
```

The agent loop includes built-in resilience:
- Resilient JSON parsing — recovers from malformed tool arguments (markdown fences, trailing commas)
- Tool error safety — both errors and `panic!`s are caught via `catch_unwind` and fed back to the LLM
- LLM call retry — transient failures are retried with exponential backoff; `InvokerError::ExecuteRetryExhausted` carries the full conversation for resumption
- Cancellation — respects the `cancel` token during backoff sleep via `tokio::select!`
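To make the first bullet concrete, here is a minimal, std-only sketch of the idea: repairing malformed tool arguments (markdown fences, trailing commas) before handing them to a real JSON parser. The helper name `cleanup_tool_args` and the whole implementation are illustrative assumptions, not the runtime's actual code.

```rust
/// Illustrative sketch (not the runtime's actual implementation): strip a
/// surrounding markdown code fence and naive trailing commas from an
/// LLM-produced JSON string. A real implementation must also respect
/// commas inside string literals; this sketch does not.
fn cleanup_tool_args(raw: &str) -> String {
    let mut s = raw.trim();
    // Strip a ```json … ``` (or bare ```) fence if present.
    if let Some(rest) = s.strip_prefix("```") {
        let rest = rest.strip_prefix("json").unwrap_or(rest);
        s = rest.trim_start();
        if let Some(inner) = s.strip_suffix("```") {
            s = inner.trim_end();
        }
    }
    // Drop commas that are immediately followed (ignoring whitespace)
    // by a closing brace or bracket.
    let chars: Vec<char> = s.chars().collect();
    let mut out = String::with_capacity(s.len());
    for (i, &c) in chars.iter().enumerate() {
        if c == ',' {
            let next = chars[i + 1..].iter().copied().find(|c| !c.is_whitespace());
            if matches!(next, Some('}' | ']')) {
                continue;
            }
        }
        out.push(c);
    }
    out
}

fn main() {
    // Trailing comma repair.
    assert_eq!(cleanup_tool_args("{\"city\": \"Seattle\",}"), "{\"city\": \"Seattle\"}");

    // Fence stripping (the fence is built here so the example stays readable).
    let fence = "\u{60}".repeat(3); // three backticks
    let fenced = format!("{fence}json\n{{\"n\": 1,}}\n{fence}");
    assert_eq!(cleanup_tool_args(&fenced), "{\"n\": 1}");
}
```

The output of a repair pass like this would then go through `serde_json` as usual; only if that parse still fails would the error be fed back to the LLM.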
## Connection Registry

Pre-register named connections for production use:

```rust
use serde_json::json;

// Register a named connection
prompty::register_connection("my-openai", json!({
    "kind": "key",
    "apiKey": std::env::var("OPENAI_API_KEY").unwrap(),
}));

// .prompty files can reference it:
// connection:
//   kind: reference
//   name: my-openai
```

## Tracing
```rust
use prompty::{Tracer, PromptyTracer, console_tracer, trace_async};
use serde_json::json;

// Register a file-based tracer (writes .tracy JSON files)
let pt = PromptyTracer::new("./traces");
Tracer::register("json", pt.tracer());

// Register the console tracer
Tracer::register("console", console_tracer);

// Trace custom async functions
let result = trace_async("my_pipeline", json!({ "query": q }), async {
    prompty::invoke_from_path("search.prompty", Some(&inputs)).await
})
.await?;
```

## Providers
| Provider | Crate | Registration Key | Auth |
|---|---|---|---|
| OpenAI | `prompty-openai` | `openai` | API key |
| Azure OpenAI / Foundry | `prompty-foundry` | `foundry` | API key or Entra ID |
| Anthropic | `prompty-anthropic` | `anthropic` | API key |
Register providers at startup — once registered, any `.prompty` file with a matching provider value will use that executor and processor:

```rust
prompty::register_defaults();      // renderers + parser
prompty_openai::register();        // "openai" executor + processor
prompty_foundry::register();       // "foundry" executor + processor
prompty_anthropic::register();     // "anthropic" executor + processor
```

## Environment Variables
Prompty resolves `${env:VAR}` references in `.prompty` frontmatter from the process environment. Set them before loading:

```sh
export OPENAI_API_KEY=sk-your-key-here
export AZURE_OPENAI_ENDPOINT=https://myresource.openai.azure.com/
export AZURE_OPENAI_API_KEY=abc123
export ANTHROPIC_API_KEY=sk-ant-your-key-here
```

## Current Limitations
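As an illustration of how `${env:VAR}` substitution could work, here is a minimal, std-only sketch; `resolve_env_refs` is a hypothetical helper, not the runtime's actual resolver, and it deliberately leaves unknown or unterminated references untouched.

```rust
use std::env;

/// Illustrative sketch: replace `${env:VAR}` references in a frontmatter
/// string with values from the process environment. Unknown variables and
/// unterminated references are passed through unchanged.
fn resolve_env_refs(input: &str) -> String {
    let mut out = String::new();
    let mut rest = input;
    while let Some(start) = rest.find("${env:") {
        out.push_str(&rest[..start]);
        let after = &rest[start + "${env:".len()..];
        match after.find('}') {
            Some(end) => {
                let name = &after[..end];
                match env::var(name) {
                    Ok(val) => out.push_str(&val),
                    // Unknown variable: keep the reference as-is.
                    Err(_) => {
                        out.push_str("${env:");
                        out.push_str(name);
                        out.push('}');
                    }
                }
                rest = &after[end + 1..];
            }
            None => {
                // Unterminated reference: emit the remainder verbatim.
                out.push_str(&rest[start..]);
                rest = "";
            }
        }
    }
    out.push_str(rest);
    out
}

fn main() {
    // A missing variable leaves the reference intact.
    assert_eq!(
        resolve_env_refs("key: ${env:__PROMPTY_DOCS_UNSET__}"),
        "key: ${env:__PROMPTY_DOCS_UNSET__}"
    );

    // PATH is set in virtually every environment.
    let path = env::var("PATH").unwrap_or_default();
    assert_eq!(resolve_env_refs("bin=${env:PATH}"), format!("bin={path}"));
}
```

The point of the pass-through behavior is that a missing key fails loudly later (at connection time) rather than silently substituting an empty string.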
The Rust runtime covers the core Prompty pipeline comprehensively but has some gaps compared to the Python and TypeScript runtimes:

| Feature | Status | Notes |
|---|---|---|
| Chat completions | ✅ | All providers |
| Streaming | ✅ | `PromptyStream` with tracing |
| Agent loop / `turn()` | ✅ | Events, cancellation, guardrails, steering |
| Structured output | ✅ | `outputSchema` → `response_format` |
| Tracing | ✅ | `.tracy`, console, OpenTelemetry |
| Custom providers | ✅ | Implement the `Executor` + `Processor` traits |
| Embeddings API | ❌ | `apiType: embedding` not yet supported |
| Images API | ❌ | `apiType: image` not yet supported |
| Responses API | ❌ | OpenAI Responses API not yet supported |
| MCP tools | ❌ | MCP tool kind not yet supported |
| OpenAPI tools | ❌ | OpenAPI tool kind not yet supported |
These features are planned for future releases. Contributions welcome!
## Further Reading

- API Reference — complete type reference
- How-To Guides — practical recipes
- Core Concepts — architecture deep-dives