How-To Guides
These guides are recipe-style — each one shows a complete .prompty file and
the code to run it. Copy, paste, fill in your API keys, and go.
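As a taste of the recipe format, here is a minimal chat .prompty file. This is a hedged sketch: the doc references model.id, but the exact frontmatter keys vary by runtime and provider, so treat the field names as illustrative and check the provider guides for the canonical layout.

```yaml
---
name: basic_chat
description: A minimal chat prompt
model:
  id: gpt-4o        # assumed id; see "Discover Available Models" before choosing
inputs:
  question:
    type: string
---
system:
You are a concise, helpful assistant.

user:
{{question}}
```

The frontmatter above the second `---` declares metadata, the model, and the inputs; the body below it is the templated prompt, with `{{question}}` substituted at run time.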
Providers
Use with OpenAI: Chat completions with the OpenAI API and API key auth.
Use with Microsoft Foundry: Foundry project endpoints with API key and Microsoft Entra ID auth.
Use with Anthropic: Chat completions using Anthropic Claude.
Discover Available Models: List models and deployments before choosing model.id.
Custom Providers: Build your own executor and processor for any LLM.
Features
Agent with Tool Calling: Define tools, register handlers, and let the agent loop call them.
Multi-Prompt Composition: Chain .prompty files — one prompt calls another via PromptyTool.
Streaming Responses: Stream tokens in real time with PromptyStream and tracing.
Structured Output: Get typed JSON back matching an output schema you define.
Embeddings: Generate text embeddings with apiType: embedding.
Image Generation: Generate images with DALL-E via apiType: image.
Development
Testing Your Prompts: Unit test .prompty files without calling the LLM.
Run Live Provider Tests: Exercise Prompty against real endpoints before publishing packages.
Troubleshooting: Solutions to common errors and issues.
Reference
Cookbook: Ready-to-use .prompty examples — copy, customize, and run.