# How-To Guides

## Overview
These guides are recipe-style: each one shows a complete `.prompty` file and the code to run it. Copy, paste, fill in your API keys, and go.
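To set expectations, here is the general shape of such a file: YAML frontmatter describing the model, followed by a templated prompt body split by role markers. This is a sketch only; the exact `configuration` keys depend on the provider you use, so check the corresponding guide below.

```yaml
---
name: Basic Chat
description: A minimal chat prompt
model:
  api: chat
  configuration:
    type: openai        # provider-specific; see the provider guides
    name: gpt-4o-mini   # model name (key may differ per provider)
  parameters:
    max_tokens: 150
sample:
  question: What is the capital of France?
---
system:
You are a concise, helpful assistant.

user:
{{question}}
```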
Prompty supports multiple LLM providers out of the box — OpenAI, Microsoft Foundry, and Anthropic — and its plugin architecture makes it easy to add more.
## Guides

| Guide | What you’ll build |
|---|---|
| Use with Anthropic | Chat completion using Anthropic Claude with API key auth |
| Use with OpenAI | Chat completion using the OpenAI API with API key auth |
| Use with Microsoft Foundry | Chat completion using Microsoft Foundry — API key and Azure AD options |
| Agent with Tool Calling | An agent that calls your Python/TypeScript functions in a loop |
| Structured Output | Get typed JSON responses matching a schema |
| Streaming Responses | Stream tokens as they arrive |
| Embeddings | Generate text embeddings with `apiType: embedding` |
| Image Generation | Generate images with DALL-E via `apiType: image` |
| Custom Providers | Write your own executor and processor |
| VS Code Extension | Author and test .prompty files in VS Code |
## Common Pattern

Every guide follows the same three-step pattern:

1. Write a `.prompty` file → declarative prompt + model config
2. Call `prompty.execute()` → load → render → parse → call LLM → process
3. Use the result → string, JSON, embeddings, or stream

The runtime handles template rendering, role-marker parsing, provider dispatch, and response processing automatically — your code stays minimal.
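The render and parse steps above can be illustrated with a toy sketch. This is plain Python, not Prompty's actual internals: it substitutes `{{name}}` placeholders into a template, then splits the result on `system:` / `user:` role markers to build a chat-completion message list, which is the payload the runtime hands to the provider.

```python
import re

def render(template: str, inputs: dict) -> str:
    # Toy template rendering: substitute {{name}} placeholders with input values.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(inputs[m.group(1)]), template)

def parse_roles(rendered: str) -> list[dict]:
    # Toy role-marker parsing: a line like "system:" or "user:" starts a new message.
    messages, role, buf = [], None, []
    for line in rendered.splitlines():
        marker = line.strip().rstrip(":").lower()
        if line.strip().endswith(":") and marker in {"system", "user", "assistant"}:
            if role is not None:
                messages.append({"role": role, "content": "\n".join(buf).strip()})
            role, buf = marker, []
        else:
            buf.append(line)
    if role is not None:
        messages.append({"role": role, "content": "\n".join(buf).strip()})
    return messages

template = """system:
You are a concise assistant.

user:
{{question}}"""

messages = parse_roles(render(template, {"question": "What is 2 + 2?"}))
# messages: [{"role": "system", "content": "You are a concise assistant."},
#            {"role": "user", "content": "What is 2 + 2?"}]
```

A real run replaces this sketch with a single `prompty.execute()` call, which performs the same rendering and parsing before dispatching to the configured provider.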