How-To Guides

These guides are recipe-style — each one shows a complete .prompty file and the code to run it. Copy, paste, fill in your API keys, and go.

Prompty supports multiple LLM providers out of the box — OpenAI, Microsoft Foundry, and Anthropic — and its plugin architecture makes it easy to add more.


| Guide | What you'll build |
| --- | --- |
| Use with Anthropic | Chat completion using Anthropic Claude with API key auth |
| Use with OpenAI | Chat completion using the OpenAI API with API key auth |
| Use with Microsoft Foundry | Chat completion using Microsoft Foundry — API key and Azure AD options |
| Agent with Tool Calling | An agent that calls your Python/TypeScript functions in a loop |
| Structured Output | Get typed JSON responses matching a schema |
| Streaming Responses | Stream tokens as they arrive |
| Embeddings | Generate text embeddings with apiType: embedding |
| Image Generation | Generate images with DALL-E via apiType: image |
| Custom Providers | Write your own executor and processor |
| VS Code Extension | Author and test .prompty files in VS Code |

Every guide follows the same three-step pattern:

1. Write a .prompty file → declarative prompt + model config
2. Call prompty.execute() → load → render → parse → call LLM → process
3. Use the result → string, JSON, embeddings, or stream

The runtime handles template rendering, role-marker parsing, provider dispatch, and response processing automatically — your code stays minimal.
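To make the three-step pattern concrete, here is a sketch of what step 1 might look like. The exact frontmatter schema (field names like `configuration`, `name`, and the environment-variable syntax) varies by provider and is shown in full in each guide, so treat this as illustrative rather than copy-ready:

```
---
name: basic-chat
model:
  apiType: chat              # chat | embedding | image, per the table above
  configuration:
    type: openai             # provider id — illustrative; see the provider guides
    name: gpt-4o-mini
    apiKey: ${env:OPENAI_API_KEY}
inputs:
  question:
    type: string
---
system:
You are a concise, helpful assistant.

user:
{{question}}
```

Step 2 then reduces to a single call — in Python, something like `result = prompty.execute("basic-chat.prompty", inputs={"question": "What is Prompty?"})` — and step 3 is just using `result`, which for `apiType: chat` is the model's reply as a string.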