# Use with Anthropic
## Prerequisites

```bash
pip install prompty[jinja2,anthropic]
```

```bash
npm install @prompty/core @prompty/anthropic
```

You also need an Anthropic API key.
## 1. Write the .prompty File

Create `anthropic-chat.prompty`:
```yaml
---
name: anthropic-chat
description: Chat completion with Anthropic Claude
model:
  id: claude-sonnet-4-6
  provider: anthropic
  connection:
    kind: key
    endpoint: ${env:ANTHROPIC_ENDPOINT:https://api.anthropic.com}
    apiKey: ${env:ANTHROPIC_API_KEY}
  options:
    temperature: 0.7
    maxOutputTokens: 1024
inputs:
  - name: topic
    kind: string
    default: the history of computing
template:
  format:
    kind: jinja2
  parser:
    kind: prompty
---
system:
You are a knowledgeable assistant who gives clear, concise explanations.

user:
Tell me about {{topic}}.
```
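The connection block reads credentials through `${env:NAME}` placeholders, and `${env:NAME:default}` falls back to the default when the variable is unset — which is how the endpoint above defaults to the public Anthropic API. The substitution rule can be sketched in plain Python (an illustration of the syntax only, not prompty's actual resolver):

```python
import os
import re

def resolve_env_placeholders(text: str) -> str:
    """Expand ${env:NAME} and ${env:NAME:default} placeholders.

    Illustrative sketch of the substitution rule; prompty's real
    resolver may differ in edge cases.
    """
    pattern = re.compile(r"\$\{env:([A-Z0-9_]+)(?::([^}]*))?\}")

    def replace(match: re.Match) -> str:
        name, default = match.group(1), match.group(2)
        value = os.environ.get(name, default)
        if value is None:
            raise KeyError(f"environment variable {name} is not set "
                           "and no default was given")
        return value

    return pattern.sub(replace, text)

# With ANTHROPIC_ENDPOINT unset, the default wins:
os.environ.pop("ANTHROPIC_ENDPOINT", None)
print(resolve_env_placeholders(
    "${env:ANTHROPIC_ENDPOINT:https://api.anthropic.com}"))
# → https://api.anthropic.com
```

A placeholder with no default, like `${env:ANTHROPIC_API_KEY}`, has nothing to fall back to, so the variable must be set before the prompt runs.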
## 2. Set Your API Key

Create a `.env` file in the same directory:
```
ANTHROPIC_API_KEY=sk-ant-your-key-here
```

## 3. Run It
Section titled “3. Run It”import prompty
result = prompty.execute( "anthropic-chat.prompty", inputs={"topic": "quantum computing"})print(result)import { execute } from "@prompty/core";import "@prompty/anthropic"; // registers the provider
const result = await execute( "anthropic-chat.prompty", { topic: "quantum computing" });console.log(result);Step-by-Step Execution
## Step-by-Step Execution

If you need more control, use the individual pipeline stages:
```python
import prompty

# Load the prompt
agent = prompty.load("anthropic-chat.prompty")

# Render + parse → message list (no LLM call)
messages = prompty.prepare(agent, inputs={"topic": "quantum computing"})
print(messages)  # inspect before sending

# Execute + process → final result
result = prompty.run(agent, messages)
print(result)
```

```typescript
import { load, prepare, run } from "@prompty/core";
import "@prompty/anthropic";

const agent = await load("anthropic-chat.prompty");
const messages = await prepare(agent, { topic: "quantum computing" });
console.log(messages); // inspect before sending

const result = await run(agent, messages);
console.log(result);
```
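Splitting `prepare` from `run` is useful when you want to adjust the rendered messages before they are sent — for example, injecting a few-shot exchange ahead of the real question. A sketch assuming the rendered messages are a plain list of role/content dicts (the exact shape may vary by prompty version):

```python
# Hypothetical output of prompty.prepare(...) — the shape is assumed
messages = [
    {"role": "system",
     "content": "You are a knowledgeable assistant who gives clear, "
                "concise explanations."},
    {"role": "user", "content": "Tell me about quantum computing."},
]

# Inject a few-shot exchange between the system message and the question
few_shot = [
    {"role": "user", "content": "Tell me about DNS."},
    {"role": "assistant",
     "content": "DNS maps human-readable names to IP addresses..."},
]
messages[1:1] = few_shot  # insert after the system message

assert [m["role"] for m in messages] == [
    "system", "user", "assistant", "user"]
# The augmented list would then be passed to prompty.run(agent, messages)
```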
## Available Models

Anthropic’s Claude model family includes:
| Model | Best for |
|---|---|
| `claude-opus-4-0` | Complex reasoning, research, multi-step tasks |
| `claude-sonnet-4-6` | Balanced performance and speed (recommended default) |
| `claude-haiku-3-5` | Fast responses, simple tasks, lower cost |
Change the model by updating `model.id` in your `.prompty` file:
```yaml
model:
  id: claude-haiku-3-5
```
## Custom Endpoint

If you’re using an Anthropic-compatible proxy or a different base URL, set the endpoint explicitly:
```yaml
model:
  connection:
    kind: key
    endpoint: https://my-proxy.example.com
    apiKey: ${env:ANTHROPIC_API_KEY}
```

Or use an environment variable:
```yaml
model:
  connection:
    kind: key
    endpoint: ${env:ANTHROPIC_ENDPOINT}
    apiKey: ${env:ANTHROPIC_API_KEY}
```
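With the second form the endpoint comes entirely from the environment, so switching to a proxy is a deployment-time change rather than a file edit. If you set the variables from Python instead of the shell, do it before executing the prompt (the proxy URL and key below are placeholders):

```python
import os

# Placeholder values — substitute your real proxy URL and key
os.environ["ANTHROPIC_ENDPOINT"] = "https://my-proxy.example.com"
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-your-key-here"

# A subsequent prompty.execute("anthropic-chat.prompty", ...) would now
# resolve ${env:ANTHROPIC_ENDPOINT} to the proxy URL
print(os.environ["ANTHROPIC_ENDPOINT"])
# → https://my-proxy.example.com
```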
## Key Differences from OpenAI

| Aspect | OpenAI | Anthropic |
|---|---|---|
| Provider | `provider: openai` | `provider: anthropic` |
| Default endpoint | `https://api.openai.com/v1` | `https://api.anthropic.com` |
| Model names | `gpt-4o`, `gpt-4o-mini` | `claude-sonnet-4-6`, `claude-haiku-3-5` |
| API key prefix | `sk-` | `sk-ant-` |
| Responses API | ✅ Supported | ❌ Not supported |
| Image generation | ✅ DALL-E | ❌ Not available |