# Use with OpenAI
## Prerequisites

```sh
pip install prompty[jinja2,openai]
```

```sh
npm install @prompty/core @prompty/openai
```

You also need an OpenAI API key.
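Before running the examples, it helps to fail fast when the key is missing rather than hit an opaque HTTP 401 later. A small standalone check (the `openai_key_present` helper is illustrative, not part of Prompty):

```python
import os

def openai_key_present() -> bool:
    """True if OPENAI_API_KEY is set and has OpenAI's usual 'sk-' prefix."""
    key = os.environ.get("OPENAI_API_KEY", "")
    return key.startswith("sk-")

# Warn up front instead of failing mid-request.
if not openai_key_present():
    print("Warning: OPENAI_API_KEY is not set; the examples below will fail.")
```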
## 1. Write the .prompty File

Create `chat.prompty`:
```yaml
---
name: openai-chat
description: Simple chat completion with OpenAI
model:
  id: gpt-4o-mini
  provider: openai
  apiType: chat
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
  options:
    temperature: 0.7
    maxOutputTokens: 1024
inputSchema:
  properties:
    - name: question
      kind: string
      default: What is Prompty?
template:
  format:
    kind: jinja2
  parser:
    kind: prompty
---
system:
You are a helpful assistant. Answer concisely.

user:
{{question}}
```

## 2. Run It
```python
import prompty

# One-liner: load → render → call LLM → return result
result = prompty.execute("chat.prompty", inputs={"question": "What is Prompty?"})
print(result)
```

For more control, use the pipeline stages individually:
```python
import prompty

# Step 1 — Load the .prompty file → PromptAgent
agent = prompty.load("chat.prompty")

# Step 2 — Render template + parse role markers → list[Message]
messages = prompty.prepare(agent, inputs={"question": "Explain async/await"})

# Step 3 — Call OpenAI + process response → string
result = prompty.run(agent, messages)
print(result)
```

Async variant:
```python
import asyncio
import prompty

async def main():
    result = await prompty.execute_async(
        "chat.prompty",
        inputs={"question": "What is Prompty?"},
    )
    print(result)

asyncio.run(main())
```

In TypeScript:

```typescript
import { load, execute } from "@prompty/core";
import { OpenAIExecutor } from "@prompty/openai";

const result = await execute("chat.prompty", {
  inputs: { question: "What is Prompty?" },
});

console.log(result);
```

## 3. Switch Models
Change `model.id` in the frontmatter — no code changes needed:
```yaml
model:
  id: gpt-4o          # GPT-4o (default, fast + capable)
  # id: gpt-4o-mini   # Cheaper, good for simple tasks
  # id: o1            # Reasoning model (higher latency)
  # id: gpt-4-turbo   # 128K context window
```

## 4. Environment Setup
Create a `.env` file in the same directory as your script:
```
OPENAI_API_KEY=sk-your-key-here
```

Prompty uses python-dotenv to load `.env` automatically. Make sure `.env` is
in your `.gitignore`:
```sh
echo ".env" >> .gitignore
```
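Conceptually, loading a `.env` file just means parsing `KEY=VALUE` lines into the process environment, which is where `${env:OPENAI_API_KEY}` lookups read from. A minimal stdlib-only sketch of what python-dotenv does for you (the `load_env_file` helper is illustrative; use python-dotenv itself in real code):

```python
import os

def load_env_file(path: str) -> dict[str, str]:
    """Minimal .env parser: KEY=VALUE per line; blanks and '#' comments skipped."""
    values = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip().strip('"')
    os.environ.update(values)  # make values visible to ${env:...} resolution
    return values
```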
## 5. Tune Model Options

All options go under `model.options:` in the frontmatter:
```yaml
model:
  id: gpt-4o-mini
  provider: openai
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
  options:
    temperature: 0.3        # Lower = more deterministic
    maxOutputTokens: 2048   # Max tokens in the response
    topP: 0.9               # Nucleus sampling
    frequencyPenalty: 0.2   # Reduce repetition
    presencePenalty: 0.1    # Encourage new topics
    seed: 42                # Reproducible outputs
    stopSequences:          # Stop generation at these strings
      - "END"
```
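The camelCase names above are Prompty's spelling; the OpenAI Python SDK expects snake_case parameters (e.g. `topP` → `top_p`), and Prompty presumably performs this translation when it builds the request. As an illustration of the mapping only — the `to_openai_params` helper and its special-case table are assumptions, not Prompty internals:

```python
import re

# Renames that are more than plain camelCase → snake_case (assumed for
# illustration; Prompty's real mapping table may differ).
SPECIAL = {
    "maxOutputTokens": "max_tokens",
    "stopSequences": "stop",
}

def to_openai_params(options: dict) -> dict:
    """Translate frontmatter option names into OpenAI SDK parameter names."""
    params = {}
    for name, value in options.items():
        snake = SPECIAL.get(name) or re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()
        params[snake] = value
    return params

print(to_openai_params({"temperature": 0.3, "topP": 0.9, "maxOutputTokens": 2048}))
# → {'temperature': 0.3, 'top_p': 0.9, 'max_tokens': 2048}
```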