Cookbook

Copy-paste these .prompty files into your project, update the model block for your provider, and run. Each example is self-contained.


The simplest chat completion — a system message and a user question.

```
---
name: basic-chat
model:
  id: gpt-4o-mini
  provider: openai
  apiType: chat
inputs:
  - name: question
    kind: string
    default: What is the capital of France?
---
system:
You are a helpful assistant.

user:
{{question}}
```

```python
from prompty import load, prepare, run

agent = load("basic-chat.prompty")
result = run(agent, prepare(agent, {"question": "What is quantum computing?"}))
```

Embed examples directly in the instructions to guide the model’s output format.

```
---
name: few-shot
model:
  id: gpt-4o-mini
  apiType: chat
inputs:
  - name: text
    kind: string
    default: The movie was absolutely fantastic and I loved every minute.
---
system:
Classify the sentiment of the text as positive, negative, or neutral.
Examples:
- "I love this product!" → positive
- "Terrible experience, never again." → negative
- "It was okay, nothing special." → neutral

user:
{{text}}
```

```python
agent = load("few-shot.prompty")
result = run(agent, prepare(agent, {"text": "The food was cold and bland."}))
```

Configurable summary length via an input parameter.

```
---
name: summarize
model:
  id: gpt-4o-mini
  apiType: chat
  options:
    maxOutputTokens: 300
inputs:
  - name: text
    kind: string
  - name: length
    kind: string
    default: short
---
system:
Summarize the following text. Length: {{length}} (short = 1-2 sentences, medium = paragraph, long = detailed).

user:
{{text}}
```

```python
agent = load("summarize.prompty")
result = run(agent, prepare(agent, {"text": article, "length": "medium"}))  # article: the text to summarize
```

Analyze code and provide structured feedback.

````
---
name: code-review
model:
  id: gpt-4o
  apiType: chat
  options:
    temperature: 0.3
inputs:
  - name: code
    kind: string
  - name: language
    kind: string
    default: python
---
system:
You are a senior software engineer. Review the {{language}} code below.
Provide feedback on: bugs, performance, readability, and security.
Be concise — bullet points only.

user:
```{{language}}
{{code}}
```
````

```python
agent = load("code-review.prompty")
result = run(agent, prepare(agent, {"code": my_code, "language": "python"}))  # my_code: source to review
```

Uses outputs to constrain the LLM to return JSON matching a schema.

```
---
name: extract-entities
model:
  id: gpt-4o-mini
  apiType: chat
inputs:
  - name: text
    kind: string
    default: "John Smith works at Contoso in Seattle as a software engineer."
outputs:
  - name: name
    kind: string
    description: Person's full name
    required: true
  - name: company
    kind: string
    description: Company name
    required: true
  - name: location
    kind: string
    description: City or location
    required: true
  - name: role
    kind: string
    description: Job title
    required: true
---
system:
Extract entities from the text. Return structured JSON.

user:
{{text}}
```

```python
agent = load("extract-entities.prompty")
data = run(agent, prepare(agent, {"text": "Jane Doe is a PM at Microsoft in Redmond."}))
# data is a parsed dict: {"name": "Jane Doe", "company": "Microsoft", ...}
```

The outputs block generates an OpenAI response_format constraint — the model must return valid JSON matching the schema.
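The constraint corresponds to OpenAI's JSON-schema response format. A sketch of the payload the outputs block above implies (the exact shape prompty emits is an assumption here; the field names mirror OpenAI's structured-outputs API):

```python
# Illustrative response_format payload implied by the outputs block
# (assumed shape, not prompty's literal internal output).
schema = {
    "type": "json_schema",
    "json_schema": {
        "name": "extract_entities",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "name": {"type": "string", "description": "Person's full name"},
                "company": {"type": "string", "description": "Company name"},
                "location": {"type": "string", "description": "City or location"},
                "role": {"type": "string", "description": "Job title"},
            },
            "required": ["name", "company", "location", "role"],
            "additionalProperties": False,
        },
    },
}
```

Every `required: true` output lands in the schema's `required` list, which is what forces the model to emit all four fields.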


Use kind: thread to inject conversation history between system and user messages.

```
---
name: multi-turn
model:
  id: gpt-4o-mini
  apiType: chat
inputs:
  - name: question
    kind: string
    default: Hello
  - name: conversation
    kind: thread
---
system:
You are a helpful assistant. Be concise.
{{conversation}}

user:
{{question}}
```

```python
agent = load("multi-turn.prompty")
history = [
    {"role": "user", "content": "What is Python?"},
    {"role": "assistant", "content": "A programming language."},
]
result = run(agent, prepare(agent, {"question": "What about Java?", "conversation": history}))
```

The kind: thread input is expanded into message objects at its position in the template — enabling stateless multi-turn conversations.
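Conceptually, the expansion splices the history into a flat message list at the placeholder's position. A sketch of that assumed behavior (not prompty's internal code):

```python
# Sketch: how a `kind: thread` input is conceptually spliced into the
# final message list between the system and user messages (assumed behavior).
def expand_messages(system_text, thread, user_text):
    """Build the chat-message list: system, then history turns, then the new user turn."""
    return (
        [{"role": "system", "content": system_text}]
        + list(thread)
        + [{"role": "user", "content": user_text}]
    )

history = [
    {"role": "user", "content": "What is Python?"},
    {"role": "assistant", "content": "A programming language."},
]
messages = expand_messages(
    "You are a helpful assistant. Be concise.", history, "What about Java?"
)
# messages: system, two history turns, then the new question (4 entries)
```

Because the caller passes the full history on every call, the server side stays stateless.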


Use apiType: embedding to generate vector embeddings instead of chat completions.

```
---
name: embed
model:
  id: text-embedding-3-small
  provider: openai
  apiType: embedding
  connection:
    kind: key
    endpoint: ${env:OPENAI_ENDPOINT:https://api.openai.com/v1}
    apiKey: ${env:OPENAI_API_KEY}
inputs:
  - name: text
    kind: string
    default: Hello world
---
{{text}}
```

```python
agent = load("embed.prompty")
vectors = run(agent, prepare(agent, {"text": "Embed this sentence."}))
# vectors is a list of floats
```

No role markers needed — the body is the raw text to embed.
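The usual next step is comparing two embeddings. A minimal stdlib-only cosine-similarity helper, assuming `run` returns a plain list of floats as above:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length float vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Parallel vectors score 1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 0.0], [2.0, 0.0]))  # → 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 3.0]))  # → 0.0
```

Run the embed prompt once per document, store the vectors, and rank matches by this score.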


An agent with kind: function tools. The runtime loops until the model produces a final answer.

```
---
name: weather-agent
model:
  id: gpt-4o-mini
  apiType: chat
tools:
  - name: get_weather
    kind: function
    description: Get current weather for a city
    parameters:
      - name: city
        kind: string
        description: City name
        required: true
inputs:
  - name: question
    kind: string
    default: What's the weather in Tokyo?
---
system:
You are a helpful assistant with access to weather tools.

user:
{{question}}
```

```python
from prompty import load, invoke_agent, tool, bind_tools

@tool
def get_weather(city: str) -> str:
    return f"72°F and sunny in {city}"

agent = load("weather-agent.prompty")
tools = bind_tools(agent, [get_weather])
result = invoke_agent(agent, {"question": "Weather in Tokyo?"}, tools=tools)
```

The agent loop calls get_weather, appends the result, and re-queries the model for a natural language answer.
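The loop itself is small. A self-contained sketch with a stubbed model (the stub and helper names are illustrative assumptions, not the prompty API; `invoke_agent` does this for you):

```python
# Illustrative agent loop with a stubbed model: call the model, execute any
# requested tool, append the result, and repeat until a final answer arrives.
def fake_model(messages):
    """Stub: request a tool call first, answer once a tool result is present."""
    if any(m["role"] == "tool" for m in messages):
        return {"content": "It's 72°F and sunny in Tokyo."}
    return {"tool_call": {"name": "get_weather", "args": {"city": "Tokyo"}}}

def get_weather(city):
    return f"72°F and sunny in {city}"

def agent_loop(messages, tools):
    while True:
        reply = fake_model(messages)
        if "tool_call" not in reply:
            return reply["content"]                      # final natural-language answer
        call = reply["tool_call"]
        result = tools[call["name"]](**call["args"])     # execute the requested tool
        messages.append({"role": "tool", "content": result})  # append result, re-query

answer = agent_loop(
    [{"role": "user", "content": "Weather in Tokyo?"}],
    {"get_weather": get_weather},
)
# answer == "It's 72°F and sunny in Tokyo."
```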


Higher temperature and topP for more creative, varied output.

```
---
name: creative-writer
model:
  id: gpt-4o
  apiType: chat
  options:
    temperature: 1.2
    topP: 0.95
    maxOutputTokens: 500
inputs:
  - name: topic
    kind: string
    default: a robot discovering art for the first time
  - name: style
    kind: string
    default: short story
---
system:
You are a creative writer. Write a {{style}} about the given topic.
Be vivid, imaginative, and original.

user:
Topic: {{topic}}
```

```python
agent = load("creative-writer.prompty")
result = run(agent, prepare(agent, {"topic": "time travel paradox", "style": "poem"}))
```

Tuning temperature above 1.0 and topP near 1.0 gives the model more freedom for creative tasks.
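The reason: temperature divides the logits before the softmax, so values above 1.0 flatten the token distribution and spread probability onto lower-ranked tokens. A small numerical illustration:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Softmax over logits scaled by 1/temperature (numerically stabilized)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
low = softmax_with_temperature(logits, 0.3)   # sharply peaked on the top token
high = softmax_with_temperature(logits, 1.2)  # flatter: more sampling variety
# The top token's probability drops as temperature rises.
```

topP then samples from the smallest set of tokens whose probabilities sum to the threshold, so 0.95 keeps nearly the whole flattened distribution in play.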


Language pair controlled via inputs — one prompt handles any translation direction.

```
---
name: translator
model:
  id: gpt-4o-mini
  apiType: chat
  options:
    temperature: 0.3
inputs:
  - name: text
    kind: string
    default: Hello, how are you?
  - name: sourceLang
    kind: string
    default: English
  - name: targetLang
    kind: string
    default: Spanish
---
system:
You are a professional translator. Translate the text from {{sourceLang}} to {{targetLang}}.
Preserve tone, meaning, and formatting. Output only the translation.

user:
{{text}}
```

```python
agent = load("translator.prompty")
result = run(agent, prepare(agent, {"text": "Good morning!", "sourceLang": "English", "targetLang": "Japanese"}))
```

Low temperature (0.3) keeps translations faithful. Parameterizing the languages makes this a single reusable prompt for any language pair.