# Cookbook
Copy-paste these .prompty files into your project, update the model block for
your provider, and run. Each example is self-contained.
## 1. Basic Chat

The simplest chat completion: a system message and a user question.

```
---
name: basic-chat
model:
  id: gpt-4o-mini
  provider: openai
  apiType: chat
inputs:
  - name: question
    kind: string
    default: What is the capital of France?
---
system:
You are a helpful assistant.

user:
{{question}}
```

**Python**

```python
from prompty import load, prepare, run

agent = load("basic-chat.prompty")
result = run(agent, prepare(agent, {"question": "What is quantum computing?"}))
```

**TypeScript**

```typescript
import { invoke } from "@prompty/core";

const result = await invoke("basic-chat.prompty", { question: "What is quantum computing?" });
```

**C#**

```csharp
var result = await Pipeline.InvokeAsync("basic-chat.prompty", new Dictionary<string, object>
{
    ["question"] = "What is quantum computing?"
});
```

## 2. Few-Shot Prompting

Embed examples directly in the instructions to guide the model's output format.
```
---
name: few-shot
model:
  id: gpt-4o-mini
  apiType: chat
inputs:
  - name: text
    kind: string
    default: The movie was absolutely fantastic and I loved every minute.
---
system:
Classify the sentiment of the text as positive, negative, or neutral.

Examples:
- "I love this product!" → positive
- "Terrible experience, never again." → negative
- "It was okay, nothing special." → neutral

user:
{{text}}
```

**Python**

```python
agent = load("few-shot.prompty")
result = run(agent, prepare(agent, {"text": "The food was cold and bland."}))
```

**TypeScript**

```typescript
const result = await invoke("few-shot.prompty", { text: "The food was cold and bland." });
```

**C#**

```csharp
var result = await Pipeline.InvokeAsync(agent, new Dictionary<string, object>
{
    ["text"] = "The food was cold and bland."
});
```

## 3. Summarization

Configurable summary length via an input parameter.
```
---
name: summarize
model:
  id: gpt-4o-mini
  apiType: chat
  options:
    maxOutputTokens: 300
inputs:
  - name: text
    kind: string
  - name: length
    kind: string
    default: short
---
system:
Summarize the following text. Length: {{length}} (short = 1-2 sentences, medium = paragraph, long = detailed).

user:
{{text}}
```

**Python**

```python
agent = load("summarize.prompty")
result = run(agent, prepare(agent, {"text": article, "length": "medium"}))
```

**TypeScript**

```typescript
const result = await invoke("summarize.prompty", { text: article, length: "medium" });
```

**C#**

```csharp
var result = await Pipeline.InvokeAsync(agent, new Dictionary<string, object>
{
    ["text"] = article,
    ["length"] = "medium"
});
```

## 4. Code Review

Analyze code and provide structured feedback.
````
---
name: code-review
model:
  id: gpt-4o
  apiType: chat
  options:
    temperature: 0.3
inputs:
  - name: code
    kind: string
  - name: language
    kind: string
    default: python
---
system:
You are a senior software engineer. Review the {{language}} code below.
Provide feedback on: bugs, performance, readability, and security.
Be concise — bullet points only.

user:
```{{language}}
{{code}}
```
````

**Python**

```python
agent = load("code-review.prompty")
result = run(agent, prepare(agent, {"code": my_code, "language": "python"}))
```

**TypeScript**

```typescript
const result = await invoke("code-review.prompty", { code: myCode, language: "python" });
```

**C#**

```csharp
var result = await Pipeline.InvokeAsync(agent, new Dictionary<string, object>
{
    ["code"] = myCode,
    ["language"] = "python"
});
```

## 5. Data Extraction (Structured Output)

Uses `outputs` to constrain the LLM to return JSON matching a schema.
```
---
name: extract-entities
model:
  id: gpt-4o-mini
  apiType: chat
inputs:
  - name: text
    kind: string
    default: "John Smith works at Contoso in Seattle as a software engineer."
outputs:
  - name: name
    kind: string
    description: Person's full name
    required: true
  - name: company
    kind: string
    description: Company name
    required: true
  - name: location
    kind: string
    description: City or location
    required: true
  - name: role
    kind: string
    description: Job title
    required: true
---
system:
Extract entities from the text. Return structured JSON.

user:
{{text}}
```

**Python**

```python
agent = load("extract-entities.prompty")
data = run(agent, prepare(agent, {"text": "Jane Doe is a PM at Microsoft in Redmond."}))
# data is a parsed dict: {"name": "Jane Doe", "company": "Microsoft", ...}
```

**TypeScript**

```typescript
const data = await invoke("extract-entities.prompty", { text: "Jane Doe is a PM at Microsoft in Redmond." });
```

**C#**

```csharp
var data = await Pipeline.InvokeAsync(agent, new Dictionary<string, object>
{
    ["text"] = "Jane Doe is a PM at Microsoft in Redmond."
});
```

The `outputs` block generates an OpenAI `response_format` constraint: the model must return valid JSON matching the schema.
## 6. Multi-Turn Conversation

Use `kind: thread` to inject conversation history between system and user messages.
```
---
name: multi-turn
model:
  id: gpt-4o-mini
  apiType: chat
inputs:
  - name: question
    kind: string
    default: Hello
  - name: conversation
    kind: thread
---
system:
You are a helpful assistant. Be concise.

{{conversation}}

user:
{{question}}
```

**Python**

```python
history = [
    {"role": "user", "content": "What is Python?"},
    {"role": "assistant", "content": "A programming language."},
]
result = run(agent, prepare(agent, {"question": "What about Java?", "conversation": history}))
```

**TypeScript**

```typescript
const history = [
  { role: "user", content: "What is Python?" },
  { role: "assistant", content: "A programming language." },
];
const result = await invoke("multi-turn.prompty", { question: "What about Java?", conversation: history });
```

**C#**

```csharp
var history = new[]
{
    new Message { Role = "user", Parts = [new TextPart { Value = "What is Python?" }] },
    new Message { Role = "assistant", Parts = [new TextPart { Value = "A programming language." }] },
};
var result = await Pipeline.InvokeAsync(agent, new Dictionary<string, object>
{
    ["question"] = "What about Java?",
    ["conversation"] = history
});
```

The `kind: thread` input is expanded into message objects at its position in the template, enabling stateless multi-turn conversations.
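Because the prompt is stateless, the caller owns the history: after each turn completes, append the question and answer so the next call sees the full context. A plain-Python sketch of that bookkeeping (no prompty calls involved; the answers are hard-coded stand-ins):

```python
# Caller-side history management for a kind: thread input.
# After each completed turn, record both sides of the exchange.
history = []

def record_turn(history: list, question: str, answer: str) -> list:
    history.append({"role": "user", "content": question})
    history.append({"role": "assistant", "content": answer})
    return history

record_turn(history, "What is Python?", "A programming language.")
record_turn(history, "What about Java?", "Also a programming language.")
# history now holds 4 messages, ready to pass as the next conversation input
```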
## 7. Embedding Generation

Use `apiType: embedding` to generate vector embeddings instead of chat completions.

```
---
name: embed
model:
  id: text-embedding-3-small
  provider: openai
  apiType: embedding
  connection:
    kind: key
    endpoint: ${env:OPENAI_ENDPOINT:https://api.openai.com/v1}
    apiKey: ${env:OPENAI_API_KEY}
inputs:
  - name: text
    kind: string
    default: Hello world
---
{{text}}
```

**Python**

```python
agent = load("embed.prompty")
vectors = run(agent, prepare(agent, {"text": "Embed this sentence."}))
# vectors is a list of floats
```

**TypeScript**

```typescript
const vectors = await invoke("embed.prompty", { text: "Embed this sentence." });
```

**C#**

```csharp
var vectors = await Pipeline.InvokeAsync(agent, new Dictionary<string, object>
{
    ["text"] = "Embed this sentence."
});
```

No role markers are needed; the body is the raw text to embed.
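Embedding vectors are usually compared with cosine similarity. A stdlib-only sketch; the vectors below are toy stand-ins, not real model output:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional vectors standing in for embedding output:
v1 = [0.1, 0.9, 0.0]
v2 = [0.1, 0.9, 0.0]  # identical direction -> similarity 1.0
v3 = [0.9, 0.1, 0.0]  # different direction -> lower similarity
```

Identical vectors score 1.0; unrelated texts tend toward 0. Real embeddings from `text-embedding-3-small` have far more dimensions, but the comparison is the same.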
## 8. Tool-Calling Agent

An agent with `kind: function` tools. The runtime loops until the model produces a final answer.

```
---
name: weather-agent
model:
  id: gpt-4o-mini
  apiType: chat
tools:
  - name: get_weather
    kind: function
    description: Get current weather for a city
    parameters:
      - name: city
        kind: string
        description: City name
        required: true
inputs:
  - name: question
    kind: string
    default: What's the weather in Tokyo?
---
system:
You are a helpful assistant with access to weather tools.

user:
{{question}}
```

**Python**

```python
from prompty import load, invoke_agent, tool, bind_tools

@tool
def get_weather(city: str) -> str:
    return f"72°F and sunny in {city}"

agent = load("weather-agent.prompty")
tools = bind_tools(agent, [get_weather])
result = invoke_agent(agent, {"question": "Weather in Tokyo?"}, tools=tools)
```

**TypeScript**

```typescript
import { load, invokeAgent, tool, bindTools } from "@prompty/core";

const getWeather = tool((city: string) => `72°F and sunny in ${city}`, {
  name: "get_weather",
  description: "Get current weather",
  parameters: [{ name: "city", kind: "string", required: true }],
});

const agent = await load("weather-agent.prompty");
const result = await invokeAgent(agent, { question: "Weather in Tokyo?" }, { tools: bindTools(agent, [getWeather]) });
```

**C#**

```csharp
[Tool(Name = "get_weather", Description = "Get current weather")]
public string GetWeather(string city) => $"72°F and sunny in {city}";

var tools = ToolAttribute.BindTools(agent, new WeatherService());
var result = await Pipeline.InvokeAgentAsync(agent, inputs, tools: tools);
```

The agent loop calls `get_weather`, appends the result, and re-queries the model for a natural-language answer.
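The control flow of such a loop can be sketched in plain Python. This is an illustration of the general pattern, not the prompty runtime's implementation; `call_model` and the message shapes are hypothetical stand-ins:

```python
# Generic tool-calling loop: call the model, execute any requested tool,
# feed the result back, and stop once the model returns a final text answer.
def agent_loop(call_model, tools: dict, messages: list) -> str:
    while True:
        reply = call_model(messages)
        if "tool_call" not in reply:
            return reply["content"]  # final natural-language answer
        call = reply["tool_call"]
        result = tools[call["name"]](**call["arguments"])
        messages.append({"role": "tool", "name": call["name"], "content": result})

# Fake model for demonstration: requests the tool once, then answers.
def fake_model(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "get_weather", "arguments": {"city": "Tokyo"}}}
    return {"content": f"It is {messages[-1]['content']} in Tokyo."}

answer = agent_loop(
    fake_model,
    {"get_weather": lambda city: "72°F and sunny"},
    [{"role": "user", "content": "Weather in Tokyo?"}],
)
# answer: "It is 72°F and sunny in Tokyo."
```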
## 9. Creative Writing

Higher `temperature` and `topP` for more creative, varied output.

```
---
name: creative-writer
model:
  id: gpt-4o
  apiType: chat
  options:
    temperature: 1.2
    topP: 0.95
    maxOutputTokens: 500
inputs:
  - name: topic
    kind: string
    default: a robot discovering art for the first time
  - name: style
    kind: string
    default: short story
---
system:
You are a creative writer. Write a {{style}} about the given topic.
Be vivid, imaginative, and original.

user:
Topic: {{topic}}
```

**Python**

```python
agent = load("creative-writer.prompty")
result = run(agent, prepare(agent, {"topic": "time travel paradox", "style": "poem"}))
```

**TypeScript**

```typescript
const result = await invoke("creative-writer.prompty", { topic: "time travel paradox", style: "poem" });
```

**C#**

```csharp
var result = await Pipeline.InvokeAsync(agent, new Dictionary<string, object>
{
    ["topic"] = "time travel paradox",
    ["style"] = "poem"
});
```

Tuning `temperature` above 1.0 and `topP` near 1.0 gives the model more freedom for creative tasks.
## 10. Translation

Language pair controlled via inputs: one prompt handles any translation direction.

```
---
name: translator
model:
  id: gpt-4o-mini
  apiType: chat
  options:
    temperature: 0.3
inputs:
  - name: text
    kind: string
    default: Hello, how are you?
  - name: sourceLang
    kind: string
    default: English
  - name: targetLang
    kind: string
    default: Spanish
---
system:
You are a professional translator. Translate the text from {{sourceLang}} to {{targetLang}}.
Preserve tone, meaning, and formatting. Output only the translation.

user:
{{text}}
```

**Python**

```python
agent = load("translator.prompty")
result = run(agent, prepare(agent, {"text": "Good morning!", "sourceLang": "English", "targetLang": "Japanese"}))
```

**TypeScript**

```typescript
const result = await invoke("translator.prompty", { text: "Good morning!", sourceLang: "English", targetLang: "Japanese" });
```

**C#**

```csharp
var result = await Pipeline.InvokeAsync(agent, new Dictionary<string, object>
{
    ["text"] = "Good morning!",
    ["sourceLang"] = "English",
    ["targetLang"] = "Japanese"
});
```

Low temperature (0.3) keeps translations faithful. Parameterizing the languages makes this a single reusable prompt for any language pair.