Getting Started
What is Prompty?
Prompty is a markdown file format for LLM prompts. A .prompty file
combines structured YAML frontmatter (model config, inputs, tools) with
a markdown body that becomes the prompt instructions. The runtime loads,
renders, parses, and executes these files.
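To make the layout concrete, here is a minimal sketch of how a .prompty document splits into frontmatter and body. This is illustrative only, not the library's actual loader (which also parses the YAML into a typed config):

```python
# Minimal sketch of the frontmatter/body split; not the library's loader.
SAMPLE = """---
name: greeting
model:
  id: gpt-4o-mini
---
system:
You are a friendly assistant.
"""

def split_prompty(text: str) -> tuple[str, str]:
    """Split a .prompty document into (frontmatter, body)."""
    # Frontmatter sits between the first two `---` delimiters.
    _, frontmatter, body = text.split("---", 2)
    return frontmatter.strip(), body.strip()

frontmatter, body = split_prompty(SAMPLE)
print(frontmatter.splitlines()[0])  # name: greeting
print(body.splitlines()[0])         # system:
```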
```
┌─────────────────────────────┐
│ .prompty file               │
│  ┌───────────────────────┐  │
│  │ --- (YAML frontmatter)│  │  → model, inputs, tools, template config
│  │ ---                   │  │
│  │ Markdown body         │  │  → prompt instructions with template syntax
│  └───────────────────────┘  │
└─────────────────────────────┘
```
Installation
Python:

```bash
# Core + Jinja2 renderer + OpenAI provider
pip install prompty[jinja2,openai]

# With Microsoft Foundry support
pip install prompty[jinja2,foundry]

# Everything
pip install prompty[all]
```

JavaScript:

```bash
# Core + OpenAI provider
npm install @prompty/core @prompty/openai

# With Microsoft Foundry support
npm install @prompty/core @prompty/foundry

# With Anthropic support
npm install @prompty/core @prompty/anthropic
```

C#:

```bash
dotnet add package Prompty.Core --prerelease
dotnet add package Prompty.OpenAI --prerelease  # or Prompty.Foundry, Prompty.Anthropic
```

Write Your First .prompty File
Create a file called greeting.prompty:

```yaml
---
name: greeting
description: A friendly greeting prompt
model:
  id: gpt-4o-mini
  provider: openai
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
inputs:
  - name: userName
    kind: string
    default: World
---
system:
You are a friendly assistant who greets people warmly.

user:
Say hello to {{userName}} and ask how their day is going.
```

Run It
Python:

```python
import prompty

# All-in-one: load → render → parse → execute → process
result = prompty.invoke(
    "greeting.prompty",
    inputs={"userName": "Jane"},
)
print(result)
# "Hello Jane! 👋 How's your day going so far?"
```

JavaScript:

```typescript
import { invoke } from "@prompty/core";
import "@prompty/openai";

const result = await invoke("greeting.prompty", {
  userName: "Jane",
});
console.log(result);
// "Hello Jane! 👋 How's your day going so far?"
```

C#:

```csharp
using Prompty.Core;

var result = await Pipeline.InvokeAsync("greeting.prompty", new() { ["userName"] = "Jane" });
Console.WriteLine(result);
// "Hello Jane! 👋 How's your day going so far?"
```

Step-by-Step Pipeline
For more control, use the pipeline stages individually:

Python:

```python
import prompty

# 1. Load the .prompty file → typed Prompty object
agent = prompty.load("greeting.prompty")

# 2. Render template + parse → list[Message]
messages = prompty.prepare(agent, inputs={"userName": "Jane"})

# 3. Call the LLM + process → clean result
result = prompty.run(agent, messages)
```

JavaScript:

```typescript
import { load, prepare, run } from "@prompty/core";
import "@prompty/openai";

const agent = await load("greeting.prompty");
const messages = await prepare(agent, { userName: "Jane" });
const result = await run(agent, messages);
```

C#:

```csharp
using Prompty.Core;

// 1. Load the .prompty file → Prompty
var agent = PromptyLoader.Load("greeting.prompty");

// 2. Render template + parse → messages
var rendered = await Pipeline.RenderAsync(agent, new() { ["userName"] = "Jane" });
var messages = await Pipeline.ParseAsync(agent, rendered);

// 3. Call the LLM + process → clean result
var response = await Pipeline.ExecuteAsync(agent, messages);
var result = await Pipeline.ProcessAsync(agent, response);
```

Environment Variables
Use ${env:VAR_NAME} in frontmatter to reference environment variables.
Create a .env file in your project root:

```bash
OPENAI_API_KEY=sk-your-key-here
```

Prompty automatically loads .env files.
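The substitution itself is plain string replacement. Here is a sketch of the idea (illustrative only; the runtime's actual resolver may behave differently, for example when a variable is missing):

```python
import os
import re

# Illustrative sketch of ${env:VAR_NAME} resolution; the runtime's actual
# resolver may differ (e.g. in how it handles missing variables).
ENV_PATTERN = re.compile(r"\$\{env:([A-Za-z_][A-Za-z0-9_]*)\}")

def resolve_env(value: str) -> str:
    """Replace each ${env:NAME} placeholder with that variable's value."""
    return ENV_PATTERN.sub(lambda m: os.environ.get(m.group(1), ""), value)

os.environ["OPENAI_API_KEY"] = "sk-your-key-here"
print(resolve_env("apiKey: ${env:OPENAI_API_KEY}"))  # apiKey: sk-your-key-here
```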
Complete Example
Here’s a full, tested example you can copy and run:
Python:

```python
"""Basic chat completion with OpenAI.

This example loads a .prompty file and runs a simple chat completion.
Used in: how-to/openai.mdx, getting-started/index.mdx
"""
from __future__ import annotations

from prompty import invoke, load

agent = load("chat-basic.prompty")
result = invoke(agent, inputs={"question": "What is Prompty?"})
print(result)
```

JavaScript:

```typescript
/**
 * Basic chat completion — load a .prompty file and invoke it.
 *
 * @example
 * ```bash
 * OPENAI_API_KEY=sk-... npx tsx examples/chat-basic.ts
 * ```
 */
import "@prompty/openai"; // auto-registers openai executor + processor
import { invoke } from "@prompty/core";
import { resolve } from "node:path";

const promptyFile = resolve(import.meta.dirname, "../../prompts/chat-basic.prompty");

export async function chatBasic(question?: string): Promise<string> {
  const result = await invoke(promptyFile, {
    question: question ?? "What is Prompty?",
  });
  return result as string;
}

// Run directly
const response = await chatBasic();
console.log(response);
```

C#:

```csharp
// Copyright (c) Microsoft. All rights reserved.

using Prompty.Core;
using Prompty.OpenAI;

namespace DocsExamples.Examples;

/// <summary>
/// Basic chat completion — load a .prompty file and invoke it.
/// </summary>
public static class ChatBasic
{
    /// <summary>
    /// Load a .prompty file and run the full pipeline (render → parse → execute → process).
    /// </summary>
    public static async Task<object> RunAsync(string promptyPath, Dictionary<string, object?>? inputs = null)
    {
        // One-time setup — registers renderers, parser, and OpenAI provider
        new PromptyBuilder()
            .AddOpenAI();

        // Full pipeline: load → prepare → run
        var result = await Pipeline.InvokeAsync(promptyPath, inputs);
        return result;
    }
}
```

Next Steps
- The .prompty File Format — understand the anatomy of a .prompty file
- Pipeline Architecture — how render → parse → execute → process works
- Schema Reference — all available frontmatter properties
- How-To Guides — practical recipes for common tasks
- Tutorials — hands-on walkthroughs from chat to production
- Why Prompty? — design philosophy and motivation