
Why Prompty?

Most LLM-powered applications today embed prompts as string literals scattered across application code. This creates real problems as projects grow:

  • No standard format. Every team invents its own way to store prompts — YAML files, JSON configs, raw strings, template engines — making it impossible to share or port prompts between projects.
  • Configuration is tangled with text. Model name, temperature, endpoint URL, and the actual prompt content all live in the same function call. Changing one means touching the other.
  • Testing is painful. To test a prompt you have to mock the entire LLM client, because there’s no clean boundary between “what we send” and “how we send it.”
  • Version control is noisy. Prompt changes are buried inside code diffs. Reviewing a temperature tweak means reading through business logic.

The net effect: prompt engineering and application development are welded together when they should be separate disciplines.

Prompty introduces a dedicated file format, .prompty, that makes prompts first-class assets in your project, just like configuration files or database migrations.

A .prompty file is plain markdown with YAML frontmatter. The frontmatter declares model configuration, input/output schemas, and tool definitions. The markdown body is the prompt itself, written with template syntax.

---
name: customer-support
model:
  id: gpt-4o
  provider: azure
  connection:
    kind: key
    endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    apiKey: ${env:AZURE_OPENAI_API_KEY}
  options:
    temperature: 0.3
inputs:
  - name: issue
    kind: string
---
system:
You are a support agent. Be concise and helpful.
user:
{{issue}}

Every .prompty file flows through four stages — render → parse → execute → process — and each stage is independently swappable. Use Jinja2 or Mustache for rendering. Parse role markers into structured messages. Execute against OpenAI, Azure, or any provider. Process the response into your desired format.

.prompty file → Renderer (Jinja2) → Parser (roles) → Executor (OpenAI) → Processor (extract) → Result
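To make the first two stages concrete, here is a stdlib-only sketch of render and parse over a trimmed version of the example file above. This is illustrative, not Prompty's actual API: the real runtimes use a proper YAML parser and Jinja2 or Mustache, and the helper names here are invented for the sketch.

```python
import re

# Trimmed .prompty example: YAML frontmatter plus a templated markdown body.
PROMPTY = """\
---
name: customer-support
model:
  id: gpt-4o
---
system:
You are a support agent. Be concise and helpful.
user:
{{issue}}
"""

def split_frontmatter(text):
    """Separate the YAML frontmatter from the markdown body."""
    _, frontmatter, body = text.split("---\n", 2)
    return frontmatter, body

def render(body, inputs):
    """Stage 1 (render): substitute {{name}} placeholders with input values."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(inputs[m.group(1)]), body)

def parse_roles(rendered):
    """Stage 2 (parse): turn role markers into structured chat messages."""
    messages = []
    for chunk in re.split(r"^(system|user|assistant):\s*$", rendered, flags=re.M):
        chunk = chunk.strip()
        if chunk in ("system", "user", "assistant"):
            messages.append({"role": chunk, "content": ""})
        elif chunk and messages:
            messages[-1]["content"] = chunk
    return messages

frontmatter, body = split_frontmatter(PROMPTY)
rendered = render(body, {"issue": "My order never arrived."})
messages = parse_roles(rendered)
# messages is now a list of {"role": ..., "content": ...} dicts, ready to
# hand to stage 3 (execute) against any chat-style provider API.
```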

The same .prompty file works across Python, TypeScript, and C# runtimes. Your prompt engineers write once; your application teams consume in whatever language their service uses.

|                 | Raw SDK            | LangChain / Semantic Kernel | Prompty                             |
| --------------- | ------------------ | --------------------------- | ----------------------------------- |
| Prompt format   | Strings in code    | Templates in code           | Dedicated .prompty file             |
| Model config    | Constructor args   | Chain config                | YAML frontmatter                    |
| Portability     | Single language    | Single language             | Python, TypeScript, C#              |
| Testability     | Mock entire client | Mock chain                  | Mock at any pipeline stage          |
| Version control | Diff code changes  | Diff code changes           | Diff prompt files directly          |
| IDE support     | Code editor        | Code editor                 | VS Code extension with live preview |

Prompty is a good fit when:

  • ✅ You want prompts treated as first-class assets — versioned, reviewed, and tested independently
  • ✅ Your team works across Python, TypeScript, and C# and needs a shared format
  • ✅ You want to test prompts in isolation without mocking entire LLM clients
  • ✅ You need built-in tracing and observability across the prompt lifecycle
  • ✅ You want a standard your team can adopt without committing to a full framework

Consider alternatives when:

  • ❌ You need a full orchestration framework with chains, memory, and retrieval built in — use LangChain or Semantic Kernel (but Prompty can still be your prompt layer inside them)

Whichever way you decide, four principles drive Prompty's design:
  • Prompts as code — store in version control, review in pull requests, test in CI. Prompts deserve the same engineering rigor as application code.
  • Separation of concerns — prompt text, model configuration, and application logic each live in the right place. Change one without touching the others.
  • Pluggable pipeline — swap renderers, parsers, executors, or processors without modifying your .prompty files. Add a new provider by registering an entry point.
  • Provider agnostic — the same .prompty file works with OpenAI, Azure OpenAI, Anthropic, or any provider you implement. Your prompts aren’t locked to a vendor.
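As an illustration of the pluggable-pipeline idea, the executor stage can be sketched as a provider registry: each provider registers a callable under a name, and execution dispatches on that name. The names here (EXECUTORS, register_executor, the "echo" provider) are hypothetical stand-ins, not Prompty's actual API, which wires in providers through package entry points.

```python
from typing import Callable, Dict, List

# Hypothetical provider registry illustrating a pluggable executor stage.
EXECUTORS: Dict[str, Callable] = {}

def register_executor(provider: str):
    """Register a callable as the executor for the given provider name."""
    def decorator(fn: Callable) -> Callable:
        EXECUTORS[provider] = fn
        return fn
    return decorator

@register_executor("echo")
def echo_executor(messages: List[dict], **options) -> str:
    # Stub provider: returns the last message's content instead of
    # calling a real model. Useful for testing prompts in isolation.
    return messages[-1]["content"]

def execute(provider: str, messages: List[dict], **options) -> str:
    """Dispatch the parsed messages to whichever provider is configured."""
    return EXECUTORS[provider](messages, **options)

reply = execute("echo", [{"role": "user", "content": "hello"}])
```

A new provider then becomes one registered function; no .prompty file needs to change, only its `provider` field.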

Ready to try it? Head to the Getting Started guide to install the runtime, write your first .prompty file, and run it.