# Structured Output

## Overview
By default, an LLM returns free-form text. When you define an `outputSchema`
in your `.prompty` frontmatter, the runtime converts it to the provider's
`response_format` parameter so the model is constrained to return valid JSON
matching your schema. The processor then automatically parses the JSON string into
a Python `dict` or JavaScript object; no manual `JSON.parse()` is needed.
```mermaid
flowchart LR
    A["outputSchema\n(YAML)"] --> B["_output_schema\n_to_wire()\nconversion"]
    B --> C["response_format\njson_schema\n→ sent to LLM"]
    C --> D["LLM response\nvalid JSON"]
    D --> E["Processor\nJSON.parse"]
    E --> F["Typed\ndict / obj"]
    style A fill:#dbeafe,stroke:#3b82f6,color:#1e40af
    style B fill:#bfdbfe,stroke:#1d4ed8,color:#1e3a8a
    style C fill:#fef3c7,stroke:#f59e0b,color:#92400e
    style D fill:#e5e7eb,stroke:#6b7280,color:#374151
    style E fill:#a7f3d0,stroke:#10b981,color:#065f46
    style F fill:#d1fae5,stroke:#10b981,color:#065f46
```
## Defining Output Schema

Add an `outputSchema` block to your frontmatter with the properties you expect in
the response. Each property has a `kind` (type) and an optional `description`.
```yaml
---
name: weather-report
model:
  id: gpt-4o-mini
  provider: openai
  apiType: chat
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
outputSchema:
  properties:
    - name: city
      kind: string
      description: The city name
    - name: temperature
      kind: integer
      description: Temperature in degrees Fahrenheit
    - name: conditions
      kind: string
      description: Current weather conditions
---
system:
You are a weather assistant. Return the current weather for the requested city.

user:
What's the weather in {{city}}?
```

The runtime converts this to an OpenAI-compatible `response_format` with
`type: "json_schema"`, ensuring the LLM must return a JSON object with exactly
those three fields.
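Because the schema pins down both the keys and their types, you can sanity-check a parsed reply with a few lines of plain Python. A minimal sketch; the helper name `check_weather_shape` is ours, not part of the prompty runtime:

```python
import json

# Expected keys and Python types implied by the frontmatter schema above.
EXPECTED = {"city": str, "temperature": int, "conditions": str}

def check_weather_shape(reply: dict) -> bool:
    """Return True if the reply has exactly the declared keys with the right types."""
    return (set(reply) == set(EXPECTED)
            and all(isinstance(reply[k], t) for k, t in EXPECTED.items()))

raw = '{"city": "Seattle", "temperature": 62, "conditions": "Partly cloudy"}'
print(check_weather_shape(json.loads(raw)))  # True
```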
## How It Works

Under the hood, the executor performs three steps when `outputSchema` is present:
1. **Schema conversion**: `_output_schema_to_wire()` translates each `Property` (with `kind`, `description`, `required`) into a standard JSON Schema object. The result is wrapped in an OpenAI `response_format` parameter:

    ```json
    {
      "type": "json_schema",
      "json_schema": {
        "name": "output_schema",
        "strict": true,
        "schema": {
          "type": "object",
          "properties": {
            "city": { "type": "string", "description": "The city name" },
            "temperature": { "type": "integer", "description": "Temperature in degrees Fahrenheit" },
            "conditions": { "type": "string", "description": "Current weather conditions" }
          },
          "required": ["city", "temperature", "conditions"],
          "additionalProperties": false
        }
      }
    }
    ```

2. **LLM constrained generation**: the model is forced to return valid JSON matching the schema. No malformed output, no missing fields.

3. **Processor auto-parse**: the processor detects that `outputSchema` is defined and calls `json.loads()` on the response content, returning a native `dict` (Python) or object (JavaScript) instead of a raw string.
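The conversion in step 1 can be sketched in plain Python. This is an illustrative reimplementation, not the runtime's actual `_output_schema_to_wire()`; the property dicts mirror the frontmatter shape above:

```python
# Illustrative sketch of the schema-to-wire conversion (step 1); not the
# runtime's real implementation.
def schema_to_wire(properties: list[dict]) -> dict:
    json_props = {}
    for p in properties:
        entry = {"type": p["kind"]}
        if "description" in p:
            entry["description"] = p["description"]
        json_props[p["name"]] = entry
    return {
        "type": "json_schema",
        "json_schema": {
            "name": "output_schema",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": json_props,
                # OpenAI strict mode expects every property listed as required
                "required": [p["name"] for p in properties],
                "additionalProperties": False,
            },
        },
    }

wire = schema_to_wire([
    {"name": "city", "kind": "string", "description": "The city name"},
    {"name": "temperature", "kind": "integer"},
])
print(wire["json_schema"]["schema"]["required"])  # ['city', 'temperature']
```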
With structured output, `execute()` returns a parsed dictionary/object directly:

```python
from prompty import execute

result = execute("weather.prompty", inputs={"city": "Seattle"})

# result is already a dict; no JSON.parse needed
print(result["city"])         # "Seattle"
print(result["temperature"])  # 62
print(result["conditions"])   # "Partly cloudy"
print(type(result))           # <class 'dict'>
```

```python
from prompty import execute_async

result = await execute_async("weather.prompty", inputs={"city": "Seattle"})
print(result["temperature"])  # 62
```

```typescript
import { execute } from "@prompty/core";

const result = await execute("weather.prompty", { city: "Seattle" });

// result is already a parsed object
console.log(result.city);        // "Seattle"
console.log(result.temperature); // 62
console.log(result.conditions);  // "Partly cloudy"
```

## Without Output Schema
If you don't define `outputSchema`, the processor returns the raw text content
from the LLM response. You can still ask the model to return JSON in your prompt
instructions, but there's no schema enforcement or automatic parsing.
```python
import json
from prompty import execute

# outputSchema defined: dict returned automatically
result = execute("weather.prompty", inputs={"city": "Seattle"})
print(type(result))           # <class 'dict'>
print(result["temperature"])  # 62

# No outputSchema: raw string returned
result = execute("chat.prompty", inputs={"city": "Seattle"})
print(type(result))  # <class 'str'>

# You'd need to parse manually:
data = json.loads(result)  # may fail if the LLM didn't return valid JSON
```

## Nested Objects
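When no schema is enforced, models often wrap JSON in a markdown code fence or surround it with prose, so a bare `json.loads()` can fail. A defensive parse can strip a fence before giving up; this is a sketch, and `parse_loose_json` is a hypothetical helper, not part of prompty:

```python
import json

def parse_loose_json(text: str):
    """Best-effort parse of an unconstrained model reply.

    Strips a surrounding ``` fence if present; returns None when
    no valid JSON can be recovered.
    """
    stripped = text.strip()
    if stripped.startswith("```"):
        # Drop the opening fence line, then cut at the closing ```
        lines = stripped.splitlines()
        stripped = "\n".join(lines[1:]).rsplit("```", 1)[0]
    try:
        return json.loads(stripped)
    except json.JSONDecodeError:
        return None

print(parse_loose_json('```json\n{"temperature": 62}\n```'))  # {'temperature': 62}
print(parse_loose_json("Sorry, I can't help."))               # None
```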
For complex responses, use `kind: object` with nested `properties` to define
multi-level schemas:

```yaml
---
name: detailed-weather
model:
  id: gpt-4o-mini
  provider: openai
  apiType: chat
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
outputSchema:
  properties:
    - name: city
      kind: string
    - name: current
      kind: object
      properties:
        - name: temperature
          kind: integer
          description: Temperature in °F
        - name: humidity
          kind: integer
          description: Humidity percentage
        - name: conditions
          kind: string
    - name: forecast
      kind: array
      description: Next 3 days
---
system:
Return current weather and a 3-day forecast for the requested city.

user:
Weather for {{city}}?
```

The result is a nested dictionary:
```python
result = execute("detailed-weather.prompty", inputs={"city": "Portland"})

print(result["city"])                    # "Portland"
print(result["current"]["temperature"])  # 58
print(result["current"]["humidity"])     # 72
print(result["forecast"])                # [{"day": "Mon", ...}, ...]
```
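If you prefer attribute access over nested key lookups, the returned dict can be mapped onto dataclasses. A sketch assuming the shape declared in the schema above; the class names and `to_weather` helper are ours, not part of prompty:

```python
from dataclasses import dataclass

@dataclass
class Current:
    temperature: int
    humidity: int
    conditions: str

@dataclass
class Weather:
    city: str
    current: Current
    forecast: list

def to_weather(result: dict) -> Weather:
    """Map a parsed execute() result onto typed dataclasses."""
    return Weather(
        city=result["city"],
        current=Current(**result["current"]),
        forecast=result["forecast"],
    )

# e.g. with a dict shaped like the execute() result above:
sample = {
    "city": "Portland",
    "current": {"temperature": 58, "humidity": 72, "conditions": "Rain"},
    "forecast": [{"day": "Mon"}],
}
w = to_weather(sample)
print(w.current.temperature)  # 58
```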