Tracing & Observability

Every pipeline call in Prompty is automatically traced. The tracing system uses a pluggable backend architecture — register as many trace consumers as you need. Traces capture the full lifecycle of a prompt: loading, rendering, parsing, execution, and processing.

Out of the box, tracing is a zero-overhead no-op. It only becomes active when you register one or more backends.


flowchart TD
    subgraph Sources["Trace Sources"]
        Pipeline["Pipeline Stages\nload → render → parse → run"]
        Decorator["@trace decorator\nCaptures name, args, return,\nduration, errors"]
        UserFns["Your Functions\n@trace-decorated code"]
    end

    Pipeline --> Registry
    Decorator --> Registry
    UserFns --> Registry

    Registry["Tracer Registry\nTracer.add(name, callback)\nDispatches to all registered backends"]

    Registry --> Console["Console\nconsole_tracer\nPrints to stdout"]
    Registry --> JSON["JSON File\nPromptyTracer\n.tracy files to disk"]
    Registry --> OTel["OpenTelemetry\notel_tracer()\nSpans → any OTel collector"]

    style Sources fill:none,stroke:#3b82f6,stroke-dasharray:5 5
    style Pipeline fill:#eff6ff,stroke:#3b82f6,color:#1d4ed8
    style Decorator fill:#eff6ff,stroke:#3b82f6,color:#1d4ed8
    style UserFns fill:#eff6ff,stroke:#3b82f6,color:#1d4ed8
    style Registry fill:#1d4ed8,stroke:#1e40af,color:#fff
    style Console fill:#f0fdf4,stroke:#10b981,color:#065f46
    style JSON fill:#fffbeb,stroke:#f59e0b,color:#92400e
    style OTel fill:#eff6ff,stroke:#3b82f6,color:#1d4ed8

Register trace backends at application startup. Each backend is a callback function that receives structured trace data. You can register as many as you like — every trace event is dispatched to all registered backends.

from prompty import Tracer, PromptyTracer
# JSON file tracer — writes structured traces to disk
Tracer.add("json", PromptyTracer("./traces").tracer)
# Console tracer — prints to stdout
from prompty.tracing.tracer import console_tracer
Tracer.add("console", console_tracer)
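Because a backend is just a callback, you can also write your own — for example, one that forwards trace events to the standard logging module. The sketch below assumes each event arrives as a dict of trace fields; check the prompty API reference for the exact callback contract, as the field names here are illustrative.

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("traces")

def format_span(trace_data: dict) -> str:
    # Hypothetical field names ("name", "duration"); inspect a real trace
    # payload for the actual keys
    return f"span={trace_data.get('name')} duration={trace_data.get('duration')}"

def logging_tracer(trace_data: dict) -> None:
    """Custom backend sketch: forward each trace event to the logging module."""
    logger.info(format_span(trace_data))

# Registered like any built-in backend:
# Tracer.add("logging", logging_tracer)
```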

Wrap any function to include it in the trace tree. When a traced function calls other traced functions (including Prompty’s built-in pipeline), they appear as nested child spans.

import prompty
from prompty import trace

@trace
def my_business_logic(query: str) -> str:
    # Nested calls (including Prompty's own pipeline) appear as child spans
    result = prompty.execute("search.prompty", inputs={"q": query})
    return process(result)

The decorator automatically captures:

| Field | Description |
| --- | --- |
| Function name | The `__name__` of the decorated function |
| Arguments | All positional and keyword arguments |
| Return value | The function's return value |
| Duration | Wall-clock time from entry to exit |
| Exceptions | Any exception raised (re-raised after tracing) |
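The capture behavior can be pictured as a minimal decorator. This is a simplified sketch, not Prompty's actual implementation — `RECORDS` stands in for the real tracer registry:

```python
import functools
import time

RECORDS = []  # stand-in for the tracer registry

def trace_sketch(fn):
    """Simplified @trace: records name, args, result, duration, and errors."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        record = {"name": fn.__name__, "args": args, "kwargs": kwargs}
        start = time.perf_counter()
        try:
            record["result"] = fn(*args, **kwargs)
            return record["result"]
        except Exception as exc:
            record["exception"] = repr(exc)
            raise  # re-raised after tracing, as the table describes
        finally:
            record["duration"] = time.perf_counter() - start
            RECORDS.append(record)  # dispatch to registered backends
    return wrapper

@trace_sketch
def add(a: int, b: int) -> int:
    return a + b
```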

PromptyTracer is the built-in JSON file backend for local development and debugging. It writes one .tracy file per top-level trace to the specified output directory.

from prompty import Tracer, PromptyTracer
tracer = PromptyTracer("./traces")
Tracer.add("json", tracer.tracer)

Each .tracy file contains structured JSON with the full trace tree — every span, its duration, inputs, outputs, and any nested child spans. These files are human-readable and easy to inspect or post-process.
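Because the files are plain JSON, post-processing is straightforward. The sketch below walks a trace tree and prints an indented summary; the span field names ("name", "duration", "children") are assumptions — inspect a real .tracy file for the actual schema.

```python
import json

def walk_spans(span: dict, depth: int = 0):
    """Yield (depth, name, duration) for a span and its nested children."""
    yield depth, span.get("name"), span.get("duration")
    for child in span.get("children", []):  # assumed key for nested spans
        yield from walk_spans(child, depth + 1)

# Illustrative trace tree shaped like the description above, not a real dump
trace = {
    "name": "my_business_logic",
    "duration": 1.2,
    "children": [
        {"name": "execute", "duration": 1.1, "children": []},
    ],
}

for depth, name, duration in walk_spans(trace):
    print("  " * depth + f"{name}: {duration}s")
```

To inspect a real file, replace the inline dict with `trace = json.load(open(path))`.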


For production observability, Prompty integrates with OpenTelemetry. Each trace becomes a set of OTel spans, compatible with any collector — Azure Monitor, Jaeger, Zipkin, Datadog, and more.

from prompty.tracing.otel import otel_tracer
from prompty import Tracer
Tracer.add("otel", otel_tracer())

You can register multiple backends simultaneously — for example, OTel for production monitoring and console output for local debugging:

from prompty import Tracer, PromptyTracer
from prompty.tracing.tracer import console_tracer
from prompty.tracing.otel import otel_tracer
# Production: send to OTel collector
Tracer.add("otel", otel_tracer())
# Development: also log to console
Tracer.add("console", console_tracer)
# Debugging: also write .tracy files
Tracer.add("json", PromptyTracer("./traces").tracer)

Prompty automatically traces every pipeline stage. You don’t need to add @trace to use built-in tracing — it’s wired into the core pipeline.

| Pipeline Stage | What's Captured |
| --- | --- |
| load | File path, frontmatter parsing, legacy migration warnings |
| render | Template engine, input variables, rendered output |
| parse | Parser type, role markers found, message count |
| prepare | Combined render + parse, thread expansion |
| execute | Model, provider, API type, request payload |
| run | LLM call — token usage, latency, full response |
| process | Response extraction, content type, tool calls |

When the executor calls the LLM, the trace includes:

  • Model identifier — which model was called
  • Token usage — prompt tokens, completion tokens, total
  • Latency — round-trip time for the API call
  • Response — the full model response (content, tool calls, finish reason)
  • Streaming — if streaming, traces flush when the stream is fully consumed
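Trace data like this lends itself to aggregation — for example, summing token usage across an entire trace tree. A sketch, assuming spans are nested dicts with a "usage" field and a "children" list (field names are illustrative, not the actual schema):

```python
def total_tokens(span: dict) -> int:
    """Sum token usage over a span and all of its nested children."""
    total = span.get("usage", {}).get("total_tokens", 0)
    for child in span.get("children", []):
        total += total_tokens(child)
    return total

# Illustrative trace with two LLM calls
run_trace = {
    "name": "run",
    "usage": {"prompt_tokens": 120, "completion_tokens": 40, "total_tokens": 160},
    "children": [
        {"name": "run", "usage": {"total_tokens": 90}, "children": []},
    ],
}
```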

The Prompty TypeScript runtime (@prompty/core) includes the same pluggable tracing architecture. All the patterns shown above — Tracer.add(), trace(), PromptyTracer, and consoleTracer — have direct TypeScript equivalents; only the import paths differ.


Tracing is disabled by default. If you never call Tracer.add(), the tracing system is effectively a no-op with zero overhead — the decorator and pipeline hooks short-circuit immediately when no backends are registered.
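The short-circuit can be pictured as a registry that bails out before doing any work when empty. A simplified sketch, not Prompty's actual Tracer code:

```python
class TracerSketch:
    """Illustrative registry: dispatch is a no-op until a backend is added."""
    _backends = {}

    @classmethod
    def add(cls, name, callback):
        cls._backends[name] = callback

    @classmethod
    def dispatch(cls, record):
        # No backends registered → return immediately; this is the
        # "zero overhead" path the pipeline hooks take by default.
        if not cls._backends:
            return
        for callback in cls._backends.values():
            callback(record)  # every event goes to all registered backends
```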

To disable tracing after it has been enabled, simply omit the Tracer.add() calls on the next application start. There is no explicit "disable" API because the default state is already off.

# No Tracer.add() calls → tracing is a no-op
from prompty import load, run
agent = load("my-prompt.prompty")
result = run(agent, inputs={"query": "hello"})
# No traces produced — zero overhead