Observability & Tracing
Observability is crucial for understanding how your prompts execute, debugging issues, and optimizing performance. Prompty provides a flexible tracing system that helps you monitor every aspect of prompt execution.
Overview
Prompty’s tracing system captures detailed information about:
- Prompt loading and parsing
- Template rendering with inputs
- Model API calls and responses
- Execution timing and performance
- Custom function executions
Built-in Tracers
Console Tracer
The simplest tracer outputs traces directly to the console:
```python
import prompty
import prompty.azure
from prompty.tracer import Tracer, console_tracer

# Add console tracer
Tracer.add("console", console_tracer)

# Execute with tracing
response = prompty.execute("path/to/prompt.prompty")
```

Output example:

```
Starting execute
inputs:
{
  "customer_name": "John Doe",
  "question": "What are your hours?"
}
result:
{
  "content": "Our business hours are 9 AM to 5 PM, Monday through Friday."
}
Ending execute
```
JSON File Tracer
The `PromptyTracer` writes detailed traces to JSON files:
```python
import prompty
import prompty.azure
from prompty.tracer import PromptyTracer, Tracer

# Create JSON tracer
json_tracer = PromptyTracer(output_dir="./traces")
Tracer.add("json", json_tracer.tracer)

# Execute with file tracing
response = prompty.execute("prompt.prompty")
```

This creates timestamped `.tracy` files containing complete execution traces.
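Because the `.tracy` files are JSON documents (the trace-analysis example at the end of this page relies on this), they can be inspected programmatically. A small sketch for pulling the most recent trace — `load_latest_trace` is a hypothetical helper, not part of Prompty:

```python
import glob
import json
import os

def load_latest_trace(trace_dir: str):
    """Return the most recently written trace, or None if none exist.

    A sketch that assumes each .tracy file is a JSON document, as the
    trace-analysis example later in this guide also assumes.
    """
    files = sorted(glob.glob(f"{trace_dir}/*.tracy"), key=os.path.getmtime)
    if not files:
        return None
    with open(files[-1]) as f:
        return json.load(f)

latest = load_latest_trace("./traces")
if latest is not None:
    print(latest.get("duration_ms"))
```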
Custom Tracers
Simple Custom Tracer
Create your own tracer using context managers:
```python
import contextlib
from typing import Any, Callable, Iterator

from prompty.tracer import Tracer

@contextlib.contextmanager
def custom_tracer(name: str) -> Iterator[Callable[[str, Any], None]]:
    print(f"🚀 Starting {name}")
    traces = {}
    try:
        yield lambda key, value: traces.update({key: value})
    finally:
        print(f"📊 {name} completed with {len(traces)} traces")
        # Custom processing of traces here

# Register the tracer
Tracer.add("custom", custom_tracer)
```
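The contract a tracer must satisfy can be exercised without running a prompt at all: it is just a context manager that yields a `(key, value)` recording function. A standalone sketch — `collecting_tracer` and `last_trace` are illustrative names, not Prompty APIs:

```python
import contextlib
from typing import Any, Callable, Iterator

# Illustration of the tracer contract: a context manager yielding
# a (key, value) recorder (names here are hypothetical).
last_trace: dict = {}

@contextlib.contextmanager
def collecting_tracer(name: str) -> Iterator[Callable[[str, Any], None]]:
    collected: dict = {}
    try:
        yield lambda key, value: collected.update({key: value})
    finally:
        last_trace[name] = collected

with collecting_tracer("demo") as record:
    record("inputs", {"question": "What are your hours?"})
    record("result", "9 AM to 5 PM")

print(last_trace["demo"]["result"])  # prints: 9 AM to 5 PM
```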
Class-Based Tracer
For more complex tracing logic:
```python
import contextlib
from datetime import datetime

from prompty.tracer import Tracer

class DatabaseTracer:
    def __init__(self, connection_string: str):
        self.connection_string = connection_string
        self.tracer = self._tracer

    @contextlib.contextmanager
    def _tracer(self, name: str):
        trace_id = datetime.now().isoformat()
        trace_data = {"id": trace_id, "name": name, "data": {}}
        try:
            yield lambda key, value: trace_data["data"].update({key: value})
        finally:
            # Save to database
            self._save_trace(trace_data)

    def _save_trace(self, trace_data):
        # Implement database saving logic
        pass

# Use the tracer
db_tracer = DatabaseTracer("sqlite://traces.db")
Tracer.add("database", db_tracer.tracer)
```
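One way to fill in `_save_trace` is with the standard-library `sqlite3` module. The sketch below assumes a `sqlite://<path>` connection-string format and a simple table schema — both are illustration choices, not something Prompty prescribes:

```python
import json
import sqlite3

def save_trace(connection_string: str, trace_data: dict) -> None:
    # Hypothetical "sqlite://<path>" connection strings; adapt for your database
    db_path = connection_string.removeprefix("sqlite://")
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS traces "
            "(id TEXT PRIMARY KEY, name TEXT, data TEXT)"
        )
        conn.execute(
            "INSERT OR REPLACE INTO traces VALUES (?, ?, ?)",
            (trace_data["id"], trace_data["name"], json.dumps(trace_data["data"])),
        )
```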
Function Tracing
Use the `@trace` decorator to trace your custom functions:
```python
import prompty
import prompty.azure
from prompty.tracer import trace

@trace
def get_customer_data(customer_id: str):
    # Simulate database lookup
    return {
        "id": customer_id,
        "name": "Alice Johnson",
        "tier": "Premium"
    }

@trace
def process_request(customer_id: str, prompt_path: str):
    # Get customer data (traced)
    customer = get_customer_data(customer_id)

    # Execute prompt (traced)
    response = prompty.execute(
        prompt_path,
        inputs={"customer": customer}
    )

    return {"customer_id": customer_id, "response": response}

# Execute - all functions will be traced
result = process_request("123", "customer_support.prompty")
```
OpenTelemetry Integration
Integrate with OpenTelemetry for distributed tracing:
```python
import contextlib
import json

from opentelemetry import trace as oteltrace
from opentelemetry.exporter.jaeger.thrift import JaegerExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from prompty.tracer import Tracer

# Setup OpenTelemetry
oteltrace.set_tracer_provider(TracerProvider())
jaeger_exporter = JaegerExporter(
    agent_host_name="localhost",
    agent_port=6831,
)
span_processor = BatchSpanProcessor(jaeger_exporter)
oteltrace.get_tracer_provider().add_span_processor(span_processor)

@contextlib.contextmanager
def otel_tracer(name: str):
    tracer = oteltrace.get_tracer("prompty")
    with tracer.start_as_current_span(name) as span:
        yield lambda key, value: span.set_attribute(
            key, json.dumps(value).replace("\n", "")
        )

# Add to Prompty tracing
Tracer.add("opentelemetry", otel_tracer)
```
Trace Data Structure
Traces contain structured information about execution:
```json
{
  "timestamp": "2024-01-15T10:30:00Z",
  "signature": {
    "function": "execute",
    "args": ["prompt.prompty"],
    "kwargs": {"inputs": {"name": "Alice"}}
  },
  "inputs": {
    "name": "Alice",
    "context": "customer_support"
  },
  "result": {
    "content": "Hello Alice! How can I help you today?",
    "usage": {
      "prompt_tokens": 45,
      "completion_tokens": 12,
      "total_tokens": 57
    }
  },
  "duration_ms": 1250,
  "model_config": {
    "type": "azure_openai",
    "deployment": "gpt-35-turbo",
    "temperature": 0.7
  }
}
```
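Fields like `duration_ms` and `result.usage` make per-request metrics a one-liner once a trace is parsed. For example, throughput in tokens per second, using the sample values above:

```python
# A parsed trace (sample values from the structure shown above)
trace = {
    "duration_ms": 1250,
    "result": {"usage": {"total_tokens": 57}},
}

# Convert ms to seconds and divide total tokens by elapsed time
tokens_per_second = trace["result"]["usage"]["total_tokens"] / (trace["duration_ms"] / 1000)
print(f"{tokens_per_second:.1f} tokens/s")  # 57 tokens in 1.25 s -> 45.6 tokens/s
```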
Filtering and Sampling
Control tracing overhead with filtering:
```python
import contextlib
import random

from prompty.tracer import Tracer

@contextlib.contextmanager
def sampling_tracer(name: str):
    # Only trace 10% of requests
    if random.random() < 0.1:
        print(f"Tracing {name}")
        yield lambda key, value: print(f"{key}: {value}")
    else:
        # No-op tracer
        yield lambda key, value: None

Tracer.add("sampling", sampling_tracer)
```
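Sampling by probability is one option; filtering by trace name is another. The sketch below records only the names you opt into — `TRACED_NAMES` and the `captured` dict are illustrative, not Prompty features:

```python
import contextlib

# Name-based filtering: only record selected trace names
# (this set is an example, not a Prompty default)
TRACED_NAMES = {"execute"}
captured: dict = {}

@contextlib.contextmanager
def name_filter_tracer(name: str):
    if name in TRACED_NAMES:
        entries: dict = {}
        try:
            yield lambda key, value: entries.update({key: value})
        finally:
            captured[name] = entries
    else:
        # No-op for everything else
        yield lambda key, value: None

with name_filter_tracer("execute") as record:
    record("inputs", {"q": "hi"})
with name_filter_tracer("render") as record:
    record("inputs", {"q": "hi"})

print(sorted(captured))  # prints: ['execute']
```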
Security Considerations
Prompty automatically sanitizes sensitive data in traces:
```python
# Sensitive keys are automatically masked
configuration = {
    "api_key": "sk-1234567890",  # Will be masked as "**********"
    "secret": "my-secret",       # Will be masked as "**********"
    "model": "gpt-3.5-turbo"     # Will remain visible
}
```

Override sanitization if needed:

```python
from typing import Any

from prompty.tracer import sanitize

def custom_sanitize(key: str, value: Any) -> Any:
    if "internal" in key.lower():
        return "[REDACTED]"
    return sanitize(key, value)  # Use default sanitization
```
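If you forward traces to your own logging pipeline, it is worth reproducing this masking there too. A self-contained sketch — the marker list and mask string below are assumptions for illustration, not Prompty's exact rules:

```python
# Key substrings treated as sensitive (an assumed list, not Prompty's)
SENSITIVE_MARKERS = ("api_key", "secret", "password", "token")

def mask_sensitive(key: str, value):
    # Replace the value when the key looks sensitive, else pass it through
    if any(marker in key.lower() for marker in SENSITIVE_MARKERS):
        return "**********"
    return value

config = {"api_key": "sk-1234567890", "model": "gpt-3.5-turbo"}
masked = {k: mask_sensitive(k, v) for k, v in config.items()}
print(masked)  # prints: {'api_key': '**********', 'model': 'gpt-3.5-turbo'}
```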
Performance Monitoring
Monitor execution performance:
```python
import time

import prompty
import prompty.azure
from prompty.tracer import trace

@trace
def timed_execution(prompt_path: str):
    start_time = time.time()

    result = prompty.execute(prompt_path)

    execution_time = time.time() - start_time
    print(f"Execution took {execution_time:.2f} seconds")

    return result
```
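A single timing is noisy; collecting several runs and summarizing them with the standard library gives a steadier picture. The sample values below are made up:

```python
import statistics

# Durations (seconds) gathered from repeated timed_execution runs (example values)
timings = [1.21, 1.35, 0.98, 1.50, 1.12]

print(f"mean:   {statistics.mean(timings):.2f}s")
print(f"median: {statistics.median(timings):.2f}s")
print(f"stdev:  {statistics.stdev(timings):.2f}s")
```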
Debugging with Traces
Use traces for debugging prompt issues:
```python
import prompty
import prompty.azure
from prompty.tracer import Tracer, console_tracer

# Enable detailed tracing for debugging
Tracer.add("debug", console_tracer)

try:
    response = prompty.execute("problematic_prompt.prompty")
except Exception as e:
    print(f"Error occurred: {e}")
    # Check trace output for debugging information
```
Best Practices
Trace Analysis
Analyze traces to optimize your prompts:
```python
import glob
import json

def analyze_traces(trace_dir: str):
    traces = []
    for file in glob.glob(f"{trace_dir}/*.tracy"):
        with open(file) as f:
            traces.append(json.load(f))

    # Analyze token usage
    total_tokens = sum(
        t.get("result", {}).get("usage", {}).get("total_tokens", 0)
        for t in traces
    )
    avg_tokens = total_tokens / len(traces) if traces else 0

    print(f"Analyzed {len(traces)} traces")
    print(f"Average tokens per request: {avg_tokens:.2f}")

    # Analyze response times
    durations = [t.get("duration_ms", 0) for t in traces]
    avg_duration = sum(durations) / len(durations) if durations else 0
    print(f"Average response time: {avg_duration:.2f}ms")

# Run analysis
analyze_traces("./traces")
```
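Averages hide tail latency; the 95th percentile of `duration_ms` is often more actionable. A sketch using `statistics.quantiles` — the sample durations are made up:

```python
import statistics

# duration_ms values collected from traces (example data)
durations_ms = [850, 920, 1010, 1250, 1400, 2100, 980, 1100]

# n=20 yields 19 cut points; index 18 is the 95th percentile.
# method="inclusive" interpolates within the observed range.
p95 = statistics.quantiles(durations_ms, n=20, method="inclusive")[18]
print(f"p95 latency: {p95:.0f}ms")  # prints: p95 latency: 1855ms
```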
Next Steps
- Learn about debugging Prompty for troubleshooting
- Explore CLI Usage for command-line tracing options
- Check out Advanced Configuration for complex setups