Core Concepts

This section covers the fundamental concepts behind Prompty. Each page explains a key aspect of how Prompty processes .prompty files into LLM results.

  • Connections — how Prompty authenticates with LLM providers (API keys, Microsoft Entra ID, registries)
  • Conversation History (Threads) — multi-turn conversations via kind: thread inputs, nonce-based expansion, and best practices
  • Tools — function, MCP, OpenAPI, and custom tool definitions
  • Tracing & Observability — pluggable tracer backends, @trace decorator, OpenTelemetry
  • Streaming — real-time chunk delivery, PromptyStream, async iteration
  • Structured Output — typed JSON responses via output schemas
  • Agentic Concepts — turns, loop controls, guardrails, context compaction, steering, cancellation, and retries
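All of the concepts above operate on a .prompty file: YAML frontmatter describing the model and inputs, followed by a templated prompt body. As a rough orientation before diving into the pages above, here is a minimal sketch (the specific field values, such as the deployment name, are illustrative, not prescriptive):

```
---
name: Basic Prompt
description: A minimal example prompt
model:
  api: chat
  configuration:
    type: azure_openai
    azure_deployment: gpt-4o
sample:
  question: What is the capital of France?
---
system:
You are a helpful assistant.

user:
{{question}}
```

The frontmatter's model configuration ties into Connections, the inputs feed Threads and Tools, and executing the file produces the results that Tracing, Streaming, and Structured Output shape and observe.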