# Troubleshooting
When something goes wrong, this page will help you diagnose and fix it fast. Errors are grouped by category — jump to the section that matches your symptom.
## Installation Issues

| Error | Cause | Fix |
|---|---|---|
| `ModuleNotFoundError: No module named 'openai'` | Missing optional dependency (Python) | `pip install prompty[openai]` |
| `ModuleNotFoundError: No module named 'jinja2'` | Missing renderer dependency (Python) | `pip install prompty[jinja2]` |
| `ModuleNotFoundError: No module named 'chevron'` | Missing Mustache renderer (Python) | `pip install prompty[mustache]` |
| `ModuleNotFoundError: No module named 'azure.identity'` | Missing Azure dependency (Python) | `pip install prompty[azure]` |
| `Cannot find module '@prompty/openai'` | Missing npm package (TypeScript) | `npm install @prompty/openai` |
| `Could not load file or assembly 'Prompty.OpenAI'` | Missing NuGet package (C#) | `dotnet add package Prompty.OpenAI` |
| `Could not load file or assembly 'Prompty.Foundry'` | Missing NuGet package (C#) | `dotnet add package Prompty.Foundry` |
## Loading Errors

### File Not Found

```
FileNotFoundError: Prompty file not found: path/to/chat.prompty
```

**Cause:** The path passed to `load()` doesn't resolve to an existing file.

**Fix:** Verify the file exists, and use an absolute path or check your working directory:

```python
from pathlib import Path

print(Path("chat.prompty").resolve())  # see what Prompty will look for
```

```typescript
import { resolve } from "node:path";

console.log(resolve("chat.prompty")); // see what Prompty will look for
```

```csharp
Console.WriteLine(Path.GetFullPath("chat.prompty")); // see what Prompty will look for
```

### Missing Environment Variable
```
ValueError: Environment variable 'OPENAI_API_KEY' not set for key 'apiKey'
```

**Cause:** Your `.prompty` file uses `${env:VAR}` but the variable isn't defined.

**Fix:** Set it in your shell or add it to a `.env` file in your project root:

```ini
OPENAI_API_KEY=sk-...
AZURE_OPENAI_ENDPOINT=https://myresource.openai.azure.com/
AZURE_OPENAI_API_KEY=abc123
```

### Invalid YAML Frontmatter
```
ValueError: Invalid Markdown format: Missing or malformed frontmatter.
```

**Cause:** The YAML block between `---` delimiters has a syntax error.

Common mistakes:

- Using tabs instead of spaces for indentation
- Unclosed quotes: `description: "This is broken`
- Missing the closing `---` delimiter
- Colons in unquoted strings: `description: Use key: value` (wrap the value in quotes)
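If you want to pre-check a file for these pitfalls, a stdlib-only sketch (not Prompty's actual parser; `lint_frontmatter` is a hypothetical helper) might look like:

```python
def lint_frontmatter(text: str) -> list[str]:
    """Rough checks for common frontmatter mistakes.

    Stdlib-only illustration; a real YAML parser catches much more.
    """
    problems = []
    parts = text.split("---")
    if len(parts) < 3:
        problems.append("missing opening or closing '---' delimiter")
        return problems
    for i, line in enumerate(parts[1].splitlines(), start=1):
        if "\t" in line:
            problems.append(f"line {i}: tab used for indentation")
        if line.count('"') % 2 == 1:
            problems.append(f"line {i}: unclosed double quote")
    return problems

broken = '---\nname: demo\ndescription: "This is broken\n---\nsystem:\nSay hello.'
print(lint_frontmatter(broken))  # flags the unclosed quote on line 3
```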
### Referenced File Not Found

```
FileNotFoundError: Referenced file 'shared_config.json' not found for key 'connection'
```

**Cause:** A `${file:path}` reference points to a file that doesn't exist.

**Fix:** The path is resolved relative to the `.prompty` file's directory, not your working directory. Verify the file exists at that relative location.
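To see where a reference will land before Prompty does, you can mirror that resolution rule with `pathlib` (`resolve_file_ref` is a hypothetical helper, shown only to illustrate the rule):

```python
from pathlib import Path

def resolve_file_ref(prompty_path: str, ref: str) -> Path:
    # References resolve against the .prompty file's directory,
    # not the process working directory.
    return (Path(prompty_path).parent / ref).resolve()

print(resolve_file_ref("prompts/chat.prompty", "shared_config.json"))
```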
## Runtime Errors

### Missing Renderer or Executor

```
InvokerError: No renderer found for 'jinja2'. Install the required extra: uv pip install prompty[jinja2]
InvokerError: No executor found for 'openai'. Install the required extra: uv pip install prompty[openai]
```

**Fix:** Install the corresponding extra. After installing, if you're in a notebook or long-running process, restart the Python kernel so entry points are re-discovered.

```
InvokerError: No executor registered for 'openai'. Register one via InvokerRegistry.RegisterExecutor().
```

**Fix:** Add the provider NuGet package and register it at startup:

```csharp
using Prompty.Core;
using Prompty.OpenAI;

new PromptyBuilder()
    .AddOpenAI();
```

### Authentication Errors
```
openai.AuthenticationError: Error code: 401 - Incorrect API key provided
```

**Cause:** The API key is invalid, expired, or not set.

Checklist:

- Verify the env var is set: `echo $OPENAI_API_KEY` (or `$env:OPENAI_API_KEY` in PowerShell)
- Check for trailing whitespace or newlines in your `.env` file
- For Azure, ensure `endpoint` includes the full URL with `https://`
- Confirm the key hasn't been rotated or revoked
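The first two checks are easy to script (a stdlib illustration; `check_api_key` is not part of Prompty):

```python
import os

def check_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Return a short diagnosis of the named API-key env var."""
    key = os.environ.get(name, "")
    if not key:
        return "not set"
    if key != key.strip():
        return "has leading/trailing whitespace - check your .env file"
    return f"looks clean ({len(key)} chars)"

print(check_api_key())
```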
### Rate Limit Errors

```
openai.RateLimitError: Error code: 429 - Rate limit reached
```

**Fix:** Add retry logic with exponential backoff, or reduce request frequency. For production workloads, consider increasing your API tier or using multiple keys.
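A minimal backoff wrapper might look like this (a generic sketch; substitute your SDK's rate-limit exception, e.g. `openai.RateLimitError`, for the placeholder `RuntimeError`):

```python
import random
import time

def with_backoff(fn, retries: int = 5, base: float = 1.0):
    """Retry fn with exponential backoff plus jitter.

    Catches RuntimeError as a stand-in for your SDK's
    rate-limit exception.
    """
    for attempt in range(retries):
        try:
            return fn()
        except RuntimeError:
            if attempt == retries - 1:
                raise  # out of retries; surface the error
            # base, 2*base, 4*base, ... seconds, with jitter
            time.sleep(base * (2 ** attempt + random.random()))

print(with_backoff(lambda: "ok"))  # succeeds on the first try: ok
```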
### Unsupported API Type

```
ValueError: Unsupported apiType: completion
```

**Cause:** The `model.apiType` value isn't one of the supported types.

Supported values: `chat`, `embedding`, `image`, `agent`
Connection Not Found
Section titled “Connection Not Found”ValueError: No connection registered with name 'my-conn'.Currently registered: ['default']Cause: A kind: reference connection references a name that hasn’t been registered.
Fix: Register the connection before loading, or switch to kind: key with inline credentials.
## Agent & Tool Calling Issues

### Agent Loop Exceeds Max Iterations

```
ValueError: Agent loop exceeded max_iterations (10)
```

**Cause:** The model keeps requesting tool calls without producing a final response.

**Fix:**

- Increase the limit if your workflow genuinely needs more iterations
- Improve your system prompt to guide the model toward a final answer
- Verify your tools return useful results: vague responses cause the model to retry
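Conceptually, the loop guard works like this (a simplified sketch, not Prompty's implementation):

```python
def run_agent_loop(step, max_iterations: int = 10):
    """step() returns either ("tool", result) or ("final", answer).

    Simplified sketch of an agent loop with an iteration cap.
    """
    for _ in range(max_iterations):
        kind, value = step()
        if kind == "final":
            return value
        # otherwise feed the tool result back to the model and loop
    raise ValueError(f"Agent loop exceeded max_iterations ({max_iterations})")

# A model that answers after two tool calls:
calls = iter([("tool", "lookup"), ("tool", "lookup"), ("final", "done")])
print(run_agent_loop(lambda: next(calls)))  # prints: done
```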
### Tool Not Found

```
ValueError: No tool handler registered for kind 'function' tool 'get_weather'
```

**Cause:** The tool is declared in frontmatter but no handler function is registered.

**Fix:** Register the handler using the `@tool` decorator or pass it via metadata:

```python
from prompty import tool

@tool
def get_weather(city: str) -> str:
    return f"72°F in {city}"
```

```typescript
import { tool } from "@prompty/core";

const getWeather = tool(
  (city: string) => `72°F in ${city}`,
  "get_weather",
  { description: "Get the weather", parameters: { city: { kind: "string" } } }
);
```

```csharp
using Prompty.Core;

[Tool("get_weather", Description = "Get the weather")]
static string GetWeather(string city) => $"72°F in {city}";
```
### Tool Schema Mismatch

If the model sends arguments that don't match your tool's parameters, verify that the `parameters` schema in your `.prompty` file matches the function signature exactly.
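One way to catch drift early is to diff the declared schema against the function signature (`check_schema` is a hypothetical stdlib-only helper, assuming a `{name: {"kind": ...}}` schema shape like the frontmatter above):

```python
import inspect

def check_schema(fn, schema: dict) -> list[str]:
    """Compare declared parameter names against fn's signature."""
    declared = set(schema)
    actual = set(inspect.signature(fn).parameters)
    issues = []
    for name in sorted(declared - actual):
        issues.append(f"schema declares '{name}' but the function lacks it")
    for name in sorted(actual - declared):
        issues.append(f"function expects '{name}' but the schema omits it")
    return issues

def get_weather(city: str) -> str:
    return f"72°F in {city}"

print(check_schema(get_weather, {"city": {"kind": "string"}}))  # prints: []
```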
## Streaming Issues

### No Output From Streaming

**Symptom:** `run()` returns an object but nothing prints.

**Cause:** Streaming responses return an iterator; you must consume it:

```python
from prompty import load, prepare, run

agent = load("chat.prompty")
messages = prepare(agent, inputs={"question": "Hello"})
result = run(agent, messages)
for chunk in result:
    print(chunk, end="", flush=True)
```

```typescript
import { load, prepare, run } from "@prompty/core";

const agent = load("chat.prompty");
const messages = await prepare(agent, { question: "Hello" });
const result = await run(agent, messages);
for await (const chunk of result) {
  process.stdout.write(chunk);
}
```

```csharp
using Prompty.Core;

var agent = PromptyLoader.Load("chat.prompty");
var messages = await Pipeline.PrepareAsync(agent, new() { ["question"] = "Hello" });
var result = Pipeline.RunStreamingAsync(agent, messages);
await foreach (var chunk in result)
{
    Console.Write(chunk);
}
```

### Model Refused Response
```
ValueError: Model refused: <refusal reason>
```

**Cause:** The model's content filter rejected the request or response.

**Fix:** Review your prompt for content that may trigger safety filters. Rephrase the system prompt or user input.
## Structured Output Issues

### JSON Parse Error

**Symptom:** You defined `outputs` but the response isn't valid JSON.

Checklist:

- Ensure you're using a model that supports structured output (e.g., `gpt-4o-mini`)
- Verify your `outputs` schema is well-formed with valid `kind` values
- Use `strict: true` in the output schema for deterministic JSON
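To fail fast with a clear message when a response doesn't match your schema, you can wrap the parsing yourself (a stdlib sketch; `parse_structured` is not a Prompty API, and `required` stands in for your own schema's keys):

```python
import json

def parse_structured(raw: str, required: set[str]) -> dict:
    """Parse a model response and check for the keys you expect."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        raise ValueError(f"response is not valid JSON: {e}") from e
    missing = required - data.keys()
    if missing:
        raise ValueError(f"response is missing keys: {sorted(missing)}")
    return data

print(parse_structured('{"title": "Hi", "score": 3}', {"title", "score"}))
```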
### Empty Embedding or Image Response

```
ValueError: Empty embedding response
ValueError: Empty image response
```

**Cause:** The API returned an empty result, usually due to invalid input or a model issue.

**Fix:** Verify your input text (for embeddings) or prompt (for images) is non-empty and within the model's constraints.
## Debugging Tips

### Enable Console Tracing

See exactly what Prompty sends to the LLM and what comes back:

```python
from prompty import Tracer, console_tracer, invoke

Tracer.add("console", console_tracer)
result = invoke("chat.prompty", inputs={"question": "Hello"})
```

```typescript
import { Tracer, consoleTracer, invoke } from "@prompty/core";

Tracer.add("console", consoleTracer);
const result = await invoke("chat.prompty", { question: "Hello" });
```

```csharp
using Prompty.Core;

Tracer.Add("console", name => new ConsoleTracer(name));
var result = await Pipeline.InvokeAsync("chat.prompty", new() { ["question"] = "Hello" });
```

### Use `prepare()` to Inspect Messages
Debug your template rendering and parsing without making an API call:

```python
from prompty import load, prepare

agent = load("chat.prompty")
messages = prepare(agent, inputs={"name": "Jane"})
for msg in messages:
    print(f"[{msg.role}] {msg.content}")
```

```typescript
import { load, prepare } from "@prompty/core";

const agent = load("chat.prompty");
const messages = await prepare(agent, { name: "Jane" });
for (const msg of messages) {
  console.log(`[${msg.role}] ${msg.content}`);
}
```

```csharp
using Prompty.Core;

var agent = PromptyLoader.Load("chat.prompty");
var messages = await Pipeline.PrepareAsync(agent, new() { ["name"] = "Jane" });
foreach (var msg in messages)
{
    Console.WriteLine($"[{msg.Role}] {msg.Content}");
}
```

### Verify With a Minimal Example
When debugging, strip your `.prompty` file down to the bare minimum:

```markdown
---
name: debug
model:
  id: gpt-4o-mini
  provider: openai
  apiType: chat
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
---
system:
Say hello.
```

If this works, add your customizations back one at a time to isolate the issue.
### Check Your .env File Location

Prompty uses python-dotenv to load `.env` files. The file must be in your current working directory or a parent directory. If you're running from a subdirectory, the `.env` in your project root may not be found automatically.
## Still Stuck?

- Search existing issues on GitHub
- Open a new issue with: the error message, your `.prompty` file (redact keys), and your runtime version info:
  - Python: `pip list | grep prompty`
  - TypeScript: `npm ls @prompty/core`
  - C#: `dotnet list package | findstr Prompty`
- Check the API Reference for correct property names and types