Troubleshooting

When something goes wrong, this page will help you diagnose and fix it fast. Errors are grouped by category — jump to the section that matches your symptom.


| Error | Cause | Fix |
| --- | --- | --- |
| ModuleNotFoundError: No module named 'openai' | Missing optional dependency (Python) | pip install prompty[openai] |
| ModuleNotFoundError: No module named 'jinja2' | Missing renderer dependency (Python) | pip install prompty[jinja2] |
| ModuleNotFoundError: No module named 'chevron' | Missing Mustache renderer (Python) | pip install prompty[mustache] |
| ModuleNotFoundError: No module named 'azure.identity' | Missing Azure dependency (Python) | pip install prompty[azure] |
| Cannot find module '@prompty/openai' | Missing npm package (TypeScript) | npm install @prompty/openai |
| Could not load file or assembly 'Prompty.OpenAI' | Missing NuGet package (C#) | dotnet add package Prompty.OpenAI |
| Could not load file or assembly 'Prompty.Foundry' | Missing NuGet package (C#) | dotnet add package Prompty.Foundry |

FileNotFoundError: Prompty file not found: path/to/chat.prompty

Cause: The path passed to load() doesn’t resolve to an existing file.

Fix: Verify the file exists and use an absolute path or check your working directory:

from pathlib import Path
print(Path("chat.prompty").resolve()) # see what Prompty will look for
ValueError: Environment variable 'OPENAI_API_KEY' not set for key 'apiKey'

Cause: Your .prompty file uses ${env:VAR} but the variable isn’t defined.

Fix: Set it in your shell or add it to a .env file in your project root:

.env
OPENAI_API_KEY=sk-...
AZURE_OPENAI_ENDPOINT=https://myresource.openai.azure.com/
AZURE_OPENAI_API_KEY=abc123
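A quick way to confirm which variables your process will actually see is to check os.environ before loading anything. This is a stdlib-only sketch; the variable names match the .env example above:

```python
import os

def check_required_env(*names: str) -> list[str]:
    """Return the names of any required environment variables that are unset or blank."""
    return [n for n in names if not os.environ.get(n, "").strip()]

# Example: fail fast before Prompty ever tries to resolve ${env:...} references.
missing = check_required_env("OPENAI_API_KEY")
if missing:
    print(f"Not set: {', '.join(missing)}")
```
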
ValueError: Invalid Markdown format: Missing or malformed frontmatter.

Cause: The YAML block between --- delimiters has a syntax error.

Common mistakes:

  • Using tabs instead of spaces for indentation
  • Unclosed quotes: description: "This is broken
  • Missing the closing --- delimiter
  • Colons inside unquoted strings: description: Use key: value (wrap the whole value in quotes)
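A rough stdlib-only lint can catch the first three mistakes above before you reach for a full YAML parser. This is a sketch, not a substitute for real YAML validation:

```python
def lint_frontmatter(text: str) -> list[str]:
    """Flag common frontmatter mistakes: missing delimiters, tab indentation, unclosed quotes."""
    problems = []
    lines = text.splitlines()
    delims = [i for i, ln in enumerate(lines) if ln.strip() == "---"]
    if len(delims) < 2:
        problems.append("missing opening or closing '---' delimiter")
        return problems
    for i in range(delims[0] + 1, delims[1]):
        line = lines[i]
        if line.startswith("\t"):
            problems.append(f"line {i + 1}: tab used for indentation")
        if line.count('"') % 2 == 1:
            problems.append(f"line {i + 1}: possibly unclosed double quote")
    return problems
```
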
FileNotFoundError: Referenced file 'shared_config.json' not found for key 'connection'

Cause: A ${file:path} reference points to a file that doesn’t exist.

Fix: The path is resolved relative to the .prompty file’s directory. Verify the file exists at that relative location.
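To see where a reference will land, you can mirror that resolution rule yourself with pathlib. The file names here are illustrative:

```python
from pathlib import Path

def resolve_file_ref(prompty_path: str, ref: str) -> Path:
    """Resolve a ${file:...} reference relative to the .prompty file's directory."""
    return (Path(prompty_path).parent / ref).resolve()

target = resolve_file_ref("prompts/chat.prompty", "shared_config.json")
print(target, "exists:", target.exists())
```
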


InvokerError: No renderer found for 'jinja2'.
Install the required extra: uv pip install prompty[jinja2]
InvokerError: No executor found for 'openai'.
Install the required extra: uv pip install prompty[openai]

Fix: Install the corresponding extra. After installing, if you’re in a notebook or long-running process, restart the Python kernel so entry points are re-discovered.

openai.AuthenticationError: Error code: 401 - Incorrect API key provided

Cause: The API key is invalid, expired, or not set.

Checklist:

  1. Verify the env var is set: echo $OPENAI_API_KEY (or $env:OPENAI_API_KEY on PowerShell)
  2. Check for trailing whitespace or newlines in your .env file
  3. For Azure, ensure endpoint includes the full URL with https://
  4. Confirm the key hasn’t been rotated or revoked
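Items 1 and 2 of the checklist are easy to verify in code without ever printing the key itself. A small stdlib helper:

```python
import os

def key_hygiene(name: str = "OPENAI_API_KEY") -> list[str]:
    """Report checklist problems (unset, whitespace, empty) without exposing the key value."""
    value = os.environ.get(name)
    issues = []
    if value is None:
        issues.append(f"{name} is not set")
    elif value != value.strip():
        issues.append(f"{name} has leading or trailing whitespace (check your .env file)")
    elif not value:
        issues.append(f"{name} is empty")
    return issues
```
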
openai.RateLimitError: Error code: 429 - Rate limit reached

Fix: Add retry logic with exponential backoff, or reduce request frequency. For production workloads, consider increasing your API tier or using multiple keys.
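A minimal backoff wrapper, written against a generic callable rather than any Prompty-specific API so you can adapt it to your setup:

```python
import random
import time

def with_backoff(fn, *, retries=5, base_delay=1.0, retry_on=(Exception,)):
    """Call fn(), retrying on the given exception types with exponential backoff plus jitter."""
    for attempt in range(retries):
        try:
            return fn()
        except retry_on:
            if attempt == retries - 1:
                raise  # out of retries: surface the original error
            # delays of base, 2*base, 4*base, ... with jitter to spread out retries
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```

In practice you would pass retry_on=(openai.RateLimitError,) and wrap the call that hits the API.
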

ValueError: Unsupported apiType: completion

Cause: The model.apiType value isn’t one of the supported types.

Supported values: chat, embedding, image, agent

ValueError: No connection registered with name 'my-conn'.
Currently registered: ['default']

Cause: A kind: reference connection references a name that hasn’t been registered.

Fix: Register the connection before loading, or switch to kind: key with inline credentials.


ValueError: Agent loop exceeded max_iterations (10)

Cause: The model keeps requesting tool calls without producing a final response.

Fix:

  • Increase the limit if your workflow genuinely needs more iterations
  • Improve your system prompt to guide the model toward a final answer
  • Verify your tools return useful results — vague responses cause the model to retry
ValueError: No tool handler registered for kind 'function' tool 'get_weather'

Cause: The tool is declared in frontmatter but no handler function is registered.

Fix: Register the handler using the @tool decorator or pass it via metadata:

from prompty import tool

@tool
def get_weather(city: str) -> str:
    return f"72°F in {city}"

If the model sends arguments that don’t match your tool’s parameters, verify that the parameters schema in your .prompty file matches the function signature exactly.
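One way to catch a signature/schema mismatch early is to compare the declared parameter names against the function with inspect. The schema contents here are illustrative:

```python
import inspect

def schema_matches(fn, declared_params: set[str]) -> bool:
    """Check that a tool function's parameter names exactly match those declared in frontmatter."""
    actual = set(inspect.signature(fn).parameters)
    return actual == declared_params

def get_weather(city: str) -> str:
    return f"72°F in {city}"

# frontmatter declares: parameters: {city: {type: string}}
print(schema_matches(get_weather, {"city"}))
```
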


Symptom: run() returns an object but nothing prints.

Cause: Streaming responses return an iterator — you must consume it:

from prompty import load, prepare, run

agent = load("chat.prompty")
messages = prepare(agent, inputs={"question": "Hello"})
result = run(agent, messages)
for chunk in result:
    print(chunk, end="", flush=True)
ValueError: Model refused: <refusal reason>

Cause: The model’s content filter rejected the request or response.

Fix: Review your prompt for content that may trigger safety filters. Rephrase the system prompt or user input.


Symptom: You defined outputs but the response isn’t valid JSON.

Checklist:

  1. Ensure you’re using a model that supports structured output (e.g., gpt-4o-mini)
  2. Verify your outputs section is well-formed with valid kind values
  3. Use strict: true in the output schema for deterministic JSON
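If it's unclear whether the problem is the model's output or your parsing, validate the raw text with json.loads first. The key names here are illustrative:

```python
import json

def parse_structured(raw: str, required_keys: set[str]) -> dict:
    """Parse a structured-output response and verify the declared output keys are present."""
    data = json.loads(raw)  # raises a ValueError subclass with position info if not valid JSON
    missing = required_keys - data.keys()
    if missing:
        raise ValueError(f"response is valid JSON but missing keys: {sorted(missing)}")
    return data
```
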
ValueError: Empty embedding response
ValueError: Empty image response

Cause: The API returned an empty result, usually due to invalid input or a model issue.

Fix: Verify your input text (for embeddings) or prompt (for images) is non-empty and within the model’s constraints.


See exactly what Prompty sends to the LLM and what comes back:

from prompty import Tracer, console_tracer, invoke
Tracer.add("console", console_tracer)
result = invoke("chat.prompty", inputs={"question": "Hello"})

Debug your template rendering and parsing without making an API call:

from prompty import load, prepare
agent = load("chat.prompty")
messages = prepare(agent, inputs={"name": "Jane"})
for msg in messages:
    print(f"[{msg.role}] {msg.content}")

When debugging, strip your .prompty file down to the bare minimum:

debug.prompty
---
name: debug
model:
  id: gpt-4o-mini
  provider: openai
  apiType: chat
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
---
system:
Say hello.

If this works, add your customizations back one at a time to isolate the issue.

Prompty uses python-dotenv to load .env files. The file must be in your current working directory or a parent directory. If you’re running from a subdirectory, the .env in your project root may not be found automatically.


  • Search existing issues on GitHub
  • Open a new issue with: the error message, your .prompty file (redact keys), and your runtime version info:
    • Python: pip list | grep prompty
    • TypeScript: npm ls @prompty/core
    • C#: dotnet list package | findstr Prompty
  • Check the API Reference for correct property names and types