Testing Your Prompts

Every .prompty file is a structured asset — model config, input schemas, template logic. You can validate all of that without burning a single API token. Save integration tests for final verification and keep your CI fast and free.


Load a .prompty file and assert its properties. No LLM call is made.

```python
from prompty import load


def test_load_prompt():
    agent = load("prompts/chat.prompty")

    assert agent.name == "openai-chat"
    assert agent.model.id == "gpt-4o-mini"
    assert agent.model.provider == "openai"
    assert agent.model.apiType == "chat"

    # Verify inputs were declared
    props = agent.inputs.properties
    assert any(p.name == "question" for p in props)
```
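To catch broken prompts before anyone writes a dedicated test for them, the same load check can be parametrized over every .prompty file in the repo. A minimal sketch under stated assumptions: `discover_prompts` and the `prompts/` directory layout are illustrative, not part of the prompty API.

```python
from pathlib import Path

import pytest


def discover_prompts(root: str = "prompts") -> list[str]:
    # Collect every .prompty file under the given directory, recursively.
    base = Path(root)
    if not base.is_dir():
        return []
    return sorted(str(p) for p in base.rglob("*.prompty"))


@pytest.mark.parametrize("path", discover_prompts())
def test_every_prompt_loads(path):
    from prompty import load

    agent = load(path)
    # Every prompt should at least declare a name and a model.
    assert agent.name
    assert agent.model is not None
```

New prompt files are then validated the moment they land in the directory, with no edits to the test suite.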

Use prepare() to render the template and parse it into messages — without calling the LLM. Assert on message count, roles, and interpolated content.

```python
from prompty import prepare


def test_prepare_messages():
    messages = prepare(
        "prompts/chat.prompty",
        inputs={"question": "What is Prompty?"},
    )

    assert len(messages) >= 2
    assert messages[0].role == "system"
    assert messages[-1].role == "user"
    assert "What is Prompty?" in messages[-1].text
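Because rendering is stable for fixed inputs, prepared messages also suit snapshot ("golden file") testing. A hedged sketch of such a helper, assuming the messages have first been converted to plain dicts; `assert_matches_golden` is a hypothetical name, not part of prompty:

```python
import json
from pathlib import Path


def assert_matches_golden(messages: list[dict], golden_path: str, update: bool = False) -> None:
    # Serialize deterministically so the comparison is stable across runs.
    rendered = json.dumps(messages, indent=2, sort_keys=True)
    path = Path(golden_path)
    if update or not path.exists():
        # First run (or explicit update): record the snapshot.
        path.write_text(rendered)
    assert path.read_text() == rendered, f"rendered prompt drifted from {golden_path}"
```

Commit the golden file alongside the prompt; a failing diff then pinpoints exactly which template change altered the rendered output.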

Test the entire pipeline (load → render → parse → execute → process) by mocking the LLM SDK client so no real API call is made.

```python
from unittest.mock import MagicMock, patch

from prompty import invoke


def test_invoke_with_mock():
    # Build a fake OpenAI chat response
    mock_choice = MagicMock()
    mock_choice.message.content = "Mocked answer"
    mock_choice.message.tool_calls = None
    mock_response = MagicMock()
    mock_response.choices = [mock_choice]

    with patch("openai.OpenAI") as MockClient:
        client = MockClient.return_value
        client.chat.completions.create.return_value = mock_response

        result = invoke("prompts/chat.prompty", inputs={"question": "test"})

    assert result == "Mocked answer"
```
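If a patch like this doesn't take effect and the test still hits the network, the usual cause is that the library imported `OpenAI` into its own namespace at import time: `unittest.mock.patch` replaces a name where it is *looked up*, not where it was defined. A self-contained illustration with two synthetic modules (`sdk` and `app` are stand-ins, not real packages):

```python
import sys
import types
from unittest.mock import patch

# A "sdk" module defining a client, and an "app" module that imported
# the client by name at import time (from sdk import Client).
sdk = types.ModuleType("sdk")
sdk.Client = lambda: "real"
sys.modules["sdk"] = sdk

app = types.ModuleType("app")
exec("from sdk import Client\ndef make(): return Client()", app.__dict__)
sys.modules["app"] = app

# Patching the defining module does NOT affect app's copy of the name...
with patch("sdk.Client", lambda: "fake"):
    assert app.make() == "real"

# ...patching the namespace where the name is looked up does.
with patch("app.Client", lambda: "fake"):
    assert app.make() == "fake"
```

So if `openai.OpenAI` turns out to be the wrong target for your version of the library, patch the module that actually calls the client instead.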

Gate integration tests behind environment variables so they only run when API keys are available. This keeps CI fast by default and lets you opt in to live tests locally.

```python
import os

import pytest

from prompty import invoke

skip_openai = pytest.mark.skipif(
    not os.environ.get("OPENAI_API_KEY"),
    reason="OPENAI_API_KEY not set",
)


@skip_openai
def test_live_chat():
    result = invoke(
        "prompts/chat.prompty",
        inputs={"question": "Say hello in one word."},
    )

    assert isinstance(result, str) and len(result) > 0
```
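The same pattern scales to suites that exercise several providers. A hedged sketch of a reusable gate (`requires_env` is a hypothetical helper, not part of prompty or pytest) that skips unless every listed variable is set and names the missing ones in the skip reason:

```python
import os

import pytest


def requires_env(*names: str):
    # Skip the decorated test unless every listed environment variable is set.
    missing = sorted(n for n in names if not os.environ.get(n))
    return pytest.mark.skipif(bool(missing), reason=f"missing: {', '.join(missing)}")


@requires_env("OPENAI_API_KEY")
def test_live_chat_gated():
    ...  # live call as in test_live_chat above
```

When a gated test is skipped, `pytest -rs` reports exactly which variables were absent, which makes partially configured CI environments easy to diagnose.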