Python Runtime

The Prompty Python runtime provides a powerful and flexible way to execute prompts programmatically. It’s designed to be extensible, observable, and easy to integrate into your AI applications.

Install the Prompty runtime using pip. Choose the appropriate extras based on your needs:

# Basic installation
pip install prompty
# With Azure OpenAI support
pip install "prompty[azure]"
# With OpenAI support
pip install "prompty[openai]"
# With all invokers
pip install "prompty[azure,openai,serverless]"

The simplest way to use the runtime is with the execute() function:

import prompty
import prompty.azure # Import the invoker you need
# Execute a prompty file
response = prompty.execute("path/to/your/prompt.prompty")
print(response)

You can pass variables to your prompts using the inputs parameter:

response = prompty.execute(
    "path/to/your/prompt.prompty",
    inputs={
        "customer_name": "John Doe",
        "question": "What are your business hours?"
    }
)

execute() is the main function for running prompts. It combines loading, preparing, and executing in one call.

def execute(
    prompt: Union[str, Prompty],
    *,
    inputs: dict[str, Any] = {},
    connection: str = "default",
    configuration: dict[str, Any] = {},
    options: dict[str, Any] = {},
    stream: bool = False
) -> Any

Parameters:

  • prompt: Path to prompty file or Prompty object
  • inputs: Variables to pass to the prompt template
  • connection: Configuration connection name
  • configuration: Model configuration overrides
  • options: Additional options for execution
  • stream: Whether to stream the response

Use load() to parse a prompty file into a Prompty object:

prompt = prompty.load("path/to/prompt.prompty")
print(f"Loaded prompt: {prompt.name}")

Use prepare() to render the prompt template with your inputs:

prepared = prompty.prepare(
    prompt,
    inputs={"name": "Alice", "topic": "AI"}
)

Use run() to execute a prepared prompt against the model:

result = prompty.run(prepared)

For programmatic prompt creation without files:

# Create a headless prompt
prompt = prompty.headless(
    api="chat",
    content="Hello, {{name}}! Tell me about {{topic}}.",
    connection={
        "type": "azure_openai",
        "azure_endpoint": "https://your-endpoint.openai.azure.com/",
        "azure_deployment": "gpt-35-turbo"
    }
)
# Execute it
response = prompty.execute(prompt, inputs={"name": "Bob", "topic": "Python"})

The runtime supports multiple AI service providers:

import prompty.azure
# Configuration in prompty file or programmatically
response = prompty.execute(
    prompt,
    configuration={
        "type": "azure_openai",
        "azure_endpoint": "${env:AZURE_OPENAI_ENDPOINT}",
        "azure_deployment": "gpt-35-turbo",
        "api_version": "2024-10-21"
    }
)

import prompty.openai
response = prompty.execute(
    prompt,
    configuration={
        "type": "openai",
        "api_key": "${env:OPENAI_API_KEY}",
        "model": "gpt-3.5-turbo"
    }
)
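The ${env:NAME} placeholders above pull values from environment variables at run time, so secrets stay out of your files. A minimal sketch of that resolution step (an approximation of the behavior, not the library's code):

```python
import os
import re

def resolve_env(value: str) -> str:
    """Replace ${env:NAME} placeholders with os.environ values (toy version)."""
    return re.sub(
        r"\$\{env:(\w+)\}",
        lambda m: os.environ.get(m.group(1), ""),  # empty string if unset
        value,
    )

os.environ["OPENAI_API_KEY"] = "sk-demo"
print(resolve_env("${env:OPENAI_API_KEY}"))  # → sk-demo
```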

Support for GitHub Models and other serverless endpoints:

import prompty.serverless
response = prompty.execute(
prompt,
configuration={
"type": "serverless",
"endpoint": "https://models.inference.ai.azure.com",
"model": "gpt-4o-mini"
}
)

Enable streaming for real-time response processing:

# Get streaming response
stream = prompty.execute("prompt.prompty", stream=True)
# Process chunks as they arrive
for chunk in stream:
    print(chunk, end="")
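The stream is an ordinary iterator of text chunks, so the usual accumulation patterns apply. A self-contained illustration, with a stub generator standing in for a real streaming response:

```python
def fake_stream():
    """Stub standing in for prompty.execute(..., stream=True)."""
    yield from ["Hello", ", ", "world", "!"]

chunks = []
for chunk in fake_stream():
    print(chunk, end="")   # display incrementally
    chunks.append(chunk)   # keep for later use

full_text = "".join(chunks)
print()
print(full_text)  # → Hello, world!
```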

The runtime provides async versions of all main functions:

import asyncio
import prompty
async def main():
    # Async execution
    response = await prompty.execute_async("prompt.prompty")
    # Async loading
    prompt = await prompty.load_async("prompt.prompty")
    # Async streaming
    async for chunk in await prompty.execute_async("prompt.prompty", stream=True):
        print(chunk, end="")

asyncio.run(main())
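The async streaming loop follows Python's standard async-iterator protocol. A self-contained illustration, with a stub async generator standing in for execute_async(..., stream=True):

```python
import asyncio

async def fake_stream_async():
    """Stub async generator standing in for an async streaming response."""
    for chunk in ["async ", "chunks ", "arrive ", "in order"]:
        await asyncio.sleep(0)  # yield control, as real I/O would
        yield chunk

async def main() -> str:
    parts = []
    async for chunk in fake_stream_async():
        parts.append(chunk)
    return "".join(parts)

result = asyncio.run(main())
print(result)  # → async chunks arrive in order
```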

Handle common runtime errors:

try:
    response = prompty.execute("prompt.prompty")
except FileNotFoundError:
    print("Prompty file not found")
except ValueError as e:
    print(f"Invalid configuration: {e}")
except Exception as e:
    print(f"Execution failed: {e}")
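Transient failures (rate limits, network hiccups) are often worth retrying rather than surfacing immediately. A sketch of a simple retry wrapper — call_model here is a hypothetical stand-in for a prompty.execute call:

```python
import time

def with_retry(fn, attempts=3, delay=0.0):
    """Call fn(), retrying on any exception up to `attempts` times."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(delay)  # back off before the next try

# Hypothetical flaky call: fails twice, then succeeds.
calls = {"n": 0}
def call_model():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

print(with_retry(call_model))  # → ok
```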

Want to contribute to the project? Updated guidance coming soon.