# Python Runtime
The Prompty Python runtime provides a powerful and flexible way to execute prompts programmatically. It’s designed to be extensible, observable, and easy to integrate into your AI applications.
## Installation

Install the Prompty runtime using pip. Choose the appropriate extras based on your needs:

```shell
# Basic installation
pip install prompty

# With Azure OpenAI support
pip install "prompty[azure]"

# With OpenAI support
pip install "prompty[openai]"

# With all invokers
pip install "prompty[azure,openai,serverless]"
```
## Basic Usage

### Executing Prompty Files

The simplest way to use the runtime is with the `execute()` function:
```python
import prompty
import prompty.azure  # Import the invoker you need

# Execute a prompty file
response = prompty.execute("path/to/your/prompt.prompty")
print(response)
```
### Passing Input Variables

You can pass variables to your prompts using the `inputs` parameter:
```python
response = prompty.execute(
    "path/to/your/prompt.prompty",
    inputs={
        "customer_name": "John Doe",
        "question": "What are your business hours?"
    }
)
```
## Core Functions

### execute()

The main function for running prompts. It combines loading, preparing, and executing in one call.
```python
def execute(
    prompt: Union[str, Prompty],
    *,
    inputs: dict[str, Any] = {},
    connection: str = "default",
    configuration: dict[str, Any] = {},
    options: dict[str, Any] = {},
    stream: bool = False
) -> Any
```

Parameters:
- `prompt`: Path to a prompty file, or a `Prompty` object
- `inputs`: Variables to pass to the prompt template
- `connection`: Configuration connection name
- `configuration`: Model configuration overrides
- `options`: Additional options for execution
- `stream`: Whether to stream the response
### load()

Load a prompty file into a `Prompty` object:
```python
prompt = prompty.load("path/to/prompt.prompty")
print(f"Loaded prompt: {prompt.name}")
```
### prepare()

Prepare inputs and render the prompt template:
```python
prepared = prompty.prepare(
    prompt,
    inputs={"name": "Alice", "topic": "AI"}
)
```

### run()

Execute a prepared prompt against the model:
```python
result = prompty.run(prepared)
```
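Together, `load()`, `prepare()`, and `run()` compose into the same flow that `execute()` performs in one call. The sketch below stubs each stage with plain functions so the flow is runnable without a model or a `.prompty` file; in real code the stages are `prompty.load`, `prompty.prepare`, and `prompty.run`:

```python
# Stubbed load -> prepare -> run pipeline (illustrative only; real code
# uses prompty.load, prompty.prepare, and prompty.run instead).
def load(path):
    # Stands in for prompty.load: parse the file into a prompt object
    return {"name": "demo", "template": "Hello, {{name}}!"}

def prepare(prompt, inputs):
    # Stands in for prompty.prepare: render the template with the inputs
    rendered = prompt["template"]
    for key, value in inputs.items():
        rendered = rendered.replace("{{" + key + "}}", value)
    return rendered

def run(prepared):
    # Stands in for prompty.run: send the rendered prompt to the model
    return f"model response to: {prepared!r}"

prompt = load("path/to/prompt.prompty")
prepared = prepare(prompt, inputs={"name": "Alice"})
result = run(prepared)
print(result)
```

Splitting the steps like this is useful when you want to inspect or cache the rendered prompt before sending it to the model.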
## Headless Usage

For programmatic prompt creation without files:
```python
# Create a headless prompt
prompt = prompty.headless(
    api="chat",
    content="Hello, {{name}}! Tell me about {{topic}}.",
    connection={
        "type": "azure_openai",
        "azure_endpoint": "https://your-endpoint.openai.azure.com/",
        "azure_deployment": "gpt-35-turbo"
    }
)

# Execute it
response = prompty.execute(prompt, inputs={"name": "Bob", "topic": "Python"})
```
## Available Invokers

The runtime supports multiple AI service providers:
### Azure OpenAI

```python
import prompty.azure

# Configuration in prompty file or programmatically
response = prompty.execute(
    prompt,
    configuration={
        "type": "azure_openai",
        "azure_endpoint": "${env:AZURE_OPENAI_ENDPOINT}",
        "azure_deployment": "gpt-35-turbo",
        "api_version": "2024-10-21"
    }
)
```
### OpenAI

```python
import prompty.openai

response = prompty.execute(
    prompt,
    configuration={
        "type": "openai",
        "api_key": "${env:OPENAI_API_KEY}",
        "model": "gpt-3.5-turbo"
    }
)
```
### Serverless Models

Support for GitHub Models and other serverless endpoints:

```python
import prompty.serverless

response = prompty.execute(
    prompt,
    configuration={
        "type": "serverless",
        "endpoint": "https://models.inference.ai.azure.com",
        "model": "gpt-4o-mini"
    }
)
```
## Streaming Responses

Enable streaming for real-time response processing:
```python
# Get streaming response
stream = prompty.execute("prompt.prompty", stream=True)

# Process chunks as they arrive
for chunk in stream:
    print(chunk, end="")
```
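When streaming, it is common to both display chunks as they arrive and keep the full text for later use. A sketch of that pattern, with a plain generator standing in for the iterator returned by `prompty.execute(..., stream=True)`:

```python
def fake_stream():
    # Stand-in for the chunk iterator returned by execute(..., stream=True)
    yield from ["Hello", ", ", "world", "!"]

chunks = []
for chunk in fake_stream():
    print(chunk, end="")   # display incrementally
    chunks.append(chunk)   # keep for later

full_response = "".join(chunks)
```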
## Async Support

The runtime provides async versions of all main functions:
```python
import asyncio
import prompty

async def main():
    # Async execution
    response = await prompty.execute_async("prompt.prompty")

    # Async loading
    prompt = await prompty.load_async("prompt.prompty")

    # Async streaming
    async for chunk in await prompty.execute_async("prompt.prompty", stream=True):
        print(chunk, end="")

asyncio.run(main())
```
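A key benefit of the async API is running several prompts concurrently with `asyncio.gather`. The sketch below stubs the executor so it runs without a model; in real code each stubbed call would be `prompty.execute_async(path)`:

```python
import asyncio

async def fake_execute_async(path):
    # Stand-in for prompty.execute_async(path)
    await asyncio.sleep(0)
    return f"response for {path}"

async def main():
    # Both prompts run concurrently rather than one after the other
    return await asyncio.gather(
        fake_execute_async("first.prompty"),
        fake_execute_async("second.prompty"),
    )

results = asyncio.run(main())
print(results)
```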
## Error Handling

Handle common runtime errors:
```python
try:
    response = prompty.execute("prompt.prompty")
except FileNotFoundError:
    print("Prompty file not found")
except ValueError as e:
    print(f"Invalid configuration: {e}")
except Exception as e:
    print(f"Execution failed: {e}")
```
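Transient failures such as rate limits or network errors are often worth retrying rather than surfacing immediately. A minimal retry helper (this wrapper is not part of the Prompty API, and the flaky function below is a stand-in for a call to `prompty.execute`):

```python
import time

def with_retries(fn, attempts=3, delay=0.0):
    # Call fn(), retrying on any exception up to `attempts` times
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(delay)

calls = {"count": 0}

def flaky_execute():
    # Stand-in for prompty.execute(...) that fails twice, then succeeds
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = with_retries(flaky_execute, attempts=3)
```

In production you would typically narrow the `except` clause to the transient error types and use an exponential `delay`.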
## Next Steps
Section titled “Next Steps”- Learn about Observability & Tracing to monitor your prompts
- Explore the CLI Usage for command-line operations
- Check out Advanced Configuration for complex scenarios
Want to contribute to the project? Updated guidance coming soon.