# Configuration
Prompty provides flexible configuration options for working with different AI models, environments, and deployment scenarios. This guide covers everything from basic setup to advanced configuration.
## Configuration Sources

Prompty loads configuration from multiple sources, in this order of precedence:
- Runtime parameters (highest priority)
- Environment variables
- Configuration files (`prompty.json`)
- Prompty file frontmatter
- Default values (lowest priority)
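Conceptually, this precedence works like a layered lookup: each setting is taken from the highest-priority source that defines it. A minimal sketch of the idea (not Prompty's actual implementation):

```python
from collections import ChainMap

# Layers ordered highest priority first; ChainMap returns the value
# from the first mapping that defines a key.
runtime = {"temperature": 0.2}
env_vars = {}  # e.g. values parsed from environment variables
frontmatter = {"temperature": 0.7, "max_tokens": 1000}
defaults = {"temperature": 1.0, "max_tokens": 256, "top_p": 1.0}

resolved = ChainMap(runtime, env_vars, frontmatter, defaults)
print(resolved["temperature"])  # 0.2  - runtime wins
print(resolved["max_tokens"])   # 1000 - falls through to frontmatter
print(resolved["top_p"])        # 1.0  - default value
```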
## Prompty File Configuration

### Basic Structure

Every prompty file includes its configuration in the frontmatter:
```markdown
---
name: Customer Support Bot
description: Helps customers with common questions
model:
  api: chat
  connection:
    type: azure_openai
    azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    azure_deployment: ${env:AZURE_OPENAI_DEPLOYMENT}
    api_version: "2024-10-21"
  options:
    temperature: 0.7
    max_tokens: 1000
inputs:
  customer_name:
    type: string
    default: "John Doe"
  question:
    type: string
    default: "What are your hours?"
---
system:
You are a helpful customer support assistant.

user:
Customer: {{customer_name}}
Question: {{question}}
```
## Model Configuration Options
### Azure OpenAI

```yaml
model:
  api: chat
  connection:
    type: azure_openai
    azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    azure_deployment: gpt-35-turbo
    api_version: "2024-10-21"
  options:
    temperature: 0.7
    max_tokens: 1000
    top_p: 1.0
    frequency_penalty: 0.0
    presence_penalty: 0.0
```
### OpenAI

```yaml
model:
  api: chat
  connection:
    type: openai
    api_key: ${env:OPENAI_API_KEY}
  id: gpt-3.5-turbo
  options:
    temperature: 0.7
    max_tokens: 1000
```
### Serverless Models

```yaml
model:
  api: chat
  connection:
    type: serverless
    endpoint: https://models.inference.ai.azure.com
    api_key: ${env:GITHUB_TOKEN}
  id: gpt-4o-mini
  options:
    temperature: 0.7
```
## Environment Variables
### Azure OpenAI Setup

```bash
# .env file
AZURE_OPENAI_ENDPOINT=https://your-endpoint.openai.azure.com/
AZURE_OPENAI_API_KEY=your-api-key
AZURE_OPENAI_DEPLOYMENT=gpt-35-turbo
AZURE_OPENAI_API_VERSION=2024-10-21
```
### OpenAI Setup

```bash
# .env file
OPENAI_API_KEY=sk-your-api-key
OPENAI_ORG_ID=org-your-org-id  # Optional
```
### GitHub Models Setup

```bash
# .env file
GITHUB_TOKEN=ghp_your-token
```
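The `${env:NAME}` placeholders used in frontmatter and `prompty.json` resolve against these variables when the file is loaded. Conceptually, the expansion behaves like the following sketch (a simplified illustration, not Prompty's actual implementation):

```python
import os
import re

def expand_env(value: str) -> str:
    """Replace each ${env:NAME} placeholder with the value of os.environ[NAME]."""
    return re.sub(
        r"\$\{env:([A-Za-z_][A-Za-z0-9_]*)\}",
        lambda m: os.environ.get(m.group(1), ""),
        value,
    )

os.environ["AZURE_OPENAI_ENDPOINT"] = "https://your-endpoint.openai.azure.com/"
print(expand_env("${env:AZURE_OPENAI_ENDPOINT}"))
```

Unset variables expand to an empty string in this sketch; Prompty's own handling of missing variables may differ, so set them before running.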
## Global Configuration Files
### prompty.json

Create a `prompty.json` file in your project root for shared configuration:
```json
{
  "connections": {
    "default": {
      "type": "azure_openai",
      "azure_endpoint": "${env:AZURE_OPENAI_ENDPOINT}",
      "api_version": "2024-10-21"
    },
    "production": {
      "type": "azure_openai",
      "azure_endpoint": "https://prod-endpoint.openai.azure.com/",
      "api_version": "2024-10-21"
    },
    "development": {
      "type": "azure_openai",
      "azure_endpoint": "https://dev-endpoint.openai.azure.com/",
      "api_version": "2024-10-21"
    }
  },
  "defaults": {
    "temperature": 0.7,
    "max_tokens": 1000,
    "top_p": 1.0
  }
}
```
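To see how the shared `defaults` combine with a named connection, you can perform the merge yourself (a hypothetical sketch of how such a file might be consumed; Prompty resolves this internally):

```python
import json

# A trimmed-down prompty.json, inlined so the example is self-contained
PROMPTY_JSON = """
{
  "connections": {
    "production": {
      "type": "azure_openai",
      "azure_endpoint": "https://prod-endpoint.openai.azure.com/",
      "api_version": "2024-10-21"
    }
  },
  "defaults": {"temperature": 0.7, "max_tokens": 1000}
}
"""

config = json.loads(PROMPTY_JSON)

def resolve(name: str, **overrides):
    # Shared defaults first, then the named connection, then call-site overrides
    return {**config["defaults"], **config["connections"][name], **overrides}

settings = resolve("production", temperature=0.5)
print(settings["azure_endpoint"])  # https://prod-endpoint.openai.azure.com/
print(settings["temperature"])     # 0.5 - the override wins
```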
### Using Named Connections

Reference connections in your prompty files:
```yaml
---
name: Production Bot
model:
  api: chat
  connection: production  # References the "production" connection
  options:
    azure_deployment: gpt-4
    temperature: 0.5
---
```

Or specify at runtime:
```python
import prompty

response = prompty.execute(
    "prompt.prompty",
    connection="production",
)
```
## Runtime Configuration
### Python Runtime

Override configuration programmatically:
```python
import prompty

# Override individual settings
response = prompty.execute(
    "prompt.prompty",
    configuration={"temperature": 0.9, "max_tokens": 500},
)

# Override the connection
response = prompty.execute("prompt.prompty", connection="production")

# Complete configuration override
response = prompty.execute(
    "prompt.prompty",
    configuration={
        "type": "openai",
        "api_key": "sk-different-key",
        "model": "gpt-4",
        "temperature": 0.3,
    },
)
```
### CLI Runtime

Override configuration from the command line:
```bash
# Override specific settings
prompty -s prompt.prompty \
  --config '{"temperature": 0.9, "max_tokens": 500}' \
  -e .env

# Use a different connection
prompty -s prompt.prompty --connection production -e .env
```
## Advanced Configuration
### Multiple Model Types

Configure different APIs in the same project:
```json
{
  "connections": {
    "chat": {
      "type": "azure_openai",
      "azure_endpoint": "${env:AZURE_OPENAI_ENDPOINT}",
      "azure_deployment": "gpt-35-turbo"
    },
    "embeddings": {
      "type": "azure_openai",
      "azure_endpoint": "${env:AZURE_OPENAI_ENDPOINT}",
      "azure_deployment": "text-embedding-ada-002"
    },
    "vision": {
      "type": "azure_openai",
      "azure_endpoint": "${env:AZURE_OPENAI_ENDPOINT}",
      "azure_deployment": "gpt-4-vision"
    }
  }
}
```
### Environment-Specific Configuration

Use a different configuration per environment:
```bash
# Development
prompty -s prompt.prompty -e .env.dev

# Staging
prompty -s prompt.prompty -e .env.staging

# Production
prompty -s prompt.prompty -e .env.prod
```

Example environment files:
```bash
# .env.dev
AZURE_OPENAI_ENDPOINT=https://dev-endpoint.openai.azure.com/
AZURE_OPENAI_DEPLOYMENT=gpt-35-turbo
PROMPTY_CONNECTION=development
```
```bash
# .env.prod
AZURE_OPENAI_ENDPOINT=https://prod-endpoint.openai.azure.com/
AZURE_OPENAI_DEPLOYMENT=gpt-4
PROMPTY_CONNECTION=production
```
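Application code can then use the `PROMPTY_CONNECTION` value from the loaded environment file to pick a connection (a minimal sketch; reading the variable explicitly works whether or not the runtime consumes it automatically):

```python
import os

def current_connection() -> str:
    # Fall back to the "default" connection when nothing is configured
    return os.getenv("PROMPTY_CONNECTION", "default")

os.environ["PROMPTY_CONNECTION"] = "production"  # normally set by .env.prod
print(current_connection())  # production
```

The result can be passed straight through, e.g. `prompty.execute("prompt.prompty", connection=current_connection())`.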
### Custom Invokers

Register custom invokers for specialized models:
```python
import prompty
from prompty.invoker import InvokerFactory

class CustomInvoker:
    def invoke(self, prompt, configuration, **kwargs):
        # Custom invocation logic
        pass

# Register the custom invoker
InvokerFactory.add_invoker("custom", CustomInvoker)

# Use it in configuration
response = prompty.execute(
    prompt,
    configuration={"type": "custom", "custom_param": "value"},
)
```
## Security Best Practices
### Environment Variable Management

```yaml
# ✅ Good - use environment variables
model:
  connection:
    azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    api_key: ${env:AZURE_OPENAI_API_KEY}
```
```yaml
# ❌ Bad - hardcoded secrets
model:
  connection:
    azure_endpoint: https://my-endpoint.openai.azure.com/
    api_key: sk-1234567890abcdef  # Never do this!
```
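A quick startup check that the expected variables are actually set catches a missing `.env` entry before the first model call (a minimal sketch; the variable names are taken from the Azure OpenAI setup above):

```python
import os

REQUIRED_VARS = ["AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_API_KEY"]

def missing_vars() -> list:
    # Report every required variable that is unset or empty
    return [name for name in REQUIRED_VARS if not os.environ.get(name)]

missing = missing_vars()
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
```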
### Key Rotation

Support key rotation with fallback configuration:
```python
import os

import prompty

def get_api_key():
    # Try the new key first, fall back to the old one
    return os.getenv("AZURE_OPENAI_API_KEY_NEW") or os.getenv("AZURE_OPENAI_API_KEY")

response = prompty.execute(
    "prompt.prompty",
    configuration={"api_key": get_api_key()},
)
```
### Role-Based Configuration

Use different configurations based on user roles:
```python
def get_config_for_user(user_role: str) -> str:
    if user_role == "admin":
        return "production"
    elif user_role == "developer":
        return "development"
    else:
        return "default"

connection = get_config_for_user(user.role)  # `user` comes from your auth layer
response = prompty.execute("prompt.prompty", connection=connection)
```
## Performance Configuration
### Connection Pooling

Configure connection pooling for high-throughput scenarios:
```json
{
  "connections": {
    "default": {
      "type": "azure_openai",
      "azure_endpoint": "${env:AZURE_OPENAI_ENDPOINT}",
      "connection_pool_size": 10,
      "timeout": 30,
      "retry_count": 3
    }
  }
}
```
### Caching Configuration

Enable response caching:
```python
import json
from functools import lru_cache

import prompty

@lru_cache(maxsize=100)
def cached_execute(prompt_path: str, inputs_json: str):
    # lru_cache requires hashable arguments, so inputs are passed
    # as a canonical JSON string
    return prompty.execute(prompt_path, inputs=json.loads(inputs_json))

# Serialize inputs deterministically so equal inputs hit the cache
inputs = {"name": "Alice", "topic": "AI"}
inputs_json = json.dumps(inputs, sort_keys=True)
result = cached_execute("prompt.prompty", inputs_json)
```
## Monitoring and Logging
### Configuration Logging

Log the resolved configuration for debugging:
```python
import logging

import prompty

# Enable debug logging
logging.basicConfig(level=logging.DEBUG)

# This will log the resolved configuration
response = prompty.execute("prompt.prompty")
```
### Configuration Validation

Validate the configuration at startup:
```python
import prompty

def validate_config():
    try:
        # Test the configuration with a simple prompt
        test_prompt = prompty.headless(
            api="chat",
            content="Test",
            connection="default",
        )
        prompty.execute(test_prompt)
        print("✅ Configuration valid")
    except Exception as e:
        print(f"❌ Configuration error: {e}")

validate_config()
```
## Troubleshooting
### Common Configuration Issues

**Missing environment variables:**
```bash
# Check which variables are loaded
prompty -s prompt.prompty -e .env --verbose
```

**Invalid JSON configuration:**
```python
import json

try:
    config = json.loads(config_string)
except json.JSONDecodeError as e:
    print(f"Invalid JSON: {e}")
```

**Connection issues:**
```python
import prompty

try:
    response = prompty.execute("prompt.prompty")
except Exception as e:
    print(f"Connection error: {e}")
    # Check the endpoint, API key, and deployment name
```

### Debug Configuration Loading
Section titled “Debug Configuration Loading”import promptyfrom prompty.utils import load_global_config
# Load and inspect global configurationconfig = load_global_config()print("Global config:", config)
# Load and inspect prompty fileprompt = prompty.load("prompt.prompty")print("Prompt config:", prompt.model.configuration)Best Practices
## Best Practices
## Next Steps

- Learn about Observability & Tracing for monitoring
- Explore CLI Usage for command-line operations
- Check out Deployment Best Practices for production setups