# CLI Usage

The Prompty CLI provides a convenient way to execute prompts, debug issues, and integrate Prompty into scripts and CI/CD pipelines. It is included with the Python runtime installation.
## Installation

The CLI is automatically installed with the Prompty Python package:

```bash
pip install "prompty[azure]"
```

Verify the installation:

```bash
prompty --version
```

## Basic Usage
### Execute a Prompt

Run a prompty file with the basic command:

```bash
prompty -s path/to/your/prompt.prompty
```

### Using Environment Files

Load environment variables from a file:

```bash
prompty -s prompt.prompty -e .env
```

Example `.env` file:
```bash
AZURE_OPENAI_ENDPOINT=https://your-endpoint.openai.azure.com/
AZURE_OPENAI_API_KEY=your-api-key
AZURE_OPENAI_DEPLOYMENT=gpt-35-turbo
```

### Passing Input Variables

Pass input variables using JSON:

```bash
prompty -s prompt.prompty --inputs '{"name": "Alice", "topic": "AI"}'
```

Or from a JSON file (`inputs.json`):

```json
{
  "customer_name": "John Doe",
  "question": "What are your business hours?"
}
```

```bash
prompty -s prompt.prompty --inputs-file inputs.json
```

## Advanced Options
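Hand-writing the quotes for `--inputs` is a common source of JSON errors. A minimal sketch of building the argument from Python instead, using `json.dumps` for valid JSON and `shlex.quote` for shell-safe embedding (the variable names here are illustrative):

```python
import json
import shlex

inputs = {"name": "Alice", "topic": "AI"}

# json.dumps guarantees syntactically valid JSON;
# shlex.quote makes it safe to paste into a shell command line.
arg = shlex.quote(json.dumps(inputs))
cmd = f"prompty -s prompt.prompty --inputs {arg}"
print(cmd)
```

This avoids the unbalanced-quote mistakes shown later under Common Issues.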
### Specify Model Configuration

Override model configuration from the command line:

```bash
prompty -s prompt.prompty \
  --config '{"type": "azure_openai", "azure_deployment": "gpt-4"}' \
  -e .env
```

### Enable Detailed Tracing
The CLI includes tracing by default. Control trace output:

```bash
# Basic tracing (default)
prompty -s prompt.prompty -e .env

# Verbose tracing
prompty -s prompt.prompty -e .env --verbose

# Save traces to file
prompty -s prompt.prompty -e .env --trace-dir ./traces
```

### Streaming Output

Enable streaming for real-time output:

```bash
prompty -s prompt.prompty --stream -e .env
```

### Interactive Chat Mode
Use the CLI in interactive chat mode for multi-turn conversations:

```bash
prompty -s chat_prompt.prompty --chat -e .env
```

In chat mode:

- Type your messages and press Enter
- Use `/exit` to quit
- Use `/clear` to clear conversation history
- Use `/help` for available commands
## CLI Options Reference

| Option | Short | Description | Example |
|---|---|---|---|
| `--source` | `-s` | Path to prompty file | `-s prompt.prompty` |
| `--env` | `-e` | Environment file path | `-e .env` |
| `--inputs` | `-i` | JSON input variables | `-i '{"name": "Alice"}'` |
| `--inputs-file` | | Input variables from file | `--inputs-file inputs.json` |
| `--config` | `-c` | Model configuration JSON | `-c '{"temperature": 0.7}'` |
| `--connection` | | Connection name | `--connection production` |
| `--stream` | | Enable streaming output | `--stream` |
| `--chat` | | Interactive chat mode | `--chat` |
| `--verbose` | `-v` | Verbose output | `-v` |
| `--trace-dir` | | Directory for trace files | `--trace-dir ./traces` |
| `--output` | `-o` | Output file path | `-o result.txt` |
| `--format` | `-f` | Output format (json, text) | `-f json` |
| `--help` | `-h` | Show help message | `-h` |
| `--version` | | Show version | `--version` |
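When calling the CLI from a script, it is easier to assemble the options as an argument list than as a quoted string. A sketch of a small helper, using only the flags documented above (the function name and defaults are illustrative):

```python
import json

def build_command(source, env_file=".env", inputs=None, fmt=None, output=None):
    """Assemble a prompty CLI invocation as an argv list (no shell quoting needed)."""
    cmd = ["prompty", "-s", source, "-e", env_file]
    if inputs is not None:
        cmd += ["--inputs", json.dumps(inputs)]
    if fmt is not None:
        cmd += ["--format", fmt]
    if output is not None:
        cmd += ["-o", output]
    return cmd

cmd = build_command("prompt.prompty", inputs={"name": "Alice"}, fmt="json", output="out.json")
# Pass the list directly, e.g.: subprocess.run(cmd, check=True)
print(cmd)
```

Passing an argv list to `subprocess.run` sidesteps shell quoting of the JSON entirely.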
## Output Formats

### Text Output (Default)

```bash
prompty -s prompt.prompty -e .env
# Output: Hello! How can I help you today?
```

### JSON Output

```bash
prompty -s prompt.prompty -e .env --format json
```

JSON output includes metadata:
```json
{
  "content": "Hello! How can I help you today?",
  "usage": {
    "prompt_tokens": 45,
    "completion_tokens": 12,
    "total_tokens": 57
  },
  "model": "gpt-35-turbo",
  "finish_reason": "stop"
}
```
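The JSON format is convenient for post-processing. A minimal sketch of extracting the answer and token usage from the metadata structure shown above (the sample string stands in for a real CLI response):

```python
import json

# Sample output matching the documented JSON structure
raw = """{
  "content": "Hello! How can I help you today?",
  "usage": {"prompt_tokens": 45, "completion_tokens": 12, "total_tokens": 57},
  "model": "gpt-35-turbo",
  "finish_reason": "stop"
}"""

result = json.loads(raw)
print(result["content"])                            # the model's answer
print(f"Total tokens: {result['usage']['total_tokens']}")  # cost tracking
```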
### Save to File

```bash
prompty -s prompt.prompty -e .env -o response.txt
```

## Working with Different Invokers
### Azure OpenAI

```bash
# Set up environment
export AZURE_OPENAI_ENDPOINT="https://your-endpoint.openai.azure.com/"
export AZURE_OPENAI_API_KEY="your-api-key"
export AZURE_OPENAI_DEPLOYMENT="gpt-35-turbo"

# Execute
prompty -s prompt.prompty -e .env
```

### OpenAI

```bash
# Set up environment
export OPENAI_API_KEY="sk-your-api-key"

# Execute with OpenAI configuration
prompty -s prompt.prompty \
  --config '{"type": "openai", "model": "gpt-3.5-turbo"}' \
  -e .env
```

### Serverless Models
Section titled “Serverless Models”# GitHub Models exampleexport GITHUB_TOKEN="your-github-token"
prompty -s prompt.prompty \ --config '{"type": "serverless", "endpoint": "https://models.inference.ai.azure.com", "model": "gpt-4o-mini"}' \ -e .envDebugging with CLI
Section titled “Debugging with CLI”Common Issues
Section titled “Common Issues”File not found:
prompty -s nonexistent.prompty# Error: File 'nonexistent.prompty' not foundInvalid JSON inputs:
prompty -s prompt.prompty --inputs '{"name": "Alice"'# Error: Invalid JSON in inputsMissing environment variables:
prompty -s prompt.prompty -e .env --verbose# Will show which environment variables are missingVerbose Mode for Troubleshooting
```bash
prompty -s prompt.prompty -e .env --verbose
```

Verbose output includes:
- Environment variable loading
- Prompt parsing details
- Model configuration
- Request/response details
- Execution timing
## Scripting and Automation

### Batch Processing

Process multiple prompts:

```bash
#!/bin/bash
for prompt in prompts/*.prompty; do
  echo "Processing $prompt..."
  prompty -s "$prompt" -e .env -o "results/$(basename "$prompt" .prompty).txt"
done
```

### CI/CD Integration
Use in GitHub Actions:

```yaml
name: Test Prompts
on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.11'

      - name: Install Prompty
        run: pip install "prompty[azure]"

      - name: Test prompts
        env:
          AZURE_OPENAI_ENDPOINT: ${{ secrets.AZURE_OPENAI_ENDPOINT }}
          AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
        run: |
          for prompt in tests/*.prompty; do
            prompty -s "$prompt" --format json -o "results/$(basename "$prompt" .prompty).json"
          done
```

## Exit Codes
The CLI returns appropriate exit codes for scripting:

- `0`: Success
- `1`: General error (file not found, invalid JSON, etc.)
- `2`: Configuration error
- `3`: Authentication error
- `4`: API error
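These exit codes can drive error handling in shell scripts. A minimal sketch, assuming the codes listed above (the wrapper function name and messages are illustrative):

```shell
#!/usr/bin/env bash
# Illustrative wrapper: run a prompt and react to the documented exit codes.
run_prompt() {
  prompty -s "$1" -e .env -o "$2"
  local status=$?
  case "$status" in
    0) echo "Success: output written to $2" ;;
    2) echo "Configuration error - check your --config JSON" >&2 ;;
    3) echo "Authentication error - check your API key" >&2 ;;
    *) echo "Failed with exit code $status" >&2 ;;
  esac
  return "$status"
}

# Usage: run_prompt prompt.prompty result.txt
```

Returning the original status lets callers still branch on the exact code.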
## Performance Considerations

### Batch Operations

For multiple prompts, consider connection reuse:
```bash
# Instead of multiple CLI calls, use a Python script
python batch_process.py
```

```python
# batch_process.py
import prompty
import prompty.azure

prompts = ["prompt1.prompty", "prompt2.prompty", "prompt3.prompty"]

for prompt_file in prompts:
    result = prompty.execute(prompt_file)
    print(f"{prompt_file}: {result}")
```

### Large Outputs
For large responses, use file output instead of the console:

```bash
prompty -s large_prompt.prompty -e .env -o large_response.txt
```

## Examples
### Customer Support Bot

```bash
# Interactive customer support
prompty -s customer_support.prompty --chat -e .env
```

### Document Summarization

```bash
# Summarize with custom inputs
prompty -s summarize.prompty \
  --inputs '{"document": "path/to/document.txt", "max_length": 200}' \
  -e .env \
  -o summary.txt
```

### Code Review

```bash
# Review code changes
prompty -s code_review.prompty \
  --inputs-file review_context.json \
  --format json \
  -o review_results.json \
  -e .env
```

## Next Steps
- Learn about Python Runtime for programmatic usage
- Explore Observability & Tracing for monitoring
- Check out Advanced Configuration for complex setups