From Prompty to Code
In this section, you’ll learn how to convert Prompty assets to Python code and build your first production-ready application.
Prerequisites
Before we begin, ensure you have:
- Python 3.10 or higher installed
- Visual Studio Code with the Prompty extension
- A completed Prompty asset from the previous section
- Access to Azure OpenAI or another supported model provider
Installation
First, install the Prompty Python package with the appropriate extras:
```bash
# For Azure OpenAI
pip install "prompty[azure]"

# For OpenAI
pip install "prompty[openai]"

# For GitHub Models and serverless
pip install "prompty[serverless]"

# Install all providers
pip install "prompty[azure,openai,serverless]"
```
Generate Code from Prompty Asset
The VS Code extension can automatically generate Python code from your Prompty assets:
- Open your Prompty file in VS Code
- Right-click on the file in the Explorer
- Select “Add Code” → “Add Prompty Code”
- Choose your preferred template (basic, with tracing, async, etc.)
This generates a Python file with the necessary imports and execution logic.
Generated Code Example
Here’s what the generated code looks like:
```python
import prompty
import prompty.azure
from prompty.tracer import trace, Tracer, console_tracer, PromptyTracer

# Configure tracing (optional)
Tracer.add("console", console_tracer)
json_tracer = PromptyTracer(output_dir="./traces")
Tracer.add("json", json_tracer.tracer)

@trace
def run(question: str) -> str:
    """Execute the Shakespeare prompt with the given question"""
    # Execute the prompty file
    response = prompty.execute(
        "shakespeare.prompty",
        inputs={"question": question}
    )
    return response

if __name__ == "__main__":
    # Example usage
    result = run("What is the meaning of life?")
    print(result)
```
Environment Configuration
Set up Environment Variables
Create a .env file in your project directory:
```
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
AZURE_OPENAI_API_KEY=your-api-key
AZURE_OPENAI_DEPLOYMENT=gpt-35-turbo
AZURE_OPENAI_API_VERSION=2024-10-21
```
Load Environment Variables
Update your Python code to load environment variables:
```python
import os
from dotenv import load_dotenv
import prompty
import prompty.azure

# Load environment variables
load_dotenv()

def run(question: str) -> str:
    """Execute the Shakespeare prompt with environment configuration"""
    response = prompty.execute(
        "shakespeare.prompty",
        inputs={"question": question}
    )
    return response

if __name__ == "__main__":
    # Verify environment variables are loaded
    if not os.getenv("AZURE_OPENAI_ENDPOINT"):
        print("Error: AZURE_OPENAI_ENDPOINT not found in environment")
        exit(1)

    result = run("To be or not to be?")
    print(result)
```
Building a Simple Application
Let’s create a more comprehensive application:
app.py
```python
import os
import sys
from dotenv import load_dotenv
import prompty
import prompty.azure
from prompty.tracer import trace, Tracer, console_tracer

# Load environment variables
load_dotenv()

# Configure tracing for development
Tracer.add("console", console_tracer)

@trace
def chat_with_shakespeare(question: str) -> str | None:
    """Execute the Shakespeare prompt, returning None on failure"""
    try:
        return prompty.execute(
            "shakespeare.prompty",
            inputs={"question": question}
        )
    except Exception as e:
        print(f"Error executing prompt: {e}")
        return None

def main():
    """Main application loop"""
    print("🎭 Welcome to the Shakespeare Chat App!")
    print("Type 'quit' to exit, 'help' for commands")

    while True:
        user_input = input("\n🤔 Ask Shakespeare: ").strip()

        if user_input.lower() in ['quit', 'exit', 'q']:
            print("👋 Farewell!")
            break
        elif user_input.lower() == 'help':
            print("Commands:")
            print("  help - Show this help")
            print("  quit - Exit the application")
            print("  Just type any question to chat with Shakespeare!")
            continue
        elif not user_input:
            continue

        print("🎭 Shakespeare is thinking...")
        response = chat_with_shakespeare(user_input)

        if response:
            print(f"🎭 Shakespeare: {response}")
        else:
            print("😔 Shakespeare seems to be having trouble responding.")

if __name__ == "__main__":
    # Verify required environment variables
    required_vars = [
        "AZURE_OPENAI_ENDPOINT",
        "AZURE_OPENAI_API_KEY",
        "AZURE_OPENAI_DEPLOYMENT"
    ]

    missing_vars = [var for var in required_vars if not os.getenv(var)]

    if missing_vars:
        print(f"❌ Missing required environment variables: {', '.join(missing_vars)}")
        print("Please check your .env file and try again.")
        sys.exit(1)

    main()
```
Running Your Application
Command Line Execution
Run your application from the command line:
```bash
# Basic execution
python app.py

# With verbose tracing
PROMPTY_TRACE_LEVEL=DEBUG python app.py
```
In VS Code
- Open the integrated terminal (Ctrl+`)
- Navigate to your project directory
- Run the application:
```bash
python app.py
```
Error Handling and Debugging
Add Robust Error Handling
Section titled “Add Robust Error Handling”import loggingfrom typing import Optional
# Configure logginglogging.basicConfig( level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')logger = logging.getLogger(__name__)
def safe_execute_prompt(prompt_path: str, inputs: dict) -> Optional[str]: """Safely execute a prompt with comprehensive error handling""" try: logger.info(f"Executing prompt: {prompt_path}")
response = prompty.execute(prompt_path, inputs=inputs)
logger.info("Prompt executed successfully") return response
except FileNotFoundError: logger.error(f"Prompt file not found: {prompt_path}") return None except ConnectionError: logger.error("Failed to connect to AI service") return None except Exception as e: logger.error(f"Unexpected error: {e}") return None
# Usageresponse = safe_execute_prompt("shakespeare.prompty", {"question": "Hello"})if response: print(response)else: print("Failed to get response")Next Steps
Now that you have a working application, explore these advanced topics:
1. Add Multiple Prompts
Create different Prompty files for different use cases and route requests appropriately.
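As a sketch of how such routing might look, the snippet below maps user input to a Prompty asset with simple keyword matching. The file names (`summarizer.prompty`, `assistant.prompty`) and keyword lists are hypothetical; only `shakespeare.prompty` comes from this guide.

```python
# Hypothetical routing table: intent name -> Prompty asset path.
# Only "shakespeare.prompty" appears earlier in this guide; the rest are examples.
PROMPT_ROUTES = {
    "poetry": "shakespeare.prompty",
    "summary": "summarizer.prompty",
    "default": "assistant.prompty",
}

def route_prompt(user_input: str) -> str:
    """Pick a Prompty file based on simple keyword matching."""
    text = user_input.lower()
    if any(word in text for word in ("poem", "sonnet", "verse")):
        return PROMPT_ROUTES["poetry"]
    if any(word in text for word in ("summarize", "tl;dr")):
        return PROMPT_ROUTES["summary"]
    return PROMPT_ROUTES["default"]

def run(user_input: str) -> str:
    """Execute whichever prompt the router selects."""
    import prompty  # deferred so routing logic can be tested without a model configured
    import prompty.azure
    return prompty.execute(route_prompt(user_input), inputs={"question": user_input})
```

In a real application you would likely replace the keyword matching with a dedicated classifier prompt, but the routing-table shape stays the same.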
2. Add Web Interface
Create a simple web API with FastAPI to serve your prompts over HTTP.
3. Explore Advanced Features
- Observability & Tracing - Monitor your application
- Performance Optimization - Scale your application
- Configuration - Manage different environments
- CLI Usage - Use the command-line interface
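For the configuration point above, one common pattern is to key behavior off an environment name. The sketch below assumes an `APP_ENV` variable and the `AZURE_OPENAI_*` variables used earlier in this guide; the variable name `APP_ENV` and the defaults are illustrative conventions, not part of Prompty.

```python
import os

def load_settings() -> dict:
    """Collect provider settings for the current environment.

    APP_ENV is a hypothetical convention (e.g. "development" or "production");
    the AZURE_OPENAI_* names match the .env file shown earlier in this guide.
    """
    env = os.getenv("APP_ENV", "development")
    return {
        "environment": env,
        "endpoint": os.getenv("AZURE_OPENAI_ENDPOINT", ""),
        "deployment": os.getenv("AZURE_OPENAI_DEPLOYMENT", "gpt-35-turbo"),
        # Enable console tracing only outside production
        "trace": env != "production",
    }
```

You could then load a different .env file per environment (for example `.env.production`) via `load_dotenv(".env.production")` before calling `load_settings()`.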
Common Issues and Solutions
Issue: ImportError
Section titled “Issue: ImportError”# Error: No module named 'prompty'pip install "prompty[azure]"
# Error: No module named 'dotenv'pip install python-dotenvIssue: Authentication Errors
```python
# Verify your credentials
import os
print(f"Endpoint: {os.getenv('AZURE_OPENAI_ENDPOINT')}")
print(f"API Key: {'***' if os.getenv('AZURE_OPENAI_API_KEY') else 'Not set'}")
print(f"Deployment: {os.getenv('AZURE_OPENAI_DEPLOYMENT')}")
```
Issue: File Not Found
```python
import os

prompt_path = "shakespeare.prompty"
if os.path.exists(prompt_path):
    print(f"✅ Found prompt file: {prompt_path}")
else:
    print(f"❌ Prompt file not found: {prompt_path}")
    print(f"Current directory: {os.getcwd()}")
    print("Files in directory:", os.listdir("."))
```
Congratulations! You’ve successfully converted your Prompty asset to code and built your first application. 🎉
Additional Supported Runtimes
The Prompty runtime also supports additional frameworks, such as [LangChain](/tutorials/using-langchain/) and [Semantic Kernel](/tutorials/using-semantic-kernel/). The tutorials section will cover how to generate code from Prompty assets using these runtimes. (coming soon)
Want to contribute to the project? Updated guidance coming soon.