In the last section, we set up our development environment by installing the Visual Studio Code extension. In this section, we'll create, configure, and run our first Prompty asset.
To run the Prompty, you will need a valid deployed Large Language Model endpoint that you can configure. The Prompty specification currently supports three model providers:

- `azure_openai` - Azure OpenAI models, deployed to your own Azure subscription
- `openai` - OpenAI models, accessed with an OpenAI account
- `serverless` - serverless endpoints (e.g., Models-as-a-Service deployments from the Azure AI model catalog)
For our first Prompty, we'll focus on the Azure OpenAI option.
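For reference, the provider is selected by the `type` field in the asset's model configuration - a snippet in the shape of the frontmatter we create below:

```yaml
model:
  api: chat
  configuration:
    type: azure_openai   # or: openai, serverless
```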
Open the Visual Studio Code editor, then click the File Explorer icon to view your project filesystem. Select a destination folder (e.g., the repository root) and right-click to get a drop-down menu. Look for the New Prompty option and click it.
A `basic.prompty` file is created with default content:

```
---
name: ExamplePrompt
description: A prompt that uses context to ground an incoming question
authors:
  - Seth Juarez
model:
  api: chat
  configuration:
    type: azure_openai
    azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    azure_deployment: <your-deployment>
    api_version: 2024-07-01-preview
  parameters:
    max_tokens: 3000
sample:
  firstName: Seth
  context: >
    The Alpine Explorer Tent boasts a detachable divider for privacy,
    numerous mesh windows and adjustable vents for ventilation, and
    a waterproof design. It even has a built-in gear loft for storing
    your outdoor essentials. In short, it's a blend of privacy, comfort,
    and convenience, making it your second home in the heart of nature!
  question: What can you tell me about your tents?
---

system:
You are an AI assistant who helps people find information. As the assistant,
you answer questions briefly, succinctly, and in a personable manner using
markdown and even add some personal flair with appropriate emojis.

# Customer
You are helping {{firstName}} to find answers to their questions.
Use their name to address them in your responses.

# Context
Use the following context to provide a more personalized response to {{firstName}}:
{{context}}

user:
{{question}}
```
You can now update the file contents as shown below. Here, we have made three changes:

1. Updated the frontmatter metadata (name, description, authors) to reflect the new use case.
2. Updated the model configuration to target a `gpt-4` deployment, reading the endpoint from an environment variable.
3. Replaced the sample data and template with a Shakespearean writing prompt, including a few examples of the style.
Note that we updated the model to get its endpoint information from an `AZURE_OPENAI_ENDPOINT` environment variable. Make sure you set this in the terminal, or in a `.env` file at the root of the repo.
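For example, a minimal `.env` file at the repository root could contain just this one line (with a placeholder for your own resource name):

```
AZURE_OPENAI_ENDPOINT=https://<your-resource-name>.openai.azure.com
```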
The updated asset, saved as `shakespeare.prompty`, looks like this:

```
---
name: Shakespearean Writing Prompty
description: A prompt that answers questions in Shakespearean style using GPT-4
authors:
  - Bethany Jepchumba
model:
  api: chat
  configuration:
    type: azure_openai
    azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    azure_deployment: gpt-4
  parameters:
    max_tokens: 3000
sample:
  question: Please write a short text inviting friends to a Game Night.
---

system:
You are a Shakespearean writing assistant who speaks in a Shakespearean style. You help people come up with creative ideas and content like stories, poems, and songs that use a Shakespearean style of writing, including words like "thou" and "hath".

Here are some examples of Shakespeare's style:
- Romeo, Romeo! Wherefore art thou Romeo?
- Love looks not with the eyes, but with the mind; and therefore is winged Cupid painted blind.
- Shall I compare thee to a summer's day? Thou art more lovely and more temperate.

example:
user: Please write a short text turning down an invitation to dinner.
assistant: Dearest,
Regretfully, I must decline thy invitation.
Prior engagements call me hence. Apologies.

user:
{{question}}
```
You can now run the Prompty by clicking the Play button (top right) in the editor pane of your `.prompty` file. The first run triggers an authentication step: this ensures that we use Azure managed identity to authenticate with the specified Azure OpenAI endpoint, so we don't need explicitly defined keys. You only need to authenticate once. You can then iterate rapidly on Prompty content ("prompt engineering") and run it for instant responses, which appear in the Output tab of the Visual Studio Code terminal. We recommend clearing the output terminal after each run, for clarity. The output should look something like this:

```
Friends most dear,
I do entreat thee to join me for an evening of mirth and games anon.
Let us gather our wits and spirits for a night of sport and jest.
Thy presence would bring great joy.
Yours in fellowship,
[Your Name]
```
The `.prompty` file is an example of a Prompty asset that respects the schema defined in the Prompty specification. The asset class is language-agnostic (not tied to any language or framework), using a markdown format with YAML to specify metadata ("frontmatter") and content ("template") for a single prompt-based interaction with a Large Language Model. By doing this, it unifies the prompt content and its execution context in a single asset package, making it easy for developers to rapidly iterate on prompts for prototyping.
The frontmatter (metadata) is structured as YAML, and specifies the prompt inputs, outputs, and model configuration parameters - with optional sample data for testing. The content (template) forms the body of the asset and is a Jinja2 template that allows for dynamic data binding of variables from Prompty asset inputs. The ability to specify sample data for testing (as an inline object or an external filename) allows us to use Prompty to iteratively define the shape of the data required in complex flows at the granularity of single LLM calls.
The asset is then activated by a Prompty runtime as follows:

1. Load: the asset is parsed, and the YAML frontmatter is validated against the Prompty specification.
2. Render: the template is rendered, binding input data (by default, the sample data) to template variables like `{{question}}`.
3. Execute: the rendered prompt is sent to the configured model endpoint, and the response is returned.
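To make this concrete, here is a minimal sketch of the same load-render-execute flow using the core Python runtime, previewed here and covered in detail in the next section. It assumes the `prompty` package is installed with Azure support (`pip install prompty[azure]`) and that `AZURE_OPENAI_ENDPOINT` is set:

```python
# Minimal sketch of runtime activation, assuming `pip install prompty[azure]`
# and an AZURE_OPENAI_ENDPOINT environment variable (or .env file).
import prompty
import prompty.azure  # registers the Azure OpenAI invoker with the runtime

# Load the asset, render the template, and execute the model invocation.
# With no explicit inputs, the sample data from the frontmatter is used.
response = prompty.execute("shakespeare.prompty")
print(response)
```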
Prompty assets must be configured with a model that is the target for the prompt request. However, this configuration can happen at different levels, with a hierarchy that decides which value takes final precedence during execution.
1. Visual Studio Code default: the extension shows the current default model in the Prompty default tab in the bottom toolbar. If a Prompty asset does not specify an explicit model configuration, the invocation will use this default model.
2. Project configuration file: a `prompty.json` file can provide a default configuration. This is equivalent to the Visual Studio Code default, but applied to the case when we execute the Prompty from code (vs. the VS Code editor).
3. Asset frontmatter: the model configuration specified in the asset itself takes final precedence. Property values can be hardcoded (e.g., `gpt-4`) or reference environment variables (`${env:AZURE_OPENAI_ENDPOINT}`). The latter is the recommended approach, ensuring that secrets don't get checked into version control with asset file updates.

```yaml
model:
  api: chat
  configuration:
    type: azure_openai
    azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    azure_deployment: gpt-4
  parameters:
    max_tokens: 3000
```
Tip 1: Configure Model in VS Code. If you use the same model configuration repeatedly, or across multiple Prompty assets, consider setting it up as a named model in Visual Studio Code. To configure your model, navigate to Settings > Extensions > Prompty > Edit in settings.json and edit your `settings.json` file. Update an existing named model (e.g., `default`) or create a new one. You can update this at the user level (across your assets) or the workspace level (across team assets).
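As an illustration only - the exact setting key is an assumption here, so check the extension's settings page for the current schema - a named model entry in `settings.json` might look like this:

```json
{
  "prompty.modelConfigurations": [
    {
      "name": "default",
      "type": "azure_openai",
      "api_version": "2024-07-01-preview",
      "azure_endpoint": "${env:AZURE_OPENAI_ENDPOINT}",
      "azure_deployment": "gpt-4"
    }
  ]
}
```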
Tip 2: Use Azure Managed Identity. If you are using Azure OpenAI or Azure-managed models, opt for keyless authentication by logging into Azure Active Directory (from the Prompty extension, once per session) and using that credential to authenticate with your Azure OpenAI model deployments. To trigger this auth flow, leave the `api_key` property value empty in your model configuration.
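For example, adapting the configuration above, leaving `api_key` empty opts you into the keyless flow:

```yaml
model:
  api: chat
  configuration:
    type: azure_openai
    azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    azure_deployment: gpt-4
    api_key: ""   # left empty to trigger keyless (managed identity) auth
```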
Tip 3: Use Environment Variables. As shown above, property values can be defined using environment variables in the format `${env:ENVAR_NAME}`. By default, the Visual Studio Code extension will look for a `.env` file in the root folder of the repository containing Prompty assets - create and update that file (and ensure it is excluded from version control via `.gitignore`). If you use GitHub Codespaces, you can also store environment variables as Codespaces secrets that get automatically injected into the runtime at launch.
By default, executing the Prompty will open the Output tab in the Visual Studio Code terminal and show a brief response with the model output. But what if you want more detail? Prompty provides features that can help. One is the Prompty Output (verbose) option, found in a drop-down menu in the Visual Studio Code terminal (at the top left of the terminal panel). Selecting this option gives you verbose output, which includes the request and response details, with useful information like token usage for the execution.

In this section, we focused on Prompty asset creation and execution from the Visual Studio Code editor (no coding involved). Here, the Visual Studio Code extension acts as the default runtime, loading the asset, rendering the template, and executing the model invocation transparently. But this approach will not work when we need to orchestrate complex flows with multiple assets, or when we need to automate execution in CI/CD pipelines.
This is where the Prompty Runtime comes in. The runtime converts the Prompty asset into code that uses a preferred language or framework. We can think of "runtimes" in two categories:

1. Core runtimes: provide the basic load-render-execute capability in a specific programming language (e.g., Python), letting you invoke Prompty assets directly from application code.
2. Framework-enhanced runtimes: integrate Prompty assets into orchestration frameworks (e.g., Semantic Kernel or LangChain) for building more complex flows.
In the next section, we'll explore how to go from Prompty To Code, using the core Python runtime.
Want to Contribute To the Project? - Updated Guidance Coming Soon.