Prompty Specification
bethanyjepnitya
3/6/2025

The Prompty YAML file spec can be found here. Below is a brief description of each section and the attributes within it.

Prompty description attributes:

| Property    | Type   | Description             | Required |
|-------------|--------|-------------------------|----------|
| name        | string | Name of the prompty     | Yes      |
| description | string | Description of the prompty | Yes   |
| version     | string | Version number          | No       |
| authors     | array  | List of prompty authors | No       |
| tags        | tags   | Categorization tags     | No       |

Input/Output Specifications

Inputs

The inputs object defines the expected input format for the prompty:

inputs:
  type: object
  description: "Input specification"

Outputs

The outputs object defines the expected output format:

outputs:
  type: object
  description: "Output specification"

Template Engine

Currently supports:

  • jinja2 (default) - Jinja2 template engine for text processing
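The prompt body is processed with this engine before being sent to the model. As a rough illustration of what Jinja2 substitution does, using the standalone jinja2 Python package (independent of any Prompty runtime):

```python
from jinja2 import Template

# {{placeholders}} in the prompt body are replaced with input values,
# the same substitution style used in the sample prompty below.
template = Template("You are helping {{firstName}} to find answers to their questions.")
rendered = template.render(firstName="Seth")
print(rendered)  # You are helping Seth to find answers to their questions.
```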

Model Configuration

The model section defines how the AI model should be configured and executed.

model:
  api: chat
  configuration:
    # model-specific configuration
  parameters:
    # execution parameters
  response: first

Model API Types

  • chat (default) - For chat-based interactions
  • completion - For text completion tasks

Response Types

This determines whether the full (raw) response or just the first response in the choice array is returned.

  • first (default) - Returns only the first response
  • all - Returns all response choices

Model Providers

Azure OpenAI Configuration

configuration:
  type: azure_openai
  api_key: ${env:OPENAI_API_KEY}
  api_version: "2023-05-15"
  azure_deployment: "my-deployment"
  azure_endpoint: "https://my-endpoint.openai.azure.com"

OpenAI Configuration

configuration:
  type: openai
  name: "gpt-4"
  organization: "my-org"

MaaS (Models as a Service) Configuration

configuration:
  type: azure_serverless
  azure_endpoint: "https://my-endpoint.azureml.ms"

Model Parameters

Common parameters that can be configured for model execution:

| Parameter         | Type    | Description |
|-------------------|---------|-------------|
| response_format   | object  | An object specifying the format that the model must output. |
| seed              | integer | For deterministic sampling. This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend. |
| max_tokens        | integer | The maximum number of tokens that can be generated in the chat completion. |
| temperature       | number  | Sampling temperature (0-1) |
| frequency_penalty | number  | Penalty for frequent tokens |
| presence_penalty  | number  | Penalty for new tokens |
| top_p             | number  | Nucleus sampling probability |
| stop              | array   | Sequences to stop generation |

Setting response_format to { "type": "json_object" } enables JSON mode, which guarantees that the message the model generates is valid JSON.
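Because the message is guaranteed to be valid JSON, it can be parsed directly. A small sketch, where the content string is a hypothetical model reply:

```python
import json

# Hypothetical model reply produced under
# response_format: { "type": "json_object" }
content = '{"product": "Alpine Explorer Tent", "waterproof": true}'

data = json.loads(content)       # no parse error expected in JSON mode
print(data["product"])           # Alpine Explorer Tent
print(data["waterproof"])        # True
```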

Sample Prompty

---
name: ExamplePrompt
description: A prompt that uses context to ground an incoming question
authors:
  - Seth Juarez
model:
  api: chat
  configuration:
    type: azure_openai
    azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    azure_deployment: <your-deployment>
    api_version: 2024-07-01-preview
  parameters:
    max_tokens: 3000
sample:
  firstName: Seth
  context: >
    The Alpine Explorer Tent boasts a detachable divider for privacy, 
    numerous mesh windows and adjustable vents for ventilation, and 
    a waterproof design. It even has a built-in gear loft for storing 
    your outdoor essentials. In short, it's a blend of privacy, comfort, 
    and convenience, making it your second home in the heart of nature!
  question: What can you tell me about your tents?
---

system:
You are an AI assistant who helps people find information. As the assistant, 
you answer questions briefly, succinctly, and in a personable manner using 
markdown and even add some personal flair with appropriate emojis.

# Customer
You are helping {{firstName}} to find answers to their questions.
Use their name to address them in your responses.

# Context
Use the following context to provide a more personalized response to {{firstName}}:
{{context}}

user:
{{question}}

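A prompty file like the one above is YAML frontmatter (between the --- markers) followed by a templated prompt body. A minimal sketch of loading one with PyYAML and jinja2 — an illustration of the file structure, not the official Prompty loader:

```python
import yaml
from jinja2 import Template

# A trimmed-down prompty file as a string.
raw = """---
name: ExamplePrompt
sample:
  firstName: Seth
  question: What can you tell me about your tents?
---
system:
You are helping {{firstName}} to find answers to their questions.

user:
{{question}}
"""

# Split the YAML frontmatter from the templated prompt body.
_, frontmatter, body = raw.split("---", 2)
meta = yaml.safe_load(frontmatter)

# Render the body with the sample inputs, as the jinja2 engine would.
prompt = Template(body).render(**meta["sample"])
print(meta["name"])      # ExamplePrompt
print("Seth" in prompt)  # True
```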