
Discover Available Models

Model discovery helps you answer "what can this connection use?" before you hard-code `model.id` in a `.prompty` file. Providers return `ModelInfo` records with the model or deployment id and, when available, context-window and modality metadata.
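The exact fields vary by provider, but a `ModelInfo` record can be pictured as a small value object. The sketch below uses a hypothetical stand-in (the real class ships with the runtime and may carry more metadata) to show one common use of discovery results: filtering by context window when the provider reports one.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical stand-in for the runtime's ModelInfo record; field names
# here mirror the Python accessors shown below (id, context_window).
@dataclass
class ModelInfo:
    id: str
    context_window: Optional[int] = None  # None when the provider omits it

def long_context(models: list[ModelInfo], minimum: int) -> list[str]:
    """Return ids of models whose known context window meets `minimum`."""
    return [m.id for m in models
            if m.context_window is not None and m.context_window >= minimum]

catalog = [
    ModelInfo("gpt-4o-mini", 128_000),
    ModelInfo("legacy-8k", 8_192),
    ModelInfo("unknown-window"),  # metadata unavailable for this model
]
print(long_context(catalog, 100_000))  # → ['gpt-4o-mini']
```

Models without reported metadata are skipped rather than assumed to be small, which keeps the filter conservative.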

| Runtime | OpenAI | Foundry / Azure OpenAI | Anthropic |
| --- | --- | --- | --- |
| Python | `list_models()` / `list_models_async()` | `list_models()` / `list_models_async()` | Not yet exposed |
| TypeScript | `listModels()` | `listAzureModels()` | Not yet exposed |
| C# | `OpenAIModels.ListModelsAsync()` | `FoundryModels.ListModelsAsync()` | Not yet exposed |
| Rust | `list_models()` / `list_models_async()` | Not yet exposed | `list_models()` / `list_models_async()` |
```python
from prompty.model import ApiKeyConnection
from prompty.providers.openai.models import list_models

connection = ApiKeyConnection.load({
    "kind": "key",
    "apiKey": "${env:OPENAI_API_KEY}",
})

for model in list_models(connection):
    print(model.id, model.context_window)
```

To target a direct OpenAI-compatible endpoint, set `endpoint` on the same `ApiKeyConnection`.
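As a sketch, the same connection payload with an explicit endpoint, shown as plain data (the gateway URL below is a placeholder; `ApiKeyConnection.load()` accepts the same keys):

```python
# Same shape that ApiKeyConnection.load() consumes in the example above.
# The endpoint URL is a placeholder for your OpenAI-compatible gateway.
connection_settings = {
    "kind": "key",
    "apiKey": "${env:OPENAI_API_KEY}",
    "endpoint": "https://llm-gateway.example.com/v1",
}
```

When `endpoint` is omitted, the provider falls back to the default OpenAI API endpoint.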

```typescript
import { ApiKeyConnection } from "@prompty/core";
import { listModels } from "@prompty/openai";

const models = await listModels(
  new ApiKeyConnection({
    kind: "key",
    apiKey: process.env.OPENAI_API_KEY,
  }),
);

for (const model of models) {
  console.log(model.id, model.contextWindow);
}
```
```csharp
using Prompty.Core;
using Prompty.OpenAI;

var connection = new ApiKeyConnection
{
    ApiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY"),
};

var models = await OpenAIModels.ListModelsAsync(connection);
foreach (var model in models)
{
    Console.WriteLine($"{model.Id} {model.ContextWindow}");
}
```

For Foundry/Azure OpenAI, use `Prompty.Foundry.FoundryModels.ListModelsAsync()` with an `ApiKeyConnection` that includes `Endpoint` and `ApiKey`.
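A minimal sketch of that call, assuming `ApiKeyConnection` exposes an `Endpoint` property as the note above implies; the resource URL and `AZURE_OPENAI_API_KEY` variable name are placeholders for your own resource:

```csharp
using Prompty.Core;
using Prompty.Foundry;

// Placeholder endpoint; substitute your Foundry / Azure OpenAI resource URL.
var connection = new ApiKeyConnection
{
    Endpoint = "https://my-resource.openai.azure.com",
    ApiKey = Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY"),
};

var deployments = await FoundryModels.ListModelsAsync(connection);
foreach (var model in deployments)
{
    Console.WriteLine(model.Id); // deployment ids for this resource
}
```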

```rust
use serde_json::json;

let models = prompty_openai::list_models_async(&json!({
    "kind": "key",
    "apiKey": std::env::var("OPENAI_API_KEY")?,
})).await?;

for model in models {
    println!("{} {:?}", model.id, model.context_window);
}
```

Anthropic model listing is available through `prompty_anthropic::list_models_async()`.
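A fragment mirroring the OpenAI listing above, as a sketch: the Anthropic provider is assumed to take the same connection shape, and `ANTHROPIC_API_KEY` is the conventional (here assumed) environment variable name.

```rust
use serde_json::json;

// Assumes the same connection payload as the prompty_openai example above.
let models = prompty_anthropic::list_models_async(&json!({
    "kind": "key",
    "apiKey": std::env::var("ANTHROPIC_API_KEY")?,
})).await?;

for model in models {
    println!("{} {:?}", model.id, model.context_window);
}
```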

Once you identify the model or deployment id, reference it in the `.prompty` frontmatter:

```
---
name: discovered-model
model:
  id: gpt-4o-mini
  provider: openai
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
---
system:
You are a helpful assistant.
```

If the provider returns deployment ids rather than model names, use the deployment id exactly as returned; a deployment's name may differ from the underlying model it serves.
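For a Foundry/Azure deployment, the frontmatter would carry the discovered deployment id plus the endpoint. This is a sketch only: the deployment name, env var names, and the `provider: foundry` / `endpoint` keys are assumptions to be checked against your runtime's connection schema.

```
---
name: discovered-deployment
model:
  id: my-gpt4o-deployment
  provider: foundry
  connection:
    kind: key
    endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    apiKey: ${env:AZURE_OPENAI_API_KEY}
---
system:
You are a helpful assistant.
```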