# TokenUsage

Tracks token consumption for a single LLM call. Provider-specific field names (e.g., OpenAI's `prompt_tokens` vs Anthropic's `input_tokens`) are mapped via `knownAs` augments in the wire directory.

```mermaid
---
title: TokenUsage
config:
  look: handDrawn
  theme: colorful
  class:
    hideEmptyMembersBox: true
---
classDiagram
    class TokenUsage {
        +int32 promptTokens
        +int32 completionTokens
        +int32 totalTokens
    }
```
Example:

```yaml
promptTokens: 150
completionTokens: 42
totalTokens: 192
```
| Name | Type | Description |
| --- | --- | --- |
| `promptTokens` | int32 | Number of tokens in the prompt/input sent to the model |
| `completionTokens` | int32 | Number of tokens generated in the model's completion/output |
| `totalTokens` | int32 | Total tokens consumed (prompt + completion). May be provided by the API or computed. |
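The provider-name mapping and the fallback computation of `totalTokens` described above can be sketched as follows. This is a minimal illustration, not the actual `knownAs` wire machinery; the `PROVIDER_ALIASES` table and `parse_usage` helper are hypothetical names introduced here for the example.

```python
from dataclasses import dataclass

# Illustrative alias table: provider-specific wire names mapped onto the
# canonical TokenUsage field names (the real mapping lives in the wire
# directory's knownAs augments).
PROVIDER_ALIASES = {
    "prompt_tokens": "promptTokens",          # OpenAI
    "input_tokens": "promptTokens",           # Anthropic
    "completion_tokens": "completionTokens",  # OpenAI
    "output_tokens": "completionTokens",      # Anthropic
    "total_tokens": "totalTokens",
}

@dataclass
class TokenUsage:
    promptTokens: int = 0
    completionTokens: int = 0
    totalTokens: int = 0

def parse_usage(raw: dict) -> TokenUsage:
    """Map a provider usage payload onto the canonical schema."""
    usage = TokenUsage()
    for key, value in raw.items():
        canonical = PROVIDER_ALIASES.get(key)
        if canonical is not None:
            setattr(usage, canonical, value)
    # If the API did not supply a total, compute prompt + completion.
    if usage.totalTokens == 0:
        usage.totalTokens = usage.promptTokens + usage.completionTokens
    return usage
```

For instance, an OpenAI-style payload `{"prompt_tokens": 150, "completion_tokens": 42}` yields the example values above, with `totalTokens` computed as 192 since the payload omits a total.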