# TokenUsage

Tracks token consumption for a single LLM call. Provider-specific field names (e.g., OpenAI's `prompt_tokens` vs. Anthropic's `input_tokens`) are mapped via `knownAs` augments in the wire directory.
## Class Diagram

```mermaid
---
title: TokenUsage
config:
  look: handDrawn
  theme: colorful
  class:
    hideEmptyMembersBox: true
---
classDiagram
    class TokenUsage {
        +int32 promptTokens
        +int32 completionTokens
        +int32 totalTokens
    }
```
## Yaml Example

```yaml
promptTokens: 150
completionTokens: 42
totalTokens: 192
```

## Properties

| Name | Type | Description |
|---|---|---|
| promptTokens | int32 | Number of tokens in the prompt/input sent to the model |
| completionTokens | int32 | Number of tokens generated in the model’s completion/output |
| totalTokens | int32 | Total tokens consumed (prompt + completion). May be provided by the API or computed. |
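To illustrate how provider-specific field names might be normalized into this schema, here is a minimal Python sketch. The `KNOWN_AS` alias table and the `parse_usage` helper are hypothetical stand-ins for the wire directory's `knownAs` augments, not the actual implementation; the sketch also shows computing `totalTokens` when the API does not report it.

```python
from dataclasses import dataclass

# Hypothetical alias table standing in for the wire directory's knownAs augments.
KNOWN_AS = {
    "promptTokens": ("prompt_tokens", "input_tokens"),
    "completionTokens": ("completion_tokens", "output_tokens"),
    "totalTokens": ("total_tokens",),
}

@dataclass
class TokenUsage:
    promptTokens: int
    completionTokens: int
    totalTokens: int

def parse_usage(raw: dict) -> TokenUsage:
    """Map a raw provider usage payload onto TokenUsage field names."""
    values = {}
    for field, aliases in KNOWN_AS.items():
        for key in (field, *aliases):
            if key in raw:
                values[field] = raw[key]
                break
    # totalTokens may be provided by the API or computed (prompt + completion).
    values.setdefault("totalTokens",
                      values["promptTokens"] + values["completionTokens"])
    return TokenUsage(**values)

# OpenAI-style payload: total reported directly by the API.
openai_usage = parse_usage(
    {"prompt_tokens": 150, "completion_tokens": 42, "total_tokens": 192})

# Anthropic-style payload: no total reported, so it is computed.
anthropic_usage = parse_usage({"input_tokens": 150, "output_tokens": 42})
```

Under these assumptions, both payloads normalize to the same `TokenUsage` values, regardless of which wire names the provider used.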