Langfuse Metric

langfuse.api.usage.tokens.total

Total LLM tokens processed
Dimensions: None
Available on: Prometheus (1), Dynatrace (1)
Interface Metrics (2)
Prometheus
Total number of LLM tokens processed
Dimensions: None
Dynatrace
Total number of tokens (prompt + completion)
Dimensions: None

Technical Annotations (15)

Configuration Parameters (4)
monthly_budget_usd (recommended: 5000.0)
Budget threshold for cost alerting (example value from code).
usage_details.input (recommended: response.usage.input_tokens)
Maps the input token count from the API response.
usage_details.output (recommended: response.usage.output_tokens)
Maps the output token count from the API response.
usage_details.total (recommended: response.usage.total_tokens)
Maps the total token count from the API response.
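The three usage_details parameters above all point into the response's usage object. A minimal sketch of that mapping, assuming an Anthropic-style response shape (response.usage.input_tokens / output_tokens / total_tokens, as the recommended values suggest); the Usage and Response classes here are stand-ins for your client library's types:

```python
# Sketch: build the usage_details mapping listed above from an API response.
# The response shape is an assumption inferred from the recommended values.
from dataclasses import dataclass


@dataclass
class Usage:
    input_tokens: int
    output_tokens: int

    @property
    def total_tokens(self) -> int:
        # total = prompt + completion, matching the Dynatrace description above
        return self.input_tokens + self.output_tokens


@dataclass
class Response:
    usage: Usage


def to_usage_details(response: Response) -> dict:
    """Map token counts onto the usage_details.* keys from the configuration."""
    return {
        "input": response.usage.input_tokens,
        "output": response.usage.output_tokens,
        "total": response.usage.total_tokens,
    }


response = Response(usage=Usage(input_tokens=120, output_tokens=48))
print(to_usage_details(response))  # {'input': 120, 'output': 48, 'total': 168}
```

Keeping the mapping in one helper makes it easy to swap in a different provider's usage field names without touching the instrumentation code.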
Technical References (11)
token pricing (concept)
cost tracking (concept)
langfuse.trace (component)
metadata (component)
OpenTelemetry (protocol)
@anthropic-ai/tokenizer (component)
Claude 3 (component)
tokenizer (component)
prompt template (concept)
system prompt (component)
token (concept)
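The token pricing and cost tracking references combine with the token counts to produce a spend figure. A minimal sketch, using illustrative per-million-token prices (hypothetical numbers, not actual Claude 3 pricing) and the recommended monthly_budget_usd of 5000.0 from the configuration above:

```python
# Sketch: derive cost from token counts and test it against the budget threshold.
# PRICE_PER_MTOK values are illustrative assumptions, not real pricing.
PRICE_PER_MTOK = {"input": 3.00, "output": 15.00}  # USD per million tokens (assumed)


def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Blend input and output token counts into a USD cost estimate."""
    return (
        input_tokens * PRICE_PER_MTOK["input"]
        + output_tokens * PRICE_PER_MTOK["output"]
    ) / 1_000_000


def budget_alert(month_cost_usd: float, monthly_budget_usd: float = 5000.0) -> bool:
    """True once spend crosses 90% of the monthly budget."""
    return month_cost_usd >= 0.9 * monthly_budget_usd


cost = estimate_cost_usd(input_tokens=2_000_000, output_tokens=500_000)
print(round(cost, 2))        # 13.5
print(budget_alert(4700.0))  # True (4700 >= 0.9 * 5000)
```

The 90% factor mirrors the "exceeds 90% of budget" condition described in the related insights.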
Related Insights (6)
Token usage approaching budget limits triggers cost overrun risk (warning)
LLM monthly cost exceeds 90% of budget (warning)
High token consumption drives excessive API costs (info)
Zero token counts for Claude models via OpenTelemetry after initial traces (warning)
Claude model tokenizer inaccuracies cause cost miscalculation (warning)
Token-bloated prompts driving excessive API costs (warning)
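Since the metric is available on Prometheus, the budget insights above can be wired into an alerting rule. A sketch under two assumptions: the metric is exposed in Prometheus exposition format as langfuse_api_usage_tokens_total (name mangling assumed, not confirmed by this catalog), and cost is approximated with a flat blended USD-per-token factor:

```yaml
# Sketch of a Prometheus alerting rule for the cost-overrun insight above.
# Both the exported metric name and the 0.000009 USD/token blended price
# are assumptions for illustration; 5000 matches monthly_budget_usd.
groups:
  - name: langfuse-token-budget
    rules:
      - alert: LlmMonthlyCostNearBudget
        expr: increase(langfuse_api_usage_tokens_total[30d]) * 0.000009 > 0.9 * 5000
        for: 1h
        labels:
          severity: warning
        annotations:
          summary: "LLM monthly cost exceeds 90% of budget"
```

A recording rule that precomputes the cost expression would keep the alert readable if per-model prices diverge.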