Langfuse Metric

langfuse.generation.tokens.completion

Completion token count
Dimensions: None
Available on: Dynatrace (1)

Interface Metrics (1)

Dynatrace
Number of tokens in the completion/output
Dimensions: None

Technical Annotations (22)

Configuration Parameters (7)
monthly_budget_usd (recommended: 5000.0)
Budget threshold for cost alerting (example value from code)
usage_details.input (recommended: response.usage.input_tokens)
Map input token count from the API response
usage_details.output (recommended: response.usage.output_tokens)
Map output token count from the API response
usage_details.total (recommended: response.usage.total_tokens)
Map total token count from the API response
usage.prompt_tokens (recommended: int(usage.prompt_token_count))
Map Gemini prompt tokens to Langfuse format
usage.completion_tokens (recommended: int(usage.candidates_token_count))
Map Gemini completion tokens (not total) to Langfuse format
usage.total_tokens (recommended: int(usage.total_token_count))
Explicitly set total to avoid recalculation errors
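The six mapping parameters above amount to two small translations, which can be sketched in Python. The function names, the attribute shapes of the stand-in response objects, and the dict layouts are assumptions drawn from the recommended values, not a fixed SDK API:

```python
from types import SimpleNamespace

def to_usage_details(usage):
    # Map an Anthropic-style response.usage object onto the Langfuse v3
    # usage_details keys (input/output/total) from the parameters above.
    return {
        "input": usage.input_tokens,
        "output": usage.output_tokens,
        "total": usage.total_tokens,
    }

def gemini_to_usage(usage_metadata):
    # Map Gemini usage_metadata fields onto OpenAI-style usage keys.
    # int() guards against proto scalar wrappers; completion tokens come
    # from candidates_token_count, NOT total_token_count.
    return {
        "prompt_tokens": int(usage_metadata.prompt_token_count),
        "completion_tokens": int(usage_metadata.candidates_token_count),
        # Set total explicitly so downstream code does not recalculate it:
        "total_tokens": int(usage_metadata.total_token_count),
    }

# Illustrative usage with stand-in response objects:
resp_usage = SimpleNamespace(input_tokens=120, output_tokens=48, total_tokens=168)
gemini_meta = SimpleNamespace(prompt_token_count=120, candidates_token_count=48,
                              total_token_count=168)
print(to_usage_details(resp_usage))
print(gemini_to_usage(gemini_meta))
```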
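The monthly_budget_usd parameter pairs with the budget warnings listed under Related Insights. A minimal threshold check, assuming a 90% warning level; the function and constant names are hypothetical:

```python
MONTHLY_BUDGET_USD = 5000.0   # example value from the configuration above
WARN_RATIO = 0.9              # assumed: warn when spend exceeds 90% of budget

def budget_status(month_to_date_cost_usd):
    # Classify month-to-date LLM spend against the monthly budget.
    ratio = month_to_date_cost_usd / MONTHLY_BUDGET_USD
    if ratio >= 1.0:
        return "over_budget"
    if ratio >= WARN_RATIO:
        return "warning"
    return "ok"

print(budget_status(4600.0))  # 92% of budget
```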
Error Signatures (1)
TypeError: Langfuse.update_current_generation() got an unexpected keyword argument 'usage' (exception)
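This TypeError surfaces when v2-style code passes `usage=` to an SDK version whose keyword parameter is named `usage_details`. A self-contained stub reproduces the behaviour; the signature below only mirrors the rename and is not the SDK's full signature:

```python
def update_current_generation(*, usage_details=None, model=None):
    # Stub mirroring the Langfuse v3 rename of `usage` to `usage_details`;
    # the real method accepts more parameters than shown here.
    return {"usage_details": usage_details, "model": model}

try:
    # Old v2-style keyword: raises the TypeError from the signature above.
    update_current_generation(usage={"output": 42})
except TypeError as exc:
    print(exc)

# v3-style call succeeds:
update_current_generation(usage_details={"output": 42})
```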
Technical References (14)
token pricing (concept)
cost tracking (concept)
tiktoken (component)
OpenTelemetry (protocol)
@anthropic-ai/tokenizer (component)
Claude 3 (component)
Python SDK (component)
Bedrock (component)
gen_ai.usage.output_tokens (component)
candidates_token_count (component)
total_token_count (component)
prompt template (concept)
system prompt (component)
token (concept)
Related Insights (11)
Token usage metrics missing from LLM generation logs (warning)
Token usage approaching budget limits triggers cost overrun risk (warning)
LLM monthly cost exceeds 90% of budget (warning)
Tiktoken tokenization causes high CPU usage on large trace inputs (warning)
Zero token counts for Claude models via OpenTelemetry after initial traces (warning)
Python SDK v2.57.1 reports zero tokens for Bedrock Claude calls (warning)
Missing trace flush causes lost token counts in automated tests (warning)
Streaming responses produce zero token counts without manual instrumentation (info)
Model Usage chart displays inconsistent cost calculations (warning)
Gemini token counts misattributed via OpenTelemetry mapping (warning)
Token-bloated prompts driving excessive API costs (warning)
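The missing-trace-flush insight above stems from SDKs that buffer events and deliver them from a background worker: a short-lived test process exits before delivery, and the token counts never arrive. The toy class below illustrates the failure mode; it is not the SDK's implementation, and in the real Python SDK the fix is to call langfuse.flush() before the test process exits:

```python
class BufferedTracer:
    # Toy model of a batching tracer: events are queued locally and only
    # delivered when flush() runs (the real SDK flushes from a worker thread).
    def __init__(self):
        self._queue = []
        self.delivered = []

    def trace(self, event):
        self._queue.append(event)           # buffered, not yet sent

    def flush(self):
        self.delivered.extend(self._queue)  # deliver synchronously
        self._queue.clear()

tracer = BufferedTracer()
tracer.trace({"completion_tokens": 42})
# A test process exiting here would lose the event; flush() prevents that:
tracer.flush()
print(tracer.delivered)
```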