Technologies / LangChain / langfuse.api.usage.cost.total

Tags: LangChain, Metric

langfuse.api.usage.cost.total

Total LLM API cost
Dimensions: None

Technical Annotations (32)

Configuration Parameters (8)
- monthly_budget_usd (recommended: 5000.0): budget threshold for cost alerting (example value from code)
- costDetails.total (recommended: sum of all cost components): required for correct UI total cost display when using custom cost ingestion
- costDetails.input: canonical field name recognized by the UI for input token costs
- costDetails.output: canonical field name recognized by the UI for output token costs
- costDetails.reasoning: canonical field name for reasoning token costs (use this instead of output_reasoning)
- usage_details.input (recommended: response.usage.input_tokens): map the input token count from the API response
- usage_details.output (recommended: response.usage.output_tokens): map the output token count from the API response
- usage_details.total (recommended: response.usage.total_tokens): map the total token count from the API response
Technical References (24)
- Concepts: token pricing, cost tracking, per-trace cost attribution, cache_writes, cache_reads, model definitions, retry, prompt template
- Components: langfuse.trace, metadata, LangfuseGenerationClient, generation.update(), costDetails, usageDetails, @anthropic-ai/tokenizer, Claude 3, OpenLIT, input_cache_read, langchain.js, langfuse-langchain, tokenizer, usage_details, Gemini 2.5 Flash
- Protocols: OpenTelemetry
Related Insights (17)
- [warning] Token usage approaching budget limits triggers cost overrun risk
- [warning] LLM monthly cost exceeds 90% of budget
- [warning] Autonomous agent operations cause unpredictable cost escalation
- [info] High token consumption drives excessive API costs
- [warning] Missing total field in costDetails causes zero or incorrect cost display
- [warning] Non-canonical cost field names break UI cost aggregation
- [warning] Zero token counts for Claude models via OpenTelemetry after initial traces
- [info] OpenLIT integration missing cache token metrics
- [info] Model name with region prefix requires custom definition for token calculation
- [warning] Cached prompt tokens counted twice in cost calculation
- [warning] Model Usage chart displays inconsistent cost calculations
- [warning] Claude model tokenizer inaccuracies cause cost miscalculation
- [info] Model definition changes do not retroactively update historical cost data
- [warning] Custom model definition mismatches cause incorrect cost calculations
- [warning] Gemini cost calculations inflated by 8-10x due to token mismatch
- [warning] Background API retries silently inflating costs
- [info] Aggregate cost dashboards lack prompt-level attribution
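The first two insights describe budget-threshold alerting against this metric. A minimal sketch of that check, assuming the 5000.0 `monthly_budget_usd` example value and the 90% threshold mentioned above (`check_budget` is a hypothetical helper, not a Langfuse API):

```python
# Budget-alert sketch for langfuse.api.usage.cost.total.
# Budget value and 90% threshold are taken from this page; the function
# itself is illustrative.

MONTHLY_BUDGET_USD = 5000.0  # example value from configuration parameters
ALERT_RATIO = 0.9            # "cost exceeds 90% of budget" insight

def check_budget(month_to_date_cost_usd: float,
                 budget_usd: float = MONTHLY_BUDGET_USD) -> str:
    """Classify month-to-date spend against the configured budget."""
    if month_to_date_cost_usd >= budget_usd:
        return "overrun"   # spend has exceeded the budget
    if month_to_date_cost_usd >= ALERT_RATIO * budget_usd:
        return "warning"   # past 90% of budget, raise an alert
    return "ok"

print(check_budget(4600.0))  # 92% of 5000.0
```

Feeding this check with per-trace costs (rather than only the aggregate) also addresses the attribution gap noted in the last insight.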