llama_index.llm.requests
LLM request count
Dimensions: none
Available on: Datadog
Related Insights (3):
LlamaIndex LLM Token Budget Overrun (warning)
Without per-request cost tracking, LlamaIndex agents can exceed token budgets through unmonitored prompt/completion expansion, causing unexpected costs and API rate-limit errors.
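A minimal sketch of the per-request cost tracking this insight calls for. The `TokenBudget` class and the budget value are illustrative assumptions, not part of the LlamaIndex API; in practice the token counts would come from the LLM response metadata.

```python
# Illustrative token-budget guard; class name, budget value, and token
# counts are assumptions for the sketch, not a LlamaIndex API.

class TokenBudgetExceeded(RuntimeError):
    """Raised when cumulative prompt + completion tokens pass the budget."""

class TokenBudget:
    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.used = 0

    def charge(self, prompt_tokens: int, completion_tokens: int) -> None:
        # Record tokens for one LLM request and fail fast on overrun,
        # instead of discovering the cost on the monthly invoice.
        self.used += prompt_tokens + completion_tokens
        if self.used > self.max_tokens:
            raise TokenBudgetExceeded(
                f"budget {self.max_tokens} exceeded: {self.used} tokens used"
            )

budget = TokenBudget(max_tokens=1000)
budget.charge(prompt_tokens=400, completion_tokens=300)  # 700 used, within budget
try:
    budget.charge(prompt_tokens=400, completion_tokens=200)  # 1300 > 1000
except TokenBudgetExceeded as exc:
    print(exc)
```

Charging the budget before dispatching the next agent step turns a silent overrun into an explicit, alertable failure.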
LlamaIndex LLM Request Rate Spike (warning)
An abnormal spike in the LLM request rate indicates potential abuse, runaway agent loops, or unexpected traffic patterns that can exhaust rate limits and inflate costs.
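One way to detect such a spike is a sliding-window count over recent requests. This is a generic sketch; the window size and threshold are assumptions, and a real deployment would feed the counter from the `llama_index.llm.requests` metric rather than in-process timestamps.

```python
# Illustrative sliding-window spike detector; window and threshold
# values are assumptions, not tuned recommendations.
from collections import deque

class RateSpikeDetector:
    """Flag when more than `threshold` requests land within `window_s` seconds."""

    def __init__(self, window_s: float, threshold: int):
        self.window_s = window_s
        self.threshold = threshold
        self.events = deque()  # monotonic timestamps of recent requests

    def record(self, now: float) -> bool:
        self.events.append(now)
        # Evict timestamps that have fallen out of the window.
        while self.events and self.events[0] <= now - self.window_s:
            self.events.popleft()
        return len(self.events) > self.threshold

det = RateSpikeDetector(window_s=60.0, threshold=3)
ticks = [0.0, 1.0, 2.0, 3.0]  # four requests inside one 60 s window
flags = [det.record(t) for t in ticks]
print(flags)  # [False, False, False, True]
```

The fourth request within the window trips the alert; a runaway agent loop issuing requests back-to-back would trip it almost immediately.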
LlamaIndex Query Engine Request Failure (critical)
Query engine failures caused by LLM API errors, retrieval failures, or agent execution errors prevent users from receiving answers when proper error handling and fallback mechanisms are missing.
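The fallback mechanism this insight refers to can be sketched as an ordered chain of query callables: try each in turn and surface an aggregate error only if all fail. The engine callables here are hypothetical stand-ins, not LlamaIndex query engines.

```python
# Illustrative fallback chain; `flaky` and `stable` are hypothetical
# stand-ins for query engines (e.g. primary vs. backup LLM backends).

def query_with_fallback(engines, question: str) -> str:
    """Try each engine in order; return the first successful answer."""
    errors = []
    for engine in engines:
        try:
            return engine(question)
        except Exception as exc:  # LLM API, retrieval, or agent errors
            errors.append(exc)
    raise RuntimeError(f"all {len(engines)} engines failed: {errors}")

def flaky(q: str) -> str:
    raise TimeoutError("upstream LLM timed out")

def stable(q: str) -> str:
    return f"answer to {q!r}"

print(query_with_fallback([flaky, stable], "what is RAG?"))
# answer to 'what is RAG?'
```

Because the chain only raises after every engine has failed, a single backend outage degrades to a fallback answer instead of a user-visible error.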