
nginx_upstream_peers_response_time

The average time to receive the last byte of data from this server.
Dimensions: None
Related Insights (3)
Event Loop Blocking Causes Serial Request Processing (critical)

When NGINX proxies to async application servers (FastAPI, Node.js) but those backends make blocking I/O calls, the event loop stalls, causing serial-like request processing despite async infrastructure. Symptoms include flat throughput curves and rising tail latency even when CPU is moderate.
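The effect is easy to reproduce without any web stack. The sketch below (a self-contained illustration, not code from this product) simulates five concurrent "requests": a synchronous `time.sleep` inside a coroutine stalls the event loop and serializes them, while offloading the same call with `asyncio.to_thread` lets them overlap.

```python
import asyncio
import time

def blocking_handler():
    # Stand-in for a sync DB query or HTTP call made inside an async endpoint
    time.sleep(0.2)

async def bad_endpoint():
    blocking_handler()  # blocks the event loop: requests run serially

async def good_endpoint():
    # Offload the blocking call to the default thread pool; the loop stays free
    await asyncio.to_thread(blocking_handler)

async def serve(handler, n=5):
    start = time.perf_counter()
    await asyncio.gather(*(handler() for _ in range(n)))
    return time.perf_counter() - start

bad = asyncio.run(serve(bad_endpoint))    # roughly 5 x 0.2 s: serialized
good = asyncio.run(serve(good_endpoint))  # roughly 0.2 s: concurrent
print(f"blocking: {bad:.2f}s, offloaded: {good:.2f}s")
```

From NGINX's side this is exactly the signature described above: per-request work is cheap, yet nginx_upstream_peers_response_time climbs because responses queue behind the stalled loop.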

Request Queue Buildup Indicates Worker Exhaustion (critical)

When the backend web server exhausts its worker capacity (e.g., Apache's MaxRequestWorkers under the event MPM, or the equivalent worker/connection limit in the deployment behind NGINX), new requests queue at the load-balancer level and NGINX returns gateway timeouts even when CPU and dependencies appear healthy. This often manifests as moderate CPU (~50-60%) with rising tail latency and 502/504 errors.
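As a hedged sketch, the capacity and timeout settings involved look like this on the NGINX side (directive values are illustrative, not recommendations; the upstream address is hypothetical):

```nginx
# Total connection capacity is roughly worker_processes x worker_connections.
worker_processes auto;
events {
    worker_connections 1024;
}

http {
    upstream app_backend {
        server 127.0.0.1:8000 max_conns=256;  # cap in-flight requests per peer
        keepalive 32;                          # reuse upstream connections
    }
    server {
        location / {
            proxy_pass http://app_backend;
            proxy_connect_timeout 5s;   # fail fast with a 502 if the peer is saturated
            proxy_read_timeout 30s;     # exceeding this surfaces as a 504
        }
    }
}
```

When the backend's worker pool is full, requests either wait out proxy_read_timeout (504) or are refused outright (502), which is why these errors appear while backend CPU still looks moderate.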

HTTP Error Rate Spikes Require Multi-Layer Analysis (critical)

Increases in nginx_server_zone_responses_4xx or nginx_server_zone_responses_5xx require differentiation between client errors (4xx), NGINX configuration issues (502/503), and upstream failures (504, backend 5xx). The same metric can indicate completely different root causes depending on code distribution.
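A minimal triage helper makes the distinction concrete. The mapping below is an illustrative sketch of the layering described above, not an exhaustive or product-defined classification; `upstream_status` stands for the backend's own code (as logged in NGINX's $upstream_status variable), when one exists.

```python
from typing import Optional

def classify_error(status: int, upstream_status: Optional[int] = None) -> str:
    """Rough triage of an NGINX response code into a probable failure layer."""
    if 400 <= status < 500:
        return "client error (bad request, auth failure, not found)"
    if status == 502:
        return "NGINX layer: no valid response from the upstream"
    if status == 503:
        return "NGINX layer: capacity/config (rate limit, no live upstream)"
    if status == 504:
        return "upstream timeout (e.g. proxy_read_timeout exceeded)"
    if status >= 500:
        if upstream_status is not None and upstream_status >= 500:
            return "backend application error passed through by NGINX"
        return "server error originating at the NGINX layer"
    return "not an error"

print(classify_error(404))       # client-side problem
print(classify_error(504))       # upstream too slow
print(classify_error(500, 500))  # backend 5xx proxied through
```

The same 5xx counter spike thus splits into three distinct investigations: client behavior, NGINX configuration, and backend health.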