Redis Connection Pool Exhaustion - Max Clients Reached
critical
Incident Response
Redis is rejecting new connections after hitting the maxclients limit, causing application timeouts and connection errors.
Prompt: “We're getting 'ERR max number of clients reached' errors and our app can't connect to Redis - should I increase maxclients, fix connection leaks in the application, or scale out the Redis cluster?”
Agent Playbook
When an agent encounters this scenario, Schema provides these diagnostic steps automatically.
When diagnosing Redis connection pool exhaustion, first confirm you're actually hitting the maxclients limit by checking rejected connections. Then investigate whether the root cause is a connection leak in your application, application-side pool misconfiguration, or blocking operations holding connections longer than necessary. Only increase maxclients after ruling out these application-level issues, as that's treating the symptom rather than the cause.
1. Confirm Redis is rejecting connections at the maxclients limit
Check `redis.connections.rejected` for incrementing values while `redis.clients.connected` approaches or equals your configured maxclients limit. If rejections are increasing but connected clients are well below maxclients, the problem is actually application-side pool exhaustion, not Redis limits. This insight from `connection-rejection-cascade-from-maxclient-saturation` is critical — Redis connection errors can appear healthy on CPU/memory metrics while new connections fail silently.
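This triage can be sketched as a small helper that classifies samples of the metrics above. The function name, thresholds (95% and 80%), and return labels are illustrative, not part of any Redis API:

```python
def diagnose_rejections(connected: int, rejected_delta: int, maxclients: int) -> str:
    """Classify where connection failures originate.

    connected      -- current connected_clients (INFO clients)
    rejected_delta -- growth in rejected_connections since the last sample
    maxclients     -- value of CONFIG GET maxclients

    Thresholds are illustrative; tune them for your fleet.
    """
    if rejected_delta > 0 and connected >= 0.95 * maxclients:
        # Redis itself is out of connection slots.
        return "redis-maxclients-saturation"
    if rejected_delta > 0 and connected < 0.8 * maxclients:
        # Redis has headroom; the failures come from the app's own pool.
        return "application-pool-exhaustion"
    return "no-rejections-or-inconclusive"
```

Feed it two consecutive samples of `INFO stats` / `INFO clients`; only the first outcome justifies touching maxclients.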
2. Identify connection leaks in application code
Monitor `redis.clients.connected` over time — if it steadily climbs without dropping during low-traffic periods, you have a connection leak. Compare the rate of `redis.connections.received` (new connections) to expected application restarts and traffic patterns. Properly functioning connection pools should maintain steady connected client counts. If you see continuous growth, audit your application code for connections that aren't being returned to the pool or properly closed.
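A rough leak heuristic over `redis.clients.connected` samples taken during a low-traffic window might look like the following (the function and its thresholds are a sketch, not a standard detector):

```python
def looks_like_leak(samples: list[int], tolerance: int = 0) -> bool:
    """Heuristic leak check over connected_clients samples from a
    low-traffic window: a healthy pool plateaus or shrinks, while a
    leak climbs steadily and never drops back."""
    if len(samples) < 3:
        return False  # not enough data to call it
    rises = sum(1 for a, b in zip(samples, samples[1:]) if b > a)
    drops = sum(1 for a, b in zip(samples, samples[1:]) if b < a - tolerance)
    # Mostly rising, never dropping, and net growth over the window.
    return rises >= len(samples) // 2 and drops == 0 and samples[-1] > samples[0]
```

A steady climb flagged here points at connections never returned to the pool; spiky traffic-driven growth will show drops and pass.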
3. Check application-side connection pool exhaustion
Compare `client_backend_usage` to `client_backend_max` — if usage approaches or equals max while `redis.clients.connected` remains well below Redis maxclients, your application pool is too small for your concurrency level. This is especially common in async applications where many coroutines compete for a limited pool. As noted in `redis-connection-pool-starvation-from-blocking-patterns`, the Redis server may have plenty of capacity while your app starves waiting for pool connections.
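The comparison above reduces to one predicate; this sketch assumes you can read your pool's in-use and max counts from your client library's metrics (the 80% headroom cutoff is an assumption):

```python
def app_pool_undersized(pool_in_use: int, pool_max: int,
                        redis_connected: int, redis_maxclients: int) -> bool:
    """True when the application pool is the bottleneck: the pool sits
    at its ceiling while Redis itself still has plenty of free slots.
    The fix is a larger pool (or fewer concurrent borrowers), not a
    higher maxclients."""
    pool_saturated = pool_in_use >= pool_max
    redis_has_headroom = redis_connected < 0.8 * redis_maxclients
    return pool_saturated and redis_has_headroom
```

When this returns True, raising maxclients changes nothing, because requests are queueing inside your process waiting for a pool slot.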
4. Look for blocking operations in async code paths
If you're using async frameworks (FastAPI, asyncio), check whether Redis operations use async clients. Synchronous Redis calls in async endpoints block the event loop and hold connections much longer than necessary, causing artificial pool exhaustion. Check `redis.clients.blocked` for operations waiting on blocking calls (BLPOP, BRPOP, etc.). The `redis-connection-saturation-stalls-async-event-loop` insight shows this causes tail latency spikes despite low CPU utilization — a telltale sign of event loop blocking.
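The effect is easy to reproduce without a Redis server. In this self-contained simulation, `time.sleep` stands in for a synchronous Redis call inside an async endpoint and `asyncio.sleep` stands in for an awaited call on an async client (such as `redis.asyncio` in redis-py 4.2+); the helper names and delay are illustrative:

```python
import asyncio
import time

async def sync_style_call() -> None:
    # Stand-in for a *synchronous* Redis call in an async endpoint:
    # time.sleep blocks the whole event loop, like a blocking socket read.
    time.sleep(0.02)

async def async_style_call() -> None:
    # Stand-in for an awaited async-client call: the loop stays free
    # to run other coroutines while this one waits on I/O.
    await asyncio.sleep(0.02)

async def timed(coro_factory, n: int = 10) -> float:
    start = time.monotonic()
    await asyncio.gather(*(coro_factory() for _ in range(n)))
    return time.monotonic() - start

blocking_elapsed = asyncio.run(timed(sync_style_call))  # ~n * 0.02 s: serial
async_elapsed = asyncio.run(timed(async_style_call))    # ~0.02 s: overlapped
```

The blocking variant serializes all ten "requests", which is exactly the serial-like processing and tail latency the insight describes, and each one holds its connection for the full serialized wait.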
5. Investigate slow queries holding connections
Query the Redis slowlog to identify operations taking longer than expected. The `slow-query-backlog-masks-redis-connection-pool-exhaustion` insight warns that O(N) operations like KEYS or SMEMBERS on large datasets can hold connections while blocking on I/O. If you see slowlog entries accumulating, these long-running commands are tying up connections and preventing their return to the pool. Focus on commands with high execution times during peak connection usage.
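`SLOWLOG GET` replies are easy to post-process; each entry starts with an id, a unix timestamp, an execution time in microseconds, and the command with its arguments. A small filter over that shape (the function name and the 10 ms cutoff are illustrative) might be:

```python
def worst_offenders(slowlog_entries, threshold_us: int = 10_000):
    """Reduce SLOWLOG GET entries to the commands worth chasing.

    Each entry follows the server's reply shape:
    (id, unix_timestamp, duration_microseconds, [command, *args], ...)
    """
    hits = [(entry[3][0], entry[2])
            for entry in slowlog_entries
            if entry[2] >= threshold_us]
    # Slowest first: these are the commands pinning connections.
    return sorted(hits, key=lambda pair: pair[1], reverse=True)
```

O(N) commands like `KEYS` surfacing at the top during peak connection usage are strong candidates for the connections that never make it back to the pool.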
6. Check for client output buffer disconnection loops
Examine `redis.clients.connected` for sudden drops correlating with spikes in `redis.memory.mem_clients_normal`. Slow clients that can't keep up with Redis output get disconnected when they exceed client-output-buffer-limit, then immediately reconnect, creating a reconnection storm that consumes connection slots. As detailed in `client-output-buffer-limits-causing-disconnections`, this is especially common with pub/sub subscribers or clients on slow networks. If you see this pattern, review your client-output-buffer-limit settings.
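To spot the drop-and-reconnect signature, you can count sharp falls between adjacent `redis.clients.connected` samples before cross-checking `CONFIG GET client-output-buffer-limit`. This detector and its 30% drop threshold are a sketch:

```python
def disconnect_storms(connected_samples: list[int],
                      drop_fraction: float = 0.3) -> int:
    """Count sudden drops in connected_clients between adjacent samples.

    Repeated sharp drops that recover by the next sample are the
    signature of slow clients being kicked at client-output-buffer-limit
    and immediately reconnecting."""
    storms = 0
    for prev, cur in zip(connected_samples, connected_samples[1:]):
        if prev > 0 and (prev - cur) / prev >= drop_fraction:
            storms += 1
    return storms
```

Several storms per polling window, paired with spikes in client output buffer memory, point at the reconnection loop rather than a genuine loss of clients.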
Related Insights
Slow Query Backlog Masks Redis Connection Pool Exhaustion
warning
Redis slowlog entries accumulating (redis.slowlog.length rising) can indicate operations blocking on network or disk I/O, exhausting connection pools and causing cascading failures in dependent services even when Redis CPU appears healthy.
Connection Rejection Cascade From Maxclient Saturation
critical
When Redis reaches maximum client connections (redis.connections.rejected increasing), new connection attempts fail silently while existing connections continue working normally, creating intermittent failures that are difficult to diagnose from application metrics alone.
Client Output Buffer Limits Causing Disconnections
warning
When Redis disconnects slow clients that can't keep up with its output buffer, redis.clients.connected drops suddenly. This is often caused by clients on slow network connections or clients blocking on their end, and is especially common for pub/sub subscribers or replicas.
Redis Connection Pool Starvation from Blocking Patterns
warning
When async endpoints make synchronous Redis calls, they hold connections longer than necessary while blocking the event loop, causing artificial connection pool exhaustion even when Redis server capacity is available.
Redis Connection Saturation Stalls Async Event Loop
critical
When Redis connection pool exhausts under high concurrency, blocking Redis operations (even from async endpoints) stall the FastAPI event loop, causing serial-like request processing and tail latency spikes despite low CPU utilization.
Relevant Metrics
Monitoring Interfaces
Redis Native Metrics