percent_usage_connections
Connection utilization
Dimensions: None
Technical Annotations (19)
Configuration Parameters (6)
- pool_mode (recommended: transaction)
- default_pool_size (recommended: 20)
- max_client_conn (recommended: 1000)
- query_timeout (recommended: 30)
- max_connections
- superuser_reserved_connections (recommended: 3)

Error Signatures (1)
- FATAL: sorry, too many clients already (log pattern)

CLI Commands (4)
- SELECT query, calls, total_exec_time, mean_exec_time, rows FROM pg_stat_statements ORDER BY total_exec_time DESC LIMIT 20; (diagnostic)
- SELECT * FROM pg_stat_activity; (diagnostic)
- SELECT state, COUNT(*) AS connection_count, MAX(EXTRACT(EPOCH FROM (now() - state_change))) AS max_age_seconds FROM pg_stat_activity WHERE pid <> pg_backend_pid() GROUP BY state; (diagnostic)
- SELECT max_conn, used, max_conn - used AS available, ROUND((used::numeric / max_conn) * 100, 2) AS usage_percent FROM (SELECT (SELECT setting::int FROM pg_settings WHERE name = 'max_connections') AS max_conn, (SELECT COUNT(*) FROM pg_stat_activity) AS used) AS conn_stats; (diagnostic)

Technical References (8)
- PgBouncer (component)
- /etc/pgbouncer/pgbouncer.ini (file path)
- check_postgres (component)
- max_connections (component)
- pgBouncer (component)
- pg_stat_activity (component)
- pg_terminate_backend() (component)
- pgpool-II (component)

Related Insights (6)
- Connection limit exhaustion causes memory pressure and context-switching overhead (critical)
- Connection count approaching max_connections causes connection failures (critical)
- High connection count approaching limits degrades monitoring performance (warning)
- Connection exhaustion prevents new client connections (critical)
  Each PostgreSQL connection is a forked backend process that consumes roughly 5-10 MB of RAM. At 200 connections, that is 1-2 GB of overhead before any query runs; past 500 connections, context switching starts to dominate query execution time and the database becomes unresponsive under concurrent load.
- Connection limit approaching maximum causes application timeouts (critical)
- Connection exhaustion prevents new client connections (critical)
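The recommended parameters above map directly onto a PgBouncer configuration. A minimal sketch of /etc/pgbouncer/pgbouncer.ini under those recommendations; the database name, host, and auth settings are placeholders, not values from this document:

```ini
[databases]
; placeholder target database; adjust host/port/dbname for your environment
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt

; recommended values from the configuration parameters above
pool_mode = transaction
default_pool_size = 20
max_client_conn = 1000
query_timeout = 30
```

Note that query_timeout cancels any query running longer than the limit (here 30 seconds), so it should be set above the slowest legitimate query in the workload.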
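When utilization is already critical, the pg_terminate_backend() and pg_stat_activity components referenced above can be combined to reclaim slots from idle sessions. A hedged sketch; the idle-state filter and the 10-minute threshold are assumptions to tune, not values from this document:

```sql
-- Hypothetical cleanup: terminate backends idle for more than 10 minutes.
-- The threshold is an assumption; tune it for your workload, and prefer
-- fixing the client-side connection leak over routinely killing sessions.
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle'
  AND pid <> pg_backend_pid()
  AND (now() - state_change) > interval '10 minutes';
```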
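The last diagnostic query computes the metric itself: utilization is active backends divided by max_connections, rounded to two decimal places. A small Python sketch of the same arithmetic, with hypothetical values in place of the live pg_settings and pg_stat_activity lookups:

```python
# Mirrors the arithmetic of the usage_percent diagnostic query:
#   ROUND((used::numeric / max_conn) * 100, 2)
# The inputs are hypothetical; in the query they come from
# pg_settings ('max_connections') and COUNT(*) on pg_stat_activity.

def percent_usage_connections(used: int, max_conn: int) -> float:
    """Return connection utilization as a percentage, rounded to 2 places."""
    if max_conn <= 0:
        raise ValueError("max_conn must be positive")
    return round((used / max_conn) * 100, 2)

if __name__ == "__main__":
    # e.g. 87 active backends against the default max_connections of 100
    print(percent_usage_connections(87, 100))  # → 87.0
```

Remember that superuser_reserved_connections (recommended: 3) slots are unusable by ordinary clients, so the FATAL "too many clients already" error appears slightly before utilization reaches 100%.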