Memory Configuration Optimization

Proactive Health

Need to optimize PostgreSQL memory parameters for workload characteristics.

Prompt: I inherited a PostgreSQL database and the memory settings seem to be defaults from years ago. How do I determine the right values for shared_buffers, work_mem, and maintenance_work_mem based on our current instance size and workload?

With vs Without Schema

Same prompt, same model (claude-sonnet-4-5-20250929) — the only difference is whether Schema's knowledge graph was available.

Tokens: 1,194 vs 1,194
Latency: 25.8s vs 25.2s
Turns: 2 vs 2
Tool Calls: 0 vs 0
Schema Calls: 0 vs 0
Response: 3,094 chars vs 3,286 chars
Evaluation
Without Schema: A- vs With Schema: A-
Both responses provide nearly identical guidance quality. Each offers concrete formulas (25% of RAM for shared_buffers), specific SQL queries to analyze workload patterns, and actionable implementation steps. The response without Schema includes effective_cache_size as a related parameter, while the response with Schema provides slightly more specific implementation details like file paths and restart vs. reload requirements. Neither demonstrates a significant advantage from the Schema knowledge graph access — both would reliably help the SRE determine appropriate memory settings.
Without Schema
I'll help you tune those PostgreSQL memory settings. Here's how to determine the right values:

## 1. **Assess Your System Resources**

First, check your available RAM:

```bash
# On Linux
free -h

# Or check PostgreSQL's view
psql -c "SELECT pg_size_pretty(pg_total_relation_size('pg_class'));"
```

## 2. **Key Parameters & Guidelines**

### **shared_buffers**
- **What it does**: Cache for table and index data shared across all connections
- **Rule of thumb**: 25% of total RAM (up to 8-16GB)
  - Below 1GB RAM: 128-256MB
  - 4GB RAM: ~1GB
  - 16GB RAM: ~4GB
  - 64GB+ RAM: 8-16GB (diminishing returns beyond this)
- **Check current**: `SHOW shared_buffers;`

### **work_mem**
- **What it does**: Memory per operation (sort, hash join) per connection
- **Critical**: This is **per operation per connection**, so it can multiply quickly!
- **Formula**: `(Total RAM - shared_buffers) / (max_connections * 2 or 3)`
- **Starting point**: 4-16MB for OLTP, 32-256MB for analytics
- **Check current**: `SHOW work_mem;`

### **maintenance_work_mem**
- **What it does**: Memory for maintenance operations (VACUUM, CREATE INDEX, ALTER TABLE)
- **Rule of thumb**: 5-10% of RAM, or 256MB - 2GB
- **Max useful**: ~2GB (beyond this, diminishing returns)
- **Check current**: `SHOW maintenance_work_mem;`

## 3. **Analyze Your Workload**

```sql
-- Check current connections
SELECT count(*) FROM pg_stat_activity;
SELECT max_val FROM pg_settings WHERE name = 'max_connections';

-- Check for memory-intensive queries
SELECT query, calls, total_exec_time, mean_exec_time, temp_blks_written
FROM pg_stat_statements
WHERE temp_blks_written > 0
ORDER BY temp_blks_written DESC
LIMIT 20;

-- Check cache hit ratio (should be >99%)
SELECT
  sum(heap_blks_read) as heap_read,
  sum(heap_blks_hit) as heap_hit,
  sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) as ratio
FROM pg_statio_user_tables;
```

## 4. **Example Configuration**

For a **16GB RAM server** with moderate concurrent connections:

```ini
shared_buffers = 4GB            # 25% of RAM
work_mem = 16MB                 # Conservative start
maintenance_work_mem = 512MB    # ~3% of RAM
effective_cache_size = 12GB     # 75% of RAM (OS + PG cache)
```

## 5. **Safe Tuning Process**

1. **Start conservative** - easier to increase than decrease
2. **Monitor** for a few days:
   - Temp file usage (indicates work_mem too low)
   - Cache hit ratios (indicates shared_buffers too low)
   - OOM events (indicates settings too high)
3. **Adjust incrementally** - change one parameter at a time

## 6. **Red Flags to Watch**

```sql
-- Queries spilling to disk (work_mem too small)
SELECT query, calls, temp_blks_written
FROM pg_stat_statements
WHERE temp_blks_written > 0;

-- Check for OOM kills
-- (check system logs: dmesg | grep -i "out of memory")
```

**Would you like me to help you:**
- Check your current PostgreSQL settings?
- Analyze your specific workload patterns?
- Generate a tuned configuration file?

Just let me know your system specs (RAM, connection count, workload type) and I can provide specific recommendations!
With Schema
I'll help you tune those PostgreSQL memory settings. Here's how to determine the right values:

## 1. First, gather system information

```bash
# Check total system RAM
free -h  # Linux
# or
sysctl hw.memsize | awk '{print $2/1024/1024/1024 " GB"}'  # macOS

# Check current PostgreSQL settings
psql -c "SHOW shared_buffers;"
psql -c "SHOW work_mem;"
psql -c "SHOW maintenance_work_mem;"
psql -c "SHOW max_connections;"
```

## 2. Understand your workload

```sql
-- Check database size
SELECT pg_size_pretty(pg_database_size(current_database()));

-- Check active queries and connection usage
SELECT count(*), state FROM pg_stat_activity GROUP BY state;

-- Identify queries that might need more work_mem
SELECT query, calls, total_exec_time, temp_blks_written
FROM pg_stat_statements
WHERE temp_blks_written > 0
ORDER BY temp_blks_written DESC
LIMIT 10;
```

## 3. Tuning guidelines

**shared_buffers:**
- **Rule of thumb:** 25% of total RAM (up to 8-16GB)
  - For dedicated DB server with 32GB RAM: `8GB`
  - For 16GB RAM: `4GB`
  - For 8GB RAM: `2GB`
- Beyond 16GB, diminishing returns (OS cache handles the rest)

**work_mem:**
- **Careful here!** This is per operation per connection
- **Formula:** `(Total RAM * 0.25) / max_connections / 2`
- Start conservative: `16MB` to `64MB`
- If you see lots of temp file writes: increase incrementally
- Max allocation: `work_mem × max_connections × 2-3 operations` should fit in RAM

**maintenance_work_mem:**
- Used for VACUUM, CREATE INDEX, ALTER TABLE
- **Rule of thumb:** 5-10% of RAM or 1-2GB
  - For 32GB RAM: `2GB`
  - For 16GB RAM: `1GB`
- Can be set higher per-session for large operations

## 4. Check for memory pressure

```sql
-- Look for queries writing temp files (sign work_mem is too low)
SELECT query, calls, temp_blks_written,
       pg_size_pretty(temp_blks_written * 8192::bigint) as temp_size
FROM pg_stat_statements
WHERE temp_blks_written > 0
ORDER BY temp_blks_written DESC
LIMIT 20;
```

## 5. Safe implementation process

```bash
# 1. Edit postgresql.conf
sudo nano /etc/postgresql/*/main/postgresql.conf  # Linux
# or find it with: psql -c "SHOW config_file;"

# 2. Update values gradually:
#    shared_buffers = 8GB           # 25% of 32GB RAM
#    work_mem = 16MB                # Start conservative
#    maintenance_work_mem = 2GB     # For large maintenance ops

# 3. Restart PostgreSQL (required for shared_buffers)
sudo systemctl restart postgresql

# 4. Monitor for a week, then adjust work_mem if needed
```

## 6. Monitor after changes

```sql
-- Check for improvement in temp file usage
SELECT sum(temp_blks_written) as temp_blocks,
       pg_size_pretty(sum(temp_blks_written * 8192)::bigint) as temp_size
FROM pg_stat_statements;

-- Reset stats to measure improvement
SELECT pg_stat_statements_reset();
```

**Key safety notes:**
- **shared_buffers** requires a restart
- **work_mem** and **maintenance_work_mem** can be changed with reload: `pg_ctl reload`
- Test in staging first if possible
- Monitor for OOM (out of memory) issues after changes
- Consider setting `work_mem` per session for known heavy queries rather than globally

Would you like me to help you analyze your specific instance? I can write a script to gather all the relevant metrics and suggest specific values based on your actual workload.

Agent Playbook

When an agent encounters this scenario, Schema provides these diagnostic steps automatically.

When optimizing PostgreSQL memory configuration, start by confirming you're running on defaults and checking your buffer cache hit ratio — that tells you immediately if shared_buffers is undersized. Then look for temp file usage that signals work_mem problems, validate you're not causing OS memory pressure, and finally tune based on your actual connection concurrency and workload patterns.

1. Confirm current settings and check for defaults
First, run `SHOW shared_buffers; SHOW work_mem; SHOW maintenance_work_mem;` to see what you're actually running. If shared_buffers is 128MB or work_mem is 4MB, you're on ancient defaults and that's your smoking gun. The `memory-parameters-using-default-values` insight will flag this. Even if they've been changed, knowing your baseline is critical before you start diagnosing performance issues.
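A minimal sketch of that check, assuming only the standard `pg_settings` catalog view; the `source` column shows whether a value has ever been overridden:

```sql
-- Show the three memory parameters plus max_connections in one pass.
-- source = 'default' means the value is still at its compiled-in default.
SELECT name, setting, unit, source
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem', 'max_connections');
```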
2. Evaluate shared_buffers effectiveness via cache hit ratio
Check your buffer cache hit ratio using `buffer_hit` or `hits` — it should be above 99% for most OLTP workloads. If it's below 95%, your `shared_buffers` is undersized and PostgreSQL is constantly re-reading from disk. Also look at `buffercache_summary_buffer_used` to see if you're actually filling the buffers you've allocated. The general rule is 25% of total RAM, but if your working set is larger than your current allocation, you'll see poor hit ratios and the `insufficient-shared-buffers-cache` and `working-set-larger-than-shared-buffers` insights will tell you to increase it.
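If you aren't already collecting those metrics, one way to compute the same ratio is a sketch against `pg_stat_database`, which is available on every installation (exact metric names vary by monitoring stack):

```sql
-- Buffer cache hit ratio for the current database; sustained values
-- below ~0.95 suggest shared_buffers is too small for the working set.
SELECT datname,
       blks_hit,
       blks_read,
       round(blks_hit::numeric / NULLIF(blks_hit + blks_read, 0), 4) AS hit_ratio
FROM pg_stat_database
WHERE datname = current_database();
```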
3. Look for temp file usage indicating work_mem is too low
Check `postgresql.temp_files` and `postgresql.temp_bytes` — any non-zero values mean queries are spilling sorts, hashes, or aggregations to disk because work_mem is too small. This is a huge performance killer. If you're seeing significant temp file activity (hundreds of MB or more per hour), the `work-mem-causing-disk-sorts` and `insufficient-work-mem` insights apply. You can also run EXPLAIN (ANALYZE, BUFFERS) on your slowest queries to see 'Sort Method: external merge Disk' in the output, which confirms the problem.
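The same signal is visible directly in `pg_stat_database` if those metrics aren't being collected yet; a sketch (the counters accumulate since the last stats reset):

```sql
-- Temp file spill activity; sustained growth here means sorts, hashes,
-- or aggregations are overflowing work_mem to disk.
SELECT datname,
       temp_files,
       pg_size_pretty(temp_bytes) AS temp_spilled,
       stats_reset
FROM pg_stat_database
WHERE datname = current_database();
```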
4. Check OS-level memory pressure and swap usage
Before you increase memory allocations, verify you're not already causing memory pressure at the OS level. Check system swap usage — if PostgreSQL is swapping, performance will crater and you need to reduce total memory allocation, not increase it. The `swap-usage-indicating-memory-pressure` insight flags this condition. Remember that work_mem is per operation per connection, so if you have 100 connections and each runs a query with 3 hash joins, you could allocate up to 100 × 3 × work_mem simultaneously.
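Swap itself has to be checked at the OS level (for example with `free -h` or `vmstat`), but the worst-case arithmetic can be sketched from the server's own settings. The factor of 3 memory-hungry nodes per query below is an illustrative assumption, not a measurement:

```sql
-- Rough upper bound on concurrent work_mem demand:
-- max_connections × assumed 3 sort/hash nodes per query × work_mem (stored in kB).
SELECT current_setting('max_connections')::int AS max_connections,
       current_setting('work_mem')             AS work_mem,
       pg_size_pretty(
         current_setting('max_connections')::bigint
         * 3
         * (SELECT setting::bigint * 1024 FROM pg_settings WHERE name = 'work_mem')
       ) AS theoretical_peak_demand;
```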
5. Assess connection concurrency to size work_mem safely
Check `postgresql.backends` and `connections_by_process` to understand your typical and peak connection counts. This is critical for setting work_mem because it's allocated per operation per connection, not globally. If you have 200 active connections and set work_mem to 256MB, you risk allocating 50GB+ under load. The `risk-of-memory-over-allocation-if-work-mem-increased` insight warns about this. For high-connection workloads, keep work_mem conservative (16-64MB) and use session-level `SET work_mem` for known expensive queries.
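A sketch of both halves of that advice: sample concurrency from `pg_stat_activity`, then raise `work_mem` only in the session that needs it (the 256MB value is illustrative):

```sql
-- Connection concurrency by state; sample this during peak hours.
SELECT state, count(*) AS connections
FROM pg_stat_activity
WHERE backend_type = 'client backend'
GROUP BY state
ORDER BY connections DESC;

-- For a known-expensive report, raise work_mem for this session only;
-- the setting reverts when the session ends (or the transaction, with SET LOCAL).
SET work_mem = '256MB';
```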
6. Tune maintenance_work_mem for vacuum and index operations
If VACUUM, CREATE INDEX, or REINDEX operations are slow, check maintenance_work_mem — it's usually safe to set this much higher than work_mem (512MB to 2GB) since it's only used during maintenance operations, not regular queries. Monitor `postgresql.vacuum.num_dead_tuples` to see if vacuum is keeping up with your update/delete workload. The `maintenance-operations-slow-due-to-low-maintenance-work-mem` insight applies when autovacuum or manual maintenance is taking too long and you have plenty of RAM available.
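A sketch of that check plus a per-session override; the `orders` table and index name are hypothetical placeholders:

```sql
-- Tables with the most dead tuples; a backlog here with spare RAM is the
-- classic case for raising maintenance_work_mem.
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;

-- Raise maintenance_work_mem for one heavy operation only (session-scoped).
SET maintenance_work_mem = '1GB';
CREATE INDEX CONCURRENTLY idx_orders_created_at ON orders (created_at);
```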

Technologies

Related Insights

work_mem causing disk sorts
prerequisite
Stub insight created from scenario discovery hint: work_mem causing disk sorts
shared_buffers undersized
Stub insight created from scenario discovery hint: shared_buffers undersized
swap usage indicating memory pressure
Stub insight created from scenario discovery hint: swap usage indicating memory pressure
maintenance operations slow due to low maintenance_work_mem
prerequisite
Stub insight created from scenario discovery hint: maintenance operations slow due to low maintenance_work_mem
memory parameters using default values
prerequisite
Stub insight created from scenario discovery hint: memory parameters using default values
Insufficient shared_buffers limits PostgreSQL cache effectiveness
prerequisite, warning
Insufficient work_mem causes disk-based sorting and hashing
prerequisite, warning
need for workload-specific memory tuning
supporting_evidence
Stub insight created from scenario discovery hint: need for workload-specific memory tuning
risk of memory over-allocation if work_mem increased
supporting_evidence
Stub insight created from scenario discovery hint: risk of memory over-allocation if work_mem increased
working set larger than shared_buffers
prerequisite
Stub insight created from scenario discovery hint: working set larger than shared_buffers

Relevant Metrics

Monitoring Interfaces

PostgreSQL Datadog
PostgreSQL CloudWatch Metrics
PostgreSQL Prometheus
PostgreSQL Native
PostgreSQL Prometheus Exporter (pg_exporter)
PostgreSQL Datadog Integration