Memory Configuration Optimization
Proactive Health
Need to optimize PostgreSQL memory parameters for workload characteristics.
Prompt: “I inherited a PostgreSQL database and the memory settings seem to be defaults from years ago. How do I determine the right values for shared_buffers, work_mem, and maintenance_work_mem based on our current instance size and workload?”
Agent Playbook
When an agent encounters this scenario, Schema provides these diagnostic steps automatically.
When optimizing PostgreSQL memory configuration, start by confirming you're running on defaults and checking your buffer cache hit ratio — that tells you immediately if shared_buffers is undersized. Then look for temp file usage that signals work_mem problems, validate you're not causing OS memory pressure, and finally tune based on your actual connection concurrency and workload patterns.
1. Confirm current settings and check for defaults
First, run `SHOW shared_buffers; SHOW work_mem; SHOW maintenance_work_mem;` to see what you're actually running. If shared_buffers is 128MB or work_mem is 4MB, you're on ancient defaults and that's your smoking gun. The `memory-parameters-using-default-values` insight will flag this. Even if they've been changed, knowing your baseline is critical before you start diagnosing performance issues.
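Beyond `SHOW`, the `pg_settings` catalog view tells you whether each parameter is still at its compiled-in default. A minimal check (standard catalog columns, no assumptions beyond a connected session):

```sql
-- source = 'default' means the parameter was never set explicitly
-- in postgresql.conf, ALTER SYSTEM, or elsewhere.
SELECT name, setting, unit, source
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem');
```

If `source` reads `default` for all three, you have confirmed the untouched-defaults scenario before doing any deeper diagnosis.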
2. Evaluate shared_buffers effectiveness via cache hit ratio
Check your buffer cache hit ratio using `buffer_hit` or `hits` — it should be above 99% for most OLTP workloads. If it's below 95%, your `shared_buffers` is undersized and PostgreSQL is constantly re-reading from disk. Also look at `buffercache_summary_buffer_used` to see if you're actually filling the buffers you've allocated. The general rule is 25% of total RAM, but if your working set is larger than your current allocation, you'll see poor hit ratios and the `insufficient-shared-buffers-cache` and `working-set-larger-than-shared-buffers` insights will tell you to increase it.
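If you'd rather compute the hit ratio directly from PostgreSQL's own statistics instead of a monitoring metric, `pg_stat_database` has the raw counters. A sketch using only standard columns:

```sql
-- Buffer cache hit percentage for the current database.
-- blks_hit = reads served from shared_buffers; blks_read = reads that went to disk/OS cache.
SELECT datname,
       round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) AS cache_hit_pct
FROM pg_stat_database
WHERE datname = current_database();
```

Note these counters accumulate since the last statistics reset, so a long-lived average can mask a recent regression; reset with `pg_stat_reset()` on a test system if you need a fresh window.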
3. Look for temp file usage indicating work_mem is too low
Check `postgresql.temp_files` and `postgresql.temp_bytes` — any non-zero values mean queries are spilling sorts, hashes, or aggregations to disk because work_mem is too small. This is a huge performance killer. If you're seeing significant temp file activity (hundreds of MB or more per hour), the `work-mem-causing-disk-sorts` and `insufficient-work-mem` insights apply. You can also run `EXPLAIN (ANALYZE, BUFFERS)` on your slowest queries and look for `Sort Method: external merge Disk: ...` in the output, which confirms the spill to disk.
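The same counters are available per database in `pg_stat_database`, which is useful for finding which database is doing the spilling:

```sql
-- Cumulative temp file spill per database since the last stats reset.
SELECT datname,
       temp_files,
       pg_size_pretty(temp_bytes) AS temp_spilled
FROM pg_stat_database
WHERE datname IS NOT NULL
ORDER BY temp_bytes DESC;
```

You can also set `log_temp_files = 0` to log every temp file with the query that created it, which pinpoints the specific statements that need more work_mem.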
4. Check OS-level memory pressure and swap usage
Before you increase memory allocations, verify you're not already causing memory pressure at the OS level. Check system swap usage — if PostgreSQL is swapping, performance will crater and you need to reduce total memory allocation, not increase it. The `swap-usage-indicating-memory-pressure` insight flags this condition. Remember that work_mem is per operation per connection, so if you have 100 connections and each runs a query with 3 hash joins, you could allocate up to 100 × 3 × work_mem simultaneously.
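The worst-case arithmetic above can be computed from the server's own settings. A sketch (the 3 operations per query is an illustrative assumption, not a measured value):

```sql
-- Theoretical worst-case work_mem allocation:
-- max_connections × assumed sort/hash operations per query × work_mem.
SELECT pg_size_pretty(
         current_setting('max_connections')::int
         * 3  -- assumed concurrent sort/hash operations per query
         * pg_size_bytes(current_setting('work_mem'))
       ) AS worst_case_work_mem;
```

Real workloads rarely hit this ceiling simultaneously, but if the result approaches or exceeds physical RAM minus shared_buffers, you have no safety margin and swapping becomes likely under load.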
5. Assess connection concurrency to size work_mem safely
Check `postgresql.backends` and `connections_by_process` to understand your typical and peak connection counts. This is critical for setting work_mem because it's allocated per operation per connection, not globally. If you have 200 active connections and set work_mem to 256MB, you risk allocating 50GB+ under load. The `risk-of-memory-over-allocation-if-work-mem-increased` insight warns about this. For high-connection workloads, keep work_mem conservative (16-64MB) and use session-level `SET work_mem` for known expensive queries.
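The session-level override pattern looks like this — the global default stays conservative while a known-expensive query gets a larger allowance:

```sql
-- Scope the increase to a single transaction so it can't leak
-- into connection-pooled sessions.
BEGIN;
SET LOCAL work_mem = '256MB';
-- ... run the known-expensive reporting query here ...
COMMIT;
```

`SET LOCAL` reverts automatically at commit or rollback, which is safer than plain `SET` when connections are reused by a pooler such as PgBouncer.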
6. Tune maintenance_work_mem for vacuum and index operations
If VACUUM, CREATE INDEX, or REINDEX operations are slow, check maintenance_work_mem — it's usually safe to set this much higher than work_mem (512MB to 2GB) since it's only used during maintenance operations, not regular queries. Monitor `postgresql.vacuum.num_dead_tuples` to see if vacuum is keeping up with your update/delete workload. The `maintenance-operations-slow-due-to-low-maintenance-work-mem` insight applies when autovacuum or manual maintenance is taking too long and you have plenty of RAM available.
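To see whether vacuum is keeping up at the table level, `pg_stat_user_tables` exposes dead-tuple counts and the last autovacuum time. A minimal check using standard columns:

```sql
-- Tables where dead tuples are accumulating; a large n_dead_tup relative
-- to n_live_tup plus a stale last_autovacuum suggests vacuum is falling behind.
SELECT relname, n_dead_tup, n_live_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```

If the worst tables show vacuum lagging and RAM is available, raising maintenance_work_mem lets each vacuum pass process more dead tuples per index scan, shortening the overall run.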
Related Insights
work_mem causing disk sorts
prerequisite
Stub insight created from scenario discovery hint: work_mem causing disk sorts
shared_buffers undersized
Stub insight created from scenario discovery hint: shared_buffers undersized
swap usage indicating memory pressure
Stub insight created from scenario discovery hint: swap usage indicating memory pressure
maintenance operations slow due to low maintenance_work_mem
prerequisite
Stub insight created from scenario discovery hint: maintenance operations slow due to low maintenance_work_mem
memory parameters using default values
prerequisite
Stub insight created from scenario discovery hint: memory parameters using default values
Insufficient shared_buffers limits PostgreSQL cache effectiveness
prerequisite, warning
Insufficient work_mem causes disk-based sorting and hashing
prerequisite, warning
need for workload-specific memory tuning
supporting_evidence
Stub insight created from scenario discovery hint: need for workload-specific memory tuning
risk of memory over-allocation if work_mem increased
supporting_evidence
Stub insight created from scenario discovery hint: risk of memory over-allocation if work_mem increased
working set larger than shared_buffers
prerequisite
Stub insight created from scenario discovery hint: working set larger than shared_buffers
Relevant Metrics
hits
postgresql.temp_files
buffer_hit (primary_diagnostic)
postgresql.temp_bytes (primary_diagnostic)
postgresql.backends (secondary_diagnostic)
postgresql.database.size (monitor)
buffercache_summary_buffer_used (primary_diagnostic)
temp_bytes (primary_diagnostic)
postgresql.vacuum.num_dead_tuples (secondary_diagnostic)
connections_by_process (monitor)
Monitoring Interfaces
PostgreSQL Native