When an agent encounters this scenario, Schema provides these diagnostic steps automatically.
When optimizing PostgreSQL costs on RDS, start with the quick wins: drop unused indexes to reduce write IOPS and storage costs, then add missing indexes to eliminate expensive sequential scans. Next, analyze I/O patterns and temporary file usage to identify whether your bottleneck is query efficiency (allowing an instance downgrade) or inherent workload (requiring right-sizing). Finally, review storage bloat and consolidation opportunities, and correlate query volume with resource usage to confirm the right instance size.
1. Identify and drop unused indexes to reduce write IOPS
Start by checking `postgresql-index-size` and querying `pg_stat_user_indexes` for indexes with `idx_scan = 0`. Unused indexes waste write IOPS on every INSERT/UPDATE/DELETE (a table with 10 indexes requires 11 writes per INSERT) and consume storage. The `unused-indexes-waste-resources` insight shows that dropping these can immediately cut IOPS costs with no read-performance impact, making this the easiest cost reduction. Note that the scan counters accumulate since the last statistics reset, so confirm the window covers a full workload cycle (including periodic jobs) before dropping anything.
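A query along these lines (a sketch, assuming direct `psql` access to the instance) surfaces never-scanned indexes, excluding unique and primary-key indexes that enforce constraints rather than serve reads:

```sql
-- Indexes never used by any scan, largest first.
-- Counters are cumulative since the last stats reset.
SELECT s.schemaname,
       s.relname       AS table_name,
       s.indexrelname  AS index_name,
       pg_size_pretty(pg_relation_size(s.indexrelid)) AS index_size,
       s.idx_scan
FROM pg_stat_user_indexes s
JOIN pg_index i ON i.indexrelid = s.indexrelid
WHERE s.idx_scan = 0
  AND NOT i.indisunique     -- keep constraint-enforcing indexes
  AND NOT i.indisprimary
ORDER BY pg_relation_size(s.indexrelid) DESC;
```

Sorting by size puts the biggest IOPS and storage wins at the top of the list.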
2. Add missing indexes to eliminate expensive sequential scans
Compare `postgresql-sequential-scans` to `postgresql-index-scans` on large tables. If you're seeing high sequential scans on tables with >100K rows, you're burning CPU and I/O unnecessarily. The `missing-indexes-sequential-scans` insight explains that each sequential scan reads every row in the table. Adding appropriate indexes can reduce CPU usage by 10-100x on read-heavy workloads, potentially allowing you to downgrade from a db.r5.2xlarge to a db.r5.xlarge.
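To find the specific tables behind those metrics, a query like this (a sketch against the standard `pg_stat_user_tables` view) ranks large tables by rows read via sequential scans:

```sql
-- Large tables doing the most sequential-scan work: index candidates.
SELECT schemaname,
       relname,
       seq_scan,            -- number of sequential scans
       seq_tup_read,        -- rows read by those scans
       idx_scan,            -- index scans for comparison
       n_live_tup           -- approximate live row count
FROM pg_stat_user_tables
WHERE n_live_tup > 100000
  AND seq_scan > 0
ORDER BY seq_tup_read DESC
LIMIT 20;
```

A table with high `seq_tup_read` and low `idx_scan` is the strongest signal; confirm the missing index with `EXPLAIN (ANALYZE, BUFFERS)` on the offending query before adding it.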
3. Analyze I/O wait times to identify query efficiency bottlenecks
Check `postgresql-blk-read-time` and `postgresql-blk-write-time` to see if you're I/O bound. High block read times (>20% of query execution time) indicate either missing indexes or queries that need optimization. High write times suggest too many indexes or autovacuum tuning issues. This tells you whether optimizing queries can free up enough resources to downgrade your instance type.
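The per-database view of the same counters can be pulled with a query like this (a sketch; the timing columns only populate when `track_io_timing = on`):

```sql
-- Block I/O time (ms) and cache hit ratio per database.
-- blk_read_time / blk_write_time require track_io_timing = on.
SELECT datname,
       blk_read_time,
       blk_write_time,
       blks_read,
       blks_hit,
       round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2)
         AS cache_hit_pct
FROM pg_stat_database
WHERE datname NOT IN ('template0', 'template1');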
4. Review temporary file usage for memory and sort inefficiencies
Look at `postgresql-temp-bytes` and `postgresql-temp-files`. If you're generating >1GB/hour in temp files, queries are spilling to disk due to insufficient work_mem or inefficient sorts/joins. These operations are 100-1000x slower than in-memory operations and drive up IOPS costs. Optimizing these queries (better indexes, query rewrites) can dramatically reduce I/O, allowing instance downgrade or IOPS tier reduction.
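To see which databases are spilling, a sketch against `pg_stat_database` (counters are cumulative since the last stats reset):

```sql
-- Temp file spill volume per database, worst first.
SELECT datname,
       temp_files,
       pg_size_pretty(temp_bytes) AS temp_spilled
FROM pg_stat_database
WHERE temp_bytes > 0
ORDER BY temp_bytes DESC;
```

Setting `log_temp_files = 0` logs every spill with the responsible statement, which identifies the specific queries to rewrite or give more `work_mem`.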
5. Assess storage bloat and consolidation opportunities
Check `postgresql-database-size`, `postgresql-table-size`, and `postgresql-index-size` to identify bloat from dead tuples or oversized indexes. If individual databases are small (<50GB each), consolidation can reduce overhead from separate instances. Compare index size to table size — a ratio >0.5 suggests over-indexing. Review autovacuum settings to ensure bloat is being cleaned up efficiently, as bloat drives both storage and IOPS costs.
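The index-to-table ratio can be computed directly from the size functions; this sketch flags tables where indexes outweigh half the table:

```sql
-- Index-to-table size ratio per table; > 0.5 suggests over-indexing.
SELECT c.relname AS table_name,
       pg_size_pretty(pg_table_size(c.oid))   AS table_size,
       pg_size_pretty(pg_indexes_size(c.oid)) AS index_size,
       round(pg_indexes_size(c.oid)::numeric
             / nullif(pg_table_size(c.oid), 0), 2) AS index_ratio
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_total_relation_size(c.oid) DESC
LIMIT 20;
```

For dead-tuple bloat specifically, check `n_dead_tup` versus `n_live_tup` in `pg_stat_user_tables` alongside `last_autovacuum` timestamps.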
6. Correlate query volume with resource usage patterns
Use `postgresql-queries-count` alongside `postgresql-backends` to understand your workload pattern. If you have high query counts (>10K/sec) but low complexity queries, you might be CPU-bound and could benefit from connection pooling or query batching rather than a larger instance. If backend count is consistently high (>80% of max_connections), you may have connection leak issues driving up instance size unnecessarily.
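Connection pressure relative to the configured ceiling can be checked with a query like this (a sketch using `pg_stat_activity`, which counts backends including idle ones):

```sql
-- Backend count vs max_connections; consistently > 80% suggests
-- pooling or a connection leak, not a larger instance.
SELECT count(*) AS backends,
       current_setting('max_connections')::int AS max_connections,
       round(100.0 * count(*)
             / current_setting('max_connections')::int, 1) AS pct_used
FROM pg_stat_activity;
```

Grouping the same view by `state` separates active backends from `idle` and `idle in transaction` sessions, the latter being the usual signature of a connection leak.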