Cost optimization through performance tuning

Cost Optimization

Reducing cloud database spending by optimizing queries and right-sizing instances without sacrificing performance

Prompt: My monthly AWS RDS PostgreSQL bill is $8,000 and I need to reduce costs by 30% without degrading performance. Help me identify whether I should optimize slow queries to downgrade my instance, tune autovacuum to reduce IOPS costs, or consolidate databases.

With vs Without Schema

Same prompt, same model (claude-sonnet-4-5-20250929) — the only difference is whether Schema's knowledge graph was available.

Tokens: 1,437 vs 1,648
Latency: 30.0s vs 36.2s
Turns: 7 vs 7
Tool Calls: 2 vs 2
Schema Calls: 0 vs 0
Response: 1,558 chars vs 1,310 chars
Evaluation

Without Schema: C+ vs With Schema: B-
Both responses primarily gather information rather than provide immediate optimization guidance. The response with Schema is more comprehensive and better structured, explicitly asking about autovacuum activity and Performance Insights, both more specific to the question at hand. The response without Schema includes actionable SQL queries for diagnosing database state but has a formatting issue where it references questions that aren't actually shown in the text. Neither provides decision criteria or thresholds that would indicate which optimization path to take. Overall, with Schema provides a more complete diagnostic framework while without Schema offers some tactical queries but less comprehensive planning.
Without Schema
2 tool calls
I'll help you identify the best cost optimization strategy for your AWS RDS PostgreSQL setup. To make an informed recommendation, I need to understand your current configuration and usage patterns. Let me ask you some key questions to guide the analysis: I've asked you a few questions to understand your setup. Once you answer these, I'll create a targeted analysis plan to identify your best cost optimization strategy. In parallel, here's what I recommend you start gathering:

## Key Data to Collect

**From AWS Console/CLI:**

1. **Cost breakdown** (AWS Cost Explorer → Filter by service "RDS"):
   - Instance costs vs. Storage costs vs. IOPS/throughput costs
   - Per-instance costs if you have multiple
2. **Instance specifications**:
   - Instance types (e.g., db.r6g.2xlarge)
   - Storage type (gp3, gp2, io1, io2)
   - Allocated IOPS (if using io1/io2)

**From CloudWatch (if available):**
- CPU Utilization (average & peak over last 30 days)
- IOPS Read/Write (average & P99)
- Free memory
- Database connections

**From PostgreSQL:**

```sql
-- Check database sizes
SELECT datname, pg_size_pretty(pg_database_size(datname))
FROM pg_database
ORDER BY pg_database_size(datname) DESC;

-- Identify slow queries (if pg_stat_statements is enabled)
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;
```

Once you answer the questions above, I'll help you determine whether the $2,400/month savings should come from instance rightsizing, IOPS optimization, or database consolidation.
With Schema
2 tool calls
I'll help you identify the most effective cost optimization strategy for your RDS PostgreSQL setup. To give you targeted recommendations, I need to understand your current configuration and cost breakdown. To help you identify the best optimization strategy, I need some information about your setup. Could you provide:

**1. Cost Breakdown:**
- If possible, can you check AWS Cost Explorer and share what percentage of your $8k goes to:
  - Compute (instance costs)
  - Storage (allocated GB)
  - IOPS/throughput
  - Backups/snapshots
  - Data transfer

**2. Current Configuration:**
- Instance type(s) (e.g., db.r6g.2xlarge, db.m5.4xlarge)
- Storage type (gp2, gp3, io1, io2)
- Number of RDS instances you're running
- Total allocated storage and IOPS

**3. Performance Metrics (if available from CloudWatch/Performance Insights):**
- Average CPU utilization %
- Peak IOPS usage vs provisioned
- Number of active connections
- Top slow queries (execution time, frequency)
- Autovacuum activity/blocking

**4. Consolidation Potential:**
- Are these separate instances for dev/staging/prod, or separate applications?
- Current database sizes

Once you share this information, I can analyze which optimization path will give you the best ROI and create a specific action plan to achieve your 30% cost reduction.

Agent Playbook

When an agent encounters this scenario, Schema provides these diagnostic steps automatically.

When optimizing PostgreSQL costs on RDS, start with the quick wins: drop unused indexes to reduce write IOPS and storage costs, then add missing indexes to eliminate expensive sequential scans. Next, analyze I/O patterns and temporary file usage to identify whether your bottleneck is query efficiency (allowing instance downgrade) or inherent workload (requiring right-sizing). Finally, review storage bloat and consolidation opportunities.

1. Identify and drop unused indexes to reduce write IOPS
Start by checking `postgresql-index-size` and querying `pg_stat_user_indexes` for indexes with `idx_scan = 0`. Unused indexes waste write IOPS on every INSERT/UPDATE/DELETE (a table with 10 indexes requires 11 writes per INSERT) and consume storage. The `unused-indexes-waste-resources` insight shows that dropping these can immediately cut IOPS costs with zero performance impact. This is the easiest cost reduction with no downside.
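A minimal sketch of this check, assuming standard catalog views; the `indisunique` filter is an addition here to avoid listing indexes that back uniqueness constraints, and note that `idx_scan` counters reset with a stats reset or failover, so sample over a representative period:

```sql
-- Indexes that have never been scanned: candidates for DROP INDEX.
-- Largest first, since those cost the most write IOPS and storage.
SELECT s.schemaname,
       s.relname        AS table_name,
       s.indexrelname   AS index_name,
       pg_size_pretty(pg_relation_size(s.indexrelid)) AS index_size,
       s.idx_scan
FROM pg_stat_user_indexes s
JOIN pg_index i ON i.indexrelid = s.indexrelid
WHERE s.idx_scan = 0
  AND NOT i.indisunique          -- keep constraint-backing indexes
ORDER BY pg_relation_size(s.indexrelid) DESC;
```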
2. Add missing indexes to eliminate expensive sequential scans
Compare `postgresql-sequential-scans` to `postgresql-index-scans` on large tables. If you're seeing high sequential scans on tables with >100K rows, you're burning CPU and I/O unnecessarily. The `missing-indexes-sequential-scans` insight explains that each sequential scan reads the entire table into memory. Adding appropriate indexes can reduce CPU usage by 10-100x on read-heavy workloads, potentially allowing you to downgrade from a db.r5.2xlarge to db.r5.xlarge.
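One way to surface these tables is via `pg_stat_user_tables`; a sketch, with the 100K-row threshold from the step above as a starting point rather than a hard rule:

```sql
-- Large tables with heavy sequential-scan activity relative to index scans.
-- High seq_scan with low idx_scan on a big table suggests a missing index.
SELECT relname,
       seq_scan,
       idx_scan,
       n_live_tup,
       seq_tup_read                      -- rows read by sequential scans
FROM pg_stat_user_tables
WHERE n_live_tup > 100000
ORDER BY seq_scan DESC
LIMIT 20;
```

Confirm each candidate with `EXPLAIN (ANALYZE, BUFFERS)` on the offending queries before adding an index, since every new index adds write cost (see step 1).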
3. Analyze I/O wait times to identify query efficiency bottlenecks
Check `postgresql-blk-read-time` and `postgresql-blk-write-time` to see if you're I/O bound. High block read times (>20% of query execution time) indicate either missing indexes or queries that need optimization. High write times suggest too many indexes or autovacuum tuning issues. This tells you whether optimizing queries can free up enough resources to downgrade your instance type.
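With `pg_stat_statements` installed and `track_io_timing = on`, the per-query share of I/O time can be computed directly; a sketch (column names shown are from pg_stat_statements on PostgreSQL 13-16; PostgreSQL 17 renames them to `shared_blk_read_time`/`shared_blk_write_time`):

```sql
-- Queries whose execution time is dominated by block I/O.
-- io_pct near 100 means the query is I/O bound, not CPU bound.
SELECT query,
       calls,
       total_exec_time,
       blk_read_time,
       blk_write_time,
       round((100 * (blk_read_time + blk_write_time)
              / NULLIF(total_exec_time, 0))::numeric, 1) AS io_pct
FROM pg_stat_statements
ORDER BY blk_read_time + blk_write_time DESC
LIMIT 20;
```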
4. Review temporary file usage for memory and sort inefficiencies
Look at `postgresql-temp-bytes` and `postgresql-temp-files`. If you're generating >1GB/hour in temp files, queries are spilling to disk due to insufficient work_mem or inefficient sorts/joins. These operations are 100-1000x slower than in-memory operations and drive up IOPS costs. Optimizing these queries (better indexes, query rewrites) can dramatically reduce I/O, allowing instance downgrade or IOPS tier reduction.
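Temp-file spill can be measured both per database and per statement; a sketch, assuming `pg_stat_statements` is available for the second query (counters are cumulative since the last stats reset, so divide by uptime to get an hourly rate):

```sql
-- Temp-file spill per database since the last stats reset.
SELECT datname,
       temp_files,
       pg_size_pretty(temp_bytes) AS temp_spilled
FROM pg_stat_database
ORDER BY temp_bytes DESC;

-- Statements spilling to disk: candidates for work_mem tuning or rewrites.
SELECT query, calls, temp_blks_written
FROM pg_stat_statements
ORDER BY temp_blks_written DESC
LIMIT 10;
```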
5. Assess storage bloat and consolidation opportunities
Check `postgresql-database-size`, `postgresql-table-size`, and `postgresql-index-size` to identify bloat from dead tuples or oversized indexes. If individual databases are small (<50GB each), consolidation can reduce overhead from separate instances. Compare index size to table size — a ratio >0.5 suggests over-indexing. Review autovacuum settings to ensure bloat is being cleaned up efficiently, as bloat drives both storage and IOPS costs.
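The index-to-table ratio and dead-tuple counts mentioned above can be pulled in one pass; a sketch over ordinary tables (`relkind = 'r'`):

```sql
-- Index-to-table size ratio (> 0.5 may indicate over-indexing)
-- plus dead tuples as a rough bloat signal.
SELECT c.relname AS table_name,
       pg_size_pretty(pg_table_size(c.oid))   AS table_size,
       pg_size_pretty(pg_indexes_size(c.oid)) AS index_size,
       round(pg_indexes_size(c.oid)::numeric
             / NULLIF(pg_table_size(c.oid), 0), 2) AS index_ratio,
       s.n_dead_tup
FROM pg_class c
JOIN pg_stat_user_tables s ON s.relid = c.oid
WHERE c.relkind = 'r'
ORDER BY pg_total_relation_size(c.oid) DESC
LIMIT 20;
```

A persistently high `n_dead_tup` relative to `n_live_tup` points at autovacuum falling behind, which inflates both storage and IOPS.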
6. Correlate query volume with resource usage patterns
Use `postgresql-queries-count` alongside `postgresql-backends` to understand your workload pattern. If you have high query counts (>10K/sec) but low complexity queries, you might be CPU-bound and could benefit from connection pooling or query batching rather than a larger instance. If backend count is consistently high (>80% of max_connections), you may have connection leak issues driving up instance size unnecessarily.
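A quick view of backend pressure, sketched against `pg_stat_activity` (the `backend_type` filter excludes autovacuum and other internal workers):

```sql
-- Client backends grouped by state; many long-idle backends suggest
-- missing connection pooling or a connection leak.
SELECT state,
       count(*) AS backends,
       max(now() - state_change) AS longest_in_state
FROM pg_stat_activity
WHERE backend_type = 'client backend'
GROUP BY state
ORDER BY backends DESC;
```

If most backends sit in `idle` for long stretches, a pooler such as PgBouncer or RDS Proxy will usually recover more headroom than a larger instance.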

Monitoring Interfaces

PostgreSQL CloudWatch Metrics
PostgreSQL Datadog Integration
PostgreSQL Prometheus Exporter (pg_exporter)