VACUUM FULL Execution Planning
Proactive Health (warning)
Planning and executing VACUUM FULL operations on production tables, managing exclusive locks, disk space requirements, and downtime windows.
Prompt: “I have a 500GB PostgreSQL table with 60% bloat that needs VACUUM FULL to reclaim space. How do I plan this operation for production - how long will it take, how much extra disk space do I need, and what's the impact on running queries?”
Agent Playbook
When an agent encounters this scenario, Schema provides these diagnostic steps automatically.
Before running VACUUM FULL on a production table, the most critical steps are confirming available disk space (you'll need up to 2x the table size), evaluating whether pg_repack is a better alternative to avoid exclusive locks, and estimating execution time based on I/O capacity. Only after confirming you have the disk space and a sufficient maintenance window should you proceed with VACUUM FULL.
1. Verify current bloat percentage and exact table size
Start by confirming the bloat metrics are accurate — check `postgresql-table-bloat` to validate the 60% bloat claim and `table-size-size` to get the exact current table size. With a 500GB table at 60% bloat, you're looking at roughly 300GB of actual data and 200GB of dead space. This validation is critical because it determines whether the operation is even necessary — the `table-bloat-critical` insight recommends action at 100%+ bloat, so 60% might not require VACUUM FULL at all.
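The bloat check can be approximated from planner statistics. A minimal sketch, assuming the table is named `orders` (substitute your own table); dead-tuple counts come from the last ANALYZE, so treat the percentage as an estimate rather than an exact figure:

```sql
-- Estimate bloat from dead-tuple statistics and report current size.
SELECT relname,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size,
       n_live_tup,
       n_dead_tup,
       round(100.0 * n_dead_tup / NULLIF(n_live_tup, 0), 1) AS dead_pct
FROM pg_stat_user_tables
WHERE relname = 'orders';
```

If `dead_pct` is well below the 100% critical threshold, pause and reassess before committing to VACUUM FULL.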
2. Check available disk space before planning anything else
The `vacuum-full-disk-space-exclusive-lock` insight is explicit: VACUUM FULL creates a fresh copy of the table, potentially doubling disk usage temporarily. For your 500GB table, you need at least 500GB of free disk space, ideally more for safety margin. If you're already low on disk space (which is often why people consider VACUUM FULL), running it could exhaust your disk mid-operation and cause catastrophic failure. Check `df -h` on the data partition before proceeding.
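To size the required free space precisely, measure the heap and index portions separately, since VACUUM FULL rewrites both. A sketch assuming the table is named `orders`:

```sql
-- VACUUM FULL rewrites the heap and rebuilds every index, so budget free
-- space for the live portion of both, plus safety margin.
SELECT pg_size_pretty(pg_table_size('orders'))          AS heap_size,
       pg_size_pretty(pg_indexes_size('orders'))        AS index_size,
       pg_size_pretty(pg_total_relation_size('orders')) AS total_size;
```

Compare `total_size` against the free space reported by `df -h` on the data partition before proceeding.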
3. Evaluate pg_repack as a superior alternative to VACUUM FULL
Before committing to VACUUM FULL, seriously consider the pg_repack extension instead. The insights on `vacuum-full-exclusive-lock`, `vacuum-full-blocks-production`, and `cluster-blocks-all-reads` all recommend pg_repack because it achieves the same space reclamation without taking an exclusive lock that blocks all queries, including SELECTs. For production systems that can't afford hours of complete table unavailability, pg_repack is the standard solution — it works online with minimal lock time.
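A minimal pg_repack setup sketch — database name `mydb` and table `public.orders` are placeholders:

```sql
-- pg_repack must be installed as an extension in the target database first.
CREATE EXTENSION IF NOT EXISTS pg_repack;
-- The rebuild itself is driven from the shell, e.g.:
--   pg_repack --dbname=mydb --table=public.orders
-- pg_repack holds only brief ACCESS EXCLUSIVE locks at the start and end
-- of the rebuild; the bulk of the copy runs while normal traffic continues.
```

Note that pg_repack still needs roughly the same temporary disk space as VACUUM FULL, since it also builds a full copy of the table.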
4. Estimate execution time based on disk I/O throughput
VACUUM FULL execution time is primarily I/O-bound. If your storage can sustain 100 MB/s write throughput, a 500GB table will take roughly 5,000 seconds (1.4 hours) minimum, but realistically expect 2-4 hours depending on index rebuilding. Run a test write benchmark on your storage (`dd` or `fio`) to get actual throughput numbers. Factor in that this entire duration will be exclusive lock time where the table is completely unusable — every query against it will block.
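The back-of-envelope arithmetic can be run directly in psql; the 100 MB/s figure below is an assumed benchmark result, to be replaced with your measured throughput:

```sql
-- Minimum duration = bytes to move / sustained write throughput.
-- 500 GB at 100 MB/s; the real figure is higher once index rebuilds and
-- WAL/checkpoint overhead are included.
SELECT (500 * 1024.0) / 100        AS seconds_min,  -- ~5120 s
       (500 * 1024.0) / 100 / 3600 AS hours_min;    -- ~1.4 h
```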
5. Identify queries and applications that will be blocked
Check `postgresql-locks` during normal operations to understand what queries typically access this table, because VACUUM FULL's ACCESS EXCLUSIVE lock will block everything — all SELECTs, INSERTs, UPDATEs, DELETEs. Review your application code and monitoring to identify which services depend on this table. If any critical user-facing features query this table, they'll timeout or fail for the entire VACUUM FULL duration, which could be hours.
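One way to build that dependency picture is to sample active sessions under normal load. A rough sketch, assuming the table is named `orders` (text matching on `query` is approximate and will miss prepared statements and views):

```sql
-- Sample which sessions are touching the table during normal operations;
-- run periodically over a representative window.
SELECT pid, usename, application_name, state, query
FROM pg_stat_activity
WHERE query ILIKE '%orders%'
  AND pid <> pg_backend_pid();
```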
6. Plan maintenance window and fallback strategy
Based on your execution time estimate (likely 2-4 hours for 500GB), schedule a maintenance window that's at least 50% longer than your estimate to account for unexpected issues. The `vacuum-full-blocks-production` insight emphasizes never running this during business hours. Have a rollback plan: if VACUUM FULL fails mid-operation or takes too long, you may need to cancel it (Ctrl+C or `pg_cancel_backend`), which leaves the table in its original bloated state but at least functional.
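For the cancellation path, a sketch of finding and stopping the VACUUM FULL backend (`<pid>` is a placeholder for the value returned by the first query):

```sql
-- Locate the backend running VACUUM FULL and how long it has been running.
SELECT pid, now() - query_start AS runtime, query
FROM pg_stat_activity
WHERE query ILIKE 'VACUUM FULL%';

-- pg_cancel_backend sends SIGINT and aborts the statement cleanly;
-- escalate to pg_terminate_backend only if the cancel is ignored.
SELECT pg_cancel_backend(<pid>);
```

Because VACUUM FULL is transactional, cancelling it rolls back cleanly: the new copy is discarded and the original bloated table remains intact and usable.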
7. Consider whether regular VACUUM and autovacuum tuning is sufficient
Before pulling the trigger on VACUUM FULL, the `table-bloat-critical` and `vacuum-full-exclusive-lock` insights both recommend trying regular VACUUM first and tuning autovacuum parameters to prevent future bloat accumulation. At 60% bloat (below the 100% critical threshold), aggressive regular VACUUM might reclaim enough space without the downtime. Adjust `autovacuum_vacuum_scale_factor` and `autovacuum_vacuum_cost_delay` to be more aggressive on this table specifically using per-table storage parameters.
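Per-table autovacuum tuning can be sketched as follows — the table name `orders` and the specific values are illustrative starting points, not universal recommendations:

```sql
-- More aggressive autovacuum for this table only, via per-table storage
-- parameters; cluster-wide settings are left untouched.
ALTER TABLE orders SET (
    autovacuum_vacuum_scale_factor = 0.02,  -- trigger at ~2% dead tuples
    autovacuum_vacuum_cost_delay   = 2      -- milliseconds; lower = faster
);
```

Monitor `n_dead_tup` after the change; if bloat stabilizes or shrinks, the rewrite may never be needed.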
Technologies
Related Insights
- VACUUM FULL requires exclusive lock preventing concurrent operations (warning)
- CLUSTER operation blocks all reads during reordering (critical)
- Table bloat exceeds 100% causing significant performance degradation (critical)
- VACUUM FULL blocks production traffic during execution (outcome; warning)
- VACUUM FULL doubles disk space usage and blocks all queries (prerequisite; critical)
- Need for VACUUM FULL or pg_repack (diagnostic step; stub insight from scenario discovery)
- REINDEX requires disk space equal to full index size plus overhead (supporting evidence; warning)
Relevant Metrics
Monitoring Interfaces
PostgreSQL Native