Version Upgrade Planning and Validation
Migration
Planning PostgreSQL major version upgrade requires understanding compatibility, performance changes, and risk mitigation.
Prompt: “We need to upgrade from PostgreSQL 12 to PostgreSQL 15. What should I check for compatibility issues, and how do I validate performance won't regress after the upgrade?”
With vs Without Schema
Same prompt, same model (claude-sonnet-4-5-20250929) — the only difference is whether Schema's knowledge graph was available.
Tokens: 1,112 vs 2,816
Latency: 26.9s vs 49.0s
Turns: 2 vs 7
Tool Calls: 0 vs 3
Schema Calls: 0 vs 0
Response: 3,517 chars vs 7,367 chars
Evaluation
Without Schema: B+ vs With Schema: A-
The response without Schema provides solid guidance covering compatibility issues, validation steps, and performance testing, though some recommendations are somewhat generic. The response with Schema offers significantly more technical depth, including specific breaking changes for PostgreSQL 13, 14, and 15 (like UNIQUE nulls behavior and public schema permissions), a complete performance validation workflow with Python comparison scripts, and two upgrade methods (pg_upgrade and logical replication). The Schema-assisted response also provides more actionable SQL queries for identifying affected areas (like UNIQUE constraints on nullable columns) and more comprehensive post-upgrade monitoring. Both would help the operator, but the Schema response provides more version-specific, actionable guidance that would more reliably prevent upgrade issues.
Without Schema
I'll help you plan a safe PostgreSQL 12 to 15 upgrade. Here are the key areas to check:
## Compatibility Issues to Review
**Breaking Changes:**
- **Permissions changes**: `PUBLIC` schema permissions restricted - explicit grants may be needed
- **Deprecated features removed**: Check for use of deprecated functions or syntax
- **pg_stat_statements**: Column changes if you use this extension
- **Logical replication**: Protocol changes if using logical replication
- **Hash indexes**: Improved but incompatible format (need rebuild)
- **Operator precedence**: Some operators changed precedence
**Extension compatibility:**
- Review all installed extensions (`SELECT * FROM pg_extension;`) for version compatibility
- PostGIS, pgcrypto, pg_trgm, etc. may need updates
**SQL compatibility:**
- Functions using `PROCEDURE` vs `FUNCTION` (more strict in PG 13+)
- `VACUUM` behavior changes
- Partitioning improvements may affect existing partition logic
## Pre-Upgrade Validation Steps
**1. Test with pg_upgrade --check:**
```bash
pg_upgrade --check \
-b /path/to/pg12/bin \
-B /path/to/pg15/bin \
-d /path/to/pg12/data \
-D /path/to/pg15/data
```
**2. Review deprecation warnings:**
```sql
-- Check for deprecated functions
SELECT DISTINCT proname
FROM pg_proc
WHERE proname IN ('pg_stat_get_backend_activity_start', ...);
```
**3. Check query plans that might change:**
- Extended statistics
- Improved planner for partitioned tables
- Better join planning
## Performance Validation Strategy
**Before upgrade:**
```sql
-- Capture baseline metrics
SELECT * FROM pg_stat_database;
SELECT * FROM pg_stat_user_tables;
SELECT * FROM pg_stat_user_indexes;
-- Export query performance baseline
SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC LIMIT 100;
```
**Performance testing approach:**
1. **Clone production to staging** with PG 15
2. **Run EXPLAIN ANALYZE** on critical queries - compare plans
3. **Load test** with realistic workload using pgbench or your test suite
4. **Monitor key metrics:**
- Query execution times
- Lock contention
- Connection pool performance
- I/O patterns
- Memory usage
**After upgrade validation:**
```sql
-- Reanalyze all tables (critical!)
ANALYZE VERBOSE;
-- Check for missing indexes after reindex
SELECT schemaname, tablename, indexname
FROM pg_indexes
WHERE schemaname NOT IN ('pg_catalog', 'information_schema');
-- Monitor slow queries
SELECT query, mean_exec_time, calls
FROM pg_stat_statements
WHERE mean_exec_time > 1000 -- queries > 1s
ORDER BY mean_exec_time DESC;
```
## Post-Upgrade Actions
**Immediately after:**
- `REINDEX` (especially hash indexes)
- `ANALYZE` all tables
- Update table statistics: `ALTER TABLE ... SET STATISTICS ...`
- Regenerate foreign key constraints if needed
**Monitor for 7-14 days:**
- Query performance regression
- Autovacuum effectiveness
- Connection pool behavior
- Replication lag (if applicable)
## Recommended Upgrade Path
1. **Full backup** (pg_dumpall or filesystem snapshot)
2. **Test on staging** with production data clone
3. **Document rollback plan**
4. **Use pg_upgrade with --link** for faster cutover (if downtime allows)
5. **Schedule during maintenance window** with buffer time
6. **Keep old cluster** for quick rollback until validated
Would you like me to help you create specific validation scripts for your environment, or do you have particular areas of concern (e.g., specific extensions, replication setup, etc.)?
With Schema
3 tool calls
I'll help you plan a PostgreSQL 12 to 15 upgrade. Let me first check if you have any database-related files in your project that I should review for specific compatibility concerns.
Here's a comprehensive guide for upgrading from PostgreSQL 12 to 15:
## Key Compatibility Issues to Check
### Breaking Changes (PG 12 → 15)
**PostgreSQL 13:**
- `VACUUM FULL` now requires `SHARE UPDATE EXCLUSIVE` lock instead of `ACCESS EXCLUSIVE` for TOAST tables
- Removed deprecated `postmaster -o` option
- Functions in `information_schema` now return actual types instead of `cardinal_number`
**PostgreSQL 14:**
- Removed support for Python 2 in PL/Python
- Changed behavior of `to_char()` with localized month names
- Non-superusers can no longer use `REINDEX DATABASE/SYSTEM`
**PostgreSQL 15:**
- `UNIQUE` constraints can now be declared `NULLS NOT DISTINCT`; the long-standing default (multiple NULLs treated as distinct) is unchanged, but constraints can now opt into treating NULLs as equal
- Removed deprecated `VACUUM VERBOSE` output format
- Removed support for Visual Studio 2013 for Windows builds
- Changed permissions on `public` schema (no longer world-writable)
### Critical Areas to Review
1. **SQL Code:**
- Search for `UNIQUE` constraints with nullable columns
- Review any `public` schema permission assumptions
- Check for Python 2 usage in PL/Python functions
- Verify `information_schema` queries expecting numeric types
2. **Extensions:**
- List all extensions: `SELECT * FROM pg_extension;`
- Verify each extension supports PG 15
- Check for deprecated/removed extensions
3. **Configuration:**
- Review `postgresql.conf` parameters (some were renamed/removed)
- Check `shared_preload_libraries` for incompatible modules
## Pre-Upgrade Validation Steps
```bash
# 1. Check extension compatibility
psql -c "SELECT extname, extversion FROM pg_extension ORDER BY extname;"
# 2. Find queries using potentially affected features
psql -c "SELECT DISTINCT query FROM pg_stat_statements WHERE query LIKE '%UNIQUE%';"
# 3. Check for deprecated settings
pg_config --configure | grep -E "python2|deprecated"
# 4. Identify UNIQUE constraints on nullable columns
psql -c "
SELECT n.nspname, c.relname, a.attname
FROM pg_constraint con
JOIN pg_class c ON con.conrelid = c.oid
JOIN pg_namespace n ON c.relnamespace = n.oid
JOIN pg_attribute a ON a.attrelid = c.oid AND a.attnum = ANY(con.conkey)
WHERE con.contype = 'u' AND NOT a.attnotnull;
"
```
## Performance Validation Strategy
### 1. Establish Baseline (Pre-Upgrade)
```bash
# Enable pg_stat_statements on PG 12
echo "shared_preload_libraries = 'pg_stat_statements'" >> postgresql.conf
# Restart and create extension
psql -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"
# Capture baseline metrics
psql --csv -c "
SELECT
query,
calls,
mean_exec_time,
stddev_exec_time,
min_exec_time,
max_exec_time,
total_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 100;" > baseline_query_performance.csv
# Capture table statistics
psql --csv -c "
SELECT schemaname, tablename, seq_scan, seq_tup_read,
idx_scan, idx_tup_fetch, n_tup_ins, n_tup_upd, n_tup_del
FROM pg_stat_user_tables;" > baseline_table_stats.csv
# Capture index usage
psql --csv -c "
SELECT schemaname, tablename, indexname, idx_scan, idx_tup_read, idx_tup_fetch
FROM pg_stat_user_indexes;" > baseline_index_stats.csv
```
### 2. Testing Environment Setup
```bash
# Create test environment with PG 15
docker run -d --name pg15-test \
-e POSTGRES_PASSWORD=test \
-p 5433:5432 \
postgres:15
# Backup PG 12 and restore to PG 15
pg_dumpall -h localhost -p 5432 | psql -h localhost -p 5433
```
### 3. Post-Upgrade Performance Tests
```bash
# Reset stats after upgrade
psql -c "SELECT pg_stat_statements_reset();"
psql -c "SELECT pg_stat_reset();"
# Run workload (production replay or synthetic)
# Option A: pg_replay if you have query logs
# Option B: pgbench with custom scripts
# Option C: Application integration tests
# After workload, compare metrics
psql --csv -c "
SELECT
query,
calls,
mean_exec_time,
stddev_exec_time,
min_exec_time,
max_exec_time,
total_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 100;" > pg15_query_performance.csv
# Compare results
python3 << 'EOF'
import csv

baseline = {}
with open('baseline_query_performance.csv') as f:
    reader = csv.DictReader(f)
    for row in reader:
        baseline[row['query']] = float(row['mean_exec_time'])

with open('pg15_query_performance.csv') as f:
    reader = csv.DictReader(f)
    for row in reader:
        query = row['query']
        new_time = float(row['mean_exec_time'])
        old_time = baseline.get(query)
        if old_time:
            diff_pct = ((new_time - old_time) / old_time) * 100
            if abs(diff_pct) > 10:  # flag queries that changed by more than 10%
                print(f"{diff_pct:+.1f}% {query[:80]}")
EOF
```
### 4. EXPLAIN ANALYZE Comparison
```sql
-- Run on both PG 12 and 15
EXPLAIN (ANALYZE, BUFFERS, VERBOSE, COSTS)
SELECT ... -- your critical queries
-- Compare:
-- - Execution time
-- - Plan differences (index scans vs seq scans)
-- - Buffer hits
-- - Join strategies
```
## Upgrade Process Recommendations
### Option 1: pg_upgrade (Fastest)
```bash
# Stop PG 12
pg_ctl -D /data/pg12 stop
# Run pg_upgrade in check mode first
pg_upgrade \
--old-datadir /data/pg12 \
--new-datadir /data/pg15 \
--old-bindir /usr/lib/postgresql/12/bin \
--new-bindir /usr/lib/postgresql/15/bin \
--check
# If check passes, run actual upgrade
pg_upgrade \
--old-datadir /data/pg12 \
--new-datadir /data/pg15 \
--old-bindir /usr/lib/postgresql/12/bin \
--new-bindir /usr/lib/postgresql/15/bin \
--link # Use hard links (faster but no rollback)
# Run analyze on all tables
vacuumdb --all --analyze-in-stages
# Update extensions
psql -c "ALTER EXTENSION extension_name UPDATE;"
```
### Option 2: Logical Replication (Zero Downtime)
```sql
-- On PG 12 (publisher)
CREATE PUBLICATION my_pub FOR ALL TABLES;
-- On PG 15 (subscriber)
CREATE SUBSCRIPTION my_sub
CONNECTION 'host=pg12_host dbname=mydb user=repuser'
PUBLICATION my_pub;
-- Monitor replication lag
SELECT * FROM pg_stat_subscription;
-- When caught up, switch application to PG 15
```
## Post-Upgrade Checklist
```bash
# 1. Run ANALYZE
vacuumdb --all --analyze-in-stages
# 2. Update extensions
psql -c "SELECT 'ALTER EXTENSION ' || extname || ' UPDATE;'
FROM pg_extension WHERE extname != 'plpgsql';" | psql
# 3. Check for missing indexes
psql -c "SELECT schemaname, tablename FROM pg_stat_user_tables
WHERE idx_scan = 0 AND seq_scan > 1000;"
# 4. Verify replication (if applicable)
psql -c "SELECT * FROM pg_stat_replication;"
# 5. Monitor error logs
tail -f /var/log/postgresql/postgresql-15-main.log
```
## Performance Regression Detection
Monitor these for 24-48 hours post-upgrade:
```sql
-- Slow queries
SELECT query, mean_exec_time, calls
FROM pg_stat_statements
WHERE mean_exec_time > 1000
ORDER BY mean_exec_time DESC;
-- Lock contention
SELECT relation::regclass, mode, granted, COUNT(*)
FROM pg_locks
GROUP BY 1,2,3
HAVING COUNT(*) > 5;
-- Blocking queries
SELECT pid, usename, query, wait_event_type, wait_event
FROM pg_stat_activity
WHERE wait_event IS NOT NULL;
```
Would you like me to create a detailed upgrade checklist script, or help you analyze specific parts of your database schema?
Agent Playbook
When an agent encounters this scenario, Schema provides these diagnostic steps automatically.
When planning a PostgreSQL major version upgrade, start by identifying breaking changes and deprecated features in your current database, then validate extension compatibility before you invest time in testing. Only after those blockers are cleared should you establish performance baselines and build your validation test suite. Always have a tested rollback plan before you touch production.
1. Audit for deprecated features and breaking changes
First thing I do is query pg_catalog to find deprecated features that will break in PostgreSQL 15. Check for removed data types (like abstime, reltime), WITH OIDS tables, deprecated SQL constructs, and hash indexes on system catalogs. Run pg_upgrade --check on a snapshot first—it'll catch most issues. The `deprecated-features-in-use` insight is critical here; a single incompatibility can block your entire upgrade until you refactor application code.
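As a quick first pass before running `pg_upgrade --check`, a static scan of your SQL source and migration files can surface the constructs the step above mentions. This is a minimal sketch; the pattern list is illustrative only, and the authoritative list of removed features lives in the PostgreSQL 13–15 release notes.

```python
import re

# Illustrative patterns only -- not the full set from the release notes.
DEPRECATED_PATTERNS = {
    "abstime/reltime types": re.compile(r"\b(abstime|reltime|tinterval)\b", re.IGNORECASE),
    "WITH OIDS tables": re.compile(r"\bWITH\s+OIDS\b", re.IGNORECASE),
}

def scan_sql(text):
    """Return the names of deprecated constructs found in a SQL snippet."""
    return [name for name, pat in DEPRECATED_PATTERNS.items() if pat.search(text)]
```

Running this over `ALTER TABLE`/`CREATE TABLE` statements in version control catches issues long before a snapshot test, though it cannot replace `pg_upgrade --check`.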
2. Verify extension compatibility and upgrade paths
Check every extension in pg_extension against the PostgreSQL 15 compatibility matrix. Extensions like PostGIS, TimescaleDB, pgvector often require specific versions for PG15 compatibility. Test extension upgrades in a dev environment first—some require ALTER EXTENSION ... UPDATE, others need a drop/recreate. This is where `extension-version-compatibility` becomes a hard blocker if you're running custom or unmaintained extensions.
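The version check above can be scripted against a `SELECT extname, extversion FROM pg_extension` dump. Note that the minimum-version floors below are placeholders for illustration; the real requirements must come from each extension's own documentation.

```python
# Hypothetical PG15 minimum-version floors -- replace with the real numbers
# from each extension's release notes before relying on this.
PG15_MIN_VERSIONS = {"postgis": (3, 3), "timescaledb": (2, 9), "pgvector": (0, 3)}

def parse_major_minor(version):
    """'3.1.4' -> (3, 1); patch-level parts are ignored."""
    parts = version.split(".")
    return (int(parts[0]), int(parts[1]) if len(parts) > 1 else 0)

def incompatible_extensions(installed):
    """Names from pg_extension whose version sits below the assumed PG15 floor."""
    return [name for name, ver in installed.items()
            if name.lower() in PG15_MIN_VERSIONS
            and parse_major_minor(ver) < PG15_MIN_VERSIONS[name.lower()]]
```

Anything this flags is a hard blocker: upgrade the extension on PG 12 first, or confirm a drop/recreate path in a dev environment.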
3. Map configuration parameter changes
Review your current postgresql.conf against the release notes for PG 13, 14, and 15—several GUCs were renamed or removed. Key changes: wal_keep_segments became wal_keep_size (units changed!), vacuum_cleanup_index_scale_factor was removed, and many defaults tightened. The `configuration-parameter-changes-needed` insight will guide which params need updates. Missing this means your tuned settings won't apply post-upgrade.
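The `wal_keep_segments` → `wal_keep_size` rename is easy to get wrong because the unit changed too. A sketch of the translation, assuming the default 16MB WAL segment size and covering only the two GUCs named above (a real migration script would handle the full release-notes list):

```python
WAL_SEGMENT_MB = 16  # default WAL segment size

def migrate_guc(name, value):
    """Map a PG 12 setting to its PG 13+ form; None means the GUC was removed."""
    if name == "wal_keep_segments":
        # renamed to wal_keep_size; the unit changed from segment count to size
        return ("wal_keep_size", f"{int(value) * WAL_SEGMENT_MB}MB")
    if name == "vacuum_cleanup_index_scale_factor":
        return None  # removed; drop it from postgresql.conf
    return (name, value)  # settings with no known change pass through
```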
4. Capture current performance baseline
Before touching anything, capture at least a week of `queries_duration_max`, pg_stat_statements data, buffer hit ratios, and I/O patterns. Note the `stats_reset` timestamp so you know your baseline window. Also record `uptime` and `postgresql.database.size` to validate the upgrade didn't corrupt data. Without this baseline, you can't objectively measure whether performance regressed—just gut feelings and angry users.
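The buffer hit ratio mentioned above reduces to two counters from `pg_stat_database`. A small helper for snapshotting it per database (the row shape here assumes you have already fetched the counters as dicts; function names are illustrative):

```python
def buffer_hit_ratio(blks_hit, blks_read):
    """Cache hit ratio from pg_stat_database counters; 1.0 means all reads hit cache."""
    total = blks_hit + blks_read
    return blks_hit / total if total else 1.0

def baseline_snapshot(rows):
    """Index pg_stat_database rows (as dicts) by datname with their hit ratio."""
    return {r["datname"]: buffer_hit_ratio(r["blks_hit"], r["blks_read"]) for r in rows}
```

Persist the snapshot alongside the `stats_reset` timestamp so the post-upgrade comparison covers the same counting window.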
5. Build test plan for query plan changes
PostgreSQL 15 has planner improvements that can change query plans—sometimes for worse on your specific schema. Run EXPLAIN ANALYZE on your top 50 queries (by total time) in both PG12 and PG15, comparing plans and actual timings. Watch for nested loop vs hash join switches, or new parallel query paths that might thrash your I/O. The `testing-and-validation-methodology` insight should guide your test coverage. Replay production workload with pgbench or pg_replay if possible.
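Once you have mean timings for the top-50 queries on both versions, the regression check is a simple diff. A sketch, assuming timings keyed by normalized query text and a 10% threshold (tune the threshold to your own noise floor):

```python
def regressions(baseline, current, threshold_pct=10.0):
    """Queries whose mean execution time worsened by more than threshold_pct."""
    flagged = {}
    for query, old in baseline.items():
        new = current.get(query)
        if new is not None and old > 0:
            change = (new - old) / old * 100  # positive = slower on PG 15
            if change > threshold_pct:
                flagged[query] = round(change, 1)
    return flagged
```

Each flagged query then gets the EXPLAIN ANALYZE side-by-side treatment to see whether a plan switch (nested loop vs hash join, new parallel paths) is responsible.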
6. Prepare and test rollback procedure
Major version upgrades are one-way via pg_upgrade—you can't downgrade the data directory. Take a full pg_dump before starting, verify restore time fits your RTO, and keep the old PG12 binaries around. If using replication, plan to promote a lagging PG12 replica if the upgrade fails. The `rollback-plan-needed` insight is non-negotiable; I've seen teams stuck on a broken PG15 for days because they had no rollback path. Test the rollback, don't just document it.
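"Verify restore time fits your RTO" deserves an explicit check after each rollback rehearsal. A trivial sketch with a hypothetical 1.5x safety factor (production restores run slower than rehearsals under load):

```python
def rollback_fits_rto(measured_restore_minutes, rto_minutes, safety_factor=1.5):
    """True if a rehearsed pg_restore run, padded by a safety factor, fits the RTO."""
    return measured_restore_minutes * safety_factor <= rto_minutes
```

If this returns False, the rollback plan needs to change before the upgrade proceeds, e.g. promoting a lagging PG 12 replica instead of restoring from dump.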
Related Insights
- configuration parameter changes needed — stub insight created from scenario discovery hint
- version upgrade compatibility check (diagnostic_step) — stub insight created from scenario discovery hint
- version upgrade considerations (diagnostic_step) — stub insight created from scenario discovery hint
- performance regression risk (supporting_evidence) — stub insight created from scenario discovery hint
- extension version compatibility (prerequisite) — stub insight created from scenario discovery hint
- deprecated features in use (prerequisite) — stub insight created from scenario discovery hint
- rollback plan needed (supporting_evidence) — stub insight created from scenario discovery hint
- testing and validation methodology (supporting_evidence) — stub insight created from scenario discovery hint
Relevant Metrics
Monitoring Interfaces
PostgreSQL Native