When migrating PostgreSQL monitoring between platforms, start by mapping the specific metrics you depend on most (connections, throughput, cache hits), then systematically verify that postgres_exporter collectors are enabled and capturing equivalent data. Run both platforms in parallel during migration to catch calculation differences and missing dimensions before you lose visibility.
1. Map your connection metrics to postgres_exporter equivalents
Datadog's `postgresql.connections` and CloudWatch's `DatabaseConnections` both correspond to `pg_stat_database.numbackends`, which postgres_exporter exposes as `pg_stat_database_numbackends` (exact names can vary slightly between exporter versions). For the breakdown by state (active, idle, idle in transaction), use `pg_stat_activity_count`, which carries a `state` label. These are the most commonly monitored metrics, so verify them first to make sure you keep visibility into connection saturation.
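As a quick cross-check once both pipelines are live, PromQL like the following (a sketch; metric names assume a recent prometheus-community postgres_exporter and may differ in yours) should track Datadog's connection graphs:

```promql
# Total backends across all databases -- should track Datadog's
# postgresql.connections and CloudWatch's DatabaseConnections.
sum(pg_stat_database_numbackends)

# Connection breakdown by state (active, idle, idle in transaction).
sum by (state) (pg_stat_activity_count)
```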
2. Build a comprehensive metric mapping table for your existing dashboards
Go through each Datadog dashboard and alert and map every metric to its Prometheus equivalent: `postgresql.blocks_read` and `postgresql.blocks_hit` become `pg_stat_database_blks_read` and `pg_stat_database_blks_hit` for I/O, `postgresql.commits` becomes `pg_stat_database_xact_commit` for transaction throughput, `postgresql.deadlocks` becomes `pg_stat_database_deadlocks` for lock issues, and `postgresql.database.size` becomes `pg_database_size_bytes` for growth monitoring. CloudWatch naming differs again: `DatabaseConnections` comes from the same server-side state, but `ReadIOPS` and `WriteIOPS` are measured at the storage layer, so their closest Prometheus analogue is node_exporter disk metrics rather than anything in the pg_stat views. Create a spreadsheet mapping old metric names to new ones before you start rebuilding dashboards.
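A sketch of those core mappings as PromQL, assuming prometheus-community postgres_exporter default metric names (adjust to whatever your exporter actually exposes):

```promql
# postgresql.blocks_read / postgresql.blocks_hit -> per-second block I/O
rate(pg_stat_database_blks_read[5m])
rate(pg_stat_database_blks_hit[5m])

# postgresql.commits -> commits per second
rate(pg_stat_database_xact_commit[5m])

# postgresql.deadlocks -> deadlocks per second
rate(pg_stat_database_deadlocks[5m])

# postgresql.database.size -> current size in bytes (a gauge, no rate() needed)
pg_database_size_bytes
```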
3. Verify postgres_exporter collectors aren't being silently skipped
Collectors can be disabled due to PostgreSQL version constraints or missing prerequisites. For example, WAL statistics (`pg_stat_wal`) require PostgreSQL 14+, and collectors tagged `extension:pg_stat_statements` won't run if that extension isn't installed. Run `pg_exporter --explain` to see which collectors will execute against your server version (with prometheus-community postgres_exporter, check the startup logs and its `--[no-]collector.*` flags instead), and run `SELECT * FROM pg_extension` to verify that required extensions are installed. Every silently skipped collector is a metric Datadog was providing that Prometheus now isn't, which means a blind spot post-migration.
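After cutover, one way to catch such gaps automatically (a sketch; swap in one representative metric per collector your dashboards depend on) is an `absent()` check per critical metric family:

```promql
# The exporter can't reach PostgreSQL at all.
pg_up == 0

# A collector you rely on has gone quiet; add one absent() check
# per metric family used by your dashboards and alerts.
absent(pg_stat_database_xact_commit)
absent(pg_stat_activity_count)
```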
4. Account for metric type and calculation differences between platforms
Datadog often auto-calculates rates (queries per second, commits per second), while postgres_exporter exposes raw counters that require `rate()` or `increase()` in PromQL. Note also that Prometheus metric names can't contain dots, so there is no `postgresql.commits` to query; the counter is `pg_stat_database_xact_commit`. CloudWatch additionally aggregates over 60-second periods by default, while Prometheus records point-in-time values at each scrape. Compare your PromQL output against the Datadog UI, confirming for instance that `rate(pg_stat_database_xact_commit[5m])` matches what Datadog shows for commit throughput, or your migrated alerts will fire at the wrong thresholds.
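For example, two sketches of the counter-to-rate translation (the `datname` value is a placeholder for your own database):

```promql
# Commits per second over a 5-minute window, the analogue of
# Datadog's auto-computed commit rate.
rate(pg_stat_database_xact_commit{datname="mydb"}[5m])

# Total commits over the last hour, handy for volume comparisons.
increase(pg_stat_database_xact_commit{datname="mydb"}[1h])
```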
5. Validate label/dimension parity for metric slicing
Datadog uses tags like `db:production`, `table:users`, and `state:active` to slice metrics; in Prometheus these become labels on the metric series. Verify that postgres_exporter exposes the dimensions you need. For example, `pg_stat_database_numbackends` should carry a `datname` label for per-database filtering, and `pg_stat_activity_count` should carry a `state` label. Missing labels mean you can't recreate your existing per-database or per-table dashboards and alerts.
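Label slicing in PromQL then looks like this (a sketch; the label values are placeholders):

```promql
# Per-database connection counts, the equivalent of slicing by Datadog's db:* tag.
sum by (datname) (pg_stat_database_numbackends)

# One database, one state, mirroring a Datadog query scoped to
# db:production, state:active.
pg_stat_activity_count{datname="production", state="active"}
```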
6. Run both platforms in parallel for at least one full monitoring cycle
Keep Datadog running while you bring up Prometheus, and compare the two side by side for at least 24-48 hours to catch daily and weekly patterns. Focus on validating that critical metrics, such as the buffer cache hit ratio, replication lag, and transaction rates, match between platforms. This parallel run surfaces edge cases like stale statistics producing different values, scrape-interval mismatches, or missing configuration before you lose the safety net of your existing monitoring.
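For the buffer cache hit ratio specifically, a sketch of the PromQL to compare against Datadog's value (the 5-minute window is an assumption; align it with your Datadog rollup):

```promql
# Cache hit ratio: block hits / (block hits + disk block reads).
sum(rate(pg_stat_database_blks_hit[5m]))
/
(sum(rate(pg_stat_database_blks_hit[5m])) + sum(rate(pg_stat_database_blks_read[5m])))
```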