Cross-platform metric mapping for PostgreSQL

Cross-Platform

Understanding how PostgreSQL metrics are named and exposed differently across monitoring platforms (CloudWatch, Cloud Monitoring, Azure Monitor, Datadog, Prometheus)

Prompt: I'm migrating my PostgreSQL monitoring from Datadog to Prometheus. How do Datadog's postgresql.connections metric and AWS CloudWatch's DatabaseConnections metric map to the postgres_exporter metrics in Prometheus?

With vs Without Schema

Same prompt, same model (claude-sonnet-4-5-20250929) — the only difference is whether Schema's knowledge graph was available.

| | Without Schema | With Schema |
|---|---|---|
| Tokens | 835 | 3,031 |
| Latency | 16.3s | 1.1m |
| Turns | 2 | 36 |
| Tool Calls | 0 | 17 |
| Schema Calls | 0 | 13 |
| Response | 1,857 chars | 2,187 chars |
Evaluation

| Without Schema | With Schema |
|---|---|
| A- | B+ |
Both responses provide accurate, actionable guidance with the correct metric mappings and concrete PromQL examples. The response without Schema includes a practical tip about filtering template databases that the Schema-enabled response lacks, while the Schema response adds transparency by showing it queried a knowledge base but includes meta-commentary that slightly clutters the presentation. The core technical content is nearly identical in both cases, suggesting the Schema.ai knowledge graph didn't provide unique insights beyond what the LLM already knew for this particular question. Overall, both would reliably help the operator complete the migration, with the non-Schema response being marginally cleaner and more practical.
Without Schema
I can help you map these metrics. Here's how they correspond:

## Metric Mappings

### **Datadog `postgresql.connections`** → Prometheus

Maps to: **`pg_stat_database_numbackends`**

- Both measure the number of active backends/connections per database
- Source: `pg_stat_database.numbackends`
- Labels in Prometheus: `{datname="your_database"}`

To get total connections across all databases (like Datadog's aggregated view):

```promql
sum(pg_stat_database_numbackends)
```

### **AWS CloudWatch `DatabaseConnections`** → Prometheus

Maps to: **`pg_stat_database_numbackends`** (same as above)

- CloudWatch typically sums this across all databases
- Equivalent query:

```promql
sum(pg_stat_database_numbackends)
```

## Additional Useful Metrics from postgres_exporter

You'll also want these related metrics:

- **`pg_settings_max_connections`** - Maximum allowed connections (limit)
- **`pg_stat_activity_count`** - Connections by state (active, idle, idle in transaction)
  - Labels: `{state="active"}`, `{state="idle"}`, etc.

## Example Queries

**Connection utilization percentage:**

```promql
(sum(pg_stat_database_numbackends) / pg_settings_max_connections) * 100
```

**Active vs idle connections:**

```promql
sum by (state) (pg_stat_activity_count)
```

**Connections by database:**

```promql
pg_stat_database_numbackends{datname!=""}
```

## Key Differences to Note

1. **Granularity**: postgres_exporter gives you more detailed breakdowns (by state, by database) than CloudWatch
2. **Scrape interval**: Prometheus metrics depend on your scrape interval, while Datadog/CloudWatch have their own collection frequencies
3. **Template databases**: postgres_exporter includes template databases; you may want to filter those out: `{datname!~"template.*"}`

Would you like help setting up alerts or dashboards for these connection metrics in Prometheus?
With Schema
17 tool calls · 13 Schema calls
I'll help you understand how these connection metrics map across monitoring platforms. Excellent! I found the metric mapping information. Let me get the detailed cross-platform mapping guide.

Based on Schema.ai's cross-platform metric mapping data, here's how PostgreSQL connection metrics map across Datadog, AWS CloudWatch, and Prometheus:

## Connection Count Mapping

**Source Metric** → **Prometheus postgres_exporter Equivalent**

1. **Datadog: `postgresql.connections`** and **AWS CloudWatch: `DatabaseConnections`**
   - **Prometheus**: `pg_stat_database_numbackends`
   - This comes from PostgreSQL's `pg_stat_database` system view
   - Shows the number of backends (active connections) connected to each database
   - Query: `SELECT datname, numbackends FROM pg_stat_database;`
2. **Total Active Connections Across All Databases**
   - **Prometheus**: `sum(pg_stat_database_numbackends)`
   - Aggregates connections across all databases
   - Equivalent to what Datadog/CloudWatch report as total connections
3. **Maximum Connections Limit**
   - **Prometheus**: `pg_settings_max_connections`
   - Comes from `pg_settings` where `name = 'max_connections'`
   - Shows the configured connection limit

## Additional Useful postgres_exporter Metrics

- **`pg_stat_activity_count`** - Count of current connections by state (active, idle, idle in transaction)
- **`pg_up`** - PostgreSQL instance availability (1 = up, 0 = down)

## Example PromQL Queries

```promql
# Current connection count (similar to Datadog postgresql.connections)
sum(pg_stat_database_numbackends)

# Connection utilization percentage
(sum(pg_stat_database_numbackends) / pg_settings_max_connections) * 100

# Connections by database
pg_stat_database_numbackends{datname!=""}
```

## Key Differences to Note

- **Datadog** and **CloudWatch** typically aggregate across all databases by default
- **postgres_exporter** exposes per-database granularity via the `datname` label
- postgres_exporter also provides state-level breakdown via `pg_stat_activity` that Datadog/CloudWatch don't expose directly

The core mapping is: **`postgresql.connections` / `DatabaseConnections` → `pg_stat_database_numbackends`**

Agent Playbook

When an agent encounters this scenario, Schema provides these diagnostic steps automatically.

When migrating PostgreSQL monitoring between platforms, start by mapping the specific metrics you depend on most (connections, throughput, cache hits), then systematically verify that postgres_exporter collectors are enabled and capturing equivalent data. Run both platforms in parallel during migration to catch calculation differences and missing dimensions before you lose visibility.

1. Map your connection metrics to postgres_exporter equivalents
Datadog's `postgresql.connections` and CloudWatch's `DatabaseConnections` both map to `pg_stat_database_numbackends` in postgres_exporter (sourced from `pg_stat_database.numbackends`). You'll also want `pg_stat_activity_count`, whose `state` label gives the breakdown by connection state (active, idle, idle in transaction). These are the most commonly monitored metrics, so verify them first to ensure you maintain visibility into connection saturation.
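The mapping in step 1 can be sketched as a small lookup table. This is an illustrative helper, not part of any library: `CONNECTION_METRIC_MAP` and `prometheus_equivalent` are hypothetical names; only the metric names themselves come from the scenario above.

```python
# Hypothetical lookup table for the connection-metric mapping: each platform's
# name for the same underlying statistic (pg_stat_database.numbackends).
CONNECTION_METRIC_MAP = {
    "datadog": "postgresql.connections",
    "cloudwatch": "DatabaseConnections",
    "prometheus": "pg_stat_database_numbackends",
}

def prometheus_equivalent(platform: str, metric: str):
    """Return the postgres_exporter name when the given platform metric
    is the known connection metric for that platform, else None."""
    if CONNECTION_METRIC_MAP.get(platform) == metric:
        return CONNECTION_METRIC_MAP["prometheus"]
    return None

print(prometheus_equivalent("datadog", "postgresql.connections"))
# pg_stat_database_numbackends
```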
2. Build a comprehensive metric mapping table for your existing dashboards
Go through each Datadog dashboard and alert and map every metric to its Prometheus equivalent: `postgresql.blocks_read`/`postgresql.blocks_hit` → `pg_stat_database_blks_read`/`pg_stat_database_blks_hit` for I/O, `postgresql.commits` → `pg_stat_database_xact_commit` for transaction throughput, `postgresql.deadlocks` → `pg_stat_database_deadlocks` for lock issues, `postgresql.database.size` → `pg_database_size_bytes` for growth monitoring. CloudWatch uses different naming (`DatabaseConnections`, `ReadIOPS`, `WriteIOPS`); note that `ReadIOPS`/`WriteIOPS` are measured at the instance/disk level rather than from pg_stat views, so they correspond only approximately to counters like `pg_stat_database_blks_read`. Create a spreadsheet mapping old metric names to new ones before you start rebuilding dashboards.
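The spreadsheet from step 2 can be generated programmatically. A minimal sketch, assuming the standard postgres_exporter metric names; the `MAPPINGS` rows are illustrative, not an exhaustive inventory of your dashboards.

```python
import csv
import io

# Illustrative Datadog -> postgres_exporter name pairs for the mapping
# spreadsheet; extend with every metric your dashboards and alerts use.
MAPPINGS = [
    ("postgresql.connections", "pg_stat_database_numbackends"),
    ("postgresql.blocks_read", "pg_stat_database_blks_read"),
    ("postgresql.blocks_hit", "pg_stat_database_blks_hit"),
    ("postgresql.commits", "pg_stat_database_xact_commit"),
    ("postgresql.deadlocks", "pg_stat_database_deadlocks"),
    ("postgresql.database.size", "pg_database_size_bytes"),
]

# Write the mapping as CSV (to a string here; use a file in practice).
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["datadog_metric", "prometheus_metric"])
writer.writerows(MAPPINGS)
print(buf.getvalue())
```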
3. Verify postgres_exporter collectors aren't being silently skipped
Collectors can be disabled due to PostgreSQL version constraints or missing tags. For example, `wal_stats` requires PostgreSQL 14+, and collectors tagged `extension:pg_stat_statements` won't run if that extension isn't installed. Run `pg_exporter --explain` to see which collectors will execute on your version, and check `SELECT * FROM pg_extension;` to verify required extensions are installed. Missing collectors mean missing metrics that Datadog was providing, causing blind spots post-migration.
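One way to catch a silently skipped collector is to scrape the exporter's `/metrics` endpoint and check that the metric families you depend on are actually present. A sketch: `EXPECTED` is an illustrative list, and `SAMPLE` stands in for a real scrape body fetched over HTTP.

```python
# Illustrative scrape body; in practice, fetch http://exporter:9187/metrics.
SAMPLE = """\
# HELP pg_stat_database_numbackends Number of backends
# TYPE pg_stat_database_numbackends gauge
pg_stat_database_numbackends{datname="postgres"} 3
# TYPE pg_stat_database_xact_commit counter
pg_stat_database_xact_commit{datname="postgres"} 8121
"""

# Metric families your dashboards depend on (illustrative list).
EXPECTED = {
    "pg_stat_database_numbackends",
    "pg_stat_database_xact_commit",
    "pg_stat_statements_calls",  # absent if the extension isn't installed
}

def metric_families(exposition: str):
    """Extract the set of metric names from Prometheus exposition text."""
    names = set()
    for line in exposition.splitlines():
        if line and not line.startswith("#"):
            # Strip labels and value: 'name{labels} value' or 'name value'.
            names.add(line.split("{")[0].split(" ")[0])
    return names

missing = sorted(EXPECTED - metric_families(SAMPLE))
print(missing)  # ['pg_stat_statements_calls']
```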
4. Account for metric type and calculation differences between platforms
Datadog often auto-calculates rates (queries per second, commits per second), while postgres_exporter exposes raw counters that require `rate()` or `increase()` functions in PromQL. CloudWatch aggregates over 60-second periods by default, while Prometheus captures instant values at scrape time. Test your PromQL queries against the Datadog UI to ensure `rate(pg_stat_database_xact_commit[5m])` matches what Datadog reports as `postgresql.commits` throughput, or your alerts will fire at wrong thresholds.
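The counter-versus-rate difference in step 4 reduces to simple arithmetic: `rate()` approximates the per-second increase between counter samples, which is the per-second figure Datadog reports directly. The sample values below are made up for illustration.

```python
# Two raw counter samples of pg_stat_database_xact_commit, five minutes
# apart (timestamps and values are made-up illustration data).
samples = [
    (1700000000, 120_000),
    (1700000300, 150_000),
]

(t0, v0), (t1, v1) = samples
# This is what rate(pg_stat_database_xact_commit[5m]) approximates,
# assuming no counter reset within the window.
per_second = (v1 - v0) / (t1 - t0)
print(per_second)  # 100.0 commits/sec
```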
5. Validate label/dimension parity for metric slicing
Datadog uses tags like `db:production`, `table:users`, `state:active` to slice metrics. In Prometheus these become labels on the metric series. Verify that postgres_exporter exposes the dimensions you need: for example, `pg_stat_database_numbackends` should carry a `datname` label for per-database filtering, and `pg_stat_activity_count` should carry a `state` label. Missing labels mean you can't recreate your existing per-database or per-table dashboards and alerts.
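The label-parity check in step 5 can be automated once you know which labels each exported metric carries. A sketch under stated assumptions: `exporter_labels` is illustrative data you would collect from a real scrape, and `required` encodes the dimensions your dashboards slice on.

```python
# Labels observed on each postgres_exporter metric (illustrative data;
# in practice, derive this from a real /metrics scrape).
exporter_labels = {
    "pg_stat_database_numbackends": {"datname"},
    "pg_stat_activity_count": {"datname", "state"},
}

# Dimensions your existing Datadog dashboards slice on.
required = {
    "pg_stat_database_numbackends": {"datname"},
    "pg_stat_activity_count": {"state"},
}

# Any non-empty set here is a dimension you cannot recreate in Prometheus.
gaps = {
    metric: needed - exporter_labels.get(metric, set())
    for metric, needed in required.items()
}
print({m: sorted(g) for m, g in gaps.items() if g})  # {} when parity holds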
6. Run both platforms in parallel for at least one full monitoring cycle
Keep Datadog running while you bring up Prometheus, and compare side by side for at least 24-48 hours to catch daily/weekly patterns. Focus on validating that critical metrics like `buffer_hit` ratio, `replication_lag_seconds`, and transaction rates match between platforms. This parallel run catches edge cases like stale statistics causing different values, scrape interval mismatches, or missing configuration before you lose the safety net of your existing monitoring.
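During the parallel run in step 6, a tolerance check makes "the platforms match" concrete. A minimal sketch: `within_tolerance` is a hypothetical helper, the 5% threshold and the sample readings are made-up illustration values you should tune to your own metrics.

```python
def within_tolerance(datadog_value: float, prom_value: float,
                     rel_tol: float = 0.05) -> bool:
    """True if the two platform readings agree within rel_tol
    (5% by default), relative to the larger reading."""
    baseline = max(abs(datadog_value), abs(prom_value), 1e-9)
    return abs(datadog_value - prom_value) / baseline <= rel_tol

# Made-up connection counts sampled from both platforms at the same time.
print(within_tolerance(412.0, 405.0))  # True: ~1.7% apart, acceptable
print(within_tolerance(412.0, 300.0))  # False: investigate before cutover
```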

Monitoring Interfaces

PostgreSQL CloudWatch Metrics
PostgreSQL Datadog Integration
PostgreSQL OpenTelemetry Receiver
PostgreSQL Prometheus Exporter (pg_exporter)
PostgreSQL Dynatrace Extension
Redis Datadog
Redis Prometheus
Redis Native Metrics
Redis OpenTelemetry
PostgreSQL Native