When an agent encounters this scenario, Schema provides these diagnostic steps automatically.
N+1 query problems manifest as connection storms and slow responses: the application executes one query to fetch a list, then a separate query for each item's related data. Start by confirming the pattern through query frequency analysis, identify the specific offending queries, check whether you're exhausting connections, and then trace back to the ORM code to implement proper eager loading.
1. Confirm the N+1 pattern through query frequency analysis
Check `postgresql.queries.count` grouped by normalized query text to find queries executing hundreds or thousands of times per minute. Look for simple SELECT statements with nearly identical structure but different parameter values (e.g., `SELECT * FROM orders WHERE user_id = ?`). If you see one query fetching a parent list followed by many identical child queries, that's your N+1 smoking gun. The `n-plus-one-query-problem` insight describes exactly this pattern - one query for the list, then N queries for related data.
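The normalization-and-count approach can be sketched as a few lines of Python. This is a minimal illustration over a hypothetical query log, not a product feature: literal values are replaced with `?` so structurally identical statements group together, and the dominant normalized query exposes the N+1 shape.

```python
import re
from collections import Counter

def normalize(sql: str) -> str:
    """Replace literal values with '?' so structurally identical queries group together."""
    sql = re.sub(r"'[^']*'", "?", sql)  # string literals
    sql = re.sub(r"\b\d+\b", "?", sql)  # numeric literals
    return sql

# Hypothetical slice of a query log: one parent query, then N child queries.
log = ["SELECT id FROM users LIMIT 50"] + [
    f"SELECT * FROM orders WHERE user_id = {i}" for i in range(50)
]

counts = Counter(normalize(q) for q in log)
for query, n in counts.most_common():
    print(n, query)
# The single normalized child query dominates the count: a classic N+1 signature.
```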
2. Identify the specific offending queries and their total impact
Use `queries_duration_max` to see individual query duration, but multiply by `postgresql.queries.count` to calculate total time impact. A 5ms query running 10,000 times per minute consumes 50 seconds of database time per minute - far worse than a single 1-second query. Focus on queries with high frequency × duration, not just slow queries. These are your primary optimization targets.
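The frequency × duration arithmetic is easy to get wrong by eyeballing dashboards, so it helps to rank explicitly. A small sketch with illustrative numbers (matching the example above, not real metric exports):

```python
# Rank queries by total time consumed (frequency x duration), not raw duration.
queries = [
    {"sql": "SELECT * FROM orders WHERE user_id = ?", "count_per_min": 10_000, "max_ms": 5},
    {"sql": "SELECT /* complex report */ ...",        "count_per_min": 1,      "max_ms": 1_000},
]

for q in queries:
    q["total_ms_per_min"] = q["count_per_min"] * q["max_ms"]

# The 5ms query wins by far: 10,000 x 5ms = 50 seconds of DB time per minute.
worst = max(queries, key=lambda q: q["total_ms_per_min"])
print(worst["sql"], worst["total_ms_per_min"] / 1000, "seconds of DB time per minute")
```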
3. Check if N+1 queries are causing connection exhaustion
Monitor `postgresql.backends` during busy periods - if you're hitting 200+ connections, check `connections_by_process` to see which application processes are holding connections. The `n-plus-one-query-connection-exhaustion` insight documents cases where N+1 patterns during high traffic (like Black Friday) caused 50,000 simultaneous connections. If connections spike in correlation with traffic, your N+1 queries are creating connection pool pressure.
4. Enable ORM query logging to trace offenders to code
Turn on SQL query logging in your ORM (e.g., SQLAlchemy's `echo=True`, Django's `DEBUG=True` with SQL logging, or ActiveRecord's query log). Reproduce a slow API endpoint and watch the log output - you'll see the parent query followed by a loop of child queries. Match the query patterns you found in step 1 to specific ORM calls in your codebase. Look for loops that reference related objects without explicit eager loading.
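The log shape you're looking for can be reproduced with nothing but the standard library: `sqlite3.set_trace_callback` logs every executed statement, much like an ORM's echo mode. The schema and loop below are a hypothetical stand-in for what an ORM emits without eager loading:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER);
    INSERT INTO users  VALUES (1),(2),(3);
    INSERT INTO orders VALUES (1,1),(2,1),(3,2),(4,3);
""")

executed = []
conn.set_trace_callback(executed.append)  # log every statement, like ORM echo mode

# The N+1 shape an ORM produces without eager loading:
users = conn.execute("SELECT id FROM users").fetchall()
for (uid,) in users:
    conn.execute("SELECT id FROM orders WHERE user_id = ?", (uid,)).fetchall()

print(len(executed), "queries for", len(users), "users")  # 1 parent + N children
```

In a real log you'd see exactly this rhythm: one parent SELECT, then a run of near-identical child SELECTs, one per loop iteration.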
5. Look for correlated subqueries as a variant of N+1
Not all N+1 problems appear as application loops - sometimes they're hidden in SQL as correlated subqueries in SELECT lists. The `correlated-subqueries-anti-pattern` insight explains how subqueries that reference the outer query execute once per row. If you see subqueries in your slow queries, rewrite them as JOINs or LATERAL JOINs. Ensure proper indexes exist on join columns.
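The rewrite is mechanical once you see it. A minimal sketch with hypothetical tables, showing a correlated subquery in a SELECT list and its JOIN + GROUP BY equivalent producing identical results:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER);
    INSERT INTO users  VALUES (1,'ana'),(2,'bo');
    INSERT INTO orders VALUES (1,1),(2,1),(3,2);
""")

# Hidden N+1: the subquery executes once per row of users.
correlated = conn.execute("""
    SELECT u.name, (SELECT COUNT(*) FROM orders o WHERE o.user_id = u.id)
    FROM users u ORDER BY u.id
""").fetchall()

# Equivalent JOIN + GROUP BY: one pass over both tables.
joined = conn.execute("""
    SELECT u.name, COUNT(o.id)
    FROM users u LEFT JOIN orders o ON o.user_id = u.id
    GROUP BY u.id ORDER BY u.id
""").fetchall()

print(correlated == joined, correlated)  # same result, one query plan apart
```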
6. Implement eager loading with proper constraints
Fix identified N+1 patterns by using your ORM's eager loading: SQLAlchemy's `joinedload()`, Django's `select_related()` for foreign keys or `prefetch_related()` for many-to-many. Critically, limit the blast radius per the `n-plus-one-query-connection-exhaustion` insight: add pagination with explicit limits (50-100 rows), use `attributes` clauses to fetch only needed columns, and optimize `include` clauses to minimal nested data. Overly aggressive eager loading can itself cause connection exhaustion.
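Under the hood, eager loaders like `joinedload()` and `select_related()` emit roughly the SQL below: one JOIN instead of 1 + N queries, selecting only the needed columns and bounded by an explicit LIMIT. The tables and the limit of 50 are illustrative assumptions, not ORM output:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT, bio TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users  VALUES (1,'ana','...'),(2,'bo','...');
    INSERT INTO orders VALUES (1,1,9.5),(2,1,3.0),(3,2,7.25);
""")

# One query instead of 1 + N: join parent and child, select only the
# needed columns, and paginate with an explicit LIMIT to bound the blast radius.
rows = conn.execute("""
    SELECT u.id, u.name, o.id, o.total
    FROM users u LEFT JOIN orders o ON o.user_id = u.id
    ORDER BY u.id
    LIMIT 50
""").fetchall()

print(len(rows), "rows fetched in a single round trip")
```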
7. Verify the fix by measuring before and after metrics
After deploying eager loading fixes, compare `postgresql.queries.count` for the specific query patterns you identified - it should drop from thousands to tens or hundreds. Check that `postgresql.backends` stays well below connection pool limits during peak traffic. Measure API response times - eliminating N+1 should cut hundreds of milliseconds per request. If connections or query counts are still high, review your eager loading strategy for missing relationships or overfetching.
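A simple pass/fail check over the before/after metrics keeps the verification honest. The field names and thresholds here are illustrative assumptions (an order-of-magnitude drop in query count, 20% headroom below the pool limit), not product defaults:

```python
def fix_verified(before: dict, after: dict, pool_limit: int = 200) -> bool:
    """Return True if the eager-loading fix measurably landed."""
    count_drop = after["queries_per_min"] < before["queries_per_min"] / 10
    pool_headroom = after["peak_backends"] < pool_limit * 0.8
    return count_drop and pool_headroom

# Hypothetical readings for one fixed query pattern.
before = {"queries_per_min": 10_000, "peak_backends": 195}
after  = {"queries_per_min": 120,    "peak_backends": 60}
print(fix_verified(before, after))  # True
```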