Snowflake / DuckDB

Majority of queries scan under 100 GB, indicating an engine mismatch

Updated Mar 24, 2026
How to detect:

When 80%+ of read queries scan under 100 GB (especially on Small or larger warehouses), this indicates a workload placement problem rather than a pure optimization problem. Public analyses of warehouse usage show that even 99.9th-percentile queries scan only ~300 GB, and ~99% of queries scan under 100 GB. These filtered aggregations, small joins, and reporting rollups don't need distributed warehouse compute, yet they get billed at Snowflake's credit rates.

Recommended action:

Run the provided audit query to measure what percentage of queries scan under 100 GB. If most of the read workload concentrates in the smaller buckets and queries return in seconds, consider multi-engine routing to cheaper execution paths (e.g., DuckDB) for read-heavy BI and analytics, while keeping large scans on Snowflake.
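A minimal sketch of what such an audit query could look like, assuming access to the `SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY` view; the 30-day window and bucket thresholds are illustrative choices, not part of the original recommendation:

```sql
-- Bucket recent read queries by bytes scanned to see where the
-- workload concentrates. Requires ACCOUNT_USAGE access.
SELECT
  CASE
    WHEN bytes_scanned < 1e9   THEN 'under 1 GB'
    WHEN bytes_scanned < 10e9  THEN '1-10 GB'
    WHEN bytes_scanned < 100e9 THEN '10-100 GB'
    ELSE 'over 100 GB'
  END AS scan_bucket,
  COUNT(*) AS query_count,
  -- Share of all read queries that fall in this bucket
  ROUND(100 * COUNT(*) / SUM(COUNT(*)) OVER (), 1) AS pct_of_queries
FROM snowflake.account_usage.query_history
WHERE query_type = 'SELECT'
  AND start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
  AND bytes_scanned > 0
GROUP BY scan_bucket
ORDER BY MIN(bytes_scanned);
```

If the `under 1 GB` and `1-10 GB` buckets together account for most of the read queries, that is the signal described above: the workload is a candidate for routing to a lighter engine rather than for further tuning on Snowflake.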