Chroma

Vector Index Memory Exhaustion on Large Collections

critical
Resource Contention · Updated Mar 2, 2026

HNSW indices load entirely into memory for query performance. Collections with millions of high-dimensional vectors can exhaust available RAM, causing OOM errors, system instability, or swapping that degrades query performance by orders of magnitude. Memory requirements scale with collection size, dimensionality, and HNSW parameters (M, ef_construction).

How to detect:

- Memory usage approaches or exceeds available RAM.
- The OOM killer terminates the Chroma process.
- System swapping increases dramatically.
- Query latency degrades by more than 10x compared to baseline.
- The estimated index footprint, (num_vectors * dimensions * 4 bytes per float32) + (num_vectors * M * 2 * 8 bytes for graph pointers), approaches the RAM limit.
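The footprint formula above is easy to script as a back-of-envelope check. A minimal sketch (the function name and sample sizes are illustrative, not part of Chroma):

```python
def hnsw_memory_bytes(num_vectors: int, dimensions: int, m: int = 16) -> int:
    """Estimate HNSW index memory per the formula above.

    Vector data: num_vectors * dimensions * 4 bytes (float32).
    Graph links: num_vectors * M * 2 * 8 bytes (bidirectional pointers).
    """
    vectors = num_vectors * dimensions * 4
    graph = num_vectors * m * 2 * 8
    return vectors + graph

# Example: 5M vectors at 1536 dimensions with the default M=16
est = hnsw_memory_bytes(5_000_000, 1536)
print(f"{est / 2**30:.1f} GiB")  # ~29.8 GiB
```

If a number like this is close to the host's physical RAM, the collection is in the danger zone even before swapping or OOM kills appear.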

Recommended action:

1. Assess: Calculate expected memory usage: (collection_size * dimensions * 4) + (collection_size * M * 16). Add 20% overhead for metadata and OS buffers, and compare the result to available RAM.
2. Immediate: If memory is exhausted, restart Chroma on a larger instance, or reduce collection size by archiving or deleting old vectors.
3. Optimize HNSW: Reduce the M parameter (default 16) to lower memory usage, accepting a slight recall reduction. Test M=8 or M=12 for 30-40% memory savings.
4. Scale vertically: Provision an instance with enough RAM to accommodate expected collection growth (current size * 2-3x for headroom).
5. Scale horizontally: Shard collections across multiple Chroma instances, routing queries by collection ID or tenant.
6. Reduce dimensionality: Use embedding models with lower dimensions if accuracy permits (see the embedding-dimensionality-storage-overhead insight).
7. Monitor: Alert on memory usage above 80%, track the growth rate, and project time to exhaustion.
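Step 1 can be turned into a quick capacity check. A sketch under the section's own formula, with the 20% overhead applied (the function name and sample workload are illustrative):

```python
def assess_capacity(num_vectors: int, dimensions: int,
                    available_ram_gib: float, m: int = 16):
    """Expected memory (vectors + graph links) plus 20% overhead,
    compared against available RAM. Returns (GiB needed, fits?)."""
    base = num_vectors * dimensions * 4 + num_vectors * m * 16
    with_overhead = base * 1.2  # metadata and OS buffers
    gib = with_overhead / 2**30
    return gib, gib <= available_ram_gib

# Example: 10M vectors at 768 dimensions on a 64 GiB host
gib, fits = assess_capacity(10_000_000, 768, available_ram_gib=64)
print(f"need ~{gib:.1f} GiB, fits in 64 GiB: {fits}")
```

Run this against the projected collection size (step 4's 2-3x headroom), not just the current one, to decide between vertical and horizontal scaling.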
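For step 5, the routing layer can be as simple as a stable hash of the collection or tenant ID. A minimal sketch (the instance URLs are hypothetical, and md5 is used only because, unlike Python's salted `hash()`, it is stable across processes):

```python
import hashlib

# Hypothetical shard endpoints for illustration
INSTANCES = [
    "http://chroma-0:8000",
    "http://chroma-1:8000",
    "http://chroma-2:8000",
]

def route(collection_id: str) -> str:
    """Deterministically map a collection (or tenant) to one shard."""
    digest = hashlib.md5(collection_id.encode()).hexdigest()
    return INSTANCES[int(digest, 16) % len(INSTANCES)]

print(route("tenant-42"))  # the same tenant always hits the same instance
```

Modulo hashing reshuffles most keys when the instance count changes; if shards will be added over time, consistent hashing is the usual upgrade.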
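Step 7's "project time to exhaustion" is simple arithmetic against the 80% alert threshold. A sketch (function name and sample figures are illustrative):

```python
def days_to_exhaustion(current_gib: float, growth_gib_per_day: float,
                       ram_gib: float, alert_frac: float = 0.8) -> float:
    """Days until memory usage crosses the alert threshold
    (alert_frac * total RAM), assuming linear growth."""
    headroom = ram_gib * alert_frac - current_gib
    if growth_gib_per_day <= 0:
        return float("inf")  # not growing: no projected exhaustion
    return max(headroom, 0) / growth_gib_per_day

# Example: 40 GiB used on a 64 GiB host, growing 0.5 GiB/day
print(f"{days_to_exhaustion(40, 0.5, 64):.1f} days")  # 22.4 days
```

Feeding this projection into the same alerting pipeline as the 80% threshold turns a sudden OOM into a scheduled scaling task.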