Excessive Shard Count Degrading Performance
critical
Too many shards consume cluster resources even when idle, causing slow queries, increased overhead, and reduced stability. Rule of thumb: keep fewer than 20 shards per GB of configured heap.
Total elasticsearch.cluster.shards count divided by total jvm.memory.heap.max across all data nodes (in GB) exceeds 20; or elasticsearch.node.shards.size is high, indicating oversized shards; combined with elevated resource utilization during periods of inactivity.
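The threshold arithmetic can be sketched as follows. This is a minimal illustration of the 20-shards-per-GB-of-heap rule, not a live query of the metrics above; the heap sizes and shard counts are hypothetical values:

```python
# Sketch of the shard-to-heap rule of thumb. Inputs are hypothetical,
# not pulled from elasticsearch.cluster.shards / jvm.memory.heap.max.

def shard_budget(heap_gb_per_data_node, shards_per_gb=20):
    """Maximum recommended cluster-wide shard count: 20 shards per GB of heap."""
    return int(sum(heap_gb_per_data_node) * shards_per_gb)

def is_oversharded(total_shards, heap_gb_per_data_node):
    """True when the cluster exceeds the rule-of-thumb budget."""
    return total_shards > shard_budget(heap_gb_per_data_node)

# Example: three data nodes with 30 GB heap each allow up to 1800 shards.
print(shard_budget([30, 30, 30]))            # 1800
print(is_oversharded(2500, [30, 30, 30]))    # True
```

A cluster with 2500 shards on that topology would trip the condition even if most of those shards are idle, which is exactly the failure mode this rule targets.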
- Reduce shard count via the shrink API for over-sharded indices.
- Implement a hot/warm architecture with rollover for time-based indices; use ILM policies (elasticsearch.ilm) to automate this.
- Freeze inactive indices using the _freeze API to reduce overhead.
- Perform capacity planning: target 10-50 GB per shard and a 1 primary : 1 replica model unless specific availability requirements dictate otherwise.
- Use index templates to enforce an appropriate shard count for new indices.
- For existing clusters, identify problem indices via elasticsearch.index.shards.size and consolidate them.
- Monitor elasticsearch.node.shards.reserved.size for system shard overhead.
- If necessary, add nodes and rebalance to maintain the shard-to-heap ratio.
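As one example of the shrink step above, consolidating an over-sharded index might look like this in Kibana Dev Tools syntax. The index and node names are placeholders, and the exact settings accepted vary by Elasticsearch version:

```
# Prerequisites for shrink: block writes and place a copy of every
# shard on a single node (placeholder node name "warm-node-1").
PUT /logs-000001/_settings
{
  "index.blocks.write": true,
  "index.routing.allocation.require._name": "warm-node-1"
}

# Shrink to 1 primary; the target count must be a factor of the source
# shard count. Clearing the allocation filter and write block on the
# target restores normal operation.
POST /logs-000001/_shrink/logs-000001-shrunk
{
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 1,
    "index.routing.allocation.require._name": null,
    "index.blocks.write": null
  }
}
```

After verifying the shrunken index, the original can be deleted to reclaim its shards.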
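For the ILM and index-template steps, a policy plus a matching template can automate rollover and consolidation for new time-based indices. The policy name, index pattern, alias, and thresholds below are illustrative assumptions, not values from this rule:

```
# Hypothetical ILM policy: roll over at ~50 GB per primary shard,
# shrink in the warm phase, delete after 90 days.
PUT _ilm/policy/logs-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_primary_shard_size": "50gb", "max_age": "30d" }
        }
      },
      "warm": {
        "min_age": "7d",
        "actions": { "shrink": { "number_of_shards": 1 } }
      },
      "delete": {
        "min_age": "90d",
        "actions": { "delete": {} }
      }
    }
  }
}

# Hypothetical index template enforcing a sane default shard count
# (1 primary : 1 replica) for new matching indices.
PUT _index_template/logs-template
{
  "index_patterns": ["logs-*"],
  "template": {
    "settings": {
      "index.number_of_shards": 1,
      "index.number_of_replicas": 1,
      "index.lifecycle.name": "logs-policy",
      "index.lifecycle.rollover_alias": "logs"
    }
  }
}
```

With this in place, each generation of the index stays inside the 10-50 GB per-shard target and old shards are retired automatically instead of accumulating.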