
jvm.memory.heap.max

JVM heap max bytes
Dimensions: None
Available on: Datadog (2), OpenTelemetry (2), Prometheus (2), Dynatrace (1)

Summary

Indicates the maximum heap memory configured for the JVM (-Xmx setting). This is a static configuration value that represents the upper bound of heap memory available to Elasticsearch. Comparing current heap usage against this maximum reveals how close the cluster is to heap exhaustion, which can trigger aggressive garbage collection or out-of-memory errors.
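Since the value is static, its main use is as the denominator when computing heap utilization. A minimal sketch of that calculation, using the `heap_used_in_bytes` and `heap_max_in_bytes` field names from the Elasticsearch `_nodes/stats` JVM section (the sample payload and numbers below are illustrative, not taken from a real cluster):

```python
# Illustrative sample of the "jvm.mem" section of an Elasticsearch
# _nodes/stats response; field names match the API, numbers are made up.
sample_jvm_mem = {
    "heap_used_in_bytes": 6_442_450_944,  # ~6 GiB currently used
    "heap_max_in_bytes": 8_589_934_592,   # 8 GiB, i.e. -Xmx8g
}

def heap_used_percent(jvm_mem: dict) -> float:
    """Return current heap usage as a percentage of the configured maximum."""
    return 100.0 * jvm_mem["heap_used_in_bytes"] / jvm_mem["heap_max_in_bytes"]

print(round(heap_used_percent(sample_jvm_mem), 1))  # 75.0
```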

Interface Metrics (7)

Datadog
The maximum amount of memory that can be used by the Young Generation heap region.
Dimensions: None

OpenTelemetry
The maximum amount of memory that can be used for the heap.
Dimensions: None

OpenTelemetry
The maximum amount of memory that can be used for the memory pool.
Dimensions: None

Prometheus
JVM memory peak max by pool
Dimensions: None

Datadog
The maximum amount of memory that can be used by the Survivor Space.
Dimensions: None

Dynatrace
Heap max bytes
Dimensions: None

Prometheus
JVM memory max
Dimensions: None
Related Insights (2)
JVM Heap Pressure Cascading Failure (critical)

When JVM heap usage stays above 85% for extended periods, garbage collection pauses increase dramatically, leading to node unresponsiveness, cluster state propagation failures, and potential split-brain scenarios.
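Because the failure mode depends on *sustained* pressure rather than a single spike, a detector should require several consecutive samples above the threshold. A minimal sketch, assuming a fixed sampling interval and a window of five samples (both values are assumptions, not part of the insight above):

```python
HEAP_PRESSURE_THRESHOLD = 85.0  # percent, from the insight above
WINDOW = 5                      # consecutive samples; assumed, tune to your interval

def is_under_sustained_pressure(samples: list[float]) -> bool:
    """True when the last WINDOW heap-usage samples all exceed the threshold."""
    recent = samples[-WINDOW:]
    return len(recent) == WINDOW and all(s > HEAP_PRESSURE_THRESHOLD for s in recent)

# A single dip below 85% resets the condition:
print(is_under_sustained_pressure([70, 90, 80, 90, 88, 90]))  # False
print(is_under_sustained_pressure([70, 86, 91, 88, 87, 90]))  # True
```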

Excessive Shard Count Degrading Performance (critical)

Too many shards consume cluster resources even when idle, causing slow queries, increased overhead, and reduced stability. Rule of thumb: keep fewer than 20 shards per GB of configured heap.
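The rule of thumb above turns the static heap maximum into a shard budget. A quick sketch of the arithmetic (the function name and the 20-shards-per-GB default are taken from the rule above; nothing here queries a real cluster):

```python
def max_recommended_shards(heap_gb: float, shards_per_gb: int = 20) -> int:
    """Apply the rule of thumb: fewer than ~20 shards per GB of configured heap."""
    return int(heap_gb * shards_per_gb)

# A node with a 30 GB heap (-Xmx30g) should hold at most ~600 shards.
print(max_recommended_shards(30))  # 600
```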