Elasticsearch Metric

jvm.memory.heap.used

JVM heap used bytes
Dimensions: None
Available on: Dynatrace (1), Prometheus (1), OpenTelemetry (1), Datadog (1)

Summary

Tracks the current amount of JVM heap memory in use. Heap usage directly drives garbage collection frequency and therefore application performance: as the JVM Heap Pressure Cascading Failure insight below describes, sustained heap usage above 85% triggers frequent GC pauses that degrade query and indexing performance, and full heap exhaustion leads to OutOfMemoryErrors and node failures.
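A minimal sketch of how this metric can be derived from Elasticsearch's `GET _nodes/stats/jvm` API, which reports `heap_used_in_bytes` and `heap_max_in_bytes` per node. The payload below is a synthetic example (node name and values are illustrative); in practice you would fetch it from the cluster.

```python
# Compute per-node heap usage percent from an Elasticsearch
# `GET _nodes/stats/jvm` response (sample payload is synthetic,
# but the field names match the real node-stats API).
import json

sample = json.loads("""
{
  "nodes": {
    "abc123": {
      "name": "es-data-0",
      "jvm": {
        "mem": {
          "heap_used_in_bytes": 27487790694,
          "heap_max_in_bytes": 32212254720
        }
      }
    }
  }
}
""")

HEAP_WARN_PERCENT = 85  # threshold discussed in the summary above

def heap_usage(stats: dict) -> dict:
    """Return {node_name: heap_used_percent} from a node-stats payload."""
    out = {}
    for node in stats["nodes"].values():
        mem = node["jvm"]["mem"]
        pct = 100.0 * mem["heap_used_in_bytes"] / mem["heap_max_in_bytes"]
        out[node["name"]] = round(pct, 1)
    return out

for name, pct in heap_usage(sample).items():
    flag = "WARN" if pct > HEAP_WARN_PERCENT else "ok"
    print(f"{name}: {pct}% heap used [{flag}]")
```

Node stats also expose a precomputed `heap_used_percent`; the explicit division is shown here so the relationship between the two byte counters is visible.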

Interface Metrics (4)
Dynatrace
Heap used bytes
Dimensions: None
Prometheus
JVM memory currently used by area
Dimensions: None
OpenTelemetry
The current non-heap memory usage
Dimensions: None
Datadog
The amount of memory in bytes currently used by the JVM non-heap.
Dimensions: None

Technical Annotations (2)

Technical References (2)
model cache (component)
JDK 25.0.2+10 (component)
Related Insights (8)
JVM Garbage Collection Pauses Trigger Session Timeouts (warning)

Long GC pauses halt all ZooKeeper threads including heartbeat processing, causing client sessions to expire. Even well-tuned JVMs can experience occasional long pauses that exceed typical session timeout windows.

JVM Heap Pressure Cascading Failure (critical)

When JVM heap usage stays above 85% for extended periods, garbage collection pauses increase dramatically, leading to node unresponsiveness, cluster state propagation failures, and potential split-brain scenarios.
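The "stays above 85% for extended periods" condition above can be checked with a simple sliding window over heap samples. A sketch, assuming heap_used_percent is sampled at a fixed interval (the class name, threshold, and window length are illustrative, not from any specific alerting product):

```python
# Sustained heap-pressure check: fire only when EVERY sample in the
# window exceeds the threshold, so a single GC-induced dip resets it.
from collections import deque

class SustainedHeapAlert:
    def __init__(self, threshold: float = 85.0, window: int = 10):
        self.threshold = threshold
        self.samples = deque(maxlen=window)  # most recent samples only

    def observe(self, heap_used_percent: float) -> bool:
        """Record a sample; return True when the window is full and
        every sample in it is above the threshold."""
        self.samples.append(heap_used_percent)
        return (len(self.samples) == self.samples.maxlen
                and all(s > self.threshold for s in self.samples))

alert = SustainedHeapAlert(window=3)
readings = [80.0, 90.0, 91.0, 92.0]  # one dip, then sustained pressure
fired = [alert.observe(r) for r in readings]
print(fired)  # fires only once three consecutive high samples accumulate
```

Requiring the whole window to be high (rather than averaging) distinguishes genuine pressure from the normal sawtooth pattern heap usage shows between GC cycles.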

Field Data Cache Eviction Thrash (warning)

Field data (an un-inverted view of the index used for aggregations) loads into JVM heap on first access and persists for the lifetime of the segment. When the fielddata circuit-breaker limit or cache size is too small, frequent evictions cause repeated, expensive field data loading, spiking CPU usage and heap pressure.
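The relevant knobs live in `elasticsearch.yml` (or as cluster settings). A hedged sketch; the setting names are real Elasticsearch settings, but the values are illustrative and must be tuned to the workload:

```yaml
# Cap the field data cache so old entries are evicted before heap fills.
# Too small a cap relative to the working set causes the eviction thrash
# described above; the 30%/40% split here is only an example.
indices.fielddata.cache.size: 30%

# The fielddata circuit breaker should sit above the cache size so loads
# are rejected (breaker trip) rather than silently thrashing the cache.
indices.breaker.fielddata.limit: 40%
```

Keeping the breaker limit above the cache size means a single oversized load trips the breaker with a clear error instead of evicting the entire cache.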

JVM Heap Exhaustion Before Cassandra Node Failure (critical)

Cassandra nodes running on the JVM can experience heap exhaustion, where heap usage climbs to 80-90% and stays elevated even after GC, leading to OutOfMemoryError or node instability. This typically manifests as timeout errors at the application layer before the node actually crashes.

Circuit Breaker Trips Preventing OOM (warning)

Circuit breakers trip to prevent operations that would cause OutOfMemoryError by estimating memory requirements and rejecting requests that exceed configured limits. Frequent trips indicate memory pressure or oversized operations.
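Trip frequency can be tracked from Elasticsearch's `GET _nodes/stats/breaker` output, where each breaker exposes a cumulative `tripped` counter; what matters is its growth between polls. A sketch using synthetic payloads (the two-poll structure and helper names are illustrative, but `breakers.<name>.tripped` is the real field path):

```python
# Surface circuit-breaker trips from node stats: `tripped` is cumulative,
# so compare successive polls and report only counters that grew.

def tripped_counts(stats: dict) -> dict:
    """Return {(node_name, breaker_name): tripped} from a node-stats payload."""
    return {
        (node["name"], breaker): data["tripped"]
        for node in stats["nodes"].values()
        for breaker, data in node["breakers"].items()
    }

def new_trips(prev: dict, curr: dict) -> dict:
    """Breakers whose trip counter grew since the previous poll."""
    return {k: curr[k] - prev.get(k, 0)
            for k in curr if curr[k] > prev.get(k, 0)}

poll_1 = {"nodes": {"n1": {"name": "es-data-0", "breakers": {
    "fielddata": {"tripped": 2}, "request": {"tripped": 0}}}}}
poll_2 = {"nodes": {"n1": {"name": "es-data-0", "breakers": {
    "fielddata": {"tripped": 5}, "request": {"tripped": 0}}}}}

delta = new_trips(tripped_counts(poll_1), tripped_counts(poll_2))
print(delta)  # {('es-data-0', 'fielddata'): 3}
```

A steadily growing `tripped` count on the same breaker is the "frequent trips indicate memory pressure" signal described above.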

Node Role Imbalance Causing Hotspots (warning)

Improper distribution of shards or unbalanced node roles can cause resource hotspots where some nodes are overloaded while others are underutilized.

Machine learning model cache eviction prevents OOM during model loading (warning)
Bundled JDK updated to 25.0.2+10 requires compatibility validation (warning)