Elasticsearch Metric

jvm.gc.collections.elapsed

GC elapsed time
Dimensions: None
Available on: Datadog (7), OpenTelemetry (1)

Summary

Tracks the cumulative time spent in garbage collection across all GC cycles since JVM startup. Rising values indicate increasing GC overhead, which can signal heap pressure or inefficient memory allocation patterns. Sustained high GC duration (especially when combined with frequent collections) typically means the heap is undersized or the workload is creating excessive object churn, leading to degraded query and indexing performance.
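This cumulative counter corresponds to the JVM's per-collector accumulated collection time. As a minimal, hedged sketch (using the standard `java.lang.management` API, not anything Elasticsearch- or Datadog-specific), the same figure can be read in-process:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcElapsed {
    public static void main(String[] args) {
        long totalMs = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            // getCollectionTime() returns the approximate accumulated collection
            // elapsed time in milliseconds, or -1 if undefined for this collector.
            long t = gc.getCollectionTime();
            System.out.printf("%s: %d collections, %d ms%n",
                    gc.getName(), gc.getCollectionCount(), Math.max(t, 0));
            if (t > 0) totalMs += t;
        }
        System.out.println("total GC elapsed ms: " + totalMs);
    }
}
```

Because the value is cumulative since JVM startup, dashboards typically graph its rate of change rather than the raw counter.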

Interface Metrics (8)
Datadog
The total time spent on garbage collection in the JVM [v<0.9.10].
Dimensions: None
OpenTelemetry
The approximate accumulated collection elapsed time.
Dimensions: None
Datadog
The total time spent in major GCs in the JVM that collect old generation objects.
Dimensions: None
Datadog
The total time (per second) spent in major GCs in the JVM that collect old generation objects.
Dimensions: None
Datadog
The total time spent in minor GCs in the JVM that collect young generation objects [v0.9.10+].
Dimensions: None
Datadog
The total time (per second) spent in minor GCs in the JVM that collect young generation objects [v0.9.10+].
Dimensions: None
Datadog
The total time spent on "concurrent mark & sweep" GCs in the JVM [v<0.9.10].
Dimensions: None
Datadog
The total time spent on "parallel new" GCs in the JVM [v<0.9.10].
Dimensions: None
Related Insights (2)
JVM Heap Pressure Cascading Failure (critical)

When JVM heap usage stays above 85% for extended periods, garbage collection pauses increase dramatically, leading to node unresponsiveness, cluster state propagation failures, and potential split-brain scenarios.
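A common way to act on this insight is to turn the cumulative elapsed-time counter into a GC-overhead percentage between two samples. A small illustrative sketch (the sample values and the function name are hypothetical, for demonstration only):

```java
public class GcOverhead {
    // Estimate the percentage of wall-clock time spent in GC between two
    // samples of the cumulative GC elapsed-time counter (all in milliseconds).
    static double gcOverheadPercent(long elapsedBefore, long elapsedAfter, long wallMs) {
        return 100.0 * (elapsedAfter - elapsedBefore) / wallMs;
    }

    public static void main(String[] args) {
        // Hypothetical samples: counter rose from 12,000 ms to 12,900 ms
        // over a 60-second observation window.
        System.out.println(gcOverheadPercent(12_000, 12_900, 60_000)); // 1.5
    }
}
```

Sustained overhead of more than a few percent, together with heap usage above 85%, is the pattern this insight warns about.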

JVM Heap Exhaustion Before Cassandra Node Failure (critical)

Cassandra nodes running on the JVM can experience heap exhaustion where heap usage climbs to 80-90% and stays elevated without dropping after GC, leading to OutOfMemoryError or node instability. This manifests as timeout errors at the application layer before the node crashes.
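The signature described here, heap usage that stays elevated even after GC runs, can be checked directly on any JVM via the standard `MemoryPoolMXBean.getCollectionUsage()` API, which reports usage measured just after the last collection. A minimal sketch, not specific to Cassandra:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class PostGcHeapCheck {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // getCollectionUsage() is the usage right after the last GC of this
            // pool; it is null if the pool does not support collection usage.
            MemoryUsage afterGc = pool.getCollectionUsage();
            if (afterGc != null && afterGc.getMax() > 0) {
                double pct = 100.0 * afterGc.getUsed() / afterGc.getMax();
                System.out.printf("%s: %.1f%% used after last GC%n",
                        pool.getName(), pct);
            }
        }
    }
}
```

Old-generation pools that remain at 80-90% usage after collection are the concrete indicator that GC can no longer reclaim memory.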