Apache Spark Metric

spark_executor_memory_used

Amount of memory used for cached RDDs in the application's executors
Dimensions: None
Available on: Datadog (1)
Interface Metrics (1)
Datadog
Amount of memory used for cached RDDs in the application's executors
Dimensions: None
Related Insights (6)
Executor Memory Pressure from Oversized Partitions (critical)

Spark executors fail with OOM errors when processing partitions significantly larger than the recommended 200-500 MB range, exhausting executor heap memory and causing cascading failures across the cluster.
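A minimal sketch of how to size partitions against that range: the helper below (hypothetical, not a Spark API) picks a partition count that keeps each partition near the midpoint of the safe range, and the result can then be fed to `DataFrame.repartition(n)`.

```python
def target_partition_count(dataset_bytes: int, target_mb: int = 256) -> int:
    """Choose a partition count so each partition lands near target_mb,
    the midpoint of the 200-500 MB range above.
    Hypothetical helper, not part of any Spark API."""
    target_bytes = target_mb * 1024 * 1024
    return max(1, -(-dataset_bytes // target_bytes))  # ceiling division

# A 10 GiB dataset at a 256 MiB target yields 40 partitions,
# which could then be applied with df.repartition(40).
```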

JVM Garbage Collection Thrashing (critical)

Excessive JVM garbage collection time relative to task execution indicates memory leaks, inefficient data structures, or insufficient executor heap allocation, severely degrading Spark performance.
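A common first response is to enlarge the executor heap and switch to G1GC. The configuration keys below are real Spark properties; the values are illustrative starting points (assumptions, not recommendations for any specific workload).

```python
# Illustrative GC-tuning starting point; values are assumptions and
# should be tuned against the actual workload and heap behavior.
gc_tuning_conf = {
    # more heap so the old generation is not permanently near-full
    "spark.executor.memory": "8g",
    # off-heap headroom for the JVM itself
    "spark.executor.memoryOverhead": "1g",
    # G1GC generally copes better than the default collector on large heaps
    "spark.executor.extraJavaOptions": (
        "-XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=35"
    ),
}

# These would typically be passed via spark-submit --conf key=value
# or SparkSession.builder.config(key, value).
```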

Inefficient Caching Causing Memory Eviction (warning)

Indiscriminate caching of large datasets competes with execution memory, causing high cache eviction rates and forcing re-computation of expensive operations.
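Under Spark's unified memory model, only part of the executor heap is guaranteed for caching. The sketch below approximates that budget using the documented defaults for `spark.memory.fraction` (0.6) and `spark.memory.storageFraction` (0.5); comparing it to a dataset's size before calling `persist()` shows whether caching would immediately trigger eviction.

```python
RESERVED_MB = 300  # Spark reserves ~300 MB of heap for internal use

def storage_memory_mb(executor_heap_mb: float,
                      memory_fraction: float = 0.6,
                      storage_fraction: float = 0.5) -> float:
    """Approximate the per-executor storage (cache) memory guaranteed
    by the unified memory model, using Spark's default fractions."""
    usable = max(0.0, executor_heap_mb - RESERVED_MB)
    return usable * memory_fraction * storage_fraction

# A 4 GiB heap guarantees roughly 1.1 GiB for cached blocks; storage
# can borrow more from execution memory, but anything borrowed is
# evicted first when execution needs it back.
```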

Executor Out of Memory from Oversized Partitions (critical)

Individual Spark executors run out of heap memory when processing oversized partitions (>500MB), causing task failures and cascading cluster degradation. Partition size imbalance creates memory pressure that ripples across healthy executors.
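Spark 3's adaptive query execution can split skewed partitions at runtime instead of letting one oversized partition take an executor down. The keys below are real Spark SQL properties; the advisory size is an illustrative value.

```python
# AQE settings that let Spark split oversized/skewed partitions at
# join time. Keys are real Spark 3 properties; the advisory size
# value is illustrative.
skew_mitigation_conf = {
    "spark.sql.adaptive.enabled": "true",
    "spark.sql.adaptive.skewJoin.enabled": "true",
    # the size AQE aims for when coalescing or splitting partitions
    "spark.sql.adaptive.advisoryPartitionSizeInBytes": "128m",
}
```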

Excessive Disk I/O from Shuffle Spill (warning)

Spark executor memory exhaustion during shuffle operations causes data to spill to disk, dramatically slowing down jobs. High spark_executor_diskused during shuffle-heavy stages indicates memory-to-disk spill, which can be 10-100x slower than in-memory processing.
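A minimal detection sketch, assuming you can pull `spark_executor_diskused` samples for a shuffle stage: flag the stage as spilling when disk usage climbs well above its pre-stage baseline. The function name and threshold factor are assumptions for illustration.

```python
def stage_spilled(disk_used_bytes_samples, baseline_bytes, factor=1.5):
    """Return True if spark_executor_diskused rose past `factor` times
    its pre-stage baseline during the stage, a sign of memory-to-disk
    shuffle spill. Hypothetical helper; the threshold is an assumption."""
    return any(sample > baseline_bytes * factor
               for sample in disk_used_bytes_samples)
```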

GC Pressure Degrading Task Performance (warning)

Excessive JVM garbage collection time (spark_executor_jvmgctime) consumes CPU cycles and pauses task execution, causing tasks to take longer despite adequate cluster resources. GC time >10% of total execution time indicates memory management issues.
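The 10% rule can be checked directly from the two measurements. A small sketch (function names assumed for illustration):

```python
def gc_time_ratio(jvm_gc_time_ms: float, total_task_time_ms: float) -> float:
    """Fraction of task time spent in garbage collection, computed
    from spark_executor_jvmgctime and total task duration."""
    if total_task_time_ms <= 0:
        return 0.0
    return jvm_gc_time_ms / total_task_time_ms

def gc_pressure(jvm_gc_time_ms, total_task_time_ms, threshold=0.10):
    """Flag executors past the 10% GC-time threshold described above."""
    return gc_time_ratio(jvm_gc_time_ms, total_task_time_ms) > threshold

# 1.5 s of GC over 10 s of task time is a 15% ratio, past the threshold.
```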