spark_executor_memory_used
Disk space used by executor
Related Insights (6)
Spark executors fail with OOM errors when processing partitions significantly larger than 200-500MB, exhausting executor heap memory and causing cascading failures across the cluster.
Excessive JVM garbage collection time relative to task execution indicates memory leaks, inefficient data structures, or insufficient executor heap allocation, severely degrading Spark performance.
Indiscriminate caching of large datasets competes with execution memory, causing high cache eviction rates and forcing re-computation of expensive operations.
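A quick way to reason about this is to compare the cached footprint against the storage pool of Spark's unified memory model. The sketch below is an approximation, not a Spark API: the defaults mirror `spark.memory.fraction=0.6` and `spark.memory.storageFraction=0.5`, and it ignores the fact that storage can temporarily borrow from execution memory.

```python
def cache_fits(cached_bytes, executor_heap_bytes,
               memory_fraction=0.6, storage_fraction=0.5):
    """Rough check: does the cached data fit in the storage pool?

    Approximates Spark's unified memory model:
        storage pool ~= heap * spark.memory.fraction * spark.memory.storageFraction
    Caching beyond this pool pressures execution memory and drives evictions.
    """
    storage_pool = executor_heap_bytes * memory_fraction * storage_fraction
    return cached_bytes <= storage_pool

# An 8 GB executor has roughly a 2.4 GB storage pool under the defaults,
# so caching 3 GB per executor will force evictions and re-computation.
print(cache_fits(3 * 1024**3, 8 * 1024**3))  # → False
print(cache_fits(2 * 1024**3, 8 * 1024**3))  # → True
```

When the check fails, the usual remedies are caching a projected/filtered subset, using a serialized storage level such as `MEMORY_ONLY_SER`, or calling `unpersist()` on datasets that are no longer reused.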
Individual Spark executors run out of heap memory when processing oversized partitions (>500MB), causing task failures and cascading cluster degradation. Partition skew concentrates memory pressure on a few executors; when those fail, their tasks are rescheduled onto previously healthy executors, spreading the pressure across the cluster.
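The usual fix is to repartition so that no partition exceeds the safe range. A minimal sketch of the sizing arithmetic (the helper name and the 200 MB cap are illustrative, not a Spark API):

```python
import math

def target_partition_count(total_bytes, max_partition_bytes=200 * 1024**2):
    """Minimum partition count that keeps each partition under the cap,
    assuming data is evenly distributed after repartitioning."""
    return max(1, math.ceil(total_bytes / max_partition_bytes))

# A 100 GB dataset at a 200 MB-per-partition cap needs 512 partitions.
print(target_partition_count(100 * 1024**3))  # → 512
```

The resulting count would be passed to `df.repartition(n)` (or used to set `spark.sql.shuffle.partitions`); note that repartitioning fixes count but not skew, so heavily skewed keys may still need salting or adaptive query execution.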
Spark executor memory exhaustion during shuffle operations causes data to spill to disk, dramatically slowing down jobs. High spark_executor_diskused during shuffle-heavy stages indicates memory-to-disk spill, which can be 10-100x slower than in-memory processing.
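Using the 10-100x figure above, the impact of spill on a shuffle stage can be estimated as a weighted average of in-memory and on-disk processing cost. This is a back-of-the-envelope model, not a Spark metric; the function name and the penalty factor are illustrative:

```python
def estimated_slowdown(spilled_bytes, total_shuffle_bytes, disk_penalty=10.0):
    """Estimated stage slowdown factor when part of the shuffle spills to disk.

    Models in-memory processing as 1x cost and spilled data as
    `disk_penalty` x cost (the low end of the 10-100x range).
    """
    if total_shuffle_bytes == 0:
        return 1.0
    spill_frac = spilled_bytes / total_shuffle_bytes
    return (1 - spill_frac) * 1.0 + spill_frac * disk_penalty

# Spilling half the shuffle data at a 10x disk penalty makes the
# stage roughly 5.5x slower than a fully in-memory run.
print(estimated_slowdown(50, 100))  # → 5.5
```

Even a modest spill fraction dominates stage runtime, which is why rising spark_executor_diskused during shuffle-heavy stages is worth alerting on before executors start failing outright.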
Excessive JVM garbage collection time (spark_executor_jvmgctime) consumes CPU cycles and pauses task execution, causing tasks to take longer despite adequate cluster resources. GC time >10% of total execution time indicates memory management issues.
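The 10% threshold translates directly into an alerting rule: compare spark_executor_jvmgctime against total task time for the same window. A minimal sketch (function names and the millisecond units are assumptions; the real values come from your metrics pipeline):

```python
def gc_time_fraction(jvm_gc_time_ms, task_time_ms):
    """Fraction of task execution time spent in JVM garbage collection."""
    return jvm_gc_time_ms / task_time_ms if task_time_ms else 0.0

def gc_alert(jvm_gc_time_ms, task_time_ms, threshold=0.10):
    """True when GC consumes more than `threshold` of task time,
    indicating heap pressure or inefficient data structures."""
    return gc_time_fraction(jvm_gc_time_ms, task_time_ms) > threshold

# 1.5 s of GC over a 10 s task window is 15% GC time, above the 10% threshold.
print(gc_alert(1500, 10000))  # → True
```

When this fires, the usual levers are increasing executor heap, reducing object churn (e.g. preferring DataFrame operations over RDDs of boxed objects), or shrinking the cached footprint.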