Apache Spark Metric

spark_executor_active_tasks

Number of active tasks in the application's executors.
Dimensions: None
Available on: Datadog (1)
Interface Metrics (1)
Datadog
Number of active tasks in the application's executors
Dimensions: None
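In Datadog, this metric is reported by the Spark integration as `spark.executor.active_tasks`. A monitor on it can catch executors sitting idle mid-job. The query below is an illustrative sketch: the `cluster_name` tag and the 10-minute window and threshold are assumptions to adapt to your environment.

```
avg(last_10m):avg:spark.executor.active_tasks{cluster_name:my-cluster} < 1
```

Pair a "near zero while a job is running" alert like this with a driver CPU monitor to distinguish idle clusters from driver-bound ones.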
Related Insights (3)
Driver Bottleneck Under High Task Volume (critical)

When processing thousands of small files or high-cardinality shuffles, the Spark driver becomes CPU- and memory-saturated, leaving executors idle despite available capacity. Job duration can extend 3-4x beyond the expected runtime.
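The signature of this insight is low active-task counts relative to total executor cores while a job is running. A minimal sketch of that heuristic, assuming you have recent samples of this metric and know the cluster's executor core count (function names and the 25% threshold are illustrative, not part of any Spark or Datadog API):

```python
def executor_utilization(active_tasks_samples, total_executor_cores):
    """Fraction of executor cores occupied, averaged over the samples."""
    if not active_tasks_samples or total_executor_cores <= 0:
        raise ValueError("need at least one sample and a positive core count")
    avg_active = sum(active_tasks_samples) / len(active_tasks_samples)
    return avg_active / total_executor_cores

def looks_driver_bound(active_tasks_samples, total_executor_cores,
                       idle_threshold=0.25):
    """Heuristic: executors mostly idle while a job is in flight suggests
    the driver (task scheduling, file listing) is the bottleneck."""
    util = executor_utilization(active_tasks_samples, total_executor_cores)
    return util < idle_threshold
```

For example, a 64-core cluster averaging only a handful of active tasks would be flagged, while one averaging 50+ would not.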

Cluster Underutilization Despite Job Failures (warning)

Jobs fail or run slowly while cluster CPU and memory utilization remain below 60%, indicating resource allocation mismatch, insufficient parallelism, or driver bottlenecks rather than capacity constraints.
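The 60% threshold above lends itself to a simple triage rule: a failure with low CPU and memory utilization points at parallelism, allocation, or the driver rather than raw capacity. A hedged sketch of that classification (the labels and thresholds are illustrative, not a Spark or Databricks API):

```python
def diagnose_failed_job(cpu_util, mem_util, util_threshold=0.60):
    """Classify a failed/slow job by cluster utilization at the time.

    cpu_util and mem_util are fractions in [0, 1]. Below the threshold
    on both, capacity was not the constraint, so look at parallelism,
    resource allocation, or driver bottlenecks instead.
    """
    if cpu_util < util_threshold and mem_util < util_threshold:
        return "suspect-parallelism-or-driver"
    return "suspect-capacity"
```

This is the kind of first-pass routing an alert handler might do before a human digs into stage-level metrics.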

Autoscaling Lag During Load Spikes (warning)

Databricks autoscaling takes 3-5 minutes to provision new VMs during demand spikes, causing queued tasks and degraded performance while pending tasks exceed available executor cores.
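The scale-up lag shows as a sustained stretch where queued tasks exceed the executor cores currently available. A minimal sliding-streak detector for that signature, assuming paired per-interval samples of pending tasks and available cores (the function name and three-sample default are assumptions):

```python
def sustained_backlog(pending_tasks, available_cores, min_samples=3):
    """True if pending tasks exceeded available executor cores for at
    least min_samples consecutive metric intervals, the signature of
    demand outrunning autoscaling during a load spike."""
    streak = 0
    for pending, cores in zip(pending_tasks, available_cores):
        streak = streak + 1 if pending > cores else 0
        if streak >= min_samples:
            return True
    return False
```

With 1-minute metric intervals, three consecutive samples roughly matches the low end of the 3-5 minute provisioning window described above.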