spark_executor_active_tasks
Number of tasks currently executing on the executor.
Available on:
Datadog (1)
Interface Metrics (1)
Dimensions: None
Related Insights (3)
Driver Bottleneck Under High Task Volume (critical)
When processing thousands of small files or high-cardinality shuffles, the Spark driver becomes CPU- and memory-saturated, causing executors to sit idle despite available capacity. Job duration extends 3-4x beyond the expected runtime.
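This condition can be detected by comparing driver CPU saturation against executor utilization derived from `spark_executor_active_tasks`. The function below is a minimal sketch; the metric inputs, function name, and thresholds are illustrative assumptions, not part of any specific monitoring API.

```python
def driver_bottleneck(driver_cpu_pct, active_tasks, total_executor_cores,
                      cpu_threshold=90.0, idle_threshold=0.5):
    """Flag a driver bottleneck: driver CPU is saturated while executors
    are mostly idle. Thresholds are hypothetical defaults."""
    executor_utilization = active_tasks / total_executor_cores
    return driver_cpu_pct >= cpu_threshold and executor_utilization <= idle_threshold

# Driver pegged at 97% CPU while only 12 of 64 cores run tasks -> bottleneck.
print(driver_bottleneck(97.0, active_tasks=12, total_executor_cores=64))  # True
```

A healthy job (driver at 40% CPU, 60 of 64 cores busy) would not trip this check.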
Cluster Underutilization Despite Job Failures (warning)
Jobs fail or run slowly while cluster CPU and memory utilization remain below 60%, indicating resource allocation mismatch, insufficient parallelism, or driver bottlenecks rather than capacity constraints.
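The failure-with-spare-capacity pattern above reduces to a simple predicate over job status and cluster utilization. A minimal sketch, assuming the 60% threshold from the insight description; the function name and inputs are hypothetical:

```python
def underutilized_failure(job_failed, cpu_util_pct, mem_util_pct, threshold=60.0):
    """Flag a job that failed (or was flagged slow) while both cluster CPU
    and memory utilization stayed below the threshold, suggesting a
    parallelism or allocation problem rather than a capacity limit."""
    return job_failed and cpu_util_pct < threshold and mem_util_pct < threshold

# Failed job with the cluster at 45% CPU / 38% memory -> utilization mismatch.
print(underutilized_failure(True, cpu_util_pct=45.0, mem_util_pct=38.0))  # True
```

When either resource is above the threshold, the failure is more plausibly capacity-related and this check stays quiet.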
Autoscaling Lag During Load Spikes (warning)
Databricks autoscaling takes 3-5 minutes to provision new VMs during demand spikes, causing queued tasks and degraded performance while pending tasks exceed available executor cores.
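The "pending tasks exceed available executor cores" condition can be quantified as a scheduling backlog: tasks that cannot start until autoscaling delivers new capacity. A sketch under the assumption that each core runs one task; the function name and inputs are illustrative:

```python
def scaling_backlog(pending_tasks, active_tasks, executor_cores):
    """Return the number of queued tasks that cannot be scheduled on the
    cores currently provisioned (one task per core assumed)."""
    free_cores = max(executor_cores - active_tasks, 0)
    return max(pending_tasks - free_cores, 0)

# 500 queued tasks, 60 of 64 cores busy -> 496 tasks wait for scale-up.
print(scaling_backlog(pending_tasks=500, active_tasks=60, executor_cores=64))  # 496
```

A sustained nonzero backlog during the 3-5 minute VM provisioning window is the signature this insight describes.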