Celery metric

celery.queue.length

Pending tasks in queue
Dimensions: None
Available on: OpenTelemetry (1), Native (1), Prometheus (1), Datadog (1)
Interface Metrics (4)
OpenTelemetry
Number of messages waiting in the queue
Dimensions: None
Native
Number of messages waiting in queue
Dimensions: None
Prometheus
Number of messages in queue
Dimensions: None
Datadog
Number of tasks waiting in the queue
Dimensions: None
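All four interfaces report the same underlying quantity: pending, not-yet-acknowledged messages. With the Redis broker (an assumption; the queue name "celery" is the Celery default), each queue is a plain Redis list, so the metric can be read directly with LLEN. A minimal sketch:

```python
# Sketch: reading celery.queue.length from a Redis broker.
# Assumes the default Redis transport, where each Celery queue is a
# Redis list keyed by the queue name; LLEN returns the number of
# pending (delivered-but-unacked messages are NOT included here,
# they live in the transport's unacked structures).

def queue_length(redis_client, queue_name: str = "celery") -> int:
    """Return the number of messages waiting in the given queue."""
    return redis_client.llen(queue_name)
```

With redis-py this would be called as `queue_length(redis.Redis())`; any object exposing `llen` works, which keeps the helper easy to test.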

Technical Annotations (42)

Configuration Parameters (8)
retry_jitter (recommended: True)
Prevents synchronized retry attempts
max_retries (recommended: 3-5)
Prevents queue flooding and resource drain
CELERYD_CONCURRENCY (recommended: 4)
Controls the number of concurrent worker processes
worker_prefetch_multiplier (recommended: 4)
Balances throughput and latency; affects queue drainage rate
worker_max_tasks_per_child (recommended: 1000)
Prevents memory leaks that can slow workers
broker_connection_retry (recommended: False)
Workaround to make the worker crash on broker loss; requires a restart policy
CELERYD_PREFETCH_MULTIPLIER (recommended: 1)
Limits concurrent unacked tasks per worker to prevent multiple stuck restoration attempts
CELERY_TASK_RESULT_EXPIRES (recommended: 600)
Task result expiration set to 10 minutes in the affected configuration
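The eight parameters above mix modern lowercase setting names with their old-style uppercase aliases (CELERYD_CONCURRENCY corresponds to worker_concurrency, CELERY_TASK_RESULT_EXPIRES to result_expires). A minimal celeryconfig.py sketch combining them might look like the following; note the annotations recommend both 4 and 1 for the prefetch multiplier, so choosing 1 here is a judgment call, not a rule from the source:

```python
# celeryconfig.py -- illustrative sketch of the recommended values above,
# written with the modern lowercase setting names (Celery 4+).

worker_concurrency = 4            # CELERYD_CONCURRENCY: 4 worker processes
worker_max_tasks_per_child = 1000  # recycle workers to contain memory leaks
result_expires = 600              # CELERY_TASK_RESULT_EXPIRES: 10 minutes

# The annotations recommend both 4 (throughput) and 1 (fewer stuck unacked
# tasks per worker); 1 is the safer choice for long-running tasks.
worker_prefetch_multiplier = 1

# Workaround: fail fast on broker loss instead of silently retrying
# forever; only safe when a supervisor restarts the worker.
broker_connection_retry = False

# retry_jitter and max_retries are per-task options rather than global
# settings, e.g.:
#   @app.task(autoretry_for=(ConnectionError,), retry_backoff=True,
#             retry_jitter=True, max_retries=5)
```

Prefetch of 1 trades some throughput for predictability: a worker that crashes mid-task strands at most one unacked message per process.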
Error Signatures (5)
Connection to broker lost. Trying to re-establish the connection (log pattern)
ConnectionResetError: [Errno 104] Connection reset by peer (exception)
redis.exceptions.ConnectionError: Error while reading from (exception)
missed heartbeat from celery@ (log pattern)
Task requeue attempts exceeded max; marking failed (log pattern)
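These five signatures are substring patterns, which makes them easy to scan for in worker logs. A small sketch of such a matcher (the function name is illustrative, not part of any Celery API):

```python
import re

# The pattern strings below are taken verbatim from the signatures above;
# matching them as escaped substrings keeps the scan robust to the
# timestamps, hostnames, and other detail surrounding them in real logs.
SIGNATURES = [
    "Connection to broker lost. Trying to re-establish the connection",
    "ConnectionResetError: [Errno 104] Connection reset by peer",
    "redis.exceptions.ConnectionError: Error while reading from",
    "missed heartbeat from celery@",
    "Task requeue attempts exceeded max; marking failed",
]
_PATTERN = re.compile("|".join(re.escape(s) for s in SIGNATURES))

def matches_signature(line: str) -> bool:
    """True if a log line contains one of the known error signatures."""
    return _PATTERN.search(line) is not None
```

Feeding each worker log line through `matches_signature` is enough to drive an alert when the broker connection starts flapping.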
CLI Commands (2)
celery inspect ping (diagnostic)
celery inspect active_queues (diagnostic)
Technical References (27)
kombu.transport.redis (component)
celery.worker.consumer.consumer (component)
restore_visible (component)
Mutex (component)
routing key (concept)
circuit breaker (concept)
priority queue (concept)
task starvation (concept)
FIFO (concept)
Flower (component)
Prometheus (component)
Grafana (component)
DataDog (component)
exponential backoff (concept)
queue saturation (concept)
task retry (concept)
CeleryHighQueueLength (concept)
worker pool size (component)
catatonic state (concept)
consumer registration (concept)
transport level (concept)
CeleryExecutor (component)
airflow-providers-celery (component)
requeue limit (concept)
unacked_mutex (component)
visibility timeout (concept)
kombu/transport/redis.py#L414 (file path)
Related Insights (18)
Celery worker stops consuming tasks after Redis connection reset and fails to recover automatically (critical)
Task routing misconfigurations introduce bottlenecks (warning)
Aggressive periodic task scheduling causes 25% failure increase (warning)
Retry storms from synchronized retries overwhelm downstream services (critical)
Retries without concurrency caps flood queues, causing latency (critical)
Insufficient worker processes cause task queue buildup and delays (warning)
Priority queues without monitoring cause starvation of lower-priority tasks (warning)
Monitoring tools reduce downtime by 30% through proactive issue detection (info)
Exponential backoff reduces false-positive timeouts by 44% (warning)
Queue saturation and invisible retries cause up to 30% of Celery inefficiencies (warning)
Queue backlog exceeding 50 tasks signals under-provisioned workers (warning)
Queue depth exceeding thresholds requires worker autoscaling (warning)
Queue backlog exceeding 100 tasks indicates worker capacity shortage (warning)
Reduced downtime through continuous monitoring implementation (info)
High Celery queue length indicates worker processing bottleneck (warning)
Celery worker enters catatonic state after Redis broker restart (critical)
Airflow health check fails to detect Celery worker queue consumer loss (critical)
Workers stuck processing large unacked tasks cause network congestion and worker stalls (critical)
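Several of these insights (synchronized retry storms, retries without caps, exponential backoff) point at one policy: exponential backoff with jitter and a hard retry limit, which is exactly what retry_jitter and max_retries enable. A standalone sketch of that delay schedule, with illustrative values (base 1 s, cap 300 s) not taken from the source:

```python
import random

def retry_delay(attempt: int, base: float = 1.0, cap: float = 300.0,
                max_retries: int = 5):
    """Delay in seconds before retry `attempt` (1-based); None means give up.

    Exponential growth spaces retries out so the queue is not flooded;
    "full jitter" (uniform over [0, backoff]) de-synchronizes clients
    that all failed at the same moment, breaking up retry storms.
    """
    if attempt > max_retries:
        return None                       # cap retries: mark the task failed
    backoff = min(cap, base * (2 ** (attempt - 1)))
    return random.uniform(0.0, backoff)   # full jitter
```

Celery's task options express the same idea declaratively (`retry_backoff=True, retry_jitter=True, max_retries=5`); the sketch is only meant to make the schedule inspectable.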