celery.queue.length
Pending tasks in queue
Dimensions: None
Technical Annotations (42)
Configuration Parameters (8)
retry_jitter (recommended: True)
max_retries (recommended: 3-5)
CELERYD_CONCURRENCY (recommended: 4)
worker_prefetch_multiplier (recommended: 4)
worker_max_tasks_per_child (recommended: 1000)
broker_connection_retry (recommended: False)
CELERYD_PREFETCH_MULTIPLIER (recommended: 1)
CELERY_TASK_RESULT_EXPIRES (recommended: 600)
Error Signatures (5)
Connection to broker lost. Trying to re-establish the connection (log pattern)
ConnectionResetError: [Errno 104] Connection reset by peer (exception)
redis.exceptions.ConnectionError: Error while reading from (exception)
missed heartbeat from celery@ (log pattern)
Task requeue attempts exceeded max; marking failed (log pattern)
CLI Commands (2)
celery inspect ping (diagnostic)
celery inspect active_queues (diagnostic)
Technical References (27)
kombu.transport.redis (component)
celery.worker.consumer.consumer (component)
restore_visible (component)
Mutex (component)
routing key (concept)
circuit breaker (concept)
priority queue (concept)
task starvation (concept)
FIFO (concept)
Flower (component)
Prometheus (component)
Grafana (component)
DataDog (component)
exponential backoff (concept)
queue saturation (concept)
task retry (concept)
CeleryHighQueueLength (concept)
worker pool size (component)
catatonic state (concept)
consumer registration (concept)
transport level (concept)
CeleryExecutor (component)
airflow-providers-celery (component)
requeue limit (concept)
unacked_mutex (component)
visibility timeout (concept)
kombu/transport/redis.py#L414 (file path)
Related Insights (18)
Celery worker stops consuming tasks after Redis connection reset and fails to recover automatically (critical)
Task routing misconfigurations introduce bottlenecks (warning)
Aggressive periodic task scheduling causes 25% failure increase (warning)
Retry storms from synchronized retries overwhelm downstream services (critical)
Retries without concurrency caps flood queues and drive up latency (critical)
Insufficient worker processes cause task queue buildup and delays (warning)
Priority queues without monitoring cause starvation of lower-priority tasks (warning)
Monitoring tools reduce downtime by 30% through proactive issue detection (info)
Exponential backoff reduces false-positive timeouts by 44% (warning)
Queue saturation and invisible retries cause up to 30% of Celery inefficiencies (warning)
Queue backlog exceeding 50 tasks signals under-provisioned workers (warning)
Queue depth exceeding thresholds requires worker autoscaling (warning)
Queue backlog exceeding 100 tasks indicates worker capacity shortage (warning)
Continuous monitoring implementations reduce downtime (info)
High Celery queue length indicates worker processing bottleneck (warning)
Celery worker enters catatonic state after Redis broker restart (critical)
Airflow health check fails to detect Celery worker queue consumer loss (critical)
Workers stuck processing large unacked tasks cause network congestion and worker stalls (critical)
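The settings listed under Configuration Parameters can be combined into a broker/worker config. A minimal sketch, assuming Celery 4+ lowercase setting names (the old-style CELERYD_CONCURRENCY, CELERYD_PREFETCH_MULTIPLIER, and CELERY_TASK_RESULT_EXPIRES map to worker_concurrency, worker_prefetch_multiplier, and result_expires); the list recommends both 4 and 1 for the prefetch multiplier, and 1 is the conservative choice for long-running tasks:

```python
# celeryconfig.py -- sketch of the recommended values, not a definitive config.

worker_concurrency = 4              # CELERYD_CONCURRENCY: size of the worker pool
worker_prefetch_multiplier = 1      # CELERYD_PREFETCH_MULTIPLIER: tasks reserved per process
worker_max_tasks_per_child = 1000   # recycle worker processes to limit memory growth
broker_connection_retry = False     # surface broker loss instead of retrying silently
result_expires = 600                # CELERY_TASK_RESULT_EXPIRES: result TTL in seconds

# retry_jitter and max_retries are per-task options rather than global settings, e.g.:
# @app.task(autoretry_for=(ConnectionError,), retry_backoff=True,
#           retry_jitter=True, max_retries=5)
# def fetch(url): ...
```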
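The retry-storm and exponential-backoff insights above share one mechanism: jitter de-synchronizes retries that failed at the same moment. A sketch of exponential backoff with full jitter (the helper name and defaults are hypothetical, not Celery API; Celery's retry_backoff/retry_jitter task options implement the same idea internally):

```python
import random

def backoff_delay(retry_count: int, base: float = 1.0, cap: float = 60.0,
                  jitter: bool = True) -> float:
    """Exponential backoff with optional full jitter.

    Without jitter, every worker that failed together retries together,
    producing the retry storms described above; full jitter spreads the
    retries uniformly across [0, delay].
    """
    delay = min(cap, base * (2 ** retry_count))
    return random.uniform(0, delay) if jitter else delay
```

Without jitter the delays grow 1, 2, 4, 8, ... seconds until they hit the 60-second cap; with jitter each worker picks a random point below that curve.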
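The five Error Signatures are plain substrings and log patterns, so they can be scanned for directly when tailing worker logs. A sketch of a matcher (the signature names are labels invented here for illustration):

```python
import re

# The five error signatures above, compiled as regexes (brackets escaped).
SIGNATURES = {
    "broker_lost": re.compile(
        r"Connection to broker lost\. Trying to re-establish the connection"),
    "conn_reset": re.compile(
        r"ConnectionResetError: \[Errno 104\] Connection reset by peer"),
    "redis_read_error": re.compile(
        r"redis\.exceptions\.ConnectionError: Error while reading from"),
    "missed_heartbeat": re.compile(r"missed heartbeat from celery@"),
    "requeue_exceeded": re.compile(
        r"Task requeue attempts exceeded max; marking failed"),
}

def match_signature(line: str):
    """Return the label of the first known signature in a log line, else None."""
    for name, pattern in SIGNATURES.items():
        if pattern.search(line):
            return name
    return None
```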
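The backlog insights above give concrete alert thresholds for celery.queue.length: over 50 tasks suggests under-provisioned workers, over 100 a capacity shortage. A sketch of a threshold check using those numbers (the function name is hypothetical; reading the depth itself needs a live broker, so it is shown only as a comment):

```python
def classify_queue_depth(depth: int, warn_at: int = 50, critical_at: int = 100) -> str:
    """Map a queue-depth reading to an alert level.

    Thresholds follow the insights above: >50 pending tasks warns,
    >100 is critical and a signal to scale the worker pool.
    """
    if depth > critical_at:
        return "critical"
    if depth > warn_at:
        return "warning"
    return "ok"

# With a Redis broker, the default queue's depth is its list length:
#   depth = redis_client.llen("celery")
```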