Interface Metrics (3)
prefect.flow_run.count: Total flow runs by state (dimensions: none)
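The prefect.flow_run.count metric can be populated by polling the Prefect REST API. The sketch below is a minimal, hedged example: it assumes a self-hosted Prefect API reachable at some base URL, the POST /flow_runs/count endpoint, and the state-type filter shape (`flow_runs.state.type.any_`) used by recent Prefect versions — verify the filter schema against your Prefect release before relying on it.

```python
"""Sketch: collect flow-run counts per state type from the Prefect REST API."""

# State types as exposed by Prefect's API; trim to the ones you track.
STATE_TYPES = ["SCHEDULED", "PENDING", "RUNNING",
               "COMPLETED", "FAILED", "CRASHED", "CANCELLED"]


def state_count_filter(state_type: str) -> dict:
    """Build the request body that counts flow runs in one state type."""
    return {"flow_runs": {"state": {"type": {"any_": [state_type]}}}}


def count_flow_runs_by_state(api_url: str) -> dict:
    """Query the count endpoint once per state type; returns {state: count}."""
    import httpx  # imported lazily so the pure helper above has no dependency

    counts = {}
    with httpx.Client(base_url=api_url, timeout=10.0) as client:
        for state in STATE_TYPES:
            resp = client.post("/flow_runs/count", json=state_count_filter(state))
            resp.raise_for_status()
            counts[state] = resp.json()
    return counts
```

One request per state keeps the filter trivial; at high scale, prefer a single unfiltered read with server-side aggregation if your API version offers it, since per-state polling multiplies request volume against an already loaded API server.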
Technical Annotations (61)
Configuration Parameters (8)
- concurrency_limit: recommended 5-100 depending on capacity
- for_each: recommended ['prefect.resource.id'] or ['client_id']
- work_pool_type: recommended cloud-run:push
- marklateruns.loop_interval: recommended 5.0 seconds
- --limit
- PREFECT_AGENT_QUERY_RUN_FETCH_LIMIT
- for: recommended 5m
- PREFECT_EVENTS_RETENTION_PERIOD: recommended 1-2 days for high volume, 3-5 days for medium volume

Error Signatures (7)
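The PREFECT_EVENTS_RETENTION_PERIOD recommendation (1-2 days for high volume) trades event history for database size. A back-of-envelope sizing helper makes the trade-off concrete; the event rate and average row size below are illustrative assumptions, not Prefect figures — measure your real table with the pg_total_relation_size query listed under CLI Commands.

```python
def events_table_bytes(events_per_minute: float, retention_days: float,
                       avg_row_bytes: int = 2048) -> float:
    """Rough steady-state size of the events table under a retention window.

    avg_row_bytes is an assumption (event payloads vary widely); replace it
    with measured bytes-per-row from your own database.
    """
    rows = events_per_minute * 60 * 24 * retention_days
    return rows * avg_row_bytes


# Example: 1,000 events/minute at the default-ish 7 days vs the recommended 2.
week = events_table_bytes(1_000, 7)      # ~19 GiB at the assumed row size
two_days = events_table_bytes(1_000, 2)  # ~5.5 GiB at the assumed row size
```

Even with generous error bars on row size, shortening retention from a week to 2 days cuts steady-state storage by roughly 3.5x, which is why the high-volume recommendation is so aggressive.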
- "MarkLateRuns took" (log pattern)
- 401 (HTTP status)
- httpx.PoolTimeout (exception)
- httpcore.PoolTimeout (exception)
- "Handler '_replicate_pod_event' failed with an exception" (log pattern)
- "Pending" (log pattern)
- "Failed" (log pattern)

CLI Commands (12)
- prefect work-pool set-concurrency-limit my-pool 5 (remediation)
- prefect work-queue set-concurrency-limit my-queue 5 --pool my-pool (remediation)
- prefect work-pool create db-events-pool --type cloud-run:push (remediation)
- prefect work-pool provision-infra db-events-pool (remediation)
- prefect work-pool set-concurrency-limit db-events-pool 100 (remediation)
- prefect agent start -q default --limit N (remediation)
- prefect agent start --hide-welcome -q default -p default-agent-pool (diagnostic)
- df -h /path/to/postgresql/data (diagnostic)
- SELECT pg_size_pretty(pg_database_size('prefect')) AS database_size; (diagnostic)
- prefect config set PREFECT_EVENTS_RETENTION_PERIOD="2d" (remediation)
- SELECT pg_size_pretty(pg_total_relation_size('public.events')) AS total_size, to_char(count(*), 'FM999,999,999') AS row_count, min(occurred) AS oldest_event, max(occurred) AS newest_event FROM events; (diagnostic)
- prefect config view | grep EVENTS_RETENTION (diagnostic)

Technical References (34)
Components: concurrency_limit, for_each, Cloud Run, ECS Fargate, Azure Container Instances, emit_event, prefect.resource.id, MarkLateRuns, prefect.server.services.marklateruns, read_flow_runs, set_flow_run_state, kopf, _replicate_pod_event, prefect_kubernetes.observer, Heroku scheduler, Prefect Cloud, prefect agent, work queue, work pool, Pydantic, agent, worker, events, log, background services

Concepts: backpressure, cold start, ENI quota, flow run, PENDING, RUNNING, CANCELLING, parameter schema, on_failure

Related Insights (17)
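The for_each recommendation of ['prefect.resource.id'] only works if emitted events carry a stable resource id. The sketch below shows one way to shape such an event for a database-change trigger; the event name, table naming scheme, and client-id label are hypothetical examples, while emit_event and the required prefect.resource.id key come from Prefect's events API.

```python
def change_event_resource(table: str, client_id: str) -> dict:
    """Resource labels for a table-change event.

    prefect.resource.id is the key an automation's for_each can deduplicate
    on; the db.table.* scheme and client-id label are illustrative only.
    """
    return {
        "prefect.resource.id": f"db.table.{table}",
        "client-id": client_id,
    }


def emit_table_change(table: str, client_id: str) -> None:
    """Emit the event via Prefect (requires Prefect installed and configured)."""
    from prefect.events import emit_event  # lazy: keeps the helper above pure

    emit_event(
        event="db.table.changed",  # hypothetical event name
        resource=change_event_resource(table, client_id),
    )
```

Because for_each evaluates per distinct prefect.resource.id, a flood of changes to one table collapses into one concurrency bucket per table instead of overwhelming the work pool, which is the backpressure behavior the configuration above aims for.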
- API server overload at high scale causes HTTP 500 errors (warning)
- Alert flooding kills workers without concurrency limits (critical)
- Push work pools scale to high concurrency for event-driven workloads (info)
- Time-based polling obscures pipeline trigger causes and creates constant execution overhead (warning)
- MarkLateRuns service execution exceeds loop interval (warning)
- Flow runs stuck in Cancelling state trigger persistent alerts (warning)
- Prefect Cloud free tier rate limits cause request failures (warning)
- Prefect Kubernetes worker CPU spike and crash above 3.5K deployed flows (critical)
- Prefect observer cannot keep pace with K8s events at 5-10K concurrent flows (warning)
- Short-lived serverless agent deployments waste resources without timeout (warning)
- Agent workload imbalance causes resource exhaustion on multi-agent deployments (warning)
- Work pool concurrency limit blocks agent from picking up late runs when PENDING runs accumulate (critical)
- Flow runs stuck in PENDING state accumulate and consume work pool capacity indefinitely (warning)
- Invalid parameters cause flow runs to fail before execution (warning)
- Flow run submission failure sets incorrect Failed state instead of Crashed (warning)
- Disk usage exceeds critical threshold causing outage risk (critical)
- High-volume workload with default event retention causes rapid database growth (warning)
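Two of the insights above concern flow runs stuck in PENDING consuming work-pool capacity. Detecting them reduces to a small filter over run records; the sketch below assumes you have already fetched runs (e.g. via read_flow_runs) and reshaped each into a dict with id, state_type, and the timestamp of the last state change — those field names are illustrative, not the client's exact schema.

```python
from datetime import datetime, timedelta, timezone


def stuck_pending(runs: list, max_age: timedelta = timedelta(minutes=15),
                  now: datetime = None) -> list:
    """Return ids of runs that have sat in PENDING longer than max_age.

    Each run is assumed to look like
    {"id": ..., "state_type": ..., "state_timestamp": datetime}; adapt the
    keys to however you flatten your read_flow_runs results.
    """
    now = now or datetime.now(timezone.utc)
    return [
        r["id"] for r in runs
        if r["state_type"] == "PENDING"
        and now - r["state_timestamp"] > max_age
    ]
```

Flagged runs can then be cancelled or reset so they stop holding work-pool concurrency slots, complementing the set-concurrency-limit remediations listed under CLI Commands.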