Interface Metrics (1)
ingestion_failure — Number of failures during ingestion
Related Insights (4)
DataHub's asynchronous write architecture can hide processing failures. High Kafka consumer lag combined with ingestion warnings/failures indicates metadata events are queued but not successfully persisting to primary or search storage.
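Correlating the two signals above catches this failure mode: lag alone may just mean a slow consumer, and failures alone may already be visible in the ingestion report, but both together suggest queued events that are not persisting. A minimal sketch, assuming the lag and failure counts have already been scraped from your metrics backend (the function name and threshold are illustrative, not part of DataHub's API):

```python
def metadata_persistence_alert(consumer_lag: int,
                               ingestion_failures: int,
                               lag_threshold: int = 10_000) -> bool:
    """Flag the compound condition: events queued (high consumer lag)
    while writes are also failing (nonzero ingestion failures)."""
    return consumer_lag > lag_threshold and ingestion_failures > 0


# High lag with failures -> alert; either signal alone -> no alert.
metadata_persistence_alert(50_000, 3)   # alert
metadata_persistence_alert(50_000, 0)   # lag only, no alert
metadata_persistence_alert(100, 3)      # failures only, no alert
```

Tuning `lag_threshold` to your normal steady-state lag avoids paging on routine consumer catch-up.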
DataHub ingestion jobs that fail silently or with warnings create metadata gaps, preventing data quality monitoring, lineage tracking, and incident detection from functioning properly.
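The usual remedy for silent failures is to inspect the ingestion run's report and fail the orchestrating pipeline explicitly instead of letting warnings scroll by. A sketch under the assumption that the report has been exported as a dict with `failures` and `warnings` lists (the exact report structure here is hypothetical, not DataHub's schema):

```python
def summarize_ingestion_report(report: dict) -> tuple[int, int]:
    """Count failures and warnings in a (hypothetical) run report dict."""
    failures = len(report.get("failures", []))
    warnings = len(report.get("warnings", []))
    return failures, warnings


def should_fail_pipeline(report: dict, fail_on_warning: bool = False) -> bool:
    """Return True if the orchestrator should mark this run as failed,
    so warnings cannot silently accumulate into metadata gaps."""
    failures, warnings = summarize_ingestion_report(report)
    return failures > 0 or (fail_on_warning and warnings > 0)
```

Running with `fail_on_warning=True` on critical sources trades some noise for the guarantee that degraded metadata never lands unnoticed.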
Organizations experience data quality incidents (bad data reaching dashboards or ML models) that DataHub's observability should catch, but misses because assertions are not configured or monitored on critical datasets.
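This gap is straightforward to audit: compare the set of datasets you consider critical against the set that actually carries assertions, and alert on the difference. A minimal sketch assuming both sets have already been fetched (e.g. from a catalog query); the function name is illustrative:

```python
def assertion_coverage_gaps(critical_datasets: set[str],
                            covered_datasets: set[str]) -> set[str]:
    """Critical datasets with no configured assertions -- the blind spots
    where incidents can reach dashboards and models undetected."""
    return critical_datasets - covered_datasets


critical = {"prod.orders", "prod.payments", "prod.users"}
covered = {"prod.orders"}
assertion_coverage_gaps(critical, covered)
# -> {"prod.payments", "prod.users"} are unmonitored
```

Running this audit on a schedule turns assertion coverage into a tracked number instead of an assumption.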
When upstream data quality issues occur, teams cannot quickly identify downstream impact (which dashboards, ML models, and reports are affected) because column-level lineage is incomplete or is not queried during incident response.
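Impact identification during an incident reduces to a reachability query over the lineage graph: starting from the bad upstream asset, collect every transitively downstream asset. A minimal sketch, assuming the lineage edges have already been exported as an adjacency map (the asset names are hypothetical):

```python
from collections import deque


def downstream_impact(lineage: dict[str, list[str]], source: str) -> set[str]:
    """Breadth-first traversal: all assets transitively downstream of
    `source`, i.e. everything a bad upstream dataset can contaminate."""
    affected: set[str] = set()
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for child in lineage.get(node, []):
            if child not in affected:
                affected.add(child)
                queue.append(child)
    return affected


lineage = {
    "raw.orders": ["dbt.orders_clean"],
    "dbt.orders_clean": ["dash.revenue", "ml.churn_model"],
}
downstream_impact(lineage, "raw.orders")
# the two dashboards/models plus the intermediate table are affected
```

With column-level lineage, the same traversal runs over (dataset, column) pairs instead of datasets, narrowing the blast radius to only the consumers of the affected columns.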