ArgoCD application controller cannot keep up with reconciliation demand when status/operation processor counts are too low for the number of managed applications, causing sync delays and stale application states.
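Processor counts can be raised via the `argocd-cmd-params-cm` ConfigMap (the controller also accepts the `--status-processors` and `--operation-processors` flags directly). A minimal sketch; the values shown are illustrative starting points, not tuned recommendations:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  # Defaults are 20 and 10; scale roughly with the number of managed apps.
  controller.status.processors: "50"
  controller.operation.processors: "25"
```

The application controller must be restarted after the ConfigMap change for the new values to take effect.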
ArgoCD applications polling GitHub on the default 3-minute refresh interval can exhaust GitHub's API rate limits (60 requests/hour for unauthenticated requests; 5,000/hour with a token), causing sync operations to fail intermittently with 'API rate limit exceeded' errors.
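One mitigation is to lengthen the polling interval via `timeout.reconciliation` in `argocd-cm`, relying on a Git webhook for prompt change detection. A sketch, value illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # Default is 180s; raising it reduces API calls at the cost of slower
  # change detection when no webhook is configured.
  timeout.reconciliation: 600s
```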
Kubernetes rejects ArgoCD sync operations due to invalid resource definitions (names too long, invalid labels, missing required fields), causing deployment failures that require manual intervention.
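As a stopgap, kubectl schema validation can be disabled per application with the `Validate=false` sync option, though genuinely invalid manifests (over-long names, bad labels, missing required fields) still need to be fixed in Git since the API server will reject them regardless. A sketch, application name hypothetical:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app   # hypothetical name
  namespace: argocd
spec:
  syncPolicy:
    syncOptions:
      # Skips client-side schema validation during apply;
      # server-side admission checks still run.
      - Validate=false
```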
With auto-prune enabled on applications, ArgoCD deletes live resources that are not explicitly defined in Git, including operator-managed resources (External Secrets, Istio sidecars, cert-manager certificates), causing service disruption and potential data loss.
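Operator-managed resources can be exempted individually with per-resource annotations; a sketch, resource name hypothetical:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials   # hypothetical operator-created resource
  annotations:
    # Never prune this resource even if it is absent from Git.
    argocd.argoproj.io/sync-options: Prune=false
    # Don't mark the app OutOfSync because of this extraneous resource.
    argocd.argoproj.io/compare-options: IgnoreExtraneous
```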
ArgoCD's default kubectl parallelism limit throttles sync operations when applying multiple resources, causing syncs to take 10+ minutes for applications with many Kubernetes objects.
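The limit corresponds to the controller's `--kubectl-parallelism-limit` flag and can be raised through `argocd-cmd-params-cm`; the value below is illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  # Default is 20 concurrent kubectl executions per controller.
  controller.kubectl.parallelism.limit: "40"
```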
ArgoCD reports applications as 'Healthy' even when they are serving errors or experiencing high latency, because built-in health checks only verify Kubernetes resource readiness, not actual application behavior.
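Health assessment can be extended with a Lua script in `argocd-cm` under `resource.customizations.health.<group>_<kind>`. A sketch for a hypothetical custom resource that exposes a `Ready` condition:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  resource.customizations.health.example.com_MyService: |
    hs = {}
    hs.status = "Progressing"
    hs.message = "Waiting for Ready condition"
    if obj.status ~= nil and obj.status.conditions ~= nil then
      for _, c in ipairs(obj.status.conditions) do
        if c.type == "Ready" and c.status == "True" then
          hs.status = "Healthy"
          hs.message = c.message
        end
      end
    end
    return hs
```

Note the caveat: this still inspects Kubernetes resource status; true behavioral checks (error rates, latency) belong in external monitoring, not in ArgoCD health scripts.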
ArgoCD repo-server pods hit memory limits and crash when processing massive Helm charts or monorepos with hundreds of YAML files, causing sync failures across applications.
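Besides raising memory limits and replica count on the `argocd-repo-server` Deployment, concurrent manifest generation can be capped so spikes don't exceed the memory budget. A sketch, value illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  # Max concurrent manifest generations per repo-server replica
  # (0 = unlimited, the default).
  reposerver.parallelism.limit: "10"
```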
ArgoCD cannot sync applications to remote clusters when cluster API endpoints change (common with managed Kubernetes services) or service account tokens expire, causing 'connection refused' errors.
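Remote-cluster credentials live in Secrets labeled `argocd.argoproj.io/secret-type: cluster`, so recovering means updating that Secret with the new endpoint or a rotated token. A sketch with placeholder values:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: prod-cluster   # hypothetical cluster secret
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: prod-cluster
  server: https://new-api-endpoint.example.com   # updated API endpoint
  config: |
    {
      "bearerToken": "<rotated-service-account-token>",
      "tlsClientConfig": {"insecure": false, "caData": "<base64-ca>"}
    }
```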
ArgoCD invalidates manifest cache for all applications when any commit occurs in a shared mono-repo, causing thousands of applications to reconcile unnecessarily and overwhelming the repo-server.
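With a Git webhook configured, the `manifest-generate-paths` annotation scopes refreshes to the paths an application actually renders from, so unrelated commits in the mono-repo no longer invalidate its cache. A sketch, application name hypothetical:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app   # hypothetical name
  namespace: argocd
  annotations:
    # Only webhook events touching these paths trigger a refresh;
    # "." is relative to spec.source.path.
    argocd.argoproj.io/manifest-generate-paths: .
```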