ArgoCD · Kubernetes

Repo Server Memory Exhaustion from Large Manifests

critical · Resource Contention
Updated Aug 25, 2025

ArgoCD repo-server pods hit memory limits and crash when processing massive Helm charts or monorepos with hundreds of YAML files, causing sync failures across applications.

How to detect:

Monitor the repo-server's resident memory (the standard Go process metric `process_resident_memory_bytes`, scoped to the repo-server job) as it approaches the container memory limit (default 500Mi). A high `go_memstats_heap_alloc_bytes` value combined with repeated repo-server pod restarts indicates memory pressure. Check pod status for containers terminated with reason `OOMKilled`.
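The detection signal above can be turned into an alert. The sketch below is a hypothetical PrometheusRule: it assumes cAdvisor and kube-state-metrics are scraped, the prometheus-operator CRDs are installed, and a default `argocd` namespace with a container named `repo-server`.

```yaml
# Sketch of an alert that fires before the OOM kill (assumed metric sources:
# cAdvisor for container_memory_working_set_bytes, kube-state-metrics for limits)
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: argocd-repo-server-memory
  namespace: argocd
spec:
  groups:
    - name: argocd-repo-server
      rules:
        - alert: ArgoCDRepoServerNearMemoryLimit
          expr: |
            container_memory_working_set_bytes{namespace="argocd", container="repo-server"}
              / on(namespace, pod, container)
            kube_pod_container_resource_limits{namespace="argocd", container="repo-server", resource="memory"}
              > 0.9
          for: 5m
          labels:
            severity: critical
          annotations:
            summary: "ArgoCD repo-server is above 90% of its memory limit"
```

Alerting at 90% of the limit for five minutes gives time to restart or resize the pod before the kernel OOM-kills it mid-sync.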

Recommended action:

Increase the repo-server memory limit to 2Gi or more; in the argo-cd Helm chart this is set via `repoServer.resources.limits.memory`, which renders into the Deployment's container resources. Long-term: split monorepos into smaller repositories organized by team ownership, use ApplicationSets to break large application trees into many smaller Applications, and cap manifest generation size and cache lifetime in `argocd-cmd-params-cm` (`reposerver.max.combined.directory.manifests.size`, `reposerver.repo.cache.expiration`).
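As a sketch, the short-term fix and the cache tuning above might look like this in the argo-cd Helm chart's values file; the 2Gi/1Gi figures and the `10M`/`24h` settings are illustrative starting points, not measured recommendations, and the `configs.params` block assumes the chart's mapping into `argocd-cmd-params-cm`:

```yaml
# values.yaml fragment for the argo-cd Helm chart (tune numbers to your workload)
repoServer:
  resources:
    requests:
      cpu: 250m
      memory: 1Gi
    limits:
      memory: 2Gi   # raise from the default so large charts/monorepos fit

configs:
  params:
    # Refuse to generate combined directory manifests beyond this size
    reposerver.max.combined.directory.manifests.size: 10M
    # Expire cached repo state so stale entries do not accumulate in memory
    reposerver.repo.cache.expiration: 24h
```

Leaving the CPU limit unset while pinning the memory limit is a common choice here: manifest generation is bursty on CPU, but memory overruns are what trigger the OOM kill.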