
ceph_osd_pct_used

Percentage used of full/near-full OSDs.
Dimensions: None
Available on: Datadog
Related Insights (2)
Full or Near-Full OSDs Trigger Performance Collapse (critical)

When OSDs reach the nearfull (default 85%) or full (default 95%) thresholds, Ceph begins throttling operations and may trigger rebalancing, severely degrading performance. A full OSD blocks all writes to the cluster and can cause cluster-wide unavailability.
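The threshold behavior above can be sketched as a simple classification of per-OSD utilization. This is an illustrative sketch, not a Ceph API call: the sample values are hypothetical, and the 0.85/0.95 ratios assume Ceph's default `mon_osd_nearfull_ratio` and `mon_osd_full_ratio` settings.

```python
# Default Ceph ratios (assumed defaults; tunable per cluster).
NEARFULL_RATIO = 0.85  # mon_osd_nearfull_ratio
FULL_RATIO = 0.95      # mon_osd_full_ratio

def classify_osd(pct_used: float) -> str:
    """Classify one OSD's utilization (fraction used) against the ratios."""
    if pct_used >= FULL_RATIO:
        return "full"       # cluster writes are blocked
    if pct_used >= NEARFULL_RATIO:
        return "nearfull"   # health warning; throttling/rebalancing likely
    return "ok"

# Hypothetical per-OSD readings of ceph_osd_pct_used, as fractions.
osd_pct_used = {0: 0.52, 1: 0.87, 2: 0.96}
status = {osd: classify_osd(p) for osd, p in osd_pct_used.items()}
print(status)  # {0: 'ok', 1: 'nearfull', 2: 'full'}
```

Alerting a margin below `NEARFULL_RATIO` leaves time to add capacity or rebalance before Ceph itself starts throttling.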

Unbalanced OSD Utilization Creates Hot Spots (warning)

When data distribution across OSDs is significantly uneven (high variance in ceph_osd_pct_used), some OSDs become hot spots handling a disproportionate share of the load while others remain underutilized. This reduces overall cluster performance and accelerates the filling of the busiest OSDs.
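One way to detect this imbalance is to summarize the spread of per-OSD utilization and flag outliers. A minimal sketch, assuming hypothetical readings and an assumed two-standard-deviation outlier threshold (not a Ceph-defined rule):

```python
from statistics import mean, pstdev

def utilization_imbalance(pct_used: list[float]) -> dict:
    """Summarize spread of per-OSD utilization; a large spread or
    standard deviation suggests hot spots."""
    return {
        "mean": mean(pct_used),
        "stdev": pstdev(pct_used),
        "spread": max(pct_used) - min(pct_used),
    }

# Hypothetical ceph_osd_pct_used readings for six OSDs (fractions).
readings = [0.40, 0.42, 0.45, 0.43, 0.78, 0.41]
stats = utilization_imbalance(readings)

# Flag OSDs more than two standard deviations above the mean
# (threshold is an assumption for illustration).
hot = [i for i, p in enumerate(readings)
       if p - stats["mean"] > 2 * stats["stdev"]]
print(hot)  # [4]
```

In practice the flagged OSDs are candidates for reweighting or for rebalancing tools, so that load and fill rate even out across the cluster.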