kafka.broker.config.log_segment_bytes
Log segment bytes config
About this metric
The kafka.broker.config.log_segment_bytes metric represents the configured maximum size in bytes for individual Kafka log segments before a new segment is created. This is a fundamental broker-level configuration that directly controls log segment files - the physical storage units that comprise Kafka topic partitions. Each partition in Kafka is implemented as a sequence of log segments on disk, and this metric exposes the log.segment.bytes broker configuration value, which determines when Kafka will roll to a new segment file. The default value is typically 1 GB (1073741824 bytes), though this can be overridden at the topic level. Understanding this metric is operationally significant because segment size directly impacts retention management, replication efficiency, storage layout, and compaction behavior across the Kafka cluster.
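As a hedged illustration, the broker-level setting described above could be configured as follows in server.properties; the value shown is the shipped default, not a recommendation:

```properties
# server.properties (broker-level default for all topics)
# 1073741824 bytes = 1 GiB, the shipped default
log.segment.bytes=1073741824
```

Note that the topic-level override mentioned above uses the analogous topic configuration key segment.bytes rather than log.segment.bytes.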
From an operational perspective, this metric helps teams manage storage efficiency, optimize data retention, and troubleshoot performance issues. Larger segment sizes reduce the number of files on disk and minimize file-handle overhead, which can improve performance for high-throughput workloads, but they also delay log compaction, since compaction runs only on closed (non-active) segments. Conversely, smaller segments enable more frequent compaction and finer-grained retention control, but may increase metadata overhead and disk I/O due to more frequent file operations. Monitoring this metric is particularly important when investigating unexpected retention behavior: a segment becomes eligible for deletion only once its newest record is older than the retention period, so a large segment that took days to fill can linger well past the point at which its oldest records expired.
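The retention-delay effect above is just arithmetic: the slower a partition fills a segment, the longer that segment stays open and undeletable. A minimal sketch, with illustrative throughput and segment-size values (not drawn from any real cluster):

```python
# Sketch (assumed values): estimate how long one log segment takes to fill
# at a steady per-partition produce rate. Because only whole closed segments
# are deleted, this fill time bounds how far data can outlive its nominal
# retention window.

def segment_fill_seconds(segment_bytes: int, bytes_per_sec: float) -> float:
    """Time for a partition to fill one segment at a steady produce rate."""
    return segment_bytes / bytes_per_sec

# 1 GiB segment, 100 KiB/s produce rate (illustrative numbers)
fill = segment_fill_seconds(1_073_741_824, 100 * 1024)
print(f"segment fills in {fill / 3600:.1f} hours")  # prints "segment fills in 2.9 hours"
```

At lower produce rates the same 1 GiB segment can take days to close, which is exactly the scenario the paragraph above describes.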
Healthy values depend on the use case, but most production deployments keep the default 1 GB or adjust to between 512 MB and 2 GB based on message throughput and retention requirements. Teams should alert when this metric shows unexpected changes across brokers, since configuration drift can lead to inconsistent behavior. Common troubleshooting scenarios include investigating why old data isn't being deleted (segments may be too large relative to retention windows), diagnosing slow log compaction (very large segments take longer to compact), and resolving "too many open files" errors (segments that are too small create excessive file handles). Correlate this metric with kafka.log.log_end_offset growth rates and available disk space to ensure segment sizing aligns with retention policies and storage capacity.
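The "too many open files" scenario can also be reasoned about numerically. A simplified sketch, under the assumption of size-based retention and ignoring index files and the active segment's partial fill:

```python
# Sketch (simplified model): rough count of live segments per partition
# under size-based retention. Each segment keeps several file handles open
# (.log plus its index files), so shrinking segment size multiplies the
# broker's file-handle usage across all partitions.
import math

def approx_segments(retention_bytes: int, segment_bytes: int) -> int:
    # Closed segments covering the retained bytes, plus the one active segment.
    return math.ceil(retention_bytes / segment_bytes) + 1

gib = 1024 ** 3
# 50 GiB retained per partition:
print(approx_segments(50 * gib, 1 * gib))        # default 1 GiB segments -> 51
print(approx_segments(50 * gib, 64 * 1024 ** 2)) # 64 MiB segments -> 801
```

Multiplying the per-partition count by the partitions a broker hosts gives a rough lower bound on open file descriptors, which is why undersized segments surface as descriptor exhaustion.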