tailscale.bytes.sent.total
Total bytes sent
Dimensions: None
Summary
Cumulative count of bytes transmitted across all Tailscale network connections since daemon startup. The counter increases monotonically and resets only when the process restarts. Rising send rates indicate active outbound traffic, while a large, sustained gap between send and receive rates can point to asymmetric data distribution or routing problems. Critical for understanding egress bandwidth usage and detecting traffic anomalies.
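Because the counter is cumulative and resets on daemon restart, dashboards typically derive a bytes/sec rate from successive samples and handle the reset case explicitly. A minimal sketch (the function name and sampling approach are illustrative, not part of Tailscale's tooling):

```python
def send_rate(prev_bytes, prev_ts, curr_bytes, curr_ts):
    """Bytes/sec from two samples of a cumulative counter.

    If the counter went backwards, tailscaled likely restarted and the
    counter reset to zero; treat the current value as growth since restart.
    """
    delta = curr_bytes - prev_bytes
    if delta < 0:  # counter reset on daemon restart
        delta = curr_bytes
    elapsed = curr_ts - prev_ts
    return delta / elapsed if elapsed > 0 else 0.0
```

Prometheus's `rate()` function applies the same reset handling automatically when scraping this counter.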
Interface Metrics (3)
Sources
Technical Annotations (17)
Configuration Parameters (3)
RouteAll (recommended: false)
ExitNodeID (recommended: empty)
allow_incoming_connections (recommended: disabled, workaround only)
CLI Commands (4)
tailscale metrics print (diagnostic)
tailscale prefs (diagnostic)
tailscale set --relay-server-port=<port> (remediation)
tailscale ping <node> (diagnostic)
Technical References (10)
exit node (component)
DERP relay (component)
subnet router (component)
DERP relay servers (component)
NAT traversal (concept)
tailscaled.exe (component)
Hyper-V (component)
DERP (component)
peer relay (component)
network flow logs (component)
Related Insights (9)
Exit node resource exhaustion from excessive inbound connections degrades performance (warning)
Tailscale client metrics available for Prometheus scraping (info)
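Client metrics are emitted in Prometheus text exposition format (e.g. via `tailscale metrics print`), so a counter can be pulled out with a small parser. A sketch, assuming the standard `name{labels} value` line shape; the metric name in the usage comment is hypothetical, so check your client's actual exposition names:

```python
import re

def read_counter(metrics_text, name):
    """Extract the first sample of a metric from Prometheus text format.

    Lines look like: <name>{<labels>} <value>  (labels optional).
    Returns the value as a float, or None if the metric is absent.
    """
    pattern = re.compile(
        r'^%s(?:\{[^}]*\})?\s+([0-9.eE+-]+)$' % re.escape(name), re.M)
    match = pattern.search(metrics_text)
    return float(match.group(1)) if match else None

# Example usage (metric name is an assumption, not confirmed by this page):
#   text = subprocess.run(["tailscale", "metrics", "print"],
#                         capture_output=True, text=True).stdout
#   sent = read_counter(text, "tailscaled_outbound_bytes_total")
```

In production, scraping the endpoint with Prometheus itself is preferable to ad hoc parsing, since it also handles counter resets and retention.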
Exit node usage causes unexpected egress IP and geo-based service alerts (warning)
Exit node becomes single point of failure causing company-wide outages (critical)
DERP relay usage indicates direct connectivity failure between peers (warning)
Tailscale daemon CPU usage spikes to 25-60% with high network utilization on Windows Server (warning)
DERP relay throughput throttling causes severe degradation on intercontinental connections (warning)
Excessive traffic through exit node indicating performance bottleneck (warning)
When a large volume of traffic is routed through a single exit node, it can create a performance bottleneck, saturating the exit node's bandwidth, CPU, or network interfaces. This degrades performance for all users routing through that exit node and creates a single point of failure.
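One way to catch this condition early is to compare the exit node's send rate (derived from this counter) against its uplink capacity and alert before saturation. A minimal sketch; the 80% threshold and function name are illustrative assumptions, not Tailscale defaults:

```python
def exit_node_saturated(send_rate_bps, link_capacity_bps, threshold=0.8):
    """Flag an exit node whose outbound rate nears its link capacity.

    threshold=0.8 is an illustrative alerting level: fire when the
    observed send rate reaches 80% of the provisioned uplink.
    """
    return send_rate_bps >= threshold * link_capacity_bps
```

For example, an exit node pushing 850 Mbps on a 1 Gbps uplink would trip this check, suggesting it is time to add exit nodes or rebalance clients.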
Network flow logs now include node information automatically (info)