
tailscale.bytes.received.total

Total bytes received
Dimensions: None
Available on: Prometheus (1), CloudWatch (1), Google Cloud Monitoring (1)

Summary

Cumulative count of bytes received across all Tailscale network connections since daemon startup. This counter monotonically increases and resets only on process restart. Rising receive rates indicate active inbound traffic flow, while asymmetric patterns (compared to bytes sent) may signal one-way data transfers or routing issues. Essential for tracking overall network consumption and identifying bandwidth anomalies.
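Because the counter is cumulative and resets only on restart, rate calculations over it must tolerate a drop back to zero. A minimal sketch of that logic, with illustrative sample values (not real measurements):

```python
# Sketch: computing a receive rate from two samples of a monotonic
# counter such as tailscale.bytes.received.total. The sample values
# below are illustrative, not real measurements.

def counter_rate(prev_value: float, curr_value: float, interval_s: float) -> float:
    """Bytes/sec between two counter samples, tolerating a daemon restart.

    A cumulative counter only decreases when the process restarts and the
    counter resets to zero; in that case the current value alone is the
    best estimate of bytes received since the reset.
    """
    if interval_s <= 0:
        raise ValueError("interval must be positive")
    delta = curr_value - prev_value
    if delta < 0:  # counter reset on daemon restart
        delta = curr_value
    return delta / interval_s

# Normal growth: (5_000_000 - 2_000_000) / 60 = 50_000 B/s
print(counter_rate(2_000_000, 5_000_000, 60))   # 50000.0
# After a restart the counter restarts from zero: 1_200_000 / 60 = 20_000 B/s
print(counter_rate(5_000_000, 1_200_000, 60))   # 20000.0
```

This mirrors what Prometheus's rate() function does for counters: a decrease is treated as a reset rather than negative traffic.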

Interface Metrics (3)
Prometheus
Total bytes received through the Tailscale network interface
Dimensions: None
CloudWatch
Total bytes received across all Tailscale connections
Dimensions: None
Google Cloud Monitoring
Total bytes received by this Tailscale node from other nodes in the network
Dimensions: None

Technical Annotations (9)

CLI Commands (1)
tailscale metrics print (diagnostic)
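The CLI command dumps client metrics in Prometheus text exposition format, so the counter can be totaled across its label sets with a few lines of parsing. A sketch, using an illustrative metric name and labels rather than output from a real dump:

```python
# Sketch: summing one counter out of Prometheus-format text such as the
# dump produced by `tailscale metrics print`. The metric name and label
# values in SAMPLE are illustrative assumptions, not a real dump.

SAMPLE = """\
# TYPE tailscaled_inbound_bytes_total counter
tailscaled_inbound_bytes_total{path="direct_ipv4"} 123456
tailscaled_inbound_bytes_total{path="derp"} 7890
"""

def total_counter(text: str, name: str) -> float:
    """Sum all series of a counter across its label sets."""
    total = 0.0
    for line in text.splitlines():
        if line.startswith("#") or not line.strip():
            continue  # skip comments/metadata and blank lines
        metric, value = line.rsplit(None, 1)
        if metric.split("{", 1)[0] == name:
            total += float(value)
    return total

print(total_counter(SAMPLE, "tailscaled_inbound_bytes_total"))  # 131346.0
```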
Technical References (8)
exit node (component), DERP relay (component), subnet router (component), tailnet policy file (component), Tailscale access control (component), DERP relay servers (component), NAT traversal (concept), network flow logs (component)
Related Insights (6)
Exit node resource exhaustion from excessive inbound connections degrades performance (warning)
Tailscale client metrics available for Prometheus scraping (info)
High packet drop rate from ACL blocking indicates misconfigured access controls (warning)
DERP relay usage indicates direct connectivity failure between peers (warning)
Excessive traffic through exit node indicating performance bottleneck (warning)

When a large volume of traffic is routed through a single exit node, it can create a performance bottleneck, saturating the exit node's bandwidth, CPU, or network interfaces. This degrades performance for all users routing through that exit node and creates a single point of failure.
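A simple guard against this scenario is to compare the exit node's sustained receive rate against a capacity budget. A minimal sketch; the capacity and threshold below are illustrative assumptions, not Tailscale defaults:

```python
# Sketch: flagging an exit node whose receive rate approaches a link
# capacity budget, per the bottleneck scenario above. The capacity and
# saturation fraction are illustrative assumptions, not Tailscale defaults.

LINK_CAPACITY_BPS = 125_000_000      # 1 Gbit/s link, expressed in bytes/sec
SATURATION_FRACTION = 0.8            # alert at 80% of capacity

def exit_node_saturated(rx_bytes_per_sec: float) -> bool:
    """True when sustained inbound traffic exceeds the alert budget."""
    return rx_bytes_per_sec > LINK_CAPACITY_BPS * SATURATION_FRACTION

print(exit_node_saturated(110_000_000))  # True: above the 100 MB/s budget
print(exit_node_saturated(40_000_000))   # False: well under budget
```

In practice the input would be a rate derived from tailscale.bytes.received.total over a window long enough to smooth bursts; alerting on instantaneous samples causes flapping.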

Network flow logs now include node information automatically (info)