client_operation_time
The end-to-end duration of a Redis operation as observed by the client.

Summary
Tracks the total time taken to execute Redis operations from the client perspective, including network round-trip time and server processing time. This end-to-end latency metric helps identify whether slowdowns originate from network issues, Redis server performance, or client-side blocking. Spikes in this metric warrant investigation into command complexity, network conditions, or server-side resource contention.
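The end-to-end measurement described above can be sketched with a simple client-side timer. The `timed_call` helper and the stand-in operation below are illustrative, not part of any Redis client library; in practice the lambda would wrap a real blocking client call such as a Redis GET.

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def timed_call(op: Callable[[], T]) -> tuple[T, float]:
    """Run one client operation and return (result, elapsed_ms).

    The clock runs on the client, so the elapsed time includes network
    round-trip plus server processing time -- the same end-to-end view
    as client_operation_time.
    """
    start = time.perf_counter()
    result = op()
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return result, elapsed_ms

# Stand-in for a real client call, e.g. redis_client.get("key")
# (hypothetical; any blocking client operation works the same way).
result, ms = timed_call(lambda: "cached-value")
```

Comparing this client-side number against the server's own latency metrics is what separates network problems from server-side slowness.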
Technical Annotations (26)
Configuration Parameters (2)
- connection_timeout (recommended: 30 seconds)
- query_timeout (recommended: 60 seconds)

CLI Commands (1)
gcloud spanner operations list --instance=INSTANCE --database=DATABASE --filter="@TYPE:UpdateDatabaseDdlMetadata" (diagnostic)

Technical References (23)
- query latency SLO (concept)
- user experience impact (concept)
- iSQ (concept)
- slow query threshold (concept)
- query time (concept)
- multi-tenancy (concept)
- I/O resource allocation (concept)
- CPU utilization (concept)
- workload spike (concept)
- Query insights dashboard (component)
- Gemini Cloud Assist (component)
- lock wait ratio (concept)
- active queries (concept)
- secondary index (component)
- query execution plan (component)
- FORCE_INDEX directive (component)
- optimizer statistics (component)
- statistics package (component)
- cross apply operator (component)
- distributed cross apply operator (component)
- index scan operator (component)
- primary key (concept)
- base table (component)

Related Insights (26)
Applications performing Scan operations instead of Query with key conditions consume excessive capacity and exhibit 50-1000x higher latency compared to targeted queries, causing throttling and slow response times.
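An in-memory analogy (plain Python, not DynamoDB API calls) shows why the gap is so large: a scan must examine every item, while a key-conditioned lookup goes straight to one.

```python
# In-memory analogy for Scan vs. key-conditioned Query:
# the scan touches every item; the keyed lookup touches exactly one.
items = {f"user#{i}": {"id": i} for i in range(100_000)}

def scan_for(value_id: int) -> dict:
    # O(n): examines every item until a match, like a table Scan.
    for item in items.values():
        if item["id"] == value_id:
            return item
    raise KeyError(value_id)

def query_by_key(key: str) -> dict:
    # O(1): jumps directly to the item, like a Query on the key.
    return items[key]

# Same answer, wildly different cost -- the scan did 100,000x the work.
assert scan_for(99_999) == query_by_key("user#99999")
```

The same asymmetry is what drives the capacity consumption and throttling described above.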
Event loop blocking creates a false appearance of cache ineffectiveness: Redis cache hits are fast individually, but serial request processing prevents concurrent cache lookups from improving overall throughput during traffic bursts.
Client-measured latency exceeds service-side cassandra_client_request_read_time/write_time by large margins, indicating network overhead is the bottleneck rather than database processing. This misdiagnosis leads to incorrect remediation efforts.
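A small helper (illustrative, not part of any Cassandra driver) makes the comparison concrete: subtracting the service-side time from the client-observed time shows how much of the latency sits outside the database.

```python
def network_overhead(client_ms: float, server_ms: float) -> tuple[float, float]:
    """Split client-observed latency into server time and everything else
    (network round-trip, client queuing, serialization).

    Returns (overhead_ms, overhead share of total client latency).
    """
    overhead_ms = max(client_ms - server_ms, 0.0)
    share = overhead_ms / client_ms if client_ms else 0.0
    return overhead_ms, share

# Example: the client measures 12 ms, while the service-side read time
# reports only 2 ms -- most of the latency is not database processing.
overhead, share = network_overhead(12.0, 2.0)
```

When the overhead share dominates, tuning the database is the wrong remediation; the network path or client is the bottleneck.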
Frequent Cypher query replanning events indicate schema changes, statistics updates, or cache eviction forcing query plan regeneration, adding CPU overhead and potentially causing performance variability.
When Redis connection pool exhausts under high concurrency, blocking Redis operations (even from async endpoints) stall the FastAPI event loop, causing serial-like request processing and tail latency spikes despite low CPU utilization.
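A runnable sketch of this failure mode, with `time.sleep` standing in for a blocking Redis call and hypothetical handler names: four concurrent requests complete serially when the call blocks the event loop, and roughly in parallel when the call is offloaded with `asyncio.to_thread`.

```python
import asyncio
import time

def blocking_redis_call() -> None:
    # Stand-in for a synchronous client call waiting on a socket
    # (simulated here with time.sleep so the sketch is self-contained).
    time.sleep(0.05)

async def handler_blocking() -> None:
    blocking_redis_call()  # stalls the whole event loop while it waits

async def handler_offloaded() -> None:
    await asyncio.to_thread(blocking_redis_call)  # loop stays free

async def measure(handler) -> float:
    start = time.perf_counter()
    await asyncio.gather(*(handler() for _ in range(4)))
    return time.perf_counter() - start

serial = asyncio.run(measure(handler_blocking))      # ~4 x 0.05 s
concurrent = asyncio.run(measure(handler_offloaded)) # ~0.05 s
```

This is why tail latency spikes can appear at low CPU utilization: the loop is idle-but-blocked, not busy.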
Long-running import/export operations block other critical operations including automated daily backups, creating backup gaps and recovery risks. Only one import/export/backup can run at a time per instance.
Query Insights for Enterprise Plus reveals specific wait events (disk I/O, locks, etc.) causing query slowdowns that aggregate metrics alone cannot diagnose. Wait event analysis provides granular root cause identification for query performance issues.
When CPU utilization approaches reserved cores capacity, query performance degrades and connection handling slows. Understanding reserved vs. utilization metrics is critical for capacity planning.
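A minimal capacity-planning check, assuming both metrics are available to the caller; the 0.8 warning threshold is an illustrative default, not a documented service limit.

```python
def cpu_headroom(utilized_cores: float, reserved_cores: float,
                 warn_ratio: float = 0.8) -> tuple[float, bool]:
    """Compare CPU utilization against reserved capacity.

    Returns (utilization ratio, whether it crosses the warning line).
    warn_ratio=0.8 is an illustrative planning threshold.
    """
    ratio = utilized_cores / reserved_cores
    return ratio, ratio >= warn_ratio

# 3.4 cores used out of 4 reserved -> 85% utilized, above the warning line.
ratio, near_limit = cpu_headroom(3.4, 4.0)
```

Tracking this ratio over time gives an early scaling signal before query performance and connection handling degrade.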
With Query Insights Enterprise Plus 30-day retention, comparing query plans over time reveals when optimizer choices change and cause performance degradation without any application code changes. This indicates statistics staleness or plan instability.
CPU-intensive queries on read replicas (sorting, regex, complex functions) can cause replication lag by consuming CPU needed for replication apply workers, especially when replica vCPUs are insufficient.
Long-running or sub-optimal queries consuming excessive resources can be identified and terminated through Query Insights Enterprise Plus to free resources, unblock critical operations, and prevent resource exhaustion.
Sustained high query rates combined with increased latency indicate the instance is approaching throughput limits. This pattern often precedes resource exhaustion and requires proactive scaling.
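One way to detect this pattern programmatically is to require both series to grow together; the window size and 1.2x growth factor below are illustrative thresholds, not service limits.

```python
def approaching_limits(qps: list[float], p99_ms: list[float]) -> bool:
    """Flag the saturation pattern: sustained QPS growth AND rising latency.

    Compares the mean of the last three samples against the first three;
    a 1.2x increase in both series counts as the pattern.
    """
    def grew(series: list[float], factor: float = 1.2) -> bool:
        head = sum(series[:3]) / 3
        tail = sum(series[-3:]) / 3
        return tail > head * factor

    return grew(qps) and grew(p99_ms)

# Throughput up ~50% while p99 latency doubles: the pre-exhaustion signal.
assert approaching_limits(
    qps=[100, 105, 110, 130, 145, 155],
    p99_ms=[20, 21, 22, 30, 38, 44],
)
```

Rising latency at flat QPS points elsewhere (locks, plans); rising QPS at flat latency is healthy growth. Only the combination indicates approaching throughput limits.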
Sudden increases in query operation time indicate performance degradation from various causes including lock contention, cache misses, inefficient queries, or resource pressure.
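A simple baseline-ratio heuristic (illustrative, not a product feature) can surface such sudden increases in a series of operation-time samples:

```python
def latency_spikes(samples_ms: list[float], window: int = 5,
                   factor: float = 2.0) -> list[int]:
    """Return indices where operation time jumps above `factor` times
    the mean of the preceding `window` samples.

    Window size and factor are illustrative tuning knobs.
    """
    spikes = []
    for i in range(window, len(samples_ms)):
        baseline = sum(samples_ms[i - window:i]) / window
        if samples_ms[i] > baseline * factor:
            spikes.append(i)
    return spikes

# Steady ~10 ms with one jump to 40 ms at index 7.
series = [10, 11, 9, 10, 12, 10, 11, 40, 10, 11]
# latency_spikes(series) -> [7]
```

Each flagged index is a starting point for the root-cause checks above: lock contention, cache misses, inefficient queries, or resource pressure.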
Automated and manual backup operations can cause temporary performance degradation on the primary instance due to increased disk I/O and CPU usage, especially for large databases without serverless export enabled.