Accessing logs in Cloud Logging
The Cloud Service Mesh pages provide links to three different types of logs in Cloud Logging: application logs, error logs, and traffic logs.
Accessing application logs
To view application logs for a Service during a specified time span, follow these steps:
Go to the Cloud Service Mesh page in the Google Cloud console.
Under Services, select the name of the Service you want to inspect.
Go to the Metrics page.
Specify a time span from the Time Span dropdown menu or set a custom span with the timeline.
Click View application logs.
The application logs are the logs generated by your own application code. They are attached to the corresponding monitored resource (k8s_container or gce_instance) that your application uses.
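If you prefer the command line, you can read the same application logs with gcloud. The following is a sketch, assuming a workload running in the default namespace of a GKE cluster; adjust the resource labels to match your deployment:
gcloud logging read 'resource.type="k8s_container" AND resource.labels.namespace_name="default"' --limit=5 --format=json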
Accessing error logs
To view error logs for a Service during a specified time span, follow these steps:
In Google Cloud console, go to the Cloud Service Mesh page.
Under Services, select the name of the Service you want to inspect.
Go to the Diagnostics page.
Specify a time span from the Time Span dropdown menu or set a custom span with the timeline.
In the upper-right corner of the window, click Open in Logging.
Accessing traffic logs
To view traffic logs (called access logs in Istio) for a Service during a specified time span, follow these steps:
In Google Cloud console, go to the Cloud Service Mesh page.
Under Services, select the name of the Service you want to inspect.
Go to the Metrics page.
Specify a time span from the Time Span dropdown menu or set a custom span with the timeline.
Under Select a filter option, click View traffic logs.
The traffic log is named server-accesslog-stackdriver and is attached to the corresponding monitored resource (k8s_container or gce_instance) that your service uses. The traffic log contains the following information:
- HTTP request properties, such as ID, URL, size, latency, and common headers.
- Source and destination workload information, such as name, namespace, identity, and common labels.
- If tracing is enabled, trace information, such as sampling, trace ID, and span ID.
An example log entry looks like the following:
{
  insertId: "1awb4hug5pos2qi"
  httpRequest: {
    requestMethod: "GET"
    requestUrl: "YOUR-INGRESS/productpage"
    requestSize: "952"
    status: 200
    responseSize: "5875"
    remoteIp: "10.8.0.44:0"
    serverIp: "10.56.4.25:9080"
    latency: "1.587232023s"
    protocol: "http"
  }
  resource: {
    type: "k8s_container"
    labels: {
      location: "us-central1-a"
      project_id: "YOUR-PROJECT"
      pod_name: "productpage-v1-76589d9fdc-ptnt9"
      cluster_name: "YOUR-CLUSTER-NAME"
      container_name: "productpage"
      namespace_name: "default"
    }
  }
  timestamp: "2020-04-28T19:55:21.056759Z"
  severity: "INFO"
  labels: {
    destination_principal: "spiffe://cluster.local/ns/default/sa/bookinfo-productpage"
    response_flag: "-"
    destination_service_host: "productpage.default.svc.cluster.local"
    source_app: "istio-ingressgateway"
    service_authentication_policy: "MUTUAL_TLS"
    source_name: "istio-ingressgateway-5ff85d8dd8-mwplb"
    mesh_uid: "YOUR-MESH-UID"
    request_id: "021ce752-9001-4ac6-b6d6-3b15f5d3632"
    destination_namespace: "default"
    source_principal: "spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"
    destination_workload: "productpage-v1"
    destination_version: "v1"
    source_namespace: "istio-system"
    source_workload: "istio-ingressgateway"
    destination_name: "productpage-v1-76589d9fdc-ptnt9"
    destination_app: "productpage"
  }
  trace: "projects/YOUR-PROJECT/traces/d4197f59b7a43e3aeff3571bac99d536"
  receiveTimestamp: "2020-04-28T19:55:32.185229100Z"
  spanId: "43226343ca2bb2b1"
  traceSampled: true
  logName: "projects/YOUR-PROJECT/logs/server-accesslog-stackdriver"
}
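To narrow the traffic log to a single service in the Logs Explorer, you can filter on the log name and a destination label. The following is a sketch using the productpage service from the example entry above; replace PROJECT_ID and the host value with your own:
logName="projects/PROJECT_ID/logs/server-accesslog-stackdriver"
labels.destination_service_host="productpage.default.svc.cluster.local"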
Interpret Cloud Service Mesh telemetry
The following sections explain how to check the status of your mesh and review the telemetry that contains helpful details for troubleshooting.
Interpret control plane metrics
When you install Cloud Service Mesh with the in-cluster control plane, istiod exports metrics to Google Cloud Observability for monitoring by default. istiod prefixes these metrics with istio.io/control and gives insight into the control plane state, such as the number of proxies connected to each control plane instance, configuration events, pushes, and validations.
To observe or troubleshoot the control plane, use the following steps.
Load a sample dashboard:
git clone https://github.com/GoogleCloudPlatform/monitoring-dashboard-samples && cd monitoring-dashboard-samples && git checkout servicemesh
Install the Cloud Service Mesh dashboard:
gcloud monitoring dashboards create --config-from-file=dashboards/servicemesh/anthos-service-mesh-control-plane-monitoring.json
Look for a dashboard named Istio Control Plane Dashboard in the list. For more information, see Viewing the installed dashboard.
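To verify the installation from the command line instead, you can list your dashboards and check for the display name; a quick sketch using gcloud:
gcloud monitoring dashboards list --format='value(displayName)'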
For the full list of metrics available, see Exported metrics.
Diagnose configuration delays
The following steps explain how to use the pilot_proxy_convergence_time metric to diagnose a delay between a configuration change and all proxies converging.
Run a shell command in a pod:
kubectl exec -it $(kubectl get pod -l app=pilot -o jsonpath='{.items[0].metadata.name}' -n istio-system) -n istio-system -c istio-proxy -- bash
Access localhost:15014 and grep for convergence in metrics:
curl http://localhost:15014/metrics | grep convergence
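You can also collapse the two steps into a single command; the pipe to grep runs in your local shell, so only curl executes inside the pod:
kubectl exec $(kubectl get pod -l app=pilot -o jsonpath='{.items[0].metadata.name}' -n istio-system) -n istio-system -c istio-proxy -- curl -s http://localhost:15014/metrics | grep convergence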
Interpret Google Cloud Observability access logs
The following information explains how to use the Google Cloud Observability access logs to troubleshoot connection problems. Google Cloud Observability access/traffic logs are enabled by default.
Cloud Service Mesh exports data into Google Cloud Observability access logs that can help you debug the following types of problems:
- Traffic flow and failures
- End-to-end request routing
Google Cloud Observability access logs are enabled by default for Cloud Service Mesh installations on Google Kubernetes Engine. You can enable Google Cloud Observability access logs by re-running asmcli install. Use the same options that you originally installed with, but omit the custom overlay that disabled Stackdriver.
There are two types of access logs:
Server access logs give a server-side view of requests. They are located under server-accesslog-stackdriver, attached to the k8s_container monitored resource. Use the following URL syntax to display server-side access logs:
https://console.cloud.google.com/logs/viewer?advancedFilter=logName="projects/PROJECT_ID/logs/server-accesslog-stackdriver"&project=PROJECT_ID
Client access logs give a client-side view of requests. They are located under client-accesslog-stackdriver, attached to the k8s_pod monitored resource. Use the following URL syntax to display client-side access logs:
https://console.cloud.google.com/logs/viewer?advancedFilter=logName="projects/PROJECT_ID/logs/client-accesslog-stackdriver"&project=PROJECT_ID
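Equivalently, you can read either log from the command line with gcloud. The following sketch pulls the client-side view; replace PROJECT_ID, or swap in server-accesslog-stackdriver for the server-side view:
gcloud logging read 'logName="projects/PROJECT_ID/logs/client-accesslog-stackdriver"' --limit=10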
Access logs contain the following information:
- HTTP request properties, such as ID, URL, size, latency, and common headers.
- Source and destination workload information, such as name, namespace, identity, and common labels.
- Source and destination canonical service and revision information.
- If tracing is enabled, the logs contain trace information, such as sampling, trace ID, and span ID.
The information displayed in the Google Cloud Observability access logs originates from Envoy access logs when you enable them in the Istio configuration. The logs contain the following headers:
- route_name
- upstream_cluster
- X-Envoy-Original-Path
- X-Envoy-Original-Host
This is an example log entry:
{ "insertId": "1j84zg8g68vb62z", "httpRequest": { "requestMethod": "GET", "requestUrl": "http://35.235.89.201:80/productpage", "requestSize": "795", "status": 200, "responseSize": "7005", "remoteIp": "10.168.0.26:0", "serverIp": "10.36.3.153:9080", "latency": "0.229384205s", "protocol": "http" }, "resource": { "type": "k8s_container", "labels": { "cluster_name": "istio-e2e22", "namespace_name": "istio-bookinfo-1-68819", "container_name": "productpage", "project_id": "***", "location": "us-west2-a", "pod_name": "productpage-v1-64794f5db4-8xbtf" } }, "timestamp": "2020-08-13T21:37:42.963881Z", "severity": "INFO", "labels": { "protocol": "http", "upstream_host": "127.0.0.1:9080", "source_canonical_service": "istio-ingressgateway", "source_namespace": "istio-system", "x-envoy-original-path": "", "source_canonical_revision": "latest", "connection_id": "32", "upstream_cluster": "inbound|9080|http|productpage.istio-bookinfo-1-68819.svc.cluster.local", "requested_server_name": "outbound_.9080_._.productpage.istio-bookinfo-1-68819.svc.cluster.local", "destination_version": "v1", "destination_workload": "productpage-v1", "source_workload": "istio-ingressgateway", "destination_canonical_revision": "v1", "mesh_uid": "cluster.local", "source_principal": "spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account", "x-envoy-original-dst-host": "", "service_authentication_policy": "MUTUAL_TLS", "destination_principal": "spiffe://cluster.local/ns/istio-bookinfo-1-68819/sa/bookinfo-productpage", "response_flag": "-", "log_sampled": "false", "destination_service_host": "productpage.istio-bookinfo-1-68819.svc.cluster.local", "destination_name": "productpage-v1-64794f5db4-8xbtf", "destination_canonical_service": "productpage", "destination_namespace": "istio-bookinfo-1-68819", "source_name": "istio-ingressgateway-6845f6d664-lnfvp", "source_app": "istio-ingressgateway", "destination_app": "productpage", "request_id": "39013650-4e62-9be2-9d25-78682dd27ea4", "route_name": "default" }, "logName": "projects/***/logs/server-accesslog-stackdriver", "trace": "projects/***t/traces/466d77d15753cb4d7749ba5413b5f70f", "receiveTimestamp": "2020-08-13T21:37:48.758673203Z", "spanId": "633831cb1fda4fd5", "traceSampled": true }
You can use this log in various ways:
- Integrate with Cloud Trace, which is an optional feature in Cloud Service Mesh.
- Export traffic logs to BigQuery, where you can run queries such as selecting all requests that take more than 5 seconds.
- Create log-based metrics (see the example after this list).
- Troubleshoot 404 and 503 errors.
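For example, the following sketch exports the traffic logs to BigQuery and creates a log-based metric that counts 5xx responses. The sink, dataset, and metric names are placeholders, and the dataset must already exist:
# Route traffic logs to a BigQuery dataset (placeholder names).
gcloud logging sinks create mesh-traffic-sink \
  bigquery.googleapis.com/projects/PROJECT_ID/datasets/mesh_logs \
  --log-filter='logName="projects/PROJECT_ID/logs/server-accesslog-stackdriver"'
# Count server errors recorded in the traffic log.
gcloud logging metrics create mesh_5xx_count \
  --description="5xx responses in the mesh traffic log" \
  --log-filter='logName="projects/PROJECT_ID/logs/server-accesslog-stackdriver" AND httpRequest.status>=500'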
Troubleshoot 404 and 503 errors
The following example explains how to use this log to troubleshoot when a request fails with a 404 or 503 response code.
In the client access log, search for an entry like the following:
httpRequest: {
  requestMethod: "GET"
  requestUrl: "://IP_ADDRESS/src/Util/PHP/eval-stdin.php"
  requestSize: "2088"
  status: 404
  responseSize: "75"
  remoteIp: "10.168.0.26:34165"
  serverIp: "10.36.3.149:8080"
  latency: "0.000371440s"
  protocol: "http"
}
Navigate to the labels in the access log entry. Find the response_flag field, which looks like the following:
response_flag: "NR"
The NR value is an acronym for NoRoute, which means no route was found for the destination or there was no matching filter chain for a downstream connection. Similarly, you can use the response_flag label to troubleshoot 503 errors.

If you see 503 errors in both the client and server access logs, check that the port names set for each service match the name of the protocol in use between them. For example, if a golang binary client connects to a golang server using HTTP, but the port is named http2, the protocol will not auto-negotiate correctly.
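To surface every flagged request at once, rather than scanning entries by hand, you can filter the access logs for any response_flag other than the default "-". A sketch:
gcloud logging read 'logName="projects/PROJECT_ID/logs/server-accesslog-stackdriver" AND labels.response_flag!="-"' --limit=10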
For more information, see response flags.
Interpret Envoy logs
The following steps explain how to use the Envoy proxy access logs to show traffic between both ends of a connection for troubleshooting purposes.
Envoy access logs are useful for diagnosing issues like:
- Traffic flow and failures
- End-to-end request routing
Access logs are not enabled by default in Cloud Service Mesh and can only be enabled globally across the entire mesh.
You can troubleshoot connection/request failures by generating activity in your application that triggers an HTTP request, then inspecting the associated request in the source or destination logs.
If you trigger a request and it appears in the source proxy logs, it indicates that iptables traffic redirection is working correctly and the Envoy proxy is handling traffic. If you see errors in the logs, generate an Envoy configuration dump and check the Envoy cluster configuration to ensure it is correct. If you see the request but the log has no errors, check the destination proxy logs instead.
If the request appears in the destination proxy logs, it indicates that the mesh itself is working correctly. If you see an error instead, run an Envoy configuration dump and verify the correct values for the traffic port set in the listener configuration.
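Both checks rely on an Envoy configuration dump. One way to generate it is through the pilot-agent interface described later on this page; POD_NAME is a placeholder:
kubectl exec POD_NAME -c istio-proxy -- pilot-agent request GET config_dump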
If the problem persists after performing the previous steps, Envoy might be unable to auto-negotiate the protocol between the sidecar and its application pod. Ensure that the Kubernetes service port name, for example http-80, matches the protocol that the application uses.
Use Logs Explorer to query logs
You can use the Logs Explorer interface to query specific access logs. For example, to query all requests that have MUTUAL_TLS enabled and use protocol grpc, append the following to the server access logs query:
labels.protocol="grpc" labels.service_authentication_policy="MUTUAL_TLS"
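The same filter works from the command line when combined with the server access log name; a sketch:
gcloud logging read 'logName="projects/PROJECT_ID/logs/server-accesslog-stackdriver" AND labels.protocol="grpc" AND labels.service_authentication_policy="MUTUAL_TLS"' --limit=10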
Set an access log policy
To configure proxy logging for managed Cloud Service Mesh, see Envoy access logs.
To set an access log policy for Cloud Service Mesh with the in-cluster control plane:
Create an IstioOperator custom overlay file that includes the applicable AccessLogPolicyConfig values for your scenario.
Pass this file to asmcli using the --custom_overlay option to update the in-cluster control plane configuration. For information on running asmcli install with a custom overlay file, see Install with optional features. A sketch of both steps follows.
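A minimal sketch of both steps, assuming an overlay that turns on access log collection with a 12-hour log window. The accessLogPolicy fields shown here and the asmcli flags are assumptions to verify against the AccessLogPolicyConfig reference and your asmcli version:
# Write a custom overlay that sets the access log policy (assumed fields).
cat <<'EOF' > access-log-policy.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    telemetry:
      v2:
        stackdriver:
          accessLogPolicy:
            enabled: true            # assumed field, see AccessLogPolicyConfig
            logWindowDuration: "12h" # assumed field, see AccessLogPolicyConfig
EOF
# Re-run asmcli with the overlay (placeholder project, cluster, and location).
./asmcli install \
  --project_id PROJECT_ID \
  --cluster_name CLUSTER_NAME \
  --cluster_location CLUSTER_LOCATION \
  --custom_overlay access-log-policy.yaml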
View service or workload-specific information
If you have an issue with a specific service or workload rather than a mesh-wide problem, inspect the individual Envoy proxies and gather relevant information from them. To gather information about a particular workload and its proxies, you can use pilot-agent:
kubectl exec POD_NAME -c istio-proxy -- pilot-agent request GET SCOPE
In the example, SCOPE is one of the following:
- certs - Certificates within the Envoy instance
- clusters - Clusters with Envoy configured
- config_dump - Dumps the Envoy configuration
- listeners - Listeners with Envoy configured
- logging - View and change logging settings
- stats - Envoy statistics
- stats/prometheus - Envoy statistics as Prometheus records
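For example, the following sketch pulls Envoy's retry statistics for a workload; POD_NAME is a placeholder and the grep pattern is just one way to narrow the output:
kubectl exec POD_NAME -c istio-proxy -- pilot-agent request GET stats | grep upstream_rq_retry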
View proxy socket states
You can directly examine the state of Envoy proxy sockets by using the following process.
Display a list of established sockets, including sockets in the TIME_WAIT state, which can negatively affect scalability if their count is high:
kubectl exec POD_NAME -c istio-proxy -- ss -anopim
Display a summary of socket statistics:
kubectl exec POD_NAME -c istio-proxy -- ss -s
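To quantify the TIME_WAIT sockets mentioned in the first step, you can apply the ss state filter and count the matches (a sketch; -H suppresses the header line in recent iproute2 releases):
kubectl exec POD_NAME -c istio-proxy -- ss -H -ant state time-wait | wc -l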
For more information, see An Introduction to the ss Command.
istio-proxy and istio-init logs
In addition, retrieve the istio-proxy logs and review the contents for any errors that might suggest the cause of the problem:
kubectl logs POD_NAME -c istio-proxy
You can do the same for the init container:
kubectl logs POD_NAME -c istio-init
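If either container has restarted, the logs from the previous instance often contain the relevant error; retrieve them with the --previous flag:
kubectl logs POD_NAME -c istio-proxy --previous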
What's next
Integrate with Cloud Trace. Cloud Trace is an optional feature in Cloud Service Mesh.