Set up a proxyless gRPC service mesh on GKE
This page describes how to deploy a sample proxyless gRPC client and server in a Cloud Service Mesh.
Prerequisites
As a starting point, this guide assumes that you have already:
- Created a GKE cluster and registered it to a fleet (a hedged sketch follows this list).
- Installed the custom resource definitions.
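If you still need to complete these prerequisites, the following commands show one possible way to create and register a cluster. This is a non-authoritative sketch: the cluster name my-cluster, the zone us-central1-a, and PROJECT_ID are placeholder assumptions, and the exact custom resource definitions to install depend on the prerequisite guide that you follow.
# Sketch only: create a zonal cluster with Workload Identity enabled.
gcloud container clusters create my-cluster \
    --zone=us-central1-a \
    --workload-pool=PROJECT_ID.svc.id.goog
# Register the cluster to a fleet.
gcloud container fleet memberships register my-cluster \
    --gke-cluster=us-central1-a/my-cluster \
    --enable-workload-identity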
Requirements
This section lists the version requirements for the supported languages:
- gRPC in C++: version 1.62.0 or later
- gRPC in Java: version 1.65.0 or later
- gRPC in Go: version 1.65.0 or later
- gRPC in Python: version 1.65.0 or later
Configure the service
Deploy a gRPC service:
C++
kubectl apply -f - <<EOF
---
apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  selector:
    app: psm-grpc-server
  ports:
  - port: 50051
    targetPort: 50051
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: psm-grpc-server
  labels:
    app: psm-grpc-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: psm-grpc-server
  template:
    metadata:
      labels:
        app: psm-grpc-server
    spec:
      containers:
      - name: psm-grpc-server
        image: grpc/csm-o11y-example-cpp-server:latest
        imagePullPolicy: Always
        args:
        - "--port=50051"
        ports:
        - containerPort: 50051
        env:
        - name: GRPC_XDS_BOOTSTRAP
          value: "/tmp/grpc-xds/td-grpc-bootstrap.json"
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NAMESPACE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: CSM_WORKLOAD_NAME
          value: psm-grpc-server
        - name: OTEL_RESOURCE_ATTRIBUTES
          value: k8s.pod.name=\$(POD_NAME),k8s.namespace.name=\$(NAMESPACE_NAME),k8s.container.name=\$(CONTAINER_NAME)
        volumeMounts:
        - mountPath: /tmp/grpc-xds/
          name: grpc-td-conf
          readOnly: true
      initContainers:
      - name: grpc-td-init
        image: gcr.io/trafficdirector-prod/td-grpc-bootstrap:0.16.0
        imagePullPolicy: Always
        args:
        - "--output=/tmp/bootstrap/td-grpc-bootstrap.json"
        - "--vpc-network-name=default"
        - "--xds-server-uri=trafficdirector.googleapis.com:443"
        - "--generate-mesh-id"
        volumeMounts:
        - mountPath: /tmp/bootstrap/
          name: grpc-td-conf
      volumes:
      - name: grpc-td-conf
        emptyDir:
          medium: Memory
EOF
Java
kubectl apply -f - <<EOF
---
apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  selector:
    app: psm-grpc-server
  ports:
  - port: 50051
    targetPort: 50051
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: psm-grpc-server
  labels:
    app: psm-grpc-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: psm-grpc-server
  template:
    metadata:
      labels:
        app: psm-grpc-server
    spec:
      containers:
      - name: psm-grpc-server
        image: grpc/csm-o11y-example-java-server:latest
        imagePullPolicy: Always
        args:
        - "50051"
        - "9464"
        ports:
        - containerPort: 50051
        env:
        - name: GRPC_XDS_BOOTSTRAP
          value: "/tmp/grpc-xds/td-grpc-bootstrap.json"
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NAMESPACE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: CSM_WORKLOAD_NAME
          value: psm-grpc-server
        - name: OTEL_RESOURCE_ATTRIBUTES
          value: k8s.pod.name=\$(POD_NAME),k8s.namespace.name=\$(NAMESPACE_NAME),k8s.container.name=\$(CONTAINER_NAME)
        volumeMounts:
        - mountPath: /tmp/grpc-xds/
          name: grpc-td-conf
          readOnly: true
      initContainers:
      - name: grpc-td-init
        image: gcr.io/trafficdirector-prod/td-grpc-bootstrap:0.16.0
        imagePullPolicy: Always
        args:
        - "--output=/tmp/bootstrap/td-grpc-bootstrap.json"
        - "--vpc-network-name=default"
        - "--xds-server-uri=trafficdirector.googleapis.com:443"
        - "--generate-mesh-id"
        volumeMounts:
        - mountPath: /tmp/bootstrap/
          name: grpc-td-conf
      volumes:
      - name: grpc-td-conf
        emptyDir:
          medium: Memory
EOF
Go
kubectl apply -f - <<EOF
---
apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  selector:
    app: psm-grpc-server
  ports:
  - port: 50051
    targetPort: 50051
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: psm-grpc-server
  labels:
    app: psm-grpc-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: psm-grpc-server
  template:
    metadata:
      labels:
        app: psm-grpc-server
    spec:
      containers:
      - name: psm-grpc-server
        image: grpc/csm-o11y-example-go-server:latest
        imagePullPolicy: Always
        args:
        - "--port=50051"
        ports:
        - containerPort: 50051
        env:
        - name: GRPC_XDS_BOOTSTRAP
          value: "/tmp/grpc-xds/td-grpc-bootstrap.json"
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NAMESPACE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: CSM_WORKLOAD_NAME
          value: psm-grpc-server
        - name: OTEL_RESOURCE_ATTRIBUTES
          value: k8s.pod.name=\$(POD_NAME),k8s.namespace.name=\$(NAMESPACE_NAME),k8s.container.name=\$(CONTAINER_NAME)
        volumeMounts:
        - mountPath: /tmp/grpc-xds/
          name: grpc-td-conf
          readOnly: true
      initContainers:
      - name: grpc-td-init
        image: gcr.io/trafficdirector-prod/td-grpc-bootstrap:0.16.0
        imagePullPolicy: Always
        args:
        - "--output=/tmp/bootstrap/td-grpc-bootstrap.json"
        - "--vpc-network-name=default"
        - "--xds-server-uri=trafficdirector.googleapis.com:443"
        - "--generate-mesh-id"
        volumeMounts:
        - mountPath: /tmp/bootstrap/
          name: grpc-td-conf
      volumes:
      - name: grpc-td-conf
        emptyDir:
          medium: Memory
EOF
Python
kubectl apply -f - <<EOF
---
apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  selector:
    app: psm-grpc-server
  ports:
  - port: 50051
    targetPort: 50051
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: psm-grpc-server
  labels:
    app: psm-grpc-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: psm-grpc-server
  template:
    metadata:
      labels:
        app: psm-grpc-server
    spec:
      containers:
      - name: psm-grpc-server
        image: grpc/csm-o11y-example-python-server:latest
        imagePullPolicy: Always
        args:
        - "--port=50051"
        ports:
        - containerPort: 50051
        env:
        - name: GRPC_XDS_BOOTSTRAP
          value: "/tmp/grpc-xds/td-grpc-bootstrap.json"
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NAMESPACE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: CSM_WORKLOAD_NAME
          value: psm-grpc-server
        - name: OTEL_RESOURCE_ATTRIBUTES
          value: k8s.pod.name=\$(POD_NAME),k8s.namespace.name=\$(NAMESPACE_NAME),k8s.container.name=\$(CONTAINER_NAME)
        volumeMounts:
        - mountPath: /tmp/grpc-xds/
          name: grpc-td-conf
          readOnly: true
      initContainers:
      - name: grpc-td-init
        image: gcr.io/trafficdirector-prod/td-grpc-bootstrap:0.16.0
        imagePullPolicy: Always
        args:
        - "--output=/tmp/bootstrap/td-grpc-bootstrap.json"
        - "--vpc-network-name=default"
        - "--xds-server-uri=trafficdirector.googleapis.com:443"
        - "--generate-mesh-id"
        volumeMounts:
        - mountPath: /tmp/bootstrap/
          name: grpc-td-conf
      volumes:
      - name: grpc-td-conf
        emptyDir:
          medium: Memory
EOF
The response is similar to the following:
service/helloworld created
deployment.apps/psm-grpc-server created
Verify that the pods were created:
kubectl get pods
The response is similar to the following:
NAME                               READY   STATUS    RESTARTS   AGE
psm-grpc-server-65966bf76d-2wwxz   1/1     Running   0          13s
psm-grpc-server-65966bf76d-nbxd2   1/1     Running   0          13s
Wait until all of the pods are ready and have the status Running before you continue.
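If you prefer to wait from the command line instead of re-running kubectl get pods, a command like the following blocks until the server pods report ready. This is a convenience sketch: the label selector comes from the Deployment above, and the timeout value is an arbitrary choice.
# Sketch: block until the server pods are ready.
kubectl wait --for=condition=Ready pod -l app=psm-grpc-server --timeout=120s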
Deploy an HTTPRoute:
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: app1
spec:
  parentRefs:
  - name: helloworld
    kind: Service
    group: ""
  rules:
  - backendRefs:
    - name: helloworld
      port: 50051
EOF
This command creates an HTTPRoute named app1 that sends all RPCs to the helloworld service. The parentRef is also the helloworld service, which means that the HTTPRoute is attached to that service and handles all RPCs addressed to it. For proxyless gRPC, this means any client that sends RPCs on a gRPC channel to the target xds:///helloworld.default.svc.cluster.local:50051.
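To see how this attachment looks on the cluster, you can dump the route as YAML. The command below is a non-authoritative convenience; the status fields in the output depend on your Gateway API controller version.
# Sketch: inspect the parentRefs and backendRefs of the route you just created.
kubectl get httproute app1 -o yaml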
Verify that the new app1 HTTPRoute was created:
kubectl get httproute
The response is similar to the following:
NAME   HOSTNAMES   AGE
app1               72s
Configure the client
This section describes how to use a gRPC client to verify that Cloud Service Mesh routes traffic correctly in the mesh.
Run a gRPC client and direct it to use the routing configuration specified by the HTTPRoute:
C++
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: psm-grpc-client
  labels:
    app: psm-grpc-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: psm-grpc-client
  template:
    metadata:
      labels:
        app: psm-grpc-client
    spec:
      containers:
      - name: psm-grpc-client
        image: grpc/csm-o11y-example-cpp-server:latest
        imagePullPolicy: Always
        args:
        - "--target=xds:///helloworld.default.svc.cluster.local:50051"
        env:
        - name: GRPC_XDS_BOOTSTRAP
          value: "/tmp/grpc-xds/td-grpc-bootstrap.json"
        - name: CSM_WORKLOAD_NAME
          value: psm-grpc-client
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NAMESPACE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: CONTAINER_NAME
          value: psm-grpc-client
        - name: OTEL_RESOURCE_ATTRIBUTES
          value: k8s.pod.name=\$(POD_NAME),k8s.namespace.name=\$(NAMESPACE_NAME),k8s.container.name=\$(CONTAINER_NAME)
        volumeMounts:
        - mountPath: /tmp/grpc-xds/
          name: grpc-td-conf
          readOnly: true
      initContainers:
      - name: grpc-td-init
        image: gcr.io/trafficdirector-prod/td-grpc-bootstrap:0.16.0
        imagePullPolicy: Always
        args:
        - "--output=/tmp/bootstrap/td-grpc-bootstrap.json"
        - "--vpc-network-name=default"
        - "--xds-server-uri=trafficdirector.googleapis.com:443"
        - "--generate-mesh-id"
        volumeMounts:
        - mountPath: /tmp/bootstrap/
          name: grpc-td-conf
      volumes:
      - name: grpc-td-conf
        emptyDir:
          medium: Memory
EOF
Java
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: psm-grpc-client
  labels:
    app: psm-grpc-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: psm-grpc-client
  template:
    metadata:
      labels:
        app: psm-grpc-client
    spec:
      containers:
      - name: psm-grpc-client
        image: grpc/csm-o11y-example-java-server:latest
        imagePullPolicy: Always
        args:
        - "world"
        - "xds:///helloworld.default.svc.cluster.local:50051"
        - "9464"
        env:
        - name: GRPC_XDS_BOOTSTRAP
          value: "/tmp/grpc-xds/td-grpc-bootstrap.json"
        - name: CSM_WORKLOAD_NAME
          value: psm-grpc-client
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NAMESPACE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: CONTAINER_NAME
          value: psm-grpc-client
        - name: OTEL_RESOURCE_ATTRIBUTES
          value: k8s.pod.name=\$(POD_NAME),k8s.namespace.name=\$(NAMESPACE_NAME),k8s.container.name=\$(CONTAINER_NAME)
        volumeMounts:
        - mountPath: /tmp/grpc-xds/
          name: grpc-td-conf
          readOnly: true
      initContainers:
      - name: grpc-td-init
        image: gcr.io/trafficdirector-prod/td-grpc-bootstrap:0.16.0
        imagePullPolicy: Always
        args:
        - "--output=/tmp/bootstrap/td-grpc-bootstrap.json"
        - "--vpc-network-name=default"
        - "--xds-server-uri=trafficdirector.googleapis.com:443"
        - "--generate-mesh-id"
        volumeMounts:
        - mountPath: /tmp/bootstrap/
          name: grpc-td-conf
      volumes:
      - name: grpc-td-conf
        emptyDir:
          medium: Memory
EOF
Go
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: psm-grpc-client
  labels:
    app: psm-grpc-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: psm-grpc-client
  template:
    metadata:
      labels:
        app: psm-grpc-client
    spec:
      containers:
      - name: psm-grpc-client
        image: grpc/csm-o11y-example-go-server:latest
        imagePullPolicy: Always
        args:
        - "--target=xds:///helloworld.default.svc.cluster.local:50051"
        env:
        - name: GRPC_XDS_BOOTSTRAP
          value: "/tmp/grpc-xds/td-grpc-bootstrap.json"
        - name: CSM_WORKLOAD_NAME
          value: psm-grpc-client
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NAMESPACE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: CONTAINER_NAME
          value: psm-grpc-client
        - name: OTEL_RESOURCE_ATTRIBUTES
          value: k8s.pod.name=\$(POD_NAME),k8s.namespace.name=\$(NAMESPACE_NAME),k8s.container.name=\$(CONTAINER_NAME)
        volumeMounts:
        - mountPath: /tmp/grpc-xds/
          name: grpc-td-conf
          readOnly: true
      initContainers:
      - name: grpc-td-init
        image: gcr.io/trafficdirector-prod/td-grpc-bootstrap:0.16.0
        imagePullPolicy: Always
        args:
        - "--output=/tmp/bootstrap/td-grpc-bootstrap.json"
        - "--vpc-network-name=default"
        - "--xds-server-uri=trafficdirector.googleapis.com:443"
        - "--generate-mesh-id"
        volumeMounts:
        - mountPath: /tmp/bootstrap/
          name: grpc-td-conf
      volumes:
      - name: grpc-td-conf
        emptyDir:
          medium: Memory
EOF
Python
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: psm-grpc-client
  labels:
    app: psm-grpc-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: psm-grpc-client
  template:
    metadata:
      labels:
        app: psm-grpc-client
    spec:
      containers:
      - name: psm-grpc-client
        image: grpc/csm-o11y-example-python-server:latest
        imagePullPolicy: Always
        args:
        - "--target=xds:///helloworld.default.svc.cluster.local:50051"
        env:
        - name: GRPC_XDS_BOOTSTRAP
          value: "/tmp/grpc-xds/td-grpc-bootstrap.json"
        - name: CSM_WORKLOAD_NAME
          value: psm-grpc-client
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NAMESPACE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: CONTAINER_NAME
          value: psm-grpc-client
        - name: OTEL_RESOURCE_ATTRIBUTES
          value: k8s.pod.name=\$(POD_NAME),k8s.namespace.name=\$(NAMESPACE_NAME),k8s.container.name=\$(CONTAINER_NAME)
        volumeMounts:
        - mountPath: /tmp/grpc-xds/
          name: grpc-td-conf
          readOnly: true
      initContainers:
      - name: grpc-td-init
        image: gcr.io/trafficdirector-prod/td-grpc-bootstrap:0.16.0
        imagePullPolicy: Always
        args:
        - "--output=/tmp/bootstrap/td-grpc-bootstrap.json"
        - "--vpc-network-name=default"
        - "--xds-server-uri=trafficdirector.googleapis.com:443"
        - "--generate-mesh-id"
        volumeMounts:
        - mountPath: /tmp/bootstrap/
          name: grpc-td-conf
      volumes:
      - name: grpc-td-conf
        emptyDir:
          medium: Memory
EOF
The response is similar to the following:
deployment.apps/psm-grpc-client created
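Optionally, before you move on, confirm that the client pod is running and is sending RPCs. The label selector comes from the Deployment above, and the exact log lines depend on the example client image, so treat this as a sketch.
# Sketch: confirm the client pod is running and peek at its recent log output.
kubectl get pods -l app=psm-grpc-client
kubectl logs deploy/psm-grpc-client -c psm-grpc-client --tail=20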
Configure Google Cloud Managed Service for Prometheus (optional)
You can deploy the Google Cloud Managed Service for Prometheus PodMonitoring resource to export the metrics to Cloud Monitoring.
For servers, run the following command:
kubectl apply -f - <<EOF
apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
  name: psm-grpc-server-gmp
spec:
  selector:
    matchLabels:
      app: psm-grpc-server
  endpoints:
  - port: 9464
    interval: 10s
EOF
For clients, run the following command:
kubectl apply -f - <<EOF
apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
  name: psm-grpc-client-gmp
spec:
  selector:
    matchLabels:
      app: psm-grpc-client
  endpoints:
  - port: 9464
    interval: 10s
EOF
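To confirm that both PodMonitoring resources were accepted by the cluster, you can list them. This check is a sketch and assumes that the Managed Service for Prometheus custom resource definitions are installed in the cluster.
# Sketch: list the PodMonitoring resources created above.
kubectl get podmonitoring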
After you deploy the PodMonitoring resource, localhost:9464/metrics on each matching pod is scraped every 10 seconds, and the results are exported to Cloud Monitoring.
To view the metrics in Cloud Monitoring, go to the Metrics Explorer section of the Google Cloud console and select Prometheus Target > Grpc to find the metrics.
Verify the proxyless gRPC service mesh setup and mesh observability
Check the metrics output from the client pod:
kubectl exec $(kubectl get po | grep psm-grpc-client | awk '{print $1;}') -- /usr/bin/curl -s http://localhost:9464/metrics
The output shows the metrics (with the service mesh labels) that were scraped from the endpoint:
Defaulted container "psm-grpc-client" out of: psm-grpc-client, grpc-td-init (init)
# HELP exposer_transferred_bytes_total Transferred bytes to metrics services
# TYPE exposer_transferred_bytes_total counter
exposer_transferred_bytes_total 36047
# HELP exposer_scrapes_total Number of times metrics were scraped
# TYPE exposer_scrapes_total counter
exposer_scrapes_total 1
# HELP exposer_request_latencies Latencies of serving scrape requests, in microseconds
# TYPE exposer_request_latencies summary
exposer_request_latencies_count 1
exposer_request_latencies_sum 1246
exposer_request_latencies{quantile="0.5"} Nan
exposer_request_latencies{quantile="0.9"} Nan
exposer_request_latencies{quantile="0.99"} Nan
# HELP grpc_client_attempt_rcvd_total_compressed_message_size_By Compressed message bytes received per call attempt
# TYPE grpc_client_attempt_rcvd_total_compressed_message_size_By histogram
grpc_client_attempt_rcvd_total_compressed_message_size_By_count{csm_mesh_id="gsmmesh-35av-my-cluster-3-us-east7-c-35av6nnbi9jz",csm_remote_workload_canonical_service="unknown",csm_remote_workload_cluster_name="my-cluster-3",csm_remote_workload_location="us-east7-c",csm_remote_workload_name="psm-grpc-server",csm_remote_workload_namespace_name="default",csm_remote_workload_project_id="grpc-testing",csm_remote_workload_type="gcp_kubernetes_engine",csm_workload_canonical_service="unknown",grpc_method="helloworld.Greeter/SayHello",grpc_status="OK",grpc_target="xds:///helloworld.default.svc.cluster.local:50051"} 114 1695445356167
...
Check the metrics output from the server:
kubectl exec $(kubectl get po | grep psm-grpc-server | awk '{print $1;}' | head -n 1) -- /usr/bin/curl -s http://localhost:9464/metrics
The output shows the metrics (with the service mesh labels) that were scraped from the endpoint:
Defaulted container "psm-grpc-server" out of: psm-grpc-server, grpc-td-init (init)
# HELP exposer_transferred_bytes_total Transferred bytes to metrics services
# TYPE exposer_transferred_bytes_total counter
exposer_transferred_bytes_total 35945
# HELP exposer_scrapes_total Number of times metrics were scraped
# TYPE exposer_scrapes_total counter
exposer_scrapes_total 1
# HELP exposer_request_latencies Latencies of serving scrape requests, in microseconds
# TYPE exposer_request_latencies summary
exposer_request_latencies_count 1
exposer_request_latencies_sum 2369
exposer_request_latencies{quantile="0.5"} Nan
exposer_request_latencies{quantile="0.9"} Nan
exposer_request_latencies{quantile="0.99"} Nan
# HELP target Target metadata
# TYPE target gauge
target_info{otel_scope_name="grpc-c++",otel_scope_version="1.62.0-dev",service_name="unknown_service",k8s_pod_name="psm-grpc-server-6f75c8f857-qst5k",k8s_namespace_name="default",k8s_container_name="$(CONTAINER_NAME)",telemetry_sdk_version="1.13.0",telemetry_sdk_name="opentelemetry",telemetry_sdk_language="cpp"} 1 1708157423871
# HELP grpc_server_call_sent_total_compressed_message_size_bytes Compressed message bytes sent per server call
# TYPE grpc_server_call_sent_total_compressed_message_size_bytes histogram
grpc_server_call_sent_total_compressed_message_size_bytes_count{csm_mesh_id="gsmmesh-fogj-my-cluster-us-central1-a-fogjnvmqo8fn",csm_remote_workload_canonical_service="unknown",csm_remote_workload_cluster_name="my-cluster",csm_remote_workload_location="us-central1-a",csm_remote_workload_name="test-workload-name",csm_remote_workload_namespace_name="default",csm_remote_workload_project_id="project-id",csm_remote_workload_type="gcp_kubernetes_engine",csm_workload_canonical_service="unknown",grpc_method="helloworld.Greeter/SayHello",grpc_status="OK",otel_scope_name="grpc-c++",otel_scope_version="1.62.0-dev"} 1796 1708157423871
…
Troubleshoot Cloud Service Mesh observability
This section shows how to resolve common problems.
An attribute value in a metric appears as unknown
If an attribute value in a metric appears as unknown, verify that all of the binaries involved (client and server) are set up with a proxyless gRPC service mesh on GKE.
Cloud Service Mesh observability derives the mesh's topology information from environment labels. Verify that the pod or service spec for both the client and the server specifies all of the labels, as described in the example.
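As a quick, non-authoritative check before reading the full describe output below, you can list just the environment variable names defined on both Deployments with a jsonpath one-liner like the following:
# Sketch: print the env var names configured on the server and client containers.
kubectl get deploy psm-grpc-server psm-grpc-client \
    -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.template.spec.containers[0].env[*].name}{"\n"}{end}'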
Describe the psm-grpc-server deployment:
kubectl describe Deployment psm-grpc-server | grep "psm-grpc-server:" -A 12
The output is similar to the following:
psm-grpc-server:
  Image:      grpc/csm-example-server:2024-02-13
  Port:       50051/TCP
  Host Port:  0/TCP
  Args:
    --port=50051
  Environment:
    GRPC_XDS_BOOTSTRAP:        /tmp/grpc-xds/td-grpc-bootstrap.json
    POD_NAME:                  (v1:metadata.name)
    NAMESPACE_NAME:            (v1:metadata.namespace)
    CSM_WORKLOAD_NAME:         psm-grpc-server
    OTEL_RESOURCE_ATTRIBUTES:  k8s.pod.name=$(POD_NAME),k8s.namespace.name=$(NAMESPACE_NAME),k8s.container.name=$(CONTAINER_NAME)
Describe the psm-grpc-client deployment:
kubectl describe Deployment psm-grpc-client | grep "psm-grpc-client:" -A 12
The output is similar to the following:
psm-grpc-client:
  Image:      grpc/csm-example-client:2024-02-13
  Port:       <none>
  Host Port:  <none>
  Args:
    --target=xds:///helloworld.default.svc.cluster.local:50051
  Environment:
    GRPC_XDS_BOOTSTRAP:  /tmp/grpc-xds/td-grpc-bootstrap.json
    CSM_WORKLOAD_NAME:   test-workload-name
    POD_NAME:            (v1:metadata.name)
    NAMESPACE_NAME:      (v1:metadata.namespace)
    CONTAINER_NAME:      psm-grpc-client
The metrics don't appear
If you are using the Prometheus exporter, verify that its URL is configured correctly.
Verify that the gRPC channels are enabled for Cloud Service Mesh. Channels must have a target of the form xds:///. gRPC servers are always enabled for Cloud Service Mesh.
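As a final hedged check, assuming the Deployments from this page, you can confirm that the init container generated the xDS bootstrap file and that the client was started with an xds:/// target. The first command only works if the example image provides a cat binary.
# Sketch: confirm the bootstrap file exists and the client target uses the xds scheme.
kubectl exec deploy/psm-grpc-client -c psm-grpc-client -- \
    cat /tmp/grpc-xds/td-grpc-bootstrap.json
kubectl get deploy psm-grpc-client \
    -o jsonpath='{.spec.template.spec.containers[0].args}'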