Exporting Connect Agent metrics to Cloud Monitoring
This page explains how to export Connect Agent metrics to Cloud Monitoring from Google Distributed Cloud, GKE on AWS, or any other registered Kubernetes cluster.
Overview

In a Google Distributed Cloud or GKE on AWS cluster, Prometheus collects metrics and stores them locally within the cluster. Registering a cluster outside Google Cloud to a fleet creates a Deployment called Connect Agent in the cluster. Prometheus collects useful metrics from Connect Agent, such as errors connecting to Google and the number of open connections. To make these metrics available to Cloud Monitoring, you must:

- Expose Connect Agent using a Service.
- Deploy prometheus-to-sd, a simple component that scrapes Prometheus metrics and exports them to Cloud Monitoring.

Afterwards, you view the metrics by using Monitoring in the Google Cloud console, or by port forwarding the Service and using curl.
Creating a variable for Connect Agent's namespace

Connect Agent typically runs in the namespace gke-connect.

Connect Agent has a label, hub.gke.io/project. Its HTTP server listens on port 8080.

Create a variable, AGENT_NS, for the namespace:

AGENT_NS=$(kubectl get ns --kubeconfig KUBECONFIG -o jsonpath={.items..metadata.name} -l hub.gke.io/project=PROJECT_ID)

Replace the following:

KUBECONFIG: the kubeconfig file for your cluster
PROJECT_ID: the project ID
Exposing the Connect Agent Deployment

1. Copy the following configuration to a YAML file named gke-connect-agent.yaml. This configuration creates a Service, gke-connect-agent, which exposes the Connect Agent Deployment.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: gke-connect-agent
  name: gke-connect-agent
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: gke-connect-agent
  type: ClusterIP

2. Apply the YAML file to the Connect Agent's namespace in your cluster, where KUBECONFIG is the path to the cluster's kubeconfig file:

kubectl apply -n ${AGENT_NS} --kubeconfig KUBECONFIG -f gke-connect-agent.yaml

3. Bind the roles/monitoring.metricWriter IAM role to the fleet Google service account:

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/monitoring.metricWriter"

PROJECT_ID is your Google Cloud project ID.
SERVICE_ACCOUNT_NAME is the service account used when registering the cluster.
Deploying prometheus-to-sd

Copy the following configuration to a YAML file named prometheus-to-sd.yaml, where:

PROJECT_ID is your Google Cloud project ID.
CLUSTER_NAME is the name of the Kubernetes cluster where Connect Agent runs.
REGION is the location that is geographically close to where your cluster runs. Choose a Google Cloud zone that is geographically close to where the cluster is physically located.
ZONE is the location near your on-premises data center. Choose a Google Cloud zone that is geographically close to where traffic flows.
This configuration creates two resources:

- A ConfigMap, prom-to-sd-user-config, which declares several variables for use by the Deployment.
- A Deployment, prometheus-to-monitoring, which runs prometheus-to-sd in a single Pod.
apiVersion: v1
kind: ConfigMap
metadata:
  name: prom-to-sd-user-config
data:
  # The project that the Connect Agent uses. Accepts ID or number.
  project: PROJECT_ID
  # A name for the cluster, which shows up in Cloud Monitoring.
  cluster_name: CLUSTER_NAME
  # cluster_location must be valid (e.g. us-west1-a); shows up in Cloud Monitoring.
  cluster_location: REGION
  # A zone name to report (e.g. us-central1-a).
  zone: ZONE
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-to-monitoring
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      run: prometheus-to-monitoring
  template:
    metadata:
      labels:
        run: prometheus-to-monitoring
    spec:
      containers:
      - args:
        - /monitor
        # 'gke-connect-agent' is the text that will show up in the Cloud Monitoring metric name.
        - --source=gke-connect-agent:http://gke-connect-agent:8080
        - --monitored-resource-types=k8s
        - --stackdriver-prefix=custom.googleapis.com
        - --project-id=$(PROM_PROJECT)
        - --cluster-name=$(PROM_CLUSTER_NAME)
        - --cluster-location=$(PROM_CLUSTER_LOCATION)
        - --zone-override=$(PROM_ZONE)
        # A node name to report. This is a dummy value.
        - --node-name=MyGkeConnectAgent
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /etc/creds/creds-gcp.json
        - name: PROM_PROJECT
          valueFrom:
            configMapKeyRef:
              name: prom-to-sd-user-config
              key: project
        - name: PROM_CLUSTER_NAME
          valueFrom:
            configMapKeyRef:
              name: prom-to-sd-user-config
              key: cluster_name
        - name: PROM_CLUSTER_LOCATION
          valueFrom:
            configMapKeyRef:
              name: prom-to-sd-user-config
              key: cluster_location
        - name: PROM_ZONE
          valueFrom:
            configMapKeyRef:
              name: prom-to-sd-user-config
              key: zone
        image: gcr.io/google-containers/prometheus-to-sd:v0.7.1
        imagePullPolicy: IfNotPresent
        name: prometheus-to-monitoring
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/creds
          name: creds-gcp
          readOnly: true
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: creds-gcp
        secret:
          defaultMode: 420
          # This secret is already set up for the Connect Agent.
          secretName: creds-gcp
Apply the YAML file to the Connect Agent's namespace in your cluster, where KUBECONFIG is the path to the cluster's kubeconfig file:

kubectl apply -n ${AGENT_NS} --kubeconfig KUBECONFIG -f prometheus-to-sd.yaml
Viewing metrics

Console

Go to the Monitoring page in the Google Cloud console, then click Metrics Explorer in the left menu.

Connect Agent's metrics are prefixed with custom.googleapis.com/gke-connect-agent/, where gke-connect-agent is the string specified in the --source argument. For example:

custom.googleapis.com/gke-connect-agent/gkeconnect_dialer_connection_errors_total
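The mapping from a scraped Prometheus metric name to the resulting Cloud Monitoring metric type can be sketched as a simple string join. The helper below is illustrative only, not part of prometheus-to-sd:

```python
def monitoring_metric_type(source_name: str, prom_metric: str,
                           prefix: str = "custom.googleapis.com") -> str:
    """Sketch of the naming scheme described above:
    <stackdriver-prefix>/<source component name>/<Prometheus metric name>."""
    return f"{prefix}/{source_name}/{prom_metric}"

# 'gke-connect-agent' comes from the --source argument in the Deployment.
print(monitoring_metric_type("gke-connect-agent",
                             "gkeconnect_dialer_connection_errors_total"))
# → custom.googleapis.com/gke-connect-agent/gkeconnect_dialer_connection_errors_total
```

Changing the text before the colon in --source changes this prefix, which is why the metric names above contain gke-connect-agent.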
cURL
In a shell, use kubectl to port forward the gke-connect-agent Service created earlier:

kubectl -n ${AGENT_NS} --kubeconfig KUBECONFIG port-forward svc/gke-connect-agent 8080

Open another shell, then run:

curl localhost:8080/metrics
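The curl output is in the Prometheus text exposition format. If you want to post-process it instead of reading it by hand, a minimal parser might look like the following sketch (the sample input is hypothetical; real output contains many more metrics):

```python
def parse_prometheus_text(text: str) -> dict:
    """Parse simple Prometheus text-format lines into {metric: value}.

    Skips comments (# HELP / # TYPE) and blank lines; any label block
    stays part of the metric key. Illustrative only: it does not handle
    escaped label values or timestamps.
    """
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

sample = """\
# HELP gkeconnect_dialer_connection_errors_total Errors connecting to Google.
# TYPE gkeconnect_dialer_connection_errors_total counter
gkeconnect_dialer_connection_errors_total 0
"""
print(parse_prometheus_text(sample))
# → {'gkeconnect_dialer_connection_errors_total': 0.0}
```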
Cleaning up

To delete the resources you created in this topic:

AGENT_NS=$(kubectl get ns --kubeconfig KUBECONFIG -o jsonpath={.items..metadata.name} -l hub.gke.io/project)
kubectl delete configmap prom-to-sd-user-config --kubeconfig KUBECONFIG -n ${AGENT_NS}
kubectl delete service gke-connect-agent --kubeconfig KUBECONFIG -n ${AGENT_NS}
kubectl delete deployment prometheus-to-monitoring --kubeconfig KUBECONFIG -n ${AGENT_NS}

Last updated 2024-10-16 UTC.