Exporting Connect Agent metrics to Cloud Monitoring
This page explains how to export Connect Agent metrics to Cloud Monitoring from Google Distributed Cloud, GKE on AWS, or any other registered Kubernetes cluster.
Overview
In a Google Distributed Cloud or GKE on AWS cluster, Prometheus collects metrics and stores them locally within the cluster. Registering a cluster outside Google Cloud to a fleet creates a Deployment called Connect Agent in the cluster. Prometheus collects useful metrics from Connect Agent, such as errors connecting to Google and the number of open connections. To make these metrics available to Cloud Monitoring, you must:
Expose Connect Agent using a Service.
Deploy prometheus-to-sd, a simple component that scrapes Prometheus metrics and exports them to Cloud Monitoring.
Afterwards, you can view the metrics by using Monitoring in the Google Cloud console, or by port forwarding the Service and using curl.
Creating a variable for the Connect Agent namespace
Connect Agent typically runs in the gke-connect namespace.
Connect Agent has a label, hub.gke.io/project. Its HTTP server listens on port 8080.
Create a variable, AGENT_NS, for the namespace:
AGENT_NS=$(kubectl get ns --kubeconfig KUBECONFIG -o jsonpath={.items..metadata.name} -l hub.gke.io/project=PROJECT_ID)
Replace the following:
KUBECONFIG: the kubeconfig file for your cluster
PROJECT_ID: the project ID
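Because the jsonpath expression above returns a space-separated list when more than one namespace matches the label, it can help to validate the variable before passing it to -n. A minimal sketch, where the literal value gke-connect stands in for the output of the kubectl command:

```shell
# Stand-in for the value returned by the kubectl command above.
AGENT_NS="gke-connect"

# Split on whitespace and require exactly one namespace.
set -- $AGENT_NS
if [ "$#" -ne 1 ]; then
  echo "expected exactly one namespace, got: '$AGENT_NS'" >&2
  exit 1
fi
echo "using namespace: $1"
```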
Exposing the Connect Agent Deployment
Copy the following configuration to a YAML file named gke-connect-agent.yaml. This configuration creates a Service, gke-connect-agent, which exposes the Connect Agent Deployment.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: gke-connect-agent
  name: gke-connect-agent
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: gke-connect-agent
  type: ClusterIP

Apply the YAML file to the Connect Agent namespace in your cluster, where KUBECONFIG is the path to the cluster's kubeconfig file:

kubectl apply -n ${AGENT_NS} --kubeconfig KUBECONFIG -f gke-connect-agent.yaml

Bind the roles/monitoring.metricWriter IAM role to the fleet Google service account:

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/monitoring.metricWriter"

PROJECT_ID is your Google Cloud project ID.
SERVICE_ACCOUNT_NAME is the service account used when registering the cluster.

Deploying prometheus-to-sd

Copy the following configuration to a YAML file named prometheus-to-sd.yaml, where:
PROJECT_ID is your Google Cloud project ID.
CLUSTER_NAME is the name of the Kubernetes cluster where Connect Agent runs.
REGION is a location geographically close to where your cluster runs. Choose a Google Cloud zone that is geographically close to where the cluster is physically located.
ZONE is a location near your on-premises data center. Choose a Google Cloud zone that is geographically close to where traffic flows.
This configuration creates two resources:
A ConfigMap, prom-to-sd-user-config, which declares several variables for use by the Deployment.
A Deployment, prometheus-to-monitoring, which runs prometheus-to-sd in a single Pod.
apiVersion: v1
kind: ConfigMap
metadata:
  name: prom-to-sd-user-config
data:
  # The project that the Connect Agent uses. Accepts ID or number.
  project: PROJECT_ID
  # A name for the cluster, which shows up in Cloud Monitoring.
  cluster_name: CLUSTER_NAME
  # cluster_location must be valid (e.g. us-west1-a); shows up in Cloud Monitoring.
  cluster_location: REGION
  # A zone name to report (e.g. us-central1-a).
  zone: ZONE
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-to-monitoring
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      run: prometheus-to-monitoring
  template:
    metadata:
      labels:
        run: prometheus-to-monitoring
    spec:
      containers:
      - args:
        - /monitor
        # 'gke-connect-agent' is the text that will show up in the Cloud Monitoring metric name.
        - --source=gke-connect-agent:http://gke-connect-agent:8080
        - --monitored-resource-types=k8s
        - --stackdriver-prefix=custom.googleapis.com
        - --project-id=$(PROM_PROJECT)
        - --cluster-name=$(PROM_CLUSTER_NAME)
        - --cluster-location=$(PROM_CLUSTER_LOCATION)
        - --zone-override=$(PROM_ZONE)
        # A node name to report. This is a dummy value.
        - --node-name=MyGkeConnectAgent
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /etc/creds/creds-gcp.json
        - name: PROM_PROJECT
          valueFrom:
            configMapKeyRef:
              name: prom-to-sd-user-config
              key: project
        - name: PROM_CLUSTER_NAME
          valueFrom:
            configMapKeyRef:
              name: prom-to-sd-user-config
              key: cluster_name
        - name: PROM_CLUSTER_LOCATION
          valueFrom:
            configMapKeyRef:
              name: prom-to-sd-user-config
              key: cluster_location
        - name: PROM_ZONE
          valueFrom:
            configMapKeyRef:
              name: prom-to-sd-user-config
              key: zone
        image: gcr.io/google-containers/prometheus-to-sd:v0.7.1
        imagePullPolicy: IfNotPresent
        name: prometheus-to-monitoring
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/creds
          name: creds-gcp
          readOnly: true
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: creds-gcp
        secret:
          defaultMode: 420
          # This secret is already set up for the Connect Agent.
          secretName: creds-gcp
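As a concrete illustration, with hypothetical values substituted (my-project, my-onprem-cluster, and the zones us-west1-a and us-central1-a are examples only), the data block of the ConfigMap would read:

```
data:
  project: my-project
  cluster_name: my-onprem-cluster
  cluster_location: us-west1-a
  zone: us-central1-a
```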
Apply the YAML file to the Connect Agent namespace in your cluster, where KUBECONFIG is the path to the cluster's kubeconfig file:

kubectl apply -n ${AGENT_NS} --kubeconfig KUBECONFIG -f prometheus-to-sd.yaml

Viewing metrics

Console

Go to the Monitoring page in the Google Cloud console.
From the left menu, click Metrics Explorer.
Connect Agent's metrics are prefixed with custom.googleapis.com/gke-connect-agent/, where gke-connect-agent is the string specified in the --source argument. For example:
custom.googleapis.com/gke-connect-agent/gkeconnect_dialer_connection_errors_total
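To illustrate how a scraped Prometheus name maps to the Cloud Monitoring name, here is a minimal sketch. The sample file below is a hypothetical fragment of the agent's /metrics output (the real lines come from the endpoint itself), and the prefix assumes the --stackdriver-prefix and --source values from the Deployment above:

```shell
# Hypothetical sample of Prometheus text-format output.
cat <<'EOF' > /tmp/agent-metrics.txt
# TYPE gkeconnect_dialer_connection_errors_total counter
gkeconnect_dialer_connection_errors_total 0
EOF

# Drop comment lines and prepend the Cloud Monitoring prefix.
grep -v '^#' /tmp/agent-metrics.txt \
  | awk '{print "custom.googleapis.com/gke-connect-agent/" $1}'
```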
cURL
In a shell, use kubectl to port forward the gke-connect-monitoring Service:

kubectl -n ${AGENT_NS} port-forward svc/gke-connect-monitoring 8080

Open another shell, then run:

curl localhost:8080/metrics
Cleaning up

To delete the resources you created in this topic:

AGENT_NS=$(kubectl get ns --kubeconfig KUBECONFIG -o jsonpath={.items..metadata.name} -l hub.gke.io/project)
kubectl delete configmap prom-to-sd-user-config --kubeconfig KUBECONFIG -n ${AGENT_NS}
kubectl delete service gke-connect-agent --kubeconfig KUBECONFIG -n ${AGENT_NS}
kubectl delete deployment prometheus-to-monitoring --kubeconfig KUBECONFIG -n ${AGENT_NS}