Export Connect Agent metrics to Cloud Monitoring
This page explains how to export Connect Agent metrics to Cloud Monitoring from Google Distributed Cloud, GKE on AWS, or any other registered Kubernetes cluster.
Overview
In a Google Distributed Cloud or GKE on AWS cluster, Prometheus collects metrics and stores them locally within the cluster. Registering a cluster outside Google Cloud to a fleet creates a Deployment called Connect Agent in the cluster. Prometheus collects useful metrics from Connect Agent, such as errors connecting to Google and the number of open connections. To make these metrics available to Cloud Monitoring, you must:
- Expose Connect Agent using a Service.
- Deploy prometheus-to-sd (https://github.com/GoogleCloudPlatform/k8s-stackdriver), a simple component that scrapes Prometheus metrics and exports them to Cloud Monitoring.
Afterwards, you can view the metrics by using Monitoring in the Google Cloud console, or by port forwarding the Service and using curl.
Creating a variable for Connect Agent's namespace
Connect Agent typically runs in the namespace gke-connect.
Connect Agent has a label, hub.gke.io/project. The HTTP server listens on port 8080.
Create a variable, AGENT_NS, for the namespace:
AGENT_NS=$(kubectl get ns --kubeconfig KUBECONFIG -o jsonpath={.items..metadata.name} -l hub.gke.io/project=PROJECT_ID)
Replace the following:

- KUBECONFIG: the kubeconfig file for your cluster
- PROJECT_ID: your Google Cloud project ID
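As an optional sanity check (not part of the original procedure, and assuming the cluster has been registered to that project ID), you can confirm that the variable resolved to a single namespace, typically gke-connect:

# Should print a single namespace name, typically gke-connect.
echo "${AGENT_NS}"
# The Connect Agent Deployment should be listed in that namespace.
kubectl get deployments -n "${AGENT_NS}" --kubeconfig KUBECONFIG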
Exposing the Connect Agent Deployment
1. Copy the following configuration to a YAML file named gke-connect-agent.yaml. This configuration creates a Service, gke-connect-agent, which exposes the Connect Agent Deployment.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: gke-connect-agent
  name: gke-connect-agent
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: gke-connect-agent
  type: ClusterIP

2. Apply the YAML file to the Connect Agent's namespace in your cluster, where KUBECONFIG is the path to the cluster's kubeconfig file:

kubectl apply -n ${AGENT_NS} --kubeconfig KUBECONFIG -f gke-connect-agent.yaml

3. Bind the roles/monitoring.metricWriter IAM role to the fleet Google service account:

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/monitoring.metricWriter"

- PROJECT_ID is your Google Cloud project ID.
- SERVICE_ACCOUNT_NAME is the service account used when registering the cluster.
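Before moving on, you can optionally confirm that the Service resolves in the agent's namespace. This check is not part of the original procedure; it reuses the AGENT_NS variable created earlier:

# The Service should report a ClusterIP and port 8080.
kubectl get svc gke-connect-agent -n "${AGENT_NS}" --kubeconfig KUBECONFIG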
Deploying prometheus-to-sd

1. Copy the following configuration to a YAML file named prometheus-to-sd.yaml, where:

- PROJECT_ID is your Google Cloud project ID.
- CLUSTER_NAME is the name of the Kubernetes cluster where Connect Agent runs.
- REGION is the location that is geographically close to where your cluster runs. Choose a Google Cloud zone that is geographically close to where the cluster is physically located.
- ZONE is the location near your on-premises data center. Choose a Google Cloud zone that is geographically close to where traffic flows.
This configuration creates two resources:

- A ConfigMap, prom-to-sd-user-config, which declares several variables for use by the Deployment
- A Deployment, prometheus-to-monitoring, which runs prometheus-to-sd in a single Pod
apiVersion: v1
kind: ConfigMap
metadata:
  name: prom-to-sd-user-config
data:
  # The project that the Connect Agent uses. Accepts ID or number.
  project: PROJECT_ID
  # A name for the cluster, which shows up in Cloud Monitoring.
  cluster_name: CLUSTER_NAME
  # cluster_location must be valid (e.g. us-west1-a); shows up in Cloud Monitoring.
  cluster_location: REGION
  # A zone name to report (e.g. us-central1-a).
  zone: ZONE
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-to-monitoring
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      run: prometheus-to-monitoring
  template:
    metadata:
      labels:
        run: prometheus-to-monitoring
    spec:
      containers:
      - args:
        - /monitor
        # 'gke-connect-agent' is the text that will show up in the Cloud Monitoring metric name.
        - --source=gke-connect-agent:http://gke-connect-agent:8080
        - --monitored-resource-types=k8s
        - --stackdriver-prefix=custom.googleapis.com
        - --project-id=$(PROM_PROJECT)
        - --cluster-name=$(PROM_CLUSTER_NAME)
        - --cluster-location=$(PROM_CLUSTER_LOCATION)
        - --zone-override=$(PROM_ZONE)
        # A node name to report. This is a dummy value.
        - --node-name=MyGkeConnectAgent
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /etc/creds/creds-gcp.json
        - name: PROM_PROJECT
          valueFrom:
            configMapKeyRef:
              name: prom-to-sd-user-config
              key: project
        - name: PROM_CLUSTER_NAME
          valueFrom:
            configMapKeyRef:
              name: prom-to-sd-user-config
              key: cluster_name
        - name: PROM_CLUSTER_LOCATION
          valueFrom:
            configMapKeyRef:
              name: prom-to-sd-user-config
              key: cluster_location
        - name: PROM_ZONE
          valueFrom:
            configMapKeyRef:
              name: prom-to-sd-user-config
              key: zone
        image: gcr.io/google-containers/prometheus-to-sd:v0.7.1
        imagePullPolicy: IfNotPresent
        name: prometheus-to-monitoring
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/creds
          name: creds-gcp
          readOnly: true
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: creds-gcp
        secret:
          defaultMode: 420
          # This secret is already set up for the Connect Agent.
          secretName: creds-gcp
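As an optional check that is not part of the original procedure, you can validate the manifest client-side before applying it (the --dry-run=client flag requires kubectl 1.18 or later):

# Parses and validates the manifest locally without creating anything.
kubectl apply --dry-run=client -n ${AGENT_NS} --kubeconfig KUBECONFIG -f prometheus-to-sd.yaml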
2. Apply the YAML file to the Connect Agent's namespace in your cluster, where KUBECONFIG is the path to the cluster's kubeconfig file:

kubectl apply -n ${AGENT_NS} --kubeconfig KUBECONFIG -f prometheus-to-sd.yaml

Viewing metrics

Console

1. Go to the Monitoring page in the Google Cloud console: https://console.cloud.google.com/monitoring
2. From the left menu, click Metrics Explorer.
3. Connect Agent's metrics are prefixed with custom.googleapis.com/gke-connect-agent/, where gke-connect-agent is the string specified in the --source argument. For example: custom.googleapis.com/gke-connect-agent/gkeconnect_dialer_connection_errors_total
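If you prefer the command line to the console, the same series can be read through the Monitoring API's projects.timeSeries.list method. The following is a minimal sketch, not part of the original procedure: it assumes gcloud is authenticated with access to PROJECT_ID and that the example metric has already been written at least once.

# Query the last hour of the example counter via the Monitoring API.
# 'date -u -d' is GNU date syntax; on macOS use: date -u -v-1H +%Y-%m-%dT%H:%M:%SZ
START=$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)
END=$(date -u +%Y-%m-%dT%H:%M:%SZ)
curl -s -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://monitoring.googleapis.com/v3/projects/PROJECT_ID/timeSeries?filter=metric.type%3D%22custom.googleapis.com/gke-connect-agent/gkeconnect_dialer_connection_errors_total%22&interval.startTime=${START}&interval.endTime=${END}"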
cURL
1. In a shell, use kubectl to port forward the gke-connect-agent Service you created earlier:

kubectl -n ${AGENT_NS} port-forward svc/gke-connect-agent 8080

2. Open another shell, then run:

curl localhost:8080/metrics
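The raw Prometheus exposition output can be long. To spot-check a single counter, a simple filter (assuming the port forward above is still running in the other shell):

# Show only the dialer connection error counter and its help/type lines.
curl -s localhost:8080/metrics | grep gkeconnect_dialer_connection_errors_total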
[[["Leicht verständlich","easyToUnderstand","thumb-up"],["Mein Problem wurde gelöst","solvedMyProblem","thumb-up"],["Sonstiges","otherUp","thumb-up"]],[["Schwer verständlich","hardToUnderstand","thumb-down"],["Informationen oder Beispielcode falsch","incorrectInformationOrSampleCode","thumb-down"],["Benötigte Informationen/Beispiele nicht gefunden","missingTheInformationSamplesINeed","thumb-down"],["Problem mit der Übersetzung","translationIssue","thumb-down"],["Sonstiges","otherDown","thumb-down"]],["Zuletzt aktualisiert: 2024-12-19 (UTC)."],[],[],null,["This page explains how to export Connect Agent metrics to\nCloud Monitoring from Google Distributed Cloud, GKE on AWS, or any other\nregistered Kubernetes cluster.\n\nOverview\n\nIn a Google Distributed Cloud or GKE on AWS cluster, Prometheus collects metrics and stores\nthem locally within the cluster. Registering a cluster outside Google Cloud to a fleet\ncreates a Deployment called Connect Agent in the cluster. Prometheus collects\nuseful metrics from Connect Agent, like errors connecting to Google and the\nnumber of open connections. To make these metrics available to\nCloud Monitoring, you must:\n\n- Expose the Connect Agent using a Service.\n- Deploy [`prometheus-to-sd`](https://github.com/GoogleCloudPlatform/k8s-stackdriver), a simple component that scrapes Prometheus metrics and exports them to Cloud Monitoring.\n\nAfterwards, you view the metrics by using Monitoring in the\nGoogle Cloud console, or by port forwarding the Service and using `curl`.\n\nCreating a variable for Connect Agent's namespace\n\nConnect Agent typically runs in the namespace `gke-connect`.\n\nConnect Agent has a label, `hub.gke.io/project`. The HTTP server listens on\nport 8080.\n\nCreate a variable, `AGENT_NS`, for the namespace: \n\n```\nAGENT_NS=$(kubectl get ns --kubeconfig KUBECONFIG -o jsonpath={.items..metadata.name} -l hub.gke.io/project=PROJECT_ID)\n```\n\nReplace the following:\n\n- \u003cvar translate=\"no\"\u003eKUBECONFIG\u003c/var\u003e: the kubeconfig file for your cluster\n- \u003cvar translate=\"no\"\u003ePROJECT_ID\u003c/var\u003e: the project ID\n\nExposing Connect Agent Deployment\n\n1. Copy the following configuration to a YAML file named\n `gke-connect-agent.yaml`. This configuration creates a Service,\n `gke-connect-agent`, which exposes the Connect Agent Deployment.\n\n ```\n apiVersion: v1\n kind: Service\n metadata:\n labels:\n app: gke-connect-agent\n name: gke-connect-agent\n spec:\n ports:\n - port: 8080\n protocol: TCP\n targetPort: 8080\n selector:\n app: gke-connect-agent\n type: ClusterIP\n ```\n2. Apply the YAML file to the Connect Agent's namespace in your cluster, where\n \u003cvar translate=\"no\"\u003eKUBECONFIG\u003c/var\u003e is the path to the cluster's kubeconfig file:\n\n ```\n kubectl apply -n ${AGENT_NS} --kubeconfig KUBECONFIG -f gke-connect-agent.yaml\n ```\n3. Bind the `roles/monitoring.metricWriter` IAM role to the fleet Google service account:\n\n ```\n gcloud projects add-iam-policy-binding PROJECT_ID \\\n --member=\"serviceAccount:SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com\" \\\n --role=\"roles/monitoring.metricWriter\"\n ```\n - \u003cvar translate=\"no\"\u003ePROJECT_ID\u003c/var\u003e is your Google Cloud project ID. 
[Learn how to find\n this value](/resource-manager/docs/creating-managing-projects#identifying_projects).\n - \u003cvar translate=\"no\"\u003eSERVICE_ACCOUNT_NAME\u003c/var\u003e is the service account used when [registering the\n cluster](https://cloud.google.com/service-mesh/docs/register-cluster#creating_a_service_account_and_key_file).\n\nDeploying `prometheus-to-sd`\n\n1. Copy the following configuration to a YAML file, named `prometheus-to-sd.yaml`\n where:\n\n - \u003cvar translate=\"no\"\u003ePROJECT_ID\u003c/var\u003e is your Google Cloud project ID. [Learn how to find\n this value](/resource-manager/docs/creating-managing-projects#identifying_projects).\n - \u003cvar translate=\"no\"\u003eCLUSTER_NAME\u003c/var\u003e is the of the Kubernetes cluster where Connect Agent runs.\n - \u003cvar translate=\"no\"\u003eREGION\u003c/var\u003e is the location that is geographically close to where your cluster runs. Choose a [Google Cloud zone](/compute/docs/regions-zones#available) that is geographically close to where the cluster is physically located.\n - \u003cvar translate=\"no\"\u003eZONE\u003c/var\u003e is the location near your on-prem datacenter. Choose a Google Cloud zone that is geographically close to where traffic flows.\n\n This configuration creates two resources:\n - A ConfigMap, `prom-to-sd-user-config`, which declares several variables for use by the Deployment\n - A Deployment, `prometheus-to-monitoring`, which runs `prometheus-to-sd` in a single Pod.\n\n ```\n apiVersion: v1\n kind: ConfigMap\n metadata:\n name: prom-to-sd-user-config\n data:\n # The project that the Connect Agent uses. Accepts ID or number.\n project: PROJECT_ID\n # A name for the cluster, which shows up in Cloud Monitoring.\n cluster_name: CLUSTER_NAME\n # cluster_location must be valid (e.g. us-west1-a); shows up in Cloud Monitoring.\n cluster_location: REGION\n # A zone name to report (e.g. us-central1-a).\n zone: ZONE\n ---\n apiVersion: apps/v1\n kind: Deployment\n metadata:\n name: prometheus-to-monitoring\n spec:\n progressDeadlineSeconds: 600\n replicas: 1\n revisionHistoryLimit: 2\n selector:\n matchLabels:\n run: prometheus-to-monitoring\n template:\n metadata:\n labels:\n run: prometheus-to-monitoring\n spec:\n containers:\n - args:\n - /monitor\n # 'gke-connect-agent' is the text that will show up in the Cloud Monitoring metric name.\n - --source=gke-connect-agent:http://gke-connect-agent:8080\n - --monitored-resource-types=k8s\n - --stackdriver-prefix=custom.googleapis.com\n - --project-id=$(PROM_PROJECT)\n - --cluster-name=$(PROM_CLUSTER_NAME)\n - --cluster-location=$(PROM_CLUSTER_LOCATION)\n - --zone-override=$(PROM_ZONE)\n # A node name to report. 
This is a dummy value.\n - --node-name=MyGkeConnectAgent\n env:\n - name: GOOGLE_APPLICATION_CREDENTIALS\n value: /etc/creds/creds-gcp.json\n - name: PROM_PROJECT\n valueFrom:\n configMapKeyRef:\n name: prom-to-sd-user-config\n key: project\n - name: PROM_CLUSTER_NAME\n valueFrom:\n configMapKeyRef:\n name: prom-to-sd-user-config\n key: cluster_name\n - name: PROM_CLUSTER_LOCATION\n valueFrom:\n configMapKeyRef:\n name: prom-to-sd-user-config\n key: cluster_location\n - name: PROM_ZONE\n valueFrom:\n configMapKeyRef:\n name: prom-to-sd-user-config\n key: zone\n image: gcr.io/google-containers/prometheus-to-sd:v0.7.1\n imagePullPolicy: IfNotPresent\n name: prometheus-to-monitoring\n resources: {}\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n volumeMounts:\n - mountPath: /etc/creds\n name: creds-gcp\n readOnly: true\n restartPolicy: Always\n schedulerName: default-scheduler\n securityContext: {}\n terminationGracePeriodSeconds: 30\n volumes:\n - name: creds-gcp\n secret:\n defaultMode: 420\n # This secret is already set up for the Connect Agent.\n secretName: creds-gcp\n ```\n2. Apply the YAML file to the Connect Agent's namespace in your cluster, where\n \u003cvar translate=\"no\"\u003eKUBECONFIG\u003c/var\u003e is the path to the cluster's kubeconfig file:\n\n ```\n kubectl apply -n ${AGENT_NS} --kubeconfig KUBECONFIG -f prometheus-to-sd.yaml\n ```\n\nViewing metrics \n\nConsole\n\n1. Go to the Monitoring page in Google Cloud console.\n\n [Go to the Monitoring page](https://console.cloud.google.com/monitoring)\n2. From the left menu, click **Metrics Explorer**.\n\n3. Connect Agent's metrics are prefixed with\n `custom.googleapis.com/gke-connect-agent/`, where `gke-connect-agent` is\n the string specified in the `--source` argument. For example,\n `custom.googleapis.com/gke-connect-agent/gkeconnect_dialer_connection_errors_total`\n\ncURL\n\n1. In a shell, use `kubectl` to port forward the `gke-connect-monitoring` Service:\n\n ```\n kubectl -n ${AGENT_NS} port-forward svc/gke-connect-monitoring 8080\n ```\n2. Open another shell, then run:\n\n ```\n curl localhost:8080/metrics\n ```\n\nCleaning up\n\nTo delete the resources you created in this topic: \n\n```\nAGENT_NS=$(kubectl get ns --kubeconfig KUBECONFIG -o jsonpath={.items..metadata.name} -l hub.gke.io/project)\nkubectl delete configmap prom-to-sd-user-config --kubeconfig KUBECONFIG -n ${AGENT_NS}\nkubectl delete service gke-connect-agent --kubeconfig KUBECONFIG -n ${AGENT_NS}\nkubectl delete deployment prometheus-to-monitoring --kubeconfig KUBECONFIG -n ${AGENT_NS}\n```"]]