Before applying your configuration and installing the hybrid Helm charts, check that your Kubernetes cluster is ready for the Apigee hybrid installation.
To check the readiness of the cluster, you create a YAML file containing a Kubernetes Job definition and apply it with kubectl commands to test the cluster.
You then check the status of the Kubernetes test job with a kubectl get jobs command.
Verify that kubectl is set to the correct context using the following command. The current context must be set to the cluster to which you are deploying Apigee hybrid.
kubectl config current-context
The result should include the name of the cluster in which you are deploying Apigee hybrid. For example, on GKE, the context name is usually in the form gke_project-id_cluster-location_cluster-name, as in:
gke_my-project_us-central1_my-cluster
If the cluster name in the context does not match, the following command retrieves the gcloud credentials of the cluster and sets the kubectl context:
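For a regional cluster (this assumes the $CLUSTER_NAME, $CLUSTER_LOCATION, and $PROJECT_ID environment variables set in the earlier installation steps are still available in your shell):

gcloud container clusters get-credentials $CLUSTER_NAME \
    --region $CLUSTER_LOCATION \
    --project $PROJECT_ID

For a zonal cluster:

gcloud container clusters get-credentials $CLUSTER_NAME \
    --zone $CLUSTER_LOCATION \
    --project $PROJECT_ID

If you have not already created the readiness-check manifest, create a cluster-check directory in your Helm charts directory:

mkdir $APIGEE_HELM_CHARTS_HOME/cluster-check

Then, in that directory, create a file named apigee-k8s-cluster-ready-check.yaml with the following contents. The apigee-operators image tag shown here corresponds to hybrid 1.12; adjust it to match your hybrid version:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: apigee-k8s-cluster-ready-check
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-check-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: ""
  kind: ServiceAccount
  namespace: default
  name: apigee-k8s-cluster-ready-check
---
apiVersion: batch/v1
kind: Job
metadata:
  name: apigee-k8s-cluster-ready-check
spec:
  template:
    spec:
      hostNetwork: true
      serviceAccountName: apigee-k8s-cluster-ready-check
      containers:
      - name: manager
        image: gcr.io/apigee-release/hybrid/apigee-operators:1.12.4
        command:
        - /manager
        args:
        - --k8s-cluster-ready-check
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        securityContext:
          runAsGroup: 998
          runAsNonRoot: true
          runAsUser: 999
      restartPolicy: Never
  backoffLimit: 1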
Apply the apigee-k8s-cluster-ready-check.yaml file with the following kubectl command. This runs the test:

kubectl apply -f $APIGEE_HELM_CHARTS_HOME/cluster-check/apigee-k8s-cluster-ready-check.yaml

The output should show that the service account and job were created. For example:
serviceaccount/apigee-k8s-cluster-ready-check created
job.batch/apigee-k8s-cluster-ready-check created
Check the status of the Kubernetes job with the following command:
kubectl get jobs apigee-k8s-cluster-ready-check
If your cluster is ready, the output should look something like this:
NAME                             COMPLETIONS   DURATION   AGE
apigee-k8s-cluster-ready-check   1/1           8s         1h23m
If the test failed and your cluster is not ready, the output will look something like this:
NAME                             COMPLETIONS   DURATION   AGE
apigee-k8s-cluster-ready-check   0/1           44s        44s
Look for the number of completions:
1/1: Success. The cluster is ready for the Apigee hybrid installation.
0/1: The test failed. The cluster is not ready. Proceed with the following steps to troubleshoot the cluster.
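Optionally, instead of polling kubectl get jobs, you can block until the Job finishes with kubectl wait (not part of the original procedure); the command exits successfully once the Job reports the complete condition and times out if the Job fails:

kubectl wait --for=condition=complete job/apigee-k8s-cluster-ready-check --timeout=300s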
If the test did not succeed, check the logs with the following commands.
Get the name of the pod for the cluster pre-check job:
kubectl get pods | grep apigee-k8s-cluster-ready-check
Get the Kubernetes logs for the pod:
kubectl logs pod_name
where pod_name is the name of the apigee-k8s-cluster-ready-check pod.
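As a shortcut, you can also read the logs through the Job object and let kubectl pick one of its pods, instead of looking up the pod name first:

kubectl logs job/apigee-k8s-cluster-ready-check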
Clean up before proceeding to the next step. Delete the Kubernetes job with the following command:
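kubectl delete -f $APIGEE_HELM_CHARTS_HOME/cluster-check/apigee-k8s-cluster-ready-check.yaml

The output should show that the service account and job were deleted. For example:

serviceaccount "apigee-k8s-cluster-ready-check" deleted
job.batch "apigee-k8s-cluster-ready-check" deleted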
You have now verified that your Apigee hybrid cluster is ready. Next, install the charts to apply your configuration to the hybrid runtime.
Troubleshooting
Cassandra DNS check: If you find error logs similar to DNS resolution was successful but IP doesn't match POD IP, could not resolve hostname, or error determining hostname, your cluster DNS is not configured correctly for a multi-region setup. You can ignore this error if you do not intend to set up multiple regions.
Control plane connectivity check: If you find error logs similar to error creating TCP connection with host, you need to resolve the connectivity from the cluster to apigee.googleapis.com and re-run the job.
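For example, one quick way to verify that apigee.googleapis.com is reachable from inside the cluster is to run a temporary curl pod (a sketch, assuming the cluster can pull the public curlimages/curl image); the pod is removed automatically when the command exits:

kubectl run apigee-connectivity-check --rm -i --restart=Never \
    --image=curlimages/curl --command -- curl -sv https://apigee.googleapis.com

A successful TLS handshake in the verbose output indicates that outbound connectivity to the control plane endpoint is working.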
[[["Facile da capire","easyToUnderstand","thumb-up"],["Il problema è stato risolto","solvedMyProblem","thumb-up"],["Altra","otherUp","thumb-up"]],[["Difficile da capire","hardToUnderstand","thumb-down"],["Informazioni o codice di esempio errati","incorrectInformationOrSampleCode","thumb-down"],["Mancano le informazioni o gli esempi di cui ho bisogno","missingTheInformationSamplesINeed","thumb-down"],["Problema di traduzione","translationIssue","thumb-down"],["Altra","otherDown","thumb-down"]],["Ultimo aggiornamento 2025-09-04 UTC."],[[["\u003cp\u003eBefore installing Apigee hybrid, verify your Kubernetes cluster's readiness by creating and applying a YAML file containing a Kubernetes Job definition.\u003c/p\u003e\n"],["\u003cp\u003eUse the \u003ccode\u003ekubectl config current-context\u003c/code\u003e command to confirm that your \u003ccode\u003ekubectl\u003c/code\u003e is connected to the correct cluster for Apigee hybrid deployment.\u003c/p\u003e\n"],["\u003cp\u003eApply the \u003ccode\u003eapigee-k8s-cluster-ready-check.yaml\u003c/code\u003e file with \u003ccode\u003ekubectl\u003c/code\u003e to run the readiness test, and monitor its progress with \u003ccode\u003ekubectl get jobs apigee-k8s-cluster-ready-check\u003c/code\u003e.\u003c/p\u003e\n"],["\u003cp\u003eA successful test will show "1/1" completions in the job status, indicating that your cluster is ready, whereas "0/1" means the cluster is not ready and requires troubleshooting.\u003c/p\u003e\n"],["\u003cp\u003eClean up after the test, by deleting the Kubernetes job and service account using \u003ccode\u003ekubectl delete -f $APIGEE_HELM_CHARTS_HOME/cluster-check/apigee-k8s-cluster-ready-check.yaml\u003c/code\u003e.\u003c/p\u003e\n"]]],[],null,["# Step 10: Check cluster readiness\n\n| You are currently viewing version 1.12 of the Apigee hybrid documentation. **This version is end of life.** You should upgrade to a newer version. For more information, see [Supported versions](/apigee/docs/hybrid/supported-platforms#supported-versions).\n\nCheck hybrid cluster readiness\n------------------------------\n\nBefore applying your configuration and installing the hybrid helm charts, you should\ncheck that your Kubernetes cluster is ready for Apigee hybrid installation.\n\n\nTo check the readiness of your cluster, you will create a YAML file with a Kubernetes Job\ndefinition and apply that file with `kubectl` commands to check the cluster.\nYou then check the status of the Kubernetes test job with a `kubectl get jobs` command.\n\n1. Verify that `kubectl` is set to the correct context using the following command. The current context should be set to the cluster to which you are deploying Apigee hybrid. \n\n ```\n kubectl config current-context\n ```\n\n The result should include the name of the cluster you are deploying Apigee hybrid in. 
For\n example, on GKE, the context name is usually in the form\n `gke_`\u003cvar translate=\"no\"\u003eproject-id\u003c/var\u003e`_`\u003cvar translate=\"no\"\u003ecluster-location\u003c/var\u003e`_`\u003cvar translate=\"no\"\u003ecluster-name\u003c/var\u003e, as\n in: \n\n ```\n gke_my-project_us-central1_my-cluster\n ```\n\n If the name cluster name in the context does not match, the following command will get the\n `gcloud` credentials of the cluster and set the `kubectl` context:\n\n ### Regional clusters\n\n ```\n gcloud container clusters get-credentials $CLUSTER_NAME \\\n --region $CLUSTER_LOCATION \\\n --project $PROJECT_ID\n ```\n\n ### Zonal clusters\n\n ```\n gcloud container clusters get-credentials $CLUSTER_NAME \\\n --zone $CLUSTER_LOCATION \\\n --project $PROJECT_ID\n ```\n2. In your helm-charts directory, create a `cluster-check` directory: \n\n ```\n mkdir $APIGEE_HELM_CHARTS_HOME/cluster-check\n ```\n3. In the `$APIGEE_HELM_CHARTS_HOME/cluster-check` directory, create a file named `apigee-k8s-cluster-ready-check.yaml` with the following contents: \n\n ```\n apiVersion: v1\n kind: ServiceAccount\n metadata:\n name: apigee-k8s-cluster-ready-check\n ---\n apiVersion: rbac.authorization.k8s.io/v1\n kind: ClusterRoleBinding\n metadata:\n name: cluster-check-admin\n roleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: cluster-admin\n subjects:\n - apiGroup: \"\"\n kind: ServiceAccount\n namespace: default\n name: apigee-k8s-cluster-ready-check\n ---\n apiVersion: batch/v1\n kind: Job\n metadata:\n name: apigee-k8s-cluster-ready-check\n spec:\n template:\n spec:\n hostNetwork: true\n serviceAccountName: apigee-k8s-cluster-ready-check\n containers:\n - name: manager\n image: gcr.io/apigee-release/hybrid/apigee-operators:1.12.4\n command:\n - /manager\n args:\n - --k8s-cluster-ready-check\n env:\n - name: POD_IP\n valueFrom:\n fieldRef:\n fieldPath: status.podIP\n securityContext:\n runAsGroup: 998\n runAsNonRoot: true\n runAsUser: 999\n restartPolicy: Never\n backoffLimit: 1\n ```\n4. Apply the `apigee-k8s-cluster-ready-check.yaml` with the following `kubectl` command. This will run the test: \n\n ```\n kubectl apply -f $APIGEE_HELM_CHARTS_HOME/cluster-check/apigee-k8s-cluster-ready-check.yaml\n ```\n\n\n The output should show that the service account and job were created. For example: \n\n kubectl apply -f $APIGEE_HELM_CHARTS_HOME/cluster-check/apigee-k8s-cluster-ready-check.yaml\n serviceaccount/apigee-k8s-cluster-ready-check created\n job.batch/apigee-k8s-cluster-ready-check created\n\n5. Check the status of the Kubernetes job with the following command: \n\n ```\n kubectl get jobs apigee-k8s-cluster-ready-check\n ```\n\n\n If your cluster is ready, the output should look something like: \n\n ```\n NAME COMPLETIONS DURATION AGE\n apigee-k8s-cluster-ready-check 1/1 8s 1h23m\n ```\n\n\n If the test failed and your cluster is not ready, the output will look something like: \n\n ```\n NAME COMPLETIONS DURATION AGE\n apigee-k8s-cluster-ready-check 0/1 44s 44s\n ```\n\n Look for the number of completions:\n - **1/1** Success, your cluster is ready for Apigee hybrid installation.\n - **0/1** The test failed. The cluster is not ready. Proceed to the following steps to troubleshoot the cluster.\n\n \u003cbr /\u003e\n\n6. If the test did not succeed, check the logs with the following commands.\n 1. Get the name of the pod for the cluster pre-check job: \n\n ```\n kubectl get pods | grep apigee-k8s-cluster-ready-check\n ```\n 2. 
Get the Kubernetes logs for the pod: \n\n ```\n kubectl logs pod_name\n ```\n\n\n Where \u003cvar translate=\"no\"\u003epod_name\u003c/var\u003e is the name of the apigee-k8s-cluster-ready-check pod.\n7. Clean up before proceeding to the next step. Delete the Kubernetes job with the following command: \n\n ```\n kubectl delete -f $APIGEE_HELM_CHARTS_HOME/cluster-check/apigee-k8s-cluster-ready-check.yaml\n ```\n\n\n The output should show that the service account and job were deleted. For example: \n\n kubectl delete -f $APIGEE_HELM_CHARTS_HOME/cluster-check/apigee-k8s-cluster-ready-check.yaml\n serviceaccount \"apigee-k8s-cluster-ready-check\" deleted\n job.batch \"apigee-k8s-cluster-ready-check\" deleted\n\n| **Tip:** On some platforms, like Google Distributed Cloud, you may see DNS resolution errors. If the DNS resolution is not working, try this workaround.\n|\n|\n| Add a static host entry in `/etc/hosts` file of each cassandra worker node\n| similar to the following: \n|\n| ```\n| echo -e \"\\\\n127.0.1.1 ${HOSTNAME}\" \u003e\u003e \"/etc/hosts\"\n| ```\n|\n|\n| or \n|\n| ```\n| \"\\\\nWORKER-NODE-IP ${HOSTNAME}\" \u003e\u003e \"/etc/hosts\"\n| ```\n\nYou have now made sure your Apigee hybrid cluster is ready. Next, let's install the charts to\napply your configuration to the hybrid runtime.\n\nTroubleshooting\n---------------\n\n1. Cassandra DNS check: If you find error logs similar to `DNS resolution was successful but IP doesn't match POD IP`, `could not resolve hostname` or `error determining hostname` it means your cluster DNS is not configured correctly for a multi-region setup. You can ignore this error if you do not intend to set up multi-region.\n2. Control Plane connectivity check: If you find error logs similar to `error creating TCP connection with host` then you need to resolve the connectivity from cluster to apigee.googleapis.com and re-run the job.\n\nNext step\n---------\n\n\u003cbr /\u003e\n\n[1](/apigee/docs/hybrid/v1.12/install-create-cluster) [2](/apigee/docs/hybrid/v1.12/install-download-charts) [3](/apigee/docs/hybrid/v1.12/install-create-namespace) [4](/apigee/docs/hybrid/v1.12/install-service-accounts) [5](/apigee/docs/hybrid/v1.12/install-create-tls-certificates) [6](/apigee/docs/hybrid/v1.12/install-create-overrides) [7](/apigee/docs/hybrid/v1.12/install-enable-synchronizer-access) [8](/apigee/docs/hybrid/v1.12/install-cert-manager) [9](/apigee/docs/hybrid/v1.12/install-crds) [10](/apigee/docs/hybrid/v1.12/install-check-cluster) [(NEXT) Step 11: Install Apigee hybrid using Helm charts](/apigee/docs/hybrid/v1.12/install-helm-charts) [12](/apigee/docs/hybrid/v1.12/install-workload-identity)\n\n\u003cbr /\u003e"]]