Check hybrid cluster readiness
Before applying your configuration and installing the hybrid Helm charts, check that your Kubernetes cluster is ready for the Apigee hybrid installation.
To check the readiness of your cluster, you create a YAML file containing a Kubernetes Job definition and apply it to the cluster with kubectl commands. You then check the status of the test job with a kubectl get jobs command.
- Verify that kubectl is set to the correct context using the following command. The current context should be set to the cluster to which you are deploying Apigee hybrid.

kubectl config current-context

The result should include the name of the cluster in which you are deploying Apigee hybrid. For example, on GKE, the context name is usually in the form gke_project-id_cluster-location_cluster-name, as in:

gke_my-project_us-central1_my-cluster

If the cluster name in the context does not match, the following command gets the gcloud credentials of the cluster and sets the kubectl context:

Regional clusters

gcloud container clusters get-credentials $CLUSTER_NAME \
  --region $CLUSTER_LOCATION \
  --project $PROJECT_ID

Zonal clusters

gcloud container clusters get-credentials $CLUSTER_NAME \
  --zone $CLUSTER_LOCATION \
  --project $PROJECT_ID
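If you are not sure which context to use, you can list the contexts available to kubectl (an optional check, not part of the required steps); the current context is marked with an asterisk in the CURRENT column:

kubectl config get-contexts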
- In your helm-charts directory, create a cluster-check directory:

mkdir $APIGEE_HELM_CHARTS_HOME/cluster-check
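This assumes the APIGEE_HELM_CHARTS_HOME environment variable is still set from your earlier setup steps. If you are working in a new shell session, you can confirm it before creating the directory:

echo $APIGEE_HELM_CHARTS_HOME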
- In the $APIGEE_HELM_CHARTS_HOME/cluster-check directory, create a file named apigee-k8s-cluster-ready-check.yaml with the following contents:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: apigee-k8s-cluster-ready-check
---
apiVersion: batch/v1
kind: Job
metadata:
  name: apigee-k8s-cluster-ready-check
spec:
  template:
    spec:
      hostNetwork: true
      serviceAccountName: apigee-k8s-cluster-ready-check
      containers:
        - name: manager
          image: gcr.io/apigee-release/hybrid/apigee-operators:1.13.1
          command:
            - /manager
          args:
            - --k8s-cluster-ready-check
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          securityContext:
            runAsGroup: 998
            runAsNonRoot: true
            runAsUser: 999
      restartPolicy: Never
  backoffLimit: 1
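Optionally, you can validate the manifest with a client-side dry run before creating any resources (an extra check, not part of the official procedure):

kubectl apply --dry-run=client -f $APIGEE_HELM_CHARTS_HOME/cluster-check/apigee-k8s-cluster-ready-check.yaml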
- Apply the apigee-k8s-cluster-ready-check.yaml file with the following kubectl command. This runs the test:

kubectl apply -f $APIGEE_HELM_CHARTS_HOME/cluster-check/apigee-k8s-cluster-ready-check.yaml

The output should show that the service account and job were created. For example:

kubectl apply -f $APIGEE_HELM_CHARTS_HOME/cluster-check/apigee-k8s-cluster-ready-check.yaml
serviceaccount/apigee-k8s-cluster-ready-check created
job.batch/apigee-k8s-cluster-ready-check created

- Check the status of the Kubernetes job with the following command:
kubectl get jobs apigee-k8s-cluster-ready-check
If your cluster is ready, the output should look something like:
NAME                             COMPLETIONS   DURATION   AGE
apigee-k8s-cluster-ready-check   1/1           8s         1h23m
If the test failed and your cluster is not ready, the output will look something like:
NAME                             COMPLETIONS   DURATION   AGE
apigee-k8s-cluster-ready-check   0/1           44s        44s
Look for the number of completions:
- 1/1: Success. Your cluster is ready for Apigee hybrid installation.
- 0/1: The test failed and the cluster is not ready. Proceed to the following steps to troubleshoot the cluster.
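As an optional alternative to polling with kubectl get jobs, you can block until the job completes (the 300s timeout below is an arbitrary example value):

kubectl wait --for=condition=complete job/apigee-k8s-cluster-ready-check --timeout=300s

If the job fails, this command times out rather than succeeding; in that case, use kubectl get jobs and the log-collection steps below.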
- If the test did not succeed, check the logs with the following commands.
- Get the name of the pod for the cluster pre-check job:
kubectl get pods | grep apigee-k8s-cluster-ready-check
- Get the Kubernetes logs for the pod:
kubectl logs pod_name
Where pod_name is the name of the apigee-k8s-cluster-ready-check pod.
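As a shortcut, you can also read the logs through the job reference without looking up the pod name first (an optional alternative to the two steps above):

kubectl logs job/apigee-k8s-cluster-ready-check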
- Clean up before proceeding to the next step. Delete the Kubernetes job with the following command:
kubectl delete -f $APIGEE_HELM_CHARTS_HOME/cluster-check/apigee-k8s-cluster-ready-check.yaml
The output should show that the service account and job were deleted. For example:
kubectl delete -f $APIGEE_HELM_CHARTS_HOME/cluster-check/apigee-k8s-cluster-ready-check.yaml
serviceaccount "apigee-k8s-cluster-ready-check" deleted job.batch "apigee-k8s-cluster-ready-check" deleted
You have now verified that your cluster is ready for Apigee hybrid. Next, install the charts to apply your configuration to the hybrid runtime.
Troubleshooting
- Cassandra DNS check: If you find error logs similar to DNS resolution was successful but IP doesn't match POD IP, could not resolve hostname, or error determining hostname, it means your cluster DNS is not configured correctly for a multi-region setup. You can ignore this error if you do not intend to set up multi-region.
- Control Plane connectivity check: If you find error logs similar to error creating TCP connection with host, you need to resolve the connectivity from the cluster to apigee.googleapis.com and re-run the job.
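One way to test that connectivity is to run a temporary pod in the cluster and make an HTTPS request to the control plane endpoint. This is a rough sketch, not part of the official procedure; the pod name and the curlimages/curl image are illustrative choices:

kubectl run apigee-connectivity-test --rm -it --restart=Never \
  --image=curlimages/curl --command -- \
  curl -sv https://apigee.googleapis.com -o /dev/null

A completed TLS handshake in the output indicates the cluster can open a connection to apigee.googleapis.com; an HTTP error status returned by the endpoint itself still means the TCP connection works.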