This page details how to troubleshoot common software provisioning issues you might face.
Troubleshoot control plane bootstrapping
To troubleshoot the control plane bootstrap process, check the following resources within the KIND cluster:
Examine the anthos-cluster-operator Pod:

kubectl logs deployment/anthos-cluster-operator -c operator -n kube-system

Inspect the AddOn resources:

kubectl get AddOn -A

You can also view the resources with the gpc-addons command for more information.

Check the AddOnSet resources:

kubectl get AddOnSet -A
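If any AddOn or AddOnSet from these listings is not ready, you can inspect its full status with a standard kubectl query. This is a minimal sketch; ADDON_NAME and NAMESPACE are placeholders for values taken from the previous output:

# Print the spec and status, including conditions, of one AddOn
kubectl get AddOn ADDON_NAME -n NAMESPACE -o yaml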
If the control plane bootstrap does not finish processing after the creation of a root admin cluster, such as during a resource pivot, export the kubeconfig file manually:

kubectl get secret root-admin-kubeconfig -n cluster-admin -o jsonpath="{.data.value}" | base64 -d > /root/path/to/root-admin-kubeconfig.yaml
Troubleshoot cluster creation
To troubleshoot the cluster creation process, follow these steps:
From the KIND cluster, examine the current stage of the cluster and check whether the Harbor-operator or Harbor object exists, and its status, in the AddOnSet resource by running the following command:

kubectl get addonset -n fleet-root root-admin -o yaml

After running this command, find the current stage in the status.currentStage field.
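If you only need the stage value, a JSONPath query against the same resource prints it directly. This is an optional convenience, not a required step:

# Print only the current stage of the root-admin AddOnSet
kubectl get addonset -n fleet-root root-admin -o jsonpath='{.status.currentStage}'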
From the root admin cluster, check the resources in the harbor-system namespace. If the HarborCluster object is returned by the following command, the installation has moved to the second stage, the Harbor stage:

kubectl get harborcluster harbor -n harbor-system

Inspect the status field in the HarborCluster object:

kubectl -n harbor-system get harborcluster harbor
If the status indicates healthy, verify Harbor's functionality through the web portal and with the docker push command:

- Access the web portal at the reported URL. For example: https://10.200.0.36:10443.
- In the bootstrapper, run the docker push command.
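The following is a minimal sketch of such a push, using the example registry address shown above; the library project and the busybox image are placeholders, so substitute a project and an image that exist in your environment:

# Authenticate against the Harbor registry
docker login 10.200.0.36:10443
# Tag a local image for the registry and push it
docker tag busybox:latest 10.200.0.36:10443/library/busybox:latest
docker push 10.200.0.36:10443/library/busybox:latest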
If the status is unhealthy, review the details from the status conditions by running the following command:

kubectl -n harbor-system get harborcluster harbor -o yaml
If the status shows healthy, go to the Check the istio status section. If the status does not show healthy, continue to the next section, Evaluate Harbor namespace resources.
Evaluate Harbor namespace resources
Perform these steps only if the harbor-system resources do not have a status of healthy. To check the status of your harbor-system resources, see Troubleshoot cluster creation.
Evaluate the harbor-system namespace resources:

kubectl get all -n harbor-system
Check the status for resources related to the Harbor operator, including the PostgreSQL and Redis operators that are built as sub-charts during the second stage of the root admin cluster creation process. Validate that the following operators are in the Running state:

deployment/harbor-operator-harbor-operator
deployment/harbor-operator-postgres-operator
deployment/harbor-operator-redisoperator
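A quick way to confirm this is to list the three deployments and check their READY counts. This sketch assumes the operators run in the harbor-system namespace:

# All three deployments should report their replicas as ready
kubectl -n harbor-system get deploy harbor-operator-harbor-operator harbor-operator-postgres-operator harbor-operator-redisoperator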
Verify that the resources for the PostgreSQL, Redis, and Harbor components all have a state of healthy. If some Pods are not in a healthy state, review the details from the failing Pod using these commands:

kubectl get pod -o wide
kubectl describe pod

You can also view the logs from the managing operator using this command:

kubectl logs -f deploy/harbor-operator-postgres-operator
To check the PostgreSQL resource, view statefulset/postgresql-harbor-system-harbor. To check the Redis resources, view the following:

- Redis instance: statefulset/rfr-harbor-redis
- Redis sentinel: deployment/rfs-harbor-redis
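For example, you can query both with direct get commands. This sketch assumes the resources live in the harbor-system namespace:

# PostgreSQL StatefulSet
kubectl -n harbor-system get statefulset/postgresql-harbor-system-harbor
# Redis instance and sentinel
kubectl -n harbor-system get statefulset/rfr-harbor-redis deployment/rfs-harbor-redis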
The Harbor core components only deploy when both PostgreSQL and Redis are ready.
If some Harbor core component Pods fail, review the logs from the failing Pod. In some cases, there is a communication issue with PostgreSQL or Redis instances.
For example, if the Harbor core component cannot connect to PostgreSQL or Redis with the provided credentials, there is a potential issue with the Harbor Operator, which is responsible for setting up the credentials before deploying Harbor core components.
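A minimal sketch of that review follows, where POD_NAME is a placeholder for a failing Harbor core Pod from the earlier listing; the second command cross-checks the Harbor operator logs for credential or provisioning errors:

# Logs from the failing Harbor core Pod
kubectl -n harbor-system logs POD_NAME
# Logs from the Harbor operator that provisions the credentials
kubectl -n harbor-system logs -f deploy/harbor-operator-harbor-operator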
Check the istio status
Verify the status of the istio Pods, such as those in the istio-system namespace, and of service/istio-ingressgateway, and check whether port 10443 is exposed.
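A minimal sketch of those checks follows, assuming istio runs in the istio-system namespace named above; the Pods should be Running, and 10443 should appear among the ports of the ingress gateway Service:

# Check the istio Pod status
kubectl get pods -n istio-system
# Confirm that the ingress gateway Service exposes port 10443
kubectl get service istio-ingressgateway -n istio-system -o yaml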