[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-04 (世界標準時間)。"],[],[],null,["# Troubleshoot appliance access issues\n\nThis page outlines how to troubleshoot appliance inaccessibility issues post-bootstrapping. You might encounter the following issues:\n\n- Error messages such as `Unable to connect to the server: dial tcp 198.18.0.64:443: i/o timeout` when attempting to query using kubectl.\n- `Webpage not available` error when trying to access the UI.\n- Deployed applications on the appliance are not working, or you cannot deploy any new applications.\n\nTroubleshooting UI inaccessible issue\n-------------------------------------\n\n1. Follow the [UI inaccessible](/distributed-cloud/hosted/docs/1.14/gdch/gdch-io/service-manual/ui/runbooks/ui-r0001) runbook to troubleshoot the issue.\n2. Check if the cluster is reachable by following the [Cluster reachability](#cluster-reachable) section.\n3. If the cluster is responsive, verify if the management API is accessible by following the [Management API accessible](#mgmt-api-accessible) section.\n4. If the cluster is not reachable and returns errors like `Connection timed out` or `i/o timeout error`, refer to the [troubleshooting guide](#basic-troubleshooting-steps) for further troubleshooting steps.\n\nBasic troubleshooting steps\n---------------------------\n\n1. Verify the power supply of the chassis by checking if the indicator lights (green) on either of the two power supplies are illuminated, as indicated by the arrows in the image.\n\n2. If the indicator lights are off, first ensure the power cord is receiving power. If the power cord is functioning properly, the power supplies are likely faulty and need to be replaced. For replacement instructions, refer to the [Power Supply Replacement Guide](/distributed-cloud/hosted/docs/1.14/gdch/gdch-io/service-manual/appl/runbooks/appl-r0011).\n\n3. If the power supplies are functioning but the device is still not working, check for any [loose or damaged connections](#loose-or-damaged-connections).\n\n4. Verify the LEDs of the switch and servers are illuminated as indicated by the arrows in the image.\n\n5. If the Link LED of the switch is solid green, verify it is operational by following [Verify switch operational](#verify-switch-operational) section.\n\n6. If the switch health and configuration is correct, log in to iLO using the steps mentioned in [Steps to log in to iLO](/distributed-cloud/hosted/docs/1.14/gdch/gdch-io/service-manual/appl/guides/appl-g0001). to check the health of the device.\n\n 1. If any of the fans are critical, contact HPE support team to get a replacement of the critical fan and follow the [Fan Replacement Guide](https://support.hpe.com/hpesc/public/docDisplay?docId=psg000031aen_us&page=GUID-84AFBBBB-F451-4EF3-8A6E-890CAC2A799E.html) to replace it.\n 2. If any blades are powered off, turn them on by navigating to the Blades section, selecting the blade, and pressing the power button.\n 3. If any of the blades are in a critical state, navigate to the Blades section, select the critical blade, go to the Power section, and initiate a Force System Reset.\n 4. 
### Loose or damaged connections

1. Verify that all connections are secure and properly seated. For guidance on checking and securing cable connections within the appliance, refer to [check cables](/distributed-cloud/hosted/docs/latest/appliance/admin/check-cables).

2. Inspect the cables for any visible damage. If any cables are damaged, replace them.

### Verify switch operational

1. [Sign in to the switch's serial console](/distributed-cloud/hosted/docs/1.14/gdch/gdch-io/service-manual/appl/guides/appl-g0002). If the login is successful, run the following command to check the health of the switch. The command displays the uptime and resource consumption of the switch.

        show version

2. If the serial console is responsive, validate the BGP configuration on the switch by referring to [Validate BGP Summary](/distributed-cloud/hosted/docs/1.14/gdch/gdch-io/service-manual/pnet/runbooks/pnet-r3000).

3. If the Link LED is off or the serial console is unresponsive, the switch might be faulty. Escalate the issue to Google Support for a replacement.

Verify cluster reachability
---------------------------

1. Log in to the gdcloud session with IO credentials:

        gdcloud auth login

2. If you are unable to log in, locate the emergency credential (`root-admin-kubeconfig`) backed up during the [appliance setup](/distributed-cloud/hosted/docs/latest/appliance/admin/setup#back_up_emergency_credentials) and use it with the `--kubeconfig` flag in the following command.

3. Check if the cluster is reachable (a variant with an explicit timeout is shown after this list):

        kubectl --kubeconfig root-admin-kubeconfig get servers -A
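If the `get servers` query hangs indefinitely, it can help to bound the wait and to confirm basic node health as well. The following is a minimal sketch, assuming the emergency `root-admin-kubeconfig` file is in the current directory; `--request-timeout` is a standard kubectl flag:

    # Bound how long kubectl waits for the API server before giving up.
    kubectl --kubeconfig root-admin-kubeconfig get servers -A --request-timeout=30s

    # If the API server responds, also confirm that the nodes report Ready.
    kubectl --kubeconfig root-admin-kubeconfig get nodes --request-timeout=30s

If these commands still time out, return to the [Basic troubleshooting steps](#basic-troubleshooting-steps) section.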
Verify Management API accessibility
-----------------------------------

1. Log in to the gdcloud session with IO credentials:

        gdcloud auth login

   If the login fails, log in with management plane credentials.
2. The AIS database can sometimes malfunction or be misconfigured, causing login failures. Refer to [IAM-R0009 - AIS Database](/distributed-cloud/hosted/docs/1.14/gdch/gdch-io/service-manual/iam/runbooks/iam-r0009).

3. If you are unable to resolve the login issues, locate the emergency credential (`root-admin-kubeconfig`) backed up during the [appliance setup](/distributed-cloud/hosted/docs/latest/appliance/admin/setup#back_up_emergency_credentials) and use it with the `--kubeconfig` flag in the following commands.

4. Fetch the management plane kubeconfig:

        kubectl --kubeconfig root-admin-kubeconfig -n management-kube-system get secret kube-admin-remote-kubeconfig -ojsonpath='{.data.value}' | base64 -d > kube-admin-remote-kubeconfig

5. Get the health status of the cluster:

        kubectl --kubeconfig kube-admin-remote-kubeconfig get --raw='/readyz?verbose'
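A healthy API server typically reports `ok` for each check listed by `/readyz?verbose`; if a check fails, the verbose output names the failing component. As a follow-up, this minimal sketch, assuming the same `kube-admin-remote-kubeconfig` file, queries the liveness endpoint and a single readiness check, both of which are standard Kubernetes API server health endpoints:

    # Per-check liveness status of the management API server.
    kubectl --kubeconfig kube-admin-remote-kubeconfig get --raw='/livez?verbose'

    # Query only the etcd readiness check.
    kubectl --kubeconfig kube-admin-remote-kubeconfig get --raw='/readyz/etcd'

If the health checks fail or the API server does not respond, escalate to Google Support and attach the output.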