[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-07-31 (世界標準時間)。"],[],[],null,["# Quickstart\n\nThis topic shows you how to create a workload on GKE on AWS\nand expose it internally to your cluster.\n\nBefore you begin\n----------------\n\n\nBefore you start using GKE on AWS, make sure you have performed the following tasks:\n\n- Complete the [Prerequisites](/kubernetes-engine/multi-cloud/docs/aws/previous-generation/how-to/prerequisites).\n\n\u003c!-- --\u003e\n\n- Install a [management service](/kubernetes-engine/multi-cloud/docs/aws/previous-generation/how-to/installing-management).\n- Create a [user cluster](/kubernetes-engine/multi-cloud/docs/aws/previous-generation/how-to/creating-user-cluster).\n- From your `anthos-aws` directory, use `anthos-gke` to switch context to your user cluster. \n\n ```sh\n cd anthos-aws\n env HTTPS_PROXY=http://localhost:8118 \\\n anthos-gke aws clusters get-credentials CLUSTER_NAME\n ```\n Replace \u003cvar translate=\"no\"\u003eCLUSTER_NAME\u003c/var\u003e with your user cluster name.\n\nYou can perform these steps with `kubectl`, or with the Google Cloud console if you\nhave\n[Authenticated with Connect](/kubernetes-engine/multi-cloud/docs/aws/previous-generation/how-to/connecting-to-a-cluster).\nIf you are using the Google Cloud console, skip to\n[Launch an NGINX Deployment](#launch_an_nginx_deployment).\n\nTo connect to your GKE on AWS resources, perform the following\nsteps. Select if you have an existing AWS VPC (or direct connection to\nyour VPC) or created a dedicated VPC when creating your management service. 
\n\n### Existing VPC\n\nIf you have a direct or VPN connection to an existing VPC, omit the line\n`env HTTP_PROXY=http://localhost:8118` from commands in this topic.\n\n### Dedicated VPC\n\nWhen you create a management service in a dedicated VPC,\nGKE on AWS includes a\n[bastion](https://en.wikipedia.org/wiki/Bastion_host) host in a\npublic subnet.\n| **Important:** If you restart your terminal session or the SSH connection is lost, you need to re-launch the `bastion-tunnel.sh` script.\n\nTo connect to your management service, perform the following steps:\n\n1. Change to the directory with your GKE on AWS configuration.\n You created this directory when\n [Installing the management service](/kubernetes-engine/multi-cloud/docs/aws/previous-generation/how-to/installing-management).\n\n ```sh\n cd anthos-aws\n ```\n\n \u003cbr /\u003e\n\n2. To open the tunnel, run the `bastion-tunnel.sh` script. The tunnel forwards\n to `localhost:8118`.\n\n To open a tunnel to the bastion host, run the following command: \n\n ./bastion-tunnel.sh -N\n\n Messages from the SSH tunnel appear in this window. When you are ready to\n close the connection, stop the process by using \u003ckbd\u003eControl+C\u003c/kbd\u003e or\n closing the window.\n3. Open a new terminal and change into your `anthos-aws` directory.\n\n ```sh\n cd anthos-aws\n ```\n4. Check that you're able to connect to the cluster with `kubectl`.\n\n env HTTPS_PROXY=http://localhost:8118 \\\n kubectl cluster-info\n\n The output includes the URL for the management service API server.\n\nLaunch an NGINX Deployment\n--------------------------\n\nIn this section, you create a\n[Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/)\nof the NGINX webserver named `nginx-1`. \n\n### kubectl\n\n1. Use `kubectl create` to create the Deployment.\n\n env HTTPS_PROXY=http://localhost:8118 \\\n kubectl create deployment --image nginx nginx-1\n\n2. Use `kubectl` to get the status of the Deployment. 
   Note the Deployment's `NAME`.

   ```sh
   env HTTPS_PROXY=http://localhost:8118 \
     kubectl get deployment
   ```

### Console

To launch an NGINX Deployment with the Google Cloud console, perform the
following steps:

1. Visit the GKE Workloads menu in the Google Cloud console.

   [Visit the Workloads menu](https://console.cloud.google.com/kubernetes/workload)

2. Click **Deploy**.

3. Under **Edit container**, select **Existing container image** to choose
   a container image available from Container Registry. Fill **Image path**
   with the container image that you want to use and its version. For this
   quickstart, use `nginx:latest`.

4. Click **Done**, and then click **Continue**. The **Configuration**
   screen appears.

5. You can change your Deployment's **Application name** and
   Kubernetes **Namespace**. For this quickstart, you can use
   the application name `nginx-1` and the namespace `default`.

6. From the **Cluster** drop-down menu, select your user cluster. By
   default, your first user cluster is named `cluster-0`.

7. Click **Deploy**. GKE on AWS launches your NGINX Deployment.
   The **Deployment details** screen appears.

### Exposing your pods

This section shows how to do one of the following:

- Expose your Deployment internally in your cluster and confirm that it is
  available with `kubectl port-forward`.

- Expose your Deployment from the Google Cloud console to the addresses allowed by
  your node pool [security group](/kubernetes-engine/multi-cloud/docs/aws/previous-generation/reference/security-groups).

### kubectl

1. Expose port 80 of the Deployment to the cluster with `kubectl expose`.

   ```sh
   env HTTPS_PROXY=http://localhost:8118 \
     kubectl expose deployment nginx-1 --port=80
   ```

   The Deployment is now accessible from within the cluster.
2. Forward port `80` on the Deployment to port `8080` on your local machine
   with `kubectl port-forward`.

   ```sh
   env HTTPS_PROXY=http://localhost:8118 \
     kubectl port-forward deployment/nginx-1 8080:80
   ```

3. Connect to `http://localhost:8080` with `curl` or your web browser.
   The default NGINX web page appears.

   ```sh
   curl http://localhost:8080
   ```

### Console

1. Visit the GKE Workloads menu in the Google Cloud console.

   [Visit the Workloads menu](https://console.cloud.google.com/kubernetes/workload)

2. From the **Deployment details** screen, click **Expose**. The
   **Expose a deployment** screen appears.

3. In the **Port mapping** section, leave the default port (`80`), and
   click **Done**.

4. For **Service type**, select **Load balancer**. For more information
   on the other options, see
   [Publishing services (ServiceTypes)](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types)
   in the Kubernetes documentation.

5. Click **Expose**. The **Service details** screen appears.
   GKE on AWS creates a
   [Classic Elastic Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/introduction.html)
   for the Service.

   | **Note:** Creating the load balancer takes several minutes. The hostname appears before you are able to access the Deployment.

6. Click the link for **External Endpoints**. If the load balancer is
   ready, the default NGINX web page appears.

### View your Deployment on the Google Cloud console

If your cluster is
[connected to the Google Cloud console](/kubernetes-engine/multi-cloud/docs/aws/previous-generation/how-to/connecting-to-a-cluster),
you can view your Deployment on the GKE Workloads page. To view
your workload, perform the following steps:
1. In your browser, visit the Google Kubernetes Engine [Workloads page](https://console.cloud.google.com/kubernetes/workload).

   The list of Workloads appears.

2. Click the name of your workload, `nginx-1`. The **Deployment details**
   screen appears.

3. From this screen, you can get details on your Deployment, view and edit its
   YAML configuration, and take other Kubernetes actions.

For more information on the options available from this page, see
[Deploying a stateless application](/kubernetes-engine/docs/how-to/stateless-apps#console)
in the GKE documentation.

### Cleanup

To delete your NGINX Deployment, use `kubectl delete` or the Google Cloud console.

### kubectl

```sh
env HTTPS_PROXY=http://localhost:8118 \
  kubectl delete service nginx-1 && \
env HTTPS_PROXY=http://localhost:8118 \
  kubectl delete deployment nginx-1
```

### Console

1. Visit the Services and Ingress page in the Google Cloud console.

   [Visit the Services and Ingress page](https://console.cloud.google.com/kubernetes/discovery)

2. Find your NGINX Service and click its **Name**. By default, the name
   is `nginx-1-service`. The **Service details** screen appears.

3. Click **Delete** and confirm that you want to delete the Service.
   GKE on AWS deletes the load balancer.

4. Visit the Google Kubernetes Engine [Workloads page](https://console.cloud.google.com/kubernetes/workload).

   The list of Workloads appears.

5. Click the name of your workload, `nginx-1`. The **Deployment details**
   screen appears.
6. Click **Delete** and confirm that you want to delete the Deployment.
   GKE on AWS deletes the Deployment.

What's next?
------------

Create an internal or external load balancer using one of the following Services:

- [AWS Classic and Network Load Balancer](/kubernetes-engine/multi-cloud/docs/aws/previous-generation/how-to/loadbalancer)
- [AWS Application Load Balancer](/kubernetes-engine/multi-cloud/docs/aws/previous-generation/how-to/loadbalancer-alb)
- [Ingress with Cloud Service Mesh](/kubernetes-engine/multi-cloud/docs/aws/previous-generation/how-to/ingress)

You can use other types of Kubernetes Workloads with GKE on AWS.
See the GKE documentation for more information on
[Deploying workloads](/kubernetes-engine/docs/how-to/deploying-workloads-overview).
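The imperative `kubectl create deployment` and `kubectl expose` commands in this quickstart also have a declarative equivalent that you can keep in version control. The following manifest is a minimal sketch, not the official configuration: the names `nginx-1` and port `80` match this quickstart, while the replica count and labels are illustrative assumptions.

```yaml
# nginx-1.yaml — illustrative declarative equivalent of the quickstart steps.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-1
spec:
  replicas: 1            # assumption; kubectl create deployment also defaults to 1
  selector:
    matchLabels:
      app: nginx-1       # illustrative label; any consistent label works
  template:
    metadata:
      labels:
        app: nginx-1
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-1
spec:
  selector:
    app: nginx-1
  ports:
  - port: 80             # same port the quickstart exposes
    targetPort: 80
```

Apply it with `env HTTPS_PROXY=http://localhost:8118 kubectl apply -f nginx-1.yaml`, and remove both resources with `kubectl delete -f nginx-1.yaml`.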