The product described in this documentation, Anthos clusters on AWS (previous generation), is now in maintenance mode. All new installations must use the current-generation product, Anthos clusters on AWS.

Quickstart

This topic shows you how to create a workload on GKE on AWS and expose it internally to your cluster.

Before you begin

Before you start using GKE on AWS, make sure you have performed the following tasks:

Complete the Prerequisites.

Install a management service.

Create a user cluster.

From your anthos-aws directory, use anthos-gke to switch context to your user cluster:

cd anthos-aws
env HTTPS_PROXY=http://localhost:8118 \
  anthos-gke aws clusters get-credentials CLUSTER_NAME

Replace CLUSTER_NAME with your user cluster name.

You can perform these steps with kubectl, or with the Google Cloud console if you have authenticated with Connect. If you are using the Google Cloud console, skip to Launch an NGINX Deployment.
To connect to your GKE on AWS resources, perform the following steps. Select whether you have an existing AWS VPC (or a direct connection to your VPC) or created a dedicated VPC when you created your management service.
Existing VPC
If you have a direct or VPN connection to an existing VPC, omit the line env HTTPS_PROXY=http://localhost:8118 from the commands in this topic.
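The env HTTPS_PROXY=... prefix used throughout this topic sets the proxy variable only for the single command that follows it, without changing your shell session. A minimal sketch of the behavior (no cluster or tunnel required):

```shell
# `env VAR=value command` runs `command` with VAR set, leaving the
# current shell unaffected. This is why each kubectl command in this
# topic carries its own HTTPS_PROXY prefix.
env HTTPS_PROXY=http://localhost:8118 sh -c 'echo "$HTTPS_PROXY"'
# → http://localhost:8118
```

This is standard `env` behavior, not anything specific to GKE on AWS; it simply scopes the proxy setting to one command at a time.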
Dedicated VPC
When you create a management service in a dedicated VPC, GKE on AWS includes a bastion host in a public subnet.

Important: If you restart your terminal session or the SSH connection is lost, you need to re-launch the bastion-tunnel.sh script.
To connect to your management service, perform the following steps:

Change to the directory with your GKE on AWS configuration. You created this directory when installing the management service.

cd anthos-aws
To open the tunnel, run the bastion-tunnel.sh script. The tunnel forwards to localhost:8118.
To open a tunnel to the bastion host, run the following command:
./bastion-tunnel.sh -N
Messages from the SSH tunnel appear in this window. When you are ready to close the connection, stop the process with Control+C or close the window.
Open a new terminal and change into your anthos-aws directory.
cd anthos-aws
Check that you can connect to the cluster with kubectl.

env HTTPS_PROXY=http://localhost:8118 \
  kubectl cluster-info

The output includes the URL for the management service API server.

Launch an NGINX Deployment

In this section, you create a Deployment of the NGINX web server named nginx-1.

kubectl

Use kubectl create to create the Deployment.

env HTTPS_PROXY=http://localhost:8118 \
  kubectl create deployment --image nginx nginx-1

Use kubectl to get the status of the Deployment. Note the Pod's NAME.

env HTTPS_PROXY=http://localhost:8118 \
  kubectl get deployment

Console

To launch an NGINX Deployment with the Google Cloud console, perform the following steps:

Visit the GKE Workloads menu in the Google Cloud console: https://console.cloud.google.com/kubernetes/workload

Click Deploy.
Under Edit container, select Existing container image to choose a container image available from Container Registry. Fill Image path with the container image that you want to use and its version. For this quickstart, use nginx:latest.
Click Done, and then click Continue. The Configuration screen appears.
You can change your Deployment's Application name and Kubernetes Namespace. For this quickstart, you can use the application name nginx-1 and the namespace default.
From the Cluster drop-down menu, select your user cluster. By default, your first user cluster is named cluster-0.
Click Deploy. GKE on AWS launches your NGINX Deployment. The Deployment details screen appears.
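Whether you use kubectl or the console, the result is an ordinary Kubernetes Deployment. As a sketch only, a roughly equivalent manifest might look like the following; the label and replica count are illustrative assumptions, not values taken from this quickstart.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-1
  namespace: default
spec:
  replicas: 1                # assumed; the console default may differ
  selector:
    matchLabels:
      app: nginx-1           # illustrative label
  template:
    metadata:
      labels:
        app: nginx-1
    spec:
      containers:
      - name: nginx
        image: nginx:latest  # the image chosen in this quickstart
        ports:
        - containerPort: 80
```

You could apply such a manifest with env HTTPS_PROXY=http://localhost:8118 kubectl apply -f deployment.yaml instead of using the console.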
Exposing your pods
This section shows you how to do one of the following:
Expose your Deployment internally in your cluster and confirm that it is available with kubectl port-forward.
Expose your Deployment from the Google Cloud console to the addresses allowed by your node pool's security group.
kubectl
Expose port 80 of the Deployment to the cluster with kubectl expose.

env HTTPS_PROXY=http://localhost:8118 \
  kubectl expose deployment nginx-1 --port=80

The Deployment is now accessible from within the cluster.

Forward port 80 on the Deployment to port 8080 on your local machine with kubectl port-forward.

env HTTPS_PROXY=http://localhost:8118 \
  kubectl port-forward deployment/nginx-1 8080:80

Connect to http://localhost:8080 with curl or your web browser. The default NGINX web page appears.

curl http://localhost:8080

Console
From the Deployment details screen, click Expose. The Expose a deployment screen appears.
In the Port mapping section, leave the default port (80) and click Done.
For Service type, select Load balancer. For more information on other options, see Publishing services (ServiceTypes) in the Kubernetes documentation.

Click Expose. The Service details screen appears. GKE on AWS creates a Classic Elastic Load Balancer for the Service.

Note: Creating the load balancer takes several minutes. The hostname appears before you are able to access the Deployment.
Click the External Endpoints link. If the load balancer is ready, the default NGINX web page appears.
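The Load balancer option above corresponds to a Kubernetes Service of type LoadBalancer. A sketch of a roughly equivalent manifest follows; the Service name and selector label are illustrative assumptions rather than values this quickstart guarantees.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-1-service  # assumed name; the console generates one for you
spec:
  type: LoadBalancer     # provisions an external load balancer for the Service
  selector:
    app: nginx-1         # assumed label on the Deployment's Pods
  ports:
  - port: 80
    targetPort: 80
```

In contrast, the kubectl expose command shown earlier creates a ClusterIP Service by default, which is why that path uses kubectl port-forward for access instead of an external endpoint.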
View your Deployment in the Google Cloud console
If your cluster is connected to the Google Cloud console, you can view your Deployment on the GKE Workloads page. To view your workload, perform the following steps:

In your browser, visit the Google Kubernetes Engine Workloads page: https://console.cloud.google.com/kubernetes/workload. The list of workloads appears.

Click the name of your workload, nginx-1. The Deployment details screen appears.

From this screen, you can get details on your Deployment, view and edit the YAML configuration, and take other Kubernetes actions.

For more information on the options available from this page, see Deploying a stateless application in the GKE documentation.

Cleanup

To delete your NGINX Deployment, use kubectl delete or the Google Cloud console.

kubectl

env HTTPS_PROXY=http://localhost:8118 \
  kubectl delete service nginx-1 &&\
  kubectl delete deployment nginx-1

Console

Visit the Services and Ingress page in the Google Cloud console: https://console.cloud.google.com/kubernetes/discovery

Find your NGINX Service and click its name. By default, the name is nginx-1-service. The Service details screen appears.

Click Delete and confirm that you want to delete the Service. GKE on AWS deletes the load balancer.

Visit the Google Kubernetes Engine Workloads page. The list of workloads appears.

Click the name of your workload, nginx-1. The Deployment details screen appears.

Click Delete and confirm that you want to delete the Deployment. GKE on AWS deletes the Deployment.

What's next

Create an internal or external load balancer using one of the following Services:

AWS Classic and Network Load Balancer
AWS Application Load Balancer
Ingress with Cloud Service Mesh
You can use other types of Kubernetes workloads with GKE on AWS.
See the GKE documentation for more information on deploying workloads.
Last updated: 2024-07-02 (UTC)