This topic shows you how to create a workload on GKE on AWS and expose it internally to your cluster.
Before you begin
Before you start using GKE on AWS, make sure you have performed the following tasks:
- Complete the Prerequisites.
- Install a management service.
- Create a user cluster.
- From your `anthos-aws` directory, use `anthos-gke` to switch context to your user cluster.

  ```shell
  cd anthos-aws
  env HTTPS_PROXY=http://localhost:8118 \
    anthos-gke aws clusters get-credentials CLUSTER_NAME
  ```

  Replace CLUSTER_NAME with your user cluster name.
You can perform these steps with `kubectl`, or with the Google Cloud console if you have authenticated with Connect. If you are using the Google Cloud console, skip to Launch an NGINX Deployment.
To connect to your GKE on AWS resources, perform the following steps. Select whether you have an existing AWS VPC (or a direct connection to your VPC) or created a dedicated VPC when creating your management service.
Existing VPC
If you have a direct or VPN connection to an existing VPC, omit the line `env HTTPS_PROXY=http://localhost:8118` from the commands in this topic.
Dedicated VPC
When you create a management service in a dedicated VPC, GKE on AWS includes a bastion host in a public subnet.
To connect to your management service, perform the following steps:
Change to the directory with your GKE on AWS configuration. You created this directory when Installing the management service.

```shell
cd anthos-aws
```
To open a tunnel to the bastion host, run the `bastion-tunnel.sh` script. The tunnel forwards to `localhost:8118`.

```shell
./bastion-tunnel.sh -N
```
Messages from the SSH tunnel appear in this window. When you are ready to close the connection, stop the process by using Control+C or closing the window.
Open a new terminal and change into your `anthos-aws` directory.

```shell
cd anthos-aws
```
Check that you're able to connect to the cluster with `kubectl`.

```shell
env HTTPS_PROXY=http://localhost:8118 \
  kubectl cluster-info
```
The output includes the URL for the management service API server.
Launch an NGINX Deployment
In this section, you create a Deployment of the NGINX web server named `nginx-1`.
kubectl
Use `kubectl create` to create the Deployment.

```shell
env HTTPS_PROXY=http://localhost:8118 \
  kubectl create deployment --image nginx nginx-1
```
Use `kubectl` to get the status of the Deployment. Note the Pod's `NAME`.

```shell
env HTTPS_PROXY=http://localhost:8118 \
  kubectl get deployment
```
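The imperative `kubectl create deployment` command above corresponds to a declarative manifest that you could apply instead. This is a minimal sketch, not the exact object the tool emits; the `app: nginx-1` label matches what `kubectl create deployment` generates by default:

```yaml
# Sketch of the Deployment created by:
#   kubectl create deployment --image nginx nginx-1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-1
  labels:
    app: nginx-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-1
  template:
    metadata:
      labels:
        app: nginx-1
    spec:
      containers:
      - name: nginx
        image: nginx
```

Save the manifest to a file (for example, a hypothetical `nginx-1.yaml`) and apply it with `env HTTPS_PROXY=http://localhost:8118 kubectl apply -f nginx-1.yaml`.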
Console
To launch an NGINX Deployment with the Google Cloud console, perform the following steps:
Visit the GKE Workloads menu in Google Cloud console.
Click Deploy.
Under Edit container, select Existing container image to choose a container image available from Container Registry. Fill Image path with the container image that you want to use and its version. For this quickstart, use `nginx:latest`.

Click Done, and then click Continue. The Configuration screen appears.
You can change your Deployment's Application name and Kubernetes Namespace. For this quickstart, you can use the application name `nginx-1` and the namespace `default`.

From the Cluster drop-down menu, select your user cluster. By default, your first user cluster is named `cluster-0`.

Click Deploy. GKE on AWS launches your NGINX Deployment. The Deployment details screen appears.
Exposing your pods
This section shows how to do one of the following:
- Expose your Deployment internally in your cluster and confirm it is available with `kubectl port-forward`.
- Expose your Deployment from the Google Cloud console to the addresses allowed by your node pool security group.
kubectl
Expose port 80 of the Deployment to the cluster with `kubectl expose`.

```shell
env HTTPS_PROXY=http://localhost:8118 \
  kubectl expose deployment nginx-1 --port=80
```
The Deployment is now accessible from within the cluster.
Forward port `80` on the Deployment to port `8080` on your local machine with `kubectl port-forward`.

```shell
env HTTPS_PROXY=http://localhost:8118 \
  kubectl port-forward deployment/nginx-1 8080:80
```
Connect to `http://localhost:8080` with `curl` or your web browser. The default NGINX web page appears.

```shell
curl http://localhost:8080
```
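The `kubectl expose` step above is equivalent to applying a Service manifest. A minimal sketch, assuming the default `app: nginx-1` label that `kubectl create deployment` puts on the Pods; because no `type` is specified, Kubernetes defaults to `ClusterIP`, so the Service is reachable only from inside the cluster:

```yaml
# Sketch of the Service created by:
#   kubectl expose deployment nginx-1 --port=80
apiVersion: v1
kind: Service
metadata:
  name: nginx-1
spec:
  selector:
    app: nginx-1   # matches the label on the Deployment's Pods
  ports:
  - port: 80       # port the Service listens on
    targetPort: 80 # port the NGINX container serves
```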
Console
Visit the GKE Workloads menu in Google Cloud console.
From the Deployment details screen, click Expose. The Expose a deployment screen appears.
In the Port mapping section, leave the default port (`80`), and click Done.

For Service type, select Load balancer. For more information on other options, see Publishing services (ServiceTypes) in the Kubernetes documentation.
Click Expose. The Service details screen appears. GKE on AWS creates a Classic Elastic Load Balancer for the Service.
Click on the link for External Endpoints. If the load balancer is ready, the default NGINX web page appears.
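Behind the scenes, the console creates a Service of `type: LoadBalancer` for the Deployment. A hedged sketch of such a manifest, assuming the `nginx-1-service` name the console generates by default and the `app: nginx-1` Pod label:

```yaml
# Sketch of the LoadBalancer Service the console creates;
# GKE on AWS provisions a Classic Elastic Load Balancer for it.
apiVersion: v1
kind: Service
metadata:
  name: nginx-1-service
spec:
  type: LoadBalancer
  selector:
    app: nginx-1
  ports:
  - port: 80
    targetPort: 80
```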
View your Deployment on Google Cloud console
If your cluster is connected to Google Cloud console, you can view your Deployment in the GKE Workloads page. To view your workload, perform the following steps:
In your browser, visit the Google Kubernetes Engine Workloads page.
The list of Workloads appears.
Click the name of your workload, `nginx-1`. The Deployment details screen appears.

From this screen, you can get details on your Deployment, view and edit YAML configuration, and take other Kubernetes actions.
For more information on options available from this page, see Deploying a stateless application in the GKE documentation.
Cleanup
To delete your NGINX Deployment, use `kubectl delete` or the Google Cloud console.
kubectl
```shell
env HTTPS_PROXY=http://localhost:8118 \
  kubectl delete service nginx-1 && \
env HTTPS_PROXY=http://localhost:8118 \
  kubectl delete deployment nginx-1
```
Console
Visit the Services and Ingress page in the Google Cloud console.

Find your NGINX Service and click its Name. By default, the name is `nginx-1-service`. The Service details screen appears.

Click Delete and confirm that you want to delete the Service. GKE on AWS deletes the load balancer.

Visit the Google Kubernetes Engine Workloads page.
The list of Workloads appears.
Click the name of your workload, `nginx-1`. The Deployment details screen appears.

Click Delete and confirm that you want to delete the Deployment. GKE on AWS deletes the Deployment.
What's next?
Create an internal or external load balancer using a Kubernetes Service.
You can use other types of Kubernetes Workloads with GKE on AWS. See the GKE documentation for more information on Deploying workloads.