This tutorial describes how to upload a container application to a Google Distributed Cloud (GDC) air-gapped environment and run that application on a Kubernetes cluster. A containerized workload runs on a Kubernetes cluster within a project namespace. Clusters are logically separate from projects and from each other to provide different failure domains and isolation guarantees. However, you must attach your cluster to a project so that containerized workloads can be managed within the project.
One of the largest obstacles for deploying a container app is getting the binary for the app to your air-gapped data center. Work with your infrastructure team and administrators to transport the application to your workstation or implement this tutorial directly on your continuous integration and continuous delivery (CI/CD) server.
This tutorial uses a sample web server app included by default in the system artifact registry.
Objectives
- Push a container image to the artifact registry.
- Create a Kubernetes cluster.
- Deploy the sample container app to the cluster.
Costs
Because GDC is designed to run in an air-gapped data center, billing processes and information are confined to the GDC deployment and are not managed by other Google products.
To generate a cost estimate based on your projected usage, use the pricing calculator.
Use the Projected Cost dashboard to anticipate future SKU costs for your invoices.
To track storage and compute consumption, use the Billing Usage dashboards.
Before you begin
Make sure you have a project to manage your containerized deployments. Create a project if you don't have one.
Ask your Organization IAM Admin to grant you the Namespace Admin role.
Retrieve the GDC version:
gdcloud version
Set the GDC_VERSION environment variable to the output of gdcloud version:
export GDC_VERSION=GDC_VERSION
Replace GDC_VERSION with the version value returned by the previous command.
Sign in to the org admin cluster and generate its kubeconfig file with a user identity. Make sure you set the kubeconfig path as an environment variable:
export ORG_ADMIN_CLUSTER_KUBECONFIG=ORG_ADMIN_CLUSTER_KUBECONFIG_PATH
Push the container image to the artifact registry
A preview managed container registry service is accessible to Platform Administrators (PA) or Application Operators (AO) of GDC. If your deployment has not enabled preview features, you must deploy your production container images from an existing registry or deploy your own artifact registry solution.
For this tutorial, you deploy the nginx web server app to a Kubernetes cluster. The sample app is already deployed to the system artifact registry by default, and is accessible from the system artifact registry without elevated permissions by using a registry mirror.
Prepare your environment to access the nginx sample container app from the system artifact registry:
Set the system artifact registry URL variable:
export AR_ADDR=gcr.io
The gcr.io value is a registry mirror that lets you access the system artifact registry.
Confirm that the AR_ADDR environment variable has the correct value:
echo $AR_ADDR
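Together, the AR_ADDR and GDC_VERSION variables form the full reference of the sample image you deploy later in this tutorial. A minimal sketch of how the reference is composed, using a hypothetical version value purely for illustration:

```shell
# Hypothetical values for illustration only; in your environment these
# come from the earlier steps (the gcr.io mirror and `gdcloud version`).
AR_ADDR=gcr.io
GDC_VERSION=1.14.0

# This image path matches the reference used in the Deployment manifest
# later in this tutorial.
IMAGE="$AR_ADDR/library/private-cloud-staging/nginx:$GDC_VERSION"
echo "$IMAGE"
# Prints: gcr.io/library/private-cloud-staging/nginx:1.14.0
```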
Create a Kubernetes cluster
Now that you have the nginx container image stored in the artifact registry and can access it, create a Kubernetes cluster to run the nginx web server.
Console
In the navigation menu, select Clusters.
Click Create Cluster.
In the Name field, specify a name for the cluster.
Click Attach Project and select a project to attach to your cluster. Then click Save.
Click Create.
Wait for the cluster to be created. When the cluster is available to use, the status READY appears next to the cluster name.
API
Create a Cluster custom resource and save it as a YAML file, such as cluster.yaml:
apiVersion: cluster.gdc.goog/v1
kind: Cluster
metadata:
  name: CLUSTER_NAME
  namespace: platform
Replace CLUSTER_NAME with the name of the cluster.
Apply the custom resource to your GDC instance:
kubectl apply -f cluster.yaml --kubeconfig ${ORG_ADMIN_CLUSTER_KUBECONFIG}
Attach a project to your Kubernetes cluster using the GDC console. You cannot attach a project to the cluster using the API at this time.
For more information on creating a Kubernetes cluster, see Create a user cluster.
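If you created the cluster through the API, you can also watch its progress from the org admin cluster rather than the console. A hedged sketch, assuming the Cluster custom resource surfaces its state in `kubectl get` output (column names can vary by release), which cannot run outside a live GDC environment:

```shell
# Sketch: list Cluster resources in the platform namespace and watch
# until the new cluster reports a ready state. Uses the kubeconfig
# exported earlier in this tutorial.
kubectl get clusters.cluster.gdc.goog -n platform \
    --kubeconfig ${ORG_ADMIN_CLUSTER_KUBECONFIG} --watch
```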
Deploy the sample container app
You are now ready to deploy the nginx container image to your Kubernetes cluster.
Kubernetes represents applications as Pod resources, which are scalable units holding one or more containers. The pod is the smallest deployable unit in Kubernetes. Usually, you deploy pods as a set of replicas that can be scaled and distributed together across your cluster. One way to deploy a set of replicas is through a Kubernetes Deployment.
In this section, you create a Kubernetes Deployment to run the nginx container app on your cluster. This Deployment has replicas, or pods. One Deployment pod contains only one container: the nginx container image. You also create a Service resource that provides a stable way for clients to send requests to the pods of your Deployment.
Deploy the nginx web server to your Kubernetes cluster:
Sign in to the Kubernetes cluster and generate its kubeconfig file with a user identity. Make sure you set the kubeconfig path as an environment variable:
export KUBECONFIG=CLUSTER_KUBECONFIG_PATH
Create and deploy the Kubernetes Deployment and Service custom resources:
kubectl --kubeconfig ${KUBECONFIG} \
    apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: $AR_ADDR/library/private-cloud-staging/nginx:$GDC_VERSION
        args: []
        ports:
        - containerPort: 80
        resources: {}
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    protocol: TCP
  type: LoadBalancer
EOF
Verify that the Deployment created the pods:
kubectl get pods -l app=nginx --kubeconfig ${KUBECONFIG}
The output is similar to the following:
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-1882529037-6p4mt   1/1     Running   0          1h
nginx-deployment-1882529037-p29za   1/1     Running   0          1h
Export the IP address for the nginx service:
export IP=$(kubectl --kubeconfig=${KUBECONFIG} get service nginx-service \
    -o jsonpath='{.status.loadBalancer.ingress[*].ip}')
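The LoadBalancer IP can take a short time to be assigned, in which case the jsonpath query above returns an empty string. One way to handle this is a small polling helper; `wait_for` below is a hypothetical convenience function, not part of GDC or kubectl:

```shell
# wait_for runs a command up to ATTEMPTS times, one second apart,
# and prints the command's first non-empty output.
wait_for() {
  attempts=$1; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    out=$("$@")
    if [ -n "$out" ]; then
      echo "$out"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Example usage against the service (requires a live cluster):
# IP=$(wait_for 24 kubectl --kubeconfig=${KUBECONFIG} get service nginx-service \
#     -o jsonpath='{.status.loadBalancer.ingress[*].ip}')
```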
Test the nginx server IP address using curl:
curl http://$IP
Clean up
To avoid incurring charges to your GDC account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.
To delete the individual resources, complete the following steps:
Delete the Service object for your container app:
kubectl delete service nginx-service
Delete the Deployment object for your container app:
kubectl delete deployment nginx-deployment
If you created a test Kubernetes cluster solely for this tutorial, delete it:
kubectl delete clusters.cluster.gdc.goog/CLUSTER_NAME \
    -n platform --kubeconfig ${ORG_ADMIN_CLUSTER_KUBECONFIG}
This command deletes the resources that make up the Kubernetes cluster, such as compute instances, disks, and network resources.
Since the container app you deployed to your Kubernetes cluster is a sample included with the GDC product bundle, there's no need to delete it from the system artifact registry. In cases where you deployed a custom container image to the system artifact registry, you must submit a request to remove it.
What's next
Explore the resource hierarchy and details around resource isolation.
Learn about the cluster architecture.
Read the Kubernetes containers for GDC documentation for information on how to manage containers deployed to your Kubernetes clusters.
Learn how to manage your Kubernetes clusters after your container workloads have been deployed.
Explore best practices for setting up your container workloads and other service resources.