Deploy a containerized web server app


This tutorial describes how to upload a container application to a Google Distributed Cloud (GDC) air-gapped environment and run that application on a Kubernetes cluster. A containerized workload runs on a Kubernetes cluster within a project namespace. Clusters are logically separate from projects and from each other to provide different failure domains and isolation guarantees. However, you must ensure your cluster is attached to a project so that containerized workloads can be managed within that project.

One of the largest obstacles to deploying a container app is getting the app's binary into your air-gapped data center. Work with your infrastructure team and administrators to transport the application to your workstation, or complete this tutorial directly on your continuous integration and continuous delivery (CI/CD) server.

This tutorial uses a sample web server app included by default in the system artifact registry.

Objectives

  • Push a container image to the artifact registry.
  • Create a Kubernetes cluster.
  • Deploy the sample container app to the cluster.

Costs

Because GDC is designed to run in an air-gapped data center, billing processes and information are confined to the GDC deployment and are not managed by other Google products.

To generate a cost estimate based on your projected usage, use the pricing calculator.

Use the Projected Cost dashboard to anticipate future SKU costs for your invoices.

To track storage and compute consumption, use the Billing Usage dashboards.

Before you begin

  1. Make sure you have a project to manage your containerized deployments. Create a project if you don't have one.

  2. Download and install the gdcloud CLI.

  3. Ask your Organization IAM Admin to grant you the Namespace Admin role.

  4. Retrieve the GDC version:

    gdcloud version
    

    Set the GDC_VERSION environment variable to the version shown in the command output:

    export GDC_VERSION=GDC_VERSION

    Replace GDC_VERSION with the version value returned by the previous command.
    
  5. Sign in to the org admin cluster and generate its kubeconfig file with a user identity, and set the kubeconfig path as an environment variable (an optional connectivity check follows this list):

    export ORG_ADMIN_CLUSTER_KUBECONFIG=ORG_ADMIN_CLUSTER_KUBECONFIG_PATH
    
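
After you complete these steps, you can optionally confirm that the kubeconfig you exported reaches the org admin cluster. This is a minimal connectivity check and assumes only that your user identity can authenticate to the API server:

    # A successful response confirms that the kubeconfig file points at a
    # reachable cluster and that your user identity can authenticate.
    kubectl version --kubeconfig ${ORG_ADMIN_CLUSTER_KUBECONFIG}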

Push the container image to the artifact registry

A preview managed container registry service is accessible to Platform Administrators (PA) or Application Operators (AO) of GDC. If your deployment has not enabled preview features, you must deploy your production container images from an existing registry or deploy your own artifact registry solution.
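
If you bring your own registry, pushing an image follows the standard container workflow. The following is a minimal sketch with Docker; the registry address, project path, image name, and tag are illustrative placeholders rather than values defined by GDC:

    # Authenticate to your private registry (the address is a placeholder).
    docker login registry.example.internal

    # Tag a locally built image with the registry path, then push it.
    docker tag my-app:1.0 registry.example.internal/my-project/my-app:1.0
    docker push registry.example.internal/my-project/my-app:1.0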

For this tutorial, you deploy the nginx web server app, a sample app that is included in the system artifact registry by default, to a Kubernetes cluster. The sample app is accessible from the system artifact registry without elevated permissions by using a registry mirror.

Prepare your environment to access the nginx sample container app from the system artifact registry:

  1. Set the system artifact registry URL variable:

    export AR_ADDR=gcr.io
    

    The gcr.io value is a registry mirror that lets you access the system artifact registry.

  2. Confirm that the AR_ADDR environment variable has the correct value:

    echo $AR_ADDR
    
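
With both variables set, you can preview the fully qualified image reference that the Deployment manifest later in this tutorial pulls. This only echoes the values you exported and doesn't contact the registry:

    # Prints the image reference used in the Deployment manifest, for example
    # gcr.io/library/private-cloud-staging/nginx:<your GDC version>.
    echo "$AR_ADDR/library/private-cloud-staging/nginx:$GDC_VERSION"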

Create a Kubernetes cluster

Now that you can access the nginx container image from the artifact registry, create a Kubernetes cluster to run the nginx web server.

Console

  1. In the navigation menu, select Clusters.

  2. Click Create Cluster.

  3. In the Name field, specify a name for the cluster.

  4. Click Attach Project and select a project to attach to your cluster. Then click Save.

  5. Click Create.

  6. Wait for the cluster to be created. When the cluster is available to use, the status READY appears next to the cluster name.

API

  1. Create a Cluster custom resource and save it as a YAML file, such as cluster.yaml:

    apiVersion: cluster.gdc.goog/v1
    kind: Cluster
    metadata:
      name: CLUSTER_NAME
      namespace: platform
    

    Replace the CLUSTER_NAME value with the name of the cluster.

  2. Apply the custom resource to your GDC instance:

    kubectl apply -f cluster.yaml --kubeconfig ${ORG_ADMIN_CLUSTER_KUBECONFIG}
    
  3. Attach a project to your Kubernetes cluster using the GDC console. You cannot attach a project to the cluster using the API at this time.
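
Whichever method you use, you can optionally watch the cluster's provisioning state from the command line by querying the Cluster resource on the org admin cluster. The exact columns in the output depend on your GDC version:

    # List Cluster resources in the platform namespace and check their state.
    kubectl get clusters.cluster.gdc.goog -n platform \
        --kubeconfig ${ORG_ADMIN_CLUSTER_KUBECONFIG}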

For more information on creating a Kubernetes cluster, see Create a user cluster.

Deploy the sample container app

You are now ready to deploy the nginx container image to your Kubernetes cluster.

Kubernetes represents applications as Pod resources, which are scalable units holding one or more containers. The pod is the smallest deployable unit in Kubernetes. Usually, you deploy pods as a set of replicas that can be scaled and distributed together across your cluster. One way to deploy a set of replicas is through a Kubernetes Deployment.

In this section, you create a Kubernetes Deployment to run the nginx container app on your cluster. The Deployment creates multiple replicas, or pods, each of which runs a single container: the nginx container image. You also create a Service resource that provides a stable way for clients to send requests to the pods of your Deployment.

Deploy the nginx web server to your Kubernetes cluster:

  1. Sign in to the Kubernetes cluster and generate its kubeconfig file with a user identity. Make sure you set the kubeconfig path as an environment variable:

    export KUBECONFIG=CLUSTER_KUBECONFIG_PATH
    
  2. Create and deploy the Kubernetes Deployment and Service custom resources:

    kubectl --kubeconfig ${KUBECONFIG} \
    apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
      labels:
        app: nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: $AR_ADDR/library/private-cloud-staging/nginx:$GDC_VERSION
            args: []
            ports:
            - containerPort: 80
            resources: {}
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-service
    spec:
      selector:
        app: nginx
      ports:
        - port: 80
          protocol: TCP
      type: LoadBalancer
    EOF
    
  3. Verify that the Deployment created the pods:

    kubectl get pods -l app=nginx
    

    The output is similar to the following:

    NAME                                READY     STATUS    RESTARTS   AGE
    nginx-deployment-1882529037-6p4mt   1/1       Running   0          1h
    nginx-deployment-1882529037-p29za   1/1       Running   0          1h
    
  4. Export the IP address for the nginx service:

    export IP=$(kubectl --kubeconfig=${KUBECONFIG} get service nginx-service -o jsonpath='{.status.loadBalancer.ingress[*].ip}')
    
  5. Send a request to the nginx server IP address by using curl. If the request doesn't return the nginx welcome page right away, see the optional checks that follow this procedure:

    curl http://$IP
    
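
If the curl request doesn't return the nginx welcome page right away, the Deployment rollout or the load balancer IP assignment might still be in progress. The following optional checks use the same kubeconfig as the previous steps:

    # Wait until all replicas of the Deployment are available.
    kubectl --kubeconfig ${KUBECONFIG} rollout status deployment/nginx-deployment

    # Confirm that the Service has an external load balancer IP address assigned.
    kubectl --kubeconfig ${KUBECONFIG} get service nginx-service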

Clean up

To avoid incurring charges to your GDC account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.

To delete the individual resources, complete the following steps:

  1. Delete the Service object for your container app:

    kubectl delete service nginx-service
    
  2. Delete the Deployment object for your container app:

    kubectl delete deployment nginx-deployment
    
  3. If you created a test Kubernetes cluster solely for this tutorial, delete it:

    kubectl delete clusters.cluster.gdc.goog/CLUSTER_NAME \
        -n platform --kubeconfig ${ORG_ADMIN_CLUSTER_KUBECONFIG}
    

    This command deletes the resources that make up the Kubernetes cluster, such as the compute instances, disks, and network resources.

  4. Since the container app you deployed to your Kubernetes cluster is a sample included with the GDC product bundle, there's no need to delete it from the system artifact registry. In cases where you deployed a custom container image to the system artifact registry, you must submit a request to remove it.
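
To confirm that the cleanup finished, you can rerun the read commands used earlier in this tutorial. These checks are optional, and the commands against the Kubernetes cluster apply only if you kept the cluster itself:

    # The Service and the pods should no longer be listed on the Kubernetes cluster.
    kubectl --kubeconfig ${KUBECONFIG} get service nginx-service
    kubectl --kubeconfig ${KUBECONFIG} get pods -l app=nginx

    # If you deleted the test cluster, it should no longer appear in the platform namespace.
    kubectl get clusters.cluster.gdc.goog -n platform \
        --kubeconfig ${ORG_ADMIN_CLUSTER_KUBECONFIG}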

What's next