Create a private build with Gemini for Google Cloud assistance

This tutorial shows you how to use Gemini for Google Cloud, an AI-powered collaborator in Google Cloud, to navigate logs and set up a build environment for a set of microservices in Google Kubernetes Engine.

This tutorial is intended for DevOps engineers of any experience level.

Objectives

  • Prompt Gemini for a logging query so that you can observe logs from Pods.
  • Understand how to establish a private build environment by prompting Gemini with context-based questions.
  • Use Gemini to learn how to store container images privately and securely.

Google Cloud products used

This tutorial uses the following billable Google Cloud products. Use the pricing calculator to generate a cost estimate that's based on your projected usage.

  • Google Kubernetes Engine (GKE). GKE is a managed Kubernetes service that lets you deploy and manage containerized applications at scale.

  • Cloud Logging. Logging is a real-time log-management system with storage, search, analysis, and monitoring support.

  • Cloud Build. Cloud Build is a service that executes your builds on Google Cloud infrastructure. Cloud Build can import source code from a variety of repositories or Cloud Storage spaces, execute a build to your specifications, and produce artifacts such as Docker containers or Java archives.

  • Artifact Registry. Artifact Registry is a single place for you to manage container images and language packages. It lets you centrally store artifacts and build dependencies as part of an integrated Google Cloud experience.

  • Gemini. Gemini is an always-on collaborator in Google Cloud that offers generative AI-powered assistance to a wide range of users, including developers and data scientists. To provide an integrated assistance experience, Gemini is embedded in many Google Cloud products.

Before you begin

  1. Ensure that Gemini is set up for your Google Cloud user account and project. Also, ensure you have installed the Cloud Code plugin in your preferred IDE. If you prefer to use Cloud Shell Editor or Cloud Workstations as your IDE, then Cloud Code and Gemini Code Assist are already available by default.
  2. Enable the Google Kubernetes Engine API.

    Enable the API

  3. In the Google Cloud console, activate Cloud Shell.

    Activate Cloud Shell

    At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.

  4. Create a GKE cluster using the gcloud container clusters create command:

    gcloud container clusters create test --region us-central1
    

    The command takes a few minutes to run. The output is similar to the following:

    Creating cluster test in us-central1... Cluster is being health-checked (master is healthy)...done.
    
    Created https://container.googleapis.com/v1/projects/agmsb-gke-lab/zones/us-central1/clusters/test.
    
    To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/us-central1/test?project=agmsb-gke-lab
    kubeconfig entry generated for test.
    
    NAME: test
    LOCATION: us-central1
    MASTER_VERSION: 1.27.3-gke.100
    MASTER_IP: 34.72.99.149
    MACHINE_TYPE: e2-medium
    NODE_VERSION: 1.27.3-gke.100
    NUM_NODES: 9
    STATUS: RUNNING
    
  5. Clone a repository and use kubectl to deploy a set of microservices that make up an ecommerce web app to the GKE cluster:

    git clone https://github.com/GoogleCloudPlatform/microservices-demo && cd microservices-demo
    
    kubectl apply -f ./release/kubernetes-manifests.yaml
    
  6. After a few minutes, get the external IP address to access your web app in a browser:

    kubectl get service frontend-external | awk '{print $4}'
    
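The awk pipeline above also prints the EXTERNAL-IP column header along with the address. As an optional alternative (a sketch, assuming the Service publishes an IP address rather than a hostname), you can first confirm the Pods are healthy and then extract only the address with a jsonpath query:

```shell
# Confirm that every microservice Pod reports Running and Ready.
kubectl get pods

# Print only the load-balancer IP of the frontend-external Service.
kubectl get service frontend-external \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```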

Understand logs for workloads in GKE

For the following example, consider that you're a DevOps engineer who has inherited an existing set of microservices running in a GKE cluster. To understand the environment, you decide to inspect logs from the various microservices.

In your preferred Code Assist-integrated IDE, you can chat with Gemini to get help. Using the Gemini pane, you enter prompts, which are questions or statements that describe the help you want, and Gemini returns responses.

Gemini doesn't use your prompts or its responses as data to train its models. For more information, see How Gemini for Google Cloud uses your data.

You know that the microservices were deployed to the default namespace. To prompt Gemini to help you inspect logs, follow these steps:

  1. In the activity bar of your IDE, click Gemini Code Assist to open the Gemini Code Assist pane.

  2. In the Gemini Code Assist pane, enter the following prompt:

    How can I write a Cloud Logging query to search for logs from Pods in
    a namespace called "default" in a Google Kubernetes Engine cluster named "test"?
    

    Gemini returns a response similar to the following:

    To write a Cloud Logging query to search for logs from Pods in a
    namespace called "default" in GKE cluster called "test", you
    can use the following query:
    
    resource.type="k8s_container"
    resource.labels.namespace_name="default"
    resource.labels.cluster_name="test"
    
    This query will search for all logs from Pods in the "default"
    namespace in the "test" GKE cluster. You can also use the
    Cloud Logging console to search for logs from Pods in a specific
    namespace and cluster. To do this, follow these steps:
    
    1. Go to the Cloud Logging console.
    2. In the Query builder pane, click
       Resource.
    3. In the Resource type list, select Kubernetes
       Container.
    4. In the Namespace name field, enter default.
    5. In the Cluster name field, enter test.
    6. Click Add.
    7. In the Query results pane, you will see a list of logs
       from Pods in the "default" namespace in the "test" GKE
       cluster.
    
  3. To inspect these logs, click each log returned from the query. For an individual log, you can ask Gemini to explain it. The response is similar to the following:

    Sure. Here are the bullet points that explain the log entry:
    
    • The log entry is about a GET request to the /_healthz path.
    • The request was successful and returned a 200 status code.
    • The request took 0 milliseconds to complete.
    • The request was made by the x-liveness-probe session.
    • The request was made from the frontend-755cdc7957-pv8x5 pod
      in the default namespace.
    
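If you prefer the command line over the Logs Explorer, the gcloud CLI accepts the same filter that Gemini suggested. The following is a sketch; adjust the --limit and --freshness values as needed:

```shell
# Read recent container logs from the "default" namespace of the "test" cluster.
gcloud logging read \
    'resource.type="k8s_container" AND resource.labels.namespace_name="default" AND resource.labels.cluster_name="test"' \
    --limit=10 \
    --freshness=1h
```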

Create a private build environment for container images

Now that you have a deeper understanding of the workloads from observing the logs, you decide to set up infrastructure and build your team's future container images on a set of private workers. In this section, you prompt Gemini to assist you in understanding how to run a build environment so that you can migrate your builds into your Google Cloud project.

  1. In the activity bar of your IDE, click Gemini Code Assist to open the Gemini Code Assist pane.

  2. In the Gemini Code Assist pane, enter the following prompt:

    How can I build a private build environment that is not connected to
    the internet in Google Cloud? I want to be able to use Docker to
    build my container images in this environment.
    

    Gemini returns a response similar to the following:

    To build a private build environment that is not connected to the internet
    in Google Cloud, you can use a Cloud Build private pool. A private pool
    is a group of build workers that are not connected to the public internet.
    You can create a private pool in a Virtual Private Cloud (VPC) network,
    which provides an additional layer of security for your build environment.
    
    To create a private pool, you will need to:
    
    1. Create a VPC network.
    2. Create a private pool in the VPC network.
    3. Configure your build config file to use the private pool.
    
    Once you have created a private pool, you can use it to build your
    container images using Docker. To do this, you will need to:
    
    1. Create a Dockerfile.
    2. Create a build config file.
    3. Run the `gcloud builds submit` command.
    
    The `gcloud builds submit` command will build your container image
    using the private pool.
    
    Find more information for your question here:
    
    • https://cloud.google.com/build/docs/private-pools/create-manage-private-pools
    
  3. Cloud Build private pools sound like a viable path forward, so you ask Gemini for assistance in creating your first private pool. Enter the following prompt:

    Can you give me a gcloud command to create my first Cloud Build
    private pool?
    

    Gemini returns a response similar to the following:

    Sure. To create a private pool, you can use the following gcloud
    command:
    
    gcloud builds worker-pools create PRIVATEPOOL_ID --config-from-file
    PRIVATEPOOL_CONFIG_FILE --region REGION
    
    Where:
    
    • PRIVATEPOOL_ID is a unique identifier for your private pool.
    • PRIVATEPOOL_CONFIG_FILE is the name of your private pool
      config file.
    • REGION is the region where you want to create your private
      pool.
    
    For more information, see the Cloud Build documentation.
    
    Source:
    
    https://cloud.google.com/build/docs/private-pools/create-manage-private-pools
    
  4. From the preceding response, it appears that you need to pass a configuration file in place of the PRIVATEPOOL_CONFIG_FILE variable, so you ask Gemini to help you create one. Enter the following prompt:

    Can you show me an example of a Cloud Build private pool config file?
    

    Gemini returns a response similar to the following:

    Sure. Here is an example of a Cloud Build private pool config file:
    
    privatePoolV1Config:
      networkConfig:
        egressOption: NO_PUBLIC_EGRESS
        peeredNetwork: 'projects/my-project/global/networks/my-network'
        peeredNetworkIpRange: '10.1.2.0/24'
      workerConfig:
        diskSizeGb: '100'
        machineType: 'e2-standard-4'
    
    This config file creates a private pool with the following
    properties:
    
    • The pool is not connected to the public internet.
    • The pool is connected to the `my-network` VPC network.
    • The pool's workers have a `100GB` boot disk and a
      `e2-standard-4` machine type.
    
    For more information, see the Cloud Build documentation.
    
  5. You begin authoring your own configuration file where the build workers don't have access to the public internet. Navigate to the Google Cloud console and launch the Cloud Shell Editor.

  6. In Cloud Shell, run the following command, setting egressOption to NO_PUBLIC_EGRESS:

    cat << EOF > cloudbuild-privatepool-config.yaml
    
    privatePoolV1Config:
      networkConfig:
        egressOption: NO_PUBLIC_EGRESS
    
    EOF
    
  7. You're unsure about the rest of the options available in the config file, so you prompt Gemini for assistance:

    Is there a default value for any of these options?
    

    Gemini returns a response similar to the following:

    Yes, there are default values for some of the options in the private
    pool config file. For example, the default value for egressOption is
    ALLOW_PUBLIC_EGRESS, which means that the private pool will have
    access to the public internet. The default value for diskSizeGb is
    100, and the default value for machineType is e2-medium.
    
    For a full list of the default values, see the Cloud Build documentation.
    
  8. You see that Gemini provides some of the default values that apply to the config file, and after reading the documentation, you decide that your config file is good as is. In Cloud Shell, run the gcloud command that Gemini provided earlier, passing your newly created config file:

    gcloud builds worker-pools create max-pool-test \
        --config-from-file cloudbuild-privatepool-config.yaml \
        --region us-central1
    
  9. After your workers are created, you realize that your builds won't have access to public package indexes, such as PyPI for pip. You want to confirm your understanding of hosting private packages. Enter the following prompt:

    If I create a private pool with NO_PUBLIC_EGRESS enabled, can I use
    Artifact Registry to host my packages privately and access them from
    a private pool?
    

    Gemini returns a response similar to the following:

    Yes, you can use Artifact Registry to host your packages privately
    and access them from a private pool. To do this, you will need to
    create a private pool with NO_PUBLIC_EGRESS enabled and then
    configure your Artifact Registry repository to use the private pool.
    For more information, see the Cloud Build documentation.
    
  10. You decide to move forward with creating an Artifact Registry repository. In the Gemini pane, enter the following prompt:

    How can I use gcloud to create a private Docker repository for
    container images in Artifact Registry?
    

    Gemini returns a response similar to the following:

    To create a private Docker repository for container images in
    Artifact Registry, you can use the following command:
    
    gcloud artifacts repositories create REPOSITORY_NAME \
        --repository-format=docker \
        --location=LOCATION \
        --description="DESCRIPTION" \
        --kms-key=KMS_KEY
    
    Where:
    
    • REPOSITORY_NAME is the name of your repository.
    • LOCATION is the location of your repository.
    • DESCRIPTION is a description of your repository.
    • KMS_KEY is the name of your Cloud Key Management Service key that
      you want to use to encrypt your repository data.
    
    For more information, see the gcloud artifacts repositories create
    documentation.
    

  11. Seeing the KMS_KEY variable, you realize that you haven't created a Cloud KMS key yet. Enter the following prompt:

    Do I need a KMS_KEY?
    

    Gemini returns a response similar to the following:

    No, you don't need a KMS_KEY to create a private Docker repository
    for container images in Artifact Registry. However, if you want to
    encrypt your container images, you can use a Cloud Key Management Service key. For more
    information, see the Artifact Registry CMEK documentation.
    
  12. With your questions answered, in Cloud Shell, you run the gcloud command that Gemini provided for you:

    gcloud artifacts repositories create max-docker-repo-test \
        --repository-format=docker \
        --location=us-central1 \
        --description="test container repository"
    

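Tying the two resources together, a later build could run on the private pool and push its image to the new repository. The following is a hedged sketch, not part of the tutorial: PROJECT_ID, the Dockerfile in the current directory, and the my-image name are assumptions you would replace with your own values.

```shell
# Write a build config that runs on the private pool (options.pool) and
# pushes the built image to the Artifact Registry repository created above.
# PROJECT_ID and my-image are placeholders for this sketch.
cat << EOF > cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'us-central1-docker.pkg.dev/PROJECT_ID/max-docker-repo-test/my-image', '.']
images:
- 'us-central1-docker.pkg.dev/PROJECT_ID/max-docker-repo-test/my-image'
options:
  pool:
    name: 'projects/PROJECT_ID/locations/us-central1/workerPools/max-pool-test'
EOF

# Submit the build; Cloud Build reads the pool from the config file.
gcloud builds submit --region=us-central1 --config=cloudbuild.yaml
```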
With Gemini's assistance, you successfully created a private set of build workers and a private repository to build and host your artifacts.

Clean up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, you can delete the Google Cloud project that you created for this tutorial. Alternatively, you can delete the individual resources.

  1. In the Google Cloud console, go to the Manage resources page.

    Go to Manage resources

  2. In the project list, select the project that you want to delete, and then click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.

What's next