gcloud command line inside a Cloud Run service tutorial


In this tutorial, you create an inventory of Cloud Run services using the gcloud and gsutil command line tools inside a Cloud Run service. You can apply what you learn in this tutorial to your existing Cloud operations scripts or to build a proof-of-concept before using client libraries to build a more robust service.

You use the gcloud and gsutil tools from a shell script inside a web service, similar to the approach shown in the Shell quickstart. On Cloud Run, both tools work with Google Cloud services by automatically authenticating as the Cloud Run service identity. Any permissions granted to the service identity are available to the gcloud CLI.

The gcloud CLI is so broadly capable of information gathering and resource management across Google Cloud that the main challenge of using it within a web service is minimizing the risk that a caller misuses these capabilities. Without security controls, accidental or intentionally malicious activity could put other services and resources in the same project at risk. Examples of these risks include:

  • Enabling the discovery of IP addresses of private virtual machines
  • Enabling access to private data from a database in the same project
  • Enabling deletion of other running services

Several steps in this tutorial show how to impose controls to minimize risks, such as specifying the gcloud command to be run in the code, instead of leaving it open as a user input.

Scripting with the command line tool inside a Cloud Run service is similar to using the command line locally. The main difference is the additional restrictions you should add around the primary script logic.

Objectives

  • Write and build a custom container with a Dockerfile
  • Write, build, and deploy a Cloud Run service
  • Use the gcloud and gsutil tools safely in a web service
  • Generate a report of Cloud Run services and save to Cloud Storage

Costs

In this document, you use the following billable components of Google Cloud: Cloud Run, Cloud Build, and Cloud Storage.

To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial.

Before you begin

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Google Cloud project.

  4. Enable the Cloud Run, Cloud Build, and Cloud Storage APIs.

    Enable the APIs

  5. Install and initialize the gcloud CLI.

Required roles

To get the permissions that you need to complete the tutorial, ask your administrator to grant you the following IAM roles on your project:

For more information about granting roles, see Manage access.

You might also be able to get the required permissions through custom roles or other predefined roles.

Setting up gcloud defaults

To configure gcloud with defaults for your Cloud Run service:

  1. Set your default project:

    gcloud config set project PROJECT_ID

    Replace PROJECT_ID with the name of the project you created for this tutorial.

  2. Configure gcloud for your chosen region:

    gcloud config set run/region REGION

    Replace REGION with the supported Cloud Run region of your choice.

Cloud Run locations

Cloud Run is regional, which means the infrastructure that runs your Cloud Run services is located in a specific region and is managed by Google to be redundantly available across all the zones within that region.

Latency, availability, and durability requirements are the primary factors for selecting the region where your Cloud Run services run. You can generally select the region nearest to your users, but you should also consider the location of the other Google Cloud products that your Cloud Run service uses. Using Google Cloud products together across multiple locations can affect your service's latency as well as its cost.

Cloud Run is available in the following regions:

Subject to Tier 1 pricing

  • asia-east1 (Taiwan)
  • asia-northeast1 (Tokyo)
  • asia-northeast2 (Osaka)
  • europe-north1 (Finland) Low CO2
  • europe-southwest1 (Madrid)
  • europe-west1 (Belgium) Low CO2
  • europe-west4 (Netherlands)
  • europe-west8 (Milan)
  • europe-west9 (Paris) Low CO2
  • me-west1 (Tel Aviv)
  • us-central1 (Iowa) Low CO2
  • us-east1 (South Carolina)
  • us-east4 (Northern Virginia)
  • us-east5 (Columbus)
  • us-south1 (Dallas)
  • us-west1 (Oregon) Low CO2

Subject to Tier 2 pricing

  • africa-south1 (Johannesburg)
  • asia-east2 (Hong Kong)
  • asia-northeast3 (Seoul, South Korea)
  • asia-southeast1 (Singapore)
  • asia-southeast2 (Jakarta)
  • asia-south1 (Mumbai, India)
  • asia-south2 (Delhi, India)
  • australia-southeast1 (Sydney)
  • australia-southeast2 (Melbourne)
  • europe-central2 (Warsaw, Poland)
  • europe-west10 (Berlin)
  • europe-west12 (Turin)
  • europe-west2 (London, UK) Low CO2
  • europe-west3 (Frankfurt, Germany) Low CO2
  • europe-west6 (Zurich, Switzerland) Low CO2
  • me-central1 (Doha)
  • me-central2 (Dammam)
  • northamerica-northeast1 (Montreal) Low CO2
  • northamerica-northeast2 (Toronto) Low CO2
  • southamerica-east1 (Sao Paulo, Brazil) Low CO2
  • southamerica-west1 (Santiago, Chile) Low CO2
  • us-west2 (Los Angeles)
  • us-west3 (Salt Lake City)
  • us-west4 (Las Vegas)

If you already created a Cloud Run service, you can view the region in the Cloud Run dashboard in the Google Cloud console.

Retrieving the code sample

To retrieve the code sample for use:

  1. Clone the sample app repository to your local machine:

    git clone https://github.com/GoogleCloudPlatform/cloud-run-samples.git

    Alternatively, you can download the sample as a zip file and extract it.

  2. Change to the directory that contains the Cloud Run sample code:

    cd cloud-run-samples/gcloud-report/

Reviewing the code

Generating a report and uploading to Cloud Storage

This shell script generates a report of the Cloud Run services in the current project and region and uploads the result to Cloud Storage. It lists only services whose name matches the provided search string.

The script uses the gcloud run services list command, gcloud advanced format options, and gsutil streaming transfer copy mode.

set -eo pipefail

# Check for required environment variables.
requireEnv() {
  test "${!1}" || (echo "gcloud-report: '$1' not found" >&2 && exit 1)
}
requireEnv GCLOUD_REPORT_BUCKET

# Prepare formatting: Default search term to include all services.
search=${1:-'.'}
limits='spec.template.spec.containers.resources.limits.flatten("", "", " ")'
format='table[box, title="Cloud Run Services"](name,status.url,metadata.annotations.[serving.knative.dev/creator],'${limits}')'

# Create a specific object name that will not be overridden in the future.
obj="gs://${GCLOUD_REPORT_BUCKET}/report-${search}-$(date +%s).txt"

# Write a report containing the service name, service URL, service account or user that
# deployed it, and any explicitly configured service "limits" such as CPU or Memory.
gcloud run services list \
  --format "${format}" \
  --filter "metadata.name~${search}" | gsutil -q cp -J - "${obj}"

# /dev/stderr is sent to Cloud Logging.
echo "gcloud-report: wrote to ${obj}" >&2
echo "Wrote report to ${obj}"

This script is safe to run as a service because repeated invocations update the report without costly side effects. Other gcloud CLI scripts can be more costly when invoked repeatedly, for example scripts that create new Cloud resources or perform expensive tasks. Idempotent scripts, which yield the same result on repeated invocations, are safer to run as a service.
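Two bash idioms in the script are worth noting: `requireEnv` uses indirect expansion (`${!1}`) to look up a variable by the name passed as an argument, and `search=${1:-'.'}` falls back to a default when no argument is given. A minimal standalone sketch of both (the variable value and helper names here are illustrative):

```shell
#!/usr/bin/env bash

# Indirect expansion: ${!1} expands to the value of the variable
# whose *name* is passed as the first argument.
requireEnv() {
  test "${!1}" || { echo "'$1' not found" >&2; return 1; }
}

# Default parameter expansion: use '.' when $1 is unset or empty.
defaultSearch() {
  local search=${1:-'.'}
  echo "${search}"
}

GCLOUD_REPORT_BUCKET="example-bucket"   # hypothetical value
requireEnv GCLOUD_REPORT_BUCKET && echo "bucket is set"

defaultSearch            # prints "."
defaultSearch "gcloud"   # prints "gcloud"
```

Because `requireEnv` receives a variable name rather than a value, the script can report exactly which environment variable is missing.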

Invoking the script on HTTP request

This Go code sets up a web service that runs a shell script to generate a report. Since the search query is user input, the code validates it to ensure that it only contains letters, numbers, or hyphens to prevent malicious commands as input. This set of characters is narrow enough to prevent command injection attacks.

The web service passes the search parameter as an argument to the shell script.


// Service gcloud-report is a Cloud Run shell-script-as-a-service.
package main

import (
	"log"
	"net/http"
	"os"
	"os/exec"
	"regexp"
)

func main() {
	http.HandleFunc("/", scriptHandler)

	// Determine port for HTTP service.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
		log.Printf("defaulting to port %s", port)
	}

	// Start HTTP server.
	log.Printf("listening on port %s", port)
	if err := http.ListenAndServe(":"+port, nil); err != nil {
		log.Fatal(err)
	}
}

func scriptHandler(w http.ResponseWriter, r *http.Request) {
	search := r.URL.Query().Get("search")
	re := regexp.MustCompile(`^[a-z]+[a-z0-9\-]*$`)
	if !re.MatchString(search) {
		log.Printf("invalid search criteria %q, using default", search)
		search = "."
	}

	cmd := exec.CommandContext(r.Context(), "/bin/bash", "script.sh", search)
	cmd.Stderr = os.Stderr
	out, err := cmd.Output()
	if err != nil {
		log.Printf("Command.Output: %v", err)
		http.Error(w, http.StatusText(http.StatusInternalServerError), http.StatusInternalServerError)
		return
	}
	w.Write(out)
}
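The same allowlist can be exercised from a shell using POSIX extended regular expressions. This sketch mirrors the Go pattern `^[a-z]+[a-z0-9\-]*$` with `grep -E` (the helper name is illustrative, and this is a demonstration of the pattern, not a drop-in replacement for the in-process check):

```shell
# Mirror the Go validation regexp with grep -E: the value must start
# with a lowercase letter, followed by lowercase letters, digits, or hyphens.
validSearch() {
  printf '%s\n' "$1" | grep -Eq '^[a-z]+[a-z0-9-]*$'
}

validSearch "gcloud-report" && echo "accepted"
validSearch "foo; rm -rf /" || echo "rejected: shell metacharacters"
validSearch "9lives" || echo "rejected: must start with a letter"
```

Rejecting anything outside this narrow character set is what makes it safe to pass the value as an argument to the shell script.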

A go.mod file declares the application dependencies in a Go module:

module github.com/GoogleCloudPlatform/cloud-run-samples/gcloud-report

go 1.19

Defining the container environment

The Dockerfile defines how the environment is put together for the service. It is similar to the Dockerfile from the helloworld-shell quickstart, except that the final container image is based on the gcloud Google Cloud CLI image. This allows our service to use gcloud and gsutil without custom installation and configuration steps for the Google Cloud CLI.


# Use the official golang image to create a binary.
# This is based on Debian and sets the GOPATH to /go.
# https://hub.docker.com/_/golang
FROM golang:1.20-buster as builder

# Create and change to the app directory.
WORKDIR /app

# Retrieve application dependencies.
# This allows the container build to reuse cached dependencies.
# Expecting to copy go.mod and if present go.sum.
COPY go.* ./
RUN go mod download

# Copy local code to the container image.
COPY invoke.go ./

# Build the binary.
RUN go build -mod=readonly -v -o server

# Use a gcloud image based on debian:buster-slim for a lean production container.
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM gcr.io/google.com/cloudsdktool/cloud-sdk:slim

WORKDIR /app

# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/server /app/server
COPY *.sh /app/
RUN chmod +x /app/*.sh

# Run the web service on container startup.
CMD ["/app/server"]

Setting up the Cloud Storage bucket

Create a Cloud Storage bucket for uploading reports, where REPORT_ARCHIVE_BUCKET is a globally unique bucket name:

gsutil mb gs://REPORT_ARCHIVE_BUCKET
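Bucket names share a single global namespace, so `gsutil mb` fails if the name is taken or malformed. A rough pre-flight check of the basic naming rules (lowercase letters, digits, dashes, underscores, and dots; starting and ending with a letter or digit; 3 to 63 characters for names without dots) can be sketched with `grep`. The helper name is illustrative, and this does not replace the full rules in the Cloud Storage documentation:

```shell
# Rough check of basic Cloud Storage bucket naming rules:
# 3-63 characters drawn from lowercase letters, digits, dashes,
# underscores, and dots, starting and ending with a letter or digit.
# Not a complete validation.
validBucketName() {
  printf '%s\n' "$1" | grep -Eq '^[a-z0-9][a-z0-9._-]{1,61}[a-z0-9]$'
}

validBucketName "my-report-archive-1234" && echo "name looks valid"
validBucketName "Invalid_Bucket!" || echo "name is invalid"
```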

Setting up the service identity

To limit the privileges that the service has on other infrastructure, create a dedicated service identity and grant it only the specific IAM permissions necessary to do the work.

In this case, the required privileges are permission to read Cloud Run services and permission to read from and write to the Cloud Storage bucket.

  1. Create a service account:

    gcloud iam service-accounts create gcloud-report-identity

  2. Grant the service account permission to read Cloud Run services:

    gcloud projects add-iam-policy-binding PROJECT_ID \
      --member=serviceAccount:gcloud-report-identity@PROJECT_ID.iam.gserviceaccount.com \
      --role roles/run.viewer
  3. Grant the service account permission to read from and write to the Cloud Storage bucket:

    gsutil iam ch \
      serviceAccount:gcloud-report-identity@PROJECT_ID.iam.gserviceaccount.com:objectViewer,objectCreator \
      gs://REPORT_ARCHIVE_BUCKET

The limited access of this customized service identity prevents the service from accessing other Google Cloud resources.

Shipping the service

Shipping code consists of three steps:

  • Building a container image with Cloud Build
  • Uploading the container image to Container Registry
  • Deploying the container image to Cloud Run

To ship your code:

  1. Build your container and publish on Container Registry:

    gcloud builds submit --tag gcr.io/PROJECT_ID/gcloud-report

    Where PROJECT_ID is your Google Cloud project ID, and gcloud-report is the name of your service.

    Upon success, a SUCCESS message displays the ID, creation time, and image name. The image is stored in Container Registry and can be reused if desired.

  2. Run the following command to deploy your service:

    gcloud run deploy gcloud-report \
       --image gcr.io/PROJECT_ID/gcloud-report \
       --update-env-vars GCLOUD_REPORT_BUCKET=REPORT_ARCHIVE_BUCKET \
       --service-account gcloud-report-identity \
       --no-allow-unauthenticated

    Replace PROJECT_ID with your Google Cloud project ID. Note that gcloud-report is part of the container image name and is also the name of the service. The container image is deployed to the service in the Cloud Run region that you configured previously under Setting up gcloud defaults.

    The --no-allow-unauthenticated flag restricts unauthenticated access to the service. By keeping the service private you can rely on Cloud Run's built-in authentication to block unauthorized requests. For more details about authentication that is based on Identity and Access Management (IAM), see Managing access using IAM.

    Wait until the deployment is complete: this can take about half a minute. On success, the command line displays the service URL that you will use later to replace SERVICE_URL in the next section.

  3. If you want to deploy a code update to the service, repeat the previous steps. Each deployment to a service creates a new revision and automatically starts serving traffic when ready.

See Managing access using IAM for how to grant Google Cloud users access to invoke this service. Project editors and owners automatically have this access.

Trying it out

Let's generate a report of Cloud Run services.

  1. Use curl to send an authenticated request:

    curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" SERVICE_URL

    Replace SERVICE_URL with the URL provided by Cloud Run after completing deployment.

    If you created a new project and followed this tutorial, the output will be similar to:

    Wrote report to gs://REPORT_ARCHIVE_BUCKET/report-.-DATE.txt

    The . in the file name is the default search argument as mentioned in the source code.

    To use the search feature, add a search argument to the request:

    curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" "SERVICE_URL?search=gcloud"

    This query will return output similar to:

    Wrote report to gs://REPORT_ARCHIVE_BUCKET/report-gcloud-DATE.txt
  2. Retrieve the file using the gsutil tool locally:

    gsutil cp gs://REPORT_FILE_NAME .

    The . in the command means the current working directory.

    Replace REPORT_FILE_NAME with the Cloud Storage object name output in the previous step.

Open the file to see the report. It should look like this:

Screenshot of the list of Cloud Run services in the project with columns for four service attributes.
The four columns are pulled from the service description. They include Name of the service, URL assigned on first deployment, the initial Creator of the service, and the service Limits of maximum CPU and memory.

Improving robustness for the future

If you intend to develop this service further, consider rewriting it to use the Cloud Run Admin API and the Cloud Storage client libraries instead of shelling out to command-line tools.

You can examine the API calls being made (and see some authentication details) by adding --log-http to gcloud commands and -D to gsutil commands.

Automating this operation

Now that the report of Cloud Run services can be triggered by an HTTP request, use automation to generate reports when you need them. For example, use Cloud Scheduler to send authenticated requests to the service on a regular schedule.

Clean up

If you created a new project for this tutorial, delete the project. If you used an existing project and wish to keep it without the changes added in this tutorial, delete resources created for the tutorial.

Deleting the project

The easiest way to eliminate billing is to delete the project that you created for the tutorial.

To delete the project:

  1. In the Google Cloud console, go to the Manage resources page.

    Go to Manage resources

  2. In the project list, select the project that you want to delete, and then click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.

Deleting tutorial resources

  1. Delete the Cloud Run service you deployed in this tutorial:

    gcloud run services delete SERVICE_NAME

    Where SERVICE_NAME is your chosen service name.

    You can also delete Cloud Run services from the Google Cloud console.

  2. Remove the gcloud default region configuration you added during tutorial setup:

     gcloud config unset run/region
    
  3. Remove the project configuration:

     gcloud config unset project
    
  4. Delete other Google Cloud resources created in this tutorial, such as the Cloud Storage bucket, the gcloud-report-identity service account, and the container image in Container Registry.

What's next