Testing a Cloud Run service locally

During development, you can run and test your container image locally before deploying. You can use Cloud Code or a local Docker installation to run and test your service, including running it with access to Google Cloud services.

Cloud Code emulator

The Cloud Code plugin for VS Code and JetBrains IDEs lets you run and debug your container image locally in a Cloud Run emulator within your IDE. The emulator allows you to configure an environment that is representative of your service running on Cloud Run.

You can configure properties like CPU and memory allocation, specify environment variables, and set Cloud SQL database connections.

  1. Install Cloud Code for VS Code or a JetBrains IDE.
  2. Follow the instructions for locally developing and debugging within your IDE:
     • VS Code: Locally developing and debugging
     • IntelliJ: Locally developing and debugging

Cloud SDK

Cloud SDK contains a local development environment for emulating Cloud Run that can build a container from source, run the container on your local machine, and automatically rebuild the container upon source code changes.

To start the local development environment:

  1. Change directory to the directory containing the source code of your service.

  2. Invoke the command:

    gcloud beta code dev

If a Dockerfile is present in the local directory, it is used to build the container. If no Dockerfile is present, the container is built with Google Cloud Buildpacks.

To see your service running, visit http://localhost:8080/ in your browser. If you specified a custom port with the --local-port option, remember to open your browser to that port.
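If you script your local workflow, a small readiness check can confirm the dev server is up before you send requests to it. The following sketch polls the local URL until it answers; the wait_for_server helper and its defaults are illustrative, not part of the Cloud SDK:

```python
# Sketch: poll the local dev server started by `gcloud beta code dev` until it
# responds. Adjust the URL if you used the --local-port option.
import time
import urllib.error
import urllib.request


def wait_for_server(url="http://localhost:8080/", timeout=30.0):
    """Return the HTTP status once the server answers, or raise TimeoutError."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                return resp.status
        except (urllib.error.URLError, OSError):
            # Server not up yet (connection refused); retry shortly.
            time.sleep(0.5)
    raise TimeoutError(f"no response from {url} within {timeout} seconds")
```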

To stop the local server:

  • macOS and Linux: Control-C
  • Windows: Control-Break

Customizing the service configuration

You can customize the Cloud Run configuration of the locally running service using a YAML file. The format is the same YAML used to deploy a Cloud Run service, but only a subset of the Cloud Run service settings is supported. gcloud beta code dev looks for and uses any file ending in *.service.dev.yaml in the current directory. If none is found, it falls back to any file ending in *.service.yaml.

You can configure settings such as environment variables and resource limits for local development.

The container image field is not required for local development, because the image is built and provided to the service when the command runs.

You can use the following example service.dev.yaml file for local development:

  apiVersion: serving.knative.dev/v1
  kind: Service
  spec:
    template:
      spec:
        containers:
        - env:
          - name: FOO
            value: bar
          resources:
            limits:
              memory: 128Mi
              cpu: 1

Using credentials

To allow the container to use Google Cloud services, you must provide it with an access credential.

  • To give the container access to your own account's credentials, log in using gcloud and pass the --application-default-credential flag:

    gcloud auth application-default login
    gcloud beta code dev --dockerfile=PATH_TO_DOCKERFILE --application-default-credential

  • To give the application credentials as a service account, use the --service-account flag:

    gcloud beta code dev --dockerfile=PATH_TO_DOCKERFILE --service-account=SERVICE_ACCOUNT_EMAIL

    The --service-account flag causes a service account key to be downloaded and cached locally. The user is responsible for keeping the key secure and deleting it when it is no longer needed.

Docker

To test your container image locally using Docker:

  1. Use the Docker command:

    PORT=8080 && docker run -p 9090:${PORT} -e PORT=${PORT} IMAGE_URL

    Replace IMAGE_URL with a reference to the container image, for example, us-docker.pkg.dev/cloudrun/container/hello:latest.

    The PORT environment variable specifies the port your application will use to listen for HTTP or HTTPS requests. This is a requirement from the Container runtime contract. In this example, we use port 8080.

  2. Open http://localhost:9090 in your browser.
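The container runtime contract mentioned above can be illustrated with a minimal app. This sketch is not from the Cloud Run documentation; it simply shows a server that honors the PORT variable (the get_port helper, main function, and response text are illustrative):

```python
# Minimal sketch of an app that honors Cloud Run's container contract:
# listen on the port named by the PORT environment variable, defaulting to 8080.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer


def get_port(env=None):
    """Read the PORT environment variable, falling back to 8080."""
    env = os.environ if env is None else env
    return int(env.get("PORT", "8080"))


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello from a local Cloud Run test!\n")


def main():
    # With `docker run -e PORT=8080 ...`, this binds to port 8080 in the
    # container; call main() to start serving.
    HTTPServer(("0.0.0.0", get_port()), Handler).serve_forever()
```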

If you are new to working with containers, you may want to review the Docker Getting Started guide. To learn more about Docker commands, refer to the Docker documentation.

Docker with Google Cloud access

If you are using Google Cloud client libraries to integrate your application with Google Cloud services, and have not yet secured those services to control external access, you can set up your local container to authenticate with Google Cloud services using Application Default Credentials.

To run locally:

  1. Refer to Getting Started with Authentication for instructions on generating, retrieving, and configuring your Service Account credentials.

  2. The following Docker run flags inject the credentials and configuration from your local system into the local container:

    1. Use the --volume (-v) flag to inject the credential file into the container (assumes you have already set your GOOGLE_APPLICATION_CREDENTIALS environment variable on your machine):
      -v $GOOGLE_APPLICATION_CREDENTIALS:/tmp/keys/FILE_NAME.json:ro
    2. Use the --environment (-e) flag to set the GOOGLE_APPLICATION_CREDENTIALS variable inside the container:
      -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json
  3. Optionally, use this fully configured Docker run command:

    PORT=8080 && docker run \
    -p 9090:${PORT} \
    -e PORT=${PORT} \
    -e K_SERVICE=dev \
    -e K_CONFIGURATION=dev \
    -e K_REVISION=dev-00001 \
    -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json \
    -v $GOOGLE_APPLICATION_CREDENTIALS:/tmp/keys/FILE_NAME.json:ro \
    IMAGE_URL

    Note that the path /tmp/keys/FILE_NAME.json shown in the example above is a reasonable location to place your credentials inside the container, but other directory locations will also work. The crucial requirement is that the GOOGLE_APPLICATION_CREDENTIALS environment variable must match the bind mount location inside the container.

    Note also that with some Google Cloud services, you may want to use an alternate configuration to isolate local troubleshooting from production performance and data.
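A mismatch between the -e value and the -v mount destination usually surfaces as an authentication error inside the container. As a rough pre-flight check, you can verify that the host credential file exists and that the two flags agree; the helper below is a sketch, not part of Docker or gcloud, and its colon-splitting does not handle Windows drive-letter paths:

```python
# Sketch: pre-flight checks for the docker run credential flags shown above.
# The function names are illustrative, not part of Docker or gcloud.
import os


def mount_destination(volume_spec):
    """Extract the in-container path from a 'host:container[:ro]' -v spec."""
    parts = volume_spec.split(":")
    return parts[1] if len(parts) >= 2 else None


def check_credentials(env_value, volume_spec, host_env=None):
    """Return a list of problems with the planned -e/-v flag pairing."""
    host_env = os.environ if host_env is None else host_env
    problems = []
    host_path = host_env.get("GOOGLE_APPLICATION_CREDENTIALS")
    if not host_path:
        problems.append("GOOGLE_APPLICATION_CREDENTIALS is not set on the host")
    elif not os.path.isfile(host_path):
        problems.append(f"credential file not found: {host_path}")
    if env_value != mount_destination(volume_spec):
        problems.append("-e path does not match the -v mount destination")
    return problems
```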

What's next