Create jobs

This page describes how to create and update Cloud Run jobs from an existing container image. Unlike a Cloud Run service, which listens for and serves requests, a Cloud Run job runs its tasks and exits when finished; it does not listen for or serve requests.

After you create or update a job, you can execute the job manually or run it on a schedule.

You can structure a job as a single task or as multiple, independent tasks (up to 10,000 tasks) that can be executed in parallel. Each task runs one container instance and can be configured to retry in case of failure. Each task is aware of its index, which is stored in the CLOUD_RUN_TASK_INDEX environment variable. The overall count of tasks is stored in the CLOUD_RUN_TASK_COUNT environment variable. If you are processing data in parallel, your code is responsible for determining which task handles which subset of the data.
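For example, a task can use these variables to select its own slice of the input. The following minimal sketch (in Python, with a hypothetical list of input files standing in for your real data source) shows one common approach:

  import os

  # Cloud Run sets these variables for each task in a job execution.
  task_index = int(os.environ.get("CLOUD_RUN_TASK_INDEX", 0))
  task_count = int(os.environ.get("CLOUD_RUN_TASK_COUNT", 1))

  # Hypothetical input; in practice this might come from Cloud Storage or a database.
  all_items = [f"input-{i}.csv" for i in range(100)]

  # Each task takes every task_count-th item, starting at its own index,
  # so the tasks collectively cover all items exactly once.
  my_items = all_items[task_index::task_count]

  for item in my_items:
      print(f"task {task_index}/{task_count}: processing {item}")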

You can set timeouts on tasks and specify the number of retries in case of task failure. If any task exceeds its maximum number of retries, that task is marked as failed and the job execution is marked as failed after all tasks have run.

By default, each task runs for a maximum of 10 minutes. You can change this to a shorter or a longer time, up to 168 hours (7 days), by changing the task timeout setting. Support for timeouts greater than 24 hours is available in Preview.
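For example, if you use the gcloud CLI (covered later on this page), a sketch of raising the timeout for a hypothetical job named my-job looks like this:

  gcloud run jobs update my-job --task-timeout=2h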

There is no explicit timeout for a job execution: after all tasks are complete, the job execution is complete.

Jobs use the second generation execution environment.

Required roles

To get the permissions that you need to create Cloud Run jobs, ask your administrator to grant you the following IAM roles:

For a list of IAM roles and permissions that are associated with Cloud Run, see Cloud Run IAM roles and Cloud Run IAM permissions. If your Cloud Run job interfaces with Google Cloud APIs, such as Cloud Client Libraries, see the service identity configuration guide. For more information about granting roles, see deployment permissions and manage access.

Supported container registries and images

You can directly use container images stored in Artifact Registry or Docker Hub. Google recommends using Artifact Registry.

You can use container images from other public or private registries (such as JFrog Artifactory, Nexus, or GitHub Container Registry) by setting up an Artifact Registry remote repository.

You should only consider Docker Hub for deploying popular container images such as Docker Official Images or Docker Sponsored OSS images. For higher availability, Google recommends deploying these Docker Hub images via an Artifact Registry remote repository.

Cloud Run does not support container image layers larger than 9.9 GB when deploying from Docker Hub or an Artifact Registry remote repository with an external registry.

Create a new job

You can create a new job using the Google Cloud console, Google Cloud CLI, YAML, or Terraform.

Console

To create a new job:

  1. In the Google Cloud console, go to the Cloud Run page:

    Go to Cloud Run

  2. Click Deploy container and select Job to display the Create job form.

    1. In the form, specify the container image containing the job code or select from a list of containers previously deployed.
    2. The job name is automatically generated from the container image. You can edit the job name as needed, but it cannot be changed after the job is created.
    3. Select the region where you want your job located. The region selector highlights regions with the lowest carbon impact.
    4. Specify the number of tasks that you want to run in the job. All of the tasks must succeed for the job to succeed. By default, the tasks execute in parallel.
  3. Click Container(s), volumes, networking, security to set additional job properties.

    • Under Task capacity:

      1. In the Memory menu, specify the amount of memory required. The default is the minimum required, 512MiB.
      2. In the CPU menu, specify the amount of CPU required. The default is the minimum required, 1 CPU.
      3. Under Task timeout, specify the maximum amount of time in seconds that the task can run, up to 168 hours (7 days). Support for timeouts greater than 24 hours is available in Preview. Each task must complete within this time. The default is 10 minutes (600 seconds).
      4. Under Number of retries per failed task, specify the number of retries in case of task failures. The default is 3 retries.

    • Under Parallelism:

      1. In most cases you can select Run as many tasks concurrently as possible.
      2. If you need to set a lower limit due to scaling constraints on resources your job accesses, select Limit the maximum number of concurrent tasks and specify the number of concurrent tasks in the Custom parallelism limit field.
  4. Optionally, configure other settings in the appropriate tabs:

  5. When you are finished configuring your job, click Create to create the job in Cloud Run.

  6. To execute the job, see execute jobs or execute jobs on a schedule.

gcloud

To use the command line, you need to have already set up the gcloud CLI.

To create a new job:

  1. Run the command:

    gcloud run jobs create JOB_NAME --image IMAGE_URL OPTIONS

    Alternatively, use the deploy command:

    gcloud run jobs deploy JOB_NAME --image IMAGE_URL OPTIONS

    • Replace JOB_NAME with the name of the job you want to create. You can omit this parameter, but you will be prompted for the job name if you omit it.
    • Replace IMAGE_URL with a reference to the container image, for example, us-docker.pkg.dev/cloudrun/container/job:latest.
    • Optionally, replace OPTIONS with any of the following options:

      Option Description
      --tasks Accepts integers greater than or equal to 1. Defaults to 1; maximum is 10,000. Each task is provided the environment variable CLOUD_RUN_TASK_INDEX with a value between 0 and the number of tasks minus 1, along with CLOUD_RUN_TASK_COUNT, which is the total number of tasks.
      --max-retries The number of times a failed task is retried. Once any task fails beyond this limit, the entire job is marked as failed. For example, if set to 1, a failed task will be retried once, for a total of two attempts. The default is 3. Accepts integers from 0 to 10.
      --task-timeout Accepts a duration like "2s". Defaults to 10 minutes; maximum is 168 hours (7 days). Support for timeouts greater than 24 hours is available in Preview.
      --parallelism The maximum number of tasks that can execute in parallel. By default, tasks will be started as quickly as possible in parallel. Refer to Parallelism for the range of values.
      --execute-now If set, immediately after the job is created, a job execution is started. Equivalent to calling gcloud run jobs create followed by gcloud run jobs execute.

      In addition to the options above, you can also specify more configuration such as environment variables or memory limits; a combined example command appears after these steps.

      For a full list of available options when creating a job, refer to the gcloud run jobs create command line documentation.

  2. Wait for the job creation to finish. You'll see a success message upon successful completion.

  3. To execute the job, see execute jobs or execute jobs on a schedule.
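As an illustration of how the options above combine, the following sketch creates a hypothetical job named my-parallel-job with 50 tasks, at most 10 running in parallel, a 30-minute task timeout, 2 retries per failed task, an environment variable, and a memory limit. All names and values are placeholders:

  gcloud run jobs create my-parallel-job \
    --image us-docker.pkg.dev/cloudrun/container/job:latest \
    --tasks 50 \
    --parallelism 10 \
    --task-timeout 30m \
    --max-retries 2 \
    --set-env-vars INPUT_BUCKET=my-bucket \
    --memory 1Gi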

YAML

You can store your job specification in a YAML file and then deploy it using the gcloud CLI.

  1. Create a new job.yaml file with this content:

    apiVersion: run.googleapis.com/v1
    kind: Job
    metadata:
      name: JOB
    spec:
      template:
        spec:
          template:
            spec:
              containers:
              - image: IMAGE

    Replace

    • JOB with the name of your Cloud Run job. Job names must be 49 characters or less and must be unique per region and project.
    • IMAGE with the URL of the job container image.

    You can also specify more configuration such as environment variables or memory limits, as shown in the expanded example after these steps.

  2. Deploy the new job using the following command:

    gcloud run jobs replace job.yaml
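For example, a sketch of a job.yaml that also sets an environment variable and resource limits; the variable name and values here are placeholders, and the job configuration documentation describes the full set of supported fields:

apiVersion: run.googleapis.com/v1
kind: Job
metadata:
  name: JOB
spec:
  template:
    spec:
      template:
        spec:
          containers:
          - image: IMAGE
            env:
            - name: INPUT_BUCKET
              value: my-bucket
            resources:
              limits:
                memory: 512Mi
                cpu: 1000m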

Terraform

To learn how to apply or remove a Terraform configuration, see Basic Terraform commands.

To create a new Cloud Run job, use the google_cloud_run_v2_job resource and modify your main.tf file as shown in the following snippet.

resource "google_cloud_run_v2_job" "default" {
  name     = "cloud-run-job"
  location = "us-central1"

  deletion_protection = false # set to "true" in production

  template {
    template {
      containers {
        image = "us-docker.pkg.dev/cloudrun/container/job:latest"
      }
    }
  }
}
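The snippet above is the minimal form. The following sketch extends it with task count, parallelism, timeout, and retry settings; the values are placeholders, and the field names assume the current google_cloud_run_v2_job schema:

resource "google_cloud_run_v2_job" "default" {
  name     = "cloud-run-job"
  location = "us-central1"

  deletion_protection = false # set to "true" in production

  template {
    task_count  = 50   # number of tasks per execution
    parallelism = 10   # maximum number of tasks running at the same time

    template {
      timeout     = "1800s" # per-task timeout (30 minutes)
      max_retries = 2       # retries per failed task

      containers {
        image = "us-docker.pkg.dev/cloudrun/container/job:latest"
      }
    }
  }
}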

Client libraries

To create a job from code:
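The per-language samples are not reproduced here. As a rough guide, the following minimal sketch shows the shape of a job creation call using the Cloud Run Python client library (google-cloud-run); the project, region, job, and image values are placeholders:

  from google.cloud import run_v2

  def create_job(project_id: str, region: str, job_id: str, image: str):
      client = run_v2.JobsClient()

      # Describe the job: each task runs one container with the given image.
      job = run_v2.Job(
          template=run_v2.ExecutionTemplate(
              template=run_v2.TaskTemplate(
                  containers=[run_v2.Container(image=image)],
              )
          )
      )

      # create_job returns a long-running operation; result() waits for it.
      operation = client.create_job(
          parent=f"projects/{project_id}/locations/{region}",
          job=job,
          job_id=job_id,
      )
      return operation.result()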

REST API

To create a job, send a POST HTTP request to the Cloud Run Admin API jobs endpoint.

For example, using curl:

curl -H "Content-Type: application/json" \
  -H "Authorization: Bearer ACCESS_TOKEN" \
  -X POST \
  -d '{"template": {"template": {"containers": [{"image": "IMAGE_URL"}]}}}' \
  https://run.googleapis.com/v2/projects/PROJECT_ID/locations/REGION/jobs?jobId=JOB_NAME

Replace:

  • ACCESS_TOKEN with a valid access token for an account that has the IAM permissions to create jobs. For example, if you are logged into gcloud, you can retrieve an access token using gcloud auth print-access-token. From within a Cloud Run container instance, you can retrieve an access token using the container instance metadata server.
  • JOB_NAME with the name of the job you want to create.
  • IMAGE_URL with the URL of the job container image, for example, us-docker.pkg.dev/cloudrun/container/job:latest.
  • REGION with the Google Cloud region of the job.
  • PROJECT_ID with the Google Cloud project ID.

Cloud Run locations

Cloud Run is regional, which means the infrastructure that runs your Cloud Run services is located in a specific region and is managed by Google to be redundantly available across all the zones within that region.

Meeting your latency, availability, or durability requirements is a primary factor in selecting the region where your Cloud Run services run. You can generally select the region nearest to your users, but you should also consider the location of the other Google Cloud products that your Cloud Run service uses. Using Google Cloud products together across multiple locations can affect your service's latency as well as cost.

Cloud Run is available in the following regions:

Subject to Tier 1 pricing

  • asia-east1 (Taiwan)
  • asia-northeast1 (Tokyo)
  • asia-northeast2 (Osaka)
  • asia-south1 (Mumbai, India)
  • europe-north1 (Finland) Low CO2
  • europe-southwest1 (Madrid) Low CO2
  • europe-west1 (Belgium) Low CO2
  • europe-west4 (Netherlands) Low CO2
  • europe-west8 (Milan)
  • europe-west9 (Paris) Low CO2
  • me-west1 (Tel Aviv)
  • us-central1 (Iowa) Low CO2
  • us-east1 (South Carolina)
  • us-east4 (Northern Virginia)
  • us-east5 (Columbus)
  • us-south1 (Dallas) Low CO2
  • us-west1 (Oregon) Low CO2

Subject to Tier 2 pricing

  • africa-south1 (Johannesburg)
  • asia-east2 (Hong Kong)
  • asia-northeast3 (Seoul, South Korea)
  • asia-southeast1 (Singapore)
  • asia-southeast2 (Jakarta)
  • asia-south2 (Delhi, India)
  • australia-southeast1 (Sydney)
  • australia-southeast2 (Melbourne)
  • europe-central2 (Warsaw, Poland)
  • europe-west10 (Berlin) Low CO2
  • europe-west12 (Turin)
  • europe-west2 (London, UK) Low CO2
  • europe-west3 (Frankfurt, Germany) Low CO2
  • europe-west6 (Zurich, Switzerland) Low CO2
  • me-central1 (Doha)
  • me-central2 (Dammam)
  • northamerica-northeast1 (Montreal) Low CO2
  • northamerica-northeast2 (Toronto) Low CO2
  • southamerica-east1 (Sao Paulo, Brazil) Low CO2
  • southamerica-west1 (Santiago, Chile) Low CO2
  • us-west2 (Los Angeles)
  • us-west3 (Salt Lake City)
  • us-west4 (Las Vegas)

If you already created a Cloud Run service, you can view the region in the Cloud Run dashboard in the Google Cloud console.

When you create a new job, the Cloud Run service agent must be able to access the container image, which is the case by default.

Update an existing job

Changing any configuration setting requires you to update the job, even if there is no change to the container image. Any settings that you don't change keep their previous values.

You can update an existing job using the Google Cloud console, Google Cloud CLI, YAML, or Terraform.

Console

To update an existing job:

  1. In the Google Cloud console, go to the Cloud Run page:

    Go to Cloud Run

  2. Click the Jobs tab to display the list of jobs.

  3. Click the job to display the Job details page.

  4. Click Edit.

  5. If you made changes to your job code, specify the new container image digest.

  6. Optionally, change the number of tasks in the job.

  7. Optionally, click Container(s), volumes, networking, security to update any additional job properties:

    • Under Task capacity:

      1. In the Memory menu, specify the amount of memory required. The default is the minimum required, 512MiB.
      2. In the CPU menu, specify the amount of CPU required. The default is the minimum required, 1 CPU.
      3. Under Task timeout, specify the maximum amount of time in seconds that the task can run, up to 168 hours (7 days). Support for timeouts greater than 24 hours is available in Preview. Each task must complete within this time. The default is 10 minutes (600 seconds).
      4. Under Number of retries per failed task, specify the number of retries in case of task failures. The default is 3 retries.
    • Under Parallelism:

      1. In most cases you can select Run as many tasks concurrently as possible.
      2. If you need to set a lower limit due to scaling constraints on resources your job accesses, select Limit the number of concurrent tasks and specify the maximum number of concurrent tasks in the Custom parallelism limit field.
  8. Optionally, configure other settings in the appropriate tabs:

  9. When you are finished configuring your job, click Save to update the job in Cloud Run, and wait for the update to finish.

  10. To execute the job, see execute jobs or execute jobs on a schedule.

gcloud

  1. In the Google Cloud console, activate Cloud Shell.

    Activate Cloud Shell

    At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.

  2. Run the command:

    gcloud run jobs update JOB_NAME OPTIONS

    Replace:

    • JOB_NAME with the name of the job you want to update.
    • Optionally, replace OPTIONS with the following options:

      Option Description
      --tasks Accepts integers greater than or equal to 1. Defaults to 1; maximum is 10,000. Each task is provided the environment variable CLOUD_RUN_TASK_INDEX with a value between 0 and the number of tasks minus 1, along with CLOUD_RUN_TASK_COUNT, which is the total number of tasks.
      --max-retries The number of times a failed task is retried. Once any task fails beyond this limit, the entire job is marked as failed. For example, if set to 1, a failed task will be retried once, for a total of two attempts. The default is 3. Accepts integers from 0 to 10.
      --task-timeout Accepts a duration like "2s". Defaults to 10 minutes; maximum is 168 hours (7 days). Support for timeouts greater than 24 hours is available in Preview.
      --parallelism The maximum number of tasks that can execute in parallel. By default, tasks will be started as quickly as possible, in parallel. Refer to Parallelism for the range of values.

    In addition to the options above, you can set other optional configuration settings. A combined example command appears after these steps.

    For a full list of available options when updating a job, refer to the gcloud run jobs update command line documentation.

  3. Wait for the job update to finish. Upon successful completion, a success message is displayed, similar to the following:

    Job [JOB_NAME] has been successfully updated.
    View details about this job by running `gcloud run jobs describe JOB_NAME`.
    See logs for this execution at: https://console.cloud.google.com/logs/viewer?project=PROJECT_ID&resource=cloud_run_revision/service_name/JOB_NAME
  4. To execute the job, see execute jobs or execute jobs on a schedule.
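As an illustration, the following sketch updates a hypothetical job to a new image and different task settings; all names and values are placeholders:

  gcloud run jobs update my-parallel-job \
    --image us-docker.pkg.dev/cloudrun/container/job:latest \
    --tasks 100 \
    --task-timeout 1h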

YAML

If you need to download or view the configuration of an existing job, use the following command to save results to a YAML file:

gcloud run jobs describe JOB --format export > job.yaml

From a job configuration YAML file, modify any spec.template child attributes as desired to update configuration settings, then redeploy:

  1. Update the existing job configuration:

    gcloud run jobs replace job.yaml
  2. To execute the job, see execute jobs or execute jobs on a schedule.

Terraform

Make changes to your job configuration in your main.tf file, then apply them using the terraform apply command. Detailed Terraform instructions are available for:

For more information, refer to the terraform apply command line options.

Client libraries

To update an existing job from code:
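As with job creation, the per-language samples are not reproduced here. The following minimal sketch shows the shape of an update using the Cloud Run Python client library (google-cloud-run), fetching the job, changing its container image, and sending the update; the names are placeholders:

  from google.cloud import run_v2

  def update_job_image(project_id: str, region: str, job_id: str, image: str):
      client = run_v2.JobsClient()
      name = f"projects/{project_id}/locations/{region}/jobs/{job_id}"

      # Fetch the current job definition, change the image, and update it.
      job = client.get_job(name=name)
      job.template.template.containers[0].image = image

      # update_job returns a long-running operation; result() waits for it.
      operation = client.update_job(job=job)
      return operation.result()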

REST API

To update a job, send a PATCH HTTP request to the Cloud Run Admin API jobs endpoint.

For example, using curl:

curl -H "Content-Type: application/json" \
  -H "Authorization: Bearer ACCESS_TOKEN" \
  -X PATCH \
  -d '{"template": {"template": {"containers": [{"image": "IMAGE_URL"}]}}}' \
  https://run.googleapis.com/v2/projects/PROJECT_ID/locations/REGION/jobs/JOB_NAME

Replace:

  • ACCESS_TOKEN with a valid access token for an account that has the IAM permissions to update jobs. For example, if you are logged into gcloud, you can retrieve an access token using gcloud auth print-access-token. From within a Cloud Run container instance, you can retrieve an access token using the container instance metadata server.
  • JOB_NAME with the name of the job you want to update.
  • IMAGE_URL with the URL of the job container image, for example, us-docker.pkg.dev/cloudrun/container/job:latest.
  • REGION with the Google Cloud region of the job.
  • PROJECT_ID with the Google Cloud project ID.

Sample code

For code samples showing simple jobs, see the language-specific quickstarts.

What's next

After you create or update a job, you can do the following: