Build and create a Go job in Cloud Run

Learn how to create a simple Cloud Run job, then deploy from source, which automatically packages your code into a container image, uploads the container image to Artifact Registry, and then deploys to Cloud Run. You can use other languages in addition to the ones shown.

Before you begin

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Google Cloud project.

  4. Install the Google Cloud CLI.
  5. To initialize the gcloud CLI, run the following command:

    gcloud init
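  6. Optionally, set your default project so that later gcloud commands don't need an explicit --project flag (not required for this quickstart; PROJECT_ID is a placeholder for your project ID):

    gcloud config set project PROJECT_ID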

Writing the sample job

To write a job in Go:

  1. Create a new directory named jobs and change directory into it:

    mkdir jobs
    cd jobs
    
  2. In the same directory, create a main.go file for the actual job code and copy the following sample code into it:

    package main
    
    import (
    	"fmt"
    	"log"
    	"math/rand"
    	"os"
    	"strconv"
    	"time"
    )
    
    type Config struct {
    	// Job-defined
    	taskNum    string
    	attemptNum string
    
    	// User-defined
    	sleepMs  int64
    	failRate float64
    }
    
    func configFromEnv() (Config, error) {
    	// Job-defined
    	taskNum := os.Getenv("CLOUD_RUN_TASK_INDEX")
    	attemptNum := os.Getenv("CLOUD_RUN_TASK_ATTEMPT")
    	// User-defined
    	sleepMs, err := sleepMsToInt(os.Getenv("SLEEP_MS"))
    	if err != nil {
    		return Config{}, err
    	}
    	failRate, err := failRateToFloat(os.Getenv("FAIL_RATE"))
    	if err != nil {
    		return Config{}, err
    	}
    
    	config := Config{
    		taskNum:    taskNum,
    		attemptNum: attemptNum,
    		sleepMs:    sleepMs,
    		failRate:   failRate,
    	}
    	return config, nil
    }
    
    func sleepMsToInt(s string) (int64, error) {
    	// Default empty variable to 0
    	if s == "" {
    		return 0, nil
    	}
    
    	// Convert string to int64
    	sleepMs, err := strconv.ParseInt(s, 10, 64)
    	return sleepMs, err
    }
    
    func failRateToFloat(s string) (float64, error) {
    	// Default empty variable to 0
    	if s == "" {
    		return 0, nil
    	}
    
    	// Convert string to float
    	failRate, err := strconv.ParseFloat(s, 64)
    
    	// Check that rate is valid
    	if failRate < 0 || failRate > 1 {
    		return failRate, fmt.Errorf("Invalid FAIL_RATE value: %f. Must be a float between 0 and 1 inclusive.", failRate)
    	}
    
    	return failRate, err
    }
    
    func main() {
    	config, err := configFromEnv()
    	if err != nil {
    		log.Fatal(err)
    	}
    
    	log.Printf("Starting Task #%s, Attempt #%s ...", config.taskNum, config.attemptNum)
    
    	// Simulate work
    	if config.sleepMs > 0 {
    		time.Sleep(time.Duration(config.sleepMs) * time.Millisecond)
    	}
    
    	// Simulate errors
    	if config.failRate > 0 {
    		if failure := randomFailure(config); failure != nil {
    			log.Fatalf("%v", failure)
    		}
    	}
    
    	log.Printf("Completed Task #%s, Attempt #%s", config.taskNum, config.attemptNum)
    }
    
    // Throw an error based on fail rate
    func randomFailure(config Config) error {
    	rand.Seed(time.Now().UnixNano())
    	randomFailure := rand.Float64()
    
    	if randomFailure < config.failRate {
    		return fmt.Errorf("Task #%s, Attempt #%s failed.", config.taskNum, config.attemptNum)
    	}
    	return nil
    }
    

    Cloud Run jobs lets you specify the number of tasks the job executes. This sample code shows how to use the built-in CLOUD_RUN_TASK_INDEX environment variable. Each task represents one running copy of the container. Note that tasks usually execute in parallel; using multiple tasks is useful if each task can independently process a subset of your data.

    Each task is aware of its index, which is stored in the CLOUD_RUN_TASK_INDEX environment variable. The built-in CLOUD_RUN_TASK_COUNT environment variable contains the number of tasks supplied at job execution time via the --tasks parameter.

    The code also shows how to handle task retries, using the built-in CLOUD_RUN_TASK_ATTEMPT environment variable, which contains the number of times this task has been retried; it starts at 0 for the first attempt and increments by 1 for every successive retry, up to --max-retries.

    Finally, the code lets you generate failures as a way to test retries and to produce error logs so you can see what they look like.
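
    For example, a task could use these variables to select its own slice of a larger workload. The following standalone sketch is illustrative only; it is not part of the quickstart sample, and the record count is a made-up placeholder:

    package main
    
    import (
    	"fmt"
    	"os"
    	"strconv"
    )
    
    func main() {
    	// Cloud Run jobs set these variables for each task; parse errors are
    	// ignored here, so the values fall back to 0 when unset (for example, locally).
    	taskIndex, _ := strconv.Atoi(os.Getenv("CLOUD_RUN_TASK_INDEX"))
    	taskCount, _ := strconv.Atoi(os.Getenv("CLOUD_RUN_TASK_COUNT"))
    	if taskCount == 0 {
    		taskCount = 1 // Avoid dividing by zero when running outside Cloud Run.
    	}
    
    	// Hypothetical input: 1000 records split across all tasks.
    	const totalRecords = 1000
    	perTask := (totalRecords + taskCount - 1) / taskCount
    
    	start := taskIndex * perTask
    	end := start + perTask
    	if end > totalRecords {
    		end = totalRecords
    	}
    
    	fmt.Printf("Task %d of %d processes records [%d, %d)\n", taskIndex, taskCount, start, end)
    }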

  3. Create a go.mod file with the following contents:

    module github.com/GoogleCloudPlatform/golang-samples/run/jobs
    
    go 1.19
    

Your code is complete and ready to be packaged in a container.
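
If you want to check the job's behavior before containerizing it, you can optionally run it locally with the Go toolchain. The environment variable values below are arbitrary examples for a quick test:

CLOUD_RUN_TASK_INDEX=0 CLOUD_RUN_TASK_ATTEMPT=0 SLEEP_MS=1000 FAIL_RATE=0 go run .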

Build the job container, send it to Artifact Registry, and deploy it to Cloud Run

Important: This quickstart assumes that you have owner or editor roles in the project you are using for the quickstart. Otherwise, refer to Cloud Run deployment permissions, Cloud Build permissions, and Artifact Registry permissions for the permissions required.

This quickstart uses deploy from source, which builds the container, uploads it to Artifact Registry, and deploys the job to Cloud Run:

gcloud run jobs deploy job-quickstart \
    --source . \
    --tasks 50 \
    --set-env-vars SLEEP_MS=10000 \
    --set-env-vars FAIL_RATE=0.1 \
    --max-retries 5 \
    --region REGION \
    --project=PROJECT_ID

where PROJECT_ID is your project ID and REGION is your region, for example, us-central1. You can change the various parameters to whatever values you want to use for your testing purposes. SLEEP_MS simulates work, and FAIL_RATE causes that fraction of tasks to fail (for example, 0.1 makes roughly 10% of the tasks fail), so you can experiment with parallelism and with retrying failed tasks.
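
For example, with the us-central1 region and a hypothetical project ID of my-project, the command looks like this:

gcloud run jobs deploy job-quickstart \
    --source . \
    --tasks 50 \
    --set-env-vars SLEEP_MS=10000 \
    --set-env-vars FAIL_RATE=0.1 \
    --max-retries 5 \
    --region us-central1 \
    --project=my-project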

Execute a job in Cloud Run

To execute the job you just created:

gcloud run jobs execute job-quickstart --region REGION

Replace REGION with the region you used when you created and deployed the job, for example, us-central1.
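
To check on the execution's progress, you can optionally list the job's executions with the gcloud run jobs executions list command (see the gcloud reference for the full set of flags):

gcloud run jobs executions list --job job-quickstart --region REGION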

What's next

For more information on building a container from source code and pushing it to a repository, see the Cloud Build and Artifact Registry documentation.