Batch Processing with Google Compute Engine Autoscaler

In this tutorial, you'll use several Google Cloud Platform features—managed instance groups, autoscaling, HTTP load balancing, and object change notification—to set up an application that takes images as input and creates thumbnail versions on the fly. The concepts you'll learn in this tutorial are broadly applicable to any application that needs to process incoming assets dynamically based on a variable workload. Example scenarios where these concepts would apply include:

  • Processing audio, video, and images
  • Scaling server-side batch operations

The following diagram describes the architecture of the example application.

Figure 1: Application architecture

The detailed application flow is as follows:

  1. The input Google Cloud Storage bucket uses object change notification to alert a Google App Engine notification processing application when a new file has been added.
  2. The App Engine application processes the object change notification, creates a corresponding task, and places it in a task queue.
  3. The task queue dispatches the task to the HTTP load balancer as an HTTP POST request.
  4. The HTTP load balancer chooses an available virtual machine instance to accept the task and begin processing the input image. When the incoming load on the load balancer reaches a certain percentage of the load balancer's overall serving capacity, the Compute Engine Autoscaler creates new Compute Engine virtual machine instances within its target managed instance group. Each new instance contains a copy of the image processing application.
  5. For each image in the input bucket, a copy of the image processing application creates a thumbnail and places it in an output bucket.

Objectives

  • Create a group of managed Google Compute Engine virtual machine instances
  • Set up HTTP load balancing for Compute Engine
  • Set up Compute Engine autoscaler for automatic scaling
  • Combine everything to create a load-balanced, autoscaling group of instances
  • Deploy and run the demo application

Create and configure a new Cloud Platform Console project

  1. Create or use an existing project in the Google Cloud Platform Console.

    To avoid potential clashes with other deployed applications, particularly in App Engine, it is recommended that you use a new project for this tutorial.

  2. Enable the APIs required by this tutorial for your project (at minimum, the Google Compute Engine API).

Set up the Google Cloud SDK

This tutorial uses the Google Cloud SDK to configure the project and example application. Follow the Cloud SDK quick start instructions to install the Cloud SDK and authorize it to work with your account.

Next, choose a zone for the resources you will create. For this tutorial, you will use the us-central1-f zone. Set a variable for this zone:

$ ZONE="us-central1-f"

Create a managed instance group

A managed instance group is a pool of identical virtual machine instances created from a common instance template. In this tutorial, the autoscaler provisions virtual machine instances within a managed instance group, creating or deleting them based on the amount of load on the load balancer.

To create a managed instance group, begin by creating a new instance template, imagemagick-go-template:

gcloud compute instance-templates create imagemagick-go-template \
    --description "A pool of machines running our ImageMagick service." \
    --image debian-7 \
    --machine-type n1-standard-1 \
    --metadata goprog="http://storage.googleapis.com/imagemagick/compute/web-process-image.go",startup-script-url="gs://imagemagick/compute/scripts/startup-test-go.sh" \
    --boot-disk-size 200GB \
    --scopes storage-full \
    --tags http-lb

This template defines the basic configuration for each instance in the group by setting the following flags:

  • --description: A description of the template.
  • --image: The operating system image to use.
  • --machine-type: The Google Cloud Platform machine type to use.
  • --boot-disk-size: The size of the boot disk.
  • --scopes: Gives the instances the ability to make calls to the Google Cloud Storage APIs.
  • --tags: Used to label instances so that they can be operated on as a group. In this tutorial, you'll be applying a firewall rule to instances tagged http-lb.
  • --metadata: Takes a series of key-value pairs to define various metadata attributes. Here, the following attributes are defined:
    • goprog: The Google Cloud Storage location of the application script that each instance will run.
    • startup-script-url: The URL for the startup script used to provision each instance (a minimal sketch of what such a script might do follows this list).
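
The actual startup script referenced above (startup-test-go.sh) is not reproduced in this tutorial. As a rough sketch of how such a script might consume these metadata values, note that any attribute set in the template can be read from the instance's metadata server; the file paths and the backgrounded go run invocation below are illustrative assumptions, not the real script's contents:

#!/bin/bash
# Illustrative sketch only -- not the actual startup-test-go.sh.
# Read the goprog metadata attribute defined in the instance template.
GOPROG_URL=$(curl -s -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/attributes/goprog")

# Download the image-processing program and run it (assumes Go is already installed).
curl -s -o /tmp/web-process-image.go "$GOPROG_URL"
go run /tmp/web-process-image.go &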

Next, create a managed instance group (imagemagick-go) that uses your newly-created instance template as the template for all new virtual machine instances that are added to the group:

gcloud compute instance-groups managed create imagemagick-go \
    --base-instance-name imagemagick-go \
    --size 1 \
    --template imagemagick-go-template \
    --zone $ZONE

In addition to supplying the template name, the above command specifies the zone in which your instance group will run, the base instance name that will be used to name the instances, and the number of instances to create initially.

Create the HTTP load balancer

This tutorial uses an HTTP load balancer to direct incoming traffic to your managed instance group. After setting up the load balancer, you can configure the group to autoscale based on a target percentage of an HTTP load balancer's total serving capacity. The autoscaler starts additional instances when that percentage is exceeded, and shuts down unneeded instances when load balancer utilization drops back below it.

The load balancer comprises a series of resources that route incoming requests to the appropriate backend, as shown in the following diagram:

Figure 2: Overview of the HTTP load balancer

Incoming traffic is routed to a target HTTP proxy via a global forwarding rule. This proxy uses a URL map to determine the backend service to which the traffic should be routed. The backend service then distributes traffic to one or more attached backends based on each backend's region, CPU load, and request rate constraints.

Create a health check

Your load balancer must have an HTTP health check resource that the backend service can use to determine the health of its backends. Health checks ensure that the backend service only forwards new connections to instances that are up and ready to receive them.

Create a health check and define its request path as "/healthcheck":

gcloud compute http-health-checks create imagemagick-check \
    --request-path "/healthcheck"
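
If you want to confirm the health check's settings, such as its check interval and healthy/unhealthy thresholds, you can describe it:

gcloud compute http-health-checks describe imagemagick-check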

Create and configure a backend service

After creating the HTTP health check, create a backend service and specify the health check you just created as its required health check:

gcloud compute backend-services create imagemagick-backend-service \
    --http-health-check imagemagick-check

Add the imagemagick-go instance group to your new backend service as a backend. This will allow the backend service to direct traffic to it:

gcloud compute backend-services add-backend imagemagick-backend-service \
    --instance-group imagemagick-go \
    --balancing-mode UTILIZATION \
    --max-utilization 0.8 \
    --zone $ZONE

Note that, in the above command, the balancing mode of the new backend has been set to UTILIZATION and the maximum CPU utilization of the backend has been set to 0.8, or 80%. Because the load balancer only has a single backend, the maximum serving capacity of the load balancer is equivalent to the maximum CPU utilization of the backend. As such, when you add the autoscaler, you will configure it to scale based on a percentage of your backend's maximum CPU utilization.

Complete the load balancer

The HTTP load balancer requires four more components to function properly:

  • A URL map, which routes requests to backend services according to host rules and URL pattern matching.
  • A target HTTP proxy, which routes incoming HTTP traffic from a global forwarding rule to a URL map.
  • A global forwarding rule, which routes incoming traffic to a specified target (in this case, an HTTP proxy).
  • A firewall rule, which allows incoming traffic on a given port or port range.

First, create a new URL map that directs all incoming requests to your backend service:

gcloud compute url-maps create imagemagick-map \
    --default-service imagemagick-backend-service

Next, create a target HTTP proxy to route requests to your URL map:

gcloud compute target-http-proxies create imagemagick-proxy \
    --url-map imagemagick-map

Create a global forwarding rule to handle and route incoming requests:

gcloud compute forwarding-rules create imagemagick-rule --global \
    --target-http-proxy imagemagick-proxy --port-range 80

Finally, create a firewall rule to allow access to port 80 on each instance tagged http-lb:

gcloud compute firewall-rules create http-lb-rule \
    --target-tags http-lb --allow tcp:80

You now have a fully-functional HTTP load balancer sitting in front of a managed group of virtual machine instances, each of which is running a copy of the application.
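
If you want to verify these components from the command line, you can list each of them; output columns vary between Cloud SDK versions:

gcloud compute forwarding-rules list --global
gcloud compute target-http-proxies list
gcloud compute url-maps list
gcloud compute backend-services list
gcloud compute http-health-checks list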

Create and attach the autoscaler

Run the following command to create the autoscaler:

gcloud compute instance-groups managed set-autoscaling imagemagick-go \
    --max-num-replicas 23 \
    --min-num-replicas 2 \
    --target-load-balancing-utilization 0.5 \
    --zone $ZONE

This command sets several important parameters that control how the autoscaler behaves:

  • imagemagick-go: The managed instance group that the autoscaler scales, supplied as the command's first argument.
  • --min-num-replicas: The minimum number of replicas the autoscaler must maintain.
  • --max-num-replicas: The maximum number of replicas the autoscaler can maintain.
  • --target-load-balancing-utilization: The target HTTP load balancer utilization the autoscaler should maintain. In this example, the autoscaler will add new instances to imagemagick-go when the load balancer backend service reaches half (0.5) of its maximum utilization.
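
As a rough worked example, the backend you added earlier was configured with --max-utilization 0.8, so a target of 0.5 corresponds to roughly 0.5 × 0.8 = 40% average backend CPU utilization; once utilization climbs past that level, the autoscaler begins adding instances, up to the maximum of 23.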

Create and configure the Google Cloud Storage buckets

This example uses Google Cloud Storage buckets to store both incoming images and processed thumbnails.

You'll need to create two buckets: an input bucket to store uploaded images and an output bucket to store the processed thumbnails. Because the names of buckets must be unique across the whole of Google Cloud Storage, it's good practice to use your unique project ID as a namespace when naming them, as demonstrated here:
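
The following commands use a PROJECT_ID shell variable. If you haven't already set one, one way to do so (assuming your active gcloud configuration points at the tutorial project) is:

PROJECT_ID=$(gcloud config list --format 'value(core.project)')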

gsutil mb gs://${PROJECT_ID}-input-bucket gs://${PROJECT_ID}-output-bucket

Build and deploy the Google App Engine application

The App Engine application provided in this example functions as a simple endpoint that receives notifications of new images available to be processed. Download the example code as a zip archive from the GitHub project repository and unzip it to a local directory, or clone the repository by running the following:

$ git clone https://github.com/GoogleCloudPlatform/httplb-autoscaling-go.git

Create a new environment variable to point to the local root directory of the downloaded repository:

$ export EXAMPLE_DIR=local/path/to/httplb-autoscaling-go

Next, you'll need to make some configuration changes. Begin by obtaining the IP address of the load balancer forwarding rule:

gcloud compute forwarding-rules list --global
NAME              IP_ADDRESS            IP_PROTOCOL   TARGET
imagemagick-rule  <ip_address>          TCP           imagemagick-proxy
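
If you prefer to capture the address in a shell variable instead of copying it by hand, something like the following should work; the --format expression assumes a reasonably recent Cloud SDK:

LB_IP=$(gcloud compute forwarding-rules describe imagemagick-rule --global \
    --format 'value(IPAddress)')
echo $LB_IP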

Navigate to $EXAMPLE_DIR/appengine and open main.go for editing. Make the following changes to the file:

  • Replace the value of processingPoolIp with the IP address you just obtained.
  • Replace the value of saveToBucketName with the name of the output bucket you created earlier (<project_id>-output-bucket).

Save the file, then deploy the application as follows:

gcloud preview app deploy $EXAMPLE_DIR/appengine/app.yaml $EXAMPLE_DIR/appengine/queue.yaml

Set up object change notifications

To ensure that any new images uploaded to the input bucket are automatically processed, this example uses object change notification. Object change notification sends an event to a notification channel every time an object in a bucket is added, updated, or deleted.

Creating a service account

To set up object change notifications, you need to create a service account. Service accounts allow one Google service to authenticate to another Google service. To create a new service account:

  1. Create a service account key in the Cloud Platform Console.

  2. Click New Service Account and provide a name. Make a note of the email address: you will use this later.

  3. Click Create to download the JSON service account key.

  4. Activate the service account within the Cloud SDK.

    gcloud auth activate-service-account --key-file <path_to_key_file> <service_account_email>
    

Setting up push notifications

To allow your object change notifications to be pushed to App Engine, you need to register for push notifications as well:

  1. Navigate to Google Webmaster Tools in your browser.
  2. Click Add a site.
  3. Enter the App Engine address for your project:

    https://<project_id>.appspot.com
    
  4. Click Continue.

  5. Download the verification file when prompted.
  6. Copy the downloaded file to $EXAMPLE_DIR/appengine.

In the same directory, open app.yaml for editing. Look for the following lines:

- url: /.*
  script: _go_app

Add the following handler entry before those lines, inside the existing handlers section, substituting your own verification file name:

- url: /<your-downloaded-verification-file>
  static_files: <your-downloaded-verification-file>
  upload: <your-downloaded-verification-file>

Now redeploy the application:

gcloud preview app deploy $EXAMPLE_DIR/appengine/app.yaml $EXAMPLE_DIR/appengine/queue.yaml

Return to Webmaster Tools and click Verify. You should see a page confirming that your site has been verified.

Whitelisting your domain

For your object change notification requests to succeed, you need to whitelist your domain in your Cloud Platform Console project. To whitelist your domain:

  1. In the Domain Verification page of the Cloud Platform Console, click Add Domain.
  2. In the dialog that appears, enter the domain to whitelist:

    https://<project_id>.appspot.com
    
  3. Click Add domain to save the whitelist.

Finally, register for object change notifications on the Cloud Storage input bucket you created earlier by running the following command in your local terminal:

gsutil notification watchbucket https://${PROJECT_ID}.appspot.com \
    gs://${PROJECT_ID}-input-bucket
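
The watchbucket command prints a channel identifier and a resource identifier for the new notification channel. It's worth noting them down: if you later want to stop the notifications (for example, during cleanup), you will need both values:

gsutil notification stopchannel <channel_id> <resource_id>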

Test and monitor the demo application

Now that everything is set up, you can test the demo configuration. This tutorial tests the configuration by having you copy a large number of images into your input bucket at a rate that pushes load balancer utilization above 50% of the backend's serving capacity, triggering the autoscaler.

The startup script for each instance installs a Go script, generate_files.go, that will add 10,000 identical images to a bucket you specify. To test the application using this script, begin by setting your SDK account back to your default user:

gcloud config set account <user_email>

Next, create a temporary Cloud Storage bucket. This bucket will be used to hold the images generated by the script.

gsutil mb gs://${PROJECT_ID}-tmp-bucket

SSH into one of your instances:

gcloud compute ssh <instance>

In your SSH terminal, set the GOPATH and PATH environment variables so the script can run:

user@instance:~$ export GOPATH=/usr/local
user@instance:~$ export PATH=$PATH:/usr/local/go/bin

Run the script to copy the images to your new temporary bucket:

user@instance:~$ go run /tmp/generate_files.go <project_id>-tmp-bucket /tmp/eiffel.jpg

After the script finishes running, type exit to close your SSH connection.

With your temporary bucket in place, you can start generating some load for your load balancer and autoscaler. Run the following to copy the images from your temporary bucket to your input bucket:

gsutil -m cp -R gs://${PROJECT_ID}-tmp-bucket/* \
    gs://${PROJECT_ID}-input-bucket

You can observe the copy process by viewing the instance groups in the Cloud Platform Console. This page provides information about your managed instance group, including how many instances it has. You can also click on the instance group name to see the actual instances.
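
You can also watch the group from the command line. For example, assuming a reasonably recent Cloud SDK, the following command (run from your local terminal) re-lists the group's instances every ten seconds so you can see the autoscaler add capacity:

watch -n 10 "gcloud compute instance-groups managed list-instances imagemagick-go --zone $ZONE"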

To see information about your load balancer, view the HTTP load balancing page in the console. This page displays high-level information about the load balancer you configured. Clicking the first link in the table, imagemagick-map, shows you more detailed information, including the number of instances associated with the backend service.

Because each new instance must download and install the binaries and scripts defined in the instance template, it can take a couple of minutes for a new instance to start up. New instances created by the autoscaler might show a PROVISIONING or STAGING state during this period.

Summary

In this tutorial, you learned several powerful concepts that can be used to design a server-side batch processing flow, including:

  • How to set up Compute Engine Autoscaler to scale instances within a managed instance group when the incoming workload on your HTTP load balancer hits a specified threshold.
  • How to set up an HTTP load balancer to direct incoming traffic to one or more backends.
  • How to use object change notification to notify applications when new content has been uploaded to a Google Cloud Storage bucket.

This tutorial demonstrated how to use autoscaling in a batch processing flow, but the concepts you learned can be extended to any situation where autoscaling and load balancing are important to your application. You can also build on the application you created here, reusing or extending it as needed.

Clean up the example application
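
The original clean-up steps are not included in this section. As a rough sketch, removing the resources you created in this tutorial would look something like the following; exact flag requirements (for example, whether backend-services delete needs --global) vary between Cloud SDK versions:

# Tear down the load balancer components in reverse order of creation.
gcloud compute forwarding-rules delete imagemagick-rule --global
gcloud compute target-http-proxies delete imagemagick-proxy
gcloud compute url-maps delete imagemagick-map
gcloud compute backend-services delete imagemagick-backend-service
gcloud compute http-health-checks delete imagemagick-check
gcloud compute firewall-rules delete http-lb-rule

# Stop autoscaling, then delete the managed instance group and its template.
gcloud compute instance-groups managed stop-autoscaling imagemagick-go --zone $ZONE
gcloud compute instance-groups managed delete imagemagick-go --zone $ZONE
gcloud compute instance-templates delete imagemagick-go-template

# Remove the Cloud Storage buckets and their contents.
gsutil -m rm -r gs://${PROJECT_ID}-input-bucket gs://${PROJECT_ID}-output-bucket \
    gs://${PROJECT_ID}-tmp-bucket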
