Serverless load balancing with Terraform: The hard way
Ahmet Alp Balkan
Senior Developer Advocate
Earlier this year, we announced Cloud Load Balancer support for Cloud Run. You might wonder: aren't Cloud Run services already load-balanced? Yes, each *.run.app endpoint load-balances traffic between an autoscaling set of containers. However, with the Cloud Load Balancing integration for serverless platforms, you can now fine-tune lower levels of your networking stack. In this article, we'll explain the use cases for this type of setup and build an HTTPS load balancer from the ground up for Cloud Run using Terraform.
Why use a Load Balancer for Cloud Run?
Every Cloud Run service comes with a load-balanced *.run.app endpoint that’s secured with HTTPS. Cloud Run also lets you map your custom domains to your services. However, if you want to customize other details of how your load balancing works, you need to provision a Cloud HTTP load balancer yourself.
Here are a few reasons to run your Cloud Run service behind a Cloud Load Balancer:
- Serving static assets with a CDN, since Cloud CDN integrates with Cloud Load Balancing.
- Serving traffic from multiple regions: Cloud Run is a regional service, but you can provision a load balancer with a global anycast IP address and route users to the closest available region.
- Serving content from mixed backends; for example, your /static path can be served from a storage bucket while /api goes to a Kubernetes cluster.
- Bringing your own TLS certificates, such as wildcard certificates you might have purchased.
- Customizing networking settings, such as the TLS versions and ciphers supported.
- Authenticating and enforcing authorization for specific users or groups with Cloud IAP (this does not yet work with Cloud Run, but stay tuned).
- Configuring WAF or DDoS protection with Cloud Armor.
The list goes on; Cloud HTTP Load Balancing has quite a lot of features.
Why use Terraform for this?
The short answer is that a Cloud HTTP Load Balancer consists of many networking resources that you need to create and connect to each other. There’s no single "load balancer" object in GCP APIs.
To understand the upcoming task, let's take a look at the resources involved:
- a global IP address for your load balancer
- a Google-managed SSL certificate (or bring your own)
- a forwarding rule to associate the IP address with a target proxy
- a target HTTPS proxy to terminate your HTTPS traffic
- a target HTTP proxy to receive HTTP traffic and redirect it to HTTPS
- a URL map to specify routing rules for URL path patterns
- a backend service to keep track of eligible backends
- a network endpoint group (NEG) that lets you register serverless apps as backends
As you might imagine, it is very tedious to provision and connect these resources just to achieve a simple task like enabling CDN.
You could write a bash script with the gcloud command-line tool to create these resources; however, it would be cumbersome to handle corner cases, such as a resource that already exists or one that was modified manually later. You would also need to write a cleanup script to delete what you provisioned.
This is where Terraform shines. It lets you declaratively configure cloud resources and create/destroy your stack in different GCP projects efficiently with just a few commands.
Building a load balancer: The hard way
The goal of this article is to intentionally show you the hard way: defining each resource involved in a load balancer, one by one, in the Terraform configuration language.
We'll start with a few Terraform variables:
- var.name: used for naming the load balancer resources
- var.project: GCP project ID
- var.region: region to deploy the Cloud Run service
- var.domain: a domain name for your managed SSL certificate
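In HCL, these declarations might look like the following minimal sketch (the defaults here are illustrative):

```hcl
variable "name" {
  description = "Name prefix for the load balancer resources"
  default     = "hello"
}

variable "project" {
  description = "GCP project ID"
}

variable "region" {
  description = "Region to deploy the Cloud Run service in"
  default     = "us-central1"
}

variable "domain" {
  description = "Domain name for the Google-managed SSL certificate"
}
```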
First, let's define our Terraform providers:
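A minimal sketch of the provider configuration; depending on your provider version, some of the resources below may also require the google-beta provider:

```hcl
provider "google" {
  project = var.project
  region  = var.region
}

# Some serverless load-balancing resources have historically been
# available only in the beta provider.
provider "google-beta" {
  project = var.project
  region  = var.region
}
```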
Then, let's deploy a new Cloud Run service named "hello" with the sample image, and allow unauthenticated access to it:
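Roughly like this, assuming the public gcr.io/cloudrun/hello sample image:

```hcl
resource "google_cloud_run_service" "default" {
  name     = var.name
  location = var.region

  template {
    spec {
      containers {
        image = "gcr.io/cloudrun/hello"
      }
    }
  }
}

# Allow unauthenticated (public) access to the service.
resource "google_cloud_run_service_iam_member" "public_access" {
  service  = google_cloud_run_service.default.name
  location = google_cloud_run_service.default.location
  role     = "roles/run.invoker"
  member   = "allUsers"
}
```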
If you manage your Cloud Run deployments outside Terraform, that’s perfectly fine: you can use the equivalent data source instead to reference that service in your configuration.
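For example, a sketch using the google_cloud_run_service data source, assuming a service named "hello" already exists in var.region:

```hcl
data "google_cloud_run_service" "existing" {
  name     = "hello"
  location = var.region
}
```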
Next, we’ll reserve a global IPv4 address for our global load balancer:
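Something along these lines (the resource name is illustrative):

```hcl
resource "google_compute_global_address" "default" {
  name = "${var.name}-address"
}
```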
Next, let's create a managed SSL certificate that's issued and renewed by Google for you:
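A sketch of the managed certificate, issued for the domain in var.domain:

```hcl
# May require the google-beta provider on older provider versions.
resource "google_compute_managed_ssl_certificate" "default" {
  name = "${var.name}-cert"

  managed {
    domains = [var.domain]
  }
}
```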
If you want to bring your own SSL certificates, you can create your own google_compute_ssl_certificate resource instead.
Then, make a network endpoint group (NEG) out of your serverless service:
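Roughly, a serverless NEG pointing at the Cloud Run service created above:

```hcl
resource "google_compute_region_network_endpoint_group" "default" {
  name                  = "${var.name}-neg"
  network_endpoint_type = "SERVERLESS"
  region                = var.region

  cloud_run {
    service = google_cloud_run_service.default.name
  }
}
```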
Now, let's create a backend service that'll keep track of these network endpoints:
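A minimal sketch; note that serverless NEG backends don't take health checks:

```hcl
resource "google_compute_backend_service" "default" {
  name = "${var.name}-backend"

  backend {
    group = google_compute_region_network_endpoint_group.default.id
  }
}
```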
If you want to configure load balancing features such as CDN, Cloud Armor or custom headers, the google_compute_backend_service resource is the right place.
Then, create an empty URL map that doesn't have any routing rules and sends the traffic to this backend service we created earlier:
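Something like:

```hcl
resource "google_compute_url_map" "default" {
  name            = "${var.name}-urlmap"
  default_service = google_compute_backend_service.default.id
}
```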
Next, configure an HTTPS proxy to terminate the traffic with the Google-managed certificate and route it to the URL map:
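A sketch tying the certificate and URL map together:

```hcl
resource "google_compute_target_https_proxy" "default" {
  name             = "${var.name}-https-proxy"
  url_map          = google_compute_url_map.default.id
  ssl_certificates = [google_compute_managed_ssl_certificate.default.id]
}
```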
Finally, configure a global forwarding rule to route the HTTPS traffic on the IP address to the target HTTPS proxy:
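Roughly:

```hcl
resource "google_compute_global_forwarding_rule" "https" {
  name       = "${var.name}-lb"
  target     = google_compute_target_https_proxy.default.id
  ip_address = google_compute_global_address.default.address
  port_range = "443"
}
```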
After writing this module, create an output variable that lists your IP address:
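For example (the output name is illustrative):

```hcl
output "load_balancer_ip" {
  value = google_compute_global_address.default.address
}
```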
When you apply these resources and set your domain’s DNS records to point to this IP address, a huge machinery starts rolling its wheels.
Soon, Google Cloud will verify your domain name ownership and start issuing a managed TLS certificate for your domain. After the certificate is issued, the load balancer configuration will propagate to all of Google’s edge locations around the globe. This might take a while, but once it completes, your load balancer is ready to serve traffic.
Astute readers will notice that so far this setup cannot handle unencrypted HTTP traffic: any requests that come in over port 80 are dropped, which is not great for usability. To fix this, you need to create an additional URL map, target HTTP proxy, and forwarding rule that redirect HTTP traffic to HTTPS:
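A sketch of the redirect path, reusing the same global IP address:

```hcl
# URL map that only issues an HTTPS redirect; it has no backends.
resource "google_compute_url_map" "https_redirect" {
  name = "${var.name}-https-redirect"

  default_url_redirect {
    https_redirect = true
    strip_query    = false
  }
}

resource "google_compute_target_http_proxy" "http" {
  name    = "${var.name}-http-proxy"
  url_map = google_compute_url_map.https_redirect.id
}

# Route port 80 traffic on the same IP address to the redirect proxy.
resource "google_compute_global_forwarding_rule" "http" {
  name       = "${var.name}-http-lb"
  target     = google_compute_target_http_proxy.http.id
  ip_address = google_compute_global_address.default.address
  port_range = "80"
}
```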
As we near 150 lines of Terraform configuration, you have probably realized by now that this is indeed the hard way to get a load balancer for your serverless applications.
If you’d like to try out this example, feel free to grab a copy of this Terraform configuration file from this gist and adapt it to your needs.
Building a load balancer: The easy way
To address the complexity in this experience, we have been designing a new Terraform module specifically to skip the hard parts of deploying serverless applications behind a Cloud HTTPS Load Balancer.
Stay tuned for the next article, where we’ll take a closer look at this new Terraform module and show you how much easier this can get.