Setting up Cloud CDN with Cloud Run, Cloud Functions, or App Engine

This page shows you how to create an external HTTP(S) load balancer to route requests to serverless backends. Here the term serverless refers to the following serverless compute products: App Engine, Cloud Functions, and Cloud Run (fully managed).

Serverless NEGs allow you to use Google Cloud serverless apps with external HTTP(S) Load Balancing. After you configure a load balancer with the serverless NEG backend, requests to the load balancer are routed to the serverless app backend.

If you want to learn more about Serverless NEGs, read the Serverless NEGs overview.

Before you begin

  1. Deploy an App Engine, Cloud Functions, or Cloud Run (fully managed) service.
  2. If you haven't already done so, install the Google Cloud SDK.
  3. Configure permissions.
  4. Add an SSL certificate resource.

Deploy an App Engine, Cloud Functions, or Cloud Run (fully managed) service

The instructions on this page assume you already have a Cloud Run (fully managed), Cloud Functions, or App Engine service running.

For the example on this page, we have used the Cloud Run (fully managed) Python quickstart to deploy a Cloud Run (fully managed) service in the us-central1 region. The rest of this page shows you how to set up an external HTTP(S) load balancer that uses a serverless NEG backend to route requests to this service.

If you haven't already deployed a serverless app, or if you want to try a serverless NEG with a sample app, use one of the following quickstarts. You can create a serverless app in any region, but you must use the same region later on to create the serverless NEG and load balancer.

Cloud Run (fully managed)

To create a simple Hello World application, package it into a container image, and then deploy the container image to Cloud Run (fully managed), see Quickstart: Build and Deploy.

If you already have a sample container uploaded to the Container Registry, see Quickstart: Deploy a Prebuilt Sample Container.
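
If you just want a quick service to put behind the load balancer, the following is a minimal sketch that deploys Google's prebuilt sample container; the service name helloworld and the region us-central1 are example values you can change:

# Deploy the prebuilt sample container as a Cloud Run (fully managed) service.
gcloud run deploy helloworld \
    --image=gcr.io/cloudrun/hello \
    --platform=managed \
    --region=us-central1 \
    --allow-unauthenticated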

Cloud Functions

See Cloud Functions: Python Quickstart.

App Engine

See the App Engine quickstart guides for Python 3.

Install Google Cloud SDK

Install the gcloud command-line tool. See gcloud Overview for conceptual and installation information about the tool.

If you haven't run the gcloud command-line tool previously, first run gcloud init to initialize your gcloud directory.
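
For example, the following commands initialize the SDK and set the project that the rest of the commands on this page run against (PROJECT_ID is a placeholder for your project ID):

# Authenticate, choose defaults, and select the working project.
gcloud init
gcloud config set project PROJECT_ID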

Configure permissions

To follow this guide, you need to create a serverless NEG and create an external HTTP(S) load balancer in a project. You should be either a project owner or editor, or you should have the following Compute Engine IAM roles:

  • Create load balancer and networking components: Network Admin
  • Create and modify NEGs: Compute Instance Admin
  • Create and modify SSL certificates: Security Admin
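
If you are not a project owner or editor, a project owner can grant these roles to your account. As a hedged example using the corresponding predefined Compute Engine role IDs (PROJECT_ID and USER_EMAIL are placeholders):

# Grant the Network Admin role; repeat with roles/compute.instanceAdmin
# and roles/compute.securityAdmin as needed.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/compute.networkAdmin"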

Reserving an external IP address

Now that your services are up and running, set up a global static external IP address that your customers use to reach your load balancer.

Console

  1. Go to the External IP addresses page in the Google Cloud Console.
    Go to the External IP addresses page
  2. Click Reserve static address to reserve an IPv4 address.
  3. Assign a Name of example-ip.
  4. Set the Network tier to Premium.
  5. Set the IP version to IPv4.
  6. Set the Type to Global.
  7. Click Reserve.

gcloud

gcloud compute addresses create example-ip \
    --ip-version=IPV4 \
    --global

Note the IPv4 address that was reserved:

gcloud compute addresses describe example-ip \
    --format="get(address)" \
    --global

Creating an SSL certificate resource

To create an HTTPS load balancer, you must add an SSL certificate resource to the load balancer's front end. Create an SSL certificate resource using either a Google-managed SSL certificate or a self-managed SSL certificate.

  • Google-managed certificates. Using Google-managed certificates is recommended because Google Cloud obtains, manages, and renews these certificates automatically. To create a Google-managed certificate, you must have a domain and the DNS records for that domain in order for the certificate to be provisioned. If you don't already have a domain, you can get one from Google Domains. Additionally, you will need to update the domain's DNS A record to point to the load balancer's IP address created in the previous step (example-ip). For detailed instructions, see Using Google-managed certificates.

  • Self-signed certificates. If you do not want to set up a domain at this time, you can use a self-signed SSL certificate for testing.
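
If you take the self-signed route for testing, you can generate a certificate and key with openssl and use them later when you create the self-managed SSL certificate resource; the file names and common name below are examples:

# Generate a self-signed certificate and key, valid for 365 days (testing only).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout self-signed.key -out self-signed.crt \
    -subj "/CN=test.example.com"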

This example assumes that you have already created an SSL certificate resource.

If you want to test this process without creating an SSL certificate resource (or a domain as required by Google-managed certificates), you can still use the instructions on this page to set up an HTTP load balancer instead.

Creating the external HTTP(S) load balancer

In the following diagram, the load balancer uses a serverless NEG backend to direct requests to a serverless Cloud Run (fully managed) service. For this example, we have used the Cloud Run (fully managed) Python quickstart to deploy a Cloud Run (fully managed) service.

Figure: HTTPS Load Balancing for a Cloud Run app

Because health checks are not supported for backend services with serverless NEG backends, you don't need to create a firewall rule allowing health checks if the load balancer has only serverless NEG backends.

Console

Naming your load balancer

  1. Go to the Load balancing page in the Google Cloud Console.
    Go to the Load balancing page
  2. Under HTTP(S) Load Balancing, click Start configuration.
  3. Under Internet facing or internal only, select From Internet to my VMs.
  4. Click Continue.
  5. For the Name of the load balancer, enter serverless-lb.
  6. Keep the window open to continue.

Configuring the backend services

  1. Click Backend configuration.
  2. In the Create or select a backend service drop-down menu, hold the mouse pointer over Backend services, and then select Create a backend service.
  3. Enter a Name.
  4. Under Backend type, select Serverless network endpoint group.
  5. Leave Protocol unchanged. This parameter is ignored.
  6. Under Backends, in the New backend window, select Create Serverless network endpoint group.
  7. Enter a Name.
  8. Under Region, select us-central1. Then select Cloud Run.
  9. Select Select service name.
  10. From the Service dropdown, select the Cloud Run (fully managed) service that you want to create a load balancer for.
  11. Click Create.
  12. Under the New backend window, click Done.
  13. Select Enable Cloud CDN. Retain the default cache mode and TTL settings.
  14. Click Create.

Configuring host rules and path matchers

Host rules and path matchers are configuration components of an external HTTP(S) load balancer's URL map.

  1. Click Host and path rules.
  2. Retain the default hosts and paths. For this example, all requests go to the backend service created in the previous step.

Configuring the frontend

  1. Click Frontend configuration.
  2. Enter a Name.
  3. To create an HTTPS load balancer, you must have an SSL certificate (gcloud compute ssl-certificates list). We recommend using a Google-managed certificate as described previously. To configure an external HTTP(S) load balancer, fill in the fields as follows.

    Verify the following options are configured with these values:

    • Protocol: HTTPS
    • Network Service Tier: Premium
    • IP version: IPv4
    • IP address: example-ip
    • Port: 443
    • Certificate: Select an existing SSL certificate or create a new certificate.

    To create an HTTPS load balancer, you must have an SSL certificate resource to use in the HTTPS proxy. You can create an SSL certificate resource using either a Google-managed SSL certificate or a self-managed SSL certificate.
    To create a Google-managed certificate, you must have a domain. The domain’s A record must resolve to the IP address of the load balancer (in this example, example-ip). Using Google-managed certificates is recommended because Google Cloud obtains, manages, and renews these certificates automatically. If you do not have a domain, you can use a self-signed SSL certificate for testing.

  4. Click Done.

Reviewing the configuration

  1. Click Review and finalize.
  2. Review the Backend, Host and Path rules, and Frontend.
  3. Click Create.
  4. Wait for the load balancer to be created.
  5. Click the name of the load balancer (serverless-lb).
  6. Note the IP address of the load balancer for the next task. It's referred to as IP_ADDRESS.

gcloud

  1. Create a serverless NEG for your serverless app. To create a serverless NEG with a Cloud Run (fully managed) service:

    gcloud compute network-endpoint-groups create SERVERLESS_NEG_NAME \
        --region=us-central1 \
        --network-endpoint-type=serverless  \
        --cloud-run-service=CLOUD_RUN_SERVICE_NAME
    
    For more options, see the gcloud reference guide for gcloud compute network-endpoint-groups create.
  2. Create a backend service.
    gcloud compute backend-services create BACKEND_SERVICE_NAME \
        --global \
        --enable-cdn \
        --cache-mode=CACHE_MODE \
        --custom-response-header='Cache-Status: {cdn_cache_status}' \
        --custom-response-header='Cache-ID: {cdn_cache_id}'
    

    Replace CACHE_MODE with one of the following:

    • CACHE_ALL_STATIC (default): Automatically caches static content. Responses that are marked as uncacheable (private, no-store, or no-cache directives in Cache-Control response headers) aren't cached. To cache dynamic content, the content must have valid caching headers. This is the default behavior for all new Cloud CDN-enabled backends.

    • USE_ORIGIN_HEADERS: Requires the origin to set valid caching headers to cache content. Responses without these headers aren't cached at Google's edge and require a full trip to the origin on every request, potentially impacting performance and increasing load on the origin server. This is the default behavior for all existing Cloud CDN-enabled backends.

    • FORCE_CACHE_ALL: Caches all content, ignoring any private, no-store, or no-cache directives in Cache-Control response headers. This might result in caching of private, per-user (user identifiable) content. You should only enable this on backends that are not serving private or dynamic content, such as Cloud Storage buckets.

    For information about the cache directives that Cloud CDN understands and what's not cached by Cloud CDN, see Cacheable content and Non-cacheable content.

  3. Add the serverless NEG as a backend to the backend service:

    gcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
        --global \
        --network-endpoint-group=SERVERLESS_NEG_NAME \
        --network-endpoint-group-region=us-central1
    
  4. Create a URL map to route incoming requests to the backend service:
    gcloud compute url-maps create URL_MAP_NAME \
        --default-service BACKEND_SERVICE_NAME
    

    This example URL map targets only one backend service representing a single serverless app, so you don't need to set up host rules or path matchers. If you have more than one backend service, you can use host rules to direct requests to different services based on the hostname, and path matchers to direct requests to different services based on the request path (see the example after this procedure).

  5. To create an HTTPS load balancer, you must have an SSL certificate resource to use in the HTTPS proxy. You can create an SSL certificate resource using either a Google-managed SSL certificate or a self-managed SSL certificate. Using Google-managed certificates is recommended because Google Cloud obtains, manages, and renews these certificates automatically.
    To create a Google-managed certificate, you must have a domain. If you do not have a domain, you can use a self-signed SSL certificate for testing.

    To create a Google-managed SSL certificate resource called www-ssl-cert:

    gcloud compute ssl-certificates create www-ssl-cert \
        --domains DOMAIN
    

    To create a self-managed SSL certificate resource called www-ssl-cert:

    gcloud compute ssl-certificates create www-ssl-cert \
        --certificate CRT_FILE_PATH \
        --private-key KEY_FILE_PATH
    

  6. Create a target HTTP(S) proxy to route requests to your URL map.

    For an HTTPS load balancer, create an HTTPS target proxy. The proxy is the portion of the load balancer that holds the SSL certificate for HTTPS Load Balancing, so you also load your certificate in this step.

    gcloud compute target-https-proxies create TARGET_HTTPS_PROXY_NAME \
        --ssl-certificates=www-ssl-cert \
        --url-map=URL_MAP_NAME
    

  7. Create a global forwarding rule to route incoming requests to the proxy.

    For an HTTPS load balancer:

    gcloud compute forwarding-rules create HTTPS_FORWARDING_RULE_NAME \
        --address=example-ip \
        --target-https-proxy=TARGET_HTTPS_PROXY_NAME \
        --global \
        --ports=443
    
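If you later add more backend services (for example, a second serverless NEG for an API), you don't have to send all requests to the default service. A hedged sketch, assuming a second backend service named API_BACKEND_SERVICE_NAME and the hostname example.com, adds a path matcher that routes /api/* to that service:

# Route /api/* to a separate backend service; all other paths use the default.
gcloud compute url-maps add-path-matcher URL_MAP_NAME \
    --path-matcher-name=api-matcher \
    --default-service=BACKEND_SERVICE_NAME \
    --path-rules="/api/*=API_BACKEND_SERVICE_NAME" \
    --new-hosts=example.com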

Updating your domain's DNS records to use the load balancer's IP address

If you haven't already done so, update your domain's DNS A or AAAA record to point to the load balancer's IP address so that traffic sent to the existing custom domain URLs is routed through the load balancer instead.

For example, if you have a custom domain of example.com and all your services are mapped to this domain, you should update the DNS A or AAAA record for example.com to point to the load balancer's IP address.
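
If the domain's DNS zone is managed by Cloud DNS, one way to make this change is with a record-set transaction, sketched below; ZONE_NAME and LOAD_BALANCER_IP are placeholders. If your DNS is hosted elsewhere, update the A record in that provider's interface instead.

# Point the example.com A record at the load balancer's IP address.
# If an A record already exists, remove it in the same transaction first.
gcloud dns record-sets transaction start --zone=ZONE_NAME
gcloud dns record-sets transaction add LOAD_BALANCER_IP \
    --name=example.com. --ttl=300 --type=A --zone=ZONE_NAME
gcloud dns record-sets transaction execute --zone=ZONE_NAME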

Before updating the DNS records, you can test your configuration locally by forcing local DNS resolution of the custom domain to the load balancer's IP address. To test locally, either modify the /etc/hosts file on your local machine to point example.com to the load balancer's IP address, or use the curl --resolve flag to force curl to use the load balancer's IP address for the request.
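
For example, the following command forces curl to resolve example.com to the load balancer's IP address without changing any DNS records (add -k if you are testing with a self-signed certificate):

curl -v --resolve example.com:443:IP_ADDRESS https://example.com/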

When the DNS record for example.com resolves to the HTTPS load balancer’s IP address, requests sent to example.com begin to be routed via the load balancer. The load balancer dispatches them to the relevant backend service according to its URL map. Additionally, if the backend service is configured with a URL mask, the serverless NEG uses the mask to route the request to the appropriate Cloud Run (fully managed), Cloud Functions, or App Engine service.

If you use Google-managed certificates, migrating an existing service to an external HTTP(S) load balancer may incur some downtime, typically less than an hour. This is because the SSL certificate for your external HTTP(S) load balancer won’t be provisioned until you update your DNS records to point to the load balancer’s IP address.

Testing the load balancer

Now that you have configured your load balancer, you can start sending traffic to the load balancer's IP address. If you configured a domain, you can send traffic to the domain name as well. However, DNS propagation can take time to complete, so you can start by using the IP address for testing.

  1. Go to the Load balancing page in the Google Cloud Console.
    Go to the Load balancing page
  2. Click on the load balancer you just created.
  3. Note the IP Address of the load balancer.
  4. For an HTTPS load balancer, you can test your load balancer using a web browser by going to https://IP_ADDRESS. Replace IP_ADDRESS with the load balancer's IP address. You should be directed to the helloworld service homepage.

    If that does not work and you are using a Google-managed certificate, confirm that your certificate resource's status is ACTIVE. For more information, see Google-managed SSL certificate resource status.

    If you used a self-signed certificate for testing, your browser displays a warning. You must explicitly instruct your browser to accept a self-signed certificate. Click through the warning to see the actual page.

  5. To verify cache responses, use curl from your local machine's command line. Replace IP_ADDRESS with the load balancer's IPv4 address.

    curl -v -o/dev/null https://IP_ADDRESS
    

    If you are using a Google-managed certificate, test the domain pointing to the load balancer's IP address. For example:

    curl -v -o/dev/null -k -s 'https://DOMAIN:443' --connect-to DOMAIN:443:IP_ADDRESS:443
    

    If you are using a self-signed certificate, you must also specify the -k flag. The curl -k option allows curl to work if you have a self-signed certificate. You should only use the -k parameter for testing your own site. Under normal circumstances, a valid certificate is an important security measure and certificate warnings should not be ignored.

    The output should contain the custom headers Cache-ID and Cache-Status you have configured to indicate whether the response was served from the cache:

    HTTP/2 200
    cache-status: hit
    cache-id: SEA-b9fa975e
    

    The output contains the response headers that indicate there was a cache hit, meaning the static asset in the serverless app was served to the user from a Cloud CDN edge cache.

    The cache-status header has the value disabled for responses that are not cached by Cloud CDN. For cached responses, the cache-status header value is hit, miss, or revalidated.
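
    The first request for a given cacheable object is typically reported as a miss; repeating the same request should then return a hit. As a quick check (the /static/logo.png path is a hypothetical example; use a path that your service actually serves, and add -k for a self-signed certificate):

    # Print only the CDN-related response headers for a repeated request.
    curl -s -o /dev/null -D - https://IP_ADDRESS/static/logo.png | grep -i -E 'cache-(status|id)'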

Disabling Cloud CDN

Console

Disable Cloud CDN for a single backend service

  1. In the Google Cloud Console, go to the Cloud CDN page.

    Go to the Cloud CDN page

  2. On the right side of the origin row, click Menu and then select Edit.
  3. Clear the checkboxes of any backend services that you want to stop from using Cloud CDN.
  4. Click Update.

Remove Cloud CDN for all backend services for an origin

  1. In the Cloud Console, go to the Cloud CDN page.

    Go to the Cloud CDN page

  2. On the right side of the origin row, click Menu and then select Remove.
  3. To confirm, click Remove.

gcloud

gcloud compute backend-services update BACKEND_SERVICE_NAME \
    --no-enable-cdn

Disabling Cloud CDN does not invalidate or purge caches. If you turn Cloud CDN off and back on again, most or all of your cached content might still be cached. To prevent content from being used by the caches, you must invalidate that content.
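
For example, to invalidate everything cached under the load balancer's URL map:

# Invalidate all cached content for this URL map.
gcloud compute url-maps invalidate-cdn-cache URL_MAP_NAME \
    --path="/*"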

What's next

  • Refer to the Cloud CDN documentation to learn more about cache modes and determine cacheability of your responses.