This page shows you how to create an external Application Load Balancer to route requests to serverless backends. Here the term serverless refers to the following serverless compute products:
- App Engine
- Cloud Run functions
- Cloud Run
The integration of external Application Load Balancers with API Gateway enables your serverless backends to take advantage of all the features provided by Cloud Load Balancing. To configure an external Application Load Balancer to route traffic to an API Gateway, see Getting started with an external Application Load Balancer for API Gateway. External Application Load Balancer support for API Gateway is in Preview.
Serverless NEGs allow you to use Google Cloud serverless apps with external Application Load Balancers. After you configure a load balancer with the serverless NEG backend, requests to the load balancer are routed to the serverless app backend.
To learn more about serverless NEGs, read the Serverless NEGs overview.
If you are an existing user of the classic Application Load Balancer, make sure that you review the Migration overview when you plan a new deployment with the global external Application Load Balancer.
Before you begin
- Deploy an App Engine, Cloud Run functions, or Cloud Run service.
- If you haven't already done so, install the Google Cloud CLI.
- Configure permissions.
- Add an SSL certificate resource.
Deploy an App Engine, Cloud Run functions, or Cloud Run service
The instructions on this page assume you already have a Cloud Run, Cloud Run functions, or App Engine service running.
For the example on this page, we used the Cloud Run Python quickstart to deploy a Cloud Run service in the us-central1 region. The rest of this page shows you how to set up an external Application Load Balancer that uses a serverless NEG backend to route requests to this service.
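If you just want a stand-in service for this walkthrough, one possible shortcut (an assumption for illustration, not the quickstart itself) is to deploy Google's prebuilt sample container with a single gcloud command; the service name helloworld is arbitrary:

```shell
# Deploy the public "hello" sample container as an unauthenticated service.
gcloud run deploy helloworld \
    --image=us-docker.pkg.dev/cloudrun/container/hello \
    --region=us-central1 \
    --allow-unauthenticated
```

The --allow-unauthenticated flag makes the service publicly invocable, which matches the public load balancer frontend configured later on this page.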
If you haven't already deployed a serverless app, or if you want to try a serverless NEG with a sample app, use one of the following quickstarts. You can create a serverless app in any region, but you must use the same region later on to create the serverless NEG and load balancer.
Cloud Run
To create a simple Hello World application, package it into a container image, and then deploy the container image to Cloud Run, see Quickstart: Build and Deploy.
If you already have a sample container uploaded to the Container Registry, see Quickstart: Deploy a Prebuilt Sample Container.
Cloud Run functions
See Cloud Run functions: Python Quickstart.
App Engine
See the App Engine quickstart guide for Python 3.
Install the Google Cloud CLI
Install the Google Cloud CLI. See gcloud Overview for conceptual and installation information about the tool.
If you haven't run the gcloud CLI previously, first run gcloud init to initialize your gcloud directory.
Configure permissions
To follow this guide, you need to create a serverless NEG and an external Application Load Balancer in a project. You must be a project Owner or Editor, or you must have the following Compute Engine IAM roles:
Task | Required Role |
---|---|
Create load balancer and networking components | Network Admin |
Create and modify NEGs | Compute Instance Admin |
Create and modify SSL certificates | Security Admin |
Reserve an external IP address
Now that your services are up and running, set up a global static external IP address that your customers use to reach your load balancer.
Console
In the Google Cloud console, go to the External IP addresses page.
Click Reserve external static IP address.
For Name, enter example-ip.
For Network service tier, select Premium.
For IP version, select IPv4.
For Type, select Global.
Click Reserve.
gcloud
gcloud compute addresses create example-ip \
    --network-tier=PREMIUM \
    --ip-version=IPV4 \
    --global
Note the IPv4 address that was reserved:
gcloud compute addresses describe example-ip \
    --format="get(address)" \
    --global
Create an SSL certificate resource
To create an HTTPS load balancer, you must add an SSL certificate resource to the load balancer's front end. Create an SSL certificate resource using either a Google-managed SSL certificate or a self-managed SSL certificate.
Google-managed certificates. Using Google-managed certificates is recommended because Google Cloud obtains, manages, and renews these certificates automatically. To create a Google-managed certificate, you must have a domain and control of its DNS records so that the certificate can be provisioned. Additionally, you need to update the domain's DNS A record to point to the load balancer's IP address created in the previous step (example-ip). For detailed instructions, see Using Google-managed certificates.
Self-signed certificates. If you do not want to set up a domain at this time, you can use a self-signed SSL certificate for testing.
This example assumes that you have already created an SSL certificate resource.
If you want to test this process without creating an SSL certificate resource (or a domain as required by Google-managed certificates), you can still use the instructions on this page to set up an HTTP load balancer instead.
Create the load balancer
In the following diagram, the load balancer uses a serverless NEG backend to direct requests to a serverless Cloud Run service. For this example, we have used the Cloud Run Python quickstart to deploy a Cloud Run service.
Because health checks are not supported for backend services with serverless NEG backends, you don't need to create a firewall rule allowing health checks if the load balancer has only serverless NEG backends.
Console
Start your configuration
In the Google Cloud console, go to the Load balancing page.
- Click Create load balancer.
- For Type of load balancer, select Application Load Balancer (HTTP/HTTPS) and click Next.
- For Public facing or internal, select Public facing (external) and click Next.
- For Global or single region deployment, select Best for global workloads and click Next.
- For Load balancer generation, select Global external Application Load Balancer and click Next.
- Click Configure.
Basic configuration
- For the name of the load balancer, enter serverless-lb.
- Keep the window open to continue.
Frontend configuration
- Click Frontend configuration.
- For Name, enter a name.
- To create an HTTPS load balancer, you must have an SSL certificate (gcloud compute ssl-certificates list). We recommend using a Google-managed certificate as described previously.
- Click Done.
To configure an external Application Load Balancer, verify that the following options are configured with these values:
Property | Value (type a value or select an option as specified) |
---|---|
Protocol | HTTPS |
Network Service Tier | Premium |
IP version | IPv4 |
IP address | example-ip |
Port | 443 |
Optional: HTTP keepalive timeout | Enter a timeout value from 5 to 1200 seconds. The default value is 610 seconds. |
Certificate | Select an existing SSL certificate or create a new certificate. To create an HTTPS load balancer, you must have an SSL certificate resource to use in the HTTPS proxy. You can create an SSL certificate resource using either a Google-managed SSL certificate or a self-managed SSL certificate. To create a Google-managed certificate, you must have a domain. The domain's A record must resolve to the IP address of the load balancer (in this example, example-ip). Using Google-managed certificates is recommended because Google Cloud obtains, manages, and renews these certificates automatically. If you do not have a domain, you can use a self-signed SSL certificate for testing. |
Optional: Enable HTTP to HTTPS Redirect | Use this checkbox to enable HTTP to HTTPS redirects. Enabling this checkbox creates an additional partial HTTP load balancer that uses the same IP address as your HTTPS load balancer and redirects HTTP requests to your load balancer's HTTPS frontend. This checkbox can only be selected when the HTTPS protocol is selected and a reserved IP address is used. |
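Outside the console, the same redirect behavior comes from a second, partial HTTP load balancer whose URL map does nothing but redirect. The sketch below is a minimal illustration (the map name http-redirect is an assumption); you would still attach this map to its own target HTTP proxy and a port-80 forwarding rule on the same reserved address:

```shell
# Import a URL map whose only action is an HTTP-to-HTTPS redirect.
gcloud compute url-maps import http-redirect \
    --global \
    --source=/dev/stdin <<EOF
name: http-redirect
defaultUrlRedirect:
  redirectResponseCode: MOVED_PERMANENTLY_DEFAULT
  httpsRedirect: true
EOF
```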
If you want to test this process without setting up an SSL certificate resource (or a domain as required by Google-managed certificates), you can set up an HTTP load balancer.
To create an HTTP load balancer, verify that the following options are configured with these values:
Property | Value (type a value or select an option as specified) |
---|---|
Protocol | HTTP |
Network Service Tier | Premium |
IP version | IPv4 |
IP address | example-ip |
Port | 80 |
Optional: HTTP keepalive timeout | Enter a timeout value from 5 to 1200 seconds. The default value is 610 seconds. |
Backend configuration
- Click Backend configuration.
- In the Backend services & backend buckets list, click Create a backend service.
- For Name, enter a name.
- In Backend type, select Serverless network endpoint group.
- Leave Protocol unchanged. This parameter is ignored.
- In the Backends section, for New backend, select Create Serverless network endpoint group.
- For Name, enter a name.
- In Region, select us-central1, and then select Cloud Run.
- Select Select service name.
- In the Service list, select the Cloud Run service that you want to create a load balancer for.
- Click Create.
- In the New backend section, click Done.
- Click Create.
Routing rules
Routing rules determine how your traffic is directed. To configure routing, you'll set up host rules and path matchers, which are configuration components of an external Application Load Balancer's URL map.
- Click Routing rules.
- Retain the default hosts and paths. For this example, all requests go to the backend service created in the previous step.
Review the configuration
- Click Review and finalize.
- Review all the settings.
- Optional: Click Equivalent Code to view the REST API request that will be used to create the load balancer.
- Click Create.
- Wait for the load balancer to be created.
- Click the name of the load balancer (serverless-lb).
- Note the IP address of the load balancer for the next task. It's referred to as IP_ADDRESS.
gcloud
- Create a serverless NEG for your serverless app.
  To create a serverless NEG with a Cloud Run service:
  gcloud compute network-endpoint-groups create SERVERLESS_NEG_NAME \
      --region=us-central1 \
      --network-endpoint-type=serverless \
      --cloud-run-service=CLOUD_RUN_SERVICE_NAME
  For more options, see the gcloud reference guide for gcloud compute network-endpoint-groups create.
- Create a backend service.
  gcloud compute backend-services create BACKEND_SERVICE_NAME \
      --load-balancing-scheme=EXTERNAL_MANAGED \
      --global
- Add the serverless NEG as a backend to the backend service:
  gcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
      --global \
      --network-endpoint-group=SERVERLESS_NEG_NAME \
      --network-endpoint-group-region=us-central1
- Create a URL map to route incoming requests to the backend service:
  gcloud compute url-maps create URL_MAP_NAME \
      --default-service BACKEND_SERVICE_NAME
This example URL map targets only one backend service representing a single serverless app, so you don't need to set up host rules or path matchers. If you have more than one backend service, you can use host rules to direct requests to different services based on the host name, and you can set up path matchers to direct requests to different services based on the request path.
- To create an HTTPS load balancer, you must have an SSL certificate resource to use in the HTTPS target proxy. You can create an SSL certificate resource using either a Google-managed SSL certificate or a self-managed SSL certificate. Using Google-managed certificates is recommended because Google Cloud obtains, manages, and renews these certificates automatically.
  To create a Google-managed certificate, you must have a domain. If you do not have a domain, you can use a self-signed SSL certificate for testing.
  To create a Google-managed SSL certificate resource:
  gcloud compute ssl-certificates create SSL_CERTIFICATE_NAME \
      --domains DOMAIN
  To create a self-managed SSL certificate resource:
  gcloud compute ssl-certificates create SSL_CERTIFICATE_NAME \
      --certificate CRT_FILE_PATH \
      --private-key KEY_FILE_PATH
- Create a target HTTP(S) proxy to route requests to your URL map.
  For an HTTP load balancer, create an HTTP target proxy:
  gcloud compute target-http-proxies create TARGET_HTTP_PROXY_NAME \
      --http-keep-alive-timeout-sec=HTTP_KEEP_ALIVE_TIMEOUT_SEC \
      --url-map=URL_MAP_NAME
  For an HTTPS load balancer, create an HTTPS target proxy. The proxy is the portion of the load balancer that holds the SSL certificate for HTTPS load balancing, so you also load your certificate in this step.
  gcloud compute target-https-proxies create TARGET_HTTPS_PROXY_NAME \
      --http-keep-alive-timeout-sec=HTTP_KEEP_ALIVE_TIMEOUT_SEC \
      --ssl-certificates=SSL_CERTIFICATE_NAME \
      --url-map=URL_MAP_NAME
  Replace the following:
  - TARGET_HTTP_PROXY_NAME: the name of the target HTTP proxy.
  - TARGET_HTTPS_PROXY_NAME: the name of the target HTTPS proxy.
  - HTTP_KEEP_ALIVE_TIMEOUT_SEC: an optional field used to specify the client HTTP keepalive timeout. The timeout value must be from 5 to 1200 seconds. The default value is 610 seconds.
  - SSL_CERTIFICATE_NAME: the name of the SSL certificate.
  - URL_MAP_NAME: the name of the URL map.
- Create a forwarding rule to route incoming requests to the proxy.
For an HTTP load balancer:
gcloud compute forwarding-rules create HTTP_FORWARDING_RULE_NAME \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --network-tier=PREMIUM \
    --address=example-ip \
    --target-http-proxy=TARGET_HTTP_PROXY_NAME \
    --global \
    --ports=80
For an HTTPS load balancer:
gcloud compute forwarding-rules create HTTPS_FORWARDING_RULE_NAME \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --network-tier=PREMIUM \
    --address=example-ip \
    --target-https-proxy=TARGET_HTTPS_PROXY_NAME \
    --global \
    --ports=443
Connect your domain to your load balancer
After the load balancer is created, note the IP address that is associated with the load balancer, for example, 30.90.80.100. To point your domain to your load balancer, create an A record by using your domain registration service. If you added multiple domains to your SSL certificate, you must add an A record for each one, all pointing to the load balancer's IP address. For example, to create A records for www.example.com and example.com, use the following:
NAME  TYPE  DATA
www   A     30.90.80.100
@     A     30.90.80.100
If you use Cloud DNS as your DNS provider, see Add, modify, and delete records.
Test the load balancer
Now that you have configured your load balancer, you can start sending traffic to the load balancer's IP address. If you configured a domain, you can send traffic to the domain name as well. However, DNS propagation can take time to complete so you can start by using the IP address for testing.
In the Google Cloud console, go to the Load balancing page.
Click on the load balancer you just created.
Note the IP Address of the load balancer.
For an HTTP load balancer, you can test your load balancer by going to http://IP_ADDRESS in a web browser. Replace IP_ADDRESS with the load balancer's IP address. You should be directed to the helloworld service homepage.
For an HTTPS load balancer, test your load balancer by going to https://IP_ADDRESS in a web browser instead. Replace IP_ADDRESS with the load balancer's IP address. You should be directed to the helloworld service homepage.
If that does not work and you are using a Google-managed certificate, confirm that your certificate resource's status is ACTIVE. For more information, see Google-managed SSL certificate resource status.
If you used a self-signed certificate for testing, your browser displays a warning. You must explicitly instruct your browser to accept a self-signed certificate. Click through the warning to see the actual page.
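You can run the same checks from the command line with curl. This is a sketch; replace IP_ADDRESS with your load balancer's address, and note that -k skips certificate verification, which is acceptable only for self-signed test certificates:

```shell
# HTTP load balancer: fetch the page.
curl http://IP_ADDRESS

# HTTPS load balancer with a self-signed certificate: skip verification.
curl -k https://IP_ADDRESS

# Inspect only the status line and response headers.
curl -I http://IP_ADDRESS
```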
Additional configuration options
This section expands on the configuration example to provide alternative and additional configuration options. All of the tasks are optional. You can perform them in any order.
Set up multi-region load balancing
In the example described earlier on this page, we have only one Cloud Run service serving as the backend, in the us-central1 region. Because the serverless NEG can point to only one endpoint at a time, load balancing is not performed across multiple regions. The external Application Load Balancer serves as the frontend only, and it proxies traffic to the specified helloworld app endpoint. However, you might want to serve your Cloud Run app from more than one region to improve end-user latency.
If a backend service has several serverless NEGs attached to it, the load balancer balances traffic by forwarding requests to the serverless NEG in the closest available region. However, backend services can only contain one serverless NEG per region. To make your Cloud Run service available from multiple regions, you need to set up cross-region routing. You should be able to use a single URL scheme that works anywhere in the world, yet serves user requests from the region closest to the user.
To set up multi-region serving, use the Premium Network Service Tier to ensure that all the regional Cloud Run deployments are ready to serve traffic from any region.
To set up a multi-region load balancer:
- Set up two Cloud Run services in different regions. Let's assume that you have deployed two Cloud Run services: one to a region in the US, and another to a region in Europe.
- Create an external Application Load Balancer with the following setup:
- Set up a global backend service with two serverless NEGs:
- Create the first NEG in the same region as the Cloud Run service deployed in the US.
- Create the second NEG in the same region as the Cloud Run service deployed in Europe.
- Set up your frontend configuration with the Premium Network Service Tier.
- Set up a global backend service with two serverless NEGs:
The resulting setup is shown in the following diagram.
This section builds on the load-balancer setup described earlier on this page, in which you created one serverless NEG in the us-central1 region that points to a Cloud Run service in the same region. It also assumes that you've created a second Cloud Run service in the europe-west1 region. The second serverless NEG that you create will point to this Cloud Run service in the europe-west1 region.
In this example, you will complete the following steps:
- Create a second serverless NEG in the europe-west1 region.
- Attach the second serverless NEG to the backend service.
To add a second serverless NEG to an existing backend service, follow these steps.
Console
In the Google Cloud console, go to the Load balancing page.
Click the name of the load balancer whose backend service you want to edit.
On the Load balancer details page, click Edit.
On the Edit global external Application Load Balancer page, click Backend configuration.
On the Backend configuration page, click Edit for the backend service that you want to modify.
In the Backends section, click Add a backend.
In the Serverless network endpoint groups list, select Create Serverless network endpoint group.
Enter a name for the serverless NEG.
For Region, select europe-west1.
For Serverless network endpoint group type, select Cloud Run, and then do the following:
- Select the Select Service option.
- In the Service list, select the Cloud Run service that you want to create a load balancer for.
Click Create.
On the New backend page, click Done.
Click Save.
To update the backend service, click Update.
To update the load balancer, on the Edit global external Application Load Balancer page, click Update.
gcloud
Create a second serverless NEG in the same region in which the Cloud Run service is deployed.
gcloud compute network-endpoint-groups create SERVERLESS_NEG_NAME_2 \
    --region=europe-west1 \
    --network-endpoint-type=serverless \
    --cloud-run-service=CLOUD_RUN_SERVICE_2
Replace the following:
- SERVERLESS_NEG_NAME_2: the name of the second serverless NEG
- CLOUD_RUN_SERVICE_2: the name of the Cloud Run service
Add the second serverless NEG as a backend to the backend service.
gcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
    --global \
    --network-endpoint-group=SERVERLESS_NEG_NAME_2 \
    --network-endpoint-group-region=europe-west1
Replace the following:
- BACKEND_SERVICE_NAME: the name of the backend service
- SERVERLESS_NEG_NAME_2: the name of the second serverless NEG
Use an authenticated Pub/Sub push subscription with a multi-region Cloud Run deployment
For authenticated push requests, Cloud Run expects a region-specific audience field by default. With a multi-region Cloud Run deployment, if a push request is routed to a Cloud Run service in a different region, JWT verification fails because of an audience mismatch.
To work around this region-specific restriction:
- Configure a custom audience that's the same for service deployments in different regions.
- Configure the Pub/Sub push messages to use the custom audience as the audience in the JWT token.
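As a sketch of these two steps with the gcloud CLI (the service, topic, subscription, and audience names are placeholders): Cloud Run custom audiences are configured with --add-custom-audiences, and the push subscription's token audience with --push-auth-token-audience.

```shell
# Step 1: add the same custom audience to each regional deployment.
gcloud run services update helloworld --region=us-central1 \
    --add-custom-audiences=https://helloworld.example.com
gcloud run services update helloworld --region=europe-west1 \
    --add-custom-audiences=https://helloworld.example.com

# Step 2: have Pub/Sub mint push tokens with that audience.
gcloud pubsub subscriptions create helloworld-push-sub \
    --topic=helloworld-topic \
    --push-endpoint=https://helloworld.example.com/push \
    --push-auth-service-account=pusher@PROJECT_ID.iam.gserviceaccount.com \
    --push-auth-token-audience=https://helloworld.example.com
```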
Set up regional routing
A common reason for serving applications from multiple regions is to meet data locality requirements. For example, you might want to ensure that requests made by European users are always served from a region located in Europe. To set this up, you need a URL schema with separate URLs for EU and non-EU users, and you direct your EU users to the EU URLs.
In such a scenario, you would use the URL map to route requests from specific URLs to their corresponding regions. With such a setup, requests meant for one region are never delivered to a different region. This provides isolation between regions. On the other hand, when a region fails, requests are not routed to a different region. So this setup does not increase your service's availability.
To set up regional routing, you will need to use the Premium network tier so that you can combine different regions in a single forwarding rule.
To set up a load balancer with regional routing:
- Set up two Cloud Run services in different regions. Let's assume you have deployed two Cloud Run services: hello-world-eu to a region in Europe, and hello-world-us to a region in the US.
- Create an external Application Load Balancer with the following setup:
- Set up a backend service with a serverless NEG in Europe. The serverless NEG must be created in the same region as the Cloud Run service deployed in Europe.
- Set up a second backend service with another serverless NEG in the US. This serverless NEG must be created in the same region as the Cloud Run service deployed in the US.
- Set up your URL map with the appropriate host and path rules so that one set of URLs routes to the European backend service while all other requests route to the US backend service.
- Set up your frontend configuration with the Premium network tier.
The rest of the setup can be the same as described previously. Your resulting setup should look like this:
Use a URL mask
When creating a serverless NEG, instead of selecting a specific Cloud Run service, you can use a URL mask to point to multiple services serving at the same domain. A URL mask is a template of your URL schema. The serverless NEG will use this template to extract the service name from the incoming request's URL and map the request to the appropriate service.
URL masks are particularly useful if your service is mapped to a custom domain rather than the default address that Google Cloud provides for the deployed service. A URL mask allows you to target multiple services and versions with a single rule even when your application is using a custom URL pattern.
If you haven't already done so, make sure you read Serverless NEGs overview: URL Masks.
Construct a URL mask
To construct a URL mask for your load balancer, start with the URL of your service. This example uses a sample serverless app running at https://example.com/login. This is the URL where the app's login service is served.
- Remove the http or https from the URL. You are left with example.com/login.
. - Replace the service name with a placeholder for the URL mask.
  - Cloud Run: Replace the Cloud Run service name with the placeholder <service>. If the Cloud Run service has a tag associated with it, replace the tag name with the placeholder <tag>. In this example, the URL mask you are left with is example.com/<service>.
  - Cloud Run functions: Replace the function name with the placeholder <function>. In this example, the URL mask you are left with is example.com/<function>.
  - App Engine: Replace the service name with the placeholder <service>. If the service has a version associated with it, replace the version with the placeholder <version>. In this example, the URL mask you are left with is example.com/<service>.
  - API Gateway: Replace the gateway name with the placeholder <gateway>. In this example, the URL mask you are left with is example.com/<gateway>.
- (Optional) If the service name (or function, version, or tag) can be extracted from the path portion of the URL, the domain can be omitted. The path part of the URL mask is distinguished by the first / character. If a / is not present in the URL mask, the mask is understood to represent the host only. Therefore, for this example, the URL mask can be reduced to /<service>, /<gateway>, or /<function>.
  Similarly, if the service name can be extracted from the host part of the URL, you can omit the path altogether from the URL mask.
You can also omit any host or subdomain components that come before the first placeholder as well as any path components that come after the last placeholder. In such cases, the placeholder captures the required information for the component.
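To make the extraction rules concrete, here is a short Python sketch of how a mask template maps a URL to placeholder values. This is only an illustration of the matching logic described above (the extract_placeholders helper is invented for this example); it is not how Cloud Load Balancing is implemented.

```python
import re

def extract_placeholders(mask: str, url: str) -> dict:
    """Illustrative only: map a URL onto a serverless NEG-style URL mask.

    A placeholder such as <service> or <tag> matches one host or path
    component (no '/' or '.'); everything else in the mask must match
    the scheme-stripped URL literally, from its beginning.
    """
    # Step 1 of mask construction: drop the http:// or https:// scheme.
    bare = re.sub(r"^https?://", "", url)
    # Turn the mask into a regex: literal text is escaped, and each
    # <name> placeholder becomes a named capturing group.
    parts = re.split(r"(<\w+>)", mask)
    pattern = "".join(
        f"(?P<{part[1:-1]}>[^/.]+)" if re.fullmatch(r"<\w+>", part)
        else re.escape(part)
        for part in parts
    )
    match = re.match(pattern, bare)
    return match.groupdict() if match else {}
```

For example, the mask example.com/<service> extracts service=login from https://example.com/login/web, and <tag>.<service>.example.com extracts both tag and service from https://test.login.example.com/web.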
Here are a few more examples that demonstrate these rules:
Cloud Run
This table assumes that you have a custom domain called example.com and all your Cloud Run services are being mapped to this domain by using an external Application Load Balancer.
Service, Tag name | Custom domain URL | URL mask |
---|---|---|
service: login | https://login-home.example.com/web | <service>-home.example.com |
service: login | https://example.com/login/web | example.com/<service> or /<service> |
service: login, tag: test | https://test.login.example.com/web | <tag>.<service>.example.com |
service: login, tag: test | https://example.com/home/login/test | example.com/home/<service>/<tag> or /home/<service>/<tag> |
service: login, tag: test | https://test.example.com/home/login/web | <tag>.example.com/home/<service> |
Cloud Run functions
This table assumes that you have a custom domain called example.com and all your Cloud Run functions are being mapped to this domain.
Function Name | Custom domain URL | URL Mask |
---|---|---|
login | https://example.com/login | /<function> |
login | https://example.com/home/login | /home/<function> |
login | https://login.example.com | <function>.example.com |
login | https://login.home.example.com | <function>.home.example.com |
App Engine
This table assumes that you have a custom domain called example.com and all your App Engine services are being mapped to this domain.
Service name, version | Custom domain URL | URL mask |
---|---|---|
service: login | https://login.example.com/web | <service>.example.com |
service: login | https://example.com/home/login/web | example.com/home/<service>, or /home/<service> |
service: login, version: test | https://test.example.com/login/web | <version>.example.com/<service> |
service: login, version: test | https://example.com/login/test | example.com/<service>/<version> |
API Gateway
This table assumes that you have a custom domain called example.com and all your API Gateway services are being mapped to this domain.
Gateway name | API Gateway (Preview) custom domain URL | URL mask |
---|---|---|
login | https://example.com/login | /<gateway> |
login | https://example.com/home/login | /home/<gateway> |
login | https://login.example.com | <gateway>.example.com |
login | https://login.home.example.com | <gateway>.home.example.com |
Create a serverless NEG with a URL mask
Console
For a new load balancer, you can use the same end-to-end process as described previously in this topic. When configuring the backend service, instead of selecting a specific service, enter a URL mask.
If you have an existing load balancer, you can edit the backend configuration and have the serverless NEG point to a URL mask instead of a specific service.
To add a URL mask-based serverless NEG to an existing backend service:
- In the Google Cloud console, go to the Load balancing page.
- Click the name of the load balancer whose backend service you want to edit.
- On the Load balancer details page, click Edit.
- On the Edit global external Application Load Balancer page, click Backend configuration.
- On the Backend configuration page, click Edit for the backend service you want to modify.
- Click Add backend.
- Select Create Serverless network endpoint group.
- For the Name, enter helloworld-serverless-neg.
- Under Region, select us-central1.
- Under Serverless network endpoint group type, select the platform where your serverless apps (or services or functions) were created.
- Select Use URL Mask.
- Enter a URL mask. For instructions on how to create a URL mask, see Constructing a URL mask.
- Click Create.
- In the New backend section, click Done.
- Click Update.
gcloud: Cloud Run
To create a serverless NEG with a sample URL mask of example.com/<service>:
gcloud compute network-endpoint-groups create helloworld-neg-mask \
    --region=us-central1 \
    --network-endpoint-type=serverless \
    --cloud-run-url-mask="example.com/<service>"
gcloud: Cloud Run functions
To create a serverless NEG with a sample URL mask of example.com/<function>:
gcloud compute network-endpoint-groups create helloworld-neg-mask \
    --region=us-central1 \
    --network-endpoint-type=serverless \
    --cloud-function-url-mask="example.com/<function>"
gcloud: App Engine
To create a serverless NEG with a sample URL mask of example.com/<service>:
gcloud compute network-endpoint-groups create helloworld-neg-mask \
    --region=us-central1 \
    --network-endpoint-type=serverless \
    --app-engine-url-mask="example.com/<service>"
gcloud: API Gateway
To create a serverless NEG with a sample URL mask of example.com/<gateway>:
gcloud beta compute network-endpoint-groups create helloworld-neg-mask \
    --region=us-central1 \
    --network-endpoint-type=serverless \
    --serverless-deployment-platform=apigateway.googleapis.com \
    --serverless-deployment-resource=my-gateway \
    --serverless-deployment-url-mask="example.com/<gateway>"
To learn how the load balancer handles issues with URL mask mismatches, see Troubleshooting issues with serverless NEGs.
Move your custom domain to be served by the external Application Load Balancer
If your serverless compute apps are being mapped to custom domains, you might want to update your DNS records so that traffic sent to the existing Cloud Run, Cloud Run functions, API Gateway, or App Engine custom domain URLs is routed through the load balancer instead.
For example, if you have a custom domain called example.com and all your Cloud Run services are mapped to this domain, update the DNS record for example.com to point to the load balancer's IP address.
Before updating the DNS records, you can test your configuration locally by forcing local DNS resolution of the custom domain to the load balancer's IP address. To test locally, either modify the /etc/hosts file on your local machine to point example.com to the load balancer's IP address, or use the curl --resolve flag to force curl to use the load balancer's IP address for the request.
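For example, assuming the load balancer address 30.90.80.100 used elsewhere on this page, the --resolve flag pins the name to that address for a single request without touching DNS:

```shell
# HTTPS frontend: resolve example.com to the load balancer for this request.
curl --resolve example.com:443:30.90.80.100 https://example.com/

# HTTP frontend: same idea on port 80.
curl --resolve example.com:80:30.90.80.100 http://example.com/
```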
When the DNS record for example.com resolves to the load balancer's IP address, requests sent to example.com begin to be routed through the load balancer. The load balancer dispatches them to the relevant backend service according to its URL map. Additionally, if the backend service is configured with a URL mask, the serverless NEG uses the mask to route the request to the appropriate Cloud Run, Cloud Run functions, API Gateway, or App Engine service.
Enable Cloud CDN
Enabling Cloud CDN for your Cloud Run service allows you to optimize content delivery by caching content close to your users.
You can enable Cloud CDN on backend services used by global external Application Load Balancers by using the gcloud compute backend-services update command:

gcloud compute backend-services update helloworld-backend-service \
    --enable-cdn \
    --global
Cloud CDN is supported for backend services with Cloud Run, Cloud Run functions, API Gateway, and App Engine backends.
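To confirm that Cloud CDN is enabled, you can inspect the CDN-related fields on the backend service. This sketch uses the helloworld-backend-service name from the earlier examples:

```shell
# Print only the CDN-related fields of the backend service.
# enableCDN should be "true" after the update command above.
gcloud compute backend-services describe helloworld-backend-service \
    --global \
    --format="yaml(enableCDN, cdnPolicy)"
```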
Enable IAP on the external Application Load Balancer
Note: IAP isn't compatible with Cloud CDN. You can configure IAP to be enabled or disabled (default). If enabled, you must provide values for oauth2-client-id and oauth2-client-secret.
To enable IAP, update the backend service to include the --iap=enabled flag with the oauth2-client-id and oauth2-client-secret values.
gcloud compute backend-services update BACKEND_SERVICE_NAME \
    --iap=enabled,oauth2-client-id=ID,oauth2-client-secret=SECRET \
    --global
Optionally, you can enable IAP for a Compute Engine resource by using the Google Cloud console, gcloud CLI, or API.
Enable Google Cloud Armor
Google Cloud Armor is a security product that provides protection against distributed denial-of-service (DDoS) attacks to all GCLB proxy load balancers. Google Cloud Armor also provides configurable security policies to services accessed through an external Application Load Balancer. To learn about Google Cloud Armor security policies for external Application Load Balancers, see Google Cloud Armor security policy overview.
If you are using Cloud Run functions, you can ensure that requests sent to default URLs are blocked by using the internal-and-gclb ingress setting.
If you are using Cloud Run, you can ensure that requests sent to default URLs or any other custom domain set up through Cloud Run are blocked by restricting ingress to "internal and cloud load balancing".
If you are using App Engine, you can use ingress controls so that your app only receives requests sent from the load balancer (and the VPC if you use it).
Without the proper ingress settings, users can use your serverless application's default URL to bypass the load balancer, Google Cloud Armor security policies, SSL certificates, and private keys that are passed through the load balancer.
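For Cloud Run, the ingress restriction described above can be applied with a single command. This is a sketch; helloworld is an assumed service name standing in for your own:

```shell
# Restrict the Cloud Run service so that only internal traffic and
# traffic arriving through Cloud Load Balancing is accepted. Requests
# sent directly to the default run.app URL are rejected.
gcloud run services update helloworld \
    --region=us-central1 \
    --ingress=internal-and-cloud-load-balancing
```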
Optional: Configure a default backend security policy. The default security policy throttles traffic over a user-configured threshold. For more information about default security policies, see the Rate limiting overview.
- In the Security section, select Default security policy.
- In the Policy name field, accept the automatically generated name or enter a name for your security policy.
- In the Request count field, accept the default request count or enter an integer between 1 and 10,000.
- In the Interval field, select an interval.
- In the Enforce on key field, choose one of the following values: All, IP address, or X-Forwarded-For IP address. For more information about these options, see Identifying clients for rate limiting.
- To opt out of the Google Cloud Armor default security policy, select None in the backend security policy list menu.
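A rate-based security policy of this kind can also be sketched with the gcloud CLI. The policy name rate-limit-policy, the thresholds, and the rule priority below are illustrative assumptions, not values taken from this page:

```shell
# Create a security policy for rate limiting.
gcloud compute security-policies create rate-limit-policy \
    --description="Throttle traffic over a request-count threshold"

# Add a throttle rule that matches all requests: each client IP is
# allowed up to 500 requests per 60-second interval; requests over
# the threshold receive HTTP 429.
gcloud compute security-policies rules create 1000 \
    --security-policy=rate-limit-policy \
    --expression="true" \
    --action=throttle \
    --rate-limit-threshold-count=500 \
    --rate-limit-threshold-interval-sec=60 \
    --conform-action=allow \
    --exceed-action=deny-429 \
    --enforce-on-key=IP

# Attach the policy to the backend service used by the load balancer.
gcloud compute backend-services update helloworld-backend-service \
    --security-policy=rate-limit-policy \
    --global
```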
Enable logging and monitoring
You can enable, disable, and view logs for an external Application Load Balancer backend service. When using the Google Cloud console, logging is enabled by default for backend services with serverless NEG backends. You can use gcloud to disable logging for each backend service as needed. For instructions, see Logging.
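For example, logging can be toggled per backend service with the --no-enable-logging and --enable-logging flags. This sketch uses the backend service name from the earlier examples:

```shell
# Disable access logging for the backend service.
gcloud compute backend-services update helloworld-backend-service \
    --no-enable-logging \
    --global

# Re-enable logging, sampling 100% of requests.
gcloud compute backend-services update helloworld-backend-service \
    --enable-logging \
    --logging-sample-rate=1.0 \
    --global
```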
The load balancer also exports monitoring data to Cloud Monitoring. Monitoring metrics can be used to evaluate a load balancer's configuration, usage, and performance. Metrics can also be used to troubleshoot problems and improve resource utilization and user experience. For instructions, see Monitoring.
Update client HTTP keepalive timeout
The load balancer created in the previous steps has been configured with a default value for the client HTTP keepalive timeout. To update the client HTTP keepalive timeout, use the following instructions.
Console
In the Google Cloud console, go to the Load balancing page.
- Click the name of the load balancer that you want to modify.
- Click Edit.
- Click Frontend configuration.
- Expand Advanced features. For HTTP keepalive timeout, enter a timeout value.
- Click Update.
- To review your changes, click Review and finalize, and then click Update.
gcloud
For an HTTP load balancer, update the target HTTP proxy by using the gcloud compute target-http-proxies update command:

gcloud compute target-http-proxies update TARGET_HTTP_PROXY_NAME \
    --http-keep-alive-timeout-sec=HTTP_KEEP_ALIVE_TIMEOUT_SEC \
    --global
For an HTTPS load balancer, update the target HTTPS proxy by using the gcloud compute target-https-proxies update command:

gcloud compute target-https-proxies update TARGET_HTTPS_PROXY_NAME \
    --http-keep-alive-timeout-sec=HTTP_KEEP_ALIVE_TIMEOUT_SEC \
    --global
Replace the following:
- TARGET_HTTP_PROXY_NAME: the name of the target HTTP proxy.
- TARGET_HTTPS_PROXY_NAME: the name of the target HTTPS proxy.
- HTTP_KEEP_ALIVE_TIMEOUT_SEC: the HTTP keepalive timeout value, from 5 to 600 seconds.
Enable outlier detection
You can enable outlier detection on global backend services to identify unhealthy serverless NEGs and reduce the number of requests sent to the unhealthy serverless NEGs.
Outlier detection is enabled on the backend service by using one of the following methods:
- The consecutiveErrors method (outlierDetection.consecutiveErrors), in which a 5xx series HTTP status code qualifies as an error.
- The consecutiveGatewayFailure method (outlierDetection.consecutiveGatewayFailure), in which only the 502, 503, and 504 HTTP status codes qualify as errors.
Use the following steps to enable outlier detection for an existing backend service. Note that even after enabling outlier detection, some requests can be sent to the unhealthy service and return a 5xx status code to the clients. To further reduce the error rate, you can configure more aggressive values for the outlier detection parameters. For more information, see the outlierDetection field.
Console
In the Google Cloud console, go to the Load balancing page.
- Click the name of the load balancer whose backend service you want to edit.
- On the Load balancer details page, click Edit.
- On the Edit global external Application Load Balancer page, click Backend configuration.
- On the Backend configuration page, click Edit for the backend service that you want to modify.
- Scroll down and expand the Advanced configurations section.
- In the Outlier detection section, select the Enable checkbox.
- Click Edit to configure outlier detection.
- Verify that the following options are configured with these values:
| Property | Value |
|---|---|
| Consecutive errors | 5 |
| Interval | 1000 |
| Base ejection time | 30000 |
| Max ejection percent | 50 |
| Enforcing consecutive errors | 100 |

In this example, the outlier detection analysis runs every one second. If the number of consecutive HTTP 5xx status codes received by an Envoy proxy is five or more, the backend endpoint is ejected from the load-balancing pool of that Envoy proxy for 30 seconds. When the enforcing percentage is set to 100%, the backend service enforces the ejection of unhealthy endpoints from the load-balancing pools of those specific Envoy proxies every time the outlier detection analysis runs. If the ejection conditions are met, up to 50% of the backend endpoints from the load-balancing pool can be ejected.
- Click Save.
- To update the backend service, click Update.
- To update the load balancer, on the Edit global external Application Load Balancer page, click Update.
gcloud
Export the backend service into a YAML file.

gcloud compute backend-services export BACKEND_SERVICE_NAME \
    --destination=BACKEND_SERVICE_NAME.yaml --global

Replace BACKEND_SERVICE_NAME with the name of the backend service.

Edit the YAML configuration of the backend service to add the fields for outlier detection, as shown in the outlierDetection section of the following YAML configuration. In this example, the outlier detection analysis runs every one second. If the number of consecutive HTTP 5xx status codes received by an Envoy proxy is five or more, the backend endpoint is ejected from the load-balancing pool of that Envoy proxy for 30 seconds. When the enforcing percentage is set to 100%, the backend service enforces the ejection of unhealthy endpoints from the load-balancing pools of those specific Envoy proxies every time the outlier detection analysis runs. If the ejection conditions are met, up to 50% of the backend endpoints from the load-balancing pool can be ejected.

name: BACKEND_SERVICE_NAME
backends:
- balancingMode: UTILIZATION
  capacityScaler: 1.0
  group: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_A/networkEndpointGroups/SERVERLESS_NEG_NAME
- balancingMode: UTILIZATION
  capacityScaler: 1.0
  group: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_B/networkEndpointGroups/SERVERLESS_NEG_NAME_2
outlierDetection:
  baseEjectionTime:
    nanos: 0
    seconds: 30
  consecutiveErrors: 5
  enforcingConsecutiveErrors: 100
  interval:
    nanos: 0
    seconds: 1
  maxEjectionPercent: 50
port: 80
selfLink: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices/BACKEND_SERVICE_NAME
sessionAffinity: NONE
timeoutSec: 30
...
Replace the following:
- BACKEND_SERVICE_NAME: the name of the backend service.
- PROJECT_ID: the ID of your project.
- REGION_A and REGION_B: the regions where the load balancer has been configured.
- SERVERLESS_NEG_NAME: the name of the first serverless NEG.
- SERVERLESS_NEG_NAME_2: the name of the second serverless NEG.
Update the backend service by importing the latest configuration.
gcloud compute backend-services import BACKEND_SERVICE_NAME \
    --source=BACKEND_SERVICE_NAME.yaml \
    --global
Outlier detection is now enabled on the backend service.
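To confirm that the imported configuration took effect, you can print only the outlier detection fields of the backend service (a sketch; BACKEND_SERVICE_NAME is the name of your backend service):

```shell
# Show just the outlier detection settings on the backend service.
# The output should match the values added to the YAML file.
gcloud compute backend-services describe BACKEND_SERVICE_NAME \
    --global \
    --format="yaml(outlierDetection)"
```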
Delete a serverless NEG
A network endpoint group cannot be deleted if it is attached to a backend service. Before you delete a NEG, ensure that it is detached from the backend service.
Console
- To make sure the serverless NEG you want to delete is not currently in use by any backend service, go to the Backend services tab in the Load balancing advanced menu.
  Go to the Backend services tab
- If the serverless NEG is currently in use:
  - Click the name of the backend service using the serverless NEG.
  - Click Edit.
  - From the list of Backends, click to remove the serverless NEG backend from the backend service.
  - Click Save.
- Go to the Network endpoint group page in the Google Cloud console.
  Go to the Network endpoint group page
- Select the checkbox for the serverless NEG you want to delete.
- Click Delete.
- Click Delete again to confirm.
gcloud
To remove a serverless NEG from a backend service, you must specify the region where the NEG was created. You must also specify the --global flag because helloworld-backend-service is a global resource.

gcloud compute backend-services remove-backend helloworld-backend-service \
    --network-endpoint-group=helloworld-serverless-neg \
    --network-endpoint-group-region=us-central1 \
    --global
To delete the serverless NEG:
gcloud compute network-endpoint-groups delete helloworld-serverless-neg \
    --region=us-central1
What's next
- Using logging and monitoring
- Troubleshooting serverless NEGs issues
- Clean up the load balancer setup
- Using a Terraform module for an external HTTPS load balancer with a Cloud Run backend