This document provides instructions for configuring Internal HTTP(S) Load Balancing on a Shared VPC network.
If you don't want to use a Shared VPC network, see Setting up Internal HTTP(S) Load Balancing.
Before you begin
- Read Shared VPC overview.
- Read Internal HTTP(S) Load Balancing overview, including the Shared VPC architectures and Limitations sections.
Permissions
Setting up an internal HTTP(S) load balancer for Shared VPC requires some up-front setup and provisioning by an administrator. After this is done, a service project owner can deploy the load balancer and backends using the resources provisioned by the administrator.
This section summarizes the permissions required to follow this guide to set up an internal HTTP(S) load balancer on a Shared VPC network.
Set up Shared VPC
The following roles are required to perform these tasks:
- Perform one-off administrative tasks such as setting up the Shared VPC and enabling a host project.
- Perform administrative tasks that must be repeated every time you want to onboard a new service project. This includes attaching the service project, provisioning and configuring networking resources, and granting access to the service project administrator.
These tasks must be performed in the Shared VPC host project. We recommend that the Shared VPC Admin also be the owner of the Shared VPC host project. This automatically grants the Network Admin and Security Admin roles.
Task | Required role |
---|---|
Set up Shared VPC, enable host project, and grant access to service project administrators | Shared VPC Admin |
Create subnets in the Shared VPC and grant access to service project administrators | Network Admin |
Add and remove firewall rules | Security Admin |
Once the subnets have been provisioned, the host project owner must grant the Network User role in the host project to anyone (typically service project administrators or developers) who needs to use these resources.
Task | Required role |
---|---|
Use VPCs and subnets belonging to the host project | Network User |
This role can be granted on the project level or for individual subnets. We recommend that you grant the role on individual subnets. Granting the role on the project provides access to all current and future subnets in the VPC of the host project.
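For example, a Shared VPC Admin can grant the role on a single subnet with `gcloud compute networks subnets add-iam-policy-binding`. The sketch below only prints the command so you can review it before running it; the project ID and member identity are placeholders, not values from this guide:

```shell
# Placeholders (not real values): substitute your host project ID and the
# service project admin's identity before running the printed command.
HOST_PROJECT_ID="host-project-id"
MEMBER="user:service-project-admin@example.com"

# Grant roles/compute.networkUser on one subnet only (least privilege),
# instead of on the whole host project.
cmd="gcloud compute networks subnets add-iam-policy-binding lb-frontend-and-backend-subnet \
  --project=${HOST_PROJECT_ID} \
  --region=us-west1 \
  --member=${MEMBER} \
  --role=roles/compute.networkUser"

# Print the command for review; remove the echo to execute it.
echo "${cmd}"
```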
Deploy and update internal HTTP(S) load balancer and backends
Service project administrators need the following roles in the service project to create load balancing resources and backends. The permissions granted by these roles are granted automatically if you are a service project owner or editor.
Task | Required role |
---|---|
Create load balancer components | Network Admin |
Create instances | Instance Admin |
Create and modify SSL certificates | Security Admin |
Setup overview
This example creates an internal HTTP(S) load balancer in a Shared VPC deployment.
The internal HTTP(S) load balancer's networking resources such as the proxy-only subnet and the subnet for the backend instances are created in the host project. The firewall rules for the backend instances are also created in the host project.
The load balancer's forwarding rule, target proxy, URL map, backend service, and backend instances are created in the service project.
Prerequisites
The steps in this section do not need to be performed every time you want to create an internal HTTP(S) load balancer. However, you must ensure that you have access to the resources described here before you proceed to creating the load balancer.
Setting up Shared VPC with a host and service project
The rest of these instructions assume that you have already set up Shared VPC. This involves setting up IAM policies for your organization, and designating the host and service projects.
Do not proceed until you have set up Shared VPC and enabled the host and service projects.
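If you want to confirm the wiring from the command line, `gcloud compute shared-vpc get-host-project` and `gcloud compute shared-vpc list-associated-resources` show the host and service project attachments. A sketch with placeholder project IDs; it prints the two commands rather than running them:

```shell
# Placeholder project IDs -- substitute your own before running.
HOST_PROJECT_ID="host-project-id"
SERVICE_PROJECT_ID="service-project-id"

# Shows which host project (if any) the service project is attached to.
check_service="gcloud compute shared-vpc get-host-project ${SERVICE_PROJECT_ID}"

# Lists the service projects attached to the host project.
check_host="gcloud compute shared-vpc list-associated-resources ${HOST_PROJECT_ID}"

echo "${check_service}"
echo "${check_host}"
```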
Configuring the network and subnets in the host project
You need a Shared VPC network with two subnets: one for the load balancer's frontend and backends, and the other for the load balancer's proxies.
This example uses the following network, region, and subnets:
- Network: the network is named `lb-network`.
- Subnet for load balancer frontend and backends: a subnet named `lb-frontend-and-backend-subnet` in the `us-west1` region uses `10.1.2.0/24` for its primary IP range.
- Subnet for proxies: a subnet named `proxy-only-subnet` in the `us-west1` region uses `10.129.0.0/23` for its primary IP range.
Configure the subnet for the load balancer's frontend and backends
This step does not need to be performed every time you want to create an internal HTTP(S) load balancer. You only need to ensure that the service project has access to a subnet in the Shared VPC network (in addition to the proxy-only subnet).
Cloud Console
- In the Google Cloud Console, go to the VPC networks page.
- Click Create VPC network.
- For the Name, enter `lb-network`.
- In the Subnets section:
  - Set the Subnet creation mode to Custom.
  - In the New subnet section, enter the following information:
    - Name: `lb-frontend-and-backend-subnet`
    - Region: `us-west1`
    - IP address range: `10.1.2.0/24`
  - Click Done.
- Click Create.
gcloud
Create a VPC network with the `gcloud compute networks create` command:

```
gcloud compute networks create lb-network --subnet-mode=custom
```

Create a subnet in the `lb-network` network in the `us-west1` region with the `gcloud compute networks subnets create` command:

```
gcloud compute networks subnets create lb-frontend-and-backend-subnet \
    --network=lb-network \
    --range=10.1.2.0/24 \
    --region=us-west1
```
Configure the proxy-only subnet
The proxy-only subnet is used by all internal HTTP(S) load balancers in the us-west1
region,
in the lb-network
VPC network. There can only be one active
proxy-only subnet per region, per network.
Do not perform this step if there is already a proxy-only
subnet reserved for internal HTTP(S) load balancers in the us-west1
region in this network.
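To check whether such a subnet already exists, you can list subnets filtered by purpose. A sketch (the host project ID is a placeholder); it prints the command instead of executing it:

```shell
# Placeholder host project ID -- substitute your own.
HOST_PROJECT_ID="host-project-id"

# List subnets in lb-network/us-west1 whose purpose marks them as
# reserved for internal HTTP(S) load balancing.
cmd="gcloud compute networks subnets list \
  --project=${HOST_PROJECT_ID} \
  --network=lb-network \
  --regions=us-west1 \
  --filter=purpose=INTERNAL_HTTPS_LOAD_BALANCER"

echo "${cmd}"
```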
Cloud Console
- In the Cloud Console, go to the VPC networks page.
- Click the name of the Shared VPC network: `lb-network`.
- Click Add subnet.
- For the Name, enter `proxy-only-subnet`.
- For the Region, select `us-west1`.
- Set Reserve for Internal HTTP(S) Load Balancing to On.
- For the IP address range, enter `10.129.0.0/23`.
- Click Add.
gcloud
Create the proxy-only subnet with the `gcloud compute networks subnets create` command:

```
gcloud compute networks subnets create proxy-only-subnet \
    --purpose=INTERNAL_HTTPS_LOAD_BALANCER \
    --role=ACTIVE \
    --region=us-west1 \
    --network=lb-network \
    --range=10.129.0.0/23
```
Give service project admins access to the backend subnet
Service project administrators require access to the `lb-frontend-and-backend-subnet` subnet so that they can choose a frontend IP address for the load balancer and provision the load balancer's backends.
A Shared VPC Admin must grant access to the backend subnet to service project administrators (or developers who will deploy resources and backends that use the subnet). For instructions, see Service project admins for some subnets.
Configuring firewall rules in the host project
This step does not need to be performed every time you want to create an internal HTTP(S) load balancer. This is a one-off step that must be performed by the host project administrator, in the host project.
This example uses the following firewall rules:
- `fw-allow-ssh`: An ingress rule, applicable to the instances being load balanced, that allows incoming SSH connectivity on TCP port `22` from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify just the IP ranges of the systems from which you initiate SSH sessions. This example uses the target tag `allow-ssh` to identify the virtual machines (VMs) to which the firewall rule applies.
- `fw-allow-health-check`: An ingress rule, applicable to the instances being load balanced, that allows all TCP traffic from the Google Cloud health-checking systems (in `130.211.0.0/22` and `35.191.0.0/16`). This example uses the target tag `load-balanced-backend` to identify the instances to which it should apply.
- `fw-allow-proxies`: An ingress rule, applicable to the instances being load balanced, that allows TCP traffic on ports `80`, `443`, and `8080` from the internal HTTP(S) load balancer's managed proxies. This example uses the target tag `load-balanced-backend` to identify the instances to which it should apply.
Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances.
Cloud Console
- In the Cloud Console, go to the Firewall rules page.
- Click Create firewall rule to create the rule to allow incoming SSH connections:
  - Name: `fw-allow-ssh`
  - Network: `lb-network`
  - Direction of traffic: ingress
  - Action on match: allow
  - Targets: Specified target tags
  - Target tags: `allow-ssh`
  - Source filter: IP ranges
  - Source IP ranges: `0.0.0.0/0`
  - Protocols and ports: Choose Specified protocols and ports, check `tcp`, and enter `22` for the port number.
- Click Create.
- Click Create firewall rule a second time to create the rule to allow Google Cloud health checks:
  - Name: `fw-allow-health-check`
  - Network: `lb-network`
  - Direction of traffic: ingress
  - Action on match: allow
  - Targets: Specified target tags
  - Target tags: `load-balanced-backend`
  - Source filter: IP ranges
  - Source IP ranges: `130.211.0.0/22` and `35.191.0.0/16`
  - Protocols and ports: Choose Specified protocols and ports, check `tcp`, and enter `80`. As a best practice, limit this rule to just the protocols and ports that match those used by your health check. If you use `tcp:80` for the protocol and port, Google Cloud can use HTTP on port `80` to contact your VMs, but it cannot use HTTPS on port `443` to contact them.
- Click Create.
- Click Create firewall rule a third time to create the rule to allow the load balancer's proxy servers to connect to the backends:
  - Name: `fw-allow-proxies`
  - Network: `lb-network`
  - Direction of traffic: ingress
  - Action on match: allow
  - Targets: Specified target tags
  - Target tags: `load-balanced-backend`
  - Source filter: IP ranges
  - Source IP ranges: `10.129.0.0/23`
  - Protocols and ports: Choose Specified protocols and ports, check `tcp`, and enter `80, 443, 8080` for the port numbers.
- Click Create.
gcloud
Create the `fw-allow-ssh` firewall rule to allow SSH connectivity to VMs with the network tag `allow-ssh`. When you omit `--source-ranges`, Google Cloud interprets the rule to mean any source.

```
gcloud compute firewall-rules create fw-allow-ssh \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-ssh \
    --rules=tcp:22
```

Create the `fw-allow-health-check` rule to allow Google Cloud health checks. This example allows all TCP traffic from health check probers; however, you can configure a narrower set of ports to meet your needs.

```
gcloud compute firewall-rules create fw-allow-health-check \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=load-balanced-backend \
    --rules=tcp
```

Create the `fw-allow-proxies` rule to allow the internal HTTP(S) load balancer's proxies to connect to your backends.

```
gcloud compute firewall-rules create fw-allow-proxies \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --source-ranges=10.129.0.0/23 \
    --target-tags=load-balanced-backend \
    --rules=tcp:80,tcp:443,tcp:8080
```
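After creating the rules, you can confirm that all three exist in the host project. A sketch (the host project ID is a placeholder) that prints the listing command for review:

```shell
HOST_PROJECT_ID="host-project-id"   # placeholder -- substitute your own

# List firewall rules attached to lb-network in the host project; the
# three example rules should appear in the output.
cmd="gcloud compute firewall-rules list \
  --project=${HOST_PROJECT_ID} \
  --filter=network:lb-network \
  --format=value(name)"

echo "${cmd}"
```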
Configuring the internal HTTP(S) load balancer in the service project
This section shows you how to set up the load balancer and backends. These steps should be carried out by the service project administrator (or a developer operating within the service project) and do not require involvement from the host project administrator. The steps in this section are largely similar to the standard steps to set up Internal HTTP(S) Load Balancing.
This section shows the configuration required to set up load balancing for services running on either Compute Engine VMs or on pods in a Google Kubernetes Engine cluster. Clients connect to the IP address and port that you configure in the load balancer's forwarding rule. When clients send traffic to this IP address and port, their requests are forwarded to the backend instances (Compute Engine VMs or GKE pods) according to the internal HTTP(S) load balancer's URL map.
The example on this page explicitly sets a reserved internal IP address for the internal HTTP(S) load balancer's forwarding rule, rather than allowing an ephemeral internal IP address to be allocated. As a best practice, we recommend reserving IP addresses for forwarding rules.
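With gcloud, you can reserve the internal address up front using `gcloud compute addresses create`. A sketch with placeholder project IDs; it prints the command so you can review it before running it:

```shell
# Placeholders -- substitute your own project IDs.
HOST_PROJECT_ID="host-project-id"
SERVICE_PROJECT_ID="service-project-id"

# Reserve 10.1.2.99 in the shared backend subnet so the forwarding rule
# can reference a stable internal address.
cmd="gcloud compute addresses create l7-ilb-ip \
  --project=${SERVICE_PROJECT_ID} \
  --region=us-west1 \
  --subnet=projects/${HOST_PROJECT_ID}/regions/us-west1/subnetworks/lb-frontend-and-backend-subnet \
  --addresses=10.1.2.99"

echo "${cmd}"
```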
Creating a managed instance group
This section shows how to create a template and a managed instance group. The managed instance group provides VM instances running the backend servers of an example internal HTTP(S) load balancer. Traffic from clients is load balanced to these backend servers. For demonstration purposes, backends serve their own hostnames.
Cloud Console
- In the Cloud Console, go to the Instance groups page.
- Click Create instance group.
- Choose New managed instance group on the left.
- For the Name, enter `l7-ilb-backend-example`.
- Under Location, select Single zone.
- For the Region, select `us-west1`.
- For the Zone, select `us-west1-a`.
- Under Instance template, select Create a new instance template.
- For the Name, enter `l7-ilb-backend-template`.
- Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 9 (stretch). These instructions use commands that are only available on Debian, such as `apt-get`. If you need to change the Boot disk, click Change:
  - Under Operating System, select Debian.
  - Under Version, select one of the available Debian images, such as Debian GNU/Linux 9 (stretch).
  - Click Select.
- Under Management, security, disks, networking, sole tenancy, on the Management tab, insert the following script into the Startup script field:

  ```
  #! /bin/bash
  apt-get update
  apt-get install apache2 -y
  a2ensite default-ssl
  a2enmod ssl
  vm_hostname="$(curl -H "Metadata-Flavor:Google" \
  http://169.254.169.254/computeMetadata/v1/instance/name)"
  echo "Page served from: $vm_hostname" | \
  tee /var/www/html/index.html
  systemctl restart apache2
  ```

- Under Networking, select Networks shared with me (from host project: HOST_PROJECT_ID).
- Select `lb-network` as the Network, and for the Subnet, select `lb-frontend-and-backend-subnet`.
- Add the following network tags: `allow-ssh` and `load-balanced-backend`.
- Click Save and continue.
- Specify the number of instances that you want to create in the group. For this example, under Autoscaling mode, select Don't autoscale and, under Number of instances, enter `2`. Optionally, in the Autoscaling section of the UI, you can configure the instance group to automatically add or remove instances based on instance CPU usage.
- Click Create to create the new instance group.
gcloud
The `gcloud` instructions in this guide assume that you are using Cloud Shell or another environment with bash installed.

Create a VM instance template with an HTTP server by using the `gcloud compute instance-templates create` command:

```
gcloud compute instance-templates create l7-ilb-backend-template \
    --region=us-west1 \
    --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
    --subnet=projects/HOST_PROJECT_ID/regions/us-west1/subnetworks/lb-frontend-and-backend-subnet \
    --tags=allow-ssh,load-balanced-backend \
    --image-family=debian-9 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://169.254.169.254/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2' \
    --project=SERVICE_PROJECT_ID
```

Create a managed instance group in the zone with the `gcloud compute instance-groups managed create` command:

```
gcloud compute instance-groups managed create l7-ilb-backend-example \
    --zone=us-west1-a \
    --size=2 \
    --template=l7-ilb-backend-template \
    --project=SERVICE_PROJECT_ID
```
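To confirm that both backend VMs were created, you can list the managed instance group's instances. A sketch (the service project ID is a placeholder) that prints the command rather than running it:

```shell
SERVICE_PROJECT_ID="service-project-id"   # placeholder -- substitute your own

# Lists the instances in the group along with their current status.
cmd="gcloud compute instance-groups managed list-instances l7-ilb-backend-example \
  --zone=us-west1-a \
  --project=${SERVICE_PROJECT_ID}"

echo "${cmd}"
```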
Configuring the load balancer
This example shows you how to create the following internal HTTP(S) load balancer resources:
- HTTP health check
- Backend service with a managed instance group as the backend
- A URL map
- SSL certificate (required only for HTTPS)
- Target proxy
- Forwarding rule
Proxy availability
Depending on the number of service projects that are using the same Shared VPC network, you might reach quotas or limits more quickly than in the network deployment model where each Google Cloud project hosts its own network.
For example, sometimes Google Cloud regions don't have enough proxy capacity for a new internal HTTP(S) load balancer. If this happens, the Cloud Console provides a proxy availability warning message when you are creating your load balancer. To resolve this issue, you can do one of the following:
- Wait for the capacity issue to be resolved.
- Contact your Google Cloud sales team to increase these limits.
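Regional quota usage is visible with `gcloud compute regions describe`. Proxy capacity itself is managed by Google and is not, to our knowledge, exposed as a quota, so treat this only as a way to rule out ordinary quota exhaustion. A sketch with a placeholder project ID; it prints the command for review:

```shell
SERVICE_PROJECT_ID="service-project-id"   # placeholder -- substitute your own

# Print per-region quota metrics, usage, and limits for us-west1.
cmd="gcloud compute regions describe us-west1 \
  --project=${SERVICE_PROJECT_ID} \
  --format=value(quotas)"

echo "${cmd}"
```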
Cloud Console
Select a load balancer type
- In the Cloud Console, go to the Load balancing page.
- Under HTTP(S) Load Balancing, click Start configuration.
- Select Only between my VMs. This setting means that the load balancer is internal.
- Click Continue.
Prepare the load balancer
- For the Name of the load balancer, enter `l7-ilb-shared-vpc`.
- For the Region, select `us-west1`.
- For the Network, select Networks shared with me (from host project: HOST_PROJECT_ID).
- From the dropdown, select `lb-network`.

  If you see a Proxy-only subnet required in Shared VPC network warning, confirm that the host project admin has created the `proxy-only-subnet` in the `us-west1` region in the `lb-network` Shared VPC network. Load balancer creation succeeds even if you do not have permission to view the proxy-only subnet on this page.

- Keep the window open to continue.
Configure the backend service
- Click Backend configuration.
- From the Create or select backend services menu, select Create a backend service.
- Set the Name of the backend service to `l7-ilb-backend-service`.
- Set the Backend type to Instance groups.
- In the New backend section:
  - Set the Instance group to `l7-ilb-backend-example`.
  - Set the Port numbers to `80`.
  - Set the Balancing mode to Utilization.
  - Click Done.
- In the Health check section, choose Create a health check with the following parameters:
  - Name: `l7-ilb-basic-check`
  - Protocol: `HTTP`
  - Port: `80`
- Click Save and Continue.
- Click Create.
Configure the URL map
Click Routing rules. Ensure that `l7-ilb-backend-service` is the only backend service for any unmatched host and any unmatched path.
For information about traffic management, see Setting up traffic management.
Configure the frontend
For HTTP:
- Click Frontend configuration.
- Click Add frontend IP and port.
- Set the Name to `l7-ilb-forwarding-rule`.
- Set the Protocol to `HTTP`.
- Set the Subnetwork to `lb-frontend-and-backend-subnet`. Don't select the proxy-only subnet for the frontend, even if it is an option in the dropdown list.
- Under Internal IP, select Reserve a static internal IP address.
- In the panel that appears, provide the following details:
  - Name: `l7-ilb-ip`
  - In the Static IP address section, select Let me choose.
  - In the Custom IP address section, enter `10.1.2.99`.
  - Click Reserve.
- Set the Port to `80`.
- Click Done.
For HTTPS:
If you are using HTTPS between the client and the load balancer, you need one or more SSL certificate resources to configure the proxy. For information about how to create SSL certificate resources, see SSL certificates. Google-managed certificates aren't currently supported with internal HTTP(S) load balancers.
- Click Frontend configuration.
- Click Add frontend IP and port.
- In the Name field, enter `l7-ilb-forwarding-rule`.
- In the Protocol field, select `HTTPS (includes HTTP/2)`.
- Set the Subnetwork to `lb-frontend-and-backend-subnet`. Don't select the proxy-only subnet for the frontend, even if it is an option in the dropdown list.
- Under Internal IP, select Reserve a static internal IP address.
- In the panel that appears, provide the following details:
  - Name: `l7-ilb-ip`
  - In the Static IP address section, select Let me choose.
  - In the Custom IP address section, enter `10.1.2.99`.
  - Click Reserve.
- Ensure that the Port is set to `443` to allow HTTPS traffic.
- Click the Certificate drop-down list.
  - If you already have a self-managed SSL certificate resource you want to use as the primary SSL certificate, select it from the drop-down menu.
  - Otherwise, select Create a new certificate:
    - Fill in a Name of `l7-ilb-cert`.
    - In the appropriate fields, upload your PEM-formatted files: Public key certificate, Certificate chain, and Private key.
    - Click Create.
- To add certificate resources in addition to the primary SSL certificate resource:
  - Click Add certificate.
  - Select a certificate from the Certificates list, or click Create a new certificate and follow the instructions above.
- Click Done.
Review and finalize the configuration
Click Create.
gcloud
Define the HTTP health check with the `gcloud compute health-checks create http` command:

```
gcloud compute health-checks create http l7-ilb-basic-check \
    --region=us-west1 \
    --use-serving-port \
    --project=SERVICE_PROJECT_ID
```

Define the backend service with the `gcloud compute backend-services create` command:

```
gcloud compute backend-services create l7-ilb-backend-service \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=HTTP \
    --health-checks=l7-ilb-basic-check \
    --health-checks-region=us-west1 \
    --region=us-west1 \
    --project=SERVICE_PROJECT_ID
```

Add backends to the backend service with the `gcloud compute backend-services add-backend` command:

```
gcloud compute backend-services add-backend l7-ilb-backend-service \
    --balancing-mode=UTILIZATION \
    --instance-group=l7-ilb-backend-example \
    --instance-group-zone=us-west1-a \
    --region=us-west1 \
    --project=SERVICE_PROJECT_ID
```

Create the URL map with the `gcloud compute url-maps create` command:

```
gcloud compute url-maps create l7-ilb-map \
    --default-service=l7-ilb-backend-service \
    --region=us-west1 \
    --project=SERVICE_PROJECT_ID
```
Create the target proxy.
For HTTP:

For an internal HTTP load balancer, create the target proxy with the `gcloud compute target-http-proxies create` command:

```
gcloud compute target-http-proxies create l7-ilb-proxy \
    --url-map=l7-ilb-map \
    --url-map-region=us-west1 \
    --region=us-west1 \
    --project=SERVICE_PROJECT_ID
```

For HTTPS:

For information about how to create SSL certificate resources, see SSL certificates. Google-managed certificates aren't currently supported with internal HTTP(S) load balancers.

Assign your file paths to variable names:

```
export LB_CERT=path to PEM-formatted file
export LB_PRIVATE_KEY=path to PEM-formatted file
```

Create a regional SSL certificate with the `gcloud compute ssl-certificates create` command:

```
gcloud compute ssl-certificates create l7-ilb-cert \
    --certificate=$LB_CERT \
    --private-key=$LB_PRIVATE_KEY \
    --region=us-west1 \
    --project=SERVICE_PROJECT_ID
```

Use the regional SSL certificate to create a target proxy with the `gcloud compute target-https-proxies create` command:

```
gcloud compute target-https-proxies create l7-ilb-proxy \
    --url-map=l7-ilb-map \
    --region=us-west1 \
    --ssl-certificates=l7-ilb-cert \
    --project=SERVICE_PROJECT_ID
```
Create the forwarding rule. For custom networks, you must reference the subnet in the forwarding rule. For the forwarding rule's IP address, use the `lb-frontend-and-backend-subnet`. If you try to use the proxy-only subnet, forwarding rule creation fails.

For HTTP:

Use the `gcloud compute forwarding-rules create` command with the correct flags:

```
gcloud compute forwarding-rules create l7-ilb-forwarding-rule \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
    --subnet=projects/HOST_PROJECT_ID/regions/us-west1/subnetworks/lb-frontend-and-backend-subnet \
    --address=10.1.2.99 \
    --ports=80 \
    --region=us-west1 \
    --target-http-proxy=l7-ilb-proxy \
    --target-http-proxy-region=us-west1 \
    --project=SERVICE_PROJECT_ID
```

For HTTPS:

Create the forwarding rule with the `gcloud compute forwarding-rules create` command with the correct flags:

```
gcloud compute forwarding-rules create l7-ilb-forwarding-rule \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
    --subnet=projects/HOST_PROJECT_ID/regions/us-west1/subnetworks/lb-frontend-and-backend-subnet \
    --address=10.1.2.99 \
    --ports=443 \
    --region=us-west1 \
    --target-https-proxy=l7-ilb-proxy \
    --target-https-proxy-region=us-west1 \
    --project=SERVICE_PROJECT_ID
```
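Once the forwarding rule is created, describing it is a quick way to confirm the frontend IP address and target proxy. A sketch (the service project ID is a placeholder) that prints the command rather than running it:

```shell
SERVICE_PROJECT_ID="service-project-id"   # placeholder -- substitute your own

# Shows the forwarding rule's IP address (10.1.2.99 in this example),
# ports, and target proxy.
cmd="gcloud compute forwarding-rules describe l7-ilb-forwarding-rule \
  --region=us-west1 \
  --project=${SERVICE_PROJECT_ID}"

echo "${cmd}"
```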
Testing
Creating a VM instance to test connectivity
Clients can be located in either the host project or any connected service project. In this example, you test that the load balancer is working by deploying a client VM in a service project. The client must use the same Shared VPC network and be in the same region as the load balancer.
```
gcloud compute instances create l7-ilb-client-us-west1-a \
    --image-family=debian-9 \
    --image-project=debian-cloud \
    --subnet=projects/HOST_PROJECT_ID/regions/us-west1/subnetworks/lb-frontend-and-backend-subnet \
    --zone=us-west1-a \
    --tags=allow-ssh \
    --project=SERVICE_PROJECT_ID
```
Testing the load balancer
Log in to the instance that you just created and test that HTTP(S) services on the backends are reachable via the internal HTTP(S) load balancer's forwarding rule IP address, and that traffic is being load balanced across the backend instances.
Connecting via SSH to the client instance

```
gcloud compute ssh l7-ilb-client-us-west1-a \
    --zone=us-west1-a
```

Verifying that the IP address is serving its hostname

```
curl 10.1.2.99
```

For HTTPS testing, replace the `curl` command with:

```
curl -k -s 'https://10.1.2.99:443'
```

The `-k` flag causes `curl` to skip certificate validation.
Running 100 requests and confirming that they are load balanced

For HTTP:

```
{
RESULTS=
for i in {1..100}
do
    RESULTS="$RESULTS:$(curl --silent 10.1.2.99)"
done
echo "***"
echo "*** Results of load-balancing to 10.1.2.99: "
echo "***"
echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
echo
}
```

For HTTPS:

```
{
RESULTS=
for i in {1..100}
do
    RESULTS="$RESULTS:$(curl -k -s 'https://10.1.2.99:443')"
done
echo "***"
echo "*** Results of load-balancing to 10.1.2.99: "
echo "***"
echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
echo
}
```
What's next
- To manage the proxy-only subnet resource required by Internal HTTP(S) Load Balancing, see Proxy-only subnet for internal HTTP(S) load balancers.
- To see how to troubleshoot issues with an internal HTTP(S) load balancer, see Troubleshooting Internal HTTP(S) Load Balancing.
- Clean up the load balancer setup.